Northam Platinum Mine Service Water Treatment
Northam Platinum Zondereinde, located 100km from Rustenburg near Thabazimbi on the northern part of the Western Limb of the Bushveld Complex, is one of the deepest platinum-producing mines in the world. The main shafts operate at depths between 1,200m and 2,200m below surface, exploiting both the Merensky and UG2 reefs. Surface infrastructure comprises two concentrator plants (one each for Merensky and UG2 ore), a smelter and a base metals removal plant, where copper and nickel are extracted from the high-grade PGM concentrate. Northam Platinum employs approximately 8,700 people, and production is benchmarked at 300,000 oz annually. Northam pioneered the use of hydropower in the mining industry – a now commonplace practice.
Supplying water underground for everything from drilling to cooling and cleaning requires a large volume of water – to the tune of 45 million liters per day. This water arrives at the pre-cool towers at 25°C and departs the surface at between 4°C and 6°C, which demands a large amount of cooling capacity. As with any industrial cooling unit, biological fouling is a big concern, as it greatly reduces the cooling capacity of the system: biological growth inhibits heat transfer and leads to an inefficient process. The water must also meet Class 1 drinking water quality. Although it is not intended for human consumption, disinfection is a precaution so that, should someone drink the water, they will not be hospitalized by water-borne pathogens. Achieving this requires substantial infrastructure and monitoring.
Northam’s previous water treatment consultants were unable to keep up with the required disinfection demand; as a result, significant biological growth was reported and sub-standard water was being delivered to the plant. Arch Chemicals was therefore called in and asked to draw up a plan to meet the disinfection requirements.
Arch Chemicals’ Granular Doser was identified as the ideal chemical dosing system for this application. With a hopper capacity of 175kg, it is capable of treating up to 175,000,000 liters of water per day. Its electronic controls use a 4-20mA signal coupled to a variable speed drive to ensure that concentrated HTH is added in proportion to demand. Five Granular Dosers were installed on site.
The HTH® Scientific Granular Dosers receive the 4-20mA signal from free chlorine monitors located on the water supply and return lines. Five of these units measure free chlorine, pH and temperature 24 hours per day. Plans are in place to provide remote access to these units via a control system that will allow constant surveillance and reporting.
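The control scheme described above – a free-chlorine reading arriving as a 4-20mA current-loop signal, driving a variable speed drive so that HTH is fed in proportion to demand – can be sketched in a few lines. This is an illustrative sketch only: the full-scale chlorine range, control gain and doser feed capacity are assumed values for the example, not Northam's actual configuration.

```python
# Illustrative sketch of proportional chlorine dosing driven by a 4-20 mA
# free-chlorine signal. All numeric constants below are assumptions for the
# example, except the 0.3 ppm residual target mentioned in the article.

SIGNAL_MIN_MA, SIGNAL_MAX_MA = 4.0, 20.0   # standard current-loop range
CHLORINE_MAX_PPM = 5.0                     # assumed full-scale analyser reading
SETPOINT_PPM = 0.3                         # target free-chlorine residual
MAX_FEED_KG_PER_H = 10.0                   # assumed doser feed capacity

def signal_to_ppm(milliamps: float) -> float:
    """Convert a 4-20 mA loop current to a free-chlorine reading in ppm."""
    span = (milliamps - SIGNAL_MIN_MA) / (SIGNAL_MAX_MA - SIGNAL_MIN_MA)
    return max(0.0, min(1.0, span)) * CHLORINE_MAX_PPM  # clamp out-of-range

def feed_rate(measured_ppm: float, gain: float = 2.0) -> float:
    """Proportional control: feed more HTH the further below setpoint."""
    error = SETPOINT_PPM - measured_ppm
    rate = gain * error * MAX_FEED_KG_PER_H
    return max(0.0, min(MAX_FEED_KG_PER_H, rate))  # clamp to doser limits

reading = signal_to_ppm(4.32)  # a low-chlorine signal just above 4 mA
print(round(reading, 3), round(feed_rate(reading), 3))  # → 0.1 4.0
```

In a real installation the proportional term would typically be part of a tuned PI/PID loop in the doser's controller; the sketch only shows how a current-loop reading maps to a feed command.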
Eight large pre-cool towers process the water returning from underground, at a rate of 45,000,000 liters per day. This water is monitored for its free chlorine levels and HTH® Scientific is added automatically.
This water is then further cooled by the fridge plant systems before it ventures underground. The fridge plant cools by means of a refrigerant, which must itself be cooled during the cycle in order to return to the liquid phase. This requires a further eight cooling towers circulating 4,000,000 liters of water per day. This water is also monitored, and HTH® Scientific is automatically added to disinfect it effectively and kill micro-organisms.
Finally, in the last phase of the circuit, the water is measured before it departs the surface. It is supplied continuously chlorinated at a free chlorine level of 0.3ppm, so that it is free of all pathogenic bacteria and retains residual resistance to any further contamination.
A cow in northern Michigan developed tuberculosis last year, and in just a few hours the Michigan state Department of Agriculture was able to determine where the animal was born and what other livestock it may have come into contact with. The quick action was made possible by data stored in a radio-frequency identification chip on a round plastic tag pierced through the animal's ear. As a result, other cattle that might have been infected with TB were found and tested before they could pass the disease along, possibly even to humans.
In contrast, it took two weeks for federal officials to complete the DNA tests that confirmed the Alberta, Canada, birthplace of a Washington state cow that was identified on Dec. 23 as having bovine spongiform encephalopathy, or mad-cow disease. U.S. Department of Agriculture officials had to wade through a mound of paper records and other data maintained by breeders and meatpackers to trace and recall beef that may have been exposed to tissue from the infected cow. Prices for live cattle dropped about 15% to 80 cents per pound the week of the discovery. And last week, a herd of nearly 450 Holstein calves, among them the unidentifiable offspring of the infected cow, had to be destroyed.
A national livestock-tracking system would help avert these dramatic outcomes. Such a program would be the biggest IT project ever attempted by the meat industry, potentially costing nearly $600 million over six years, according to those working on the project. An RFID-enabled ear tag alone can cost up to five times as much as a typical 75-cent metal ear tag with an identification number, though RFID tags could drop to $1 each in volume if all 100 million cattle in the country had them.
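The tag-cost figures above can be checked with simple arithmetic. The 75-cent metal tag, the "up to five times" RFID premium, the projected $1 volume price and the 100 million head of cattle all come from the article; the totals below are straightforward multiplication, not industry estimates.

```python
# Back-of-envelope check of the cattle-tagging costs cited in the article.

METAL_TAG_COST = 0.75          # dollars, typical numbered metal ear tag
RFID_PREMIUM = 5               # RFID tag costs up to 5x the metal tag
VOLUME_PRICE = 1.00            # projected per-tag price at full volume
HERD_SIZE = 100_000_000        # cattle in the U.S. herd

rfid_tag_cost = METAL_TAG_COST * RFID_PREMIUM    # up to $3.75 each today
current_cost_all = rfid_tag_cost * HERD_SIZE     # $375M to tag the whole herd
volume_cost_all = VOLUME_PRICE * HERD_SIZE       # $100M at the volume price

print(rfid_tag_cost, current_cost_all, volume_cost_all)  # → 3.75 375000000.0 100000000.0
```

Either figure covers tags only; the nearly $600 million six-year estimate quoted above would also include readers, databases and other IT infrastructure.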
News source: InformationWeek – Cattle Trails
The most significant moment of the Eichmann Trial occurred when the Polish-born writer Yehiel Feiner collapsed while testifying on the stand in Jerusalem, after he was asked a simple procedural question at the beginning of his testimony – the reason why he concealed his identity behind the pseudonym Ka-Tzetnik 135633 (Ka-Tzetnik is the Yiddish term for a concentration camp inmate).
“It was not a pen name. I do not regard myself as a writer and a composer of literary material. This is a chronicle of the planet of Auschwitz. I was there for about two years. Time there was not like it is here on earth. Every fraction of a minute there passed on a different scale of time. And the inhabitants of this planet had no names, they had no parents nor did they have children. There they did not dress in the way we dress here; they were not born there and they did not give birth; they breathed according to different laws of nature; they did not live—nor did they die—according to the laws of this world. Their name was the number Ka-Tzetnik.”
Later in his testimony, Ka-Tzetnik stood and turned around, and he then collapsed on the ground.
Several years ago in Tablet, David Mikics explored the literary legacy of Yehiel Feiner, with a particular focus on his post-Holocaust works Salamandra (1946) and House of Dolls (1953), written under the name Ka-Tzetnik 135633, and noted, almost in passing, a small book of Yiddish poetry that he published in 1931. Before the Holocaust, Feiner was a musician, writer, and poet who contributed articles to local Yiddish newspapers and, in 1931, published a volume of twenty-two Yiddish poems. However, as historian Tom Segev writes in The Seventh Million, “[a]fter Auschwitz, [he] made every effort to consign his early work to oblivion, going so far as to personally remove it from libraries. He also discarded his original name. Auschwitz, having robbed him of his family, also robbed him of his identity, leaving only the prisoner.”
In his Tablet article, Mikics recounted the story of how, in 1993, when Ka-Tzetnik was told that a copy of his 1931 collection of twenty-two poems was available at Hebrew University’s National Library of Israel, “he stole it, burned it, and sent the charred remains back to the library with the instruction that the rest of it should be reduced to ashes, like all of his pre-Auschwitz existence.” His request to the director of the library’s circulation department was “to burn the remains of his book just as my world and all that was dear to me was burnt in the Auschwitz crematorium.” For the past few decades, whenever scholars have analyzed the literary writings of Ka-Tzetnik, they have focused on Salamandra and House of Dolls, because his 1931 volume had been removed from circulation by the author himself.
But in what is being presented as a major find, the Kestenbaum & Company auction house in New York City recently announced that Lot 119 of their June 26 auction was an autographed first edition, with photographic frontispiece, of Yehiel Feiner’s 1931 Tsveyuntsvantsik—“Twenty-Two Poems”—which they describe as “possibly the only complete copy extant of … Ka-Tzetnik’s immensely scarce first book written in his youth in Poland, of the utmost rarity.” The sale price for Thursday afternoon is estimated at $7,000-$10,000.
Upon seeing the auction listing, I reached out to Professor Yehiel Szeintuch, who informed me through a colleague that though Ka-Tzetnik famously destroyed a copy of Twenty-Two Poems from the National Library of Israel, Szeintuch has since replaced it with a copy from his own collection. Seeking to further establish the rarity of the volume at auction, I also walked into the YIVO Institute for Jewish Research in Manhattan where, after some research into Ka-Tzetnik’s various publications held in their great Yiddish archive, a copy of his work was located within the Vilna Collection, the surviving remnant of YIVO’s prewar library from Vilna.
Over the past week, in my efforts to learn more about Ka-Tzetnik’s Twenty-Two Poems, I asked my friend Professor Naomi Seidman, a scholar of Yiddish literature at Berkeley’s Graduate Theological Union, to explore some of the themes of Ka-Tzetnik’s earliest published work with that of his post-Holocaust writings. In a common trope of Yiddish poetry, Seidman found that Twenty-Two Poems reflected Feiner’s “attachment to traditional Jewish language as a metaphor for modern artistic creation,” and that “everything the young poet meets falls into the ready-made sighs and shadowed gravestones and bitter heart of the young poet.”
She also wrote, “One potentially interesting question is what happens when such a poetic sensibility (abstract, death-obsessed, full of clichés about poetry and art) meets the concrete details of the Holocaust? So what, then, when he meets what we know he meets?”
I reached out to Kestenbaum & Company and asked whether they are aware that copies of Ka-Tzetnik’s Twenty-Two Poems are currently available to be studied at the National Library of Israel and at the YIVO Institute for Jewish Research in New York, among other research institutions, and if this detail will be made known at the auction on Thursday afternoon.
A Kestenbaum representative responded that “the copy contains the exceptionally rare frontispiece photographic portrait of the author,” and that they would not be announcing that several academic libraries also hold copies of Ka-Tzetnik’s Twenty-Two Poems available to researchers.
Related: Holocaust Pulp Fiction
Environmental regulators need to ensure that water being discharged from the Hazelwood Pondage into the Latrobe River will not have long-term effects on the region's waterways, according to environmentalists.
Environment Victoria campaigns manager Nicholas Aberle was concerned that PFAS levels in the pondage could have disastrous effects in years to come.
"PFAS could be the next asbestos. We won't find out the effects of these chemicals until further down the road, and until then, we should apply precautionary principles to manage exposure," Dr Aberle said.
"How confident are we that these levels are genuinely safe? What are the long-term effects of low-grade exposure to this chemical? We may not know for the next 20 years."
The Environment Protection Authority gave ENGIE permission to double the amount of water being discharged out of the pondage from 150 to 300 million litres a day if necessary, for up to 60 days.
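The scale of that emergency licence is easy to quantify from the figures above. The 150 and 300 million litre daily limits and the 60-day window come from the article; the totals below are simple arithmetic, not EPA data.

```python
# Scale of the Hazelwood Pondage emergency discharge licence.
# 1 megalitre (ML) = 1 million litres.

NORMAL_ML_PER_DAY = 150      # ML/day under the standard licence
EMERGENCY_ML_PER_DAY = 300   # ML/day under the emergency licence
MAX_DAYS = 60                # maximum duration of the emergency licence

extra_per_day = EMERGENCY_ML_PER_DAY - NORMAL_ML_PER_DAY  # additional ML/day
max_total = EMERGENCY_ML_PER_DAY * MAX_DAYS               # worst-case total, ML

print(extra_per_day, max_total)  # → 150 18000
```

At the full emergency rate, up to 18,000 ML (18 gigalitres) could pass into the Latrobe River over the 60 days, which is why downstream monitoring was the environmentalists' focus.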
ENGIE had applied for the emergency discharge licence to take pressure off the dam wall and reduce any risks associated with integrity issues in the 50-year-old structure.
The EPA issued a statement saying PFAS levels were well below the 95 per cent species-protection standard for ecosystems, and were not unusual in many parts of the state.
The regulator said the additional flow would result in an increase in salt concentration but with no ecological impacts. Water turbidity and pH would also have negligible impacts on the river.
"I assume that ENGIE and the EPA will make sure that whatever water is being discharged, there will be no impacts downstream. Let's get the evidence there will be no impact," Dr Aberle said.
Dr Aberle also questioned how ENGIE had allowed the dam wall to deteriorate to a state of emergency.
He was also concerned about the energy company's ability to manage a full-pit lake in the Hazelwood mine void.
"ENGIE has a big challenge in the rehabilitation of the mine pit and people need confidence. If ENGIE cannot look after a wall in a dam, how will they look after an entire pit of water?
"Why have pondage conditions become so frail that they need to have an emergency discharge? Why hasn't ENGIE been managing the wall for the past decade to ensure it is stable?"
An ENGIE spokesman said the company monitored water quality discharges from the pondage on a weekly basis, as required by its EPA licence.
"ENGIE Hazelwood continues to meet its regulatory obligations and is working with regulators, as required, on the final void rehabilitation," he said.
For the Burmese, the most unpopular part of Barack Obama's recent speech at the University of Yangon was his reference to the Rohingya who, as he so eloquently stated, 'hold within them the same dignity as you and I do'. Bengali-speaking and Muslim, they are descended from seventh-century Arab traders who settled in what is now Arakan (or Rakhine) State, but are regarded as 'interlopers' from Bangladesh by the majority of Burmese.
Under the U Nu government in the 1950s they held Burmese citizenship, until General Ne Win, the man credited with plunging Burma into poverty, introduced the 1982 Citizenship Law of Burma, which recognised 135 'national races' but not the Rohingya, who remain stateless to this day.
Statelessness means they are robbed of all the rights we have as citizens. They even have to ask permission of the local authorities to get married (after payment of the usual bribe) and are allowed only two children per marriage. Unauthorised marriages can result in ten years' imprisonment, and having more than two children means they are unregistered and denied healthcare and education and are often subject to forced labour.
The current conflict between the Rohingya and their Buddhist Arakanese neighbours was sparked by the alleged rape of a Buddhist girl by a Muslim. Since then, at least 200 people have been killed and 115,000 driven from their homes, the vast majority of them Rohingyas. The deep-seated cause of the furore is access to scarce land.
Even a politician of the moral stature of Aung San Suu Kyi could only timidly say that both communities had suffered and both had breached human rights laws. One commentator compared this to saying that whites as well as blacks violated human rights in apartheid South Africa.
The deeply ingrained hatred of the Rohingyas in Burma generally has resulted in NGOs such as Médecins sans Frontières being denied access to the injured and to their not being treated in hospitals.
Prior to the Obama speech, President Thein Sein of Burma had indicated to the UN he would address the Rohingya situation and look at everything from 'resettlement of displaced populations to granting of citizenship'. It is likely that offers of citizenship will only be made, if at all, to 'third generation Rohingya' — which excludes hundreds of thousands of people who either could not prove that status or had migrated from Bangladesh later.
Obama's speech, while admitting that 'every nation struggles to define citizenship', still maintained that the American experience was based on universal principles about 'the right of people to live without the threat that their families may be harmed or their homes burned simply because of who they are or where they come from'.
On current form, there is little hope that the Rohingya will benefit from Obama's visit, designed to show America's friendship, if, in Obama's words, the fist of despotic regimes is unclenched. That applies not just to Burma itself but the region where the Rohingyas' boats of desperation are regularly turned back into the sea.
The recent ASEAN meeting in Cambodia resulted in the ten member states adopting an 'ASEAN Human Rights Declaration' which has been widely criticised for allowing that rights can be restricted if they endanger public security, public morals or public order, and that rights must be weighed against public duty. In other words, you can have your rights so long as you agree with the regime of the day.
It will be interesting to see how Burma (or Myanmar, to give it the generals' name) treats its most despised minority in 2014, when it chairs ASEAN.
I would have urged the Australian Government, with its renewed Asian enthusiasm, to intervene on the Rohingyas' behalf but, given the recent inhumane ideas from both Government and Opposition about the treatment of asylum seekers, it has lost all right to the moral legitimacy required to speak up for the oppressed.
Duncan MacLaren lectures in international development studies at Australian Catholic University and coordinates its Refugee Program on the Thai-Burma Border offering tertiary education to Burmese refugees and migrants.
Until now, we have been passing through the foundational doctrine of the Upanishads – namely, the nature of the Ultimate Reality. What is there, finally? In several ways we have been told that whatever is there, finally, can be only a single Reality and it cannot be more than one. This concept was corroborated by a famous mantra that I quoted from the Rig Veda Samhita – ekam sat: “Existence is one only.” The Ultimate Being is Existence. Being and Existence mean the same thing. That which exists cannot be more than one.
Everything has to exist, in some form or the other. Trees exist, stones exist, you exist, I exist, mountains exist, stars exist – all things exist. Existence is a common factor underlying every modification thereof as name and form. Whatever be the variety that is perceivable, all this variety is, at its root, an existence of something. Something has to exist, whatever that something be. The Real cannot be non-existent, because even the concept of non-existence would be impossible unless it is related to the existence of the concept itself. So the Upanishads say: “This Existence is supreme, complete, universal, all-pervading, the only Being.” Because It is all-pervading and filling all space, very large in its extent, it is called Brahman. That which fills, That which swells, That which expands, That which is everywhere and is all things – That is the plenum, the completeness, the fullness of Reality; and That is called Brahman in the Sanskrit language. Brahma-vid apnoti param (Tait. 2.1.1), says the Taittiriya Upanishad: “Whoever realises this Brahman attains to the Supreme Felicity.” It is so because of the fact that when anyone contacts Pure Existence, that contact is equal to the contact of all things. It is like touching the very bottom of the sea of Reality. Hence, Brahman is All-Existence. The knowing of it is of paramount importance.
The Upanishads highlight various ways and means of attaining this Supreme Brahman. The principal method prescribed is direct inward communion with that Reality. Direct inward communion is called meditation. Deep thought, profound thinking and a fundamental, basic feeling for it – longing for it, and getting oneself convinced about one’s non-difference from it because of its being All-Existence – is the great meditational technique of the Upanishads. Inasmuch as this meditation is nothing but the affirmation of the knowledge of the universal existence of Brahman, it is also called jnana, the path of wisdom. The meditation of the Upanishads is the affirmation of the wisdom of the nature of Brahman. Whoever knows this Brahman attains the Supreme Being. Brahma-vid apnoti param, tad eshabhyukta, satyam jnanam anantam brahma (Tait. 2.1.1). How do we define this Brahman? Satyam jnanam anantam: This is the name of the Supreme Being. It is Pure Existence, satyam, Ultimate Truth. It is Omniscience, All-Knowledge, so it is called jnanam. It is everywhere, infinite; therefore, it is called anantam. What is Brahman? Satyam jnanam anantam brahma.
Yo veda nihitam guhayam parame vyoman so’snute sarvan kaman saha brahmana vipascita (Tait. 2.1.1). This is an oracle in the second section of the Taittiriya Upanishad which gives us the secret of the final attainment of bliss and freedom. This satyam jnanam anantam brahma, this Supreme Truth-Knowledge-Bliss-Infinity is, of course, as has been mentioned before, everywhere. It is also hidden deeply in the cave of your own heart – nihitam guhayam. Guha is the cave, the deepest recess of your own being. That is verily this Ultimate Being. You have to be very cautious in not allowing this thought to slip out at any time – namely, your deepest recess of existence cannot be outside the deepest recess of the cosmos. The all-encompassing nature of Brahman also envelops your basic being.
When this universal Brahman is conceived as the deepest reality of an individual, it is called the Atman – the essential Self of anything. It is the essential Self and not the physical, not the mental, not even the causal sheath of your personality; all of these, as you know very well, get negated in another condition of your being – namely, deep sleep. The analysis of deep sleep is a master key to open the gates of the secret of your own existence. Neither the body, nor the mind, nor this so-called ignorant sheath can be considered as your own reality. Blissful sleep cannot be a condition of ignorance, because the experience of bliss has to go together with a kind of consciousness of that experience. This essential Being of yours indicates the character of the Universal Reality also. It is a sense of freedom and bliss that you enjoy when you come in contact with It. Do you not feel free and happy when you go into a state of deep sleep? Can the freedom and the happiness of sleep be compared with any other pleasure of this world? Even a king who cannot sleep for days together would ask for the boon of being able to sleep for some days, rather than having a vast, material kingdom. To go into your own Self is the best achievement, the highest attainment, whereas to go outside yourself, however far beyond you may go, is that much the worse for you. Knowledge of the Self is knowledge of the Absolute. Atma-jnana is also Brahma-jnana. The knowledge of the deepest in you is also the knowledge of the essential secret of the universe. So, whoever knows that supreme satyam jnanam anantam, Truth-Knowledge-Infinity, as hidden in the cave of one’s own heart, directly comes in contact with that satyam jnanam anantam brahma. Simultaneously, you begin to feel a bliss of contact with all things. Saha brahmana vipascita so’snute sarvan kaman: “All desires get fulfilled there in an instant.”
In this world, to fulfil different desires, you have to employ different means. There, a single means is enough to give you the happiness of everything – not one thing after the other, successively, but simultaneously, instantaneously. In your current state, if you have one pleasure, you cannot have another pleasure at the same time, and if you want to have a third kind of pleasure, the first two must go. Thus, you cannot have varieties of pleasure at the same time because of the conditioning factor introduced by the sense organs in such experience. Your senses do not give you simultaneous knowledge of anything. When one thing is happening, another thing is forgotten. But in the contact of Brahman, there is simultaneous knowledge of all things. At one stroke everything is known, and everything is enjoyed also. It is impossible for us mortals, thinking through the sense organs and through this body, to imagine what it could be to enjoy all things at the same time.
It is not merely possessing a kingdom; that also may look like a happiness which is sudden and simultaneous. A king who is the ruler of this whole world may imagine that he has simultaneous happiness of the entire kingdom of the earth. “The entire earth is mine,” the king may feel. But the entire earth stands outside the king. The experiencing consciousness of the king does not hold under his grip or possession this vast earth that he considers as the means of his satisfaction. So the king’s happiness is a futile, imaginary pleasure; really, he does not possess the world. The world stands outside. If the object of experience stands outside the experience, the experience cannot be regarded as complete. Unless the object of experience enters into you and becomes part and parcel of your own existence, you will not be able to enjoy that object. All objects cause anxiety in the mind because they stand outside the experiencing consciousness. Even if you have a heap of gold in the grip of your palm, it cannot cause you happiness. It will only cause anxieties of different types – such as how to keep it, how to use it, how to protect it, how not to lose it, and how to see that it is not leading you to bereavement. The possessor of gold and silver is filled with anxieties, and that person cannot sleep well. Even a king cannot sleep well because of the fear of attack from sources that are external to him. To be secure under conditions which are totally external to yourself is hard, indeed, to imagine.
Brahman experience is not an object of contact; it is an identity. The object is the experiencing consciousness itself. The content of awareness becomes the awareness; existence and consciousness merge into each other. Sat becomes chit, chit becomes sat. It is not actually one thing becoming another thing; the one thing is the other thing. Existence is nothing but the consciousness of existence. When you say that you exist, you are at the same time affirming that you are conscious that you exist. You are not merely existing, minus the consciousness of existence. It is not an appendage that is added on to existence in the form of consciousness. Consciousness is not a quality or an attribute of existence, like the greenness of a leaf or the redness of a flower – nothing of the kind. You cannot consider consciousness to be connected to existence; it is existence. Actually, existence-consciousness means consciousness which is – or existence which is aware of its existence. In that state, which is called Brahman-knowledge or Brahman-experience, there is simultaneous experience of all things. There is all-existence, a simultaneous knowledge of all things – omniscience, a simultaneous enjoyment of all things, and perfect freedom. It is perfect freedom because there is nothing to obstruct your freedom in that state. Here, in this world, whatever freedom you may have is limited by the existence of other things in this world. Your freedom is limited by the freedom of another person and, therefore, your freedom is limited to that extent. You cannot have unlimited freedom in this world. But That (Brahman) is unlimited freedom. It is unlimited because anantam brahma: “Infinite is Brahman.”
Now you have, as students of this great doctrine of the Upanishads, questions of various types: “What is this world? We understand what you are saying. Now, what is this world that we are seeing in front of us? How are we to reconcile this perceived world with that Great Thing that you are speaking of?” The cosmological scheme that follows in the very same Upanishad after this statement about the absoluteness of Brahman gives us a brief idea as to how we have to set in harmony the nature of this perceived world with the eternal existence of Brahman.
Tasmat va etasmat atmana akasas sambhutah (Tait. 2.1.1): “From this Universal Atman, space emanated” – as it were. This is something hard for us to conceive at the present moment. Space is actually the negation of the infinity of Brahman. Infinity does not mean extension or dimension – but space is extension, dimension, distance. So, immediately a contradiction is introduced at the very beginning of the concept of creation. God is negated, as it were, for various reasons, the moment creation is conceived, one reason being that the creation appears as an external manifestation, whereas God – Brahman – is the Universal Existence. We know the difference between universality and externality. The moment there is the concept of space, there is also automatically introduced into it the concept of time. We cannot separate space and time. Duration and extension go together. Actually, according to modern findings at least, space and time are not dead appearances, lifeless presentations before us. For us, to our common perception, spatial extension may look like a lifeless dimension which does not speak, which does not think, which has nothing to say. Time also seems to be some kind of movement which has no brain to think; it is like a machine moving like a bulldozer in some direction. This is what we may think with our paltry, inadequate knowledge of what space and time are. Space and time are not dead things; they are basic vibrations of the cosmos. Motion goes together with space-time. Not only according to modern scientific terminology, but also in the ancient thought of the Agama and Tantra, one may say that the concept of space-time goes together with motion, force.
A tremendous vibration, an uncanny force is generated the moment there is the beginning of what we call creation. It is a central point that begins to vibrate – bindu, as it is called in the Agama Shastra. Bindu is a point. It is not a point which is geometrical, which has a nucleus; it is a cosmic point, a centre which is everywhere with a circumference nowhere, as people generally say. It is a point that is everywhere, which is inconceivable to ordinary thought. It is a tremendous vibratory centre. Modern astronomy also seems to be hinging on this point when it concludes there was a ‘big bang’ when creation took place – a splitting of the cosmic atom. The atom should not be considered as a little particle; it is a cosmic centre. The entire space-time arrangement is one point, like an egg – brahmanda, as it is called. A globular structure is easy to conceive, and so we call it an ‘anda’, a kind of egg – a cosmic egg. Tadandam abhavat haimam sahasramsh samaprabham (Manu 1.9) says the Manusmriti: “Even millions of suns cannot be equal in brilliance to that cosmic spot.” Therefore, it is not a point as we can geometrically imagine. It is an inconceivable point.
The Universal cannot be thought by the mind and, therefore, that cosmic point also cannot be really thought of. Astronomers call it the cosmic atom. But the word ‘atom’ has such peculiar suggestiveness to our thinking mind that often we are likely to slip into the thought of it being a little, small thing. The smallness and the bigness question does not arise there. In that condition, we cannot say what is small and what is big. “Who is a tall man?” If I ask you this, whom will you bring? “Bring a short man.” These are all relative terms. In comparison with a tall man, someone may look short, etc. So there is no such thing as a tall man or a short man, a long shirt or a short shirt; they are comparative words. So, too, we cannot say what kind of atom it was. Therefore, they call it brahmanda; and it split, we are told, into two halves. What kind of halves they are is not very clear. The subject and the object, can we say? The Cosmic Subject and the Cosmic Object can be two halves of the cosmic egg – or we may say it is the Cosmic Awareness meeting with the Cosmic Object, which is material in its nature. The materiality of the object follows automatically from its segregation from the perceiving consciousness. The concept of matter also has to be very carefully noted. Here, in this condition, ‘matter’ does not mean a hard stone or granite or a brick; it, too, is a vibration. The Samkhya definition of prakriti, in its highest condition, is not in the form of a solid object but a vibratory condition of a tripartite nature – sattva, rajas and tamas. Certain Upanishads analogically tell us that these two halves of the cosmic egg are something like the two halves of a split pea. The pea is one whole, but it has two halves.
Everything in the world has a subjective side and an objective side. I conceive of myself as a subject and, for some other reason, I also conceive of myself as an object. The impact that is produced upon me by conditions that are not me may make me feel that I am an object, but the impact that I produce on the external conditions may make me feel that I am a subject. That which exists outside my perceiving consciousness may make me conceive of myself as a subject of perception, but the presence of such an object for itself will appear as an object. This dualism, cosmically introduced at the very beginning of things, is the subject of all the religious doctrines of creation, wherever one may go in this world. God created the world, somehow. This ‘somehow’ brings in this peculiarity of the externalisation of God’s Universality. “The Supreme Purusha sacrificed Himself as this cosmos,” says the Purusha Sukta. The supreme alienation of the Universal into the supreme externality is called creation. God alienated Himself, as it were, in the form of this large, vast, perceived world. He has become this vast world. I mentioned to you previously the difficulty arising out of using such words as ‘becoming’, ‘transforming’, etc. I will not go into that subject once again. These words have to be understood in their proper connotation and signification.
Tasmat va etasmat atmana akasas sambhutah (Tait. 2.1.1): This fundamental cosmic space-time-motion, or vibration, became more and more gross in the form of wind – vayu. Actually, the word ‘vayu’ used here should not be taken in the sense of what we breathe through the nostrils. It is, again, a vibration of a vital nature, which we call prana. An energy manifested itself; cosmic energy emanated, as it were, from this basic vibratory centre which is the space-time-motion complex, to put it in a modern, intelligible style. The solidification, condensation and more and more externalisation of the preceding one in the succeeding stage is actually the process of the coming of what is called the elements. From space, or akasha, arose vayu; from vayu, or air, came friction – heat, or fire; from there came the liquefied form, water; and then came the solid form of the earth.
Tasmad va etasmad atmana akasa sambhutah, akasad vayuh, vayor agnih, agner apah, adbhyah prthivi, prthivya osadhayah (Tait. 2.1.1): “All vegetation started from the earth.” Osadhibhyo annam: The diet that we consume is nothing but the vegetation growing on earth. Annat purushah: Our personality is an adumbration, solidification, concretisation, clarification – whatever we may call it – of the food that we eat. In the personality of the human being we find in a miniature form all that has come cosmically down to the earth, right from the Supreme Brahman – satyam jnanam anantam brahma. So the universe is called brahmanda and the individual is called pindanda. The macrocosm is the universe, and the microcosm, or the individual, is a cross-section of the macrocosm. All that is in the universe you will find in yourself. You are a miniature of creation. If you know yourself, you know the whole world. This is why it is said, “Know thyself and be free.” Nobody says “Go outside and know things.” It will not serve your purpose. Know yourself and all things are known, because you are the nearest thing that can be contacted and the nearest thing containing all things that are the furthest and the remotest. Therefore, the Ultimate Reality is also called the nearest and the furthest. Tad dure tad vad antike (Isa 5): “Very far is It” – in terms of the spatio-temporal expanse of creation; “Very near is It” – as the Self of your own existence.
The miniature individual, as I mentioned, has all the layers of the universe. These are the physicality of the lowest earth, the vibratory form of the prana, the mental creation or the mentation, the power of thought, which is reflected in the process of creation from the Ultimate Being Itself, and a peculiar negation that we experience in our own self in the form of the ultimate causality of sleep, which is comparable to the negation that was referred to just now in the form of the manifestation of space-time-motion. This individualised microcosmic representation of the cosmic layers is seen individually as a series of what is called the koshas, or the coverings of the consciousness in us. We may, in a way, say the whole universe is a covering up over Brahman.
The cosmic sheaths can be conceived, and they are really conceived many a time when we speak of Brahman becoming Ishvara, Ishvara becoming Hiranyagarbha, Hiranyagarbha becoming Virat, and so on. These sheaths in us – the physical, vital, mental, intellectual and causal – are the inverted forms of the otherwise-vertical, we may say, forms of the cosmic sheaths which are in the form of the five elements – earth, water, fire, air and ether, going upwards from below. The Ultimate satyam jnanam anantam is negated, as it were, in this creation, because the Universal being is absent in all that is external. The word ‘external’ contradicts anything that can be considered as universal. In a way, God is denied in this world. We cannot see God anywhere; we see only particulars and spread-out things which are external in nature. Nevertheless, as the Isavasya Upanishad warns us, the so-called negated, abolished existence of the Supreme Reality is also hiddenly present as the Atman behind the earth, the Atman behind water, fire, air and ether. There is an Atman even behind space and time. Various degrees of the manifestation of universality can be seen in the operation of the five elements. The Universal is least manifest in the earth, more manifest in water, still more in fire, still more in air and still more in space, so that space looks almost universal, but yet it is not universal because it is externalised.
In a similar manner, in our own personality also, there is a degree of the manifestation of externality and materiality. The physical body is the most material and the most external, visible thing among other things. Very hard substance is this physical body and very external; we can see it with the eyes. The internal externalities are not so easily contactable, but yet are conceivable and observable through analysis. The so-called physicality and externality of the body is made to feel its existence, its very life itself, by the movement of a vibration inside, called prana shakti. When the prana operates through the cells of the body, we feel that the body is alive; every little fingertip, every toe is alive. It is alive, so-called, because of the prana pervading every part of the body. If the prana is withdrawn, there is paralytic stroke or even death of that particular part. If the prana is entirely withdrawn, the so-called living body becomes a corpse. It becomes dead matter – matter per se.
So our individuality, as a symbol of conscious existence, is a contribution; it comes from the prana, the vital energy that is operating within this body. But the prana is operating because of the thoughts of the mind. We can direct the prana, or the energy, in different directions by the concentration of thought of the mind. If the mind thinks only of one particular thing, the pranic energy is directed to that particular thing only. Little children look beautiful because of the equal distribution of pranic energy in their bodies. They do not have sensory desires projected through any particular organ. As the child grows and grows, he becomes less beautiful to look at because the senses begin to appropriate much of the pranic energy for their own individual operation. The senses become more and more active when we grow into adults or old men. But a little child is beautiful. Whether it is a king’s child or a beggar’s child, one cannot make a distinction; little children are so nice!
Therefore, the prana enlivens this body, but is itself conditioned by the thoughts of the mind, and the mind is a name that we give to an indeterminate way of thinking. “Something is there.” When we feel that something is there, but we do not actually know what is there, we are just indeterminately thinking. But when we are sure that something of a specific type is there – “Oh, I see. It is a tree. It is a lamppost. It is a human being” – this determined identification of the nature of a thing which was indeterminately thought by the mind is the work of the intellect, reason, or buddhi, as it is called. These layers are very clear now: the physical, the vital, the mental and the intellectual.
There is another thing that is totally indeterminate, and that is the condition of our experiences in deep sleep. It is a potential of all future experience and a repository of all past experiences. It clouds consciousness to such an extent that in deep sleep, when it is preponderating, we cannot even think. Thus, in this individuality of ours, in this microcosm that we are, there is a miniature representation of the cosmic creative process. As the peels of the onion constitute the onion, so these sheaths constitute our personality and even the cosmic creative process.
This is, briefly, what I can tell you about the essential teaching of one of the sections of the Taittiriya Upanishad, which tells us three things. The first teaching is that the Ultimate Reality is Existence-Knowledge-Bliss, and it is hidden in the cave of the heart of every individual – knowing which, one becomes all things and enjoys perfect freedom and bliss. The second teaching is that all things that we call the universal manifestation emanate from this Supreme Being only. The third teaching is that we, as individuals, are also part and parcel of this creation and we have in us a miniature representation of everything that is manifest cosmically. For the time being, this is enough for you as far as the Taittiriya Upanishad is concerned.
The Mandukya Upanishad goes deeper into this teaching of the Taittiriya Upanishad by an analysis of the states of consciousness that seem to be involved in the categorisation of the sheaths. The involvement of the basic Atman-consciousness in us, in the sheaths – gradationally – becomes experience, which is waking, dreaming and deep sleep – jagrat, swapna and sushupti.
This glossary contains an alphabetical list of Buddhist terms that you may find on this website. Many of the terms now include phoneticized Sanskrit (Skt) as well as two forms of Tibetan—the phonetic version (Tib), which is a guide to pronunciation, and transliteration using the Wylie method (Wyl). Search for the term you want by entering it in the search box or browse through the listing by clicking on the letters below.
Glossary terms for "A"
Treasury of Knowledge, by Vasubandhu; one of the main philosophical texts studied in Tibetan monasteries.
Asanga’s Compendium of Higher Knowledge is one of the principal philosophical texts studied in Tibetan monasteries, particularly revered for its clarity and for the exposition of mind and mental factors.
(Tib: shä rap kyi pa röl tu chin pä men ngak gi ten chö ngön par tok pä gyen chä jawa)
Ornament for Clear Realizations, by Maitreya; a philosophical text studied in Tibetan monasteries.
Also called ultimate refuge, absolute refuge—as opposed to conventional refuge—is the ultimate attainment of the three refuges; absolute Buddha is the dharmakaya, the buddha's omniscient mind, absolute Dharma is the true cessation of suffering and absolute Sangha is any being who has attained the true cessation of suffering and become an arya being.
Also known as “the I-maker,” this is the eighth main mind posited by the Cittamatra school, which asserts that there needs to be a separate consciousness where the sense of I resides. The other schools only posit six main consciousnesses, but the Cittamatra school posits two additional types—afflictive mental consciousness and mind basis of all.
The psycho-physical constituents that make up a sentient being: form, feeling, discriminative awareness, compositional factors and consciousness. Beings of the desire and form realms have all five whereas beings in the formless realm no longer have the aggregate of form.
An early Indian king who imprisoned and killed his father, Bimbisara. Realizing the enormity of this sin and guided by the Buddha, he purified this negativity and became an arhat.
Light; one of the offering substances. Aloke is Tibetanized; the actual Sanskrit is aloka.
The site of an ancient Buddhist stupa in modern Andhra Pradesh, India, and also the place where Buddha first gave the Kalachakra empowerment, according to the Vajrayana tradition. In 2006, His Holiness the Dalai Lama gave a Kalachakra empowerment there.
The northeastern region of Tibet that borders on China.
One of the bodhisattvas who accompanied Shakyamuni Buddha.
Of the two main types of meditation, this is a meditation where the subject is examined using logical reasoning, as opposed to single-pointed concentration or fixed meditation (Tib: jog gom) where the mind focuses on one single object.
No-self; as opposed to atman (self); the term used for selflessness in the Four Noble Truths Sutra.
A character in a classic Dharma story about choosing the wrong guru and committing horrendous actions. Angulimala killed 999 people and made a rosary out of their fingers. He was prevented from killing his thousandth victim by the Buddha, and he was able to purify and become an arhat.
Water (for washing); one of the offering substances.
The Tibetan translates as "foe destroyer." A person who has destroyed their inner enemy, the delusions, and attained liberation from cyclic existence.
A female arhat.
Also known as chebulic myrobalan; the botanical name is terminalia cherbula. A fruit that is one of the three fundamental Tibetan medicines; the Medicine Buddha holds the stem of the arura plant in his right hand. Ordinary arura is commonly used in Tibetan medical compounds; special arura—which is said to cure any sickness—is extremely rare.
Literally, noble. One who has realized the wisdom of emptiness.
The fourth-century Indian master who received directly from Maitreya Buddha the extensive, or method, lineage of Shakyamuni Buddha's teachings. Said to have founded the Cittamatra school of Buddhist philosophy. He is one of six great Indian scholars, known as the Six Ornaments.
Indian emperor of the Maurya Dynasty (about 250 BC) who converted to Buddhism and propagated Buddhism across Asia.
The third-century Indian master, renowned for his scholarship and poetry, who is the author of Fifty Verses of Guru Devotion.
Demi-god. A being in the god realms who enjoys greater comfort and pleasure than human beings, but who suffers from jealousy and quarreling.
The renowned Indian master who went to Tibet in 1042 to help in the revival of Buddhism and established the Kadam tradition. Atisha wrote the seminal text, A Lamp for the Path to Enlightenment, in which he organized the Buddha's teachings into clear steps, known as lamrim, or stages of the path to enlightenment.
Physician-assisted suicide is defined as “a patient self-administering a lethal dose of an oral medication that has been prescribed by a physician,” according to Physicians for Compassionate Care, a group that operates out of Oregon.
The Netherlands was the first country to introduce the concept of assisted suicide almost 20 years ago, finally legalizing it in 2002, which paved the way for euthanasia—the direct killing of patients by lethal injection or dosage, with or without their approval, whether because of terminal illness or merely because of “unbearable or unrelieved suffering.”
Some 2,000 Dutch die by euthanasia every year; 900 are euthanized without their permission. Belgium followed the Netherlands’ lead in 2003. Luxembourg—a country of 480,000 people, 87 percent of whom are Catholic—is the third European country to legalize assisted suicide/euthanasia, passing it by a 30-26 vote in their parliament this past February.
In the US, the movement began with Washington state’s attempt to pass Initiative 119, which favored physician-assisted suicide and which was rejected by voters 17 years ago. Since then, physician-assisted suicide has ricocheted from state to state—25 in all—failing passage by voters or legislators, often repeatedly. Oregon, the only exception, legalized physician-assisted suicide in 1994.
In 2008, physician-assisted suicide legislation reappeared for the sixth consecutive year in the Arizona legislature, and in Wisconsin’s legislature for the 16th time. Neither bill is expected to be made into law in the near future.
In California, lawmakers failed to reintroduce a bill to legalize assisted suicide in February of this year because they lacked enough votes for it to be approved and for it to survive through the full assembly by a legislative deadline. This was the fourth year in a row such legislation had been introduced in the California assembly since voters rejected California’s Proposition 161 in 1995.
WASHINGTON STATE TRIES AGAIN
With this latest failure to pass a California assisted suicide law, Carol Hogan, California Catholic Conference communications director, predicted that the assisted suicide debate—and the funding for it—would now shift to Washington. The prediction came true on January 9, 2008, when 71-year-old former Washington governor Booth Gardner filed an assisted suicide initiative.
Getting the necessary 225,000 signatures for the initiative to appear on the November 2008 ballot, and getting them by this July, will be his “last campaign,” Gardner said. The fact that he has suffered from Parkinson’s disease for the last 15 years was part of the reason he decided to go ahead with the initiative campaign, he claims.
“I just feel very strongly that I ought to have control over my life. I hate to lose control. That’s just my nature, and a lot of people feel that way,” Gardner said after completing paperwork for the “Death with Dignity Act.” Ironically, since Parkinson’s is not considered a terminal illness, Gardner would not be eligible for assisted suicide under this new legislation.
With the help of his primary sponsor, Compassion and Choices of Washington—a state chapter of the renamed Hemlock Society—Gardner has created a political action committee called It’s My Decision. This newly formed non-profit is supported by such sympathetic organizations as the American Civil Liberties Union. So far it has raised $900,000, hoping for a campaign war chest of around $5 million.
Meanwhile, the Coalition Against Assisted Suicide is also working to raise funds and awareness across the state. Its members include disability rights advocates, physicians, nurses, hospice workers, minority groups, religious organizations, and other concerned citizens.
Coalition spokesman Duane French is a quadriplegic who heads the Washington chapter of Not Dead Yet. Paralyzed by a diving accident as a teenager, French fears not only that such legislation “opens the door for abuses,” but that there is “an agenda that goes well beyond people with terminal illnesses. People with disabilities are next on their list.”
He went on to explain that “Booth [Gardner] has already acknowledged he would prefer the law be far broader—but they have to start somewhere.”
French disagrees with Gardner’s stated concern about “control” being the primary incentive. What is more likely, he believes, is that “others don’t see value in your life.” So the assisted suicide campaign “is fueled by fear and feeds the forces of prejudice and discrimination against the terminally ill, seniors, and people with disabilities.”
Gardner’s 45-year-old son, Doug Gardner, a born-again Christian, also vehemently opposes his father’s campaign. Current Governor Christine Gregoire is personally against the idea, saying that it is “very difficult to support,” while admitting that she wouldn’t actively oppose the initiative as she doesn’t wish “to impose my morality on others.”
The Washington State Medical Association officially opposes physician-assisted suicide, according to spokeswoman Jennifer Hanscom, who explained that the organization “looks at it from the standpoint of how care should be improved at the end of life so people aren’t forced to make that decision.”
The Washington Hospice and Palliative Care Organization and the Washington State Hospital Association have also rejected Gardner’s campaign.
ANALYZING INITIATIVE 1000
Washington’s Initiative 1000, the “Death with Dignity Act,” duplicates Oregon’s physician-assisted suicide law. And like the Oregon bill, it is fraught with problems.
Rita Marker, executive director of the International Task Force on Euthanasia and Assisted Suicide, has analyzed the initiative, which classifies a lethal drug overdose as a medical treatment option and permits a doctor to help a patient commit suicide if he has a life expectancy of six months or less. She finds that under the initiative:
- Physicians are not required, but only encouraged, to notify family members in advance of a patient taking the lethal prescription.
- There are no safeguards for the patient once the prescription is written, no monitoring required to insure the competency of the patient at the time of the overdose, and no checking for overt pressure or force being applied to the patient to take the drug against his will.
- Government health programs, managed care programs, and HMOs are allowed to approve prescriptions for health care cost-cutting purposes.
- Although coercion and force are prohibited, there is nothing preventing a physician from suggesting or encouraging patients to use assisted suicide.
- Doctors are permitted to prescribe lethal barbiturates for mentally ill or depressed patients. A referral for counseling is only necessary if the attending or consulting physician believes a patient may be suffering from a psychiatric or psychological disorder or depression that “caus[es] impaired judgment.” If the counselor believes the patient’s judgment is not impaired, the lethal prescription may be issued.
- Although physicians are required to report assisted suicide deaths to the state, there are no penalties for not reporting them or for filing incomplete or inaccurate reports.
- A patient must make two oral requests and one witnessed, written request for assisted suicide. The two oral requests (which need not be witnessed) can be phoned in, the written request can be mailed to the doctor, and the doctor can fax the prescription to a pharmacy, where the patient—or someone designated as the patient’s agent—can pick it up.
- Using the term “suicide” is not allowed. All state agencies must refer to assisted suicide as “obtaining and self-administering life-ending medication.” Death certificates must refer to the cause of death as the illness the patient was diagnosed with before dying by assisted suicide.
SUICIDE BY ANOTHER NAME
This final stipulation in the law was put in place when it was determined that the word “suicide” is upsetting to people. A Gallup poll, following the failure of yet another California assisted suicide bill (AB 654 in 2005), forced a change in tactics, according to an LA Weekly article, “The Semantics of Assisted Suicide Aid in Dying.” The April 2007 article reported that “58 percent of respondents supported ‘doctor-assisted suicide,’ yet after replacing the phrase with ‘physician aid in dying,’ that number jumped to 75 percent.”
This inspired Compassion and Choices to initiate “a nationwide campaign to expunge the word ‘suicide’ from the right-to-die debate,” according to the article. Successful lobbying by CAC convinced the American Public Health Association “to adopt the ‘value neutral’ term ‘aid in dying.’”
This semantic tack showed up in Duane French’s recent unsuccessful court challenge to the wording of Washington’s Initiative 1000. In refusing to allow the words “physician-assisted suicide” to be used in the ballot or official voters’ pamphlet description, Thurston County Superior Court Judge Chris Wickham reasoned that “it is a somewhat loaded term,” believing the phrase “conjures up images of Jack Kevorkian.”
A March 2, 2008 Seattle Times story reported that “proponents argue that it’s inaccurate to call it suicide when a dying patient chooses to hasten death with a prescription.” Attorney for Initiative 1000 Jessica Skelton suggested that the word “suicide” is “politicized language” that “implies a value judgment and carries with it a social stigma.”
So instead of “suicide,” voters will read that Initiative 1000 would allow some terminally ill patients “to request and self-administer lethal medication” prescribed by a doctor.
“If people were not ashamed, they would call it what it is: assisted suicide,” French said. He believes the semantic move to be an attempt to hide the term because “society really hasn’t changed, and people don’t support it.”
OREGON’S REALITY CHECK
No matter the terminology, the experience of legalized assisted suicide in Oregon continues to supply voters and legislators alike with enough reasons to take a pass on similar legislation.
In 2006, Dr. William Toffler, National Director of Physicians for Compassionate Care Education Foundation (PCCEF) in Oregon, made a statement to the BBC concerning an upcoming vote in Britain’s House of Lords on a physician-assisted suicide bill (later defeated 148 to 100). He said: “There has been a profound shift in attitude in my state since the voters of Oregon narrowly embraced assisted suicide 11 years ago. A shift, I believe, that has been detrimental to our patients, degraded the quality of medical care, and compromised the integrity of my profession.”
Toffler noted the change in health care in the state, where patients “with serious illnesses are sometimes fearful of the motives of doctors or consultants.” One woman confided to him that she feared her oncologist “might be one of the ‘death doctors.’”
Toffler, who is also a professor of family medicine at Oregon Health and Science University in Portland, discussed the state’s increasing health care cuts. He regularly receives notices that “many services and drugs for my patients—even some pain medications—won’t be paid for by the state health plan.” The health coverage has been reduced for in-home palliative care as well. At the same time, he says, “assisted suicide is fully covered and sanctioned by the state of Oregon and by our collective tax dollars,” listing the procedure under “pain management.”
In fact, in 2003 the Oregon Health Plan dropped from their beneficiary list 10,000 low-income Oregonians—including patients with AIDS, those awaiting bone marrow transplants, the mentally ill, and those with seizure disorders. In the next two years, an additional 75,000 Oregonians were cut from the plan’s list.
According to the PCCEF website, 60 percent of Oregon physicians limit or do not see Medicaid patients; 40 percent limit or do not see Medicare patients. Seventeen percent of Oregonians are without health insurance, a statistic increasing at a rate faster than any other state over the past four years.
Even though Oregon has the sixth highest suicide rate among those over 65 years of age (excluding those who die from assisted suicide), less than 5 percent of assisted suicide recipients had mental health consultations between 2003 and 2005, and only two of the 46 patients who died by assisted suicide in 2006 were first referred for psychiatric evaluation.
The “safeguards” in the law insist that patients must be competent and capable of self-dosing, are not depressed, have made the choice without coercion, and have a life expectancy of less than six months. However, according to media accounts, many patients who have died through assisted suicide are depressed, have dementia, have been coerced, have swallowing problems, and have lived over a year after being determined eligible. There are accounts of patients and their family members “shopping” for physicians willing to prescribe high-dose barbiturates.
A study from June 2000 to March 2002 showed that there were twice the number of dying patients considered to be in moderate or severe pain and distress as there were prior to the passage of Oregon’s physician-assisted suicide law.
On March 18, 2008, the latest “Death with Dignity” report was released by the Oregon Department of Human Services. Shane Macaulay, M.D., a member of the Coalition Against Assisted Suicide, said that the report “is deeply flawed.”
“Numbers of reported prescriptions and deaths have tripled since the first year assisted suicide was legalized, with a 30 percent increase in just the past year,” Macaulay stated. Not one patient was referred for psychological evaluation, some patients were given lethal prescriptions after knowing the prescribing physician for less than a week, and one patient lived for a year and a half after receiving the lethal prescription, even though only patients with six months or less to live are allowed access to physician-assisted suicide.
These are the cases that are known, but, according to Macaulay, the real statistics “are shrouded in secrecy.” He said: “There is no way to verify the reliability of the reports issued by the state. Under Oregon’s assisted suicide law, the state has no authority to investigate abuses or physician noncompliance. It’s a listing of whatever was provided to them [by prescribing physicians] and nothing more.”
TAKE THE PLEDGE
Through a new effort called “Take the Pledge,” Physicians for Compassionate Care is trying to educate doctors on how to bring back integrity in the practice of medicine and the trust that is placed in them by their patients, especially those facing terminal illness. “A main strategy of the pro-suicide movement is to marginalize [physicians] because informed physicians are the largest obstacle to their cause,” according to PCCEF’s website.
“Take the Pledge” includes an adaptation of the Hippocratic Oath for doctors to download and display in their offices. It is a declaration that physicians care about their patients, and will care for them through their illness, “managing their symptoms, including pain.” It is a simple way for doctors to acknowledge where they stand on assisted suicide and give patients an opportunity to discuss their concerns. (View a copy at: http://www.pccef.org/ttp/download.htm) The PCCEF website also makes available “dignity conserving questions,” listing approaches physicians can use to ascertain how patients are dealing with end-of-life concerns.
Another important figure in the movement opposing assisted suicide is Eileen Geller, R.N., B.S.N. She has worked for more than 20 years in the Seattle area as a registered nurse in hospice and palliative care, later becoming founder and president of Consoling Grace and Consoling Communities. These organizations were created "to be a resource to parishes, communities, and organizations, but also to any person who is struggling with illness, care-giving, or grief."
Geller believes that a long-term solution must go beyond fighting initiatives to legalize assisted suicide, concentrating instead on something more fundamental. “Clearly we need to fight assisted suicide politically, but equally important, we need to build effective networks of community care that combat the culture of death that spawns such killing initiatives. To do one without the other is morally inconsistent with the core of our beliefs as made manifest in [John Paul II’s encyclical] The Gospel of Life,” she emphasized.
“Initiatives like I-1000 in Washington state masquerade under such terms as compassion and choice, using innocuous phrases like ‘death with dignity’ and ‘hastened death’” to hide from view the reality of “the unmerciful killing of those who are medically, socially, and economically vulnerable,” she said.
Geller sees a need, instead, for “compassion-in-action” and “loving kindness extended toward all those who are in the ‘market’ for assisted suicide.” We should be caring for the vulnerable, those who are alone and depressed, “who feel as if they are a burden to their families, their caregivers, and their fellow parishioners,” she said.
The Church in recent years has spoken out in ever-more explicit terms about euthanasia. In November of last year, Pope Benedict XVI spoke to participants at the 22nd international conference promoted by the Pontifical Council for Health Care Ministry, which dealt with “The Pastoral Care of Elderly Sick People.” He remarked that euthanasia seems to be “one of the more alarming symptoms of the culture of death that is advancing above all in the society of well-being.”
The Holy Father reminded his audience of John Paul II’s exhortation to scientists and physicians that they never resort “to the temptation to have recourse to the practices of shortening the life of the elderly or the sick, practices that would in fact result in forms of euthanasia.”
Pope Benedict also cited a passage from his recent encyclical on hope: “A society unable to accept its suffering members, and incapable of helping to share their suffering, and to bear it inwardly through ‘com-passion,’ is a cruel and inhuman society.”
To all those who live in pain from illness, disability, or old age, and those who care for them, Benedict encouraged them “not to lose their serenity, because nothing, not even death, can separate us from the love of Christ.”
Elenor K. Schoen is a writer and certified health care ethicist living in Shoreline, Washington.
According to the Skanda Purana, the goddess Shakti observed 21 days of austerity to obtain half of the body of Lord Shiva. This vratam (austerity) is known as Kedhara Gowri vratam. This is the day Lord Shiva accepted Shakti into the left half of his form and appeared as Ardhanarishvara.
The Great Goddess, known as Devi (literally "goddess"), has many guises. She is Amma, the gentle and approachable mother. As Jaganmatha, or Mother of the universe, she assumes cosmic proportions, destroying evil and addressing herself to the creation and dissolution of the worlds. She is worshiped by thousands of names that often reflect local customs and legends. She is one and she is many. As an expansion of this maha Shakti, Sri Meenakshi-devi, who is the wife of Lord Sundaresvara (Lord Shiva), is worshiped for all types of benedictions. She is said to guard over her devotees and protect them from all harm.
Sri Meenakshi was self-born from a sacrificial fire to King Malayadvaja and his queen, Kanakamanala, in Madurai. She is named Meenakshi because her eyes are compared with those of a fish: she never blinks and is always watching over her devotees.
There are four kinds of disasters that people in the United States have to worry about when it comes to their houses: wildfires, earthquakes, hurricanes, and tornadoes. Each occurs only in certain regions.
In wildfire prone areas, use steel and concrete instead of wood, and have a pool in case there's a problem with the water supply.
Earthquakes are trickier, but ancient Asians found a solution: The Pagoda is a completely earthquake-proof building. You need to have a loose beam in a chamber, attached to the roof of the first floor. This absorbs most of the energy from the quake. When an earthquake strikes, the building shakes like a gelatin dessert, but is not damaged. Japanese engineers have adapted this technology to modern buildings.
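The pagoda's hanging central beam is essentially an energy-absorbing damper. As a rough, illustrative sketch (my own, not from the article, with made-up numbers), a one-degree-of-freedom model shows how extra damping cuts the peak sway of a structure shaken at its resonant frequency:

```python
import math

def peak_sway(damping, mass=1.0, stiffness=40.0, dt=0.001, t_end=30.0):
    """Integrate m*x'' + c*x' + k*x = sin(w*t) and return the peak |x|."""
    w = math.sqrt(stiffness / mass)   # drive at the resonant frequency (worst case)
    x, v, peak = 0.0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        a = (math.sin(w * t) - damping * v - stiffness * x) / mass
        v += a * dt                   # semi-implicit Euler step
        x += v * dt
        t += dt
        peak = max(peak, abs(x))
    return peak

lightly_damped = peak_sway(damping=0.05)
heavily_damped = peak_sway(damping=1.0)
assert heavily_damped < lightly_damped   # more damping, less resonant sway
```

The numbers are arbitrary; the point is only that an energy-dissipating element sharply limits resonant motion, which is the principle the loose beam exploits.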
Hurricanes offer two challenges: wind and water. The windows are generally the first thing damaged, due to flying debris. Authorities in hurricane zones advocate attaching plywood boards to windows, to avoid damage. Most houses built in hurricane zones are strong enough to survive it, but it is also a good idea to seal the house watertight in case of flooding.
Tornadoes are the big challenge. A tornado can easily rip a house out of its foundation and fling it hundreds of miles away. Authorities in the region advocate going underground, which is the only thing that protects people. Everything above ground level is instantly assumed lost. In which case, I suppose a redundant house, in which every room above ground has a nearly identical counterpart below ground, is the only solution. In the event of a tornado, move everything to the basement and continue your life as before.
Japan’s catastrophe may mean severe semiconductor shortage
March 24, 2011
Technology prices are set to rise after a chemical plant damaged by the tsunami was identified as a core producer of a unique resin used by nearly half of the world's semiconductor manufacturers.
Semiconductors are used to manufacture a broad variety of complex technology-based components found in everything from cars to LCDs. The resulting global shortage of this unique resin will drive semiconductor manufacturing delays and costs up, which will be passed through the supply chain to end-user prices.
Analysts have told SCMR that auto plants in the U.S. and Germany have already confirmed they are looking closely at their supply chain to define future product levels; many are heavily reliant on electronic component manufacture in Japan.
The Sendai-located plant also produced copper-based products and solvents for cleaning printed circuit boards (PCBs). Both are further critical materials used in the production of key technology components.
And while many large manufacturers have already invoked “business continuity” plans to overcome short-term supply issues, concern is rising over medium- to long-term plans to overcome a global shortage.
“IT materials are at the start of the supply chain – an issue at this birthing stage of products has a knock-on effect further down the chain globally,” said Iain Bowles, of Probrand, a major supplier of top branded computer products based in Birmingham England.
“South Korean and Taiwanese semiconductor manufacturers have confirmed they are unsure how long existing inventories of materials will last or how logistics, power or staffing disruptions will impact supplies,” he added.
Fuel shortages in Japan are significantly disrupting logistics, which is hampering alternative supply routes.
“Changing to an alternative resin source is a major issue as its characteristics influence overall design and performance of a semiconductor and therefore the end-user product,” said Bowles. “This is the cleft stick in the middle of the IT supply chain. Redesign is both time and money sensitive.”
Bowles agreed with other analysts who maintained that the full impact of this disaster has yet to be measured.
“But we can see a pattern of short-term power and fuel shortage limiting production in Japan that will influence delivery of products well into the future, and some brands are already defining product shortages from April onwards,” he said. “Additional nuances like unique chemical supply are adding to the complexity of Japan’s supply chain challenges.”
Brittain Ladd, a supply chain consultant and lecturer, told SCMR that it is not too late to act, however.
“I have been inundated with calls from companies asking me what they can do immediately since their supply chain has been disrupted,” he said.
He added that software forecasting and contingency planning packages are being made available, and that it is not too late to act.
“We are also reinforcing the importance of ensuring there is alignment throughout the supply chain so that when a disaster strikes, corporations are able to respond,” he said.
Components of the Mass Media.
The Mass Media
What is the Mass Media?
Mass Media- refers to print, radio, television, and other communication technologies
Mass- implies that the media reach many people
Media- signifies that communication does not take place directly through face-to-face interaction. Audiences have the capability of tuning in or out of the mass media
Causes of Media Growth.
The Protestant Reformation: In 1517 Martin Luther wanted people to develop a more personal relationship with the Bible and encouraged millions of people to read it. The Bible then became the first mass media product in the…
Artificial intelligence is transforming science, processing information and solving problems in ways similar to, and sometimes superior to, human reasoning.
Artificial intelligence is already part of our day-to-day lives, whether in recognition systems or path-finding programs, and experts have been drawing on AI to design materials, understand modern culture, and even improve our health. Yet AI does significantly more than recommend routes and restaurants: it is altering the way scientists across disciplines study the world.
Working alongside engineers, research workers, and psychologists, Stanford researchers are applying artificial intelligence to discover alternatives to traditional batteries, to map poverty in Africa, and even to understand our own minds. The scale of modern data collection has magnified the impact of machine-learning research. Long gone are the days when humans would collect observations and log them by hand; modern tools, whether on satellites or at the ocean's bottom, generate enormous quantities of data.
Once a scientist's pipe dream, artificial intelligence now appears in everyday life in the shape of voice recognition, product recommendation, and navigation tools. All of these depend on computer algorithms that process information and solve problems in ways similar to, and sometimes superior to, human reasoning.
Machine-learning algorithms, by contrast, handle such data with no difficulty: train them and they are built to spot patterns. Machine-learning technology and AI have spread rapidly, enabling discoveries in fields as varied as search, atomic physics, and animal behaviour, and as their capacities expand they may change not only how researchers work but how they think.
Analyzing such data by hand would have taken an enormous amount of time, and there would also have been a problem of replicability: researchers reanalysing the same data set might reach different conclusions. Feeding the data to an algorithm saved time and removed the chance of error or bias.
Stanford literary scholar traces cultural history of our obsession with youth
With philosophy, history and literature as his guides, Stanford Professor Robert Harrison investigates how Western ideas of youthfulness have evolved from classical antiquity to the present.
In Western culture, Harrison noted, classical antiquity plays a fundamental role in cultural rejuvenation.
"We have many different antiquities in the course of our history. The Middle Ages had its antiquity, which is different than the antiquity of the Renaissance. There is an Enlightenment antiquity, different than the antiquity retrieved by the Romantics, or the Modernists, and so forth, yet in each case the new grew out of the old."
Keeping up appearances
"We live in an age of juvenescence," said Harrison, who hosts the radio talk show Entitled Opinions (about Life and Literature)
"Juvenescence" draws on the biological concept of neoteny, a term that refers to the retention of juvenile characteristics through adulthood.
According to Harrison, the term juvenescence has two meanings, either in positive terms of cultural rejuvenation or, on the other hand, of juvenilization.
"Rejuvenation is about recognizing heritage and legacy, and incorporating and re-appropriating historical perspective in the present – like the Founding Fathers did when they created a new nation by drawing on ancient models of republicanism and creatively retrieving many legacies of the past,"Harrison said, citing an example from his book.
"Unlike rejuvenation, juvenilization is characterized by the loss of cultural memory and a shallowing of our historical age."
Harrison proposed another example from his forthcoming work, drawing on 20th-century literature that highlights these two contrasting aspects of age.
"I use two figures to answer the question of how old we are in our age of juvenescence. One is Lolita, from Vladimir Nabokov's novel, and the other is Molloy, from Samuel Beckett's eponymous work. Culturally, we are at once as young as Lolita and as old as Molloy. That makes us a very strange age indeed,"he said.
A bedridden but educated vagrant, Molloy is the heir of multi-millennial tradition but now decrepit and seemingly endlessly old. Lolita, on the other hand, belongs to a new age, as an adolescent with no historical memory who will live and die an adolescent no matter how old she gets.
"Culturally speaking, be that in terms of dress codes, mentality, lifestyles and marketing, the world that we live in is astonishingly youthful and in many respects infantile," Harrison said.
As Harrison sees it, the average citizen of the developed world today enjoys the luxury of remaining childishly innocent with respect to the instruments that he or she operates, consumes and otherwise depends on daily. "I feel ambivalent about where we are culturally in this age of ours. It is hard to say whether we are on the cusp of a wholesale rejuvenation of human culture or whether we are tumbling into a dangerous and irresponsible juvenility."
© THANARAT Asvasirayothin, all rights reserved
Apparently, the lowly fern deserves more respect.
New research scheduled to appear as the journal Nature’s cover story on February 1 concludes that ferns and horsetails are not -- as currently believed -- lower, transitional evolutionary grades between mosses and flowering plants. In fact, ferns and horsetails, together, are the closest living relatives to seed plants.
"Today's systematists are using genomic tools to re-write the textbooks on animal and plant evolution," says James Rodman, program director in NSF's division of environmental biology, which funded the research. "This research is the latest major rearrangement of the plant tree of life. It will encourage others to explore ferns as model organisms for basic ecological and physiological studies."
The research calls for rethinking the "family tree" of green plants, according to scientists. Also, it uncovers a research shortcoming: All main plant model organisms used for research (such as Arabidopsis, which became the first plant to have all its genes sequenced) are recently evolved flowering plants.
This limitation could compromise scientific research. Models in the newly identified fern and horsetail lineage are needed to round out the study of plant development and evolution. This could help scientists fight invasive species, engineer genetic traits, develop better crops and prospect the botanical world for medicines.
The new research uses morphological and DNA sequence data to show that horsetails and ferns make up one genetically related group, which evolved in parallel to the other major genetically related group made up of seed plants and including flowering plants.
"Our discovery that 99 percent of vascular plants fall into two major lineages with separate evolutionary histories dating back 400 million years. It will likely have a significant impact on several disciplines, including ecology, evolutionary biology and plant developmental genetics," said Kathleen Pryer, lead author of the paper and assistant curator in botany at The Field Museum in Chicago. "Viewing these two genetically related groups as contemporaneous and ancient lineages will likely also have profound consequences on our understanding of how terrestrial ecosystems and landscapes evolved."
The work of Pryer and her colleagues builds on the Deep Green project, a collaboration of researchers dedicated to uncovering the evolution of and interrelation of all green plants. In 1999, Deep Green reported at an international botanical conference that DNA analysis indicates that all green plants -- from the tiniest single-celled algae to the grandest redwoods -- descended from a common single-celled ancestor a billion years ago. Green plants, which include some 500,000 species, are among the best-documented groups in the tree of life.
West Nile virus: Still a threat
West Nile virus remains a threat to horses. However, with the right vaccine and preventive measures, it's not too late for horse owners to help protect their horses against this life-threatening disease.
West Nile encephalomyelitis is an inflammation of the central nervous system that is caused by an infection with WNV. It is transmitted by mosquitoes--which feed on infected birds or other animals--to horses, humans and other mammals. So far in 2012, 31 states have reported 157 cases of WNV in horses, with Louisiana and Texas having the most confirmed cases--26 and 16, respectively.
The number of reported WNV cases fell from 1,069 in 2006 to 146 in 2010, and the decline is said by health experts to reflect both vaccination and naturally acquired immunity.
"It is a good sign that the number of cases has declined over the last decade, however there has been an increasing number of both human and equine cases, especially over the last couple months," said Tom Lenz, DVM, MS, senior director, equine technical services, Pfizer Animal Health.
Vaccination remains the most effective way to help protect horses against West Nile and other encephalic, mosquito-borne diseases, such as Eastern equine encephalomyelitis and Western equine encephalomyelitis. A trusted vaccine is available to help offer demonstrated protection against WNV, Eastern and Western equine encephalomyelitis, and tetanus--WEST NILE-INNOVATOR + EWT--all in a single vaccine. According to the American Association of Equine Practitioners guidelines, WNV is considered a core vaccination, along with Eastern equine encephalomyelitis, Western equine encephalomyelitis, tetanus and rabies.
In conjunction with vaccination, use good techniques for managing mosquitoes. This includes:
--Destroying any mosquito breeding habitats by removing all potential sources of stagnant water.
--Cleaning and emptying any water-holding container, such as water buckets, water troughs and plastic containers, on a weekly basis.
Remember that WNV does not always lead to signs of illness. In horses that do become clinically ill, the virus infects the central nervous system and may cause symptoms such as loss of appetite and depression. Other clinical signs may include fever, weakness or paralysis of the hind limbs, impaired vision, ataxia, aimless wandering, walking in circles, hyper-excitability or coma. Horse owners should contact a veterinarian immediately if they notice signs or symptoms of WNV infection in their horses, especially if they are exhibiting neurological signs. The case fatality rate for horses exhibiting clinical signs of WNV infection is approximately 33 percent.
No matter the location, horses can be at risk. By providing proper vaccination and helping to manage mosquito populations, horse owners can do their part to help prevent WNV infections.
For more information on the WEST NILE-INNOVATOR line of vaccines contact your Pfizer Animal Health representative, visit https://animalhealth.pfizer.com/sites/pahweb/us/en/products/Pages/West_Nile_Innovator.aspx or call 855-4AH-PFIZER (855-424-7349).
Hoping to cast light on the mystery of how water came to planet Earth, ESA's Rosetta spacecraft tested the water of comet 67P. It was discovered that the water on 67P is different from that on our planet: to be more exact, its concentration of D2O (also known as heavy water) is more than three times higher than on Earth, the highest amount ever found in nature.
I have decided to re-create similar water to the one found on 67P and paint with this water a set of works of the comet.
What is heavy water? Deuterium oxide (2H2O), or D2O, is a form of water that contains a larger than normal amount of the hydrogen isotope deuterium (also known as heavy hydrogen, symbolized as 2H or D) rather than the common hydrogen-1 isotope (called protium, symbolized as 1H) that makes up most of the hydrogen in normal water. Long story short: I needed to recreate this heavy water in order to enrich my regular New York tap water to the level found on the comet.
After my research I discovered that D2O cannot practically be synthesized; nature does not produce it from scratch, and the deuterium in the D2O we find probably formed during the Big Bang. My only option was to extract it, but there are only about 156 deuterium atoms per million hydrogen atoms (1 per 6,410). After reading a few articles and blogs, I understood that I could use electrolysis to raise the level of heavy water. Surely, I would need a lot of energy and voltage to generate pure D2O; in my art project I am just using this process for an educational purpose.
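As a back-of-the-envelope sketch of why electrolysis alone is slow going (my own calculation, not from the post): under the standard Rayleigh fractionation model, the D/H ratio of the remaining water follows R/R0 = f^(1/alpha - 1), where f is the fraction of water left and alpha is the hydrogen/deuterium separation factor. Alpha of about 6 is an assumed, typical value for electrolysis.

```python
alpha = 6.0        # assumed H/D separation factor (light hydrogen is split faster)
enrichment = 3.0   # target: triple the D/H ratio, roughly as measured on 67P

# Rayleigh model: enrichment = f**(1/alpha - 1); solve for the remaining fraction f.
f = enrichment ** (1.0 / (1.0 / alpha - 1.0))
print(f"{f:.2f} of the water remains -> ~{(1.0 - f) * 100:.0f}% must be electrolyzed away")
```

By this estimate, roughly three quarters of the jar would have to be split into gas to triple the ratio, which is why a few hours with a small adapter falls short.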
I used a 22 V, 550 mA AC/DC adapter as my electrolysis device.
Electrolysis decomposes H2O into hydrogen and oxygen more readily than it decomposes D2O, leaving the heavy water behind.
Hydrogen will appear at the cathode (the negatively charged electrode, where electrons enter the water), and oxygen will appear at the anode (the positively charged electrode).
It was recommended to attach stainless steel or graphite electrodes to the ends of each wire.
I started my testing. For the stainless steel I took two stainless forks. In order to speed up the process it was suggested to add an electrolyte, such as baking soda or salt.
Try 1: stainless forks, salt.
After just a few seconds I could see the two gases (hydrogen and oxygen) forming on the forks. Within a few minutes the water was rapidly turning rusty, and after another 3-4 minutes it turned green. I cannot use rusty water for painting with watercolor.
Try 2: stainless forks, baking soda
This time the color change was not as fast, but the stainless steel was still oxidizing and turning into rust. It was still not good: a lot of rusty sediment. If I kept this water overnight, some of the rust would settle on the bottom and the rest would float on the surface. Funny fact: if I tapped on the container, the top sediment would slowly settle down. The water could be filtered and used for painting, but I wanted an even better result.
Try 3: graphite rods, salt
This time all went well. I used artist leads (graphite), and the water stayed clear. But using salt in the water makes my paintings form crystals (I have used this effect in my series Sky's Darkest Spot), so I had to eliminate the salt.
Try 4: graphite rods, baking soda
This was the perfect run. The water stayed clear, and the soda did not affect the paint! Success!
I decided to run the electrolysis for 3 hours each time. This was still not enough to get my water to the D2O level found on 67P, so I ordered 100% pure heavy water from United Nuclear. It was costly ($12 plus shipping per 10 grams of water!), but necessary.
Two drops of D2O added to the whole large jar of water after electrolysis did the trick. I now had what I was looking for: H2O enriched with D2O, just like on comet 67P.
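That order of magnitude checks out. In this rough sketch (my own numbers: the jar size, drop volume, and D/H ratios are assumptions based on commonly quoted values), a few drops of pure D2O are indeed enough to bring a jar of tap water up to a comet-like ratio:

```python
JAR_ML    = 500.0     # assumed jar volume
DROP_ML   = 0.05      # typical drop volume (assumption)
D_H_EARTH = 1.56e-4   # ~156 deuterium atoms per million hydrogen atoms
D_H_COMET = 3.0 * D_H_EARTH

mol_h2o   = JAR_ML / 18.0                    # water: ~18 g/mol, density ~1 g/mL
mol_h     = 2.0 * mol_h2o                    # two hydrogens per molecule
mol_d_add = (D_H_COMET - D_H_EARTH) * mol_h  # extra deuterium atoms needed
ml_d2o    = (mol_d_add / 2.0) * 20.0 / 1.107 # D2O: ~20 g/mol, ~1.107 g/mL

drops = ml_d2o / DROP_ML
print(f"~{ml_d2o:.2f} mL of pure D2O, i.e. about {drops:.0f} drops")
```

A couple of drops per jar is exactly the right ballpark.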
You might ask: how did it affect my painting? I did a test of 100% D2O and plain H2O with the watercolor. Both drops dried the same way (maybe the D2O a little slower) and no visible differences were noticed. But that is not the point, is it?
Bearings have a couple of measurements that you should be aware of. 8mm, shown with these bearings, tells you the diameter of the inner hole. This diameter matches the diameter of the axle used on the trucks. 8mm axles are by far the most common.
The next number is shown as an ABEC rating. This number classifies the precision of a bearing when manufactured. ABEC 1 is considered the lowest rating and ABEC 9 the highest. An ABEC 9 bearing will be ground or polished to a tighter tolerance and in turn may be faster and last longer. A very precise bearing will have reduced friction; friction causes heat, and heat ultimately will cause bearings to fail.
Ceramic bearings use balls that are made from a very precise ceramic material. This material will also run cooler than metal balls, and heat is what causes bearings to fail.
We always recommend that you clean and lube bearings regularly. A well-maintained ABEC 5 bearing will outlast an ABEC 7 bearing that is not maintained.
Thirty-six years ago John Glenn became the first U.S. astronaut to orbit the Earth, setting America on its historic course to land on the moon and win the space race with the Soviet Union.
He became such a popular American hero that President John F. Kennedy ordered NASA not to send Glenn into space again for fear of losing him.
At 77, Glenn is about to become the oldest person to go into space. He is a crew member on Thursday's scheduled launch of the Discovery shuttle, a mission he hopes will again break new ground by showing that space travel may have no age barrier.
The fact that a person Glenn's age is considered fit enough for such an arduous mission is likely to have enormous significance for a large and growing segment of the U.S. population who are living longer, and who need to know that their later years can be healthy and productive ones.
Contrary to earlier gloomy forecasts that people would continue to live longer, but they would be sicker, studies from around the world show that an increasing proportion of older people are in good shape.
"We really do have a new stage of life, one of productive, vigorous older people, and Glenn represents that," said Dr. Robert Butler, professor of geriatrics at Mt. Sinai School of Medicine in New York and CEO of the International Longevity Center.
"It really is incredible because it says that the fear that we'd just have a bunch of demented, decrepit old people isn't true," said Butler, a friend of Glenn's who will watch Discovery's launch from the Kennedy Space Center in Florida. "People are not only living longer, they are living better."
In space, Glenn will be in a unique laboratory to study aging. On Earth, the body is in a constant tug of war with gravity, building sturdy bones and strong muscles and making the heart pump hard to lift blood from the feet to the head.
But in the near weightlessness of space, the body does not have to fight gravity. As a result, bones and muscles begin to deteriorate, the heart grows weaker, the immune system falters, the sense of balance goes haywire, and hormones go out of kilter. Most astronauts also experience sleep disturbances because of the 90-minute light-dark cycle they experience as they zoom around the planet.
These changes, although they are reversible in astronauts back on the ground, are similar to changes that occur with aging over a 10- to 30-year period.
Scientists are eager to learn if the mechanisms behind the age-mimicking changes in orbit and normal aging are the same or different, and if finding treatments for one will benefit the other.
Since orbital flight causes bone and muscle loss and many of the other changes that occur with aging in only a week's time, NASA is studying younger and older astronauts to devise ways of curtailing these potentially dangerous changes.
For instance, a manned mission to Mars is not feasible right now because after 2 1/2 years in weightlessness a space traveler would be too weak to walk out of the ship.
For people on Earth, similar changes occur over a much longer period of time, causing frailty, easily breakable bones, instability, sleep disorders, heart failure and decreasing resistance to infections.
Learning how to tame space travel's deleterious effects on the body could provide a bounty of new preventive treatments for aging earthbound humans.
"If NASA develops interventions for astronauts, that's going to have a real public health impact on Earth for osteoporosis, cardiovascular dysfunction, immune problems and other age-related disorders," said Andrew Monjan, chief of the National Institute on Aging's neurobiology of aging branch, who had been planning the space-aging studies with NASA long before Glenn was picked for the mission.
NASA and NIA officials acknowledge that important answers to the aging puzzle are not likely to come from Glenn's flight alone. Studying one person usually doesn't produce major solutions, but it may give scientists tantalizing clues.
"What can we conclude from one 77-year-old?" said Monjan. "The fact is that we can't conclude anything. But if everything works out well, we will be able to start asking further questions about normal, healthy aging."
Glenn, who is retiring as a U.S. senator from Ohio at the end of the year, has yearned to return to space ever since his one-man flight in the cramped Friendship 7 Mercury capsule in 1962. His chance came when NASA decided to study the effects of weightlessness on the body and to devise ways to get around them. When it became apparent that NASA was thinking about sending an older astronaut into space, he campaigned to be that orbital guinea pig.
A brushless DC motor (BLDC) is a synchronous electric motor that is powered by direct-current (DC) electricity and that has an electronically controlled commutation system instead of a mechanical commutation system based on brushes. In such motors, current and torque are linearly related, as are voltage and rpm.
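Those proportionalities are often written as torque = Kt·I and (no-load) speed = Kv·V. A minimal numeric sketch, where the Kt and Kv values are invented for illustration rather than taken from any real motor:

```python
KT = 0.05   # torque constant, N*m per ampere (illustrative value)
KV = 200.0  # speed constant, rpm per volt (illustrative value)

def torque_nm(current_a):
    # Torque is proportional to current in an idealized BLDC motor.
    return KT * current_a

def no_load_speed_rpm(voltage_v):
    # Speed is proportional to applied voltage, ignoring losses and load.
    return KV * voltage_v

print(torque_nm(10.0), "N*m at 10 A")          # 0.5 N*m
print(no_load_speed_rpm(12.0), "rpm at 12 V")  # 2400.0 rpm
```

Real motors add friction, winding resistance, and load effects, but the linear first-order model above is the one the definition refers to.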
Certain drugs may also help to control pain. These include anti-inflammatory medicines such as ibuprofen, aspirin, naproxen, indomethacin, and many others. While some of these are sold over the counter, they can have side effects, most notably gastrointestinal bleeding. A newer anti-inflammatory, celecoxib (Celebrex), may have fewer gastrointestinal side effects.
Your thoughts can affect pain severity. In wartime, soldiers’ wounds often are less painful than the same wounds would be in a civilian. Why? Because the wounds signify that the wounded soldier will soon be going home, but the civilian’s injury is a source of fear and anxiety. This is known as the “Anzio effect” after the World War II battle that took place near Anzio, Italy, where it was first noted.
While there is currently no cure for diabetes, researchers are hopeful for advancements. A 2017 pilot study may provide hope for a diabetes cure in the future. Researchers found that an intensive metabolic intervention, combining personalized exercise routines, strict diet, and glucose-controlling drugs could achieve partial or complete remission in 40 percent of patients, who were then able to stop their medication. More comprehensive studies are in the pipeline.
In 2003, ephedrine -- also known as ma huang -- became the first herbal stimulant ever banned by the FDA. It was a popular component of over-the-counter weight loss drugs. Ephedrine had some benefits, but it could cause far more harm, especially in high doses: insomnia (difficulty falling and staying asleep), high blood pressure, glaucoma, and urinary retention. This herbal supplement has also been associated with numerous cases of stroke.
1. Refined sugar - We all know that sugar, unless it is in its most natural form, is bad for people suffering from diabetes. When consumed, refined sugar spikes the blood sugar rapidly. Sometimes even a natural form like honey can cause a sudden spike in blood sugar levels. So, it’s better to avoid refined sugar by all means if you are a diabetic.
Neuropathy is one of the common effects of diabetes. It’s estimated that 60-70 percent of people with diabetes will develop some sort of neuropathy throughout their lives. By 2050, it’s estimated that over 48 million people in the United States will be diagnosed with diabetes. That means in the future, anywhere from 28-33 million Americans could be affected by diabetic neuropathy.
Alcohol: Alcohol can dangerously increase blood sugar and lead to liver toxicity. Research published in Annals of Internal Medicine found that there was a 43 percent increased incidence of diabetes associated with heavy consumption of alcohol, which is defined as three or more drinks per day. (8) Beer and sweet liquors are especially high in carbohydrates and should be avoided.
Gestational diabetes mellitus (GDM) resembles type 2 DM in several respects, involving a combination of relatively inadequate insulin secretion and responsiveness. It occurs in about 2–10% of all pregnancies and may improve or disappear after delivery. However, after pregnancy approximately 5–10% of women with GDM are found to have DM, most commonly type 2. GDM is fully treatable, but requires careful medical supervision throughout the pregnancy. Management may include dietary changes, blood glucose monitoring, and in some cases, insulin may be required.
In addition, as early as in 2008, the Swedish Board of Health and Welfare examined and approved advice on LCHF within the health care system. Advice on LCHF is, according to the Swedish Board of Health and Welfare’s review, in accordance with science and proven knowledge. In other words, certified healthcare professionals, who give such advice (for example myself) can feel completely confident.
Sex is a good pain reliever, and orgasm is more powerful than almost any drug in relieving pain. Rutgers University professor and sex researcher Beverly Whipple, PhD, found that when women had orgasms, their pain “thresholds” went up by more than 108%. In other words, things that usually hurt them no longer had an effect. She believes men have similar responses, though she’s only studied women. The pain-reducing effect seems to last for hours.
The first media reports of Darkes' supposed cure, along with a similar description of the "rare" gene that partially explained it, began surfacing in February 2017. At the time, Darkes made it clear that his doctors in Northampton were still reviewing the test results, and that they would report on their findings soon. A story published in March 2017 in the Northampton Chronicle and Echo reported that Darkes' test results "are expected to be published next week."
There is no cure for diabetes. It’s a chronic condition that must be managed for life. This seems odd, given all the modern medical technology we have at our disposal. We can insert heart pacemakers, perform liver transplants, even adapt to bionic limbs, but coming up with a replacement for the islets that produce insulin in the pancreas appears to be out of reach for now. There is something about the pancreas that makes it difficult to fix, which is part of the reason pancreatic cancer remains so deadly.
The advice above is therefore not only illogical, but also works poorly. It completely lacks scientific support according to a Swedish expert investigation. On the contrary, in recent years similar carbohydrate-rich dietary advice has been shown to increase the risk of getting diabetes and worsen blood sugar levels long-term in people who are already diabetic. The advice doesn’t improve diabetics’ health in any other way either.
Family or personal history. Your risk increases if you have prediabetes — a precursor to type 2 diabetes — or if a close family member, such as a parent or sibling, has type 2 diabetes. You're also at greater risk if you had gestational diabetes during a previous pregnancy, if you delivered a very large baby or if you had an unexplained stillbirth.
Internal bleeding is the result of tissues, organs or blood vessels spilling blood into areas of the body that do not typically contain blood or participate in active blood circulation. If not controlled, internal bleeding can cause serious health complications including anemia or even death. Excessive internal bleeding may require supplemental nutrients like folate and vitamin C to boost red blood cell production.
Internal bleeding causes the accumulation of blood in some tissues and internal compartments, and can be fatal if not stopped. According to MedlinePlus Medical Encyclopedia, symptoms of internal bleeding include chest pain, abdominal pain and discomfort, changes in skin tone, or blood visualized in vomit or stool. Other symptoms may include confusion, muscle weakness, loss of blood pressure, and shortness of breath, dizziness or light-headedness. You should always seek immediate medical help if you are concerned that you may be experiencing internal bleeding.
Vitamin C Function
Vitamin C is an essential water-soluble vitamin that you get naturally from the foods that you eat. Vitamin C, which is also known as ascorbic acid, plays a vital role in a variety of biological functions and health-promoting processes. According to Oregon State University's Linus Pauling Institute, vitamin C is important for collagen production; for bone, tendon and ligament health; and as a powerful antioxidant. Vitamin C deficiency, more commonly known as scurvy, is a deadly disease that primarily occurs in individuals suffering from severe malnutrition.
Function of Folate
Folate is an essential nutritional vitamin that is also known as folic acid or vitamin B9. Similar to other B vitamins, folate is necessary for metabolism, the conversion of food into energy. In addition to its role in metabolism, folate is critical for a number of biological processes including healthy nervous system function, liver function, fetal brain development and the production of red blood cells. Folate deficiencies may result in mild to severe cognitive problems, developmental delays, gastrointestinal discomfort, shortness of breath and gingivitis.
Loss of Blood and Vitamins
Due to the many biological roles associated with essential vitamins like vitamin C and folate, excessive blood loss through internal bleeding may deplete your body’s ability to manufacture blood cells. Anemia is the result of a lowered number of red blood cells in your body, which results in a decrease in the amount of oxygen delivered to your organs and tissues. According to the National Heart, Lung, and Blood Institute, vitamin C and folate are both essential nutrients needed for red blood cell production to overcome anemic health complications. Physicians may suggest an increase in vitamin C and folate, among other nutritional components like iron, to boost production of red blood cells.
The Penobscot Experimental Forest (PEF) is located in the towns of Bradley and Eddington in east-central Maine, about 10 miles north of the city of Bangor. This approximately 3,900-acre experimental forest was established in 1950 as a site for long-term U.S. Forest Service research. Though there are 22 experimental forests in the Northern Research Station and 80 nationwide, the PEF is the only one in the transitional zone between the eastern broadleaf and boreal forests known as the Acadian Region. The PEF thus serves as an important and unique source of information about the ecology and silviculture of mixed-species northern conifers.
Research on the PEF is critical to forest managers and policy makers throughout the region. Though initially established to study the production potential of a range of silvicultural practices, the research program on the PEF has expanded over time to include investigations of numerous aspects of ecosystem structure and function, including tree and understory vegetation composition, leaf area and photosynthesis, genetic diversity, wildlife–habitat relationships, nutrient cycling, and carbon storage. Today, Forest Service scientists, university partners, and other cooperators maintain long-term silvicultural experiments and conduct short-term observational and manipulative studies to better understand the mechanisms of ecosystem change over time.
As an amateur photographer, Adam Ratana needed to find out when the Sun would rise or set at a particular landmark on a particular day. Traditionally, photographers had to travel to a destination at the exact time and day to observe the sun and moon rise, in order to plan and determine the best time of day for a shoot.
The result was Sun Surveyor, an iOS and Android app that uses Google Maps APIs to visualise the location of the sun and the moon anywhere in the world. The app makes it easy to figure out when the natural lighting will be just right – and get the ideal shot. Sun Surveyor uses augmented reality to overlay the paths of the sun and moon on a camera’s view, so users can see where in the sky they’ll be at a specific time and place. Photographers can use it to plan their shots ahead of time, and businesses can use it to gauge things like how best to align solar panels to make the most efficient use of the angle of the sun.
- Photographers can plan and visualise their shots ahead of time
- Sun Surveyor overlays the path of the sun and moon on any Street View location anywhere in the world so users can see the exact location
Millions of apps and sites use Google Maps APIs to benefit from a powerful mapping platform. Discover which set of APIs Sun Surveyor has used to create their website and mobile application:
- Google Maps Elevation API: calculate the height of a landmark
- Google Maps Time Zone API: capture the time zone of a landmark
- Google Street View Image API: see the exact location and view
- Google Maps Android API: mobile app for Android users
- Google Maps SDK for iOS: mobile app for iOS users
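The mapping APIs supply the terrain, time zones, and imagery; the astronomy itself is standard solar geometry. As an illustration of the kind of calculation involved (a textbook approximation, not Sun Surveyor's actual code), the following sketch estimates day length from latitude and day of year using Cooper's declination formula and the sunrise hour-angle equation:

```python
import math

def solar_declination_deg(day_of_year):
    """Cooper's approximation for the sun's declination angle, in degrees."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))

def day_length_hours(latitude_deg, day_of_year):
    """Hours between sunrise and sunset (ignores refraction and elevation)."""
    lat = math.radians(latitude_deg)
    dec = math.radians(solar_declination_deg(day_of_year))
    cos_h = -math.tan(lat) * math.tan(dec)
    cos_h = max(-1.0, min(1.0, cos_h))  # clamp for polar day / polar night
    hour_angle = math.degrees(math.acos(cos_h))
    return 2.0 * hour_angle / 15.0      # Earth rotates 15 degrees per hour

# Around the March equinox (day 80) every latitude gets close to 12 hours.
print(round(day_length_hours(45.0, 80), 1))
```

An app like Sun Surveyor layers the same kind of result, per minute and per azimuth, onto a camera view or Street View panorama.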
This article describes the ability to plug things into circuits while the power is on and requires a knowledge of basic electronics.
Not that long ago, it was considered mandatory to switch electronic equipment off before inserting or removing a card (i.e., a module). Today there is a demand for live insertion systems that permit you to plug a new card into a bus without first powering down the computer. This facility is also called hot-swapping.
In order to support live insertion, the device being inserted must not affect the operation of the bus during the few milliseconds that insertion takes place. For example, the FutureBus+ (an early general-purpose backplane bus standard) made explicit provision for live insertion.
When live insertion occurs, each pin on the inserted device that is connected to a bus line in the host system must be in a high-impedance state.
The following discussion is taken from a Philips Semiconductors application note.
Figure 1 provides an electrical model of hot insertion. Figure 1a shows how we can regard the system into which we are inserting the card (i.e., the backplane bus) as a transmission line, while the card itself can be modeled as a lumped capacitance; this is largely true if the connector on the card is short and it is driven by typical semiconductor devices.
FIGURE 1 An electrical model of live insertion
Figure 1b shows the equivalent circuit of the system at the moment the card is inserted. The peak disturbance at the point of contact is given by:

Vpeak = Vbus - Vi
where Vbus is the voltage on the bus at the moment of insertion, Vi is the initial voltage at the pin on the card making contact, R is the impedance of the bus (assumed to be resistive), and C is the lumped capacitance of the pin. This equation shows that, at the moment of insertion, the voltage at the pin making contact is initially Vi and rises exponentially to Vbus at a rate that depends only on the time constant RC. However, since the point of insertion may also represent a mismatch between the impedance of the bus and the pin, the peak voltage at the pin is Vpeak(1 + Γ), where Γ is the reflection coefficient at the pin.
If Vswitch is the switching threshold of the bus, the width of the glitch at Vswitch is given by:
tglitch = -RC ln[(Vbus - Vswitch)/(Vbus - Vi)]
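Plugging illustrative numbers into this expression shows how narrow the glitch window is. The values below (bus impedance, pin capacitance, logic levels) are assumptions chosen for the example, not figures from the application note:

```python
import math

def glitch_width_s(r_ohm, c_farad, v_bus, v_i, v_switch):
    """Width of the insertion glitch below the switching threshold V_switch.

    Implements t_glitch = -RC * ln((V_bus - V_switch) / (V_bus - V_i)).
    """
    return -r_ohm * c_farad * math.log((v_bus - v_switch) / (v_bus - v_i))

# Assumed values: 50-ohm bus, 20 pF pin capacitance,
# bus at 5 V, pin initially at 0 V, 3 V switching threshold.
t = glitch_width_s(50.0, 20e-12, 5.0, 0.0, 3.0)
print(f"{t * 1e9:.2f} ns")  # well under a nanosecond for these values
```

With these numbers the RC time constant is 1 ns and the glitch spends under a nanosecond below threshold; real insertions are dominated by contact bounce and driver behavior, which is what the rest of the note addresses.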
Clearly, anyone designing a live insertion system will strive to minimize the width of glitches during insertion. Short glitches require the designer to select bus drivers and receivers with low I/O capacitances. Furthermore, in order to minimize the effect of the Γ term, the output rise and fall times of bus-driving devices should not be made too fast.
Semiconductor manufacturers have designed bus drivers that are well suited to live insertion; for example, figure 2 shows the output stage of a bus driver with power-up protection.
FIGURE 2 Power-up protection in the output stage of a bus driver
The output buffer is enabled by a NOR gate. When the output of the NOR gate is high, the buffer can actively drive the bus. When the output of the NOR gate is low, the output buffer is disabled and the bus is not driven, which is exactly the situation required during live insertion.
As you can see from figure 2, the NOR gate has two inputs. One is a conventional
enable, OE*, and the other is derived from a circuit containing a transistor T1 and
two diodes, D1 and D2. When transistor T1 conducts, the voltage at its collector
is low and the NOR gate is enabled. However, in order for T1 to conduct, the voltage
at the Vcc pin must be approximately three times the voltage across a forward-biased junction (about 2 V).
Shin’s paper describes the results of glitch testing in the laboratory. A bus is pulled up to 5 V by a 3 kΩ resistor and insertions and removals are made while observing the voltage level on the bus. The test was considered failed if a glitch caused the voltage level on the bus to drop below VOHmin = 3 V (this is a conservative test because an error would probably not occur at VOHmin).
Shin found that the best configuration was to hold the data input of the bus driver high, and to connect the driver’s OE* pin to Vcc during the insertion. This configuration passed 99.9% of the insertion tests (all extraction tests were passed). If however, both the bus driver’s OE* and data input pins were connected to Vcc during insertion, only 20% of the insertion tests were successful (all extraction tests were passed).
These results demonstrate that card removal seems to be less prone to errors than card insertion, and that the device characteristics of the bus drivers are very important.
Excerpts from "the first rough draft of history" as reported in The Washington Post on this date in the 20th century.
The U.S. invasion of French North Africa, code-named "Operation Torch," was met with some initial resistance from French troops under orders from the Vichy government of unoccupied France, but the French high commissioner in North Africa capitulated after a few days and handed over the French territory to the Allies. In less than a year, Axis forces there were forced to surrender, giving the Allies the bases they needed to launch the invasion of Italy. An excerpt from The Post of Nov. 8, 1942:
By Edward T. Folliard
Post Staff Writer
A "powerful" American Expeditionary Force has invaded French Africa, on both the Atlantic and Mediterranean coasts.
The Yanks, with their British allies, are moving to drive the Germans and Italians completely off the African Continent.
News of the gigantic movement against Vichy France's rich and strategic colonies came in an electrifying statement from the White House at 9 o'clock last night.
The announcement, signalizing the first great offensive under the Stars and Stripes in this war, came exactly 11 months after Pearl Harbor.
The White House said the American landings in French Africa were aimed at forestalling an Axis invasion, with its "threat to America" across the comparatively narrow sea and to "provide an effective second front assistance to our heroic allies in Russia."
British naval and air forces, it was said, are assisting the American Army, and in the immediate future a "considerable number" of British army divisions will follow.
The momentous American-British landings, in conjunction with Lieut. Gen. Bernard L. Montgomery's shattering drive against Field Marshal Erwin Rommel, it was said, would deny the Axis a starting point for an attack on the Americas. ...
The eloquent voice of President Roosevelt, recorded for American and British radio stations "some time ago," told the French people that the Americans would "cause you no harm," and asked them to help "where you are able." Leaflets bearing his words were dropped on French soil from warplanes.
Lieut. Gen. Dwight D. Eisenhower, commander of the American forces in the European theater, is commander in chief of the forces moving into French Africa.
He issued an appeal to the French garrisons, saying that he had "given orders that no offensive action be undertaken against you on condition that for your part you take the same attitude." ...
From London came word that this was "the start of the real American war in the European theater of operations."
Among the many instruments that people have devised to communicate with one another, the postcard fills many roles. If you need a simple way to send a quick note, to let someone know you’re thinking of them, to save or send a souvenir of your travels, or merely to document your own surroundings – postcards can meet all of these needs and more.
The American postcard was first developed in the 1870s, and the first souvenir postcard in the 1890s. They quickly became immensely popular, with their “Golden Era” spanning from around 1907 to 1915.
During that period, the U.S. Postal Service introduced the “divided back” postcard, which included a line on the blank side to separate the address area from the message area. Also during this period, Kodak produced a specialized “postcard camera” which enabled the quick production of “real photo” postcards.
The postcards highlighted in this exhibit come from our American Picture Postcard Collection (RHC-103). Their photographs and illustrations depict the locales and sights of Michigan, and show us how things used to be.
In some situations, you may have a single data value associated with a data set that you would like to visualize "on" the data set. For example, you may want to visualize a boundary condition parameter defined for each surface in a computational fluid dynamics simulation. ParaView 4.3 now makes that possible.
You can now color data sets in ParaView by the value of a tuple in a field data array associated with the data set. Field data arrays in a data set now appear in the list of data arrays available to select for coloring. If a field data array is chosen, the first tuple in the array will be used to determine the color of the entire surface of the object being visualized. Tuples may have more than one component, in which case a specific element of the tuple can be selected (similar to how X, Y, or Z components can be selected for vectors) or the magnitude of the tuple can be selected instead.
Coloring by field data also works when you have a composite data set where each member data set has a field data array with the same name. As an example, consider the ParaView example data file
tube.exii (available to download here). It is a multiblock data set containing four blocks. Each block has a field data array named "ElementBlockIds" that has one element with the block ID. If you select the "ElementBlockIds" array to use for coloring the data, each block will be colored by the element block ID associated with that block.
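The mechanics can be sketched in plain Python. To be clear, this is a conceptual illustration only, not the ParaView API: the block names, the shape of the field data arrays, and the blue-to-red lookup table are all assumptions made for the example. It just makes explicit what "the first tuple determines the color of the entire surface" means.

```python
# Conceptual sketch (NOT the ParaView API): each member of a multiblock
# data set carries a field data array, and the FIRST tuple of that array
# decides one uniform color for the whole block's surface.

def tuple_scalar(t, component=None):
    """Pick one component of a tuple, or its magnitude if none is given."""
    if component is not None:
        return t[component]
    return sum(c * c for c in t) ** 0.5

def lookup_color(value, vmin, vmax):
    """Map a scalar into a simple blue-to-red lookup table."""
    span = (vmax - vmin) or 1.0
    t = min(max((value - vmin) / span, 0.0), 1.0)
    return (round(t, 3), 0.0, round(1.0 - t, 3))  # (r, g, b)

# Four blocks, mimicking tube.exii's "ElementBlockIds" field data array:
# one single-component tuple per block.
blocks = {
    "block_%d" % i: {"ElementBlockIds": [(float(i),)]} for i in (1, 2, 3, 4)
}

ids = [tuple_scalar(b["ElementBlockIds"][0]) for b in blocks.values()]
colors = {
    name: lookup_color(tuple_scalar(data["ElementBlockIds"][0]),
                       min(ids), max(ids))
    for name, data in blocks.items()
}
# block_1 maps to pure blue and block_4 to pure red; every cell of a
# block would be painted with that block's single color.
```

In ParaView itself you simply pick the field data array from the coloring controls; the selection between a specific tuple component and the magnitude corresponds to the `component` argument in the sketch above.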
"date": "2017-10-18T09:09:56",
"dump": "CC-MAIN-2017-43",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822851.65/warc/CC-MAIN-20171018085500-20171018105500-00856.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8769646883010864,
"score": 2.65625,
"token_count": 314,
"url": "https://blog.kitware.com/new-in-paraview-coloring-by-field-data/"
} |
The image on page 401 of George W. Hunter’s 1907 Elements of Biology is strikingly out of place. It is a Greek bronze flattened to a black silhouette. A woodblock engraving in a textbook otherwise illustrated with halftone photographs. A relic of Renaissance anatomy covered by the soot of the Age of Steam. Yet there it stands, owning the page.
The Nervous Icon (as I’ve come to call the image) was a popular feature in biology textbooks into the 1950s. Picked up, rephotographed and copied with apparently little concern for image quality, artistry, copyright or context. It was treated poorly, just plopped in and barely referenced in the later texts in which it appeared.
But something told me there was a story here. I felt as if the Nervous Icon was a courier carrying a secret message from the past.
It turns out that tracing the history of this image – exploring when it was first cut, how it was reproduced, where it appeared, and why it remained popular even as similar classically styled illustrations were retired – reveals surprising connections between the seemingly disparate topics of printing technology, print piracy, electricity, telegraphy, spirituality, abolition, and that most central of nineteenth century anxieties, masturbation. The Nervous Icon’s secret is that, in its hyper-nakedness, it warned of the dangerous interconnectedness of the body, where stimulation, or over-stimulation, of any one part would cause damage to the entire system.
I have identified nearly 100 instances of the Nervous Icon from books older than Hunter’s Elements. I’ve written before that the anonymous carver of the Nervous Icon most certainly used as reference illustrations published in Andreas Vesalius’ famous 1543 Renaissance anatomy, De Humani Corporis Fabrica. But strangely, while similarly posed and frequently associated images of the human skeletal system, muscular system and circulatory system could be traced back through nearly identical copies rendered in the eighteenth century, seventeenth century or earlier, the Nervous Icon, as far as I have been able to determine, dates only to the early 1840s, with the earliest examples so far found appearing in Agustín Yáñez y Girona’s Lecciones de Historia Natural, published in Barcelona in 1844, and William Benjamin Carpenter’s Popular Cyclopaedia of Natural Science, published in London in 1843.
The earliest domestic (U.S.) example I’ve found appears in Calvin Cutter’s Anatomy and Physiology, published in Boston in 1847. The identical illustration appears in Cutter’s 1850 Anatomy, Physiology, and Hygiene. A nearly identical version appears in Thomas Lambert’s Human Anatomy, Physiology and Hygiene, published in Hartford in 1854. And an identical example can be found in Dionysius Lardner’s The Museum of Science and Art Vol. VIII, a popular British encyclopedia published in London in 1855.
Over the next century, reprints, copies and re-drawn variations sprouted up everywhere.
It is not strange that an image like the Nervous Icon got around in the mid-nineteenth century. According to Sarah Crider Arndt (see comment in Sarah’s blog, Print on the Periphery), publishers often copied illustrations by hiring local engravers to cut new plates. Slow communication between commercial centers in the United States, non-existent international copyright laws, the imperative for accuracy in scientific illustration and the difficulty of creating original works when cadavers are your models made pirating both possible and profitable. Despite cooperative trade agreements, which gradually brought control to whole-text plagiarism, pirated illustrations, particularly anatomical illustrations for which the term “original” had little meaning, were copied and copied again, following conventions that were centuries old.
From the Renaissance on, illustrators of human anatomy had referenced classical Greek statuary to create the silhouettes within which they would display the body’s inner workings. Through its many piratical admirers, Andreas Vesalius’s anatomy gave birth to a small number of standard illustration armatures. These figures, always male, obviously white, were frequently rendered with their weight asymmetrically distributed, in motion, with one hand open and the other with its forefinger delicately extended.
The Nervous Icon was clearly built with parts first rendered in Vesalius, and it carries a strong family resemblance to other statuesque frames then in common circulation. However, the image is unusual for a number of reasons. First, while the human lymphatic, circulatory, skeletal and muscular systems had long been rendered within a human silhouette, the human nervous system was most often rendered as sort of an anthropomorphized bush. Second, its outline appears nowhere in the many popular European anatomical dictionaries and encyclopedic works available. Third, the reversal, white lines on a black body, is somewhat unusual. Finally, and perhaps most curiously, the Nervous Icon seems to have materialized on two sides of an ocean simultaneously, without the telltale signs of it being re-engraved by a lesser hand when it traveled west.
To understand how this was possible, and why it is important, we need to detour briefly into a short history of printing in the industrial era.
By the 1840s, three methods had been developed to imprint images on paper: intaglio, which typically involved an engraved copper or steel plate; planographic, which involved a lithographic stone; and relief, which usually relied on a small but easy to work piece of basswood. All had their advantages and limitations. The key problem with metal and stone was their relative cost, a major hiccup at a time when cheap paper and fast presses were set to feed a large literate population ravenous for stuff to read. But it wasn’t just the materials and skills necessary for metal engraving and lithography that made these techniques expensive. To put text next to image, separate time consuming and waste producing press passes were required.
Woodblock illustrations were far more convenient. They were less costly to cut and, as a relief technology like movable type, could be locked up in attractive page compositions that could be printed in one pass.
But there was a big downside. While woodblocks in the hands of a skilled artist could be carved with a level of detail nearly indistinguishable from a copper plate engraving, that detail didn’t hold up on press. A noticeable decay in quality was apparent after just a few hundred impressions.
The workaround, invented in the early 1700s and in common use by the 1820s, was to make a plate copy by pouring hot metal into a papier mâché mold cast from a locked-up printer’s forme. The result was called a stereotype. Stereotypes were expensive but robust, and once created, could be bent around the drum of a high-speed press and be at the ready if a text proved popular enough to demand a reprint (related story and video). Also, a stereotype, or multiple stereotypes created from a common mold, could be shipped to cooperative printers in other cities to expand the market and beat the pirates. But while good enough for text, because of the material used to make the molds and the pressures required for decent reproduction, stereotypes did not reproduce the details in illustrations very well and tended to damage the woodblock originals.
Which brings us back to the Nervous Icon.
Calvin Cutter’s 1847 edition of Anatomy and Physiology proudly proclaims itself a “third stereotype edition.” However, even with what appears to be a weak printing (further degraded by the low resolution Google scan) the detailed structure of the brain and the sharp points of the finely tapered nerves are well rendered. If the image was printed from a stereotype, and it likely was, the relief image used in the printer’s forme must have been of reasonably high quality. But it almost certainly wasn’t the original woodblock, unless that woodblock was traded between competitors Cutter and Lambert, and managed to travel east across the Atlantic to also end up in Lardner’s encyclopedia.
Here’s where things get a little spooky. It is possible, probable even, that the technique that allowed detailed and near-exact copies of the Nervous Icon to appear in Cutter, Lambert and Lardner’s books is intimately connected to the questions of why the image was imagined in the first place, how it spread and why it proved so popular.
That connection is electricity.
In the late 1830s and into the 1840s electricity was everywhere. It was in the air in the form of “animal magnetism.” It was in the wires in the form of Samuel Morse’s dots and dashes. And it was in our bodies, transmitted through the nerves that most physiologists now agreed were the messengers of sensation. Mesmerists and phrenologists, technologists and scientists, physicians and missionaries were all enthralled by electricity’s almost spiritual properties. As Michael Sappol writes in A Traffic of Dead Bodies, electromagnetism became identified with “the élan vital, the force that animated the living body, and possibly the inorganic universe as well” (153).
But sensations magically transmitted at the speed of light caused as much fear as fascination. Many physiologists in the early nineteenth century believed that stimulation, specifically over-stimulation, was the cause of disease. Through what was seen as a more than metaphorical similarity between the nervous system and the telegraphic system, electricity connected concerns about the individual and national body. As Sam Halliday suggests in Science and Technology in the Age of Hawthorne, Melville, Twain, and James (2007), electricity was a common theme that tied together debates concerning the education of women, the abolition of slavery and fear that middle-class teenage boys could masturbate themselves into madness.
Education in anatomy became a popular rage in the United States in the first decades of the nineteenth century, promoted, according to Sappol, as having “an intrinsically ‘civilizing’ effect.” Critics complained that education in anatomy “tends to render the student a Materialist”; however, this complaint was trumped by the “democratic” appeal among an increasingly prosperous but also increasingly anxious public insecure in its bourgeois identity; a class “only a thin stratum above the anatomy rioters, and maybe not even that” (Sappol 161).
Strangely, this leads us again back to the Nervous Icon. For while in the air, in the wires and in our bodies, thanks to the invention of Smee batteries and the higher tech Daniell cells, by the 1840s electricity was also flowing in the print shop, there to drive a new process for making copies of woodblock illustrations called electrotyping.
Electrotyping, a process for depositing a thin layer of copper against a graphite coated wax mold, was invented in 1838. By 1841 it was in use among magazine and book publishers in Europe and the United States. It was a relatively expensive process, but the plates or plate components it created were both finely detailed and durable.
Electrotyping allowed for the manufacture of multiple copies of an original woodblock illustration (or a complete composed printer’s forme or any three-dimensional surface – see video). It was used hand-in-hand with stereotyping. For example, compositors at regional newspapers were often sent electrotyped copies of ads that they would then place into a page, along with other ads and movable type, and then stereotype the whole page in preparation for printing. Merchants could also send less expensive stereotypes of their ads to newspapers. But stereotypes made from stereotypes, again because of the material used to make the molds, were of noticeably lesser quality.
Without examining the originals, and perhaps even then, it is impossible to know if the Nervous Icons in Girona, Cutter, Lambert and Lardner’s books were created from a common woodblock, or stereotyped copies, or electrotyped copies of an original. One thing that does seem clear. These images were not re-engraved copies. How do we know that? From the presence of the fine straight dotted lines in the Icon’s left shoulder and wrist, its sides and its calves. These lines are the vestiges of pointers that, in a yet to be identified original, connected to a key listing the nervous system’s various elements.
(UPDATE: Probable original now identified.)
Careful comparison of the Cutter and Lardner images betrays their common ancestry. Rather than chopping the leader lines at the edge of the silhouette, as Lambert did, Cutter used one black dash external to the figure to connect the white lines to his 16-point key. These single dashes overlay the longer Lardner lines exactly. The Lambert image, though it has no key, also carries vestigial white lines, most of which overlay the Cutter and Lardner examples exactly, though the line that enters the Lambert torso from the left cuts at a slightly different angle than the other two.
Still, we cannot know for sure if these images, even if they were created from a common original (which I believe to be the case), were stereotyped from stereotypes, stereotyped from electrotypes, or even stereotyped from a common woodblock master. But there is one last clue that I think tips the balance toward electrotypes.
After it was used by Lardner, the “original” version of the Nervous Icon, the version with the vestigial leader lines, disappeared from books, replaced by near-exact but line-free duplicates, most of which suffer from reproductive decay (versions 1.1 through 1.4 in the database). The most degraded example can be found in Sanborn Tenney’s Elements of Zoology, published by Charles Scribner’s Sons in 1875. It looks to be a stereotype of a stereotype of a stereotype. Other publishers seemed to maintain access to something closer to an original master, and good reproductions are found in Smith (1885) and Hunter (1907 and 1911). But then something surprising happens.
Sometime during the print run of George W. Hunter’s 1911 biology textbook, Essentials of Biology, a “new” and much more detailed master was substituted for the version used in the book’s first printings; a master that carried those once mysterious white dotted lines. The same master was then used to create the plates for Hunter’s 1914 book, A Civic Biology, a copy of which I am fortunate to own and therefore can examine directly. This image, though a woodcut, is finely engraved. If it was created from a basswood master, that master had to have been remarkably sturdy. More likely, much more likely I think, the 1914 Hunter version of the Nervous Icon was created from an electrotyped master. This is not the only possible explanation, but it is a logical solution.
A Civic Biology was the last Hunter textbook to feature the Nervous Icon. Other authors used the image, but ironically in the era of “advanced” photolithography and offset printing, the image decayed rapidly and noticeably generation to generation, both in form and print quality. By the mid-twentieth century, instruction regarding the consequences of unnecessary stimulation of the body’s electrical system gave way to more direct instruction over the possible consequences of stimulation. By the 1960s, the nervous system was just one transparent overlay, co-equal with the arteries and veins. The system now called out for special attention in the post-Kinsey, post-Playboy, post-Pill era was, of course, the reproductive.
Nearly all of Cutter’s illustrations appear to be re-engraved copies of illustrations published a couple of years earlier in Smith and Horner’s Anatomical Atlas.
As with Gutenberg’s invention of movable type, one of the first uses of electrotypes (or ‘electros’ as they were often called) was in a printing of the Bible (see article). The finely rendered illustrations, though impressive, were considered ‘indelicate’ by some.
Halliday, Sam. 2007. Science and Technology in the Age of Hawthorne, Melville, Twain, and James: Thinking and Writing Electricity. New York: Palgrave Macmillan.
Johns, Adrian. 2009. Piracy: The Intellectual Property Wars from Gutenberg to Gates. Chicago: University of Chicago Press.
Sappol, Michael. 2002. A Traffic of Dead Bodies: Anatomy and Embodied Social Identity in Nineteenth-Century America. Princeton: Princeton University Press.
Stengers, Jean and Anne Van Neck. 2001 (translation of original published in 1998). Masturbation: The History of a Great Terror. New York: Palgrave.
"date": "2017-09-20T11:06:12",
"dump": "CC-MAIN-2017-39",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687255.13/warc/CC-MAIN-20170920104615-20170920124615-00696.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9539626240730286,
"score": 2.796875,
"token_count": 3557,
"url": "http://www.textbookhistory.com/i-speak-to-you-through-electrical-language-traveling-into-the-nineteenth-century-with-the-nervous-icon-2/"
} |
Sheep’s Sorrel (Rumex acetosella) is a common lawn weed. This is an edible plant that tastes great. It grows throughout the summer in many soil types. Sheep’s Sorrel is native to Europe and Asia but has naturalized in the northern United States. It has a uniquely shaped leaf that makes it easy to identify.
Edibility and Culinary Use
Sheep’s Sorrel contains oxalic acid so it has a slightly sour or tangy flavor. It can be eaten raw or cooked, the tangy taste is a great addition to salads, but it also tastes great eaten alone. Take a look at this Tangy Sorrel Salad Recipe. The roots can also be used in soups or salads. There are varieties of this plant grown commercially because its unique flavor is valued. This is one of those wild plants that tastes just as good if not better than most foods that are commonly purchased at the grocery store.
Sheep’s Sorrel is known as a plant with many nutrients, including vitamins C, B, D, E and K, beta carotene, magnesium, phosphorus, and potassium. It is known for its antibacterial properties, and has been used to treat bacterial infections including Staph, E. coli and Salmonella. It is also known as a treatment for urinary and kidney dysfunction. Sheep’s Sorrel is believed to contain more antioxidants than most herbs and is used in a Native American cancer treatment called Essiac.
Sheep’s Sorrel contains oxalic acid, just like rhubarb, spinach and some other common vegetables. Oxalic acid aggravates conditions such as rheumatism, arthritis, gout, kidney stones or hyperacidity, so if your doctor has told you to avoid oxalic acid, then avoid Sheep’s Sorrel. Side effects of overdosing on Sheep’s Sorrel could include headache, nausea, diarrhea, and tingling of the tongue.
Sheep’s Sorrel is a wild edible that can be found in many locations in the Northern US and tastes great, especially used in conjunction with other plants. It is easy to identify once you’ve seen the leaf shape, so next time you’re out foraging for wild edibles mix in some sheep’s sorrel, and notice the difference.
December 19, 2011
Emperor Trajan and Transylvania
In fact, in Rome there is mention of Transylvania in a rather indirect way. The Dacians were a group of people who lived in Dacia, a kingdom in the general area of Transylvania. Trajan fought with these people and nominally conquered them. The Romans went on to build cities there, including Alba Iulia.
Over the years, as more fighting happened within the borders of the Roman Empire, Roman influence in Transylvania slowly retreated. However, there are still reminders of that influence.
The aim of the fighting was to gain control of the gold mines in Transylvania. To this end, Trajan did defeat the Dacians. Trajan had a column built, and part of that column depicts the battle between the Romans and the Dacians. This is called Trajan’s Column – it can still be seen today.
"date": "2018-10-20T01:57:10",
"dump": "CC-MAIN-2018-43",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512501.27/warc/CC-MAIN-20181020013721-20181020035221-00376.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9880045652389526,
"score": 3.484375,
"token_count": 204,
"url": "http://www.thingsabouttransylvania.com/2011/12/emperor-trajan-and-transylvania.html"
} |
In a photo:
- Simplifying is good. Often very good.
- Diagonals can also be very good.
- The Rule of Thirds is also often very good.
- Tilting the camera is a way to simplify.
- Tilting is also a way to create diagonals.
- And to help you get to the Rule of Thirds.
So it stands to reason that if you tilt and simplify at the same time, you may end up with some reasonable images.
A few examples from the other day – taken with the Fuji X100, which is still a great toy. As you learn more about it, it gets better.
Because this camera has a fixed lens (35mm, full frame equivalent) you end up tilting instead of zooming in and out – and this makes your pictures better.
Here’s me, the other day – and look at the texture and converging diagonals:
Here’s a salad, served with colour and texture – and with a blurred background that “tells a story by making the viewer put it all together”:
And a few more food and drink snaps:
And a non-food snap: the best calculator series ever made (you do not need an “=” button!)
Can you see a pattern emerge?
Here’s your homework. Go shoot some pictures:
- With a 35mm lens length (real 35, i.e. use 24mm on a crop camera).
- Tilt to simplify or to get diagonals or to be able to compose with the Rule of Thirds.
- Shoot at wide open aperture (low “f-number”).
- Get close.
- Use high enough ISO to get non-blurry images.
- Use available light.
And have fun!
One of my favourite photo styles is this: high-key black and white, against a simple white background. This reduces the clutter to a minimum and starkly emphasizes the subject. Like in this image from the 20 November Mono, Ontario all-day workshop:
Tara, by Michael Willems
What I would say if I were to discuss this:
- The image screams out “black and white”.
- Clothes (white) and wall (white) both disappear. I like the emphasis this gives the subject and the pose.
- I like the 1970s feeling. I added a little grain to this image in Lightroom to emphasize that.
- Slight, very slight, soft beautiful shadows are important.
- Light is simple: one flash bounced behind me.
- Of course you use exposure compensation and the histogram to check your exposure. But you knew that. Hit the right side (just).
Try a portrait like this! All you need is a white wall, a camera, an on-camera flash, and a model in white.
Even when you take a simple snapshot, as a photographer you should think about how to do it. Almost subconsciously, I apply the same rules and the same thinking to a snapshot that I do to a photo I am paid for.
So I thought it might be worthwhile to discuss some of that thinking. In that context, here is a snapshot I took the other day of a friend:
Michael's friend Ninon, shot with a wide angle lens
In the second or two before I take that snap, what is some of my thinking, and what are some of the decisions I make?
- Subject: What is this a photo of? (it is a happy snap, so “camera-aware” and a smile are just great). Check.
- Light: Where is the light coming from? In this case it is from her front, indirect reflected light, i.e. nice flattering light. Check.
- Lens choice: I want to use a wide angle lens here because this is a situational portrait, a city woman in her city. Wide angle lenses put a subject in context. I want a wide angle lens also because it creates those nice diagonals that converge on the subject, can you see them? Finally, I also want wide angle to show depth in the photo (a technique knows as “close-far”).
- Depth of field: I want to draw attention to my subject by blurring the background, so I use Aperture mode (A/Av) with an aperture of f/2.8. Wide angle lenses are sharp all over, but by using a fast one (f/2.8) and by getting close I can still blur the background dramatically.
- Composition: I am using the rule of thirds. “Uncle Fred” puts the subject in all his images smack bang in the middle: I use off-centre composition. In this case the centre of attention (her face) is one third from the right, one third from the top. And she is looking into the picture, not out of it.
- Moment: you need to capture the right moment. I shot four times and by photo number four, her smile was best. Shoot a lot, even in a portrait, so you capture just the right moment. I also thought the right moment included the “suits” in the background. After all, King and Wellington, downtown Toronto, means suits out for (if not out to) lunch. So I was delighted to see them approach and took the four shots just as they passed behind her.
That is, in a nutshell, what I thought in the seconds leading up to this picture.
That is my thinking. Yours may have been different, and that is of course perfectly OK. There is not one good picture: there are 100 billion. The essence here is not what my conclusions were, but the fact that I was thinking at all, instead of just blindly snapping.
Light, moment and composition/subject, that is what makes up a picture. So think of those every time you take one, and your pictures will get better.
One “detector” that you should train yourself to have is the “curve-detector”. Curves lead gently through the picture, like a gentle slow journey. Two samples, picked from many:
Both taken on the same day in Gamla Stan: Old Stockholm.
Another “fill the frame”-shot here for your edification.
Shot while I was having dinner at a wedding – I was one of the two photographers but we were fed, at a very nice table in a good position. And even there I was shooting. Like the arrangement at the centre of the table.
You can get good pictures anywhere you try. The closer you get, the easier it is. I used my 70-200mm lens for this. So whenever you think “what do I shoot now”, you can always pick some interesting item, zoom all the way in on it, and shoot.
"date": "2018-09-23T06:11:11",
"dump": "CC-MAIN-2018-39",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159160.59/warc/CC-MAIN-20180923055928-20180923080328-00256.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.950603187084198,
"score": 2.765625,
"token_count": 1453,
"url": "https://www.speedlighter.ca/tag/composition/"
} |
Located half way between Ho Chi Minh and Hanoi on Vietnam’s beautiful east coast, Da Nang is Vietnam’s third largest city and gateway to the country’s tropical Central Region. In the past two decades Da Nang has seen considerable investment in local infrastructure, hotels and resorts. The city’s new airport is one of the country’s most modern, having opened in 2014. The history of Da Nang and central Vietnam extends back thousands of years, from the time of the great Hindu Champa Dynasty. Temple ruins and various archaeological sites from the ancient Cham city of Singhapura reveal the importance the region played in Vietnam’s culture, while the nearby imperial cities of Hue and Hoi An offer picturesque settings with ornate gates, palaces and temples. Centuries later, European pursuits brought with it French influences most notably in architecture and food. Today there are many fascinating buildings that feature a decidedly Parisian flair.
Founded by French archaeologists and opened in 1919, Cham Museum houses a collection of over 60 sandstone sculptures from the ancient Cham civilisation. These historic works date from the 8th to the 12th centuries and are made mostly from sandstone, terra cotta and bronze. The pieces on display represent Gods, holy animals and architectural decorations from Cham temples.
A cluster of abandoned and partially ruined Hindu temples, My Son was constructed between the 4th and 14th century by the Champa kings. The Champa ruled Central Vietnam from c200AD until c1700AD before being overthrown in the 19th century. During the Vietnam War in the 1960s, an act of Congress prohibited the bombing of My Son in hopes that the site could be preserved for future generations. It is now a UNESCO protected site.
A short drive north of Da Nang, the Imperial City of Hue with its protective moat, fortress-like walls, and imposing gates is reminiscent of the architectural grace and harmony associated with Asian cultures. As the seat where the last Vietnamese ruler reigned, the palace and grounds have great historic significance. A UNESCO World Heritage Site, the Imperial City consists of inner courts, peaceful temples and gardens.
Da Nang is spread over a wide area, and while it’s a city easy to navigate, it’s not always possible to get places on foot due to the distance. The most common method of transport around the city is by taxi. Metered taxis can be hailed and are generally low in price. If flying into Da Nang, the taxi ride from the airport to the city takes about 15 minutes. Bicycles are both a cost effective and fun way to get around the city. Bach Dang Road is the main north/south road situated along the Han River. Many of the hotels, restaurants and shops are located here and the beach is just a short ride away. Train travel to Hue, Nha Trang and other regional cities is available from Da Nang’s Haiphong Station. It is both an economical and picturesque way to see the country.
Visitors to Da Nang will find a wide range of hotels and resorts available in all price ranges. Small hotels in the city centre on both sides of the river provide economical accommodation, while high rise hotels and resorts near the beach offer a range of upscale amenities and beach activities. Whatever price range you choose, finding a hotel near the beach should be an easy task. Da Nang also caters to youth and student travellers with several popular hostels in the city.
Scorching heat and high humidity mark the hot season from May to September, when you can enjoy sunny skies on the beach in Da Nang. From September to March there can be high levels of rainfall, so remember to bring an umbrella if you are travelling to Da Nang at this time of year.
Vietnamese Dong (VND)
"date": "2018-10-21T17:31:59",
"dump": "CC-MAIN-2018-43",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514162.67/warc/CC-MAIN-20181021161035-20181021182535-00296.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9445620179176331,
"score": 2.734375,
"token_count": 796,
"url": "https://www.hkexpress.com/en-vn/see/destination-guides/da-nang"
} |
Historically, Muslims have often been told that there is no such thing as anti-Muslim racism, because Muslims are a religious group and not a race. Hence Muslims could legitimately ask for toleration and religious pluralism, but not for inclusion in anti-racist egalitarian analyses and initiatives.
The background to this mistaken view, of course, is the fact that Anglophone scholars of racism originally understood it in terms of biology, and specifically in terms of the black-white binary. At the same time, other scholars, especially in continental Europe, understood racism in terms of antisemitism, which again in modern times has had a biological underpinning.
However, it has become clear that these two paradigms failed to capture some contemporary experiences, such as anti-Asian cultural racism in Britain or anti-Arab cultural racism in France. Muslims, triggered by controversies such as The Satanic Verses affair and other incidents, responded to such hostilities and articulated their misrecognition.
The battle to define Islamophobia as a form of racism
While a number of Anglophone authors, including myself, started using the concept of Islamophobia in the late 1980s and early 1990s, it was the Runnymede Trust, with its report, ‘Islamophobia: a challenge to us all’ in 1997, which launched the career of the term as a concept of public discourse in Britain and much beyond it. It presented Islamophobia as “a useful shorthand way of referring to dread or fear of Islam – and therefore to fear or dislike of all or most Muslims”.
Although the report was ground-breaking and played a crucial role in getting people to think about anti-Muslim prejudice, I felt it did not sufficiently locate Islamophobia as a racism, like say, antisemitism.
I continued to write about Islamophobia as a form of cultural racism, which may be built on racism based on physical appearance (eg., colour-racism) but is a form of racism in its own right – again, like antisemitism. This also became the approach of UNESCO and I am pleased to see that it has been explicitly embraced by the new Runnymede Trust report, ‘Islamophobia: still a challenge for us all’, published in November, 2017.
In spite of these developments, the view that anti-Muslim racism does not exist continues to be expressed even today, with some denying that there is a racism that could be labelled ‘Islamophobia’. However, thankfully it no longer has the hegemony it once did.
The danger of reducing Muslims to the ‘Other’
Understanding some contemporary treatment of Muslims and aspects of their societal status in terms of ‘racialisation’ clearly is an advance. That being said, we should beware that the conceptualisation of Muslims in the West is not reduced to racialisation or any other ‘Othering’ theoretical frame such as Orientalism.
By definition, ‘Othering’ sees a minority in terms of how a dominant group negatively and stereotypically imagines that minority as something ‘other’, as inferior or threatening, and to be excluded. Indeed, the dominant group typically projects its own fears and anxieties on to the minority.
The danger of reducing Muslims to racialised identities is particularly high at the moment because the Islamophobic ‘othering’ of Muslims is acute, and if anything, rising.
This can be seen in how aggressive, negative portrayals of Muslims are standard in so much rightwing nationalism, whether in President Trump's Muslim bans, Marine Le Pen's Front National, Alternative für Deutschland in Germany or in various parties in central and eastern Europe, including the Freedom Party in Austria, which has now entered government.
Discourses about Muslims are central to the internal debates in UKIP about whether to become a working-class party of welfarism or one defending ‘our way of life’ against the alleged threats of Islamisation. Western media routinely present Muslims as unBritish, unFrench, unGerman and so on and with a degree of hostility that no other group suffers (except perhaps the Roma in parts of central Europe).
It is therefore right that scholarly and public attention should be focussed on this racialisation of Muslims, which is creating a deep, long-term division in our societies that may be very difficult to reverse.
Towards an anti-essentialist view of Muslims
Yet, like all ethnic or religious groups Muslims are not merely created by their oppressors but have their own sense of identity too. They seek to not be defined by others but to supplant negative and exclusionary stereotypes with positive and prideful identities. Oppressive misrecognitions, thus, sociologically imply and politically demand recognition. Our analyses therefore should be framed in terms of a struggle for recognition. Multicultural inclusivity means recognising and respecting these identities.
Recognition of Muslims’ own identities of course does not mean thinking of Muslims as a group with uniform attributes or a single mindset, all having the same view on religion, personal morality, politics, the international world order and so on.
In this respect, Muslims are just like any other group – they cannot be understood in terms of a single essence. Groups do not have discrete, nor indeed, fixed boundaries as these boundaries may vary across time and place, across social contexts and will be the subject of social construction and social change – and Muslims are no different in this respect.
This ‘anti-essentialism’ is rightly deployed in the study of Islamophobia and Muslims. It is a powerful way of handling ascriptive discourses, of showing that various popular or dominant ideas about Muslims, just as in the case of, say, women, LGBT people and so on, are not true as such but are aspects of socially constructed images that have been made to stick on to those groups of people because the ascribers are more powerful than the ascribed.
Anti-essentialism is an intellectually compelling idea, and a persuasive resource in the cause of equality, but it does not imply that Muslim identities are merely constructed by Orientalists and racists.
Drawing the line between reasonable criticism and Islamophobia
Most people will agree that Islamophobia must be distinguished from reasonable criticism of Muslims and aspects of Islam, but this is a difficult distinction to make. It also begs the question: what are reasonable criticisms that Muslims and non-Muslims may make in relation to some Muslim views about, say, gender or education or secularism?
The study of Islamophobia must not squeeze out the possibility of such discussion, but must endeavour to show us where it becomes Islamophobic. It must point out common examples of caricaturing, or assumptions that all Muslims think in a particular way, in order to create a climate where reasonable dialogue is possible.
Yet merely identifying the unreasonable and the populist is not enough. Our analysis should lead us to what is reasonable: to what criticisms may be made of Muslims and/or Islam, and what criticisms that Muslims make of contemporary Western societies that are also worthy of hearing.
Non-Muslims must be able to publicly state that some Muslim views or practices (eg., in relation to divorce) are oppressive of women – after all plenty of Muslim women say so – but they must do so in a civil and respectful manner and without offensive language or imagery. Yet we should beware of those who latch on to such views simply to express and promote Islamophobia, in the way that a lot of right-wingers who have spent decades arguing against feminists and against gay rights have today become champions of women’s and LGBT rights.
Every minority must be able to negotiate, modify, accept criticism and change in its own way; a dialogue must be distinguished from a one-sided imposition. That is the challenge for us all.
Tariq Modood is Professor of Sociology, Politics and Public Policy at the University of Bristol.
Image by MTSOfan. | <urn:uuid:dedf7d05-283d-42e9-ba83-56db31401364> | {
"date": "2018-08-15T04:49:09",
"dump": "CC-MAIN-2018-34",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209884.38/warc/CC-MAIN-20180815043905-20180815063905-00616.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9600436687469482,
"score": 2.78125,
"token_count": 1630,
"url": "http://www.pqblog.org.uk/2018/04/no-such-thing-as-anti-muslim-racism.html"
} |
Eminent domain is an interesting state of affairs in which someone other than yourself is able, and authorized, to take possession of your land, acreage or property. This power rests with state and national governments, which can even extend that ability to others, including private individuals, corporate businesses and other types of governmental bodies.
This generally happens when the governing body is using the land for the betterment of a public situation, or for bringing better financial status to a particular area. Private property that is used in this way is often used for public facilities, or taken down in order to make way for train lines and automobile throughways.
There are laws that keep the exercise of eminent domain within reasonable limits, ensuring that this use of power serves public betterment or increased public safety. In fact, one famous case in Ohio is the Rookwood Partners v. Joe Horney case, in which Mr. Horney became the one individual, out of 71 others, to keep his rental home when Rookwood Partners was attempting to purchase all houses in the Edwards and Edmondson roads area. By law the owners are to be paid fair market value for their confiscated land, acreage or property. Rookwood Partners attempted to use eminent domain to take his house without his approval, but the Ohio high court ruled that eminent domain could not be exercised on behalf of a private business undertaking. This became a landmark case not just for Cincinnati and Ohio, but for the country.
"date": "2018-07-18T17:52:53",
"dump": "CC-MAIN-2018-30",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590314.29/warc/CC-MAIN-20180718174111-20180718194111-00376.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9704461693763733,
"score": 2.515625,
"token_count": 303,
"url": "http://cincyland.com/eminent-domain/"
} |
Flowers are not the only summer attractions. This intense foliage combination highlights three features of effective plant combinations: form, texture, and color.
The broad foliage of the ‘Hadspen Blue’ hosta, with its gentle sway and its neat pattern of evenly spaced veins, makes a bold contrast with the long and slender, narrowly pointed leaves of ‘Allgold’ Japanese forest grass, also called hakone grass. Creating a contrast of shapes in this way is often very effective.
The two plants also differ in texture. The hosta foliage is noticeably thick and rather heavy, with a slightly waxy look; the grass is slender and more delicate in appearance, and has a light shimmer to its surface.
The poise of the two plants is also distinct. The hosta leaves are rigid in their stance, held on sturdy though unobtrusive stems, while the grass flickers in the slightest breeze. The heavy-duty foliage of ‘Hadspen Blue’ is, by the way, also less susceptible to slug damage than that of most hostas.
Finally, the colors. This is a good example of muted contrast. Petunias and marigolds can also be blue and yellow, but this foliage pairing is entirely without their brashness. ‘Hadspen Blue’ is one of the bluest of all hostas, yet there is just a hint of a yellowish green tint in its foliage to make a color connection with the grass while at the same time maintaining its contrast.
And in the corner are the forget-me-nots, their sharp blue flowers winding down as the hosta and hakone grass spread into their space to mask the dying stems. Forget-me-nots are biennials and will self-sow to bloom again in more or less the same place next year. If seedlings don’t spring up in quite the right place, you can always move them.
Photos: © Gardenphotos.com | <urn:uuid:4ba46ca8-23c3-4bd0-96a3-17e4d350e849> | {
"date": "2014-12-21T02:35:42",
"dump": "CC-MAIN-2014-52",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802770616.6/warc/CC-MAIN-20141217075250-00019-ip-10-231-17-201.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9566906690597534,
"score": 2.515625,
"token_count": 411,
"url": "http://www.organicgardening.com/learn-and-grow/3-plants-for-early-summer-color"
} |
Thanks to a mild winter, the allergy season has already arrived and it's hitting hard. If you suffer from allergies, you might need to watch what you eat, because certain foods could make your allergies worse.
Identifying the allergens that affect you is the first step to treatment. Doctors test allergens such as grasses, trees, molds, weeds, and pets.
In addition, some raw produce can trigger the same allergies.
If you find that produce makes your allergies worse, you don't have to stop eating. Just cook the food.
Many people opt for allergy shots to treat their symptoms. Doctors give patients very small doses of the trigger allergens to build up tolerance over time. Shots are administered once a week for three to six months.
For a short-term solution, neti pots can also clear up some discomfort. | <urn:uuid:c21c8a03-fd3f-4cce-8820-d5105948f137> | {
"date": "2014-04-16T04:32:08",
"dump": "CC-MAIN-2014-15",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00011-ip-10-147-4-33.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9587726593017578,
"score": 2.609375,
"token_count": 175,
"url": "http://www.king5.com/health/jean/Raw-vegetables-spring-allergies-200309281.html"
} |
Each season has its own particular trouble, and this year it is yellowing leaves.
The fuchsia is a deciduous shrub so will naturally shed some yellow bottom leaves, but this must not be confused with leaf drop. This could be caused by an attack of aphids, thrips or red spider, but here the pests or marks can be readily seen on the leaves.
If no marks are to be seen, the trouble could have one of many causes. All the species and triphylla hybrids are notorious for shedding their bottom leaves, mainly due to big changes in temperature or, most likely this year, to incorrect watering, particularly heavy watering after excessive drying out. With our normal varieties the yellowing of leaves is most likely caused by either underwatering or overwatering. Another cause is where you have to use very hard tap water, for fuchsias resent too much lime. A very common but rarely identified cause of excessive yellowing is sun scorch, which results from spraying or hosing the foliage during the early summer weeks without adequate shading, then letting the sun's rays dry off the moisture. Established plants in their second year or older, which have not been repotted with fresh compost, very often show foliage yellowing due to a lack of magnesium.
There is usually a large reserve of this essential trace element in our potting composts, but it is when we are growing the same plants in the same compost that we exhaust the supply and then magnesium deficiency starts to make itself felt with the lower leaves turning pale yellow.
Magnesium’s important role within the plant is a constituent of chlorophyll, the substance which makes plants green. Chlorophyll is essential as it is responsible for absorbing the sun’s energy and turning it into chemical energy enabling the plant to make food grow.
Excessive magnesium deficiency is not common but when it does happen, it is found first in the greenhouse. Without this trace element the plant cannot make chlorophyll: pale yellow spots and streaks appear on the lower leaves, which eventually drop off. A severe shortage can render plants leafless by mid summer, much the same as a bad attack of red spider.
The remedy is quite simple: a dose of Epsom salts at the rate of 1oz to the gallon, applied either as a foliar spray or watered in two or three times when the first symptoms appear.
Another trace element deficiency which will turn the upper leaves yellow is iron deficiency. This will need an application of Sequestrene Plant Tonic or Sequestrene Granules or Maxicrop with Iron. Another product which produces results is Bio Multi Tonic
Plants bedded out will also experience yellow leaf drop, these are usually the victims of excessive changes in temperature. The plants were not hardened off enough before being planted, and especially where plants have not been repotted with fresh compost before bedding out. | <urn:uuid:94eeb35d-bac8-4bd7-864d-87443699accf> | {
"date": "2014-10-24T18:00:57",
"dump": "CC-MAIN-2014-42",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119646352.2/warc/CC-MAIN-20141024030046-00302-ip-10-16-133-185.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9582393169403076,
"score": 3.59375,
"token_count": 599,
"url": "http://www.thebfs.org.uk/Tips/yellowleaves.asp"
} |
Enzymes that prevent the buildup of reactive oxygen molecules in cells. Superoxide dismutase (q.v.) and catalase (q.v.) are the primary enzymes involved in this process. Superoxide dismutase converts the superoxide radical (O2−) to H2O2 and catalase breaks down H2O2 into water and O2. Transgenic Drosophila containing additional copies of the genes encoding superoxide dismutase and catalase have increased longevity. See Chronology, 1994, Orr and Sohol; free radical theory of aging.
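The two enzymatic steps described above can be sketched as standard balanced equations (a sketch assuming the LaTeX amsmath package; the superoxide radical is written here as $\mathrm{O_2^{\cdot-}}$):

```latex
% Superoxide dismutase disproportionates the superoxide radical
% into hydrogen peroxide and molecular oxygen:
\[
2\,\mathrm{O_2^{\cdot-}} + 2\,\mathrm{H^+}
\;\xrightarrow{\text{superoxide dismutase}}\;
\mathrm{H_2O_2} + \mathrm{O_2}
\]
% Catalase then decomposes the hydrogen peroxide into water and oxygen:
\[
2\,\mathrm{H_2O_2}
\;\xrightarrow{\text{catalase}}\;
2\,\mathrm{H_2O} + \mathrm{O_2}
\]
```

Read together, the two reactions show why the enzymes act as a pair: the product of the first (hydrogen peroxide, itself a reactive oxygen species) is the substrate of the second.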
Subjects: Genetics and Genomics. | <urn:uuid:d1625379-0c05-44c8-a195-05bd72d1ecf3> | {
"date": "2017-09-24T05:35:40",
"dump": "CC-MAIN-2017-39",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689874.50/warc/CC-MAIN-20170924044206-20170924064206-00696.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8485819697380066,
"score": 2.796875,
"token_count": 132,
"url": "http://oxfordindex.oup.com/view/10.1093/oi/authority.20110803095417243"
} |
Break Free From Passive Aggression
An In Depth Guide to Combating Passive Aggressive Behaviour
- 7027 Words
- Ages 12 and up
No matter where you live, language you speak, political alignment, what religion, if any, you believe in… one thing is for sure…You would have encountered someone with Passive Aggression...and if you haven’t...maybe that person is you.
In this guide we’ll be discussing what Passive Aggressive Behaviour is, its origins, how this disorder affects people’s lives and how best to combat it.
This guide is meant to be of use for anyone who is keen on developing a better understanding of PAB, to help concerned people discover various methods for supporting others, and to serve passive aggressive people as a tool for self-help.
This guide will inform you of what to look for in yourself or others to determine whether or not you/another is just experiencing few of the characteristics of PAB or if action is needed. [more]
Keywords: Aggression, Passive Aggression, self-help | <urn:uuid:e185f86c-12bc-4aef-ba3a-8557c47876bd> | {
"date": "2020-01-25T16:50:13",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251678287.60/warc/CC-MAIN-20200125161753-20200125190753-00536.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9053236246109009,
"score": 2.609375,
"token_count": 229,
"url": "https://www.bookrix.com/search;keywords:passive%20aggressive,searchoption:books.html"
} |
Teens Spend 'Astounding' Nine Hours a Day in Front of Screens: Researchers
BY MAGGIE FOX AND ERIKA EDWARDS
American teenagers spend an "astounding" nine hours a day with digital technology, entertaining themselves with streaming video, listening to music and playing games, researchers reported Tuesday.
And "tweens" aged 8 to 12 are spending six hours with media, the non-profit group Common Sense Media reports.
That is in addition to using digital gadgets for homework, the group reports in its five-year update on kids' use of media.
"The fact that tweens and teens in the U.S. are using an average of six to nine hours' worth of media a day is still astounding," the group says in its report.
"The sheer volume of media and technology that American kids spend time with is absolutely mind-boggling," James Steyer, CEO and founder of the group, told NBC News.
"It shows you that kids spend more time with media and technology than they do with their parents, time in school, or any other thing. They are literally living in a 24/7 media and technology world."
The group surveyed 2,658 8- to 18-year-olds in February and March for the report, which it says represents children across the nation.
"On any given day, American teenagers (13- to 18-year-olds) average about nine hours of entertainment media use, excluding time spent at school or for homework.
Tweens (8- to 12-year-olds) use an average of about six hours' worth of entertainment media daily," the report reads.
Most of this involves screen time - 4.5 hours for tweens and nearly seven hours for teens.
One worrying finding: kids are trying to multitask when they're doing homework and schoolwork and the evidence is strong that this just doesn't work, Steyer said.
"One of the most interesting findings in this landmark research study is the fact that two thirds of teens think that they can multitask while doing their homework and they're wrong. They simply can't," Steyer said.
They're doing just what too many adults do - switching over to texts and social media while working, and interrupting their thought process, the survey found.
"The evidence from some of my colleagues at Stanford and the Harvard (education) school is clear. You cannot multi-task and do your homework effectively, but two out of three American teens think that you can," he added. "It gets in the way of your ability to concentrate and to synthesize information well."
But schools are encouraging kids to use computers and perhaps enabling this counterproductive behavior, Steyer said.
"I think that's really the bottom line message is that while technology used wisely, can be an extraordinary learning tool and basic part of our kids' education, we have to teach kids that they should focus on the learning process and not constantly switch back and forth between Facebook and Instagram and texting and whatever," he said.
Ironically, all this tech could also be hurting the children's ability to communicate, the report concludes.
"There is nothing better than face-to-face communication for understanding emotions and empathy and really being able to communicate with people," Steyer said.
"When you are constantly on your phone or texting people in an anonymous or very impersonal way, it's a very different communication and studies show that that can impact intimacy, empathy, and some of the basic elements of human communication," he added.
"Even the old-fashioned telephone is in many cases a lot better than texting or email because you can feel the emotion in someone's conversation."
Other findings in the report:
• Old-fashioned TV and music rule - 2/3 of tweens watch TV every day and 2/3 of teens listen to music daily
• Most use is passive: 39 percent of a teen's time using computers, tablets and smartphones is passive, such as watching a video; 26 percent is spent on communication; 25 percent is interactive and 3 percent is creating content.
• Boys game, girls still read more. "Teen boys average 56 minutes a day playing video games, compared with only seven minutes for girls. On the other hand, teen girls spend about 40 minutes more a day with social media than boys on average," the report reads.
• There are socioeconomic and ethnic divides. "Lower-income teens average more than eight hours a day with screen media compared with 5 hours and 42 minutes among higher-income teens, a difference of two hours and 25 minutes a day," the report finds.
• Teens and tweens both feel social media is something they have to do to keep up, but it's not their favorite activity.
And the next generation may be set up to spend even more time with devices. A report released Monday found even babies spend time with devices such as tablet computers and smart phones. | <urn:uuid:6125a9c3-c3c7-47b7-a511-f310850ce54c> | {
"date": "2019-09-16T20:44:11",
"dump": "CC-MAIN-2019-39",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572934.73/warc/CC-MAIN-20190916200355-20190916222355-00416.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9687842130661011,
"score": 2.546875,
"token_count": 1076,
"url": "https://www.wvea.org/content/teens-spend-astounding-nine-hours-day-front-screens-researchers"
} |
Recommended Grade Level: 11 - 12
In this course, students will examine the functions of the body's biological systems, including the skeletal, muscular, circulatory, respiratory, digestive, reproductive, nervous, and integumentary systems. In addition to identifying the function of each organ and system, students will study medical terminology and the function of cells and tissues.
*This course is recommended for grades 10-12. Anatomy is a recommended pre-requisite. | <urn:uuid:2fbff01a-8754-4461-9f34-31f9a143627b> | {
"date": "2019-09-20T19:17:36",
"dump": "CC-MAIN-2019-39",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574058.75/warc/CC-MAIN-20190920175834-20190920201834-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8669790029525757,
"score": 2.734375,
"token_count": 94,
"url": "https://redcomet.org/course/physiology/"
} |
The government wants to ban all cartoons and caricatures in the NCERT Political Science textbooks. These cartoons make books interesting for students as they portray public opinion on political shifts. Besides, they offer perspectives which are different from the linear view of political history.
Does the government truly believe that cartoons in textbooks are affecting 'impressionable young minds' more than exposure to the blatant corruption in the Indian system, the progressively deteriorating law and order, and the precarious condition of the economy?
Do they believe that students do not have the right to political opinions?
The government is violating the Fundamental Right to Freedom of Expression by trying to censor these cartoons. They claim that these cartoons hurt the sentiments of various communities. How have these issues not been raised since 2006 (when the books were first published)?
We love the cartoons in our books, and we, as students, want them to stay. We believe that non-academics have no right to decide what is good or bad for students.
- Harnidh Kaur (Campaign Manager)
JOIN OUR FACEBOOK GROUP FOR UPDATES ON THE CAMPAIGN: https://www.facebook.com/groups/230025217108534/
Stop removal of cartoons from the NCERT textbooks
The government wants to ban all cartoons from political science books in NCERT. The recent controversy over a cartoon related to Ambedkar has given the MPs and the government the much needed fodder to ban these, instead of focusing on the much more pressing issues that plague our country. The cartoons are made by renowned cartoonists like Shankar and R.K. Laxman and have a historical significance that goes beyond the whims of the current ruling party.
Cartoons not only make learning easier but also help students to look at different perspectives. Banning these cartoons is akin to denying students the right to form political opinions. As students, we believe these cartoons are the perfect learning aid and we want them to continue in our books.
"date": "2015-10-05T10:15:40",
"dump": "CC-MAIN-2015-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736677221.11/warc/CC-MAIN-20151001215757-00054-ip-10-137-6-227.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9467446804046631,
"score": 2.8125,
"token_count": 405,
"url": "https://www.change.org/p/mr-kapil-sibal-stop-removal-of-cartoons-from-ncert-textbooks"
} |
Trails to the Past
My name is Marie Miller, and I am your Uinta County, Wyoming administrator. If you have any obituaries, news clippings, death, marriage, or birth records, or any other information, feel free to email me with your contributions.
Uinta County was organized in 1869.
The Territory of Wyoming was organized by Act of Congress in the year 1868, with the boundaries of the present state. As originally divided, its four counties, extending from Montana on the north to the Colorado and Utah lines on the south, were Laramie, Albany, Carbon and Carter. In its derivation Carter County stood alone among not only the counties of Wyoming but of the nation as well, for it was composed of portions of the three great western accessions of the United States. That part northeast of the Shoshone range came from the Louisiana Purchase of 1803. From these mountains to the 42nd parallel is a tract acquired from the Oregon Territory, to which the claim of the United States was definitely established in 1846. The land south of this came to us from the Mexican Cession of 1848. The territories of Utah, organized in 1850; Dakota, in 1861, and Idaho, in 1863, contributed to the formation of Carter County. From the time of the organization of the Territory of Utah in 1850 to the establishment of our territorial government, the southern part was known as Green River County, Utah.
At the meeting of the first territorial legislature in 1869 Carter County was divided into Sweetwater and Uinta Counties. The original Uinta County was about fifty by two hundred and eighty miles in size. Within it lay nearly all of the Yellowstone National Park.
The information on Trails to the Past © Copyright 2019 may be used in personal family history research, with source citation. The pages in entirety may not be duplicated for publication in any fashion without the permission of the owner. Commercial use of any material on this site is not permitted. Please respect the wishes of those who have contributed their time and efforts to make this free site possible.~Thank you! | <urn:uuid:0fce2f63-5ec9-4b10-aee5-e797a7033a93> | {
"date": "2019-12-09T23:05:57",
"dump": "CC-MAIN-2019-51",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540525598.55/warc/CC-MAIN-20191209225803-20191210013803-00096.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9635318517684937,
"score": 2.578125,
"token_count": 426,
"url": "https://sites.rootsweb.com/~wyuinttp/"
} |
Through conservation agreements, indigenous Bolivian communities are expanding their incomes while leaving forests standing.
How two weeks in a Nicaraguan forest caused one would-be primatologist to rethink her career path.
To build a sustainable economy based on protecting Suriname’s valuable forests and rivers, time is running out.
There are now more ring-tailed lemurs in zoos around the world than remain in the wild in Madagascar.
A scientific survey has confirmed high numbers of whales and dolphins in Timor-Leste’s waters.
Here’s why Costa Rica’s success fighting deforestation and poverty can be a useful model for Cambodia.
Norbil Becerra’s career took an unexpected turn after visiting a forest reserve in Peru’s Alto Mayo region.
Get up to speed on the current plight of elephants — as well as signs for hope for their future.
Adequately protecting the world’s largest tropical rainforest has long proved elusive. Here’s why it’s finally achievable.
Our new executive director of wildlife trafficking explains common misconceptions about the illegal wildlife trade.
Does your ability to taste certain flavors give you better defense against sinus infections? Probably, according to a new study from the University of Pennsylvania.
Humans can detect five different flavors—sweet, salty, sour, bitter, and savory. About a quarter of the population can't taste bitter at all, whereas another quarter are hyper-sensitive to even the smallest quantities of this flavor. Everyone else falls somewhere in the middle of the two extremes. But the people who are more adept at detecting bitter have something on the rest of us—better immune defenses.
Researchers found that these tasters' receptors (T2R38s) will set off an alarm in the respiratory system when they spot the slightest hint of something bitter on their radar. The receptors act like security guards: they produce a response to an invading bacterial infection and develop more biofilm or mucus to keep the sickness at bay. So while you might still have a little bit of a runny nose, if you are super sensitive to the taste of, say, coffee or dark chocolate, there's a chance you're less prone to a nasty sinus infection. [University of Pennsylvania via Futurity]
Image credit: Karuka/Shutterstock
What is Temperature?
Temperature is a measure of the average kinetic energy (related to the average speed, V) of the atoms or molecules in a substance.
There are three different temperature scales: Fahrenheit, Celsius, and Kelvin.
Absolute Zero is the lowest temperature possible. It is the temperature at which all molecular motion/energy stops. This comes from the Third Law of Thermodynamics; we will talk about the First Law later.
- The lowest temperatures observed are in the depths of outer space, where temperatures are about 3 degrees above absolute zero.
| Scale | Temperature of absolute zero | Temperature at which water freezes | Temperature at which water boils |
| Fahrenheit | -460°F | 32°F | 212°F |
| Celsius | -273°C | 0°C | 100°C |
| Kelvin | 0 K | 273 K | 373 K |
As you can see, Celsius and Kelvin are on the same scale, just offset by 273 degrees:
C = K - 273
The conversion between Fahrenheit and Celsius is a little more complicated:
C = (F-32)*(5/9) or F = (9/5)C + 32
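The conversion formulas above translate directly into code. A minimal sketch (using the notes' rounded offset of 273; the exact value is 273.15):

```python
def c_to_k(c):
    """Celsius to Kelvin: K = C + 273 (rounded offset, per the notes)."""
    return c + 273

def k_to_c(k):
    """Kelvin to Celsius: C = K - 273."""
    return k - 273

def f_to_c(f):
    """Fahrenheit to Celsius: C = (F - 32) * (5/9)."""
    return (f - 32) * 5 / 9

def c_to_f(c):
    """Celsius to Fahrenheit: F = (9/5) * C + 32."""
    return 9 / 5 * c + 32
```

For example, `c_to_k(0)` gives 273, and `c_to_f(100)` gives 212, matching the freezing and boiling points in the table above.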
What is heat?
Heat is the total internal kinetic energy of the atoms and molecules that make up a substance.
Since heat is a form of energy, it is measured in Joules.
- 1 Joule = 1 N·m = 1 kg·m²/s²
- 1 calorie is the heat energy needed to raise the temperature of 1 gram of water by 1 degree Celsius. 1 calorie = 4.186 Joules
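The calorie definition above implies the standard heat-capacity relation Q = m·c·ΔT. That relation isn't written out in the notes, but it follows directly from the definition; a minimal sketch:

```python
SPECIFIC_HEAT_WATER = 4.186  # J per gram per degree Celsius (1 calorie = 4.186 J)

def heat_required(mass_g, delta_t_c, specific_heat=SPECIFIC_HEAT_WATER):
    """Heat energy Q = m * c * delta-T, returned in Joules."""
    return mass_g * specific_heat * delta_t_c

# Raising 1 g of water by 1 degree Celsius takes one calorie (4.186 J),
# and two liters (2000 g) of water holds twice the heat of one liter
# for the same temperature change — the point made in the next sentence.
```

By the definition, `heat_required(1, 1)` returns 4.186 J.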
Two liters of boiling water has more heat (energy) than one liter of boiling water.
Heat will not flow between two objects of the same temperature.
Heat is really energy in the process of being transferred from one object to another because of
the temperature difference between them.
If your doctor has diagnosed you with Type 2 diabetes, then she has probably already told you about the importance of adding exercise to your treatment plan. Physical activity can help you improve your blood sugar control, lose weight, and reduce your risk of heart disease, peripheral artery disease and nerve problems that are often associated with diabetes. In many cases, the right combination of diet and exercise can even help eliminate the need for medication for people with Type 2 diabetes.
But before you get started, you need to understand how exercise influences blood glucose regulation, and how to avoid potential problems, minimize risks, and recognize when you may need to get additional information or support from your health care provider. *The general information in this article is not a substitute for talking to your health care provider before you begin an exercise program, or if you experience any problems in connection with your exercise.
How Exercise Benefits People with Type 2 Diabetes
In addition to boosting your energy levels, mood, and capacity to burn calories for weight loss, regular exercise can lead to the following benefits:
Improved blood sugar control by enhancing insulin sensitivity. Exercising on a regular basis makes muscles use insulin better. When muscles are able to use insulin better, they are able to pull more glucose from the bloodstream to use for energy. The more vigorously you exercise, the more glucose you’ll use, and the longer the positive effects on your blood glucose levels will last.
Increased insulin sensitivity. Type-2 diabetics who exercise regularly need less insulin to move glucose from the bloodstream and into the cells that need it.
Reduced need for medication. Combined with a healthy eating plan, regular exercise can reduce—or even eliminate—the need for glucose-lowering medication in some people.
Reduced cardiovascular risks. Diabetes has negative effects on heart health, increasing the risk of heart attack, stroke, and other cardiovascular diseases. Exercise reduces these risks by increasing HDL (good) cholesterol, lowering LDL (bad) cholesterol, and reducing triglycerides in the blood stream. Physical activity also improves blood flow, increases your heart’s pumping power, and reduces blood pressure.
The Best Exercises for People with Type 2 Diabetes
Always discuss your exercise plan with your doctor before starting, especially if you’re taking medication or experiencing diabetes-related medical complications (discussed above and below).
Experts generally recommend that people with diabetes engage in moderate aerobic (cardio) exercise that lasts at least 30 minutes, on four or more days of the week.
In addition, moderate strength training (except as noted below) and flexibility exercises are also highly beneficial. These exercises will help you better use your muscles without soreness and decrease your risk of injury.
Always warm up for at least five minutes before you exercise, and cool down for at least five minutes afterwards before you stop moving.
If it’s been a while since you’ve done much physical activity, and 30 minutes at a time is too much right off the bat, you can start with 10 minutes (or even less) and gradually increase your workout duration as you become more fit.
Moderately-intense cardio should elevate your heart rate to a level that is challenging, but not so difficult that you can’t do it for 30 minutes.
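As a rough illustration only — the article gives no formula, and this is not medical advice — a common rule of thumb estimates maximum heart rate as 220 minus age, with a moderate-intensity zone at about 50–70% of that maximum:

```python
def moderate_heart_rate_zone(age, low=0.50, high=0.70):
    """Estimate a moderate-intensity heart-rate zone in beats per minute.

    Uses the common 220-minus-age rule of thumb; both the age formula and
    the 50-70% band are population-level approximations, not medical advice.
    """
    max_hr = 220 - age
    return (round(max_hr * low), round(max_hr * high))
```

For a 50-year-old this sketch suggests a zone of roughly 85–119 beats per minute; your own target should come from your health care provider.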
Examples of moderate intensity exercise include brisk walking, bicycling, dancing, swimming, climbing stairs, cross-country hiking, aerobics classes, cardio machines such as the elliptical, skating, tennis, and other sports.
If you pick activities that you enjoy, you'll be more likely to stick with your exercise plan.
Being active every day is better for you than doing more exercise on fewer days of the week, and scheduling your exercise at the same time of day can help with blood glucose control.
Women's Health Care
Tailored to you
Caring For Your Baby
A new baby brings joy but also challenges to daily life. We are here to help and make sure you feel confident caring for your baby when you leave the hospital as well as in the weeks, months and years to follow.
Choosing how to feed your baby has life-long effects for your baby and for you. What you have seen and learned about infant feeding from your family, friends, and teachers is likely to influence your attitude and perceptions. Whether you definitely plan to breastfeed or you are still unsure, consider the fact that your milk is the best milk for your baby. It is the ideal first food for your baby’s first several months.
- Breastfeeding. Nature designed human milk especially for human babies. It has several advantages over any substitute ever developed. Your milk has just the right balance of nutrients, and it has them in a form most easily used by the human baby’s immature body systems. Because it was made for your human baby, your milk is also the most gentle on your baby’s systems.
- Bottle-feeding. If you decide not to breastfeed, or are unable to breastfeed, commercial iron-fortified formulas can provide adequate nutrition for your infant. Infant formulas have enough protein, calories, fat, vitamins, and minerals for growth. However, formula doesn’t have the immune factors that are in breast milk. The immune factors in breast milk can help prevent infections.
Helpful hints for feeding your baby
These are some helpful hints for feeding your baby:
- Breast milk is best for your baby and is beneficial even if you only breastfeed for a short amount of time, or part-time.
- Offer cow’s milk-based formula with iron as first choice of formula, if you do not breastfeed.
- Keep your baby on breast milk or baby formula until he or she is 1-year-old.
- Start solid foods when your baby can hold up his or her head, sit up with support, and no longer has a tongue-thrust reflex (4 to 6 months).
- When starting solids, start with rice cereal mixed with breast milk or formula on a spoon. Do not give solids in the bottle or with an infant feeder.
- Once your baby is tolerating cereal, offer vegetables, then add fruits, and then meats.
- Ask your child’s healthcare provider about the best way to add new foods to your baby’s diet.
- Progress in texture of foods so that your baby is eating table foods by his or her first birthday.
- Do not give honey or foods that can be easily choked on (like hot dogs, peanuts, grapes, raisins, or popcorn) to your child during his or her first year of life.
- Unless your child is known to have or has severe allergies (for instance, breaking out in hives, vomiting, or having trouble breathing), recent reports and studies have shown that introducing whole eggs and peanut butter at a young age — even at 4 to 6 months — reduces the chance of your child developing allergies to these foods. Talk to your child’s healthcare provider about whether these foods are appropriate for your child.
Mothers Milk Bank
The oldest operating human milk bank in the United States, Christiana Care’s Mothers’ Milk Bank stores, tests and distributes donated mothers’ milk to meet the specific needs of infants for whom human milk is prescribed by physicians.
The Milk Bank also stores and dispenses milk that mothers collect for their own hospitalized newborns. Mothers’ milk is often a gift of life. It is a gift that is generously given by nursing mothers who are willing to share breast milk for which their own infants have no need.
Many babies who receive milk from the Mothers’ Milk Bank would be unable to thrive without it. Babies need donor milk because of:
- Allergies and formula intolerances.
- Failure to thrive.
- Immunological deficiencies.
- Postoperative nutrition.
- Inborn errors of metabolism.
At Christiana Care, neonatologists—doctors who are specially trained to care for premature and sick infants—encourage the use of mothers’ milk. Mothers’ milk is the ideal nourishment for a newborn baby because it is easier than formula for the baby to digest, reducing the risk of stomach and intestinal complications. This is most important for any critically ill infant.
Scientists refer to the year 1905 as Albert Einstein’s “annus mirabilis”—his year of miracles. While working as a patent clerk, Einstein spent his free time debating physics and working on theories that would end up altering the way we think of the world. All within a few months, he completed a series of papers, the least of which included his theory of special relativity and the renowned equation E=mc². Yet among these better-known contributions was also his most revolutionary contribution. Over a hundred years ago, Einstein submitted a paper that directly challenged the orthodoxy of physics. The paper described his radical insight into the nature of light as a particle.
In 1905, all physicists explained light in the same way. Whether the flame of a candle or the glow of the sun, light was known to be a wave. It was a time-honored, unquestionable fact. For over a century, scientists had grown in their certainty of this, citing experiments that made certain the wave-nature of light, while overlooking some of its stranger behaviors. For example, when light strikes certain metals, an electron is lost in the process; but if light were only an electromagnetic wave, this would be impossible. Albert Einstein would not overlook these peculiarities, proposing that light was not only a wave, but also consisted of localized particles.
Einstein knew that his theory was radical, even mentioning to friends that the subject matter of his March paper was “very revolutionary.” Yet perhaps the most helpful aspect of his theory was the unassuming attitude with which he presented his far-reaching thoughts. He seemed to recognize that there was an unfathomable quality within the dual nature of light, and that attempting to understand light at all was a lofty endeavor. “What I see in nature,” he once noted, “is a magnificent structure that we can comprehend only very imperfectly, and that must fill a thinking person with a feeling of humility.”
Science has of course had many advances since Einstein, though with these advances we seem to have misplaced our acceptance of the unfathomable and respect for mystery. Anything unknown often seems to be seen as a problem to be solved with just a matter of time until it is understood and explainable. And yet, most of us still experience moments of awe where we are suddenly comfortable again with mystery, or awed even that we should discover this thing in the first place. It seems obvious at these moments that the mind cannot be held in our explanations of it, if for no other reason than that it recognizes in awe and beauty that there is more to see and know.
One of the things about Christianity that I admire most is its comfort with mystery even in knowing. “O the depth of the riches and wisdom and knowledge of God! How unsearchable are his judgments and how inscrutable his ways! ‘For who has known the mind of the Lord? Or who has been his counselor?'”(1)
The Christian story is about a God who goes inexplicably out of the way to know and to be known, to offer us a name and to call us by name, to show the world a triune God who is worth knowing and loving. Jesus came near so that God would be fathomable. And yet how unfathomable is a God who comes near? There is mystery to life that is unplumbed by our own minds, even as it is held experientially in our moments and minds. Why do we have these minds? Why this instinct to search and know? How is it that we should know God by name, or know the voice of the Son, or the comfort of the Spirit? And how shall we respond to the kind of God who invites a love of knowing and participating in this love? “This is what the LORD says, he who made the earth, the LORD who formed it and established it—the LORD is his name: ‘Call to me and I will answer you and tell you great and unsearchable things you do not know'”(2)
In 1905, Einstein’s departure from the established beliefs about light so disturbed the scientific community that his particle theory of light was not accepted for two decades. His theory was and remains a revolutionary concept. The idea of light being both a wave and a particle is still a strange mystery to grasp. Even so, it is incredible that we should know light enough to marvel at it. It is altogether unfathomable that the light of the world has come near enough to be known.
Jill Carattini is managing editor of A Slice of Infinity at Ravi Zacharias International Ministries in Atlanta, Georgia.
(1) Romans 11:33-36.
(2) Jeremiah 33:2-3.
Natural communities that are threatened by Canada thistle include non-forested plant communities such as prairies, barrens, savannas, glades, sand dunes, fields and meadows that have been impacted by disturbance. As it establishes itself in an area, Canada thistle crowds out and replaces native plants, changes the structure and species composition of natural plant communities and reduces plant and animal diversity. This highly invasive thistle prevents the coexistence of other plant species through shading, competition for soil resources and possibly through the release of chemical toxins poisonous to other plants. Canada thistle is declared a "noxious weed" throughout the U.S. and has long been recognized as a major agricultural pest, costing tens of millions of dollars in direct crop losses annually and additional millions costs for control. Only recently have the harmful impacts of Canada thistle to native species and natural ecosystems received notable attention. <www.invasive.org>
In 100 Years of Change in the Distribution of Common Indiana Weeds (2002), William and Edith Overlease reported that Canada thistle was noted in Indiana in 1899 (Coulter’s Catalogue of Indiana Plants) “to be found in many parts of the state; more abundant north”. In 1940 (Deam’s Flora) Canada thistle was noted to be “infrequent to frequent in the lake area and is more or less local south of this area”. Canada thistle was reported in Carroll, Cass, Elkhart, Jasper, Kosciusko, LaGrange, Lake, La Porte, Marion, Marshall, Miami, Monroe, Morgan, Montgomery, Newton, Noble, Porter, Putnam, St. Joseph, Starke, Steuben, Wayne and Wells counties.
In 2002, Overlease reported Canada thistle in the following 89 counties in Indiana: Adams, Allen, Bartholomew, Benton, Blackford, Boone, Brown, Carroll, Cass, Clark, Clay, Clinton, Crawford, Daviess, Dearborn, Decatur, DeKalb, Delaware, Dubois, Elkhart, Fayette, Floyd, Fountain, Franklin, Fulton, Gibson, Grant, Greene, Hamilton, Hancock, Hendricks, Henry, Howard, Huntington, Jackson, Jasper, Jay, Jefferson, Jennings, Johnson, Kosciusko, LaGrange, Lake, La Porte, Lawrence, Madison, Marion, Marshall, Martin, Miami, Monroe, Montgomery, Morgan, Newton, Noble, Ohio, Orange, Owen, Parke, Pike, Porter, Posey, Pulaski, Putnam, Randolph, Ripley, Rush, St. Joseph, Scott, Shelby, Spencer, Starke, Steuben, Sullivan, Switzerland, Tippecanoe, Tipton, Union, Vanderburgh, Vermillion, Vigo, Wabash, Warren, Warrick, Washington, Wayne, Wells, White, and Whitely.
Molecular Secrets of Ancient Chinese Herbal Remedy Discovered [& related Alternative Medicine Resources]
For roughly two thousand years, Chinese herbalists have treated Malaria using a root extract, commonly known as Chang Shan, from a type of hydrangea that grows in Tibet and Nepal. More recent studies suggest that halofuginone, a compound derived from this extract’s bioactive ingredient, could be used to treat many autoimmune disorders as well. Now, researchers from the Harvard School of Dental Medicine have discovered the molecular secrets behind this herbal extract’s power.
It turns out that halofuginone (HF) triggers a stress-response pathway that blocks the development of a harmful class of immune cells, called Th17 cells, which have been implicated in many autoimmune disorders.
“HF prevents the autoimmune response without dampening immunity altogether,” said Malcolm Whitman, a professor of developmental biology at Harvard School of Dental Medicine and senior author on the new study. “This compound could inspire novel therapeutic approaches to a variety of autoimmune disorders.”
“This study is an exciting example of how solving the molecular mechanism of traditional herbal medicine can lead both to new insights into physiological regulation and to novel approaches to the treatment of disease,” said Tracy Keller, an instructor in Whitman’s lab and the first author on the paper….
Related General Resources for Complementary/Alternative/Integrative Medicine
- MEDLINE plus: Alternative Medicine Trusted health information links from the US National Institutes of Health (NIH). Includes basic information, news, organizations, specific conditions, multimedia, financial issues, and more
- Bandolier: Evidenced Based Thinking about Healthcare - Alternative Medicine
The site brings together the best evidence available about complementary and alternative therapies for consumers and professionals. It contains stories, systematic reviews and meta-analyses of complementary and alternative therapies with abstracts.
- National Center for Complementary and Alternative Medicine
NCCAM is dedicated to exploring complementary and alternative healing practices in the context of rigorous science, training complementary and alternative medicine (CAM) researchers, and disseminating authoritative information to the public and professionals.
- New York Online Access to Health (NOAH)
NOAH offers this selection of complementary and alternative therapies without endorsement.
- Office of Cancer Complementary Alternative Medicine
The NIH, National Cancer Institute (NCI) Office of Cancer Complementary and Alternative Medicine (OCCAM) was established in October 1998 to coordinate and enhance the activities of the National Cancer Institute (NCI) in the arena of complementary and alternative medicine (CAM).
- American Pain Foundation Provides New Tools and Resources on Safe Use of Complementary and Alternative Medicine (CAM) More than 83 Million Americans Use CAM Therapy (prweb.com)
- Complementary and alternative medicine need more randomized trials (kevinmd.com)
- Dried licorice root fights the bacteria that cause tooth decay and gum disease (with related alternative medicine links) (jflahiff.wordpress.com)
- Upward Trend in Alternative and Complementary Medicine Use Revealed in Nation Center for Health Statistics Report Impacts Puma Method Travel Sickness Prevention Sales (prweb.com)
- Complementary and Alternative Medicine Treatments in Psychiatry (beyondmeds.com)
- Why Write About Alternative Medicine? Part One: The Media (jdc325.wordpress.com)
- Second Alternative Medicine Telesummit “Wellness Revolution 2″ Launches November 8 (prweb.com)
- Integrative Medicine to Treat Eating Disorders (psychcentral.com)
- Scientists discover molecular secrets of 2,000-year-old Chinese herbal remedy (physorg.com)
- Vote on alternative medicine falls victim to dark arts of the internet (smh.com.au)
- Global Traditional Medicine Market to Reach US$114 Billion by 2015, According to a New Report Published by Global Industry Analysts, Inc. (prweb.com)
- Molecular secrets of ancient Chinese herbal remedy discovered (sciencedaily.com)
- Antipodean CAM (sciencebasedmedicine.org)
- Lobby Group Formed to Remove Alternative Medicine, Chiropractic Courses from Universities (truthsupport.wordpress.com)
- Expert Launches Natural Supplement Website and Alternative Medicine Blog (prweb.com)
- Herbal Remedies (prophet666.com)
- Chinese Herbal Medicine (acurelief.wordpress.com)
- Hangovers Cured For Good? Ancient Chinese Herbal Remedy Yields Amazing Results – Huffington Post (huffingtonpost.com)
Off to Class
From 9 to 12 | 64 pages
When North American kids picture a school, odds are they see rows of desks, stacks of textbooks, and linoleum hallways. They probably don’t picture caves, boats, or train platforms — but there are schools in caves, and on boats and on train platforms. There are green schools, mobile schools, and even treehouse schools. There’s a whole world of unusual schools out there!
But the most amazing thing about these schools isn’t their location or what they look like. It’s that they provide a place for students who face some of the toughest environmental and cultural challenges, and live some of the most unique lifestyles, to learn. Education is not readily available for kids everywhere, and many communities are strapped for the resources that would make it easier for kids to go to school. In short, it’s not always easy getting kids off to class — but people around the world are finding creative ways to do it.
In Off to Class, readers will travel to India, Burkina Faso, and Brazil; to Russia, China, Uganda, and a dozen other countries, to visit some of these incredible schools, and, through personal interviews conducted by author Susan Hughes, meet the students who attend them too. And their stories aren’t just inspiring; they’ll also get you to think about school and the world in a whole new way.
Teaching is a profession where we can collectively agree to a vision and a set of practices that we will live by. It is also a profession where we can ‘do our own thing’ within the walls of our classrooms. As a profession we are great at giving the appearance of change whilst maintaining the status quo of established routines and norms. This is frustrating for leaders who are implementing change initiatives centred upon solid evidence of effective practice.
So how do we open our practice so that we attain a sustainable reality that meets the needs of the students we teach? By now it should be no surprise that we have a deep belief in Assessment for Learning practices. These practices are firmly entrenched in the use of quality data and analysis of this data for next learning steps. In my last post I addressed the idea of Teaching as Inquiry as a means of investigating evidence-based strategies in the pursuit of student achievement. For us, the answer to sustainable practice can be found in peer coaching and reflective journals.
We have a mental model that if it is good enough for students then it is good enough for teachers. If we believe that students need time to reflect and gather evidence of their learning (development) then adults need to do the same. Teachers identify their priority learners and then foreground them in their online professional journals. Next teaching steps are planned and then information gathered about how these steps have helped move the priority learners closer to their goals. This then results in an iterative inquiry based upon data.
Peer coaching and observation is a crucial component in sustaining any innovation or shift in teaching practice. Once teachers have identified next teaching or learning steps they need feedback. This comes in the form of a coach, a trusted colleague, coming in to observe the teaching. The key difference here is that the teacher seeking feedback is asking for feedback in a particular area of our assessment for learning teacher matrices. They want information so that they can reflect about what they need to next.
After the observation the coach and teacher dialogue about the data collected. It is important to note that the coach is not there to fix the teacher being observed. The coach uses facilitative questioning techniques to help the teacher come to their own insights about where next learning steps may be. These insights are recorded in professional journals and the process begins again. The data from the observations forms part of the picture and is collated in the online journal along with reflections, ideas and thoughts about next steps. These journals can be shared with colleagues so they can contribute.
This expectation that we all give and receive feedback about our teaching and learning practices ensures that there is a collective responsibility towards sustaining and improving our assessment for learning pedagogy. This shared responsibility for priority learners and their achievement ensures that we all hold ourselves to account and are always pushing ourselves to learn and teach more effectively.
By Dan Sadovsky, C.E.O K-Mars Optical
This article follows the evolution of multi-focal lenses through the latest innovations in progressive lens technology. Multi-focal technology began much earlier than most imagine, yet gained momentum only in the last two decades, for socio-economic reasons. What is the state-of-the-art progressive lens technology today? What’s next, and what is the future of this market?
The article argues that the future of eyeglass lenses is in advanced Free Form technologies combining sophisticated software algorithms and robotics, bringing complete lens design and manufacturing to the optical laboratory level. Single vision semi-finished lens blanks turn into individually unique lenses tailored to the specific patient’s lifestyle. This future is already here, with commercially available technology.
An Early Beginning
Benjamin Franklin is credited with inventing the first pair of bifocals in 1784. According to the story, “He was getting old and was having trouble seeing both up-close and at a distance. Getting tired of switching between two types of glasses, he devised a way to have both types of lenses fit into the frame. The distance lens was placed at the top and the up-close lens was placed at the bottom.”
The early Bifocals
In the eye care industry, use of multi-focal correction lenses increased from 40% of total correction lenses dispensed in the 1980s to 53% of the total dispensed in the USA in 2008 (Jobson Publication).
Bifocal lenses have two powers only: one for seeing at distance and the other for seeing up close or reading. Objects in between (such as computer screen or items on the grocery store shelf) often remain indistinct with bifocals.
Attempts were made to compensate for this shortcoming with tri-focal lenses, which offer the wearer an additional field of corrected vision. Yet bifocal and tri-focal lenses carry an inherent disadvantage and are uncomfortable due to the sharp transition between focal points. Moreover, in some cases their use can lead to quite a few unpleasant outcomes. One hazard, known as Computer Vision Syndrome (CVS), can result from using a computer for extended periods. Bifocal wearers have to sit closer to the screen and tilt their heads back to see through the bottom part of the lens. This unnatural posture can lead to muscle strain, neck pain and other symptoms of CVS.
Natural Vision Correction
The idea of natural vision correction avoiding an “image jump” – the progressive addition lens (PAL) – was first patented in 1907 by Owen Aves (British Patent 15,735), but due to its impracticality and complexity this design was never commercialized.
The Varilux lens was the first modern design of PAL. It was developed by Bernard Maitenaz, patented in 1953, and introduced by the Société des Lunetiers (that later became part of Essilor) in 1959.
The illustration of a progressive lens below shows the typical
configuration of distance, intermediate and near zones
Early progressive lenses offered relatively crude designs, creating regions of aberration away from the optic axis and yielding poor visual resolution (blur). Combining a collection of powers in a single surface resulted in geometric distortions to the visual field, increasing with the addition power. The early progressive-enthusiastic users accepted these disadvantages for a very simple reason: progressive lenses gave them a “younger” look. Since bifocal and related designs are associated with ‘old age’, the segment-free surface of a progressive lens appears more ‘youthful’, resembling the single vision lenses, free of segments or lines, that are associated with younger wearers.
The motto in those days was “The best progressive design”. Market leaders attempted to develop pre-made designs to fit a wide range of cases (e.g. Varilux Comfort). Yet this “one size fits all” concept did not satisfy the growing market of aging young individuals.
Mid 1990s through 2006 – Quest for the ideal progressive lens
It is very much thanks to the maturing Baby Boomer generation that the decade following the mid 90s was a constant quest for the ideal progressive lens. The mid 1990s were the beginning of economic prosperity, which lasted through the first decade of the new millennium. Unlike their ancestors, Baby Boomers reaching their 40s and 50s had the financial stability to demand eyewear suited to a contemporary lifestyle, supporting their office, leisure and sports activities. And, yes: the eyewear must also be fashionable and “young” looking.
Industry giants were challenged by increasing consumer demand combining comfort with fashion. Smaller frame styles became fashionable and three piece rimless mounts and wrap around sport glasses complicated the task even more.
Introducing the Free Form Lenses
It was a small, then-unknown Israeli company, Shamir, that first introduced the next revolutionary concept, turning away from the “ideal design” concept to the extreme opposite: an individually tailored lens for each patient, optimized for his or her condition, problem and lifestyle. The revolution materialized with a commercial implementation released to the market in 2001. It so happens that these pioneers came from outside the eye care field, combining mathematicians, software architects and robotics engineers. Thinking “out-of-the-box”, they succeeded in implementing cost-effective desktop-manufacturing technology enabling optical labs to produce high-quality individually tailored lenses from plain single vision blanks.
Threatened by the new technology revolutionizing the market and consumers’ expectations, industry leaders were each forced to come up with a matching commercial response. This has indeed yielded a variety of branded “Free Form” implementations. Unfortunately, they all differ and lag behind the original Shamir technology for the mere fact that they bind the client laboratory to pre-manufactured blanks of that brand. A little disappointing, isn’t it? After all, it seems that during those years more efforts were placed into marketing campaigns rather than in research aimed to real freedom…
It is to be expected that most of the eye care industry leaders weren’t too happy with the new emerging technology. Its immediate commercial implication is freeing the laboratories from huge inventories of pre-made semi-finished lenses, and thus from dependence on the industry leaders. Realizing this explains their resulting solutions, which “overcome” this problem by tying the user labs to their proprietary blanks. It is understandable. They are trying to hold the fort for as long as they can…
So what do we have so far?
- We have a growing aging “young” consumer population demanding eyewear supporting combinations of activities and lifestyles unparalleled before.
- We have a variety of desktop production solutions, which accommodate the consumers’ demand, yet are mostly driven by the giant lens manufacturers, and limited to pre-manufactured proprietary blanks.
- We have witnessed the feasibility and availability of more freedom achieved with technology such as Shamir’s, which goes one step further and frees the lab from dependency on specific blanks and manufacturers.
Looking at similar scenarios in other industries, it is inevitable to realize that technology-based commercial processes can be delayed but not stopped.
Freeing the lab from inventory cost will eventually reduce the production cost, and thus premium individual progressives are going to become more affordable to the consumer. Similar processes occurred with the prices of computers, GPS devices, cellphones, iPods and other luxury items.
The future is at the doorstep, and it does seem that Shamir Free Form technology is really the pioneer, pointing in the direction that the eye care industry will be moving in years to come.
Often, upper and lower respiratory infections are mistaken to be the same. However, they differ based on the location of the infection. If the infection occurs in the upper respiratory tract, it is known as an upper RTI, and the organs involved are the nose, mouth, larynx (voice box), trachea (windpipe) and sinuses. In a lower respiratory tract infection, the lungs and the bronchial tubes are infected.
Although upper respiratory tract infections are always caused by contagious viruses and bacteria, lower respiratory tract infections can also be caused by physical substances in addition to viruses and bacteria. Pneumonia is one such type of lower respiratory tract infection. Though it is classified as a lower respiratory tract infection, pneumonia can actually refer to any type of inflammation of the lungs. When the alveoli, or air sacs, of the lungs fill with fluid and become inflamed, for any reason whatsoever, the condition is referred to as pneumonia.
Upper Respiratory Tract Infection Vs. Pneumonia: Differences Based on Causes
As already mentioned, upper respiratory tract infections are caused by viruses and bacteria and are highly contagious. They are commonly referred to as “colds”, and since they are contagious, you are said to ‘catch’ a cold. You get them from an already infected person by coming in close contact with them. Even if you do not have direct contact with an infected person, you may still get infected indirectly, as these viruses and bacteria are highly communicable and easily transferred.
Pneumonia, on the other hand, though also caused by viruses and bacteria, is not contagious. The bacterium that most commonly causes pneumonia is Streptococcus pneumoniae; however, other gram-positive and gram-negative bacteria can also cause it. Along with these, pneumonia can also be caused by inhaling toxic fumes or consuming irritants through foods and drinks. Upper respiratory tract infections never occur through exposure to toxic fumes or irritants.
Upper Respiratory Tract Infection Vs. Pneumonia: Differences Based on Symptoms
Usually, the initial signs and symptoms of lower respiratory tract infections are similar to those of upper respiratory tract infections. Therefore, if you have pneumonia, bronchitis or another lower respiratory tract infection, it might be difficult to determine the exact cause of the condition. In most cases, however, the signs and symptoms of a common cold, sore throat, sinusitis or tonsillitis are less severe than those of bronchitis and pneumonia.
Symptoms of Upper Respiratory Tract Infection:
- Runny nose is the most common sign of any upper respiratory tract infection or common cold infection
- Congestion in nose and head
- Coughing with mucus production
- Sore throat
- Hoarse voice
- Pain in ear
- Difficulty in breathing due to congestion in the nose.
Symptoms of Pneumonia:
Like any other lower respiratory tract infection, the signs and symptoms of pneumonia are –
- Production of sputum that can be white, yellowish green or yellowish grey or even clear in colour
- Pain in chest
- Discomfort in chest
- Retraction in the chest wall
- Abnormal breathing sound
- Cyanosis or discolouration of the skin
- Abnormal and rapid breathing known as tachypnea
- Difficulty sleeping because of the nagging and continuous coughing.
The signs and symptoms of an upper respiratory tract infection usually resolve within a few days or weeks. However, the signs and symptoms of a lower respiratory tract infection, especially those of pneumonia, last longer, sometimes for months. If they last for more than 3 weeks and you get no relief after coughing or bringing up sputum, you probably have pneumonia. Sometimes the coughs produce sputum and sometimes they do not; at times the sputum may also have blood stains. Because of its location, a lower respiratory tract infection is difficult to access and is hence difficult to treat as well. In fact, if it is not treated in time, it can cause serious harm to your health.
Upper Respiratory Tract Infection Vs. Pneumonia: Differences Based on Treatment
While upper respiratory tract infections can be treated with antibiotics, lower respiratory tract infections are difficult to treat with antibiotics. The doctor will prescribe other medicines and treatment methods to reduce the inflammation. You may also need to make some serious lifestyle changes in order to get well. So, see a doctor if you find your symptoms similar to those of pneumonia, before it is too late.
- What is Upper Respiratory Tract Infection: Causes, Symptoms
- Diagnosis, Treatment of Upper Respiratory Tract Infection & its Prevention
- Causes & Symptoms of Lower Respiratory Tract Infection
- Can You Get Pneumonia from a Sinus Infection?
- Difference Between Upper Respiratory Tract Infection and Bronchitis
On Friday afternoon, the State Department released a draft of its much-anticipated new analysis of the environmental impact of the proposed Keystone XL pipeline. Although the report makes no firm statement one way or the other about whether the controversial pipeline from Canada to Texas should be approved, some of its conclusions have enviros worried that a greenlight is inevitable.
The administration has spent more than two years considering whether to approve the 1,600-mile pipeline that would carry oil from Canada’s tar sands to refineries in Texas. Because the pipeline crosses an international border, the State Department gets to decide whether it should be built. Climate change activists have been holding rallies and civil disobedience actions outside the White House for the past year and a half in an effort to convince the administration to block the project. Obama delayed a decision on the pipeline in November 2011, asking the State Department to produce more research on the pipeline’s potential environmental impact—the report, a “supplemental environmental impact statement,” or SEIS, that was issued Friday afternoon.
Enviros immediately seized on the new report, arguing against its claim that any spills associated with the pipeline are “expected to be rare and relatively small,” and said it underestimated the project’s contribution to planet-warming greenhouse gas emissions. They also challenged the idea that TransCanada’s pipeline will not make a huge difference in the development of the tar sands, pointing to the industry’s own claims that the pipeline is essential to their plans to expand export of this type of oil.
“If they don’t have [Keystone XL], they won’t be able to expand the tar sands like they’ve been planning to,” said Bill McKibben, the author and activist whose group, 350.org, has organized the pipeline protests. He called the pipeline “the most important issue for the environmental movement in a very long time,” noting that it has brought “huge numbers of Americans into the streets.”
Michael Brune, president of the Sierra Club, noted the timing of the draft’s release. “You know the news is bad when it’s buried at 4 o’clock on a Friday afternoon,” he said on a call with reporters shortly after the release. Enviros have framed the pipeline as a test of Obama’s sincerity on dealing with climate change. Brune acknowledged that the SEIS likely “makes the president’s job more difficult” because it will increase pressure on him to approve the pipeline.
But, Brune added, “this is the president’s decision. He can either lead our country to a clean energy future … or he can approve a pipeline that will bring the dirtiest oil on the planet through the US, and for the next decades we will know that the Keystone XL was approved under Obama at the time that we needed strong leadership on this issue.”
The report is in draft form and will be open for public comment for 45 days. After that, the State Department will issue a final report and, eventually, a final decision on whether the pipeline should be built.
McKibben said the pipeline’s critics will not be deterred by Friday’s draft report. “I don’t think anybody is going to walk away from this fight,” McKibben said. “My guess is this will produce more determination in a lot of people.”
PRINCETON – Since the 2008 financial crisis, most industrial economies have avoided anything like the collapse that occurred during the Great Depression of the 1930’s. But, despite large-scale fiscal and monetary stimulus, they are not experiencing any dramatic economic rebound. Moreover, the pre-crisis trend of rising income and wealth inequality is continuing (in marked contrast to the post-Great Depression period, in which inequality declined). And survey data show a rapid decline in people’s satisfaction and confidence about the future.
The explanation of the post-crisis malaise – and people’s perception of it – lies in the combination of economic uncertainty and the emergence of radically new forms of social interaction. Long-term structural shifts are fundamentally changing the nature of work, and thus of the way that we think of economic exchange.
In the early twentieth century, a large share of even advanced economies’ populations was still employed in agriculture. That proportion subsequently fell sharply, and the same decline could later be seen in industrial employment. Since the late twentieth century, most employment growth has come in services, particularly personal services – a pattern that looks like a reversal of a previous historical trend.
At the beginning of the twentieth century, upper-middle-class households had a substantial staff of cooks, maids, nannies, and cleaners. In the interwar years, these employees largely disappeared from the lives of all but the ultra-rich. The iconoclastic British historian A. J. P. Taylor quipped that laments about the decline of Britain were really generalized reflections of Oxford academics’ view of the “servant problem.”
By the end of the twentieth century, however, many of these old service occupations were reappearing on a large scale, as dual-career households needed additional “help.” The employment of nannies, au pairs, babysitters, and day mothers reflected carefully differentiated approaches to the problem of looking after children.
After child care, there followed hordes of private tutors, test coaches, and university admissions consultants. And, beyond childhood and adolescence, the need for specialized personal support only grew.
Some of the new services would stretch the imagination of previous ages. Dating agencies have developed increasingly complex algorithms to sort out their clients’ romantic lives. Lawyers work out prenuptial contracts, and then the complexities of divorce negotiations. Design consultants choose our interiors and clothing. Personal trainers look after our fitness. Cosmeticians, skin-care specialists, and tattoo artists shape our appearance.
Two of the largest areas of service-employment expansion have been education and health services. And yet this has not been a result of adding more teachers or doctors. Instead, a new division of labor has surrounded the classical providers of education and healing with more and more layers of administration. Doctors need experts to deal with insurance forms, negotiate with other doctors and pharmaceutical providers, and manage legal risks. Educational specialists fill every conceivable logistical and administrative gap, run sports and arts programs, guarantee diversity, and oversee technology transfer to the private sector. Indeed, a rapidly growing army of administrators is overrunning our universities.
None of these new services can easily be standardized, or dealt with at long distances (as can some types of clerical legal and financial work). The caregivers and consultants need to be on location. And that raises a question of control. How can child-care providers be trusted? Cautious parents seek agents to select their employees and technology to monitor them as they work. So, to find out about the reputation of service providers, we need still more service providers: ratings and surveys and agents to tell us about agents.
The new service economy extends market relations to areas of life in which, previously, informal assistance and guidance within family units prevailed. To the extent that employment and income in the new services can be easily recorded, this change implies an increase in measurable economic wealth and output, because unpaid household services are ignored in GDP calculations.
Experts might thus interpret the macroeconomic consequences as largely positive. But the element of personal dependence is a throwback to the preindustrial world.
The zenith of the old service economy was the court of Louis XIV, where specialist courtiers attended to the Sun King’s every need, even the most intimate (there was a Groom of the King’s Close Stool). In that pre-modern world, private life was extraordinarily public, whereas the social movements of the nineteenth and twentieth centuries dramatically expanded the realm of individual privacy and self-definition.
Today’s new service economy is driven by the resulting uncertainty over identity. We need advice on every aspect of life, provided in a complex world by people whom we think to be experts in ever-narrower and more specialized fields. We can easily monitor that advice and subject it to statistical testing: are our children doing better on tests? Are we more fit? Are we dating more people who share our perceived interests?
Paradoxically, the new technological possibilities are also eliminating privacy. We are moving back to the Sun King’s world, in which everything personal is known, rumored, or whispered. But now, with electronic surveillance, personal dependence has never been more extreme, more humiliating, and more depressing.
This might explain some of the public dissatisfaction captured in so many surveys, even when economic conditions are not dire. Subjectively, modern growth feels problematic, and perhaps even immoral.
2014 ICD-9-CM Diagnosis Code 201.9
Hodgkin's disease unspecified type
- There are 9 ICD-9-CM codes below 201.9 that define this diagnosis in greater detail. Do not use this code on a reimbursement claim.
- A cancer of the immune system that is marked by the presence of a type of cell called the reed-sternberg cell. The two major types of hodgkin lymphoma are classical hodgkin lymphoma and nodular lymphocyte-predominant hodgkin lymphoma. Symptoms include the painless enlargement of lymph nodes, spleen, or other immune tissue. Other symptoms include fever, weight loss, fatigue, or night sweats.
- A lymphoma, previously known as hodgkin's disease, characterized by the presence of reed-sternberg cells. There are two distinct subtypes: nodular lymphocyte predominant hodgkin lymphoma and classical hodgkin lymphoma. Hodgkin lymphoma has a bimodal age distribution, and involves primarily lymph nodes. Current therapy for hodgkin lymphoma has resulted in an excellent outcome and cure for the majority of patients.
- A malignant disease characterized by progressive enlargement of the lymph nodes, spleen, and general lymphoid tissue. In the classical variant, giant usually multinucleate hodgkin's and reed-sternberg cells are present; in the nodular lymphocyte predominant variant, lymphocytic and histiocytic cells are seen.
- A malignant disease of the lymphatic system that is characterized by painless enlargement of lymph nodes, the spleen, or other lymphatic tissue. It is sometimes accompanied by symptoms such as fever, weight loss, fatigue, and night sweats.
- An obsolete term referring to hodgkin lymphoma.
- Hodgkin disease is a type of lymphoma. Lymphoma is cancer of lymph tissue found in the lymph nodes, spleen, liver, and bone marrow. The first sign of Hodgkin disease is often an enlarged lymph node. The disease can spread to nearby lymph nodes. Later it may spread to the lungs, liver or bone marrow. The cause is unknown. Hodgkin disease is rare. Symptoms include:
- painless swelling of the lymph nodes in the neck, armpits, or groin
- fever and chills
- night sweats
- weight loss
- loss of appetite
- itchy skin
Doctors can diagnose Hodgkin disease with a biopsy. This involves removing and examining a piece of tissue under a microscope. Treatment varies depending on how far the disease has spread and often includes radiation therapy or chemotherapy. The earlier the disease is diagnosed, the more effective the treatment. In most cases, Hodgkin disease can be cured. (NIH: National Cancer Institute)
- Malignant disease characterized by progressive enlargement of the lymph nodes, spleen, and general lymphoid tissue, and the presence of large, usually multinucleate, cells (reed-sternberg cells) of unknown origin.
WHAT ARE CONTAINERS?
Containers are reusable storage units used by the cargo industry to store, protect and transport raw materials, manufactured goods and other goods between the seaports of different countries. First appearing in the 1950s, they are usually rectangular and primarily made of metal or fibreglass. Their design means they can carry heavy loads, be placed on top of each other in space-saving stacks on ships and in ports, and resist the harsh environment of ocean voyages. The use of containers allows individual items or packages to be grouped into a single larger unit load. This reduces cargo handling, which improves security, allows faster freight transport and reduces damage and losses. The use of standard container sizes simplifies movement, handling and port facilities by allowing common facilities, technology and equipment. It also means that shipping containers can be intermodal (used by more than one mode of transport). This means, for example, that containers offloaded from international vessels at ports like Sydney or Melbourne can be transferred to rail wagons or road trucks and moved to urban or regional areas, and vice versa.
Shipping containers typically come in 20 foot (6.1 metre) and 40 foot (12.2 metre) lengths. The standard measure of containers in international trade is the twenty-foot equivalent unit (TEU), which is the space occupied by a standard 20 foot container. Different types of containers are available depending upon the type of freight to be moved (e.g. refrigerated, liquid, insulated, flat and ventilated). Specialised or customised containers can be used for sensitive, fragile, dangerous or confidential items.
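As a worked illustration of the TEU measure, the sketch below converts a mix of container sizes into TEU. The throughput figures are hypothetical, not drawn from the Ports Australia data:

```python
# Convert a mix of container sizes into twenty-foot equivalent units (TEU).
# A standard 20 foot container counts as 1 TEU; a 40 foot container, which
# occupies the space of two 20 foot containers, counts as 2 TEU.
TEU_FACTORS = {20: 1.0, 40: 2.0}

def total_teu(containers):
    """containers: mapping of container length in feet -> number of units."""
    return sum(TEU_FACTORS[length] * count for length, count in containers.items())

# Hypothetical port throughput: 1,500 twenty-foot and 2,250 forty-foot containers.
throughput = {20: 1500, 40: 2250}
print(total_teu(throughput))  # 1500*1 + 2250*2 = 6000.0 TEU
```

Because a 40 foot container counts as 2 TEU, ports report comparable throughput figures regardless of the mix of container lengths actually handled.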
Statistics in datasets from the Ports Australia website show that for 2009-10 containerised trade accounted for just over 21% of total import tonnage for Australian ports, and just under 4% of export tonnage. The main commodities not transported in shipping containers include raw materials like coal, iron ore and natural gas, although containers can be used in some cases (for example, coal can be transported in a lined container). Australia's crude oil and petroleum product imports are generally transported in merchant ships designed for the bulk transport of oil. Motor vehicles are generally transported in Roll on/Roll off ships, which are designed to allow vehicles to be driven straight onto and off the decks of the ship on their own wheels. This allows the cargo to be efficiently loaded and unloaded at port via ramps built into the vessel.
The efficient movement of containers both through a port and to / from a port is essential to all parties involved in international trade. Exporters, importers and their agents report a range of information about international cargo to Customs and Border Protection. This information, described later in the paper, was assessed for the feasibility of the ABS producing and releasing international container movement statistics.
This page last updated 15 September 2011
Using the Visual C# Development Environment
The Visual C# integrated development environment (IDE) is a collection of development tools exposed through a common user interface. Some of the tools are shared with other Visual Studio languages, and some, such as the C# compiler, are unique to Visual C#. This topic provides links to the most important Visual C# tools.
Provides an overview of many of the features and tools included in Visual Studio for application development.
Describes how to create a project that contains all the source code files, resource files such as icons, references to external files, and configuration data such as compiler settings.
Describes the default keyboard shortcut schemes.
Describes Visual Studio tools that help you modify and manipulate text, code, and markup, insert and configure controls and other objects and namespaces, and add references to external components and resources.
Provides links to topics that describe Visual C#–specific features, such as automatic code generation and IntelliSense for most recently used members.
Provides an overview of using Code Snippets in Visual C# to automatically add common code constructs to your application.
Provides links to procedures about how to use the Find and Replace window, Bookmarks, and the Task List and Error List to locate lines of code.
Explains how to browse hierarchies of classes, class members, and resources.
Describes how to add a configuration file (app.config) to a C# project.
Describes how the IDE enables you to view metadata as source code.
Lists refactoring operations that help you modify your code without changing the behavior of your application.
Explains how to configure debug, release, and special builds of your Visual Studio solution.
Describes how to run the Visual Studio Debugger to resolve logic and semantic errors.
Shows how to add or edit resources for your project, such as strings, images, icons, audio, and files.
Compares different Visual Studio deployment technologies, such as ClickOnce and Windows Installer. | <urn:uuid:a16e4a71-6185-4a87-93f2-2007fab09ff1> | {
"date": "2015-08-03T13:18:20",
"dump": "CC-MAIN-2015-32",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989891.18/warc/CC-MAIN-20150728002309-00331-ip-10-236-191-2.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8483675122261047,
"score": 2.59375,
"token_count": 416,
"url": "https://msdn.microsoft.com/en-US/library/ms173063(v=vs.110).aspx"
} |
Data centers have changed considerably over the last decade. The integration of cloud computing in particular has altered the way they're constructed and set up, and has given rise to the hyperscale data center.
Prior to the cloud, most data centers' networking infrastructures were fairly simple. They consisted of largely separate, less internet-dependent pieces: a dedicated staff, unconnected task-specific management tools, and hardware stored on separate racks.
Now, most applications in a data center work in harmony through cloud and web integration, which has led to hyperscale data centers. But what exactly is a hyperscale data center?
What are Hyperscale Data Centers?
A hyperscale data center is built to scale and adapt to the current needs of any business or industry. It should be seamless and flexible in its networking, storage, and memory capabilities.
Think about it this way: if old-school data centers were built to support physical servers and virtual machines in the hundreds and thousands, respectively, then hyperscale data centers support and work with hundreds of thousands of individual, physical servers and virtual machines. It’s an upgraded, maximum performance structure, operated via high-speed network connections.
Hyperscale data centers are built around:
- Large data centers with large numbers of servers
- Delivering the best software experience through optimized systems for storage and speed
- Minimizing the focus and need for physical hardware (and proper hardware disposal)
- Allowing for easier, more balanced scalability based on need and demand
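To make the scalability point above concrete, here is a toy scaling rule; the thresholds and policy are purely illustrative assumptions, not drawn from any real hyperscale system:

```python
# Hypothetical demand-based scaling rule: grow the server pool when
# average utilization runs hot, shrink it when utilization runs cold.

def target_servers(current, utilization, low=0.3, high=0.7, step=0.25):
    """Return a new server count given average utilization in [0, 1]."""
    if utilization > high:
        # Scale out by at least one machine, or by `step` proportionally.
        return max(current + 1, int(current * (1 + step)))
    if utilization < low and current > 1:
        # Scale in, but never below a single server.
        return max(1, int(current * (1 - step)))
    return current  # utilization is in the comfortable band

# A fleet of 100 servers at 90% utilization grows to 125; at 10% it
# shrinks to 75; in between it stays put.
```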
From edge locations to densely populated areas, the United States currently leads in hyperscale data center operation, hosting almost half of them. Next in line is China, with about 8 percent of the hyperscale data center hosting market, followed by regions throughout Europe, the Middle East, Asia Pacific, Africa, and Latin America, where the remaining data centers are scattered.
Generally, the systems in place in hyperscale data centers are used by businesses that greatly outpace their competition. These businesses make up a majority of the infrastructure services market and are known as the delivery mechanism behind much of the cloud-powered web.
The infrastructure services market includes:
- Platform as a service (PaaS)
- Cloud services (hosted and private)
- Infrastructure as a service (IaaS)
Major companies that use hyperscale data centers include Amazon Web Services, Microsoft Azure, IBM SoftLayer, and Google Cloud Platform, with Amazon's AWS claiming primary dominance. Regardless, with all of these corporations able to scale, more and more businesses will see that it's in their best interests to start shifting their infrastructures completely to the cloud.
Benefits of Hyperscale Data Centers
So, why bother with hyperscale data centers? Is it really worth the trouble? The main benefits to stem from hyperscale data centers are speed and efficiency.
With hyperscale data centers, your business can quickly deploy and extend systems to solve problems with much less difficulty, which means that, in general, higher computing power will be available at a lower cost. There are plenty of advantages on the security side as well, since security options can be programmed directly into the software rather than wired into a system's hardware in the traditional way.
Another big advantage is the flexibility hyperscale computing offers overall. It provides an agile environment that allows your business to quickly scale up or down as needed. When scaling traditional computing resources up or down would otherwise be time-consuming and very expensive, hyperscale structures look more and more appealing.
As businesses continue to grow, evolve, and compete, the ability to access critical data quickly and efficiently helps ensure your business is future-proof. With no end in sight for data consumption, it's important to stay flexible and prepared to scale in the appropriate direction at a moment's notice.
What Hyperscale Data Centers Mean for Your Business
Hyperscale data centers will help your business be more efficient, extensible, and flexible in its computing functionality. But despite its clear connection to cloud computing, hyperscale is more closely embedded in hardware than in software.
The best hyperscale companies focus on delivering customer demands through efficient performance. That's contrary to the limiting nature of previous cloud installations, where server size and availability can often be an issue. Vertical and horizontal scaling of form factors not only helps add machines to your infrastructure, but also extends the power and life of the machines already in action.
That doesn’t mean hyperscale data centers are without their challenges, though.
With so many organizations still resisting cloud-based storage, on-premise databases continue to outrank cloud databases in storage size. Consider, too, that many cloud databases still max out at 16TB of storage, so a growing 4TB database would eventually hit that ceiling rather than scaling up indefinitely.
Not to mention the need for a physical space large enough to house and support such a high number of servers as well as the right team of employees to manage onsite tasks and determine the right KPIs to track the health and security of systems.
Though hyperscale data centers are structured to handle a larger workload at a faster, more efficient rate, that doesn't mean there aren't inherent risks. A single hyperscale data center houses and connects hundreds of thousands of virtual machines that are asked to handle billions of operations a day. That daily volume makes the potential for disasters and security threats, and the recovery time they would require, a much larger gamble in the short term.
Is Hyperscale the Right Choice?
Given the potential challenges of hyperscale data centers, do the rewards really outweigh the risks? Is that much computing power really necessary?
Right now, the workloads of many of today's data-intensive and highly interoperable systems already demand it. And current trajectories suggest that those who don't need it now will need it in the coming years. As stated earlier, Big Data is here to stay, and it will continue to be expensive and difficult for smaller-scale offsite platforms to host.
Hyperscale data centers offer the opportunity to scale up quickly while remaining flexible and more cost-effective in the long term. When customers expect service and results within milliseconds, the advantages of transitioning to a hyperscale infrastructure are clear.
When It’s Time for a Change, Choose the Right ITAD Company
Major advances in data centers and how they’re constructed won’t cease any time soon, and that isn’t a bad thing! Technology continues to advance and develop at an increasingly fast rate, and having the capability to store, send, and connect data more efficiently will only help advance things more. And the added flexibility of being able to scale up or down when needed points more to hyperscale data centers as the future.
Keep an eye on the major IT, tech, and data-providing organizations and be aware of how hyperscale computing and the implementation of hyperscale data centers may affect you and your business. If an upgrade or renovation to your data center requires a full-on data center decommission, be sure to enlist the help of a certified IT asset disposition company.
At Exit Technologies, we offer full IT equipment services ranging from asset recovery, network equipment sales and recycling, data erasure, and full data center decommission services.
Have something to add? Let us know your thoughts in the comments below! | <urn:uuid:62c34dcc-d227-4bee-a37e-45f5d91e8752> | {
"date": "2019-04-26T00:26:47",
"dump": "CC-MAIN-2019-18",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578743307.87/warc/CC-MAIN-20190425233736-20190426015736-00096.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.930715799331665,
"score": 2.78125,
"token_count": 1561,
"url": "https://www.exittechnologies.com/blog/data-center/what-are-hyperscale-data-centers/"
} |
Broken Song is ostensibly about the anthropologist T.G.H. Strehlow and his translation of the spiritual songs of the Aboriginal people of central Australia. By the end of the book, one has some insight into the importance of these songs to the Aborigines and their belief system, but along the way we have an exciting examination of Strehlow's human motivations: growing up trilingual, parental influence, ethnicity, Lutheranism, Australian identity, racism, Aboriginal politics, academic jealousies, the pitfalls of translation, personal passions, adultery, possessiveness, and even his involvement with an unsolved child murder case.
"Sensing belief systems: review of Broken Song: T.G.H. Strehlow and Aboriginal Possession by Barry Hill," Culture Mandala: The Bulletin of the Centre for East-West Cultural and Economic Studies, Vol. 7, Iss. 2, Article 8.
Available at: http://epublications.bond.edu.au/cm/vol7/iss2/8
"date": "2013-05-18T17:48:30",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8735277652740479,
"score": 2.515625,
"token_count": 210,
"url": "http://epublications.bond.edu.au/cm/vol7/iss2/8/"
} |
Stormwater Management: Why It’s Important
by McKenzi Heger | December 10, 2018
An effective stormwater management system is imperative to your safety and the health of the environment. We recently announced a 20-year contract with the City of Annapolis, so the local benefits of stormwater management have been top of mind, especially given the greater frequency of rain we've received and the potential for a winter full of snow.
What is stormwater management?
Stormwater is any precipitation falling from the sky, including rain, sleet, or melting snow, and it can be effectively managed by companies like GreenVest. As more land is developed, the natural patterns and rates of runoff that Mother Nature created begin to change. Construction of buildings, parking lots, and roadways alters the volume and velocity of natural runoff, leaving a significant need to convey our stormwater safely to its final destination (i.e., its designated body of water).
Why is stormwater management so important?
When we build roads, buildings and other structures, the natural infiltration of runoff is interrupted, resulting in increased rates of runoff and localized flooding. Increased levels of impervious surface area add pollution to streams, rivers, and creeks and ultimately larger water bodies like the Chesapeake. Decreased infiltration and increased runoff create the need for stormwater management, which provides significant benefit to people (health, welfare, and safety) and the environment (health, function, and sustainability).
Traditional stormwater management relied on the detention ponds that dot our communities. These ponds detain stormwater runoff, holding it or slowly releasing it over time to the nearest water body. While this may seem like a solid plan for managing stormwater, it has its shortcomings. Traditional ponds were not sized to handle what is now considered a minimum "water quality" storm event, let alone to provide detention for larger storm events, and many were not designed to provide groundwater infiltration, thus failing to solve the root of the problem. Three key components are considered in contemporary stormwater management design:
- Effective treatment of water quality
- Control of excess runoff volume and velocity
- Groundwater infiltration
Controlling the volume and velocity of stormwater provides important benefits to people, including flood risk management and storm damage prevention. Furthermore, groundwater recharge is an important part of stormwater management: it helps maintain base flow in nearby streams and wetlands, replenishes drinking water supplies, and reduces the overall volume of runoff, helping to reduce or eliminate erosion.
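For a sense of how development changes runoff volume, here is a back-of-envelope estimate using the rational method (Q = C * i * A), a standard stormwater design formula that is not taken from this article; the coefficients below are illustrative assumptions:

```python
# Rational-method peak runoff estimate. With intensity in inches/hour and
# area in acres, Q comes out in cubic feet per second (the exact unit
# conversion factor is ~1.008 and is conventionally dropped).

def peak_runoff_cfs(c, intensity_in_per_hr, area_acres):
    """Peak runoff Q in cubic feet per second.

    c: runoff coefficient in [0, 1]; higher for impervious surfaces
       (roughly ~0.9 for pavement, ~0.2 for lawn, as illustrative values).
    intensity_in_per_hr: design rainfall intensity.
    area_acres: drainage area.
    """
    return c * intensity_in_per_hr * area_acres

# One paved acre sheds several times the peak flow of the same acre of lawn
# in the same storm, which is why impervious surface drives flooding:
paved = peak_runoff_cfs(0.9, 1.0, 1.0)
lawn = peak_runoff_cfs(0.2, 1.0, 1.0)
```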
Perhaps one of the most important factors to consider in stormwater management is preventing pollution by supporting good water quality. This is essential to supporting both ecosystem and community health, function and resiliency. Runoff can include bacteria and organic matter from trash and animal waste, oil and grease from leaky cars on the roads, toxic chemicals like pesticides and more. Many of these pollutants carry nitrogen, phosphorous and sediment which in small amounts are beneficial to aquatic systems but in larger amounts are detrimental to system health and function. These pollutants are harmful to our environment as they are carried by rivers, streams, and creeks and into larger bodies of water like the Chesapeake Bay.
What can be done about runoff, and who can help implement management techniques?
Federal, state, and local laws govern what can be discharged into our water bodies and how that runoff must be treated prior to discharge. These laws require federal, state, and local permits to repair, expand, or construct new impervious surfaces. Permits are also required when companies like GreenVest implement stormwater management practices or programs. For example, GreenVest is helping the City of Annapolis comply with its Municipal Separate Storm Sewer System (MS4) permit throughout the life of our contract. This MS4 compliance program will help address water quality, stormwater volume, and velocity, and where possible promote groundwater recharge, providing significant benefit to local residents as well as to the health and function of local water bodies.
In order for us to thrive, we must first take care of our environment. As we protect our water resources and ecosystems with effective stormwater management, we invest in our future. Learn more about our past stormwater management projects. | <urn:uuid:5675c2b4-3ecc-4aa5-9567-9703d49765fe> | {
"date": "2019-07-22T07:41:09",
"dump": "CC-MAIN-2019-30",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527828.69/warc/CC-MAIN-20190722072309-20190722094309-00496.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9429992437362671,
"score": 3.34375,
"token_count": 875,
"url": "https://www.greenvestus.com/2018/12/10/stormwater-management-what-is-it-part-one/"
} |
Posted: August 29, 2013
Many obstetricians make more money for C-sections than for vaginal deliveries. In a recent study, these doctors were more likely to perform the costly procedure than doctors paid a flat salary. But when the pregnant women were also physicians, doctors seemed less swayed by financial incentives.
Pregnant doctors are less likely than other women to deliver their babies via C-section, recent research suggests. Economists say that may be because the physician patients feel more empowered to question the obstetrician.
In 1938, a Work Projects Administration poster urged pregnant women to look to their doctors for guidance.
Obstetricians perform more cesarean sections when there are financial incentives to do so, according to a new study that explores links between economic incentives and medical decision-making during childbirth.
About 1 in 3 babies born today is delivered via C-section, compared to 1 in 5 babies delivered via the surgical procedure in 1996. During the same time period, the annual medical costs of childbirth in the U.S. have grown by $3 billion annually. There are significant variations in the rate of cesarean deliveries in different parts of the country — in Louisiana, for example, the C-section rate is nearly twice as high as in Alaska.
Obstetricians in many medical settings are paid more for C-sections. In a new working paper published by the National Bureau of Economic Research, health care economists Erin Johnson and M. Marit Rehavi calculated that doctors might make a few hundred dollars more for a C-section compared to a vaginal delivery, and a hospital might make a few thousand dollars more.
Johnson and Rehavi decided to explore the reasons for the increased number of surgical childbirth procedures via an unusual tack: They hypothesized that obstetricians would be less likely to be swayed by financial incentives when patients themselves had significant medical expertise and knowledge. By contrast, the researchers figured, such incentives might play a larger role in medical decision-making when patients knew very little.
In some ways, this is analogous to what happens when people take their cars to mechanics. People who are knowledgeable about cars are likely to push back against unnecessary repairs, whereas those who don't know much about cars are less likely to take issue with the mechanic's advice.
In childbirth, Johnson and Rehavi figured, this meant that obstetricians would perform fewer C-sections when their patients were themselves doctors.
"The idea is that physicians have medical knowledge," Johnson says. "If the obstetrician is deviating from the best treatment because of their own financial incentive, the patient [who is a] doctor would be able to push back against the obstetrician. But that might not be the case for nondoctors because they simply do not have the medical knowledge to know whether or not this C-section is the appropriate [method of delivery] for them."
The researchers tracked large numbers of births in California and Texas via databases that checked to see whether the mothers were themselves doctors.
"We found that doctors are about 10 percent less likely to get C-sections," Johnson says. "So obstetricians appear to be treating their physician patients differently than [they treat] their nonphysician patients."
Johnson says she thinks it unlikely that the doctors are conscious of the role financial incentives seem to be playing in their decisions. Rather, she says, a variety of analyses by economists suggests that incentives affect behavior in many different ways — often subtly.
Indeed, Johnson and Rehavi found that there was no disparity in the C-section rate between physician mothers and nonphysician mothers when the surgical procedures were scheduled in advance. Scheduled C-section decisions tend to be less subjective — a variety of medical conditions, such as a baby being in the breech position, call for a C-section.
Rather, she says, the disparity came about in what are known as unscheduled C-sections, when labor is attempted but does not go well. Patient and obstetrician then find themselves in a gray zone, where a judgment has to be made about whether to terminate labor and deliver the baby surgically.
Johnson and Rehavi also analyzed disparities in medical settings where doctors were paid a flat salary. In these cases, Johnson and Rehavi found there was a disincentive to perform the surgical procedures, which typically involve more time. In these settings, more of the mothers who were physicians received C-sections than mothers who were not physicians. Presumably, Johnson says, this means that some nonphysician mothers who needed C-sections did not get them in these settings.
Johnson suggests that one solution to the disparities lies in better patient knowledge and empowerment.
Shots - Health News
"date": "2015-03-03T08:54:24",
"dump": "CC-MAIN-2015-11",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463165.18/warc/CC-MAIN-20150226074103-00000-ip-10-28-5-156.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9775812029838562,
"score": 2.921875,
"token_count": 978,
"url": "http://www.ideastream.org/news/npr/216479305"
} |
Project 41: Wind turbine phone charger
Group members: Charles Hummel (chummel2), Emre Ercikti (ercikti2), Sachin Reddy (ssreddy2)
A small wind turbine with a total height of half a meter will be used to generate AC power which will be converted to DC power. The DC power will charge (ideally) a removable battery pack that would have an output for charging a phone.
Solar phone chargers are great but they do not work when there is no direct sunlight. This turbine will fill that gap.
For this project we will:
1. Create power electronics that will convert the AC power to DC.
a) This circuitry will also include protection measures such as short-circuit protection
2. Implement sensors that detect wind speed and direction, and software that will suggest the optimal turbine orientation.
3. Using the sensors, implement a cut-off mechanism to keep the generator and/or power electronics from being damaged by excessive wind speeds.
4. To scale the project's difficulty, we would design our own battery pack using SLAs (sealed lead-acid batteries) and implement a battery management system as well.
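As a rough sketch of how the sensor-driven cut-off in step 3 might behave (the specific thresholds and the use of hysteresis are our assumptions, not project requirements):

```python
# Illustrative over-speed cut-off logic for the turbine. Thresholds and
# the sensor/brake interfaces here are hypothetical.

CUTOFF_MPS = 15.0  # wind speed at which the generator is disconnected
RESUME_MPS = 10.0  # lower threshold for re-engaging (hysteresis gap)

class CutoffController:
    """Decides when to disconnect the generator to protect the electronics."""

    def __init__(self, cutoff=CUTOFF_MPS, resume=RESUME_MPS):
        assert resume < cutoff, "hysteresis band must be positive"
        self.cutoff = cutoff
        self.resume = resume
        self.engaged = False  # True while the turbine is braked/disconnected

    def update(self, wind_speed_mps):
        """Feed one anemometer reading; return True if the brake is engaged."""
        if not self.engaged and wind_speed_mps >= self.cutoff:
            self.engaged = True   # excessive wind: disconnect the generator
        elif self.engaged and wind_speed_mps <= self.resume:
            self.engaged = False  # wind has calmed: safe to resume charging
        return self.engaged
```

The two-threshold (hysteresis) design keeps the cut-off from chattering on and off when the wind hovers near a single limit.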
The mounting system will be a removable strap that would let the user secure the turbine as well as holes in the base that pegs could fit through to secure to the ground if need be. | <urn:uuid:40013ffb-a534-40b9-ac50-cb20294c096b> | {
"date": "2018-05-28T07:51:46",
"dump": "CC-MAIN-2018-22",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794872114.89/warc/CC-MAIN-20180528072218-20180528092218-00336.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8921508193016052,
"score": 3.140625,
"token_count": 276,
"url": "https://courses.engr.illinois.edu/ece445/project.asp?id=6036"
} |
Général Alexandre Dumas, 19th century
The father of Alexandre Dumas (père), famous author of The Count of Monte Cristo and The Three Musketeers, was the son of a French nobleman and a black Caribbean slave. During the turmoil of the French Revolution, Alex Dumas, for that was the name he adopted, rose through the ranks and became the first-ever black general in the French army. Serving with Napoleon during his expedition to Egypt, he attempted to return in a dilapidated ship that soon began to take on water from all sides, forcing it to put ashore in the then-hostile Kingdom of Naples. Imprisoned in a damp, cold cell in a fortress in Taranto, he slept on straw atop a stone bed and was kept in solitary confinement for a year without ever knowing who had accused him or of what. In that, he later served as inspiration for his son's novel, in which Edmond Dantès, the future Count of Monte Cristo, was imprisoned under similar circumstances.
Already in Egypt the general’s health had deteriorated - from what is described as a strange paralysis of his face. In his cell he became acutely ill, suddenly collapsing from abdominal pain, later found lying half delirious in a puddle of vomit. A servant brought him a little goat milk, but the pain grew worse. Then the servant gave him spoonfuls of olive oil mixed with lemon juice, and within three hours over forty enemas, which later he claimed saved his life.
At last a doctor came. He ordered treatments that may have been accepted practice at the time but led him to suspect he was being poisoned. He gave him cold water to drink, which made him worse. Then the servant resumed his ministrations of lemon juice, olive oil, and more enemas. Later the doctor returned and prescribed blistering, bloodletting, and also ear injections that for a time apparently left him totally deaf. Other doctors came and concluded that his symptoms, loss of vision, deafness, and facial paralysis, were signs of “melancholia.” Then suddenly one of the doctors himself dropped dead, reinforcing suspicions of foul play by somebody.
Then a new doctor arrived. He prescribed more injections into his ears, a powder blown into his eyes, and half an ounce of cream of tartar. The abdominal pain grew worse, leading to more blistering of the arms and the nape of the neck and behind the ears. He was now suffering from perpetual insomnia. Again suspecting poisoning, he would pretend taking the pills they gave him but secretly threw them away. Then the tide of war turning in favor of France, some French sympathizers secretly sent him a large chunk of chocolate and some medicinal cinchona. He improved “marvelously,” though still deaf in the left ear, practically blind in the right eye, with terrible headaches and permanent buzzing in the ears. As Napoleon's troops drew closer to Naples, he was moved to Brindisi, then released and repatriated to France after one year's captivity. He was partially blind and deaf, weakened by malnutrition, and walking with a limp because bloodletting had “severed a tendon.” In France he lived until 1807, never reinstated nor receiving a pension, having earlier on incurred the enmity of Napoleon, who at that time had also abandoned the egalitarianism of the French revolution in favor of policies against French citizens of color.
Abstracted from The Black Count by Tom Reiss, Random House, 2012.
George Dunea, MD, Editor-in-Chief | <urn:uuid:a8eb7051-9b4e-499e-a323-f43fa359f4a6> | {
"date": "2017-03-23T12:23:23",
"dump": "CC-MAIN-2017-13",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186895.51/warc/CC-MAIN-20170322212946-00401-ip-10-233-31-227.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.984392523765564,
"score": 3.1875,
"token_count": 751,
"url": "http://hekint.org/index.php?id=178"
} |
ERIC Number: ED199177
Record Type: RIE
Publication Date: 1981
Reference Count: 0
Seven Steps for Preparing a Social Studies Fair Project.
McCue, Lydia L.
This student booklet outlines seven steps to aid elementary and secondary school students in participating in a social studies county fair. The first step is an understanding of the fair rules. Second, the student decides whether to work in a group or alone, after reading advantages and disadvantages of both approaches. Step three, choosing a topic, describes nine categories and topics within those categories. Students may choose from history, economics, political science, geography, anthropology, sociology, psychology, interdisciplinary, or special theme topics. The fourth step lists sources of information for the research portion of the project. Step five outlines the research steps: stating the question, collecting information, organizing and summarizing, stating conclusions, and stating the importance of conclusions. Steps six and seven concern constructing a display and preparing for judging. The judging consists of an oral presentation and a written abstract as well as a visual display. (KC)
Publication Type: Guides - Non-Classroom
Education Level: N/A
Authoring Institution: West Virginia State Dept. of Education, Charleston. Div. of Instructional Learning Systems.
Note: For a related document, see SO 013 275. Photograph | <urn:uuid:7ef575ff-0a0d-4268-a0aa-6d4e54ab85a0> | {
"date": "2014-08-23T07:29:53",
"dump": "CC-MAIN-2014-35",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500825341.30/warc/CC-MAIN-20140820021345-00008-ip-10-180-136-8.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.8640143871307373,
"score": 3.671875,
"token_count": 277,
"url": "http://eric.ed.gov/?id=ED199177"
} |
- High court's voting rights debate omits role of one woman
- Viola Liuzzo's murder helped spark Voting Rights Act passage
- Detroit housewife's activism outraged nation
- Liuzzo's family fought FBI coverup to clear her name
On March 26, 1965, Penny Liuzzo was watching the "Donna Reed Show" at her home in Detroit when a wave of nausea suddenly swept over her. In an instant, she knew what had happened.
"Oh my God," she thought as she stood up and walked out of the room. "My mom's dead."
When Penny's mother, Viola Liuzzo, had called home a week earlier to tell her family she was going to Selma, Alabama, Penny had been engulfed by a sense of dread. She tried to talk her mother out of going.
''I'm never going to see you again, Mom. I know it. I just feel it. Please let me go in your place. I'll go."
Liuzzo laughed off her daughter's fears. Viola had been determined to help marchers in Selma after watching newsreel footage of civil rights marchers being beaten there. She had cried after the newscast ended. ''I'm tired of sitting here watching people get beat up," she told her family before driving off to Selma.
The call came at midnight. After experiencing her bout of nausea, Penny had gone to bed but could not sleep. She heard her father answer the phone. "Penny, your mother's dead! Your mother's dead," he wailed.
Then something happened that Penny still cannot explain 40 years later. Her 6-year-old sister, Sally, walked into the bedroom and said, "No, Mama's not dead. I just saw her walking in the hall."
The murder of Viola Liuzzo was one of the most shocking moments in the civil rights movement. On a winding, isolated road outside Selma, Liuzzo was ambushed and shot to death by a car full of Ku Klux Klansmen.
She was murdered while giving a ride to a 19-year-old black man, Leroy Moton, one of many civil rights marchers she had driven around Selma. Liuzzo had joined the movement's carpool system soon after arriving in the small Alabama town. Liuzzo's murder became international news. Her photo became a fixture in history books. Her name has been inscribed on civil rights memorials throughout the United States.
But people had far less sympathy for Liuzzo when she was murdered. Hate mail flooded her family's Detroit home, accusing her of being a deranged communist. Crosses were burned in front of the home. Her husband, Anthony Liuzzo Sr., had to hire armed guards to protect his family.
A Ladies' Home Journal magazine survey taken right after Liuzzo's death asked its readers what kind of woman would leave her family for a civil rights demonstration. The magazine suggested that she had brought death on herself by leaving home -- and 55% of its readers agreed.
"It was horrible," Penny says. "People sent [copies of] this magazine that showed her body in the car with the blood and bullet holes. They called her a white whore and a nigger lover, and said that she was having relations with black men."
Even Sally did not escape the public's wrath. Students threw rocks at her and taunted her on the way to school, Penny says.
The family says they were even more devastated when they learned years later who had initiated the public backlash -- J. Edgar Hoover, director of the FBI. To absolve itself of culpability in her death -- an FBI informant was in the car with the men who killed Liuzzo -- the FBI released her psychiatric records and directed a smear campaign to suggest that Liuzzo was promiscuous.
"Your mother has not died in vain," the Rev. Martin Luther King Jr. told Penny at her mother's funeral. Yet she wondered for years if that was true.
The loss of her mother and the public backlash shattered Penny's family. Her father never recovered. Her sisters and brothers struggled.
And Penny carried around a knot of bitterness for years.
The effect on Sally was brutal. "My heart just broke when Sally was 11 years old and we went to visit my mom's grave and she just sobbed on my shoulder, 'Please, tell me what she was like. I don't remember. I don't remember. Please, I can't remember her voice.'"
But Penny still has plenty of memories of her mother. As the eldest child, she spent the most time with her. Today she is a housewife and a mother of four sons living near Fresno, California, with her boyfriend, Bryce. She is a warm and open woman who loves to laugh. It's odd to connect a string of tragedies with such a cheerful woman. Now 66, she has struggled with diabetes and was once legally blind until laser surgery helped her see well enough to drive.
"She was always for the underdog," she says of her mother. "Once, our neighbors had a fire. She went around and took up a collection to replace the toys -- this was around Christmastime -- they had eight kids."
Mary Stanton, author of the definitive Liuzzo biography, "From Selma to Sorrow," says Viola Liuzzo discovered that a secretary where she worked had been laid off without severance pay. She gave the woman her entire paycheck hoping it would embarrass her employer into giving the woman severance. It didn't, and Liuzzo paid for her activism by losing her own job.
Viola Liuzzo was a restless person. She married at 16 but had it annulled the next day. She married again and had two daughters, Penny and Mary. Seven years later she was divorced again. In 1950, she married Anthony Liuzzo Sr., a Teamsters leader. They had three children, Anthony Jr., Thomas and Sally.
She was also ambitious. Viola Liuzzo wouldn't settle for being a housewife. Though she was a ninth-grade dropout, in 1961 she enrolled in night classes to become a medical assistant. She graduated with top honors. She was a member of the Catholic Church but left after a priest told her that a child she had miscarried would never see the face of God. She joined the Unitarian Universalist Church.
Stanton says she was intrigued by Liuzzo's refusal to play the part of the submissive housewife. While her neighbors were taking cooking classes or doing church volunteer work, Liuzzo was preparing for a career, crusading for workplace rights, and going back to college.
"She was one of these people who got really involved in everything she did," Stanton says. "They become like a vortex that sucks other people into their enthusiasm."
By 1965, Penny was becoming closer to her mom after some stormy adolescent years. "I just graduated from high school and we had just become friends," she says.
When Liuzzo decided to go to Selma, she did it in typically impulsive fashion. She was taking classes at Wayne State University when she called home. "I'm going," she cheerfully announced. "I'm on the way."
That's when Penny had her premonition. She tried to persuade her mother not to go, telling her that she would die. "I'll pee on your grave," Liuzzo told her daughter, laughing. And off to Selma she drove.
There, Liuzzo was one of 2,000 marchers gathered in response to a plea from King. She plunged right in, joining the movement's transportation committee, ferrying civil rights marchers around Selma for six days. Some of those marchers were black men. Liuzzo had to be aware of the dangers of a white woman being seen in a car with a black man at the time, says David Truskoff, one of the marchers who met Liuzzo in Selma.
Truskoff, who would later write "The Second Civil War," says the Rev. James Reeb had just been murdered when Liuzzo arrived in Selma. Cars displaying swastikas drove by marchers constantly. White locals made obscene gestures at white women marchers walking next to black men.
The journalists who had assembled for the Selma march weren't much better, Truskoff says. The press trucks were "half-full of rednecks." Many of them had heard Gov. George Wallace publicly warn Alabamans that white women like Liuzzo who had come down from the North for the march would be going back home to give birth to black babies.
Truskoff says he warned the marchers that these journalists were trying to photograph marchers at night when they camped out in the open during the five-day, 50-mile march to Montgomery. "What some of these crackers really wanted to see were black men with white women in some of these sleeping bags."
The last time Truskoff saw Liuzzo was in a Selma church. She was standing before an applauding audience with a check in her hand. "She brought it up onto the stage and gave Hosea [Williams] a check from her husband's union," he says. "On her way back, there was a big cheer and applause. She was just beaming. She walked past me, nodding at me as if to say, 'We're going to win this thing.'"
On the last day of the march, Liuzzo joined the 3,200 people walking into Montgomery for a rousing rally capped by a speech by King. She then drove back to Selma with Moton and other marchers.
Liuzzo dropped off her passengers in Selma and returned with Moton to Montgomery to pick up more marchers. They were driving on U.S. 80 when a car filled with four white men pulled alongside Liuzzo's car. One of the men shot Liuzzo in the head, killing her instantly, according to police reports.
President Lyndon B. Johnson appeared on television the next day to announce the arrest of four Ku Klux Klan members: Eugene Thomas, 43; William Eaton, 41; Collie Leroy Wilkins Jr., 21; and Thomas Rowe Jr., 34. Rowe, it was later disclosed, was an FBI informant.
The condemnation of Reeb's murder in Selma had been instantaneous and widespread. That was far from the case for Liuzzo. Racism, sexism and the FBI combined to provoke a backlash against her.
First, an all-white, all-male jury acquitted all four men of Liuzzo's murder. Then they were tried again under different charges. Their trial was moved to a different jurisdiction and three were sentenced to 10 years in prison for violating Liuzzo's civil rights. The fourth, Rowe, was not convicted after being granted immunity.
After the verdict, Stanton says, bumper stickers started appearing on cars and trucks in Lowndes County, where Liuzzo was murdered, saying, "Open Season."
The FBI then went after Liuzzo's reputation. Stanton says they tried to cover up for the fact that their informant in the car did nothing to prevent Liuzzo's murder. Hoover began telling President Johnson that Liuzzo was having sex with black men, was a drug addict, and had a husband who was involved in organized crime.
The FBI then leaked this misinformation to the press, which soon began writing stories questioning Liuzzo's mental health (she had once suffered a nervous breakdown) and her morality. Anthony Liuzzo found himself defending his wife's character to newspaper reporters. The Liuzzo family would only discover what the FBI had done years later, after obtaining documents under the Freedom of Information Act.
Penny says her father was eaten away by the criticism of his wife. "It took the soul right out of him," she says. "He never was the same. He started drinking a lot."
Stanton says Anthony Liuzzo Sr. was viewed as a failure. "He was seen as a macho Teamster who couldn't keep his woman in line." He died in 1978, still tormented about the gossip surrounding Viola. For a decade he had been trying to persuade the FBI to return her wedding ring to him. They finally did so -- two years after he died.
The effect on the other family members also was devastating. Penny had two bad marriages; so did Sally. Penny says both married too quickly as a way of taking their minds off the loss of their mother. Sally was hit particularly hard by the death of her mother and, later, her father.
"Sally has just got a grip on her life and she's in her 40s," Penny says. "She was an orphan at 20."
Her two brothers, Anthony Jr. and Tommy, who were 13 and 10 at the time, later dropped out of high school. "They were devastated and they retreated from society," she says.
Anthony Liuzzo Jr., the eldest son, has periodically popped into public view since his mother's murder. In 1975, he filed a $2 million lawsuit against the FBI on behalf of himself and his siblings for the agency's complicity in his mother's death.
"My brother always said there was a government conspiracy, but I didn't believe him," she says. During the trial, the FBI admitted that it had shredded 10,000 pages of documents connected to Liuzzo's murder. Still, the FBI won. In 1983 a federal judge threw out the lawsuit and ordered the family to pay the government $80,000 in court costs. The judge later changed that demand after the television show "20/20" did a report on the trial and people became outraged at the judge's order.
Penny says she was shocked to learn about the FBI's role in her mother's death.
"At first, I thought they were the heroes," she says quietly. "I was disappointed. I didn't want it to be that way. ... I wanted America to be like our forefathers wanted it to be, and it's not." The court's decision changed the lives of her brothers as well, she says. "It drove my brothers nuts," she says. "They couldn't take it anymore."
For a time, Anthony Jr. was a leader in a militia faction. He doesn't talk publicly anymore, Penny says. "Ever since 9/11, he's gone way underground."
After her mother's death, Penny, too, felt as if she were being dragged into despair with the rest of her family. Once, when Penny was in a college political science class, she interrupted an instructor who was talking about justice in the South, telling him, "There is no justice in the South."
The teacher knew who Penny's mother was. "Every dog has its day," he told her.
Penny wondered if that were true, especially after her family lost the suit against the FBI and was forced to pay the court costs.
Katie Rager, Penny's longtime friend, says Penny was simmering with anger when they became friends. "She was angry at the government. She was angry at the KKK. We would just talk for hours and hours about how unfair it was, about how the people who murdered her mother took her away from her kids."
Penny admits that her mother's death made her pessimistic about her own future. "I prayed every night, 'God, don't take me away from my kids. Don't let me die until my kids are older.'"
She found some refuge in her faith. With Rager, she used to go to a little church near Fresno and read Bible verses about forgiveness. She began reading about Native American spirituality, which emphasized being grateful for every little thing in life.
Rager says that Penny gradually changed -- so much so that whenever Rager had a problem, she turned to Penny. "She came out of this cocoon of loathing, hate, and anger and just blossomed into this beautiful, empathetic person."
The bitterness may subside, but not Penny's sense of loss. Over the years, Penny says, she found herself dreaming about her mother. She misses her spark and energy. "Sometimes when I'm feeling blue, I wish I could call my mother up."
She has never been tempted to blame God for her mother's murder. "You can't blame the higher power for what man's free will does. We all have our paths to go down. She chose that path and God loved her. He must have."
Another way Penny overcame bitterness was thinking of her mother's attitude toward hate. Liuzzo had seen much of it growing up in the segregated South. "My mom said the best thing, and I took it to heart: 'Hate hurts the hater, not the hated. It eats you up. It's too consuming. It makes you so unhappy.'"
Motherhood gave Penny another reason not to be bitter. When Penny gave birth to her first son, she resolved not to let her anger infect her boys as it had the other men in her family. "How can you be a good mom and be hateful?" she says. "Adults who grow up prejudiced -- how did they learn that? Their parents were role models. You have to be a living example.''
Penny got her chance to be a living example with an unexpected encounter in court. During her family's suit against the government, Penny was giving a deposition when she encountered Eugene Thomas, one of the men arrested for the murder of her mother.
Penny was sitting outside the courtroom in a waiting room with her son John when Thomas walked into the room. At first, he just stood there and said nothing as he looked at her, Penny says. Then he asked her, "Can you forgive me?"
Penny paused. Then she said, "Yeah, I do."
Thomas' shoulders relaxed, and relief seemed to wash over his face. "Thank you," he said. Then he turned and walked out of the room. After she tells me that story, I ask Penny why she would so readily forgive the man who participated in the killing of her mother. Penny says she actually felt sorry for Thomas. He looked like he was in agony. "I didn't hesitate. I could see the look on his face. I'm not out to crush people. Everybody lives with their own torture.''
She didn't hesitate because she's now found something else to live for -- her sons. Penny says she doesn't want to hurt any more. So she's chosen to be grateful, not bitter. It's what her mother would have wanted.
"I really have a good life. I'm not the richest person in the world. But I have people who love and adore me. All four of my boys, I've never had a major problem with my kids. If God would say I'm going to grant you a gift for my life, I would never have come up with the gift he gave me." | <urn:uuid:baafdb46-bd08-458a-ad17-89e887d00c1c> | {
"date": "2017-04-29T07:21:45",
"dump": "CC-MAIN-2017-17",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123318.85/warc/CC-MAIN-20170423031203-00531-ip-10-145-167-34.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9918901324272156,
"score": 2.53125,
"token_count": 3958,
"url": "http://edition.cnn.com/2013/02/28/politics/civil-rights-viola-liuzzo/index.html?hpt=hp_c1&imw=Y&iref=mpstoryemail"
} |
In just a few short years, federal budget deficits and the national debt rose from obscurity to become America's newest obsession. Unfortunately, while interest in the issue has grown, rigid ideology and an increasingly vitriolic and entrenched public dialogue have crowded out thoughtful discourse, preventing even basic education on budgets, the deficit, and the role of government.
Fiscal Therapy is an antidote to the demagoguery and half-truths. It explains the scope and nature of the deficit problem facing the United States and offers sensible, balanced, workable solutions in clear language, drawing on national history, the experiences of other countries, and economic analysis. According to author William G. Gale, what is at stake in solving the deficit problem is the social contract that governs how Americans interact with their government.
Restructuring government and balancing the long-term budget are monumental tasks. While Americans need not and should not abandon their fundamental values, the required actions will be profound. No country makes such changes easily or quickly, but failure to act will ultimately guarantee long-term economic ruin.

Gale proposes a set of policies to restore fiscal balance through shared sacrifice. His proposal would restructure taxes and spending programs, cut overall government outlays, raise revenues, and put the economy and the budget on sound footing.
2. How We Got Here
3. Where We're Heading
4. Why the Deficit Is a Problem
5. Fixing the Problem
- Inhabitat – Sustainable Design Innovation, Eco Architecture, Green Building - http://inhabitat.com -
Design Proposal for the Newark Visitor Center Competition
Posted By Bridgette Meinhold On February 26, 2010 @ 4:08 pm In Architecture,Sustainable Building | 1 Comment
Last year, New Jersey held a competition to design a visitor center for their largest city, Newark. Over two hundred entries were submitted for the contest, which required the use of innovative sustainable features. One of the finalists was Brooklyn-based architecture firm, super interesting!, with their proposal “Engaging Ecology – Connecting Community,” which features a strong focus on the ties between the local environment and the surrounding community. The visitor center they envision includes a tidal marsh, a permanent exhibit on the history of Newark, a bioremediation system and would be built from reclaimed materials.
While the other three final design proposals were also great, we liked super interesting!’s attention to the surrounding environment. They proposed new tidal marsh wetlands to allow visitors to connect with the river flowing out of the city, while also acting as an environmental barometer of sorts to gauge climate change. If sea levels rise as predicted, visitors can see the changes to the wetland habitat. The tidal marsh also acts as a natural bioremediation and filtration system for the runoff from nearby streets and parking lots before it hits the ocean.
The actual visitor’s center would be built from reclaimed materials like brick and wood, and built on a concrete plinth above the wetlands. A marsh courtyard would be situated in the center and visitors would actually be able to go down and touch the water rather than just viewing it from afar. Heating for the center would be provided by geothermal source radiant heat as well as solar thermal collectors, and cooling would be aided by natural ventilation cooled across the wetlands. Inside the center, rooms would be available for teaching and community events as well as to educate visitors on the history of the area.
“You look tired, Antigone,” said Emma to her nearest neighbour, a pale girl of eighteen.
Antigone would think she was in prison, to be used like that.
Antigone, with a woman's instinct, entreats him to choose the only way still left of safety.
Did you put it into his head to paint me as Antigone, that he might have my likeness for this?
Thus the Antigone carries us beyond the region of hereditary disaster into the more universal sphere of ethical casuistry.
In this act of holy devotion Antigone succeeded; Polynikes was buried.
Inhumanity: even in the "Antigone," even in Goethe's "Iphigenia."
Sophocles, the dramatist, puts noble words into the mouth of Antigone.
He had endeavoured to make "the inward, unwritten law," of which Antigone speaks, the source of every outward moral law.
Antigone, Juliet and Robinson Crusoe were all the victims of accident.
In classical mythology, a daughter of King Oedipus. Her two brothers killed each other in single combat over the kingship of their city. Although burial or cremation of the dead was a religious obligation among the Greeks, the king forbade the burial of one of the brothers, for he was considered a traitor. Antigone, torn between her religious and legal obligations, disobeyed the king's order and buried her brother. She was then condemned to death for her crime.
Central to the theory of Anthropogenic Global Warming (AGW) is the assumption that the Earth and every one of its subsystems behave as if they were blackbodies, that is, their “emissivity” is taken to be 1.0.
But this is an erroneous assumption, because the Earth and its subsystems are not blackbodies but gray-bodies: they do not absorb the whole load of radiant energy they receive from the Sun, and they do not emit the whole load of radiant energy they absorb.
Furthermore, the role of carbon dioxide is misunderstood. According to the AGW hypothesis, carbon dioxide is the second most significant driver of the Earth’s temperature, behind water vapor, which is considered the most important driver of the Earth’s climate. Other AGW authors dismiss the role of water vapor entirely and focus their arguments on carbon dioxide.
What is the total emissivity of carbon dioxide? I will consider this question with reference to the science of radiative heat transfer.
Total Emissivity of the Carbon Dioxide – The Partial Pressures Method
In 1954, Hoyt C. Hottel undertook an experiment to determine the total emissivity of carbon dioxide and water vapor. He found that the total emissivity was linked to the temperature of the gas and its partial pressure. As the temperature increased above 277 K, the total emissivity of the carbon dioxide decreased, and as the partial pressure (p) of the carbon dioxide increased, its total emissivity also increased.
Hottel found also that the total emissivity of the carbon dioxide in a saturated state was very low (Ɛcd = 0.23 at 1.524 atm-m and Tcd = 1,116 °C).
As Hottel diminished the partial pressure of the carbon dioxide, its total emissivity also decreased in such form that, below a partial pressure of 0.006096 atm-m and a temperature of 33 °C, the total emissivity of the carbon dioxide was not quantifiable because it was almost zero.
After Hottel’s experiment, in 1972, Bo Leckner repeated the experiment and corrected an error in the graphs plotted by Hottel. However, Leckner’s results placed the total emissivity of carbon dioxide even lower than Hottel had found.
The missing piece, however, was the value at the real partial pressure of carbon dioxide in the Earth’s atmosphere and at instantaneous temperatures. Contemporary authors, like Michael Modest, and Donald Pitts and Leighton Sissom, made use of the following formula to obtain the total emissivity of carbon dioxide over the whole emissive spectrum, at any instantaneous tropospheric temperature and altitude:
Ɛcd = [1 − ((a − 1)(1 − PE) / (a + b − 1 + PE)) · e^(−c (Log10 ((paL)m / paL))²)] · (Ɛcd)0
Introducing 7700 meters as the average altitude of the troposphere and the real partial pressure of the atmospheric carbon dioxide (0.00038 atm-m), the resulting total emissivity of the carbon dioxide is 0.0017 (0.002, rounding up the number).
Evidently, carbon dioxide is not a blackbody, but a very inefficient emitter (a gray-body). For comparison, acetylene has a total emissivity that is 485 times higher than that of carbon dioxide.
After getting this outstanding result, I proceeded to test my results by means of another methodology that is also based on experimental and observational data. The algorithm is outlined in the following section.
Total Emissivity of CO2 – Mean Free Path Length and Crossing Time Lapse of Quantum/Waves Method
The mean free path length is the distance traversed by quantum/waves through a given medium before they collide with a particle with gravitational mass. The crossing time lapse is the time the quantum/waves spend crossing a given medium; in this case, the atmosphere is that medium.
As the carbon dioxide is an absorber of longwave IR, we will consider only the quantum/waves emitted by the surface towards the outer space.
The mean free path length of quantum/waves emitted by the surface, traversing the Earth’s troposphere, is l = 47 m, and the crossing time is t = 0.0042 s (4.2 milliseconds).
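As a numeric sanity check, these two figures are mutually consistent under a simple random-walk picture of scattering. The random-walk model is my assumption here — the text itself does not spell out how t follows from l:

```python
# Random-walk estimate of the time a quantum/wave spends crossing the troposphere.
# Figures taken from the text: layer depth ~7700 m, mean free path l = 47 m.
DEPTH_M = 7700.0          # average depth of the troposphere (from the text)
FREE_PATH_M = 47.0        # mean free path length (from the text)
LIGHT_SPEED_M_S = 3.0e8   # speed of light

steps = (DEPTH_M / FREE_PATH_M) ** 2      # random-walk steps to diffuse a distance DEPTH_M
path_length_m = steps * FREE_PATH_M       # total distance actually travelled
crossing_time_s = path_length_m / LIGHT_SPEED_M_S

print(round(crossing_time_s, 4))  # ~0.0042 s, matching the 4.2 ms quoted above
```

A photon that scatters every 47 m needs roughly (7700/47)² steps to diffuse across a 7700 m layer, which is where the quoted 4.2 ms comes from under this reading.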
Using l = 47 m for the crossing of the troposphere, I obtained the crossing time lapse t = 0.0042 s. By introducing t into the following equation, we obtain the real total emissivity of the atmospheric carbon dioxide:
Ɛcd = [1 − e^(t × (−1/s))] / √π

Ɛcd = [1 − e^(0.0042 s × (−1/s))] / √3.141592… = 0.0024
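Evaluating this expression numerically is a direct plug-in; nothing is assumed beyond t = 0.0042 s:

```python
import math

t = 0.0042  # crossing time lapse through the troposphere, in seconds
emissivity = (1.0 - math.exp(-t)) / math.sqrt(math.pi)
print(round(emissivity, 4))  # 0.0024
```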
Therefore, the total emissivity of the atmospheric carbon dioxide obtained by considering the mean free path length and the crossing time lapse for the quantum/waves emitted from the surface coincides with the value obtained from the partial pressures method:
Ɛcd (partial pressures method) = 0.0017

Ɛcd (mean free path method) = 0.0024

The difference is 0.0007, which is trivial in this kind of assessment.
In the introduction I asked: What is the total emissivity of carbon dioxide?
In this note I have calculated the real total emissivity of the atmospheric carbon dioxide at its current partial pressure and instantaneous temperature to be 0.002.
Clearly, carbon dioxide is not a near-blackbody system with an emissivity of 1.0, as suggested by the IPCC. Quite the opposite: given its total absorptivity, which is the same as its total emissivity, carbon dioxide is quite inefficient at absorbing and emitting radiation, making it a gray-body.
Accepting that carbon dioxide is not a black body and that the potential of the carbon dioxide to absorb and emit radiant energy is negligible, I conclude that the AGW hypothesis is based on unreal magnitudes, unreal processes and unreal physics.
This blog post was inspired by Chapter 12 of the book ‘Slaying the Sky Dragon’.
“This first catechism will be referred to in a later figure as the ‘Cold Earth Fallacy’, and it is based on the erroneous assumption that the earth’s surface and all the other entities involved in its radiative losses to free space all have unit emissivity. The second catechism has already been discussed: the contention that Venus’ high surface temperature is caused by the ‘greenhouse effect’ of its CO2 atmosphere.”
-Dr. Martin Hertzberg. Slaying the Sky Dragon-Death of the Greenhouse Gas Theory. 2011. Chapter 12. Page 163.
[1.] Hertzberg, Martin. Slaying the Sky Dragon-Death of the Greenhouse Gas Theory. 2011. Chapter 12. Page 163.
[6.] Hottel, H. C. Radiant Heat Transmission-3rd Edition. 1954. McGraw-Hill, NY.
[7.] Leckner, B. The Spectral and Total Emissivity of Water Vapor and Carbon Dioxide. Combustion and Flame. Volume 17; Issue 1; August 1971, Pages 37-44.
[8.] Modest, Michael F. Radiative Heat Transfer-Second Edition. 2003. Elsevier Science, USA and Academic Press, UK.
[9.] Lang, Kenneth. 2006. Astrophysical Formulae. Springer-Verlag Berlin Heidelberg. Vol. 1. Sections 1.11 and 1.12.
[10.] Maoz, Dan. Astrophysics in a Nutshell. 2007. Princeton University Press. Pp. 36-41
[11.] Dr. Hertzberg is an internationally recognized expert on combustion, flames, explosions, and fire research with over 100 publications in those areas. He established and supervised the explosion testing laboratory at the U. S. Bureau of Mines facility in Pittsburgh (now NIOSH). Test equipment developed in that laboratory have been widely replicated and incorporated into ASTM standards. Published test results from that laboratory are used for the hazard evaluation of industrial dusts and gases. While with the Federal Government he served as a consultant for several Government Agencies (MSHA, DOE, NAS) and professional groups (such as EPRI). He is the author of two US patents: 1) Submicron Particulate Detectors, and 2) Multichannel Infrared Pyrometers. http://www.explosionexpert.com/pages/1/index.htm
Read more from Nasif by scrolling here: http://jennifermarohasy.com/blog/author/nasif-s-nahle/
Weather: You Like It or Not - Learning about the Importance of and Flaws in Weather Prediction
In this lesson, students explore the importance of weather prediction and learn about some of the flaws inherent in the process. By researching specific storm types, they are able to prepare and deliver their own weather reports. The material includes links to additional information and resources.
Intended for grade levels:
Type of resource:
No specific technical requirements, just a browser required
Cost / Copyright:
Copyright 2003 The New York Times Company. Teachers of grades 3 through 12, or parents of children of like age (collectively, "Educators"), may print and reproduce in full, in print format, for students the crossword puzzle, daily news quiz, daily lesson plan, lesson plans from the lesson plan archive, the related lesson plan article and resources, as those materials are so labeled on The Learning Network (collectively, the "Content") for classroom and instructional use only and not for resale or redistribution.
DLESE Catalog ID: DWEL-000-000-000-412
Resource contact / Creator / Publisher:
Author: Rachel McClain Klein
The New York Times Learning Network
ShouTime dumped the incredibly rare game Omega (Nihon System). It's a ball-and-paddle game running on similar hardware to Sega's Gigas. These games use an NEC MC-8123 CPU module containing a Z80 core, decryption circuitry, and an 8 KiB encryption key in battery-backed RAM. When fetching a byte from ROM or RAM, the CPU chooses a byte from the encryption key based on twelve of the address bits and whether it's an M1 (opcode fetch) cycle or not. This byte from the encryption key controls what permutation (if any) is applied to the byte the CPU fetches. This encryption scheme could have been brutal, requiring extensive analysis of a working CPU module to crack, if it weren't for a fatal flaw: Sega used a simple linear congruential generator algorithm to create the 8 KiB keys from 24-bit seeds. That means there are fewer than seventeen million encryption keys to test. Seventeen million might sound like a lot, but it's far smaller than the total possible number of keys, and definitely small enough to apply a known-plaintext attack in a reasonable amount of time.
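The key-selection step described above can be sketched as follows. Which twelve address bits are used, and how they combine with the M1 flag, are assumptions here for illustration, not the module's documented wiring; only the sizes match the description (12 address bits plus 1 M1 bit gives a 13-bit index into an 8 KiB table).

```python
KEY_SIZE = 0x2000  # 8 KiB key table

def key_index(addr, m1):
    """Pick which of the 8192 key bytes governs this fetch.
    The exact bit layout is an assumption, not the real hardware's."""
    return ((addr & 0x0FFF) << 1) | (1 if m1 else 0)

def fetch(key, addr, data, m1=False):
    """Apply the transformation selected by the key byte.
    The real hardware applies one of several bit permutations chosen
    by the key byte; XOR here is purely a placeholder."""
    return data ^ key[key_index(addr, m1)]
```

Note how every combination of the twelve address bits and the M1 flag maps to a distinct key byte, which is why the key must be exactly 8 KiB.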
So how do we go about attacking it? First we have to make an assumption about what the game program is going to be doing. Given that the hardware looks pretty similar to Gigas and Free Kick, I guessed that one of the first things the program would do is write a zero somewhere to disable the non-maskable interrupt generator and then disable maskable interrupts. So I wrote a program to find candidate seeds (no, I won't show you the source code for this program – it's embarrassingly ugly and hacky, not something I could ever be proud of):
- Start with first possible 24-bit seed value
- Generate 8 KiB key using algorithm known to be used by Sega
- Decrypt first few bytes of program ROM using this key
- If it looks like Z80 code to store zero somewhere and disable interrupts, log the seed
- Repeat for next possible seed value until we run out of values to try
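Since the original search tool isn't published, here is a hedged sketch of the same five-step loop. The LCG constants and the XOR "decryption" below are placeholders, not Sega's real generator parameters or the MC-8123's real permutation (MAME's mc8123 implementation has those); only the shape of the attack matches the steps above.

```python
KEY_SIZE = 0x2000  # 8 KiB key per seed

def lcg_keystream(seed, size=KEY_SIZE):
    """Expand a 24-bit seed into a key table with a toy LCG.
    The multiplier/increment are placeholders, not Sega's values."""
    a, c, m = 0x41C64E6D, 0x3039, 1 << 24
    state = seed % m
    key = bytearray(size)
    for i in range(size):
        state = (a * state + c) % m
        key[i] = (state >> 8) & 0xFF
    return bytes(key)

def decrypt(data, key):
    """Stand-in for the MC-8123 permutation: plain XOR for illustration."""
    return bytes(b ^ key[i] for i, b in enumerate(data))

def plausible_startup(code):
    """Known-plaintext check, e.g. a Z80 DI opcode (0xF3) near the start."""
    return 0xF3 in code[:4]

def candidate_seeds(encrypted_prefix, seeds=range(1 << 24)):
    """Steps 1-5: for each seed, generate a key, decrypt the first
    bytes of the program ROM, and log seeds that look like real code."""
    return [s for s in seeds
            if plausible_startup(decrypt(encrypted_prefix,
                                         lcg_keystream(s, len(encrypted_prefix))))]
```

The real search is embarrassingly parallel, which is why sweeping all 2^24 seeds only takes minutes on a modern laptop.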
This ran in just a few minutes on an i7 notebook, and narrowed the millions of possible seed values down to just five candidates: 36DF3D, 6F45E0, 7909D0, 861226, and BE78C9 (in hexadecimal notation). Now I could have tried these in order, but it looked like Sega had made another misstep: besides using a predictable algorithm to generate the key, they also used a predictable seed value to feed this algorithm. The candidate seed value 861226 looks like a date in year-month-day format. It turns out this seed generates the correct key to decrypt the game program, so I guess we know what someone at Sega was doing the day after Christmas in 1986.
Brian Troha hooked up the peripheral emulation, and the game will be playable in MAME 0.183 (due for release on 22 February). Colours aren’t quite right as we don’t have dumps of the palette PROMs yet, but we expect to resolve this in a future release. Thanks to ShouTime and everyone else involved in preserving this very rare piece of arcade history. | <urn:uuid:e2236113-8f85-4011-9a82-bc526475f388> | {
"date": "2017-12-18T16:50:54",
"dump": "CC-MAIN-2017-51",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948618633.95/warc/CC-MAIN-20171218161254-20171218183254-00256.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.904912531375885,
"score": 2.984375,
"token_count": 687,
"url": "http://forum.mamedev.org/viewtopic.php?f=13&t=178&p=553&sid=7c4af0716d0b31b47df835da2336672b"
} |
More than half a million new electric cars hit the world’s roads last year, making it a question of how, not if, energy demand will be affected in the future. So, what will this mean for traditional resources such as oil? BP Magazine considers the possibilities with chief economist Spencer Dale
Imagine a time when electric cars could outsell gasoline cars by ten to one. It may seem unlikely, but that era has been and gone. This was actually a reality in the United States at the turn of the 20th century, when 600 electric taxis roamed the streets of New York and an electric car held the land speed record.
Then, in 1908, gasoline-powered Model T Fords began rolling off the production line in their thousands, making motoring more affordable and stifling the progress of electric vehicles. Today, electric vehicles are back. What was once the aspiration of one or two niche car makers has become a central element in the strategies of virtually all of the world’s major manufacturers. With improving technology, falling battery costs and the need to improve urban air quality on its side, the electric vehicle is well placed to increase its share of the global car fleet.
Here, BP’s chief economist Spencer Dale talks through the numbers, contemplating what electric vehicles might mean in the future.
Will more electric vehicles on the roads mean lower demand for oil?
The world currently consumes 95 million barrels of oil per day (Mb/d) overall, with the global car fleet accounting for 19 Mb/d, or around one fifth of that total. BP's Energy Outlook 2035 forecasts growth in electric car numbers over the next two decades from around 1.2 million vehicles today to around 70 million in 2035 (see graph below) – nearly a 60-fold increase. Meanwhile, the total global car fleet will only double – but that means adding about another 900 million cars to the 900 million on the world's roads today. So, demand for oil from cars will continue to increase – by about 5 Mb/d by 2035. That rise is far less than proportional to the doubling of the fleet. Some of that mitigation is down to increasing electric vehicle numbers, but much more will come from gains in the fuel efficiency of gasoline engines.
Overall, global oil demand is projected to grow by around 20 Mb/d over the next 20 years, driven by increasing prosperity in fast-growing Asian economies. In short, electric vehicles will have an impact on oil demand over the next 20 years, but not a game-changing one.
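The fleet and demand figures quoted above can be sanity-checked with a quick back-of-envelope calculation. All inputs are the article's own numbers; the per-car oil intensity is simply derived from them.

```python
# Quoted Outlook figures
fleet_today, fleet_2035 = 0.9e9, 1.8e9    # cars on the world's roads
car_demand_today = 19.0                   # Mb/d consumed by the car fleet
car_demand_2035 = car_demand_today + 5.0  # "+ about 5 Mb/d by 2035"

# For demand to rise far more slowly than the fleet grows, average
# oil use per car must fall substantially.
intensity_today = car_demand_today / fleet_today
intensity_2035 = car_demand_2035 / fleet_2035
drop = 1 - intensity_2035 / intensity_today
print(f"Implied fall in per-car oil intensity: {drop:.0%}")  # about 37%
```

That implied drop of roughly a third is the combined effect of efficiency gains and electrification that the article describes.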
What if electric vehicles grow faster than you think?
Anything is possible. In its ‘450 scenario’, the International Energy Agency (IEA) sets out a pathway for the entire energy system consistent with limiting carbon dioxide emissions, such that there is a better than evens chance of global mean temperatures increasing by less than two degrees Celsius by 2100. In this forecast, the IEA assumes 450 million electric vehicles on the roads by 2035, some 380 million vehicles more than we envisage in our Outlook. This is at the very top end of the range of external forecasts I have seen, consistent with significant changes in technology or policy.
In this scenario, growth in oil demand would be almost 5Mb/d lower relative to the case in which electric vehicles didn’t grow at all. This will dampen oil demand to some extent, but it won’t stop it from increasing overall. We have to keep in mind that 80% of oil demand comes from other parts of the transport sector and from industry which are likely to continue to expand.
Are there other factors that will curb the growth in oil demand?
Efficiency is a key factor, and one that could dwarf the impact of electric cars. Over the past 20 years, passenger vehicles have become increasingly efficient, moving from a typical car range of 25-30 miles per gallon (mpg) to 30-35 mpg today. This process will continue to evolve over the next 20 years, with the potential for vehicles to reach up to 50 mpg. This would lead to a huge saving in oil consumption of up to 15Mb/d – compared to a prospective 1-5 Mb/d drop in demand due to electric vehicles.
This suggests we should perhaps place more attention on the pace of gains in vehicle efficiency and less on the growth of electric vehicles.
Will more electric vehicles mean lower carbon dioxide emissions?
There is no straightforward answer to that question.
Electric vehicles are likely to dampen the growth in oil demand and hence carbon dioxide emissions.
But, during the phase when electric vehicles account for a minority of passenger cars, which could last for decades, the emissions benefits could be outweighed by the potential gains associated with oil-powered cars becoming ever more efficient.
And, of course, there is the question of the fuels used to produce the electricity used to charge the batteries of the electric vehicle. In some parts of the world where the power sector is heavily reliant on coal, reductions in overall carbon emissions may be minimal – or worse: it is tantamount to switching from an oil-fuelled car to a coal-powered one.
Does this mean electric vehicles are a bad idea?
Of course not. They are a very good idea for a variety of reasons, not least the need to improve urban air quality and reduce carbon emissions. All of this will be coupled with a rapid evolution of the transportation sector as autonomous driving, shared-car ownership and ride sharing change our relationship with cars.
Electric vehicles will form a foundation for a lower carbon future, but it would be wise for us to pay as much attention to improving car efficiency and using more gas, and less coal, in power generation. These two factors alone could generate carbon savings over the next 20 years many times greater than that associated with the expansion of electric vehicles. Of course, in an ideal world, all of these things will advance at once. But in the real world with limited resources, choices have to be made.
- For more on electric vehicles, read the full speech by Spencer Dale. | <urn:uuid:7970fdf0-d0c6-47b5-b6e8-c641f57e667d> | {
"date": "2018-03-21T07:05:11",
"dump": "CC-MAIN-2018-13",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647584.56/warc/CC-MAIN-20180321063114-20180321083114-00056.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9575821161270142,
"score": 2.6875,
"token_count": 1258,
"url": "https://www.bp.com/en/global/corporate/bp-magazine/observations/spencer-dale-on-electric-vehicles-and-future-energy-demand.html"
} |
As a country aspiring to become a global energy superpower, Canada must take a lead in mitigating the negative impact of greenhouse gas emissions on the economy and the environment, if only to ensure access to markets where public concern over climate change might raise challenges. One way to do so is by placing a cost on carbon.
Many economists agree that a formal carbon pricing policy is the best way to reduce carbon dioxide pollution. Compelling producers of carbon emissions to account for the cost implications of pollution produced in the manufacture and use of their products gives them an incentive to invest in cleaner ways of doing business. At the same time, increasing product prices that include the cost of carbon will encourage consumers to cut down on energy use and reduce spending on energy-intensive goods. In that way, consumer demand for, and corporate supply of, low- carbon goods and services meet up to create new economic activity. That’s the way markets are supposed to work.
While some people argue that carbon pricing will stifle the economy and raise the cost of living, the more prevalent view is that doing nothing to address climate change caused by carbon dioxide emissions is by far the most expensive and damaging option.
The debate over carbon pricing in Canada is not new. Some jurisdictions have forged ahead and implemented their own formal policies. For example, Alberta and British Columbia fix carbon prices at a set rate while Quebec’s incoming cap-and-trade system is part of the Western Climate Initiative and is linked to California’s system. Other jurisdictions are still considering carbon pricing legislation, but it’s unclear what it will look like. At the federal level, the regulations being created to limit emissions in various sectors effectively translate into a carbon price that might not be apparent, but is just as real and far less efficient because the cost is not transparent to the market.
This hodgepodge approach is not good for companies operating in Canada because it provides only limited policy certainty and leaves them guessing about future policies in jurisdictions that are still unsure of how to deal with carbon pricing. Policy certainty, particularly on issues with significant financial implications, is crucial to the success of industry sectors and the economy as a whole. A new survey by Sustainable Prosperity conducted among 10 major energy companies operating in Canada shows that most already use a "shadow" carbon price to prepare for the expansion of carbon pricing. Shadow carbon pricing, generally expressed in terms of dollars per tonne of CO2 or carbon dioxide equivalent (CO2e), is the voluntary use of a notional market price for carbon in internal corporate financial analysis and decision-making processes.
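As a concrete illustration of what "a notional market price for carbon in internal financial analysis" means in practice, here is a minimal project-screening sketch. All numbers and the flat cash-flow shape are hypothetical; real corporate models are far richer.

```python
def npv(cash_flows, rate):
    """Net present value of year-end cash flows at a constant discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def shadow_priced_npv(annual_cash, annual_tonnes_co2e, price_per_tonne,
                      years, rate):
    """Debit each year's cash flow by emissions x shadow price, then discount."""
    net = annual_cash - annual_tonnes_co2e * price_per_tonne
    return npv([net] * years, rate)

# Hypothetical project: $50m/yr cash, 0.4 Mt CO2e/yr, 10 years, 8% rate,
# screened at the survey's reported $15 and $68 per tonne.
base = shadow_priced_npv(50e6, 0.4e6, 0, 10, 0.08)
at_15 = shadow_priced_npv(50e6, 0.4e6, 15, 10, 0.08)
at_68 = shadow_priced_npv(50e6, 0.4e6, 68, 10, 0.08)
```

Run across the $15–$68/tonne range, a screen like this shows how much economic headroom a project loses as the assumed carbon price rises, which is exactly the planning use the surveyed companies describe.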
Some companies see shadow carbon pricing as a way to drive performance through operational efficiency and profit maximization and create opportunities for technological innovation and market access.
The 10 companies surveyed — BP, Shell, Suncor, Statoil, Devon, Cenovus, Penn West, Enbridge, Ontario Power Generation and SaskPower — all have some experience in using shadow carbon pricing; seven formally and three informally. Using a shadow carbon price appears to have become an industry standard for the oil and gas sector.
Among the seven companies that formally use a shadow carbon price, the price, in Canadian dollars, ranged from $15/tonne to $68/tonne. The top of the range represents a price projection for future years: $48 – $68/tonne for 2020 and up to 2040.
What this suggests is that major companies in the Canadian energy sector are prepared for carbon pricing.
With the cost of carbon already largely “internalized” in their forward-looking planning and operations, it may be fair to assume that the creation of a carbon price at the national level would not catch the energy sector unprepared.
A national policy is important because, while laudable, corporate leadership in the use of a shadow carbon price is no substitute for the policy certainty of a regulated market price for carbon that levels the playing field between companies, engages consumers and establishes pricing levels commensurate with the attainment of Canada’s national obligations.
Furthermore, while many companies are integrating carbon pricing into their business processes and testing the economics of their projects for a range of prices, there is little indication that shadow carbon pricing is being used to manage the risk of more significant carbon abatement costs in the future.
This shows clearly that company action cannot be expected to substitute for government policy on this crucial issue.
As long as shadow carbon prices are voluntarily applied and not regulated there is unlikely to be an impact on consumer prices. Without that transparency, one of the chief advantages of a pricing instrument — its ability to influence the behaviour and choices of companies and consumers — is muted.
Alex Wood is senior director of policy and markets at Sustainable Prosperity and Tyler Elm is chairman of the Canadian Chamber of Commerce’s energy and environment committee. | <urn:uuid:9321cb1e-fd7b-41d2-a118-9fda1d13521c> | {
"date": "2018-02-23T12:49:33",
"dump": "CC-MAIN-2018-09",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814700.55/warc/CC-MAIN-20180223115053-20180223135053-00456.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9399855732917786,
"score": 3.203125,
"token_count": 980,
"url": "http://www.vancouversun.com/opinion/op-ed/Projecting+carbon+price+good+business/8054029/story.html"
} |
While doing your holiday shopping, have you ever wondered what toys your speech therapist would recommend? Hmmm…maybe not. But with just a little bit of forethought, Santa and his elves can be sure to deliver toys that are both fun and therapeutic for children with speech delay.
First, let’s talk about electronics, particularly tablets. I know, I know… They seem like a great option for children with speech delays. As with most things, though, the usefulness of the toy depends on the child. Electronic toys can be useful if they can capture the child’s attention and if the child is able to use it with another person. If a child becomes hyperfocused on one part of the toy/game, the child starts pushing/pulling away, or the game/toy no longer requires turn-taking, it’s not a great choice.
With many electronic games and toys, it can appear that your child is learning language skills because they can answer questions or follow directions in that app on the iPad. However, is your child really answering the question?
It’s important to be sure your child is actually learning.
When your child is playing with an electronic toy, check a few things:
- Is he putting the answer to a question in before the question has been stated? Some children memorize the order of the answers (for example, the child may think to himself, “for the first question I’m supposed to touch the second bubble on the left”) rather than learning the answers to the questions.
- Is he randomly guessing? Many children intuitively understand that if they just keep pushing buttons, eventually they’ll hit the right one. Rather than considering the question first, they may make many choices in rapid succession to get to the reinforcing reward for answering correctly.
- Does the “interactive” game provide the correct answer whether or not the child has tried to participate?
Remember: electronics cannot replace human interaction.
Electronics can be helpful for children with speech delay as long as they are used appropriately. Following are a few tips for using electronics with your child:
- Use them to supplement teaching. If you have just worked on colors, try an app that focuses on colors.
- Take turns and add language (Oh, look! That one says jump! What animal jumps?). It is often best if the adult holds the device (sometimes you may have to remove it from sight between turns).
- Use it as a reward. (You did all of your work; now you can play a game!)
- Limit the amount of time they can use the electronic device, and stick to it.
Non-electronic toys appropriate for your child will depend on his age and language levels. I love games that come in pieces, are interactive, and can work on a variety of skills. Here are a few of my favorites:
- Puzzles, piggy banks, and Little People sets are great to work on requesting, location concepts (on, in, out, off), and identifying and labeling.
- Books, especially interactive books, are great at working on identifying, labeling, two-word phrases, and targeting specific grammatical structures.
- With a farm set, you can work on animal noises, labeling, identifying, two-word (or more) phrases, location concepts, following one and two step directions, and answering simple questions.
- Blocks can be used for imitating sounds, imitating repetitive words, colors, counting, and actions.
- Mr. Potato Head can be used for identifying and labeling body parts, two word (or more) combinations, requests, answering questions, comparing, and personal pronouns.
- With Play-Doh you can work on making silly sounds, imitating actions, requesting, following directions, and vocabulary.
Many of the toys included above can do double-duty for children who have other delays, as well. For example, puzzles are useful for developing fine motor skills. Of course, not every possible toy is mentioned here, so use your best judgment, keeping in mind your child’s goals and the following questions:
Will I be able to interact with my child using this toy?
What skills can we work on while playing with this toy?
Will my child enjoy it?
We always want our kiddos to have fun with the materials that we choose. If they are not having fun with a toy, then they are not going to want to play with it. It is important to choose toys/games that you will be able to use to work on language skills, but it is more important that your child enjoys the activity. When they are enjoying themselves, they are likely to use more language. So go out there and find some cool, new, “speech therapist approved” toys that your child will love!
Merry Christmas and Happy Holidays!
Jessie Nelson Willis, M.Ed., CCC-SLP | <urn:uuid:ef7dd3b9-5143-4d33-aea8-f0f3afc4140a> | {
"date": "2018-12-14T14:41:12",
"dump": "CC-MAIN-2018-51",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825916.52/warc/CC-MAIN-20181214140721-20181214162221-00576.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9572278261184692,
"score": 2.640625,
"token_count": 1027,
"url": "https://www.kidscreektherapy.com/gift-ideas-children-speech-delay-toys/"
} |
Jetpacks are a tricky thing. They've been on most sci-fi fans' wishlist since sci-fi became a genre. But unfortunately, as New Scientist points out, there are "the usual caveats that it is hard to strap enough fuel to a person to keep them airborne for more than about 30 seconds."
One solution: Swap out the idea of jet engines and achieve flight through jets of water.
The Jetlev Flyer, which will go on sale soon, combines a jetpack harness with two high-powered water hoses. As it forces water out of the nozzles, the pilot launches into the air. The thrust is enough to propel the pack at 30 m.p.h. and to heights of up to 50 feet. And, as long as you fly over a lake, jetpack mishaps will hopefully end in a splash rather than a splat.
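For a sense of why jets of water can lift a person, a rough momentum-flux estimate (thrust = mass flow × jet velocity) shows the water rates involved. Every number below is an assumption for illustration, not a Jetlev specification.

```python
g = 9.81              # m/s^2
total_mass = 120.0    # kg: pilot plus pack (assumed)
jet_velocity = 30.0   # m/s at the nozzles (assumed)

required_thrust = total_mass * g            # N needed just to hover
mass_flow = required_thrust / jet_velocity  # kg of water per second
print(f"Hover needs roughly {mass_flow:.0f} kg/s of water")
```

Tens of kilograms of water every second is far more than a pilot could carry on their back, which helps explain the hose-fed design.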
Interested in buying one? New Scientist pegs the price at around $230,000. | <urn:uuid:65383c74-0c85-4d79-bbcc-b748dec7a2b0> | {
"date": "2015-06-30T04:44:42",
"dump": "CC-MAIN-2015-27",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375091587.3/warc/CC-MAIN-20150627031811-00126-ip-10-179-60-89.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9652158617973328,
"score": 2.859375,
"token_count": 202,
"url": "http://www.csmonitor.com/Technology/Horizons/2009/0217/video-water-powered-jetpack-for-sale"
} |
For each problem on this worksheet, kids subtract single-digit numbers to see how many apples are left after someone has had a tasty snack.
Santa needs help, and your child's subtraction skills can save the day!
Help Max go fly a kite (up to the highest height) by completing these subtraction problems.
Flex your child's math skills with this worksheet set, which focuses on subtracting 1, 2, and 3 from other numbers.
Solve the equations and then color by number to reveal a colorful underwater friend! You'll get some great subtraction and coloring practice while you're at it.
Lizzy the Bee needs help tracking down the right tulip! Help her by practicing simple subtraction equations.
Ask your kindergartener to complete these one-digit subtraction facts, and she'll go above and beyond kindergarten standards in no time!
Go over addition and subtraction facts up to 10 with your kindergartener with this nifty practice test.
Ready to introduce your child to subtraction, but you're not sure if he's ready? This worksheet is a simple and easy introduction. | <urn:uuid:3d3b8910-1679-4bb2-bb6b-48f896c6ee54> | {
"date": "2017-07-25T14:49:25",
"dump": "CC-MAIN-2017-30",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425254.88/warc/CC-MAIN-20170725142515-20170725162515-00136.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9252114295959473,
"score": 3.171875,
"token_count": 234,
"url": "https://www.education.com/collection/grits94/subtraction/"
} |
As Lyme Disease Rises, Research Expands
By ELSA BRENNER
Published: September 11, 1994
THE summer of 1994 appears to be the worst in many years for Lyme disease, according to public health officials. In Westchester by the end of July, the latest period for which figures were available, there were 697 confirmed new cases of the disease -- a 45 percent increase over the same period last year. And with a dense snow cover incubating insects over a long and severe winter, this summer has also seen a fourfold increase in the number of ticks, which carry the disease, thriving in woodlands, meadows and backyards countywide, researchers report.
Mary M. Landrigan, a spokeswoman for the County Department of Health, who described the disease as "continuing unabated" in Westchester, said there were 480 Lyme disease cases in the county by the end of July last year, and 551 confirmed cases for that period in 1992. "It's an all-year-round problem, with the peak being the summer months," she said.
The State Department of Health reported 2,220 cases in New York this year through Aug. 24 -- almost as many as the the total number of cases for all of last year, said Dennis J. White, director of the state Tick-Borne Disease Institute in Albany.
Nationally, there are 8,000 to 9,000 new confirmed cases each year, with the Northeast coastal states (Maryland through Massachusetts), the upper Middle West (northern Wisconsin and parts of Minnesota and Michigan) and the Pacific Northwest continuing to be the "hot spots for the disease," said Roy Campbell, a medical epidemiologist for the Centers for Disease Control and Prevention in Fort Collins, Colo.

'It May Be the Worst Year'
Figures for the incidence of Lyme disease nationwide for this year are not yet available because the states have not completed their reporting, he said.
Statistics notwithstanding, Dr. White, who is also chief entomologist for the state, questioned whether this summer was indeed one of the most severe for Lyme disease. "It may be the worst year," he said, "but in the past there has been less surveillance, so we may just be getting better figures now."
Meanwhile, there is hope that new breakthroughs in prevention and treatment of the disease are on the way. For one, a vaccine against the disease is in clinical trials and could be available to the public by 1996, said Dr. Gary P. Wormser, director of the Lyme Disease Clinic of the Westchester County Medical Center in Valhalla.

Tracking by Satellite
The clinical trials are being conducted by Connaught Laboratories in Swiftwater, Pa., and by SmithKline Beecham in Philadelphia. In the Connaught study, the first to get Food and Drug Administration approval to study a Lyme disease vaccine for humans, 10,000 volunteers are involved -- with half receiving the vaccine and the others receiving a placebo. Dr. Wormser said results of the study would be available during the winter and the vaccine could be marketed a year or two later.
Also, the National Aeronautics and Space Administration is spinning a satellite 450 miles above Westchester to map areas of vegetation and determine which sites are most likely to have serious tick infestations.
The agency, in a global monitoring and disease-prediction program, is using remote sensing and geographic information systems to track tick populations for Lyme disease and study how other diseases in other parts of the world -- malaria and cholera in the Bay of Bengal and yellow fever in Kenya, for example -- are transmitted.
"Research has picked up a tremendous momentum," Dr. Wormser said. "Things are really moving ahead compared to five years ago."
But for long-term victims of the disease, there has until recently been little solace in statistics and most research studies. Concerned with the persistence of debilitating symptoms -- fever and fatigue among them -- years after contracting the disease, some victims of chronic Lyme disease say doctors and researchers have ignored their needs.
Betty Gross, an Irvington resident who contracted Lyme disease in the early 1980's and is now a member of the Lower Hudson Valley Regional Lyme Disease Advisory Council of the Westchester County Department of Health, said that the illness had become a controversial political issue.

20 Local Support Groups
"The victims of this disease want a cure," she said. "We need the problem solved. But researchers are answering to their grantees rather than the victims, and they're putting their own spin on the disease."
At the Katonah offices of the Lyme Disease Coalition of New York, an advocacy group representing more than 20 local Lyme disease support groups, Ginger Lucie, the coalition's president, said that researchers were trying to take the illness and "wrap it up in a tidy, little box." She said most studies failed to address the pressing needs of victims who are debilitated by the symptoms of Lyme disease.
Those who suffer long-term effects continue to be frustrated by the limited therapies available to them, and doctors like Kenneth B. Liegner in Armonk say that antibiotics, the standard prescribed treatment, are "not doing the trick."
"It's a real dilemma," he said. "We still do not have the means to cure this." | <urn:uuid:7438ce79-621b-425b-9096-5ec7bcac9d89> | {
"date": "2015-11-27T20:06:57",
"dump": "CC-MAIN-2015-48",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398450559.94/warc/CC-MAIN-20151124205410-00133-ip-10-71-132-137.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.961587131023407,
"score": 2.75,
"token_count": 1082,
"url": "http://www.nytimes.com/1994/09/11/nyregion/as-lyme-disease-rises-research-expands.html?src=pm"
} |
Books / Digital Text
1. The Dependence of the Subjective Valuation of Money on the Existence of Objective Exchange-Value
According to modern value theory, price is the resultant of the interaction in the market of subjective valuations of commodities and price goods. From beginning to end, it is the product of subjective valuations. Goods are valued by the individuals exchanging them, according to their subjective use-values, and their exchange ratios are determined within that range where both supply and demand are in exact quantitative equilibrium. The law of price stated by Menger and Böhm-Bawerk provides a complete and numerically precise explanation of these exchange ratios; it accounts exhaustively for all the phenomena of direct exchange. Under bilateral competition, market price is determined within a range whose upper limit is set by the valuations of the lowest bidder among the actual buyers and the highest offerer among the excluded would-be sellers, and whose lower limit is set by the valuations of the lowest offerer among the actual sellers and the highest bidder among the excluded would-be buyers.
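The Menger/Böhm-Bawerk price bounds can be made concrete with a toy set of valuations (the numbers are hypothetical): rank buyers by willingness to pay and sellers by reservation price, find how many trades can occur, and the upper and lower limits described above fall out of the marginal pairs.

```python
buyers  = [30, 28, 26, 22, 20]   # maximum prices, ranked highest first
sellers = [18, 21, 24, 27, 29]   # minimum prices, ranked lowest first

# Largest n for which the n-th keenest buyer still outbids the
# n-th cheapest seller; these n pairs actually trade.
n = max(k for k in range(1, min(len(buyers), len(sellers)) + 1)
        if buyers[k - 1] >= sellers[k - 1])

# Lower limit: last included seller vs. first excluded buyer
# (both still exist in this example, so no bounds-checking needed).
lower = max(sellers[n - 1], buyers[n])
# Upper limit: last included buyer vs. first excluded seller.
upper = min(buyers[n - 1], sellers[n])
print(f"{n} units trade; the price settles between {lower} and {upper}")
```

Here three units trade and any price between 24 and 26 clears the market: above 26 the third buyer drops out, below 24 the third seller does.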
This law of price is just as valid for indirect as for direct exchange. The price of money, like other prices, is determined in the last resort by the subjective valuations of buyers and sellers. But, as has been said already, the subjective use-value of money, which coincides with its subjective exchange value, is nothing but the anticipated use-value of the things that are to be bought with it. The subjective value of money must be measured by the marginal utility of the goods for which the money can be exchanged.1
It follows that a valuation of money is possible only on the assumption that the money has a certain objective exchange value. Such a point d'appui is necessary before the gap between satisfaction and "useless" money can be bridged. Since there is no direct connection between money as such and any human want, individuals can obtain an idea of its utility and consequently of its value only by assuming a definite purchasing power. But it is easy to see that this supposition cannot be anything but an expression of the exchange ratio ruling at the time in the market between the money and commodities.2
Once an exchange ratio between money and commodities has been established in the market, it continues to exercise an influence beyond the period during which it is maintained; it provides the basis for the further valuation of money. Thus the past objective exchange value of money has a certain significance for its present and future valuation. The money prices of today are linked with those of yesterday and before, and with those of tomorrow and after.
But this alone will not suffice to explain the problem of the element of continuity in the value of money; it only postpones the explanation. To trace back the value that money has today to that which it had yesterday, the value that it had yesterday to that which it had the day before, and so on, is to raise the question of what determined the value of money in the first place. Consideration of the origin of the use of money and of the particular components of its value that depend on its monetary function suggests an obvious answer to this question. The first value of money was clearly the value which the goods used as money possessed (thanks to their suitability for satisfying human wants in other ways) at the moment when they were first used as common media of exchange. When individuals began to acquire objects, not for consumption, but to be used as media of exchange, they valued them according to the objective exchange value with which the market already credited them by reason of their "industrial" usefulness, and only as an additional consideration on account of the possibility of using them as media of exchange. The earliest value of money links up with the commodity value of the monetary material. But the value of money since then has been influenced not merely by the factors dependent on its "industrial" uses, which determine the value of the material of which the commodity money is made, but also by those which result from its use as money. Not only its supply and demand for industrial purposes, but also its supply and demand for use as a medium of exchange, have influenced the value of gold from that point of time onward when it was first used as money.3
- 1. See pp. 99. Also Böhm-Bawerk, Kapital und Kapitalzins, Part II, p. 274; Wieser, Der natürliche Wert, p. 46. (Eng. trans. The Theory of Natural Value.)
- 2. See Wieser, "Der Geldwert und seine Veränderungen," Schriften des Vereins für Sozialpolitik. 132:513 ff.
- 3. See Knies, Geld und Kredit (Berlin, 1885), vol. 1, p. 324.
Farming dominated the lives of most Medieval people. Many peasants in Medieval England worked the land and, as a result, farming was critically important to a peasant family in Medieval England. Most people lived in villages where there was plenty of land for farming. Medieval towns were small but still needed the food produced by surrounding villages.
Farming was a way of life for many. Medieval farming, by our standards, was very crude. Medieval farmers/peasants had no access to tractors, combine harvesters etc. Farming tools were very crude. Peasants had specific work they had to do in each month and following this "farming year" was very important.
Harvesting a crop using sickles and scythes
Farms were much smaller then and the peasants who worked the land did not own the land they worked on. This belonged to the lord of the manor. In this sense, peasants were simply tenants who worked a strip of land or maybe several strips. This is why farming in Medieval times was known as strip farming.
A peasant family was unlikely to be able to own that most valuable of farming animals – an ox. An ox or horse was known as a 'beast of burden' as it could do a great deal of work that people would have found impossible to do. A team of oxen at ploughing time was vital and a village might club together to buy one or two and then use them on a rota basis. In fact, villagers frequently helped one another to ensure the vital farming work got done. This was especially true at ploughing time, seeding time and harvesting.
A ploughing team at work
The most common tools used by farmers were metal-tipped ploughs for turning over the soil and harrows to cover up the soil when seeds had been planted. The use of manure was basic, and artificial fertilisers as we know them did not exist.
Growing crops was a very hit and miss affair and a successful crop was due to a lot of hard work but also the result of some luck.
In the summer (the growing season) farmers needed sun to get their crops to grow. Weather was no more predictable in Medieval England than it is now, and just one heavy downpour could flatten a crop and all but destroy it. With no substantial harvest, a peasant still had to find money or goods to pay his taxes. But too much sun and not enough moisture in the soil could result in the crop not reaching its full potential. A spring frost could destroy seeds if they had been recently planted.
The winter did not mean a farmer had an easy time. There were plenty of tasks to do even if he could not grow crops at that particular time.
Some estates had a reeve employed to ensure that peasants worked well and did not steal from a lord.
|Let the reeve be all the time with the serfs (peasants) in the lord's fields.....because serfs neglect their work and it is necessary to guard against their fraud......the reeve must oversee all work...........if they (serfs) do not work well, let them be punished. Written by Walter of Henley c. 1275|
On the morning of August 7, 1998, near Uvalde, Texas, a Hughes 269B helicopter was substantially damaged during a hard landing.
According to the NTSB, after flying about 30 min. herding cattle, the pilot located three cattle that were in a gated adjoining pasture. He landed the helicopter in a large mesquite flat near the gate so his passenger could get out and open it. After positioning the helicopter, he attempted a confined area takeoff. He stated that he had to lift straight up to clear trees. Upon reaching a hover just above the treetops, the pilot felt power bleeding off so he lowered the nose trying to get airspeed. Unable to reach effective translational lift he turned toward a narrow clearing using right pedal and reduced collective to make a run-on landing. Upon ground contact, the right skid dug into the rain soaked ground, and the helicopter rolled onto its side. The commercial pilot and passenger were not injured.
After the accident, the pilot reported to an FAA inspector that it had been raining for a day and a half prior to the accident and that the weather was hot and muggy. He estimated the temperature to be about 95 deg. with high humidity and no wind. He also stated that he did not believe he had any type of mechanical failure and that the engine seemed to be performing normally. He felt that the density altitude, gross weight and out-of-ground effect operation all contributed to the accident.
It is important to remember that helicopter performance is a function of the density of the surrounding air. Density altitude is the reference standard used to measure performance and is determined by correcting pressure altitude for temperature. What is normally not factored in is the amount of water vapor present. Relative humidity is the amount of water vapor present, expressed as a percentage of the amount of water vapor the air can hold at a given temperature. Water vapor is made up of hydrogen and oxygen, and its molecules are lighter than the nitrogen and oxygen molecules that make up dry air. As the humidity rises, the water vapor displaces air molecules and lowers the density. Cool air cannot hold much water vapor, but hot air can hold a large amount, so as temperature and humidity rise, aircraft performance can fall off sharply.
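The correction of pressure altitude for temperature can be sketched with the common aviation rule of thumb: density altitude rises roughly 120 ft for every degree Celsius above the ISA standard temperature (15 °C at sea level, falling about 2 °C per 1,000 ft). This is a dry-air approximation, which is exactly why it understates the problem on humid days; the field elevation below is an illustrative assumption, not a figure from the NTSB report.

```python
# Back-of-envelope density-altitude estimate using the rule of thumb:
# DA ~ pressure altitude + 120 ft per degree C above ISA temperature.
# Dry-air approximation only; high humidity makes the true figure worse.

def density_altitude_ft(pressure_alt_ft, oat_c):
    # ISA temperature falls ~2 C per 1,000 ft from 15 C at sea level
    isa_temp_c = 15.0 - 2.0 * pressure_alt_ft / 1000.0
    return pressure_alt_ft + 120.0 * (oat_c - isa_temp_c)

# Illustrative: a field at ~900 ft elevation on a 95 deg. F (35 C) day
print(round(density_altitude_ft(900, 35)))   # -> 3516
```

Even before accounting for humidity, the helicopter at 900 ft is performing as if it were at roughly 3,500 ft.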
Charts in the flight manual can be used to predict aircraft performance for a given density altitude. However, they are typically for dry air conditions. When temperature and humidity are high, it becomes extremely important to reduce expected performance levels. It is not just airfoils that are affected by humidity, but engine performance as well. A combustion engine can lose as much as 12 percent of its power on a hot and humid day, versus around 3 percent for a turbine.
Moreover, charts typically show in-ground-effect hover performance, out-of-ground effect hover performance and certain take-off criteria. When a pilot must maneuver in less dense air, the charts can be helpful but require more interpretation. Nevertheless, understanding the effects of high density altitude in all flight regimes is critical to safe operations. Consider the following accident:
On the afternoon of August 10, 2001, a Eurocopter AS350-B2 helicopter, on a sightseeing flight, collided with terrain at 4,041 ft. during an uncontrolled descent about 4 mi. east of Meadview, Ariz. Impact forces and a post-crash fire destroyed the helicopter. The pilot and five passengers were killed, and the remaining passenger sustained serious injuries.
According to the NTSB, company pilots who landed at the accident site within minutes of the accident were asked about the atmospheric conditions. One pilot noted that the winds were calm and the temperature was approximately 106 deg. F. (41 deg. C.) and that there was no turbulence. Another pilot said that the weather was clear, sunny, hot, and that there was no turbulence at all, especially when crossing the ridge.
At the time of the accident the helicopter's gross weight was calculated to be 4,515 lb., 446 lb. below its maximum allowable weight. The density altitude at the point of impact was over 8,000 ft.
The NTSB determined that the probable cause of this accident was the pilot's decision to maneuver the helicopter in a flight regime and in a high density altitude environment which significantly decreased the helicopter's performance capability, resulting in a high rate of descent from which recovery was not possible. Factors contributing to the accident were high density altitude and the pilot's decision to maneuver the helicopter in proximity to precipitous terrain, which effectively limited remedial options available.
Again, according to the NTSB, interviews with company pilots and supervisors revealed that they considered the accident pilot one of their very best. He was consistently praised for his knowledge of the helicopter and its systems. They stated that even the mechanics and other maintenance personnel within the company praised his knowledge, skills, and abilities.
When a highly skilled pilot underestimates the effects of high density altitude, it underscores the importance of completely understanding the performance limitations imposed by the environment. Any pilot flying in these conditions would be wise to err on the side of caution to ensure an adequate margin of performance.
An international team of scientists, including Anthony Marks, professor emeritus at Southern Methodist University, has rejected the existing view that modern humans left Africa around 70,000 years ago. Their data reveal that humans left Africa at least 50,000 years earlier than previously suggested and were, in fact, present in eastern Arabia as early as 125,000 years ago.
These “anatomically modern” humans — you and me — had evolved in Africa about 200,000 years ago and subsequently populated the rest of the world.
The new study is “Did Modern Humans Travel Out of Africa Via Arabia?” It was published in the journal Science and reports findings from an eight-year archaeological excavation at Jebel Faya in the United Arab Emirates. The project, led by Hans-Peter Uerpmann from Eberhard-Karls-University, Tubingen, Germany, reached Palaeolithic levels in 2006.
Palaeolithic stone tools were technologically similar to tools from east Africa
SMU’s Marks and Vitaly Usik, National Academy of Sciences, Kiev, Ukraine, analyzed the Palaeolithic stone tools found at the site and discovered that they were technologically similar to tools produced by early modern humans in east Africa, but very different from those produced to the north, in the Levant and the mountains of Iran. This suggested that early modern humans migrated into Arabia directly from Africa and not via the Nile Valley and the Near East as is usually suggested.
The direct route from east Africa to Jebel Faya crosses the southern Red Sea and the flat, waterless Nejd Plateau of the southern Arabian interior, both of which represent major obstacles to human migration. However, Adrian Parker, Oxford Brookes University, studied sea-level and climate change records for the region and concluded that the direct migration route may have been passable for brief periods in the past.
During Ice Ages, large amounts of water are stored on land as ice, causing global sea-levels to fall. At these times, the Bab al-Mandab seaway of the southern Red Sea narrows considerably, making it easier to cross.
Lower sea level made more direct route possible
Natural climate changes at the end of Ice Ages cause rainfall over the Nejd Plateau to increase, making the area habitable.
“By 130,000 years ago, sea-level was still about 100 meters lower than at present while the Nejd Plateau was already passable,” Parker said. “There was a brief period where modern humans may have been able to use the direct route from east Africa to Jebel Faya.”
Simon Armitage calculated the age of the stone tools at Jebel Faya using a technique called luminescence dating. His ages revealed that modern humans were at Jebel Faya by around 125,000 years ago, immediately after the period in which the Bab al-Mandab seaway and Nejd Plateau were passable.
“Archaeology without ages is like a jigsaw with the interlocking edges removed — you have lots of individual pieces of information but you can’t fit them together to produce the big picture,” Armitage said.
At Jebel Faya, the ages reveal a fascinating picture in which modern humans migrated out of Africa much earlier than previously thought, helped by global fluctuations in sea-level and climate change in the Arabian peninsula. These findings will stimulate a re-evaluation of the means by which modern humans became a global species.
The work at Jebel Faya was directed by Hans-Peter Uerpmann and co-directed by Margarethe Uerpmann of the Centre for Scientific Archaeology at Eberhard-Karls-University Tubingen, and Sabah Jasim, Directorate of Antiquities, Department of Culture and Information, Government of Sharjah, United Arab Emirates. Palaeolithic artefact analysis was carried out by Anthony Marks and Vitaly Usik. Paleoenvironmental analysis was carried out by Adrian Parker and luminescence dates were calculated by Simon Armitage.
Funding for work at Jebel Faya was provided by the Government of Sharjah, the ROCEEH project (Heidelberg Academy of Sciences), Humboldt Foundation, Oxford Brookes University and the German Science Foundation.
After the devastating 1906 San Francisco, California earthquake, a fault trace was discovered that could be followed along the ground in a more or less straight line for 270 miles. It was found that the earth on one side of the fault had slipped compared to the earth on the other side of the fault by up to 21 feet (6.4 m). This fault trace drew the curiosity of a number of scientists, especially since nobody had yet been able to explain what was happening within the earth to cause earthquakes. Up until this earthquake, it had generally been assumed that the forces leading to the occurrence of earthquakes must be close to the locations of the earthquakes themselves.
Harry Fielding Reid, after studying the fault trace of the 1906 earthquake, postulated that the forces causing earthquakes were not close to the earthquake source but very distant. Reid's idea was that these distant forces cause a gradual build up of stress in the earth over tens or hundreds or thousands of years, slowly distorting the earth underneath our feet. Eventually, a pre-existing weakness in the earth--called a fault or a fault zone--cannot resist the strain any longer and fails catastrophically. This is something like pulling a rubber band gradually until the band snaps. This theory is known as the "elastic rebound theory."
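Reid's load-then-snap cycle is often illustrated with a toy "spring-slider" (stick-slip) model: steady, distant loading stretches an elastic element attached to a block held in place by friction, and when the accumulated force exceeds the fault's strength the block slips suddenly and the stored strain is released. The sketch below is a hypothetical illustration with made-up numbers, not something from the original article.

```python
# Toy stick-slip model of elastic rebound: slow tectonic loading raises
# stress step by step; when stress reaches the fault's strength, the
# fault fails ("earthquake"), stress drops to a residual level, and the
# cycle repeats.  All parameter values are illustrative.

def stick_slip(steps, load_per_step=1.0, strength=10.0, residual=2.0):
    stress, events = 0.0, []
    for t in range(steps):
        stress += load_per_step        # gradual build up of stress
        if stress >= strength:         # pre-existing weakness fails
            events.append(t)
            stress = residual          # sudden release; cycle restarts
    return events

print(stick_slip(30))   # -> [9, 17, 25]  (a regular earthquake cycle)
```

The point of the model is Reid's: the loading is slow and distant, but the failure is local and sudden.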
The following animation shows a bird's eye view of a country road that cuts through an orchard. Passing right down the middle of the orchard, and across the road, is a fault zone. The animation shows how the earth is gradually distorted about the fault, in response to distant forces, eventually leading to sudden slip or displacement along the fault--what we call an earthquake.
Coal in the Heart of Appalachian Life
Andreas Baur, Assistant Professor of Chemistry; Judy Byers, Abelina Suarez Professor of English, Director of West Virginia Folklife Center; Galen Hansen, Professor of Physics; Erica Harvey, Professor of Chemistry; Debra Hemler, Associate Professor of Science Education, Coordinator of Science Education; Phillip J. Mason, Professor of Biology, Dean, College of Science & Technology; Noel Tenney, Adjunct Professor of Folklife; and Michelle Bright, Student Preceptor (Science Education Major); Fairmont State University, Fairmont, West Virginia
Coal in the Heart of Appalachian Life is Fairmont State University's first Learning Community. It links a team-taught, integrated science course, Science in the Heart of Appalachia, with a humanities course, Introduction to Folklore. The integrated science course is organized around the following questions, which have both regional and global significance: What is energy and what will be the future demand for it? What are nonrenewable resources? What is the future of coal as an energy source? What alternative sources of energy will emerge to supplement fossil fuels? What responsibilities do I have as an energy consumer? What are the ecological, public health, and social/cultural consequences of extracting and burning coal? The humanities course places a number of these questions in a cultural context and examines the impact of the mining industry on Appalachia history and culture.
The exploration of coal enables students to learn some basic principles of geology such as stratigraphy, classification of rocks and minerals, and geologic time. They explore chemistry fundamentals such as bonding, acidity, combustion, and the organization of matter. Physics helps them to better understand energy, energy transformations, heat and thermal emissions, and power plant functioning. Using biology/ecology they investigate photosynthesis, aquatic community structure/responses to acid pollution, carbon cycling/global warming, and respiratory physiology/disease. The science component of this Learning Community meets twice weekly for two hours, permitting group work and discovery-based activities. Students present their research in a poster session at the end of the semester. The folklore course includes three hours of classroom time as well as a laboratory component for experiential learning and field-based research in which students collect oral histories and family folklore, and document artifacts of the coal culture of Appalachia.
Folklore/culture of coal:
- Develop a background in the components of folklore and folklife through the historical and philosophical approaches to topics. Measurements: Readings, reflection journal, discussions, examinations.
- Identify and analyze traits and attitudes that have formulated the stereotyping of Appalachia, both as region and as a society of people within a region with a special emphasis on the coal history and culture of Central (the Heart) of Appalachia. Measurements: Reflective journal, Socratic questioning, essay exams, field trips.
- Identify and analyze the three basic categories of folklore/folklife with a specific emphasis on the culture and folklore of coal including customs, superstitions, festivals, performing arts, oral history, foods, poetry and speech. Measurements: Hands-on direct observations, essay exams, Socratic questioning.
- Produce a personal (family) oral history with an emphasis on the cultural influences of coal, direct and indirect. Measurements: Practice techniques of field research/collecting/analysis, including interviewing, recording transcribing, dissemination, motifing.
- Produce a folklore collection with an emphasis on coal culture, to be archived in the WV Folklore Center at Fairmont State College. Measurements: Practicing the indicated field techniques.
- Develop an appreciation of science as productive way of viewing nature and natural phenomena, based upon models derived from common experience.
- Gain a broader comprehension of the process of science, especially as it relates to the science of coal and the influence of coal on peoples' lives.
- Develop an appreciation of the unique perspectives that each science discipline has on the science of coal.
- Civic Engagement - The value of science to your life, the need to understand fundamental processes and concepts.
Geology of Coal:
- Sedimentary Environments in WV - Describe the process of formation of sedimentary rock.
- Coal Formation - Explain the formation of coal and distinguish between eastern and western types.
- Geologic Time - Explain construction of geological time scale, relate stratigraphy to age of rocks, differentiate between coal deposits in the U.S. in terms of age.
- Topography - Describe the difference between tectonic mountains and erosional features, investigate the influences of topography on mining methods.
- Mining - Distinguish between surface mining, deep mining and mountaintop removal.
- Economic Geology - Evaluate mining techniques and economic advantages of each.
- Civic Engagement - Recognize coal as a nonrenewable resource and assess implications.
Chemistry of Coal:
- Explain the requirements for combustion, recognize structure of common hydrocarbons.
- Develop molecular view of matter and explain forces that hold matter together.
- Develop and utilize criteria to categorize types of coal, graphite and diamond.
- Describe what coal is, how its composition results in desirable and undesirable properties (energy source, heavy metal content, sulfur content), and the consequences of those contents.
- Explain acidity, its causes and effects (acid mine drainage, acid rain).
- Perform simple analysis procedure, analyze and interpret data, draw conclusions.
- Civic Engagement - Explore implications of acid mine drainage and economic, cultural and environmental trade-offs involved in coal extraction and use.
Physics of Coal:
- Describe how kinetic and potential energy are related to the concepts of conservative and non-conservative forces and work.
- Demonstrate an understanding of how solar energy stored in coal as chemical (electrical) potential energy is converted into mechanical and electrical energy.
- Explain thermodynamic concepts of heat, work and entropy and models of engines.
- Utilize the concepts of mechanical, electrical and heat energy, work and force in an analysis of the social and cultural significance of coal.
- Civic Engagement - New coal technologies, future of coal extraction industry.
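The thermodynamics objectives above (heat, work, and models of engines) can be made concrete with a back-of-envelope Carnot-limit calculation, which bounds how much of coal's heat energy any power plant can convert to electricity. The temperatures below are illustrative values for a typical steam plant, not figures from the course materials.

```python
# Carnot limit on heat-engine efficiency: eta = 1 - T_cold / T_hot,
# with temperatures in kelvin.  Illustrative inputs: superheated steam
# at ~540 C, cooling water at ~25 C.

def carnot_efficiency(t_hot_c, t_cold_c):
    t_hot = t_hot_c + 273.15    # convert Celsius to kelvin
    t_cold = t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

eff = carnot_efficiency(540, 25)
print(f"{eff:.0%}")   # -> 63%  (real coal plants reach roughly 33-45%)
```

The gap between the Carnot bound and real-plant efficiency is itself a useful discussion point: entropy production in boilers, turbines, and condensers accounts for the difference.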
Ecology/Biology of Coal:
- Explain the limitations of energy use as it relates to the energy flow through natural ecosystems and the implications of coal as a nonrenewable resource.
- Describe the impacts of acid mine drainage upon aquatic ecosystems and discuss the remediation options available.
- Explain the impact of mountaintop removal and valley-fill upon the health of aquatic ecosystems.
- Demonstrate an understanding of the causes of "black lung" disease, the course of the disease for the individual, and the incidence rates associated with different mining techniques and remediation activities.
- Describe the public health issues associated with "black lung" disease and the role of the coal industry in seeking solutions.
- Civic Engagement - Role of the public in establishment of acceptable risk levels for mountaintop removal and acid mine drainage.
By Marko Lukša
This article was excerpted from the book Kubernetes in Action.
A replication controller is a Kubernetes resource that ensures a pod (or multiple copies of the same pod) is always up and running. If the pod disappears for any reason (like in the event of a node disappearing from the cluster), the replication controller creates a new pod immediately. Figure 1 shows what happens when a node (Node 1) goes down and takes two pods with it. Pod A is a standalone pod, while Pod B is backed by a replication controller. After the node disappears, the replication controller will create a new pod (Pod B2) to replace the now missing Pod B. On the other hand, Pod A is lost completely – nothing will ever recreate it.
Figure 1 When a node fails, only pods backed by a replication controller are recreated
The replication controller in the previous example manages only a single pod, but replication controllers, in general, are meant to manage multiple replicas of a pod, hence their name.
The operation of a replication controller
A replication controller, in essence, constantly monitors the list of running pods and makes sure the actual number of pods of some type always matches the desired number. If there are too few pods running, it creates new pods based on a pod template that is configured on the replication controller at that moment. If there are too many pods running, it removes the excess pods.
You might be wondering how there can be more than the desired number of running pods. This can happen for a number of reasons:
- A pod of the same type was created manually.
- A node that is running a pod disappears, a new replacement pod is then created by the replication controller, and then the lost node reappears.
- The desired number of pods is decreased, etc.
I’ve used the term pod types a few times. Actually, there’s no such thing. Replication controllers don’t operate on pod types, but simply on sets of pods that match a certain label selector. So, a replication controller’s job is really just making sure that there is always an exact number of pods matching its label selector. If there isn’t, a replication controller takes the appropriate action to reconcile the actual with the desired number. The operation of a replication controller can be thought of as a constantly running loop, shown in figure 2.
Figure 2 Replication controller’s reconciliation loop
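The reconciliation loop in figure 2 can be sketched in a few lines of simplified, hypothetical Python. This is not the real Kubernetes controller code; it just shows one pass of the loop, with pods modeled as dictionaries carrying labels.

```python
# One pass of a replication-controller-style reconcile loop: compare the
# pods matching the label selector against the desired count, then
# create or delete pods so the actual number converges on the desired.

def reconcile(pods, desired, selector, make_pod):
    matching = [p for p in pods if selector(p)]
    others = [p for p in pods if not selector(p)]
    diff = desired - len(matching)
    if diff > 0:                                    # too few: create new
        matching += [make_pod() for _ in range(diff)]   # from template
    elif diff < 0:                                  # too many: remove excess
        matching = matching[:desired]
    return others + matching

# Tiny in-memory simulation of the node-failure scenario from figure 1:
make_kubia_pod = lambda: {"labels": {"app": "kubia"}}
is_kubia = lambda p: p["labels"].get("app") == "kubia"

pods = [make_kubia_pod()]                    # only one replica survived
pods = reconcile(pods, 3, is_kubia, make_kubia_pod)
print(len([p for p in pods if is_kubia(p)]))   # -> 3
```

Note that the selector filters first: pods without the matching label are invisible to the loop, which is exactly why changing a controller's label selector makes it stop caring about its existing pods.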
Parts of a replication controller
A replication controller has three essential parts:
- a label selector, which determines what pods are in the replication controller’s scope,
- a replica count, which specifies the desired number of pods that should be running, and
- a pod template, which is used when creating new pods.
A replication controller’s replica count, the label selector and even the pod template can all be modified at any time, but only changes to the replica count affect existing pods. Changes to the label selector and the pod template have no effect on existing pods whatsoever. Changing the label selector makes the existing pods fall out of the scope of the replication controller, so the controller stops caring about them completely. Replication controllers also don’t care about the actual “contents” of its pods (the Docker images, environment variables and other things) once it creates the pod. The template therefore only affects new pods created by this replication controller. The template is simply used as a cookie cutter to stamp out new pods.
Like many things in Kubernetes, a replication controller, although an incredibly simple concept, provides or enables the following powerful features:
- It makes sure a pod (or multiple pod instances) is always running by starting new pods when an existing pod fails, is terminated or is deleted.
- When a cluster node fails, it creates replacement pods for all the pods that were running on the failed node (of course only those that were under the replication controller’s control).
- It enables easy horizontal scaling of pods. You can scale a replication controller up or down or have the scaling performed automatically by a horizontalpod autoscaler.
- It enables rolling updates of pods – by having two replication controllers, with one managing pods of the previous version and another one managing pods of the new version and then slowly decreasing the number of replicas on the first, and increasing the number of replicas on the second.
But it’s important to note that, powerful as they are, replication controllers never actually relocate existing pod instances. A pod instance is never actually moved to another node. Instead, a replication controller always completely replaces the old instance with a new one.
Creating, using and deleting a replication controller
Let’s see how to create a replication controller and then use it to horizontally scale a group of pods. But first, let’s do a clean sweep of our Kubernetes cluster to remove all resources we’ve created so far. We can delete all pods, replication controllers and other objects at once with the following command:
$ kubectl delete all --all
The command will list every object as it deletes it. As soon as all the pods terminate, our Kubernetes cluster should be empty again.
Creating a replication controller
Like pods and other Kubernetes resources, we create a replication controller by posting a JSON or YAML descriptor to the Kubernetes REST API endpoint.
Let’s create a YAML file called kubia-rc.yaml for our replication controller:
apiVersion: v1
kind: ReplicationController    ❶
metadata:
  name: kubia                  ❷
spec:
  replicas: 3                  ❸
  selector:                    ❹
    app: kubia                 ❹
  template:                    ❺
    metadata:                  ❺
      labels:                  ❺
        app: kubia             ❺
    spec:                      ❺
      containers:              ❺
      - name: kubia            ❺
        image: luksa/kubia     ❺
❶ What this descriptor is describing
❷ The name of this replication controller (RC)
❸ The desired number of pod instances
❹ The pod selector determining what pods the RC is operating on
❺ The pod template for creating new pods
When we post it to the API, Kubernetes will create a new replication controller named kubia, which will make sure there are always three instances of a pod matching the label selector app=kubia running. When there aren’t enough pods, new pods will be created from the provided pod template. The three parts of our replication controller are shown in figure 3.
Figure 3 The three key parts of a replication controller (pod selector, replica count and pod template)
The pod labels in the template must obviously match the label selector of the replication controller, otherwise the controller would just keep creating new pods indefinitely, since spinning up a new pod would not bring the actual replica count any closer to the desired number of replicas. To prevent such scenarios, the API server doesn’t allow creating a replication controller where the selector does not match the labels in the pod template.
To create the replication controller, we use the kubectl create command, which you already know by now:
$ kubectl create -f kubia-rc.yaml
replicationcontroller “kubia” created
As soon as the replication controller is created, it goes to work. Since there are no pods with the app=kubia label, the replication controller will spin up three new pods from the pod template. Here’s a list of the pods. Has the replication controller done its job?
$ kubectl get po
NAME          READY     STATUS              RESTARTS   AGE
kubia-53thy   0/1       ContainerCreating   0          2s
kubia-k0xz6   0/1       ContainerCreating   0          2s
kubia-q3vkg   0/1       ContainerCreating   0          2s
"date": "2018-02-21T18:41:34",
"dump": "CC-MAIN-2018-09",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813712.73/warc/CC-MAIN-20180221182824-20180221202824-00256.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.88763028383255,
"score": 3.203125,
"token_count": 1656,
"url": "http://freecontent.manning.com/kubernetes-in-action-introducing-replication-controllers/"
} |
Carroll did not relish living under the British monarchy. He had all but made up his mind to move from Oxford, Canada, to Illinois when a neighbor, John Brooks, brought news that changed Carroll's mind. It was 1839.

Brooks had been "out West." He gave Carroll a glowing account of a place called Linn County in the territory of Iowa. "It seemed to us all like a great undertaking," Carroll's son, George, wrote 56 years later. The family of nine and Brooks set out in late May. Their caravan consisted of two spans of horses, two covered wagons, six cows and the family dog, Watch. They passed through Detroit, Michigan City and across the "boring prairies" of Illinois. They crossed the Mississippi on a rickety scow at a point then known as Jugtown, just north of Muscatine. One of the oarsmen was Dyer Usher, who later moved to Linn County. Several weeks later the Carroll clan arrived at a cluster of cabins at Linn Grove. "There is a heap of hard work and dreadful poor living here," a woman told them. Asked for directions to Marion, she said: "Come right straight ahead and go right straight through."

The next day, July 4, they arrived in Marion. Eventually, the Carrolls claimed 320 acres east of what later became known as Mound Farm, site today of Mount Mercy College.

That pioneer story is typical. The bulk of Iowa's first citizens came from somewhere else in the United States or Canada. In 1850, only 22,000 of Iowa's 192,214 residents were foreign born. In 1860 the foreign-born accounted for 106,000 of the state's 674,913 residents. All heeded Horace Greeley's admonition to go west to the "land of the unhidden sky."

Stories of Iowa's prairie flourished. There were tales of winds so strong that frequently one would have to lie flat and clutch the tall grass to keep from being blown away. The number of snakes encountered was far smaller than the tales that circulated back East suggested.

Everyone was looking for cheap land. There were other reasons too. There was political turmoil in the scattered states that eventually became Germany. There was famine in Ireland. Germans and Irish were among the largest groups to first come to Iowa. The Dutch arrived in Baltimore in 1847. Some went to Michigan; another group left for Iowa.

The first families from Norway settled in Lee County and several years later in northeast Iowa. The first Danish settlement in Iowa was in Benton County in 1854. Spurred by absolute rule under the Austrians and a potato crop failure, Czechs from the Bohemia region began coming to Iowa in 1848. The big influx came in the 1860s. Hungarian nobles, fleeing a revolution, settled in Iowa in the 1840s. They traveled to Decatur County and formed the settlement of New Buda. But prairie life was too hard for these wealthy folk. They left, and their town disappeared.

The names of many Iowa towns reflect where pioneers came from: New Vienna, New Hampton, New Virginia, Pella (Dutch), Protivin (Czech), Emmetsburg (Ireland), Holstein and Hamburg (Germany) and Swedesburg.

(Dale Kueter, Gazette staff writer)
"date": "2014-04-24T12:55:30",
"dump": "CC-MAIN-2014-15",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00507-ip-10-147-4-33.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9670736789703369,
"score": 3.140625,
"token_count": 765,
"url": "http://www.genealogy.com/users/h/e/r/Laurese-E-Herron/FILE/0010page.html"
} |
An Ebola epidemic. Recurring outbreaks of "animal" flu. The massive spread of the "plague of the 21st century" – AIDS. Pharmaceutical companies report the creation of one "miracle vaccine" after another against these misfortunes. But unfortunately…

All kinds of "pioneers" of panaceas for monstrous diseases earn billions from their highly controversial vaccines. At the same time, they like to invoke at every opportunity their great forerunner – the "disinterested" Robert Koch, Nobel laureate and "conqueror of tuberculosis."

Heinrich Hermann Robert Koch was rightly considered the foremost of European microbiologists. A simple rural doctor, he had a passion for scientific research. Working in a primitive rural laboratory, Koch developed a number of new methods for studying microbes. In 1871 his wife gave him a microscope for his birthday, and from then on he spent whole days at the instrument examining various tissues. In 1890 Koch introduced tuberculin – a preparation that failed as a cure for tuberculosis but later became the basis of mass tuberculin diagnostics in the form of skin tests such as the Mantoux test. His whole life is held up as an example of the asceticism of a scientist who treated people in different countries of the world.
"date": "2019-09-18T18:44:22",
"dump": "CC-MAIN-2019-39",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573323.60/warc/CC-MAIN-20190918172932-20190918194932-00096.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9362577199935913,
"score": 3,
"token_count": 262,
"url": "https://mirfaces.com/category/science/"
} |
As educators, it is important to evaluate the tone we set in our classroom. Of course, we think we set a positive nurturing tone… but how do we know for sure? After all, it might surprise you to know how your students view you. Are you energetic, supportive, and encouraging? Do you unconsciously grimace in frustration the fifth time you ask a student to quit tapping their pencil? Do you fly through the lessons, or take too long? These are important questions we should be asking ourselves because the tone we set directly impacts student learning. Are you brave enough? Following are three ways we can assess ourselves to find out for sure.
The Standard Observation
The first and most common way of assessment is to have a trusted and objective teacher (or administrator) observe a lesson or two. Let him/her know if there is a particular area you would like to focus on (i.e. length of lesson). Try to teach in your normal style instead of playing to your observer, this way you will get a more accurate assessment. Be prepared to hear the constructive criticism, and act on any recommendations.
Videotape a Lesson
The second way is to videotape a lesson. Simply set up a camera in the back of the room and press record. The nice thing about videotaping a lesson is that you can go back and review it at your leisure. Try leaving the camera running during transitions for extra insights. This is an easy and valuable way to assess your teaching.
Watch Yourself Through Your Students
I have to say, this is my favorite way to evaluate myself. Your students watch you all day. They know your style better than anyone. Pick a dependable student to teach a short review lesson. You will essentially be watching yourself teach, because they will mimic your teaching! My students absolutely LOVE to 'step into the teacher's shoes.' I encourage you to try this in your classroom!
Self-assessment is as good for us as it is for our students. If you’ve never tried to see yourself through your students’ eyes before, try it this year. Then ask yourself, "Would you want to be a student in your class?" You will be doing both yourself and your students a valuable service.
"date": "2014-09-22T06:10:40",
"dump": "CC-MAIN-2014-41",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657136896.39/warc/CC-MAIN-20140914011216-00143-ip-10-234-18-248.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9562175273895264,
"score": 3,
"token_count": 459,
"url": "http://www.fortheloveofteaching.net/2010_07_01_archive.html"
} |
As he did in An Intimate Look at the Night Sky and The Path, astronomer and physicist Chet Raymo makes this exploration of the prime meridian—the line of zero longitude and the standard for all the world's maps and clocks—a personal one, walking it from Brighton through Greenwich to the North Sea.
"The story of the prime meridian is in itself fascinating: in 1884 an international agreement fixed a meridian of zero longitude and standard time through southeast England. But Raymo, a physicist and science writer who wrote a popular weekly column for the Boston Globe, goes beyond this tale. He uses an actual walk along the meridian as a 'thread on which to hang' a history of astronomy, geology and paleontology. Stops at sites near the meridian include Newton's rooms at Cambridge, Darwin's house at Downe, the infamous town of Piltdown, and the place where the first dinosaur fossils were discovered. A walk with this delightful writer is the best exercise a reader could have."—Scientific American | <urn:uuid:d3d14baa-10ad-4929-abe9-f86e230abb03> | {
"date": "2016-04-29T14:42:38",
"dump": "CC-MAIN-2016-18",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111365.36/warc/CC-MAIN-20160428161511-00058-ip-10-239-7-51.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9299288392066956,
"score": 2.640625,
"token_count": 215,
"url": "http://www.daedalus-books.com/Products/Detail.asp?ProductID=105149&Media=Book&SubCategoryID=2212&ReturnUrl=%2FProducts%2FCategoryMain.asp%3FMajorCategoryID%3D28%26Media%3DBook%26Special%3D"
} |
Anthocephalus morindaefolius Korth. Rubiaceae/Naucleaceae.
East Indies and Sumatra. This large tree is cultivated in Bengal, North India and elsewhere. The flowers are offered on Hindu shrines. The yellow fruit, the size of a small orange, is eaten. The plant is a native of the Siamese countries.
Sturtevant's Edible Plants of the World, 1919, was edited by U. P. Hedrick.
"date": "2014-10-31T22:09:23",
"dump": "CC-MAIN-2014-42",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637900397.29/warc/CC-MAIN-20141030025820-00019-ip-10-16-133-185.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8807228207588196,
"score": 2.5625,
"token_count": 101,
"url": "http://www.henriettes-herb.com/eclectic/sturtevant/anthocephalus.html"
} |
Why we celebrate: Run to the polls! Once every four years the American government gives its citizens the chance to elect a new president, and there are local races to participate in, too.
What to do: Show your kids the importance of voting by teaching them about the candidates and issues under consideration. Take little ones in the booth with you so they can see the electoral process; it's a memory they'll keep.
Why we celebrate: Education can be so important for your child's future, and one of the best ways he can learn is through books.
What to do: Share your favorite story with your kids. Have them read to you, and congratulate them on their excellent work. For older children, visit your local library, and get them their own library card; or give them a sticker or gold star for being such good readers.
Why we celebrate: Recognized after World War I, Veteran's Day is a day to pay tribute to the heroes of our great country who fought to ensure peace in the world.
What to do: Explain to your children about war, and why our armed forces battle to protect and preserve. Think about how old veterans of different wars would be (grandfather's age, uncle's age, teacher's age) so children can relate.
Why we celebrate: Fall foliage, crisp air, the last days before it gets too cold -- these are all great reasons to get the kids moving outdoors.
What to do: In your local park or woods, dress warmly and put on comfortable shoes. Then, take a hike! Point out birds you see or hear, different shaped leaves or seeds that have fallen on the ground, cloud patterns, animal tracks, or anything else that catches your eye.
Why we celebrate: Volunteering is a great way to help your community, and when the whole family gets involved it shows children you really mean to make the world a better place.
What to do: Close to Thanksgiving, it's the perfect opportunity to donate food or time to a local shelter or soup kitchen.
Why we celebrate: After a hard first year in the new world, and with the help of their Native American neighbors, the Pilgrims' 1621 harvest turned out a plentiful amount of corn, fruits, vegetables, and other foods that would help them through the long winter. And so they held a big feast to celebrate both their success and their appreciation of friends and loved ones.
What to do: Being with family and friends is an important part of this day of thanks. Have everyone at the table share something they are thankful for this year.
"date": "2018-08-19T20:00:21",
"dump": "CC-MAIN-2018-34",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215284.54/warc/CC-MAIN-20180819184710-20180819204710-00576.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9480878710746765,
"score": 3.53125,
"token_count": 530,
"url": "https://www.parents.com/fun/parties/special-occasions/why-we-love-november/"
} |
India Adopts a New Refined Protocol to Monitor Tigers
by Prerna Singh Bindra
Will make India world leader in big cat monitoring, say scientists.
In a move welcomed widely by the conservation and scientific community, the National Tiger Conservation Authority (NTCA) has adopted new refined protocols for intensive annual monitoring of tiger source populations under ‘Phase IV’ of National Tiger Estimation. The new protocol is expected to lead to more robust estimates of population density, change in numbers over time and other crucial parameters such as survival and recruitment rates in key wild tiger populations. Another key feature is that it enables State Forest Departments to formally collaborate with qualified scientists to derive rigorous estimates that go beyond simple minimum numbers as planned earlier.
The refinements have been developed over the past three years by NTCA and the Wildlife Institute of India (WII), with the Centre for Wildlife Studies (CWS), Bangalore, playing a key supportive technical role. The process of finalising the protocol involved participation of qualified scientists and wildlife managers across the country.
Some salient points of the new intensive monitoring protocol:
- Annual monitoring of tiger source populations using capture-recapture methods based on individual identification of tigers from camera trap data or fecal DNA. These protocols will work in tandem with a national tiger photographic data base repository to be developed and maintained at NTCA.
- Minimum sampling area of 400 sq km at a time, with a sampling intensity of 1,000 trap nights per 100 sq km to be attained.
- The annual camera trap survey to be completed in 45-60 days.
- If deployment of camera traps in an entire reserve – or parts of it – is not feasible for any reason, fecal DNA samples may be collected within 45-60 day survey period and analysed to arrive at reserve wide tiger numbers using capture-recapture methods.
- Protocols are also laid down for estimating prey densities using line transect surveys, following the design and analysis methods prescribed in the DISTANCE software.
This methodology will make monitoring results more directly linked to tiger numbers, and will ensure generation of reliable data amenable for sound analysis. It is expected to yield reliable estimates of tiger densities and numbers in all source populations where such scientific monitoring is taken up.
Welcoming the new protocol, Dr Ullas Karanth, Director Wildlife Conservation Society-India Program, says, “If implemented fully, this protocol will put India’s tiger monitoring program well ahead of any other big cat monitoring program anywhere in the world.” He acknowledged the “spirit of innovation” shown by Dr. Rajesh Gopal (Member Secretary — NTCA) and the solid cooperation of Sri PR Sinha (Director — WII) in introducing these refinements, which has balanced science with the realities on the ground. He acknowledged the initiative provided by former Minister Jairam Ramesh to the process in 2009, as well as the support from the current MEF Ms. Jayanthi Natarajan in providing steady support thereafter. Karanth views the new protocol as a major step forward and said that the collaborative process envisaged is also expected to bring wider participation of qualified scientists, as well as greater transparency.
Breaking with the past
India has indeed come a long way since the flawed pugmark census followed for over three decades since the inception of Project Tiger in 1973. According to Dr Karanth, the ‘pugmark census’ was “an extremely unreliable ad-hoc method, which allowed reserve managers to generate tiger numbers that often created a false sense of security.” He advocated capture-recapture sampling through the strategic deployment of automatic cameras in tiger habitats as an established, powerful method to photographically ‘catch’ samples of tigers from populations, in order to estimate numbers. He has developed these methodologies in Karnataka since 1990, in a series of research projects implemented in collaboration with the State Forest Department.
It may be worthwhile to mention here that in 2004, when Sariska claimed a population of about 24 tigers as counted by the pugmark census, the tiger was already extinct in the reserve.
The turnaround came in 2005, after a Tiger Task Force appointed by the Prime Minister, and chaired by environmentalist Sunita Narain, ruled that the ‘pugmark census’ was invalid, and recommended that it be abandoned in favour of modern approaches. Following this, the government switched to sampling-based country wide estimation, which, however, focused on trends over larger regions based on encounter rates. While this signalled a decisive shift away from the past unscientific practice of trying to ‘census’ wildlife populations, to a more global standard based on statistical sampling, it came with its own constraints. The most serious flaw with this effort of estimating tigers was the basic futility of trying to generate all-India level tiger counts once in four years. It did not intensively monitor tigers annually, and thus was unable to track numbers, survival, mortalities and recruitments in key source populations year after year.
Now, the once-in-four year national estimation (termed Phases I to III) will be augmented by Phase IV–intensive camera trapping of tiger source populations to track the fate of individual tigers, and estimate survival and recruitment rates to gauge how each of these populations is faring. If rigorously adopted, the new protocol will help avoid future Sariska-like situations. A declining population will set off alarm bells, leading to timely corrective action, and thus stave off local extinction of a population.
India’s 41 tiger reserves cover about 50,000 sq km, and the country is overall estimated to have about 100,000 sq km of potential tiger habitat. However, only about 20,000 sq km holds key source populations. It will serve well to properly implement the new protocol in this 20,000 sq km, as this will cover about 90 per cent of the country’s tiger population.
Intensive monitoring of this nature has already proven effective in Karnataka, where, in association with the Karnataka Forest Department, CWS has developed and implemented a rigorous source-population monitoring scheme. Currently, 4,000 sq km of tiger source areas in five Protected Areas are being sampled using camera traps and prey surveys. These efforts have led to individual identification of over 500 tigers over the years and to estimates of vital population parameters in the studied area.

On the basis of this data, it has been ascertained that the tiger population in Karnataka is stable, and even increasing in some areas like Bhadra Tiger Reserve and Kudremukh National Park.
"date": "2014-09-17T15:31:08",
"dump": "CC-MAIN-2014-41",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657123996.28/warc/CC-MAIN-20140914011203-00346-ip-10-196-40-205.us-west-1.compute.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.931631326675415,
"score": 2.578125,
"token_count": 1367,
"url": "http://www.conservationindia.org/articles/india-adopts-a-new-refined-protocol-to-monitor-tigers"
} |
A Philadelphia nun, Mother Katharine Drexel, will likely be declared a saint of the Roman Catholic Church some time this year or next, making her only the second native-born American to achieve this honor. Students of philanthropy may be interested to learn that Katharine already has another distinction—she may well be the only American whose philanthropic work was ever recognized by an income tax exemption through a special act of Congress.
Born into one of the largest fortunes in 19th century America, Drexel chose to walk away from it all for a convent life of vowed poverty. Her considerable fortune was funneled into charity, mostly for the education of blacks and American Indians. While her sense of charity was extraordinary, it was not out of keeping with the philosophy of a remarkable family which believed wealth carried an obligation of sharing.
Katharine Mary Drexel was born in Philadelphia in 1858 as the second child of Francis and Hannah Drexel. Katharine’s father was the eldest son of Francis Drexel, an Austrian artist who had migrated to America and eventually founded the successful investment banking firm of Drexel and Company.
Hannah Drexel died a month after the birth of Katharine, her second child, and Francis remarried in 1860. His new wife, Emma Bouvier Drexel, took his two little girls and raised them as her own. Eventually, the couple had a third child, Louise.
Emma was easily the most important influence in the lives of the young Katharine and her sisters. Emma (Jacqueline Kennedy Onassis was the great-granddaughter of her brother, John Bouvier) was a woman of profound Catholic piety, and charity was an integral part of her faith. Twice a week, hundreds of poor Philadelphians would assemble at the back gate of the Drexel home on Walnut Street. Emma, aided by her daughters and a paid assistant, would fulfill requests for clothing, shoes, food or rent, depending upon the needs of the family.
The three children lived a largely sheltered life of private education at home and participated in family vacations that might include a European tour or visits to the various scenic wonders of America. As they neared womanhood, vacations were permitted to the summer homes of very close friends or relatives, and while Francis Drexel would often accompany his daughters, Emma would remain at home, citing the needs of her various charities.
A “Spendthrift” Will
The idyllic existence of this close-knit family was shattered in 1879 when Emma was diagnosed with cancer. The disease progressed slowly and painfully until her death four years later. It was only then that the full extent of her personal charity became public knowledge—she had been quietly paying the rent for more than 150 Philadelphia families.
Francis Drexel survived his wife by only two years, dying in 1885 of a sudden heart attack. His death left his three grief-stricken daughters heiresses to one of America’s most impressive fortunes. At $15.5 million, an enormous sum in the late 19th century, it was the largest estate filed in Philadelphia up to that time. Under the terms of his will, 10 percent of the estate was distributed immediately to 29 (mostly Catholic) charities.
The remaining $14 million was willed to the three sisters, none of whom were married. Francis Drexel, wary of fortune hunters who might take advantage of his daughters, had written what was known as a “spendthrift will.” The sisters would receive only the income of the estate; their children, if any, would eventually inherit the principal. If any died childless, the income from her share would pass to the surviving sister or sisters.
During this critical period in their lives, the three sisters relied on their uncle, Anthony Drexel, for advice. Anthony guided them as they embarked on a path of philanthropy, a field with which he was quite familiar. He himself was in the process of founding the Drexel Institute of Technology. This school, through an initial endowment of $3 million, provided women and men of modest means with practical training in useful occupations. It has since grown into Drexel University, one of the nation’s premier engineering schools.
Prior to their father’s death, the Drexel sisters had been continuing their mother’s charities. Now, with their increased income, they expanded their benefactions, and while each supported the work of the others, they tended to have individual areas of special concern.
Because Francis Drexel had a special interest in Catholic orphanages for boys, Elizabeth, his eldest daughter, concentrated on this field. She built the St. Francis Industrial School in the Philadelphia suburb of Eddington. This residential school took adolescent boys from the orphanages and gave them training in a variety of occupations. This was an improvement over the previous practice which saw the children placed as apprentices with masters who all too often exploited them for cheap labor. Louise gravitated to the field of “Negro” education.
Katharine, from her teen years, had a special interest in the plight of American Indians. Almost by coincidence, her spiritual advisor, Rev. James O’Connor, was named Bishop of Omaha, a diocese which encompassed a number of Indian reservations. Through O’Connor, Katharine was put in touch with several prominent missionaries to the Indians, and began a life work of sponsoring churches and schools in Indian Territory, a work that expanded to include the much larger field of similar institutions for oppressed blacks. While education and material advancement of minorities was one of the goals of her outreach, the primary motive was evangelization. Katharine, a pious Catholic, wished to introduce her faith to these largely unchurched peoples, and the schools were a means toward this end.
Becoming a Missionary
But something entirely different was going on in her personal life. Shortly after Emma Drexel’s death, Katharine began to give serious consideration to entering the convent. This desire became more intense after the death of her father. This would mean renouncing her fortune for a life of vowed poverty, shut off from most contact with the outside world.
Had she lived, Emma Drexel would probably have advised against such a choice—it was her stated opinion that women of wealth could do far more good through remaining in the world and devoting their time and talents to charitable works.
Nor did Katharine receive much encouragement from Bishop O’Connor, who told her she would not be able to make the adjustment from a world of luxury to the rigorous life of a nun. Then too, he believed, should she enter the convent, she would have to relinquish her growing role as the prime financial backer for the Catholic Indian missions.
Katharine was building schools which were then administered by religious congregations aided by a government tuition subsidy. The federal government, under a policy initiated by President Ulysses S. Grant, saw religious-operated schools, Protestant and Catholic, as a practical path for the advancement of the Indian tribes.
In 1886-87 the “All Three,” as the sisters styled themselves, toured Europe, with visits to a number of boys’ training schools after which Elizabeth could pattern her St. Francis.
During an audience with Pope Leo XIII, Katharine asked for European missionaries for Indian Territory. “Why not, my child, become a missionary yourself?” the Pope countered.
Katharine was startled; this was not a step she was quite ready to take. But it appears to have helped strengthen her resolve to enter a convent. Finally, in late 1888, Bishop O’Connor consented to her becoming a nun but suggested she found her own congregation which would work exclusively among Indians and blacks. In 1891, after her own formation as a nun, her Sisters of the Blessed Sacrament for Indians and Colored came into being. While Katharine continued to aid other charities, the bulk of her future funds would be funneled into schools conducted by the Pennsylvania-based congregation.
Learning To Use Straw Buyers
Mother Katharine Drexel, as she was now known, opened St. Catherine’s School in Santa Fe, her congregation’s first school for Indian children, in 1894. That same year she purchased an estate in Rock Castle, Virginia, where she built St. Francis de Sales School, a boarding school for black girls which complemented St. Emma’s, a nearby boys’ school which Louise had founded.
Many other schools and missions would follow through the South, the West, and the urban slums of the East. Very often there was such extreme prejudice and community opposition, that property for the schools would have to be purchased through a third party.
Most of the parents who sent their children to Katharine’s schools were not Catholic, but the state of public education, especially for blacks in the South, was so deplorable that the schools were welcomed by the black community.
Katharine realized her Sisters could not begin to fill the vast need by themselves. She seized an opportunity when, in 1915, Louisiana relocated a black college, Southern University, out of New Orleans. Through a straw buyer she purchased the vacant campus and reopened the school as Xavier College (now Xavier University). In its early years the primary mission of the college was to train lay teachers who would then staff schools for black children in rural Louisiana. Xavier was the first and only Catholic college for African Americans, and a pioneer in co-education. It continues to this day as a respected, though now integrated, institution of higher learning. It is worth noting that while Katharine was pouring millions into her schools, her own lifestyle was one of poverty. A Xavier graduate noted how she without hesitation bought a bus for the school’s sports teams, but traveled on the street car when she visited the campus.
Katharine’s charities outstripped her considerable fortune, especially after the introduction of the federal income tax, which at one point was gobbling up a third of her income. In a move that would be hard to imagine with today’s Alternative Minimum Tax, Congress in 1924 passed a bill providing that any person who had given at least 90 percent of their income to charity for the preceding 10 years would be exempt from federal taxes. The bill was widely understood to include no one but Katharine Drexel.
Katharine Drexel continued in her mission to Native Americans and African Americans until her late seventies when failing health forced her retirement. She lived on in her quiet cloister until her death in 1955, at the age of 96. By then her congregation could count 61 missions, mostly schools, throughout the country, although she had financed many more. In all Katharine Drexel had distributed some $20 million, more than the total of the trust fund bequeathed to herself and her two sisters.
As the Catholic Church considers her canonization, it is on grounds of holiness, not philanthropy. But philanthropy itself can be a manifestation of holiness, especially when those endowed with fortune understand, as Katharine did, that we are all but stewards, and that wealth is meant to be shared.
Lou Baldwin is a reporter with the Catholic Standard in Philadelphia.
Most of us had never heard the word Eyjafjallajökull before this week. We can print the name, but virtually no one in the world media community can pronounce it. No, it is not the name of a potato peeler sold at IKEA; it’s the Icelandic volcano that has shut down air traffic in Europe for the past four days.
On June 24, 1982 ash from the eruption of Mount Galunggung shut down all four engines of British Airways Flight 9, a Boeing 747 flying over the Indian Ocean south of Java. Fortunately the crew was able to restart the engines after the plane had flown out of the ash cloud, but all engines were severely damaged when the crippled aircraft finally touched down in Jakarta. This dramatic incident called attention to the aircraft damage that can occur from volcanic ash.
While Eyjafjallajökull’s eruptions have been much smaller than those of Mount St. Helens, with ash reaching altitudes of 20,000 feet compared with 50,000 feet from St. Helens, the volcanic debris from this Icelandic wonder has remained stuck over northern European airspace, due to stagnant winds. It is possible that European airports could be closed yet another week, until the prevailing winds blow the ash southward.
The effect on the international airline industry, which uses connecting flights through London, Amsterdam, and Frankfurt, has been devastating. Hundreds of thousands of travelers remain stuck in places like Shanghai, Bangkok, Nairobi, and Bombay awaiting connecting flights through Europe to the United States and places west. The economic effect on Europe’s domestic airline industry must be horrible, with all flights grounded.
I wondered about the effect of Eyjafjallajökull on global warming. Apparently, in the short run volcanic eruptions may actually cool the earth as the ash and dust particles blown into the atmosphere can reflect the sun’s rays away from the earth’s surface. In 1991 a huge eruption of Mount Pinatubo in the Philippines (an explosion ten times greater than Mount St. Helens) actually reduced the earth’s temperatures by half of a degree over the next year. Eyjafjallajökull is not expected to have any significant effect on the earth’s temperatures. It could, at most, cool Northern Europe for a time.
Not to worry. While even the eruptions of Eyjafjallajökull won’t cool the earth enough to spare us from the disaster called global warming, we can rest assured that Nancy Pelosi, Harry Reid, and Barack Obama can come up with the proper legislation to get the job done!
Pete Daniel. Dispossession: Discrimination against African American Farmers in the Age of Civil Rights. Chapel Hill: University of North Carolina Press, 2013. 336 pp. $34.95 (cloth), ISBN 978-1-4696-0201-1.
Reviewed by Dionna Richardson (University of Akron)
Published on H-1960s (June, 2013)
Commissioned by Zachary J. Lechner
Saying One Thing and Doing Another: Agriculture and the Duality of Institutionalized Racism in the Modern United States
As we near the fiftieth anniversary of the milestone year of 1968, several monographs are emerging that reassess what we knew or, rather, thought we knew about that tumultuous decade and the years surrounding it. While many historians are familiar with “Freedom Summer” and the landmark equal rights legislative acts of 1964 and 1965, Pete Daniel in Dispossession strives to explain an important but relatively unknown aspect of the civil rights movement: the struggles against institutional discrimination that targeted black farmers in the U.S. South and their ability to serve as a window into the relationship between race and government in modern American history.
In the years following the Civil War, the number of black farmers in the South grew rapidly. There were almost one million black-owned farms by the 1920s. After the world wars and the Great Depression, the numbers of farms in general decreased, but not proportionately along the color line. What Daniel exposes is the emergence in the years between 1940 and 1964 of a drastically disparate ratio of white-owned to black-owned farms that came about as many Americans were led to believe that federal New Deal policies and bureaucratic agencies such as the U.S. Department of Agriculture (USDA) were working hard to secure equal access for all Americans. At a time when civil rights laws were being enacted to end discrimination, black-owned farms decreased by 93 percent as black farmers were denied access to federal programs, loans, and education needed for their survival.
Daniel’s central argument is that the federal government, primarily the USDA, and its affiliate state and local committees rhetorically promoted equal rights to the public while actively and systematically suppressing southern black farmers’ access to federal programs. This practice was accomplished via outward discrimination in the form of voter intimidation and denial of capital as well through limited access to education about new agricultural technologies that could keep them competitive with the larger white farms. In this meticulously researched and important monograph, Daniel shows that high-ranking officials at the USDA were not only aware of the various forms of discrimination, but they refused to act on them and, in most cases, even acknowledge them. He offers this hypothesis in a direct challenge to scholars who have explained the decline by citing either the structural shift between labor- and capital-intensive farming or by claiming that blacks simply left the South on their own, giving plantation owners no choice but to replace them with machinery. Daniel instead makes the case that the dispossession of black farmers in the U.S. South during these tumultuous decades was much more insidious and had more to do with institutional racism and discrimination than benign economic patterns and shifts in population.
Daniel comes to these conclusions by mining a large primary source base of organizational records and government documents. Central to his thesis are the findings of the U.S. Commission on Civil Rights, which began a campaign in the spring of 1964 that focused exclusively on USDA discrimination. In chapter 2, titled “Evidence,” Daniel explains how the commission uncovered vast inequities between white and black farmers with regard to their access not only to government programs but also to agents and seats on government oversight committees. He shows how southerners at the local level blatantly denied access to blacks while high-level bureaucrats looked the other way. The USDA outwardly denied the commission’s findings, but Daniel’s research indicates they continued with business as usual.
Situating his argument in the larger historiography on the civil rights movement, Daniel explains that there was more activism occurring in the Deep South during the mid-1960s than the voter registration drives often associated with the Student Nonviolent Coordinating Committee’s (SNCC) “Freedom Summer.” In a chapter entitled “Freedom Autumn,” Daniel illustrates that while the horrors of white actions during Freedom Summer may have held headlines, there was a lot of important, yet unpublicized activity occurring that aimed to help farmers threatened by dispossession. Primarily, this involved educating blacks about their eligibility for federal programs, as well as assistance with navigating the intentionally long and difficult forms that one had to complete in order to apply.
Daniel argues that the massive bureaucracy created in Washington under the banner of equal rights was actually a machine “used to mask continuing discrimination rather than end it” (p. 154). He highlights how the use of selective statistics, such as those used in reports to illustrate compliance, ignored areas of contention or occurrences where complaints of discrimination were filed. These documents along with false compliance reports were often used to publicly discredit accusations of discriminatory practices by the USDA and its agencies. In the same vein, blacks were also being excluded from influential public offices and seats on committees that held authority over access to USDA programs while it appeared to the public that the USDA was doing the opposite. Citing several individual instances, with a focus on the specific case study of Willie Strain, Daniel explains that blacks like Strain, who were gaining influence as activists or leaders among the farmers, were often “rewarded” with promotions or job offers which placed them in official positions that were meaningful in title only. These new positions were, in reality, designed as “damage control” to limit the activists’ influence on the ground by placing them behind desks and granting them little political power.
The main criticism to be made of this book is that it is, at times, difficult to follow due to Daniel’s capacious interrogation of New Deal programs, which could confuse the novice reader who is not familiar with the “alphabet soup” nature of the many different agencies. There is a key to the agencies following the preface at the start of the book, but it is frustrating that after the first time an agency is mentioned it is then referred to only by acronym throughout the rest of the monograph. It would have been easier if, at least in a note, the reader were reminded of which agency Daniel is discussing when he first mentions it again in each chapter.
In all, Dispossession is a worthwhile read for anyone who is interested in studies of the civil rights era and African American struggles for equality. The critical exposure of discrimination at all levels of government is both informative and provocative and is a welcome addition to the historiographical conversation that has sought to investigate and expose the hypocrisy behind institutionalized racism during the era of civil rights. While perhaps too dense for undergraduates, Dispossession could be a valuable contribution to a graduate seminar on the relationship between race and government in U.S. history.
If there is additional discussion of this review, you may access it through the network, at: https://networks.h-net.org/h-1960s.
Dionna Richardson. Review of Daniel, Pete, Dispossession: Discrimination against African American Farmers in the Age of Civil Rights.
H-1960s, H-Net Reviews.
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.
Thanks to Harvard and Princeton scientists, we’ll be able to tell our grandchildren that “back in our day we had to walk to school uphill both ways, in the snow, all year long. And we didn’t have any fancy-shmancy molecular bio-computers that monitored our health…”
Researchers at Harvard University and Princeton University have made a crucial step toward building biological computers, tiny implantable devices that can monitor the activities and characteristics of human cells. The information provided by these “molecular doctors,” constructed entirely of DNA, RNA, and proteins, could eventually revolutionize medicine by directing therapies only to diseased cells or tissues.
The results will be published this week in the journal Nature Biotechnology.
“Each human cell already has all of the tools required to build these biocomputers on its own,” says Harvard’s Yaakov (Kobi) Benenson, a Bauer Fellow in the Faculty of Arts and Sciences’ Center for Systems Biology. “All that must be provided is a genetic blueprint of the machine and our own biology will do the rest. Your cells will literally build these biocomputers for you.”
Evaluating Boolean logic equations inside cells, these molecular automata will detect anything from the presence of a mutated gene to the activity of genes within the cell. The biocomputers’ “input” is RNA, proteins, and chemicals found in the cytoplasm; “output” molecules indicating the presence of the telltale signals are easily discernible with basic laboratory equipment.
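As a toy illustration only: the actual automata are biochemical circuits built from DNA, RNA, and proteins, not software, and the marker names below are hypothetical examples, not signals named in the study. Still, the kind of Boolean equation such a device evaluates over its molecular inputs can be sketched in ordinary code:

```python
# Toy sketch of the *logic* a molecular automaton evaluates, not the
# biochemistry. Marker names are hypothetical, for illustration only.
def diseased(markers):
    """Boolean equation over presence/level signals in one cell:
    flag the cell only if a mutation is present AND an oncogene is
    highly expressed AND the tumor suppressor is NOT active."""
    return (markers.get("mutated_gene_present", False)
            and markers.get("oncogene_mrna_high", False)
            and not markers.get("tumor_suppressor_active", True))

cell = {"mutated_gene_present": True,
        "oncogene_mrna_high": True,
        "tumor_suppressor_active": False}
print(diseased(cell))  # True -> the automaton would produce its output molecule
```

In the biological version, each AND/NOT term corresponds to a molecular interaction rather than a line of code, and the “True” result is an output molecule a clinician can detect, or that can trigger a therapeutic action directly.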
“Currently we have no tools for reading cellular signals,” Benenson says. “These biocomputers can translate complex cellular signatures, such as activities of multiple genes, into a readily observed output. They can even be programmed to automatically translate that output into a concrete action, meaning they could either be used to label a cell for a clinician to treat or they could trigger therapeutic action themselves.”
Benenson and his colleagues demonstrate in their Nature Biotechnology paper that biocomputers can work in human kidney cells in culture. Research into the system’s ability to monitor and interact with intracellular cues such as mutations and abnormal gene levels is still in progress.
A biocomputer’s calculations, while mathematically simple, could allow researchers to build biosensors or medicine delivery systems capable of singling out very specific types or groups of cells in the human body. Molecular automata could allow doctors to specifically target only cancerous or diseased cells via a sophisticated integration of intracellular disease signals, leaving healthy cells completely unaffected.
Do you think Medicare will try to screw over the “molecular doctors” as much as they do “real” doctors….
Shrinking dinosaurs evolved into flying birds
30 July 2014
University of Southampton
A new study involving scientists from the University of Southampton has revealed how massive, meat-eating, ground-dwelling dinosaurs evolved into agile flying birds: they just kept shrinking and shrinking, for over 50 million years.
Today, in the journal Science, the researchers present a detailed family tree of dinosaurs and their bird descendants, which maps out this unlikely transformation.
They showed that the branch of theropod dinosaurs, which gave rise to modern birds, were the only dinosaurs that kept getting inexorably smaller.
“These bird ancestors also evolved new adaptations, such as feathers, wishbones and wings, four times faster than other dinosaurs,” says co-author Darren Naish, Vertebrate Palaeontologist at the University of Southampton.
“Birds evolved through a unique phase of sustained miniaturisation in dinosaurs,” says lead author Associate Professor Michael Lee, from the University of Adelaide’s School of Earth and Environmental Sciences and the South Australian Museum.
“Being smaller and lighter in the land of giants, with rapidly evolving anatomical adaptations, provided these bird ancestors with new ecological opportunities, such as the ability to climb trees, glide and fly. Ultimately, this evolutionary flexibility helped birds survive the deadly meteorite impact which killed off all their dinosaurian cousins.”
Co-author Gareth Dyke, Senior Lecturer in Vertebrate Palaeontology at the University of Southampton, adds: “The dinosaurs most closely related to birds are all small, and many of them, such as the aptly named Microraptor, had some ability to climb and glide.”
The study examined over 1,500 anatomical traits of dinosaurs to reconstruct their family tree. The researchers used sophisticated mathematical modelling to trace evolving adaptations and changing body size over time and across dinosaur branches.
The international team also included Andrea Cau, from the University of Bologna and Museo Geologico Giovanni Capellini.
The study concluded that the branch of dinosaurs leading to birds was more evolutionarily innovative than other dinosaur lineages. “Birds out-shrank and out-evolved their dinosaurian ancestors, surviving where their larger, less evolvable relatives could not,” says Associate Professor Lee.