The Arabs had overrun a vast collection of diverse peoples with diverse customs. Moreover, internal dissensions among the Arabs themselves prevented the establishment of a permanent unified state to govern the whole of the conquered territory. After Muhammad’s death, there was disagreement over the succession. Finally, Muhammad’s eldest companion, Abu Bekr, was chosen khalifa (caliph, the representative of Muhammad). Abu Bekr died in 634, and the next two caliphs, Omar (r. 634-644) and Othman (r. 644-656), were also chosen from outside Muhammad’s family.
Many Arabs resented the caliphs’ assertion of authority over them and longed for their old freedom as nomads. In 656 Othman was murdered. By then those who favored choosing only a member of Muhammad’s own family had grouped themselves around Ali, an early convert and cousin of the Prophet. This party also opposed all reliance on commentaries, or supplemental works, explaining the Koran. Fundamentalists with regard to the holy teachings, they became known as Shi’ites (the sectarians). Opposed to them was a prominent family, the Umayyads, who backed one of their members, Muawiyah, as caliph.
In 656 Ali was chosen caliph, and civil war broke out. Ali was murdered in 661. His opponent, Muawiyah, had already proclaimed himself caliph in Damascus in 660. Thus began the dynastic Umayyad caliphate (661-750). On the whole, it saw ninety years of prosperity, good government, brisk trade, and cultural achievement along Byzantine lines. The civil service was run by Greeks, and Greek artists worked for the caliph; the Christian population, except for the payment of a head tax, was on the whole unmolested and better off than it had been before.
Shi’ite opposition to the Umayyads, however, remained strong. The enemies of the Shi’ites called themselves Sunnites (traditionalists). There was little difference between the two groups with regard to religious observances and law, but the Shi’ites felt it their duty to curse the first three caliphs who had ruled before Ali, while the Sunnites deeply revered these three caliphs. The Shi’ites were far more intolerant of the unbeliever, conspired against the government, and celebrated the martyrdom of Ali’s son Hussein, who was killed in battle in 680 as the result of treachery.
In 750 the Shi’ites were responsible for the overthrow and murder of the last of the Umayyad caliphs at Damascus, together with ninety members of his family. The leader of the conspirators was Abu’l Abbas—not a Shi’ite himself, but the great-grandson of a cousin of Muhammad. The caliphate was shortly afterward moved east to Baghdad, capital of present-day Iraq and close to the former capital of the Persian Empire, and was thereafter known as the Abbasid caliphate. The days when Islam was primarily an Arab movement under Byzantine influence were over.
Other groups appeared in Islam with varying views of how to interpret the Koran. Some were Sufis who attempted to lose themselves in divine love and whose name was given them from suf, wool, for the undyed wool garment they wore. Politically the rest of the Muslim world fell away from its dependence upon the Abbasids. One of the few Umayyads to escape death in 750, Abd ar-Rahman made his way to Spain and built himself a state centered in the city of Cordova. Rich and strong, his descendants declared themselves caliphs in 929.
Separate Muslim states appeared in Morocco, Tunis, and Egypt, where still another dynasty, this time Shi’ite, built Cairo in the tenth century and began to call themselves caliphs, though they were soon displaced by Sunnites. Rival dynasties also appeared in Persia, in Syria, and in the other Eastern provinces. At Baghdad, though the state took much of its character and culture from its Persian past, power fell gradually into the hands of Turkish troops. Although the caliphate at Baghdad lasted until 1258, during its last two centuries the caliphs were puppets in Turkish hands.
Losing weight is not a big deal for a healthy person: burn more calories than you take in, and your weight will go down. But not everyone knows whether they should lose weight. Your ideal weight can be estimated from your weight and height; this parameter is known as your body mass index (BMI). Another important indicator is the amount of fat in your body, which can be measured in hospitals and weight loss centers.
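As a rough illustration (the article gives no formula), BMI is conventionally computed as weight in kilograms divided by the square of height in meters. The category cut-offs below are the commonly cited WHO values, which are an assumption here, not something stated in the article, and are no substitute for medical advice:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    """Commonly cited WHO cut-offs (illustrative only)."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

value = bmi(70, 1.75)
print(round(value, 1), bmi_category(value))  # 22.9 normal
```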
You need to know that over sixty percent of Americans are overweight. This is a result of an unhealthy diet and poor lifestyle. Most people eat a lot of saturated fats and trans fats, which does their health no good.
Losing weight is not a big problem. However, keeping it off can be quite challenging. But if you find a permanent weight loss plan, you will not have any problems with this and will enjoy your attractive body for many years.
It is recommended to choose low-calorie-density foods that are filling but contain few calories. They help you deal with hunger while still promoting weight loss. Fruits and vegetables are such products. Consuming fewer calories will help you keep your weight off for good.
Keep in mind that food with extra air whipped into it has fewer calories in the same volume. People who eat air-whipped foods get 30 percent fewer calories than those who eat regular foods.
If you want to lose weight, you should also know which foods to avoid. Do not eat foods that are high in saturated fats and trans fats. Avoid fried foods, as they contain a lot of calories that can be stored in your body. Saturated fats also raise cholesterol levels, damaging blood vessels, and this may result in heart disease.
If there are many different foods on the table, you will want to try each one of them, and in that case you will eat beyond fullness. That is why it is recommended to limit your food choices to a few types of snacks. At the same time, having some options to choose from is also important, so that you do not get tired of the same foods.
It is important to replace high-calorie drinks with pure water. High-calorie beverages are dangerous because they pass through your stomach without producing a sense of fullness, yet they may contain a lot of calories, all of which can be stored in your body as fat. Your stomach will remain hungry and you will consume still more calories. That is why drinking sweet, high-calorie beverages is a way to gain weight, not to lose it.
Want to make a fake map? Want it to be original? Look below if you do want to!
Making Your Own Map
1. Get graph paper and a pen ready. You might say, "I want to erase," but you can't if you follow this! Okay? Good.
2. Draw some dots, some in circles, some not. Name the dots. The ones not in a circle are towns and villages; the others are cities.
3. Draw squiggly lines that snake around and all. A few of them might go through towns or cities. Name the lines. These are rivers. You can make one or multiple gather into a circle (a lake) somewhere on the map.
4. Add upside-down Vs in little clusters and name them. These are mountains. Make some more rivers that start in the mountains.
5. Draw forests, which could be little circles, trees, or anything on the map. Name the forests.
6. Now that you have natural features and towns, add the borders to the nation(s) on your map. Name your nation or nations and draw and name a star (a capital city). In olden times, borders were natural things like lakes or mountains.
7. Add the shoreline (optional) and islands.
8. Add other stuff like dams, walls, trade routes, and ruins of old things.
9. Clean up the map. Any stray lines or messed-up things can be changed. Name everything that is unnamed, and voila! You have it!
- Add a key and a compass rose.
- No one has to see this map but you.
- Some rivers start in mountains, some don't.
- This makes a really weird/cool/realistic map.
Things You'll Need
- Pen or marker
- Paper (preferably graph)
Jul 15, 2013, 06:49 AM | Connecticut Today
Coordinated Community Efforts Needed to Control Ticks and Spread of Lyme Disease
A recent article in the Boston Globe talks about the Lyme Disease epidemic in New England, and the challenges in coordinating region-wide efforts to help stem the tide of what is becoming a major public health issue.
The disease, transmitted primarily through bites from deer ticks, has seen a sharp upswing in the past decade as deer populations across the state have exploded. According to the Centers for Disease Control, in 2011 (the most recent year for which Lyme Disease statistics are available) Connecticut recorded more than 3,000 cases of Lyme Disease, or an incidence rate of about 56 per 100,000 residents, among the highest in the nation. Other states across the Northeast were afflicted with similarly high totals.
From the Boston Globe article:
This regional epidemic has yet to trigger a broad public health response on par with prevention blitzes aimed at some other pervasive maladies. That is partly because ticks are a devious foe. Vacation spots are often loath to publicize the threat for fear of scaring off business, and the public and politicians often do not perceive Lyme as a serious malady. The result is a lopsided spending gap between prevention efforts for tick- and mosquito-borne illnesses.
Various locales have gone with different approaches to dealing with the problem, from task forces and deer-eradication programs to increased community awareness and possible legislative solutions.
At the heart of the issue is how to control deer, which play a key role as host in the tick's life cycle and whose numbers continue to grow—in some suburbs in Connecticut, it's estimated that there are as many as 60 deer per square mile, so the chances of encountering deer ticks are very high. One of the most effective methods to limit the deer is increased hunting, but that issue is fraught with controversy, with passionate and vocal groups well-organized on both sides.
Even if that issue is decided, no one has really stepped forward to establish a coordinated, region-wide tick-control program, which will be needed sooner rather than later if the numbers of those infected by Lyme Disease continue to rise.
To read the full Boston Globe article, click here.
The waveplate can manipulate the polarization state without a change in light intensity.
Commonly used applications for the waveplate are described in this section.
The half waveplate is used to change direction of the linear polarization.
When the crystal axis (fast axis or slow axis) is aligned parallel to the polarization direction of the incident beam, the polarization of the exit beam will maintain the same direction.
When the crystal axis of the waveplate is rotated by θ from the polarization direction of the incident beam, the polarization of the exit beam rotates by 2θ from the polarization direction of the incident beam. Using this effect, the direction of the linear polarization can be rotated arbitrarily by rotating the half waveplate. This method has the merit that the polarization direction is rotated without a change in light intensity.
When the polarization direction is rotated by 90° with the waveplate, the extinction ratio of the linear polarization is slightly degraded due to retardation error. For this reason, inserting a polarizer after the waveplate is recommended for precise polarization measurements, which require a high extinction ratio.
If a quartz waveplate with high parallelism is used, the polarization direction can be changed without beam deflection.
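The 2θ rotation described above can be checked numerically with Jones calculus. This is a sketch under idealized assumptions (a lossless plate, global phase dropped), using the standard Jones matrix for a half waveplate with its fast axis at angle θ:

```python
import numpy as np

def half_waveplate(theta):
    """Jones matrix of an ideal half waveplate, fast axis at angle theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s],
                     [s, -c]])

horizontal = np.array([1.0, 0.0])          # linear polarization along x
theta = np.deg2rad(15)                     # plate axis rotated by 15 degrees
out = half_waveplate(theta) @ horizontal   # exit polarization vector
print(round(np.rad2deg(np.arctan2(out[1], out[0])), 6))  # 30.0 -> rotated by 2*theta
```

Note that the norm of the exit vector stays 1: the direction rotates with no change in intensity, as the text states.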
By combining the polarization beam splitter (PBS) and half waveplate, it is possible to vary the light intensity.
The method can be used to adjust the reflectance as well as the transmittance, and also the ratio between transmission and reflection. It is highly efficient: all of the transmittance loss is converted into reflectance gain. One notable feature is the wide dynamic range of light intensity adjustment (97% to 0.3%, depending on the quality of the PBS).
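For ideal, lossless components this split can be sketched with Malus's law: rotating the half waveplate by θ rotates the polarization by 2θ, so the PBS transmits a fraction cos²(2θ) and reflects the rest. (The 97%/0.3% limits quoted above come from real PBS quality, which this idealized sketch ignores.)

```python
import numpy as np

def transmitted_fraction(theta_deg):
    """Fraction of intensity leaving the PBS transmit port, ideal HWP at theta_deg."""
    return np.cos(np.deg2rad(2 * theta_deg)) ** 2

for deg in (0, 15, 22.5, 30, 45):
    t = transmitted_fraction(deg)
    print(f"HWP at {deg:4.1f} deg:  T = {t:.3f}, R = {1 - t:.3f}")
```

Rotating the plate from 0° to 45° sweeps the transmit port continuously from full transmission to full reflection.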
A half waveplate is used when aligning the P- and S-polarized beams separated by a PBS into the same polarization direction.
Below is an example of optical system to expose the grating by two-beam interferometry.
Interference fringes with good contrast can be obtained by aligning the polarization direction.
The quarter waveplate is used to convert linear polarization into circular polarization, and is also commonly used for polarization measurements.
In experiments using a laser, the laser oscillation may become unstable if back reflection from a mirror or other optics returns to the laser.
An optical isolator is used to prevent this returning light.
A typical optical isolator is composed of a quarter waveplate and a polarizer.
The light passes through the quarter waveplate twice during the round trip to the mirror and back. Since circular polarization does not change its rotational direction upon mirror reflection, a total retardation of 180 degrees is accumulated over the two passes through the quarter waveplate. With this retardation, the polarization direction of the mirror reflection that has passed back through the quarter waveplate is rotated by 90 degrees with respect to the incident polarization direction. The reflected light is therefore unable to pass through the polarizer, and the back reflection is blocked.
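The 90-degree round-trip rotation can be verified with a two-pass Jones-matrix sketch. This treats the return pass as a second application of the same quarter-waveplate matrix (fast axis at 45 degrees, global phases dropped, mirror coordinate conventions glossed over), which is a common simplification rather than a full model:

```python
import numpy as np

# Ideal quarter waveplate, fast axis at 45 degrees (global phase dropped)
Q = 0.5 * np.array([[1 + 1j, 1 - 1j],
                    [1 - 1j, 1 + 1j]])

horizontal = np.array([1.0, 0.0])   # polarization passed by the input polarizer
one_pass = Q @ horizontal           # circular polarization heading to the mirror
round_trip = Q @ one_pass           # back through the plate after the mirror
print(np.round(np.abs(round_trip), 6))  # [0. 1.] -> vertical, blocked by the polarizer
```

After one pass the two field components have equal magnitude (circular polarization); after the second pass all the light sits in the orthogonal linear state, which the input polarizer rejects.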
A feature of the quarter waveplate is that it can convert incident linear polarization not only into circular polarization, but also into other states of linear polarization or various elliptical polarizations. Conversely, when the elliptical axis of the incident light is accurately aligned with the optical axis of the quarter waveplate, an arbitrary elliptical polarization can be converted into linear polarization.
The azimuth γ of the resulting linear polarization is determined by the ellipticity of the incident elliptical polarization, and corresponds to half of the retardation Δ. Polarization measurement using this principle is called the Senarmont method. The Senarmont method is commonly used for measuring minute stress (birefringence).
A Michelson interferometer using a PBS and quarter waveplates is introduced here. By utilizing polarization, unnecessary back reflection is reduced and the stability of the interference fringes is enhanced. The incident light is collected on the observation side without loss, but in order to observe the polarization, a polarizer must be inserted, which reduces the light intensity by 50%.
How do binaural frequencies create states of deep healing and peace?
A binaural tone is created within the brain when it is presented with two slightly different frequencies at the same time. This makes the brain sync with the ‘imaginary’ tone, a phenomenon known as entrainment.
As we all know, the state of our brain directly affects how well we can perform the activities at hand and how we process different types of information. We might not be able to name the dominant frequency (brain wave) we are experiencing at any moment in time but we certainly know if we’re feeling sleepy and lethargic, or supercharged and active! We know when we are feeling blissed out or when we are feeling restless and edgy.
Whether you want to achieve a more focused state, meditate more deeply, get a better night’s sleep, activate your body’s healing processes, you need to be able to change your brain state at will.
Today, we’re living in an over-medicated world where far too many have to rely on pharmaceutical drugs to try to heal chronic illness, or attempt to get a good night’s sleep, even though it’s well known that pharmaceuticals interfere with key biological processes and damage the body over the long run.
What you might not know is that you can speed up healing in your body and get a good night’s sleep when you listen to specific frequencies that tap you into a particular brain wave and state of consciousness.
The brain is a powerhouse of activity, with the neurons in your brain constantly talking to each other over a complicated interconnected network. When these neurons fire off, they release small amounts of electricity, which can be measured using an EEG, measurable frequencies commonly known as brainwaves. Generally speaking, the higher the brainwave frequency, the more alert and awake you are. We experience lower frequency brainwaves when we meditate, dream and then go into a dreamless sleep (where real healing occurs).
How do Binaural Beats work?
In 1837, a physicist named Heinrich Wilhelm Dove discovered that listening to certain tones of sound could actually induce certain states of mind. If you listen to a tone of 410 Hz in your left ear and 400 Hz in your right ear, your brain will “make up” the difference and hear an imaginary tone of 10 Hz. You don’t have to think about it; it’s a natural consequence of the sound your ears are receiving and sending to the brain (instantaneously!).
This imaginary tone is called a binaural tone, and your brain then syncs to that specific frequency.
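Dove's 410 Hz / 400 Hz example can be sketched with NumPy. The sample rate and duration below are arbitrary choices, and writing the two channels out as a stereo WAV file is left out:

```python
import numpy as np

rate = 44100                         # samples per second (a standard audio rate)
t = np.arange(rate * 2) / rate       # two seconds of time stamps
left = np.sin(2 * np.pi * 410 * t)   # 410 Hz tone for the left ear
right = np.sin(2 * np.pi * 400 * t)  # 400 Hz tone for the right ear

# Played into separate ears, the 10 Hz "binaural" tone exists only in the brain;
# mixed acoustically, the same pair would beat audibly at the difference frequency.
beat_hz = 410 - 400
print(beat_hz)  # 10
```

The key point is that neither channel contains a 10 Hz component: the difference frequency is constructed by the listener.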
Binaural research has shown that it’s possible to change how you think and feel simply by listening to certain binaural frequencies. Intuitively we can understand this because we all know just how powerful our moods are affected by listening to music we love.
I’ve personally found some binaural frequencies to be very harsh, and not fit for purpose. There’s a lot of junk out there, so listen to your own guidance as much as any research that tells you what “you’re supposed” to feel or think when you listen to a particular sound/frequency.
Does it resonate? Does it create harmony within you, or do you feel disturbed by it?
Understanding Brain Waves
Our brain waves pulsate and oscillate at particular frequencies that can be measured, just like sound waves, in cycles per second. There are basic delineations of different brain wave states, based upon the cycles per second of the brain.
33–42 Hz – Gamma
Spiritual awakening, universal love and harmony
These brainwaves are a bit of a mystery. They’re the highest frequency measurable by today’s instruments but scientists are a little dumbfounded by them. Gamma brainwaves don’t translate to feeling active and alert. Rather, what’s been discovered is that neurons are firing so harmoniously that people feel like they are having a spiritual experience. This brainwave state has been associated with “aha moments”, integration, synthesis, expanding consciousness and superconscious states of learning.
12–20 Hz – Beta
Reaction, engagement, and sensory experiences
They are found in our normal waking state of consciousness. Beta waves are present when our focus of attention is on activities of the external world. Your brain is actively engaged and you are aware of your surroundings. This is the dominant ‘active’ mode of most people when working, shopping etc.
23–33 Hz – High Beta
Associated with hyperactivity – and also anxiety.
8–12 Hz – Alpha
They occur when we daydream and are often associated with a state of meditation. Alpha waves often become stronger and more regular when our eyes are closed.
3–8 Hz – Theta
Dreamy, otherworldly, and surreal
You may have experienced Theta right before drifting off to sleep, during a lucid dream, or during a deep meditation. In Theta, you no longer sense the outside world, but you are aware and conscious of your internal world. They are found in states of high creativity and have been equated to states of consciousness found in much shamanic work. Theta waves also occur in state of deep meditation and sleep.
0.5–3 Hz – Delta
Asleep, regenerative and healing
They occur in states of deep sleep or unconsciousness. Some of the newer brain wave work indicates that a state of deep meditation produces Delta waves in highly conscious individuals.
Your body needs this state to heal and regenerate. When you take a drug to go to sleep, your mind is still active, so you don’t actually reach this state. Instead, using sound and practicing yoga nidra are two of the most effective ways of allowing the brain to rest.
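The band boundaries listed above can be collected into a small lookup. Note that the ranges given in the article are approximate and leave small gaps (for example between 20 and 23 Hz), so the cut-offs chosen here are one reasonable reading, not a standard:

```python
def brainwave_band(freq_hz):
    """Map an EEG frequency to the band names used above (approximate cut-offs)."""
    if freq_hz < 0.5:
        return "below delta"
    if freq_hz < 3:
        return "delta"      # deep, regenerative sleep
    if freq_hz < 8:
        return "theta"      # dreamy, meditative states
    if freq_hz < 12:
        return "alpha"      # relaxed daydreaming
    if freq_hz < 23:
        return "beta"       # alert, engaged waking state
    if freq_hz < 33:
        return "high beta"  # hyperactivity, anxiety
    return "gamma"          # harmonious, "aha moment" states

print(brainwave_band(10), brainwave_band(40))  # alpha gamma
```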
Entrainment has been used by medicine men/women and shamans from different cultures since prehistoric days. The ability to create altered states of consciousness through drumming and chanting specifically. Shamanic drumming encompasses a frequency range of from .8 to 5.0 cycles per second, which is in the “theta” range for brain waves.
Tibetan bells, or Ting-Sha’s, have been utilized in Buddhist meditation practice for many centuries. An examination reveals that the two bells, which are rung together, are slightly out of tune with each other. Depending upon the bells, the difference tones between them create ELFs somewhere between 4 and 8 cycles per second. This falls exactly within the range of the brain waves created during meditation and helps shift the brain to these frequencies. It is little wonder that Tibetan bells are experiencing a worldwide increase in popularity as tools for increased relaxation and reduction of stress.
How to use Binaural Tones
If you want to achieve certain states of consciousness, use binaural frequencies to help. Sound is a powerful gateway to explore our inner dimensions if it is used with awareness.
Help us create a Conscious App – and experience the Power of Binaural Frequencies!
We’re creating a conscious app with access to life-changing videos on Energy Medicine, Yoga, Chakra Balancing and also great “healing sounds” audio that incorporates the power of binaural frequencies. Sign up here to know more and support us on Indiegogo when we launch on 26th/27th November 2018.
1997 Computerization

In 1997 the Election Commission computerized the City of Cambridge PR elections using a precinct-based optical scanning system and specially designed software. The PR count, which used to be performed manually during the course of a week by a staff of over a hundred, is now completed in a matter of minutes through the electronic sorting, counting, and transfer of votes.
Unofficial results are available on election night. These results are “unofficial” because not all ballots have been counted. The tabulation does not include ballots with write-ins or ballots marked in a way that cannot be read by the scanners. These are auxiliary ballots that must be processed manually and added to the computer totals. They are added on the day after the election. Only then are the results declared official.
To learn more about the unfolding of a PR election in Cambridge, check the Cambridge Board of Elections official website:
Error rate

In Cambridge, the elections have an average error rate of 3.16%. This figure includes both incorrectly marked ballots and blank ballots where the voter may have only participated in a higher-level election. In the future, to further reduce this error rate, Cambridge could allow voters to correct errors.
There were very few invalid ballots.
| Year | Election | Error rate |
|---|---|---|
| 2005 | City Council election | 0.81% |
| 2005 | School Committee election | 4.44% |
| 2007 | City Council election | 0.64% |
| 2007 | School Committee election | 3.16% |
The FDA and USDA currently split food safety responsibilities. A total of 12 agencies enforce about 30 different laws.
In another blow to efforts to make school lunches healthier, pizza sauce is considered a vegetable.
New, stricter standards placed on the chicken and turkey industries are predicted to prevent 25,000 cases of foodborne illness a year.
The FDA was awarded over $4 billion in funding to help pay for its food safety programs, while the USDA experienced budget cuts.
Lawmakers worry that the Obama administration's focus on health care reform may be impeding on climate change legislation.
With the 40th anniversary of the Moon landing upon us, Earth Eats' Cory Barker looks at what could be this generation's Apollo program.
The Obama administration has released a report on the effects climate change is having on our world and what all we should be doing to prevent it.
Food Safety is growing into a "hot button" issue among consumers and lawmakers. Earth Eats has some resources to help you stay informed.
Together with two FRA sister reports on the EU’s air and southern sea borders, this report’s findings serve to inform EU and Member State practitioners and policy makers of fundamental rights challenges that can emerge at land borders. Increased awareness should also help to create a shared understanding among border guards of what fundamental obligations mean for their daily work, ultimately enhancing fundamental rights compliance at the EU’s external borders.
Equality is one of the five values on which the European Union (EU) is founded; yet women here face inequalities in many respects. Extreme poverty, exclusion and discrimination burden Roma women even further. The European Union Agency for Fundamental Rights (FRA) researched the situation of Roma women in 11 EU Member States.
This report examines the results of the European Union Agency’s for Fundamental Rights (FRA) 2011 Roma survey on education, which show that considerable gaps between Roma and non‑Roma children persist at all educational levels.
This report presents the results of the European Union Agency’s for Fundamental Rights (FRA) 2011 Roma survey on poverty and employment, which show, for example, that although most Roma are actively seeking a job, only about a third of those surveyed have paid work, which is often precarious and informal. It reveals multiple challenges: very low employment rates were observed, in particular for young Roma.
Primarily using data and information collected from five EU Member States, this paper briefly describes the phenomenon of forced marriage and selected legislative measures taken to address it. It lists promising practices for the prevention of forced marriage and for supporting victims. The paper covers only one among many forms of violence against women analysed by FRA in its Violence against women: an EU-wide survey. Main results report (2014).
In light of a lack of comparable data on the respect, protection and fulfilment of the fundamental rights of lesbian, gay, bisexual and transgender (LGBT) persons, FRA launched in 2012 its European Union (EU) online survey of LGBT persons’ experiences of discrimination, violence and harassment.
This summary, and the related full report, look at how fundamental rights obligations translate into practical border management tasks. The report points out challenges as well as promising practices of integrating fundamental rights compliance into operational tasks that do not compromise but instead enhance the effectiveness of border checks.
The EU and its Member States took a variety of important steps in 2013 to protect and promote fundamental rights through new international commitments, revamping legislation and pursuing innovative policies on the ground. Yet fundamental rights violations seized the spotlight with distressing frequency: would-be migrants drowning off the EU’s coast, unprecedented mass surveillance, racist and extremist-motivated murders, child poverty and Roma deprivation.
April showers bring May flowers, the saying goes.
But not last year. April 2012 brought raging wildfires to the region, including a blaze that blackened over 1,100 acres from eastern Brookhaven into Riverhead Town, destroying homes and property and injuring volunteer firefighters along the way.
Already this year, on Wednesday, the National Weather Service warned of an enhanced threat of wildfires due to dry air and wind.
Although local fire departments have taken the necessary steps to improve what was already a lauded response to the Wildfire of 2012, state officials in Albany should step up their response and get serious about preventing such disasters from ever happening.
Controlled burns, also known as hazard reduction burning, in forested areas are still rarely used in Suffolk County as a way to prevent these windswept and uncontrolled fires from sparking in the first place. Although last year’s wildfire was deemed intentionally set, such fires are natural occurrences and necessary for regeneration within the Pine Barrens. Controlled burns, performed in small areas when conditions are right, should be undertaken regularly as they are in other states with more wildfire experience.
Government officials and residents in the Midwest know from experience that if they don’t act first, Mother Nature will find a way to take care of her forests. And she doesn’t take lives or property into account. Richard Amper of the Long Island Pine Barrens Society, who favors controlled burns, reports this week that state and local officials are looking into conducting more controlled burns, also known as prescribed fires.
But why is everyone still looking, when so many months have already passed?
Since last year, local firefighters have been taking important steps toward better equipping themselves for forest fires and getting more training. But controlled burns are a much less expensive way to ensure public safety, and they are good for the environment. Though the practice is somewhat unfamiliar here, residents who live near forested areas must understand it and refrain from filing complaints with the government during such burns; complaints have led to the cessation of burn programs in other states, with disastrous results.
New York State Central Pine Barrens Commission executive director John Pavacic said that, while the commission has acted in trying to educate the public and fire departments on how to prevent and tackle accidental fires, it’s also “taking a fresh look” at updating its fire management plan. But it’s a year later and time is of the essence.
A plan that includes a sophisticated controlled burn program should be presented to the public sooner rather than later.
Mastering the American Psychological Association’s (APA) (1994) writing and style guidelines may be as challenging as the most difficult clinical skill for some nursing students. There are a variety of software products available to assist students in applying the APA standards to their written work. These software products vary widely in cost, ease of use, and features. The purpose of this paper is to review the current software products available for Windows® computers. The intent is to provide students who may be new to the APA writing style and to professional writing with the necessary information to make an informed software purchasing choice.
There are two types of software commonly used to assist students to format written work: formatting helpers and referencing helpers, the latter also called bibliographic software. Formatting helpers are those software products that assist the user in setting up the formatting of the text of the paper and references according to APA or other style guides such as MLA. Reference helpers assist with formatting references when writing a paper; however, some also provide sophisticated assistance in searching online libraries and the Internet.
Identification of Software
The software titles used in this review came from several sources. The primary source was an Internet search using the term “APA software”. Although no additional titles were identified in this way, students and faculty were asked which software they used, if any, to assist in creating class papers. Finally, a discussion on the subject in the Nurse Educators listserv provided additional anecdotal information on various software programs. For the purposes of this paper, only software available for Windows® operating systems was reviewed. However, several of the programs are available in a variety of languages and for Macintosh® computers. The author purchased all formatting programs.
Evaluation of Software
One important point the author would like to make is that ease of use is in the head and hands of the user. Learning style, computer skill, and other factors greatly influence a person’s judgment that one program is easy to use and another is not. The author is an advanced computer user with years of experience in writing and publishing. Since students are more likely to be novice computer users and writers, review of the products was conducted with a “beginner’s mind”. The goal was to use the software with minimal technical assistance or use of printed directions.
Another important point is that no software program eliminates the need for the printed APA manual. All programs reviewed for this article ease the work of creating references. Most do not provide the same support for the other writing guidelines. Students are advised to use the APA manual as a primary tool for constructing papers and to make decisions about the format of the paper while using any of the programs. This will allow for a greater degree of accuracy in applying APA guidelines.
Each of the software products is reviewed in alphabetical order. Refer to Table 1 for a list of the Internet addresses for ordering either demo versions or the entire program. The reference management software includes two distinct types of programs. The first programs are those that the author calls “format helpers”. These programs assist the student with actually writing the paper. They include information on formatting the body of the paper and the reference list. However, they do not include assistance with researching databases such as MEDLINE® or the Library of Congress, nor do they allow for searching online databases. The second type of program is called “reference helpers” in this article, also known as bibliographic software by manufacturers. These programs do not provide assistance with formatting a paper. Instead, most are sophisticated database programs that greatly decrease the work of organizing references, searching databases, and creating user defined reference material.
Templates. Most of the format helpers are add-ins to Word; that is, the software is installed into Word as a template rather than as a separate program. The template is a preformatted file that provides the basic outline for the paper. To open any software template, open the word processing program, select File, then New from the toolbar, and click on the template name; the file then opens.
APA Style Helper 2.0. This program is available directly from the American Psychological Association’s web site. The download was lengthy and had to be repeated twice. The demo version does not allow full access to the features of the program. In order to fully use the downloaded version, a serial number must be entered into the program. The author’s electronic mail request for the serial number took several days to receive an answer, as did other requests for technical information.
APA Style helper works in a unique way. The user first enters in the manuscript identifying information (headers, title, etc) and then the references. Once this preliminary data is saved, the user can then open up the file within a word processing program. This is in contrast to all other format helpers which function as templates in a word processing program. Students have reported this two-step process in writing a paper to be somewhat confusing. The most important issue is that the initial manuscript must be saved where the user can find the file to open in the user’s word processing program. In order to update or edit a reference list, the file must be re-opened in the APA program. Once it is open in the APA program, it can be edited, saved, and then reopened in the word processing program to continue working on the body of the paper.
The help files are extensive. They cover not only how to use the program but APA guidelines as well. In order to use the help menu, the user must open up a browser for the files to be displayed. This is another unique feature of this program.
FormatEase®. FormatEase® provides the most comprehensive formatting help of all programs reviewed. It is also relatively easy to install. The default settings for the install were not the correct ones for the author’s computer set up; however, it took just a few minutes to find the correct path to install the template into Word.

FormatEase® provides an array of choices in formatting a document. The user can select a paper, a dissertation, thesis or term paper. For the purposes of this review, the “apapaper” was selected. Once the type of document is selected, the user is presented with an instructional document in which the type is replaced as the paper is created. There is a line which says: “The Title of the Paper Goes Here” and the user replaces that text with the title of the paper.
As the user replaces and adds to the text in the stock document, the paper conforms to APA standards. This product also leads the user through the five levels of headings, the appendix and information on footnotes and figures. This software may be the most suited for students who have no experience with APA and need the largest amount of information about the style guide on their computer screen at one time.
Microsoft Template. Microsoft maintains a library of templates at their site. The APA template is a basic tool for completing documents. The template is formatted to APA line spacing, margins and headers. The template includes a narrative describing how to complete an abstract, table of contents and other APA style features. There is no direct help in formatting references but there is a list of reference examples. This product’s best attribute is that it is free to Microsoft users.
PERRLA. This program was the least expensive and easiest to begin using. An interesting aside is that the developer of the program is an FNP. As with other template based format helpers, PERRLA uses Microsoft Word and is an add-in. The web site gives clear directions for installing PERRLA into Word as a template. Electronic requests for information were returned the same day.
Once installed, an APA style paper is initiated by selecting “new” from the file menu. PERRLA 4.2 is then selected and the user is presented with a popup window that asks for the running head, header, title and author’s name. This information is then placed appropriately throughout the document.
Adding references is very simple. Click on the icon “create citation” and another popup window appears. This window asks for information on the type of publication, for example book, journal, or a chapter in a book. Once the type of publication is selected, the user is prompted for specific information about the citation. As the user inputs information, references to the APA manual also appear. This is a useful feature as it also reinforces standards while assisting with the creation of the paper. Once all the information is completed, the citation is inserted into the paper and the reference is placed at the back of the paper.
Reference Point®. Reference Point® (RP) is another inexpensive APA template add-in software that is easy to use. Many of the author’s students use this program and recommend that users insert all references first and then type the paper. This helps RP format citations using the same reference correctly each time it appears in a paragraph.
When RP is installed, a template is automatically inserted into Word. To open RP, Open Word, select new file and then the template for RP (APA2000 is the file name if it is being used with Word 2000). When a new document is started, the user is prompted to select the header, running head, title, and so forth until all document options are inserted. Any notations that are not needed, can be bypassed by leaving the field blank.
Suggested Group Learning Activity. It is possible and can be highly desirable to have students create their own template. This is an especially important learning activity for graduate students. The author has given students an assignment of developing their own APA template in both traditional and online classes. This encourages students to learn new features of their word processing programs and to develop something that can be very helpful. This project has been especially helpful when nursing programs have individualized guidelines for formatting papers.
Referencing helpers are those programs that provide assistance with searching, cataloguing, and retrieving information from online libraries and the Internet. They do not provide help with formatting the text of an APA style paper beyond the reference list. All programs format references in multiple styles, including APA. They are amazingly powerful tools that are most helpful when introduced to students early on in their programs. This would allow students to maintain their own “libraries” of research material. These programs are especially helpful for graduate students and researchers. ISI Research Soft distributes Reference Manager, EndNote and ProCite. Oberon distributes Citation and BookWhere2000.
Biblioscape®. Biblioscape provides several products that assist in searching, cataloguing and managing references. Biblioscape organizes references into folders like other reference formatters. It can search both libraries and the web and supports SQL servers. There is a built-in spell checker, which is a value-added feature of this product. Biblioscape reports it works well with groups of users, a feature not tested in this review.
Biblioexpress is a free, smaller version of the referencing software, with fewer features for searching. It is primarily used for collecting references using specific styles. Many undergraduate nursing students will find this program is all they need for collecting references for papers.
Biblioweb is a product designed specifically for use on an organization’s Intranet. The manufacturer’s site allows users to test out a live database. While this program is very useful in formatting a large number of references, it may be beyond the needs of most students. Nursing departments that conduct collaborative research may benefit most from use of Biblioweb.
BookWhere2000®. BookWhere is specifically designed to search online libraries. Search results can be exported into Citation®, which is a popular referencing format helper made by the same company. Databases can be searched simultaneously, which saves time. This program was very easy to use. The author was able to search several databases within minutes of installing the software. It is designed to work specifically with Citation®.
Citation®. Citation is one of several powerful database programs that work from the tools menu of Word or WordPerfect word processing programs. Citation allows a user to create nearly unlimited databases, add notes to records, and search and retrieve references. Citation also allows users to find duplicate entries, merge databases and spell check documents.
Citation is beyond what a typical undergraduate student would need for referencing helpers but would be very useful for graduate students and researchers. On the Citation web site, there is an excellent lesson plan for instructors to use that shows students what bibliographic software does and how to use it. Click on teaching notes from the home page to find this information.
EndNote 4.0®. EndNote® provides a wide variety of tools for creating, organizing, sorting and retrieving information for bibliographies. EndNote® uses “libraries” to organize data. These libraries can be sorted in a variety of ways and hold thousands of references. Bibliographies or reference sheets can be created by dragging and dropping references or by cutting and pasting. The user can create custom search terms for databases and link these terms to specified fields. Reference lists are generated by selecting references from a list and dragging and dropping them into the file. An add-in is available to make bibliography selection automatic within a word processing program.
The author was able to use this program with a minimum amount of instruction from the user manual. A connection to several databases was made on the first attempt. This program is used by many students, faculty and librarians who are in contact with the author. No user reported dissatisfaction with the program’s ability to manage references.
ProCite is another of the powerful Internet library search programs. Over 200 libraries can be searched using ProCite. It is also possible to collect references across several databases, which decreases the time necessary to create expansive bibliographies for larger projects. The user can collect reference information directly from the web, although only text is stored, not graphics, tables, or figures.
Reference Manager®. Reference Manager (RM) provides another solid product for searching, organizing, and sorting references. It includes the searching program, the organizer, and the database builder features. The author found this one the easiest to read from the computer screen. The features of RM, like those of other reference helpers, are very sophisticated and will usually exceed the needs of an undergraduate student. Users of RM report a steep learning curve for this software, as the features are extensive. However, once the overall organization of the files is developed, updating is very easy.
Students and researchers can use this program to track and insert references into manuscripts, and to catalog research results. Academic institutions may want to use this product, or others, to track faculty publications. Librarians can use this product to keep track of specific collections and to disseminate information easily to groups of people. Faculty will find this and other reference manager programs ideal tools for developing reading lists for students. Other features are beyond the scope of this paper; however, the author urges any student beginning a thesis or dissertation to use one of the programs listed here. These programs not only save time but promote the discipline in researching that is so important in quality research studies.
Scholar’s Aide®. Scholar’s Aide, like PERRLA, was first developed by an individual who needed the product. It is the program of choice for students on a budget. There is a free 60-day trial period and a free “lite” version, and it is the least expensive of the bibliographic software reviewed. The developer also offers a “starving student” discounted price for the full version.
Scholar’s Aide (SA) is very user friendly. The interface between the user and the computer is simple to use. The program actually consists of two programs. One organizes notes and the other the references. Each feature provides the user with an array of possibilities for searching, organizing, and formatting the outputs.
The program can be downloaded from the web site. The author found that several downloads were necessary in order to get the program properly installed. The downloaded files come in a compressed file format, so it is necessary to first decompress or unzip the file before using the program. Users will need a program that “unzips” files in order to be able to properly install this program. Directions are clearly posted on the web site to do this.
The use of specific style guidelines continues to be an important part of scholarly writing. Although there are a variety of style formats, APA continues to be the dominant style guide for student and practitioner health professionals. Several programs have been reviewed which will assist the user in formatting papers and manuscripts to meet the APA standard guidelines. The ultimate choice of the software depends on the user’s budget, willingness to use the printed reference book, and familiarity with computerized word processing programs.
As the amount of information available expands, it has become necessary to consider using additional computer-aided tools to keep track of relevant information. It is also necessary to present information in a scholarly format so that others will be able to find and utilize reference material as cited in prior publications. Computer software products provide much support for students, faculty, and researchers in gathering, organizing, and utilizing reference material. However, in order to get the maximum benefit from these products, users must devote the necessary time to understand all of the features of the various products. None of the products reviewed replace the conceptualization and organization a writer puts into a paper or literature search. However, they make the completion of many tasks possible in a shorter period. Like other software programs, their usefulness will come directly from the user’s knowledge of the product.
You may have seen a wooly bear caterpillar cross your path this fall.
The wooly bear, Pyrrharctia isabella, is often seen traveling in September and October, on the hunt for a sheltered spot to spend the winter.
The caterpillars have a distinct, fluffy appearance. Another thing that makes them lovable is that they eat weeds, such as dandelion, according to Delaware entomologist Brian Kunkel.
"Some people would like to know there's a caterpillar out there eating something you don't want," he said.
With its variable black and brown bands, the wooly bear is long rumored to have the ability to predict the coming winter – a wide brown band means a mild winter, a thin one means a severe winter.
But is there any truth behind that assertion?
Well, no, not really. There is a theory that weather patterns during the caterpillar's larval development contribute to changes in the length of the bands, but when it comes to hard, well-tested science, the evidence just isn't there.
Retired University of Maryland Eastern Shore professor and entomologist Jeurel Singleton couldn't say for sure what causes variations in the bands, but she does have some ideas. She completed her dissertation in the 1980s on wooly bear caterpillars, analyzing their vision and "why they cross the road."
"The length of the black on them, it has to do with the what they experience when they're growing," Singleton said.
Black absorbs more heat than brown, so, for example, if the caterpillar were cold during a larval stage the implication is they may grow wider black bands to help warm up.
"It's partially a myth, because no one has really studied it," she said.
So for forecasting the future winter, it is best to stick with the advice of meteorologists.
By this time of year, the wooly bear will have typically found a place to settle down for the cold season. After hibernation, what is next for the wooly bear? It will pupate and turn into an Isabella tiger moth.
On Twitter @rachaelapcella
Most types of OI are inherited in an autosomal dominant pattern. Almost all infants with the severe type II OI are born into families without a family history of the condition. Usually, the cause in these families is a new mutation in the egg or sperm or very early embryo in the COL1A1 or COL1A2 gene. In the milder forms of OI, 25-30 percent of cases occur as a result of new mutations. The other cases are inherited from a parent who has the condition. Whether a person has OI due to a new mutation or an inherited genetic change, an adult with the disorder can pass the condition down to future generations.
In autosomal dominant inherited OI, a parent who has OI has one copy of a gene mutation that causes OI. With each of his/her pregnancies, there is a 1 in 2 (50 percent) chance to pass on the OI gene mutation to a child who would have OI, and a 1 in 2 (50 percent) chance to pass on the normal version of the gene to a child who would not have OI.
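The 1-in-2 figure follows from the affected parent passing on one of their two alleles at random. A small illustrative simulation makes this concrete (the "M"/"n" genotype notation here is invented for clarity, not standard genetics nomenclature):

```python
import random

def child_affected_dominant():
    """Simulate one child of an affected parent and an unaffected partner.

    Illustrative notation: the affected parent's genotype is ("M", "n"),
    where "M" is the dominant OI mutation and "n" the normal allele; the
    unaffected partner is ("n", "n"). With a dominant mutation, a single
    "M" allele is enough to be affected, so each child has a 1 in 2 chance.
    """
    from_affected = random.choice(("M", "n"))   # one allele, chosen at random
    from_unaffected = "n"                       # partner can only pass "n"
    return "M" in (from_affected, from_unaffected)

# Rough check of the 50 percent figure over many simulated children:
fraction = sum(child_affected_dominant() for _ in range(100_000)) / 100_000
# fraction should come out close to 0.5
```

This is only a sketch of the probability argument; it does not model new mutations or the recessive pattern described below.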
Rarely, OI can be inherited in an autosomal recessive pattern. Most often, the parents of a child with an autosomal recessive disorder are not affected but are carriers of one copy of the altered gene. Autosomal recessive inheritance means two copies of the gene must be altered for a person to be affected by the disorder. The autosomal recessive form of type III OI usually results from mutations in genes other than COL1A1 and COL1A2.
The New Booker School in Wolfville, Nova Scotia has decided to learn how to practice the art of video by playing Movie Games. Their goal is to enable their students to learn more about their community through the process of recording and sharing short video stories. These could take the form of interviewing various people of all ages and abilities in the community. More information about The New Booker School is available on its website.
This is a report about the project from the school:
Learner Profile Attribute of the Week:
Communicators: Students express themselves and information through a variety of modes of communication.
Unit of Inquiry
There’s always a renewed energy and excitement when we start a new unit. For the next several weeks we’ll be focused on Who We Are. The Central Idea is: “everyone has a story.”
-Genevieve Allen was a guest speaker this week. She introduced the Kings County Cultural Mapping site and showed samples of films that have been uploaded there.
-The Summative Assessment task this unit will be the creation of a short digital film under the guidance of Kimberly Smith. If they are suitable, the films may be uploaded onto the Cultural Mapping Site and/or our school blog. We had our first class with Kim yesterday and it’s fair to say that our students are very excited about this project.
-To deepen our interdisciplinary practice, our French teacher, Elke Willmann, will join us on Thursday afternoons to bring French into the shooting and editing of the students' films.
Mary Ice is a member of a spy group. She is about to carry out a secret operation with her colleague.
She has just got into the target place, but unfortunately her colleague has not reached there yet. She needs to hide from her enemy, George Water, until the colleague comes. Mary wants the time she appears in George's sight to be as short as possible, so as to give George less chance of finding her.
You are requested to write a program that calculates the time Mary is in George's sight before her colleague arrives, given the information about the moves of Mary and George as well as the obstacles blocking their sight.
Read the Input section for the details of the situation.
The input consists of multiple datasets. Each dataset has the following format:
Time R
L
MaryX1 MaryY1 MaryT1
MaryX2 MaryY2 MaryT2
...
MaryXL MaryYL MaryTL
M
GeorgeX1 GeorgeY1 GeorgeT1
GeorgeX2 GeorgeY2 GeorgeT2
...
GeorgeXM GeorgeYM GeorgeTM
N
BlockSX1 BlockSY1 BlockTX1 BlockTY1
BlockSX2 BlockSY2 BlockTX2 BlockTY2
...
BlockSXN BlockSYN BlockTXN BlockTYN
The first line contains two integers. Time (0 ≤ Time ≤ 100) is the time Mary's colleague reaches the place. R (0 < R < 30000) is the distance George can see - he has a sight of this distance and of 45 degrees left and right from the direction he is moving. In other words, Mary is found by him if and only if she is within this distance from him, in a direction differing by not greater than 45 degrees from his moving direction, and there are no obstacles between them.
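The distance-and-angle part of this visibility test can be sketched as follows. This is only a minimal illustration, not a full solution: obstacle occlusion and integrating visible time over the moves are separate steps, and all function and variable names here are the author's own:

```python
import math

def in_sight_cone(mx, my, gx, gy, dirx, diry, r):
    """Check whether Mary at (mx, my) lies in George's sight cone.

    George stands at (gx, gy) and moves in direction (dirx, diry); he sees
    up to distance r and up to 45 degrees either side of his heading.
    Obstacle occlusion must be tested separately.
    """
    vx, vy = mx - gx, my - gy
    dist = math.hypot(vx, vy)
    if dist > r:
        return False          # too far away to be seen
    if dist == 0:
        return True           # same point: trivially in sight
    # The angle between (vx, vy) and the heading is at most 45 degrees
    # exactly when cos(angle) >= cos(45 deg) = sqrt(2)/2.
    cos_a = (vx * dirx + vy * diry) / (dist * math.hypot(dirx, diry))
    return cos_a >= math.sqrt(2) / 2 - 1e-9   # small epsilon for roundoff
```

The small epsilon guards against floating-point roundoff when Mary sits exactly on the 45-degree boundary, which matters given the stated accuracy requirement.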
The description of Mary's move follows. Mary moves from (MaryXi, MaryYi) to (MaryXi+1, MaryYi+1) straight and at a constant speed during the time between MaryTi and MaryTi+1, for each 1 ≤ i ≤ L - 1. The following constraints apply: 2 ≤ L ≤ 20, MaryT1 = 0 and MaryTL = Time, and MaryTi < MaryTi+1 for any 1 ≤ i ≤ L - 1.
The description of George's move is given in the same way with the same constraints, following Mary's. In addition, (GeorgeXj, GeorgeYj ) and (GeorgeXj+1, GeorgeYj+1) do not coincide for any 1 ≤ j ≤ M - 1. In other words, George is always moving in some direction.
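Since each move description is piecewise linear at constant speed, the position at an arbitrary time t can be recovered by linear interpolation between the bracketing waypoints. A minimal sketch (function and variable names are illustrative, not from the problem statement):

```python
def position_at(waypoints, t):
    """Return the (x, y) position at time t along a piecewise-linear path.

    waypoints is a list of (x, y, time) triples with strictly increasing
    times, matching the Mary/George move descriptions.
    """
    for (x0, y0, t0), (x1, y1, t1) in zip(waypoints, waypoints[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)          # fraction of the segment done
            return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))
    raise ValueError("t is outside the described time range")
```

For example, position_at([(50, 50, 0), (51, 51, 50)], 25) gives the midpoint (50.5, 50.5).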
Finally, there comes the information of the obstacles. Each obstacle has a rectangular shape occupying (BlockSXk, BlockSYk) to (BlockTXk, BlockTYk). No obstacle touches or crosses with another. The number of obstacles ranges from 0 to 20 inclusive.
All the coordinates are integers not greater than 10000 in their absolute values. You may assume that, if the coordinates of Mary's and George's moves were changed by a distance of at most 10^-6, the solution would change by not greater than 10^-6.
The last dataset is followed by a line containing two zeros. This line is not a part of any dataset and should not be processed.
For each dataset, print the calculated time in a line. The time may be printed with any number of digits after the decimal point, but should be accurate to 10^-4.
50 100
2
50 50 0
51 51 50
2
0 0 0
1 1 50
0
0 0
Lisbon Treaty — Referendum — rejected — 5 Mar 2008 at 18:30
Nick Herbert MP (Arundel and South Downs) voted to require a referendum before the UK ratified the Treaty of Lisbon, a foundation of the European Union.
The technical process for ratifying the Treaty (which could have been conditional on the referendum) is its mention in the European Union (Amendment) Bill. The Treaty of Lisbon would be incorporated into United Kingdom law by inserting it into the list of treaties covered by the European Communities Act 1972, which is the foundation stone of Britain's membership of the EU.
Owing to a 3-line whipped no-vote by their leader, all Liberal Democrat MPs who voted in this division were considered to have rebelled, and those who were front bench spokesmen had to resign from their positions.
The main aims of the Lisbon Treaty were to:
- Streamline EU institutions
- Establish a permanent President of the European Council (as of 16 March 2010 held by Herman Van Rompuy)
- Establish the post of High Representative of the Union for Foreign Affairs and Security Policy (as of 16 March 2010 held by Catherine Ashton)
- Give new powers to the EU over justice and home affairs
- Remove the national veto in some areas such as energy security and emergency aid
- William Hague MP, House of Commons, 5 March 2008.
- See New Clause 1, Amendments to be discussed in Committee describes the referendum question, 5 March 2008.
- European Union (Amendment) Bill, full text.
- Lisbon EU treaty, Foreign and Commonwealth Office.
- Treaty of Lisbon, Wikipedia.
- European Communities Act 1972, Section 1(2), Consolidated version, List of Treaties that have been incorporated.
- Senior Lib Dems quit over EU vote, BBC News, 5 March 2008.
- BBC News Q&A: The Lisbon Treaty, 5 February 2010
Votes by party, red entries are votes against the majority for that party.
What is Tell? '+1 tell' means that in addition one member of that party was a teller for that division lobby.
What are Boths? An MP can vote both aye and no in the same division. The boths page explains this.
What is Turnout? This is measured against the total membership of the party at the time of the vote.
| Party | Majority (No) | Minority (Aye) | Both | Turnout |
|-------|---------------|----------------|------|---------|
| Con   | 3             | 186 (+2 tell)  | 0    | 99.0%   |
| Lab   | 308 (+2 tell) | 28             | 0    | 96.0%   |
Opponents of the Renewable Fuel Standard (RFS2) are putting a negative light on biodiesel in the press to convince American consumers that advanced biofuels increase their daily costs of living. This article is intended to set the record straight with the biodiesel industry's valued partners, so that all are armed with accurate information to counter these antagonists with a more appropriate "food, then fuel" message.
The RFS2 calls for 1.28 billion gallons of biodiesel to be used under the biomass-based diesel renewable volume obligations. Today’s biodiesel industry is more than capable of producing the additional 280 million gallons of biodiesel needed to meet the 2013 requirement.
As the biodiesel and feedstock industries advocate for 1.6 billion gallons for the 2014 renewable volume obligations, the rendering industry can help tell the story that many biorefineries use waste from food production. Leftover cooking grease from French fries and fat removed from steaks and pork chops account for a significant portion of biodiesel produced in the United States (US). The rendering industry has garnered significant value from this lower-value products model and consumers ultimately benefit. As biodiesel producers and renderers generate revenue for these lower cost products, farmers are likely to produce more crops and livestock, bringing more protein and carbohydrates into the market for food.
Livestock producers are beneficiaries of at least three significant benefits for every gallon of biodiesel produced: (1) lower relative meal prices due to higher vegetable oilseed crush rates; (2) higher values per head due to increased animal fat prices; and (3) access to crude glycerin as an energy source for feed rations.
With growing demand for livestock feed, soybean meal supplies increase, which creates additional soybean oil for biodiesel utilization. Nationwide, 50 to 60 percent of all US biodiesel is still produced from soybean oil. A December 2010 study by Centrec Consulting Group, LLC stated soybean meal prices could increase by as much as $36 per ton if they weren’t gaining access to the value of soybean oil via biodiesel production. This lost market could cost domestic livestock producers an additional $4.6 billion for soybean meal purchases over the future five-year period (assumed period for the economic study was model year 2011 to 2015). Biodiesel adds value.
Renewable Energy Group (REG) is focused on being a lower-cost feedstock biodiesel producer. While the company is always looking for vendor relationships that benefit its bottom line, REG believes in using raw materials that create a greenhouse gas emissions advantage versus petroleum and support lower consumer food prices. According to a study commissioned by the National Biodiesel Board, since 2007 the price relationship between animal fats and soybean oil has become stronger and increased demand for fats and oils, which has led to increased fat prices. In fact, increased biodiesel production has led to greater demand for animal fats and, in part, led to higher value per head harvested for livestock producers. As an example, review of historic animal fat prices demonstrates that feeder cattle prices have been supported by strong demand for animal fats by uses such as biodiesel. Up to an additional $16.79 of value per head was generated when comparing “pre-biodiesel” tallow and inedible tallow prices with current fats and oils prices.
When the REG Newton biorefinery uses beef tallow, choice white grease, and poultry fat, it essentially reduces rising price pressures on meats in the grocery store. Simply put, biodiesel is supporting food security while making the United States more energy secure. The biodiesel industry needs the rendering industry’s help showcasing this process to policymakers and market influencers. Renderers can contact their legislator via REG’s advocacy website at http://advocacy.regi.com/.
Multiple Feedstock Production Technology Requires Efficiencies
REG’s array of biorefineries includes seven commercial-scale biodiesel facilities with a total capacity of more than 225 million gallons using technology capabilities to match feedstock availability in the area of each plant. Feedstock choice is based on economics.
As an example, the Ralston, IA, plant is co-located with a soy crush facility so it runs on soy oil. REG’s Albert Lea, MN, plant acquired in September 2011 is being upgraded at a cost of $20 million to be capable of using every Environmental Protection Agency (EPA) approved feedstock in the Midwest. The Danville, IL, Newton, IA, and Albert Lea, MN, plants form a functional capability basis, each one different but generally built with the same flexibility. The Seneca, IL, plant can convert free fatty acids as well as triglycerides into biodiesel.
Biodiesel producers with multi-feedstock capabilities using EPA-pathway approved raw materials are key to a diverse, sustainable feedstock market. In addition, a biodiesel company with a multiple feedstock, multiple vendor approach must be focused on highly efficient logistics and conversion capability.
Biodiesel Industry Supports Overall Economy, Offering Benefits to Consumers
The biodiesel industry creates localized job growth, increases the United States’ gross domestic product, and adds value to the agriculture, manufacturing, and transportation industries. Last year, the US biodiesel industry supported more than 63,000 jobs both directly and indirectly. (Analysts note that number would be 19,000 higher with the certainty of the federal blender’s tax credit being in place.) The National Biodiesel Board projects the addition of 30,000 jobs with the increase of the RFS2 obligations in 2013.
As the US manufacturing sector begins to rebound after the recession, the biodiesel industry is doing its part by supporting $6 billion of gross domestic product in the American economy in 2012. That number is projected to grow to nearly $7.93 billion in 2013. The biodiesel industry is a meaningful part of energy independence; every gallon of biodiesel produced at home is one that does not have to be imported.
Biorefiners and feedstock suppliers under the RFS2 are delivering desired results to achieve US energy and food security goals. Farmers, food producers, and restaurants win as the biodiesel industry creates a higher source of revenue across the supply chain. These food producers are rewarded with better margin opportunities, which can lower food prices and save consumers money. That’s not the “food versus fuel” fallacy that RFS opponents want you to believe, but rather food, then fuel.
February 2013 RENDER
This webpage has been prepared by the Intellectual Freedom Committee of the Library Association of Alberta to provide information and assistance to library workers and trustees.
It is the purpose of libraries to support free access to ideas, to promote public information and to foster enlightenment.
These goals are accomplished through a collection that includes the widest diversity of views and expressions, including those which are unorthodox and orthodox, popular and unpopular, from whatever viewpoint. A rigorous adherence to the principle of intellectual freedom protects these important rights.
Intellectual Freedom in Action
To ensure that the foundations for these freedoms are established in your community:
1. Include as part of your library's policy:
(a) The Canadian Library Association Statement on Intellectual Freedom and the LAA Statement of Intellectual Freedom;
(b) The Book and Periodical Council Statement on Freedom of Expression and the Freedom to Read;
(c) A statement of "open access" specifying that materials are equally available to all members of the community;
(d) A statement ensuring that access to materials of a controversial nature will not be restricted;
(e) A statement indicating that the responsibility to control access to library materials by children rests with their parents or legal guardians.
2. Develop a collection policy that:
(a) Describes the scope and type of materials included and excluded;
(b) Clearly establishes the delegated responsibility of the librarian to collect materials within the guidelines;
(c) Includes a statement regarding donations and policies for withdrawing and discarding materials;
(d) Contains a procedure for reconsideration of materials that includes a clearly defined method of handling complaints;
(e) Establishes as a goal a collection that includes the widest diversity of views and expressions including those which may be considered unorthodox or unpopular.
3. Adopt the Canadian Library Association Code of Ethics.
4. Defend the principle of freedom to read, not the individual item.
5. Consult the LAA Intellectual Freedom Committee.
For more information about LAA & Intellectual Freedom, email Brian Jackson, Intellectual Freedom Committee Chair.
These colorful images are of thin slices of meteorites viewed through a microscope. Part of the group classified as HED meteorites for their mineral content (Howardite, Eucrite, Diogenite), they likely fell to Earth from 4 Vesta, the main-belt asteroid currently being explored by NASA's Dawn spacecraft. Why are they thought to be from Vesta? Because the HED meteorites have visible and infrared spectra that match the spectrum of Vesta. The hypothesis of their origin on Vesta is also consistent with data from Dawn's ongoing observations. Ejected by impacts, the diogenites shown here would have originated deep within the crust of Vesta. Similar rocks are also found in the lower crust of planet Earth. A sample scale is indicated by the white bars, each 2 millimeters long.

Credit: Hap McSween (Univ. Tennessee), A. Beck and T. McCoy (Smithsonian Inst.)
Monday, Dec. 18, 2017 | 2 a.m.
Meet the medical professionals in this story
• Diana Grimmesey, MHA, RN, CWCN, Wound Care and Hyperbarics Director at MountainView Hospital
Leeches and maggots have a long history in medicine.
While the idea of using creepy-crawlies to treat serious medical conditions may seem like a relic of medieval times, they have become an important mainstay of modern biotherapeutic practice. Each is used to treat several diseases and injuries, and does so with a high success rate.
“Leeches and/or maggots are typically used by surgeons — general, plastic, trauma and orthopedic — as well as physicians specializing in wound care,” said Diana Grimmesey, RN.
From reattaching severed fingers to treating infected wounds, the healing power of leeches and maggots is nothing short of amazing.
“Leeches have been used in the field of medicine for more than 2,500 years,” Grimmesey said, and they remained in consistent use into the 20th century. They weren’t reconsidered by most modern physicians until the late 1970s, when leeches began to gain traction as a significant tool for reattaching tissue following traumatic injuries.
How/why they’re used
Leeches are bloodsuckers — they have three jaws with about 100 teeth each, and within their saliva are enzymes that act as natural anesthetics and anticoagulants.
“Their bite produces a number of benefits to the human body in certain circumstances, especially when it comes to venous stasis — a condition where outflow of blood in the vein is congested,” Grimmesey said. “They are also helpful when trying to reattach tissue to the body.”
Because leeches produce an anticoagulant and literally suck blood from the surface of skin, they are often used to revive delicate veins and improve blood flow following a tissue reattachment procedure.
What is the process like?
“A leech is applied to the cleaned area where there is blood congestion and/or an injury,” Grimmesey said. “Their bite has an anesthetic in it, so it’s painless. They’re left on for about 45 minutes, or until they’re fully engorged, and then they’re easy to remove. Another leech may be applied approximately eight hours later, if necessary.”
One leech typically digests about 1.5 ounces of blood with each application, and leech therapy generally lasts three to seven days, with leeches applied two to three times each day.
Example: A patient has a severed fingertip and the surgeon is able to reattach it. However, following the reattachment surgery, blood is not flowing properly to the fingertip because the small veins have collapsed and are unable to carry enough blood to the newly attached tissue. Leeches might be used to help reinstate the veins, and get the blood circulating again.
According to a 2009 article published in the Journal of Diabetes Science and Technology, the beneficial effects of maggots were observed among military physicians for centuries, but William Baer was the first doctor to test maggots on nonhealing wounds in 1929. Until the 1940s, when antibiotics became readily available, maggots were used by thousands of doctors across the Western world.
Today, as doctors battle an epidemic of nonhealing wounds, and concerns about antibiotic-resistant infections continue to mount, maggots are making an important comeback.
How/why they’re used
Maggots are used to clean wounds that are not healing normally, are infected, or are necrotic (wherein the tissue dies off).
“Wounds that are open longer than one month have a higher risk for developing nonviable tissue in the wound bed, which attracts bacteria and leads to infection,” Grimmesey said. “When maggots are applied to the wound, they secrete an enzyme in their saliva that liquefies nonviable tissue, and then they digest it. The nonviable tissue and the infection are now neatly contained in the maggot’s bodies, and the patient is ready for the next phase of wound healing.”
Maggots often are used for patients who are not good candidates for surgery, or have not responded positively to other types of intervention.
What is the process like?
The wound bed is cleaned and the surrounding tissue is prepped with a thick border of zinc oxide, which prevents the maggots from escaping. Then, maggots are applied to the wound bed.
“We apply about 10 maggots per square centimeter of the wound, and then place a saline damp gauze over the maggots, before placing a dry sterile gauze over the entire area,” Grimmesey said.
Maggots are typically left in place for two to three days, and depending on the wound, some patients may require multiple applications.
Example: A diabetic foot ulcer has left a patient with a deep, nonhealing wound and a serious infection that doctors warn might lead to amputation. Maggots might be applied to the wound to clear out all of the infected and dead tissue, leaving only healthy tissue, which allows the patient to begin the healing process.
How they're regulated
Both leeches and maggots were approved by the FDA in 2004 as a single-use medical device.
Prior to being used, they are kept in safe, sterile containers. Following use, they are immediately disposed of as biohazardous waste.
Leeches and maggots used in medical settings are different than the average ones you might find in nature. They come from medical laboratories that produce them for safe, human use. “The leeches and maggots used in medical settings are sterile and free of environmental bacteria, which significantly decreases the risk for infection,” Grimmesey said.
Did you know?
Grimmesey notes that one of the most common misconceptions about this type of therapy is that the leeches and/or maggots will somehow escape and attach themselves to other places in the body. Don’t worry — this doesn’t happen.
“They are very primitive critters and are driven to constantly eat,” she said. “When they are intentionally placed in a good food source, they don’t leave it.”
How to Make Bread, Cake, and Cookies in Minecraft
Like in real life, hunger is a critical factor for survivability in the game of Minecraft. Players can satisfy their hunger by finding items to eat (like apples) or by crafting simple foods like bread, cakes, or even cookies. Baking wheat-based food items is a common route taken by most players, as collecting wheat is an activity any player can take on, even with few resources. Like flour-based baked goods in real life, the bread, cake, and cookie items all require wheat as a main ingredient.
Getting Wheat for Baking Goods
Many Minecraft players begin a wheat farm on their first or second day. As on most farms, growing wheat can be time-consuming, and harvesting can be automated with other recipes — for example, a hopper that collects harvested wheat into a chest. Wheat is also extremely useful in breeding cows and mooshrooms, as well as other animals in the game, which supply a significant amount of food. This is a recommended strategy, as wheat is a useful resource in Minecraft.
Seeds are abundant in most biomes, and seeds can be obtained by breaking weeds you find throughout the biome. However, you need a light source — simply the sun supplemented with torches. Also as in real life, wheat grows best when given bonemeal to act as a fertilizer.
Now that you know how to collect the main ingredient of cakes, cookies, and bread in Minecraft, it’s time to explore the individual recipes and crafting process for each food item.
Bread is a food item that can be eaten by players in Minecraft to restore hunger points and saturation. It is one of the easiest and most common food sources that is created early in the game because the recipe to make bread only requires 3 wheat stalks.
While acquiring the wheat ingredients requires farming, which can be tedious, it also ensures that the player is less vulnerable to the other dangers of the environment in early game-play. For example, it’s much safer for new players to build wheat farms and craft bread as their main food source until they become more powerful to explore the realm.
The other great thing about crafting bread is that no furnace or fuel is required to “bake” the bread. All you need is the 3×3 crafting grid which is made available in your inventory after you have successfully made a crafting table.
To make bread in Minecraft, place 3 wheat stalks in your crafting grid. These items must be placed in a horizontal row as seen below.
Increasingly, players are turning to mushrooms (turned into stew) and carrots as a food source rather than bread and using wheat in other ways. But, if you have a wheat farm setup for recurring harvests, make as much bread as you want. Also, bread can be found in chests or can be obtained by trading with village farmers.
How to Make a Cake
Unlike other food items that are consumed when held, cake is a block that is eaten when placed. Each cake consists of six slices, which can be consumed by a single player or a group (as in a real-life celebration). If a single player eats only part of the cake, that player cannot pick up the remaining cake but can return to eat it later.
Because cake has multiple slices, it can restore up to 6 Hunger bars (1 bar per slice) but has a low saturation score (so you become hungry again quickly). Cake can also be used as mounting for a TNT cannon.
The biggest drawback to making cake is the complexity of the recipe. Before you can make a cake, you need to gather all of the ingredients including:
Craft 3 milk buckets.
Collect wheat and an egg.
To make a cake in Minecraft, place 3 buckets of milk placed on the top row, a sugar-egg-sugar configuration for the second row, and 3 wheat stalks on the bottom row as seen below. After you complete the recipe, the buckets return to the inventory.
How to Bake Cookies
Cookies require cocoa beans, which can be found in dungeon chests, which, in turn, are most commonly found in the jungle biomes, or on jungle trees. In a jungle biome, harvesting cocoa beans is easy, and crafting cookies is more advantageous than bread.
The total hunger points for cookies is higher (per number of wheat stalks used), but because the saturation is lower, you need to eat more often. Many players consider cookies to be more of a rare treat and a novelty item than a long-term food source.
Cocoa beans can also be farmed, and mass-produced like other crops using jungle logs as the “soil.”
To make cookies in Minecraft, place 2 stalks of wheat on either side of a cocoa bean in a horizontal row, resulting in a wheat – cocoa bean – wheat configuration. With this cookie recipe in place, a player receives a total of 8 cookies, which can be stacked in your inventory.
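The three recipes above can be summarized as patterns on the 3×3 crafting grid. A toy sketch follows; the item names and the fixed-position matching are simplifications for illustration (in the actual game, shaped recipes also match when shifted within the grid):

```python
E = None  # an empty crafting slot

# The article's three recipes, laid out on the 3x3 grid.
RECIPES = {
    "bread":   [[E, E, E],
                ["wheat", "wheat", "wheat"],
                [E, E, E]],
    "cake":    [["milk", "milk", "milk"],
                ["sugar", "egg", "sugar"],
                ["wheat", "wheat", "wheat"]],
    "cookies": [[E, E, E],
                ["wheat", "cocoa bean", "wheat"],
                [E, E, E]],
}

def craft(grid):
    """Return the name of the item a 3x3 grid produces, or None."""
    for item, pattern in RECIPES.items():
        if grid == pattern:
            return item
    return None

bread_grid = [[E, E, E], ["wheat", "wheat", "wheat"], [E, E, E]]
print(craft(bread_grid))  # bread
```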
Now that you know how to make a cake, cookies, and bread in Minecraft, you have officially become an expert baker. Utilize these food items to both satisfy your hunger fatigue and also impress other players. Bringing chocolate chip cookies or decorated cakes when attending a group event or hosting guests on your local server goes a long way towards a good first impression!
Why we need more breast cancer screening trials
- 1Department of Family and Community Medicine and the Center for Healthcare Policy and Research, University of California, Davis, Sacramento, California, USA
- 2Group Health Research Institute, Group Health Cooperative, Seattle, Washington, USA
- Correspondence to: Dr Joshua J Fenton
Department of Family and Community Medicine and the Center for Healthcare Policy and Research, University of California, Davis, 4860 Y Street, Ste 2300, Sacramento, CA95817, USA;
Despite a high false-positive rate, screening mammography fails to detect one in five breast cancers and even fewer in women with dense breasts. New technologies have been developed to address these limitations, including digital mammography, breast tomosynthesis, MRI and computer-aided detection (CAD). Conceivably, a new technology—either alone or alongside mammography—could yield net benefits to women, ushering in a new era of breast cancer screening. But what sort of data will be needed to infer that a new screening method is better than mammography alone?
New breast cancer screening modalities would ideally be evaluated in head-to-head randomised trials comparing breast cancer mortality in patients screened with the new modality versus patients screened with conventional mammography. In light of variable interpretation across radiologists and the relatively small incremental benefits of new screening modalities, head-to-head trials would likely require huge sample sizes of patients and radiologists to achieve sufficient statistical power.1 In addition, for mortality outcomes, many years of follow-up are required, so evaluated technologies may be obsolete by the time trial findings become available.
Despite these challenges, we still believe that head-to-head clinical trials are necessary to inform clinical and policy decisions regarding breast cancer screening. However, future trials of breast cancer screening will necessarily rely on near-term surrogate outcomes, ideally outcomes that strongly correlate with decreased breast cancer mortality, such as the incidence rate of interval cancers or incidence rate of late-stage cancers. (Interval cancers are cancers diagnosed between screening rounds and putatively reflect both missed cancers and highly aggressive cancers unlikely to be screen-detected.) However, both outcomes are rare, and trials designed with these endpoints would be costly due to the very large sample sizes.
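To see why trials powered on rare endpoints such as interval cancers become so large, consider a back-of-the-envelope two-proportion power calculation. The incidence figures below are assumptions chosen for illustration, not figures from this editorial:

```python
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sided two-proportion z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * var / (p1 - p2) ** 2

# Hypothetical rates: 1.0 vs 0.8 interval cancers per 1,000 women screened
# (i.e., a 20% relative reduction on a rare outcome).
n = n_per_arm(0.0010, 0.0008)
print(round(n))  # hundreds of thousands of women per arm
```

Detecting the same 20% relative reduction on a common outcome would need only a few hundred participants per arm; the rarity of the endpoint, not the effect size, is what drives trials into the hundreds of thousands.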
Nevertheless, from the societal perspective, the cost of such trials may still be small compared with the cumulative costs of premature technology adoption. Although no trial has evaluated its impact on interval or late-stage cancer incidence, CAD is now used on most screening mammograms in the USA (increasing the cost of each mammogram by at least 10%). It is difficult to estimate the cost of an adequately powered trial testing CAD's impact on these outcomes, but the Digital Mammographic Imaging Screening Trial (DMIST) cost ∼$26 million to compare sensitivity and specificity of digital versus film-screen mammography in over 49 000 women who each received both examinations.2 If the cost of a head-to-head trial of CAD use versus non-use were fivefold greater than DMIST (∼$125 million), this cost would still be one-fourth the approximate total annual cost of CAD use within the USA (∼$500 million).3
What is the role of further screening trials like DMIST that assess more proximate surrogate outcomes, such as sensitivity and specificity? For increased sensitivity to lead to reduced breast cancer mortality, cancers must be detected significantly earlier (when treatments are more likely to improve survival) than with an alternative method with lower sensitivity. With improved breast cancer treatments, this is a challenging goal to meet. In addition, more sensitive examinations usually reduce specificity and may increase overdiagnosis. Thus, by themselves, trials examining screening accuracy cannot directly address whether the benefits of new technologies are likely to outweigh potential harms.
But data from trials assessing sensitivity and specificity can be used in natural history models of breast cancer that can evaluate the long-term impacts of screening under a variety of real-world scenarios, ranging from varying screening performance to differences in the starting ages or the intervals of screening.4 ,5 Microsimulation models can explicitly weigh the mortality benefits of new technologies (often mediated by reduced incidence of late-stage disease) and potential harms (eg, reduced quality of life following overdiagnosis and non-beneficial treatment). Model inputs can also be modified based on community-based observational studies as they emerge.
We recognise challenges to implementing large screening trials, including limited funding and opportunity costs. Although formidable, challenges are probably not insurmountable with sufficient push from funding and regulatory bodies. By orchestrating the roll-out of new screening regimens in different regions, leaders in Norway have planned a series of randomised trials to address crucial questions about colorectal cancer screening.6 Although screening is not delivered by a national programme in the USA, Medicare could condition coverage of new breast cancer screening technologies based on trial participation, or the collection of high-quality registry data for observational research.7
There remains a vital role for breast cancer screening trials that examine not only sensitivity and specificity but near-term surrogates for breast cancer mortality. The challenge of overcoming the logistical and political barriers to trial implementation will make it tempting to do nothing. National leaders and policymakers will need to articulate and sustain the argument that the societal benefits of large screening trials are too great to allow new screening technologies to disseminate without rigorous evaluation.
Competing interests None.
Spell check in Google Sheets
You can use the spell checker in Google Sheets to find misspelled words and see suggested spellings. Here's how:
- Click on the cell you'd like to start spell checking your spreadsheet from.
- Under the Tools menu, select Spelling...
- Incorrect words are automatically underlined in red. Simply click on a misspelled word to see suggested spellings and select the correct spelling from the list. If you'd like to keep the original spelling, select the option at the bottom of the suggestions.
- Click the Next button to check the spelling on additional cells that you've selected.
- If your spreadsheet has more than one sheet, click the Move to next sheet button to spell check additional sheets.
- When you're done checking the spelling in all of your sheets, click the Done button or the X button in the upper right of the dialog.
Spell check is currently not available in the new Google Sheets, but will be coming soon.
The remains of a huge Tertiary gravel-filled channel lie in the area between the South and Middle Yuba Rivers in northern Nevada County, Calif. The deposits in this channel were the site of some of the most productive hydraulic gold mines in California between the 1850's and 1884.
The gravel occupies a major channel and parts of several tributaries that in Tertiary time cut into a surface of Paleozoic and Mesozoic igneous and metamorphic rocks. The gravel is partly covered by the remains of an extensive sheet of volcanic rocks, but it crops out along the broad crest of the ridge between the canyons of the South and Middle Yuba Rivers. The lower parts of the gravel deposits generally carry the highest values of placer gold. Traditionally, the richest deposits of all are found in the so-called blue gravel, which, when present, lies just above the bedrock and consists of a very coarse, poorly sorted mixture of cobbles, pebbles, sand, and clay. It is unoxidized, and, at least locally, contains appreciable quantities of secondary sulfide minerals, chiefly pyrite.
Information in drill logs from private sources indicates that a 2-mile stretch of the channel near North Columbia contains over half a million ounces of gold dispersed through about 22 million cubic yards of gravel at a grade averaging about 81 cents per cubic yard. The deposit is buried at depths ranging from 100 to 400 feet.
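The quoted grade can be sanity-checked against the other figures. The calculation below assumes the official pre-1971 gold price of about $35 per troy ounce, which the report itself does not state:

```python
# Sanity check of the reported grade. Assumes gold at the official
# pre-1971 price of $35 per troy ounce (an assumption; the price used
# in the report is not stated).
ounces = 500_000          # "over half a million ounces"
cubic_yards = 22_000_000  # "about 22 million cubic yards"
price_per_oz = 35.0       # dollars per troy ounce

grade_dollars = ounces * price_per_oz / cubic_yards
print(f"{grade_dollars * 100:.0f} cents per cubic yard")  # → 80 cents per cubic yard
```

This comes out to roughly 80 cents per cubic yard, consistent with the quoted grade of about 81 cents.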
Several geophysical methods have been tested for their feasibility in determining the configuration of the buried bedrock surface, in delineating channel gravel buried under volcanic rocks, and in identifying concentrations of heavy minerals within the gravel. Although the data have not yet been completely processed, preliminary conclusions indicate that some methods may be quite useful. A combination of seismic-refraction and gravity methods was used to determine the depth and configuration of the bottom of the channel to an accuracy within 10 percent as checked by the drill holes. Seismic-refraction methods have identified depressions which are in the bedrock surface, below volcanic rocks, and which may be occupied by gravels. Seismic methods, however, cannot actually recognize the presence of low-velocity gravels beneath the higher velocity volcanic rocks. Electromagnetic methods, supplemented in part by induced-polarization methods, show promise of being able to recognize and trace blue gravel buried less than 200 feet deep. A broad vague magnetic anomaly across the channel suggests that more precise magnetic studies might delineate concentrations of magnetic material. The usefulness of resistivity methods appears from this study to be quite restricted because of irregular topography and the variable conductivity of layers within the gravel.
Additional publication details
USGS Numbered Series
Tertiary gold-bearing channel gravel in northern Nevada County, California
What is human resource accounting?
Human resource accounting aims at depicting the human resources potential in money terms while casting the organisation's financial statements. With the emergence of the knowledge economy, recognition of human capital as an important part of the enterprise's total value has gained importance. This has led to two important developments:
- methods to assess the value of human capital
- methods to improve the development of human capital in organisations
Companies trying to reflect the value of their people are using different approaches.
Methods of valuing and accounting of human resources
The methods to value and account for human resources can be classified into the following categories:
- Methods based on costs (which include costs incurred by the company to recruit, hire, train and develop human resources)
- Methods based on the economic value of human resources and the capitalisation of the company's earnings.
Methods based on cost:
Historical cost method. In this method, the costs of acquisition, i.e. the selection, hiring and training costs of employees, are capitalised and written off over the expected useful life of the employees. In case the personnel leave the company before the anticipated period of service, the unamortised portion of costs remaining in the company's books is written off against the profit and loss account in that year. If the period of service exceeds the anticipated time, the amortisation of costs is rescheduled.
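The capitalise-and-amortise bookkeeping of this cost-based approach can be sketched as follows. The straight-line write-off, the helper function name and the figures are illustrative assumptions, not a prescribed accounting standard:

```python
def amortisation_schedule(acquisition_cost, expected_years, years_served):
    """Straight-line write-off of a capitalised acquisition cost.

    If the employee leaves before the expected service period, the
    unamortised balance is written off against the profit and loss
    account in the year of exit. Returns a list of
    (year, charge, remaining_book_value) tuples.
    """
    annual_charge = acquisition_cost / expected_years
    book_value = acquisition_cost
    schedule = []
    for year in range(1, min(expected_years, years_served) + 1):
        book_value -= annual_charge
        schedule.append((year, annual_charge, round(book_value, 2)))
    if years_served < expected_years:
        # Early exit: the whole remaining balance is charged at once.
        schedule.append((years_served + 1, round(book_value, 2), 0.0))
    return schedule

# Hypothetical figures: 100,000 acquisition cost amortised over an
# expected 10-year life, but the employee leaves after 4 years.
for row in amortisation_schedule(100_000, 10, 4):
    print(row)
```

With these numbers, four annual charges of 10,000 are followed by a one-time write-off of the remaining 60,000 in year five.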
Replacement cost method. In this method, the human resources are valued at their replacement cost, i.e. the monetary implications of replacing existing personnel. Replacement costs could be positional, i.e. replacing personnel for particular positions, or personal, i.e. replacing the specific talent or ability of particular persons.
Opportunity cost method. This approach suggests competitive bidding for scarce employees in an organisation, i.e. the opportunity cost of employees linked to scarcity. The approach proposes capitalising the additional earning potential of each human resource within the company.
Standard cost method. In this method, standard costs of recruiting, hiring, training and developing per grade of employees are determined annually. The total standard cost for all personnel of the company is the value of its human resources.
Methods based on value:
Jaggi and Lau method. This method estimates the worth of human resources on a group basis, as human resource groups account for productivity and performance.
Present value method. In this method, the net present value of incremental cash flows attributed to human resources is taken as the asset value.
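The discounting step of this value-based approach can be sketched as follows. The cash flows and the 10% discount rate are hypothetical illustrations:

```python
def human_resource_npv(incremental_cash_flows, discount_rate):
    """Net present value of the incremental cash flows attributed to
    human resources; under this method the NPV is taken as the human
    asset value."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(incremental_cash_flows, start=1))

# Hypothetical: 50,000 of incremental cash flow per year for 5 years,
# discounted at 10% per annum.
value = human_resource_npv([50_000] * 5, 0.10)
print(round(value, 2))  # → 189539.34
```

The five-year annuity of 50,000 discounted at 10% is worth about 189,539 today, and that figure would be reported as the value of the human asset.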
Benefits and implications of human resource accounting
Human resource accounting provides quantitative information about the value of human assets, which helps the top management to take decisions regarding the adequacy of human resources. Based on these insights, further steps for the recruitment and selection of personnel are taken.
Outside the organisation, quantitative data on this most valuable asset has an impact on the decisions of the investors, clients and potential staff of the company.
If proper valuation and accounting of the human resources is not done, the management may not be able to recognise the negative effects of certain programs which are aimed at improving profits in the short run. If not recognised on time, these programs could lead to a fall in productivity levels, a high turnover rate and low morale among existing employees.
Human resource accounting in India
The Companies Act, 1956 does not explicitly provide for disclosure on human assets in the financial statements of companies. But sensing the benefits derived from valuing and reporting the human assets, many companies have voluntarily disclosed all relevant information in their annual reports.
Addition Polymerisation: the joining of two or more simple molecules, or monomers, to form a new compound, a polymer, which has the same empirical formula.
Mechanism of the Polymerisation of Ethene. Thousands of ethene molecules join up to form a type of polyethene known as low density polythene: n CH2=CH2 → -(CH2-CH2)n-, where n, the number of joining molecules, can vary from 200 to 2000 (fig. 1). The mechanism is known as a free radical reaction, as it is the radicals which initiate the three-stage process of polymerisation.
Stage 1: Initiation by free radicals. R• + H2C=CH2 → R-CH2-CH2•, where R• is the radical and R-CH2-CH2• the new radical (fig. 2).
Stage 2: Propagation. Each time a radical hits an ethene molecule a new, longer radical is formed: R-CH2-CH2• + H2C=CH2 → R-CH2-CH2-CH2-CH2•, and R-CH2-CH2-CH2-CH2• + H2C=CH2 → R-CH2-CH2-CH2-CH2-CH2-CH2• (fig. 3).
Stage 3: Termination. Eventually two radicals combine and the process stops, as no new radicals are formed. (1) As the chains grow by a radical mechanism it is possible for a radical exchange to occur, giving a radical within the chain. (2) By this mechanism the chains can become more branched.
I chose to create a Wallwisher for my students to be able to view the guidelines for their final project in class. This way, they could all have easy access to refresh their memory if they forgot what the exact requirements were. This would be a great way to post important information and due dates for all students to see! I posted the link below.
I also chose to create a document on Google Docs, so my students could see how I am going to grade them. Next to each of the eight counts, I will write down the different things I see. This way I can grade each group fairly, because I will know what they did and did not include from the guidelines. The link is below:
These tools would be great for the classroom, because all of the students could access the sites on their own time. It would be an easy way to post deadlines and project information without having to print it out on paper and hand it out.
Maria Wilman was born in 1867 in South Africa's Cape Province, a region that would prove fundamental in shaping her future as a scholar and rock-art collector. Miss Wilman was the second South African female to attend Cambridge University in England, where she was awarded a Science Degree in geology, mineralogy, and chemistry in 1888. In 1893, wishing to further her academic career Miss Wilman returned to Cambridge and completed a Masters of Arts in botany during 1895.
Between 1895 and 1907, she returned to South Africa where she filled a volunteer position in the Geology department of the South African Museum in Cape Town.
Without a formal degree (as these were not conferred on women until the 1930s), and lacking her father's approval, she could not accept remuneration for her work and thus remained a volunteer until 1907. It was only in 1933 that Miss Wilman formally received her degree from Cambridge University.
During her time at The South African Museum, she reported to Louis Albert Peringuey, whose interest in the San people and their culture spurred him to send her on research trips into the Northern Cape Province and Rhodesia (Zimbabwe).
In 1908, when Miss Wilman became the first director of the Alexander McGregor Memorial Museum in Kimberley, she traveled by ox-wagon through Lesotho and Botswana studying the San people and their cultural products.
The artifacts, implements and other San cultural products that she acquired as director of the museum are among the most important of their kind. Miss Wilman compiled and edited her research and published her book, "Rock-engravings of Griqualand West", which remained the standard text on Southern African rock art for almost five decades. In 1939 Miss Wilman was awarded an honorary doctorate in law by the University of the Witwatersrand. She died in 1957.
Adjuvant therapy or adjuvant treatment – Treatment given in addition to the primary treatment. In prostate cancer, adjuvant treatment often refers to hormone therapy or chemotherapy given after radiotherapy or surgery, which is aimed at destroying any remaining cancer cells.
Advanced prostate cancer – Prostate cancer that has spread to surrounding tissue or has spread to other parts of the body.
Alternative therapy – Therapy used instead of standard medical treatment. Most alternative therapies have not been scientifically tested, so there is little proof that they work and their side effects are not always known.
Anaemia – A drop in the number of red blood cells in your body. Anaemia decreases the amount of oxygen in the body and may cause tiredness and fatigue, breathlessness, paleness and a poor resistance to infection.
Brachytherapy – A type of radiotherapy treatment that implants radioactive material sealed in needles or seeds into or near the tumour.
Biopsy – The removal of a small amount of tissue from the body, for examination under a microscope, to help diagnose a disease.
Cancer – A term for diseases in which abnormal cells divide without control.
Chemotherapy – The use of drugs, which kill or slow cell growth, to treat cancer. These are called cytotoxic drugs.
Clinical trial – Research conducted with the person’s permission, which usually involves a comparison of two or more treatments or diagnostic methods. The aim is to gain a better understanding of the underlying disease process and/or methods to treat it. A clinical trial is conducted with rigorous scientific method for determining the effectiveness of a proposed treatment.
Cultural engagement – actively involve people with respect to their cultural needs.
Cells – The building blocks of the body. Cells can reproduce themselves exactly, unless they are abnormal or damaged, as are cancer cells.
Diagnosis – The identification and naming of a person’s disease.
Digital rectal examination (DRE) – An examination of the prostate gland through the wall of the rectum. Your doctor will insert a finger into the rectum and is able to feel the shape of the prostate gland. Irregularities in the shape and size may be caused by cancer.
Erectile dysfunction – Inability to achieve or maintain an erection firm enough for penetration.
External beam radiotherapy (EBRT) – Uses x-rays directed from an external machine to destroy cancer cells.
Fertility – Ability to have children.
Grade – A score that describes how quickly the tumour is likely to grow.
Hormone – A substance that affects how your body works. Some hormones control growth, others control reproduction. They are distributed around the body through the bloodstream.
Hormone therapy/treatment – Treatment with drugs that minimises the effect of testosterone in the body. This is also known as androgen deprivation therapy (ADT).
Incontinence – Inability to hold or control the loss of urine or faeces.
Locally advanced prostate cancer – Cancer which has spread beyond the prostate capsule and may include the seminal vesicles, but is still confined to the prostate region.
Lymph nodes – Also called lymph glands. Small, bean-shaped collections of lymph cells scattered across the lymphatic system. They get rid of bacteria and other harmful things. There are lymph nodes in the neck, armpit, groin and abdomen.
Lymphoedema – Swelling caused by a build-up of lymph fluid. This happens when lymph nodes do not drain properly, usually after lymph glands are removed or damaged by radiotherapy.
Metastatic prostate cancer – Small groups of cells have spread from the primary tumour site and started to grow in other parts of the body – such as bones.
Multidisciplinary care – This is when medical, nursing and allied health professionals involved in a person’s care work together with the person to consider all treatment options and develop a care plan that best meets the needs of that person.
Osteoporosis – A decrease in bone mass, causing bones to become fragile. This makes them brittle and liable to break.
Pelvic floor muscles – The floor of the pelvis is made up of muscle layers and tissues. The layers stretch like a hammock from the tailbone at the back to the pubic bone in front. The pelvic floor muscles support the bladder and bowel. The urethra (urine tube) and rectum (anus) pass through the pelvic floor muscles.
Perineal (perineum) – The area between the anus and the scrotum.
Prognosis – The likely outcome of a person’s disease.
Prostate cancer – Cancer of the prostate, the male organ that sits next to the urinary bladder and contributes to semen (sperm fluid) production.
Prostate gland – The prostate gland is normally the size of a walnut. It is located between the bladder and the penis and sits in front of the rectum. It produces fluid that forms part of semen.
Prostate specific antigen (PSA) – A protein produced by cells in the prostate gland, which is usually found in the blood in larger than normal amounts when prostate cancer is present.
Quality of life – An individual’s overall appraisal of their situation and wellbeing. Quality of life encompasses symptoms of the disease and side effects of treatment, functional capacity, social interactions and relationships and occupational functioning.
Radical prostatectomy – A surgical operation that removes the prostate.
Radiotherapy or radiation oncology – The use of radiation, usually x-rays or gamma rays, to kill tumour cells or injure them so they cannot grow or multiply.
Self-management – An awareness and active participation by people with cancer in their recovery, recuperation and rehabilitation, to minimise the consequences of treatment, promote survival, health and wellbeing.
Shared decision-making – Integration of a patient’s values, goals and concerns with the best available evidence about benefits, risks and uncertainties of treatment, in order to achieve appropriate health care decisions. It involves clinicians and patients making decisions about the patient’s management together.
Side effect – Unintended effects of a drug or treatment.
Stage – The extent of a cancer and whether the disease has spread from an original site to other parts of the body.
Staging – Tests to find out, and also a way to describe how far a cancer has spread. Frequently these are based on the tumour, the nodes and the metastases. Staging may be based on clinical or pathological features.
Standard treatment – The best proven treatment, based on results of past research.
Support group – People on whom an individual can rely for the provision of emotional caring and concern, and reinforcement of a sense of personal worth and value. Other components of support may include provision of practical or material aid, information, guidance, feedback and validation of the individual’s stressful experiences and coping choices.
Supportive care – Improving the comfort and quality of life for people with cancer.
Survivorship – In cancer, survivorship focuses on the health and life of a person with cancer beyond the diagnosis and treatment phases. Survivorship includes issues related to follow-up care, late effects of treatment, second cancers, and quality of life.
Testicles – Organs which produce sperm and the male hormone testosterone. They are found in the scrotum.
Testosterone – The major male hormone which is produced by the testicles.
Tumour-Node-Metastasis (TNM) System – A staging system used by clinicians to describe how advanced a particular cancer is, which then informs the type of treatment provided.
Tumour – An abnormal growth of tissue. It may be localised (benign) or invade adjacent tissues (malignant) or distant tissues (metastatic).
Urethra – The tube that carries urine from the bladder, and semen, out through the penis and to the outside of the body.
The Princess and the Pony
Princess Pinecone wants to be a great warrior, and really wants a horse suitable for a warrior for her birthday. However, she gets a short little pony instead and wonders how this little pony is going to help her become a fearsome warrior.
Typical expectations of bravery and fearlessness are overturned throughout, as Princess Pinecone proves her bravery not through ferociousness in battle but through tenacity and affection. In addition to subverting ideas of a typical princess, the book explores a variety of ways of being brave.
Edited by: Centre for Children’s Literature and Culture Studies (Ireland)
The Mar Thoma Church is one of the historic Churches of Christendom. It is an independent Church with headquarters in Tiruvalla, Kerala, South India, and it belongs to the Eastern family of Churches. The Church was established in A.D. 52 by St. Thomas, one of the Apostles of Jesus Christ. The early Church was known as the Malankara Church. In the early centuries the Church received ecclesiastical leadership from the Church of Syria, and owing to this association with the Syrian Church the name of the Malankara Church was changed to the Malankara Syrian Church.
The foreign domination of India in quick succession by the Portuguese, the French and the British paved the way for the spread of Christianity in India. The 16th and 17th centuries, which saw the heyday of Portuguese rule in India, were a period of great missionary activity and the rapid spread of the Roman Church. The missionary efforts of the Roman Church in Malabar were mainly directed towards winning the Syrians over to the obedience of the Pope of Rome. The decrees passed by the Synod of Diamper were all calculated to bring the faith and practices of the Syrian Church into conformity with those of the Church of Rome. To counter this move the leaders of the Syrian Church made frantic efforts to get a Bishop from one of the Eastern Churches. In 1653 the Patriarch of Babylon sent a Bishop named Ahatallah to Malabar, but the Portuguese seized him on his arrival and deported him to Goa, where he was tried by the Inquisition and eventually burnt at the stake.
The Syrians became furious when they heard about this dastardly act and they assembled in their thousands in front of the Church at Mattancherry and took an oath to have nothing to do with the Portuguese any more. This is known as the famous "oath of the Coonen Cross" because the granite cross around which the people assembled was inclined to one side. This event was a mile stone and a turning point in the history of the Syrian Church. Thus the ancient Malabar Syrian Church was divided into two branches, one group with the Roman Catholic Church and the other group as an independent Church under the leadership of the Arch Deacon.
The arrival in 1665 of Mar Gregorios, a prelate of the Jacobite Patriarchate of Antioch, was indeed a very significant milestone in the growth of an independent Church in Malabar. He consecrated Arch Deacon Thomas as Bishop under the title of Mar Thoma I. This resulted in a new relationship with the Jacobite Church of Antioch.
Although there were Christians in India long before the spread of Christianity in Portugal, England and Spain, unscriptural customs and superstitious beliefs and practices had crept into the Syrian Church over the centuries. But there was a nucleus of people who longed for the removal of such unscriptural faith and practices. There were two outstanding leaders in this group: Palakunnathu Abraham Malpan and Kaithayil GeeVarghese Malpan of Puthupally. They had occasion to come into personal contact with the C.M.S. missionaries and to imbibe their spiritual insights regarding Christian life and the nature and functions of the Church as depicted in the New Testament. There was stiff opposition from the conservative sections in the Church, but the movement gathered momentum as time passed.
The first printed Malayalam Bible, translated from Syriac, was published in 1811. Known as the Ramban Bible, it contained only the four Gospels. By 1841 the whole Bible had been translated, printed and released.
Rapid progress was made in the wake of the Reformation movement pioneered by the Martin Luther of the East, Abraham Malpan. He translated the Liturgy of the Holy Communion into Malayalam and had the courage to celebrate the Holy Communion in Malayalam. On Sunday, August 27, 1837, Abraham Malpan Achen conducted the Holy Communion service in the mother tongue, Malayalam, at his home parish, the Maramon Mar Thoma Church. This, in fact, marked the resurgence of the ancient Church in Kerala and gave new life and inspiration for the total renewal of the Church. Abraham Malpan soon realized that unless he had the support of a Bishop who was sympathetic towards his reforms, there was little prospect of the reform movement gaining ground. So his nephew, Deacon Mathew, was sent to the Patriarch at Mardin in Syria. Impressed with the character and ability of the Deacon, the Patriarch consecrated him as Bishop with the title Mathews Mar Athanasius. The inevitable separation took place in 1869. Cases were filed by the conservatives in 1879 regarding Church property, the Seminary and parishes.
Having lost its claim to the property, the Mar Thoma Church had to begin from scratch, building churches and organizing itself as an independent body. The earnestness and spiritual fervor of its leaders, lay and clerical, bore fruit, and the Church expanded phenomenally in the years that followed.
The Church awoke to its social responsibilities and provided leadership, spreading its faith and service activities from the Tibetan border in the north to Cape Comorin in the south. The Mar Thoma Church has been singularly fortunate in having a galaxy of 20 Metropolitans who were faithful to the legacy of the Church and built up the social, cultural and spiritual atmosphere of Kerala and the world at large.
The Mar Thoma Syrian Church today maintains healthy and cordial ecumenical relations with Churches the world over and has been a member of the World Council of Churches from its very inception. Dr. Yuhanon Mar Thoma was elected a President of the World Council of Churches at Evanston in 1954. Dr. M. M. Thomas, a member of the Mar Thoma Church, served as Moderator of the Central Committee of the W.C.C. and presided over its Fifth Assembly at Nairobi, rendering yeoman service to the cause of world peace, unity and understanding.
The famous Maramon Convention, which we have been holding annually for over a century, is a source of great spiritual power and inspiration for innumerable people. The losses were forgotten in the zeal of spiritual fulfillment.
There has been a phenomenal expansion of the Church during the last six decades, widening its frontiers to various countries of West Asia, Africa, North America and Western Europe. The Church now has 1,166 parishes, including congregations, divided into twelve dioceses. There are 13 Bishops, including the Metropolitan, and 795 active priests (and 151 retired priests). It has a democratic pattern of administration with a representative assembly (Prathinidhi Mandalam), an executive council (Sabha Council) and an Episcopal Synod.
The Church has been active in the field of education and owns 8 Colleges, 6 Higher Secondary Schools, 1 Vocational Higher Secondary School, 8 High Schools and 1 Training School; other educational institutions are owned and managed by individual parishes. We have 3 Technical Institutions, at Cherukole, Kalayapuram and Anchal.
The Church has 31 social welfare institutions, 11 destitute homes and five hospitals. The Mar Thoma Theological Seminary (established in 1926) and 6 other institutes cater to the theological education of both the clergy and the laity. Further, there are three Study Centres, at Managanam, Kottayam and Trivandrum, for arranging regular study programmes and providing opportunities for creative dialogue between church and society on various ethical, moral, social and religious issues. The religious education of children is looked after by the Christian Education Department, the Sunday School Samajam (organized in 1905), and work among youth is carried on by the Youth Department, the Yuvajana Sakhyam (organized in 1933). The Church also has a vigorously active women's department, the Mar Thoma Suvisesha Sevika Sanghom (organized in 1919).
The Mar Thoma Church is in full communion with the Anglican Church, the Church of South India and the Church of North India, and has cordial relations with the various denominations of the Christian Church. It actively co-operates with the C.S.I. and the C.N.I. through the CCI (Communion of Churches in India).
The Mar Thoma Church is financially independent and maintains its indigenous nature. Its regular work as well as special projects are entirely financed by contributions from its members at home and abroad.
While the history of the Church, especially during the last century, shows advance and growth in various directions, it will be admitted that there is little room for complacency. In the life of the individual as well as the community, we lag far behind the standard set by our Lord. The Church is in need of renewal in Spirit in order to become a more effective and useful instrument in His hands for the extension of His Kingdom. As members of the Church, let us therefore surrender ourselves under the mighty hand of God, so that He may exalt us and use us for His glory in the years to come.
"date": "2019-03-18T19:37:45",
"dump": "CC-MAIN-2019-13",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201672.12/warc/CC-MAIN-20190318191656-20190318213656-00376.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9746432304382324,
"score": 3.0625,
"token_count": 1888,
"url": "http://mtcfb.org/marthoma-church-history"
} |
II. PERSONAL AFFECTIONS
1. Passive Affections
[Expression of pain.] Lamentation.
cry (vociferation) [more]; scream, howl; outcry, wail of woe, frown, scowl.
tear; weeping; flood of tears, fit of crying, lachrymation, melting mood, weeping and gnashing of teeth.
plaintiveness; languishment; condolence [more].
mourning, weeds, willow, cypress, crape, deep mourning; sackcloth and ashes; lachrymatory; knell [more]; deep death song, dirge, coronach, nenia, requiem, elegy, epicedium; threne; monody, threnody; jeremiad, jeremiade!; ullalulla.
mourner; grumbler (discontent) [more]; Niobe; Heraclitus.
[Verbs] lament, mourn, deplore, grieve, weep over; bewail, bemoan; condole with [more]; fret (suffer) [more]; wear mourning, go into mourning, put on mourning; wear the willow, wear sackcloth and ashes; infandum renovare dolorem [Vergil] (regret) [more]; give sorrow words.
sigh; give a sigh, heave, fetch a sigh; "waft a sigh from Indus to the pole" [Pope]; sigh "like a furnace" [As You Like It]; wail.
cry, weep, sob, greet, blubber, pipe, snivel, bibber, whimper, pule; pipe one's eye; drop tears, shed tears, drop a tear, shed a tear; melt into tears, burst into tears; fondre en larmes; cry oneself blind, cry one's eyes out; yammer.
frown, scowl, make a wry face, gnash one's teeth, wring one's hands, tear one's hair, beat one's breast, roll on the ground, burst with grief.
complain, murmur, mutter, grumble, growl, clamor, make a fuss about, croak, grunt, maunder; deprecate (disapprove) [more].
cry out before one is hurt, complain without cause.
[Adjectives] lamenting; in mourning, in sackcloth and ashes; sorrowing, sorrowful (unhappy) [more]; mournful, tearful; lachrymose; plaintive, plaintful; querulous, querimonious; in the melting mood; threnetic.
in tears, with tears in one's eyes; with moistened eyes, with watery eyes; bathed in tears, dissolved in tears; "like Niobe all tears" [Hamlet].
[Interjections] heigh-ho! alas! alack! O dear! ah me! woe is me! lackadaisy! well a day! lack a day! alack a day! wellaway! alas the day! O tempora O mores! what a pity! miserabile dictu! O lud lud! too true!
[Phrases] tears standing in the eyes, tears starting from the eyes; eyes suffused, eyes swimming, eyes brimming, eyes overflowing with tears; "if you have tears prepare to shed them now" [Julius Caesar]; interdum lacrymae pondera vocis habent [Ovid]; "strangled his language in his tears" [Henry VIII]; "tears such as angels weep" [Paradise Lost].
"date": "2013-12-08T18:19:58",
"dump": "CC-MAIN-2013-48",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163785316/warc/CC-MAIN-20131204132945-00002-ip-10-33-133-15.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8036820888519287,
"score": 2.671875,
"token_count": 783,
"url": "http://thesaurus.com/roget/VI/839.html"
} |
World Inequality Database on Education
The World Inequality Database on Education (WIDE) highlights the powerful influence of circumstances, such as wealth, gender, ethnicity and location, over which people have little control but which play an important role in shaping their opportunities for education and life. It draws attention to unacceptable levels of education inequality across countries and between groups within countries, with the aim of helping to inform policy design and public debate.
Explore disparities in education across and within countries
Compare groups within countries
Compare overlapping disparities
Selecting an indicator compares disparities between countries for different groups, such as wealth, gender or location. Groups are visualized as coloured dots.
Clicking on a country shows the disparities for different groups, such as gender, wealth or location within the selected country.
Clicking on one of the groups shows overlapping disparities within countries. By combining multiple dimensions of inequality, you can compare, for example, the education of poor rural women with that of rich urban men within a given country.
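The single-dimension and overlapping-group comparisons described above can be sketched in a few lines. The survey rows and the filter function below are invented for illustration; WIDE's actual data and methodology are far more involved.

```python
# Hypothetical survey rows: (wealth, location, sex, completed_primary 0/1).
rows = [
    ("poor", "rural", "female", 0), ("poor", "rural", "female", 1),
    ("poor", "rural", "male",   1), ("rich", "urban", "male",   1),
    ("rich", "urban", "male",   1), ("rich", "urban", "female", 1),
]

def completion_rate(rows, **group):
    """Completion rate for rows matching every dimension given in `group`.

    Unspecified dimensions match anything; returns None if no rows match."""
    sel = [c for (w, l, s, c) in rows
           if group.get("wealth", w) == w
           and group.get("location", l) == l
           and group.get("sex", s) == s]
    return sum(sel) / len(sel) if sel else None

# Single dimension: wealth only.
print(completion_rate(rows, wealth="poor"))   # 2 of 3 poor rows -> 0.666...
# Overlapping dimensions: poor rural women vs. rich urban men.
print(completion_rate(rows, wealth="poor", location="rural", sex="female"))  # 0.5
print(completion_rate(rows, wealth="rich", location="urban", sex="male"))    # 1.0
```

Each extra keyword narrows the group, which is exactly the "overlapping disparities" idea: the gap between the most- and least-advantaged combinations is typically much wider than any single-dimension gap.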
"date": "2018-06-20T19:00:20",
"dump": "CC-MAIN-2018-26",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863834.46/warc/CC-MAIN-20180620182802-20180620202802-00016.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9403346180915833,
"score": 3.640625,
"token_count": 200,
"url": "https://www.education-inequalities.org/"
} |
Climate Change Abatement Strategies:
Which Way Is the Wind Blowing?
The article is a verbatim version of the original and is not available for edits or additions by Encyclopedia of Earth editors or authors. Companion articles on the same topic that are editable may exist within the Encyclopedia of Earth.
The mitigation of greenhouse gas emissions, already one hot topic, got even hotter with the 16 June 2009 publication of the White House report Global Climate Change Impacts in the United States. “Choices made about emissions reductions now and over the next few decades will have far-reaching consequences for climate-change impacts,” warned the strongly worded report, which emphasized the growing sense that action must be taken soon to avoid catastrophic public health fallout from accelerating climate change—a sense echoed in a proposed ruling by the U.S. Environmental Protection Agency (EPA) seeking authority to regulate greenhouse gases as a potential public health threat.
How best to drive the United States toward mitigation goals is a matter of disagreement among experts and politicians, however. The task is dauntingly complex because so many sources of greenhouse gases exist. The major sources of U.S. emissions are industry at 30%, transportation (including all forms of mass transit and shipping) at 28%, residential and commercial at 17% each, and agriculture at 8%, according to the Pew Center on Global Climate Change. There are other ways to slice the emissions pie. For instance, the electricity industry, which cross-cuts the above sectors, accounts for 30% of U.S. emissions.
Past regulatory efforts aimed at reducing fossil fuel use—which were geared toward problems other than greenhouse gas emissions, such as trade deficits and traffic congestion—illustrate the need for market forces to get the job done. For instance, it was only after gas prices soared past $4 per gallon in summer 2008 that sales of high-mileage cars finally surged, while SUV sales tanked. But gas prices’ tumble back below $2 per gallon has once again dampened demand for economy cars and raised light trucks’ market share back to roughly half, underscoring the mantra of economists, which has gained adherents among the major environmental groups: when it comes to changing human behavior, prices trump rules and regulations.
Thus, almost everyone who is concerned about climate change mitigation favors putting a price on greenhouse gas emissions. The question is, what strategies will yield the most mitigation bang for the investment buck?
Rules and Regulations: Laced with Loopholes
The history of so-called command-and-control policies—which dictate not only what the regulations are but also how they will be met—illustrates their limitations. The Corporate Average Fuel Economy (CAFE) standards were a logical response to the Arab oil embargo of 1973–1974, which swelled the U.S. trade deficit and saw Americans spending hours in gas lines. CAFE doubled new car fuel mileage to roughly 27.5 mpg by 1985. And that is where fleet-average automobile fuel economy has remained ever since—riddled with loopholes and becalmed by a lack of political will—despite technologic improvements that could probably have raised it by roughly one-third for light trucks and two-thirds for cars, says John DeCicco, a Michigan-based automotive consultant.
Additionally, CAFE spawned the SUV, which was classified for regulatory purposes as a light truck and was thus subject to much lower standards (slightly under 21 mpg) because at the time most were used for commercial purposes and because U.S. automakers lobbied for their exemption. Eventually, light trucks—a category that also includes minivans—grew to more than half the new car market, resulting in a slight decline in fleet-average fuel economy.
It took a convergence of rising worries about the geopolitical and climatologic effects of excessive oil consumption to muster the political will to pass the first significant hike in CAFE standards as part of the Energy Independence and Security Act in December 2007. This hike was supposed to push the fleet average to 35 mpg by 2020, but it included its own loopholes. For instance, it allowed manufacturers to trade credits in the manner of a carbon cap and trade scheme. Car companies could buy unlimited credits from the federal government, meaning they could buy their way out of mileage improvements.
On 19 May 2009 President Obama announced he was superseding that standard with one that will require a fleet average of 35.5 mpg by 2016. Various loopholes have been mitigated, but not eliminated, in Obama’s new standards, says Roland Hwang, vehicles policy director for the Natural Resources Defense Council.
Some jurisdictions promote purchases of high-mileage automobiles by offering benefits to owners of hybrid vehicles such as single-occupant access to high-occupancy vehicle lanes during rush hour. But these policies don’t necessarily encourage drivers to save gas. In metropolitan Washington, DC, for example, the gas-guzzling Chevy Tahoe hybrid SUV (which gets 21 combined city and highway mpg, according to Consumer Reports) automatically has access to high-occupancy vehicle lanes whereas the conventionally powered Honda Civic (at 33 combined mpg) and Smart ForTwo coupe (at 36 combined mpg) do not.
Moreover, the cost of abating carbon dioxide (CO2) with hybrid technology is a high $100–140 per ton of avoided emissions, according to Reducing U.S. Greenhouse Gas Emissions: How Much at What Cost?, a 2007 public–private study produced by groups including Shell Oil, the Natural Resources Defense Council, and the Environmental Defense Fund. By comparison, the report points to numerous measures ranging in cost up to $50 per ton that could be sufficient to cut U.S. carbon emissions projected for 2030 by about one-third, equivalent to a 28% reduction relative to emissions in 2005. In particular, comparable improvement in fuel economy can be achieved in conventionally powered cars—at an overall gain over the lifecycle of the vehicle of $81 per ton of avoided emissions—by using lighter-weight materials, optimal aerodynamics, turbocharging, drivetrain efficiency, and properly inflated tires.
Another example of the limitations of command-and-control legislation is the national Renewable Fuel Standard, adopted under the Energy Policy Act of 2005 and updated in the Energy Independence and Security Act of 2007. This standard called for 9 billion gallons of biomass-based fuel (about 5% of U.S. annual transportation fuel consumption) to be produced beginning in 2008, increasing to 36 billion gallons by 2022. In response, farmers began switching cropland from food to fuel feedstocks, which caused food prices to soar [see “Food vs. Fuel: Diversion of Crops Could Cause More Hunger,” EHP 116:A254–A257 (2008)]. Then, in the 29 February 2008 issue of Science, Joseph Fargione and colleagues showed that, whereas wild land can store immense amounts of carbon, cultivating new land for crops releases this carbon, creating a “carbon debt” that can last for tens to hundreds of years. In the same issue of Science, Timothy Searchinger and colleagues suggested that switching an acre of farmland from food to fuel crops creates demand for new farmland somewhere in the world to make up for that deficit in food production, thereby indirectly contributing to greenhouse gas emissions.
Thus, growing biofuel feedstocks could actually increase greenhouse gas emissions instead of abating them [see “The Carbon Footprint of Biofuels: Can We Shrink It Down to Size in Time?” EHP 116:A246–A252 (2008)]. Furthermore, the sustainability of corn ethanol has been questioned repeatedly because the energy required to produce the ethanol—usually derived from fossil fuel—is almost equal to the energy in the ethanol, obviating any presumed emissions or net energy advantage [see “Battle of the Biofuels,” EHP 115:A92–A95 (2007)].
In May 2009 the EPA proposed a new standard for renewable fuels that would more rigorously account for the carbon content of fuels (this is called a low-carbon fuel standard). However, EPA Administrator Lisa Jackson said corn ethanol distilleries under construction or already completed would likely be exempt from the new regulations. In addition, the EPA proposed tabulating greenhouse gas emissions over 100 years instead of 30. This would improve corn ethanol’s numbers by allowing more time to pay back the carbon debt incurred when new land is plowed, but 30 years is a far more appropriate basis for analyzing lifecycle carbon impact given the probable urgency of mitigating climate change, says Nathanael Greene, a senior energy policy specialist at the Natural Resources Defense Council. In the 6 May 2009 edition of the Washington Post, Frank O’Donnell, head of Clean Air Watch, was quoted as saying, “EPA has left open the option that an exception to good science could be made in the case of a favored special interest.”
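The "carbon debt" arithmetic behind the 30- versus 100-year dispute is simple enough to sketch. The debt and annual-savings figures below are hypothetical placeholders, not values from Fargione, Searchinger, or the EPA.

```python
# A converted acre releases a one-time "carbon debt"; the biofuel grown on it
# then saves some CO2 each year relative to gasoline. The payback period is
# debt / annual savings -- which is why the accounting horizon matters so much.

def payback_years(carbon_debt_tons, annual_savings_tons):
    """Years until cumulative fuel savings repay the land-conversion debt."""
    if annual_savings_tons <= 0:
        return float("inf")  # the fuel never repays its debt
    return carbon_debt_tons / annual_savings_tons

def net_savings(carbon_debt_tons, annual_savings_tons, horizon_years):
    """Net CO2 avoided over the horizon (negative means a net emitter)."""
    return annual_savings_tons * horizon_years - carbon_debt_tons

debt, savings = 120.0, 2.0  # tons/acre released; tons/acre/yr saved (made up)
print(payback_years(debt, savings))    # 60.0 years to break even
print(net_savings(debt, savings, 30))  # -60.0: net emitter on a 30-yr horizon
print(net_savings(debt, savings, 100)) # 80.0: net saver on a 100-yr horizon
```

With these (invented) numbers the same fuel is a net emitter under a 30-year accounting window and a net saver under a 100-year one, which is precisely the objection critics raised to the EPA's longer horizon.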
The bottom line: it costs roughly 10 times more to achieve a given level of CO2 abatement using a low-carbon fuel standard than it does using carbon pricing, according to a study published in the February 2009 issue of American Economic Journal: Economic Policy by Stephen Holland of the University of North Carolina at Greensboro and Jonathan Hughes and Christopher Knittel of the University of California, Davis. Additionally, although a low-carbon fuel standard taxes high-carbon fuels, it actually subsidizes low-carbon fuels, thus failing to encourage carpooling, reduced driving, or other carbon avoidance, says Holland, a professor of economics.
In another example of cost-ineffective decision-making in politics, Pennsylvania’s Alternative Energy Portfolio Standard Act of 2004 mandated more than 800 Mw (roughly a nuclear plant’s worth) of solar photovoltaics installations by 2021. But Pennsylvania could obtain the same energy from wind for less than one-quarter the cost, according to “Cap and Trade Is Not Enough: Improving U.S. Climate Policy,” a policy paper from the Department of Engineering and Public Policy, Carnegie Mellon University.
Urban mass transit is another oft-touted solution to greenhouse gas emissions. But several experts, including Andreas Schafer, a lecturer at The Martin Center for Architectural and Urban Studies, University of Cambridge, United Kingdom, believe that outside of densely populated cities, the cost of reducing emissions by luring people out of their cars onto buses or subway systems is far too high relative to other means of mitigation to merit consideration on that basis. “It is very difficult to get people out of their cars and put them into mass transit on a significant scale, whereas improving the fuel efficiency of vehicles is significantly more realistic,” says Schafer.
To Market, to Market
Whereas a tax simply puts a price on each ton of CO2 emitted, under a cap-and-trade system policy makers set a limit on annual carbon emissions, then let the price float, dictated by the market. The government gives or auctions “allowances” (permits to emit a specific quantity of carbon) to CO2 emitters, who can buy and sell the allowances among themselves. Thus, a utility that can easily reduce emissions, perhaps through efficiency improvements, can sell its allowances to companies for whom reducing emissions would be more costly than buying the allowances.
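The trading logic just described, in which cheap abaters cut emissions and sell spare allowances to firms facing higher abatement costs, can be sketched as a toy least-cost dispatch. The firms, emissions, and marginal costs below are invented for illustration.

```python
# Under an ideal cap-and-trade market, abatement happens in order of marginal
# cost until total emissions fit under the cap; the allowance price converges
# toward the cost of the last (most expensive) ton abated.

def least_cost_abatement(firms, cap):
    """firms: list of (name, emissions_tons, marginal_cost_per_ton).

    Returns (tons abated per firm, resulting allowance price)."""
    total = sum(e for _, e, _ in firms)
    needed = max(0, total - cap)          # tons that must be abated overall
    plan = {name: 0 for name, _, _ in firms}
    price = 0
    for name, emissions, cost in sorted(firms, key=lambda f: f[2]):
        if needed == 0:
            break
        cut = min(emissions, needed)      # cheapest abaters act first
        plan[name] = cut
        needed -= cut
        price = cost                      # last unit abated sets the price
    return plan, price

firms = [("A", 100, 20), ("B", 100, 50)]  # hypothetical $/ton abatement costs
plan, price = least_cost_abatement(firms, cap=150)
print(plan, price)  # {'A': 50, 'B': 0} 20
```

In this two-firm example, A does all the abating at $20/ton and sells its spare allowances to B, for whom abatement would cost $50/ton; the cap is met at the lowest total cost, which is the efficiency argument for trading over uniform mandates.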
Either mechanism—taxation or cap and trade—would best be applied “upstream” at the point of energy production. In other words, instead of having to monitor millions of tailpipes, furnaces, factories, and the like, regulators would oversee “roughly 150 oil refineries, 1,460 coal mines, and 530 natural gas processing plants,” according to Policy Options for Reducing CO2 Emissions, a February 2008 report by the Congressional Budget Office.
Between the two market solutions, economists generally prefer a tax because it’s simpler. But it is very hard to change taxation systems, says Gregory P. Nowell, an associate professor of political science at the University at Albany–SUNY. Voters fear they would lose somehow if the taxation system changes, he says, adding that high taxes in Europe are not necessarily due to environmental foresight. “In Europe in the 1930s the coal industry favored punitive taxation on oil to slow that market’s growth,” he says. “But the advantages of oil over coal were so great that the market grew anyhow. Now those taxes account for fifteen to twenty percent of government revenues, and shifting them to other sectors of the economy would be an electoral nightmare.”
Another political advantage for cap and trade is that it has a precedent in the United States, having been used successfully to reduce sulfur dioxide emissions. However, that task, which merely required switching from high- to low-sulfur coal, was far simpler than replacing an entire energy infrastructure, says Laurie Williams, an enforcement attorney with EPA’s region 9, speaking in her personal capacity with ethics clearance from the agency.
The European Union’s greenhouse gas emissions trading scheme (EU ETS), which also lends credibility to U.S. efforts toward cap and trade, nonetheless has often been criticized for “over-allocating” permits—that is, setting the cap higher than current emissions, which can delay measures to reduce emissions. But Denny Ellerman, a senior lecturer in applied economics at the Massachusetts Institute of Technology Sloan School of Management, says this happened during a trial period from 2005 to 2007, and that recently released data for 2008 indicate the scheme, which serves 27 nations, is now reducing emissions.
Still, some critics worry that setting a cap and letting the market determine the price of emissions—rather than setting the price as with a tax—means that when allowance prices fall, so does the incentive for investing in efficient and low-carbon technology, says Michelle Chan, director of the Green Investments Program at the advocacy group Friends of the Earth. A variety of measures can limit that volatility, such as price floors and ceilings, and provisions that allow companies to bank allowances for future years or borrow them, says economist Ian Parry, a senior fellow at the nonprofit Resources for the Future. Nonetheless, economists hold that a ceiling can weaken the cap.
Whichever market mechanism ultimately prevails on Capitol Hill—assuming one does—economists acknowledge the legislation may require complementary measures to offset certain “market failures,” or situations in which the prices of goods do not reflect the true cost of producing those goods. As one example, consumers often fail to consider lifecycle costs of items ranging from light bulbs to houses, or they do so with short several-year horizons in contrast to, say, utility companies, which take a 20- to 30-year view.
These market failures are costly, according to Reducing U.S. Greenhouse Gas Emissions: How Much at What Cost? Nearly 40% of abatement could be achieved at negative marginal cost, according to the report. For example, it is cheaper to build efficient buildings, vehicles, and appliances than it is to retrofit or retire them early, yet such options must be pursued quickly because the potential benefit diminishes rapidly as more inefficient buildings and vehicles are produced.
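The lifecycle-cost market failure mentioned above can be illustrated with a hypothetical light-bulb comparison; all prices, wattages, usage hours, and electricity rates below are invented.

```python
# A consumer with a one-year horizon picks the cheap-to-buy bulb; a utility
# with a ten-year horizon picks the efficient one. Same math, different window.

def lifecycle_cost(price, watts, hours_per_year, years, cents_per_kwh=12):
    """Purchase price plus electricity cost over the horizon, in dollars."""
    kwh = watts * hours_per_year * years / 1000
    return price + kwh * cents_per_kwh / 100

incandescent = dict(price=1.0, watts=60)
led          = dict(price=8.0, watts=9)

for years in (1, 10):
    a = lifecycle_cost(incandescent["price"], incandescent["watts"], 1000, years)
    b = lifecycle_cost(led["price"], led["watts"], 1000, years)
    print(years, round(a, 2), round(b, 2))
# 1-year horizon:  incandescent $8.20 vs. LED $9.08 -- incandescent "wins"
# 10-year horizon: incandescent $73.00 vs. LED $18.80 -- LED wins decisively
```

The crossover is the market failure in miniature: the efficient choice has negative marginal abatement cost over its life, yet short consumer horizons leave it unbought, which is the rationale for efficiency standards alongside carbon pricing.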
The American Clean Energy and Security Act of 2009
Market strategies and complementary measures for supporting them are both addressed in the American Clean Energy and Security Act of 2009, sponsored by Representatives Henry Waxman (D–CA) and Edward Markey (D–MA). The bill creates an economy-wide cap-and-trade program at the level of refiners, importers of liquid fuels, and the coal mining industry, augmented by a smorgasbord of complementary measures.
The bill aims to boost the share of low- or zero-carbon primary energy (energy that exists in raw form, such as the coal or uranium used in power plants to generate electricity, or the solar energy that hits a collector, as opposed to the resulting electricity or heat they provide to consumers) to 18% by 2020 and 46% by 2050, according to the EPA’s Preliminary Analysis of the Waxman–Markey Discussion Draft. Low- and zero-carbon energy sources include renewable fuels, nuclear power, and fossil fuels with carbon capture and storage measures. The bill also aims to reduce total greenhouse gas emissions by 20% by 2020 and by 83% by 2050, relative to 2005 levels.
If the allowances were auctioned rather than given to CO2 emitters, and if most of the revenues from those auctions were given to households, the annual cost of the legislation would be less than $150 per household, according to the EPA analysis. However, the current plan is to give away more than 80% of the allowances, says Williams. In the March 2009 working paper “Who Pays for Climate Policy? New Estimates of the Household Burden and Economic Impact of a U.S. Cap-and-Trade System,” author Andrew Chamberlain of the education group Tax Foundation wrote that a cap-and-trade scheme that begins by giving away allowances would cost the poorest households $528 each versus a net gain of $1,904 to the wealthiest households, thanks to windfall profit dividends from shareholding in the companies receiving free allowances.
The legislation’s current iteration includes a renewable electricity portfolio standard, which would require utilities to obtain 20% of the electricity they produce from renewable sources by 2025 and consider emissions over the entire lifecycle of fuel production and use. There are also provisions for deploying plug-in hybrid electric vehicles (PHEVs) in certain regions. This would include requiring utilities to develop plans for the necessary infrastructure, such as stations for charging and battery swapping. The bill also calls for substantial improvements in building efficiency, lighting, appliances, and investments in public transportation, along with awards for inventions that improve industrial efficiency.
According to the EPA analysis, key uncertainties around the bill include the long-term cost of abatement, the availability and cost of domestic “offset” projects, and the technical, political, and social feasibility of new nuclear power and the large-scale practicality of carbon capture and storage [for more information on this technology see “Carbon Capture and Storage: Blue-Sky Technology or Just Blowing Smoke?” EHP 115:A538–A545 (2007)]. The bill also does not address greenhouse gases other than CO2 except to make agricultural greenhouse gas emissions a target for offsets.
The major reliance on offsets for roughly one-third of emissions reductions is one of the strongest criticisms of Waxman–Markey. In an offset scenario, polluters can counterbalance their greenhouse gas emissions by paying for carbon-mitigating activities such as planting trees or building a renewable energy installation. The advantage: offsets are cheaper than allowances. In fact, including offsets in the bill is a way to restrain allowance costs.
However, the market for offsets both in the United States and abroad is already unreliable and could get much worse as it rises to a projected $2 trillion annually by 2020, according to Chan. Moreover, the same kind of financial creativity that figured in the recent mortgage market meltdown would likely apply to the offset market. For example, the financial firm Credit Suisse bundled a series of offsets prior to their being verified as legitimate by the United Nations Clean Development Mechanism (which serves parties to the Kyoto Protocol) and then sliced the bundles into packages for sale in a process called “securitization,” says Chan. This same type of activity rendered mortgage-backed securities so far removed from the original loans and the value of the homes they financed that it became impossible to determine the quality of the loans, contributing to the subprime crisis. “[The same] could happen again as carbon securitization deals get bigger and more complex,” says Chan.
In the April 2008 working paper “A Realistic Policy on International Carbon Offsets,” Stanford researchers Michael W. Wara and David G. Victor wrote that corporations seek the cheapest offsets, which also tend to be the ones where mitigation is hardest to measure and verify. Furthermore, they wrote, “much of the current [Clean Development Mechanism] market does not reflect actual reductions in emissions, and that trend is poised to get worse.” [For more information on these schemes, see “Carbon Offsets: Growing Pains in a Growing Market,” EHP 117:A62–A68 (2009).]
In devising a renewable electricity portfolio standard, it is important to distinguish among technologies, says Granger Morgan, a professor of engineering and public policy at Carnegie Mellon University. It makes sense, he says, to deploy technologies that are “within striking distance of being cost-competitive,” where growing the market might result in new knowledge that could bring costs down to competitive levels. Conversely, if a technology is far from being cost-competitive and unlikely to achieve it in current form, then investing in research and development toward developing a cost-competitive version makes more sense.
Subsidies make sense for wind power, says Morgan. “Wind is now one of the most cost-effective ways to produce low-carbon electricity.” If utilities had to pay for carbon emissions, electricity would be cheaper from wind than from coal, he says. However, Williams criticizes Waxman–Markey for putting “such a low price on carbon it will not make even wind cost-competitive.”
Subsidies also make sense for 20-mile-range PHEV batteries, both to learn how to improve the infrastructure and to provide incentives for development of better batteries, says Morgan, whereas 60-mile batteries would not be cost-effective at present. Moreover, PHEVs are not appropriate in regions where coal, which supplies half of U.S. electricity, is the major source of electricity, because in these cases, they would not necessarily reduce greenhouse emissions.
Waxman–Markey is likely to pass the House in late 2009 or early 2010, says Juliet Eilperin, a Washington Post reporter who covers environmental matters on Capitol Hill. But even with the Senate in Democratic hands, the bill’s passage in that body is by no means assured. Waxman–Markey is somewhat controversial among proponents of a market solution, some of whom favor a carbon tax. It has received praise from some environmentalists (though vehement opposition from others), some economists, and from the EPA analysis—although, says Williams, many agency staff disagree with this analysis. Should Congress fail to act, it would be left to the EPA to regulate greenhouse gases, a process that could take two years, according to an agency spokesperson.
No Easy Answers
Nowell warns that the tools used for greenhouse gas mitigation must be appropriate for the task. “We are facing what is arguably the greatest environmental calamity in human history with regulatory mechanisms that were designed for other uses, including congestion control and tropospheric pollution control,” he says. “It has a heroic quality, but also resembles trying to wage war against a modern army with pitchforks and baseball bats.”
So what strategy offers the most climate mitigation bang for the investment buck? Among 18 experts questioned on mitigation strategies by the U.S. Government Accountability Office for its May 2008 report, Climate Change: Expert Opinion on the Economics of Policy Options to Address Climate Change, 7 preferred a tax, and 11 preferred some form of cap and trade. Despite that disagreement, one message came through loud and clear: 16 of the 18 experts urged adoption of some form of carbon pricing as soon as possible.
David C. Holzman
David C. Holzman writes from Lexington and Wellfleet, Massachusetts, on science, medicine, energy, economics, and cars. He has written for EHP since 1966.
Unless you have the materials and the skills, this is not for you. I will use this in a robot I’m working on, but Mindsensors sells pre-made versions of the same concept.
What this allows you to do is make pneumatic robots, using either an RC receiver or an NXT servo controller!
These things are relatively easy to make. Materials:
- A drill press
- A drill bit the same size as LEGO pins.
- A small drill bit for the servo screws.
- A jigsaw
- A pen
- Misc. LEGO parts.
- A small servo with horn and screws.
- A LEGO pneumatic switch.
- A piece of scrap wood big enough for the switch and the servo.
- A length of iron wire.
- Saw a rectangle big enough for the switch and the servo.
- Draw a line around the servo, and a dot through the pinholes of the switch.
- Drill out the pinholes, and make an extra hole on the servo outline.
- Open the jigsaw, and put the blade through the hole on the line.
- Close the saw and saw out the servo hole.
- Drill small holes for the servo screws.
- Attach the servo and the switch to the wood with any LEGO at hand.
- Bend a piece of iron wire around the switch and the servo horn.
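Whether the servo is driven from an RC receiver or an NXT-style controller, its two positions (switch open and switch closed) ultimately correspond to two PWM pulse widths. A minimal sketch of the usual hobby-servo mapping follows; it assumes the common 1000–2000 µs range over 0–180°, and the two lever angles are hypothetical values to be tuned on the real build:

```python
def angle_to_pulse_us(angle_deg, min_us=1000.0, max_us=2000.0):
    """Map a servo angle in degrees to a PWM pulse width in microseconds.

    Assumes a linear 0-180 degree servo with the common 1000-2000 us
    pulse range; real servos vary, so treat the numbers as illustrative.
    """
    if not 0.0 <= angle_deg <= 180.0:
        raise ValueError("servo angle must be between 0 and 180 degrees")
    return min_us + (max_us - min_us) * angle_deg / 180.0

# Two illustrative lever positions for the pneumatic switch
# (hypothetical angles -- tune them against the wire linkage):
SWITCH_OPEN_DEG = 45.0
SWITCH_CLOSED_DEG = 135.0
```

Sending the corresponding pulse width from the receiver or controller flips the switch; the iron-wire linkage converts the horn's rotation into the switch's slide.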
New methods to treat the potential medical problems of premature babies are constantly evolving. Here's an introduction to a few areas that the NICU staff will be monitoring for your baby.
Neonatology is a rapidly changing field of medicine, and there are bound to be variations from hospital to hospital in terminology, technology, and treatments. Your doctor may use slightly different terms to describe some of the medical conditions in babies born prematurely. However, this basic information may help you begin to build your understanding.
Need for Warmth
A premature baby, especially a baby with breathing problems, is marginally supplied with calories and oxygen - the fuels he needs to heat his body. A main objective of your NICU staff is to keep your baby warm - but not too warm. Your baby will most likely be placed in an incubator or warmer, so that his temperature may be carefully controlled.
A tiny device that acts as a thermometer may be taped over your baby's belly. It constantly senses your baby's temperature and regulates the temperature of the environment within the incubator. It will increase the warmth when your baby gets too cool and decrease it when he's too warm.
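The regulation described above is a simple closed feedback loop: measure the skin temperature, warm when too cool, cool when too warm. A minimal sketch of such a loop follows; the 37.0 °C setpoint and the dead band are illustrative values for the sketch only, not how any particular incubator is implemented and not clinical guidance:

```python
def heater_adjustment(skin_temp_c, setpoint_c=37.0, dead_band_c=0.2):
    """Return +1 to increase warmth, -1 to decrease it, 0 to hold steady.

    A bang-bang (on/off) controller with a small dead band around the
    setpoint so the heater does not switch on and off constantly; real
    incubators use more sophisticated control.
    """
    if skin_temp_c < setpoint_c - dead_band_c:
        return +1  # baby too cool: raise incubator temperature
    if skin_temp_c > setpoint_c + dead_band_c:
        return -1  # baby too warm: lower incubator temperature
    return 0       # within the dead band: hold
```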
Your baby's axillary (under the arm) temperature will be checked frequently, as well.
For added warmth, and also to keep him from grabbing or kicking tubes and wires, your baby may be dressed in mittens, booties, and a hat.
The goal is to keep your baby's body temperature as close to normal as possible: 98.6°F (37.0°C). This is also the temperature at which he conserves the most oxygen and calories, and gains the most weight.
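The two figures quoted are the same temperature on different scales; the standard conversion is:

```python
def fahrenheit_to_celsius(temp_f):
    """Convert a temperature from degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

def celsius_to_fahrenheit(temp_c):
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return temp_c * 9.0 / 5.0 + 32.0
```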
It's very common for a premature baby to have breathing problems. The severity of the problem may depend on how prematurely your baby was born. A premature baby's lungs aren't as fully developed and ready to breathe as a full-term baby's. Let's look now at some common problems associated with breathing.
- Apnea and Bradycardia: Apnea is the term used to describe the times a baby interrupts breathing. Apnea is very common among premature babies in the early weeks of life. Apnea is often accompanied by bradycardia-a lower-than-normal heart rate. For a tiny baby this means the heart is beating fewer than 100 times a minute. This is considered slow for a baby, even though an adult heart rate is usually much slower.
- Respiratory Distress Syndrome (RDS): RDS is a breathing disorder that may be found in premature babies. It is caused by the baby's inability to produce surfactant - the fatty substance that coats the alveoli (the tiny air sacs in the lungs) and keeps them from collapsing. An unborn baby's lung tissue begins making small amounts of surfactant in the early weeks of pregnancy, but most babies aren't producing enough surfactant for proper breathing until the 35th week of gestation. However, since babies do vary greatly in their rates of lung development, some premature babies have enough surfactant to breathe without difficulty while others do not. In general, the more premature the baby, the greater the risk of developing RDS.
- Pulmonary Interstitial Emphysema (PIE) and Pneumothorax: If it is necessary for your baby to be on a respirator (breathing machine), the pressure may occasionally cause air to leak from his lungs. Tiny air bubbles may be forced out of the alveoli and in between layers of lung tissue. This condition, called pulmonary interstitial emphysema (PIE), usually subsides as your baby's respiratory problems improve and respirator pressure to the lungs can be reduced. Sometimes a tear can occur and air can leak into the surrounding chest space, causing the lung to collapse. This is the condition called pneumothorax.
More than half of all full-term babies and more than three-fourths of all premature babies get jaundice during the first three to seven days of life. This isn't a reason for concern most of the time, although it does cause the baby's skin and whites of his eyes to turn somewhat yellow.
Babies are born with a large number of fetal red blood cells. Normally, as red blood cells break down, bilirubin, which is a yellowish pigment, forms. The bilirubin is detoxified (processed) in the liver. If the enzymes in the liver that process the bilirubin aren't working efficiently yet (which happens often in newborns and especially in premature babies) the bilirubin level rises in the blood and some of it enters body tissue, where it then temporarily causes a yellowing-the condition called jaundice.
While in the NICU, your baby's blood will be frequently checked for a rise in bilirubin. If the levels rise closer to those that are considered unsafe, he may be treated by phototherapy (most common) or exchange transfusion.
If your baby is in the NICU, chances are he will need to receive some type of medication, nutrients or perhaps blood. There are two common ways medicine is provided to your baby in the NICU.
- Umbilical Arterial Catheter: The umbilical arterial catheter is inserted through the end of a baby's umbilical cord (at the belly button) and is threaded through the umbilical artery into the aorta, the main artery supplying the body with oxygenated blood. While this sounds painful, it really isn't. There are no nerve endings in your baby's umbilical cord where the catheter (tiny tube) is inserted.
- IV Pump/Superficial IV: An IV pump is a machine attached to a pole placed near your baby's bed. IV stands for intravenous (in-trah-vee-nous), which means into the veins.
We have an intuitive understanding that objects do not change shape in time and space. We know that the coffee mug does not grow bigger as we bring it toward us and it surely does not cease to exist when we cover it with a newspaper. This knowledge, which is seen in infants as young as 5 months old, is called object permanence. Without this information our perception of the world would be chaotic and frightening.
But what are the neural correlates of this property? Turns out, this is still an open question, one that Arun Sripati and Puneeth NC from Indian Institute of Science in Bangalore investigated in the visual system of macaque monkey brain.
It has been previously established that fundamentally, the visual information entering the retina is processed by two different streams. The “where/how” pathway and the “what” pathway. These pathways are hierarchically connected brain regions with neurons processing increasingly complex features of the visual information along the pathway.
Object recognition has been localised to the “what” pathway, also called the ventral stream. At the higher end of this pathway is a structure called the inferior temporal cortex (IT). Earlier work on neurons in this area showed that IT cortex is vital for object recognition. Single neurons in this region respond for entire objects, but do not fire for constituent components of the object. For example, they fire when an image of a face is shown but not when the nose, eyes or lips are shown separately.
What Sripati and Puneeth asked was whether the activity of single neurons in IT cortex correlates with the property of object permanence.
To answer this question, they trained naive monkeys to fix their gaze at a central spot on a computer screen, using juice squirts as reward. An object was shown, and then an occluder in the form of a brick wall moved toward the object and covered it completely. The occluder then moved away to reveal the object again. In “match” trials, the same object reappeared, but in “surprise” trials a completely new object appeared, breaking the expectation of object permanence. While the monkey viewed the stimuli, they simultaneously recorded the electrical activity of neurons in the IT cortex of the monkey brain.
They found that a small group of neurons (8%) in IT cortex fired when the object was shown after occlusion. Within this pool, one group fired in the “surprise” condition and another in the “match” condition. This effect was a generalised property of the IT neurons, as they did not fire for one specific object but for many of the pairs tested. This shows that IT neurons keep track of an object, with some responding when object permanence holds and others when it is violated.
How do these neurons know that the object is the same or different? The authors posit that it could be a memory-based process, as the neuron has to remember the object during the occlusion. They did find such a signal, correlating with memory, in a small group of single IT cortex neurons.
This study adds evidence to the notion that aspects of high-level visual processing, especially object permanence, are processed by single neurons in IT cortex. The lead author of the study, Arun Sripati, says, “Understanding how single neurons in IT represent objects will eventually help us devise better computers and help diagnose and treat disorders of high level vision in humans. Our goal is to understand why we are able to make computers play chess but are unable to make them see”. However, the authors caution that this might not be a causal relationship. Object permanence could very well arise from another area in the visual pathway and this information could be passed on to IT neurons.
A study found that some people have a memory issue that can contribute to their internal clock being off: chronic lateness may originate in what’s called Time-Based Prospective Memory, a function of memory that triggers a time cue. Yours may run late; however, there are ways to get around that problem.
- Accept that you have this issue—Saying “I’m not ALWAYS late” simply because you are occasionally on time is not the answer. Generally speaking, people who are late are nearly always late by the same amount of time. If you are consistently late by 10, 20, or even 30 minutes, then you must consistently keep a personal schedule of appointments 15, 25, or 35 minutes earlier than agreed upon.
- Set reminders—Thanks to smart phones there are plenty of ways to set alerts. The easiest is to program a reminder alert that is, instead of the usual hour before an event, perhaps an hour and a half in advance. You can add a second alert at 40 minutes pre-event too.
- STOP MULTI-TASKING—There is no such thing as multi-tasking. Our brains simply can’t focus on more than one thing at a time. All you are doing is slowing down your focus on any one thing, because you have to power up and down on one task in order to switch to the other. That adds a delay to both tasks you are trying to take on.
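The reminder arithmetic in the tips above (alerts roughly 90 and 40 minutes before an event, rather than the usual hour) is easy to script. A minimal sketch using Python's standard datetime module; the offsets and the example meeting time are just the illustrative values from the text:

```python
from datetime import datetime, timedelta

def reminder_times(event_start, offsets_minutes=(90, 40)):
    """Return one reminder datetime per offset, counted back from the event."""
    return [event_start - timedelta(minutes=m) for m in offsets_minutes]

# Example: a 12:00 meeting gets alerts at 10:30 and 11:20.
meeting = datetime(2024, 1, 15, 12, 0)
alerts = reminder_times(meeting)
```

The same two datetimes can then be fed to whatever alarm or calendar mechanism your phone or desktop exposes.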
The natural science collection comprises around 800,000 specimens, and is hugely diverse in terms of subject area, specimen type, and taxonomic range. It is Designated as being of national and international significance.
The geology collection includes a wide range of fossils, minerals, rocks and meteorites, telling the story of our planet’s history. Yorkshire’s geology is very well represented, and the mineralogy collection is a particular strength comprising a range of British and largely European specimens including some significant rarities and a modest collection of cut and polished gemstones.
There are a number of type and figured specimens in the fossil collection and some rare assemblages of excavated cave material including Raygill fissure and Victoria Cave. Highlights of the palaeontology collection include a Giant Deer (formerly Irish elk) skeleton, ichthyosaur skeletons, and the Armley Hippo.
The zoology collection includes shells, skeletal material, microscope slides, taxidermy, skins, eggs and spirit specimens. This material represents a vast range of biodiversity including vertebrates, arthropods and molluscs.
The conchology collection (shells) is a particular strength with massive research value, as well as being a valuable resource for learning and display. We hold a range of type and figured material, such as specimens collected by Sylvanus Hanley and material described by Terry Crowley.
The taxidermy collection, including historic as well as recent specimens, is very popular with visitors of all ages, and is an inspiration for artists and scientists alike. We hold taxidermied specimens of endangered species including snow leopard, kakapo, and giant panda, and extinct species such as thylacine, huia, passenger pigeon and hyacinth macaw. These are hugely important for education, display and research purposes. Sadly, museums such as ours are now the only place where many of these species can be seen or researched.
Highlights of our osteology collection include five skulls of the extinct thylacine, and skeletal material from extinct birds such as dodo, great auk, and moa.
We hold a range of entomology material covering all insect orders, including insects collected in Yorkshire and around the world. We hold one of the world’s best collections of fig wasps, collected and recently donated by Dr Stephen Compton. We have a strong collection of butterflies and moths collected both locally and abroad, including specimens of extinct and endangered species.
Our botany collection includes thousands of mounted plant specimens and seeds, as well as dried fungi, mosses and lichen. The flora of Leeds and Yorkshire is well represented, and has recently been made more accessible to the public and researchers through the Museum to Meadow project, funded by the MA’s Effective Collections initiative.
From mysterious seeds to the oldest rocks, tiny fleas to huge mammoth tusks, or aardvarks to zebras, our collections are a valuable resource for anyone wishing to find out more about the world around them.
Building independent insurance-proof structures (IIPS) demands thinking beyond the box and into the future. Building the way we do today has nowhere to go. There are few ways to improve efficiencies or to reach the goals of creating OsumPODS with the methods or materials used in stick-built construction, or any other conventional method such as steel, brick or block. The building industry depends upon cheap labor to be competitive, adding to the problems of illegal immigration. It’s not to say that there aren’t decent wages paid in the building industry, but where grunge work is required, it is not beyond most contractors to turn their backs and let illegal migrants get the work done. Changing our concept of how to build more logically and efficiently can eliminate much of the grunge work. It can also eliminate much of the waste that ends up in landfills.
To create structures that can withstand insurable losses, we must first engineer and then construct and furnish them so that they will meet numerous criteria, including but not limited to being able to withstand without appreciable damage those perils that are currently insured. More important, occupants must be able to survive within them. It is impossible to be totally immune to war and terrorist attacks, however, such structures would be far less vulnerable.
The challenges to reach such goals are first to develop materials and a process that will yield structures that can survive for at least a thousand years. Next will be to design such structures so that they are adaptive to our needs and easily upgradeable. Most important will be to disconnect them from outside utilities to make them independent.
We may ask if adopting a new code for building that incorporates all of the criteria espoused in the OsumPODS concept is enough to balance the impact that we put on this planet. Since our species negatively affects the planet in numerous other ways, it is certainly clear that how we build and how we consume to furnish and maintain what we build is only part of the equation. However, if we are able to achieve the design criteria, living and working in OsumPODS will have positive effects in many other areas. If OsumPODS are able to produce excess energy, that energy can then be utilized to power electric cars and other transportation. As the demand for electric-powered vehicles rises, the transportation industry, in general, will be compelled to end production of gasoline- and diesel-powered vehicles. If OsumPODS owners supplement their diet with home-grown fruits and vegetables, then the demand for farming will generally be reduced, and so will the demands for water and chemical-based fertilizers. If products are manufactured to OsumPODS standards, then the demand for replacement products will predictably be reduced, meaning less waste of materials and energy in the manufacturing process.
Hopefully the criteria applied to the OsumPODS concept will be applied to all endeavors, and at that point we will approach a sustainable existence. The other significant factor will be the control of world population. If we are unable to attain these goals with technology, then the next step will be to actually reduce populations with birth control until we can create a sustainable existence. Once we have reached such goals, it will be important to monitor world population and live within reasonable guidelines. If we fail to take these steps, the environment will do it for us, and the process may prove to be very unpleasant.
It’s certain at first that the OsumPODS concept may appear to be outlandish and affordable only to the very wealthy, or likely to be implemented by just those few that have a high tolerance to risk. In fact, investing in OsumPODS is a venture, whose affordability cannot be weighed in the costs of construction alone. Overall, it is the intention of Building OsumPODS to prove that the concept will, given time, be more affordable than conventional construction. Also, the objective should not be misconstrued as a folly for an adventuresome few who can or are willing to gamble and not buy insurance, or to those that have a vendetta against insurance companies as a whole and hope to see them out of business. The logic of insurance isn’t conceptually bad. It is like the parable of the ant and the grasshopper, and we all know what happens to the grasshopper. The problem with insurance as a business is that it is the goal of the corporation to make its stockholders a reasonable profit if not wealthy, quite often at the expense of its policyholders. In reality, it should be the policyholders that own the company and benefit.
In building OsumPODS, it is not to say that structures must be or will be engineered to be virtually indestructible to be rated insurance proof. It would be near impossible to expect that anything manmade could withstand the forces of a volcano, an asteroid impact, a nuclear explosion, or a giant tsunami. Chances of survival in such rare and extreme events would make it impractical to engineer to prevent losses. Indeed, insurance companies do not insure for war, and most often make exclusions for certain catastrophes. Losses from catastrophic events of high magnitude, acts of God if you prefer, will always be answered by the compassion of those that survive, and by those organizations and governments that can help.
There are factors to implementing the OsumPODS concept that would be nice to ignore, such as dealing with current building codes and zoning, and possible lawsuits from threatened industries. There are certain policies, ideologies and practices in business and government worldwide that must be rethought and changed to help implement the overall goals. Allowing structures to be disconnected from utilities and municipalities, and building stronger and longer lasting structures that won’t be forced to purchase insurance will require much effort and perhaps some sacrifice since many time-valued practices and ideologies will go by the wayside. Perhaps the biggest battle will come from investors as numerous entities and products become obsolete.
Being able to choose to not purchase insurance, which is generally not an option now, would be a way of turning non-productive capital into productive capital and real assets. In reality, building IIPS is buying insurance that’s real, not some blind promise. There is a sad reality to purchasing insurance; once you quit paying premiums you are no longer insured, and if you never received a benefit you threw away money you could have used elsewhere. If you were honest and didn’t defraud the insurance company, you have footed the bill for those that do. In building IIPS, you are insured forever without spending a penny over the cost of construction. You also won’t be putting money into a nonproductive system that is highly susceptible to fraud and greed.
Edith M. Baker (1885-1978) — Leader in Medical Social Work
Edith M. Baker was the first medical consultant for the U.S. Children's Bureau. She was a leader in medical social work as well as in the American Association of Medical Social Workers, for which she served as president from 1929 to 1931 and as the first chairman of the association's Committee on Medical Care, which was assigned to work with federal agencies to address social problems in health programs. As chairman, she visited the directors of the new health and welfare programs that were a part of the Social Security Acts of the mid-1930s. Her aim was to promote the inclusion of social work staff at high levels.
As a result of Baker’s visits to the U.S. Children’s Bureau, the Chief, Dr. Martha Eliot, challenged Ms. Baker to take leave from her position in St. Louis and come to the Children’s Bureau for six months to put her recommendations into effect. Edith Baker accepted this challenge and did not leave the U.S. Children’s Bureau until her mandatory retirement at age 70, in the early 1960s.
Edith Baker was born in Baltimore, Maryland. She earned a certificate from Simmons College School of Social Work in Boston and did her field work placement at Massachusetts General Hospital where she later was employed as a social worker. She then became the director of the Social Services Department at Washington University Medical School in St. Louis, where she stayed until joining the U.S. Children’s Bureau.
Following her position with the U.S. Children’s Bureau, she became Chief Social Worker in the maternal and child health services at the District of Columbia Health Department. She lived in the District of Columbia until her death. Baker’s personal papers and other materials about her life are available at the Schlesinger Library on the History of Women in America, Radcliffe College, in Cambridge, Massachusetts.
Originally Published: NASW Foundation
How to Cite This Article (APA Format): NASW Foundation. (1995). Edith M. Baker (1885-1978) – Leader in medical social work. Social Welfare History Project. Retrieved [date accessed] from http://socialwelfare.library.vcu.edu/people/baker-edith-m/
A virulent virus that is devastating rabbit populations in Europe could be released deliberately on a remote Australian island next year. The virus causes a condition known as rabbit haemorrhagic disease (RHD) which, contrary to its name, causes blood to clot. The virus attacks the liver, heart muscle and lungs of the rabbit.
The planned release is part of an experiment to control rabbits in Australia and New Zealand.
A recommendation to release the virus will be made to the New Zealand and Australian governments following a meeting last week at a high-security laboratory near Melbourne, where the virus has been tested over the past two years. But scientists will have to convince quarantine officials that the benefits of releasing the virus outweigh the risk of it spreading to other animals or to the mainland.
Macquarie Island, between Tasmania and Antarctica, has been suggested as a site for the release ...
Great as is the delay and difficulty placed in the way of the development of the immense natural resources of West Africa by the labour problem, there is another cause of delay to this development greater and more terrible by far — namely, the deadliness of the climate. “Nothing hinders a man, Miss Kingsley, half so much as dying,” a friend said to me the other day, after nearly putting his opinion to a practical test. Other parts of the world have more sensational outbreaks of death from epidemics of yellow fever and cholera, but there is no other region in the world that can match West Africa for the steady kill, kill, kill that its malaria works on the white men who come under its influence.
Malaria you will hear glibly talked of; but what malaria means and consists of you will find few men ready to attempt to tell you, and these few by no means of a tale. It is very strange that this terrible form of disease has not attracted more scientific investigators, considering the enormous mortality it causes throughout the tropics and sub-tropics. A few years since, when the peculiar microbes of everything from measles to miracles were being “isolated,” several bacteriologists isolated the malarial microbe, only unfortunately they did not all isolate the same one. A resume of the various claims of these microbes is impossible here, and whether one of them was the true cause, or whether they all have an equal claim to this position, is not yet clear; for malaria, as far as I have seen or read of it seems to be not so much one distinct form of fever as a group of fevers — a genus, not a species. Many things point to this being the case; particularly the different forms so called malarial poisoning takes in different localities. This subject may be also subdivided and complicated by going into the controversy as to whether yellow fever is endemic on the West Coast or not. That it has occurred there from time to time there can be no question: at Fernando Po in 1862 and 1866, in Senegal pretty frequently; and at least one epidemic at Bonny was true yellow fever. But in the case of each of these outbreaks it is said to have been imported from South America, into Fernando Po, by ships from Havana, and into Bonny by a ship which had on her previous run been down the South American ports with a cargo of mules. The litter belonging to this mule cargo was not cleared out of her until she got into Bonny, when it was thrown overside into the river, and then the yellow fever broke out. But, on the other hand, South America taxes West Africa — the Guinea Coast — with having first sent out yellow fever in the cargoes of slaves. 
This certainly is a strange statement, because the African native rarely has malarial fever severely — he has it, and you are often informed So-and-so has got yellow fever, but he does not often die of it, merely is truly wretched and sick for a day or so, and then recovers. 43
Regarding the haematuria there is also controversy. A very experienced and excellent authority doubts whether this is entirely a malarial fever, or whether it is not, in some cases at any rate, brought on by over-doses of quinine, and Dr. Plehn asserts, and his assertions are heavily backed up by his great success in treating this fever, that quinine has a very bad influence when the characteristic symptoms have declared themselves, and that it should not be given. I hesitate to advise this, because I fear to induce any one to abandon quinine, which is the great weapon against malaria, and not from any want of faith in Dr. Plehn, for he has studied malarial fevers in Cameroon with the greatest energy and devotion, bringing to bear on the subject a sound German mind trained in a German way, and than this, for such subjects, no better thing exists. His brother, also a doctor, was stationed in Cameroon before him, and is now in the German East African possessions, similarly working hard, and when these two shall publish the result of their conjoint investigations, we shall have the most important contribution to our knowledge of malaria that has ever appeared. It is impossible to over-rate the importance of such work as this to West Africa, for the man who will make West Africa pay will be the scientific man who gives us something more powerful against malaria than quinine. It is too much to hope that medical men out at work on the Coast, doctoring day and night, and not only obliged to doctor, but to nurse their white patients, with the balance of their time taken up by giving bills of health to steamers, wrestling with the varied and awful sanitary problems presented by the native town, etc., can have sufficient time or life left in them to carry on series of experiments and of cultures; but they can and do supply to the man in the laboratory at home grand material for him to carry the thing through; meanwhile we wait for that man and do the best we can.
The net results of laboratory investigation, according to the French doctors, are that the mycetozoic malarial bacillus, the microbe of paludism, is amoeboid in its movements, acting on the red corpuscles, leaving nothing of them but the dark pigment found in the skin and organs of malarial subjects. 44 The German doctors make a practice of microscopic examinations of the blood of a patient, saying that the microbes appear at the commencement of an attack of fever, increase in quantity as the fever increases, and decrease as it decreases, and from these investigations they are able to judge fairly accurately how many remissions may be expected; in fact, to judge of the severity of the case, which, taken with the knowledge that quinine only affects malarial microbes at a certain stage of their existence, is helpful in treatment.
There is, I may remark, a very peculiar point regarding haematuric disease, the most deadly form of West Coast fever. This disease, so far as we know, has always been present on the South–West Coast, at Loando, the Lower Congo and Gaboon, but it is said not to have appeared in the Rivers until 1881, and then to have spread along the West Coast. My learned friend, Dr. Plehn, doubts this, and says people were less observant in those days; but the symptoms of this fever are so distinct that I must think it almost totally impossible for it not to have been differentiated from the usual remittent or intermittent by the old West Coasters if it had occurred there in former times with anything like the frequency it does now; but we will leave these theoretical and technical considerations and turn to the practical side of the question.
You will always find lots of people ready to give advice on fever, particularly how to avoid getting it, and you will find the most dogmatic of these are people who have been singularly unlucky in the matter, or people who know nothing of local conditions. These latter are the most trying of all to deal with. They tell you, truly enough no doubt, that the malaria is in the air, in the exhalations from the ground, which are greatest about sunrise and sunset, and in the drinking water, and that you must avoid chill, excessive mental and bodily exertion, that you must never get anxious, or excited, or lose your temper. Now there is only one — the drinking water — of this list that you can avoid, for, owing to the great variety and rapid growth of bacteria encouraged by the tropical temperature, and the aqueous saturation of the atmosphere from the heavy rainfall, and the great extent of swamp, etc., it is practically impossible to destroy them in the air to a satisfactory extent. I was presented by scientific friends, when I first went to the West Coast, with two devices supposed to do this. One was a lamp which you burnt some chemical in; it certainly made a smell that nothing could live with — but then I am not nothing, and there are enough smells on the Coast now. I gave it up after the first half-hour. The other device was a muzzle, a respirator, I should say. Well! all I have got to say about that is that you need be a better-looking person than I am to wear a thing like that without causing panic in a district. Then orders to avoid the night air are still more difficult to obey — may I ask how you are to do without air from 6.30 P.M. to 6.30 A.M.? or what other air there is but night air, heavy with malarious exhalations, available then?
The drinking water you have a better chance with, as I will presently state; chill you cannot avoid. When you are at work on the Coast, even with the greatest care, the sudden fall of temperature that occurs after a tornado coming at the end of a stewing-hot day, is sure to tell on any one, and as for the orders regarding temper neither the natives, nor the country, nor the trade, help you in the least. But still you must remember that although it is impossible to fully carry out these orders, you can do a good deal towards doing so, and preventive measures are the great thing, for it is better to escape fever altogether, or to get off with a light touch of it, than to make a sensational recovery from Yellow Jack himself.
There is little doubt that a certain make of man has the best chance of surviving the Coast climate — an energetic, spare, nervous but light-hearted creature, capable of enjoying whatever there may be to enjoy, and incapable of dwelling on discomforts or worries. It is quite possible for a person of this sort to live, and work hard on the Coast for a considerable period, possibly with better health than he would have in England. The full-blooded, corpulent and vigorous should avoid West Africa like the plague. One after another, men and women, who looked, as the saying goes, as if you could take a lease of their lives, I have seen come out and die, and it gives one a sense of horror when they arrive at your West Coast station, for you feel a sort of accessory before the fact to murder, but what can you do except get yourself laughed at as a croaker, and attend the funeral?
The best ways of avoiding the danger of the night air are: to have your evening meal about 6.30 or 7 — 8 is too late; to sleep under a mosquito curtain whether there are mosquitoes in your district or not; and to have a meal before starting out in the morning — a good hot cup of tea or coffee and bread and butter, if you can get it; if not, something left from last night’s supper, or even aguma. Regarding meals, of course we come to the vexed question of stimulants — all the evidence is in favour of alcohol, of a proper sort, taken at proper times, and in proper quantities, being extremely valuable. Take the case of the missionaries, who are almost all teetotalers: they are young men and women who have to pass a medical examination before coming out, and whose lives on the Coast are far easier than those of other classes of white men, yet the mortality among them is far heavier than in any other class.
Mr. Stanley says that wine is the best form of stimulant, but that it should not be taken before the evening meal. Certainly on the South–West Coast, where a heavy, but sound, red wine imported from Portugal is the common drink, the mortality is less than on the West Coast. Beer has had what one might call a thorough trial in Cameroon since the German occupation and is held by authorities to be the cause in part of the number of cases of haematuric fever in that river being greater than in other districts. But this subject requires scientific comparative observation on various parts of the Coast, for Cameroons is at the beginning of the South–West Coast, whereon the percentage of cases of haematuric to those of intermittent and remittent fevers is far higher than on the West Coast.
A comparative study of the diseases of the western division of the continent would, I should say, repay a scientific doctor, if he survived. The material he would have to deal with would be enormous, and in addition to the history of haematuric he would be confronted with the problem of the form of fever which seems to be a recent addition to West African afflictions, the so-called typhoid malaria, which of late years has come into the Rivers, and apparently come to stay. This fever is, I may remark, practically unknown at present in the South–West Coast regions where the “sun for garbage” plan is adhered to. At present the treatment of all white man’s diseases on the Coast practically consists in the treatment of malaria, because whatever disease a person gets hold of takes on a malarial type which masks its true nature. Why, I knew a gentleman who had as fine an attack of the smallpox as any one would not wish to have, and who for days behaved as if he had remittent, and then burst out into the characteristic eruption; and only got all his earthly possessions burnt, and no end of carbolic acid dressings for his pains.
I do not suppose this does much harm, as the malaria is the main thing that wants curing; unless Dr. Plehn is right and quinine is bad in haematuria. His success in dealing with this fever seems to support his opinion; and the French doctors on the Coast, who dose it heavily with quinine, have certainly a very heavy percentage of mortality among their patients with the haematuric, although in the other forms of malarial fever they very rarely lose a patient.
But to return to those preventive measures, and having done what we can with the air, we will turn our attention to the drinking water, for in addition to malarial microbes the drinking and washing water of West Africa is liable to contain dermazoic and entozoic organisms, and if you don’t take care you will get from it into your anatomy Tinea versicolor, Tinea decalvans, Tinea circinata, Tinea sycosis, Tinea favosa, or some other member of that wretched family, let alone being nearly certain to import Trichocephalus dispar, Ascaris lumbricoides, Oxyuris vermicularis, and eight varieties of nematodes, each of them with an awful name of its own, and unpleasant consequences to you, and, lastly, a peculiar abomination, a Filaria. This is not, as its euphonious name may lead you to suppose, a fern, but a worm which gets into the white of the eye and leads there a lively existence, causing distressing itching, throbbing and pricking sensations, not affecting the sight until it happens to set up inflammation. I have seen the eyes of natives simply swarming with these Filariae. A curious thing about the disease is that it usually commences in one eye, and when that becomes over-populated an emigration society sets out for the other eye, travelling thither under the skin of the bridge of the nose, looking while in transit like the bridge of a pair of spectacles. A similar, but not identical, worm is fairly common on the Ogowe, and is liable to get under the epidermis of any part of the body. Like the one affecting the eye it is very active in its movements, passing rapidly about under the skin and producing terrible pricking and itching, but very trifling inflammation in those cases which I have seen.
The treatment consists of getting the thing out, and the thing to be careful of is to get it out whole, for if any part of it is left in, suppuration sets in, so even if you are personally convinced you have got it out successfully it is just as well to wash out the wound with carbolic or Condy’s fluid. The most frequent sufferers from these Filariae are the natives, but white people do get them.
Do not confuse this Filaria with the Guinea worm, Filaria medinensis, which runs up to ten and twelve feet in length, and whose habits are different. It is more sedentary, but it is in the drinking water inside small crustacea (cyclops). It appears commonly in its human host’s leg, and rapidly grows, curled round and round like a watch-spring, showing raised under the skin. The native treatment of this pest is very cautiously to open the skin over the head of the worm and secure it between a little cleft bit of bamboo and then gradually wind the rest of the affair out. Only a small portion can be wound out at a time, as the wound is very liable to inflame, and should the worm break, it is certain to inflame badly, and a terrible wound will result. You cannot wind it out by the tail because you are then, so to speak, turning its fur the wrong way, and it catches in the wound.
I should, I may remark, strongly advise any one who likes to start early on a canoe journey to see that no native member of the party has a Filaria medinensis on hand; for winding it up is always reserved for a morning job and as many other jobs are similarly reserved it makes for delay.
I know, my friends, that you one and all say that the drinking water at your particular place is of singular beauty and purity, and that you always tell the boys to filter it; but I am convinced that that water is no more to be trusted than the boys, and I am lost in amazement at people of your intelligence trusting the trio of water, boys, and filter, in the way you do. One favourite haunt of mine gets its drinking water from a cemented hole in the back yard into which drains a very strong-smelling black little swamp, which is surrounded by a ridge of sandy ground, on which are situated several groups of native houses, whose inhabitants enhance their fortunes and their drainage by taking in washing. At Fernando Po the other day I was assured as usual that the water was perfection, “beautiful spring coming down from the mountain,” etc. In the course of the afternoon affairs took me up the mountain to Basile, for the first part of the way along the course of the said stream. The first objects of interest I observed in the drinking-water supply were four natives washing themselves and their clothes; the next was the bloated body of a dead goat reposing in a pellucid pool. The path then left the course of the stream, but on arriving in the region of its source I found an interesting little colony of Spanish families which had been imported out whole, children and all, by the Government. They had a nice, neat little cemetery attached, which his excellency the doctor told me was “stocked mostly with children, who were always dying off from worms.” Good, so far, for the drinking water! and as to what that beautiful stream was soaking up when it was round corners — I did not see it, so I do not know — but I will be bound it was some abomination or another. But it’s no use talking, it’s the same all along, Sierra Leone, Grain Coast, Ivory Coast, Gold Coast, Lagos, Rivers, Cameroon, Congo Francais, Kacongo, Congo Belge, and Angola. 
When you ask your white friends how they can be so reckless about the water, which, as they know, is a decoction of the malarious earth, exposed night and day to the malarious air, they all up and say they are not; they have “got an awfully good filter, and they tell the boys,” etc., and that they themselves often put wine or spirit in the water to kill the microbes. Vanity, vanity! At each and every place I know, “men have died and worms have eaten them.” The safest way of dealing with water I know is to boil it hard for ten minutes at least, and then instantly pour it into a jar with a narrow neck, which plug up with a wad of fresh cotton-wool — not a cork; and should you object to the flat taste of boiled water, plunge into it a bit of red-hot iron, which will make it more agreeable in taste. BEFORE boiling the water you can carefully filter it if you like. A good filter is a very fine thing for clearing drinking water of hippopotami, crocodiles, water snakes, catfish, etc., and I daresay it will stop back sixty per cent. of the live or dead African natives that may be in it; but if you think it is going to stop back the microbe of marsh fever — my good sir, you are mistaken. And remember that you must give up cold water, boiled or unboiled, altogether; for if you take the boiled or filtered water and put it into one of those water-coolers, and leave it hanging exposed to night air or day on the verandah, you might just as well save yourself the trouble of boiling it at all.
Next in danger to the diseases come the remedies for them. Let the new-comer remember, in dealing with quinine, calomel, arsenic, and spirits, that they are not castor sugar nor he a glass bottle, but let him use them all — the two first fairly frequently — not waiting for an attack of fever and then ladling them into himself with a spoon. The third, arsenic — a drug much thought of by the French, who hold that if you establish an arsenic cachexia you do not get a malarial one — should not be taken except under a doctor’s orders. Spirit is undoubtedly extremely valuable when, from causes beyond your control, you have got a chill. Remember always your life hangs on quinine, and that it is most important to keep the system sensitive to it, which you do not do if you keep on pouring in heavy doses of it for nothing, and you make yourself deaf into the bargain. I have known people take sixty grains of quinine in a day for a bilious attack and turn it into a disease they only got through by the skin of their teeth; but the prophylactic action of quinine is its great one, as it only has power over malarial microbes at a certain stage of their development — the fully matured microbe it does not affect to any great degree — and therefore by taking it when in a malarious district, say, in a dose of five grains a day, you keep down the malaria which you are bound, even with every care, to get into your system. When you have got very chilled or over-tired, take an extra five grains with a little wine or spirit at any time, and when you know, by reason of aching head and limbs and a sensation of a stream of cold water down your back and an awful temper, that you are in for a fever, send for a doctor if you can. If, as generally happens, there is no doctor near to send for, take a compound calomel and colocynth pill, fifteen grains of quinine and a grain of opium, and go to bed wrapped up in the best blanket available.
When safely there take lashings of hot tea or, what is better, a hot drink made from fresh lime-juice, strong and without sugar — fresh limes are almost always to be had — if not, bottled lime-juice does well. Then, in the hot stage, don’t go fanning about, nor in the perspiring stage, for if you get a chill then you may turn a mild dose of fever into a fatal one. If, however, you keep conscientiously rolled in your blanket until the perspiring stage is well over, and stay in bed till the next morning, the chances are you will be all right, though a little shaky about the legs. You should continue the quinine, taking it in five-grain doses, up to fifteen to twenty grains a day for a week after any attack of fever, but you must omit the opium pill. The great thing in West Africa is to keep up your health to a good level, that will enable you to resist fever, and it is exceedingly difficult for most people to do this, because of the difficulty of getting exercise and good food. But do what you may it is almost certain you will get fever during a residence of more than six months on the Coast, and the chances are two to one on the Gold Coast that you will die of it. But, without precautions, you will probably have it within a fortnight of first landing, and your chances of surviving are almost nil. With precautions, in the Rivers and on the S.W. Coast your touch of fever may be a thing inferior in danger and discomfort to a bad cold in England.
Yet remember, before you elect to cast your lot in with the West Coasters, that 85 per cent. of them die of fever or return home with their health permanently wrecked. Also remember that there is no getting acclimatised to the Coast. There are, it is true, a few men out there who, although they have been resident in West Africa for years, have never had fever, but you can count them up on the fingers of one hand. There is another class who have been out for twelve months at a time, and have not had a touch of fever; these you want the fingers of your two hands to count, but no more. By far the largest class is the third, which is made up of those who have a slight dose of fever once a fortnight, and some day, apparently for no extra reason, get a heavy dose and die of it. A very considerable class is the fourth — those who die within a fortnight to a month of going ashore.
The fate of a man depends solely on his power of resisting the so-called malaria, not in his system becoming inured to it. The first class of men that I have cited have some unknown element in their constitutions that renders them immune. With the second class the power of resistance is great, and can be renewed from time to time by a spell home in a European climate. In the third class the state is that of cumulative poisoning; in the fourth of acute poisoning.
Let the new-comer who goes to the Coast take the most cheerful view of these statements and let him regard himself as preordained to be one of the two most favoured classes. Let him take every care short of getting frightened, which is as deadly as taking no care at all, and he may — I sincerely hope he will — survive; for a man who has got the grit in him to go and fight in West Africa for those things worth fighting for — duty, honour and gold — is a man whose death is a dead loss to his country.
The cargoes from West Africa truly may “wives and mithers maist despairing ca’ them lives o’ men.” Yet grievous as is the price England pays for her West African possessions, to us who know the men who risk their lives and die for them, England gets a good equivalent value for it; for she is the greatest manufacturing country in the world, and as such requires markets. Nowadays she requires them more than new colonies. A colony drains annually thousands of the most enterprising and energetic of her children from her, leaving behind them their aged and incapable relations. Moreover, a colony gradually becomes a rival manufacturing centre to the mother country, whereas West Africa will remain for hundreds of years a region that will supply the manufacturer with his raw material, and take in exchange for it his manufactured articles, giving him a good margin of profit. And the holding of our West African markets drains annually a few score of men only — only too often for ever — but the trade they carry on and develop there — a trade, according to Sir George Baden–Powell, of the annual value of nine millions sterling — enables thousands of men, women and children to remain safely in England, in comfort and pleasure, owing to the wages and profits arising from the manufacture and export of the articles used in that trade.
So I trust that those at home in England will give all honour to the men still working in West Africa, or rotting in the weed-grown, snake-infested cemeteries and the forest swamps — men whose battles have been fought out on lonely beaches far away from home and friends and often from another white man’s help, sometimes with savages, but more often with a more deadly foe, with none of the anodyne to death and danger given by the companionship of hundreds of fellow soldiers in a fight with a foe you can see, but with a foe you can see only incarnate in the dreams of your delirium, which runs as a poison in burning veins and aching brain — the dread West Coast fever. And may England never again dream of forfeiting, or playing with, the conquests won for her by those heroes of commerce, the West Coast traders; for of them, as well as of such men as Sir Gerald Portal, truly it may be said — of such is the Kingdom of England.
43 Bilious Haemoglobinuric, black water fever.
44 See also Klebs and Tommasi Crudeli, Arch. f. exp. Path., xi.; Ceci, ibid., xv.; Tommasi Crudeli, La Malaria de Rome, Paris, 1881; Nuovi Studj sulla Natura della Malaria, Rome, 1881; “Malaria and the Ancient Drainage of the Roman Hills,” Practitioner, ii., 1881; Instituzioni de anat. Path., vol. i., Turin, 1882; Marchiafava e Cuboni, Nuovi Studj sulla Natura della Malaria, Acad. dei Lincei, Jan. 2, 1881; Marchand, Virch. Arch., vol. lxxxviii.; Laveran, Nature parasitaire des Accidents d’Impaludisme, Paris, 1881; Richard, Comptes Rendus, 1881; Steinberg, Rep. Nat. Board of Health (U.S.), 1881. Malaria-krankheiten, K. Schwalbe; Berlin, 1890; Parkes, On the Issue of a Spirit Ration in the Ashantee Campaign, Churchill, 1875; Zumsden, Cyclopaedia of Medicine; Ague, Dr. M. D. O’Connell, Calcutta, 1885; Roman Fever, North, Appendix I. British Central Africa, Sir H. H. Johnstone.
This ancient Indian village in the heart of Utah's canyon country was one of the largest Anasazi communities west of the Colorado River. The site is believed to have been occupied from A.D. 1050 to 1200. The village remains largely unexcavated, but many artifacts have been uncovered and are on display in the newly remodeled museum. Anasazi State Park is in the picturesque town of Boulder on State Route 12. Group and individual picnic areas are available. There is no camping.

WHO WERE THE ANASAZI?

Anasazi is a Navajo word interpreted to mean ancient enemies, enemy ancestors or ancient ones. During the 15th and 16th centuries, the Navajo arrived in what is now the southwestern United States. Ancestors of their foe, the modern Pueblo Indians, inhabited the area prior to the Navajo. What the Anasazi called themselves, however, probably never will be known. More recently, some archaeologists adopted the term Ancestral Pueblo, which suggests common ties with modern Pueblos. Although Ancestral Pueblo is probably more accurate, archaeologists have used the term Anasazi for many decades, and it now is generally accepted. It refers to village-dwelling farmers who existed in the southern Colorado Plateau of the Four Corners region of Utah, Colorado, New Mexico, Arizona and southern Nevada between about A.D. 1 and 1300.
Twenty-eight miles northeast of Escalante on Highway 12, or thirty-five miles south of Torrey from Highway 24
Anasazi State Park, P.O. Box 1429, Boulder, UT 84716
IN THE SENATE OF THE UNITED STATES
December 9, 2009
Mrs. Boxer (for herself, Mr. Durbin, Mr. Kerry, and Mr. Casey) introduced the following bill; which was read twice and referred to the Committee on Health, Education, Labor, and Pensions
To amend the Public Health Service Act to establish an Office of Mitochondrial Disease at the National Institutes of Health, and for other purposes.
This Act may be cited as the "Brittany Wilkinson Mitochondrial Disease Research and Treatment Enhancement Act".
Findings and purpose
Congress finds the following:
Mitochondrial disease results when there is a defect that reduces the ability of the mitochondria in a cell to produce energy. As mitochondria fail to produce enough energy, the cells will cease to function properly and will eventually die. Organ systems will begin to fail, and the life of the individual is compromised or ended.
There are more than 40 mitochondrial diseases.
Mitochondrial diseases are a relatively newly diagnosed group of diseases, first recognized in the late 1960s. Diagnosis of these diseases is extremely difficult.
Mitochondrial diseases can present themselves at any age, with associated mortality rates that vary depending upon the particular disease. The most severe diseases result in progressive loss of neurological and liver function, and death within several years.
According to the National Institute of Environmental Health Sciences, half of those affected by mitochondrial disease are children, who show symptoms before age five and approximately 80 percent of whom will not survive beyond the age of 20.
Mitochondrial dysfunction is also associated with numerous other disorders, including many neurological diseases (such as Parkinson’s, Alzheimer’s, ALS, and autism), and other diseases associated with aging, diabetes, and cancer.
Mitochondrial diseases are most commonly the result of genetic mutation, either in the nuclear DNA or in the mitochondrial DNA. Some mitochondrial diseases have been attributable to environmental factors that interfere with mitochondrial function.
Researchers estimate that one in 4,000 children will develop a mitochondrial disease related to an inherited mutation by the age of 10 years, and that 1,000–2,000 children are born each year in the United States who will develop mitochondrial disease in their lifetimes. However, studies of umbilical cord blood samples show that one in 200 children are born with both normal and mutant mitochondrial DNA, and the number of children with these mutations who actually develop a disease is unknown.
There are no cures for any of the specifically identified mitochondrial diseases, nor is there a specific treatment for any of these diseases.
Improving our basic understanding of mitochondrial function and dysfunction has potential application to numerous areas of biomedical research. The National Institutes of Health has taken an increased interest in mitochondrial disease and dysfunction and has sponsored a number of activities in recent years aimed at advancing mitochondrial medicine, including incorporating research into functional variation in mitochondria in the Transformative Research Grants Initiative.
It is the purpose of this Act to promote an enhanced research effort aimed at improved understanding of mitochondrial disease and dysfunction and the development of treatments and cures for mitochondrial disease.
Enhancement of research and treatment activities related to mitochondrial disease
Mitochondrial disease research enhancement
Part A of title IV of the Public Health Service Act (42 U.S.C. 281 et seq.) is amended—
by redesignating section 404H as section 404I; and
by inserting after section 404G the following new section:
Office of Mitochondrial Disease
There is established within the Office of the Director of NIH, at the Division of Program Coordination, Planning and Strategic Initiatives, an office to be known as the Office of Mitochondrial Disease (in this section referred to as the "Office"), which shall be headed by a Director (in this section referred to as the "Director"), appointed by the Director of NIH.
Mitochondrial disease research plan
The Director shall develop, make publicly available, and implement a written plan to facilitate and coordinate research into mitochondrial disease.
The plan required under paragraph (1) shall include the following objectives:
Improving coordination of research related to mitochondrial disease among the national research institutes and between the National Institutes of Health and outside researchers.
Providing training to research scientists and clinical researchers engaged in research related to mitochondrial disease.
Conducting programs to provide information and continuing education to health care providers regarding the diagnosis of mitochondrial disease.
Ensuring relevant scientific review groups contain individuals with expertise in mitochondrial disease.
In developing the plan under paragraph (1), the Director shall consult with—
the Director of the National Cancer Institute;
the Director of the National Institute of Child Health and Human Development;
the Director of the National Institute of Environmental Health Sciences;
the Director of the National Heart, Lung, and Blood Institute;
the Director of the National Institute of Neurological Disorders and Stroke;
the Director of the National Institute of Diabetes and Digestive and Kidney Diseases;
the Director of the National Eye Institute;
the Director of the National Institute of Mental Health;
the Director of the National Institute of Arthritis and Musculoskeletal and Skin Diseases;
the Director of the National Human Genome Research Institute; and
the heads of such other institutes and offices as the Director considers appropriate.
The Director shall update the plan required under paragraph (1) on a biennial basis.
In addition to any grants otherwise awarded by the National Institutes of Health for research in mitochondrial disease, the Director may award competitive, peer-reviewed grants—
for integrated, multi-project research programs related to mitochondrial disease; and
for planning activities associated with integrated, multi-project research programs related to mitochondrial disease.
Centers of Excellence
The Director may award grants to institutions or consortiums of institutions to establish Mitochondrial Disease Centers of Excellence to promote interdisciplinary research and training related to mitochondrial disease.
Use of funds awarded
A grant awarded under paragraph (1) may be used to—
conduct basic and clinical research related to mitochondrial disease;
facilitate training programs for research scientists and health professionals seeking to engage in research related to mitochondrial disease; and
develop and disseminate programs and materials to provide continuing education to health care professionals regarding the recognition, diagnosis, and treatment of mitochondrial disease.
National registry; biorepository
The Director of the Centers for Disease Control and Prevention shall establish a national registry for the maintenance and sharing for research purposes of medical information collected from patients with mitochondrial disease.
The Director of the Centers for Disease Control and Prevention shall establish a national biorepository for the maintenance and sharing for research purposes of tissues and DNA collected from patients with mitochondrial disease.
In this section, the term "mitochondrial disease" means mitochondrial diseases, mutations, dysfunctions, and functions.
Authorization of appropriations
There is authorized to be appropriated such sums as may be necessary to carry out this section.
Development of mitochondrial disease research plan
The Director of the Office of Mitochondrial Disease shall develop and make publicly available the mitochondrial disease research plan required under section 404H(b)(1) of the Public Health Service Act, as added by subsection (a) of this section, not later than 180 days after the date of the enactment of this Act.
First, I apologize for the previous post on the History of Bronson Caves in Cinema: it is not a complete archive. However, I am currently compiling an extensive archive of screen shots that is considerably more complete. Upon its completion I intend to create an artist's book presenting the findings.
The following article appeared in American Cinematographer Magazine in June 1993 and reveals much about the early history of the caves.
THE CAVES THAT COULDN'T DIE!
(A Troglodyte’s Adventures in Bronson Caves and Brush Canyon)
BY JAN ALAN HENDERSON
Body-snatching pods used to hang out in them! Serial Superman Kirk Alyn's arch enemy The Spider Lady took up residence in them. There have been missions that required an SOS Coast Guard signal. IT tried to conquer the world from them. Killers from Space brainwashed Peter Graves in them. Adam West's Batman called them home! The loneliest Texas Ranger was bushwhacked in them! More than one Lost Horizon has been seen from them! Flash Gordon battled the great god of Tao on the Planet Mongo, and Charles (Ming the Merciless) Middleton met his on-screen demise in this Hollywood hot spot in 1936. Probably the most photographed pile of rocks on the entire planet, they stand stoically silent.
Nestled high in the Hollywood Hills, below the legendary Hollywood sign, is an indestructible landmark, Bronson Caves. Originally known as Brush Canyon, located in southern Griffith Park, it was developed by the Union Rock Company in 1907, as the Union Brick Quarry. The granite was first removed by truck, but the neighbors objected to the truck traffic, and a rail line was installed. This line ran through the main chamber of the cave to the street below, and ran during restricted hours in the morning and evening. The main cave and its two tributaries were drilled through the mountain to expedite the removal of granite from the back portion of the quarry.
The first cinematic appearance of the caves was the National Pictures serial Lightning Bryce, starring Jack Hoxie and Ann Little. Made in 1919, this Western adventure was directed by Paul C. Hurst, who also costarred as the villain, 'Powder Solvang.' It is open to speculation whether the method of ore removal (truck vs. rail) is the reason for the first appearance of the quarry in this early film. It is possible that Union Rock rented the facility to National Pictures during its conversion from rail to truck in 1918, as a means of supplementing their income during the quarry downtime. The rail tracks and trains in the quarry itself remained, and are evident in the serial.
Taking place in the 1919 Old West, Lightning Bryce could almost be classified as a horror Western. It features an ethereal mystery woman, Indians with sacred gold and crystal balls, and a visit to the Los Angeles Chinatown district, to an opium den run by Dopey Sam. Mixed in with primitive auto chases and western locales is Bronson Caves, as the stone quarry and canyon where the action in Chapters 8 to 10 takes place. Union Rock Company's equipment, outbuildings, scaffolding and conveyor belts are a major part of this serial scenery.
There are chases and captures, which result in the capture of a sinister Indian played by Steve Clement (a full-blooded Yaqui Indian, also known as Esteban Clemente or Steve Clemento), who constantly tries to rob Lightning of the sacred gold nuggets. Clement, who was billed as the world's greatest knife thrower in Vaudeville, played a unique part in the formulation of the scenario for the classic jungle thriller King Kong, in which he played the witch doctor. Clement had a real-life experience close to that of the fictional Carl Denham while looking for an assistant for his knife-throwing act. While playing the character role of Zaroff's Mongolian servant in The Most Dangerous Game, he related to screenwriter Ruth Rose the tale of a scruffily dressed young maiden in a lunchroom and an agent who refused to supply Clement with girls for his act. Steve Clement was also shot in the face by one Scarlett O'Hara (Vivien Leigh) in the classic Civil War drama Gone with the Wind.
One of the cliffhangers involves the heroine and the Indian being hung from the top of Brush Canyon, only to be saved by Lightning Bryce. Bryce quickly dynamites the canyon, which is captured explosively by cinematographer Herbert Glendon.
Glendon gives the viewer a panoramic view of Brush Canyon through a series of long shots, filmed on the highest Eastern ridge of the canyon. There are magnificent silhouette shots of the principal players exploring the caves, with cave dust and smoke adding an eerie dimension to this silent serial gem.
It is interesting to note that in 1919, there was a rock ledge above the cave openings at the rear of the canyon. This ridge was approximately 10 feet wide, and looked as if it could hold three automobiles.
Operations ceased in the quarry in the late Twenties, and all the buildings, rail trains and scaffolding were removed. The ledge above the tri-opening cave was chiseled off, and the caverns have remained as they are today.
By the early 1930's, Bronson Caves was a featured landscape in the new medium of Talkies. 1931 saw the caves play host to Nat Levine's Mascot serial unit, for the production of King of the Wild. This 'all talking serial' features a pre-Frankenstein Boris Karloff as the African sheik Mustapha. Involved with two cohorts in the murder of an Indian rajah, Karloff's sheik is surrounded by a letter written in invisible ink, a diamond field in a volcano, jungle animals, and a mysterious old man and woman. Photographed effectively by cinematographers Benjamin Kline and Edward J. Kull, this convoluted multi-genre serial is typical of Nat Levine and Mascot's serial output of the early 30's.
Levine and Company next visited the caves at the end of 1931, for The Lightning Warrior, starring canine favorite Rin-Tin-Tin. Rin-Tin-Tin, by then an elder statesdog, required a stunt double, and died in his master's arms shortly after the completion of this serial. Rinty had made silent films for Warner Brothers which kept the studio afloat in the late 1920's. The dog had been found on a French battlefield by his trainer, Lee Duncan.
A madman agitator known as the Wolfman (ten years before the Lon Chaney, Jr. classic) triggers an Indian uprising. When Jimmy Carter (played by Frankie Darro) is killed, the intrigue intensifies. Through twelve complex chapters, Rinty and his pals dodge peril at every turn. Bronson Caves plays a vital scenic role in The Lightning Warrior, as the Wolfman's lair. Cinematographer Ben Kline moved into the co-director's chair, which he shared with Armand Schaefer. The show was photographed by Ernest Miller and William Nobles (who later worked at Republic).
John Wayne, Glenn Strange, Charlie King and Eddie Parker fought their way through the caves in the railroad action serial Hurricane Express (Mascot 1932), the second of John Wayne's trio of Mascot serials. The third and most popular of these chapter plays to showcase the rugged exterior of Bronson Caves is The Three Musketeers (Mascot 1933). This Foreign Legion thriller boasts a supporting cast of Jack Mulhall, Western favorite Raymond Hatton, Francis X. Bushman, Creighton Chaney (later changed to Lon, Jr.), and Noah Beery, Jr. The Duke plays an American pilot named Tom Wayne, who rescues the Three Musketeers (Mulhall, Hatton, and Bushman) from a group of Arab terrorists. These bandits, known as the Devil's Circle, threaten the legionnaires through twelve suspense-packed episodes, photographed by Ernest Miller and Tom Gulligan. Released as a serial and a 90-minute feature version, it was re-issued in 1948 by Favorite Films as a 70-minute feature entitled Desert Command.
The fantasy film Deluge (Admiral Productions, Inc., 1933) spotlights the caves with dramatic photography by Norbert Brodine. Brodine (on loan from MGM) gives the audience a preview of the photographic possibilities of Bronson Canyon and Caves, to be realized in 1950's science fiction features - most notably, Invasion of the Body Snatchers.
The star of Deluge is its breathtaking special effects. Often, but incorrectly, credited to Willis O'Brien, these effects were the work of Ned Mann and Russell Lawson (who constructed the miniatures), and Billy N. Williams, co-cinematographer. While Deluge remains largely unseen (Englewood Video did provide a limited VHS release), one can glimpse portions of the dynamic New York destruction sequence in Republic Productions' Dick Tracy vs. Crime, Inc. (1941), S.O.S. Tidal Wave (1939), and Republic's first 'Rocketman' serial, King of the Rocketmen (1949). It was replayed in the Commando Cody, Sky Marshal of the Universe episode entitled "Nightmare Typhoon".
Two examples of effective night photography in the caves are in The Vampire Bat (Majestic, 1933) and The Monkey's Paw (RKO 1933). The Vampire Bat is a lurid tale of vampirism through scientific means. With an all-star horror cast of Lionel Atwill, Fay Wray, Melvyn Douglas, and Dwight Frye, this crude yet atmospheric thriller was shot on the sets of the Frankenstein village and castle at Universal. Dwight Frye, once again playing a lunatic - Herman Gleib, whose nocturnal bat-keeping antics make him the number one murder suspect - is chased down by the torch-wielding vigilante villagers. They corner him in Bronson Caves, which becomes an interior set that does not resemble the actual interior of the caves. It should be noted that Ira Morgan's eerie photography of the exterior/interior of the cave adds greatly to the Universal Gothic feel of this Majestic feature.
Morgan had a long career at Columbia Pictures in the 40's in Sam Katzman's serial unit. The Monkey's Paw features stunning night-for-night photography by second unit cinematographer Jack Mackenzie. This night time battle sequence was filmed in one evening in the caves and canyon on October 19, 1932, and wrapped at 5:00 in the morning. Special effects man Harry Redmond detonated the charges, which kicked up the dust in the canyon, adding to the overall effect of the photography. The last charge of the battle was detonated directly in front of the camera.
In 1934, Bronson Canyon returns in Western serial-fare. Mascot Pictures' production of Mystery Mountain starring Ken Maynard and his wonder horse Tarzan, made dual usage of the canyon and caves. A railroad camp occupied one end of the quarry, while the other end was the villain's hideout. Mystery Mountain was photographed by Mascot regulars Ernest Miller and William Nobles.
Ernest Miller and William Nobles also photographed the caves and canyon for Gene Autry's Western/science fiction/musical/fantasy serial, The Phantom Empire (Mascot 1935). This show features the futuristic city of Murania melting via Jack Coyle and Howard Lydecker's stereopticon plates. This effect utilized a 4x5 stereopticon plate with soft emulsion, heated from underneath. Phantom Empire offers an ample helping of Gene Autry music, the cornpone of Frankie Darro and the Radio Ranch Regulars, and "Smiley" Burnette's hilarious harmonica solos. The art direction and photography involving the canyon and caves are spectacular.
Soldiers riding through the canyon are photographed in much the same style as the exterior of Red Rock Canyon was, for the Universal Pictures' Flash Gordon's Trip to Mars, and Buck Rogers serials. With an Ali Baba and the Forty Thieves style trap door installed on the back main tunnel, and stunning floor to ceiling laboratory equipment inside the caverns, Phantom Empire's science fiction/musical/Western elements make this a unique serial jewel.
Condemned to Live (Invincible Pictures, 1935) is another tale dealing with vampirism which utilizes the caves and cliffs of Bronson Canyon. By inter-cutting ocean shots with those of the rock strata of Bronson Canyon, the audience is led to believe that the caves and canyon are part of a European shoreline. Condemned to Live was filmed on Universal's backlot, as was The Vampire Bat (with Bride of Frankenstein having just completed production). Condemned to Live also used the bell tower set of Lon Chaney, Sr.'s The Hunchback of Notre Dame (Universal 1923), with Ted Billings in his Tyrolean costume from The Bride of Frankenstein as the bell-ringer.
Comic adventure strips were the rage in the 30's. The Sunday and daily appearances of these cartoon features were a sure sell at the box office. Flash Gordon premiered on Sunday, January 7, 1934, in Hearst newspapers throughout the country. Distributed by King Features, Flash Gordon was created by Alex Raymond, a former Wall Street brokerage clerk turned cartoonist. Raymond simultaneously created Jungle Jim to serve as an introduction piece to the new science fiction cartoon. Flash and Jim were created as competition for early favorites Buck Rogers (created in 1929) and Tarzan (created in 1912 by Edgar Rice Burroughs). In an ironic twist of fate, Johnny Weissmuller, who originated the Talkie role of Tarzan in 1932 for MGM, ended up playing Jungle Jim for 'Jungle Sam' Katzman and Columbia's "B" picture unit.
The Jungle Jim feature Mark of the Gorilla and several other features make use of Bronson Canyon. Two years after Alex Raymond's Jungle Jim and Flash Gordon successes, Universal obtained the rights to Raymond's strip. The serial, a highly successful medium in the 30's, would be the format for the interplanetary adventures of Flash Gordon. A Flash Gordon radio show had been a success, running simultaneously with the comic strips. One of the reasons for Flash Gordon's success was a highly sex-charged story line.
The interior tunnels of Bronson Caves are among some of the most striking backgrounds for Flash's battle with two of mighty Mongo's greatest beasts. The Gocko was the first of these Herculean terrors to be encountered by Flash. Played by Glenn Strange, this monster was aided by wire riggings hooked into the ceiling of the caverns. The suit was reconstructed for the Fire Dragon in later chapters. The Caves also provided the scenery for the climactic ending of Flash Gordon, where Ming the Merciless enters the Sacred Temple of the god Tao. A false perspective was utilized in the tunnel to make the Gocko appear larger than Flash in these battle sequences. A carefully disguised small person stood in for Buster Crabbe as Flash to make these scenes seem larger than life.
Ming's soldiers traveled through the caves in often-repeated footage throughout the thirteen interplanetary episodes of Flash Gordon. The success of Flash Gordon prompted two equally successful serials, Flash Gordon's Trip to Mars (Universal 1938, presented in green tints, as were the reissues of Frankenstein, Dracula, and The Old Dark House) and Flash Gordon Conquers the Universe (Universal 1940). The success of Flash Gordon also led Republic Pictures to the comic strips. They purchased the rights to the Dick Tracy strip for $10,000. Hiring unknown bit player Ralph Byrd at $150 per week to play Chester Gould's protagonist, Republic was off and running in the serial sweepstakes of popular comic heroes.
With the box office popularity of Dick Tracy, Republic cast Byrd opposite Bela Lugosi in S.O.S. Coast Guard (Republic 1937). This well mounted episodic Coast Guard adventure featured effects by Jack Coyle, Howard Lydecker, and the new West Coast transplant, Theodore Lydecker. Most rewarding of these effects is the stereopticon plate gag, recreated with the rock walls of Brush Canyon.
Bela Lugosi's character Boroff has developed a gas that will quite literally melt anything on contact. With his mute assistant, played by serial veteran Richard Alexander, (Prince Barin from the first two Flash Gordon serials), Lugosi wreaks havoc on all who dare defy his new world order. In the serial's climactic sequences, Ralph Byrd and troops deal with Lugosi's monstrous mystery gas and save the day with only a small part of Bronson Canyon and Caves being melted in the process.
Columbia Pictures and Peter Lorre paid a visit to Brush Canyon in the seldom-seen Island of Doomed Men (Columbia 1940). Lorre plays a sadist named Steven Denel. Denel would arrange parole for an inmate, then have him shipped off to Dead Man's Island to work his secret diamond mine (Bronson Canyon). Cameraman Ben Kline's moody photography adds to the bleak desperation of this picture. Lorre leers at his wife (played by the sexy Rochelle Hudson), and gleefully flogs the hero (Robert Wilcox) by lantern light in the Canyon.
By 1940, Columbia and Republic Pictures had their own in-studio caves (exteriors and interiors). Both studios continued to use the Canyon exterior as well as the Caves interiors and exteriors.
In Chapter 5 of The Adventures of Captain Marvel, Republic revisits the Canyon and repeats the stereopticon plate melting effect of the entrance to the main tunnel. The Scorpion and his henchmen lure Captain Marvel to the back of the Cave by using a dummy of the Scorpion rigged with a loudspeaker. Marvel discovers the wire and follows it to his mannequin foe, only to find that the walls of the cave are rapidly melting around him. The Scorpion has aimed the Sacred Golden Scorpion (which is a powerful weapon with the potential of turning ordinary rock into gold) at the opening of Bronson Cave, turning the opening into molten liquid. With waves of lava about to consume him, Marvel spies a hole in the cave ceiling and springs through it, avoiding the molten destruction.
Cinematographer William Nobles and directors William Witney and John English mix interiors of the in-studio cave and exteriors of Bronson Caves to highly imaginative result. In one sequence, when the Scorpion is describing his devilish plans to his henchmen, the lighting and photography seem to give the interior studio caves an eerie golden glow. The stereopticon plate effects are again handled masterfully by Howard and Theodore Lydecker, and this effect is repeated in countless Republic serials, most notably King of the Rocketmen and Radar Men from the Moon. Shot in a mere 39 days, and released to standing-room-only crowds in March of 1941, Captain Marvel is classic serial fantasy. It may be the best sound serial ever produced!
This chapter play might well have been The Adventures of Superman. In 1940 Republic had optioned the Superman story and character, but due to legal complications with D.C. Comics, Republic ceased negotiations with D.C. and turned to Fawcett Comics, which owned Captain Marvel, Spy Smasher, and Captain America. Eight years later, Sam Katzman and Columbia's serial unit brought Superman to the screen in 15 chapters of glowing sepiatone.
While heavily relying on their in-studio caverns, the first Superman serial uses front and back cave entrances of Bronson Caves. The entrance to the Spider Lady's hideout is the front single tunnel of the Cave (in 1966, this opening served as the entrance to the bat cave in 20th Century Fox's popular Batman television program, starring Adam West and Burt Ward), while the back trio of tunnels serve as the backdrop for a mining disaster in Chapter 2, entitled Depths of the Earth. For the interior of the mine, Columbia used their studio cave interior. While cinematographer Ira H. Morgan's low angle photography enhances Bronson Caves as a mine front, there is little his photography can do to save the cheapness of the interior cave sets.
The late 1940's saw a declining movie industry, the emergence of television, and more location shooting for Bronson Caves.
The Lone Ranger had long been a popular character on radio, and its transference to the T.V. screen surprised no one. With veteran Republic player Clayton Moore assuming the title role of The Lone Ranger, and Jay Silverheels as his faithful sidekick, Tonto, this program was an instant success. Bronson Caves and Canyons provided most of the exterior scenery for the first three episodes, which were entitled The Legend of the Lone Ranger. Butch Cavendish, played by the veteran Western/horror actor Glenn Strange, ambushes a group of Texas Rangers in Brush Canyon. After the ambush, Tonto, the Lone Ranger's faithful companion, finds him wounded and nurses him back to health in Bronson Caves.
Bronson Caverns, Canyon and the surrounding area played an indispensable part in the 50's science fiction film craze. One of the early visitors to the cave was Robot Monster (Astor Pictures, 1953). This barely watchable, no-budget feature sports a monster which is basically a man in a gorilla suit with a space helmet on, and a bad case of fleas, gyrating around the back entrance of the cave, feathers blowing madly, through long sequences of spaced-out embellishments from this furry, asinine alien.
Low-budget monsters slithered in and out of Bronson Caves throughout the 50's. Among the invaders - memorable or unmemorable, depending on the viewer's perspective - were the Killers From Space, Teenage Caveman, The Cosmic Man, The Brain From Planet Arous, She Demons, Invisible Invaders, and The Return of Dracula.
Of these troglodytes from other worlds, and demons from the center of our own, Invasion of the Body Snatchers (Allied Artists, 1956) photographically captures the majesty of Bronson Caverns more than any other picture that featured the Caves. The plot is simple 50's paranoid fare. The hero and heroine are confronted by their hometown friends and family, who cultivate pods from another world. These pods are placed next to the sleeping townsfolk and produce an exact replica of each person. Once the soul integrates with this alien horticulture, the zombie-like subject becomes free of all material strife, in a state of blissful euphoria produced by a new-found plantlike immortality. Fleeing the townsfolk, the two heroes, Miles Bennell - aptly played by Kevin McCarthy - and his former sweetheart Becky Driscoll - sensually played by Dana Wynter - take refuge in a mine shaft (Bronson Caves), complete with a secret crawl space dug into the cave floor and covered over with a board walkway. The two struggle to stay awake after days without sleep. They hide beneath the false cave floor as the townspeople thunder over them.
The photography of Ellsworth Fredericks, ASC, makes this entire series of scenes horrific. Especially effective is the low-angle photography of the two protagonists, soaking wet, trying to keep still as the townsfolk run across the planks mere inches above their heads. After the townsfolk have gone, the hero, hearing music, goes to check out the Canyon, and the heroine falls asleep. When he returns, he is unaware that she has slipped into slumber and been possessed. Fredericks' intense camera work conveys McCarthy's sweat- and mud-soaked terror as he rants and raves at Wynter, who has been taken over by her pod double.
Fredericks' camera conveys, through a series of low-angle shots, the paranoia of a love lost in a matter of minutes. McCarthy's character runs hysterically into the midst of a traffic jam. He approaches one truck, pulling the canvas backing off the trailer, finding it loaded full of pods, and shortly finds himself in the psychiatric ward. Ellsworth Fredericks' photography of the Caves and this entire low budget thriller is stunning.
The Return of Dracula is another 50's B horror/thriller set in Brush Canyon. In this Gramercy Pictures effort, the Caves play a main part in establishing the atmosphere of this low-budget venture. This descendant of Dracula, expertly played by Francis Lederer, after disembarking from a local train, transplants his coffin deep in the Canyon, in the bowels of Bronson Caves. With many fog-laden coffin openings, this low-budget saga of Dracula featured a pulse-pounding musical score by composer Gerald Fried.
The number of Westerns made in Bronson Caves would be incalculable, let alone the number of television shows. The location is more often booked than not. It served as the backdrop for the conclusion of the John Wayne classic The Searchers, and was used extensively in the Western TV favorites Bonanza and Gunsmoke. The pilot for The Adventures of Superboy was shot in the canyon by Superman TV producer Whitney Ellsworth.
With its 90-plus year history, it is highly unlikely that Bronson Caves will be torn down to accommodate a mini-mall. In our ever-changing world, it's nice to know something of Hollywood history will remain until the end of time.
The Cape Canaveral Monsters, 1960
Flaming Frontier, 1962
They Saved Hitler's Brain, 1963
Flesh Gordon, 1974
Army of Darkness, 1993
- Israeli company has developed device for extracting water from air
- It says its water generator is more energy efficient than others
- Its technologies are already used by militaries in seven countries
Water. A vital nutrient, yet one that is inaccessible to many worldwide.
The World Health Organization reports that 780 million people don't have access to clean water, and 3.4 million die each year due to water-borne diseases. But an Israeli company thinks it can play a part in alleviating the crisis by producing drinking water from thin air.
Water-Gen has developed an Atmospheric Water-Generation Unit that uses its "GENius" heat exchanger to chill air and condense water vapor.
"The clean air enters our GENius heat exchanger system where it is dehumidified, the water is removed from the air and collected in a collection tank inside the unit," says co-CEO Arye Kohavi.
"From there the water is passed through an extensive water filtration system which cleans it from possible chemical and microbiological contaminations," he explains. "The clean purified water is stored in an internal water tank which is kept continuously preserved to keep it at high quality over time."
Capturing atmospheric humidity isn't a ground-breaking invention in itself -- other companies already sell atmospheric water generators for commercial and domestic use -- but Water-Gen says it has made its water generator more energy efficient than others by using the cooled air created by the unit to chill incoming air.
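As a rough illustration of why temperature and humidity conditions matter so much, the water available in a cubic meter of air can be sketched with the standard Magnus approximation. The formula and constants below are textbook values, not figures from Water-Gen:

```python
import math

def saturation_vapor_density(temp_c: float) -> float:
    """Approximate saturation water-vapor density in g/m^3 (Magnus formula)."""
    svp_hpa = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))  # vapor pressure, hPa
    # Ideal-gas conversion: rho = p / (R_v * T), with R_v = 461.5 J/(kg K)
    return (svp_hpa * 100) / (461.5 * (temp_c + 273.15)) * 1000

def water_content(temp_c: float, relative_humidity: float) -> float:
    """Actual water-vapor content in g/m^3 at a relative humidity between 0 and 1."""
    return saturation_vapor_density(temp_c) * relative_humidity

# Warm, moderately humid air (30 C, 60% RH) holds roughly 18 g of water per cubic meter:
print(round(water_content(30, 0.6), 1))
```

Even with perfect extraction, hundreds of liters a day means moving tens of thousands of cubic meters of air through the machine, which is why per-kilowatt efficiency dominates the design.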
"Several companies tried to extract water from the air," says Kohavi. "It looks simple, because air conditioning is extracting water from air. But the issue is to do it very efficiently, to produce as much water as you can per kilowatt of power consumed."
He adds: "When you're very, very efficient, it brings us to the point that it is a real solution. Water from air became actually a solution for drinking water."
The system produces 250-800 liters (65-210 gallons) of potable water a day depending on temperature and humidity conditions and Kohavi says it uses two cents' worth of electricity to produce a liter of water.
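The "two cents per liter" claim can be inverted into an implied energy efficiency, given an assumed electricity tariff. The $0.10/kWh rate below is a placeholder for illustration, not a figure from the article:

```python
COST_PER_LITER_USD = 0.02        # figure quoted by Kohavi
ELECTRICITY_USD_PER_KWH = 0.10   # assumed tariff, for illustration only

kwh_per_liter = COST_PER_LITER_USD / ELECTRICITY_USD_PER_KWH
liters_per_kwh = 1 / kwh_per_liter
print(f"Implied: {kwh_per_liter:.1f} kWh per liter, or {liters_per_kwh:.0f} liters per kWh")
```

At a cheaper or dearer tariff the implied energy use per liter scales proportionally.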
Developed primarily for the Israel Defense Forces (IDF), Water-Gen says it has already sold units to militaries in seven countries, but Kohavi is keen to stress that the general population can also benefit from the technology.
He explains: "We believe that the products can be sold to developing countries in different civilian applications. For example in India, [drinking] water for homes is not available and will also be rare in the future. The Atmospheric Water-Generation Unit can be built as a residential unit and serve as a perfect water supply solution for homes in India."
Kohavi says Water-Gen's units can produce a liter of water for 1.5 Rupees, as opposed to 15 Rupees for a liter of bottled water.
Another product Water-Gen has developed is Spring, a portable, battery-operated water purification unit. Spring can filter 180 liters (48 gallons) of water and fits into a backpack, enabling water filtration on the go.
"You can go to any lake, any place, any river, anything in the field, usually contaminated with industrial waste, or anything like that and actually filter it into the best drinking water that exists," says Kohavi.
Major Alisa Zevin, head of the Facilities and Specialized Equipment Section for the IDF, says the unit is revolutionary for them.
"This unit gives logistic independence for the forces and make us ensure that we provide the soldiers high quality water," she says.
In 2013, the IDF took Spring to the Philippines after Typhoon Haiyan devastated the island country and left 4.2 million people affected by water scarcity. The system filtered what was undrinkable water into potable water, and that is what Water-Gen hopes to accomplish elsewhere where the technology is needed.
"It's something as a Westerner you cannot understand because you have a perfect water in the pipe, but people are dying from lack of water," says Kohavi.
Although Water-Gen's developments aren't a complete solution to the water crisis, Kohavi believes the technology can do for countries that lack clean water, such as Haiti, what it has done for the Philippines: it can be used not only to filter water, but to save lives.
"They could actually bring solution, perfect solution, to the people over there," says Kohavi. "For the kids ... They can use the technology to filter water in the field. People are going days just to carry water. And all our solutions can be an alternative for that."
Economic growth in China has led to significant increases in fossil fuel consumption © stock.xchng (frédéric dupont, patator)
Per capita CO2 emissions in China reach EU levels
Global emissions of carbon dioxide (CO2) – the main cause of global warming – increased by 3% last year. In China, the world’s most populous country, average emissions of CO2 increased by 9% to 7.2 tonnes per capita, bringing China within the range of 6 to 19 tonnes per capita emissions of the major industrialised countries.
In the European Union, CO2 emissions dropped by 3% to 7.5 tonnes per capita. The United States remain one of the largest emitters of CO2, with 17.3 tonnes per capita, despite a decline due to the recession in 2008-2009, high oil prices and an increased share of natural gas.
According to the annual report ‘Trends in global CO2 emissions’, released today by the JRC and the Netherlands Environmental Assessment Agency (PBL), the top emitters contributing to the global 34 billion tonnes of CO2 in 2011 are: China (29%), the United States (16%), the European Union (11%), India (6%), the Russian Federation (5%) and Japan (4%).
With 3%, the 2011 increase in global CO2 emissions is above the past decade's average annual increase of 2.7%.
An estimated cumulative global total of 420 billion tonnes of CO2 has been emitted between 2000 and 2011 due to human activities, including deforestation. Scientific literature suggests that limiting the rise in average global temperature to 2°C above pre-industrial levels – the target internationally adopted in UN climate negotiations – is possible only if cumulative CO2 emissions in the period 2000–2050 do not exceed 1,000 to 1,500 billion tonnes. If the current global trend of increasing CO2 emissions continues, cumulative emissions will surpass this limit within the next two decades.
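The "within the next two decades" claim can be sanity-checked with a back-of-envelope projection. The sketch below assumes emissions keep growing at the decade's average 2.7% per year from the 2011 level of 34 billion tonnes, on top of the 420 billion tonnes already emitted since 2000; all inputs come from the report figures summarized above.

```python
# Project cumulative CO2 emissions forward from 2011 and find the year
# the lower 1,000 Gt bound of the 2-degree budget is crossed.
cumulative_gt = 420.0   # emitted 2000-2011 (incl. deforestation)
annual_gt = 34.0        # 2011 global emissions
GROWTH = 1.027          # past decade's average annual increase

year = 2011
while cumulative_gt < 1000.0:
    year += 1
    annual_gt *= GROWTH
    cumulative_gt += annual_gt

print(year)  # -> 2025, i.e. the mid-2020s, consistent with the claim
```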
Twitter is an online social networking and microblogging service that enables users to send and read "tweets", which are text messages limited to 140 characters. Registered users can read and post tweets, but unregistered users can only read them. Social media technology such as Twitter offers many opportunities for learning in the classroom, brings together the ability to collaborate and access worldwide resources, and creates new and interesting ways to communicate in one easily accessible place.
- There are no age restrictions for having a Twitter account
- Privacy - can be set up so that only those you allow to follow you can see your "tweets". This is useful when using Twitter for class projects.
- Style - short commentary, link sharing, simple questions and thoughts, informal
- Pace - quick, great for synchronous activities
- Opportunity to explore the world
- Allows for quick, yet poignant reflections and observations
- Can bring in the opinions and research of guest speakers and experts
- Provides live commentary on people and events
- Ideas can get lost if not tagged properly
- Limited characters which may limit thoughts and expressions
Sample Lesson Ideas
- Group project or presentation feedback
- Questions on an assignment
- Exam preparation
- Progressive collaborative writing. Students agree to take turns contributing to an account or story over a period of time
- Engagement outside of class
- Preparing for next day
- Just-in-time "quizzing" - post questions about the lesson as they are studying
- Follow an event as it unfolds
- Learning and practicing foreign languages - post questions and ask students to respond in the same language or to translate the tweet into their native language
Examples of Uses
- Bulletin board to notify students of changes in the schedule and/or assignments
- Student engagement in large lectures - In large lecture classes where student participation can be intimidating and logistically problematic, Twitter can make it easy for students to engage and discuss during class time.
- Classroom notepad - Using a Twitter hashtag, it’s easy to organize inspiration, reading, ideas, and more for the classroom to share.
- Pop quiz - Send out quick quizzes on Twitter, and have them count for bonus points in the classroom.
- Link sharing - With Twitter, students can share websites with class, making relevant link finding and sharing a classroom assignment.
- Recaps - At the end of a lecture, the instructor can summarize what has been learned in the classroom, encouraging reflection and discussion between students.
- Gathering class comments - Use class hashtags to organize comments, questions and feedback that students have used in class, while also projecting live tweets in class for discussion.
- Search tool to find information about famous people and events
- Communicate with experts- Find authors, scientists, or historians on Twitter and get connected
- Source evaluation - Students can share resources and discuss whether it’s a good or bad source of information, encouraging comments
- Gather real-world data as it happens
- Following the government - Often, local and national political figures have Twitter feeds, and students in the classroom can track their progress.
- As long as students are held accountable for their grammar, using Twitter offers a great opportunity for improving writing and punctuation.
- Reading assignment summaries - Students can build 140-character summaries based on reading assignments, forcing a focus on quality.
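Several of the uses above (classroom notepad, gathering class comments, pop quizzes) boil down to filtering a tweet stream by hashtag and respecting the 140-character limit. A hypothetical sketch in Python — the tweet texts and the `#bio101` tag are invented, and a real classroom tool would fetch tweets from the Twitter API rather than a hard-coded list:

```python
# Hypothetical sketch: organize class tweets by hashtag and check
# the 140-character limit locally. Tweets below are made up.
TWEET_LIMIT = 140

tweets = [
    "Great point about photosynthesis today #bio101",
    "Can someone re-explain the Calvin cycle? #bio101 #question",
    "Reminder: lab reports due Friday #bio101",
]

def by_hashtag(items, tag):
    """Collect tweets mentioning a given hashtag (case-insensitive)."""
    return [t for t in items if tag.lower() in t.lower()]

def fits_limit(text):
    """True if the text fits Twitter's 140-character limit."""
    return len(text) <= TWEET_LIMIT

questions = by_hashtag(tweets, "#question")
print(questions)                           # the one tweet tagged #question
print(all(fits_limit(t) for t in tweets))  # True
```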
More Information: 60 Ways to Use Twitter in the Classroom
Top Ten Uses of Twitter for Education by Steve Wheeler
How to Find Exactly What You Want in Twitter - Amazing Twitter Secrets for Educators - Part One and Part Two
How Twitter changed the world, hashtag-by-hashtag - An interesting history about how Twitter is about to become the most expensive watercooler in history. http://www.bbc.co.uk/news/technology-24802766
Twitter - its history, features, and technology
Twitter Handbook for Teachers
There are various computer applications business owners can use for assistance in daily operations. Decision support systems are one of the most useful programs. They organize business data and present it in a way that makes it easy to analyze. For example, these systems can combine sales figures from two different weeks, enabling you to analyze any changes. You are probably a business owner wondering, “what is a decision support system?” Allow this post to answer your questions. Here is a guide to decision support systems, the modern think tank.
Decision support systems gather various types of raw data that allow you to identify trends and issues. One common data type is asset information. You can access information on your current assets as well as data marts, relational data sources and data warehouses. This information is vital to making business decisions. It allows you to keep track of your finances, and acts as a measure of your financial health. You can then compare this data with sales and expense figures to make decisions depending on your company’s worth. Data retention and organization are key elements to answering “What is a decision support system?”
Sales Figure Comparisons
Comparative sales figures are one of the most important figures offered by decision support systems. These figures pull weeks of sales information and compare them side by side. Comparative analysis allows you to identify trends in business activities. It also recognizes positive or negative changes. You can use this information to consider new marketing tactics or initiate changes to your product line. You can also use it to calculate new revenue projections based on sales assumptions. Sales data and comparative features are an important part of decision support systems.
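The side-by-side comparison a DSS report automates is easy to illustrate. A minimal sketch, with invented weekly sales figures:

```python
def pct_change(old, new):
    """Week-over-week percentage change for one day's sales."""
    return (new - old) / old * 100.0

# Hypothetical daily sales for two weeks (dollars)
week1 = {"Mon": 1200, "Tue": 980, "Wed": 1100}
week2 = {"Mon": 1350, "Tue": 900, "Wed": 1210}

for day in week1:
    print(f"{day}: {week1[day]} -> {week2[day]} "
          f"({pct_change(week1[day], week2[day]):+.1f}%)")
```

Laid out this way, the Tuesday dip stands out immediately, which is exactly the kind of trend identification described above.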
Types of DSS
There are several different types of decision support systems. Each one is aimed at specific members of your company. Communication-driven systems, for example, are aimed at internal teams. They help with collaborative efforts through instant chats, client servers and online meeting systems. Data-driven DSSs are aimed primarily towards managers and product suppliers. They involve databases that are used to check for proper data incorporation. This is typically done through mainframe systems or the internet. Decision support systems come in several varieties that target different individuals. Knowing this is an important part of answering "what is a decision support system?"
Use And Availability
Decision support systems used to be available mainly through bulky server systems. With the advancements in technology, they are now available through various devices. Simple software like Style Intelligence can run on almost any computer. Some software is available on mobile devices, as well. This means that you can monitor your business information from anywhere in the world. You can analyze your data and make decisions with increased efficiency. Decision support systems are easily accessible and available on various platforms.
Support Not Replace
It is important to remember that DSS is not supposed to replace decision making analyses. Instead, it should be used as support for the traditional analysis involved in decision making. DSS helps avoid the technical implementation details of whichever decision making method you use. This allows you to focus on fundamental value judgements, instead. Before using DSS software, you should be entirely familiar with your decision making methodology and with all the options you are presented with. For example, if you need help selecting a franchising business model, you should be familiar with all the different types of business models available. Use DSS software as a support, not a replacement, in order to get the most out of it.
What is a decision support system? Well, a decision support system is a useful tool for any business owner. These systems collect important business data, such as sales figures and asset information. You can analyze this data to measure your company’s financial health and make decisions based on the results for everything from picking community service ideas to deciding on marketing campaigns. Decision support software is available on various platforms and rarely costs much money. Evaluate the attributes laid out in this post and consider using a decision support system for your business.
Photo from http://www.moneycontrol.com/sme-stepup/news/new_solutions_for_business_support_systems-1321282.html
The international community was outraged by the revelation that the United States was spying on foreign leaders, and now the United Nations has waded in. The General Assembly unanimously adopted a resolution affirming the right to privacy against unlawful surveillance in the digital age. The resolution “affirms that the same rights that people have offline must also be protected online.” This includes the right to privacy.
The final resolution dropped language that classified the large scale international and domestic interception and collection of personal communications and data as a potential human rights violation. In its place the resolution voiced concern over the “negative impact that surveillance and/or interception of communication” and personal data has on human rights.
Since this is only a General Assembly resolution, it carries no legal weight. However, it does reflect the opinion of the international community.
Even though this doesn’t have the force of law, Ryan Goodman, professor of law at New York University, argues that the resolution has some hidden teeth. While it’s quite obvious from the plain language of the document that the government should refrain from violating digital privacy, it also includes language that challenges governments to protect against privacy invasions by private actors by calling on member states to both respect and protect privacy rights.
By calling on states both to respect and protect the right to privacy, the resolution includes an expectation for member states to regulate private actors. Requiring governments to “respect” privacy rights essentially refers to negative rights — freedom from interference by the state. Nothing earth-shattering there. Requiring governments “protect” privacy rights, however, refers to positive obligations upon the state — a duty of the government to safeguard individuals from abuse by third parties. In United Nations circles, it is well understood that such a duty to safeguard includes protection from other private actors, including businesses.
It’s not just some anti-Americanism that is causing this international outcry. Recently a U.S. court ruled that the government has gone too far in its collection of domestic phone records. In the decision, U.S. District Judge Richard Leon said:
“I cannot imagine a more ‘indiscriminate’ and ‘arbitrary invasion’ than this systematic and high-tech collection and retention of personal data on virtually every citizen for purposes of querying and analyzing it without prior judicial approval,” said Leon, an appointee of President George W. Bush. “Surely, such a program infringes on ‘that degree of privacy’ that the Founders enshrined in the Fourth Amendment.”
The world has changed a lot since international human rights were enumerated. Privacy seems to be an ever more scarce commodity. But it’s good to know that the right to privacy still applies in 2013.
Photo Credit: Sebastien Wiertz via Flickr
The edge of the White Sands dune field in New Mexico transitions abruptly from sand to a dune-free area where grass grows. The wind pushes the sand around within the 400-square-kilometer dune field, so much so that the sandy dunes are said to “migrate.” But the line between dunes and vegetation has remained relatively stable for 60 years: The dunes near the edge of the field seem to stay put. A new study from Pelletier and Jerolmack reveals why.
The authors took advantage of recent advances in laser scanning technology and surveyed the area near the dunes’ edge over a 3-month period. From that data, they were able to determine how much the sand was moving. Closer to the dune-vegetation line, the amount the sand was moving decreased. Next, they used a numerical model to investigate the aerodynamics of the dunes and the force the wind exerts on the sand. They found that closer to the edge of the dune, the pressure from the wind reduced and the velocity of the displaced sand slowed.
The authors concluded that the crest of the dunes upwind from the edge shielded the sand. The improved understanding of the evolution of the dunes at White Sands may increase our understanding of dune evolution in general—perhaps even of the dunes that have been imaged on Titan and Mars. (Journal of Geophysical Research: Earth Surface, doi:10.1002/2014JF003210, 2014)
—Shannon Palus, Freelance Writer
Citation: Palus, S. (2015), Exploring how wind blows sand on dunes, Eos, 96, doi:10.1029/2015EO023949. Published on 16 February 2015.
In my latest blog article, Surface analysis techniques for medical technology devices, I mentioned that we recently invested heavily in advanced surface analysis equipment. In this article I go into more detail about what we can actually measure with our new equipment.
A scanning electron microscope for advanced analytical applications
Our new Scanning Electron Microscope (SEM) is equipped with various features. It can produce high-resolution images of surface structures, and has a very high magnification range of up to 60,000X. It can perform relative quantification of a material’s elemental composition and is also able to image polymer surfaces.
Information on material composition and topography
The microscope is equipped with two types of electron detectors; one that detects backscattered electrons (BEI) and one that detects secondary electrons (SEI). Common to both of these modes of detection is that the resulting image is generated from the intensity of the signal reaching the detector. The grayscale image represents the electron count of the detector resulting from a specific position of the electron beam on the sample surface. As a result, dark areas represent a relatively lower electron count compared to brighter areas.
Using the BEI detector, the resulting image contains a high degree of information on the material composition as more electrons are backscattered from heavier atoms at the surface compared to the amount reflected from lighter elements. The SEI detector provides images with a high degree of topographical information. This is a result of the depth, within the samples, at which these electrons are emitted. Also, the SEI detector is better suited for obtaining high quality images at larger magnification.
Low vacuum mode enables imaging of polymers
Due to the nature of the electron microscope, high quality images require good electrical conductance of the material being examined. This can cause some difficulties when imaging for example polymeric materials, as negative charge building up on the surface of the material will deflect the incoming electrons used for imaging. For such materials the microscope is equipped with a low vacuum mode. Running the microscope in this mode introduces charge carriers in the form of molecules from the ambient air. This reduces the degree of sample charging and enables the imaging of poorly conducting materials, such as polymers.
Using the SEM for determining chemical composition
Elemental analysis, using the SEM, is based on the detection of characteristic X-rays. These are generated by interactions between the incident electrons and the electrons of the atoms comprising the sample. The majority of the collision events that generate such X-rays take place below the surface of the material. As a result, the probing depth of the Energy Dispersive X-Ray Spectroscopy system (EDX) is typically a few micrometres. One of the factors affecting the probing depth is the acceleration voltage used for the analysis.
The analysis is performed by scanning the electron beam over the area of interest while collecting the complete X-ray spectrum. From the collected spectrum, the chemical composition can be calculated. The method is semi-quantitative, as the elemental composition is expressed relative to the total of the elements included in the analysis. When generating characteristic X-rays, the acceleration voltage of the incoming electrons is highly important. Some elements can be detected using a relatively low acceleration voltage, while detection of others requires a higher voltage.
Our new microscope is capable of operating with three different acceleration voltages. This enables detection of a wide range of elements, including carbon, titanium, vanadium, aluminium, chromium, cobalt and tungsten. As a result, the EDX system can e.g. be used for distinguishing between grade 4 and grade 5 titanium. Another option with the EDX system is to perform elemental mapping of surface features, resulting in an overview of the spatial location of elements.
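The "semi-quantitative" point is worth making concrete: because results are normalized over only the elements included in the analysis, every percentage shifts if an element is added or dropped. An oversimplified sketch with invented relative intensities; real EDX quantification also applies matrix corrections (e.g. ZAF), which this ignores:

```python
def normalize(intensities):
    """Express each element as a percentage of the included total."""
    total = sum(intensities.values())
    return {el: 100.0 * i / total for el, i in intensities.items()}

# Made-up relative intensities for a grade-5-titanium-like mix
result = normalize({"Ti": 440.0, "Al": 35.0, "V": 25.0})
print(result)  # -> Ti 88.0%, Al 7.0%, V 5.0%
```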
Specialised optical microscope enables measurement of surface roughness parameters
Our white light interference surface profiler is a specialised optical microscope that uses optical interference to detect the focal point on the surface of a sample. Using piezoelectronics the height position of the microscope lens can be varied with a high degree of precision, in steps of around 10 nanometres. By continuously varying the height position of the lens, while detecting the interference pattern resulting from the surface, the focal point can be determined for each position of the area being imaged. This results in a 3D topographical map of the surface. Via the equipment software, the acquired data can be treated to allow for calculating both classical 2D surface roughness parameters as well as 3D parameters.
The size of the analysed area depends on the objective lens used for the analysis. The lens also determines the resolution of the acquired 3D data set. Larger surface roughness values can be determined using an objective lens with low magnification. Lower surface roughness values require an objective lens with a higher magnification.
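A roughness parameter such as the classical Ra (and its areal analogue Sa) is essentially the mean absolute deviation of the measured heights from their mean. An illustrative sketch, not the instrument's software, with made-up heights in micrometres:

```python
def mean_abs_deviation(heights):
    """Ra/Sa-style roughness: mean absolute deviation from the mean height."""
    mean = sum(heights) / len(heights)
    return sum(abs(z - mean) for z in heights) / len(heights)

profile = [0.12, 0.05, -0.08, 0.02, -0.11]   # 2D line scan (um)
height_map = [[0.10, -0.05], [0.02, -0.07]]  # 3D height map (um)

flat = [z for row in height_map for z in row]
print(f"Ra = {mean_abs_deviation(profile):.3f} um")  # Ra = 0.076 um
print(f"Sa = {mean_abs_deviation(flat):.3f} um")     # Sa = 0.060 um
```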
X-Ray fluorescence for fast and reliable identification of materials
While our SEM/EDX system is capable of determining material compositions, this approach is relatively time consuming and sets certain limits to the size of the sample being analysed. Our new X-Ray Fluorescence analyser (XRF) is also capable of determining the elemental composition of metals but does this under ambient conditions and is, furthermore, not limited by the size of the sample being analysed. The XRF analyser utilizes the principle of X-ray fluorescence. When radiating a sample with X-rays, the sample atoms will generate characteristic X-ray emission. Similar to the EDX analysis of the SEM, the emission can be used qualitatively to both determine and quantify the elements of the material. The penetration depth of the incident X-rays is significantly larger than for the electrons used by the SEM/EDX. As a result, the XRF analysis determines the bulk composition of the material and is, thus, better suited for e.g. verifying that the correct material is being used for the production of a specific part.
Every aspect of our manufacturing process, from design to distribution, must meet strict quality standards. Not only our customers, but also vendors, suppliers, contractors, OEMs and third parties need to be sure that their products comply with medical device rules and regulations. With our new surface and material analysis equipment, we can find and minimise possible risks in the manufacturing process. We can improve and refine processes, contributing to our state-of-the-art production facilities, and ensure that we meet these strict quality standards. Having this equipment in-house, we can measure more often and with greater accuracy than before. It provides a higher product quality, and minimises the risk that a product that doesn’t meet the quality standards, is delivered to the customer.
Did you find this article interesting? Let me know by sharing it on LinkedIn! There you can find more updates and news from the medtech industry and Elos Medtech. Also read Surface analysis techniques for medical technology devices.
Medical Waste Program
The purpose of this program is to protect the general public, health care facility personnel, and solid waste management personnel from injury and exposure to pathogenic organisms in medical wastes. As mandated by the Medical Waste Management Act (Health and Safety Code, Sections 117600 - 118360), the Department of Health Services (DHS) Environmental Management Branch regulates the storage, transportation, and disposal of regulated medical waste.
The Medical Waste Management Act (MWMA), Section 117705 of the California Health and Safety Code (H&SC) considers any person whose act or process produces medical waste to be a "medical waste generator" in California (e.g. a facility or business that generates, and/or stores medical waste on-site). Medical waste generators may be either large quantity generators (generate 200 lbs./month or more), or small quantity generators (generate less than 200 lbs./month).
Medical waste is often described as any solid waste that is generated in the diagnosis, treatment, or immunization of human beings or animals, in research pertaining thereto, or in the production or testing of biologicals, including but not limited to:
- Fluid blood, fluid blood products, containers and equipment containing fluid blood, or blood from animals known to be infected with diseases known to be infectious to humans.
- Laboratory waste from human or animal specimen cultures.
- Animal parts, tissues, fluids, or carcasses suspected of being contaminated with infectious agents known to be contagious to humans.
- Cultures and stocks of infectious agents including waste from the production of bacteria, viruses, spores, discarded live attenuated vaccines, discarded animal vaccines (including Brucellosis and Contagious Ecthyma), and devices used to transfer, inoculate and mix cultures.
- Sharps waste including hypodermic needles, lancets, blades, acupuncture needles, root canal files, broken glass and syringes contaminated with bio-hazardous waste and trauma scene waste capable of cutting or piercing.
- Human or animal specimen cultures from medical and pathology laboratories.
- Human surgery specimens or tissues removed at surgery or autopsy contaminated with infectious agents.
- Tissues which have been fixed in formaldehyde or other fixatives or contaminated with chemotherapeutic agents, including gloves, towels, bags, tubing and disposable gowns.
The Act requires all medical waste generators who treat their medical waste onsite to register and obtain a permit from our Department. Large quantity medical waste generators (generate 200 lbs./month or more) who do not treat their medical waste onsite are only required to register with our Department. In addition, if you are planning to let other small medical waste generators store their medical waste at your facility, you need to apply for the Common Storage Facility Permit . Also, be aware that if you generate less than 20 pounds of medical waste per week and/or transport less than 20 pounds at one time, you need to apply for a Limited Quantity Hauling Exemption permit .
The following information will assist you in understanding your responsibilities under the law requiring Medical Waste Facilities to register and obtain permits for the storage, transfer, treatment and disposal of medical waste. Please read the enclosed information carefully before filling out the forms.
Who Must Register, Obtain a Permit, or a Hauler's Exemption
Medical waste generators or activities that are in one of the following categories:
- Large generators (200 or more lbs./month) of medical waste.
- Any generator or health care professional that treats medical waste on-site.
- Any person who operates a common storage facility.
- A transfer station operation.
- Limited Quantity Hauler Exemption - Any person generating less than 20 pounds per week and hauling less than 20 pounds of medical waste at any one time.
- Any health care professional who is licensed by the State Licensing and Certification program and whose facility is one of the types listed under Section 117995.
Who is Exempt from Registration, Permit and Exemption Requirements
- Small Quantity Generators (SQG) (less than 200 lbs./month) who do not treat waste onsite.
- SQG who use licensed hazardous waste haulers to transport medical waste offsite.
- SQG who use a common storage facility.
How to Comply
- Complete the Pre-Application Questionnaire. If your answers indicate you are not required to register as a medical waste generator or meet the hauler exemption, then complete the certification on page 4 of the Information Packet for Medical Waste Generators and return the form to our Department.
- If you are required to register as a medical waste generator, as indicated by affirmative answers to any of questions 2, 3, 4, or 5 on the Pre-Application Questionnaire (found in the Medical Waste Management Plan), then:
Medical Waste Facilities include, but are not limited to:
- Chronic Dialysis Clinics
- Physician's Offices
- Medical and Dental Offices
- Education and Research Centers
- Laboratories, Research Laboratories
- Surgery Centers
- Skilled Nursing Facilities
- Veterinary Hospitals, Veterinary Clinics
Definitions (H&SC 117600 – 118360)
A Small Quantity Generator (SQG) is a medical waste generator, other than a trauma scene waste management practitioner, that generates less than 200 pounds per month of medical waste.
A Large Quantity Generator (LQG) is a medical waste generator, other than a trauma scene waste management practitioner, that generates 200 or more pounds of medical waste in any month of a 12-month period. A permit is required from this Department if your facility is an LQG.
Treatment means any method, technique, or process designed to change the biological character or composition of any medical waste so as to eliminate its potential for causing disease, as specified in Chapter A. A permit is required from this Department for businesses and Health Care professionals who treat (sterilize) medical waste on-site .
A Common Storage Facility (CSF) means any designated accumulation area that is onsite and is used by small quantity generators otherwise operating independently for the storage of medical waste for collection by a registered hazardous waste hauler. A permit is required from this Department for businesses that provide medical waste management services which include a designated medical waste common storage location/facility .
A Limited Quantity Hauling Exemption (LQHE) is for a generator who wishes to transport his/her own medical waste to a permitted transfer station or common storage facility. The generator must generate no more than 20 pounds of waste per week and may not transport more than 20 pounds of waste at any one time. A permit is required from this Department for generators that meet this definition.
Medical Waste Management Plan or equivalent: Large and small quantity generators shall file with the Department of Environmental Resources and maintain, on-site, a certified document that identifies the wastes generated at your facility and how they are managed. A permit is required from this Department for facilities that must file and maintain a Medical Waste Management Plan.
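The thresholds in the definitions above reduce to a small amount of decision logic. The sketch below is illustrative only and is not legal advice; the function name and the wording of the returned labels are invented here. It encodes the 200 lbs./month SQG/LQG dividing line, the on-site treatment trigger, and the 20-pound LQHE limits described above.

```python
# Illustrative sketch of the generator classification described above.
# NOT legal advice; labels and function name are invented for this example.

def classify_generator(lbs_per_month, treats_onsite,
                       lbs_per_week=0.0, max_lbs_hauled=0.0,
                       wants_to_self_haul=False):
    """Rough classification under the thresholds described above."""
    status = []
    # 200 lbs./month is the SQG/LQG dividing line.
    if lbs_per_month >= 200:
        status.append("LQG: registration and permit required")
    else:
        status.append("SQG")
        if not treats_onsite:
            status.append("exempt from registration if a licensed hauler "
                          "or common storage facility is used")
    # Any on-site treatment triggers a treatment permit regardless of size.
    if treats_onsite:
        status.append("on-site treatment permit required")
    # LQHE: under 20 lbs generated per week AND under 20 lbs hauled at once.
    if wants_to_self_haul and lbs_per_week < 20 and max_lbs_hauled < 20:
        status.append("eligible to apply for a Limited Quantity Hauling Exemption")
    return status

print(classify_generator(250, treats_onsite=False))
print(classify_generator(60, treats_onsite=False, lbs_per_week=15,
                         max_lbs_hauled=10, wants_to_self_haul=True))
```

When in doubt, the Pre-Application Questionnaire mentioned below remains the authoritative route to determining your obligations.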
Should you have any questions, please contact Robert Riess at 209-525-6749 or any Hazardous Materials District Inspector at 209-525-6700.
Download an Application for a Medical Waste Management Plan (includes the LQHE, Common Storage Facility Permit, Registration and Permit for Medical Waste Generation and Treatment).
- California Dept. of Public Health (CDPH) Main Page
- CDPH Authorized Mail-Back Services
- CDPH Medical Waste Laws, Regulations, and Standards
- CDPH Medical Waste Management Program
- CDPH Hospital Pollution Prevention Program
- CDPH Permitted Off-Site Treatment Facilities
- CDPH Self-Assessment Manual for Proper management of Medical Waste
- Department of Health Services (DHS) Management of Pharmaceutical Waste
- California Code of Regulations, Title 22, Minimum Standards for Permitting Medical Waste Facilities
Q. I am having trouble differentiating between mild thalassemia and iron-deficiency anemia. I am not sure why the red cell distribution width would be lower in thalassemia than IDA. Could you please explain this and list other ways to tell them apart?
A. As iron-deficiency anemia progresses, and the patient's serum iron drops lower and lower, each successive wave of new red cells gets smaller and smaller. So there are some cells that are only mildly small and some that are really small (as you can see in the image of iron-deficiency anemia above). The red cell distribution width (RDW) is high in iron-deficiency anemia because there is a wide variation in red cell size. In mild thalassemia (alpha or beta), the red cells are strangely all the same size; there is virtually no variation, so the RDW is low. This difference in RDW is helpful when you're trying to differentiate IDA and thalassemia: if you have a microcytic, hypochromic anemia, the next thing to do is look at the RDW (or just look at the blood smear). If the RDW is low (the cells are mostly the same size), then it's probably thalassemia. If the RDW is high (the cells vary a lot in size), then it's probably iron-deficiency anemia.
Another thing to do is look at the RBC count. In IDA, the RBC count is low (there isn't enough iron around, so the bone marrow makes fewer cells). In mild thalassemia, however, the RBC count tends to be normal or even elevated. The reasons for this are unclear.
To definitively diagnose IDA, you need to do iron studies; to definitively diagnose thalassemia, you need to do hemoglobin electrophoresis. But you can get a pretty good idea by looking at the things discussed above.
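The RDW/RBC reasoning above can be summarized as a simple triage rule. The sketch below is a teaching aid, not a diagnostic tool: the cutoff values (RDW above 15%, RBC below 4.0 million/µL) and the function itself are assumptions chosen for illustration, and, as noted above, definitive diagnosis still requires iron studies or hemoglobin electrophoresis.

```python
# Teaching sketch only -- NOT a diagnostic tool. Cutoffs are illustrative
# assumptions, not validated reference ranges.

def microcytic_anemia_triage(rdw_pct, rbc_millions_per_uL):
    """Given a microcytic, hypochromic anemia, suggest which diagnosis
    the RDW and RBC count point toward."""
    high_rdw = rdw_pct > 15.0            # wide variation in red cell size
    low_rbc = rbc_millions_per_uL < 4.0  # fewer cells being made
    if high_rdw and low_rbc:
        return "favors iron-deficiency anemia (confirm with iron studies)"
    if not high_rdw and not low_rbc:
        return "favors mild thalassemia (confirm with hemoglobin electrophoresis)"
    return "indeterminate -- order iron studies and electrophoresis"

print(microcytic_anemia_triage(18.2, 3.5))  # varied sizes, few cells
print(microcytic_anemia_triage(13.0, 5.1))  # uniform size, normal/high RBC
```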
AZ Dept. of Commerce Report - Solar Cells
Single-Crystalline Cells
The oldest and most efficient type of photovoltaic cell is made from single-crystalline Silicon. It is called single-crystalline because the atoms form a nearly perfect, regular lattice – if you could see into the cell, it would look exactly the same in almost every spot. In these cells, electrons released during the photovoltaic effect have clear, unobstructed paths on which to travel.
Most silicon comes from ordinary sand and several steps are required to turn it into a crystalline solar cell. The silicon must first be separated from the oxygen with which it is chemically bound. Then it must be purified to a point where the material includes less than one non-silicon atom per billion. The resulting semiconductor grade silicon is one of the world’s purest commercial materials and has a price tag of $40 to $50 per kilogram.
The process of growing crystalline silicon begins with a vat of extremely hot, liquid silicon. A “seed” of single-crystal silicon on a long wire is placed inside the vat. Then, over the course of many hours, the liquid silicon is cooled while the seed is slowly rotated and withdrawn. As they cool, silicon atoms inside the vat bond with silicon atoms of the seed. The slower and smoother the process, the more likely the atoms are to bond in the perfect lattice structure.
When the wire is fully removed, it holds a crystal about 8 inches in diameter and 3 feet long – the size of a long salami. It is cut into wafers, 8/1000 to 10/1000 of an inch thick, with a diamond-edge blade, and much of the silicon crystal, now worth hundreds of dollars per kilogram, is turned into dust in the process. The wafers are polished, processed into cells, and mounted in modules.
More than a hundred industry and university research teams have worked to upgrade and automate the manufacture of crystalline silicon solar cells. They try to further reduce the cost of purified silicon, to develop high-speed crystal pullers and wafer-slicing techniques, and to improve the overall design of modules.
One of the main objectives of PV research, however, has been to increase the efficiency with which photovoltaic modules convert sunlight into electricity. Commercial solar modules typically turn 10 to 14 percent of the sunlight that strikes them into electricity. In the laboratory, module efficiencies of more than 20 percent have been achieved.
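Module efficiency translates directly into electrical output: output power is just the incident solar power times the conversion efficiency. A quick back-of-the-envelope sketch follows; the 1000 W/m² figure is the commonly used standard-test irradiance (an assumption here), and the function and variable names are mine.

```python
# Back-of-the-envelope check of the efficiency figures above.
# 1000 W/m^2 is the standard-test irradiance, assumed for illustration.

def module_output_watts(area_m2, efficiency, irradiance_w_per_m2=1000.0):
    """Electrical output = incident solar power x conversion efficiency."""
    return area_m2 * irradiance_w_per_m2 * efficiency

# One square meter of a typical commercial module (10-14 percent efficient):
print(module_output_watts(1.0, 0.10))  # roughly 100 W at the low end
print(module_output_watts(1.0, 0.14))  # roughly 140 W at the high end
```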
NOTE: Photovoltaic conversion efficiency is generally based on module output rather than cell output. Modules include many connections and tiny wires in which electricity is lost. Consequently, they give lower efficiencies than individual cells.
Polycrystalline Silicon Cells
Polycrystalline photovoltaic cells are exactly what the name implies – a patchwork quilt of single-crystalline silicon molecules. Connections between these molecules are random and do not form a perfect lattice structure. Polycrystalline cells are less efficient than single-crystalline cells because released electrons cannot follow clear paths.
These cells are produced by pouring hot, liquid silicon into square molds or casts. The silicon is cooled to form solid blocks, which are sliced like single-crystalline silicon.
These cells are less expensive to produce than single-crystalline cells because their manufacturing process does not require many careful hours of cooling and rotating silicon material.
The main challenge of polycrystalline cells is attaining a sufficiently high efficiency. Typically, the boundaries between crystals impede the flow of electrons, resulting in module efficiencies of only 7 to 10 percent.
Concentrator Cells
Concentrator cells employ lenses and mirrors to focus the sun’s light onto a high-efficiency, single-crystalline cell. Concentrators help gather sunlight so that a smaller-than-normal cell can produce the same amount of electricity as a standard module. Efficiencies range from 15 to 20 percent with efficiencies as high as 26 percent for a single cell.
Although they use less of the costly photovoltaic material, other elements increase their cost. Because of their lenses and mirrors, for example, concentrator cells must aim directly at the sun. A tracking system is crucial for effective operation.
Thin-Film Technologies
In the past decade, much progress has been made in developing and refining thin-film photovoltaic cells. These cells are created by depositing hot, liquid silicon or other semiconductor materials onto glass, metal or plastic.
One thin-film technology, which is already employed in many PV modules, is called "amorphous silicon". It is composed of randomly arranged atoms, forming a dense, noncrystalline material resembling glass. The silicon layer is less than a millionth of a meter (a micron) thick, requiring considerably less pure silicon than other cell types.
Researchers are working to obtain higher efficiency from this material, which lacks the ordered structure and inherent photovoltaic properties of crystalline silicon. Today's commercial efficiencies average 5 to 6 percent, but efficiencies as high as 14.5 percent have been exhibited in laboratories.
Tandem Cells
These cells are still in the developmental stage but offer great potential for the future of photovoltaics. Tandem, or multiple-junction cells, are actually several cells stacked on top of each other. Each cell layer is able to convert a different wavelength, or color, of the light spectrum into electricity.
Tandem cells have displayed efficiencies higher than 14 percent in the laboratory, and theorists predict efficiencies as high as 35 to 40 percent.
Monday, June 30, 2008
1. When seen against the skyline, leaves are always darker than the background.
2. When seen below the skyline, leaves in a natural setting are always lighter than their surroundings.
The "skyline" is the line of the top of the trees against the sky. Here's an example of Rule 1, which is not surprising.
The second rule may come as a surprise, because we always tend to think of leaves as dark silhouettes, and we tend to paint them that way. But in a natural setting, whenever you see any leaf against any background below the sky, chances are that nearly every single leaf is lighter than what is behind it.
The exceptions to Rule #1 are so rare that they are momentary and breathtaking. Here’s a shot of a Rule 1 exception, taken from a fast-moving car when the late afternoon light penetrated beneath a deck of stormclouds. The effect only lasted five minutes. It can be very exciting to break this rule, but all the conditions should be carefully observed.
The exceptions to Rule #2 (pink circle at right) happen a little more often, but usually only when leaves are seen against human interventions, like lawns, walls, or cleared areas. If you walk around in a forest or a meadow, the leaves are almost always lighter than what’s around them.
I assume that Rule #2 happens because of the light-seeking nature of leaves. They are little machines that are superb at angling for the best position to capture the most light.
The California Science Center is located near downtown Los Angeles, in a neighborhood called Exposition Park.
The California Science Center is all about hands-on science. It features both permanent exhibits and a rotating roster of intriguing traveling exhibits. Permanent exhibits include: Creative World, where you can explore the wonders of human innovation; World of Life, where you learn that all things-even single cell bacterium-perform the same life processes; and the SKETCH Foundation Gallery, where you can view real space capsules and discover how scientific principles affect the design of these objects. Traveling exhibits could have you investigating everything from the science, math, and psychology used in magic illusions to the mysteries of the human body-depending on when you go, of course! Movies at the IMAX Theater make you feel as though you're part of the action, thanks to a giant screen that's 7-stories high. And don't forget to stop by the Science Court, where kids can ride a bicycle across a one-inch cable, a mere 43 feet above the ground. Yikes!
Each summer the California Science Center hosts Hands-On Science Camp for kids up to grade 12. Campers can choose from over 25 classes such as 3, 2, 1, Blast Off!, Fantastic Physics, and Skate Science. The Science Center also hosts sleepovers in the exhibits with fun activities and an IMAX film.
You can visit the California Science Center without going to sunny Los Angeles by logging on to www.californiasciencecenter.org. Only problem is, if you visit on your computer, you won't get to pedal the high-wire bicycle!
Go to the DFTV Boards and tell us about your science center visit.
On June 14, 2016, President Obama strongly condemned Donald Trump’s xenophobic comments following the horrific massacre in Orlando.
I suspect most Americans have a general sense of the racism and xenophobia that President Obama called “a shameful part of our history”. The details and consequences of that history, however, are often forgotten. Given the “dangerous” thinking Trump and others are espousing, some of those details, especially those with regard to the fear and persecution of immigrants and refugees, bear remembering.
After Chinese immigrants and refugees arrived in the US during the 1850s and 1860s, the US House of Representatives' Committee on Education and Labor concluded that "the only purpose in society for which they are available is to perform manual labor."
As I have detailed in my book, Bilingual Public Schooling in the United States: A History of America's "Polyglot Boardinghouse," the House's committee, agreeing with others in the nation who were fearful of the newcomers, resolved that the Chinese "cannot and will not assimilate with our people, but remain unalterably aliens." Fundamentalists with regard to the holy teachings, this "dangerous . . . kind of thinking," to use President Obama's words, resulted in the Chinese Exclusion Act of 1882.
Exclusion also was the consequence of the xenophobic attitudes Americans developed toward the eastern and southern Europeans, especially the Italians, Russian Jews, and Poles.
The US Immigration Commission, drawing on the racial pseudo-science of the day, stated that many of these newcomers were biologically “prone to criminal activity and had ‘little adaptability to highly organized society.’” (The Committee’s suggestion that certain groups of people are predisposed toward criminality eerily parallels Trump’s outrageous remarks about Mexicans and Mexican-Americans last year.)
This very dangerous kind of thinking not only led to the Immigration Restriction Act of 1924 – which essentially excluded all but a handful of eastern and southern Europeans from entering the nation – but also America’s brief fascination with eugenics.
The fear of Chinese and southern and eastern European immigrants developed during times of relative peace in the US. During war, however, xenophobia historically has been almost fanatical. When the United States declared war on Germany in 1917, German immigrants (as well as German-Americans who had lived in the US for generations) were seen as domestic enemies. Even though the German-Americans were loyal to the Allies, they erroneously were suspected of plotting to poison water supplies and foodstuffs and to bomb the nation's industries and infrastructure.
The consequences of this dangerous kind of thinking were unprecedented. All things German suddenly became un-American – even "sauerkraut" was briefly renamed "liberty cabbage." On the home front, a nativist mob mentality took over, leading to the killing of a German pastor in Indiana and the lynching of a German immigrant in Illinois.
Of course, these examples of the consequences of what President Obama called a “dangerous . . . mindset” barely scratch the surface of the “shameful part[s] of our past” that the president reminded us not to repeat – a shameful past that also includes the mass killing of Native Americans, the enslavement and prejudice against African Americans, and the internment of Japanese-Americans during World War II.
Throughout US history those who were discriminated against and their millions of conscientious supporters resisted that “dangerous . . . kind of thinking.” Obama is correct: prejudiced thinking “betrays the very values America stands for,” and so, like those in the past, we must resist that “mindset.” Let not this be another moment in history that we come to regret.
This article was excerpted from: 'Trump's "Dangerous" Thinking'.
Patents provide a means for protecting an invention. The government grants a period of exclusivity for the use of the technology in exchange for a thorough description of the invention which others may use at the end of the period of exclusivity. Through this exchange, technologies are advanced by two means. First by encouraging further investment and development of the invention during the period of exclusivity and second through the dissemination of information to all interested parties. After the period of exclusivity has expired, these other interested parties can utilize the technology without a license. The patentee’s period of exclusivity is enforced through patent infringement litigation.
UNeMed seeks patent protection on behalf of UNMC for technologies which it determines have a high probability of being licensed and patented. The resulting patents and patent applications are owned by the Board of Regents of the University of Nebraska as per Regents Policy 4.4.2. To document this ownership, University of Nebraska inventors are required to sign Assignments.
Inventorship of a patent is determined by the claims and may change as the claims are altered during patent prosecution. Due to differences between the claims, the inventorship of subsequent patent applications may vary greatly from the inventorship of the earlier patent(s). Inventorship is taken very seriously by UNeMed and UNMC’s outside patent attorneys because errors in inventorship can invalidate an issued patent.
Inventorship is a matter of law and involves conception as well as certain aspects of reduction to practice. Conception can occur at a variety of levels – from the broad conception of an idea to the more specific conceptions which occur during reduction to practice. For legal purposes, it doesn’t matter at what level conception occurred – all are treated equally in terms of qualifying an individual as an inventor.
Likewise, it doesn’t matter how many claims in a patent an individual is associated with as an inventor. If there are thirty claims and an individual is only an inventor of one of those claims while another is an inventor of all thirty claims, under U.S. patent law both are treated as equal inventors.
Lastly, unlike publications, the order in which inventors are listed in a patent application does not convey any meaning. Some organizations always list inventors in alphabetical order while others choose to list inventors in order of seniority. UNeMed tends to have inventors listed in the same order in which their names were listed in the New Invention Notification (NIN).
To be patented, an invention must: 1) be novel, 2) have utility, and 3) be non-obvious. To be novel, the invention must be new. This means that the invention must be different in some way from all other ideas that have already been disclosed to the public in any form (publications, presentations, posters, dissertations, etc.). To have utility, the invention must perform some useful function and benefit society in some manner. Lastly, the invention must not be obvious to anyone knowledgeable in the area of science. For biotech and biomedical patents it is also necessary to have in vitro data supporting your invention. More and more, the United States Patent and Trademark Office (USPTO) also expects in vivo data to be included within the patent application.
The novelty and non-obviousness factors of patentability can be greatly impacted by the earlier publications of others. It may not be possible to patent an invention if many different publications mentioned each of the separate elements of the invention. A publication is relevant to the USPTO even if it was not widely disseminated to the public or if the current invention is only mentioned briefly and/or obliquely in the publication.
Earlier public disclosures of the invention by the inventors can also prohibit patenting. U.S. patent law allows for the patenting of an invention up to one year after it has been publicly disclosed. (This one year date after publication is commonly referred to as a bar date.) However, the patent laws of nearly all foreign countries prohibit patenting once a public disclosure has occurred. This immediate prohibition of patenting after a public disclosure includes International (PCT) patent applications.
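The one-year grace period means that a public disclosure starts a clock toward a bar date. A minimal sketch of that date arithmetic follows; it is illustrative only, since real bar-date determinations involve many legal subtleties, and the February 29 handling shown is just one reasonable convention, not a statement of law.

```python
# Sketch of the one-year U.S. grace-period arithmetic described above.
# Illustrative only -- real bar-date rules have edge cases; consult counsel.
from datetime import date

def us_bar_date(public_disclosure: date) -> date:
    """One year after disclosure: the last chance to seek U.S. protection."""
    try:
        return public_disclosure.replace(year=public_disclosure.year + 1)
    except ValueError:
        # A Feb. 29 disclosure has no Feb. 29 the next year; use Feb. 28.
        return public_disclosure.replace(year=public_disclosure.year + 1, day=28)

print(us_bar_date(date(2020, 3, 15)))
```

Note that, as stated above, this grace period applies only in the U.S.; in most foreign jurisdictions the right to patent is lost immediately upon public disclosure.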
UNeMed researches the patentability and marketability of UNMC inventions thoroughly before making a decision regarding whether to seek patent protection for an invention. U.S. biotech and biomedical patents typically cost between $12,000 and $22,000 apiece. UNeMed is responsible for paying these legal expenses and strives to ensure it is investing its resources in the most promising inventions.
It is important to note that patentability is only an assessment of whether protection can be procured for an invention; it is not a proper assessment of the scientific value of an invention. An invention may be a great contribution to the body of science, but not be patentable.
Anatomy of Patent Application
The United States Patent and Trademark Office’s (USPTO) regulations dictate the formatting of utility patent applications. There are four major sections of a patent application which serve different purposes and have unique formatting requirements. These sections are listed below in order of their occurrence within a utility patent application.
The specification often encompasses the bulk of a patent application and includes the following subsections (listed in order of appearance): Field of the Invention, Background of the Invention, Summary of the Invention, Brief Description of the Drawings, and Detailed Description of the Invention. The specification determines how words used within the entire patent application will be defined. If the specification doesn’t fully enable/support the contents of the claims, the patent probably will not be allowed to issue or withstand litigation. This is the area of the patent application where the invention must be described in enough detail to enable one skilled in the art to reproduce the patented technology. Generally, applicants cast as wide a net as possible around the technology within the specification – sometimes including speculations and predictions regarding future developments of the technology.
This is the most "legal" portion of the patent application. There are many federal laws, USPTO regulations, and much case law specifying how claims can be written and even what some specific words mean when used within claims. The claims can only cover material which is thoroughly described within the specification. Claims can be altered/amended by either the applicant's patent attorney or the Examiner during patent prosecution. When a patent issues, the actual patent protection obtained is limited to what is covered in the final form of the claims. For the most part, claims can only be independent (standing alone) or dependent (citing back to a previous claim). Dependent claims contain all the limitations set forth in the cited claim.
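The rule that a dependent claim contains all the limitations of the claim it cites can be pictured as a chain: to read a dependent claim in full, you walk back through everything it cites. A toy model of that structure follows; the claim text and data layout are invented purely for illustration.

```python
# Toy model of claim dependency: a dependent claim incorporates every
# limitation of the claim it cites. Claim wording here is invented.

claims = {
    1: {"depends_on": None, "limitations": ["a device comprising a sensor"]},
    2: {"depends_on": 1,    "limitations": ["wherein the sensor is optical"]},
    3: {"depends_on": 2,    "limitations": ["wherein the device is handheld"]},
}

def all_limitations(claim_no):
    """Walk the dependency chain to collect the claim's full limitation set."""
    lims = []
    while claim_no is not None:
        claim = claims[claim_no]
        lims = claim["limitations"] + lims  # cited claim's limitations first
        claim_no = claim["depends_on"]
    return lims

# Claim 3 carries its own limitation plus those of claims 2 and 1:
print(all_limitations(3))
```

This also illustrates why narrowing a dependent claim never broadens protection: every limitation up the chain still applies.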
The abstract is fairly self explanatory. It will appear on the front of the issued patent. There is a limit on how long it can be and Examiners can force applicants to alter an abstract if they believe it isn’t descriptive enough.
The United States Patent and Trademark Office (USPTO) has strict guidelines for the formatting of drawings and these pages are traditionally prepared by professional drafters who specialize in preparing drawings for filing with the USPTO. In biotech patent applications the drawings most often include figures (charts, graphs, etc.). Once an application is filed, drawings cannot be altered in any substantive manner.
Types of Patent Applications
Categories of U.S. Patents
U.S. patent law includes three distinct categories of patent applications: plant, design and utility. Plant patent applications can be sought for new asexual varieties of plants. Design patents provide protection for the appearance of an article of manufacture. Utility applications are granted for new and useful processes, machines, articles of manufacture, or compositions of matter. UNeMed primarily uses utility patents to protect UNMC’s intellectual property.
Types of U.S. Utility Applications
U.S. patent law allows utility patent applications to claim priority to other patent applications. As a result, utility patent applications are divided into several subcategories based on their immediate claim to priority, outlined below.
Regular: Describes a patent application which either doesn't claim priority back to another application or claims immediate priority to a U.S. provisional application.
Divisional: Applications filed after an Examiner has issued a Restriction Requirement (dividing the claims into separate inventions) in the immediately prior application (the parent) and the applicant desires to seek patent protection for one of the groups of claims dropped from the parent application.
Continuation: A subsequent application which contains no new subject matter.
Continuation-In-Part (CIP): A subsequent application which contains some combination of older subject matter from the parent application and new information.
Nationalized: Describes a patent application which claims immediate priority to an International (PCT) application.
U.S. Provisional Applications
Provisionals are a procedural type of patent application which cost less than other types of patent applications to file, but are never prosecuted and never issue as patents. Provisionals have a one-year life span during which a second, nonprovisional patent application must be filed to protect the technology described in the provisional patent application. There is much more flexibility in the formatting of provisional applications than with other types of patent applications and, as a result, provisionals sometimes don't resemble other types of applications. Care must be taken with the content of provisional applications because subsequent nonprovisional applications can only claim priority back to the subject matter disclosed within the provisional.
Like the U.S., most countries have established their own patent offices and patent laws. There are two major differences between U.S. patent law and the patent laws of most other countries. First, the U.S. operates under a system where patents are granted to the earliest inventor of a technology who diligently developed and sought protection for the technology (i.e. “first to invent”). Most other countries operate under a “first to file” system where patents are granted to the first to seek patent protection. Second, the U.S. provides a one year grace period to seek patent protection after the public disclosure of an invention while the majority of other countries consider all rights to seek patent protection lost immediately after a public disclosure has occurred. This immediate prohibition of patenting after a public disclosure includes International (PCT) patent applications.
Foreign patent prosecution can be costly and difficult due to the subtle differences between each country’s patent laws and the need to locate an experienced foreign associate to oversee patent prosecution with each individual patent office. Rather than immediately filing patent applications in each individual country where patent protection is desired, most organizations take advantage of International (PCT) applications.
Most countries have agreed to participate in the World Intellectual Property Organization (WIPO) and its corresponding International (aka PCT – Patent Cooperation Treaty) patent application. This system allows applicants to file an International (PCT) application with a designated receiving office (the USPTO is one of these entities) and receive some feedback regarding the invention’s patentability from that office before committing financially to filing patent applications in numerous individual countries. PCT applications never issue as patents, but do receive some prosecution. PCT applications are expensive and most entities (including UNeMed) file them judiciously.
The Life of a U.S. Patent
Inventor participation in the patenting process results in better patents. Overall, University inventors are encouraged to participate in the patenting process to the extent they are able. However, inventor participation is needed at certain points in the process, including:
- 1. Reviewing a draft of the patent application (Step 1)
- 2. Reviewing a draft list of references for inclusion in the Information Disclosure Statement (IDS) (Step 3)
- 3. Signing Declarations (Step 3)
- 4. Signing Assignments (Step 3)
- 5. Sometimes during the prosecution process, when an Examiner requests additional data, clarification, etc. (Step 4)
UNeMed regularly keeps inventors informed as patent applications progress through each stage of the patenting process. Inventors should always feel free to contact UNeMed if they have any questions regarding one of their patent applications or the patenting process in general.
The process of obtaining a U.S. patent for a biomedical or biotech invention currently lasts approximately 3-7 years. After the patent application has been filed, there will be a 2-4 year delay before an Examiner at the United States Patent and Trademark Office (USPTO) reviews the application, followed by another 1-3 years of patent prosecution. Once issued, a U.S. utility patent expires 20 years after its earliest nonprovisional priority date. The USPTO does grant patent extensions based on USPTO delays in prosecution. Below is a more detailed outline of the patenting process.
Step 1 – Drafting (estimated timeframe 2-6 weeks)
UNeMed engages a wide variety of outside patent counsel to write and prosecute patent applications on behalf of UNMC. All of these patent attorneys are experienced in patent prosecution and are experts in a specific area of science. The majority of these individuals earned doctorate degrees and conducted industrial and/or academic research in an area of science pertinent to UNMC’s current research before attending law school. The expertise and experience of these professionals is necessary to ensure UNeMed obtains the best patent protection possible for UNMC’s technologies. UNeMed closely monitors and coordinates the efforts of these patent attorneys throughout the patenting process.
Step 2 – Filing
Once a final draft is approved, the external patent attorney will file the application either immediately or within one or two days. UNeMed provides outside counsel with the final approval of the draft. Before providing this final approval, UNeMed ensures that all of the inventors’ questions regarding the draft have been addressed and that all inventors have been given ample opportunity to review a draft.
Step 3 – Waiting for a first Office Action (estimated timeframe 1-4 years)
Once a patent application is filed, there will be at least a 12 month delay before the first communication is received from an Examiner at the USPTO. Several administrative activities are attended to during this time period. Some of these activities require inventor participation such as: submitting Declarations and Assignments to the USPTO (inventors sign these documents and return them to UNeMed) and filing Information Disclosure Statements (IDSs) (inventors review a draft list of references for inclusion in the IDS). UNeMed strives to make each inventor’s participation in these activities as easy as possible.
Step 4 – Prosecution (estimated timeframe 1-3 years)
Patent prosecution is essentially a negotiation between the applicant's patent attorney and the USPTO's Examiner regarding what is patentable. The process is initiated by the Examiner issuing an official written communication. The first of these written communications is usually either a Restriction Requirement or an Office Action. Generally, Examiners include a multitude of reasons for rejecting the claims in the first Office Action. Usually, it is not until the second Office Action that we obtain a true sense of how patentable the Examiner finds the current claims.
Later communications from Examiners include additional Office Actions, Final Office Actions, Advisory Actions and Examiner Amendments.
Applicants are given a set period of time to file responses to the Examiner's communications. In these responses, applicants' patent attorneys argue that the Examiner's rejections are not appropriate. In making these arguments, patent attorneys will sometimes include amendments to the claims or clarify the scientific nature of the invention for the Examiner.
Step 5 – Post Prosecution Activities (estimated timeframe 1-6 months)
Once prosecution has ended, the Examiner will issue a Notice of Allowance and UNMC’s patent attorney will pay the USPTO’s issue fee. UNeMed sends inventors notice that the patent will issue. It’s at this stage that UNeMed determines whether any additional patent applications (divisional, continuation, continuation-in-part) are warranted.
Step 6 – Patent Issues (estimated timeframe 1-4 months after fee paid)
UNeMed sends inventors a copy of the issued patent. The original Letters Patent are stored by UNeMed on behalf of the Board of Regents of the University of Nebraska.
Step 7 – Pre Patent Expiration Activities
After a patent issues and before it expires, patent holders must pay maintenance fees to the USPTO. These maintenance fees are due 3.5, 7.5 and 11.5 years after the patent issues. If one of these fees is not paid, the USPTO classifies the patent as abandoned.
Step 8 – Patent Expiration
U.S. patents expire 20 years from the earliest nonprovisional priority filing date. The USPTO grants patent term adjustments to compensate applicants for USPTO-caused delays in the patenting process. Additionally, patent term extensions can be granted due to delays in the FDA approval process. Once a patent expires, the applicant's period of exclusivity is over and anyone can utilize the claimed invention.
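The maintenance-fee schedule and the 20-year term described above reduce to simple date arithmetic. Below is an illustrative Python sketch — the function names are made up for this example, and term adjustments, extensions, and any grace periods are deliberately ignored:

```python
from datetime import date, timedelta

# Schedule from the text: maintenance fees fall due 3.5, 7.5 and 11.5
# years after issue; a utility patent expires 20 years after its
# earliest nonprovisional priority date.
MAINTENANCE_YEARS = (3.5, 7.5, 11.5)

def maintenance_due_dates(issue_date: date) -> list[date]:
    """Approximate due dates for the three USPTO maintenance fees."""
    return [issue_date + timedelta(days=round(y * 365.25)) for y in MAINTENANCE_YEARS]

def expiration_date(priority_date: date) -> date:
    """20-year term measured from the earliest nonprovisional priority date."""
    return priority_date + timedelta(days=round(20 * 365.25))

print(expiration_date(date(2005, 3, 15)))  # → 2025-03-15
```

Using 365.25-day years is an approximation of calendar arithmetic; real deadlines come from the USPTO, not from code like this.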
What is good and bad Cholesterol?
Every human body has LDL, also called bad cholesterol, and HDL, also known as good cholesterol. Cholesterol is a waxy, fat-like substance that your body needs to function normally. It is naturally present in cell walls or membranes everywhere in the body, including the brain, nerves, muscles, skin, liver, intestines, and heart. – Health Info.
A high level of LDL or a low level of HDL may cause heart disease and stroke. An unhealthy diet can cause high cholesterol, and heredity may also be a factor. A low-cholesterol diet will improve cholesterol levels. If the low-cholesterol diet does not work to lower bad cholesterol and increase good cholesterol, consult your doctor.
How Does Cholesterol Cause Heart Disease?
Excess cholesterol builds up in the walls of arteries, causing "hardening of the arteries"; the arteries become narrowed, and blood flow to the heart is slowed or blocked. The blood carries oxygen to the heart. When blocked arteries keep enough blood and oxygen from reaching the heart, the result is chest pain. If the blood supply to a portion of the heart is completely cut off by a blockage, the result is a heart attack.
Watch your LDL, HDL and triglycerides: the total score must be below 180 mg/dl, and the score is arrived at by the equation HDL + LDL + 20% of triglycerides. It is observed that those with high blood triglycerides tend to have low HDL levels, posing a health risk. Genetic history, smoking, being overweight, and beta blockers can reduce HDL levels. LDL levels rise with heavy foods. Triglycerides are the fat in the body, and they should be kept under control. If triglyceride levels increase along with low HDL or high LDL, fatty deposits can develop in the arteries, increasing the risk of heart attack or stroke.
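The scoring rule in this article — HDL + LDL + 20% of triglycerides, with a target below 180 mg/dl — is simple enough to compute directly. A minimal sketch (the function names and sample numbers are illustrative only, and none of this is medical advice):

```python
def cholesterol_score(hdl: float, ldl: float, triglycerides: float) -> float:
    """Total score, all values in mg/dl: HDL + LDL + 20% of triglycerides."""
    return hdl + ldl + 0.2 * triglycerides

def within_target(hdl: float, ldl: float, triglycerides: float) -> bool:
    """The article's stated target: a total score below 180 mg/dl."""
    return cholesterol_score(hdl, ldl, triglycerides) < 180

# Example: HDL 50, LDL 100, triglycerides 120 → 50 + 100 + 24 = 174
print(cholesterol_score(50, 100, 120))  # → 174.0
```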
The Federal government is failing to protect children in foster care from the devastating effects of potent, psychiatric medications that alter the mind, according to a new report.
A Government Accountability Office (GAO) report says that thousands of foster children across the nation are being prescribed powerful psychiatric medications at doses that exceed the maximum levels approved by the Food and Drug Administration (FDA). Within that number is a subgroup taking five or more psychiatric drugs simultaneously despite potential safety issues. Some of the drugs are not even approved for psychiatric use by the FDA.
The report’s findings are the result of a two-year probe featuring five States: Florida, Massachusetts, Michigan, Oregon and Texas. Of the approximately 100,000 foster kids studied, investigators found that about one-third were prescribed at least one psychiatric drug.
The States spent more than $375 million for prescriptions provided through fee-for-service programs to foster and non-foster children. The report says that while the high cost does not necessarily show that doctors prescribe the drugs inappropriately for financial gain, there is no evidence that taking five or more psychiatric drugs is safe for adults or children; yet hundreds of both foster and non-foster children were prescribed such a medication regimen.
Marshall County Health Department Immunizations
Immunizations protect from dangerous diseases which can have serious complications including death. The World Health Organization has estimated that over 2.5 million deaths are averted through vaccination each year. Keeping your child's immunizations up to date helps ensure your child is protected against dangerous diseases. Immunizations are given according to the Advisory Committee on Immunization Practices (ACIP) recommended schedule.
A group of South Koreans living in Japan is pushing to build a memorial monument for Koreans killed in the atomic bombing during World War II, the South Korean consulate general in Fukuoka said on Sunday.
With the support of the South Korean government, the federation of Korean residents in Japan is seeking to build the monument inside the Nagasaki Peace Park that commemorates victims of the atomic bomb dropped on the city by the U.S. during the war.
Koreans were also killed by the bombing, mostly those who were mobilized for forced labor, but little has been done to compensate or remember them. Korea was colonized by Japan from 1910-1945 before gaining independence with the end of the war.
If successfully installed, the monument will become the second of its kind in Japan, following the first one set up in the 1970s inside a similar peace park in Hiroshima, another Japanese city struck by an atomic bomb during the war.
The city of Nagasaki is currently reviewing the plan, the consulate general said.
"During the last 68 years, almost nothing has been done about the victims. And (the latest plan) is likely to become an exemplary case for cooperation with the city of Nagasaki," an official at the consulate general said.
The official said that the monument would not contain any criticism of Japan. He said the intent was fully explained to the Japanese host city in response to some Japanese reports opposing the plan. (Yonhap News)
Many companies are already heavily reliant on advanced computer systems, which makes power protection such as a UPS essential. Also known as an uninterruptible power supply, a UPS offers far better power protection than a typical power stabilizer. However, there are things we should know before purchasing a UPS unit. A dedicated UPS system may lead to over-specification, especially if there's a gap between the critical load and the installed UPS capacity. Large UPS units can make inefficient use of floor space, but they can be treated as hot-swappable modules when combined with proper server racks. Power can then be added as overall requirements grow, without additional footprint. Modular UPS solutions can achieve 70% higher power loading, reducing both cooling and energy costs.
Another solution is a transformerless UPS, which can further improve energy efficiency. UPS systems with three-phase power can feature smart energy management and can be designed to lower energy consumption. This minimizes space requirements and generates less CO2. A transformerless UPS can reduce the physical footprint by two-thirds. There will also be cost savings on power expenditure, and significant savings can be achieved after a few years of use. Flexible growth and right-sizing are essential when we want to maximize UPS efficiency. In addition, we should consider other factors as well, including maintenance, regular inspections and post-installation support. This will ensure that UPS units can provide round-the-clock availability, especially through scheduled maintenance.
UPS systems may also cater to individual users, and some compact UPS systems can be used in homes. They are flexible and easy to maintain, offering good redundancy and efficient expansion in a very small footprint. This reduces running costs through near-unity power factor and higher operating efficiency. With energy-saving technology and scalable architecture, a modern UPS system should ensure a high level of protection, especially for critical loads. This meets the requirements for energy-efficient solutions at home. The importance of a UPS system can be demonstrated even in compact, household units. Although these systems may only provide about 15 minutes of power after a blackout, users should have enough time to save their work and shut down their computers properly. This prevents data loss and potential hardware damage caused by sudden loss of power.
A UPS has two main functions. First, it ensures steady and continuous voltage, not only during blackouts but also during brownouts, which can happen quite frequently. This ensures that we get only the highest-quality power supply. Second, it ensures continuous power even during critical, peak load using its battery backup. It would be reassuring for both business and household users to know that they have power-protection solutions that work as intended. In this case, business operations can remain uninterrupted, especially when a blackout lasts less than an hour. A UPS is essential even in developed countries where the power supply is more reliable; there are cases when electricity becomes unreliable due to natural causes such as hurricanes, or malfunctions in the power generation system.
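The 15-minute backup figure mentioned above comes down to basic energy arithmetic: runtime is roughly the battery's stored energy, discounted by inverter efficiency, divided by the load. A back-of-the-envelope sketch — the function name, sample numbers and 0.85 efficiency figure are illustrative assumptions, not specifications of any real unit:

```python
def ups_runtime_minutes(battery_wh: float, load_w: float, inverter_eff: float = 0.85) -> float:
    """Rough backup runtime in minutes: usable stored energy divided by load.

    Ignores battery aging and discharge-rate effects, so real runtimes
    are usually shorter than this estimate.
    """
    return battery_wh * inverter_eff / load_w * 60

# e.g. roughly 100 Wh of battery behind a 300 W desktop load:
print(round(ups_runtime_minutes(100, 300), 1))  # → 17.0
```

This is only a sizing aid; manufacturers' runtime charts should be trusted over an idealized formula like this one.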
Common Disease Under the Radar
Bacterial vaginosis (BV) is a little-known yet very common disease. It is an infection in the vagina caused by an abnormal growth of bacteria. It is more common than a yeast infection and can threaten a woman’s reproductive health. If untreated, BV can spread into the uterus and cause damage to the Fallopian tubes. According to the Centers for Disease Control (CDC), BV can increase a woman’s chance of contracting HIV if she is exposed to the virus, and it can increase the risk that an HIV-positive woman will pass the virus on to her partner. In addition to the increased risk of getting HIV, BV is associated with an increase in pelvic inflammatory disease after invasive procedures including hysterectomy and abortion.
Unfortunately, BV is not widely discussed, and even though being tested for it is a significant reason for getting a Pap smear, many women do not even know it exists.
One of the main symptoms of BV is a vaginal discharge that is white or gray and has a “strong fishy odor,” according to the CDC.
Dr. Patricia A. Robertson, the founding co-director of the Lesbian Health and Research Center at the University of California at San Francisco, suggests one of the reasons for women’s lack of knowledge on the subject is, “Women are [not] that comfortable discoursing about their vaginal discharge.”
Robertson says that 23 percent of lesbian couples are concordant for BV: If your girlfriend is diagnosed with it, there is a 23 percent chance you have it, and vice versa. “My own approach has been that if a lesbian is treated for BV and it recurs, [it’s] time for her partner to come in for an evaluation,” says Robertson.
In addition to the increased risk of contracting HIV, other complications associated with having BV can be startling. If a woman is pregnant and has BV, there is a significant increase in the likelihood that she will have a premature birth and that the child will have low birth weight.
Treating BV is easy—provided it is detected early. It is treatable with antibiotics, either metronidazole or clindamycin.
Though there is no specific cause of BV, several things can increase the risk. A new partner or douching can upset the balance of bacteria in the vaginal area.
What are other things that women need to know about BV? Robertson says, "That [a] persistent vaginal discharge needs to be evaluated. And that all lesbians need yearly Pap smears from the age of 21 until the age of 30." If Pap screenings have been normal three times in a row by the time a woman turns 30, she can get them every two years. Contact your gyno for more information, or visit the CDC's website, cdc.gov/std/bv.
As urban areas develop, changes occur in their landscape. Buildings, roads, and other infrastructure replace open land and vegetation. Surfaces that were once permeable and moist become impermeable and dry. These changes cause urban regions to become warmer than their rural surroundings, forming an “island” of higher temperatures in the landscape.
Heat islands occur on the surface and in the atmosphere. On a hot, sunny summer day, the sun can heat dry, exposed urban surfaces, such as roofs and pavement, to temperatures 50–90°F (27–50°C) hotter than the air, while shaded or moist surfaces—often in more rural surroundings—remain close to air temperatures. Surface urban heat islands are typically present day and night, but tend to be strongest during the day when the sun is shining.
Surface and atmospheric temperatures vary over different land use areas. Surface temperatures vary more than air temperatures during the day, but they both are fairly similar at night. The dip and spike in surface temperatures over the pond show how water maintains a fairly constant temperature day and night, due to its high heat capacity.
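The paired figures above (50–90°F ≈ 27–50°C) are temperature differences, which convert between scales with only the 5/9 factor — the 32-degree offset applies to absolute temperatures, not to deltas. A quick check (the function name is illustrative):

```python
def delta_f_to_c(delta_f: float) -> float:
    """Convert a temperature *difference* from °F to °C.

    Differences scale by 5/9 only; the 32-degree offset applies to
    absolute temperatures, not to deltas.
    """
    return delta_f * 5 / 9

print(delta_f_to_c(90))           # → 50.0
print(round(delta_f_to_c(50), 1)) # → 27.8 (the article rounds down to 27)
```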
Their beaks hanging open in a pant as their mighty wings beat to keep them aloft, the dozen sandhill cranes following scientists on a migratory journey south were clearly struggling during a heat spell in rural Kentucky the other morning.
The pilots of the ultralight aircraft leading the birds saw them stray from their usual lineup and, after flying only 17 miles, decided to make the first emergency stop since leaving a Wisconsin wildlife refuge 20 days before.
"Warm moist air doesn't have a lot of oxygen in it," said pilot Bill Lishman. "They started to drop out, and we just got them on the ground before they started to go on their own."
On Tuesday, the group safely reached the halfway point, though heat, strong headwinds and fog over the weekend slowed the birds and the crew that is guiding them as they made their way through Kentucky on the way to Florida. Two weeks after passing through Illinois, they landed about 3 miles north of the Tennessee border, just shy of 625 miles into what scientists say could be the longest human-led migration of birds ever attempted.
Lishman said that unseasonably warm weather--it was about 80 degrees Sunday morning--has complicated the migration, but the flights have otherwise gone smoothly. Except for the one bird that deserted the flock in Wisconsin, the cranes are healthy and loyally following the costumed pilots they were trained to regard not as people, but as parents.
"It's always the weather that we're worried about," said Lishman, a co-creator of the ultralight technique, which was featured in the 1996 movie "Fly Away Home."
Biologists and project crew members are closely monitoring the birds' behavior and tinkering with the trip's logistics to prepare for a more critical migration planned for next year involving a flock of the sandhill's endangered relatives, whooping cranes. This year's trial run is meant to establish a safe path for the whooping cranes, which scientists hope will constitute the continent's second migratory flock and ensure the species' survival.
Since leaving on Oct. 3, the sandhill team has learned how vulnerable its plans are to the whims of the weather--making closely spaced, emergency landing sites crucial to the migration's success.
For the initial weeks of the trip, they sailed through clear skies and encountered nothing to slow them down but a little frost.
"The first 400 miles, it was fabulous," said Joe Duff, another pilot and technique co-creator. "The birds would line up like a string of pearls off the wing."
But on Friday, gusty winds forced the pilots to turn around in Washington County, Ky., and the next day, fog kept them from leaving at all. Sunday, the group lifted off later than usual to avoid an early fog.
Flying during the hotter part of the day proved too difficult for the birds, and the pilots looked for a place to touch down early. Realizing the flock might need to land suddenly, Lishman had traveled ahead and left a note on a property owner's door. The man welcomed his surprise guests and invited them to a party he was throwing that night.
"He cooked up ribs for everybody, it was just excellent," Lishman said.
The cranes, which are secluded from human contact and kept overnight in a large pen, have behaved predictably in general, although one bird, known as No. 4, has repeatedly gotten dangerously close to the ultralight.
The crane has flown up under the ultralight's wing, which resembles that of a hang-glider, then climbed up to soar on a buffer of air in front of the wing and glide down to bump out the lead bird.
Because Duff is worried about getting gangly bird legs or necks tangled in the aircraft's wires, he has given No. 4 a few light taps with the wingtip as punishment.
"I'm trying to make it a negative experience," he said.
The trip may take longer than the 32 days originally estimated, but group leaders are hoping a tailwind and a little cool air will push them to Florida by early November.
You can tell a lot by looking at your cat's tail -- much more than just which way he's facing. Cats flick, twitch, swish and wiggle their tails to express a range of moods and emotions. Some of these movement are voluntary, but others seem to require little conscious thought.
Cat tails are more complex than meets the eye. Depending on the breed, your cat has 18 to 23 bones in his tail, which is actually an extension of his spine. He controls it with voluntary muscles at the base of his tail. The tail nerves collect in a bundle together with those from his hind legs and rear before they collect into the spinal cord. So, on the one hand, your cat's tail can be articulated with great nuance. On the other hand, it's strongly linked to other movements.
If you've ever watched your cat stalk a bird from a windowsill, you know your friendly feline is actually an accomplished hunter. You also probably know his tail sways back and forth, back and forth, and back and forth when he's in hunting mode. This movement, which starts as a voluntary thought, appears to continue as a primal brain script -- not unlike the way you continue walking as if on autopilot after you take a step or two.
While many of your cat's tail movements signify emotional responses -- a wide wiggle is agitation, a sharp twitch is anger, a raised, wavy tail is happiness -- others are more pragmatic. Your cat uses his tail as a counterweight, just like a tightrope walker uses her pole. It allows him to nimbly jump to seemingly unstable perches and navigate tight, balance beam-thick paths. Such tail movements are, quite clearly, conscious, although there's a twinge of instinct and self-preservation at play, too. After all, don't you push out your arms when you trip and fall down?
Thoughts All Folks!
Cat tails are mood rings. Different wiggles mean different things, whether it's the cat wagging the tail or the tail wagging the cat. Biologically, your cat clearly has the ability to trigger voluntary muscles and make his tail move. Still, once engaged in higher-level thinking, primitive scripts appear to run on their own momentum, or when enacted by surprise events. So does your cat have to think to wiggle his tail? It appears he can do that, but that it requires so little thought that it blurs the line between conscious and unconscious thought.
Although the Lusitano horse was not considered separate from the Andalusian breed until 1966, it possesses several distinctive features, including a lower tail, more sloping hindquarters, and a more significantly convex head. A light horse breed, the Lusitano weighs on average less than 1,500 pounds, with long legs and a short, muscular back, and the Lusitano is usually light grey in color. Modern Lusitano horses were bred to be used in Portuguese bull fights and established a reputation as intelligent horses especially adept at a variety of equestrian activities.
Some breed associations believe that the Lusitano has been used as a saddle horse for more than 5,000 years, making it the world’s oldest saddle horse. Due to the breed’s intelligence and size, it was commonly used by nobility and crusaders and helped pioneer the art of dressage.
About the author:
An award-winning television journalist, Carmen María Montiel has served as the news anchor for both English- and Spanish-speaking broadcasts. Additionally, Carmen María Montiel frequently chairs galas and fashion shows, and she attended the Haras Cup when it presented its first competition of Lusitano horses.
While Others Seek to Inject CO2, Airgas Sells It
Just one of the many suppliers of industrial and commercial carbon dioxide, Airgas, Inc. (ARG: NYSE) recently announced plans to build a new carbon dioxide plant in Houston. The press release hit news wires right along with announcements of carbon capture projects and other investments to reduce the greenhouse effect caused by too much CO2 in the atmosphere.
In one of those strange twists that make our world so interesting and vexing at the same time, we use carbon dioxide commercially even as we invest wildly to reduce CO2 emissions. An inert gas at normal temperature, carbon dioxide liquefies under high pressure. Carbon dioxide is highly versatile, making it a handy compound for a wide range of applications such as freezing food, treating alkaline water, and facilitating oil recovery at wells. It also finds its way into various products such as fire extinguishers. It is used to decaffeinate coffee and jazzes up carbonated beverages.
Airgas claims a total of eleven plants for purification and liquefaction of carbon dioxide. Much of the supply goes to Airgas’ dry ice facility also in Texas. Airgas has struck an agreement with oil and gas exploration company Denbury Resources (DNR: NYSE) to deliver raw carbon dioxide to the new Houston plant. The new plant replaces an older plant that is being shuttered this year after the principal supplier of raw carbon dioxide discontinued operations.
There are a variety of sources for carbon dioxide. Besides the CO2 that you and I respire, CO2 results from the combustion of coal or other hydrocarbons. Unfortunately, the concentration of CO2 in ambient air and in stack gases from simple combustion sources such as heaters, boilers, and furnaces is not high enough to make carbon dioxide recovery commercially feasible.
Commercially-produced carbon dioxide is principally recovered from large-scale industrial plants which produce hydrogen or ammonia. These sources typically use natural gas, coal or some other hydrocarbon for feedstock. Another carbon dioxide source is large-volume fermentation processes in which plant products are made into ethanol. Breweries producing beer from various grain products are a traditional source. Corn-to-ethanol plants have been the most rapidly growing source of feed gas for CO2 recovery. CO2 is also comingled with oil and gas deposits.
Denbury will be supplying raw carbon dioxide it brings up in its Gulf Coast gas wells. The company claims ownership in every known producing CO2 well in the Gulf Coast region. Denbury also owns CO2 producing wells in the Rocky Mountain region, where it simply re-injects the CO2 back into the geological formation. With demand growing for "injection” CO2 to facilitate extraction of oil and gas from stubborn deposits, Denbury is planning a CO2 capture facility and pipeline at Riley Ridge in the Rocky Mountain region. Denbury says it will require $70 million to complete the initial phase of the CO2 capture facilities at Riley Ridge. The company expects to capture up to 13 million cubic feet per day of CO2.
The required investment for Denbury is a drop in the bucket compared to the hundreds of millions being spent to get CO2 back into the ground. A recent forecast for CO2 prices starts at $0.75 per thousand cubic feet in 2015, and rises to approximately $4.00 per thousand cubic feet in 2030. A separate feasibility study estimated that CO2 from industrial processes or power plants can be captured and transported approximately 100 miles at costs ranging between $1 and $3.50 per thousand cubic feet. It is not hard to understand why carbon capture requires public support.
Airgas trades at 17.5 times estimated earnings for 2013 - a bit of a premium to the industrial chemicals sector. A higher than average profit margin helps set the company apart from the crowd. While Airgas is a major player, others in the industry have larger market share. Debt-to-equity of 110.0% is nearly double the industry average. After the recent run-up in the U.S. equity markets, Airgas appears fully valued. A review of recent trading patterns suggests that Airgas is headed toward $113.00. From the vantage point of the current price level, anyone considering a long position in the stock is well advised to accumulate shares judiciously.
Debra Fiakas is the Managing Director of Crystal Equity Research, an alternative research resource on small capitalization companies in selected industries.
Neither the author of the Small Cap Strategist web log, Crystal Equity Research nor its affiliates have a beneficial interest in the companies mentioned herein.
WASHINGTON _ The true cost of war is incalculable.
But if you piled up a stack of $1,000 bills to pay for the US-led invasion of Iraq that began 10 years ago this week, how high would it reach?
If it were merely the $50 billion that US officials confidently predicted at the outset, the stack would stretch about 17,500 feet, or 3.3 miles.
In fact, the stack of greenbacks bearing the image of President Grover Cleveland it would take to settle up for the second longest war in American history would be about 250 miles high, reaching farther than the International Space Station.
That is one way of picturing the new research by a Harvard Kennedy School economist on the full bill of toppling Saddam Hussein and occupying the country for more than eight years.
Professor Linda Bilmes has concluded that both the direct costs of fighting the conflict from March 2003 until the US military withdrew in 2011 and the continuing financial burden of providing health care and other benefits to millions of veterans are far higher than was estimated even a few years ago.
“The minimum that it could cost now is $4 trillion,” Bilmes told the Carnegie Endowment for International Peace in Washington on Thursday. “We have trillions of dollars that have sort of sneaked up on us.”
For example, a few years ago Bilmes and Nobel Prize winning economist Joseph Stiglitz predicted in their landmark book, “Three Trillion Dollar War,” that about 45 percent of the troops sent to fight in the conflict might require disability compensation. Now it looks like 56 percent will.
“Our estimates were far too low,” said Bilmes, who reviewed hundreds of thousands of disability claims and whose updated research will be made public on Friday. “This reflects a great deal of suffering.”
The numbers are likely to climb further, she said, noting that the peak year for supporting veterans of World War I was 1969—more than 50 years after the conflict, when surviving veterans required geriatric care.
Then there are the so-called societal costs to take into account, including the price of oil, which she believes went up from $25 a barrel in 2002—where it had been for decades—to $140 a barrel three years later in part because of the war. It is now hovering at around a $100 a barrel.
“It set off a chain of events that had far reaching consequences,” said Bilmes, who thinks the war’s financial burden on the US Treasury may have also contributed to the US financial crisis in 2007-2008.
But none of the costs, including those that could have been predicted, were actually budgeted for.
“The US has borrowed all the money,” she said.
Bilmes, who is also a member of the US Department of Labor Veterans Employment and Training Advisory Board, is putting forward a series of recommendations to policy makers to avoid some of the budgeting mistakes.
One is to establish a veterans’ trust fund “at the time we go to war” to help pay for some of the cost of taking care of the men and women who are sent to fight it. Another is to include funding to fight the war in the regular budget, not in so-called “supplemental” funding bills that do not get included in the federal government’s balance sheet.
“The US lacks any kind of system to track war costs,” she said. “By ignoring the costs [in Iraq] we made it much easier to make poor choices.”
As bad news about the poaching of the African rhino continues to come in, we (and our friends over at World Wildlife Fund) turn our attention to Indonesia, where amazing photos recently surfaced of the critically endangered Javan rhino. Similar to the slightly larger Indian rhinoceros, Javan rhinos have a pointed upper lip used for grabbing food, a horn that is small (or, on females, even nonexistent), and massive skin folds that give them a pronounced armored appearance.
The Javan rhino is one of the most threatened of WWF‘s flagship species, with an estimated population of 40-60 left in Ujung Kulon National Park in Java (less than a dozen of a Vietnamese sub-species are believed to remain in existence). The species faces increasing pressures from growing human population, poaching and disease, not to mention the threat of being wiped out completely by an eruption of the nearby Anak Krakatau volcano. All of which makes the new photographs and video captured when four Javan rhinos triggered a motion-activated camera trap all the more dramatic and timely.
WWF is currently working with the Indonesian government, the International Rhino Foundation, the Indonesian Rhino Foundation, the Asian Rhino Project, the IUCN/SSC Rhino Specialist Group and local communities to protect Javan rhinos from poaching, to monitor the population and, perhaps most importantly, to establish a second population through translocation. –Bret Love
Java 5 introduced a nice utility called java.util.Scanner, which is capable of reading input from the command line in Java. Using Scanner is a nice and clean way of retrieving user input from the console or command line. Scanner can accept an InputStream, a Reader, or simply the path of a file from which to read input. In order to read from the command line, we can pass System.in into Scanner's constructor as the source of input. Scanner offers several benefits over the classical BufferedReader approach; here are some of the benefits of using java.util.Scanner for reading input from the command line in Java:
1) Scanner supports regular expressions, which gives you the power to read only matching patterns from the input.
2) Scanner has methods like nextInt() and nextFloat() which can be used to read numeric input from the command line and use it directly in code, without Integer.parseInt() or any other parsing logic to convert a String to an Integer, or a String to a double for floating-point input.
Reading input from the command line should be among the first few things new Java programmers are taught, as it helps them write interactive programs and complete programming exercises like checking for prime numbers, finding the factorial of a number, reversing a String, etc. Once you are comfortable reading input from the command line, you can write many interactive Java applications without learning a GUI technology like Swing or AWT.
Java program to read input from command prompt in Java
Here is a sample code example of how to read input from the command line or command prompt in Java using the java.util.Scanner class:
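A minimal sketch along those lines (the class name and message wording are illustrative, not from the original listing), reading one word and one int from an input stream — pass System.in for a real command-line run:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.Scanner;

public class CommandLineInput {

    // Reads a whitespace-delimited name and an int age from the stream.
    // In a real program, pass System.in here.
    static String readNameAndAge(InputStream in) {
        Scanner scanner = new Scanner(in);
        String name = scanner.next();   // next token as a String
        int age = scanner.nextInt();    // parsed directly, no Integer.parseInt() needed
        return name + " is " + age + " years old";
    }

    public static void main(String[] args) {
        System.out.println("Enter your name and age:");
        System.out.println(readNameAndAge(System.in));
    }
}
```

Scanner splits input on whitespace by default, so typing `Alice 30` on one line, or entering the two values on separate lines, yields the same result.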
Just note that java.util.Scanner's nextInt() will throw "Exception in thread "main" java.util.InputMismatchException" if you enter String or character data which is not a number while reading input using nextInt(). That’s all on how to read user input from the command line or command prompt in Java.
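One way to guard against that InputMismatchException is to test the next token with Scanner's own hasNextInt() before parsing it. A small sketch (the class and method names here are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.Scanner;

public class SafeIntRead {

    // Returns the first int found in the stream, skipping non-numeric tokens;
    // returns the fallback value if the input runs out without a number.
    static int nextIntOrDefault(InputStream in, int fallback) {
        Scanner scanner = new Scanner(in);
        while (scanner.hasNext()) {
            if (scanner.hasNextInt()) {
                return scanner.nextInt();   // safe: the next token is a valid int
            }
            scanner.next();                 // discard the non-numeric token
        }
        return fallback;
    }

    public static void main(String[] args) {
        System.out.println(nextIntOrDefault(System.in, -1));
    }
}
```

This loop never calls nextInt() unless hasNextInt() has confirmed the token parses, so no exception handling is required.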
Bats of Belize
The more than 1100 kinds of bats around the world amount to approximately one quarter of all mammal species. They are found everywhere except in the most extreme desert and polar regions. The majority inhabit tropical forests where, in total numbers, they outnumber all other mammals combined.
Bats come in an amazing variety of sizes and appearances. The world's smallest mammal, the bumblebee bat of Thailand, weighs less than a penny, but some flying foxes of the Old World tropics have wingspans of up to 6 feet. The big-eyed, winsome expressions of flying foxes often surprise people who would never have thought that a bat could be attractive. Some bats have long angora-like fur, ranging in color from bright red or yellow to jet black or white. One species is furless, and another even has pink wings and ears. A few are so brightly patterned that they are known as butterfly bats. Others have enormous ears, nose leafs, and intricate facial features that may seem bizarre at first, but become more fascinating than strange when their sophisticated role in navigation is explained.
The listing of bats found in Belize was provided by Dr. Bruce W. Miller, PhD. (Neotropical Bat Risk Assessments). Our original Bats of Belize page showed some species that have not actually been found in Belize. Also, it seems some of the species and sub-species names have been changed over the years. Our thanks to Dr. Miller for his kind assistance in correcting the errors on our earlier page. Hopefully, the information below is now complete and correct.
Sac-winged Bat - Emballonuridae
Sac-winged Bats are small, insect-feeding bats with mostly brown or gray fur and relatively large eyes. Many Sac-winged Bats roost at almost vertical substrates with the folded forearms supporting the body.
Sac-winged Bats can be found in humid rain forests, seasonal semi-deciduous forests, and savannas. Most species roost in well-lit places like entries to caves and temples, at the outside of buildings, or in hollow trees and buttress cavities of large trees.
Colonies of some the Sac-winged Bat species are easily found because these bats emit social calls audible to humans.
Emballonurid bats are aerial insectivorous bats that can be easily observed hunting for insects in a slow butterfly-like flight. Larger Sac-winged Bats, like the genus Taphozous, have a more pronounced, powerful flight. Emballonurids are among the first bats to start foraging in the evening. During periods of bad weather, some species may even start foraging in the afternoon. Occasionally, some species also glean insects from leaves.
Emballonurid bats that have been seen in Belize:
Bulldog Bats - Noctilionidae
Bull-dog or mastiff bats are medium-sized bats, often brightly colored. The region around the mouth is distinctive. The lips are full and form cheek pouches, in which the bats store food as they feed while flying. The tail of bulldog bats runs through the uropatagium for about half the length of the membrane, then exits dorsally, and the terminal part of the tail is free. The feet and claws range from relatively large (Noctilio albiventris) to relatively enormous (Noctilio leporinus) in size, and the legs are proportionately longer than in most other bats. The ears are moderately large and a tragus is present. Bulldog bats have a pungent odor, described by some as "fishy."
Most Bulldog Bats feed only on insects. The only Bulldog Bat found in Belize, Noctilio leporinus, takes fish, frogs, and crustaceans as well. To capture fish, these bats use their echolocation to locate exposed fins or ripples made by fish swimming near the surface. They then drag their claws through these ripples. Their hind claws are unusually large and sharp and serve as efficient gaffs. Once out of the water, the fish is carried to a perch, where it is eaten by the bat. Noctilio leporinus may also capture insects and crustaceans on the surface of the water.
These bats usually roost near water, often in hollow trees or in deep cracks in rocks.
Free-tailed Bats – Molossidae
Molossids are known as free-tailed bats, because their bony tail extends to the end of a well-developed tail membrane (uropatagium) and considerably beyond. They often crawl backwards when on the ground, using their tail as a sort of "feeler." Molossids are small to moderately large bats. Their muzzles are usually short and broad, and they often have wide, fleshy lips that may have folds or creases. Many have a distinctive pad over their noses; this pad is often endowed with odd bristles with spatulate tips. Most free-tailed bats have relatively short but broad ears.
The tragus is tiny, but opposite it, an antitragus is unusually well developed. All species have long, narrow wings, apparently adapted for fast but relatively unmaneuverable flight in open places. Their wing and tail membranes are unusually tough and leathery. Molossids also have short, strong legs and broad feet. Like their nose pads, molossids' feet are well endowed with sensory bristles (also with spatulate tips). They are excellent climbers, perhaps because they launch themselves for flight from a considerable height above the ground. Because of their long, narrow wings, they must attain considerable speed before they can develop enough lift to fly. They accomplish this by falling some distance from their roost or take-off point.
Molossids generally have short, even velvety fur. Most are black or brown, and many species have distinctive reddish and brownish or blackish color phases. Their roosting habits range from solitary to living in immense colonies of millions of bats, usually in caves. In the neighborhood of these large colonies, molossids consume enormous numbers of insects.
Molossids Bats that live in Belize:
Leaf-chinned Bat - Mormoopidae
The Leaf-chinned Bat is a small to medium-sized bat. Their lips are large, and their lower lips are complexly folded and ornately decorated with plates and flaps of skin. The mouth is distinctively shaped like a funnel when open. Leaf-chinned Bats are also called "moustache bat" because of a fringe of stiff hairs on their muzzles. Their eyes are small compared to the eyes of bats of similar body size. The ears vary in size and shape but always have a tragus (which always has a secondary fold). In some species, the wings attach to the body high along the midline of the back, so that the surface of the back appears naked.
Beneath the wings, however, is a normal coating of fur. The fur of most species is brown or reddish brown, but within species some individuals vary considerably in color.
Leaf-chinned Bats are strictly insectivorous and generally live near water. They roost sociably, sometimes in very large colonies, and some species are thought to roost exclusively in caves. They can be found in a wide range of habitat types, from rainforest to arid deserts.
Mormoopidae Bats known to live in Belize:
Funnel-eared Bat – Natalidae
Natalus mexicanus, the Mexican funnel-eared bat, is the only member of Natalidae that is found in Belize. They are aerial insectivores that appear to be specialists in feeding on spiders. All of these bats have funnel-shaped ears and long, slender hind legs.
Leaf-nosed Bats - Phyllostonidea
New World leaf-nosed bats are a common and diverse group that includes around 143 species, placed in 49 genera. The relationships of these genera are not fully understood.
The most conspicuous characteristic of phyllostomids is a "noseleaf", a fleshy protuberance from the nose that ranges from in size from nearly as long as the head to, in a few species, complete absence.
Many species also have bumps, warts, and other protuberances on the head near the noseleaf or on the chin. In most species, the noseleaf is a relatively simple spear-shaped structure.
Phyllostonidea Bats that have been identified in Belize:
Plain-nosed Bats – Vespertilionidae
Plain-nosed Bats is the largest family of bats: it includes 35 genera and 318 species! With this many species there are exceptions to almost every generalization about this family.
Vespertilionids, or evening bats, have small eyes, ears with both a tragus (fleshy ear outgrowth) and an anterior basal lobe (except Tomopeas). Their tails are relatively long and extend to the edge of the tail membrane or beyond.
This large family includes a wide range of sizes. Some vesper bats weigh only 4 grams as adults, whereas others weigh up to 50 grams. Most of these bats are black or brown colored, but some are orangish or have other markings.
Many vespertilionids live in caves, but these bats can also be found in mine shafts, tunnels, tree roosts, rock crevices, buildings, etc. Some species contaminate human habitations with feces and noise, but this annoyance is more than offset by the bats' consumption of huge quantities of insects. Some species roost in large colonies, but others are solitary or live in small groups or pairs. Males and females tend to roost apart most of the year, and some species have maternity colonies.
Vespertilionidae bats that have been seen in Belize.
When writing fiction there are no hard and fast rules, but there are some guidelines you ought to be aware of. In terms of plotting, whether you are a plotter or a pantser, there is an essential basic arc that you will want your narrative to fit.
The Basic Plot Arc (or Story Arc):
In a basic plot arc, your story should look something like this:
Essential aspects to remember in a basic plot:
- Your plot begins with an event that changes everything. This event triggers a number of increasingly dramatic plot points, which are connected and which lead to an ultimate crisis and resolution.
- Additional dramatic points that do not contribute to the plot itself are a distraction. Everything must result in something.
The Romance Plot Arc:
In a romance plot arc, the requirements of your narrative structure become a little more specific. It should look something like this:
Essential aspects to remember in a romance plot:
- Heroine and hero must meet quickly, after setting up normal life, thus triggering the main plot.
- The story must progress with a series of midway plot points that increase in drama and, most importantly, focus on the romance, leading to the romance crisis and resolution. Will they or won’t they end up together? That’s the point of a romance. Each plot point should be heightening the tension over that question, as well as the desire between your two characters.
- Internal obstacles (emotional conflicts) are more important than external obstacles. External obstacles (secondary characters/plots) might heighten internal ones, but the internal obstacles should be the focus.
- Hero and heroine must be in-scene together (even if they are in conflict), or thinking about each other, for the majority of the novel. When they aren’t together, the reader is bored.
- Typically the climax in a romance is achieved by your heroine and/or hero making a decision to sacrifice something for love.
- It’s all about your two main characters (and their feelings), everything else is secondary.
The Secondary Plot Arc:
Of course, a lot of romance stories have a secondary plot, whether it be a suspense plot, a paranormal plot, or a plot about related secondary characters (etc). When we add a secondary plot into the mix, the story gets a bit more complex.
Although it should hold less focus than your main romance plot, your secondary plot needs to be just as cohesive and take a similar structure. It should look something like this:
Essential aspects to remember in a secondary plot:
Your secondary storyline must have a similar structure to your primary plot. You need a trigger, a build up to a climax and a resolution. Most importantly, your trigger, build up, climax and resolution need to be related and lead into each other.
- Plot points that do not lead anywhere are a distraction and should be cut.
- Try to have your secondary storyline feed into your romance storyline.
- Make sure all your plot lines are resolved by the end.
TOP PRIORITIES FOR A ROMANCE PLOT ARC:
- Plot your romance so that it is the focus of your storyline.
- Ensure that there are obstacles and plot points that affect the romance.
- Ensure that your secondary plot is cohesive and resolved, feeding rather than detracting from the romance plot.
Holly Kench of Visibility Fiction
Kakuzo Okakura first described Japanese tea culture to a readership in the U.S. in The Book of Tea in 1906. Since then, his book, his ideas, and Japanese tea culture have traveled across the world.
A highlight for many sojourners to Japan is drinking a tea bowl of freshly prepared green matcha prepared by a kimono-clad expert in a wooden tea ceremony house within a traditional Japanese garden. Foreigners interested in the tea ceremony usually flock to Kyoto and the famous tea-growing region of Uji.
I decided to travel to the Izura Coast of Kita-Ibaraki, Japan, to walk in the footsteps of Kakuzo Okakura (also known as Tenshin Okakura): writer, inventor, translator, art collector, philosopher, museum curator, art patron, and a bridge connecting Eastern and Western cultures.
Fluent in Chinese, English, and Japanese, Okakura drank tea with and influenced many cultural personages of the early twentieth century, including American poet Ezra Pound, German philosopher Martin Heidegger, Indian Swami Vivekananda, American art philanthropist Isabella Stewart Gardner, and Japanese painter Taikan.
In his forties, when he was not traveling the world or curating art for the Department of Chinese and Japanese Art at the Museum of Fine Arts, Boston, he spent his days with family on the Izura Coast. That is where he also encouraged a nascent art movement within Japan and where he designed a unique tea ceremony house that Okakura named Rokkakudo Hall.
Rokkakudo, which means six corners, is probably the only hexagonal tea ceremony house in existence. It stands on a rocky spit in an inlet nearly surrounded by cliffs and clinging Japanese pines. The building combines Chinese and Japanese elements. Thick dark tiles cover a sloping pavilion roof. The outside walls have the rusty-red coloring of ancient Chinese pagodas, which it was meant to resemble.
Within Rokkakudo, Okakura and fellow artists that influenced Japanese culture sipped matcha while gazing at colorful sunsets, at waves crashing over green seaweed and jagged rocks, and at shapely clouds above a long horizon. Okakura wrote, “The tea-room was an oasis in the dreary waste of existence where weary travelers could meet to drink from the common spring of art-appreciation.”
The devastating earthquake and tsunami of 2011 that destroyed Fukushima annihilated Rokkakudo in Kita-Ibaraki. Only the foundation on the rocks remained after the walls, roof, windows, and everything inside were dragged into the sea.
Because of its value as a national cultural treasure of Japan, experts from America and Europe worked with local professors from Ibaraki University to recreate Rokkakudo based on old photos, written descriptions, and original pieces of Rokkakudo that scuba divers managed to locate on the seabed. The experts searched Japan and the world for lumber, tiles, and glass windows that were identical to those that composed the tea house in 1905.
More proof of the significance of Okakura to Japanese tea culture is that the head of the Edosenke tea school holds an annual memorial tea ceremony event to commemorate Okakura’s death. Entrance is allowed only to a select few during special occasions.
I stood outside of Rokkakudo and listened to the waves sweeping back and forth over rocks. A cloudless blue sky touched the sea. Night-black crows squawked while chasing each other between green pine trees. Creating my impromptu tea ceremony, I sat in silent contemplation on a sunlit rock and sipped green tea from a thermos that I had brought for the occasion. Then I walked to the nearby Itsuura Kanko Hotel for a more traditional tea experience.
The Itsuura Kanko Hotel is a venerable institution long associated with art and tea. Located on the coast above Rokkakudo, many of its windows offer views of the tea house and gorgeous seascapes. Proprietress Wakako Murata is a Japanese tea ceremony master of the Urasenke tea school. Ms. Murata provides a relaxed version of the tea ceremony in her lobby between 4:30 to 5:30 every evening.
Though the lobby is large, open, and modern—unlike a typical tea ceremony room—Japanese culture mixes with Western culture. An intricately arranged, yet simple in appearance, flower arrangement provides atmosphere. Ms. Murata, whose family has owned the hotel for generations, is a lovely Japanese woman in an elegant kimono. All her movements, from wiping the edges of ceramic bowls to folding napkins to pouring water with an ancient ladle to using a bamboo whisk to whip matcha powder into an aromatic frothy beverage, carried dignity while extending hospitality.
I watched her precise movements. Finally, my turn to drink arrived. Shifting an imperfect bowl in my hands, I admired the design and brought its smooth surface to my lips. The matcha aroma wafted into my awareness. I tasted warm vegetal sweetness. Perhaps, I experienced what Okakura described in his book:
“Tea … is a religion of the art of life. The beverage grew to be an excuse for the worship of purity and refinement, a sacred function at which the host and guest joined to produce for that occasion the utmost beatitude of the mundane.”
The moment ended too soon. However, Ms. Murata kindly prepared more matcha for me the following morning before my departure. After the Itsuura Kanko Hotel, I visited the Tenshin Memorial Museum of Art, a five-minute drive away.
Ibaraki prefecture created a stunningly beautiful museum that teaches visitors how Okakura protected traditional art, inspired modern artists, and illuminated aspects of Japanese culture for the world, among other achievements. Displays by contemporary Japanese artists are a large part of the museum, too. One could spend an entire day there without feeling bored.
As a tea lover, I was most interested in visiting the Okakura Tenshin Memorial Room, which displays his manuscripts, first copies of The Book of Tea, and the tea utensils he used in Rokkakudo.
If you go, be sure to complete your journey in the museum café by enjoying what Okakura described as “the cup of humanity.”
Theories – Development & Disability Essay Sample
The psychological and mental development of a person takes its course with major input from his or her surroundings. Additionally, the mental and physical capabilities of a person depend on a number of factors; the most important, of course, is the person’s own conviction and resilience. Although the debate on the extent and complexity of human psychology will linger on for eternity, scientists have already reached some conclusions. Human psychology, due to its evolving nature, generates a set of questions that need to be answered. The first is the scope of personality development and its impact on treating disabilities. The second, and more important one, is the confusion about the success of life-span treatments.
Life-span development theory vis-à-vis the treatment of lifelong disabilities can be defined in a number of ways (Juntunen & Atkinson, 2001). The theory of life-span development essentially puts its focus on the growth and evolution of a disability and the countermeasures taken to prevent or cure it. The core of the theory lies in planning and executing processes and exercises that help a person overcome his or her disabilities over a long period of life. On paper, the theory looks great and highly implementable. In real life, however, things are actually quite different. Even some psychologists often misunderstand developmental psychology as a whole. The theory essentially involves practical implementation that transcends the barriers between a patient and a counselor.
Sigmund Freud pointed out in his theories of human evolution that the “id” plays a crucial role in the overall development of a person. Although ego does play a more pivotal role in the later life, “id” gets the ball rolling. Mental development starts from the pre-natal years. A person acts under the commands of id for many years after birth. After the onset of ego, the situation does change but not in every person. At times, id overwhelms the rational thinking generated by ego and vice versa. According to life-span theory, the super ego is thus the deciding factor under these situations.
The superego does help in the third-stage development of human beings, but it is not the final deciding factor. Contrary to some parts of the theory—those that suggest a controlling nature of the superego—it is the ego that floats freely among the conscious, preconscious, and unconscious stages. The life story of one severely injured person can effectively illustrate this conflict. A person—let us call him Alex—was injured in a car crash. The injuries were severe and affected his body as well as his brain. While the bodily paralysis was controlled after physical therapies, mental development and capabilities came to a standstill.
Traumatic brain injuries are known for altering the lifestyles of the persons affected, and that was the case with Alex. After the initial recuperation, the actual test started, both for him and for his family and counselors. He lost his memory almost in its entirety. The rehabilitation, both physical and mental, thus posed huge challenges for the caregivers. Now, as per the tenets of life-span theory, he had to be re-acquainted with his previous life to bring improvement. That is actually a very difficult process. First, the caregivers had to position themselves as per the needs and demands of that person. The biological changes were severe enough that they altered the lives of his family.
The life-span theory, in such cases, emphasizes the need for strong and quick counseling to improve the condition of the patient. That, however, is neither an easy nor a quick process. The counselor has to understand the condition of the patient in detail. The biological changes in Alex resulted in a relative loss of bowel control, inconsistent eating habits, and a return to an infant-like stage of life. Now, the counselors had to bring him back to his normal self. This involved lengthy counseling sessions in which they tried to revitalize his ego and crush the dominance of the id. The frontal lobe injury made it even more difficult to heal the language and movement inconsistencies. His family failed to understand the pain and suffering he was going through. His wife left him for another man, and his mother cared for his children.
As per the theory, the involvement of family is essential in the rehabilitation of such patients. It, however, fails to give us any clues on what to do if the family is non-cooperative or even if the person has no family. The exercises given in the theory can help in the recovery but most include the family members as part of the treatment. In many cases, the implementation of the theory fails to result in an improvement solely because of this compulsion. Nevertheless, in some other cases life-span development is highly effective in the treatment. In dementia and drug addiction, life-span theory can work wonders. In the first case, dementia develops over the age and older people are most likely to suffer from this condition. As the loss of memory sinks in, most people become estranged from their surroundings. This reclusive behavior can be effectively treated with life span development (White & Merluzzi, 1998).
A counselor needs to do a few things before the actual start of the treatment. He or she has to take the client into confidence over the course of treatment and the procedures applied. After the initial treatment, life span theory can be applied in stages. The first step towards treatment is the reliving of the memories of childhood. Every person has some profound memories and the id remains strongly perched in the unconscious. Once the client is able to recall childhood memories, the process becomes a bit easier. The reliving of teenage and adult memories can take a longer time. A client may face difficulties in grappling with the fact that the golden memories of his life are lost or diminishing. Here, the counselor has to take the things slower. The treatment should involve the examples of constructivism and learning. (Moreland, 1979)
The treatment of drug addicts is one of the most difficult tasks any psychologist can undertake. Addiction erases the feelings of happiness, achievement, and success from a person’s mind. Instead, he or she relives the bad memories and tumultuous times of his or her life and takes refuge in drugs. These people, despite knowing their past, do not want to come to terms with life. Counseling sessions for drug addicts thus invoke the notion of guilt to help these patients in recovery. Once the client understands the self-destructive process, he or she can be reminded of his or her achievements in the past. Additionally, and more importantly, the focus should be on the developmental theory. All phases and unpleasant facts should be addressed beginning from childhood and subsequently moving to adulthood. This learning process can prove to be very effective in a rather speedy recovery.
Juntunen, Cindy L., & Atkinson, Donald R. (Eds.). (2001). Counseling across the lifespan: Prevention and treatment. Sage Publications, Inc.
Moreland, John R. (1979). Some Implications of Life-Span Development for Counseling Psychology. Accessed June 27, 2009: http://www.eric.ed.gov/ERICWebPortal/custom/portlets/recordDetails/detailmini.jsp?_nfpb=true&_&ERICExtSearch_SearchValue_0=EJ197502&ERICExtSearch_SearchType_0=no&accno=EJ197502
White, Robert D. (1998). Life-span perspectives on health and illness. Lawrence Erlbaum.
Photo has outdoors experts thinking wolverine
Published 4:00 am, Friday, March 7, 2008
A dark photograph of a ferocious-looking animal with an almond-colored stripe is being touted as the first documented wolverine in California in more than three-quarters of a century.
The digital picture, taken by a camera planted in the Tahoe National Forest, captured the muscular carnivore last week during a research project aimed at the wolverine's weasel family relative, the marten.
The image of the elusive creature has created a sensation among wildlife experts and ecologists, who have tried for years to get the wolverine listed as an endangered species.
The wolverine is believed by many to be extinct in California. The species has not been documented in the state since 1922, the last recorded killing of the furry animal in the Sierra Nevada. There have been a smattering of reported sightings through the years, but most were discounted as mistaken identity, said Bill Zielinski, a research ecologist with the U.S. Forest Service.
Zielinski and Oregon State University graduate student Katie Moriarty had set up the heat- and motion-sensitive digital camera facing a tree where food and a scent lure were placed, in the forest about 10 miles north of Truckee.
A moment of disbelief
Moriarty, 26, was working on a master's thesis on the American marten, a slender brown weasel that likes old-growth forests. Zielinski had done a similar survey 30 years before, and the two of them, using funding from the research station, were trying to determine whether the distribution of the animals had changed through time.
"I looked at the camera memory cards on Sunday," Moriarty said. "A research assistant had removed the cards on Friday and said he could not identify one of the photos."
When she looked at the image, Moriarty said she could not believe her eyes. The image was of the rear of an animal that appeared very much to be a wolverine taken in the early morning of Feb. 28.
"It was extremely shocking. It's still extremely shocking," Moriarty said.
Moriarty is familiar with the species. Several of Moriarty's colleagues had been studying wolverines, and one had conducted an extensive but ultimately futile search for the creature in these very same forests in the early 1990s.
Studying the telltale black and brown markings of the animal in the photo, she couldn't think of anything else it could be but a wolverine.
"I jumped up and down for a while, then I looked at it, then I would get up again and then sit down and look at it again," she said. It took Moriarty 10 minutes to compose herself, and then she called Zielinski and, barely able to contain herself, told him to look at the e-mail she had just sent him.
"I was dumbfounded," Zielinski said. "I just could not believe, nor could she, what we were looking at."
"His words were something like, 'Bill, this looks like the real deal.' "
The discovery will undoubtedly spur more research. Moriarty, Zielinski and Forest Service scientists are planning a major hunt in the area for genetic material, including wolverine scat and hair samples. More cameras are likely to be set up, along with "hair snares," which capture animal hair for DNA analysis.
Researchers are expected to consider more seriously the half-dozen or so reported sightings during the past two decades.
"Every few years there seems to be a bona fide observation, many of which were discredited because they could have been a badger, young black bear or marten," Zielinski said. "If this is a wolverine, we're going to take those sightings more seriously."
The North American wolverine is the largest member of the weasel family, with adults weighing as much as 45 pounds. Stocky and muscular, it has a bushy tail and broad head that reminds people of a small bear.
Remarkably strong, with powerful jaws, wolverines have been known to kill prey as large as a moose, but in North America they are mostly scavengers, sometimes defending scavenged meat against larger animals. Loners, they stake out territory and try to stay out of each other's way. Individual wolverines can range as far as 240 square miles, eating insects, berries, small animals, birds and carrion.
They are more common in the north-central United States, including Minnesota, Michigan and North Dakota, and also can be found in Idaho, Utah, Colorado and Wyoming.
Adult wolverines have no natural predators except humans, who have historically hunted and killed them in large numbers for their fur.
Zielinski said the wolverine in the photograph could have migrated from somewhere north like the Rockies or the Cascade Range in Washington, where wolverines are also known to exist. Another possibility is that it is part of a small group of native wolverines that evaded detection for the better part of a century.
The third possibility, Zielinski said, is that it was an escaped pet, a fugitive from some captive group or planted by a person or group.
"Anything's possible, depending on people's motivations," Zielinski said. "That's why we have to entertain that third possibility."
The discovery of a wolverine in the Sierra could have major land-use implications if the species is ever declared an endangered species, a step that is under consideration by the U.S. Fish and Wildlife Service.
For Moriarty, it has already been a life-changing experience.
"It's extremely overwhelming, but I'm still making my thesis my priority," she said. "I'm not getting deterred too much, but this is an amazing discovery. I look at the picture every day in amazement."
After children become familiar with atoms and elements, they are ready to classify matter. Sixteen picture cards represent two main categories of matter, pure substances and mixtures, in four groups: chemical elements, compounds, homogeneous mixtures, heterogeneous mixtures. The child places picture cards on the plastic chart (18" x 32"), using the information on the back of the card as the control. The picture cards can also be sorted under six label cards (2¼" x 4½") that duplicate the text on the chart. Background information and lesson suggestions are included for the teacher. Cards (3½" x 3¼") are in full color and laminated.
What Is BBC School Report?
BBC News School Report gives 11-16 year-old students in the UK the chance to make their own news reports for a real audience.
Using lesson plans and materials from this website, and with support from BBC staff and partners, teachers help students develop their journalistic skills to become School Reporters.
In March, schools take part in an annual News Day, simultaneously creating video, audio and text-based news reports, and publishing them on a school website, to which the BBC aims to link.
School Reporters produced a stunning array of content on 27 March 2014, with more than 1,000 schools across the UK making the news on the biggest ever School Report News Day.
Students from this school will be making the news for real on 19 March 2015 as they take part in BBC News School Report. We aim to publish the news by 1600 GMT on the News Day, so please save this page as a favourite and return to it later!
Taken from the BBC School Report web page
Born in 1706 in Boston, Benjamin Franklin is revered as a founding father of the United States and the brains behind a medley of American inventions and organizations. He was a member of the Second Continental Congress, a drafter and signer of the Declaration of Independence and U.S. Constitution and commissioner to France during the American Revolutionary War.
A likeable fellow, Franklin was pivotal in recruiting French aid to the Americans during the war and later signed the Treaty of Paris ending the conflict with the British Empire.
Franklin's unique inventions include bifocals, the lightning rod (although he didn't invent electricity), the iron furnace stove—or Franklin stove—and an odometer.
A young Ben Franklin helped launch the Library Company, America's first subscription library, in 1731, and organized Philadelphia's Union Fire Company, modeled after Boston's, in 1736. The Philadelphia Fire Department traces its roots to Franklin's company.
Perhaps Franklin's greatest and most enduring creation is America's first university, the University of Pennsylvania. His 1749 pamphlet, "Proposals Relating to the Education of Youth in Pensilvania," proposed a charter "with Power to erect an ACADEMY for the Education of Youth, to govern the same, provide Masters, make Rules, receive Donations, purchase Lands, &c. and to add to their Number, from Time to Time such other Persons as they shall judge suitable."
His proposals became the basis for the Academy, College and Charitable School of Philadelphia, the forerunner to Penn. Franklin, president of the Academy, College and Charitable School from 1749 to 1755, hired the University's first provost, William Smith, and served on the Board of Trustees from 1749 until his death in 1790. Twenty thousand people are said to have attended his funeral.
The links below offer more information about Franklin's remarkable life, achievements and contributions.
A documentary history of Penn's origins and its early years from 1740 to 1791
Penn's site celebrating the 300th anniversary of Benjamin Franklin's birth
An engaging and extensive resource featuring games, narratives, original writings, pictures, streaming video, and more
A chronicle of Franklin's life and accomplishments, compiled by the creators of the PBS special "Benjamin Franklin"
the U.S. is rapidly losing the global race for high-speed connectivity, as fewer than 8 percent of households have fiber service. And almost 30 percent of the country still isn’t connected to the Internet at all… .
The FCC’s National Broadband Plan of March 2010 suggested that the minimum appropriate speed for every American household by 2020 should be 4 megabits per second for downloads and 1 Mbps for uploads. These speeds are enough, the FCC said, to reliably send and receive e-mail, download Web pages and use simple video conferencing… .
The South Korean government announced a plan to install 1 gigabit per second of symmetric fiber data access in every home by 2012. Hong Kong, Japan and the Netherlands are heading in the same direction. Australia plans to get 93 percent of homes and businesses connected to fiber. In the U.K., a 300 Mbps fiber-to-the-home service will be offered on a wholesale basis… .
The current 4 Mbps Internet access goal is unquestionably shortsighted. It allows the digital divide to survive, and ensures that the U.S. will stagnate…
Think of it this way: With a dialup connection, backing up 5 gigabytes of data (now the standard free plan offered by many storage companies) would take 20 days… . with a cable DOCSIS 3.0 connection, an hour and a half… . With a gigabit fiber-to-the-home connection, it can be done in less than a minute…
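The comparison above is simple throughput arithmetic. As a rough sketch (not from the article), the Python snippet below computes nominal transfer times for a 5 GB backup; it assumes ideal line rates with no protocol overhead, and the 56 kbps dialup and ~8 Mbps effective cable figures are illustrative assumptions, so real-world times (like the article's 20-day dialup estimate) run longer than the dialup figure computed here.

```python
def transfer_time_seconds(size_gb: float, speed_bps: float) -> float:
    """Ideal time to move `size_gb` decimal gigabytes over a link of
    `speed_bps` bits per second (no protocol overhead assumed)."""
    return (size_gb * 1e9 * 8) / speed_bps

def fmt(seconds: float) -> str:
    """Render a duration in the largest convenient unit."""
    if seconds >= 86400:
        return f"{seconds / 86400:.1f} days"
    if seconds >= 3600:
        return f"{seconds / 3600:.1f} hours"
    if seconds >= 60:
        return f"{seconds / 60:.1f} minutes"
    return f"{seconds:.0f} seconds"

# 5 GB backup over three link types (nominal rates are assumptions)
links = {
    "dialup (56 kbps)": 56e3,
    "cable DOCSIS 3.0 (~8 Mbps effective)": 8e6,
    "gigabit fiber (1 Gbps)": 1e9,
}
for name, bps in links.items():
    print(f"{name}: {fmt(transfer_time_seconds(5, bps))}")
```

Even these idealized numbers — roughly eight days for dialup, about an hour and a half for cable, and well under a minute for gigabit fiber — make the point: the gap between access tiers is measured in orders of magnitude, not percentages.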
a Hollywood blockbuster could be downloaded in 12 seconds, video conferencing would become routine, and every household could see 3D and Super HD images. Americans could be connected instantly to their co-workers, their families, their teachers and their health-care monitors…
To make this happen, though, the U.S. needs to move to a utility model, based on the assumption that all Americans require fiber-optic Internet access at reasonable prices. …
As things stand, the U.S. has the worst of both worlds: no competition and no regulation.
- by Nicholas Christakis
The science of network-synchronized, emergent, self-organizing, complex adaptive social systems!
How Amoebas Form Social Networks
Network Literacy Part 1
I’ve become convinced that understanding how networks work is an essential 21st century literacy. This is the first in a series of short videos about how the structure and dynamics of networks influences political freedom, economic wealth creation, and participation in the creation of culture. The first video introduces the importance of understanding networks and explains how the underlying technical architecture of the Internet specifically supports the freedom of network users to innovate.
- Howard Rheingold
Network Literacy Part One 2
Networking technologies visualized as extensions
of our basic biological cognitive scaffolding
The National Research Council defines Network Science as:
"the study of network representations of physical, biological, and social phenomena leading to predictive models of these phenomena."
(moving towards - Organic Process Literacy)
We always lived in a connected world, except we were not so much aware of it. We were aware of it down the line, that we’re not independent from our environment, that we’re not independent of the people around us. We are not independent of the many economic and other forces. But for decades we never perceived connectedness as being quantifiable, as being something that we can describe, that we can measure, that we have ways of quantifying the process. That has changed drastically in the last decade, at many, many different levels.
It has changed partly because we started to be aware of it partly because there were a lot of technological advances that forced us to think about connectedness. We had Worldwide Web, which was all about the links connecting information. We had the Internet, which was all about connecting devices. We had wireless technologies coming our way. Eventually, we had Google, we had Facebook. Slowly, the term ‘network connectedness’ really became part of our life so much so that now the word ‘networks’ is used much more often than evolution or quantum mechanics. It’s really run over it, and now that’s the buzzword.
The question is, what does it mean to be part of the network, or what does it mean to think in terms of the network? What does it mean to take advantage of this connectedness and to understand that? In the last decade, what I kept thinking about is how do you describe mathematically the connectedness? How do you get data to describe that? What does this really mean for us?
- By Brian Solis
His 10-step action list for creating a community-based business strategy for relevance in an age of social-network connectedness.
1. Answer why you should engage in social networks and why anyone would want to engage with you
2. Observe what brings them together and define how you can add value to the conversation
3. Identify the influential voices that matter to your world, recognize what’s important to them, and find a way to start a dialogue that can foster a meaningful and mutually beneficial relationship
4. Study the best practices of not just organizations like yours, but also those who are successfully reaching the type of people you’re trying to reach – it’s benchmarking against competitors and benchmarking against undefined opportunities
5. Translate all you’ve learned into a convincing presentation written to demonstrate tangible opportunity to your executive board, make the case through numbers, trends, data, insights – understanding they have no idea what’s going on out there and you are both the scout and the navigator (start with a recommended pilot so everyone can learn together)
6. Listen to what they’re saying and develop a process to learn from activity and adapt to interests and steer engagement based on insights
7. Recognize how they use social media and innovate based on what you observe to captivate their attention
8. Align your objectives with their objectives. If you’re unsure of what they’re looking for…ask
9. Invest in the development of content, engagement
10. Build a community, invest in values, spark meaningful dialogue, and offer tangible value…the kind of value they can’t get anywhere else. Take advantage of the medium and the opportunity!
"Disrupting the Marketplace"
Interview with NiQ Lai
Some good advice for the
Ass-Backward CEOs that run
North American’s ISPs
Bill Moyers on Plutonomy (by buffalogeek)
Gloves that allow speech- and hearing-impaired people to communicate with those who don’t use or understand sign language. The gloves are equipped with sensors that recognize sign language and translate it into text on a smart phone, which then converts the text to spoken words.
Today, scholars would refer to this conflict with less rhetorical flourish, identifying it simply as a patent thicket in sewing machines. A "patent thicket" exists when too many patents covering individual elements of a commercial product are separately owned by different entities. This concept is not unique to patent law; it is based on Professor Michael Heller's theory of the anticommons in real property, which arises when there is excessive fragmentation of ownership interests in a single parcel of land. According to economic theory, the problem of such excessive fragmentation of ownership interests is straightforward: it increases transaction costs, accentuates hold-out problems, and precipitates costly litigation, which prevents commercial development of the affected property. Additionally, a patent thicket can block new research into follow-on inventions, preventing the "Progress of . . . the useful Arts."
There is now a debate raging in the literature as to whether patent thickets in fact lead to such problems, and vivid anecdotes abound about obstructed development of new drugs or problems in distributing life-enhancing genetically engineered foods to the developing world. Given this heightened interest among scholars and lawyers concerning the existence and policy significance of patent thickets, a historical analysis of the first patent thicket and its resolution in the first patent pool is important.
In modern patent and property theory, this historical study fills a gap in the scholarship on patent thickets in at least two ways. On one hand, it serves as an empirical case study of a patent thicket that (temporarily) prevented the commercial development of an important product of the Industrial Revolution. There can be no doubt that the Sewing Machine War was a patent thicket. As one historian has observed: "The great advantage of the sewing machine, from the lawyers' point of view, was that . . . no one complete and entire working sewing machine was ever invented by one person unaided." The sewing machine was the result of numerous incremental and complementary inventive contributions, which led to a morass of patent infringement litigation given overlapping patent claims to the final commercial product. This is important because, as Professor Heller has observed, "[a]nticommons theory is now well established, but empirical studies have yet to catch up." The Sewing Machine War confirms that patent thickets exist, and that they can lead to what Professor Heller has identified as the tragedy of the anticommons.
On the other hand, the story of the sewing machine challenges some underlying assumptions in the current discourse about patent thickets. One assumption is that patent thickets are primarily a modern problem arising from recent changes in technology and law. Professor Heller explicitly makes this point in The Gridlock Economy:
There has been an unnoticed revolution in how we create wealth. In the old economy — ten or twenty years ago — you invented a product and got a patent . . . . Today, the leading edge of wealth creation requires assembly. From drugs to telecom, software to semiconductors, anything high tech demands assembly of innumerable patents.
In fact, Professor Heller's first foray into patent thicket theory was assessing a potential anticommons in "biomedical research" that he and his co-author, Professor Rebecca Eisenberg, predicted would occur given extensive patenting of biotech research tools (a prediction that has not yet been borne out). Continuing this focus on biotech, The Gridlock Economy discusses biotech research and development almost exclusively in its analysis of anticommons theory in patent law. Despite some off-hand references to earlier patent thickets, such as a thicket in the first airplane patents that was resolved through Congress's enactment of a "compulsory patent pool" in 1917, the focus of the theoretical and empirical studies of patent thickets is on very recent inventions in high technology and science — computers, telecommunications, and biotech.
A second assumption is that patent thickets are a property problem — too much property that is too easily acquired that results in too much control — and so they are best addressed by limiting the property rights secured to patentees. As Professor Heller euphemistically puts it, “Cutting-edge technology can be rescued from gridlock by creatively adapting property rights.” More specific proposals have called for limiting conveyance rights in patented drugs, authorizing federal agencies to terminate patent rights to avoid patent thickets, and “excluding patentability of genetic inventions for reasons of morality or public order.” Many scholars concerned about patent thickets hail the U.S. Supreme Court’s recent decision in eBay Inc. v. MercExchange, L.L.C., because the Court made it more difficult for patentees to become hold-outs through threatening or obtaining injunctions. Although Professor Heller, the Founding Father of anticommons theory, acknowledges that “the empirical studies that would prove — or disprove — our theory remain inconclusive,” this has not stopped the numerous proposals of various regulatory or statutory measures to redefine and limit property rights in patents.
The story of the invention and development of the sewing machine challenges these two assumptions insofar as it is a story of a patent thicket in an extremely old technology, but, more important, it is a story of the successful resolution of this thicket through a private-ordering mechanism. The Sewing Machine War was not brought to an end by new federal laws, lawsuits by public interest organizations, or new regulations at the Patent Office, but rather by the patent owners exercising their rights of use and disposition in their property. In so doing, they created the Sewing Machine Combination, which successfully coordinated their overlapping property claims until its last patent expired in 1877. Moreover, the Sewing Machine War is a salient case study because this mid-nineteenth-century patent thicket also included many related issues that are often intertwined today with concerns about modern patent thickets, such as a non-practicing entity (i.e., a “patent troll”) suing infringers after his demands for royalty payments were rejected, massive litigation between multiple parties and in multiple venues, costly prior art searches, and even a hard-fought priority battle over who was the first inventor of the lockstitch.
In this respect, the existence and tremendous commercial success of the Sewing Machine Combination of 1856 — a private-ordering solution to the Sewing Machine War — suggests that the current discourse on patent thickets is empirically impoverished. The Sewing Machine Combination reveals how patent owners have substantial incentives to overcome a patent thicket without prompting by federal officials or judges, and that they can in fact do so through preexisting private-ordering mechanisms, such as contract and corporate law. Heller, to his credit, recognizes that there are “market-driven solutions” to patent thickets, but his writing reveals a deep skepticism about such solutions vis-à-vis his more favorably considered “regulatory solutions.” The Sewing Machine Combination is an example of how patent owners can rescue themselves from commercial gridlock, which unleashed an explosion in productivity and innovation in a product that was central to the success of the Industrial Revolution in nineteenth-century America.
For the complete article, as well as the footnote citations for what is presented herein, please see The Rise and Fall of the First American Patent Thicket: The Sewing Machine War of the 1850s.
What is a service animal according to the Americans with Disabilities Act (ADA)?
Service animals are animals that have been individually trained to do work or perform tasks for the benefit of a person with a disability, also known as the animal’s handler. Only dogs and miniature horses are considered service animals under the ADA. However, some state and local laws may define service animals more broadly.
Are there limitations to where a service animal can go?
Generally, service animals must be allowed to accompany their handlers in all areas that members of the public may go. A handler is entitled to bring their service animal into these areas even if it won’t perform its service during the visit. Some exceptions follow.
Service animals may be excluded from certain areas of an otherwise public-serving facility. For example:
- Service animals are typically allowed into restaurants, but not into restaurant kitchens; and
- Service animals may be allowed into hospital waiting rooms, cafeterias, ERs, and exam rooms, but not into operating rooms.
What is the difference between a service animal and an emotional support animal (also known as a comfort or therapy animal)?
Service animals are specially trained to do work or perform tasks for their handlers. The service animal’s work must be directly related to its handler’s disability. If asked, the handler must be able to describe the specific tasks or work performed by the animal.
An emotional support animal provides aid to a person with a disability, but does not perform a specific task or duty, as it is not trained to do so. Therefore, an emotional support animal does not meet the definition of a service animal. Obedience training alone is not enough to make it a service animal.
Does an animal need to have any certification or documentation, or wear a vest or tag, to identify it as a service animal?
No. There is no ADA requirement for certification or identification showing that the animal is a service animal.
If the service animal doesn’t have special identification, how can people tell that it’s a legitimate service animal?
There are two questions one may ask of the handler:
- Is this animal required because of a disability?
- What task or service has this animal been trained to do?
One may not ask: What is your disability? This is confidential.
When can someone be asked to remove their service animal from the premises?
A service animal’s good behavior is necessary for it to be protected under the ADA.
A handler may be asked to remove their service animal if it causes an actual disruption to business, or if its behavior poses a direct threat to the health or safety of others. For example, if a service animal displays aggressive behavior towards other guests or customers it may be excluded. If it is not housebroken, bites or jumps on another patron, wanders away from its handler, or is clearly out of the owner’s control, it may be removed.
However, it is important not to make assumptions about how an animal will behave. Every situation, handler and service animal must be considered individually, based on actual events. So before taking action, it’s important to establish that the animal’s behavior is not part of its job. (For example, barking may be one of its tasks.)
If a public accommodation excludes a service animal, it should give the animal’s handler the option of continuing to partake of its goods and services without having the service animal on the premises.
Do service animals have to obey leash laws?
Yes, service animals must obey local leash laws, with exceptions if a service animal cannot perform its task while on a leash, or if the handler cannot use a leash, harness, or tether due to their disability. However, the handler must have the animal under control, if not by leash, then by voice control, signals, or other effective means.
Do service animals have to be registered, licensed, and vaccinated like pet dogs?
Yes, if the local law requires pet dogs to be licensed and registered, then service dogs must be as well. Local law requiring vaccinations for pets also applies to service animals.
Are service animal rules the same in housing?
The rules are slightly different in housing because they are usually guided by the Fair Housing Act (FHA) rather than the ADA. Under the FHA, housing managers and landlords must allow an individual to have a service animal in their home regardless of the facility’s pet policy. Additionally, the FHA extends this right more broadly, to include emotional support animals and other assistance animals.
Are service animal rules the same in air transportation?
The rules for air transit are different. Air transit is covered by the Air Carrier Access Act rather than the ADA. So the Department of Transportation (DOT) regulates service animals on U.S. airlines. Airlines must permit a service animal to accompany a passenger with a disability.
However, airline rules and DOT enforcement policies for emotional support animals continue to develop. Airlines may require current medical documentation, and may have restrictions. Before flying, always check with the airline regarding the latest rules.
Are service animal rules the same in the workplace?
The rules are a little different in the workplace. Under the ADA, employees with disabilities can request that their employers allow them to have service animals, emotional support animals, and other types of assistance animals in the workplace as a reasonable accommodation. This expands possibilities to different species of animals, whether specifically trained to perform a task related to the disability, or not.
The ADA leaves it up to the employer to determine if allowing the animal into the workplace is reasonable. However, state and local laws may have broader protections for employees with service animals.
What’s the proper etiquette for interacting with handlers and their service animals?
- Do not touch or engage with a service animal without permission from its handler
- Do not offer food to a service animal
- Do not ask questions about the handler’s disability
- Speak to the handler about any issues with their animal, for example if the animal is blocking a walkway and you need to pass
If a public place violates the ADA by refusing a service animal from entering, where does the handler file the complaint?
The handler can file the complaint with the federal enforcement agency, the US Department of Justice at www.ada.gov.
More resources and information can be found at:
US Department of Justice, ADA.gov: Frequently Asked Questions about Service Animals and the ADA
About Our Organization:
Northwest ADA Center provides technical assistance, information, and training regarding the Americans with Disabilities Act. Information is provided from the regional office in Washington State and state anchors in Alaska, Idaho, and Oregon. Specialists are available to answer specific questions about all titles of the ADA and accessibility of the built environment. Training staff offer presentations to businesses, organizations, schools, people with disabilities, and the general public.
Northwest ADA Center | www.nwadacenter.org
800-949-4232 | VP for ASL: 425-233-8913 | Fax: 425-774-9303
6912 220th St. SW, Suite 105, Mountlake Terrace, WA 98043
Alternative formats available upon request.
The Northwest ADA Center is a member of the ADA National Network. This fact sheet was developed under grant from the Administration for Community Living (ACL), NIDILRR grant # 90DP0095. However, the contents do not necessarily represent the policy of the ACL, and you should not assume endorsement by the federal government.
In Praise of Wasting Time
Series: Ted 2 Ser.
In this timely and essential audiobook that offers a fresh take on the qualms of modern day life, Professor Alan Lightman investigates the creativity born from allowing our minds to freely roam, without attempting to accomplish anything and without any assigned tasks.
We are all worried about wasting time. Especially in the West, we have created a frenzied lifestyle in which the twenty-four hours of each day are carved up, dissected, and reduced down to ten-minute units of efficiency. We take our iPhones and laptops with us on vacation. We check email at restaurants or our brokerage accounts while walking in the park. When the school day ends, our children are overloaded with extras. Our university curricula are so crammed that our young people don't have time to reflect on the material they are supposed to be learning. Yet in the face of our time-driven existence, a great deal of evidence suggests there is great value in wasting time, of letting the mind lie fallow for some periods, of letting minutes and even hours go by without scheduled activities or intended tasks.
Gustav Mahler routinely took three- or four-hour walks after lunch, stopping to jot down ideas in his notebook. Carl Jung did his most creative thinking and writing when he visited his country house. In his 1949 autobiography, Albert Einstein described how his thinking involved letting his mind roam over many possibilities and making connections between concepts that were previously unconnected.
With In Praise of Wasting Time, Professor Alan Lightman documents the rush and heave of the modern world, suggests the technological and cultural origins of our time-driven lives, and examines the many values of wasting time: for replenishing the mind, for creative thought, and for finding and solidifying the inner self. Break free from the idea that we must not waste a single second, and discover how sometimes the best thing to do is to do nothing at all.
"date": "2019-02-22T17:09:00",
"dump": "CC-MAIN-2019-09",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247518497.90/warc/CC-MAIN-20190222155556-20190222181556-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9583948254585266,
"score": 2.546875,
"token_count": 402,
"url": "https://www.piccadillybooks.co.nz/p/mind-body-spirit-in-praise-of-wasting-time--3"
} |
Introduction: How to Configure Ember for High Speed 3D Printing
One of the principal advantages of 3D printing is that it can be used to manufacture parts that cannot be made using any other technique giving the designer great freedom and permitting them to produce highly optimized parts. A typical example would be a strut optimized for minimum weight while maintaining adequate strength for the application.
Despite this advantage, one of the factors holding back the adoption of 3D printing in manufacturing, is speed. The output of today's 3D printers (across all technologies) is much slower than that of other manufacturing processes such as CNC milling, injection molding or forging. As a result, the cost to manufacture 3D printed parts is prohibitive and often outweighs any benefit from the optimized part (there are some exceptions, such as dental and hearing aid industries, where 3D printers have replaced manual labor and thus led to significant cost savings). If the speed of 3D printing increases, then it can be transformed into a viable manufacturing technique and open up a host of opportunities.
In this Instructable, we're going to look at how to increase the speed of a Digital Light Processing Stereolithography (DLP SLA) 3D printer, specifically the Autodesk Ember 3D Printer. The techniques that we describe here apply to the whole class of DLP SLA printers and can be replicated on many different systems.
The Ember printer is open which means it can be easily used to explore the limits of DLP SLA 3D printing. Through optimization of the printer settings, software, and material (without hardware modifications) it is possible to increase the standard print speed of Ember from 18mm/hour to 440mm/hour, an increase by a factor of 24, for a particular class of geometries.
So why is this important and why as a software company is Autodesk conducting this research?
- By only tweaking a few things in the machine settings, software and material you can print at high speed on a system that's readily available.
- We want to continue advancing the state of additive manufacturing and we expect the best advances in manufacturing processes to come from approaches that combine hardware, materials, and software.
This research is the first step towards realising high speed 3D printing in a production environment. There are unique design rules that apply for high-speed DLP SLA that are beyond the capabilities of the current generation of design software. By researching in this field, our goal is to drive the additive manufacturing industry forward by developing a connected ecosystem that can provide designers and manufacturers the software they need to unlock this class of technology.
We also want to demonstrate the power of an open approach to technology. If Ember were a closed system, then researchers would be unable to explore the limits of additive manufacturing. With Ember, we have created a powerful research platform that gives scientists, engineers and designers the opportunity to explore the future of additive manufacturing.
If this sounds interesting, read step 1 to learn about the science behind Ember. If you're already familiar with how DLP SLA works skip ahead to step 2 to learn how to configure Ember for high speed.
Step 1: Science of Ember
This is how the process works:
- The 3D model is sliced into cross-sectional layers. Each layer is saved as an image and transferred to the printer
- The projector exposes the resin and it solidifies in the shape of the image. The first layer sticks onto the build head and then subsequent layers stick onto the layer above.
- The build head lifts up and then the next layer is printed, this process repeats until the part is finished
You may have noticed in the GIF above, that after each exposure the resin tray rotates back and forth 60 degrees, lets look at this in more detail.
As you expose and create each layer the hardened resin acts as glue, binding the build head to the optical window in the resin tray. The resins that are used in Ember are acrylates and methacrylates photopolymers that cure through a free radical photopolymerization process. To prevent the printed layer binding to the optical window we coat the window with a thin layer of Polydimethylsiloxane (PDMS), which is an oxygen rich silicon rubber. Free radical polymerization is inhibited by the presence of oxygen thus the oxygen in the PDMS prevents a very thin layer of resin, around 5 microns thick, from curing at the surface of the PDMS. This means that the printed layer is not adhered to optical window.
With thin, uncured layers of resin, there would be enormous suction forces exerted on the printed layer if you were to lift up the build head directly. These suction forces are inversely proportional to the thickness of uncured resin, in other words, the thicker the uncured layer of resin the lower the separation force. The suction forces are also proportional to the surface area of the part, the larger the part, the greater the forces.
To take advantage of this in Ember we use a shear separation mechanism. The resin tray rotates 60 degrees until the build head is no longer above the optical window with the uncured resin layer acting as lubrication and minimizing the shear force. After the rotation, the build head is directly above a channel that is deeper than the optical window. At this point, there are over 1000 microns of resin between the printed layer and the bottom of the resin tray, this means the suction force is reduced by a factor of 200 and thus becomes negligible, and you can lift up the build head with a minimal suction force exerted on the printed part. The tray rotates back 60 degrees and then next layer is printed.
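That factor of 200 falls straight out of the scaling just described: suction force proportional to surface area and inversely proportional to the uncured-resin gap. A minimal sketch (the part cross-section area is an arbitrary example value; the constant of proportionality cancels, so only the ratio of gaps matters):

```python
# Relative suction force on a printed layer: F is proportional to the
# exposed area A and inversely proportional to the resin gap h, as
# described above.
def relative_suction(area_mm2, gap_um):
    return area_mm2 / gap_um

AREA = 100.0  # example part cross-section in mm^2 (arbitrary)

over_window = relative_suction(AREA, 5.0)      # ~5 um inhibition layer
over_channel = relative_suction(AREA, 1000.0)  # >1000 um deep channel

print(f"suction reduced by a factor of {over_window / over_channel:.0f}")
# -> suction reduced by a factor of 200
```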
We call this process Minimal Force Mechanics, and it allows Ember to reliably, produce parts with incredible detail, like the peacock feather above. BUT it takes around 2-3s per layer and thus represents about 50% of the print time and limits the print speed at 25-micron layers to 18 mm/hour.
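Those numbers are easy to sanity-check (a sketch; the 18 mm/hour speed, 25-micron layer height, and 2-3 s rotation time are the figures quoted above):

```python
LAYER_MM = 0.025            # 25 micron layers
SPEED_MM_PER_HR = 18.0      # Ember's standard print speed
SEPARATION_S = 2.5          # midpoint of the quoted 2-3 s rotation

layers_per_hour = SPEED_MM_PER_HR / LAYER_MM   # 720 layers/hour
cycle_s = 3600.0 / layers_per_hour             # 5.0 s per layer
share = SEPARATION_S / cycle_s                 # 0.5

print(f"{cycle_s:.1f} s per layer; separation is {share:.0%} of it")
# -> 5.0 s per layer; separation is 50% of it
```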
If you're interested in learning more about the Ember mechanics, you can download the mechanical CAD and it explore it, the Ember CAD is shared under a Creative Commons Attribution-ShareAlike license.
I'm now going to show you that by optimizing the software and materials you can eliminate this separation step and print at 440mm/hour.
Step 2: Printing at 440mm/hour
440mm/hour is 24 times greater than Ember's typically printer speed and we achieve this through optimization of three things:
- Material - we've designed a resin that cures quicker and at thicker layers
- Process - we've changed the print process by eliminating the separation step and printing at 250-micron layers
- Geometry - we've chosen a lattice structure that reduces the surface area per layer
First, we need to prepare a variation on our PR48 resin that will cure quicker and to a deeper depth. We call this resin PR48-High-Speed and the formulation is listed below.
- Oligomer: Allnex Ebecryl 8210 39.8238%, Sartomer SR 494 39.8238%
- Photoinitiator: Esstech TPO+ (2,4,6-Trimethylbenzoyl-diphenylphosphineoxide) 0.4005%
- Reactive diluent: Rahn Genomer 1122 19.9119%
- UV blocker: Mayzo OB+ (2,2’-(2,5-thiophenediyl)bis(5-tertbutylbenzoxazole)) 0.0400%
The UV blocker concentration in PR48-High-Speed has been reduced by a factor of 4 compared to PR48 to allow it to cure quicker and to a deeper depth.
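As a quick sanity check, the weight percentages in the formulation above should total 100% (a sketch; component names are abbreviated, values copied from the list):

```python
# PR48-High-Speed formulation, weight percent, from the list above.
formulation = {
    "Ebecryl 8210 (oligomer)": 39.8238,
    "SR 494 (oligomer)": 39.8238,
    "TPO+ (photoinitiator)": 0.4005,
    "Genomer 1122 (reactive diluent)": 19.9119,
    "OB+ (UV blocker)": 0.0400,
}

total = sum(formulation.values())
print(f"total: {total:.4f}%")  # -> total: 100.0000%
```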
If you want to learn more about how to tune your own resins from Ember, check out this Instructable.
Next we need to configure the printer settings on Ember, you can do this either through emberprinter.com or by SSH into the printer and editing the file /var/smith/config/settings.
On Mac using terminal you can SSH into the printer with the following commands (remember to change the IP address if not connecting over USB)
ssh 192.168.7.2 -l root
Navigate to the settings file and edit it
Edit the following settings:
- "ImageScaleFactor" : 1.0,
- "DetectJams" : 0,
Next measure the irradiance output of Ember with a fresh clean resin tray (I recommend using a G&R UV Light Meter Model 220 with a 420nm probe or an ILT 1400 with SLE005/U detector) and configure the "ProjectorLEDCurrent" so that the output is 20 mW/cm^2
If you have edited the print settings file over SSH then remember to enter the following command to make the changes take effect
echo refresh > /tmp/CommandPipe
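If you'd rather script the change than hand-edit the file, the settings file is plain JSON. Below is a sketch: the key names are the ones listed above, but the assumption that they sit at the top level of the JSON is mine, so check your firmware's actual file layout before running anything on the printer. The demo writes to a temporary stand-in file rather than the real /var/smith/config/settings.

```python
import json
import os
import tempfile

def update_settings(path, updates):
    """Read a JSON settings file, apply updates, and write it back."""
    with open(path) as f:
        settings = json.load(f)
    settings.update(updates)
    with open(path, "w") as f:
        json.dump(settings, f, indent=4)
    return settings

# Demo against a stand-in file; on Ember the path would be
# /var/smith/config/settings.
demo_path = os.path.join(tempfile.mkdtemp(), "settings")
with open(demo_path, "w") as f:
    json.dump({"ImageScaleFactor": 1.25, "DetectJams": 1}, f)

updated = update_settings(demo_path, {"ImageScaleFactor": 1.0,
                                      "DetectJams": 0})
print(updated)  # -> {'ImageScaleFactor': 1.0, 'DetectJams': 0}
```

Either way, remember that the change only takes effect after writing `refresh` to /tmp/CommandPipe as shown above.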
Print Job Setup:
Now that the material and printer are setup its time to prepare the print job. Open Print Studio and import the model 12-15-14_full_rigid_lattice.stl that is attached. For help on how to use Print Studio refer to this user guide.
Now create a new custom material, you can start this by duplicating the Autodesk CYMK 25 micron profile. Configure the profile as per the screenshots above. The main settings changes are
Burn in Layers
- Number of layers: 10
- Wait (before exposure): 0.5 s
- Exposure time: 3 s
- Separation Slide Velocity: 20 RPM
- Approach Slide Velocity: 20 RPM
- Wait (before exposure): 0 s
- Exposure time: 1.2 s
- Z-axis overlift: 0.25 mm
- Separation Z-axis Velocity: 5 mm/s
- Angle of rotation: 0 degrees
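It's worth seeing what 440mm/hour implies per layer at these settings (a sketch; the 1.2 s exposure comes from the profile above, and the remainder is Z motion plus system overhead):

```python
LAYER_MM = 0.25            # 250 micron layers
SPEED_MM_PER_HR = 440.0    # the high-speed print rate quoted above
EXPOSURE_S = 1.2           # per-layer exposure from the profile above

layers_per_hour = SPEED_MM_PER_HR / LAYER_MM   # 1760 layers/hour
cycle_s = 3600.0 / layers_per_hour             # ~2.05 s per layer
overhead_s = cycle_s - EXPOSURE_S              # ~0.85 s motion/overhead

print(f"{cycle_s:.2f} s per layer: {EXPOSURE_S} s exposure "
      f"+ {overhead_s:.2f} s overlift and overhead")
# -> 2.05 s per layer: 1.2 s exposure + 0.85 s overlift and overhead
```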
In the object browser, turn off the automatic support generation then slice and send the job to your printer.
I've also attached the job file to the instructable in case you can't be bothered with the above!
Follow the Pre-Print Checklist then sit back and watch your Ember print at 440mm/hour.
Step 3: Explanations, Limitations and Future Work
So that was pretty cool! Lets look at why the optimizations worked, the limitations of the system, what that means in practice and how it could be improved on in the future.
Direct pull (printing without separation) worked in this instance principally because we used software to optimize the geometry and material.
You'll notice from the graph above that for the lattice structure, the global surface area (the sum of all the white pixels in a given slice) never exceeds 15% of the slice. The global surface area must remain below 15% so that the suction forces, which remember are proportional to surface area, do not become greater than the strength of the cured resin, the tear strength of the PDMS window and the normal force that the linear drive and motor can deliver. If the suction forces exceed any of these then the failure modes are as follows:
- Suction force > strength of cured resin: the printed object is pulled apart
- Suction force > tear strength of the PDMS: the PDMS is torn apart
- Suction force > normal force delivered by the linear drive and motor: the z-axis jams
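That 15% global surface-area budget is easy to check at slicing time. A sketch in plain Python, assuming each slice is available as a flat list of 0/1 exposed-pixel values (a real slicer would run the same check on the slice bitmaps):

```python
def fill_fraction(slice_pixels):
    """Fraction of pixels exposed in one slice (0/1 values)."""
    return sum(slice_pixels) / len(slice_pixels)

def worst_layer(slices, budget=0.15):
    """Return the worst layer's fill fraction and whether it fits."""
    worst = max(fill_fraction(s) for s in slices)
    return worst, worst <= budget

# Toy 10-pixel slices with 10%, 20%, and 0% of pixels exposed.
slices = [
    [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],   # 10%
    [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],   # 20%, over budget
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],   # 0%
]
worst, ok = worst_layer(slices)
print(f"worst layer: {worst:.0%} exposed; within budget: {ok}")
# -> worst layer: 20% exposed; within budget: False
```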
You can see from the graph and the video at the top of this step that the geometry changes rapidly from layer to layer showing that fluid can easily flow into the areas that need to cure. If we were to print a vertical column, then after a few layers all the fluid between the part and the PDMS would be used up and it would be difficult to get more fluid into the curing area.
We also optimized the material to make it cure quicker and to a deeper depth by reducing the amount of photo-inhibitor, which allowed us to print deeper layers. Technically, you could call this out, because printing at 250 micron layers is 10 times faster than 25 micron layers. But with the optimization of the geometry and process, we were able to make Ember print 24 times faster.
There are four principal limitations to the geometry that you can print
- Global surface area
- Local surface area: The surface area of individual parts of the slice. For example, a strut in the lattice.
- Rate of change of position of local surface area: How the position of local surface area changes from layer to layer
- Strength of the cured material
Global Surface Area:
The suction forces generated by the global surface area of the part must not exceed the normal separation force of the system.
Local Surface Area:
The maximum length from the center of each local surface area to the boundary should be less than the maximum distance that a fluid particle could move from the boundary to the center at a given print speed and resin viscosity. Essentially, if the local surface area of a strut is too big, then resin will not be able to reach the center.
Rate of Change of local surface area:
The rate of change of position of local surface area should be such that no pixels are exposed in X consecutive layers.
Strength of the cured material:
At a certain speed, the normal forces will become greater than the strength of the cured material causing the printed part to pull itself apart.
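The rate-of-change rule can be checked mechanically too: track, per pixel, the longest run of consecutive layers in which that pixel is exposed. A sketch; the allowable run length X depends on resin flow and print speed, so the function just reports the worst run and leaves the threshold to the caller:

```python
def max_consecutive_exposures(slices):
    """slices: list of layers, each a flat list of 0/1 pixel values.
    Returns the longest run of consecutive layers in which any single
    pixel stays exposed."""
    runs = [0] * len(slices[0])  # current run length per pixel
    worst = 0
    for layer in slices:
        for i, exposed in enumerate(layer):
            runs[i] = runs[i] + 1 if exposed else 0
            worst = max(worst, runs[i])
    return worst

# Toy 4-pixel stack: pixel 0 is exposed in all 3 layers, the others
# alternate from layer to layer.
slices = [
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
]
print(max_consecutive_exposures(slices))  # -> 3
```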
So how could you make a faster system?
- Make it stiffer: the z-axis, the resin tray, the optical window and the resin
- Make the inhibition layer thicker
- Make the resin cure quicker and lower viscosity
Make it stiffer:
The stiffer the system, the quicker you can pull and the faster you will print. Every component of the system will need to be stiff enough to withstand the suction forces; this includes the cured resin, the optical window, and the Z-axis. But be careful, if you make the resin too stiff and strong, then it will become difficult to remove from the build head and remove any supports.
Make the inhibition layer thicker:
At 5 microns the inhibition layer just isn't that thick. If you could get the inhibition layer up to 500-1000microns thick, then the suction forces would be negligible, the holy grail, but more challenging than it seems.
Make the resin cure quicker and lower viscosity:
A lower viscosity resin that cures in milliseconds would increase print speed but would not overcome the limitations outlined above.
What do these limitations mean in practice?
For a start, you can't print standard DLP SLA parts like dental restorations, hearing aids or rings. Even thin walled parts like ear shells and dental copings have too much surface area per layer to work (at least on Ember). We have found that all the parts printed using this technique need to be thin strutted lattices.
The Spark team have developed a tool to allow you to create lattice structures from solid models. For example, if we take the ubiquitous Stanford Bunny we can create a lattice representation and then use Print Studio to slice it for Ember, but it's hard to control the end product using this technique. For example, if you download the bunny models you'll see that some parts of the lattice in the ears are not connected to the main body. To successfully design for high-speed DLP, you need design software that understands the process, the hardware and materials.
At Autodesk, we're researching, building and testing solutions that will change the future of making. In the future, you may not sit down at a workstation and sketch, extrude and form a part. You could be using a generative design tool like Dreamcatcher, where you input a set of high-level goals including how you want to manufacture the product and the computer iterates through thousands of design options until it finds one that meets all your goals. The output would be a functional part that is optimized for high-speed DLP.
The key to unlocking high-speed DLP as a manufacturing process isn't just new hardware or materials but, in fact, rests on developing new design software that can take full advantage of the capabilities on offer. That’s why we're building a connected ecosystem of hardware, software and materials so we can deliver production ready additive manufacturing workflows.
Any home or structure must have a solid foundation, and every puzzle that is a building begins with this vital piece. It is the foundation that supports the home, protecting it from sinking, rising, and falling over. In modern construction, every bearing wall and every roof truss must be connected to the foundation in order to complete that puzzle and ensure the integrity of the entire structure.
There are several types of foundations. They all use concrete in some form or another. Which foundation you would use for your home depends on a variety of factors, from the frost line to the overall design of the house. On the other hand, foundations for garden sheds or decks can be relatively simple and are common DIY projects.
The main types of concrete foundations are crawl space, slab-on-grade, and full-height basements. Other types include pier foundations and insulated concrete forms (ICFs), the latter gaining in popularity quickly due to strength, eco-friendliness, and energy efficiency. This section of the CalFinder Library will explain each type of foundation individually in order to provide you with at least a basic knowledge of concrete foundations as you speak with, hire, and observe your local contractor.
A new study by Florida State University scientists suggests global temperatures are rising -- in some places. But others are actually cooling.
"Global warming was not as understood as we thought," said Zhaohua Wu, an assistant professor of meteorology at FSU.
The researchers used their own method (called the spatial–temporally multidimensional ensemble empirical mode decomposition method) to analyze surface global temperature trends (except for Antarctica) starting in 1900. The method, they say, better takes into account the differences in global and regional climate trends.
They say the world is getting warmer overall, but it hasn't happened at the same rate everywhere.
"The global warming is not uniform," said Eric Chassignet, director of FSU's Center for Ocean-Atmospheric Prediction Studies. "You have areas that have cooled and areas that have warmed."
The findings included:
-- Warming accelerated until about 1980 and has changed little since.
-- In recent decades, the fastest warming was found in the northern mid-latitudes (for a point of reference, the mid-latitudes span from about the Gulf Coast of Florida to well into Canada).
-- Warming during the study period began in the subtropical and subpolar regions of the Northern Hemisphere. Those two bands of warming grew in size from 1950-85, eventually covering the entire Northern Hemisphere.
-- Another warming band -- in the subtropical region of the Southern Hemisphere -- has been slower to expand. Researchers speculated this was due to less land coverage in the Southern Hemisphere's mid-latitudes.
-- By 1980, except for slight cooling in the northern tip of Greenland and in the vicinity of the Andes, almost all the global land had been warming. Those warming rates have changed little since.
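The core idea, that one global average can hide very different regional trends, can be sketched with a toy calculation. The regions, rates, and noise level below are invented purely for illustration and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2014)

# Synthetic temperature-anomaly trends (degrees C per year), made up
# to illustrate non-uniform warming: two warming bands, one cooling spot.
true_rates = {
    "northern mid-latitudes": 0.012,
    "southern subtropics": 0.006,
    "northern Greenland": -0.002,
}

slopes = {}
for name, rate in true_rates.items():
    # Linear trend plus weather-like noise around it.
    series = rate * (years - 1900) + rng.normal(0.0, 0.1, size=years.size)
    slopes[name] = np.polyfit(years, series, 1)[0]  # fitted degrees C / year
    print(f"{name}: {slopes[name] * 100:+.2f} C per century")
```

Averaging the three series would show "global" warming, while the per-region fits reveal that one area actually cooled, which is the study's central point.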
The team of climate researchers includes Wu; Chassignet; Fei Ji, a visiting doctoral student at COAPS; and Jianping Huang, dean of the College of Atmospheric Sciences at Lanzhou University in China.
The researchers said understanding climate change was necessary "to better evaluate its potential societal and economic impact."
The team's work is featured in the May 4 edition of the journal Nature Climate Change. | <urn:uuid:93e93304-998c-4648-aaf1-19144e49d782> | {
"date": "2014-10-23T00:01:46",
"dump": "CC-MAIN-2014-42",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507448169.17/warc/CC-MAIN-20141017005728-00312-ip-10-16-133-185.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.948737621307373,
"score": 3.3125,
"token_count": 448,
"url": "http://blog.al.com/wire/2014/05/new_study_looks_at_where_its_w.html"
} |
Description: Ever taken a picture and wished that either more of it had been in focus, or the exact reverse, that less had been in focus? Of course you have; I certainly have. But which settings on your camera give which results, and if you change one of those settings, what is the depth of field then? You can calculate it all yourself, but it is not easy. This app takes all of those problems and makes it easy to find the answers. Just type in the distance to the subject, the focal length being used and the f-stop, and the app calculates the near and far distances of acceptable focus and therefore the depth of field. This is all shown on an easy-to-understand diagram.
An example: you are trying to take a photo of a bee on a flower, and everything but the bee is to be out of focus. Set the distance to the subject to 200 mm (20 cm, approx. 8 inches), set the f-stop to f/5.6 and set the focal length to 50 mm. The app calculates that the depth of field is 3.41 mm (0.34 cm, a small fraction of an inch). The bee will be partially in focus, but bees are more than 3.4 mm across, so that depth of field is probably not what you were looking for. Set the camera to f/22 and now the depth of field is 13.39 mm (1.34 cm, or just over half an inch) - this is much closer to what you wanted. Now maybe the whole of the bee is in focus while all of its surroundings are still out of focus.
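The numbers behind an example like this come from standard thin-lens depth-of-field formulas. A minimal sketch follows; the circle-of-confusion value is an assumption (the app looks it up per camera model, which is why its 3.41 mm figure can differ slightly from the generic 0.030 mm default used here):

```python
def depth_of_field(s, f, N, c=0.030):
    """Near/far limits of acceptable focus; all lengths in mm.

    s: subject distance, f: focal length, N: f-number,
    c: circle of confusion (camera-dependent; 0.030 mm is a common
    full-frame default and an assumption in this sketch).
    """
    H = f * f / (N * c) + f                # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)   # near limit of focus
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near, far

near1, far1 = depth_of_field(200, 50, 5.6)
near2, far2 = depth_of_field(200, 50, 22)
print(far1 - near1, far2 - near2)  # stopping down widens the DoF
```

With these inputs the f/5.6 depth of field comes out around 4 mm and the f/22 one around 16 mm; a smaller circle of confusion brings the results closer to the app's figures.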
This App will help you take much better photographs by being able to understand the relationship between the settings on your camera and the depth of field that they produce. The diagram shown is better than a thousand words of text (as the saying goes). You can now easily see what effect a change in the settings will make.
The Depth of Field Calculator allows you to quickly calculate the near and far distances of acceptable focus. Just select the distance you are focused at, the f-stop and the lens's focal length, and the diagram shows you at what distance acceptable focus starts and at what distance it ends. You can choose to express the focus distance in metres, millimetres, or feet and inches. Unlike other apps, this app does not use picker wheels for the f-stop and focal length values, so you can enter whatever values you want.
The only other thing you have to do is select the camera you are using. The Camera button takes you to a page with the makers of cameras listed to the left. Select the maker of your camera and then scroll down the list to the model. Select Save and you are now ready to calculate the correct values. The calculations work using a value called the Circle of Confusion. This changes depending on the camera make and model.
If your camera is not listed, then send an email and it will be added. You can generally find your Circle of Confusion value on the web. Once you know this value, just set it manually in the field at the top and Save.
You can manually specify the Circle of Confusion to use. Just select the field at the top and type in the value required. Select OK and then select Save. The user-defined Circle of Confusion will be used. | <urn:uuid:e162e58d-5a66-45b1-8a77-b9b124e7298b> | {
"date": "2014-03-11T05:11:59",
"dump": "CC-MAIN-2014-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011129529/warc/CC-MAIN-20140305091849-00006-ip-10-183-142-35.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9492185115814209,
"score": 2.609375,
"token_count": 676,
"url": "http://www.androidpit.com.br/pt/android/market/apps/app/uk.co.esscomp.depthoffieldcalculator/Depth-of-Field-Calc"
} |
With the current trend toward green within organizations, companies are promoting environmentally friendly marketing strategies. People in general are becoming more ecologically aware. And government is considering or enacting several eco-friendly policies.
IT is no exception to the movement, and it's not surprising. According to a 2007 study by Jonathan Koomey, a staff scientist at the Lawrence Berkeley National Laboratory and a consulting professor at Stanford University, the powering and cooling of servers and auxiliary infrastructure accounted for 1.2 percent of energy consumption in the U.S. during 2005 and cost U.S. businesses $2.7 billion.
Koomey's study also highlights the possibility that servers will demand even more electricity in coming years. "If power per server remains constant, those trends would imply an increase in electricity used by servers worldwide of about 40 percent by 2010. If in addition the average power use per unit goes up at the same rate for each class, as our analysis indicates that it did from 2000 to 2005, total electricity used by servers by 2010 would be 76 percent higher than it was in 2005," notes Koomey.
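Koomey's two growth figures can be decomposed with a quick back-of-the-envelope calculation. The 2005 baseline number below is invented purely to make the arithmetic concrete:

```python
baseline_twh = 45.0       # hypothetical 2005 server electricity use (illustrative only)
count_growth = 1.40       # ~40% more electricity by 2010 from server count alone
with_power_growth = 1.76  # ~76% more when per-unit power use also keeps rising

print(baseline_twh * count_growth)       # count effect only
print(baseline_twh * with_power_growth)  # combined effect
# Implied growth in power use per unit over the period:
print(with_power_growth / count_growth)  # ~1.26x
```

Separating the two factors this way makes clear that roughly a quarter of the projected increase comes from each server drawing more power, not just from there being more servers.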
Such an exponential increase in demand and consumption would have serious consequences. In April 2007, Gartner estimated that the Information and Communication Technologies (ICT) sector was responsible for approximately 2 percent of global carbon dioxide emissions and, should Koomey's estimates prove correct, that percentage will increase substantially.
Furthermore, to meet the increased demand, utility companies need to construct additional power plants - something which would itself require additional power and could result in the loss of nondeveloped green sites. The same holds true in relation to data centers; as they hit their limit of existing capacity, new data centers will need to be constructed, potentially resulting in the loss of yet more green spaces.
Regulations and Incentives
Environmental concerns are not the only thing pushing businesses toward greening their IT operations. Governments are implementing an increasing number of policies to promote and enforce sustainable IT. The European Union (EU) is already working to establish a code of conduct for data centers, and the U.S. Environmental Protection Agency (EPA) is considering similar measures.
Additionally, simple economics play a major role in moving green IT into the enterprise mainstream because greening can lead to substantial savings. Obviously, reducing the amount of energy used will also reduce the amount of the electricity bill. Beyond that, businesses can reap savings on hardware and IT administration and management costs. Further increasing the appeal of green are the incentives being offered by utility companies like Pacific Gas and Electric, which reward customers that implement energy-efficient technologies such as virtualization. It may seem somewhat odd for a company to reward customers for using less of their product, but Pacific Gas and Electric does so in order to avoid the expense of new power plants and to ensure that environmental quality is maintained in the communities they serve.
Also, many consumers now expect businesses to demonstrate their green credentials and will factor this into purchasing decisions. Similarly, in order to demonstrate their own commitment to environmentally friendly practices, businesses expect their suppliers to become ecologically responsible. HP has already started this ball rolling with the latest disclosure of the emissions of its largest suppliers. Gartner recently predicted that "by 2011, suppliers to large global enterprises will need to prove their green credentials via an audited process to retain preferred supplier status."
Given these trends, it is clear that businesses may soon find that greening is no longer optional, but a cost of doing business. Green really has become the new mean.
At least a couple of obstacles to greening remain. The first is simply a lack of motivation. As Mark Bramfitt, the principal program lead on Pacific Gas and Electric's Customer Energy Efficiency Team, points out, "The cost savings driver is not as strong as it could be because there is still the disconnect between the people making technology decisions and the responsibility for paying the utility bills." The solution to this problem is not complex; it simply needs senior management to be brought on board and take the lead.
A second problem is that businesses often view greening as an entirely technological matter. "We think of the problem of reducing electricity used by information technology equipment as a technical problem, but it's as much a problem of people and institutions ... To attack this problem at its root, we need to modify institutional structures and individual incentives so that the most environmentally sound outcome is also the most profitable one," notes Koomey.
And he is absolutely correct. The rewards from going green are not always as great as they could be. In part, that is because incentive schemes, such as those offered by Pacific Gas and Electric, have only recently come to market and because energy-efficient technologies have been expensive to acquire and to implement. It is also true partly because many businesses simply do not make the most out of green technologies and so do not obtain the maximum ROI.
Virtualization used to be a somewhat esoteric technology deployed mainly in large-scale data center server consolidation projects. However, with VMware and other virtualization vendors releasing products specifically geared for the small and midsized business market (and priced accordingly), virtualization is now within the reach of all.
Virtualization is a technology that enables multiple heterogeneous operating systems to run simultaneously on the same physical hardware. This is advantageous because running a single operating system on a modern high-powered server will almost certainly result in that server being severely underutilized. In fact, in many businesses, servers are operating at anywhere between 5 and 25 percent of their total load capacity, thus wasting a substantial amount of processing power. Virtualization enables workloads from those underutilized servers to be consolidated to a fewer number of servers.
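A first-order estimate of the consolidation opportunity is simple arithmetic: total up the load and divide by a target utilization per host. The sketch below assumes identical hardware and freely packable workloads, which real capacity planning does not:

```python
import math

def hosts_after_consolidation(n_servers, avg_util, target_util=0.70):
    """Rough count of physical hosts needed after virtualization.

    Assumes identical hardware and perfectly packable workloads;
    target_util is the planned utilization ceiling per host.
    """
    total_load = n_servers * avg_util
    return max(1, math.ceil(total_load / target_util))

before = 100
after = hosts_after_consolidation(before, avg_util=0.10)
print(before, "->", after, "hosts")  # e.g. 100 -> 15
```

At the 5-25 percent utilization levels cited above, even this crude model shows why consolidation ratios of 5:1 or better are routinely achievable.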
There are a number of obvious benefits to consolidating servers in this manner: reducing the installed server base decreases the amount of electricity needed for powering and cooling, lessens rack space and floor space requirements and cuts future spending on equipment.
These benefits represent the raison d'etre of the majority of virtualization projects, but that is not the whole story. A common misconception about virtualization is that it's simply a method for businesses to reduce their installed server base. But, as VMware puts it, virtualization is a strategic discussion, not a point solution like server consolidation. Businesses that view virtualization as a point solution are likely to sit back once their servers have been consolidated and not explore other areas where they could benefit.
Virtualization enables not only server consolidation, but also the consolidation of people and processes, and this is critically important. According to a 2006 study by IDC, management and administrative expenditure in data centers is growing three times faster than expenditure on computing equipment. So, while it is certainly important for businesses to find ways to reduce their hardware spending, it is even more important that those businesses seek out ways to reduce their management and administrative overhead. Virtualization enables them to do just that.
Virtualization opens the door to a radical overhaul of both people and processes. In the virtual world, servers can be provisioned and brought online in a matter of minutes - a task that takes several hours in the physical world. Problems with applications that would have been tied to a specific physical server can be resolved much more speedily in a virtual environment.
Management duties and responsibilities can be streamlined and rationalized. Problems that would have required on-site action can be handled remotely. Backup and recovery plans will need to be completely overhauled, but the end result will be processes that are much easier to both implement and maintain. Physical servers that were managed by people become virtual servers that can be managed automatically.
Accordingly, virtualization provides a business with an opportunity to do far more than simply cut its electricity bill; it also provides an opportunity to completely rethink the existing IT strategy and reorganize practices and management responsibilities in a manner that is more cost-effective and introduces real agility into operations.
Businesses can further improve their virtualization effort's ROI by implementing solutions that can integrate with their new virtual infrastructure: virtual appliances. A virtual appliance is a physical appliance encapsulated entirely in software. Unlike physical appliances, virtual appliances can be easily evaluated and speedily deployed, moved from one physical system to another and backed up and restored. This both reduces the workload on IT and introduces additional agility and mobility into operations.
To derive the maximum benefit from virtualization, businesses need to plan carefully - not only to ensure that they maximize their savings in every possible area, but also to avoid the perils that can be associated with an incautious, headfirst dive. Somewhat ironically, some of the factors that make virtualization so appealing can also lead to problems. For example, because virtual servers can be brought online easily and speedily, virtual server sprawl is a real risk. While those virtual servers may be less costly than their physical counterparts, they nonetheless still require computer resources and still need to be managed and tracked. To avoid this, businesses must put in place a procurement process similar to that which is used for physical servers.
A Clear Choice
There is no doubt that virtualization - and the greening trend in general - is great for the environment. But it can be great for businesses, too. With careful planning, businesses can use their green IT effort to enhance their public image, increase their agility and cut their operation costs.
The most environmentally sound outcome can be the most profitable one, too. And that's good news for everybody.
| <urn:uuid:0f6b4192-75a6-465a-ad4f-d7dd6a8a5297> | {
"date": "2018-03-17T16:43:12",
"dump": "CC-MAIN-2018-13",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645248.22/warc/CC-MAIN-20180317155348-20180317175348-00056.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9584582448005676,
"score": 2.75,
"token_count": 1998,
"url": "https://www.information-management.com/news/it-virtualization-helps-to-go-green"
} |
The “Aquarena” App has been developed for use in outdoor water education at Aquarena Springs, at the headwaters of the San Marcos River, San Marcos, Texas. It was primarily designed for use on the iPad, but will also work on the iPhone. The new app is geared toward K-12th grade students and will include a species identification guide to the common fish, birds, plants and animals that inhabit the springs, wetlands, and adjacent watershed. The app will be available for download by visitors to Aquarena Springs, operated by the River Systems Institute, Texas State University. Organized groups may be provided iPads for educational purposes.
Researchers at the River Systems Institute are learning new ways to connect students with water. This new app and other technology enhancements are being designed specifically to allow for research in outdoor education in the unique environment of the San Marcos springs, which includes an entire watershed immediately adjacent to the springs, a wetlands education area, a lake, and a headwaters river.
The ‘Scan’ module enables the ‘Aquarena’ app to read a ‘QR Code’ (Quick Response Code). These codes can be set up to show videos, display images or play sound clips.
The ‘documents’ module of the app will list the available documents found on the server (online only), or list the available documents within the application itself (offline only).
The ‘videos’ module of the app will list the available videos found on the server (online only), or list the available videos within the application itself (offline only).
The ‘photos’ module of the app will list the available photos found on the server (online only), or list the available photos within the application itself (offline only).
The ‘links’ module of the app will list the available links found on the server (online only).
The ‘info’ module shows information regarding ‘Aquarena’ and the application, as well as the credits for the development of the application. You can also contact (email) Aquarena from this module.
GPS Photo Scavenger Hunt
The ‘Scavenger Hunt’ module offers a fun and unique method of allowing the user of the app to locate and take pictures of various points of interest along the Aquarena trail. A photo journal is kept so that the student can review their tour when they return back to the River Center.
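Under the hood, a GPS scavenger hunt like this typically reduces to a proximity check between the device's position and each point of interest. A minimal sketch, in which the coordinates (roughly the San Marcos headwaters area) and the 25 m radius are invented for illustration:

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (haversine formula)."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical point of interest near the headwaters (illustrative coordinates):
POI = (29.8938, -97.9303)

def at_point_of_interest(lat, lon, radius_m=25.0):
    """True when the device is within radius_m of the point of interest."""
    return distance_m(lat, lon, POI[0], POI[1]) <= radius_m

print(at_point_of_interest(29.8938, -97.9303))  # True
```

When the check passes, the app can unlock the camera for that stop and append the photo to the student's journal.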
By making a file change on the server, access to certain app modules can be temporarily ‘locked/unlocked’ at will, enforcing remote control of app functionality! | <urn:uuid:f992c2a2-cadc-4dc8-8862-69166129a05f> | {
"date": "2018-05-24T05:59:22",
"dump": "CC-MAIN-2018-22",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865928.45/warc/CC-MAIN-20180524053902-20180524073902-00336.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8948251605033875,
"score": 3,
"token_count": 551,
"url": "http://aquarena.appstor.io/it"
} |
By Kristen Varganova
As we’ve traveled from one country to the next, the number of visits to mass murder remembrance sites increased dramatically. Aside from the painful history, most memorials that we visited on our trips were set amid breathtaking scenery. From Lety to Auschwitz-Birkenau and the Sinti-Roma Memorial, I couldn’t help but notice the overarching contrast between life and death.
As we left Prague and began our drive to Lety, I was curious as for why our bus was headed towards the woods. At the site, a small path leads the visitors towards, what looks like, a shattered sphere; the broken figure serves as a memorial for the 1,309 prisoners interned in the camp during WWII. Our tour guide explained that this labor camp was established by the Czechoslovak government, two days before the German occupation. The camp was set up for “people avoiding work and living off of crime.” Although not stated explicitly, such description was aimed towards the Roma population. The site of the memorial is centered among green grass and tall trees; it was clear that the former labor camp was now occupied by a beautiful forest. The voices of two families settling down the hill for an afternoon picnic could be heard from where our group was standing. I was astonished by how much life was present at a place where hundreds were once killed.
A couple of weeks after the visit to Lety, our group took a trip to Auschwitz-Birkenau. During the morning part of the tour, Auschwitz I was covered in fog. The weather alone made this experience much more dreary than originally anticipated. Our tour guide explained that Auschwitz I was initially built to hold Polish political prisoners in 1940. The first mass murder of prisoners took place in September 1941; the first prisoners to be executed in the gas chambers were 600 Soviet prisoners of war. Birkenau, or Auschwitz II, an extension of Auschwitz I, didn’t exist until early 1942. The extension death camp was built by 10,000 Soviet POWs starting in 1941; by early 1942, only 945 of the original 10,000 POWs were still alive. Those that survived the building process of Birkenau were ultimately killed at the camp a few weeks after its opening. Birkenau went on to serve as one of the primary sites of the Nazis’ Final Solution during the Holocaust.
The crematoria at Birkenau were located towards the back of the camp where now a tall forest grows. Ponds where the Nazi’s once disposed of human ashes, are spread out around the destroyed gas chambers. As I stood by one of the memorials, a monarch butterfly landed on my shoulder — a sign of life. After spending a few moments at the ponds, our tour guide led us to where Canada I used to stand. At Canada I, scraps of silverware and clothes laid aimlessly under the glass. As I looked closer I saw weeds and green leaves starting to spurt out from under the remains. Yet again, I couldn’t help but notice how beautiful and full of life this dreadful place has become.
Lastly, one of the final mass murder remembrance sites we visited was the Sinti-Roma memorial in Berlin, Germany. The monument is dedicated to the 500,000 Romani people murdered by Nazi Germany during the Holocaust. The memorial itself is a shallow pool surrounded by nature; a Roma poem can be seen inscribed at the bottom of the pool. The site was designed by Dani Karavan and opened by the German chancellor, Angela Merkel, in October of 2012. Information boards hang on the walls of the memorial in chronological order of the genocide. Light music can be heard playing from speakers hidden among the trees, and the site feels serene. The pool overlooks the German parliament building, highlighting the divide between the past and the future. Once again, I was left surprised by the contrast between life and death present in such a painful place of remembrance.
In summary, although all the remembrance sites that we visited throughout our program deal with painful history, they also offer signs of hope. Whether the memorials were built amidst nature on purpose, like that of the Sinti-Roma, or whether life took a course of its own and grew in the place where mass murder sites used to stand, the overarching contrast between life and death was prominent everywhere we went. | <urn:uuid:453cca0c-c17c-4308-b23d-ba7cf0ccfa06> | {
"date": "2019-04-22T14:02:47",
"dump": "CC-MAIN-2019-18",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578555187.49/warc/CC-MAIN-20190422135420-20190422161420-00056.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9777535200119019,
"score": 2.609375,
"token_count": 897,
"url": "http://urbanlabsce.eu/the-contrast-between-life-and-death/"
} |
The Function of the Autoclave Jacket
Did you know that most large autoclaves, like horizontal autoclaves, wear a jacket? Why? One of our upcoming posts in the series focuses on the autoclave jacket, which will reveal why we dress up the autoclave. For now it is important to understand the function of the jacket because it helps us understand how steam from the steam generator reaches the chamber. Steam supplied from the generator heats the autoclave jacket and circulates inside the jacket readily available for instant use. In a CSSD (Central Sterile Supply Department) or SPD (Sterile Processing Department) autoclave that sterilizes medical instruments, the autoclave jacket surrounds the chamber thereby heating the chamber, and in some configurations the jacket functions as the autoclave chamber steam source. A valve located between the jacket and the chamber opens letting steam from the jacket enter the chamber, which is then used for sterilization.
There are, however, exceptions. Steam supply from the jacket is applicable to hospital autoclaves, such as those found in the CSSD (also called the SPD). In some autoclaves, different steam types are used for different purposes. "Dirty" steam is used to heat up the jacket and comes from either the steam generator or the building's steam. "Dirty" refers to the quality of the steam produced: it is not literally dirty; it just means the steam is not of the highest quality, which is fine in many cases, such as instrument sterilization in hospitals or the medical industry. Clean or "pure" steam that comes into contact with materials requiring sterilization is generated without coming into contact with any heating elements that can reduce the quality of the steam.
What is Clean Steam?
Conventional steam, also known as dirty steam, contains foreign particles from chemicals, from the metal and from the feed water. Normally this does not cause any problems and can be used in many applications such as in hospitals when sterilizing medical instruments. But even though the steam temperatures and humidity are according to requirements for killing most germs, conventional or “dirty steam” cannot be considered 100% clean.
Clean steam is commonly used in high-containment BSL3 (biosafety level 3) laboratories that require a higher quality of steam. High-quality steam is needed for tissue culture work, sterile water preparation and other special processes. Building steam is not sufficient to produce clean steam. Clean steam is produced by a steam generator that uses pipes and fittings constructed from stainless steel and brass, along with pneumatically operated valves, which reduce maintenance and downtime. Stainless steel piping, fittings, and components are made of a higher grade of stainless steel called 316L.
The highest grade of pure/clean steam is generated by a steam-to-steam generator. The steam is produced by using either the building steam or steam from a steam generator. The steam produced does not come into contact with heating elements. Typical uses for this type of clean steam include pharmaceutical applications and food processing.
Which Steam Generator Suits Your Autoclave?
Let’s zoom in and examine the different steam supply solutions available. We will understand why and when each solution is used. Ideally, it is preferred to supply steam to the autoclave from a dedicated built-in or stand-alone steam generator, which is designed and constructed for the sole purpose of steam supply for autoclave sterilization. A dedicated steam generator , whether built-in or stand alone is designed with an optimal operating pressure, quality piping and components to ensure that the process of sterilization by steam is not compromised.
Is your Autoclave Steam Generator Introverted or Extroverted?
Autoclaves don't have personalities, but the autoclave steam generator can be positioned either under the autoclave, this is called a built-in steam generator, or it can be external to the autoclave, an independent standalone steam generator.
The main consideration when deciding between a built-in and a standalone steam generator is the size of the autoclave, whether it's a single- or double-door autoclave, and the steam generator's size. Every CSSD prefers a built-in steam generator, with its obvious space-saving advantages. Sometimes the autoclave is just too large, and then there's no other option but to position the generator beside the autoclave; there have even been cases where the steam generator was placed on top of the autoclave.
Tuttnauer autoclave with built-in steam generator
Dual Steam Supply for Your Autoclave - Why Not Enjoy Both Worlds?
If you want to enjoy both worlds, Tuttnauer supplies a hybrid steam supply solution - the dual steam supply autoclave. An autoclave with a dual steam supply has two steam lines connected to the autoclave: one line supplies steam from the building's steam network, and the second line supplies steam from a dedicated steam generator. The main reason for selecting an autoclave with a dual steam supply is in cases when the building's steam supply is irregular and affected by unstable environmental conditions. Another reason to choose an autoclave with a dual steam supply is that there are times when the availability of steam is turned off. Hospitals sometimes turn off the steam supply in the evenings and at night, and often the autoclave needs to function 24/7 to keep up with high instrument turnaround. This is when the dedicated steam generator will kick in. If the autoclave needs to be ready and functioning at all hours, then having a dual steam supply is important.
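The supply-selection logic described above reduces to a simple preference order. The sketch below is purely illustrative; real autoclave controllers also monitor pressure, temperature and safety interlocks before admitting steam:

```python
def select_steam_source(building_steam_ok, generator_ready):
    """Pick the supply line for a dual steam supply autoclave.

    Prefers building steam when it is stable, and falls back to the
    dedicated generator, e.g. overnight when plant steam is shut off.
    """
    if building_steam_ok:
        return "building steam line"
    if generator_ready:
        return "dedicated steam generator"
    return "no steam available - hold cycle"

print(select_steam_source(False, True))  # dedicated steam generator
```

The same check would run again before each cycle, so a hospital that shuts its plant steam down at night switches over without operator intervention.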
A Hospital's Boiler Room
Standalone Steam Generator
Steam to Steam
An alternative way to generate steam is the steam-to-steam generator, which is also called a clean steam generator. As discussed, this steam is usually used only in advanced laboratory or pharmaceutical autoclaves. It is needed when the required level of sterilization demands pure steam and the quality of building steam or a standard steam generator is not high enough. It uses plant steam, called "dirty" steam, to generate pure steam. The steam that enters the chamber is produced from water that never came into contact with heating elements: the water is turned into steam by steam, so it does not come into contact with electrical parts.
A clean steam generator uses the energy of the plant ("dirty") steam to generate steam, instead of using electricity. This avoids the need for a high-current electrical connection for a steam generator, since the building steam is used to generate the clean steam.
We hope you are now familiar with how steam works, its quality, and the different options for autoclave steam generators and steam supply. Having raised the pressure of the autoclave with steam, we will next reduce it dramatically and see how vacuum works in an autoclave and why it is needed. We will explain the ins and outs of the vacuum: What is deep vacuum? Why do we pulse? Why is getting rid of the air important for sterilization assurance? Stay tuned.
But before we take the pressure down, we want to hear from you. How is steam supplied to your autoclave? Did you ever encounter any difficulties or challenges with steam supply? Let us know, and if you have questions, we can provide some good answers.
And studies show up to 20% of all cancer deaths in the U.S. are related to being overweight.
Carrying excess weight or body fat has long been shown to put you at much greater risk for heart disease and diabetes; being overweight has now also been linked to a much greater risk of colon, kidney, pancreatic and breast cancer.
Excess body weight, especially fat around the waist, has been linked to both insulin resistance and the high insulin levels that are thought to promote the growth of cancer cells.
Too much body fat also seems to contribute to inflammation, which can promote cancer growth. Older women are at a greater risk because extra body fat is also associated with the higher levels of estrogen that stimulate estrogen-sensitive cancers of the breast and endometrium.
Best Cancer Fighting Foods for an Anti-Cancer Diet
Most cancer prevention researchers agree that a lifelong commitment to a healthy low-calorie Mediterranean diet is the best way to lose excess weight and maintain a healthy weight. And it's also the best way to help protect yourself from diabetes, heart disease and cancer.
Here are your Mediterranean diet guidelines for diabetes, heart disease and cancer prevention:
- Eat less red meat and drink less alcohol.
- Have fish and poultry several times a week.
- Include more fruits, vegetables and whole grains.
- Use herbs and spices to flavor foods rather than salt.
- And choose healthy olive oil instead of other oils or butter.
And add at least five to nine daily servings of fresh fruits and vegetables to your anti-cancer diet.
Plant based cancer fighting foods are essential to your cancer prevention diet because they're foods high in fiber that are loaded with vitamins and minerals. And they're also the best source for the cancer prevention antioxidants – carotenoids, flavonoids and cruciferous phytonutrients.
According to most reliable authorities, this anti-cancer diet of cancer fighting foods encourages life-long eating habit changes that will help you to lose weight and maintain your weight loss.
What Else Can You Do for Cancer Prevention?
Getting enough daily exercise is another vitally important anti-cancer strategy. The American Institute for Cancer Research recommends at least thirty minutes a day of moderate exercise such as taking a walk around the block, going for a swim or climbing some stairs.
Better yet, to help you manage your weight and to provide even better cancer prevention, aim for sixty minutes of moderate exercise or thirty minutes of vigorous activity every day.
Be sure to subscribe to my free Natural Health Newsletter.
Click here for the Site Map.
Articles you might also enjoy:
How to Lose Weight Fast and Safe
Antioxidants Benefits of Anti-Aging Foods
Omega 3 Fish Oil Benefits for Breast Cancer
Risks and Symptoms of Heart Disease in Women
To subscribe to the Natural Health Newsletter, just enter your email address in the subscribe box at the bottom of this page.
© Copyright by Moss Greene. All Rights Reserved.
Note: The information contained on this website is not intended to be prescriptive. Any attempt to diagnose or treat an illness should come under the direction of a physician who is familiar with nutritional therapy.
Vegans and vegetarians have been in their kitchen laboratories for years attempting (and sometimes succeeding) to find a suitable substitute for gelatin. Now it seems that a group of scientists just might have done the trick – kind of. Conventional gelatin is made from collagen inside animals’ skin and bones — yes, we know we are ruining your childhood memories — and a group of researchers have managed to replace that animal base with a human one. Their research appears in the American Chemical Society’s Journal of Agricultural and Food Chemistry.
Don’t worry – no serial killers involved here. The process the researchers perfected actually involves taking human gelatin genes and inserting them into a strain of yeast. With their technology they were able to grow gelatin with controllable features. Jinchun Chen, the leader of the study, and his colleagues believe they can scale this process up to produce large amounts of human-based gelatin. So why, you ask, are the researchers pursuing this possible cannibalistic measure? For medical reasons, of course (though our first thought was jello shots).
It turns out that gelatin has many uses in the world of medicine — it is widely used in medicinal capsules, for one. Many people’s immune systems don’t respond fondly to the animal-derived stuff. Chen and his team, working out of Beijing, were attempting to perfect a gelatin that the human immune system would accept without question, so making it out of human tissue makes sense. Though human-based gelatin on a supermarket shelf is pretty unappetizing, human-based gelatin in the world of medicine seems genius. Let’s hope Chen and his crew are only reaching to conquer the pharmaceutical industry and not the food industry as well.
Via Science Daily
Nutrition Labels Leave Consumers Confused About Sugar Levels
CHICAGO (CBS) – It’s no secret many foods contain sugar, but you might be shocked to learn just how much is in some seemingly healthy options. CBS 2’s Dorothy Tucker put people to the test to see if they could convert grams on food labels to teaspoons, and asked one consumer advocate about his simple solution.
Michael Roe thought the 72 grams of sugar in a can of AriZona Iced Tea amounts to about 6 teaspoons. He was wrong.
The 72 grams of sugar in that drink are equal to 17 teaspoons of sugar, the same amount you'd get from six Twix bars.
A survey of 700 readers by Consumer World found 80 percent of people were confused about the nutritional information on food labels, because all food labels use the metric system, and ingredients like sugar are measured in grams, not teaspoons.
Others were surprised to learn a bottle of Nantucket Nectars Orange Mango drink lists 65 grams of sugar per bottle, the equivalent of 15 teaspoons of sugar.
The 57 grams of sugar in a Minute Maid Cranberry Apple Raspberry juice bottle equals 13½ teaspoons.
Even a McDonald’s Quarter Pounder with Cheese has a shocking 10 grams of sugar, or 2⅓ teaspoons.
“This is an enormous amount of sugar,” said Achieng Obung.
Holly Herrington, a registered dietician at Northwestern University, said, “Sugar itself is not bad. The problem is that we consume way too much sugar.”
Herrington said consumers aren’t used to measuring ingredients in grams.
“If I’m using spoons, I’m not measuring my grams at home,” said Herrington.
Even seemingly healthy foods like Nature Valley Granola bars have 11 grams of sugar in two bars. That’s more than 2½ teaspoons.
A Dannon Yogurt cup has 25 grams of sugar. That’s six teaspoons in a 6-ounce container.
“That’s a lot of sugar,” said Salma Saad.
Consumer World’s Edgar Dworsky calls the labels confusing and wants the FDA to include teaspoons on labels, so they are easier to understand.
Dworsky said he believes manufacturers love that confusion “because consumers are in the dark about how much sugar…is really in the things they normally eat.”
The FDA has not yet responded to Dworsky’s suggestion to include teaspoons on the labels. They also haven’t responded to CBS 2’s requests for comment.
According to dieticians, women should not have more than 24 grams of sugar a day, or about 6 teaspoons. For men it’s about 36 grams, or 9 teaspoons.
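Using the guideline figures just quoted, a quick check shows how far a single sweetened drink can push a day's intake over the line (the grouping and function here are just an illustration of the arithmetic):

```python
# Compare a day's sugar intake against the guideline figures quoted above:
# roughly 24 g/day for women and 36 g/day for men.

DAILY_LIMIT_G = {"women": 24, "men": 36}

def grams_over_limit(total_grams, group):
    """Return how many grams over the daily guideline, or 0 if within it."""
    return max(0, total_grams - DAILY_LIMIT_G[group])

# A single 72 g can of iced tea is three times the women's guideline.
print(grams_over_limit(72, "women"))  # 48
print(grams_over_limit(20, "women"))  # 0
print(grams_over_limit(40, "men"))    # 4
```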
The companies cited in this story are doing what they're legally required to do, listing sugar in grams on their nutrition labels.
On a Quest for English
Online role-playing games, which take players on explorations of medieval fantasy worlds, are showing the potential to be a powerful tool for ESL learning.
When Professor Edd Schneider and game designer Kai Zheng suggested to attendees gathered in San Francisco last spring for the annual Game Developers Conference that massively multiplayer online role-playing games, better known as MMORPGs, could help Asian teens acquire English language skills, the two men generated considerable buzz. Their message threw a spotlight on a relatively new area of investigation in the evolving relationship between education and computer games—namely, whether an MMORPG might serve as a pedagogical tool for students learning English as a second language.
Schneider, an associate professor in the Department of Information & Communication Technology at The State University of New York (SUNY) at Potsdam, has been researching games and teaching game design and development for more than 10 years. Zheng, a student in the department, is a Chinese software developer who has written for videogame magazines in China.
In their presentation, Schneider and Zheng argued that the internationally popular MMORPG World of Warcraft (WoW) could be marketed more effectively in China, Korea, and Japan if it were run on English-as-a-second-language servers, which are accessible to players in Asia and the United States. Blizzard Entertainment, the maker of WoW, currently sells the Asian rights to the game to a Chinese company, which runs it on separate servers. Schneider believes that running the game on joint ESL servers could remove one of the greatest roadblocks to sales in that part of the world: parents.
"In China, parents hate computer games," Schneider says. "They want their kids to be studying or involved in sports. Most wouldn't even consider buying World of Warcraft. But Asian parents also want their kids to speak English. We suggested that if they knew their kids would be getting up at 7 a.m.—which is 7 p.m. here—to practice English [the game's default language], they would have less antipathy for the product. I really believe that if Blizzard started an ESL server of English in China, they would make a fortune."
Beyond marketing considerations, Schneider believes that MMORPGs have great potential as tools for ESL programs in US schools. It's a notion born of a project Schneider and Zheng worked on earlier this year, in which a group of SUNY Potsdam graduate students tutored a group of Chinese middle schoolers in English through online computers games. Employing a VoIP connection and Flagship Industries' group-communications program Ventrilo, the graduate students rose at 3 a.m. to interact with students at Shanghai's Qibao Middle School. Over the course of five months, Schneider's students played a range of games with the Qibao students, everything from online Scrabble to various strategy games.
"Basically, I told [my students] that they could teach them English using any game they wanted," Schneider says. "Once a week, all my students would get up in the middle of the night, put on their headsets, and chat for two hours with 12-year-olds in Shanghai. The Chinese kids absolutely loved it. Their teachers told us it was their favorite class."
Scrabble turned out to be the best vocabulary builder of the project, Schneider says, but WoW presented a social environment in which language learning was more contextual, similar to the experience of total immersion in a foreign culture. The game provides a persistent world in which players maneuver their avatars to explore the world, fight monsters, and interact with so-called non-player characters that are part of the game. Those avatars also interact with other players' avatars, and even work in groups to pursue quests—and that requires conversation, which is done through chat windows in which players can type messages to other players within a local area, to members of their party, to members of their guild, or directly to individuals.
"The best way to learn a language is through immersion. The only way you can provide an immersive experience that scales is in a virtual world." —John Nordlinger, Microsoft Research
"For 20 years, we've been talking about how we will soon be able to put you and a person from China together in the same virtual space to talk and interact," Schneider says. "Well, we can do that right now with MMORPGs."
Schneider believes the Chinese students learned far more conversational English in WoW than they ever would have learned by using a textbook. "You can teach left and right in a classroom setting, but in World of Warcraft, they get a chance to use it," he says. "They went from being afraid to say anything to telling my students, 'This time, I'm going to kick your butt!'" Schneider also observed that the Chinese students were highly motivated to acquire English because it helped to advance them in the game. "Nothing is more motivating than these online games," he says.
Schneider and Zheng are hoping to continue their study of the language-learning potential of World of Warcraft next year with a summer gaming camp. Participating American kids would log on in the evening, Chinese kids in the morning. "The idea behind the camp is to teach English to the Chinese kids," Schneider says, "but also to help both groups of players learn to balance gaming with the rest of their lives."
Schneider and Zheng weren't the first to explore the languagelearning possibilities of an MMORPG. In the spring of 2006, Bruce Gooch, then a professor at Northwestern University, and graduate students Yolanda Rankin and Rachel Gold put together a pilot study to evaluate second language acquisition in the context of gaming. "I championed the idea at Northwestern of using MMORPGs to learn a second language," Gooch says, "but Yolanda cleverly used it for ESL in the study."
In their proposal for the study, the Northwestern researchers wrote: "Since MMORPGs support social interaction between players, MMORPGs serve as the catalyst for fostering students' language proficiency as students interact in a foreign language while playing the game. For these reasons, we believe that MMORPGs embody an interesting and underutilized learning environment for second language acquisition."
The game they selected to use for this study was Sony Online Entertainment's EverQuest II (EQ2), a sequel to the enormously popular EverQuest. Though not as commercially successful as WoW, EQ2 offered key advantages, Rankin explains.
"We thought it would be a better game for language learning than WoW, because everything in the game is labeled, so you have an opportunity to get visual reinforcement of information," she says. "You see a noun and you get a label: This is a bird, this is a fortress. Also, the game's quests are documented and displayed on the screen. As students complete these quests, they develop an appreciation for verbs, adverbs, and colloquial meanings. EverQuest just has a lot more text all over the place."
The eight-week pilot study involved six English language learners (ELLs)—four men and two women—who were either Northwestern graduate students or spouses of Northwestern grad students. Two of the subjects were native speakers of Korean, two spoke Chinese, and two Castilian. They all played EQ2 for at least four hours per week.
Rankin, a PhD candidate in Northwestern's Department of Electrical Engineering and Computer Science, emphasizes that the study was "highly preliminary." But the results do suggest that EverQuest, and possibly MMORPGs in general, reinforce language acquisition for a number of reasons. The pursuit of quests, for example, requires players to become what Rankin calls "active learners" who engage with other players and the gaming environment. The study also supports Schneider and Zheng's conclusion that the games are inherently motivating.
"The game requires them to do things," Rankin says, "to read directions, to interact with other avatars, to travel over the landscape; that's why they learn the language. You have to comprehend the information that's in front of you in order to advance to different levels and complete the quests. And you can't complete the quests without asking for help from other players, which, again, requires you to understand the language." EverQuest II provides several chat channels, allowing players to type messages to each other, ask questions, and meet for joint quests.
RESEARCHERS ARE DEVELOPING A RESORT WITHIN THE 'SECOND LIFE' DIGITAL WORLD TO USE AS A STAGE FOR LANGUAGE STUDIES.
It may not provide participants with the same quest-oriented challenges of a World of Warcraft or an EverQuest II, but the burgeoning 3-D virtual world known as Second Life is all about immersion. Second Life is a vast digital continent, teeming with avatars (nearly 10 million by one count) representing "residents" with homes and businesses.
Researchers at nonprofit research institute SRI International see Second Life as an environment with great promise for English language learners. SRI's Center for Technology in Learning (CTL) has just embarked on a research project called Lakamaka Island, named for a piece of real estate—a tropical island—that the institute has purchased in the Second Life universe. Principal investigators Valerie Crawford and Phil Vahey and their team are using this virtual island as a staging ground for language-learning studies.
Also on the project is John Brecht, a learning technology engineer. "We're looking at the virtual environment as a means of establishing a concrete context to practice language skills," he says. "Rather than running students through exercises in the abstract, practicing words and phrases from a textbook, the virtual world allows you to engage students in a virtual role-playing exercise."
Initially, CTL researchers plan to establish a narrative thread for visitors to the island, woven around the concept of travel, Brecht explains. Participants will check in to hotels, order meals, and engage in many of the activities they might experience during a trip to another country. SRI is also developing a voice-recognition engine designed to allow participants to practice their language skills without having to have an expensive instructor or native speaker in the room.
The long-term goal of the project, which is still in the prototyping phase, is to create a kind of mutually supportive foreign exchange program, Brecht adds. "The island could be a place where you could have English speakers learning Japanese, and Japanese speakers learning English, with both helping each other through the exercises. Think of it as a bottom-up, self-motivated social system, rather than a traditional top-down, school-like experience."
Gooch, who is currently an assistant professor of computer science at the University of Victoria in British Columbia, cites another factor: "We know that learning is accelerated if we have an emotional response to the learning. We believe that's what might be going on in the game. I want to defeat an opponent. I'm worried, I'm scared, I'm excited—I'm interested. You tend to remember things that strike you this way."
Perhaps the most important difference between EQ2 and WoW as tools for ELLs is EverQuest's use of audio. The initial release of the game had 130 hours (70,000 lines) of spoken dialogue provided by 1,700 voice actors. Such an audio-rich environment makes for a more immersive experience, Gooch says.
This rich audio component had a beneficial effect on pronunciation in the Northwestern study. Rankin is also convinced that it was responsible for accelerating improvements in comprehension and vocabulary acquisition as the players moved through the game. As they advanced to level 10 and higher, their vocabulary test scores improved significantly. "At that level, just about every avatar you meet has audio associated with the text," she says. "When you're a newbie, only a few key avatars have speech associated with their text. The students told us in the post-interview that hearing the words spoken was enormously helpful."
Role-playing games, however, do appear to have their shortcomings. As good as EverQuest II may be at helping players to build vocabulary, it proved much less effective at conveying the grammatical aspects of language. "It just seems to get lost," says Rankin. "They're understanding meaning, but they're not worried about subject-verb agreement."
And MMORPGs are not best suited for absolute beginners. The researchers concluded in their report on the study: "We realize that if the ESL student is to benefit from the immersive environment represented in role-playing games, the participant should possess a minimum of intermediate-level knowledge of the English language."
Rankin plans on continuing her exploration of the potential of role-playing games in language acquisition in a follow-up study at the University of Mississippi. This time around, she intends to increase the subject pool, prohibit paired play (most of the players in the first study played in twos), and provide a support "scaffolding" that includes a digital dictionary.
"We observed that the students were looking up a lot of phrases in books they brought or translation computers," Gooch explains. "They spent a lot of time asking each other and the tutor what somebody meant: 'I know what this word means, and I know what that word means; what do they mean in this phrase?' Ideally, we'd like to make everything clickable. You say something to me that I don't understand, and I should be able to click on the text on the screen to get definitions of the words or explanations of the phrase—kind of an instant translation."
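Gooch's "clickable text" idea can be sketched as a tiny glossary lookup that maps a clicked chat word to a stored definition. The glossary entries and function names below are invented for illustration; a real client would wire this to its chat UI and a full dictionary:

```python
# Minimal sketch of clickable in-game text: map a clicked word in a
# chat message to a stored definition. The glossary entries are invented.

GLOSSARY = {
    "quest": "a task or mission given to a player",
    "avatar": "the on-screen character controlled by a player",
    "guild": "an organized group of players",
}

def lookup(word):
    """Normalize a clicked word (strip punctuation, lowercase) and look it up."""
    return GLOSSARY.get(word.strip(".,!?").lower(), "(no definition available)")

message = "Join my guild and help me finish this quest!"
for word in message.split():
    # In a real client this would run only when the player clicks the word.
    print(word, "->", lookup(word))
```

Extending this from single words to phrases (the case Gooch highlights) would require matching multi-word spans against a phrase dictionary rather than a per-word table.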
Meanwhile, Gooch says he is set to start a new a project to investigate the potential of MMORPGs as environments for teaching algebra and geometry. He's also in the midst of testing out role-playing games on English language learners at the high school level. He fully expects that the results from the middle school studies will carry over. "It was a longer process to get permission to work with a high school, or we would have started there with our pilot study," he says. "We have since obtained permission from two high schools, and we now have studies ongoing there." Gooch says preliminary results from these studies are similar to the outcomes obtained in the first study.
"For 20 years, we've been talking about how we will soon be able to put you and a person from China together in the same virtual space to talk and interact. Well, we can do that right now with MMORPGs." —Edd Schneider, The State University of New York at Potsdam
The Northwestern MMORPG study was first suggested to Gooch by John Nordlinger, program manager for the Microsoft Research group. Nordlinger focuses on using gaming themes and technologies to enhance curriculum. He is currently talking with academics about collaborating on a game to help young kids with algebra and geometry.
One of the best things about these kinds of virtual worlds for English language learners, he says, is they provide them with a safe environment in which to make mistakes. "You aren't your avatar," he says. "You can use that avatar to make mistakes in a game without losing face. And that's a very good thing." (Rankin found in her post-study interviews that this aspect of the EQ2 experience was especially important to her Asian subjects.)
In response to the notion that WoW and EQ2 are likely to salt players' vocabularies with odd terms not found in most classroom language texts (sword, elf, wizard), Nordlinger observes that many students in American English classes are now reading the Harry Potter novels, which operate from a similarly exotic lexicon. "Yes, there are skeletons and vampires in EverQuest, but don't think they're not already in English class," he says.
Whatever their shortcomings, games like WoW and EQ2 provide ELLs with a uniquely scalable immersive experience, which Nordlinger believes is essential in the language-learning process. "The best way to learn a language is through immersion," he says. "The only way you can provide an immersive experience that scales is in a virtual world of the sorts created by Sony Online and Blizzard."
In Nordlinger's view, though, the current crop of MMORPGs is unlikely to find its way directly into the ESL classroom. He sees them rather as curriculum-enhancing technologies. "I think they're going to work best as extracurricular solutions," he says. "Someday someone is really going to get this. They're going to integrate [gaming] into their classroom, and they're going to find that kids learn geometrically more when their extracurricular activities are complementing the classroom."
And what about the dangers of kids' overdosing on role-playing games? "Many people say to me, 'What do I do about my son who's playing World of Warcraft all the time?' I say, 'Tell him he can play as much as he wants, but he has to play it in a different language.'"
-John K. Waters is a freelance writer based in Palo Alto, CA.
This article originally appeared in the 10/01/2007 issue of THE Journal.
What Would It Look Like if the Moon was Much Closer?
We’ve shown some amazing moonrise timelapse images to you before, but nothing quite like this. The fictional video shows a scenario where the moon was actually orbiting around the earth at a much closer range than now, similar to the distance of the International Space Station (ISS). The ISS has a low orbit nearly 230 miles up, while the moon normally travels about 239,000 miles away from Earth.
So what would the moon look like if it was the same distance as the ISS?
Interestingly, since the moon would be orbiting much faster than the earth rotates (the ISS takes about 90 minutes to make the round trip), it would no longer rise in the east and set in the west to us, but vice versa. In reality, though, if the moon were inside what is known as the Roche limit (or Roche radius), the Earth's gravitational pull would break the moon apart, likely forming a system of rings, which in turn would send the earth's rotation out of whack and potentially destroy the planet in the process!
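Both claims check out with textbook formulas: Kepler's third law gives roughly a 90-minute orbit at ISS altitude, and the rigid-body Roche limit for the Earth-Moon density pair sits well outside that orbit. The sketch below uses standard published values for the constants; the Roche expression is the rigid-body approximation (a fluid body breaks up even farther out):

```python
import math

# Check the ~90-minute orbit and the Roche-limit claim with textbook formulas.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # Earth mass, kg
R_EARTH = 6.371e6    # Earth mean radius, m
RHO_EARTH = 5514     # Earth mean density, kg/m^3
RHO_MOON = 3344      # Moon mean density, kg/m^3

# Orbital period at ISS altitude (~230 miles, about 370 km) via Kepler's third law.
a = R_EARTH + 370e3
period_min = 2 * math.pi * math.sqrt(a**3 / (G * M_EARTH)) / 60
print(f"Period at ISS altitude: {period_min:.0f} minutes")  # ~92 minutes

# Rigid-body Roche limit: d = R_earth * (2 * rho_earth / rho_moon)^(1/3)
roche_km = R_EARTH * (2 * RHO_EARTH / RHO_MOON) ** (1 / 3) / 1000
print(f"Rigid-body Roche limit: {roche_km:.0f} km from Earth's centre")  # ~9500 km

# The ISS orbital radius (~6740 km from Earth's centre) is well inside
# that limit, so a moon there would indeed be torn apart by tidal forces.
print("Inside Roche limit:", a / 1000 < roche_km)  # True
```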
So despite the amazing view, I think we all like the moon right where it's at.
The Trader, the Owner, the Slave: Parallel Lives in the Age of Slavery (Paperback)
by James Walvin
There has been nothing like Atlantic slavery. Its scope and the ways in which it has shaped the modern world are so far-reaching as to make it ungraspable. By examining the lives of three individuals caught up in the enterprise of human enslavement, James Walvin offers a new and original interpretation of the barbaric world of slavery and of the historic end to the slave trade in April 1807.

John Newton (1725-1807), author of 'Amazing Grace', was a slave captain who marshalled his human cargoes with a brutality that he looked back on with shame and contrition. Thomas Thistlewood's (1721-86) unique diary provides some of the most revealing images of a slave owner's life in the most valuable of all British slave colonies. Olaudah Equiano's (1745-97) experience as a slave now speaks for the lives of millions who went unrecorded. All three men were contemporaries, but what held them together, in its destructive gravitational pull, was the Atlantic slave system.
- Format: Paperback
- Pages: 336 pages, 8 pages b/w plates; 1 map
- Publisher: Vintage Publishing
- Publication Date: 07/02/2008
- Category: Slavery & abolition of slavery
- ISBN: 9780712667630