ERIC Number: EJ419047
Record Type: CIJE
Publication Date: 1990
Reference Count: 0
Science through Children's Literature: An Integrated Approach.
Butzow, Carol M.; Butzow, John W.
Science Activities, v27 n3 p29-38 Fall 1990
Discussed is a way to develop an integrated unit based on a fictional book. An example for constructing a unit is included as well as activities from units that consider birds, volcanoes, and measurement. Each unit contains a summary, topic areas, content-related words, activities, related books and references. (KR)
Publication Type: Journal Articles; Guides - Classroom - Teacher
Education Level: N/A
Audience: Teachers; Practitioners
Authoring Institution: N/A
The mystery of the Holy Trinity is the central mystery of the Christian faith and life. God reveals himself as Father, Son, and Holy Spirit. The doctrine of the Trinity includes three truths of faith.
First, the Trinity is One. We do not speak of three gods but of one God. Each of the Persons is fully God. They are a unity of Persons in one divine nature.
Second, the Divine Persons are distinct from each other. Father, Son, and Spirit are not three appearances or modes of God, but three identifiable persons, each fully God in a way distinct from the others.
Third, the Divine Persons are in relation to each other. The distinction of each is understood only in reference to the others. The Father cannot be the Father without the Son, nor can the Son be the Son without the Father. The Holy Spirit is related to the Father and the Son who both send him forth.
All Christians are baptized in the name of the Father and of the Son and of the Holy Spirit. The Trinity illumines all the other mysteries of faith.
This article is an excerpt from the United States Catholic Catechism for Adults, copyright © 2006, United States Conference of Catholic Bishops. All rights reserved. Used with permission.
Induced Hypothermia Therapy
On Tuesday, June 1st, Genesis Medical Center began offering a new therapy - induced hypothermia for adult patients surviving a cardiac arrest. Genesis is one of fewer than 500 facilities in the US to offer this therapy and only the second in the state of Iowa.
What is Induced Hypothermia?
Induced hypothermia is a treatment that improves neurological outcomes and decreases the mortality rate in our post-cardiac-arrest patient population.
It can be used on the post-cardiac-arrest patient who has a return of spontaneous circulation (ROSC) yet remains neurologically unresponsive. The American Heart Association has endorsed this treatment as a Class II recommendation: acceptable, safe, and considered effective, though its true clinical effectiveness has not yet been definitively confirmed.
Why Use Induced Hypothermia?
During cardiac arrest, decreased cerebral oxygen can occur as a result of hypotension or lack of perfusion, resulting in cerebral edema and neurological deficits. After successful resuscitation, reperfusion to the brain can exacerbate the edema and cause changes at the cellular level resulting in damaged brain cells and cell death.
Research has shown that mild hypothermia decreases cerebral edema and interrupts the cascading effects that damage brain cells, resulting in improved neurological outcomes. Until now, this therapy was not offered to the post-cardiac-arrest patients who may meet the inclusion criteria.
What is The Process?
Once a patient survives an arrest and is deemed a candidate for this therapy, an "Induced Hypothermia Alert" will be called overhead, similar to other alerts currently in place. Once activated, a team will be notified and the patient will be transferred to an ICU as soon as possible, where the cooling pads will be applied and external cooling will begin. This therapy must be initiated within six hours of a return of spontaneous circulation.
The process may start in the field when emergency responders are called to the scene. In the field patients may be cooled with ice packs or chilled saline. The patient may come directly to the Cath Lab for PCI with the ice packs or chilled saline in place. Once the Cath Lab procedure is completed the patient will be transferred to ICU for application of the cooling pads. Patients meeting certain criteria will be intentionally cooled externally to 33 degrees Celsius (91.4 F) for 24 hours after their arrest, then slowly rewarmed and allowed to wake up. The goal of this therapy is to preserve vital organs, most notably the neurological system, in the hopes of improving the cognitive and motor functions of these patients.
Towards More Inclusive Strategies to Address Gender-Based Violence
IDS Policy Briefing 104
Download this publication
Sexual and gender-based violence is persistent and devastating, rooted deeply in the lives of men, women, boys and girls globally. Gendered violence does not exist in isolation, and is intertwined with other forms of power, privilege and social exclusion. Processes of marginalisation, unhelpful binary views and institutional discrimination only serve to create, embed, and exacerbate sexual and gender-based violence (SGBV).
Understanding and sharing lessons around the complex social differences that surround SGBV is vital if change is to happen, particularly with reference to collective action and the role of men and boys. Taking an ‘intersectional analysis’ approach can help reveal the tangled nature of SGBV and why cross-movement alliance building and the sharing of best practice are crucial in tackling this violence.
"date": "2018-02-17T19:51:16",
"dump": "CC-MAIN-2018-09",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891807660.32/warc/CC-MAIN-20180217185905-20180217205905-00056.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9010637998580933,
"score": 2.671875,
"token_count": 190,
"url": "http://www.ids.ac.uk/publication/towards-more-inclusive-strategies-to-address-gender-based-violence"
} |
Biodiversity in Aquicuana
One of the main objectives of Sustainable Bolivia is to learn more about the biodiversity of the Aquicuana Reserve and, therefore, to know what species live there (animals, plants, fungi). This will help us better understand the natural reserve in order to protect it and follow its evolution over time. In addition, it will allow the development of the community-based ecotourism project in Aquicuana and offer visitors activities to discover nature in this region rich in biodiversity. As a biology student, I feel lucky to be able to study the biodiversity of the Aquicuana Reserve. It is a great opportunity to help Sustainable Bolivia in my field and, at the same time, to live an experience that will contribute a lot to both my studies and my personal life.
What do the volunteers do?
In order to complete the databases, we often go with other volunteers to the reserve to try to find species that have not yet been identified. In the field, the objective is to observe and photograph the species we find. Sometimes this means walking for several hours in the jungle, cutting a path with a machete. For nature lovers, it is a true paradise! Thanks to Erik, the founder of the NGO, we have the opportunity to sleep in cabins built in the reserve (at the Pisatahua Ecolodge). This allows us to go out at night and study nocturnal species. You might think that at night the jungle sleeps and everything is extremely quiet; on the contrary, at night the sounds are stronger and life seems to wake up in every corner. The noise is terrifying and magical at the same time; life can be felt everywhere, and you have to experience it at least once. Back at the volunteer house in Riberalta, we continue identifying the species we have photographed. If one of these species has not yet been recorded, we add it to the database. Identification is sometimes difficult, but we are fortunate to have the support of a local biologist, and it is also possible to find help from other specialists through the Internet. For the NGO's latest project, on amphibians, we had the help of the University of Texas with the uncertain identifications.
One of the NGO's projects is to create a field guide to the animals of the Reserve, which will be published on the NGO's website and shown to visitors. The objective is to offer local and international tourists a tool to identify the species they will see during their stay in Aquicuana. At first, the field guide will contain a picture of each species found in the Reserve along with its name. Later, it will be completed with a description of each species' appearance, way of life, and calls. Some animals, such as felines, are very difficult to observe. In such cases, despite the absence of direct observations, animals that probably live in the reserve are added as well. To complete our database, we use information from local biologists and from other professional field guides.
More than working on biodiversity
Sustainable Bolivia not only works on this biodiversity project. The NGO also has projects providing support locally. For example, every week I have the opportunity to go to help an orphanage by organizing activities for the children. This orphanage is run by two couples who have to take care of everything and receive no help from the State. It only works through donations from a foreign foundation. The participation of the volunteers greatly helps the few people who have to take care of the children seven days a week, twenty-four hours a day. In addition, the hours we spend with children are a very important life lesson for us, who often come from more developed countries.
Coming to Riberalta to help Sustainable Bolivia is also coming to discover another culture. Living several weeks in a country with a totally different culture from ours is an experience that opens the mind. We discover another way of thinking, of living, of behaving with people and this allows us to realize that our reality is just one among many.
What exactly is obesity? Is it just having a little tummy fat? What are the risks associated with obesity, and how can you combat it? We lay out everything you need to know.
It’s common knowledge that the obesity rates and trends in the United States are grim. So grim, in fact, that according to the Centers for Disease Control and Prevention, more than one-third of U.S. adults are dealing with obesity (that’s about 36.5 percent of adults in America).
Even worse, the national childhood obesity rate is a whopping 18.5 percent. That breaks down to roughly:
- 13.9 percent of children ages 2 to 5
- 18.4 percent of children ages 6 to 11
- 20.6 percent of children ages 12 to 19
As a nation, the United States is aware that obesity is a mega-problem in our country, but many people, both obese and not, are left with plenty of unanswered questions about the condition.
For example, what does being obese actually mean? What constitutes obesity and how can you know if you’re dealing with the risks associated with being obese? Is being overweight different than being obese?
Further, what are the risks that go hand-in-hand with obesity? Does it mean your heart is less healthy? Does it mean that you’re facing the possibility of a shorter life?
More importantly, how can you combat obesity and what are the best ways to treat it? How can a person fight obesity and the risk that is associated with it?
We’re here to break it down for you. Read our guide below for answers to any questions you might have about obesity, tips for how to treat and fight it, and even more helpful information that you might be unaware of.
What Constitutes Obesity?
When it comes down to actually defining obesity, a lot of people have only a fuzzy idea of where they fall within that category. An overweight person, for example, doesn’t necessarily qualify as an obese person, but people often use these two terms interchangeably.
Are you obese if you have a little extra belly fat? Likely, this is not the case. Are you obese if your clothes are fitting a little tighter than usual?
Being overweight and being obese are two different conditions. Determining obesity has a lot to do with what’s known as the Body Mass Index (BMI), a statistical measure derived from your height and weight: your weight in kilograms divided by the square of your height in meters. Typically, if your BMI is between 25 and 29.9, you’re considered overweight, and if your BMI is 30 or over, you’re likely considered obese.
It’s important to note, though, that BMI can be misleading. For the average person, the BMI will work fine, but the BMI indicator will not measure your body fat percentage. Someone who is incredibly muscular for their height will have a higher weight, but a lot more muscle than fat.
They might have a higher-than-normal BMI, but that doesn’t mean that they’re obese. According to the CDC, there are different classes of obesity, too (the short code sketch after this list makes the thresholds concrete):
- Class 1 is a BMI of 30 to 34
- Class 2 is a BMI of 35 to 39
- Class 3 is a BMI of 40 and higher
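To make the arithmetic concrete, here is a minimal sketch in Python of the BMI formula and the thresholds just listed. The function names are our own rather than from any medical library, and, as noted above, BMI ignores body composition, so treat the output as a screening number, not a diagnosis.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    # BMI is weight in kilograms divided by the square of height in meters.
    return weight_kg / height_m ** 2

def classify(value: float) -> str:
    # Thresholds as described in the text (adult categories only).
    if value < 25:
        return "below the overweight range"
    if value < 30:
        return "overweight"
    if value < 35:
        return "obese, class 1"
    if value < 40:
        return "obese, class 2"
    return "obese, class 3"

# Example: 95 kg at 1.75 m gives a BMI of about 31.0 -> "obese, class 1".
value = bmi(95, 1.75)
print(round(value, 1), classify(value))
```

Remember the muscular-athlete caveat: the classifier above sees only height and weight, which is exactly why a frank talk with your doctor beats any formula.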
One of the simplest ways to define obesity is a condition in which a person has accumulated enough body fat that will have a negative effect on their health. When it comes down to it, the best way to determine obesity is to have a direct, frank talk with your doctor about your weight.
The Risks of Obesity
Unfortunately, being obese is far more than just a cosmetic issue – it’s a verified, guaranteed health hazard and can, without a doubt, shorten your lifespan, lower your quality of life, and present real, threatening health issues.
Someone who is obese is twice as likely to die prematurely as an average-weight person. It’s unlikely that you’ll meet a doctor who won’t agree that an obese person is more likely to have health issues, too.
An obese person faces both general and specific health risks. If you are obese, your entire body will feel it. Your heart, your joints, your blood pressure, your blood sugar — all of your systems are going to have to work harder to support you and keep you running like normal, often resulting in serious or chronic issues.
Additionally, the extra fat cells your body is carrying around will likely produce inflammation and various hormones which can also contribute to chronic medical conditions.
Obese people have a much higher risk of dealing with life-threatening issues like heart disease, strokes, high blood pressure, and high blood sugar. Because of the excess weight and body fat, an obese person’s heart will have to work overtime to support them, putting extra strain on their system.
Further, you’re at a much greater risk of other conditions like diabetes (a condition where your body’s ability to respond to or produce insulin is impaired, resulting in an abnormal metabolism of glucose), gallstones, cancer, osteoarthritis, and gout.
Certain facets of life will also be more difficult when you’re obese. Simple things like sleeping can be strongly affected. Those dealing with obesity are prone to breathing issues such as sleep apnea, in which a person stops breathing for short periods during sleep.
Strategies for Reducing Obesity
It might seem like the odds are stacked against you, but don’t panic. There are small ways to start attacking these obstacles and battling obesity. To understand how to combat obesity, it’s important to understand what causes obesity in the first place.
One of the biggest causes of obesity is simple: consuming too many calories. Many people are unaware that the amount of food they’re consuming (and the types of food they’re consuming) is contributing to their weight gain.
To combat this, you must have a thorough idea of what you’re eating, how many calories are in it, and how it fuels your body.
Consider eating more fruits and vegetables and cutting out foods that don’t serve your body (sodas and foods high in fat). Additionally, work with your doctor to get a firm grasp on what you should be eating, specific meal plans, and which foods you should cut out.
Additionally, obesity is common when someone leads a sedentary lifestyle. The less you move around, the fewer calories you burn. That, combined with overindulging in calories, can lead to rapid weight gain. But it’s not only about the calories.
Moving around, and physical activity in general, have a huge effect on how your hormones work, and often, those hormones have a huge effect on how your body processes and digests food.
Studies indicate that physical activity has a beneficial effect on insulin levels and helps keep them stable. A sedentary life spent sitting still or in front of a television can easily lead to unwanted pounds, so to combat this, try incorporating movement and physical activity into your daily life.
Start small, shooting for about 30 minutes of cardio-intensive activity every day. Talk with your doctor about gradually increasing your physical activity and discuss what kind of activities might best benefit you.
Instead of driving the half-mile to work, take a walk! Swap out an hour of TV time for an hour on your bike. There are plenty of options for a more active lifestyle!
Many people aren’t aware, but sleep is also a huge factor when it comes to maintaining a healthy weight. Research suggests that people who don’t sleep enough have double the risk of becoming obese compared with those who do.
A study from the University of Warwick found that sleep deprivation could lead to obesity because of the increased appetite people have as a result of hormonal changes. In layman’s terms, if you don’t sleep enough, you’re going to produce more ghrelin (a hormone that stimulates your appetite) and less leptin (a hormone that suppresses your appetite).
To combat this? It’s simple! Get more sleep. Make getting your 8 hours an absolute must. Not only will you feel far more rested, you won’t run the risk of overproducing ghrelin or increasing your appetite because of lack of sleep.
Finally, a lack of familiarity or awareness about factors that could affect your weight – hormonal imbalances, digestive issues, medications, endocrine disruptors, and even smoking – can play a significant role in obesity.
Determining any genetic issues your body might have, discussing digestive or hormonal imbalances that could cause you to gain weight, and reviewing all of your medicines with your doctor are among the first steps toward combating obesity.
Overall, the key steps toward combating obesity revolve around making the choice to eat a healthy, nutritious, and calorically-appropriate diet, working at maintaining an active lifestyle, getting enough sleep, and working with your doctor to determine any genetic, medicinal, hormonal, or digestive imbalances that could cause weight gain.
Obesity: Final Thoughts
The risks of obesity go far past any sort of physical appearance or cosmetic ideal. The simple truth is, being obese puts people at much bigger health risks and can, ultimately, lead to a much shorter life.
Obesity is a dangerous condition that’s plaguing the United States, and unfortunately, it’s a slippery slope. Simple parts of your lifestyle can build up, and if you’re not careful, your sedentary way-of-living, over-indulgence of calories, lack of sleep, or ignorance about your personal metabolism issues could lead to an obesity condition.
Don’t let obesity ruin your life. Make changes today. Your body will thank you.
New Yorker: I Don't Want to be Right, 2014-May-19 by Maria Konnikova
It’s the realization that persistently false beliefs stem from issues closely tied to our conception of self that prompted Nyhan and his colleagues to look at less traditional methods of rectifying misinformation. Rather than correcting or augmenting facts, they decided to target people’s beliefs about themselves. In a series of studies that they’ve just submitted for publication, the Dartmouth team approached false-belief correction from a self-affirmation angle, an approach that had previously been used for fighting prejudice and low self-esteem. The theory, pioneered by Claude Steele, suggests that, when people feel their sense of self threatened by the outside world, they are strongly motivated to correct the misperception, be it by reasoning away the inconsistency or by modifying their behavior.
Since 2012, an enthusiastic group of parents – called the PS 199 Science Center committee – has been working very hard to raise funds to build a greenhouse learning lab at PS 199. The PS 199 Science Center will be a 21st-century facility where children can engage in scientific exploration and discovery, combining state-of-the-art systems and equipment with the basic ingredients of soil, water and ladybugs! The vision of the science center is to encourage grade school children to embrace science while making educated choices about their impact on the environment. The laboratory will be built to accommodate a small urban farm and environmental science laboratory to further STEM education in New York City.
Initially the hope was to put the greenhouse on the roof of the school. But raising the needed funds was difficult and the cost of the project went up every year, eventually approaching $2 million.
Principal Louise Xerri, school custodial engineer Theresa DiCristi, the PTA leaders, and the current Science Center committee members worked together to find a way to build the greenhouse at a lower cost. With the help of the architects and engineers who have been working with the school on this project, a new location was identified and a new design was created at half the price of the original. The Science Center committee had been continuing to pursue funding during this time and, by the summer of 2015, had achieved its fundraising goal. The PS 199 Science Center has been almost entirely funded by public officials Gale Brewer, Helen Rosenthal, Linda Rosenthal and Scott Stringer. It has also been supported by the PTA and a handful of dedicated individuals.
The PS 199 Science Center will be built in the raised area along the northwest corner of the school bordering 70th Street. It is scheduled to open in the fall of 2016. Once up and running, the Science Center can be shared with others in our community.
The Science Center committee hosts the school’s annual Earth Day celebration, always has a table at the Holiday Party, and presents individual events such as Dirt Day.
PS 199 Science Center | Think Green | www.ps199sciencecenter.org | [email protected]
Vermifiltration Composting Toilets
Providing over 100 million rural households with affordable and safe sanitation.
In many parts of rural India, the absence of toilets leads many people to defecate in the open, which releases fecal pathogens into the environment. These pathogens eventually find their way into drinking water and food and result in nearly 700,000 childhood deaths. In fact, diarrheal disease is the third leading cause of childhood mortality worldwide.
As it stands, too much of the conversation around sanitation is focused on toilet construction. To make a real difference, the solutions need to focus on becoming effective waste management systems.
ITT's worm-based digester system converts fecal sludge into compost in 10-12 hours, turning human waste into incredibly effective fertilizer.
Our solution: vermifiltration in lieu of sewers
Sewer systems - the mainstay of sanitation infrastructure throughout the industrialized world - are not a realistic option for these communities. However, pit latrines, which are the most common forms of toilets in rural communities, provide no waste management. A vermifiltration toilet is the most effective solution in absence of a fully developed infrastructure.
Priced at only US$60, the Tiger Toilet - originally developed by Bear Valley Ventures from the UK and PriMove India - uses a traditional Indian commode with a pour-flush. The waste then enters a tank, which contains a large number of Eisenia fetida worms (commonly found throughout India) and a drainage layer. The solids are trapped at the top of the system, where the worms consume them, and the liquid is filtered through the drainage layer.
ITT has enabled the installation of over 3,500 units of the vermifiltration digester through partners around India. The worms have demonstrated a significant reduction in fecal solids of over 80%, and the effluent quality is higher than that from a septic tank. In these two years, households in India have shown very high levels of user satisfaction, as well as an overall reduction in open defecation across the communities.
Vermifiltration also has significant potential as a waste management technology for sewage treatment plant applications. Over 200,000 tons of solid waste and 40,000 million litres per day of liquid waste are generated across India - a challenge that is only expected to grow in the coming decades. When provided with a controlled environment, vermifiltration systems have the potential to convert such vast quantities of human waste into a very useful agricultural resource. We are currently exploring a few different larger-scale applications that could help municipal governments better manage India’s growing waste management crisis.
"date": "2019-03-26T12:55:43",
"dump": "CC-MAIN-2019-13",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912205163.72/warc/CC-MAIN-20190326115319-20190326141319-00536.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9400081038475037,
"score": 2.921875,
"token_count": 536,
"url": "https://transformativetechnologies.org/portfolio/vermifiltration-composting-toilets/"
} |
Your teens may be more concerned this week with planning a huge Halloween bash than with improving their driving skills. However, this week, and every week in the U.S. at least 68 teenagers are killed in motor-vehicle-related tragedies. Since it’s National Teen Driver Safety Week, it’s the perfect opportunity for you to connect with your teen over the topic of safe driving.
While your teen may feel he or she is a safe driver, remind them that even the professionals aren’t immune to driving dangers. Take last weekend’s untimely death of IndyCar driver Dan Wheldon during the IndyCar Series’ Las Vegas Indy 300 auto race.
This week, Tire Rack Street Survival, a national nonprofit teen-driving program, has some tips to help you and your teens stay SAFE on the roads.
Study the Basics: Teach your teen how to perform a quick maintenance check to ensure the car is working properly. Teen drivers should know where the spare tire is located, what to do in emergency situations and the importance of staying current with the oil change schedule, as outlined in the car’s manual. Are the car’s tires inflated correctly? Is there sufficient tread depth on the tires to ensure a safe stopping distance should an unexpected distraction occur? For more tire-related information, go to Cars.com.
Agree on Limits: Remember, your teen’s license is not about your convenience; it’s about his or her life.
— Set limits on your teen’s driving, particularly in high-risk situations such as prom night, social outings and especially in inclement weather.
— Do not let your teen ride with a young driver who has less than a year’s driving experience.
— Remember, the greater the number of teens in the car, the greater the level of distraction.
Form a Plan: Have a clear understanding of where your teen is driving at all times, who he or she is with and what route they intend to take. Confirm check-in times with your teen so he or she can provide updates to their plans.
Establish a Backup: Sometimes teens make mistakes and get themselves into situations where other teen drivers have been drinking and they feel stranded. Make sure your teens have a responsible adult they can call, with a code word, if they feel they shouldn’t be driving or are riding with a young driver who’s driving recklessly or under the influence. Safety first, questions later.
Since teens are well-plugged in today, you can take to the internet to help them out. Bendix Brakes for Teen Safety is a campaign on both Facebook and YouTube that offers quick, quirky teen-friendly videos (complete with #$%@ bleep-outs) on topics such as car maintenance and safe driving.
What’s helped you and your teens to communicate openly and establish trusting boundaries about safe driving? Share your tips and ideas in the comment section below.
Fire scientists know one thing for sure: This will get worse
Subtract out the conspiracists and the willfully ignorant and the argument marshaled by skeptics against global warming, roughly restated, assumes that scientists vastly overstate the consequences of pumping greenhouse gases into Earth’s atmosphere. Uncertainties in their calculations, the skeptics say, make it impossible to determine with confidence how bad the future was going to be. The sour irony of that muttonheaded resistance to data is that, after four decades of being wrong, those people are almost right.
As of July 31, more than 25,000 firefighters are committed to 140 wildfires across the United States — over a million acres aflame. Eight people are dead in California, tens of thousands evacuated, smoke and pyroclastic clouds are visible from space. And all any fire scientist knows for sure is, it only gets worse from here. How much worse? Where? For whom? Experience can’t tell them. The scientists actually are uncertain.
Scientists who help policymakers plan for the future used to make an assumption. They called it stationarity, and the idea was that the extremes of environmental systems — rainfall, river levels, hurricane strength, wildfire damage — obeyed prior constraints. The past was prologue. Climate change has turned that assumption to ash. The fires burning across the western United States (and in Europe) prove that “stationarity is dead,” as a team of researchers (controversially) wrote in the journal Science a decade ago. They were talking about water; now it’s true for fire.
“We can no longer use the observed past as a guide. There’s no stable system that generates a measurable probability of events to use the past record to plan for the future,” says LeRoy Westerling, a management professor who studies wildfires at UC Merced. “Now we have to use physics and complex interactions to project how things could change.”
Wildfires were always part of a complex system. Climate change — carbon dioxide and other greenhouse gases raising the overall temperature of the planet — added to the complexity. The implications of that will play out for millennia. “On top of that is interaction between the climate system, the ecosystem, and how we manage our land use,” Westerling says. “That intersection is very complex, and even more difficult to predict. When I say there’s no new normal, I mean it. The climate will be changing with probably an accelerating pace for the rest of the lives of everyone who is alive today.”
That’s not to say there’s nothing more to learn or do. To the contrary, more data on fire behavior will help researchers build models of what might happen. They’ll look at how best to handle “fuel management,” or the removal of flammable plant matter desiccated by climate change-powered heat waves and drought. More research will help with how to build less flammable buildings, and to identify places where buildings maybe shouldn’t be in the first place. Of course, that all presumes policymakers will listen and act. They haven’t yet. “People talk about ‘resilience,’ they talk about ‘hardening,’” Westerling says. “But we’ve been talking about climate change and risks like wildfire for decades now and haven’t made a whole lot of headway outside of the scientific and management communities.”
It’s true. At least two decades ago — perhaps as long as a century — fire researchers were warning that increasing atmospheric CO2 would mean bigger wildfires. History confirmed at least the latter hypothesis; using data like fire scars and tree ring sizes, researchers have shown that before Europeans came to North America, fires were relatively frequent but relatively small, and indigenous people like the Pueblo used lots of wood for fuel and small-diameter trees for construction. When the Spaniards arrived, spreading disease and forcing people out of their villages, the population crashed by perhaps as much as 90 percent and the forests went back to their natural fire pattern — less frequent, low intensity, and widespread. By the late 19th century, the land changed to livestock grazing and its users had no tolerance for fire at all.
“So in the late 20th and early 21st century, with these hot droughts, fires are ripping now with a severity and ferocity that’s unprecedented,” says Tom Swetnam, a dendrochronologist who did a lot of that tree-ring work. A fire in the Jemez Mountains Swetnam studies burned 40,000 acres in 12 hours, a “horizontal roll vortex fire” that had two wind-driven counter-rotating vortices of flame. “That thing left a canopy hole with no trees over 30,000 acres. A giant hole with no trees,” he says. “There’s no archaeological evidence of that happening in at least 500 years.”
Swetnam actually lives in a fire-prone landscape in New Mexico — right in the proverbial wildland-urban interface, as he says. He knows it’s more dangerous than ever. “It’s sad. It’s worrying. Many of us have been predicting that we were going to see these kinds of events if the temperature continued to rise,” Swetnam says. “We’re seeing our scariest predictions coming true.”
Fire researchers have been hollering about the potential consequences for fires of climate change combined with land use for at least as long as hurricane and flood researchers have been doing the same. It hasn’t kept people from building houses on the Houston floodplain and constructing poorly-planned levees along the Mississippi, and it hasn’t kept people from building houses up next to forests and letting undergrowth and small trees clump together — all while temperatures rise.
“Some of the fires are unusual, but the reason it seems more unusual is that there are people around to see it — fire whorls, large vortices, there are plenty of examples of those,” says Mark Finney, a research forester with the U.S. Forest Service. “But some things are changing.” Drought and temperature are worse. Sprawl is worse. “The worst fires haven’t happened yet,” Finney says. “The Sierra Nevada is primed for this kind of thing, and those kinds of fires would be truly unprecedented for those kinds of ecosystems in the past thousands of years.”
So what happens next? The Ponderosa and Jeffrey pine forests of the west burn, and then don’t come back? They convert to grassland? That hasn’t happened in thousands of years where the Giant Sequoia grow. So … install sprinklers in Sequoia National Forest? “I’m only the latest generation to be frustrated,” Finney says. “At least two, maybe three generations before me experienced exactly the same frustration.” Nobody listened to them, either. And now the latest generation isn’t really sure what’s going to happen next.
Consider a flame; a jet of methane, for example, injected into an oxygen-rich atmosphere and set alight. Now try to describe the shape and structure of the flame mathematically, in a way that will allow you to accurately predict how its shape and structure respond to changes in various conditions—oxygen concentration, gas pressure and so on. You will quickly discover that the mathematics of the problem can be derived from basic physical principles but is intractable: there are equations that accurately describe the situation, but they are too difficult to solve. Often the easiest solution, one that is practical in the case of a simple gas jet, is to build a physical model or a prototype, test it, and make some observations and measurements that characterize the system. But what if that's not possible? Then the usual recourse is to build a computational model that simplifies the physics in various ways and brute-forces the solution by crunching through lots of numbers.
Now consider that same flame again from a slightly different perspective: what's actually going on? Yes, the character and behavior of the flame are difficult to characterize and predict with great accuracy, but suppose you already know what a gas flame looks like, and just want to know what it is. Here, the equations are simple. First, methane oxidizes to carbon monoxide, hydrogen and water vapor, giving off energy (heat and light) in the process:
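One balanced form consistent with the products named above is

$$\mathrm{CH_4} + \mathrm{O_2} \rightarrow \mathrm{CO} + \mathrm{H_2} + \mathrm{H_2O},$$

with the carbon monoxide and hydrogen then oxidizing further to carbon dioxide and water vapor, releasing still more energy.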
This is quite typical of how we go about explaining just about everything we encounter. To understand the flow of traffic, we think about individual vehicles and the interactions between them. To understand epidemics, we think about the course of the disease in individual patients and the spread of infection from patient to patient. To understand how an industrial chemical affects an ecosystem we look at its effect on individual cells in individual organisms. We take a specimen, study its behavior, and extrapolate it to the population as a whole. This approach gives at least the illusion of explanatory depth; more importantly, it often allows us to establish cause and effect relationships and, based on them, make constructive changes that decisively influence the outcome: impose speed limits, quarantines and environmental regulations, respectively.
Let us try to apply this same approach to a truly complex system: the economies of US and Europe, in the state in which we currently find them: raging government deficits, staggering levels of bad debt, continuous government bailouts and infusions of free money by central banks, record levels of poverty and long-term unemployment and underemployment, and a lack of any meaningful economic growth. Specifically, let us try to characterize the effect of the continuous monetary infusions, bailouts, and stimulus spending. The economics profession has failed to do this and so amateurs are forced to step into the breach. The economists' usual excuse is that it's all very complicated; sure it is, so is a gas flame.
All money is debt. It is created when someone takes out a loan, promising to repay it (with or without interest) with proceeds from his or her future labor. If that promise is broken, the money ceases to exist. In the normal course of affairs, the lender then “loses” the money. If the lender loses more money than he happens to have, then the lender is bankrupted and, economically speaking, ceases to exist as well. What happened during the financial collapse of 2008 is that the real estate bubble burst and many loans went bad at the same time. The response was not to liquidate the lenders who lost more than they had, but to prop them up by issuing further loans that were not supported by any specific mechanism or realistic chance of repayment—just the compulsive thought that big financial organizations must not be allowed to fail because that would irreparably damage the system. Propping up bankrupt institutions by issuing fake money (or, more precisely, fake debt) has been assumed to be less damaging to the system than doing nothing.
This assumption would perhaps have been justified if the financial difficulties were, as was once thought, temporary in nature, that the economy would roar back to life and growth would resume. Now, three years later, we find ourselves back where we started, and this assumption no longer seems tenable. It is not clear why growth should resume, as many factors, persistently high energy prices among them, continue to weigh it down. We shouldn't bet on any more economic expansion, at least not in the developed world. As Richard Heinberg argues persuasively in his latest book, The End of Growth, growth has reached its limits, which are both numerous and insurmountable.
There is a plain and simple distinction between the two kinds of money: real money, which was lent into existence with a specific and realistic promise of repayment by a specific party, and fake money, which was dreamt into existence by a central banker without anyone specifically promising to repay it. Suppose a person walks into a grocery with fake money in his wallet, and buys something. This is no different from paying with counterfeit money: the grocer is getting robbed. But there is also a difference: the officially issued fake money is indistinguishable from real money. But just because you can't spot a fake doesn't mean that you aren't getting robbed. And so the fake money mixes with the real money and sloshes about the economy, robbing each person who touches it, until everybody is poor. Since poor people can't pay back big loans, the central banker's conceit that the fake money is debt seems rather unjustified. It is owed by the central banker to the central banker, and it would be foolish of us to expect him to ever work it off.
I am using the word “robbery” here not to indicate moral indignation or feigned umbrage, of the “I am shocked! Shocked to find that gambling is going on in here!” variety. I might even say that sometimes robbery is justified (“expropriation” or “commandeering” are its more polite, civilized variants). I am using it because the trick—paying with a fake—is an obvious one, and the result—the robbed party becomes poorer—is obvious as well. And so whether it is a retiree spending his deficit-financed social security check at the dollar store or a banker spending his bailout-financed bonus on lavish gifts for his trophy girlfriend, or a construction worker drinking his economic stimulus-financed paycheck at the bar, somebody somewhere is getting robbed—and becoming poorer.
Rest assured, I am not advocating letting people starve or forgo beer or anything of the sort. A warm bed and three squares a day is, to me, a human right. I am not interested in policy (nor are policymakers interested in me). But I am interested in making a specific prediction: that government and central bank efforts to stabilize the financial system and restart economic growth will do the exact opposite: they will destroy that which they are trying to save more completely although a little bit later. They are living on stolen time.
The alternative (in case policymakers suddenly decided to pay attention and were capable of taking on board such a radical notion) is a jubilee: full repudiation of all debts public and private and a ban on all repayments, repossessions and collection activities. This would force a full shutdown and cold restart of the financial system. But it will probably have to happen anyway. In the meantime, do your best to avoid getting robbed.
Recent years have brought a veritable flood of works concerning the history of Jews in Poland and Polish-Jewish relations. The second issue has been discussed in many works; however, most of them approach it from the perspective of the Polish people — how Poles treated their fellow citizens, the Jews, what they thought of them, and which stereotypes were common in Polish society. The issue has not been investigated from the Jewish perspective. To this day we still don’t really know how Jews perceived Poland and Poles, or whether any established national stereotypes existed. It is a problem which has in recent years garnered the attention of many scholars.
Additionally there exists, in certain circles, a conviction that Polish Jews held “anti-Polish” sentiments, a conviction usually treated as an absolute truth, one that does not require any evidence. It is worthwhile to investigate whether this stereotype is reflected in the Jewish press of the inter-war period.
It became apparent that though the stereotype of the Jew (or rather the many stereotypes of Jews) is very easy to recreate from the Polish inter-war press, the opposite — recreating the stereotype of the Pole from the Polish-language Jewish press — is impossible. Further research was required to come closer to understanding how the acculturated Jewish press viewed, or rather presented, Polish society. The work was thus broadened to include the press’s attitude towards Poland as a homeland and the question of Jewish patriotism. In other words: what was the attitude of the acculturated Jewish press towards Poland and Poles? Did it present its readers with a coherent image, or did the image depend on the type of publication, the subject matter, and current needs?
In this sense the work, though it concerns a rather cohesive topic, is not a monograph in the strict sense — one presenting every aspect of the topic. It should be treated as a catalog of questions rather than a collection of answers. Most chapters, and sometimes even passages, like the ones on attitudes towards certain classes or professions, the questions regarding the causes of anti-Semitism, or the issue of reconciling a double patriotism with the demands citizens of any country must meet, deserve their own monographs. Additionally, every one of the dozens of publications presented here awaits its own research paper. As a result this work should be viewed as a review of research questions, an outline of issues which should be researched much more thoroughly and extensively.
We must also remember that any answers to the questions posed in this book apply only to one of the groups in the diverse Jewish community — acculturated Jews. And we must not forget even for a moment that this work is missing a second volume — the presentation of the same issues in the Yiddish press. There is no doubt that there were significant differences between these two perspectives, not least because they were intended for disparate audiences, audiences which differed from each other not only in language but also in education, degree of assimilation, and often political leanings. We can only hope that someone will delve into this issue sometime soon.
Anna Landau-Czajka, sociologist and historian, professor at the Institute of History of the Polish Academy of Sciences and at the Faculty of Social Sciences of the Warsaw University of Life Sciences (SGGW), chairwoman of the JHI’s Program Board. She deals with the history of Polish-Jewish relations, women’s history and the social history of the 20th century. Author of: And they shared one house. The ideas of solving the Jewish issue in Polish publications (1933–1939) (1998); What Alice discovers on her side of the looking-glass. Everyday life, society, government in children’s school books, 1785– (2002); And the Son will be Lech… The assimilation of Jews in inter-war Poland (2006). She has also published a series of texts concerning women’s history in the collected works “A Woman and…” edited by Anna Żarnowska and Andrzej Szwarc.
"date": "2018-12-17T10:57:11",
"dump": "CC-MAIN-2018-51",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828501.85/warc/CC-MAIN-20181217091227-20181217113227-00016.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9638209342956543,
"score": 2.984375,
"token_count": 820,
"url": "http://www.jhi.pl/en/blog/2015-02-26-poland-is-not-them"
} |
Heart disease is a term that is all too familiar to my family. At the very young age of 55, my mother had a heart attack. If that wasn’t enough, two of my uncles had to undergo bypass surgery because of plaque build-up and clogged arteries. After talking at length one night to my Uncle Ray, I really started thinking of how very important it is to take care of your heart. Uncle Ray challenged me that night to do a better job of taking care of my heart than he had his own.
The more I thought about heart health, the more I wanted to find out about it. So I started researching. I literally dove into every type of medical literature I could find – from articles to medical journals and magazines, and anything on the topic of the causes and prevention of heart disease. That is when I discovered the amazing research of two doctors, Dr. Linus Pauling and Dr. Matthias Rath. The results of their research were nothing short of astonishing!
In their research the doctors discovered that the accumulation of plaque on the interior lining of your arteries, which is known as atherosclerosis, actually starts with mechanical stress. Our arteries are very flexible.
Every time the heart beats, the arteries open and close. The closer the artery is to the heart, the greater the stress can be. Over time, this mechanical stress can cause tears, or lesions, within the artery. For healthy people who get proper nutrition, these lesions are not a threat. The body will naturally heal them. But for those who do not get proper nutrition, the lesions are a BIG problem.
A person whose body does not receive the proper level of nutrition will not be able to produce collagen, a substance necessary to repair the lesions in the artery wall. Without collagen, the body will do whatever it can to prevent the artery from leaking. So it grabs particles it finds floating in the bloodstream and forms a makeshift patch to place over the lesion. Over time, these patches get thicker and thicker, producing what we know as plaque build-up. A person whose body always receives the right amount of nutrients will be able to reduce the accumulation of plaque and sometimes eliminate it altogether.
Many people think they are getting all the nutrients they need just because they eat a healthy diet of fruits and vegetables. However, due to modern day farming and the use of new pesticides, fertilizers, and herbicides, the fruits and vegetables sold at the grocery store lack many of the nutrients that they once did. Dr. Pauling’s research revealed that the chances of heart disease can actually be prevented if proper nutrients are given at the right levels.
These key discoveries are what led me, along with a team of PhDs in nutrition, and Douglas Laboratories to come up with the Pauling Therapy Essentials Formula – a nutritional supplement that incorporates all the nutrients necessary to keep arteries healthy, strong, flexible, and free of plaque build-up. So if you would like to reduce your chance of heart disease, reduce or eliminate plaque build-up from your arteries and ultimately save your heart, contact us today by calling 1-800-280-5302. We guarantee, you’ll be glad you did!
Chapter Resources
Click on one of the following icons to go to that resource.
Video Clips and Animations
Chapter Review Questions
Standardized Test Practice
Click on individual thumbnail images to view larger versions.
Pyrite (Fool’s Gold)
Table – Mohs Scale
Rocks from Lava
Rocks from Magma
Limestone and Marble
Rocks and Minerals
Make the following Foldable to compare and contrast the characteristics of rocks and minerals.
Fold one sheet of paper lengthwise.
Fold into thirds.
Unfold and draw overlapping ovals. Cut the top sheet along the folds.
Label the ovals as shown.
Construct a Venn Diagram
As you read the chapter, list the characteristics unique to rocks under the left tab, those unique to minerals under the right tab, those characteristics common to both under the middle tab.
Click image to view movie.
Minerals – Earth’s Jewels
Igneous and Sedimentary Rocks
Metamorphic Rocks and the Rock Cycle
Explain how sediment becomes sedimentary rock.
Sediment consists of pieces of broken rock, shells, mineral grains, and other materials that are deposited, often deep in the ocean, where they pile up over time. As more layers of sediment pile up, the layers underneath are compacted. Water flowing through the sediment leaves behind dissolved minerals that act like glue. The compacted and cemented layers eventually become sedimentary rock.
Which changes metamorphic rock into sediment?
A. compaction and cementation
B. heat and pressure
D. weathering and erosion
The answer is D. Over time, weathering and erosion change metamorphic rock back into sediment.
List the different properties that are used to identify minerals.
Crystal, cleavage and fracture, color, streak and luster, hardness, and specific gravity are used to identify minerals.
Rank the four minerals from softest to hardest.
The correct order is: talc, gypsum, fluorite, and quartz.
Explain why intrusive igneous rocks have large, visible crystals.
Intrusive igneous rocks are formed by magma that is forced upward toward Earth’s surface, but never reaches it. The hot magma sits under the surface and cools very slowly. The cooling is so slow that the minerals in magma have time to form large crystals.
When a mineral splits into pieces with smooth, regular planes, it is said to have _______.
D. specific gravity
The answer is A. Cleavage is a way that rock can break. When rocks break with smooth, regular planes, they have cleavage. Rocks that break into pieces with jagged, rough edges have fracture.
What type of rock is formed after a geyser erupts?
A. chemical rocks
B. detrital rocks
C. organic rocks
D. volcanic rocks
The answer is A. When a geyser erupts, mineral-rich water evaporates. The minerals are left behind and they eventually form chemical rocks.
Where do extrusive igneous rocks form?
A. Earth’s surface
B. inside Earth
The answer is A. Extrusive igneous rocks form when melted rock material cools on Earth’s surface.
Which is most abundant in Earth’s crust?
The answer is B. Feldspar is a type of silicate mineral.
Which is a mineral sold for profit?
The correct answer is D. A mineral that contains a useful substance that can be sold for profit is called an ore.
Lady Bird Johnson visiting one of the first Head Start programs in 1966. (National Archives, White House Photo Office Collection)
More than $2 billion in Recovery funds allocated for Head Start and Early Head Start allowed the Department of Health and Human Services (HHS) to reach an additional 61,000 children and families – 6,000 more than originally anticipated – beyond the one million children and families the programs normally serve every year.
The Recovery Act earmarked $1 billion for Head Start and $1.1 billion for Early Head Start. Together, the two federal programs promote school readiness for kids age five and under from low-income families. Both programs focus on language, literacy, and general knowledge skills as well as enhancing physical health and social/emotional development.
In addition to funding those goals, the Recovery money allowed the two programs to:
- Increase staff salaries,
- Improve staff training,
- Upgrade Head Start centers and classrooms,
- Increase hours of operation, and
- Enhance transportation services.
All $2.1 billion has been paid out, distributed among each of the 50 states (including Native American tribal nations), six territories, and the District of Columbia.
Head Start was established in 1965 for eligible four- and five-year-old preschoolers and their families. The program has enrolled more than 25 million children since its inception. Early Head Start was established in 1995 for children three and under and pregnant women because of scientific evidence that a child’s earliest years are extremely important to healthy development.
Science is a field of study in which the students learn to evaluate patterns from the world around them in order to better understand physical and chemical phenomena, and life processes. They use both analytical thought and lateral thinking skills to problem-solve in a diverse range of contexts. The inquiry learning approach is used in junior science, and hands-on contexts are used to connect with the students' worldviews wherever possible.
At Otamatea High School students begin with an inquiry based program of science, in Year 7 and 8. This lays the foundation for the junior science course in Year 9 and 10 in which they learn about the basic principles of matter, living systems, energy, ecology, light and sound, astronomy, and earth science.
Practical sessions and demos are used to illustrate and explain the concepts, and lessons are flexible and responsive to the students' innate curiosities - providing many unique lines of inquiry. Literacy and numeracy skills are taught organically as part of the course, as these skills underpin all of science education.
In Year 11 we have three streamed classes, catering to the diverse interests and abilities of the students. An accelerated class completes Level 1 and 2 NCEA standards, as this group has been given the opportunity to gain credits in Year 10.
In Years 12 and 13 the students select from NCEA Level 2 and 3 Physics, Chemistry and Biology courses, taught by specialist teachers. A mixture of external exams and internally assessed standards is provided. The modern learning environment involves Google Classroom pages and the use of applets and simulations. An interactive, responsive classroom setting provides students with opportunities to discuss concepts, ask questions of their teacher and their peers, and consolidate learning during and after a lesson. Links to the everyday contexts they are familiar with are always provided.
The Year 13 course aims to leave the students with the skills required to succeed at university in science. Critical thinking skills, fair tests, and the nature and philosophy of science are all integrated into the Year 13 courses.
Physics looks at moving systems, electricity, modern physics, and wave motion. The main areas of study are waves/optics, electricity and magnetism, mechanics and modern physics. From how a radio tunes in to a station to Einstein's famous equation, Physics is a subject for anyone who is keen to work on practical solutions to global issues and challenges, anything from geo-engineering to mitigate climate change to developing the latest smartphone camera.
Biology involves the study of living things, both in isolation and in interaction with their environment. From the latest developments in genetics, evolution and microbial studies to the study of a critically endangered ecosystem, the life sciences provide the foundation for further study of the living world, both within and around us.
Chemistry involves the study of matter in all its incarnations, from the micro to the macro level. Atomic bonding, titrations, acids and bases and explosive reactions are all part of the joy of a chemistry course. The chemistry of the everyday is explored, as well as industrial and commercial chemical processes.
"date": "2017-04-26T17:39:50",
"dump": "CC-MAIN-2017-17",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121528.59/warc/CC-MAIN-20170423031201-00235-ip-10-145-167-34.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9382188320159912,
"score": 3.53125,
"token_count": 617,
"url": "http://otamatea.school.nz/our-curriculum/science"
} |
Colorectal cancer kills some 500,000 people every year. It is the second most common form of cancer in women and the third most common in men.
Some 80 per cent of these cancers start as colorectal polyps, which are relatively easy to spot and remove. But conventional colonoscopies are time-consuming, invasive and expensive, so various groups are looking for better ways to do the job.
One of the more promising options is colon capsule endoscopy. This involves a tiny digital camera, a light, a transmitter and a battery in a capsule which the patient swallows. As the capsule passes through the patient’s digestive system, it transmits images wirelessly to a recording device that the patient carries.
That’s handy for the patient who can continue with his or her routine, more or less as usual. But it’s not so good for medical staff who have to analyse the images later. With the camera taking pictures at anything up to 30 frames per second, that can mean long hours studying the images for every patient.
Today, Alexander Mamonov at the University of Texas at Austin and a few pals unveil an algorithm that can do the job automatically. This program examines each image in the sequence for the tell-tale signs of a polyp and flags up potential candidates for more detailed analysis.
Mamonov and co’s algorithm uses two techniques for spotting polyps. One key difference between a polyp and healthy tissues is that it protrudes from its surroundings. So the algorithm homes in on protrusions in an attempt to spot frames that contain polyps.
This is no simple task. The difficulty is to differentiate between a polyp and the many natural folds that occur in healthy tissue. To do this, the algorithm measures the curvature of the tissue using a sphere-fitting technique. The radius of the sphere that best fits the tissue fold is then a measure of the curvature.
Mamonov and co can then set a threshold curvature above which the frames are flagged for further investigation.
Another important feature of polyps is their texture, which tends to be much rougher than healthy tissue. So the algorithm automatically discards frames that have too little texture in them. However, this process is confounded by bubbles and froth in a frame which can make the image much more textured. So the algorithm also discards those frames with too much texture.
The result is a process that assesses each frame according to two criteria.
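To make the two-criteria idea concrete, here is a minimal sketch of how such a frame filter might be organised. The thresholds, the least-squares sphere fit, the Laplacian-variance texture proxy and the assumption that a per-pixel surface estimate is available are all illustrative choices of ours, not details taken from the paper.

```python
import numpy as np

# Illustrative thresholds only -- the paper does not publish its values.
CURVATURE_MIN = 0.5    # minimum best-fit curvature (1/radius) to count as a protrusion
TEXTURE_MIN = 50.0     # below this the frame is too bland to contain a polyp
TEXTURE_MAX = 500.0    # above this the frame is likely bubbles or froth

def local_curvature(surface_patch):
    """Curvature (1/R) of a patch, from a linear least-squares sphere fit.

    Fits x^2 + y^2 + z^2 + a*x + b*y + c*z + d = 0 to the patch points.
    """
    ys, xs = np.indices(surface_patch.shape)
    pts = np.column_stack([xs.ravel(), ys.ravel(), surface_patch.ravel()]).astype(float)
    A = np.column_stack([pts, np.ones(len(pts))])
    b = -(pts ** 2).sum(axis=1)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = -coef[:3] / 2.0
    radius = np.sqrt(max(center @ center - coef[3], 1e-12))
    return 1.0 / radius

def texture_measure(gray_patch):
    """Variance of a discrete Laplacian: a simple surface-roughness proxy."""
    g = gray_patch.astype(float)
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
           np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4 * g)
    return lap.var()

def flag_frame(gray_patch, surface_patch):
    """True if the frame should be kept for human review."""
    tex = texture_measure(gray_patch)
    if tex < TEXTURE_MIN or tex > TEXTURE_MAX:  # discard bland frames and froth
        return False
    return local_curvature(surface_patch) > CURVATURE_MIN
```

Frames that pass both tests are kept for human review; everything else is discarded, which is what allows hours of footage to be trimmed down to a short list of candidate images.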
So how well does this algorithm work? Mamonov and co have put it through its paces on a data set consisting of almost 19,000 images, 230 of which contain polyps. In this test, the algorithm detected polyps correctly 47 per cent of the time (with a low rate of false positives).
That performance needs to be put in context. The images are not always straightforward to examine, mainly because of shadows cast by the natural curves and folds in colons. These shadows can easily obscure polyps, making them hard to spot.
However, most polyps appear in several frames as the capsule moves through the digestive system and this provides several opportunities to spot it.
So a much better way to measure the performance of the algorithm is to look at its ability to spot polyps in the sequence of images in which they appear, rather than in each frame.
By this measure, the algorithm achieves a much more respectable recognition rate of 81 per cent. The 230 images of interest actually contained only 16 different polyps, of which the algorithm successfully spotted 13. “The algorithm correctly detects 13 out of 16 polyps in at least one frame of each corresponding sequence,” say Mamonov and co.
That’s not perfect, of course. “While our approach is by no means an ultimate solution of the automated polyp detection problem, the achieved performance makes this work an important step towards a fully automated polyp detection procedure,” they say.
That’s an honest assessment. What Manamov and co have developed is a useful stepping stone towards the automated detection of polyps,a goal that has the potential to save countless lives in future.
Ref: arxiv.org/abs/1305.1912: Automated Polyp Detection in Colon Capsule Endoscopy
"date": "2015-07-07T11:50:00",
"dump": "CC-MAIN-2015-27",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375099173.16/warc/CC-MAIN-20150627031819-00156-ip-10-179-60-89.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9488444924354553,
"score": 2.859375,
"token_count": 894,
"url": "http://www.technologyreview.com/view/514786/the-algorithm-that-automatically-detects-polyps-in-images-from-camera-pills/"
} |
Kurume azaleas were originally found in the mountains of Japan as early as 300 years ago. As many as 700 different Kurume hybrids have been made since then, of which around 200 are known today. Some of them were imported into the United States beginning in 1915, again in 1917-1918, and again in 1929. More recently a group of 50 more varieties of Kurume hybrids were brought in by the U.S. National Arboretum and released to the public in 1983.
Shown here by permission of their author, Dr. Satoshi Yamaguchi, this diagram and most of the Kurume images in the alphabetic galleries are from his Virtual Azalea website.
Many of the other Kurume images were taken by Dan Krabill in his garden and at the US National Arboretum, with very careful attention to color and nomenclature.
Several mountains, including Mt. Kirishima near the city of Kurume on the island of Kyushu, Japan, have stands of R. kiusianum, R. kaempferi, R. sataense and the original Kurume hybrids. Some authorities think the Kurumes are hybrids of the first two species, others think they are hybrids of the latter two species, and still others think other species may be involved.
As shown by this diagram, Dr. Yamaguchi is in the first camp. His diagram also shows that still more species and hybrids have also been used to produce many of the Kurume hybrids now in the nursery trade.
Kurume azaleas tend to grow as upright medium height shrubs, with numerous small flowers in a full range of colors, blooming early to early midseason.
"date": "2019-06-24T09:02:47",
"dump": "CC-MAIN-2019-26",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999298.86/warc/CC-MAIN-20190624084256-20190624110256-00336.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9731024503707886,
"score": 2.703125,
"token_count": 393,
"url": "https://pbase.com/azaleasociety/kurumehybrids"
} |
All Smiles for Similes
Lesson 3 of 9
Objective: SWBAT determine and write similes to help them understand the meaning an author is trying to get across in reading text or poetry.
What is a Simile?
I start by leading the class into a small discussion on similes by asking them what they are. Students explain that they know they have heard them, and that if I told them what they were, they would understand them better.
I write the word on the white board and then write the example: "my hands were as cold as ice." I then explain that a simile uses the words "like" or "as" to compare nouns. I have them take out their personal white boards. Once they have them out, I have them write at the top: "similes are used by authors to provide a stronger message or give words more impact."
As a class we briefly discuss what this statement means. The class does a great job of figuring out that when similes are used the author wants us to really understand something more clear so to help us they compare it with something we might be familiar with.
Listening for Similes
To practice, I read a simile to them: "Her voice hurt my ears like the nails on a chalkboard." We then talk about why an author would write that into their book. The class is very good at predicting why this might be written and creating a scenario for what might be occurring if this were really written in a story.
I then explain that we will play a little listening game, "I Hear." I will read them a poem that contains a simile, and when they hear it they will raise their hands like they do for their hearing test with the nurse. After I am finished reading the poems, I will call on someone to share the simile they heard with the class and we will discuss it.
Here are the two poems I read:
Twinkle, Twinkle Little Star
Now that they have listened to similes and have seen examples of them, it is their turn to try to write a few and to determine the meaning of some.
I start by having them write a simile onto their white board. I have them compare themselves to an animal to start. I give them the example, " In the morning I sometimes feel slow as a turtle." I then give them a moment to think and then ask each of them to write their simile on their white board. We then do a brief sharing for those that want to.
I then ask them to compare an object that is in the classroom or in their desk to something else; the example I use is, "my pencil is pointy like a tack." I then give them time and they write and share again.
The last part is understanding what a simile stands for. This is where it can be tricky in their reading. I have them do this portion on their white boards to start. I give them two similes and have them write their meanings on the white board. I give them one at a time, have them show me their white boards, and then we quickly discuss what they came up with. We do the same thing for the second simile.
Here are the two I gave: "Lisa is as skinny as a rail." and "The lion's teeth were sharp as knives."
"date": "2016-10-24T16:34:16",
"dump": "CC-MAIN-2016-44",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719646.50/warc/CC-MAIN-20161020183839-00108-ip-10-171-6-4.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9837373495101929,
"score": 4.28125,
"token_count": 683,
"url": "http://betterlesson.com/lesson/reflection/10420/exit-paper"
} |
There's a new tone in the latest report on climate change from the United Nations' expert organization on the subject. The Intergovernmental Panel on Climate Change doesn't just forecast the usual sweeping changes that are likely to occur as the planet warms, the kinds of warnings the public has heard (and often ignored) for decades. The report released Sunday goes further by pointing out alarming signs of what is happening already. In a rational world, it would be more than enough to propel world leaders into action.
Corn and wheat yields are down, the report says, possible harbingers of future disruption to the global food supply. Animals are migrating toward areas of cooler temperatures nearer the poles. The mountain snowpack in the Western United States is diminishing, reducing the country's water supply. Coral reefs, which shelter a quarter of the ocean's species, are bleaching — losing the algae that color them, causing their death over time. Droughts and heat waves are becoming more frequent and more intense. The number of people dying from the heat has increased in some regions, while the number of cold-related deaths has decreased.
Climate change is even transforming the Alaskan shore, the report says. The loss of sea ice has produced bigger waves, more erosion and the forced movement of some settlements away from the coast.
Of course, the report also includes a lot of predictions: higher sea levels, flooding of low-lying major cities, more of the extreme weather events that already have become familiar. On food, the report is more circumspect: There will be shifts in what kinds of crops grow best, and where and when they can be farmed, but it's unclear exactly what those transformations will be. Still, people in the poorest countries will be at even greater risk of starvation.
The panel points out that many governments are falling behind in two ways: Not only are they doing too little to slow and perhaps reverse climate change, but they are failing to adapt to its ongoing impacts. The United States has done some incremental planning, and some states and cities have begun more serious work. After Superstorm Sandy, for instance, New York state's utilities commission ordered extensive electrical upgrades to prevent massive power outages from future storms.
Though we can reasonably debate the details of how climate change will affect us and when, the time for debating whether it will have a serious impact is long past. We would do better to discuss how quickly we can reduce its severity by cutting greenhouse gas emissions, and which steps we should take first to reduce the effects that we are already too late to stop.
"date": "2017-01-22T12:19:53",
"dump": "CC-MAIN-2017-04",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00174-ip-10-171-10-70.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9642096757888794,
"score": 3.203125,
"token_count": 522,
"url": "http://www.dailypress.com/la-ed-climate-change-un-ippc-report-20140401-story.html"
} |
In the Waveform Editor, the Editor panel provides a visual representation of sound waves. Below the panel’s default waveform display, which is ideal for evaluating audio amplitude, you can view audio in the spectral display, which reveals audio frequency (low bass to high treble).
A. Drag the divider to change the proportion of each. B. Click the triangle to show or hide the spectral display.
To identify specific channels in stereo and 5.1 surround files, note the indicators in the vertical ruler.
The waveform display shows a waveform as a series of positive and negative peaks. The x‑axis (horizontal ruler) measures time, and the y‑axis (vertical ruler) measures amplitude—the loudness of the audio signal. Quiet audio has both lower peaks and lower valleys (near the center line) than loud audio. You can customize the waveform display by changing the vertical scale and colors.
With its clear indication of amplitude changes, the waveform display is perfect for identifying percussive changes in vocals, drums, and more. To find a particular spoken word, for example, simply look for the peak at the first syllable and the valley after the last.
The spectral display shows a waveform by its frequency components, where the x‑axis (horizontal ruler) measures time and the y‑axis (vertical ruler) measures frequency. This view lets you analyze audio data to see which frequencies are most prevalent. Brighter colors represent greater amplitude components. Colors range from dark blue (low‑amplitude frequencies) to bright yellow (high‑amplitude frequencies).
The spectral display is perfect for removing unwanted sounds, such as coughs and other artifacts.
For stereo and 5.1 surround files, you can view layered or uniquely colored channels. Layered channels better reveal overall volume changes. Uniquely colored channels help you visually distinguish them.
A. Uniquely Colored B. Layered (with Uniquely Colored still selected)
Windowing Function
Determines the Fast Fourier transform shape. These functions are listed in order from narrowest to widest. Narrower functions include fewer surrounding frequencies but less precisely reflect center frequencies. Wider functions include more surrounding frequencies but more precisely reflect center frequencies. The Hamming and Blackman options provide excellent overall results.
Spectral Resolution
Specifies the number of vertical bands used to draw frequencies. As you increase resolution, frequency accuracy increases, but time accuracy decreases. Experiment to find the right balance for your audio content. Highly percussive audio, for example, may be better reflected by low resolution.
To adjust resolution directly in the Editor panel, right-click the vertical ruler next to the spectral display, and choose Increase or Decrease Spectral Resolution.
Decibel Range
Changes the amplitude range over which frequencies are displayed. Increasing the range intensifies colors, helping you see more detail in quieter audio. This value simply adjusts the spectral display; it does not change audio amplitude.
Indicates amplitude on a scale that shows the range of data values supported by the current bit depth. (See Understanding bit depth.) 32-bit float values reflect the normalized scale below.
More Logarithmic or Linear
Gradually displays frequencies in a more logarithmic scale (reflecting human hearing) or a more linear scale (making high frequencies more visually distinct).
Hold down Shift and roll the mouse wheel over the spectral display to show frequencies more logarithmically (up) or linearly (down).
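The interplay between the windowing function, the resolution and the decibel range can be illustrated with a short sketch. This is not Audition's implementation; it is a minimal numpy example whose function and parameter names are our own, chosen to mirror the settings described above.

```python
import numpy as np

def spectrogram(signal, fft_size=1024, hop=256, window="hamming", db_range=120.0):
    """Magnitude spectrogram in dB, loosely mirroring the settings above.

    fft_size : more vertical bands -> finer frequency detail, coarser time detail
    window   : Hamming/Blackman trade surrounding-frequency leakage for precision
    db_range : clip anything quieter than (max - db_range) dB, like Decibel Range
    """
    make_win = {"hamming": np.hamming, "blackman": np.blackman, "hann": np.hanning}
    win = make_win[window](fft_size)
    frames = [signal[i:i + fft_size] * win
              for i in range(0, len(signal) - fft_size, hop)]
    mag = np.abs(np.fft.rfft(np.array(frames), axis=1))  # rows: time, cols: frequency
    db = 20 * np.log10(mag + 1e-12)
    return np.clip(db, db.max() - db_range, None)

# Example: a 440 Hz tone sampled at 44.1 kHz
sr = 44100
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (number of time frames, fft_size // 2 + 1 frequency bands)
```

Doubling fft_size doubles the number of frequency bands, but the longer window smears events in time, which is exactly the frequency-versus-time trade-off noted under the resolution setting.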
"date": "2019-07-15T19:39:49",
"dump": "CC-MAIN-2019-30",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195523840.34/warc/CC-MAIN-20190715175205-20190715200239-00050.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.862972617149353,
"score": 2.625,
"token_count": 713,
"url": "https://helpx.adobe.com/nz/audition/using/displaying-audio-waveform-editor.html"
} |
More growers are experiencing successful pest control with beneficial nematodes as they gain more knowledge and awareness of them. Usually, the growers who experience the most success are the ones who understand how to properly apply beneficial nematodes.
Many factors influence nematode effectiveness: temperature, humidity, soil moisture, chemical compatibility, spray adjuvants, equipment and mixing all play a role in nematode efficacy.
Beginning Your Program
As you begin incorporating beneficial nematodes into your pest control program, be prepared with the knowledge you need to be successful.
Before making an application, make sure you read the label and any supporting literature. Talk to other growers, distributors, manufacturers or consultants to better understand nematode applications and limitations.
Nematodes can be combined with compatible chemicals to help manage insect resistance or to complement an existing integrated pest management program. Also consider incorporating other biological organisms to more effectively manage all pest life stages. Regardless of the program used, make sure to monitor insect populations, as well as keep detailed pest control records.
Tips For Success
Two major factors affecting nematode efficacy are temperature and moisture. Apply beneficial nematodes to moist soil when temperatures are below 86ºF. Irrigate before and after nematode applications. Apply nematodes in the early morning or evening to avoid desiccation and UV radiation damage. Pull blackout curtains, increase humidity, turn off lights, close vents, tank-mix with spray adjuvants and increase the application volume to preserve moisture and prolong the life of nematodes in the soil and on foliage.
It is also critical to prevent nematode solutions from settling during applications. Nematodes quickly settle to the bottom of a solution, resulting in uneven nematode distribution and, subsequently, uneven pest control. Growers use three types of circulation – hand mixing (time consuming and labor intensive), mechanical circulation and air circulation.
Air circulation is the best method for keeping nematode solutions viable due to the ability of air to circulate and oxygenate at the same time. Air circulation systems can utilize a compressed air system (if available), or be composed of an electrical air pump attached to a bubbler. Bubblers can be purchased commercially or be handmade.
Mechanical circulation, such as a recirculation pump or paddle mixer, can be equally effective at keeping nematodes in suspension. However, this method can also heat up water solutions and physically damage nematodes over time. Use nematode solutions within two hours if using mechanical circulation and within four hours for air circulation.
Keeping your nematodes cool is important at all times, especially during the application process. A simple way to keep nematode solutions cool is by placing a cold pack in the solution.
Application equipment commonly used for conventional insecticides may be used to apply nematodes, but a few modifications will be important for easy passage of the nematodes. Remove filters of 50 mesh or finer from the application equipment. Set pump pressure below 300 psi and do not apply nematodes through mist nozzles with apertures of 0.5 mm.
With regular nematode applications and effective pest monitoring, you can keep infestations at a low manageable level. Monitor for pests regularly to ensure infestation levels do not exceed action thresholds. If infestation levels get too high, you may need to supplement beneficial nematodes with additional biological control agents and/or pesticide applications.
Beneficial nematodes are effective and are commonly used to control western flower thrips, fungus gnats and shore fly larvae. Spray all areas that may contain pest populations. Treat plugs and cuttings before integrating them into the greenhouse to prevent the spread of outside or invasive insect pests.
Don’t Forget To …
The most important things to remember when mixing and applying nematodes are:
• Effective mixing of nematodes is essential for uniform application and maximum pest control.
• Air circulation is generally the best method for keeping nematodes viable and solutions properly agitated.
• Becker Underwood recommends that nematode solutions be used immediately. Apply nematode solutions within two hours using mechanical circulation and within four hours for air circulation.
• Adding ice packs to nematode stock solutions will maintain cool temperatures and nematode viability.
For more information and resources to help you begin your beneficial nematode program, visit BeckerUnderwood.com.
"date": "2017-10-17T18:40:55",
"dump": "CC-MAIN-2017-43",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822480.15/warc/CC-MAIN-20171017181947-20171017201947-00056.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8870839476585388,
"score": 2.796875,
"token_count": 914,
"url": "http://www.greenhousegrower.com/uncategorized/how-to-begin-beneficial-nematode-applications/"
} |
Print version ISSN 0038-2353
S. Afr. j. sci. vol.104 n.3-4 Pretoria Mar./Apr. 2008
Simulating atmospheric turbulence using a phase-only spatial light modulator

Liesl Burger(I, II); Igor A. Litvin(I, III); Andrew Forbes(I, II,*)
(I) CSIR National Laser Centre, P.O. Box 395, Pretoria 0001, South Africa
(II) School of Physics, University of KwaZulu-Natal, Private Bag X54001, Durban 4000, South Africa
(III) Laser Research Institute, University of Stellenbosch, Private Bag X1, Matieland 7602, South Africa
Demonstration of the effect of atmospheric turbulence on the propagation of laser beams is traditionally a difficult task. This is due to the complexities of long-distance measurements and the scarcity of suitable laser wavelengths in atmospheric transmission windows. We demonstrate the simulation of atmospheric turbulence in the laboratory using a phase-only spatial light modulator. We illustrate the advantages of this approach, as well as some of the limitations, when using spatial light modulators for this application. We show experimental results demonstrating these limitations, and discuss the impact they have on the simulation of various turbulence strengths.
As a laser beam propagates through the atmosphere, it spreads laterally due to diffraction. This is a linear and deterministic property of all light propagation. It is often observed, however, that laser beams propagating through the atmosphere have a tendency to wander randomly about their propagation direction (rectilinear path), and appear to accumulate fluctuations in their light intensity, called scintillation (see Fig. 1). These two artifacts are also observed with the naked eye in the case of starlight: the so-called twinkling of the stars is precisely this randomness in path and intensity. It is now well understood that the origin of the randomness is atmospheric turbulence.1,2
While atmospheric turbulence is a stochastic process, there is 'structure' to the randomness. From an optical perspective, the theory of turbulent air is based on considering differences in refractive index between points in the atmosphere. Consider the Gladstone-Dale law that relates the density of air to its refractive index: the greater the density, the higher the refractive index. If random mixing of air of various densities takes place, the atmosphere will also appear random in refractive index. This mixing of the air is often driven by convective currents that move air packets of varying size around. The smallest eddies define the so-called 'inner scale', below which viscous effects are important, while the largest eddies define the so-called 'outer scale', above which the atmosphere can no longer be considered isotropic. Kolmogorov considered the simplified problem of a non-viscous and isotropic atmosphere, so that the inner scale is zero and the outer scale is infinity. These assumptions lead to a well-defined distribution for the randomness in the refractive index of the atmosphere, which can be applied in the laboratory, giving a good approximation for a real atmosphere.1,2 One of the key findings of Kolmogorov has been that the turbulence strength can be described by a single parameter, the atmospheric structure constant, Cn². If Cn² is large, the turbulence is strong, and vice versa. Typical values for Cn² range from 10^(−15) m^(−2/3) (high turbulence conditions near the ground) to 10^(−18) m^(−2/3) (low turbulence conditions at high altitude).
Not surprisingly, for applications where light must travel through the atmosphere, there are many active techniques to compensate for the effect of turbulence, from better 'vision' in astronomy to superior signal delivery in telecommunications. The use of adaptive optics for atmospheric turbulence correction is fairly commonplace these days in both astronomical and military applications (see refs 1 and 2 for a good overview of the field). A modern optical element in the form of a spatial light modulator (SLM) has allowed new approaches to adaptive optics, and has already been recommended and used for the production of Zernike modes, adaptive optics and turbulence generation.37 Most of the approaches attempting to demonstrate turbulence have concentrated on Kolmogorov turbulence, and have made use of the SLM as the turbulence screen. Studies have concentrated on the impact of the turbulence on the laser beam and the use of SLMs as closed-loop correctors of turbulence.
We introduce the concepts of turbulence in this paper, and the use of spatial light modulators to describe turbulence. There are two basic aims: first, to expound on the steps required to actually simulate atmospheric turbulence in the laboratory, and second, to point out some of the limitations in using spatial light modulators as turbulence simulators, from a laser beam shaping perspective. That is to say, we are not so concerned with turbulence itself, but the impact of the SLM in simulating this turbulence. We will consider Kolmogorov turbulence as an example, and describe the effects of a single phase screen on the far-field intensity pattern of a laser beam. In the next section we will review the theory needed to create simple turbulence phase screens, and then show how to implement these in a laboratory situation. Finally, we highlight some limitations of using a spatial light modulator for this application.
Creating Kolmogorov phase screens
There are a few simple steps required to calculate a phase screen that represents turbulence. First, one must select an atmospheric turbulence model to use. Once done, the requirement remains that the selected turbulence model should somehow relate to the method used to describe the phase. The Zernike polynomials form a neat basis-set for representing optical phase with the convenient properties that: (a) the polynomial coefficients can be directly related to the known optical aberrations, (b) the polynomials are complete and orthogonal, and (c) the polynomial coefficients required to describe Kolmogorov turbulence can be calculated analytically. An introduction to Zernike polynomials is given in Appendix A.
The key equation is Equation (A7), which we repeat here for convenience:

ø(ρ, θ) = Σn Σm [Anm Unm(ρ, θ) + Bnm Vnm(ρ, θ)].   (1)
In this context, Equation (1) implies that any radial phase function ø(ρ, θ) can be described as a sum of Zernike polynomials with weighting coefficients Anm and Bnm. The function ø(ρ, θ) may describe either surfaces of constant phase across an optical wavefront or a phase-only screen used to modify the optical wavefront. We will take the latter view in what is to follow, and use ø(ρ, θ) to describe the phase changes on the optical wavefront due to a turbulent atmosphere using the Kolmogorov theory of turbulence. Since in Kolmogorov turbulence Anm = Bnm, we will usually only refer to the Anm terms, which are sometimes called the even terms. In the case of creating a phase function that describes Kolmogorov turbulence, we use the Noll matrix approach,8 to find suitable values for Anm, after which the problem can then be solved.
Kolmogorov turbulence screens can be created by calculating the spatial covariance matrix from the Zernike expansion of the phase, following the approach of Noll.8 The resulting matrix, Inm (the so-called Noll matrix), can be used to calculate the statistical nature of the Zernike coefficients needed to describe Kolmogorov turbulence. In particular, if it is assumed that the Zernike coefficients are normally distributed with mean zero, then the variance of the distribution can be found from the diagonal terms of the Noll Matrix and can be expressed as

σnm² = Inm (D/r0)^(5/3).   (2)
The parameter r0 is the turbulence coherence length, or Fried's scale parameter, and D is the diameter of the aperture (in this case the phase screen diameter). Fried's parameter is determined by the turbulence model used; for Kolmogorov turbulence it is given (for a plane wave) by

r0 = (0.423 k² Cn² L)^(−3/5),   (4)
where L is the path length through the turbulent atmosphere, and k = 2π/λ, with λ being the wavelength of the light (in vacuum) passing through the atmosphere. The path length can be substantial, and in a worst case, would be the entire length of the atmosphere from an Earth-based laser to a telecommunications satellite (and back!). This path, which is continuous, is often made discrete by dividing the path into several 'phase screens', each representing a small portion of the path. For some cases, using only one phase screen is a good enough approximation, particularly if the length L is not too large, or if the turbulent portion of the atmosphere is thin compared to the total path, as is the case of electromagnetic signals passing through the ionosphere. In the sections to follow, we collapse the turbulent region shown in Fig. 1 into a single phase screen, thus describing a thin layer of turbulence on the far-field propagation of a laser beam (i.e. the turbulence may be thin, but the total propagation length may be any value). Because the turbulence is random, the phase screen should also change randomly.
To illustrate the process of calculating the phase screens, consider the problem of finding values for the A20 coefficient in Equation (1). First, one calculates the value of I20 from Equation (3) and, from this, a normal distribution with mean zero and standard deviation σ20 is created [σ20 is calculated from Equation (2)]. From this distribution one is then able to make random drawings for the value of A20. We now have a random sequence of values for A20. The procedure is repeated for each coefficient Anm. Since all the coefficients are now easily calculated, the summation in Equation (1) can readily be attained for each random drawing, thus constructing many Kolmogorov phase screens that can be used to simulate a randomly varying atmosphere following a Kolmogorov turbulence model (see Fig. 2).
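The procedure lends itself to a compact implementation. The sketch below assumes the per-coefficient standard deviations σnm have already been computed from Equations (2) and (3); the grid size and the example σ values are placeholders only.

```python
import numpy as np
from math import factorial

def zernike_radial(n, m, rho):
    """Radial polynomial R_n^m(rho) of Appendix A (n >= m >= 0, n - m even)."""
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s) /
             (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def kolmogorov_screen(sigma, grid=256, rng=None):
    """One random phase screen: Zernike terms with N(0, sigma_nm^2) coefficients.

    sigma: dict mapping (n, m) -> standard deviation sigma_nm from Equation (2).
    """
    rng = rng or np.random.default_rng()
    y, x = np.mgrid[-1:1:grid * 1j, -1:1:grid * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    phase = np.zeros((grid, grid))
    for (n, m), s in sigma.items():
        R = zernike_radial(n, m, rho)
        A, B = rng.normal(0, s), rng.normal(0, s)  # fresh random draw per screen
        phase += A * R * np.cos(m * theta) + B * R * np.sin(m * theta)
    phase[rho > 1] = 0.0  # restrict to the unit-radius pupil
    return phase

# Example with placeholder sigmas (real values come from the Noll matrix):
screen = kolmogorov_screen({(1, 1): 0.8, (2, 0): 0.5, (2, 2): 0.5, (3, 1): 0.3})
```

Calling kolmogorov_screen repeatedly with the same σnm values yields an ensemble of statistically identical but individually random screens, mimicking a randomly varying atmosphere.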
Phase screens were calculated using the procedures discussed in the previous section, and digitized into a 256-level grey-scale image, representing a 0 to 2π phase shift on our phase-only spatial light modulator (Holoeye, model 1080P). The spatial light modulator (SLM) was illuminated with a Gaussian beam, expanded from a helium-neon laser so as to have a near-flat wavefront at the SLM. Far-field intensity patterns were generated using a Fourier-transforming lens, and recorded with a CCD camera (Cohu, model 4812). A schematic illustration of the set-up is shown in Fig. 3.
Spatial light modulator
We have outlined in the previous section how to calculate phase screens that describe Kolmogorov turbulence. We make use of a spatial light modulator in this section to create the phase screens. As has been stated, the SLM must impart a phase change on the laser beam according to Equations (1)(4). Dealing with the phase of the incoming light is reminiscent of holography, and one can regard the spatial light modulator as a digital hologram. The SLM is a liquid crystal display device on a silicon backing, as shown in Fig. 4(a). The liquid crystal is the active component, while the silicon backing is to allow the element to work in reflection mode (the light reflects off the silicon, thus passing through the liquid crystal twice). The liquid crystal display comprises 1920 × 1080 square pixels of 8 µm size each, and can be refreshed at 60 Hz. Each pixel is electrically addressed in discrete steps of 256 voltage levels, corresponding to an 8-bit grey-scale colour display. The principle behind the SLM is illustrated in Fig. 4(b). A grey-scale image is displayed on the SLM, treating it as a standard computer monitor. Each pixel is assigned a colour from black to white in 256 levels (grey scale). The SLM relates the light intensity at a given pixel to an applied voltage, and since the liquid crystal in each pixel is birefringent, the refractive index for a particular incoming polarization of light is changed. Thus two adjacent pixels on the SLM can be addressed to give two very different refractive indices (say, n1 and n2), so that the light falling on the pixels experiences different phase changes (kn1l and kn2l). The phase may be changed from 0 (black) to 2π (white) at each pixel by appropriate choice of image displayed on the SLM. Thus all the calculated and displayed phase screens to be shown are grey-scale images representing the desired local phase change of the incoming light.
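Writing a calculated phase screen to the SLM then amounts to wrapping the phase into [0, 2π) and quantizing it to the 256 available grey levels, as in this small sketch (the function name is our own):

```python
import numpy as np

def phase_to_grayscale(phase):
    """Wrap phase into [0, 2*pi) and quantize to the SLM's 256 grey levels."""
    wrapped = np.mod(phase, 2 * np.pi)
    return np.round(wrapped / (2 * np.pi) * 255).astype(np.uint8)

gray = phase_to_grayscale(screen)  # 'screen' from the earlier phase-screen sketch
```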
There are two main components to the far-field intensity pattern: a beam-wandering effect due to the tip/tilt phase contributions, and a beam-intensity change due to the influence of higher order terms on the phase structure. These higher order terms lead to larger beams, and scintillation in some cases. The full effects of turbulence are described in Fig. 5. The sequence of images from (a) to (d) shows experimental images of decreasing levels of turbulence with associated phase screens. As expected, the beam returns to a near-Gaussian intensity when the turbulence is weak.
The zero order is clearly seen in Fig. 5(a) as a central bright spot. Since the experimental images are captured after a Fourier-transforming configuration, one can readily calculate the expected intensity distribution at the camera, for comparison between theory and experiment. In the far field, the phase modulation due to the Kolmogorov screen is fully realized as intensity changes (which would not have been the case had the intensity been measured immediately after the phase screen). Figure 6 shows a sequence of images: (a) the input Gaussian beam at the SLM, (b) an example phase screen written to the SLM, (c) the calculated laser beam intensity at the camera, and (d) the measured laser beam intensity at the camera.
There is good agreement between the calculated and measured laser beam intensities, with both showing a bright central spot surrounded by a semi-circle of light. Calculations for Fig. 6 were based on D = 5.7 mm and r0 = 0.001 m.
Spatial light modulator limitations
In addition to demonstrating laboratory-based turbulence, we have also explored the limitations of using phase-only SLMs for this application. While this analysis is generally true for all SLMs, we highlight the issues by using our SLM as an example. We find that although the SLM is very versatile in simulating turbulence, there are regimes where it is not well suited, due to insufficient capacity to resolve the phase element. The zonal regions are tightly packed in the case of very strong turbulence. This is evident in Fig. 7(a), where the modulation in phase is very rapid near the edges of the phase screen, and cannot be resolved sufficiently by the available pixels on the SLM. This leads to two simultaneous effects: a variable-level binary element with low efficiency at some points, thus changing the amplitude transmission, and a modified turbulence structure due to shifting in the zone peaks (since the zone positions determine the function of the element) as a result of poorly resolved phase transitions.
The phase change [Fig. 7(b)] at the lower end of the range is too small to record with the available grey-scale levels, since the full 256 levels are associated with a 2π phase shift. Thus again, the continuous phase function is written as a low-level binary element. The consequences of these deleterious effects can be seen in Figs 7(c) and (d): in the former, the additional energy in the various diffraction orders, due to the binary effects of SLM resolution, are noticeable; in the latter case, the zeroth order is very evident, indicating the low diffraction efficiency of the SLM under these conditions [note that Figs 7(a) and (c) do not form a pair, and likewise, (b) and (d)]. Further limitations include the frame rate of the SLM for time evolving turbulence, placing restrictions on the input parameters to the model, such as wind speed.
In order to quantify some of the effects noted above, consider that the efficiency of the SLM in producing the desired turbulence output in the m-th order is given by

ηm = [sin(πm/N) / (πm/N)]²,   (5)
with the energy distributed in discrete orders given by m = pN + 1 (for integer p), with N the number of pixels used in describing the phase between zones. Figure 8 shows a strong turbulence screen with example cross sections showing the phase modulation (0 to 2π) and the corresponding zones. The figure highlights that the zone spacing is not uniform across the SLM, and in this case, the spacing between zones decreases towards the edges of the SLM.
To illustrate the impact of this, we calculate the effective efficiency of the phase screens of Figs 8(b) and (c) as written to the SLM, assuming for convenience that the phase screens are written across 1000 pixels. The results are shown graphically in Figs 9(a) and (b), respectively.
The zone spacing in the horizontal cross section [Fig. 9(a)] is limited to only a few pixels near the edge, which decreases the efficiency in the 1st order to about 88%. The varying efficiency across the element leads to an effective amplitude mask that needs to be included in all calculations. In contrast, Fig. 9(b) shows high efficiency across the entire phase screen due to the large zone spacing. Note that these calculations are based on the phases shown in Figs 8(b) and (c), which we treat as independent functions in this analysis, in order to illustrate the SLM limitation better. When the turbulence is further increased, with D = 5.7 mm and r0 = 0.0001 m, the SLM is reduced to single pixel representation of each zone, thus resulting in either zero diffraction efficiency or zone shifting in a two-level elementboth are undesirable.
When the turbulence is weak (D = 5.7 mm and r0 = 0.1 m), the SLM is still capable of resolving the small phase changes at about 5 pixels, giving >80% efficiency. As the turbulence is further decreased, so the number of pixels used will decrease, until the SLM displays a single-phase change across the entire beam, at the limit of very weak turbulence.
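The efficiency penalty of coarse zone sampling is easy to check numerically. The following sketch, our own and based on the form of Equation (5) given above, evaluates the first-order efficiency for a few zone widths:

```python
import numpy as np

def first_order_efficiency(N):
    """Equation (5) with m = 1: eta_1 = [sin(pi/N) / (pi/N)]^2."""
    x = np.pi / N
    return (np.sin(x) / x) ** 2

for N in (3, 4, 5, 10, 100):
    print(f"N = {N:3d} pixels per zone -> "
          f"first-order efficiency {first_order_efficiency(N):.1%}")
```

With only four or five pixels per zone the efficiency falls into the 80-88% range quoted above, and it collapses as the zones shrink towards single-pixel width.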
This work has outlined the basic steps required to simulate atmospheric turbulence in the laboratory using a spatial light modulator. The atmospheric model was collapsed into a single plane of turbulence of any desired strength, and implemented as a phase-only screen on the spatial light modulator. A Fourier-transforming lens allowed the far-field intensity pattern to be observed, so that any total propagation distance may be simulated. In order to simulate long-path turbulent atmospheres, one would require multiple phase screens to depict the turbulence; this could readily be implemented in the laboratory using multiple SLMs in series. We have highlighted some of the limitations in using SLMs for this application; these manifest themselves only at very strong and very weak turbulence as binary grating-like errors on the desired phase. We believe that this approach of using SLMs for the simulation of turbulence is versatile and cost-effective, and allows for useful laboratory-scale demonstrations.
1. Andrews L.C. and Phillips R.L. (1998). In Laser Beam Propagation Through Random Media. SPIE Optical Engineering Press, Bellingham, WA.
2. Andrews L.C. (2004). In Field Guide to Atmospheric Optics. SPIE Optical Engineering Press, Bellingham, WA.
3. Love G.D. et al. (1995). Binary adaptive optics: atmospheric wavefront correction with a half-wave phase shifter. Appl. Opt. 34, 6058-6066.
4. Love G.D. and Gourlay J. (1996). Intensity-only modulation for atmospheric scintillation correction by liquid-crystal spatial light modulators. Opt. Lett. 21, 1496-1498.
5. Love G.D. (1997). Wavefront correction and production of Zernike modes with a liquid crystal spatial light modulator. Appl. Opt. 36, 1517-1524.
6. Bold G.T., Barnes T.H., Gourlay J., Sharples R.R. and Haskell T.G. (1998). Practical issues for the use of spatial light modulators in adaptive optics. Opt. Commun. 148, 323-330.
7. Dayton D.C., Browne S.L., Sandven S.P., Gondlewski J.D. and Kudryashov A.V. (1998). Theory and laboratory demonstrations on the use of a nematic liquid crystal phase modulator for controlled turbulence generation and adaptive optics. Appl. Opt. 37, 5579-5589.
8. Noll R.J. (1976). Zernike polynomials and atmospheric turbulence. J. Opt. Soc. Am. 66, 207-211.
Received 22 January. Accepted 14 April 2008.
The Zernike polynomials are a set of orthogonal polynomials that arise in the expansion of a wavefront function for optical systems with circular pupils. If we assume that the pupil is of unit radius, then the expansion of an arbitrary function ø(ρ, θ), where ρ∈[0, 1] and θ∈[0, 2π], in an infinite series of these polynomials, will be complete. The circle polynomials of Zernike have the form of a complex angular function modulated by a real radial polynomial:

Zn±m(ρ, θ) = Rnm(ρ) e^(±imθ),   (A1)
where n ≥ |m|, n ≥ 0 and n − m is even.
It can be shown that the radial function also obeys an orthogonality relation, and is given by

Rnm(ρ) = Σ_{s=0}^{(n−m)/2} (−1)^s (n − s)! / [s! ((n + m)/2 − s)! ((n − m)/2 − s)!] ρ^(n−2s).   (A2)
The radial function is normalized such that Rnm(1) = 1. By definition, Rnm(ρ) = 0 when n − m is not even. Equation (A1) can be written as

Zn±m(ρ, θ) = Rnm(ρ) [cos(mθ) ± i sin(mθ)],   (A3)
thus we can construct two real functions Unm and Vnm such that

Unm(ρ, θ) = Rnm(ρ) cos(mθ) and Vnm(ρ, θ) = Rnm(ρ) sin(mθ).   (A4)
The orthogonal property of these polynomials allows any function ø(ρ, θ) to be expressed as a linear sum of Unm and Vnm with suitable weighting coefficients Anm and Bnm, respectively, to yield:

ø(ρ, θ) = Σn Σm [Anm Unm(ρ, θ) + Bnm Vnm(ρ, θ)].   (A7)
"date": "2015-03-04T20:50:46",
"dump": "CC-MAIN-2015-11",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463658.66/warc/CC-MAIN-20150226074103-00096-ip-10-28-5-156.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9023714065551758,
"score": 2.75,
"token_count": 4669,
"url": "http://www.scielo.org.za/scielo.php?script=sci_arttext&pid=S0038-23532008000200011&lng=en&nrm=iso&tlng=en"
} |
Devices in classrooms can empower students when used effectively. But how do teachers know if they are integrating technology effectively? Here are questions to ask about time that help teachers use effectively integrate technology.
What percentage of time are students in creative apps such as Synth, Tour Creator, ThingLink, Jamboard, Canva, Flipgrid, Google My Maps, Google Sites, etc? What percentage of time are students in Google Docs or a word-processing tool?
What percentage of time are students consuming from self-paced interactive tools such as video paired with EdPuzzle, Desmos, Google My Maps, Google Earth, ThingLink, Google Expeditions, etc? What percentage of time are students learning from the teacher and a slideshow?
What percentage of time are teachers speaking to students one-to-one or in groups of five or fewer? What percentage of time are teachers lecturing to the whole class or not speaking at all?
The more a teacher increases the percentage in the first question of each pair and decreases it in the second, the more effectively they are integrating technology.
"date": "2019-07-19T06:44:40",
"dump": "CC-MAIN-2019-30",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526064.11/warc/CC-MAIN-20190719053856-20190719075856-00096.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9212754964828491,
"score": 2.5625,
"token_count": 220,
"url": "https://www.bamradionetwork.com/categories/tags/tag/educational-technology"
} |
What is purring?
According to research posted by the American Association for the Advancement of Science, the purr is something cats are able to do from birth, when they purr primarily while suckling. Purring is used in a wide variety of circumstances, not just when a cat is happy. For instance, veterinarians have noticed that some cats purr continuously when they are chronically ill or appear to be in severe pain. It is thought that they do so as a way to solicit care from humans.
Others are of the opinion that a cat purrs when ill or in pain to ward off threats. If a cat is ill in the wild, he may purr when approached by another cat, so the approaching cat does not see him as a threat and attack.
Cat purrs range from a deep rumble to a raspy, broken sound, to a high-pitched trill, depending on the cat's mood and/or physiology. Many cats will "wind down" when going to sleep, with a long purring sigh that drops melodically from a high to a low pitch. A cat purrs at roughly the same rate as an idling diesel engine, around 26 cycles per second.
How does a cat purr?
There has been a lot of speculation on how purring occurs. According to some, a purr is created by the vibration of a cat's vocal cords when it inhales and exhales. Others feel it is caused by soft palate vibrations. Some have wondered if cats have a set of false vocal cords within the larynx.
Some researchers theorize it is a vibration caused by blood passing through the large veins in the cat's chest cavity, amplified by the diaphragm, which passes up the windpipe and into the sinus cavities of the skull.
Electromyographic tests (which measure the level of electrical activity in muscles) seem to indicate it is caused by the activation of the muscles of the larynx, and partial closure of the glottis (the opening of the larynx).
The most recent declarative statement found on the web was made by Katharine Houpt, director of the Animal Behavior Clinic at Cornell University. In 2002, she is quoted as saying: "It's a vibration of the larynx that resonates down to the windpipe and into the diaphragm. Unlike meowing or human speech, purring isn't the result of air passing over the vocal cords."
"date": "2017-09-24T03:17:39",
"dump": "CC-MAIN-2017-39",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689845.76/warc/CC-MAIN-20170924025415-20170924045415-00416.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9578037261962891,
"score": 3.1875,
"token_count": 516,
"url": "http://glendamoore.org/catpurrs.htm"
} |
Chattel slavery is the ownership of another person. The practice was outlawed in the United States well over 100 years ago. Since corporations are now people, and ownership of people is illegal, ownership of a corporation must now be illegal as well. Legally, the case is sound to sue and prosecute all stockholders and all owners of private corporations.
Now if you think such an action might cause businesses to flee the US, think about this... taking a chattel slave across a border is another felony, called human trafficking.
Therefore, if such a suit were filed, the Supreme Court of the United States would have no remedy except to reverse its Citizens United decision and declare that corporations are not humans, which would strip corporations of their right to free speech and freedom of religion, and above all of the ability to donate unlimited amounts of money to politicians. They would then be re-bound by the previous laws regarding election finance.
"date": "2016-04-30T14:57:38",
"dump": "CC-MAIN-2016-18",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111868.79/warc/CC-MAIN-20160428161511-00172-ip-10-239-7-51.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9793259501457214,
"score": 2.5625,
"token_count": 192,
"url": "http://z1p2.newsvine.com/_news/2012/08/03/13100536-it-just-dawned-on-me-ownership-of-a-corporation-is-now-illegal"
} |
Near real time satellite and airborne imagery
The timely distribution of data and information during a wildfire is of critical importance to the Incident Command Structure. The most critical element in rapidly evolving disaster scenarios is the integration of all "observations" of the event to effectively ascertain shifting suppression demands, redeploy personnel and equipment, and monitor fire movement and the effectiveness of suppression activities. Improving sensor technologies allow more refined observations from both orbital and sub-orbital platforms, but that data and information are of little value if "stale". Therefore, an element of our WRAP project will evaluate and improve data delivery (telemetry) to ensure that a rapid decision-support mechanism is in place for wildfire analysis.
There is always room for improving the speed with which the right information reaches the right people so that a fire can be managed properly. NASA has pioneered the use of sensor webs, which consist of a distributed set of instruments on the ground, in the air and in space, whose measurements are integrated to improve the assessment and response to wildfire. Different configurations of sensors combined with human knowledge and current information technology, have the potential to yield a strong set of tools to support decision-making. This project will provide the architecture, information and tools necessary for real-time delivery of geospatial information in conjunction with our partners. | <urn:uuid:d812c93c-6d0a-4a31-9688-b1c4edc7463c> | {
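The staleness problem is, at its core, a data-plumbing problem. As a purely illustrative sketch (the feed names, record fields and 30-minute threshold below are invented, not part of WRAP), a decision-support layer might merge time-stamped observations from satellite, airborne and ground feeds and flag anything that has gone stale before it reaches incident commanders:

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(minutes=30)  # assumed freshness window, not a WRAP figure

def merge_observations(feeds, now=None):
    """Merge time-stamped observations from several sensor feeds into
    one newest-first list, flagging readings older than MAX_AGE."""
    now = now or datetime.now(timezone.utc)
    merged = []
    for source, readings in feeds.items():
        for obs in readings:
            merged.append({
                "source": source,
                "time": obs["time"],
                "position": obs["position"],
                "stale": now - obs["time"] > MAX_AGE,
            })
    return sorted(merged, key=lambda o: o["time"], reverse=True)

# hypothetical feeds: one fresh airborne fix, one stale satellite fix
now = datetime.now(timezone.utc)
feeds = {
    "satellite": [{"time": now - timedelta(hours=2), "position": (37.1, -119.4)}],
    "airborne": [{"time": now - timedelta(minutes=5), "position": (37.2, -119.5)}],
}
for obs in merge_observations(feeds, now):
    print(obs["source"], "stale" if obs["stale"] else "fresh")
```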
"date": "2013-12-07T00:45:42",
"dump": "CC-MAIN-2013-48",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163052912/warc/CC-MAIN-20131204131732-00002-ip-10-33-133-15.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9000570774078369,
"score": 2.921875,
"token_count": 263,
"url": "http://geo.arc.nasa.gov/sge/WRAP/products.html"
} |
In the hunt for more spectrum to speed up mobile networks, Vodafone and Huawei Technologies have successfully tested a technology that lets LTE and GSM share the same frequencies.
The speed of future mobile networks will depend on the amount of spectrum mobile operators can get their hands on. The more they get, the wider the roads they can build.
One thing they can do to get more space is to reuse frequencies that are currently used for older technologies such as GSM and 3G. But that isn't as easy as it sounds, as operators still have a lot of voice and messaging traffic on those older networks. That traffic isn't going away for a long time, irrespective of the level of competition from Internet-based services.
However, using a technology called GL DSS (GSM-LTE Dynamic Spectrum Sharing) Vodafone and Huawei have shown a way to allow GSM and LTE to coexist.
In a traditional mobile network, operators allocate each technology an exclusive set of frequencies. For example, many operators, including Vodafone, currently hold 20MHz of spectrum at 1.8GHz, of which 10MHz is used for LTE and the rest for GSM traffic.
GL DSS lets Huawei's SRC (Single Radio Controller) give GSM a higher priority during periods of heavy traffic, ensuring that voice calls get through unharmed. But the SRC can also provide more room for LTE when users aren't making calls, allowing for better throughput, the vendor said on Tuesday.
This trial verified the technology's performance in Vodafone Spain's commercial network, with LTE capacity gains of up to 50 percent, according to Huawei. That equals another 32.5Mbps of bandwidth, on paper. Smartphones and other devices with a cellular connection don't have to be upgraded for the technology to work.
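Huawei hasn't published the SRC's actual scheduling algorithm, so the sketch below is only an illustration of the principle; the calls-per-carrier figure, protected floor and function names are invented. It hands GSM however many 200 kHz carriers current voice traffic needs and releases the rest of the shared 20MHz block to LTE:

```python
import math

GSM_CARRIER_KHZ = 200      # a GSM carrier is 200 kHz wide
SHARED_BAND_KHZ = 20_000   # the shared 20MHz block at 1.8GHz
GSM_FLOOR_KHZ = 2_400      # assumed always-on minimum for signalling

def allocate(active_voice_calls, calls_per_carrier=8):
    """Split the shared band for one scheduling period: GSM gets
    priority, LTE gets whatever is left over."""
    carriers = math.ceil(active_voice_calls / calls_per_carrier)
    gsm_khz = max(GSM_FLOOR_KHZ, carriers * GSM_CARRIER_KHZ)
    gsm_khz = min(gsm_khz, SHARED_BAND_KHZ)
    return {"gsm_khz": gsm_khz, "lte_khz": SHARED_BAND_KHZ - gsm_khz}

print(allocate(400))  # busy hour -> {'gsm_khz': 10000, 'lte_khz': 10000}
print(allocate(12))   # quiet     -> {'gsm_khz': 2400, 'lte_khz': 17600}
```

The design choice is the same trade-off the article describes: voice traffic can never be squeezed out, while idle voice capacity is immediately recycled as LTE bandwidth.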
"What we see now is that GSM and 3G will live much longer than anyone expected. This could be one of the things you as a carrier do to increase sustainability and scalability," said Sylvain Fabre, research director at Gartner.
Huawei and Vodafone didn't say when they expect GL DSS to become available to users.
Send news tips and comments to [email protected] | <urn:uuid:1b97e090-3332-43a7-978a-c4904a53117d> | {
"date": "2015-01-31T19:22:02",
"dump": "CC-MAIN-2015-06",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122030742.53/warc/CC-MAIN-20150124175350-00237-ip-10-180-212-252.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.953336238861084,
"score": 2.546875,
"token_count": 467,
"url": "http://www.cio.com/article/2375448/mobile/lte-and-gsm-are-getting-hitched-thanks-to-new-technology.html"
} |
World AIDS Day is held on December 1st each year and is an opportunity for people worldwide to unite in the fight against HIV, show their support for people living with HIV and to commemorate people who have died. World AIDS Day was the first ever global health day and the first one was held in 1988.
Broward County Alumnae Chapter of Delta Sigma Theta Sorority, Inc and
Delta Education & Life Development Foundation, Inc. PRESENT:
FAMILY ADOLESCENT HEALTH & SEXUALITY CONFERENCE | <urn:uuid:727a0a0d-cc38-4ecd-8c18-a504b5753d78> | {
"date": "2017-11-22T13:07:06",
"dump": "CC-MAIN-2017-47",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806586.6/warc/CC-MAIN-20171122122605-20171122142605-00256.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.973686933517456,
"score": 2.703125,
"token_count": 111,
"url": "https://ufabasofla.wordpress.com/2012/11/29/world-aids-day-december-1-2012/"
} |
Page 1 of 2
Born in 1791, Charles Babbage was the man who invented calculating machines that, although they were never realised in his lifetime, are rightly seen as the forerunners of modern programmable computers. This is the story of his life, his Difference Engine and Analytical Engine.
Can you imagine a steam-driven computer the size of a room?
No, far from it. A computer can be implemented using all sorts of technology - electromechanical (relays), valves, transistors and, most recently of all, integrated circuits.
The idea of a computer is independent of the hardware used to implement it, and a purely mechanical computer is just as feasible as a purely electronic one. Charles Babbage may never have built his computer but it was a computer - designed in metal and intended to be driven by a steam engine.
A Victorian eccentric?
No, the father of computing and a surprisingly modern thinker.
(December 26, 1791 - October 18, 1871)
Charles Babbage was born in Surrey on Boxing Day 1791, the son of a banker. This may have been the source of his fascination with numbers, but whatever the reason, he occupied the Lucasian chair of mathematics at Cambridge from 1828 to 1839. To characterise Babbage as a mathematician is misleading because his interests were much more wide-ranging - a polymath is closer.
In Babbage's day mathematics was capable of delivering in theory more than it could in practice. The reason was simply the difficulty of performing the huge amounts of arithmetic needed for tide prediction, navigation and life insurance risks, to name but three important numerical problems of the time. Even a few decades ago mathematical tables of all sorts were commonplace.
Now it is cheaper, simpler and more reliable to calculate every result from scratch each time it is needed. Before the computer, tables were vital, but the only way to produce them was laborious and error-prone human effort. Sir John Herschel, the astronomer, said
"an undetected error in a logarithmic table is like a sunken rock at sea"
- dramatic but none the less true.
The solution to the problem was some kind of calculating aid, but before the development of electrical science this was easier said than done. There had been mechanical calculating devices before Babbage. Perhaps the best known is the Pascaline, an arrangement of cogs and gears invented by Pascal in 1642. These machines may not have been powerful or reliable, but they did show that it was possible to add and subtract using mechanical gear wheels to count up or down. (And we all know that multiplication is just repeated addition and division is just repeated subtraction.)
Babbage didn't just make a better mechanical calculator, however, he built a machine that would perform a specific type of calculation.
The task of producing almost any mathematical table can be reduced to evaluating a suitable polynomial. A polynomial is an expression that involves nothing but powers of a value - squares, cubes and so on. An expression such as x⁴ + 3x² + 2, for example, is a fourth-order polynomial because the highest power that it involves is x⁴.
By selecting the order and the coefficients correctly you can make a polynomial fit most other functions reasonably well. This means that you can create a table by finding a polynomial that fits the function you are interested in reasonably well and then tabulating the polynomial. The only problem is that evaluating the polynomial involves lots of multiplications and additions, and that isn't the sort of thing you want to spend your life doing.
This is where an interesting mathematical property of polynomials becomes important. For example, consider the simple polynomial x³, i.e. x cubed. If you make a table of this function from 1 to N, the differences between each successive pair of results form what is called the '1st difference'. Taking the difference between successive values in the 1st difference column gives the 2nd difference and so on. If you actually construct a difference table for x³ you will see something like the following table:

| x | x³  | 1st diff | 2nd diff | 3rd diff |
|---|-----|----------|----------|----------|
| 1 | 1   |          |          |          |
| 2 | 8   | 7        |          |          |
| 3 | 27  | 19       | 12       |          |
| 4 | 64  | 37       | 18       | 6        |
| 5 | 125 | 61       | 24       | 6        |
Notice that the result of the 3rd difference is a constant and so the fourth and subsequent differences would be zero.
This isn't an accident. If you take differences of an nth order polynomial the nth differences are constant and hence all subsequent differences vanish.
Well, as I said, an interesting property - but how can it be useful?
Suppose I ask you to now work out 6³ - unaided by machine!
You could start multiplying 6 by itself, but you already know the value in the 3rd difference column of the table, 6. Adding 6 to 24 gives the next value in the 2nd difference column, i.e. 30; adding 30 to 61 gives the next value in the 1st difference column, i.e. 91; and finally adding this to the result of 5³, i.e. 125, gives the solution 216.
So you managed to work out the result of 6 cubed by making three additions!
This is the power of the difference table.
The whole method generalises to any polynomial of any degree. All you need are the first n results in the table, from which you can work out the n-1 differences. Once you have these, calculating the next and subsequent values in the table is just a matter of n regular additions.
You should be able to see that this procedure is ideal for use by a machine. It is also incredibly well suited to a mechanical implementation.
All you have to do is build an adding mechanism and repeat it for each of the difference sums that you need. At the start of the calculation you would set the number wheels to the values of the differences, and then simply turn the handle to get the next result and the next set of differences. An entire table could be produced by repeatedly turning the handle.
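Although Babbage worked in brass and steel, the difference method is easy to demonstrate in software. The sketch below is a modern illustration, not Babbage's own procedure or notation: it extracts the trailing differences from the first four values of x³ and then, exactly as the engine's wheels would, produces every further entry of the table using nothing but addition.

```python
def trailing_differences(values):
    """Last entry of each difference row: [f(n), 1st, 2nd, ...]."""
    diagonal = [values[-1]]
    row = values
    while len(row) > 1:
        row = [b - a for a, b in zip(row, row[1:])]
        diagonal.append(row[-1])
    return diagonal

def difference_engine(values, count):
    """Produce `count` further table entries using addition only."""
    regs = trailing_differences(values)   # the engine's number wheels
    out = []
    for _ in range(count):
        # ripple the additions up from the constant difference
        for i in range(len(regs) - 2, -1, -1):
            regs[i] += regs[i + 1]
        out.append(regs[0])
    return out

first_four = [x ** 3 for x in range(1, 5)]    # 1, 8, 27, 64
print(difference_engine(first_four, 2))       # -> [125, 216]
```

The inner loop is the whole machine: no multiplication ever happens, which is exactly why the method suited gear wheels so well.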
Babbage's Brain Is On Display At the London Science Museum
Photo Alan Levine | <urn:uuid:441420df-8071-4dbc-a73b-c0f270cce504> | {
"date": "2015-10-09T05:04:47",
"dump": "CC-MAIN-2015-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737913815.59/warc/CC-MAIN-20151001221833-00100-ip-10-137-6-227.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.954293966293335,
"score": 3.53125,
"token_count": 1208,
"url": "http://i-programmer.info/history/8-people/106-charles-babbage.html"
} |
An effective solution in times of drought!
The pressures on cattle producers during times of drought are numerous. Managing scarce grazing resources and maintaining production of breeding cows become particular issues. With stock already pulled down by poor feed, graziers need to limit additional stress wherever possible.
Graziers are looking at methods, not only of managing stock now and reducing the stress on their cows, but also reducing the impact of the drought on future years, once the drought has broken.
The use of a weaning ring allows for early weaning without having to separate the calf from the cow.
“The drought is forcing many graziers to wean their calves early to reduce stock numbers and to take pressure off their cows,” says Gillian Stephens, EasyWean.
But this practice of separating small calves from cows is stressful for both the cow and calf. “As most of the stress of weaning is the separation factor, the use of a weaning ring allows graziers to wean their calves early while keeping the cow and calf together.”
“Calves can be weaned next to their mothers, taking the pressure off cows in terms of milk production, and eliminating production loss from the stress of early separation. By weaning early, cows will have a better chance of regaining condition before joining, ensuring higher conception rates next season. In addition, the calves will continue to grow through the weaning period provided there is sufficient paddock feed,” Gillian said.
Most producers in the drought-stricken areas of Queensland are likely to be looking to remove their weaners from the farm as soon as possible. Fitting EasyWean to the calf for a week or two prior to separation will significantly reduce the stress on the cow and the calf, providing an effective production advantage to beef producers.
The use of EasyWean also allows producers to manage scarce grazing resources by keeping cows and calves (weaners) together in one herd, allowing greater flexibility in grazing management. If cows and calves can remain together after weaning, a four- to six-week weaning is suggested. Management of a single herd is then an option, freeing more paddocks for planning remaining feed and optimising any plant recovery.
For those grazing the Long Paddock, where separating cows and calves is not an option, and many cows are already poorly, using a weaning ring is an ideal solution.
“As cattle graziers ourselves, we understand the pressures facing many producers, not least the financial burden being felt by many. To make our EasyWean solutions more accessible we offer a Rent-a-Ring service and can even provide limited second hand noserings to those needing a more affordable option” Gillian said.
“The use of EasyWean noserings can help in many situations. We will help you manage your drought!”
Contact EasyWean; 1300 327 993; www.easywean.com.au | <urn:uuid:50f4aa6e-4bc5-49f3-8aa3-00447165284a> | {
"date": "2019-07-19T20:45:09",
"dump": "CC-MAIN-2019-30",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526359.16/warc/CC-MAIN-20190719202605-20190719224605-00176.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9496228694915771,
"score": 2.59375,
"token_count": 607,
"url": "https://easywean.com.au/effective-weaning-in-times-of-drought/"
} |
About This Chapter
How it works:
- Begin your assignment or other humanities work.
- Identify the literary terms and techniques concepts that you're stuck on.
- Find fun videos on the topics you need to understand.
- Press play, watch and learn!
- Complete the quizzes to test your understanding.
- As needed, submit a question to one of our instructors for personalized support.
Who's it for?
This chapter of our humanities tutoring solution will benefit any student who is trying to learn about literary terms and techniques and earn better grades. This resource can help students, including those who:
- Struggle with understanding symbolism and imagery in literature, allusion and illusion, similes in literature, or any other literary terms and techniques topic
- Have limited time for studying
- Want a cost effective way to supplement their humanities learning
- Prefer learning humanities visually
- Find themselves failing or close to failing their literary terms and techniques unit
- Cope with ADD or ADHD
- Want to get ahead in humanities
- Don't have access to their humanities teacher outside of class
Why it works:
- Engaging Tutors: We make learning literary terms and techniques simple and fun.
- Cost Efficient: For less than 20% of the cost of a private tutor, you'll have unlimited access 24/7.
- Consistent High Quality: Unlike a live humanities tutor, these video lessons are thoroughly reviewed.
- Convenient: Imagine a tutor as portable as your laptop, tablet or smartphone. Learn literary terms and techniques on the go!
- Learn at Your Pace: You can pause and rewatch lessons as often as you'd like, until you master the material.
- Explore various literary periods throughout history.
- Learn the definitions of literary terms in prose and poetry.
- Examine the views of major literary critics and study different kinds of literary theory.
- Discover how to analyze literary passages.
- Develop strategies for taking literary tests.
- Explore the use of symbolism and imagery in literature.
- Define and contrast allusion and illusion.
- Learn the differences between synecdoche and metonymy.
- Find examples of similes in literature.
- Discuss folk ballads and look at examples.
- Define literary devices, paralipsis and paronomasia.
1. Overview of Literary Periods and Movements: A Historical Crash Course
When it comes to studying literature, there's about 1500 years of it to take in - and that's just in the English language! Fortunately, you can check out our crash course of key literary movements to see how the art form has developed over time.
2. Glossary of Literary Terms: Poetry
Before you start your study of poetry, you'll want to have these technical, literary and genre terms at your disposal. Read on to learn the basics of analyzing poetry!
3. Glossary of Literary Terms: Prose
The study of literature is a broad, diverse field. However, there's some general knowledge you should have before you dive in. Check out these terms to get a handle on the basics of prose study.
4. Introduction to Literary Theory: Major Critics and Movements
When you hear the word 'theory,' your mind probably darts to the sciences - the theory of relativity, the theory of gravity, etc. Did you know that literature, too, is full of theory? Check out this lesson to get a basic primer on just what literary theory is, and how you might apply it.
5. How to Analyze a Literary Passage: A Step-by-Step Guide
In this lesson, we will examine the steps involved in the basic analysis of literature. Then, using a well-known fable, we will go through each step of analysis: comprehension, interpreting and drawing conclusions.
6. How to Answer Multiple Choice Questions About Literature: Test-Taking Strategies
In this lesson, we will examine test taking strategies involved in answering multiple-choice questions about literature. Breaking the process down into manageable parts, we will take a look at the literary text, the question itself, and then the given choices.
7. Symbolism & Imagery in Literature: Definitions & Examples
In this lesson you will learn how poets and authors use symbolism in their writing to make it more meaningful and interesting. Explore how descriptive writing called imagery appeals to the senses, adding to works of literature.
8. Allusion and Illusion: Definitions and Examples
Allusions and illusions have little in common besides the fact that they sound similar. Learn the difference between the two and how allusions are an important part of literature and writing - and how to spot them in text.
9. Synecdoche vs. Metonymy: Definitions & Examples
Would you lend your ears for a moment (or at least your eyeballs)? This lesson will explain what synecdoche and metonymy mean and how to spot them in a piece of prose or poetry.
10. Similes in Literature: Definition and Examples
Explore the simile and how, through comparison, it is used as a shorthand to say many things at once. Learn the difference between similes and metaphors, along with many examples of both.
11. Binary Opposition in Literature: Definition & Examples
This lesson will cover the concept of binary opposition in literature. We'll define the term, look at a few examples to explore how it functions in a story and conclude with a quiz to test your knowledge.
12. Intertextuality in Literature: Definition & Examples
Have you ever read something that you know you've seen somewhere before? Some people might explain this as 'intertextuality,' and they wouldn't be wrong. Find out more about this idea that goes much deeper than literary deja-vu in this lesson!
13. Anecdotal Evidence in Literature: Definition & Examples
Anecdotal evidence in literature serves a variety of purposes. Readers are drawn in the direction the author wants them to go in order to advance the plot, reach the climax, and cement the conclusion.
14. Folk Ballad: Definition & Examples
You probably won't hear them on your favorite rock station, but folk ballads have been sung for thousands of years. Come hear about this 'popular' musical tradition and discover some of the most famous songs that belong to it.
15. Literary Devices: Definition & Examples
This lesson studies some of the more common literary devices found in literature. Devices studied include allusion, diction, epigraph, euphemism, foreshadowing, imagery, metaphor/simile, personification, point-of-view and structure.
16. Paralipsis: Definition & Examples
This lesson covers paralipsis. Learn what paralipsis is and how to identify it with the help of examples. Then take a quiz to test your understanding.
17. Paronomasia: Definition & Examples
If your pet goldfish has heard about paronomasia, she most likely learned it in a school. (Get it? Because fish swim in schools?) If you groaned after reading that last sentence, chances are you're already familiar with paronomasia. Explore this lesson to come to terms with this term and its many fun forms.
18. Point-of-View: Definition & Examples
Find out what 'point of view' means and how it's used in literature. Learn about some points of view that are used less often, then test your knowledge with a brief quiz.
19. What is a Narrative Hook? - Definition & Examples
This lesson will assist you in understanding components of narrative hooks found in literature and how they can be applied to your writing. Learn more about narrative hooks, and test your understanding through a quiz.
Earning College Credit
Did you know… We have over 160 college courses that prepare you to earn credit by exam that is accepted by over 1,500 colleges and universities. You can test out of the first two years of college and save thousands off your degree. Anyone can earn credit-by-exam regardless of age or education level.
To learn more, visit our Earning Credit Page
Transferring credit to the school of your choice
Not sure what college you want to attend yet? Study.com has thousands of articles about every imaginable degree, area of study and career path that can help you find the school that's right for you.
Other chapters within the Intro to Humanities: Tutoring Solution course
- Literary Time Periods: Tutoring Solution
- Middle Ages Literature: Tutoring Solution
- The English Renaissance: Tutoring Solution
- Victorian Era Literature: Tutoring Solution
- British Romanticism: Tutoring Solution
- 20th Century British Literature: Tutoring Solution
- Literary Modernism: Tutoring Solution
- Romantic Poetry: Tutoring Solution
- World Literature: Drama: Tutoring Solution
- Ancient and Modern Poetry: Tutoring Solution
- Prominent American Novelists: Tutoring Solution
- Philosophy and Nonfiction: Tutoring Solution
- History of Visual Art: Tutoring Solution
- History of Architecture: Tutoring Solution
- Elements of Music: Tutoring Solution
- Medieval Music: Tutoring Solution
- Renaissance Music: Tutoring Solution
- Baroque Music: Tutoring Solution
- Introduction to the Performing Arts: Tutoring Solution | <urn:uuid:143a1600-e195-446b-b823-c2d0677bf7d1> | {
"date": "2019-01-19T08:57:21",
"dump": "CC-MAIN-2019-04",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583662863.53/warc/CC-MAIN-20190119074836-20190119100836-00096.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9066804647445679,
"score": 4.125,
"token_count": 1930,
"url": "https://study.com/academy/topic/literary-terms-techniques-tutoring-solution.html"
} |
MaryLou and George Boone Gallery
Oct. 13, 2012—Jan. 14, 2013
“The field of photography is extending itself to embrace subjects of strange and sometimes of fearful interest.”
—Oliver Wendell Holmes Sr., July 1863
In the fall of 1862, Oliver Wendell Holmes Sr. received word that his son had been shot through the neck at the Battle of Antietam. The illustrious New England physician and man of letters immediately boarded a train to search for him.
The exhibition on which this website is based contained more than 200 works drawn from the Civil War collections at the Huntington Library.
Curator Jennifer A. Watts explored how photography and other media were used to describe, to explain, and perhaps to come to terms with the trauma that was the Civil War.
The exhibition focused on key episodes to highlight larger cultural themes. These include the Battle of Antietam, not only the bloodiest single day in the nation’s history, but the first in which photographs of American battlefield dead were made; the assassination of Abraham Lincoln and the execution of the conspirators; and the establishment of Gettysburg National Monument as part of larger attempts at reconciliation and healing. Download a complete checklist of all the works included in the exhibition. | <urn:uuid:9561033a-5272-43b5-b9e0-2d3d0b499c09> | {
"date": "2015-04-18T03:22:48",
"dump": "CC-MAIN-2015-18",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246633512.41/warc/CC-MAIN-20150417045713-00050-ip-10-235-10-82.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9686192274093628,
"score": 3,
"token_count": 257,
"url": "http://huntington.org/civilwar/about.htm"
} |
Placing a few honey bee colonies along the edges of your cotton fields may increase the amount of lint that finds its way into your picker come harvest time, according to a recent study at Alabama A&M University.
Entomologist Kenneth Ward says that in the past it was difficult to introduce pollinating insects into cotton fields due to the heavy pesticide usage required for cotton production.
However, the shift in recent years to insect-resistant cotton crops and the adoption of boll weevil eradication programs have decreased the number of insecticide applications growers need to make, providing a better opportunity for honey bees to help work their magic.
Could the introduction of supplemental honey bees into a field planted in an insect-resistant transgenic cotton variety increase cotton fiber yield potential? The short answer, Ward says, is yes.
A two-year on-farm study undertaken by entomologists at Alabama A&M University appears to show an increase in the amount of fiber produced per cotton boll.
“What we found the first year was that the per-boll yield went down significantly as you got further away from the bees,” Ward says. “The second year we saw the same thing. And, although the yield findings still were significant, they weren't quite to the same degree as they were the first year of the study.”
According to Ward's data, in the study's first year, the bee field yielded 1,184 pounds per acre, and the non-bee field yielded 1,046 pounds per acre.
The next year, the bee field yielded 1,313 pounds per acre and the non-bee field yielded 1,212 pounds per acre.
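Those raw figures translate into relative gains of roughly 13 percent in the first year and 8 percent in the second. The arithmetic is easy to check; the snippet below simply restates the numbers reported above:

```python
trials = {                     # lint yield in pounds per acre, from the study
    "year 1": (1184, 1046),    # (bee field, non-bee field)
    "year 2": (1313, 1212),
}
for year, (bee, no_bee) in trials.items():
    gain = 100 * (bee - no_bee) / no_bee
    print(f"{year}: +{bee - no_bee} lb/acre ({gain:.1f}% gain)")
# year 1: +138 lb/acre (13.2% gain)
# year 2: +101 lb/acre (8.3% gain)
```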
“These trends were detected in producer fields, with limited control over experimental conditions,” he says. “The multi-year data from this study was consistent, and it suggests a positive impact of supplemental honey bees on cotton yield indicators in Bt cotton.”
In the first year of the study, the bee colonies were placed in adjacent corners of one 140- to 160-acre cotton field with another similarly sized field without bee hives used as a control field. Both fields were irrigated using a center pivot system and both were planted in Bt cotton. The bee field was supplemented with approximately one hive of honey bees per acre of cotton.
The study was repeated the next year by simply switching the hives of honey bees to the same positions in the opposite “control” field.
In both years, the bee fields and the non-bee fields were sampled in multiple field locations to obtain cotton yield data and bee activity data.
To break down the cotton yield components, 20 one-to-two-day-old flowering cotton blooms were tagged per sample point weekly for four weeks. The resulting bolls were also collected and counted later in the season, after a defoliation treatment.
To obtain estimates of bee activity in the fields, the researchers at Alabama A&M counted the number of foraging bees within a 40-foot radius for a period of three minutes per sampling point.
The bee activity in the cotton field was as would be expected, Ward says.
“Once you got halfway into the field the bee activity fell off to the level in the non-bee field. There were still bees in both fields, but at different levels.
“This is a small study, but we feel it is reason enough to justify a larger study,” he says.
“The research results strongly suggest that the introduction of honey bees into a Bt cotton crop can have a positive impact on yield potential.”
An added benefit the Alabama researchers discovered was the successful production of honey in the hives along the cotton fields. The bee hives placed alongside the fields each yielded about 50 pounds of “very smooth” honey. | <urn:uuid:83a3ab5c-af15-4647-9fc1-ec848aa47b82> | {
"date": "2014-12-20T14:36:06",
"dump": "CC-MAIN-2014-52",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769894.131/warc/CC-MAIN-20141217075249-00011-ip-10-231-17-201.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9657412171363831,
"score": 3.09375,
"token_count": 795,
"url": "http://southeastfarmpress.com/bees-work-improve-cotton-yields"
} |
There's a time in most of our lives when we surrender to an overwhelming need to be part of the zeitgeist and get a painful, nondescript, instantly regrettable tattoo. There's also a subsequent time when you have three fifths of a second to lie to your mum about what that 'Māori looking smudge' is on the back of your neck.
Now, thanks to researchers at the University of California, you'll have the completely believable response of 'it's a brain-reading, wireless, electronic temporary tattoo'.
You can direct your thanks to Associate Professor of Bioengineering at the University of California, Todd Coleman, the brain-reading genius behind this project.
Coined as a 'Brain Machine Interface', or BMI, the wireless, tattoo-like device has the ability to perform the functions of a normal brain implant, typically used to map brain signals, but it also has further, more exploratory possibilities of remote machine control and even telepathy.
So, assuming your mum is wearing one too, you'll be able to preempt any impending questions about other, ill-thought-out, tramp stamps.
In recent years, rapid progression in brain implant research has allowed scientists to explore the notion of controlling machines with thoughts - and the results are astonishing.
For example, this uplifting video from the team behind Brain Gate - a small chip that's inserted into the brain that allows users to control computers and robotic limbs with their thoughts - shows the level of technological progression in this field and where we could be in 20 years' time.
However, as you can see from the Brain Gate video, brain implants are usually invasive and dangerous procedures that can be fatal if administered incorrectly.
Professor Coleman has developed a new technology to circumvent the danger of invasive brain implants by inventing a minuscule stick-on device that can read your brain signals and transmit them to a nearby computer wirelessly.
The barely visible BMIs consist of circuitry less than 100 microns thick, which is thinner than a human hair. They are made from a rubber-like translucent substance that can be molded to any shape and stuck to the forehead.
The BMIs work by reading electrical signals that are associated with brainwaves. They can also be upgraded to provide more in-depth analysis by including thermal sensors to monitor skin temperature and light detectors to analyse blood oxygen levels.
The device is powered by micro solar panels and uses antennae to wirelessly transmit or receive data.
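The team hasn't published the device's firmware, but the basic signal-processing idea behind reading "brainwaves" (estimating how much of a scalp signal's energy falls in each classical EEG band) can be sketched in a few lines. The sampling rate and band edges below are conventional textbook values, not the device's actual specification.

```python
import numpy as np

FS = 250                                  # assumed sampling rate, Hz
BANDS = {"delta": (1, 4), "theta": (4, 8),
         "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal):
    """Rough per-band power estimate for one EEG channel."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / FS)
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# demo: a 10 Hz test tone should register as alpha-band activity
t = np.arange(0, 2, 1 / FS)
powers = band_powers(np.sin(2 * np.pi * 10 * t))
print(max(powers, key=powers.get))        # -> alpha
```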
In terms of immediate real-world application, Professor Coleman wants to use the device on premature babies to monitor their mental state and detect the onset of seizures that can lead to brain development problems such as epilepsy.
The BMI is currently being commercialised and prepared for shipping to hospitals and other research labs across the globe.
Achievement unlocked: Telepathy
Outside of sensible applications such as monitoring premature babies, the BMI has more tantalising, mad-scientist-esque potential applications such as robot control, telepathy and even telekinesis.
Professor Coleman and his team have been experimenting with electrodes to fly unmanned aircraft over fields in the US and he's currently working on scaling down that technology to fit into the BMI.
Although the use of thought to control robots isn't new - as demonstrated in the Brain Gate video earlier - the idea that it can be done with such ease presents new and exciting opportunities.
The BMI's ease of application (slapping it on someone's forehead) and mobility means that any device in your home or work with a wireless receiver could be controlled with a simple thought. | <urn:uuid:ad89c62d-62ce-4636-924c-58e168c3af3e> | {
"date": "2015-10-09T12:14:27",
"dump": "CC-MAIN-2015-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737929054.69/warc/CC-MAIN-20151001221849-00084-ip-10-137-6-227.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9511127471923828,
"score": 2.65625,
"token_count": 730,
"url": "http://www.techradar.com/au/news/world-of-tech/future-tech/would-you-want-an-electronic-tattoo-that-could-read-your-mind--1148670"
} |
The following sun and moon data for Dec. 18, 2008 is provided by the United States Naval Observatory.
Sunrise: 7:17 a.m.
Sunset: 4:53 p.m.
Moonrise: 10:56 p.m. Dec. 17.
Moonset: 11:35 a.m. Dec. 18.
Moon phase: The moon is waning gibbous with 57 percent of the visible disk illuminated. The moon is at last quarter Dec. 19 at 3:30 a.m. Mountain Standard Time.
Winter officially begins Sunday morning, Dec. 21, at 5:04 a.m. with the Winter Solstice.
In addition to marking the start of winter, the solstice — Latin for “sun stands still” — also marks the shortest day of the year. From Sunday forward, the sun gains altitude in our sky as winter begins for the Northern Hemisphere and southern hemisphere residents experience the start of summer and their longest day of the year.
Although Dec. 21 marks the shortest day of the year for the Northern Hemisphere, it does not mark the day of the earliest sunset. According to the United States Naval Observatory, the earliest sunset occurred Dec. 7 for most of the Northern Hemisphere.
The confusion comes when stargazers focus on clock-based sunrise and sunset times rather than true solar noon. True solar noon is the time of day when the sun reaches its highest point as it traverses the daytime sky.
In early December, true solar noon comes several minutes earlier by the clock than it does at the solstice around Dec. 21. With true noon coming later on the solstice, so do the sunrise and sunset times.
The discrepancy between clock time and sun time causes the earliest sunset to precede the December solstice (and the latest sunrise to follow it). However, the date of the earliest sunset also depends on one’s latitude. At mid-northern latitudes, such as Colorado, the earliest sunset comes in early December each year. Farther north, in places such as Canada and Alaska, the year’s earliest sunset comes around mid-December. As you reach the Arctic Circle, the earliest sunset and the December solstice occur on or near the same day.
In addition, the latest sunrise does not come on the solstice either — for mid–northern latitudes, the latest sunrise comes in early January. Thus, and although the dates vary, the sequence is always the same: earliest sunset in early December, shortest day on the solstice around Dec. 21, latest sunrise in early January.
What causes the solstice?
The solstice is caused by the Earth’s tilt on its axis and its motion in orbit around the sun. Because Earth does not orbit upright, but is instead tilted on its axis by 23-and-a-half degrees, Earth’s northern and southern hemispheres trade places in receiving the sun’s light and warmth most directly. Thus, at the December solstice, Earth is positioned in its orbit so that the North Pole is leaning 23-and-a-half degrees away from the sun, while at the summer solstice that tilt is reversed.
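That geometry can be turned into numbers. The formulas below are standard textbook approximations, not figures from this article: they estimate the sun's declination from the day of the year and, from that, the day length at a given latitude. At 40 degrees north, the December 21 result comes out near 9.2 hours.

```python
import math

def declination_deg(day_of_year):
    """Approximate solar declination (degrees) for day 1-365."""
    return -23.44 * math.cos(math.radians(360 / 365 * (day_of_year + 10)))

def day_length_hours(latitude_deg, day_of_year):
    """Sunrise-to-sunset duration from the standard hour-angle formula."""
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg(day_of_year))
    cos_h = -math.tan(lat) * math.tan(dec)
    cos_h = max(-1.0, min(1.0, cos_h))    # clamp for polar day/night
    return 2 * math.degrees(math.acos(cos_h)) / 15

print(f"{day_length_hours(40, 355):.1f} h")   # Dec 21 at 40N -> about 9.2 h
```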
"date": "2016-05-26T12:36:57",
"dump": "CC-MAIN-2016-22",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275836.20/warc/CC-MAIN-20160524002115-00196-ip-10-185-217-139.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9404857158660889,
"score": 3.65625,
"token_count": 659,
"url": "http://pagosasun.com/archives/2008/12december/121808/odskywatch.html"
} |
These days we often hear about how a new treatment, or program, or therapy style, is "evidence based." This gives the listener an impression that the new treatment must be superior in some way.
It is another language construct which has become much more common, especially in mental health care discussions. Recently, its prevalence has tripled in written language, with a steep increase beginning in about 1993, according to the Google NGram viewer.
But what does "evidence based" really mean?
We often hear, for example, that cognitive-behavioural therapy (CBT) is "evidence based." The implication of this statement is that other forms of therapy must not be "evidence based."
It should go without saying that most everything is "evidence based":
An individual's personal account of their experience is a form of evidence.
A randomized controlled prospective trial of therapy supplies another form of evidence.
An opinion from an expert, or from a mystic, or from a random person on a bus, or from a charlatan, is another form of evidence.
The introduction of the phrase "evidence based" may stifle debate and free thinking about a matter. It implies that the issue it is describing has already been decided upon.
In psychiatry, the phrase "evidence based" is typically used in the area of CBT research, and in health care policy. But this can bias opinion away from other therapeutic modalities whose practitioners tend not to advertise themselves with this type of language.
I believe that gathering evidence from prospective, randomized, controlled clinical studies is vitally important--so important, in fact, that we must allow such evidence to cause established opinions about care to actually change. There are many cautionary tales in medical history, in which good evidence about something new was dismissed by practitioners who were resistant to changing the style of practice they had grown up with.
But in mental health care, the evolving evidence is often much less robust than it seems. Most studies are of very short duration. Short-term or superficial care approaches may lead to various symptomatic improvements for many people... but long-term data is often not present. Also, a great deal of evidence supports the efficacy of treatments which work for 60-70% of people, but offers little guidance on how to help the other 30-40%.
Many people may look back and value a short-term course of therapy in some way, but what they really found most valuable and life-changing was something quite different, such as having a dedicated caregiver over a period of many years who didn't practice using modern "evidence-based" methods at all...
It is good to think carefully about evidence, and to be prepared to change our practice accordingly. But the phrase "evidence based" is often just a slogan, a form of jargon, and a construct which can lead to unwelcome biases in thinking. Such cognitive short-cuts can often be very efficient, in order to decide on an important matter quickly, but such short-cuts should never be used to make large policy changes in a system. | <urn:uuid:cb8334cb-341b-4373-917a-c49889f5b90b> | {
"date": "2017-09-24T17:27:35",
"dump": "CC-MAIN-2017-39",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690112.3/warc/CC-MAIN-20170924171658-20170924191658-00656.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9692197442054749,
"score": 2.921875,
"token_count": 628,
"url": "http://garthkroeker.blogspot.com/2016/05/rhetoric-and-jargon-in-health-care_30.html"
} |
The main thing that average American can do to improve health is: get thin!
How fat are you?
The government recommends your waist be no more than 35 inches if female, 40 inches if male. (If you’re very short, for example because you’re very young, your waist should be a lot smaller than that.)
When measuring your waist, don’t cheat! Measure straight around; don’t dip to avoid the bulge.
Your waist size is more important than your weight, because fat in your belly is more destructive than fat in your legs. Fat in your legs tends to stay there and not bother the rest of your body, but fat in your belly area is more active, closer to your organs (especially your liver), and enters your bloodstream more easily.
Why does the average woman live longer than the average man? Probably because the average woman is thinner (and engages in fewer dangerous activities, such as the military and other “dare you” games & occupations).
According to Einstein’s E=MC2, even an atomic-bomb-size blast consumes just a small amount of matter. So even the most vigorous exercise doesn’t directly reduce weight.
To reduce weight, your body must excrete more matter than it consumes; so to lose weight, you must eat & drink less than you shit, piss, and sweat.
How exercise helps Although exercise doesn’t make you lose weight directly, it makes you lose weight indirectly — because exercise makes you sweat, piss, and shit more without making you want to eat and drink much more.
Although exercise won’t change your weight much, it will make your weight be better proportioned: you’ll have a bigger percentage of muscle and a smaller percentage of fat. Your arms and legs will bulge with muscles and your belly will shrink. Moreover, exercise will raise your HDL (which is good). Better yet, exercise will burn off any excess sugar in your blood. By getting rid of that extra sugar, exercise helps you avoid or control diabetes.
Exercising removes water from your body (via sweat and piss), but “removing water” is not your goal: your goal is to remove belly fat. Sip a little water while exercising — and before and after — to avoid dehydrating, because a dehydrated body has trouble controlling its own temperature and accidentally wrecks itself.
Here are other ways that exercise helps you lose weight:
Kinds of exercise Try walking (because it’s easy, pleasant, and exercises your bottom half), push ups (because they exercise your top half), and swimming (because it exercises your whole body and is fun).
You don’t need to do a marathon. Three short walks per day help your health just as much as one long walk. Walking a mile helps your health nearly as much as running a mile, though running has the advantage of taking less time, so you can get on with the rest of your life. “A mile per day” is the minimum amount necessary to make a noticeable difference in your health; “a mile and a half” is even better.
Any kind of exercise is better than nothing. Some people find “gardening” a pleasant form of exercise. The dare-to-be-different crowd gets exercise by taking the stairs instead of “escalators and elevators” and by parking in the farthest parking spot instead of the closest — though “walking through parking lots” isn’t the most scenic way to get exercise.
To lose weight safely, consume fewer calories. Each gram of fat you eat provides 9 calories, whereas each gram of protein or carbohydrate provides just 4 calories; so the main way to consume fewer calories is to consume less fat.
Make sure you consume fewer saturated fats. But even the best fats, the “unsaturated fats,” still provide 9 calories per gram, so eat fewer unsaturated fats too!
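Those per-gram figures make calorie counting plain arithmetic. Here is a minimal sketch (the sample meal is invented for illustration):

```python
CALORIES_PER_GRAM = {"fat": 9, "protein": 4, "carbohydrate": 4}

def meal_calories(grams):
    """Total calories from a meal's macronutrient gram counts."""
    return sum(CALORIES_PER_GRAM[k] * g for k, g in grams.items())

lunch = {"fat": 20, "protein": 30, "carbohydrate": 60}   # invented meal
print(meal_calories(lunch))   # 9*20 + 4*30 + 4*60 = 540 calories
```

Notice how the 20 grams of fat contribute a third of the total despite being the smallest portion by weight; that is the whole argument for cutting fat first.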
Most nutritionists make these recommendations:
Many foods are advertised as being “fat-free,” but most of them still contain lots of sugar. Since plain sugar provides calories without providing good nutrients, plain sugar is called empty calories and is bad for you. Avoid it. These other simple sugars are also empty calories and should be avoided: corn syrup (which comes from corn), fructose (which comes from fruit), and honey.
To lose weight, the main trick is: don’t binge. Don’t eat large portions of anything. Here’s why:
If you eat a huge meal, your pancreas will have trouble producing enough insulin to digest all those sugars and starches at once. Instead, eat several smaller meals (or small healthy snacks), spaced throughout the day.
If you have diabetes (a pancreas unable to produce enough useful insulin), eating smaller meals is necessary. If you don’t have diabetes yet, eating smaller meals is still desirable — because if you overwork your pancreas often, it will gradually get tired, quit working some year, and you’ll have diabetes then and forevermore.
Once you have diabetes, you can control it (by making sure you always eat small meals) but never cure it.
Nutritionists predict that 1/3 of all Americans will get diabetes before death. The best way to prevent diabetes is to eat small meals, get exercise, and lose weight.
When you eat more sugars and starch than your pancreas can handle, the excess stays in your blood, makes your blood vessels sticky, and wrecks the blood vessels in your eyes (leading to blindness), feet (leading to numbness, unnoticed cuts, infection, and eventual amputation), and kidneys (leading to kidney failure so you spend the rest of your life on a dialysis machine).
Afraid to look thin?
Unfortunately, Americans in this century are fatter than Americans were in the 1900’s or 1800’s or 1700’s. That’s because Americans get less exercise (they drive cars instead of walk, play videogames instead of real sports), eat more junk food (McDonald’s instead of Mom’s cooking), and many other reasons that are obvious. But here’s a reason that’s not so obvious: some people (especially inner-city blacks) are afraid to look thin, because they’re afraid that if they look thin, they’ll look like they have AIDS, and their friends will fear them and they won’t get dates.
Such people are misinformed and need to be reminded that it’s better to be a toothpick than a blimp.
Nutritionists recommend that you be semi-vegetarian: make ¾ of your dinner plate be filled with plants (vegetables, fruit, and high-fiber grains), and just ¼ of your plate come from animals (fish, meat, and dairy). That will give you a wide variety of nutrients and less fat.
Many people have invented fad diets that claim crazy eating can make you thin. Each fad diet has a “catch”:
Most fad diets make you lose weight by being so unappetizing that you want to eat less.
Some diets let you lose 5 or 10 pounds during the first two weeks, but that’s just from losing water, not fat. The next two weeks are harder.
Most diets also tell you to get more exercise. If you claim that the diet “didn’t work,” the diet vendors reply, “You can’t sue us, since you didn’t follow our exercise plan.”
Nutritionists agree that the best way to get thin is to eat normally but with less saturated fat, smaller portions, and more variety.
The trick is to feel full while consuming fewer calories. Since calories come from “fat, protein, and carbohydrates,” eat food containing mainly water & fiber instead.
Some fad diets, such as the Atkins Diet, made the mistake of telling you to avoid all carbohydrates and eat fats instead. Here’s the truth:
The Atkins diet was later modified to say that certain carbohydrates are okay (and don’t count in “net carbs”), but Atkins’ advice to eat lots of fat is totally wrong. Nutritionists agree that of all the fad diets, the Atkins Diet is the unhealthiest and the South Beach Diet is the healthiest, but even the South Beach Diet is slightly off-kilter.
Just get exercise, eat a variety of food (especially vegetables), and avoid binging (especially on fats, cakes, and sweets). Then you’ll be fine!
Soup Since soup contains mainly water, it makes you feel full without adding many calories. (Just make sure it’s not a “cream” soup, since cream is high in calories.)
Nutritionists have discovered a bizarre fact about soup: water in soup makes you feel fuller than water in a glass, even though it’s the same water. If you’re served chicken and a glass of water, you’ll feel less full than if the water was dumped on the chicken to become soup. When the water is dumped on the chicken to make soup, your eye says “that’s a lot of soup!” and you feel full just looking at it!
Just beware of salt: many canned soups contain too much salt.
Fruit Fresh fruit is like soup: it contains mainly water and makes you feel full without adding many calories.
If you eat 30 raisins (dried grapes) while drinking water, you’ll still feel hungry; but if you eat 30 fresh grapes instead, you’ll feel full, even though the ingredients are the same.
Fruit also contains fiber and lots of nutrients.
Bran cereal For breakfast, try eating bran cereal. Since it’s high in fiber, it makes you feel full without adding many calories. Nutritionists have discovered that people who eat a high-fiber breakfast still feel full, many hours later, whereas people who eat a low-fiber breakfast feel hungry again 2 hours later.
Though bran cereal is good for you, bran muffins are bad, since bran muffins usually include lots of fats added to the bran.
Potato Nutritionists have discovered that the best vegetable for making you “feel full without many calories” is potato.
Just make sure you include the skin (to get its nutrition), cut out any sprouts (because they’re poisonous), and avoid fatty toppings (such as butter or sour cream). If possible, bake the potato (instead of frying it) or make a potato soup.
Watermelon Another obvious candidate for “full with minimal calories” is watermelon. It contains lots of water and — like all fruits — some fiber.
Black Irish diet If you want to try a fad diet, try mine: it consists of eating mainly potatoes and watermelons. If you wish, try that diet for a week (supplemented by vitamin pills and a few other vegetables to keep you balanced). I call it the Black Irish diet, because it combines the food loved by stereotypical blacks (watermelon) with the food loved by stereotypical Irishmen (potatoes). Here’s why the diet is good:
So after all that preaching, am I a good example? Am I thin?
Not yet. I guess I’d better start taking my own advice! | <urn:uuid:b83ccb70-38d7-4d52-a580-3e9f076203a8> | {
"date": "2016-12-05T18:48:11",
"dump": "CC-MAIN-2016-50",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541783.81/warc/CC-MAIN-20161202170901-00136-ip-10-31-129-80.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9447129964828491,
"score": 2.609375,
"token_count": 2428,
"url": "http://www.angelfire.com/nh/secret/1GetThin.htm"
} |
On April 22, more than a billion people of all nationalities around the world are expected to clean up their communities, plant trees, contact their elected officials, and more, on behalf of the environment and in support of Earth Day.
Organizers of the global event are hoping that with smart investments in sustainable technology, forward-thinking public policy, along with an educated and active public, people everywhere will join in transforming cities around the world and shape the earth’s future.
With thousands of environmental groups in more than 190 countries reaching out to hundreds of millions of people, Earth Day supporters are sending world leaders the loud and clear message that people around the world want decisive action on clean energy now.
Although the first Earth Day led to the creation of the United States Environmental Protection Agency and the passage of the Clean Air, Clean Water and Endangered Species Acts, there is an increasing urgency to put a stop to what many see as irreparable damage to Earth’s ecology and a life-threatening climate change.
A report by the U.N. Intergovernmental Panel on Climate Change warned that if nothing is done to change current emissions patterns of greenhouse gases, global temperature could increase as much as 11 degrees by 2100.
Some experts say the biodiversity crisis is even worse than the threat of climate change. According to www.sciencedaily.com, “Species extinction and the degradation of ecosystems are proceeding rapidly and the pace is accelerating. The world is losing species at a rate that is 100 to 1,000 times faster than the natural extinction rate.”
The report also stated, “The biodiversity crisis — i.e. the rapid loss of species and the rapid degradation of ecosystems — is probably a greater threat than global climate change to the stability and prosperous future of humankind on Earth. There is a need for scientists, politicians and government authorities to closely collaborate if we are to solve this crisis.”
But will they? And will they do something drastic enough in time? Will supporters of Earth Day be able to make the difference our planet needs to guarantee the survival of life on it? If left entirely up to humans to reverse and repair the ongoing damage to our planet, how confident are you of the Earth’s sustainability indefinitely?
Millions of people, while doing their part to keep America beautiful — along with other countries — are not resting their hopes on potential changes in big business, human governments or scientists to solve this crisis. They also pray regularly for God’s Kingdom to come and His will to be done “in earth as it is in heaven.”
Are these just idealistic prayers of naive believers of a Utopian society or is there a very real possibility that we could see Divine intervention in our lifetime? For those who believe it is not a coincidence that our planet is home to hundreds of thousands of different life forms while scientists have yet to find even one iota of life on any other planet — Divine intervention by a life-giving Creator is a very real solution. Is there any tangible proof anyone can point to? One ancient book holds the key.
In the pages of the Holy Bible, Jesus Christ gave a sign at Matthew 24:3-14, Mark 13:3-13 and Luke 21:7-31 to indicate when the Kingdom of God was near — foretelling wars, food shortages, earthquakes, disease, increases in lawlessness and nations not knowing the way out. Although it is an historical fact that these conditions have plagued humans for centuries, Jesus explained why this composite sign would be different.
In the account at Matthew 24:33-34, Jesus prophesied, “When you see all these things, know that it is near at the doors! Assuredly, I say to you, this generation will by no means pass away till all these things take place.” — New King James Version.
Are we living among the generation who is seeing “all these things take place” in one lifetime? Has any other generation seen the degree of these conditions on such a global scale happening simultaneously? You decide. Jesus said in Luke 21:31, “So you also, when you see these things happening, know that the kingdom of God is near.” — New King James Version.
At that time, according to Revelation 11:18, God will destroy those who destroy the Earth. Centuries earlier He foretold at Ecclesiastes 1:4, “A generation goes and a generation comes, but the earth remains forever.” — English Standard Version.
Do we really need new hills and mountains, new grass and trees, new streams and oceans — or a new earthly society of planet-loving people who work to make every day Earth Day? As Psalm 115:16 says, “The LORD has kept the heavens for himself, but he has given the earth to us humans.” — Contemporary English Version.
As care-takers of planet Earth may we all do our part to support the sustainability of all life on earth, pray for God’s will to be done and act like we live here. | <urn:uuid:ca1c7958-c4c2-4679-9a98-9aad6c2011b6> | {
"date": "2014-11-27T14:47:29",
"dump": "CC-MAIN-2014-49",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931008720.43/warc/CC-MAIN-20141125155648-00188-ip-10-235-23-156.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9424762725830078,
"score": 2.859375,
"token_count": 1049,
"url": "http://www.clevelandbanner.com/pages/full_story/push?article-WRIGHT+WAY-+Will+earth+be+destroyed-%20&id=18007922"
} |
Because proteins are a major structural and metabolic component of all living organisms, the analysis of protein samples can be useful in forensic chemistry.
This material is designed for the NSW HSC Chemistry Syllabus section 9.2.1 (Because proteins are a major structural and metabolic component of all living organisms, the analysis of protein samples can be useful in forensic chemistry. ). The aim of these pages is to give an overview to the chemistry used in industrial processes and to highlight social and environmental implications.
More information about this material and the syllabus is on our information for teachers page.
At the top and bottom of each page there are Previous and Next buttons which are the best way of following the trail. These buttons will take you to every page in the trail ensuring that you don't miss anything.
Alternatively, you can use the buttons down the side of the page to move between the most important pages.
Lots of words will be highlighted (e.g. polymer) throughout the text of trail. Words marked in this way are in our glossary and clicking on the word will take you directly to the information on that word or phrase.
More information about the use of trails and the way in which this site is structured, is available in our meta-trail. | <urn:uuid:79aee470-2aab-4ff6-ab53-bc12f788fed5> | {
"date": "2017-04-27T22:40:46",
"dump": "CC-MAIN-2017-17",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122629.72/warc/CC-MAIN-20170423031202-00530-ip-10-145-167-34.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8965991139411926,
"score": 2.59375,
"token_count": 260,
"url": "http://discovery.kcpc.usyd.edu.au/9.9.4/index.html"
} |
Intracellular development of filariae in the vector
During its bloodmeal, the Simulium fly ingests the microfilariae from the skin of their host. At the same time, the infective larvae (L3), if present, escape.
The development from the microfilaria to the metacyclic infective larva (L3) takes about 7 to 10 days, depending on the ambient temperature.
The blood is enclosed by the peritrophic membrane, which is secreted by the cells of the midgut. This membrane must be penetrated by the microfilaria before it migrates through the hemolymph and settles in the syncytial cells of the longitudinal flight muscles in the fly thorax. There the larvae grow and moult twice before they reach the infective stage (L3). Development of all filariae, as far as studied, is always intracellular and involves hypertrophy of the parasitized cell.
The second-stage larva is immobile. It looks like a sausage, hence the name ‘sausage-stage larva’.
This sausage-stage larva, however, is dead and disintegrating, as one can see from the damaged surface of its cuticula.
For many years, different ‘types’ of infective larvae have been found and distinguished in wild-caught Simulium flies dissected in Northern Cameroon (Duke, 1967; Renz, 1987). They were named Type A, B, etc. Presumably not all are true Onchocerca species (Types F and H!). Type G was identified as Onchocerca ochengi (Wahl, Ekale, Enyong, Renz, 1991).
Although the infective larvae of many Onchocerca species look rather similar – and have been mixed up frequently in the past! – they can be distinguished morphologically by their length, the shape of their anterior end and tail. To be sure, we now use molecular techniques to amplify their DNA and sequence the PCR-product. | <urn:uuid:c3485613-dcf2-4307-8196-b3b359cd210f> | {
"date": "2017-05-27T23:04:57",
"dump": "CC-MAIN-2017-22",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609305.10/warc/CC-MAIN-20170527225634-20170528005634-00164.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9489126801490784,
"score": 2.890625,
"token_count": 433,
"url": "http://www.riverblindness.eu/onchocerciasis/life-cycle/l1-l2-l3-in-the-vector/"
} |
Fall Amaryllis Care
Bring potted amaryllis plants indoors and place the pots on their sides in a dark, cool, dry location. Let the pot dry out - do not water. When new growth starts, set the pot upright, start watering and feeding again, and bring it gradually into a location that has bright, indirect light. The amaryllis bulb should produce spectacular flowers during the winter months.
Gather Flower Seeds
Seeds of many annual garden flowers such as calendula and nicotiana naturally self sow. For a more contained spread, consider gathering the seed and replanting where you want them. Cut the heads into a brown paper sack; as the heads dry, the seeds will fall into the bottom. Store the seeds in an envelope or empty film canister in a cool, dry location for sowing seeds next spring.
Gather Bittersweet Vines
The bright orange berries of bittersweet are a natural for fall decorating indoors and out. If you don't already grow this vine, consider planting one either this fall or next spring. Choose the American bittersweet (Celastrus scandens), rather than the oriental bittersweet (C. orbiculatus), which can become invasive.
Rhubarb will produce more stems than usual next spring if you work in composted manure around the plants now. A shovelful per established plant is plenty. You can also set out new plants this fall. Some of the best varieties, with stems that are red all the way through, are 'Canada Red', 'Chipman's Canada Red', 'Cherry Red', and 'Valentine'.
Start Feeding Birds
It's not too soon to start feeding the birds this winter. Clean birdfeeders thoroughly. If you need new ones, consider the kind with squirrel baffles to keep those pests away. Black oil sunflower seeds are the most popular feed. If you don't like the garden mess, buy prehulled sunflower seeds. Try putting out niger seed for goldfinches and suet blocks for woodpeckers. | <urn:uuid:d9920245-d9cc-4364-a0a4-116e09fa2394> | {
"date": "2015-03-29T10:39:40",
"dump": "CC-MAIN-2015-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298464.94/warc/CC-MAIN-20150323172138-00242-ip-10-168-14-71.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.908109724521637,
"score": 2.640625,
"token_count": 432,
"url": "http://www.garden.org/regional/report/arch/reminders/523"
} |
The idea of “de-extinction,” of bringing back a long-gone species like, say, a woolly mammoth, might seem the stuff of science fiction. But it’s almost real, explains author Carl Zimmer in this month’s story “Bringing Them Back to Life.” The cool factor of such a zoological restoration is off the charts, but de-extinction also raises some interesting questions about human beings and our impact on the world. Many extinctions occur because of our thoughtlessness or carelessness. We want a better life. We want to make the uninhabitable habitable. We want to fill our stomachs. Sometimes what gets caught in the cross fire of our wants is a species. You could say an extinct species is the collateral damage of human existence.
So what’s the answer? Is the restoration of an extinct species a moral obligation, a payback for our thoughtless obliteration of species? Or is it playing God? As Ross MacPhee, a curator of mammalogy at New York City’s American Museum of Natural History, said: “What we really need to think about is why we would want to do this in the first place.”
"date": "2015-09-02T14:50:44",
"dump": "CC-MAIN-2015-35",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645265792.74/warc/CC-MAIN-20150827031425-00111-ip-10-171-96-226.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9228501915931702,
"score": 3.625,
"token_count": 351,
"url": "http://ngm.nationalgeographic.com/2013/04/editors-note"
} |
When we talk about the blockchain, the first thing that comes to mind is security, and that security rests on the blockchain's consensus algorithm. Those who know about the blockchain know that keeping the ledger transactions synchronized across the network, ensuring that ledgers update only when the appropriate participants approve transactions, and that when ledgers do update, they update with the same transactions in the same order, is called consensus. Here we will discuss three different consensus algorithms.
Practical Byzantine fault tolerance
Imagine that several divisions of the Byzantine army are camped outside an enemy city, each division commanded by its own general. The generals can communicate with one another only by messenger. After observing the enemy, they must decide upon a common plan of action. However, some of the generals may be traitors, trying to prevent the loyal generals from reaching an agreement. The generals must decide on when to attack the city, but they need a strong majority of their army to attack at the same time. The generals must have an algorithm to guarantee that (a) all loyal generals decide upon the same plan of action, and (b) a small number of traitors cannot cause the loyal generals to adopt a bad plan. The loyal generals will all do what the algorithm says they should, but the traitors may do anything they wish. The algorithm must guarantee condition (a) regardless of what the traitors do. The loyal generals should not only reach an agreement but should agree upon a reasonable plan.
The story above illustrates the Byzantine Generals’ Problem. There are many solutions to this problem, but here we will talk about Practical Byzantine Fault Tolerance (PBFT).
In 1999, Miguel Castro and Barbara Liskov introduced the “Practical Byzantine Fault Tolerance” (PBFT) algorithm, which provides high-performance Byzantine state machine replication, processing thousands of requests per second with sub-millisecond increases in latency.
Of the many proposed solutions, we discuss only PBFT here because it is a practical answer to the Byzantine Generals’ Problem; the IBM-backed Hyperledger uses this consensus algorithm. In PBFT each node maintains an internal store of state. Messages passing through a node are signed by it so that their format and origin can be verified. Once enough identical responses have been collected, consensus is reached that the message is a valid transaction.
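To make that quorum rule concrete, here is a minimal Python sketch (an illustrative toy, not Hyperledger's actual implementation): PBFT tolerates f faulty nodes when the network has at least 3f + 1 nodes, so a decision requires 2f + 1 matching responses.

```python
from collections import Counter

def pbft_quorum(n_nodes: int) -> int:
    # PBFT tolerates f faults when n_nodes >= 3f + 1, so the largest
    # tolerable f is (n_nodes - 1) // 3; committing needs 2f + 1 votes.
    f = (n_nodes - 1) // 3
    return 2 * f + 1

def reached_consensus(responses: list[str], n_nodes: int) -> bool:
    # Consensus once any single response value reaches the quorum.
    if not responses:
        return False
    _, top_count = Counter(responses).most_common(1)[0]
    return top_count >= pbft_quorum(n_nodes)

# Four nodes tolerate f = 1 fault, so three matching replies suffice.
print(reached_consensus(["valid", "valid", "valid", "tampered"], 4))  # True
```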
Proof of Work
One of the best-known algorithms of all is Proof of Work (PoW). It is used by Bitcoin, one of the strongest cryptocurrencies, and is among the most commonly used consensus mechanisms. Unlike PBFT, PoW does not require every node on the network to submit its message to reach consensus. Instead, in PoW, a single participant can supply the conclusion that lets the network reach consensus.
This participant, known as a miner, calculates the hash of the block header and checks whether the result meets the target. If it does not, the miner modifies the nonce and tries again.
For example, let’s say we are going to work on the string “blockchain” and our target is to find a variation of it that SHA-256 hashes to a value beginning with ‘0000’. We vary the string by appending an integer value called a nonce to the end and incrementing it each time.
Finding a match for “blockchain” takes us 1042 tries.
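A minimal Python sketch of that nonce search, using only the standard library (illustrative only; on average a four-hex-zero target takes about 16^4 ≈ 65,536 attempts, though any particular string can match much sooner, as the 1042 tries above show):

```python
import hashlib

def proof_of_work(base: str, difficulty: int = 4) -> tuple[int, str]:
    # Append an incrementing nonce until the SHA-256 digest starts
    # with `difficulty` leading zero hex characters.
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{base}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work("blockchain")
print(f"nonce: {nonce}, hash: {digest}")
```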
Proof of Stake
And the last one is known as Proof of Stake (PoS). Here the mining is done by a stakeholder who is selected by the network based on their stake. Unlike PoW, there is no block reward for the miner in the PoS system; the miners collect the transaction fees instead. Cryptocurrencies like Peercoin, BlackCoin and Nav Coin use the PoS system.
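A minimal sketch of stake-weighted selection (a toy model under the assumption that selection probability is simply proportional to stake; real PoS schemes such as Peercoin's add further criteria like coin age and randomized seeds):

```python
import random

def select_validator(stakes: dict[str, float]) -> str:
    # Pick a stakeholder with probability proportional to stake held.
    names = list(stakes)
    weights = [stakes[name] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

random.seed(42)  # deterministic output for the example
stakes = {"alice": 50.0, "bob": 30.0, "carol": 20.0}
print(select_validator(stakes))  # alice is chosen about half the time
```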
So these are three well-known consensus algorithms. Each one has its pros and cons.
"date": "2018-09-24T03:59:33",
"dump": "CC-MAIN-2018-39",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160142.86/warc/CC-MAIN-20180924031344-20180924051744-00096.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9258061051368713,
"score": 3.21875,
"token_count": 805,
"url": "https://blog.knoldus.com/consensus-algorithms-in-blockchain/"
} |
From Lagos and Lahore to London, it’s the poorest people who are most affected by air pollution. The poor tend to be priced out of the leafy suburbs where there are fewer highways and air quality is better.
Air pollution is caused by harmful particulates and gases released into the air. It leads to premature death from heart disease, stroke, and cancer, as well as acute lower respiratory infections. Indoor and outdoor (ambient) air pollution caused an estimated 7 million deaths globally in 2016, according to the World Health Organization.
Most recorded air pollution-linked deaths occur in developing countries, where laws are weak or not applied, vehicle emission standards are less stringent and coal power stations more prevalent. And in the big cities of developing countries, it’s the poorest who live in cramped informal settlements, often near rubbish dumps, who feel the full force of air pollution.
Nairobi in Kenya is just one example: the huge smouldering dump site in Dandora in the eastern outskirts of the city lies right next to schools, churches, clinics and shops. For people living in nearby places like Canaan, downwind of the dump site, the daily exposure to toxic fumes from the dump affects their overall well-being and health, particularly small children.
In informal settlements like these, indoor air pollution is also a problem. In cities and rural areas, low indoor air quality is a result of burning wood, charcoal, kerosene or other materials inside poorly ventilated homes for cooking, heating or lighting. Again, it’s the most vulnerable, mainly in the developing world, who cannot afford cleaner fuels or alternative technologies and suffer the most.
Clean air is a human right, and a necessary pre-condition for addressing climate change as well as achieving many Sustainable Development Goals. Air pollution does not only damage human health, it also hampers the economy in many ways.
As we approach World Environment Day with its theme of “air pollution”, UN Environment urges the world to address this silent killer by living the 4Rs: reduce, recycle, reuse, recover; burning less, wasting less, walking more and driving less; and adopting clean technologies. Governments are encouraged to strengthen their monitoring of air quality and adhere to World Health Organization guidelines, while leading joint actions on finance, environment, health and industry at national and city level.
“We need to bring together citizens, national and local governments, key ministries, the private sector, finance, and academia, and create new partnerships,” says UN Environment’s air quality and e-mobility focal point Rob de Jong. “The air connects all of us and touches everything. The time to act is now!”
UN Environment is taking the lead through research, innovation and implementation of programmes that seek to tackle poor air quality. The organization is a partner in several leading global transport and energy programmes in areas such as fuel economy, short-lived climate pollutants, air quality management strategies and infrastructure development.
Breathe Life, a campaign by the Climate and Clean Air Coalition, the World Health Organization and UN Environment, is running initiatives in 55 cities, as well as numerous countries and regions, benefiting over 153 million citizens. For example, campaign partners energized the public through a sporting challenge that saw 55,000 people pledge to commute by bicycle or on foot.
Air pollution is the theme for World Environment Day on 5 June 2019. The quality of the air we breathe depends on the lifestyle choices we make every day. Learn more about how air pollution affects you, and what is being done to clean the air. What are you doing to reduce your emissions footprint and #BeatAirPollution?
The 2019 World Environment Day is hosted by China. | <urn:uuid:d3015878-564f-4d63-aa07-064ef1a477b6> | {
"date": "2019-05-23T17:22:34",
"dump": "CC-MAIN-2019-22",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257316.10/warc/CC-MAIN-20190523164007-20190523190007-00376.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9410576224327087,
"score": 3.40625,
"token_count": 813,
"url": "https://www.worldenvironmentday.global/pt-br/node/2126"
} |
7 other chemicals in your food
Propyl gallate, BHA, BHT used to preserve oily products such as mayonnaise
Anyone who's ever read a nutrition label knows that our food supply is full of hard-to-pronounce chemicals. Most are generally recognized as safe, as the Food and Drug Administration likes to say, but a few have given scientists cause for concern.
Azodicarbonamide, for instance. Subway announced last week that it would be removing the controversial chemical from its bread. Generally used for strengthening dough, azodicarbonamide is also found in yoga mats and shoe soles, according to the Center for Science in the Public Interest. One of the breakdown products is a recognized carcinogen.
Though Subway is going to remove azodicarbonamide, there's a long list of other chemicals used in its bread: calcium carbonate, calcium sulfate, ammonium sulfate, DATEM, sodium stearoyl lactylate, potassium iodate and ascorbic acid, according to the restaurant's website (PDF).
And Subway certainly isn't alone. What other chemical additives are commonly found in your food? Here are seven, picked at random as good practice for the upcoming CNN Spelling Bee (just kidding).
1. Tartrazine and other food dyes
When Kraft announced last year that it would be removing Yellow No. 5 (tartrazine) and No. 6 from certain varieties of its Macaroni & Cheese products, advocates rejoiced. Blue 1, Green 3, Red 40 and others have been loosely linked to everything from hyperactivity in children to cancer in lab animals. Generally found in candy, beverages and baked goods, color additives are also used in cosmetics.
But you knew that, right? Did you also know about the ground-up insects in your drinks? Cochineal extract is an approved artificial dye derived from a small bug that lives on cactus plants in Mexico and South America. As long as you're not allergic, you're safe to drink up, according to the Center for Science in the Public Interest. Mmmm ...
2. Butylated hydroxyanisole (BHA)
Well, that's a mouthful. BHA is used to preserve some cereals, chewing gum and potato chips, according to the center. It's also used in rubber and petroleum products.
Butylated hydroxyanisole is "reasonably anticipated to be a human carcinogen," according to the National Institutes of Health (PDF), because of animal studies that have shown that the chemical can cause tumors in rats' and hamsters' forestomachs (something humans don't have) and fish livers.
3. Propyl gallate
Propyl gallate is often used in conjunction with BHA and a chemical called butylated hydroxytoluene, or BHT. These antioxidant preservatives protect oily products from oxidation, which would otherwise cause them to go bad. Propyl gallate can be found in mayonnaise, dried meats, chicken soup and gum, as well as hair-grooming products and adhesives.
Some scientists believe that propyl gallate is an "endocrine disruptor (PDF)," meaning it can interfere with humans' hormones. Endocrine disruptors can lead to developmental, reproductive and/or neurological problems, according to the National Institutes of Health, including fertility issues and an increased risk of some cancers. But the link between propyl gallate and the endocrine system needs to be studied further.
4. Sodium nitrite
Sodium nitrite is most often used in the preservation and coloring of meats, such as bacon, ham, hot dogs, lunch meat and smoked fish. Without it, these products would look gray instead of red.
Sodium nitrite is also found naturally in many vegetables, including beets, celery, radishes and lettuce. But the nitrite found in vegetables comes with ascorbic acid, which prevents our bodies from turning nitrite into nitrosamines.
Nitrosamines are considered potentially carcinogenic to humans. So some companies are adding ascorbic acid to their meat products to inhibit nitrosamine formation, according to the Center for Science in the Public Interest.
However, the American Meat Institute points out that the National Toxicology Program conducted a multi-year review in which rats and mice were fed high levels of nitrate and nitrite in drinking water, and a panel reviewed the findings and concluded that nitrite is safe at the levels used and is not a carcinogen.
5. TBHQ (tert-Butylhydroquinone)
This chemical preservative, a tert-butyl derivative of hydroquinone, is used in crackers, potato chips and some fast food. It can also be found in varnish, lacquer and resin. It helps prolong the shelf life of food and, if it's consumed at low levels, is considered safe.
In higher doses -- above what the FDA says manufacturers can use in food prep -- TBHQ has been found to cause "nausea, vomiting, ringing in the ears, delirium, a sense of suffocation, and collapse," according to "A Consumer's Dictionary of Food Additives." It may also cause restlessness and vision problems.
6. Silicon dioxide, silica and calcium silicate
Silicon dioxide, also known as silica, is a naturally occurring material (PDF) made up of shells of tiny single-celled algae. You might also recognize it as sand, the kind that gets stuck in your suit at the beach.
Silicon dioxide is used in dry coffee creamer, dried soups and other powdery foods. It is also used as an insect repellent, removing the oily film that covers an insect's body, causing them to dry out and die.
The EPA concluded that the human health risk is low and "not unreasonable." In rat studies, high-dose exposure has caused some lung problems. Another study of Chinese workers who were heavily exposed to the chemical showed a disproportionate number of deaths related to respiratory diseases, lung cancer and cardiovascular diseases. Silicon dioxide has also been associated with the risk of developing autoimmune diseases -- again only after heavy exposure.
7. Triacetin (glycerol triacetate)
Triacetin, also known as glyceryl triacetate, has been approved and generally recognized as safe by the FDA as a food additive.
In food, it is used as a plasticizer for chewing gum and gummy candy. It can be used to keep food from drying out and in some cookies, muffins and cakes. It is also used in perfume, cosmetics and cigarette filters and in drugs like Viagra.
Copyright 2014 by CNN NewSource. All rights reserved. This material may not be published, broadcast, rewritten or redistributed. | <urn:uuid:d913bc30-62d5-457b-bbfc-6a77ffd50033> | {
"date": "2014-03-09T19:07:52",
"dump": "CC-MAIN-2014-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010213406/warc/CC-MAIN-20140305090333-00006-ip-10-183-142-35.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9616848826408386,
"score": 2.78125,
"token_count": 1402,
"url": "http://www.ktvz.com/health/7-other-chemicals-in-your-food/-/413012/24386940/-/view/print/-/123iusl/-/index.html"
} |
As global warming triggers heavier rainfall and faster snowmelt in the Arctic, Inuit communities in Canada are reporting more cases of illness attributed to pathogens that have washed into surface water and groundwater, according to a new study. The findings corroborate past research that suggests indigenous people worldwide are being disproportionately affected by climate change. This is because many of them live in regions where the effects are felt first and most strongly, and they might come into closer contact with the natural environment on a daily basis. For example, some indigenous communities lack access to treated water because they are far from urban areas. National Geographic
Climate Change Linked to Waterborne Diseases in Inuit Communities: A recent study may warn of more widespread threats to water quality (Friday, April 6th, 2012)
April 13th, 7:30pm, UAA, Anchorage, Alaska
This talk will begin with an overview of classic examples and recent events of environmental injustice from around the world due to exposure to contaminants, and then focuses on such issues in Alaska. | <urn:uuid:3c9979d6-5273-43e1-a440-a119fef1e316> | {
"date": "2014-11-26T02:50:14",
"dump": "CC-MAIN-2014-49",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931005028.11/warc/CC-MAIN-20141125155645-00244-ip-10-235-23-156.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.952675461769104,
"score": 3.109375,
"token_count": 211,
"url": "http://www.consortiumlibrary.org/blogs/arctic_health/2012/04/"
} |
The University of Arizona has developed technology that could replace 3D displays with holographic ones, potentially ending the rise of 3D TV before it has really taken off.
Professor Nasser Peyghambarian of the College of Optical Sciences at the University of Arizona led a team of researchers to develop the holographic technology, which projects a three-dimensional moving image without the need for special glasses or devices.
“Holographic telepresence means we can record a three-dimensional image in one location and show it in another location, in real-time, anywhere in the world,” said Peyghambarian.
While TV seems like an obvious contender for the technology, Peyghambarian suggests its main use would be in telepresence. It could even be used for telemedicine, where doctors and patients could consult virtually, or for advertising and entertainment.
“Let’s say I want to give a presentation in New York,” said Peyghambarian, explaining the telepresence use. “All I need is an array of cameras here in my Tucson office and a fast Internet connection. At the other end, in New York, there would be the 3D display using our laser system. Everything is fully automated and controlled by computer. As the image signals are transmitted, the lasers inscribe them into the screen and render them into a three-dimensional projection of me speaking.”
The prototype uses a 10-inch screen, but the research group are also testing a 17-inch variant, along with a way to show full colour, which is currently not possible on the prototype.
The image is recorded with an array of cameras, which view the subject or object from different angles and perspectives; the data is then beamed out by lasers to create the holographic image, made up of hogels, or holographic pixels. The hologram will then naturally decay after several seconds or minutes, depending on the setting.
Previous attempts at holographic technology, famed for its use in science-fiction movies like Star Wars, have failed due to an inability to dynamically update the holographic images. This problem has been addressed with the new technology, making it a viable option for the future.
“At the heart of the system is a screen made from a novel photorefractive material, capable of refreshing holograms every two seconds, making it the first to achieve a speed that can be described as quasi-real-time,” said Pierre-Alexandre Blanche, an assistant research professor for the project.
The photorefractive polymer film required for the holograms was manufactured by Nitto Denko Technical, which helped the University of Arizona with the project.
The technology is featured in today’s issue of the Nature science journal. | <urn:uuid:347d85ba-1a2c-4fef-956b-ff217fb8785e> | {
"date": "2015-04-28T19:54:01",
"dump": "CC-MAIN-2015-18",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246662032.70/warc/CC-MAIN-20150417045742-00254-ip-10-235-10-82.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9406566619873047,
"score": 3.078125,
"token_count": 576,
"url": "http://www.techeye.net/hardware-2/university-of-arizona-takes-3d-hologram-route"
} |
The U.S. has laws designed to protect children, who are presumed to lack well-formed judgment. In divorce cases that go to court, a parent cannot represent a child’s interests, and a parent’s interests may even conflict with the best interests of the child. In these cases the court will appoint a “guardian ad litem”(GAL) to act in the child’s best interests during the trial and provide reasoned advice from an independent individual who does not owe allegiance to either party. (Ad litem is a Latin term meaning “for the lawsuit.”)
Some states require that GALs be attorneys; in other states, lay people are eligible to volunteer as GALs, subject to GAL training and certification. Note: a GAL is appointed solely for the court proceedings, and is not a “guardian” appointed to manage a child’s interests in general. The appointed GAL participates, as appropriate, in pre-trial conferences, mediation and negotiations. The guardian has authority to conduct an independent investigation to ascertain the facts of the case, to investigate the child’s family background, and to meet and interview the child and the parents face-to-face. In court, GALs can cross-examine witnesses called by the parents’ attorneys and can call their own witnesses.
Other GAL duties may include:
- Advising the child, in terms the child can understand, of the nature of the court process, the child’s rights, the role of the GAL, and the potential outcome of the legal action.
- File appropriate petitions, motions, pleadings, briefs, and appeals on behalf of the child and ensure the child is represented by a guardian ad litem in any appeal
- Advise the child, in terms the child can understand, of the court’s decision and its consequences for the child and others in the child’s life.
In some states, GALs make recommendations to the judge as to how they think child custody should be decided in the child’s best interests—not necessarily according to what the child prefers, if he or she is old enough to have an opinion. Usually the child does not appear in court, but if the child is to testify, the GAL will help prepare the child when necessary and appropriate.
There is no attorney-client privilege between the GAL and the parent, so nothing that is said is privileged, and it can be shared.
The seasoned family law and divorce lawyers at the McGrath Law Firm, founded by attorney Peter McGrath, will walk you through every step of the challenging divorce process to address your concerns and achieve your goals as efficiently as possible. From spousal support, child support, fault, and equitable division of property and debt to valuations, pre-nuptial agreements, annulments, and restraining orders, the experienced attorneys at McGrath Law Firm have a successful track record in all aspects of divorce law. Call us to schedule your consultation at (800) 283-1380. | <urn:uuid:aff36165-85c9-4b58-aa7a-da53529e59ae> | {
"date": "2020-01-22T04:58:11",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9522252082824707,
"score": 2.53125,
"token_count": 639,
"url": "https://mcgrathlawfirm.com/tag/no-attorney-client-privilege/"
} |
In a famous passage from On Writing the New Elements of Medicine (1682-83) Leibniz states the significance of the practical approach in medicine:
In any Machine one must consider its functions or ends, as well as its manner of operating, or by which means the author of the machine achieved its end. And therefore we should take care lest we imagine a machine that would by chance fulfill these same functions, but nevertheless not by the same means, since the precepts governing the conservation of this imaginary machine were different from the laws governing the true machine. Thus it is not surprising that certain new philosophers, with whose very ingenious thoughts about human beings we are familiar, have contributed little to the advancement of medicine, since they have sketched out their man more from the intellect than from experience. (Leibniz: 2016, 297)
We can easily say that Leibniz was in fact a philosopher dealing with imaginary machines in his philosophy of nature. And yet, as Justin Smith (2011, 26) argues, the reason for his interest in medicine was not his eclecticism, but rather its crucial role for his metaphysics of substance and individual, his philosophy of nature and the questions about organized matter and the cause of motion, and ethical concerns about the good and pious life. Medicine is also important as a part of the wider Leibnizian project of improving the living conditions of the whole of humankind, for “after virtue, health is most important of all” (Leibniz in Smith: 2011, 27).
In what follows, I would like to argue that for Leibniz, as a famous polymath working in many different disciplines and particularly acknowledged for his contributions to math, physics and metaphysics, it was actually difficult to accept his own incompetence in a field of study. Unfortunately, that was the case with medicine – he was almost completely inexperienced in it, because he had not received a medical education (unlike Locke, for example) and, unlike many of his early-modern contemporaries, was not used to performing real experiments with or observations on the living body in order to make a contribution to the not-yet-established biology. Nevertheless, he spoke frequently about these matters – particularly, but not exclusively, in his early writings he refers to different medical and pre-biological examinations, experiments, doctrines, prescriptions, etc. And yet, although he showed strong interest in this field of study, Leibniz never made an effort to become an accomplished physician. The reasonable question is why?
There are three most plausible answers: (1) he had too many responsibilities and could not find enough time to commit himself to a new science; (2) he really believed in study driven by disinterested love, according to which the right way to care about something is to work for its well-being, without asking for anything in return; (3) he was afraid to admit his own incompetence in these matters – something that everyone has to do in order to start learning something new.
It is definitely true that during his lifetime Leibniz was overwhelmed by all his activities. He worked as a councilor, diplomat, historian, librarian, inventor, etc. He traveled a lot, for one of his lifelong aims and dreams was to help in the establishment of various science societies and, in fact, under the influence of Bacon’s New Atlantis, he was an active contributor to the development of the Science Academies in Saint Petersburg and in Vienna.
However, it should be noted that Leibniz’s well-known ethical doctrine of disinterested love gives him a convenient reason to insist on his right to speak about medical matters, even though he did not have the required medical training. This boldness is especially well demonstrated in the controversy with Georg Ernst Stahl (1709-10), where Leibniz dares to contest the views of the famous physician from Halle. In his paper about the new antidysenteric of 1695-96 he even argues explicitly that “nothing is more precious to men than health and that This may be said all the more fervently by me, who is not a doctor, since I will be less suspect of seeking to advance my own usefulness” (Leibniz in Smith: 2011, 48).
Since here it is not possible to examine in much detail the doctrine of disinterested love, I would like to note only that for Leibniz true love has no other purpose than the happiness and the well-being of its object. This means that he takes into account the role of the intentions in the act and, because of this, he regards it as reasonable to question the intentions of physicians, given the fact that they are paid to do what they do. Of course, that does not mean Leibniz rejects the medical profession; on the contrary, he sees it as a very important occupation and even compares the doctors with the father confessors in the church (the people who take care of the immortal soul). Nevertheless, for Leibniz doctors should be organized by and should listen to the advice given by philosophers like him, who do not earn their livings with medicine and are driven only by the pure desire to facilitate scientific progress with the final goal of improving the standard of living of the whole of humanity.
Accordingly, Leibniz tries to restrict himself only to discussing the structure and institutionalization of medicine and not its contents; however, he regularly fails in this enterprise. In the Preface he later wrote to their controversy, Stahl intimates that the reason why his opponent did not understand the physiological part of the True Medical Theory was not only insufficient time and attention, but also his inability to follow the strict scholastic deductions. Though he tries to sound respectful, Stahl almost explicitly doubts Leibniz’s competence in the subject matter, as he points out that the latter writes too chaotically and needs too much time to prepare his answers (in particular, after receiving something written by Stahl in, as he puts it, “ten or twelve days”, it took Leibniz one year to write down his objections) (Stahl: 2016, 5-7). In conclusion Stahl states: “I shall certainly never be ashamed to confess, and even to declare, that from all these skirmishes I am not able to foresee anything that should ever cause me to fear either for my troops or for my cause” (Stahl: 2016, 11).
Leibniz was definitely not an expert in physiology or medicine, and by the time of his controversy with Stahl he was aware of the fact that because of his mature age he would probably never contribute as much as he would like to this field of study. Nevertheless, he continued writing about medical issues and, as we saw, he even dared to oppose one of the main figures of early-modern medical thought. His interests in disciplines which nowadays belong to the domain of biology cover a wide range: in his profound study Divine Machines – Leibniz and the Sciences of Life (2011, 260) Justin Smith lists here pharmacy, epidemiology, anatomy, embryology, entomology, taxonomy, physical anthropology and field botany.
However, whether or not Leibniz was aware of his own incompetence in medicine is a hard question, and we can never be entirely sure about its answer. He was definitely a confident man, who even in his early career, when he was completely unknown, did not fear to write letters to some of the most prominent thinkers of his time, for example to Thomas Hobbes in 1670. Leibniz was also very proud of the invention of the calculus and the binary system in math, the dynamics and the preservation of force in physics, the system of pre-established harmony in philosophy, etc., and all this could easily have made him very self-confident.
However, in Leibniz’s corpus we also find signs in favor of the other position, that is, that he knew of and accepted his incompetence. In his controversy with Stahl, for example, we find the following statement: “Whether use of the volatile salts of urine is useless is a question of fact, which I leave to the author of the Response, along with other physicians” (Leibniz: 2016, 393). This leads us to accept that there were some issues in medical practice about which the German thinker knew he was not a specialist.
Nevertheless, we do not need to answer this question in order to claim that Leibniz did not really want to announce this incompetency to the public. He had a consistent interest in medicine and, as a rationalist, was at the same time convinced of the significance of purely theoretical knowledge. Thus he developed many arguments for the claim that medicine needs philosophers like him. As he wrote to François de l’Hospital in May 1696: “May it please God that it should come about that doctors philosophize, and philosophers occupy themselves with medicine” (Leibniz in Smith: 2011, 43). Let us now proceed to examination of these main usages of philosophy in medicine.
The Institution of Medicine
One of the main ways in which Leibniz tried to contribute to medicine, not as a physician but as a skilled diplomat, was by helping with its institutionalization. By making the care for the body the most important task after the care for the immortal soul, Leibniz envisions reforms for the improvement of medical and state institutions, with the final goal of making the health system work better, develop faster and be available to all people from all places and all social classes. In short, he was convinced that the doctors should be organized like the religious orders (Leibniz: 2011, 282) and that the state government should make the care for this medical system its primary mission. For this reason Leibniz hopes for an enlightened ruler, who will praise and promote the work of all the people contributing to medicine – physicians, as well as scientists who perform observations on and experiments with the organic body. In his polemic with Stahl he explicitly throws the blame for the undeveloped stage of medicine on the government: “Although, to tell the truth, the blame falls rather more on the leaders of the republic, whose task it is to be the guardians of public health and to promote the development of a science, which is so necessary, than on the physicians, on whom it is incumbent to see to the treatment of households” (Leibniz: 2016, 37). Thus, in the end he was particularly disappointed that no one made use of his advice – in the already mentioned letter to de l’Hospital he complains: “I believe that one could go much further, but I have often futilely preached a ‘fable to the deaf’ on this subject” (Leibniz: 2004, 762–63).
The Science of Medicine
Thus for Leibniz medical science was at that time at the beginning of its development, and this justified the significance of the project of its constitution. In order to understand his views, we need to quickly introduce his main epistemological commitments with regard to the holistic image of knowledge.
Completely in accordance with the age of the Enlightenment, Leibniz saw the development of science as a colossal project in which everyone should be involved. This is a direct consequence of his holistic view of knowledge, according to which every discipline has its place and significance. The contribution of many scientists to this scientia generalis, as part of a wide range of scientific communities and with the support of an enlightened ruler, would result in the creation of a general, demonstrative encyclopedia of the whole of human knowledge. As Maria Rosa Antognazza shows (2017, 22-3), these Leibnizian views are particularly influenced by the encyclopedic and pansophic traditions, and especially by Francis Bacon’s ideas with regard to the holism of knowledge and its final goal – the achievement of human happiness.
In light of this epistemological frame it is not surprising that, although for many early modern physicians the value of anatomy and chemistry for medical practice was questionable, Leibniz explicitly defended them, stating that all knowledge should be respected and developed, even though sometimes its practical usefulness is not evident or not yet discovered. For he was convinced that medicine should be regarded in light of a physica specialis – a union of physiology, anatomy and chemistry. Thus, he writes for example about the role of anatomy:
Indeed quite apart from surgery, it is important for the physician to investigate the interior parts of our body. And although until now medicine may not have sufficiently benefited from the inner organization discerned by recent investigators, this, I should suppose, was due more to the negligence of men and, above all, of the practitioners, who hardly devote themselves to the search for truth, than to a defect of the domain itself. (Leibniz: 2016, 37)
Theory and Practice in Medicine
This leads us to Leibniz’s views on theoretical and practical knowledge and the need for their union for the sake of scientific progress – an idea that explains the slogan he gave to the Prussian Academy of Science: Theoria cum Praxi. As we can see, this was also the case with medicine itself – although he insists on the need for a practical approach, which includes not only healing practices but also experiments with and observations on the living body, Leibniz stated the need for theoretical knowledge in order for medical science to be further developed and completed in the future. This is, in fact, Leibniz’s major project of reconciliation of a priori and a posteriori knowledge, ancient and modern science, the modalities of reason and experience (Becchi: 2017, 56-57).
We can point out three main parts of this mission of the philosophers in medicine: (1) construction of medical theory, (2) collection of medical data, and (3) announcement of the results. Focusing on the healing of the human, physicians are unable to fulfill any of these main tasks. Hence the first goal of the ‘medical philosophers’, or the philosophers who theorize about medicine, is to construct a medical theory – something that a physician cannot be expected to do, because of his unpreparedness in abstract thinking. This theory will further help in the navigation of the accumulated scientific progress, regulating the most important tasks and the most effective methods and synthesizing the received knowledge in a unified system. And for Leibniz here special attention should be paid to the collection of the world’s medical data in a catalog of diseases and remedies as an encyclopedic project, which requires certain synthetic skills. Having completed such a project, the ‘medical philosophers’ have to promote its distribution, so that it is available in every part of the world and everyone can use it. This cannot be done by the physicians themselves, because, like most craftsmen, “in addition to not being inclined to teach others who are not their apprentices, [they] are not people who explain themselves intelligibly in writing” (Leibniz: 1999, 961).
Therefore, Leibniz sees the deficiency of early modern medicine in the lack of theory – the lack of first principles or foundations, which leads to uncertainty (Smith: 2011, 40). For him philosophy can free medicine from its dependence on mystical practices like physiognomy and chiromancy, as well as from their tendency toward fraudulence, and set it on the right path of empirical experiment with the animal and human body, the observation of bodily symptoms and the examination of medicaments (Smith: 2011, 35).
Thus, for Leibniz the empirical inductive truths can be proved only by observations and experiments, and these form the core of what Anne-Lise Rey terms a ‘provisional empiricism’ that expects rationalistic approval or correction. The a priori method begins with reasoning about the creator of all things and is certain, but it is often very difficult to accomplish. In those cases we should use the a posteriori method, which derives from the results of observations and experiments and is plausible by nature (Rey: 2013, 370-1).
However, as the well-known Leibnizian rationalistic claim states, true knowledge is the a priori knowledge of the reasons. This aspect can further be seen in regard to the content of medicine – the need to identify the causes of diseases – something that practicing doctors disregard in favor of finding and treating the symptoms. Leibniz saw this as a prerequisite for the prevention of diseases in the future, which he valued as an indispensable and fruitful part of medicine, paying special attention, in particular, to the role of diet in the preservation of health.
In conclusion, I would like to point out that Leibniz’s approach to the question was, in fact, modern and in accordance with the age of Enlightenment, whereas the secrecy of some doctors and observers (e.g. Leeuwenhoek) with regard to their own knowledge was distinctly anti-modern. As Alessandro Becchi (2017, 76-77) shows, the conflict between the two parties was a result of different cultural backgrounds, languages, social layers, etc., but at the same time a symbiosis was necessary for the progress of medicine and of the life sciences as a whole. This can be seen as an overcoming of the distinction between practical and theoretical medicine, as in Leibniz’s view it will make possible the identification of the cause through an analysis that begins with experience (Rey: 2013, 370). In Bacon’s metaphor from the New Organon, it is necessary for the ant that collects data and the spider that creates networks to connect and unite, in order to achieve the ideal of the bee (Becchi: 2017, 78).
However, the concordance is no coincidence, and Leibniz frequently oversteps the line he drew between theoretical and practical medicine, sharing his opinion about substantive medical issues. A plausible reason for this is the self-confidence he gained from his successes in other fields of study, and also his awareness of his deep knowledge of mechanical philosophy and the laws of nature that govern the physical world. This could easily have driven him to think that as a mechanist he could also contribute to medical practice. Thus, he writes:
It is evident that the human body is a machine disposed by its author or inventor to certain functions. And thus to write medicine is nothing other than to prescribe to a mechanic a method by which he will be able to conserve the machine that has been entrusted to his care, so that it should continue to operate correctly, like the precepts that are typically given to the custodians of those hydraulic machines by means of which water is dispensed throughout an entire city. (Leibniz: 2011, 297)
This may have led to a curious fact from the history of philosophy – Leibniz himself died in large part because of self-inflicted injuries, which, as we read in the Note on Gout and ‘the Vapors’ of 25 January 1676, he held as remedies for the gout (Leibniz: 2007, 169). Ironically, earlier he had claimed that precisely unwarranted self-confidence in medicine caused Descartes’ death (Leibniz: 1978, 275). Thus, it might have turned out that the idea that the human body is only a machine encouraged the mechanical philosophers to overrate their own ability to treat it and even to make fatal decisions about their own health. These examples demonstrate not only the importance of general philosophical commitments for the real-life decisions of the early moderns, but also the real danger of incompetency in the field of medicine. And so the moral of this story from the history of medicine is that the latter can and should, in fact, learn many things from philosophy, but, at the same time, philosophers should be more afraid for their bodily health than for their reputation in the scientific community.
Leibniz, G. W. (1978). Leibniz gegen Descartes und den Cartesianismus, Die philosophischen Schriften, vol. VI (ed. Gerhardt). New York: Hildesheim, 263-280.
Leibniz, G. W. (1999). Discours touchant la methode de la certitude, et l’art d’inventer pour finir les disputes, et pour faire en peu de temps des grands progrés, Leibniz: Sämtliche Schriften und Briefe, R. VI, B. IV. Berlin: Akademie-Verlag, 952-963.
Leibniz, G. W. (2004). Leibniz an Guillaume François de L’Hospital, Leibniz: Sämtliche Schriften und Briefe, R. III, B. VI. Berlin: Akademie-Verlag, 762-763.
Leibniz, G. W. (2007). Note on Gout and ‘the Vapors’ (transl. J. E. H. Smith), The Leibniz Review, vol. 17, 168 – 175.
Leibniz, G. W. (2011). Appendix 1: “Directions Pertaining to the Institution of Medicine” (transl. J. E. H. Smith), Divine Machines: Leibniz and the Sciences of Life (ed. J. E. H. Smith). Princeton & Oxford: Princeton University Press, 275 – 287.
Leibniz, G. W. (2011). Appendix 2: “The Animal Machine” (transl. J. E. H. Smith), Divine Machines: Leibniz and the Sciences of Life (ed. J. E. H. Smith). Princeton & Oxford: Princeton University Press, 288 – 289.
Leibniz, G. W. (2011). Appendix 3: “The Human Body, Like That of Any Animal, Is a Sort of Machine (1680-86)” (transl. J. E. H. Smith), Divine Machines: Leibniz and the Sciences of Life (ed. J. E. H. Smith). Princeton & Oxford: Princeton University Press, 290 – 296.
Leibniz, G. W. (2011). Appendix 4: “On Writing the New Elements of Medicine (1682-83)” (transl. J. E. H. Smith), Divine Machines: Leibniz and the Sciences of Life (ed. J. E. H. Smith). Princeton & Oxford: Princeton University Press, 297 – 302.
Leibniz, G. W. (2016). Animadversions Concerning Certain Assertions of the True Medical Theory, The Leibniz-Stahl Controversy (transl. and ed. Fr. Duchesneau & J. E. H. Smith). New Haven & London: Yale University Press, 16 – 51.
Leibniz, G. W. (2016). Leibniz’s Exceptions and Stahl’s Replies, The Leibniz-Stahl Controversy (transl. and ed. Fr. Duchesneau & J. E. H. Smith). New Haven & London: Yale University Press, 246 – 409.
Stahl, G. E. (2016). Preface to “Negotium otiosum”, The Leibniz-Stahl Controversy (transl. and ed. Fr. Duchesneau & J. E. H. Smith). New Haven & London: Yale University Press, 4 – 15.
Antognazza, M. R. (2017). Philosophy and Science in Leibniz, Tercentenary Essays on the Philosophy and Science of Leibniz (ed. L. Strickland, E. Vynckier, J. Weckend). Cham: Palgrave Macmillan, 19 – 46.
Arthur, R. (2017). Leibniz, Organic Matter and Astrobiology, Tercentenary Essays on the Philosophy and Science of Leibniz (ed. L. Strickland, E. Vynckier, J. Weckend). Cham: Palgrave Macmillan, 81 – 107.
Becchi, A. (2017). Between Learned Science and Technical Knowledge; Leibniz, Leeuwenhoek and the School for Microscopists, Tercentenary Essays on the Philosophy and Science of Leibniz (ed. L. Strickland, E. Vynckier, J. Weckend). Cham: Palgrave Macmillan, 47 – 80.
Duchesneau, F. & Smith, J. E. H. (2016). Introduction, The Leibniz-Stahl Controversy. New Haven & London: Yale University Press, XIII – LXXXIX.
Duchesneau, F. (2011). Chapter 2: “Leibniz Versus Stahl on the Way Machines of Nature Operate”, Machines of Nature and Corporeal Substances in Leibniz (ed. J. E. H. Smith, O. Nachtomy). Berlin: Springer, 11 – 28.
Rey, A. L. (2013). The Status of Leibniz´ Medical Experiments: A Provisional Empirism?, Early Science and Medicine 18-4-5, 360-380.
Smith, J. E. H. (2011). Chapter One: “Que les philosophes medicinassent”. Leibniz’s Encounter with Medicine and It’s Experimental Context, Divine Machines: Leibniz and the Sciences of Life. Princeton & Oxford: Princeton University Press, 25 – 58.
Philosophia 18/2017, pp. 49-58 | <urn:uuid:8e52beaf-6a44-42e4-ad5a-56c37e53f661> | {
"date": "2018-01-20T20:53:04",
"dump": "CC-MAIN-2018-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889733.57/warc/CC-MAIN-20180120201828-20180120221828-00056.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9525731801986694,
"score": 2.6875,
"token_count": 5491,
"url": "https://philosophia-bg.com/archive/philosophia-18-2017/leibniz-and-the-fear-of-incompetence-a-story-from-the-history-of-medicine/"
} |
High temperatures and toxic blue-green algae proved a deadly combination for some fish in Grand Lake St. Marys in western Ohio.
Lake-area residents and Ohio parks officials found several hundred fish — mostly gizzard shad and some bluegill and crappies — floating in the lake’s shoreline channels over the weekend.
Fish kills are an annual summertime occurrence at the shallow, 13,000-acre lake, officials said. They pop up when decomposing algae rob the water of oxygen and suffocate some fish.
Gizzard shad are sensitive to low oxygen levels and typically die in the greatest numbers, according to the Ohio Department of Natural Resources.
“We just came off a week of extremely hot days and a calm lake with no wind,” said Milt Miller, manager of the Grand Lake St. Marys Restoration Commission, referring to weather conditions that help the algae grow.
The lake has been plagued by toxic algae since federal officials discovered it there in 2009. Annual warnings have hurt the local tourism economy.
Also called cyanobacteria, blue-green algae are common in most lakes. In Grand Lake St. Marys, they grow thick, feeding on phosphorus from manure and fertilizers that rain washes from nearby farm fields.
The algae produce liver and nerve toxins that can sicken people and kill pets and fish.
Similar blooms appear in Lake Erie and other inland lakes, usually starting in late July or August. Warnings at Grand Lake St. Marys advise small children, the elderly and people with weakened immune systems that swimming and wading “are not recommended.” The warnings typically appear on Memorial Day weekend.
The state has spent more than $8 million fighting algae there. Most of the money was spent on two chemical treatments that officials hoped would starve the algae by removing phosphorus from the water.
Still, Miller said, more people are visiting the lake each summer, and concerns about the algae blooms are diminishing.
“We have had no reports of any human or animal illness,” he said.
THE HUDSON RIVER IN THE NINETEENTH CENTURY AND THE MODERNIZATION OF AMERICA
July 7-12 or July 14-19, 2013
We are happy to invite you to join us in summer 2013 to participate in an NEH Landmarks of American History and Culture workshop on “The Hudson River in the Nineteenth Century and the Modernization of America.” The workshop will take a distinctly historical and cultural approach to the Hudson River. Participants, who will be designated as NEH Summer Scholars, will explore the Hudson of the nineteenth century as a microcosm of American culture in a century that, more than any other, transformed the country. We aim to equip each participant with a framework for pursuing place-based inquiries back home, so that our study of the Hudson might serve as a springboard to similar studies elsewhere. The program is open to K-12 educators and is devised to be intensive and intellectual, yet collegial. Participants will be invited to share their expertise, collaborate on lesson plans and curriculum, and exchange ideas.
The part of the Hudson River that we will be focusing on stretches from New York Harbor at the base of Manhattan to Newburgh Bay, about 60 miles north of New York City. This region encompasses the dramatic Palisades along the western shore of the river, the broad Tappan Zee on which Washington Irving had his home, and the dramatic Hudson River Highlands, the setting of so many of the famed Hudson River School paintings. During the workshop we will be visiting a number of sites in and around this stretch of river.
If you have questions about the workshop contact us at [email protected] and we will gladly write back. We are very excited to be sponsoring this workshop and look forward to gathering next summer. Thank you!
Meredith Davis Associate Professor of Art History Ramapo College of New Jersey
Stephen P. Rice Professor of American Studies Ramapo College of New Jersey
Any views, findings, conclusions, or recommendations expressed in this program do not necessarily reflect those of the National Endowment for the Humanities.
The implantable miniature macular degeneration telescope developed by CentraSight received FDA approval as of July 2010. The device helps to improve vision for those who have advanced macular degeneration or end-stage macular degeneration, in which there is severe loss of one's central vision. In end-stage AMD the damage to the retina's macula is permanent and there is no medical drug that can treat it.
Research for the retina implant began in 2006 at 28 eye centers in the United States. Well known eye centers like Wilmer Ophthalmological Institute in Baltimore, Maryland and Emory Eye Center in Atlanta were some of the study sites.
Over 200 patients participated in the study. The telescope has already been approved for use in Europe and has the Health Canada Listing. Patients have experienced improved vision and improved quality of life.
The implant does not restore lost vision to normal vision but does restore some of the lost central vision. Patients are able to see things they were not able to see before such as faces and writings in books or magazines.
The FDA website reports these results:
"In a 219-patient, multi-center clinical study of the Implantable Macular Degeneration Telescope, 90 percent of patients achieved at least a 2-line gain in either their distance or best-corrected visual acuity, and 75 percent of patients improved their level of vision from severe or profound impairment to moderate impairment."
The FDA approved implantable telescope is placed into the eye with the poorest vision. The telescope actually replaces the existing lens and magnifies images 2.5x. The effect of the magnification of an image decreases the blind spot that is located in one's central vision.
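A rough geometric sketch of why magnification shrinks the functional blind spot follows. The 2.5x figure comes from the text above; the scotoma size is invented for illustration and is not clinical data.

```python
# Simplified sketch: magnification M spreads the image of the scene across
# more of the retina, so a fixed-size central scotoma (blind patch) covers
# a proportionally smaller angular slice of the outside world.

M = 2.5               # telescope magnification (from the text)
scotoma_deg = 10.0    # hypothetical angular diameter of the central scotoma

effective_deg = scotoma_deg / M   # scene coverage scales as 1/M
print(f"Blind spot hides ~{scotoma_deg:.0f} degrees of scene unaided, "
      f"~{effective_deg:.0f} degrees with the {M}x telescope.")
```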
Cornea and cataract eye specialists who are specially trained perform the implant procedure.
It is approximately a one-hour outpatient procedure. "It absolutely functions as a telescope," said Kathryn Colby, MD, of Massachusetts Eye and Ear Infirmary. "It has two wide-angle, high-power lenses. They are just very, very small."
Very small indeed, the device is about the size of a pea. The macular degeneration telescope works by directing light from the new lens to parts of the macula that are still functioning. Patients first try an external version of the telescope to see how it works for them.
It's not all over once the implant is in place. Training by a low vision specialist is necessary to teach the patient how to coordinate their vision - with one eye having central vision and the other eye peripheral vision - before and after the implant.
Not everyone with macular degeneration is eligible for the telescopic implant. This is not an all-inclusive list; there are other requirements besides the ones listed here. They are:
1. Have irreversible, end-stage AMD resulting from either dry or wet AMD
2. Are no longer a candidate for drug treatment for your AMD
3. Have not had cataract surgery in the eye in which the telescope will be implanted
4. Meet age, vision, and cornea health requirements (As of December 2014 the age requirement is 65 and older)
5. No active bleeding or treatment for choroidal neovascularization within the last 6 months
6. No uncontrolled glaucoma
7. No history of retinal detachment
8. No previous intraocular, corneal or refractive surgery in the implantable eye
Currently, any patient with end-stage AMD needing the implant in an eye that has had cataract surgery has been ineligible for this procedure. However, many patients with macular degeneration have had cataract surgery to help improve their vision, some with disappointing results.
“Cataract surgery is often performed on patients living with macular degeneration in the hopes that an IOL will improve contrast and light. However, studies show that patients who progress to End-Stage macular degeneration do not experience an appreciable improvement in their visual acuity, post cataract surgery,” said Stephen Lane, MD. “The long term efficacy of the telescope implant in improving vision and quality of life in AMD patients has been demonstrated during studies that followed subjects up to 8 years post-surgery. This study will inform us about the safety, effectiveness, and the appropriate surgical technique for implanting the telescope in patients who have had cataract surgery before.”
On January 10, 2017 the FDA granted VisionCare, Inc. approval to begin a clinical study on the use of the CentraSight implant in patients who have had cataract surgery. The clinical trial will begin by evaluating the safety and effectiveness of the telescope/retina implant in patients who have had previous cataract surgery with an intraocular lens (IOL). The IOL will be removed and replaced with the implantable miniature telescope developed by Dr. Isaac Lipshitz.
There are now several CentraSight providers in the US who are seeking to enroll patients with end-stage macular degeneration who have had cataract surgery to see if they might be candidates for the study.
To find out more you may call CentraSight and speak to a representative at 1-877-997-4448.
A form to determine whether you might be a candidate is also available on the CentraSight website.
Deposits on the device and increased intraocular pressure are the most common side effects.
According to the FDA website, "Because the IMT is a large device, implantation can lead to extensive loss of corneal endothelial cells (ECD), the layer of cells essential for maintaining the clarity of the cornea, and chronic endothelial cell loss.
The chronic rate of endothelial cell loss is about 5 percent per year. Significant losses in ECD may lead to corneal edema, corneal decompensation, and the need for corneal transplant.
In the study, 10 eyes had unresolved corneal edema, with five resulting in corneal transplants.
The calculated five-year risk for unresolved corneal edema, corneal decompensation, and corneal transplant are 9.2 percent, 6.8 percent and 4.1 percent, respectively."
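To get a feel for how a 5 percent annual loss compounds, here is a small illustration; the rate is taken from the quote above, while the time horizons are arbitrary.

```python
# Compounding of a ~5%/year endothelial cell loss. After n years the
# remaining fraction of cells is (1 - 0.05)**n. Illustrative only.
rate = 0.05
for years in (1, 5, 10):
    remaining = (1.0 - rate) ** years
    print(f"after {years:2d} yr: {remaining:.0%} of endothelial cells remain")
```

After five years roughly 77 percent of the original cells remain, and after ten years roughly 60 percent, which is why corneal outcomes are tracked over multi-year horizons.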
Again the procedure is performed in only one eye. The eye's natural lens is removed and replaced with a tiny telescope implant.
It is a relatively short out-patient procedure that starts with numbing the eye and administering special eye drops to enlarge the pupil. A speculum, a special instrument, holds the eye open while the surgeon removes the natural lens.
The miniature telescope is then placed where the natural lens was located. Sutures are used to close the surgical incision.
All patients enrolled in the IMT-002 trial who received the implantable device were asked to participate in a 5-year follow-up study. This study will monitor long-term safety.
Patients will be examined at six-month intervals up to a total of 5 years following implantation.
They will be monitored for possible complications such as rates of corneal transplant, retinal detachments and corneal edema.
[American Falls de-watered, via Flickr user rbglasson]
For six months in the summer and fall of 1969, Niagara’s American Falls were “de-watered”, as the Army Corps of Engineers conducted a geological survey of the falls’ rock face, concerned that it was becoming destabilized by erosion. During the interim study period, the dried riverbed and shale were drip-irrigated, like some mineral garden in a tender establishment period, by long pipes stretched across the gap, to maintain a sufficient and stabilizing level of moisture. For a portion of that period, while workers cleaned the former river-bottom of unwanted mosses and drilled test-cores in search of instabilities, a temporary walkway was installed a mere twenty feet from the edge of the dry falls, and tourists were able to explore this otherwise inaccessible and hostile landscape.
A riverbed, in other words, became an ephemeral public park, though as a by-product of a potentially colossal geo-re-engineering project.¹ The authorities even installed temporary interpretative signage explaining the Falls’ geology to inquisitive visitors. Which, of course, raises the possibility that other ephemeral parks might be constructed, perhaps not as by-product, but solely to provide access to new terrains. Without consideration of the practicalities: lower the Hudson for a month, and hold a rock-climbing festival along new cliffs, the competitors scrambling up Hartland Schist in the mist of spray-emitters stabilizing the rocky banks. Let loose the dammed power-lakes of the Tennessee Valley Authority, and hold Bonnaroo on the muddy bottom of Harrison Bay, temporarily un-flooded.
Or, less ephemerally, when the Chicago River is re-reversed, will the city partition and drain it at the canal locks, and sell off the resultant land-rights?
¹ Further fascinating history of Niagara Falls: in the nineteenth century, the “tailraces” — essentially, industrially-scaled discharge pipes which created artificial water falls in the gorge — of the Mill District were nearly as famous a tourist attraction as the natural falls. From the photographs (also: in winter), it is easy to see why; also, at Places, Barbara Penner reviews Ginger Strand’s “Inventing Niagara”; finally, Strand’s own Niagara Toxic Tour.
If you are sick with the flu, below are some tips on how to take care of yourself and to keep others healthy.
Know the signs and symptoms of flu. Symptoms of flu include fever or chills and cough or sore throat. In addition, symptoms of flu can include runny nose, body aches, headache, tiredness, diarrhea, or vomiting.
Stay home or at your place of residence if you are sick, for at least 24 hours after there is no longer a fever (100 degrees Fahrenheit or 38 degrees Celsius) or signs of a fever (have chills, feel very warm, have a flushed appearance, or are sweating). This should be determined without the use of fever-reducing medications (any medicine that contains ibuprofen or acetaminophen). Staying away from others while sick can prevent others from getting sick too. Ask a roommate or friend to check up on you and to bring you food and supplies if needed.
Cover your mouth and nose with a tissue when coughing or sneezing.
Wash your hands often with soap and water, especially after coughing or sneezing. Alcohol-based hand cleaners are also effective if soap and water are not available.
Avoid touching your eyes, nose, or mouth. Germs spread this way.
Sick people should stay at home or in their residence, except to go to the health care provider's office.
Stay in a separate room and avoid contact with others. If someone is caring for you, wear a mask, if available and tolerable, when they are in the room.
Drink plenty of clear fluids (such as water, broth, sports drinks, and electrolyte beverages for infants) to keep from becoming dehydrated.
Contact your health care provider or institution's health services if you are at higher risk for complications from flu for treatment. People at higher risk for flu complications include children under the age of 5 years, pregnant women, people of any age who have chronic medical conditions (such as asthma, diabetes, or heart disease), and people age 65 years and older.
Contact a healthcare provider or BVU Health Services right away ([email protected] or 712.749.1238) if you are having difficulty breathing or are getting worse.
11 Jul 2012 - I have just received the following supplement to the original article below that we published almost four years ago about how wine drinkers can best look after their teeth. (This may have been prompted by the terrifying state of my teeth in this video shot at the recent tasting of Julia's 50 Great Portuguese Wines.)
Written by Susan Cooper.
A frequently asked question is 'is there anything I can do or use to prevent [dental] erosion?' The advice usually given is: 'Cut down the amount and frequency of consumption of wine, especially white wine, carbonated drinks, fruit and fruit juices, and pickled products, all of which are the main dietary sources of acid. Use Sensodyne Pronamel toothpaste (or similar product). Rinse the mouth with water after having acidic food or drink. Milk and cheese help to neutralise acids. Chewing sugar-free gum helps to increase the saliva flow, which also helps to neutralise acids. Drink through a straw, which helps acids to bypass the teeth, and do not swish acidic drinks around the mouth.' That last bit of advice is of course no good for wine drinkers!
Erosion of enamel may cause sensitivity of the teeth when consuming hot, cold or sweet foods and drink. Over time the teeth may appear more yellow, and be more prone to chipping and of course tooth decay.
The best advice is to visit the dentist and hygienist regularly. There are many causes for 'enamel wear', including the way you brush your teeth. There are many new techniques and products available for treating sensitivity and restoring eroded enamel using products such as bonded composite resin.
13 Oct 2008 - I met wine-loving dental hygienist Susan Cooper at my wine day at Ballymaloe House in Ireland recently. One of the many topics we touched on was that difficult interstice: wine and teeth. I know purple pagers are concerned about this topic since the last time we touched on it here, admittedly back in 2004, there was considerable interest. Susan has kindly set out the relevant issues below.
There are two main issues relevant to wine and teeth: erosion of the enamel from the acidic properties of wine, and staining from red wine.
Erosion can sometimes be seen as a translucency of the front teeth, and often teeth may become sensitive. Saliva can help to neutralize and clear acids from the mouth; it also helps to remineralise enamel. However, there is a limit above which saliva loses its capacity to remineralise and dental erosion occurs. It should also be taken into account that alcohol has a dehydrating effect which reduces salivary flow, and wine is generally left in contact with the teeth for a long time as it is swished around the mouth before swallowing. In order to minimise the damage, it is important not to brush the teeth immediately after drinking or tasting wine as the enamel may be soft and easily damaged by brushing. It is probably advisable to wait at least one hour. It may help to eat a neutralising food such as cheese. There are some new products on the market which are designed to help protect against acid erosion, such as Sensodyne Pronamel, which helps with sensitivity and may also help to remineralise and harden acid-softened enamel.
Stain is a cosmetic issue and does not harm the enamel. However, it can be unsightly and difficult to remove. The porosity of an individual's enamel will affect the amount and intensity of the stain, and composite filling material (white fillings) are prone to stain. Using a whitening toothpaste may help to remove stain. These toothpastes do not whiten teeth; they contain stain removers which help to restore the enamel to its original colour. Some people find that using an electric toothbrush helps to remove more stain than a manual toothbrush. Rinsing the mouth with water may help to cut down on the amount of stain, and at tasting sessions you will do no damage by gently wiping the teeth with a soft tissue. However, it's best to wait for at least one hour before brushing to limit damage to enamel.
The best advice I can give would be to visit the dental hygienist on a 3-4 monthly basis. The stain can then be professionally removed, and up-to-date advice offered regarding new products to combat stain and erosion. Whitening treatments are available but these need to be discussed with the hygienist or dentist.
Just a quick note regarding alcohol and mouthwash [I think this comment was inspired by my comment that, after years of using regular Listerine instead of wine-unfriendly minty toothpastes when brushing teeth prior to tasting wine, I had recently been warned off it by my dental hygienist because it contains alcohol - JR]: While alcohol abuse is associated with oral cancer, a recent review of published scientific literature has shown that topical alcohol in mouthwashes itself is not associated with the development of oral cancer. It is very difficult to determine the exact role of alcohol in oral cancer as many individuals with a high alcohol intake also tend to have a high tobacco use.
One hundred and thirty-six years ago this week, Winston Churchill—arguably the leading statesman of the twentieth century—was born. The son of a British father and an American mother, Churchill is often remembered for his formidable oratory skills and his love of fine cigars. Yet Churchill was also a great friend to America, and his warnings about the empty promises of the nascent welfare state have come to fruition.
A great admirer of America, Churchill especially praised our founding document: “The Declaration is not only an American document. It follows on the Magna Carta and the Bill of Rights as the third great title deed on which the liberties of the English-speaking peoples are founded.” Though Britain and America were two separate nations with different forms of governments, they were united in principle: “I believe that our differences are more apparent than real, and are the result of geographical and other physical conditions rather than any true division of principle.” As Justin Lyons explains in “Winston Churchill’s Constitutionalism: A Critique of Socialism in America,” Churchill’s ideas about individual liberty, constitutionalism, and limited government “stemmed from his explicit agreement with the crucial statements of these principles by the American Founders.”
When Churchill saw America’s principles of liberty, constitutionalism, and limited government threatened with the rise of the welfare state, he admonished America to resist this soft despotism. In “Roosevelt from Afar,” Churchill admits that the American economy was suffering when FDR took office, but FDR used this crisis as an opportunity to centralize his political authority rather than to bolster the free market through decentralized alternatives. Churchill commends Roosevelt’s desire to improve the economic well-being for poorer Americans, but he critiques Roosevelt’s policies toward trade unionism and attacks on wealthy Americans as harmful to the free enterprise system. Drawing on Britain’s experience with trade unions, Churchill understood that unions can cripple an economy: “when one sees an attempt made within the space of a few months to lift American trade unionism by great heaves and bounds [to equal that of Great Britain],” one worries that the result could be “a general crippling of that enterprise and flexibility upon which not only the wealth, but the happiness of modern communities depends.” Similarly, redistribution of wealth through penalties on the rich harms the economy: “far from depriving ordinary people of their earnings, [the millionaire] launches enterprise and carries it through, raises values, and he expands that credit without which on a vast scale no fuller economic life can be opened to the millions. To hunt wealth is not to capture commonwealth.” Ultimately, attacks on the wealthy only serve as a distraction from other economic issues.
We can readily recall Churchill’s foresight in foreign affairs—his warnings about appeasing Hitler and the rise of the Soviet Union—but we forget his warnings about America’s welfare state. Unlike the progressives in America and abroad, Churchill recognized that tyranny is still possible—even with a well-intentioned welfare state. Political change does not necessarily mean change for the better. Throughout the nineteenth century, political progress was assumed to be boundless and perpetual. After “terrible wars shattering great empires, laying nations low, sweeping away old institutions and ideas with a scourge of molten steel,” it became evident that the twentieth century would not live up to the nineteenth century’s promise of progress. Democratic regimes—even in America—would not be immune from destruction and degradation.
Years later, Churchill’s warnings about trade unionism and redistribution have proven accurate. Though our current economic situation seems bleak, we must also remember (as Churchill reminds us) that politics is not a mere victim of history. Just as progress is not inevitable in politics, neither is decline. Isn’t it time we looked to our old friend Winston Churchill?
Negotiators in Copenhagen are very near to finalizing a remarkable deal that will see the vast majority of tropical nations attempt to reduce deforestation by 25% over the next 5 years. It seems obvious that such a deal would offer massive benefits to conservation, but a new study argues that some regions rich in biodiversity could get a raw deal.
The research argues that if safeguards are not put in place, agriculture could be displaced and intensified in regions with relatively low levels of carbon.
Deforestation contributes about 18% of all carbon emissions and is also the biggest driving force in pushing species to extinction; curbing it is also a relatively cheap way to reduce greenhouse gas emissions. That’s why a deal to reduce emissions from deforestation and degradation (REDD) is an attractive option.
As the money potentially on the table “dwarfs current conservation expenditures in developing countries, REDD could trigger the biggest paradigm shift in conservation history,” according to the study published today in the journal Conservation Letters. “While it is generally assumed that REDD would have positive impacts for biodiversity conservation, this assumption has not been rigorously tested.”
The new report—authored by researchers at institutes including Stanford University in the U.S. and the Institute for Global and Applied Environmental Analysis (GAEA) in Brazil as well as the University of East Anglia (UEA) and the U.N.’s World Conservation Monitoring Centre, both in the U.K.—created maps that compare biodiversity and the amount of carbon stored in ecosystems.
As a proxy for all biodiversity, the researchers looked at three global distribution data sets for mammals, amphibians, and birds, totaling 20,697 species. They compared this to new global estimates of the carbon content of ecosystems from 2008 that take into account biomass both above and below the ground. The overlap between biodiversity and carbon content was explored with statistical correlations.
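To make the method concrete, here is a minimal sketch of an overlap analysis of this kind run on synthetic gridded data. The cell count, distributions and thresholds are invented for illustration and do not reproduce the study's actual data or methods.

```python
# Sketch: correlate gridded species richness with ecosystem carbon content,
# then flag "double jeopardy" cells that are species-rich but carbon-poor.
# Synthetic stand-ins for the study's global maps; illustrative only.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_cells = 10_000                                          # land grid cells
carbon = rng.gamma(shape=2.0, scale=50.0, size=n_cells)   # tC/ha per cell
# Fake a positive but noisy association (tropical forests tend to be rich
# in both carbon and species).
richness = np.clip(0.5 * carbon + rng.normal(0.0, 40.0, n_cells), 0, None)

rho, p = spearmanr(carbon, richness)
print(f"Spearman rho = {rho:.2f} (p = {p:.1e})")

# Cells that could lose out under a carbon-only REDD scheme:
carbon_poor = carbon < np.percentile(carbon, 25)
species_rich = richness > np.percentile(richness, 75)
jeopardy = carbon_poor & species_rich
print(f"{jeopardy.mean():.1%} of cells are species-rich but carbon-poor")
```

On data like these, a positive rank correlation supports the "REDD broadly helps biodiversity" conclusion, while the flagged cells correspond to the arid, carbon-poor regions the authors warn about.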
The results confirmed that REDD could have significant benefits for conservation overall, but the report warns that relatively carbon-poor regions could suffer “a double conservation jeopardy, with conservation investment diverted away from them, and human pressure redirected towards them, as carbon-rich areas become the focus of conservation.” Examples include arid regions such as the Brazilian Cerrado, the Cape Floristic region of South Africa, and the Succulent Karoo of South Africa and Namibia.
“Overall REDD would have a very positive effect for biodiversity conservation, which makes it a very powerful tool that simultaneously addresses two of the greatest global environmental crises of our age,” said lead author Bernardo Strassburg of the UEA and the GAEA, in a statement. He argues that biodiversity distribution should be considered when REDD is planned and implemented.
“The problem is that this [U.N. convention on climate change] is about carbon and not about biodiversity and ecosystem services; that’s the Convention on Biological Diversity,” says Andrew Mitchell of Oxford University, who also directs the Global Canopy Programme, an alliance of 29 organizations fighting deforestation. But the two topics are related, he says: Biodiversity drives carbon, water, and energy cycles.
Sylvia Hoff, a graduate student from the Spemann Graduate School of Biology and Medicine (SGBM), has identified a new gene that causes cystic kidneys in children and young adults. The work by Hoff and her international collaboration partners was published in the scientific journal Nature Genetics. The group's results provide novel insights into the molecular mechanism underlying NPH, a prerequisite for developing pharmacological targets and new therapies for children with nephronophthisis.
Nephronophthisis (NPH) is the most common inherited kidney disease that leads to renal failure in children. The kidneys of affected children develop cysts, and as there is no approved therapy yet, patients need dialysis and renal transplantation. In addition, NPH often affects other organs apart from the kidney, such as the eyes, the liver, or the brain.
The PhD student Sylvia Hoff, together with Dr. Soeren Lienkamp of the Nephrology Department at the Freiburg University Medical Center headed by Prof. Gerd Walz, analyzed the function of NPH proteins during early developmental processes. They found that the ANKS6 protein has functions similar to those of some of the known NPH proteins. In collaboration with research groups in France, USA, Denmark, Switzerland, Egypt, the Netherlands, and Germany, they succeeded in identifying mutations in the ANKS6 gene of children with NPH. This confirmed that ANKS6 is a novel NPH-disease gene. The patients suffered from early onset cystic kidney disease and structural heart abnormalities.
Further analysis revealed that ANKS6 also forms a protein network with three other NPH proteins (INVS, NPHP3, and NEK8) at the cilium, a hair-like structure on the surface of many cells. The formation of this network is regulated by the enzyme HIF1AN. This is the first time that the assembly of NPH proteins has been described as a dynamic process. Thus, the finding sheds some light on how the binding of multiple NPH proteins can be regulated.
This can serve as a basis for investigating the function of NPH protein groups in kidney cells, which will improve our understanding of the disease on the cellular level.
The Courts of Common Pleas are the general trial courts of Pennsylvania. They are organized into 60 judicial districts. Most districts follow the geographic boundaries of counties, but seven of the districts comprise two counties. Each district has from one to 93 judges and has a president judge and a court administrator.
Minor courts, or special courts, are the first level of Pennsylvania's judiciary. These courts are presided over by magisterial district judges (MDJs) and municipal court judges. MDJs do not have to be lawyers, but they are required to pass a qualifying exam.
polygamy, same-sex marriage, homosexuality
Based on a sample of 814 university students, pro- and anti-same-sex-marriage and polygamous-marriage groups were established from students scoring > 1 SD above (n = 145; n = 132, respectively) and > 1 SD below the group mean (n = 127; n = 126) on the Attitudes Toward Same-Sex Marriage Scale (ATSSM; Pearl & Paz-Galupo, 2007) and the Attitudes Toward Polygamy Scale (ATPM), which was generated by modifying the ATSSM. Compared to pro-same-sex-marriage students, anti-same-sex-marriage students were significantly more prejudiced against gays and lesbians, more authoritarian, more religious, and more politically conservative. Anti-same-sex-marriage students also had less contact with and appreciation for diverse cultural groups, showed more desire to dominate out-groups, were less autonomous in their thinking, and were more likely to be men. Compared to pro-polygamous-marriage students, anti-polygamy students were more strongly opposed to same-sex marriage, idealized the traditional family more, were more authoritarian and religious, were less autonomous in their thinking, showed more desire to dominate minority groups, and were more likely to be women. Results further indicated that polygamy and same-sex marriage are predicted by different variables, with attitudes toward same-sex marriage being more strongly tied to prejudice against gays and lesbians and attitudes toward polygamous marriage being more strongly tied to beliefs about the inherent morality of conventions surrounding the traditional family. A regression analysis using data from all 814 students yielded almost identical results with regard to identifying the variables most predictive of ATSSM. Follow-up analyses revealed that prejudice against gays and lesbians was the single best predictor of opposition to same-sex marriage and even accounted for the associations between opposition to same-sex marriage and religiosity, political conservatism, and support of traditional marriage and family. With respect to polygamy, regression analyses revealed that ATSSM was the best predictor of ATPM. Despite the cultural focus on this variable, however, controlling for ATSSM did not reduce the predictive power of critical variables to a non-significant level. Recommendations for challenging opposition to marriage equality are discussed.
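A schematic of the ±1 SD extreme-groups design described above is sketched below. The data are synthetic and the column names are placeholders, not the study's actual instruments or results.

```python
# Extreme-groups design: respondents scoring > 1 SD above the sample mean
# on the attitude scale form one group, those > 1 SD below form the other;
# predictor means are then compared across groups. Synthetic data only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 814
df = pd.DataFrame({
    "atssm": rng.normal(0.0, 1.0, n),      # attitude scale score
    "predictor": rng.normal(0.0, 1.0, n),  # e.g., a prejudice measure
})

mean, sd = df["atssm"].mean(), df["atssm"].std()
pro = df[df["atssm"] > mean + sd]    # > 1 SD above the mean
anti = df[df["atssm"] < mean - sd]   # > 1 SD below the mean

print(f"pro n = {len(pro)}, anti n = {len(anti)}")
print(f"predictor means: pro {pro['predictor'].mean():.2f}, "
      f"anti {anti['predictor'].mean():.2f}")
```

With roughly normal scores, each tail captures about 16% of the sample, consistent in magnitude with the group sizes reported above.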
Doctor of Philosophy (Ph.D.)
College of Sciences
Doctoral Dissertation (Open Access)
Pearte, Catherine, "Young Adults' Attitudes Toward Same-sex Marriage And Polygamy As A Function Of Demographic, Gender, And Personality Variables" (2010). Electronic Theses and Dissertations. 4430.
The Union of Concerned Scientists' Ed Lyman never met a reactor he liked, despite his profession that he is not prejudiced against nuclear power in principle. Are Lyman's concerns about nuclear safety sound? Or is Lyman trying to lead us off the deep end? Is Lyman trying to convince us that a safe reactor is not possible? Take for example the Pebble Bed Modular Reactor, a reactor that is seemingly safe. Unlike Japan's ill-fated GE Mark 1 reactors, if you shut down the coolant system of the PBMR, nothing bad happens. The PBMR is meltdown-proof. Now isn't that a safer reactor? "No way," Lyman tells us:
The PBMR has been promoted as a “meltdown-proof” reactor that would be free of the safety concerns typical of today’s plants. However, while the PBMR does have some attractive safety features, several serious issues remain unresolved. Until they are, it is not possible to support claims that the PBMR design would be significantly safer overall than light-water reactors. You see, there Lyman is ready to rescue us from our nuclear safety illusions. What is wrong with the PBMR is simple:
A second unresolved safety issue concerns the reactor’s graphite coolant and fuel pebbles. When exposed to air, graphite burns at a temperature of 400°C, and the reaction can become self-sustaining at 550°C—well below the typical operating temperature of the PBMR. Graphite also burns in the presence of water. Thus extraordinary measures would be needed to prevent air and water from entering the core. Yet according to one expert, “air ingress cannot be eliminated by design.” Rainer Moormann, a German reactor scientist, argued that
graphite burning caused by a huge air ingress may lead to massive fission product releases into the environment. General Atomics says Lyman is wrong because nuclear-grade graphite does not burn. It is often incorrectly assumed that the combustion behavior of graphite is similar to that of charcoal and coal.
Numerous tests and calculations have shown that it is virtually impossible to burn high-purity, nuclear-grade graphites. Graphite has been heated to white-hot temperatures (~1650°C) without incurring ignition or self-sustained combustion. After removing the heat source, the graphite cooled to room temperature. Unlike nuclear-grade graphite, charcoal and coal burn at rapid rates because:
* They contain high levels of impurities that catalyze the reaction.
* They are very porous, which provides a large internal surface area, resulting in more homogeneous oxidation.
* They generate volatile gases (e.g. methane), which react exothermically to increase temperatures.
* They form a porous ash, which allows oxygen to pass through, but reduces heat losses by conduction and radiation.
* They have lower thermal conductivity and specific heat than graphite.
In fact, because graphite is so resistant to oxidation, it has been identified as a fire extinguishing material for highly reactive metals.
Is this true? The New Scientist published a discussion of the General Atomics claim in its November 4, 1989 edition. The New Scientist investigation pointed out that the graphite in the Windscale fire was impure, while the relatively pure graphite at Chernobyl contributed little to that fire's heat. General Atomics in the past offered a demonstration to skeptics who wanted further convincing of their "graphite does not burn" claim. A block of graphite would be brought out and heated to a red-hot temperature. Then oxygen would be blown over the red-hot graphite, which would not catch fire. Needless to say, Ed Lyman did not attend one of those demonstrations. The New Scientist did not entirely support the General Atomics "graphite does not burn" claim, but the analysis came down on the side of a "graphite burns reluctantly and is not very dangerous" conclusion, pointing to Peter Kroeger's research for support.
The oxidation resistance and heat capacity of graphite serve to mitigate, not exacerbate, the radiological consequences of a hypothetical severe accident that allowed air into the reactor vessel. Similar conclusions were reached after detailed assessments of the Chernobyl event; graphite played little or no role in the progression or consequences of the accident. The red glow observed during the Chernobyl accident was the expected color of luminescence for graphite at 700°C and not a large-scale graphite fire, as some have incorrectly assumed.
Peter Kroeger of Brookhaven National Laboratory used a computer simulation to check on General Atomics' claim. He found that if openings developed at two opposite ends of a graphite reactor containment structure, air could flow through the core, and graphite structures would burn some, but not very much, and certainly not enough to release radioactive materials embedded in the graphite. Kroeger remarked,
Air ingress into the primary loop requires prior depressurization with significant subsequent air inflow. Scenarios that have been considered are, for instance, a primary vessel leak such that during decay heat removal via a main loop or an auxiliary loop, significant amounts of gas can be exchanged between the primary loop and the RB, while the operating loop forces the resulting gas mixture through the core. (It may be hard to conceive significant air ingress and combustible gas discharge from a single break; but only with such a large break or with several separate breaks and with simultaneous forced flow conditions can significant amounts of air be forced through the core.) Order of magnitude computations indicate that natural circulation can only result in about 0.1 to 0.3 kg/s of gas circulation through the core of a typical modular pebble bed reactor. The initial RB air inventory of about 80 kg-mol (even if none were lost during the initial blowdown) can only cause the burning of about 400 kg of graphite. Thus, air ingress consequences under natural circulation conditions appear to be less severe than those under the above forced cooldown scenarios.
Four hundred kilograms? That is less than a thousand pounds, hardly a roaring conflagration.
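Kroeger's 400 kg figure is easy to sanity-check with back-of-the-envelope stoichiometry. The arithmetic below is mine, and it assumes the oxygen burns carbon to carbon monoxide (2C + O2 -> 2CO) rather than all the way to CO2.

```python
# Sanity check of the ~400 kg figure from the 80 kg-mol air inventory.
# Assumes incomplete combustion to CO (2C + O2 -> 2CO); burning to CO2
# instead would consume only about half as much graphite (~200 kg).
air_kmol = 80.0             # reactor-building air inventory (from the quote)
o2_kmol = 0.21 * air_kmol   # oxygen fraction of air, ~16.8 kg-mol
c_kmol = 2.0 * o2_kmol      # two carbons consumed per O2 when forming CO
c_kg = c_kmol * 12.0        # molar mass of carbon: 12 kg per kg-mol

print(f"Graphite consumed: ~{c_kg:.0f} kg")   # ~403 kg, matching the quote
```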
Kroeger found that,
Separate code applications for air ingress with auxiliary loop cooling [34,43,44] generally indicate that fuel temperatures are only raised slightly due to local burning, at most reaching 1200°C for a core with 1000°C design temperature. Thus, fuel failure from excessive temperature is not to be expected. With auxiliary cooling the oxidation stops after 4 to 96 hrs, depending on the assumed air ingress rate and the number of loops operating. The maximum burn-off (averaged over a pebble) ranges from 100 to 350 mg/cm², which represents about 10 to 40% of the total exterior graphite coating of the fueled pebbles. (It should be noted that the higher values are obtained for extremely large assumed air ingress rates, which may not be realistic.)
A further objection to Lyman's (and Moormann's) claim that graphite fires are a serious PBMR safety issue is the composition of the pebbles of pebble bed reactors. The pebbles are complex manufactured objects. Each pebble contains an inner coat of silicon carbide, a nonflammable material that is designed to contain radioactive fission products within the pebble. Any fire on the graphite surface of the pebble would be stopped by the SiC coat, and thus would not lead to a dangerous release of radioactive materials.
Needless to say, Ed Lyman forgot to mention any of Peter Kroeger's research, the General Atomics argument, or other arguments that make his simple "graphite burns" statement less than a serious indictment of pebble bed reactor safety.
Even less is the "graphite burns" statement a serious safety objection to the use of graphite in the core of Molten Salt Reactors. It should be noted that the presence of liquid fluoride salts would be a serious inhibitor of any graphite fire, and in the event of salt drainage from an MSR core, a graphite fire would not be a safety issue, because both fission products and nuclear fuel would drain out of the core along with the coolant salt. Thus even if we reject the General Atomics contention that nuclear-grade graphite does not burn, the "graphite burns" objection does not appear to raise a serious concern about Molten Salt Reactor safety.
"date": "2018-02-21T11:18:56",
"dump": "CC-MAIN-2018-09",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813608.70/warc/CC-MAIN-20180221103712-20180221123712-00456.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9458702206611633,
"score": 2.8125,
"token_count": 1761,
"url": "http://nucleargreen.blogspot.com/2011/03/does-nuclear-grade-graphite-burn.html?showComment=1301778042763"
} |
Implementation of the NBSAP
The Nepal Biodiversity Strategy Implementation Plan (2006-2010), developed in 2006, identifies 13 priority concept projects to be implemented by relevant executing agencies (mostly national) in consultation with the concerned stakeholders. The objectives of the Nepal Biodiversity Strategy Implementation Plan set for the period of 2006-2010 were to: 1) conserve biodiversity of Nepal within and outside protected areas; 2) identify, develop and establish legislative, policy and strategic measures necessary to conserve, sustainably utilise and provide access to and share the benefits of Nepal’s biological resources; 3) conserve endangered species of wildlife; 4) develop legislation, sub-sectoral policies and strategic measures; 5) develop sustainable eco-friendly rural tourism; and 6) domesticate non-timber forest products and explore marketing opportunities for poverty reduction. A thirteen-member Coordination Committee was formed under the chair of the Honourable Minister of Forests and Soil Conservation with representatives from key government ministries, private sector, user groups, civil society, academic institutions and major donors. Five thematic sub-committees (forest, agriculture, sustainable use, genetic resources and bio-security) were also formed to adequately address the issues of different themes related to biodiversity.
Implementation of the Strategy and Plan has improved the conservation and sustainable use of biodiversity by updating and exposing the current state of knowledge, sensitising stakeholders involved in biodiversity conservation, identifying important policy and planning gaps, raising awareness, focusing on priority implementation projects, and providing a framework for the National Biodiversity Coordination Committee through which planning, implementation and the sharing of best practices can take place efficiently and effectively. However, despite some successes, there are considerable gaps in implementation which have led to significant delays in successfully accomplishing the objectives of the Nepal Biodiversity Strategy Implementation Plan.
Nepal is currently in the process of revising its NBSAP and preparing its fifth national report, with the intention to also develop national targets and indicators and integrate the implementation of the NBSAP into the National Development Plan.
Actions taken to achieve the 2020 Aichi Biodiversity Targets
Nepal has established an impressive system of protected areas for the conservation of biodiversity, focusing on species, ecosystems, habitats and biomes. Initiatives from the Government, NGOs and community-based organizations have led to the formation of Forest User Groups for in situ conservation of biodiversity. Therefore, community involvement, buffer zones, leasehold for the poor and private forest programmes have been highly encouraged and implemented throughout the country. So far, over 1.93 million ha of forest are managed under a community-based regime, benefitting over 2.56 million households. Through the revised National Wetland Policy (2013), wetland resources are managed wisely and sustainably with the participation of the local people, including women. A total of 27 Important Bird Areas and 54 Important Plant Areas have been provisionally identified. The Government has imposed restrictions on the export of 12 plant species and one forest product under the Forest Act (1993). Similarly, 27 mammal species, 9 bird species, and 3 reptile species have been given legal protection under the National Parks and Wildlife Conservation Act (1973). The population status of rhinoceros (534), blackbuck (271), crocodile (102), tiger (176), and musk deer has been maintained through effective habitat management and a species-specific conservation action plan.
Several measures to conserve the genetic diversity of crops and livestock are being undertaken under the supervision of the Ministry of Agriculture Development. In situ conservation of crop genetic resources has been jointly initiated by the Nepal Agricultural Research Council, Local Initiatives for Biodiversity Research and Development and Bioversity International. More than 180 tree species are conserved in situ in farmland and seed stands and gene conservation areas are maintained for 3 tree species. The Department of Livestock Services and the National Animal Science Research Institute have jointly identified 25 local livestock breeds. Research has been conducted at phenotypic, chromosome and DNA levels and this process will be continued in regard to other breeds of animals. Moreover, the Genetic Resource Initiative project provided technical inputs for the development of a sui generis system for Plant Variety Protection and Intellectual Property Rights. To avoid or minimize the potential adverse effects that may occur during the movement, transport and use of Living Modified Organisms (LMOs), and to contribute to poverty alleviation though the development and application of biotechnology, the Ministry of Forests and Soil Conservation has framed and approved the National Biosafety Framework (2007). A general list of 166 invasive alien species of Nepal and documentation profiles of the 21 most troublesome plant species have been prepared. The publication of 2 out of 10 volumes of Flora of Nepal was targeted for 2010 under the Darwin Initiative (to date, Volume III covering 21 families from Magnoliaceae to Rosaceae, including 600 species, has been published and another publication is underway). In addition, 64 reports have been published that comprise regional and local flora, as well as fascicles related to particular families.
Monitoring of air quality has begun in the city of Kathmandu. At present, there are six monitoring stations, with information being made available to the public in regard to the level of air pollutants.
Support mechanisms for national implementation (legislation, funding, capacity-building, coordination, mainstreaming, etc.)
A wide array of biodiversity conservation policies, plans and legislative instruments have been formulated and promulgated, providing opportunities to maintain habitats, and/or reduce the population decline of important species. Nepal has signed more than 20 international agreements and obligations, translating many of them into national policies and acts. For instance, the National Agro-biodiversity Policy (2007) addresses the conservation, promotion and utilisation of agro-genetic resources and the rights of the community and state over them. The National Parks and Wildlife Conservation Act (1973) and regulations, such as the National Parks and Wildlife Conservation Regulation (1974), Chitwan National Park Regulation (1974), Himali National Park Regulation (1980), Conservation Area Management Regulation (1996), and Buffer Zones Management Regulation (1996), provide opportunities to conserve biodiversity in the protected areas system. Similarly, the Forest Act (1993) and Forest Regulation (1995) are playing a crucial role in conserving biodiversity beyond the protected areas system at the ecosystem, species and genetic levels.
The Environment Protection Act (1996) has provision to mobilise environmental inspectors for the inspection and monitoring of pollutants and control of pollution. In addition, the Tenth Plan (2002-2007) and Interim Plan (2008-2010) have policy to implement the “Polluter Pays Principle” and introduce a pollution fee. The Government of Nepal has implemented the generic standards for the tolerance limit for industrial (waste water) effluents discharged to inland surface water and public sewers and industry-specific standards (regarding leather, wool processing, fermentation, vegetables ghee and oil, paper and pulp, dairy sugar, cotton textile, soap industries). Effective implementation of these standards will assist in reducing the effects of pollution on biodiversity. The Environment Protection Act (1996) and its Regulation (1997) also oblige the proponent to undertake environment assessment before the implementation of any prescribed project in the buffer zones and conservation areas.
Biodiversity and environment conservation have been integrated into cross-sectoral plans of the Government (e.g. local development plans and programmes, environment assessment and review, environment management plans of infrastructure development projects, Water Resources Strategy, Millennium Development Goals, Poverty Alleviation Fund). Biodiversity conservation programmes are also covered by the media and communication sectors. For instance, the Postal Service Department has been publishing postage stamps related to flora and fauna to raise awareness among the people and communicate biodiversity conservation to the global community. Activities have also been initiated to document and protect traditional knowledge, skills, techniques and practices in collaboration with international and national NGOs. Recently, a country report entitled Forest Genetic Resources of Nepal has been prepared, as a part of the Report on the State of World Forest Genetic Resources, demonstrating national commitment toward the implementation of the Multilateral Environmental Agreements, including the Convention on Biological Diversity.
Mechanisms for monitoring and reviewing implementation
The Ministry of Forests and Soil Conservation, with its five departments (Forest, National Parks and Wildlife Conservation, Plant Resources, Forest Research and Survey, Soil Conservation and Watershed Management) and two divisions (Environment and Monitoring and Evaluation), is primarily responsible for project implementation, monitoring and evaluation. Several programmes and projects have been implemented to monitor major animal species in collaboration with partner organisations, in particular the international NGOs, and to restore and maintain habitats within and outside protected areas. Protected animals of Nepal are also being monitored by means of census-taking. A recent census indicates that the tiger’s population in Nepal has been maintained at 176. However, there is an urgent need to update the lists of other protected and threatened species with information about their respective status and distribution range.
Today is the last day of winter. In many parts of the country, the weather outlook calls for winter and summer to “freeze together,” as we call it, since temperatures will be below freezing tomorrow morning, on the First Day of Summer. According to folklore, the freezing together of summer and winter means that the summer will be a good one.
According to the Old Norse calendar, there are only two seasons in Iceland: summer and winter. The First Day of Summer is a public holiday. On that day, it's an old custom to give your children a summer gift.
Ironically, the First Day of Summer is often a cold one, which once inspired one of our best-known poets, Þórarinn Eldjárn, to write a poem about this day, calling it not Sumardagurinn fyrsti, or First Day of Summer, but 'Sumardagurinn frysti,' meaning Frosty Day of Summer. Many of us have memories of marching in parades on the First Day of Summer, behind a brass band, dressed in hats, mittens and winter coats.
"date": "2018-10-18T00:38:17",
"dump": "CC-MAIN-2018-43",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511365.50/warc/CC-MAIN-20181018001031-20181018022531-00256.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.94050133228302,
"score": 2.734375,
"token_count": 236,
"url": "http://icelandreview.com/news/2016/04/20/forecast-calls-frosty-first-day-summer"
} |
What are the most salient events leading up to the assassination of President Abraham Lincoln?
AN OVERVIEW OF JOHN WILKES BOOTH'S ASSASSINATION OF PRESIDENT ABRAHAM LINCOLN
John Wilkes Booth, born May 10, 1838, was an actor who performed throughout the country in many plays and played the lead in some of William Shakespeare's most famous works. Additionally, he was a racist and Southern sympathizer during the Civil War. He hated Abraham Lincoln, who represented everything Booth was against. Booth blamed Lincoln for all the South's ills. He wanted revenge.
In late summer of 1864 Booth began developing plans to kidnap Lincoln, take him to Richmond (the Confederate capital), and hold him in return for Confederate prisoners of war. By January, 1865, Booth had organized a group of co-conspirators that included Samuel Arnold, Michael O'Laughlen, John Surratt, Lewis Powell (also called Lewis Paine or Payne), George Atzerodt, and David Herold. Additionally, Booth met with Dr. Samuel Mudd both in Maryland (where Mudd lived) and Washington, and he began using Mary Surratt's boardinghouse to meet with his co-conspirators.
On March 17, 1865, the group planned to capture Lincoln who was scheduled to attend a play at a hospital located on the outskirts of Washington. However, the President changed plans and remained in the capital. Thus, Booth's plot to kidnap Lincoln failed.
On April 9, 1865, General Robert E. Lee surrendered to General Ulysses S. Grant at Appomattox. Two days later Lincoln spoke from the White House to a crowd gathered outside. Booth was present as Lincoln suggested in his speech that voting rights be granted to certain blacks. Infuriated, Booth's plans now turned in the direction of assassination.
On the morning of Friday, April 14, Booth dropped by Ford's Theatre and learned that the President and General Grant were planning to attend the evening performance of Our ...
This solution identifies and discusses the most salient events leading up to John Wilkes Booth's assassination of President Abraham Lincoln. | <urn:uuid:6cfd3d93-54c4-4c4b-90ab-9b91fb4367bc> | {
"date": "2017-08-21T17:35:33",
"dump": "CC-MAIN-2017-34",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109470.15/warc/CC-MAIN-20170821172333-20170821192333-00256.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9746565818786621,
"score": 3.453125,
"token_count": 440,
"url": "https://brainmass.com/history/north-american-history/assassination-of-president-abraham-lincoln-18892"
} |
NASA is inviting the public and interested organizations to review and comment on the Draft Environmental Impact Statement (DEIS) for the agency’s proposed Mars 2020 mission. The comment period runs through July 21.
During the comment period, NASA will host an online public meeting from 1-3 p.m. EDT Thursday, June 26, at:
The meeting site will be accessible to participants at 12:45 p.m. EDT. The meeting will include briefings about the proposed mission, its power source options, and the findings of the DEIS. A question-and-answer session and an open period for the public to submit live written comments will follow. Advance registration for the meeting is not required.
The DEIS addresses the potential environmental impacts associated with carrying out the Mars 2020 mission, a continuation of NASA’s in-depth exploration of the planet. The mission would include a mobile science rover based closely on the design of the Curiosity rover, which was launched in November 2011 and is operating successfully on Mars.
The mission is planned to launch in July or August 2020 from Florida on an expendable launch vehicle.
NASA will consider all comments received in the development of its Mars 2020 Final Environmental Impact Statement; the comments, and responses to them, will be included in the final document.
The DEIS, background material on the proposed mission, and instructions on how to submit comments on the DEIS are available at:
After the conclusion of the virtual public meeting, an on-demand replay of the event also will be available at the above link.
Additional information on NASA’s National Environmental Policy Act process and the proposed Mars 2020 mission can be found at:
Jet Propulsion Laboratory, Pasadena, Calif. | <urn:uuid:a21e5228-f6b9-4353-b3ed-b24f25367826> | {
"date": "2016-12-08T09:51:18",
"dump": "CC-MAIN-2016-50",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542520.47/warc/CC-MAIN-20161202170902-00312-ip-10-31-129-80.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9220276474952698,
"score": 2.578125,
"token_count": 352,
"url": "http://www.lpi.usra.edu/planetary_news/2014/06/22/nasa-invites-public-comment-on-mars-2020-draft-environmental-impact-statement/"
} |
NEW YORK (JTA) — Few prayers are as well known to Jews as Ashamnu (“We have sinned …”) and Al Chet (“For the sin …”), the twin confessions of Yom Kippur. Belief in human sinfulness is more central to Judaism than we think. Sin may not be “original,” as it is in Christianity — inherited from Adam, that is, as a sort of genetic endowment ever after. But it is at least primal: It is there, patent, indelible and unavoidable. We may not be utterly depraved – the teaching with which American Protestantism grew up – but we are indeed sinners.
Talmudic practice, therefore, was to say a confession every single day, a precedent that continued into the Middle Ages and still survives in Sephardi synagogues. Ashkenazi Jews also announce that sinfulness daily in a part of the service called Tachanun (“supplications”), which includes a line from Avinu Malkeinu, “Our Father, Our King, be gracious and answer us, for we have no deeds.”
That translation misses the theological point, however. Classical Christianity believed that we are too sinful to be of any merit on our own. We depend, therefore, on God’s “grace,” the love God gives even though we do not deserve it. Jews, by contrast, preach the value of good deeds, the mitzvot. But Avinu Malkeinu hedges that bet. At least in Tachanun, and certainly from Rosh Hashanah to Yom Kippur, we proclaim “we have no deeds” and rely on God’s “gracious” love instead.
Our two Yom Kippur confessions appeared in “Seder Rav Amram,” the first comprehensive Jewish prayer book (c. 860), and became standard thereafter.
But do Jews really believe we are as sinful as the confessions imply? Nineteenth-century Jews, recently emancipated from medieval ghettos, doubted it. For well more than a century, philosophers had preached the primacy of reason as the cognitive capacity that makes all human beings equal. These two influences, political equality and the fresh air of reason, paved the way for a century when all things seemed possible. And indeed, scientific advances and the industrial revolution did seem to promise an end to human suffering just around the corner.
It wasn’t just Jews who felt that way. For Europeans in general, the notion of human sin, whether original (for Christians) or primal (for Jews), lost plausibility. Far from bemoaning human depravity, it seemed, religion should celebrate human nobility. Enlightenment rabbis began paring away Yom Kippur’s heavy accent on sin.
From then until now, new liturgies (usually Reform and Reconstructionist) have shortened the confessions, translated them to lessen their overall impact and created new ones that addressed more obvious shortcomings of human society. But traditionalist liturgies too tried to underscore human promise and explain away the aspects of the confessions that no one believed anymore. Al Chet “is an enumeration of all the sins and errors known to mankind,” said Samson Raphael Hirsch, the founder of modern Orthodoxy. It is not as if we, personally, have done them, but some Jew somewhere has, and as the Talmud says, “All Israelites are responsible for one another.”
Some would say today that as much as the 19th century revealed the human capacity for progress, the 20th and 21st centuries have demonstrated the very opposite. Perhaps we really are as sinful as the traditional liturgy says. Religious “progressives” respond by saying that we suffer only from a failure of nerve and that more than ever, Yom Kippur should reaffirm the liberal faith in human dignity, nobility and virtue. At stake on Yom Kippur this year is not just one confession rather than another, but our faith in humankind and the kind of world we think we are still capable of building.
I am not yet ready to throw in the Enlightenment towel. Back in 1824, Rabbi Gotthold Salomon of Hamburg gave a sermon in which he said, “All of us feel, to one extent or other, that, in spirit and soul, we belong to a higher order than the ephemeral. We feel that we are human in the most noble sense of the word, that we are closely connected to the Father of all existence, and that we could have no higher purpose than to show ourselves worthy of this relationship.”
Those words ring true for us today. We have something to gain from the Enlightenment’s belief that acting for human betterment is the noble thing to do, and that acting nobly is still possible.
Rabbi Lawrence A. Hoffman, the Barbara and Stephen Friedman professor of liturgy, worship and ritual at Hebrew Union College-Jewish Institute of Religion in New York, is the author most recently of “We Have Sinned: Sin and Confession in Judaism — Ashamnu and Al Chet” (Jewish Lights). | <urn:uuid:ed68b554-6bd6-47a8-9a45-012bcb1ed44a> | {
"date": "2015-07-29T23:18:44",
"dump": "CC-MAIN-2015-32",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986646.29/warc/CC-MAIN-20150728002306-00196-ip-10-236-191-2.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.955439031124115,
"score": 2.625,
"token_count": 1096,
"url": "http://www.jta.org/2012/09/08/life-religion/confessing-our-sins-on-yom-kippur-and-remembering-to-act-nobly"
} |
Desegregation of Gainesville high schools topic of new books
Published: Friday, February 15, 2013 at 1:33 p.m.
Last Modified: Friday, February 15, 2013 at 1:33 p.m.
Every Gainesville student learns about Brown v. Board of Education, but not every student knows how that case played out in his or her hometown. Three Gainesville authors aim to change that.
Buy the books
“Lincoln High School: Its History and Legacy” ($20) can be ordered via email from Albert White at [email protected]. Proceeds benefit the Lincoln High School Alumni Association.
“Beyond Bravery” can be purchased on Amazon.com for Kindle for $5.99.
"Lincoln High School: Its History and Legacy," by Albert White and Kevin McCarthy, is a comprehensive history chronicling Florida's first accredited all-black public high school from its beginnings in 1923 to its federally mandated closing in 1970.
"Beyond Bravery," by LaVon W. Bracy, is a first-person account of Bracy's experience as one of Gainesville High School's first black students in 1964.
Together these two works tell a part of Gainesville history seldom taught in schools.
White attended Lincoln High School from 1956 to 1962. As student council vice president, he was the football team's varsity quarterback and played on the tennis, basketball, track and swim teams. Since most local entertainment facilities were closed to blacks at the time, the junior-senior high school was the cultural hub of Gainesville's African-American community, he said.
"Everything that occurred in our community of any significance happened at Lincoln — our concerts, our plays, our athletic events," White said. "When football games (were held), the whole black community attended," he said.
In response to a federal mandate to desegregate public schools, the Alachua County School Board voted in 1969 to convert Lincoln into a vocational center and to build the new and integrated Eastside High School.
Hundreds of Lincoln High students stayed home to boycott the school's closing. More than 1,000 students marched from Lincoln through UF's Plaza of the Americas to the school board office carrying signs reading, "No Lincoln, No Peace" and "Give Us Lincoln Or Give Us Death."
"The closing took away motivation, hope and opportunity," said White, who was attending college in North Carolina at the time. "When you snatch away the nucleus of a community, you get chaos in some ways."
White's younger brother, Scherwin Henry, was among the Lincoln High students to transfer to Gainesville High School in February 1970 — just months before graduation.
"Lincoln was the heartbeat of the community," said Henry, who is a Gainesville City Commissioner. "What happens when a heart stops beating? It did take the life out of us when they closed Lincoln."
Because he and his Lincoln classmates transferred in the middle of the school year, Henry said they weren't allowed to join sports teams or the band. Lincoln senior traditions, like the Terriers Bark talent show and the crowning of Ms. Lincoln, ended when students transferred to GHS.
"We had nothing to feel a part of. We were just existing," Henry said."We wouldn't get a chance to do what we desired and what we'd been on the path to do."
LaVon Bracy attended Lincoln High School until 1964, when her father, then president of the local NAACP chapter, the Rev. T.A. Wright, enrolled her in Gainesville High School, making her one of the school's first black students.
Bracy's book, "Beyond Bravery," chronicles her experiences facing racism from her peers.
When she took her seat on the first day of school, her classmates moved to the far side of the room, she remembers. Every day after that, classmates would place tacks, dead roaches, rats and snakes on her seat, she said. The cafeteria and the library would empty when she entered.
"They treated me like I had a contagious disease," said Bracy, who now lives in Orlando where she works with voter registration drives. "I didn't have any friends or anyone to talk to."
Bracy said she left Gainesville High School angry and hurt, and didn't return until 2004, when she was invited to speak to Gainesville High School students and teachers about her experience.
"The real story needed to be told from the perspective of someone who experienced it," she said. "My responsibility is to make sure my voice is heard and make sure others know they have a voice."
Bracy said she hopes her book and her life's work show readers the struggles she and her peers went through to achieve the equality enjoyed today.
"I vowed I'd never be in a position where I couldn't speak again," she said. "I vowed to become an advocate for justice and equality."
Kevin McCarthy, who taught linguistics and writing in the University of Florida's English department for 37 years before retiring in 2005, said writing "Lincoln High School: Its History and Legacy" with White was a labor of love.
"There were so many people in town who were so helpful with information, yearbooks, photographs and documents. It wasn't hard to write because there was so much available," said McCarthy, who met White in 2009 when he was working on a biography of Judge Stephan Mickle, the UF College of Law's second black graduate.
McCarthy said he noticed then how much information White had on Lincoln High School and encouraged him to compile it into a volume. The research process took a year. In addition to sorting through White's own archives and those available at UF and the Matheson Museum, McCarthy also visited the former home and gravestone of Lincoln's first principal, A. Quinn Jones, to take photographs.
"I find that illustrations really make a text come alive," he said. "And most people have never seen these photographs before."
The history of Lincoln High School tells a larger story about the history of educational opportunities for blacks in the South.
"The school really did prosper and put Gainesville on the map in terms of African-American education," he said. "I think this book will give everyone involved with Lincoln a good sense of pride."
Today, White and Henry are involved in a movement to reinstate Lincoln Middle School as a high school.
"It will energize the community — white and black," Henry said. "And it will give the African-American community a great sense of pride."
Black history wasn't taught when White was in school, which is why it's important this story is told, he said.
"We want to make sure our kids learn that this school existed in the community and that it was a success," White said.
"That's the message here. We can overcome any challenges we may have" he said. "We can overcome any notions that we are lacking in the ability or wherewithal to learn and be productive."
"date": "2016-10-28T00:24:16",
"dump": "CC-MAIN-2016-44",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721405.66/warc/CC-MAIN-20161020183841-00294-ip-10-171-6-4.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9791060090065002,
"score": 2.53125,
"token_count": 1496,
"url": "http://www.gatorsports.com/article/20130215/ARTICLES/130219718/0/archive"
} |
Word Explorer entry: blackout
part of speech: noun
definition 1: the loss or hiding of all the lights of a city or region. Cities may have blackouts because of power failures.
definition 2: a loss of consciousness. Example: An old head injury causes him to suffer blackouts from time to time.
"date": "2016-04-30T07:30:52",
"dump": "CC-MAIN-2016-18",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111620.85/warc/CC-MAIN-20160428161511-00060-ip-10-239-7-51.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.7997666597366333,
"score": 2.78125,
"token_count": 63,
"url": "http://www.wordsmyth.net/?level=2&rid=4274"
} |
let the wise listen and add to their learning, and let the discerning get guidance-- for understanding proverbs and parables, the sayings and riddles of the wise. Proverbs 1:5-6
The quotes on this page are a collection of observations about the Jews made by non-Jews.
"Some people like the Jews, and some do not. But no thoughtful man can deny the fact that they are, beyond any question, the most formidable and the most remarkable race which has appeared in the world." - - . Winston Churchill
"The Jew is that sacred being who has brought down from heaven the everlasting fire, and has illumined with it the entire world. He is the religious source, spring, and fountain out of which all the rest of the peoples have drawn their beliefs and their religions." - - Leo Tolstoy
"It was in vain that we locked them up for several hundred years behind the walls of the Ghetto. No sooner were their prison gates unbarred than they easily caught up with us, even on those paths which we opened up without their aid." - - A. A. Leroy Beaulieu, French publicist, 1842
"The Jew gave us the Outside and the Inside - our outlook and our inner life. We can hardly get up in the morning or cross the street without being Jewish. We dream Jewish dreams and hope Jewish hopes. Most of our best words, in fact - new, adventure, surprise, unique, individual, person, vocation, time, history, future, freedom, progress, spirit, faith, hope, justice - are the gifts of the Jews." - - Thomas Cahill, Irish Author
"One of the gifts of the Jewish culture to Christianity is that it has taught Christians to think like Jews, and any modern man who has not learned to think as though he were a Jew can hardly be said to have learned to think at all." - - William Rees-Mogg, former Editor-in-Chief for The Times of London and a member of the House of Lords
"It is certain that in certain parts of the world we can see a peculiar people, separated from the other peoples of the world and this is called the Jewish people...
This people is not only of remarkable antiquity but has also lasted for a singularly long time... For whereas the people of Greece and Italy, of Sparta, Athens and Rome and others who came so much later have perished so long ago, these still exist, despite the efforts of so many powerful kings who have tried a hundred times to wipe them out, as their historians testify, and as can easily be judged by the natural order of things over such a long spell of years. They have always been preserved, however, and their preservation was foretold... My encounter with this people amazes me..." - - Blaise Pascal, French Mathematician
"The Jewish vision became the prototype for many similar grand designs for humanity, both divine and man made The Jews, therefore, stand at the center of the perennial attempt to give human life the dignity of a purpose." - - Paul Johnson, American Historian
"As long as the world lasts, all who want to make progress in righteousness will come to Israel for inspiration as to the people who had the sense for righteousness most glowing and strongest." - - Matthew Arnold, British poet and critic
"Indeed it is difficult for all other nations of the world to live in the presence of the Jews. It is irritating and most uncomfortable. The Jews embarrass the world as they have done things which are beyond the imaginable. They have become moral strangers since the day their forefather, Abraham, introduced the world to high ethical standards and to the fear of Heaven. They brought the world the Ten Commandments, which many nations prefer to defy. They violated the rules of history by staying alive, totally at odds with common sense and historical evidence. They outlived all their former enemies, including vast empires such as the Romans and the Greeks. They angered the world with their return to their homeland after 2000 years of exile and after the murder of six million of their brothers and sisters.
They aggravated mankind by building, in the wink of an eye, a democratic State which others were not able to create in even hundreds of years. They built living monuments such as the duty to be holy and the privilege to serve one's fellow men.
They had their hands in every human progressive endeavor, whether in science, medicine, psychology or any other discipline, while totally out of proportion to their actual numbers. They gave the world the Bible and even their "savior."
Jews taught the world not to accept the world as it is, but to transform it, yet only a few nations wanted to listen. Moreover, the Jews introduced the world to one God, yet only a minority wanted to draw the moral consequences. So the nations of the world realize that they would have been lost without the Jews...
And while their subconscious tries to remind them of how much of Western civilization is framed in terms of concepts first articulated by the Jews, they do anything to suppress it.
They deny that Jews remind them of a higher purpose of life and the need to be honorable, and do anything to escape its consequences. It is simply too much to handle for them, too embarrassing to admit, and above all, too difficult to live by.
So the nations of the world decided once again to go out of 'their' way in order to find a stick to hit the Jews. The goal: to prove that Jews are as immoral and guilty of massacre and genocide as some of them are.
All this in order to hide and justify their own failure even to protest when six million Jews were brought to the slaughterhouses of Auschwitz and Dachau; so as to wipe out the moral conscience of which the Jews remind them, and they found a stick.
Nothing could be more gratifying for them than to find the Jews in a struggle with another people (who are completely terrorized by their own leaders) against whom the Jews, against their best wishes, have to defend themselves in order to survive. With great satisfaction, the world allows and initiates the rewriting of history so as to fuel the rage of yet another people against the Jews. This in spite of the fact that the nations understand very well that peace between the parties could have come a long time ago, if only the Jews would have had a fair chance. Instead, they happily jumped on the wagon of hate so as to justify their jealousy of the Jews and their incompetence to deal with their own moral issues.
When Jews look at the bizarre play taking place in The Hague, they can only smile as this artificial game once more proves how the world paradoxically admits the Jews' uniqueness. It is in their need to undermine the Jews that they actually raise them."
"The study of the history of Europe during the past centuries teaches us one uniform lesson: That the nations which received and in any way dealt fairly and mercifully with the Jew have prospered; and that the nations that have tortured and oppressed them have written out their own curse." - - Olive Schreiner, South African novelist and social activist
"If there is any honor in all the world that I should like, it would be to be an honorary Jewish citizen." - - A.L Rowse, authority on Shakespeare
The Jews
How odd
Of God
To choose
The Jews. - - William Norman Ewer

But not so odd
As those who choose
A Jewish God
But spurn the Jews. - - T E Brown
"date": "2017-04-29T23:15:18",
"dump": "CC-MAIN-2017-17",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123632.58/warc/CC-MAIN-20170423031203-00355-ip-10-145-167-34.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9732866287231445,
"score": 2.71875,
"token_count": 1547,
"url": "http://wildolive.co.uk/quotes%20about%20jews.htm"
} |
Pollution can make drug resistance worse
Antimicrobial resistance: A recent study showing that environmental contaminants can cause microbes to become resistant to important medicines is a "wake-up call", say Australian microbiologists.
The comments follow a report showing that resistance to anti-parasite drugs can develop in mice exposed to arsenic in their drinking water.
But the findings have much broader significance and could be a warning that triclosan, used in many antibacterial products, could also cause a similar problem, says Dr Stuart Ralph, of the University of Melbourne.
"The more things that we pollute the environment with [these products], the more chance there is of developing cross resistance," he says.
He and colleague, Professor Malcolm McConville, comment on the new findings in this week's Proceedings of the National Academies of Science (PNAS).
Arsenic and resistance
At the end of the 20th century, India's Bihar state suffered massive arsenic contamination of its drinking water. In recent years, researchers have noticed the development there of widespread resistance to the drugs used to treat visceral leishmaniasis - drugs with a structure similar to arsenic.
Scottish researchers previously proposed that the resistance resulted from exposure of the leishmania parasite, which causes the disease, to arsenic.
In a recent issue of PNAS, the team reported evidence to back their claim.
They put arsenic in the drinking water of mice infected with leishmania and found that the animals developed much higher levels of resistant parasites, compared to control mice who drank uncontaminated water.
"It tells them that probably people in India are dying from failed treatment of leishmaniasis (known in Hindi as kala azar) because they are already getting something that's related to the drug in their drinking water. So the parasites are already resistant to the drug before they've ever seen it," says Ralph.
"As far as I'm aware this is the first time someone's shown that environmental contamination has led to treatment failure with a drug."
Arsenic contamination in water is a problem in some areas of Peru where there is also resistance to antimonial drugs, says Ralph.
Ralph says the arsenic findings are relevant to concerns about the development of antibiotic-resistant bacteria.
"The spread of antimicrobial resistance is actually much faster in bacteria than it is in parasites," he says.
Ralph says one chemical of concern is triclosan, which is an antibacterial washed into the environment from its use in everyday items including handwashes, toothpastes, lunchboxes and cling wrap.
He says both triclosan and the anti-tuberculosis drug isoniazid kill microbes in the same way. Research has shown that if you grow bacteria related to tuberculosis in the lab with triclosan, resistance to both the antibacterial and isoniazid develops.
While it has yet to be shown that human exposure to triclosan leads to isoniazid resistance, Ralph says it's "quite plausible and may be happening without us even knowing about it".
He says while scientists debate whether triclosan can lead to cross resistance to drugs like isoniazid, the latest research is a "wake-up call" to suggest that it is quite possible.
Ralph says the findings should also be taken note of by those designing new antimicrobial drugs.
"We ought to ask if the compound being designed is similar to an environmental contaminant," he says. | <urn:uuid:ae4e4bc1-ce4e-46e8-969c-1968fd67720e> | {
"date": "2016-02-13T08:20:48",
"dump": "CC-MAIN-2016-07",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701166222.10/warc/CC-MAIN-20160205193926-00136-ip-10-236-182-209.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9550621509552002,
"score": 3.03125,
"token_count": 714,
"url": "http://www.abc.net.au/science/articles/2013/11/12/3886175.htm?site=science/tricks"
} |
In order to conserve biodiversity, it is essential to know what species are present in a country, province or area, and exactly where they occur. It is also crucial to know the name of species because this allows access to information about whether the species is threatened, rare, or of special cultural or biological significance. For many animals, only a specialist can provide this information. These specialists are called taxonomists. They not only identify and describe new species that they discover but also go on expeditions to survey areas that we know little about. Many taxonomists are also biogeographers: they plot the distribution of different species, identify patterns of distribution and the processes that determine these distributions (e.g. temperature, altitude and vegetation) using GIS, and identify areas of special importance for conservation. Taxonomists and biogeographers often work in museums, but they can also be employed in conservation agencies and universities. | <urn:uuid:d74bbd9e-36bf-448f-87ac-95f1e8084c1f> | {
"date": "2017-07-25T16:49:53",
"dump": "CC-MAIN-2017-30",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425339.22/warc/CC-MAIN-20170725162520-20170725182520-00416.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9432458281517029,
"score": 3.9375,
"token_count": 185,
"url": "http://zssa.co.za/?page_id=236"
} |
Underimmunization Among Children: Effects of Vaccine Safety Concerns on Immunization Status
Objective. To examine the attitudes, beliefs, and behaviors of parents whose children were underimmunized with respect to ≥2 vaccines that have recently received negative attention, compared with parents whose children were fully immunized with respect to the recommended vaccines.
Design. Case-control study.
Setting. A sample of households that participated in the National Immunization Survey were recontacted in 2001.
Main Outcome Measure. Vaccination status was assessed. Case subjects were underimmunized with respect to ≥2 of 3 vaccines (diphtheria-tetanus-pertussis or diphtheria-tetanus-acellular pertussis, hepatitis B, or measles-containing vaccines), and control subjects were fully immunized.
Results. The response rate was 52.1% (2315 of 4440 subjects). Compared with control households, case households were more likely to make $0 to $30 000 (adjusted odds ratio [OR]: 2.7; 95% confidence interval [CI]: 1.5–4.6) than at least $75 000, to have ≥2 providers (OR: 2.0; 95% CI: 1.3–3.1) than 1, and to have ≥4 children (OR: 3.1; 95% CI: 1.5–6.3) than 1 child. With control for demographic and medical care factors, case subjects were more likely than control subjects to not want a new infant to receive all shots (OR: 3.8; 95% CI: 1.5–9.8), to score vaccines as unsafe or somewhat safe (OR: 2.0; 95% CI: 1.2–3.4), and to ask the doctor or nurse not to give the child a vaccine for reasons other than illness (OR: 2.7; 95% CI: 1.2–6.1). Among case subjects, 14.8% of underimmunization was attributable to parental attitudes, beliefs, and behaviors.
Conclusions. Attitudes, beliefs, and behaviors indicative of vaccine safety concerns contribute substantially to underimmunization in the United States. Although concerns were significantly more common among parents of underimmunized children, many parents of fully immunized children demonstrated similar attitudes, beliefs, and behaviors, suggesting a risk to the currently high vaccination levels. Efforts to maintain and improve immunization coverage need to target those with attitudes/beliefs/behaviors indicative of vaccine safety concerns, as well as those with socioeconomic and health care access problems.
Immunizations have reduced the incidence of vaccine-preventable disease by >95% for every pediatric vaccine recommended for routine use before 1990.1 As the number of immunizations has increased, however, reports of postimmunization adverse events, both vaccine-related and coincidental, have increased. This increase, combined with the decrease in the incidence of vaccine-preventable diseases, has resulted in an increased focus on vaccine safety.2 Some have linked vaccinations with acute and chronic illnesses with no known causes, eg, autism and measles-mumps-rubella (MMR) vaccine,3 multiple sclerosis and hepatitis B vaccine,4 and sudden infant death syndrome and diphtheria-tetanus-pertussis (DTP) vaccine.5 Although current scientific evidence does not support associations between vaccines and these conditions,6 such hypotheses continue to circulate.2
Although immunization coverage in the United States is high, concerns about vaccine safety may adversely affect parents’ decisions to immunize their children. This can result in decreased coverage and disease outbreaks.7,8 In recent years, most media attention on adverse events after routine childhood vaccination has focused on DTP or diphtheria-tetanus-acellular pertussis (DTaP), hepatitis B, and MMR vaccines. In some European countries and in Japan, general concern regarding whole-cell pertussis vaccine safety resulted in substantially lower coverage and outbreaks of disease.7 In France, 3 people with multiple sclerosis received damage payments from the government because of the purported association with the hepatitis B vaccine.9 Perhaps the most highly publicized debate involves the hypothesized associations between the MMR vaccine and inflammatory bowel disease and autism.3 The purpose of the present study was to examine vaccine safety-related attitudes, beliefs, and behaviors of parents whose children are underimmunized with respect to ≥2 of these high-profile vaccines (DTP/DTaP vaccine, hepatitis B vaccine, and measles-containing vaccine [MCV]), compared with parents whose children are fully immunized with all recommended vaccines.
The National Immunization Survey (NIS) is conducted by the Centers for Disease Control and Prevention to obtain accurate national and state-specific estimates of vaccination coverage. The NIS samples children 19 to 35 months of age each quarter, using list-assisted, random-digit dialing. A parent/guardian is interviewed to determine demographic and socioeconomic information. At the end of the interview, consent to contact all vaccination providers for the child is requested. If consent is obtained, then mail surveys are sent to the child’s vaccination providers, who obtain the child’s vaccination history from their records. The design of the NIS has been described elsewhere.10
NIS Survey Module
In the NIS Knowledge, Attitudes, and Practices survey, case and control subjects were randomly sampled from 16 498 NIS-participating children with adequate provider-reported immunization data, from January to December 2001.
Case and Control Subjects
Case subjects were defined as children who were underimmunized with respect to ≥2 of DTP/DTaP vaccine, hepatitis B vaccine, and/or MCV, defined as <3 DTP/DTaP vaccine doses, <3 hepatitis B vaccine doses, and 0 MCV doses. Control subjects were children who were fully immunized for their age with respect to all recommended childhood vaccines. Children may be missing individual doses of vaccines for a variety of reasons attributable to vaccine availability. We restricted the analysis to children missing doses of ≥2 high-profile vaccines to increase the likelihood that we were studying parents who had made a purposeful decision not to receive the vaccines. The households of case and control children were contacted by telephone, generally 3 to 9 months after their initial NIS interviews. At least 10 attempts were made to contact selected households with calls made at various times of the day and week. Multiple strategies were used to obtain contact information for families who had moved since their initial NIS interviews. After informed consent was obtained, parents/guardians were asked questions about their attitudes, beliefs, and behaviors regarding vaccine safety and their sources of information about immunizations. To ensure that questions appropriately addressed key concepts and that respondents interpreted survey questions in a standard manner, the draft questionnaire was reviewed by an expert panel and underwent cognitive testing by volunteers through the National Center for Health Statistics Questionnaire Design Research Laboratory.
Demographic Characteristics and Attitude, Belief, and Behavioral Risk Factors
Demographic characteristics and parental attitude, belief, and behavioral risk factors were used to predict the case/control status of each child. Demographic characteristics of the child, demographic and socioeconomic characteristics of the mother, and information about the household were collected in the NIS interview. We obtained information on potential risk factors, including attitudes, beliefs, and behaviors indicative of vaccine safety concerns, and sources of information about immunizations from the NIS Knowledge, Attitudes, and Practices survey. Five key questions were asked, as follows: “If you had another infant today, would you want him/her to get all the recommended immunizations?” “How safe do you think immunizations are for children?” “Have you ever asked the doctor or nurse not to give your child an immunization for a reason other than illness?” “Were there any immunizations you didn’t want to get for your child but did so because they were required by law?” “Do you believe that minor side effects occur with immunizations always, often, sometimes, rarely, or never?” The complete questionnaire is available on request.
Responses to the question, “How safe do you think immunizations are for children?” were dichotomized from an 11-point scale (0 = very unsafe; 10 = very safe) to identify those who believed vaccines were safe (scores of 8–10) versus those who believed vaccines were somewhat safe or unsafe (scores of 0–7). Responses to the question, “Do you believe that minor side effects occur with immunizations always, often, sometimes, rarely, or never?” were dichotomized into always/often and sometimes/rarely/never. For purposes of analysis, race/ethnicity was categorized as non-Hispanic white, African American, Hispanic, or other.
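To make the recoding concrete, here is a minimal sketch in Python/pandas of how such dichotomization might be done. The column names and example values are illustrative assumptions, not the NIS codebook.

```python
import pandas as pd

# Hypothetical respondent data; column names are illustrative, not from the NIS.
df = pd.DataFrame({
    "safety_score": [10, 7, 8, 3, 9],  # 0 = very unsafe, 10 = very safe
    "side_effects": ["always", "sometimes", "often", "rarely", "never"],
})

# Scores of 8-10 count as "safe"; scores of 0-7 as "somewhat safe or unsafe".
df["believes_safe"] = df["safety_score"] >= 8

# Collapse the 5-level side-effect item into always/often vs sometimes/rarely/never.
df["side_effects_frequent"] = df["side_effects"].isin(["always", "often"])

print(df)
```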
We analyzed case/control status according to sociodemographic characteristics by using the χ2 test. We included variables found to be significant in bivariate analyses in a logistic regression model and used backward elimination to determine sociodemographic variables that predicted case/control status. Subsequently, we placed each potential attitude, belief, and behavioral risk factor in a logistic regression model individually, controlling for the final predictive sociodemographic variables. We included significant risk factors (P < .05) in a logistic regression model, controlling for the predictive sociodemographic characteristics, and used backward elimination to determine the final model, consisting of demographic variables and risk factors. The criterion for keeping a variable in the model was P < .05. We also used logistic regression analyses to determine the sociodemographic variables associated with each of the significant attitude/belief/behavioral risk factors. The sociodemographic variables included race/ethnicity, mother’s age, education, marital status, and income, provider type, and number of providers.
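The model-building procedure described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' code: the study used SUDAAN to account for the survey design and weights, which this unweighted statsmodels sketch ignores, and the predictor names and synthetic data are invented for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(y, X, alpha=0.05):
    # Repeatedly fit a logistic model, dropping the predictor with the largest
    # p-value, until every remaining predictor satisfies p < alpha.
    X = sm.add_constant(X)
    while True:
        model = sm.Logit(y, X).fit(disp=0)
        pvals = model.pvalues.drop("const")  # never drop the intercept
        if pvals.empty or pvals.max() < alpha:
            return model
        X = X.drop(columns=[pvals.idxmax()])

# Synthetic illustration; predictor names echo the paper's candidate variables.
rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "low_income": rng.integers(0, 2, n),
    "two_plus_providers": rng.integers(0, 2, n),
    "four_plus_children": rng.integers(0, 2, n),
})
log_odds = -2 + 1.0 * X["low_income"] + 0.7 * X["two_plus_providers"]
y = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

final_model = backward_eliminate(y, X)
print(np.exp(final_model.params))  # adjusted odds ratios for retained predictors
```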
Survey weights that accounted for the probability of selecting a household with a child 19 to 35 months of age were developed. The weights were adjusted to account for nonresponses and inability to obtain provider-reported vaccination histories. All analyses other than those presented in Table 1 used weighted data. SUDAAN (version 8)11 was used for all analyses.
We calculated the percentage of children who were underimmunized for each of the 8 combinations of the 3 risk factors in the final logistic regression model. The comparison group was defined as all children whose parents reported wanting a new infant to receive all recommended immunizations, believed immunizations were safe, and had never asked a doctor or nurse not to give the child a vaccine for reasons other than illness. The attributable risk for a specific combination of risk factors was defined as ([% underimmunized in the combination] − [% underimmunized in the comparison group])/[% underimmunized in the combination], ie, 1 − the ratio of the comparison group's underimmunization rate to the combination's rate. The estimated number of excess cases for a combination was determined by multiplying the number of children underimmunized in the combination by the attributable risk for that combination.12
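A small worked example of this attributable-risk arithmetic, using hypothetical rates rather than the study's Table 4 values:

```python
def attributable_risk(p_combo, p_reference):
    # AR = (p_combo - p_reference) / p_combo, i.e. 1 - p_reference / p_combo.
    return (p_combo - p_reference) / p_combo

# Hypothetical underimmunization rates (NOT values from the paper):
p_combo = 0.30        # among children with a given risk-factor combination
p_reference = 0.12    # among the comparison group
n_under_combo = 1000  # underimmunized children in that combination (hypothetical)

ar = attributable_risk(p_combo, p_reference)
excess = n_under_combo * ar

print(f"attributable risk = {ar:.2f}")                  # 0.60
print(f"excess underimmunized children = {excess:.0f}")  # 600
```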
Overall, 2315 interviews were completed, among 4440 eligible children sampled (52.1%). Of the 2315 respondents, 13 were excluded from analyses because of misclassification of the child’s vaccination status. The response rate for the control subjects was 54.9%, and that for the case subjects was 47.6%. The primary reason for nonresponse was an inability to locate sampled households that participated in the NIS; refusals or inability to complete a full interview were uncommon.
Case Subjects and Case Description
Of the 2302 eligible respondents, 825 were underimmunized with respect to a single vaccine and were not included in this study. A total of 1015 (weighted estimate for the US birth cohort: 3 185 682) were control subjects (fully immunized), and 462 (weighted estimate: 289 463) were case subjects (underimmunized with respect to ≥2 of the specified vaccines). Table 1 shows the combinations of vaccines missed by case subjects.
Several demographic characteristics differed between case subjects and control subjects in bivariate analyses (Table 2). The final multivariate logistic regression model showed that 3 demographic variables significantly predicted case/control status, namely, income, number of vaccination providers, and number of children in the household. Compared with control subjects, households of case subjects were significantly more likely to make $0 to $30 000 than at least $75 000 (adjusted odds ratio [OR]: 2.7; 95% confidence interval [CI]: 1.5–4.6) and to have ≥2 vaccine providers than 1 (OR: 2.0; 95% CI: 1.3–3.1). Case subjects also were significantly more likely than control subjects to be members of households with ≥4 children (OR: 3.1; 95% CI: 1.5–6.3) than 1 child.
Attitude, Belief, and Behavioral Risk Factors
We compared case and control parental attitudes, beliefs, and behaviors indicative of vaccine safety concerns, controlling for significant demographic variables. Five survey questions were significantly associated with case status, with control for the significant demographic characteristics. Of these, 3 remained in the final logistic model as significant independent predictors, ie, not wanting a new infant to receive all recommended immunizations (attitude), not thinking immunizations are safe (belief), and asking the doctor or nurse not to give the child an immunization for reasons other than illness (behavior) (Table 3).
Demographic Predictors of 3 Significant Risk Factors
We assessed associations between the 3 significant attitude, belief, and behavioral risk factors and household demographic characteristics for both case subjects and control subjects. More parents whose children were underimmunized (7.3%) versus fully immunized (1.4%) reported that they would not want a new infant to receive all recommended immunizations (χ2 = 13.7, P < .005). Logistic regression analysis demonstrated that parents who did not want a new infant to receive all recommended immunizations were less likely to live in households making $50 001 to $75 000, compared with households making more than $75 000 (OR: 0.3; 95% CI: 0.1–0.9). They were also less likely to have their children cared for by mixed public/private or unknown provider types than by a private provider (OR: 0.2; 95% CI: 0.1–1.0).
Scoring vaccines as somewhat or not safe versus safe also was more prevalent among parents whose children were underimmunized (20.2%) than parents whose children were fully immunized (9.8%) (χ2 = 13.3, P < .005). Logistic regression analysis demonstrated that parents scoring vaccines as somewhat or not safe versus safe were more likely to live in households making $30 001 to $50 000 than more than $75 000 (OR: 2.5; 95% CI: 1.2–5.3). These parents were also less likely to have ≥2 vaccination providers for their children, compared with 1 (OR: 0.4; 95% CI: 0.2–0.7), and mothers were more likely to be non-Hispanic white, compared with Hispanic (OR: 2.7; 95% CI: 1.2–6.3) or “other” race (OR: 5.9; 95% CI: 1.2–25.0).
More parents whose children were underimmunized (11.3%) than fully immunized (4.2%) asked the child’s medical provider not to give the child a vaccine for reasons other than illness (χ2 = 8.24, P < .005). The most common reason given by parents for this request was side effects (parents whose children were underimmunized: 57.2%; parents whose children were fully immunized: 45.5%). This was followed by too many shots for parents whose children were underimmunized (33.7%) and disease was not serious for parents whose children were fully immunized (27.1%). Logistic regression analysis demonstrated that households that reported asking their children’s medical provider not to give the child a vaccine for reasons other than illness were more likely than those who did not make this request to have mothers with a college degree, compared with mothers with only a high school diploma (OR: 2.8; 95% CI: 1.2–6.5). In addition, the mothers in these households were less likely to be of “other” race, compared with non-Hispanic white (OR: 0.2; 95% CI: 0.1–0.9).
Table 4 presents the attributable risk associated with all combinations of the 3 attitude, belief, and behavioral risk variables in the model. Of the 289 463 underimmunized children or case subjects (weighted estimate), the number of excess underimmunized children attributable to the 7 combinations of the risk factors was 42 937 (42 937/289 463 = 14.8%). Each of the risk factors contributed a percentage to the total excess of underimmunized children (not wanting a new infant to receive all recommended immunizations: 38.3%; asking the doctor or nurse not to give the child a vaccine for reasons other than illness: 48.1%; not thinking immunizations are safe: 69.0%). These percentages sum to >100% because the risk factors are not mutually exclusive. For example, among those who asked the doctor or nurse not to give the child a vaccine for reasons other than illness, a greater percentage of case subjects than control subjects also did not want a new infant to receive all immunizations (36.2% vs 3.8%, χ2 = 12.22, P < .005) and thought immunizations were somewhat or not safe (50.9% vs 22.4%, χ2 = 4.74, P = .03).
Our study documents that attitudes, beliefs, and behaviors indicative of vaccine safety concern contribute substantially to underimmunization in the United States. Moreover, although concerns were significantly more common among parents of underimmunized children, many parents of fully immunized children expressed similar attitudes and beliefs, suggesting potential risks to the currently high vaccine coverage levels.
Our data also document that socioeconomic, family, and health care factors are key contributors for the majority of children who are not up to date for ≥2 of the 3 focus vaccines. Similar to our study, associations between poverty and vaccination status were identified in several studies.13–15 In addition, we found that having a larger number of children in the household and having ≥2 providers were significantly and independently associated with case status. These factors were also important in other studies. A study of 13-month-old children in a regional health maintenance organization found that independent predictors of delayed immunization included, among other factors, having a larger number of children and not having a regular doctor,16 and a study of families in Baltimore found lower proportions of age-appropriate immunization among children with ≥2 siblings and children with ≥2 providers during their first 2 years of life.17 The Vaccines for Children program was implemented in 1994 to make free vaccines available at private provider offices for low-income and uninsured children. Despite the tremendous success of that program, our results suggest that having ≥2 providers for vaccinations remains a problem.
Studies showing a relationship between vaccine-related attitudes and beliefs and underimmunization have been conducted in countries outside the United States18–20 and in individual states within the United States.21 Our study provides the first nationally representative survey data of which we are aware that link underimmunization in the United States with vaccine safety concerns. Other studies failed to find this association.17,22,23 However, those studies were small, each included children from a single metropolitan area, and questions about vaccine safety were very general. In contrast, children included in our study were randomly chosen from a national statistical sample, and interviews included specific questions on vaccine safety attitudes, beliefs, and behaviors that had been cognitively pretested. Our study also was conducted more recently; as vaccine safety concerns have become more prominent in the media and on the Internet, the effects of these concerns might have increased. Additional questions have been added to a module of the NIS to better assess trends in vaccine safety attitudes and beliefs.
We estimated that 42 937 children, or slightly >1% of the US birth cohort, did not receive ≥2 of the focus vaccines because of vaccine safety concerns. Although immunization coverage remains high, many parents of fully vaccinated children demonstrated the same attitudes, behaviors, and beliefs as parents of underimmunized children. Independent of case or control status, significant associations were found for ≥1 of these attitudes/beliefs/behaviors with being non-Hispanic white, having a private medical care provider, being in a higher income category, and having a college degree. These characteristics are markedly different from those associated with underimmunization. The larger number of socioeconomically disadvantaged families in the case group may obscure the different demographic features of those who are underimmunized because of vaccine safety concerns. Alternately, some parents may express concern but be able to discuss their worries with their providers and decide to vaccinate their children; 4.2% of fully vaccinated control children had a parent who reported asking that the child not receive a vaccine. A previous study found that more highly educated parents were more likely to trust medical professionals but also were more concerned about contraindications than were less educated parents.21 We found that households with highly educated mothers reported asking that the child not receive a vaccine. School entry laws also may have an impact; 11.7% of case parents and 6.3% of control parents reported that they had received a vaccine they did not want because it was required. Interpersonal factors (doctor-patient relationship), community factors (social norms), and public policy factors24 (immunization laws) all may play important roles in maintaining immunization coverage.
Our study results must be interpreted in the context of several potential limitations. The response rate was 52%, primarily because of an inability to recontact families that had been interviewed for the NIS 3 to 9 months earlier. Weights were adjusted to compensate for differences between responders and nonresponders. Also, parents may have had difficulty recalling their attitudes and beliefs when their children were receiving infant immunizations. It is possible that, in some cases, responses reflected current beliefs, despite interviewers frequently reminding parents to recall when the child was an infant. If vaccine safety concerns had increased since the children of interviewed parents were infants, then the direction of the bias would be toward the null hypothesis (not finding significant associations between beliefs and vaccination status) and the true effects would be greater than reported here. The primary strengths of this study are the large sample size, the ability to weight responses on the basis of the statistical sampling methods, and the ability to analyze both demographic and attitude/belief/behavioral variables as potential predictors of case/control status.
Our study suggests that efforts to maintain and improve immunization coverage need to target those with attitudes/beliefs/behaviors indicative of vaccine safety concerns, as well as those with socioeconomic and health care-related risk factors. Materials that can help vaccination providers communicate effectively with parents about vaccine safety and the balance between the benefits and risks of immunization have been developed (www.cdc.gov/nip/publications). However, it is important to tailor the information provided to each parent’s needs. For example, if the parent needs assurance regarding the safety of vaccines, then presenting information in the Vaccine Safety for Parents brochure may be useful. If a parent questions the need for vaccines for the child, then reviewing information in the Helping Parents Who Question Vaccines: A Provider’s Guide brochure may help the provider talk with the parent. The ability to achieve and sustain disease prevention goals depends, in part, on the success of such communication.
This study was funded by the National Vaccine Program Office, Department of Health and Human Services, and the National Immunization Program, Centers for Disease Control and Prevention.
- Chen RT. Vaccine risks: real, perceived and unknown. Vaccine. 1999;17(suppl 3):S41–S46
- Gout O, Theodorou I, Liblau R, Lyon-Caen O. Central nervous system demyelination after recombinant hepatitis B vaccination: report of 25 cases. Neurology. 1997;48:A424
- Smith PJ, Battaglia MP, Huggins VJ, et al. Overview of the sampling design and statistical methods used in the National Immunization Survey. Am J Prev Med. 2001;20(suppl):17–24
- Research Triangle Institute. SUDAAN User’s Manual, Release 8.0. Research Triangle Park, NC: Research Triangle Institute; 2002
- Rothman KJ, Greenland S. Modern Epidemiology. New York, NY: Lippincott Williams & Wilkins; 1998
- Bates AS, Wolinsky FD. Personal, financial, and structural barriers to immunization in socioeconomically disadvantaged urban children. Pediatrics. 1998;101:591–596
- Shawn DH, Gold R. Survey of parents’ attitudes to the recommended Haemophilus influenzae type b vaccine program. CMAJ. 1987;136:1038–1040
- Bennett P, Smith C. Parents’ attitudinal and social influences on childhood vaccination. Health Educ Res. 1992;7:341–348
- Taylor JA, Cufley D. The association between parental health beliefs and immunization status among children followed by private pediatricians. Clin Pediatr (Phila). 1996;35:18–22
- Strobino D, Keane V, Holt E, Hughart N, Guyer B. Parental attitudes do not explain underimmunization. Pediatrics. 1996;98:1076–1083
- Copyright © 2004 by the American Academy of Pediatrics
"date": "2018-05-26T17:50:20",
"dump": "CC-MAIN-2018-22",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867841.63/warc/CC-MAIN-20180526170654-20180526190654-00496.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9614628553390503,
"score": 2.578125,
"token_count": 5438,
"url": "http://pediatrics.aappublications.org/content/114/1/e16"
} |
Law Report: Brazil
Waste Pickers in Brazil
Brazil provides a role model for integrating waste pickers into the municipal waste management network. In 2002, the country provided official recognition to workers by listing waste picking as an occupation in the Brazilian Classification of Occupations. Waste pickers or catadores contribute to the waste management system by segregating and gathering the recyclables, an important resource of any economy.
Size and Significance
An estimated 200,000 to 800,000 catadores work in Brazil contributing to a robust recycling trade in the country. A thriving trade in recycling of plastic bottles, aluminum cans and glass bottles exists, with impressive records of recycling rates that outperform Europe and the United States.
Catadores work long hours, sometimes exceeding 12 hours, bending, lifting, pushing or walking long distances in doing their work. They live life on the margins and their difficult work conditions are compounded by poor living conditions. As in other countries, their direct contact with contaminated waste renders them susceptible to diseases and consequently a lower life expectancy. Prior to their inclusion, the waste pickers faced constant prejudice and harassment from society.
The average earnings of the catadores are in the range of 400 reals per month, below the national minimum wage of 500 reals per month.
Law and Policy
Article 30, Clause V of Brazilian Constitution stipulates that municipalities are responsible for the management of solid waste services.
National Policy of Solid Waste, 2010: “On September 6th 2007 the National Solid Waste Policy was sent for Congress appraisal as an Executive Power proposition. This proposition advocates the reverse logistics system, which makes the generator of waste responsible for the return of recyclables to the productive chain after consumption, which, in turn, increases the volume of activity for the waste picker. The proposition was recognized as a big advancement for the MNCR (the National Movement of Waste pickers) as it made the inclusion of waste pickers in the reverse logistics system mandatory. This necessitated the availability of fiscal and financial incentives for the recycling industry, for the development of regional programs in partnership with waste picker organizations, and to facilitate the structuring of these organizations. After 20 years of debate the National Policy of Solid Waste was finally approved in July 2010. This Law is outstanding in its recognition of waste pickers, turning what has been a government policy over the years into law. However, it must be mentioned that a last minute maneuver at the Senate House omitted the clause restricting the use of incineration to a “last resort” treatment technology from the final Policy. The Policy was sanctioned by President Lula on August 2nd. During the sanctioning ceremony the MNCR, backed by a technical note issued by the Ministry of the Environment, asked President Lula to veto this alteration when regulating the Policy. This is still to be analysed by the President´s cabinet.” (Dias, 2010)
Federal legislation: In 2001, the collection of recyclables (waste picking) was included as a profession in the Brazilian Occupation Classification (CBO). With this legal recognition, waste pickers gradually found a place in official statistics, enabling research and monitoring of the occupational group.
In 2007, Law # 11.445/07 was passed which established the national guidelines for basic sanitation. Article 57 of this Law (which modifies article 24 of Law # 8.666/93), allows for hiring of waste picker associations and cooperatives directly by municipalities, without a process of tendering of bids, to perform selective waste collection.
A further legal instrument that promoted waste picker social inclusion at a federal level was the Presidential Decree 5940/06 which was presented at the 5th Annual Waste & Citizenship Festival held in Belo Horizonte in August 2006, and organized with the participation of waste picker representatives. This Decree determined that a “Solid Waste Selective Collection” was to be implemented in all federal public buildings in Brazil, and that the material generated was to be delivered to waste picker organizations. The main objective of the Decree was to recognize the labor of waste pickers, and to allow for the generation of income for these workers.
- In 2003 the Minas Gerais State Parliament responded to the demands of the catadores’ movement and extensively discussed the possibilities of an inclusive solid waste management system. Following the debates, the State Government altered DN 52 (that forbade access of catadores to open dumps) by Resolution # 67 in the end of 2003, in which it was added that when closing an open dump municipalities should create labour and income alternatives for the catadores withdrawn from the dumps.
In December 2008 the Law18031/2008 that institutes the Minas Gerais State SW Policy was approved and sanctioned in January 2009. It contains explicit articles dealing with social inclusion of catadores and also economical mechanisms of incentives for municipalities abiding the law.
- In the city of Diadema, the waste pickers’ organizations included in the municipal source-segregation scheme are paid the same amount per tonne of recyclables collected as a private company would be. This was made possible by Law 2336/04, which entitles organizations to be paid by service rendered. Cities like Araxá, Brumadinho, and Londrina pay cooperatives for environmental services.
Organisation and Voice
Brazil’s National Movement of Recycled Material Collectors (MNCR), established in 2001, has been instrumental in advocating for changes in law and policy. Worker cooperatives have been formed in Rio, Belo Horizonte, Recife, Niterói and Salvador.
Coopamare, one of the most successful recycler co-operatives in Brazil, collects 100 tons of recyclables a month, equivalent to half of what is collected by the government recycling programme in Sao Paulo, and at a lower cost. Coopamare members earn US$ 300 per month, twice the minimum wage in Brazil. In comparison, half of the country’s labour force earn less than US$ 150 a month.
- Draft CONAMA Resolution on a National Solid Waste Policy
- Oveview of the legal framework for social inclusion in solid waste management in Brazil
Find more information about Brazil
"date": "2018-04-20T16:24:44",
"dump": "CC-MAIN-2018-17",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944479.27/warc/CC-MAIN-20180420155332-20180420175332-00496.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9385085701942444,
"score": 2.9375,
"token_count": 1289,
"url": "http://globalrec.org/law-report/brazil/"
} |
In this two-part series, theory and practice meet head on as education expert Professor Dylan Wiliam sets up an experimental school classroom. For one term, he takes over a Year 8 class at a secondary comprehensive to test simple ideas that he believes could improve the quality of our children's education.
Some of the higher ability students are not responding well to the new rule of No Hands Up in class, and Wiliam is worried they are at risk of being left behind.
There is a classroom revolt when the teachers remove grades from work. The idea is to make the students actually read the comments on their work in order to help them improve, but they are left confused and angry after becoming so used to the traditional grading system.
By the end of term, however, even Wiliam is surprised by the impact the experiment has had on the students' academic achievement.
"date": "2014-12-20T05:34:37",
"dump": "CC-MAIN-2014-52",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769392.53/warc/CC-MAIN-20141217075249-00123-ip-10-231-17-201.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9754046201705933,
"score": 2.84375,
"token_count": 175,
"url": "http://www.bbc.co.uk/programmes/b00v3fk1"
} |
Alternatives to Pesticides: Cockroaches
Actual size: 1/2” – 5/8”
These six-legged, hard-bodied insects can carry disease, contaminate food and induce allergies. They hide in cracks and crevices during the day and feed at night on water and food crumbs, even wallpaper paste or envelope glue. They prefer warm, moist areas such as kitchens, bathrooms and around washing machines and hot water heaters.
Clean - Cleanliness is crucial. Properly store and dispose of all kitchen wastes. Keep the kitchen clean and free of food scraps. Wash dishes immediately after eating. Keep areas where grease accumulates clean. Wash pastry cloths. Sweep frequently. If you find a cockroach nest, wash and vacuum the area if it is accessible.
Fix - Fix dripping faucets and other leaks, and make sure your dish rack drains properly. Damp, dirty mops can also attract roaches.
Move - Move debris, firewood and garbage away from the house.
Store - Do not leave pet food or water bowls out at night. Enclose food in sealed containers.
Plug - Close all gaps around pipes and electric lines where they enter the house by using cement or screening. Plug cracks around baseboards, walls, cupboards, pipes, sinks, bathroom fixtures and water heaters with latex or silicone caulk.
Least-toxic chemical control
Baking Soda and Powdered Sugar - Mix equal parts and spread around infested area.
Borax and Flour - Mix 1/2 cup borax and 1/4 cup flour and fill a glass jar. Punch small holes in the jar lid. Sprinkle the powder along baseboards and door sills. CAUTION: Borax is toxic if eaten. This recipe may not be for you if there are young children or pets in the house.
Boric Acid - Use boric acid, but keep it away from areas children or pets may explore. It is particularly useful under the stove and refrigerator or in cracks that cannot easily be plugged. Use roach traps that contain boric acid to monitor the effectiveness of your prevention and control measures. Boric acid is a slow acting, low-toxicity, long-lasting (if kept dry) powder that is effective against ants, cockroaches and other structural pests. It is a digestive and contact poison and is usually applied as a dust. Products often come with a duster-type applicator. It is toxic if ingested, inhaled or comes into contact with abraded or broken skin. It poses a risk to children and pets if they come into contact with it. It is safe to place it in wall voids because it does not evaporate and cannot enter living spaces.
Flour, Cocoa Powder, and Borax - Mix together 2 tablespoons flour, 4 tablespoons borax, and 1 tablespoon cocoa. Set the mixture out in dishes. CAUTION: Borax is toxic if eaten. Keep out of reach of children and pets.
Hedge Apples (Osage Orange) - Cut hedge apples in half and place several in the basement, in cabinets, or under the house to repel roaches.
Oatmeal, Flour, and Plaster of Paris - Mix equal parts and set in dishes. Keep out of reach of children and pets.
"date": "2015-11-26T12:24:53",
"dump": "CC-MAIN-2015-48",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447266.73/warc/CC-MAIN-20151124205407-00202-ip-10-71-132-137.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9196041226387024,
"score": 2.65625,
"token_count": 689,
"url": "http://www.clark.wa.gov/recycle/A-Z/Materials/atp-bugs-coackroaches.html"
} |
Willow acacia (Acacia salicina) is a drought-tolerant tree that grows up to 3 feet per year. A native of Australia, willow acacia -- sometimes called weeping wattle or Australian willow -- grows in a tall weeping form that resembles weeping willow (Salix babylonian) trees. It is not a true willow but is a member of the legume family (Fabaceae).
Willow acacia is an evergreen tree that grows in a tall, upright single-trunk form. It reaches 20 to 40 feet in height at maturity, spans 15 to 20 feet in width and contains no thorns. The leaves are narrow, gray-green and approximately 3 inches long. The bark is commonly light gray and the branches droop in a weeping form. The willow acacia produces a profusion of fluffy yellow flowers from late summer through early winter, followed by an abundant crop of seed pods.
Willow acacia does best in U.S. Department of Agriculture plant hardiness zones 7 through 10. It's considered a fast-growing tree because with the proper growing conditions and enough water, it will grow up to 3 feet each year. It tolerates a wide variety of soils and is most commonly found in alkaline conditions.
Willow acacia trees are very low-maintenance. Their branches can sometimes break easily in the wind, so occasional pruning may be necessary. As a member of the legume family, this tree will supply much of its own nutrients. It does require supplemental water when it is first becoming established. In the first one to two years, water the tree deeply twice a month during hot seasons.
This tree is prone to falling over if you water it too shallowly. Watering too frequently can also cause this problem, because the tree grows so fast. Unexpected cold spells can kill the tree. Most varieties are hardy down to 20 degrees Fahrenheit, though some have been known to survive brief bouts in the high teens. The abundant production of flowers and seed pods produces extensive litter that may be undesirable around swimming pools.
"date": "2018-06-19T09:05:33",
"dump": "CC-MAIN-2018-26",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267861981.50/warc/CC-MAIN-20180619080121-20180619100121-00536.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9415431618690491,
"score": 3.71875,
"token_count": 430,
"url": "http://homeguides.sfgate.com/fast-willow-acacia-grow-73829.html"
} |
By Mike Markowitz for CoinWeek ….
Those very things that procured him ill repute bear witness to his greatness… Antony was thought disgraced by his marriage with Cleopatra, a queen superior in power and glory … to all who were kings in her time. Antony was so great as to be thought by others worthy of higher things than his own desires.
IT WOULD BE FAIR TO SAY that the Roman people loved Mark Antony, and he broke their hearts. Our modern understanding of this historical figure has been shaped largely by the powerful dramatic performances of Richard Burton in the film Cleopatra (1963) and Marlon Brando in the film version of Shakespeare’s Julius Caesar (1953).
Mark Antony (or Marcus Antonius) played a key role in the civil wars that led to the end of the Republic and the beginning of the Empire – a period numismatists often describe as the “Imperatorial” era. The Latin word imperator at this time meant “commander” or “warlord” rather than “ruler”.
Roman coinage in the name of Marcus Antonius extends from 44 to 31 BCE and the so-called legionary denarii issued in 32-31 BCE to pay his army are, by far, the most abundant Roman silver coins. The best estimate is that between 25 million and 35 million pieces were struck (Harl, 60), and tens of thousands survive today. There are two theories about where these coins were minted. Some believe there was a mobile workshop that moved with Antony’s army in northwestern Greece, while others argue the coins were struck at the town of Patras, which served as Antony’s winter headquarters.
The coin’s obverse shows a galley, sometimes described as Antony’s flagship. The ship has a single bank of eight to 12 oars (the number of oars was probably left to the whim or patience of the die cutter). Above the ship, ANT AVG abbreviates the name Antonius along with one of his titles, Augur, a priest of the Roman state religion. Below the ship is his other title III VIR. R.P.C. (tresviri rei publicae constituendae), which loosely translates as “Triumvir for the Reorganization of the Republic”. A triumvir in this case was a member of the “Second Triumvirate”, an informal power-sharing arrangement formed in 43 BCE between three men: Antony, Octavian (Julius Caesar’s great-nephew and designated heir) and Marcus Aemilius Lepidus (c. 88 – 12 BCE), last high priest of the Republic and Caesar’s political ally.
The reverse shows a legionary eagle (aquila) between two standards (signa; singular signum), with an inscription identifying one of the units in Antony’s army. The gilded bronze eagle mounted on a pole was the legion’s sacred emblem – its loss in battle was the worst disgrace a unit could suffer. A standard bearer (signifer) in each of the legion’s 10 cohorts carried the signum, a pole adorned with metal discs and crescents. A full-strength legion in this era had about 4,800 men, and a foot soldier earned 225 denarii a year, paid in three installments.
A unique piece that appeared in a 2012 European auction provides a clue as to how these coins were made. The space where the legion number would normally appear is blank. The cataloguer writes:
[T]his coin supports the theory that dies were prepared in advance, and the legion numbers were engraved as needed
This may have been a trial strike, not intended for circulation, or perhaps an emergency issue, rushed into production with unfinished dies.
For the First Legion, rather than a simple Roman numeral “I,” part of the number is spelled out: LEG PRI for Legio Prima. This is by far the rarest of all the coins in the series, with only three genuine examples recorded in the CoinArchives Pro database at recent auction prices that ranged from $6,700 to $8,100 USD.
Roman numerals were not standardized in that era: “four” might be written as IIII or IV, “nine” might be written as IX or VIIII, and so on. The die engravers might have been Greeks unfamiliar with Rome’s awkward numerals (Greek numerals are much simpler and more logical). There are coins with obvious errors like “IIX” for “XII”.
Legion VI seems to be one of the most common types, with 175 examples in the CoinArchives Pro database. An exceptionally high-grade specimen brought over $3,000 in a recent European auction, but well-worn examples can be found for $100 or less.
Three of the legions had honorific names as well as numbers: Legion XII Antiquae (“The Old One”), Legion XVII Classicae (“of the Fleet”) and Legion XVIII Lybicae (“The Libyan”; apparently in reference to a victory there, not where the troops were recruited). All of these coins are fairly scarce, with 38, 33 and 20 examples listed in the CoinArchives Pro database, respectively.
Antony’s army had two special units that were also honored on the legionary coinage.
The Praetorian Cohorts (COHORTIUM PRAETORIARUM, probably four in number) were elite units that served as the commander’s personal bodyguard, in camp as well as in battle. The Speculatores (COHORTIS SPECULATORUM) were reconnaissance troops who also manned the scout cruisers of the fleet.
The unusual reverse of the Speculatores’ denarius shows their three standards adorned with model ships and crowned with wreaths, indicating that they played a key role in some naval victory. Both of these types are scarce, with 25 and 27 examples listed in the CoinArchives Pro database, respectively.
Although there were just 23 numbered legions in Antony’s army, there are rare examples of coins with higher numbers. These have generally been dismissed as die engraver’s errors or forgeries, but some may be an early example of “operational deception” intended to exaggerate the army’s true size. A cataloguer writes:
The existence therefore of legions in the service of Antony with numbers greater than XXIII which have escaped the notice of history is entirely possible; many of his units were never at full strength, and some may have effectively marched only on paper. Certainly, it seems to be the case that the suppressed Republican legions in Antony’s service had their records completely erased after the war. It remains probable then that not all of these fleet denarii for legions over XXIII are false or errors as has been assumed, as is demonstrated by the present clearly genuine example unambiguously inscribed LEG XXXIII.
About a dozen gold aurei struck with the same dies as the silver denarii are known. Only four examples have appeared on the numismatic market in recent years. Unlike the debased silver (85 – 90% pure) used for the denarius, the gold is very pure and the surviving coins are full weight — about eight grams. A Legion II aureus, pedigreed to the famous Hunt Collection, sold for over $205,000. A Legion XIII aureus brought almost $160,000 in 2015; a Legion XIX realized over $67,000 in 2008; and a Legion XXII example went for nearly $64,000 in 2009.
The legionary denarii remained in circulation for decades, probably trading at a discount as they wore down to slugs. Many were still in circulation at Pompeii when it was buried under volcanic ash a century after these coins were struck! They were extensively counterfeited in low-grade silver alloy, or silver-coated base metal blanks, and two iron forger’s dies found in the Balkans–one for Legion VI, the other for Legion XII–appeared on the market in 2013.
In 169 CE, for the two-hundredth anniversary of Mark Antony’s defeat at the Battle of Actium, the co-emperors Marcus Aurelius and Lucius Verus issued a commemorative near-replica of the Legion VI denarius. The obverse depicts a rather squashed warship, with the name ANTONIUS AUGUR spelled out in full. The reverse legend is ANTONINVS ET VERVS AVG REST LEG VI – “Antoninus and Verus Restore Legion VI”.
Collect ‘em All
Considering the rarity of some of the issues, assembling a complete collection of the legionary denarii would be a serious challenge for even the wealthiest collector. The Mark Melcher Collection, sold in 2004 in the CNG 67 auction was remarkably comprehensive, missing only the rare Legion I and Praetorian Cohort types.
* * *
“The Comparison of Antony and Demetrius” in Plutarch (n.d.), page 1153
NAC Auction 63, 17 May 2012, Lot 598. Realized $11,638 USD.
Numismatik Lanz, Auction 161, 7 December 2015, Lot 214. Realized $6,781 USD.
A fourth example was withdrawn, possibly because it was suspected as a modern fake.
Roma Numismatics E-sale 36, 27 May 2017, Lot 508. Realized $3,071 USD.
Roma Numismatics, Auction XIII, 23 March 2017, Lot 696. Realized $5,512 USD.
NAC Auction 99, 29 May 2017, Lot 2. Realized $205,381 USD.
NAC Auction 83, 20 May 2015, Lot 520. Realized $159,847 USD.
UBS Auction 78, 9 September 2008, Lot 1211. Realized $67,322 USD.
NAC Auction 51, 5 March 2009, Lot 130. Realized $63,824 USD.
Gemini Auction X, 13 Jan 2013, Lot 473. Realized $4,500 USD.
Gemini Auction X, 13 Jan 2013, Lot 467. Realized $4,250 USD.
UBS Auction 78, 9 September 2008, Lot 1219. Realized $886 USD.
Marcus Aurelius had adopted the name Antoninus in homage to his predecessor, Antoninus Pius.
CNG Auction 67, 22 September 2004, Lots 1220 – 1251.
Fields, Nick. The Roman Army: The Civil Wars 88 – 31 BC. Osprey (2008)
Grueber, Herbert A. “Coinage of the Triumvirs, Antony, Lepidus and Octavian, Illustrative of the History of the Times”, Numismatic Chronicle (1911)
Harl, Kenneth. Coinage in the Roman Economy. Baltimore (1996)
Mattingly, Harold. Roman Coins. London (1967)
Paunov, Eugeni and Ilya Prokopov. “Actium and the Legionary Coinage of Mark Antony. Historical, Economic and Monetary Consequences in Thrace (The Coin Evidence)”, Proceedings of the 1st International Conference, Numismatics, History and Economy in Epirus During Antiquity. Athens (2013)
Plutarch. John Dryden, trans. Lives of the Noble Grecians and Romans. New York (n.d.)
Sanchez, Fernando Lopez. “Military Units of Mark Antony and Lucius Verus: Numismatic Recognition of Distinction”, Israel Numismatic Research 5 (2010)
Vagi, David. Coinage and History of the Roman Empire. Sidney, OH (1999)
Future generations of astronomers will probably see the 1990s as a watershed. Before that time, the meaning of the word "planet" seemed obvious, even trivial, and completely secure. The discovery of many small bodies orbiting beyond Neptune posed a minor difficulty after 1992, because it raised the question of whether Pluto was really a planet or just another "Kuiper Belt object." And since 1995, the detection of distant worlds revolving around various sunlike stars has proved a real challenge to the long-held categorizations. Which of these bodies are true planets and which are just small partners in binary star systems? Should the definition hinge solely on the mass of the object, or should it also depend on the nature of its orbit? As if these questions were not difficult enough to answer, astronomers must now grapple with a new problem in celestial taxonomy, because some of the extrasolar planets recently discovered are linked to no star at all.
The detection of such "free-floating" planets was anticipated five years ago, when Adam Burrows of the University of Arizona and four colleagues published a paper in Nature that examined the possibility of resolving giant extrasolar planets with modern telescopes. And prospects for the detection of free-floating planets became truly significant in 1996 and 1997, when the first few brown-dwarf stars were reported.
Brown dwarfs are, in essence, failed suns—ones so small that the nuclear fires that power most stars cannot take hold. Astrophysicists believe that the threshold for true stardom lies at about 80 times the mass of Jupiter. A smaller body may briefly burn deuterium (the heavy and more easily fused isotope of hydrogen), but the hydrogen-fusion reactions of normal stars cannot begin easily, and if they do happen, they don't last long. So an object with a mass less than 80 Jupiters is usually labeled a brown-dwarf star.
Fertile grounds for the discovery of brown-dwarf stars are the active star-forming regions. Because the brown dwarfs created in these places are still relatively young and hot, they give off enough infrared light to reveal their presence, despite their diminutive size. Astronomers have now charted scores of brown dwarfs (or brown-dwarf candidates) in such stellar nurseries. Curiously, some of these objects appear to have about the same mass as the larger planets recently discovered around various distant stars. Maria Rosa Zapatero-Osorio (of the Instituto de Astrofísica de Canarias in Tenerife) and several colleagues have determined, for example, that S Ori 47, a small body in the σ Orionis cluster, has a mass of only about 10 to 20 Jupiters. Because this range spans the threshold below which even deuterium fusion fizzles (about a dozen Jupiter masses), the title of their yet-unpublished paper speaks of "reaching the mass boundary between brown dwarfs and giant planets."
Another team, made up of Philip W. Lucas (of the University of Hertfordshire) and Patrick F. Roche (of the University of Oxford) have also searched a stellar nursery in Orion for possible brown dwarfs. They estimate that 13 of their many sightings in Orion's Trapezium cluster are too tiny even to burn deuterium, a conclusion that prompted them to claim discovery of "the first free-floating objects of planetary mass" in the preprint of a paper now in press with the Monthly Notices of the Royal Astronomical Society.
Some would object to the suggestion that such isolated objects are really planets. Lynne A. Hillenbrand, for one, an astronomer at the California Institute of Technology, argues against a definition based solely on mass: "If you're going to call something a planet, you should be sure it formed like a planet." But her Caltech colleague Eduardo L. Martín disagrees. He admits the nomenclature that he and some other astronomers are using for these enigmatic objects might seem bizarre: "This is all very strange ... they are not associated with any star, and still we're calling them planets." But he argues that the classic definition, which depends on how the object formed, involves mechanisms that are as yet poorly understood, whereas the deuterium-burning limit provides a convenient way to distinguish giant planets from brown-dwarf stars according to well-established principles of nuclear physics.
Curiously, neither Hillenbrand nor Martín is willing to accept the British claim of finding free-floating objects of planetary mass. Hillenbrand, who has also probed Orion for such objects, suspects that many of Lucas and Roche's detections are stars outside the cluster masquerading as brown dwarfs or giant planets. "I'm content with the conclusion that they are unrelated objects," she notes. Martín, who is part of the group that found S Ori 47, puts his skepticism more bluntly, pointing out that Lucas and Roche did not support their assertion with spectral evidence that these dim objects are not, in fact, ordinary stars: "If you don't have a spectrum, you don't know what you're talking about; it's as simple as that."
Lucas is, however, sticking to his guns. "I'm 99 percent sure that these objects we're claiming to be planets are planets." His confidence stems, in part, from the spectral observations he gathered since submitting his paper, spectra that he says show just the sort of features one would expect to see coming from giant planets. He has, however, expressed his intent to correct the paper's assertion that he and Roche made the first detection of free-floating objects of planetary mass. It seems that they—along with many science journalists—had completely overlooked a 1998 Science article by a group of Japanese astronomers who had already reported the discovery of two free-floating objects of planetary mass.
In that paper and one that followed shortly afterward (in a similarly prominent publication, the Astrophysical Journal), Motohide Tamura of the National Astronomical Observatory of Japan, along with several Japanese colleagues, described their observations of dim bodies in the Chamaeleon cluster, another site of active star formation. Their estimates for the mass of two isolated objects in the cluster lie below the threshold for burning deuterium, qualifying these objects as free-floating planets, at least according to one definition.
Why didn't the popular press cover this rather momentous discovery? And why had Lucas and Roche failed to realize until now that their detection of free-floating planets was not the first to be reported in the scientific literature? The answer turns out to be quite simple: "The Japanese are very modest," Lucas explains, "the discovery is not mentioned in the abstract or title of either paper."—David Schneider
"date": "2015-08-02T08:20:56",
"dump": "CC-MAIN-2015-32",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989018.48/warc/CC-MAIN-20150728002309-00133-ip-10-236-191-2.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.958341658115387,
"score": 3.953125,
"token_count": 1393,
"url": "http://www.americanscientist.org/issues/id.541,y.0,no.,content.true,page.1,css.print/issue.aspx"
} |
Fusion Science and Technology / Volume 54 / Number 1 / July 2008 / Pages 39-44
Technical Paper / Iter and Fusion
Tritium, as one of the two fuel components for fusion power, plays a special role in any fusion device. Due to its volatile character, radioactivity and easy incorporation as HTO, it needs to be controlled with special care, and due to its scarcity on Earth it has to be produced in situ in future fusion power plants. The paper discusses the present tritium R&D activities in fusion ongoing in the EU and presents the various processes/techniques envisaged for controlling tritium in future fusion reactors, focusing mainly on the issues of breeding blankets and the fuel cycle in DEMO.
"date": "2015-11-27T02:48:52",
"dump": "CC-MAIN-2015-48",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447906.82/warc/CC-MAIN-20151124205407-00346-ip-10-71-132-137.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9211260080337524,
"score": 2.75,
"token_count": 146,
"url": "http://www.ans.org/pubs/journals/fst/a_1761"
} |
The Mass Migration of the 1950s
Hundreds of thousands of Jewish refugees from Europe and the Arab lands seek a new home in the Jewish State.
The following article is reprinted with permission from The Jewish Agency.
The years between 1948 and 1951 witnessed the largest migration ever to reach the shores of modern Israel. This influx began at a time when the state was in the throes of its greatest struggle for survival, the War of Independence, and continued throughout a period troubled by both security concerns and economic hardship. In the mid‑1950s, a second wave arrived in Israel. The immigrants of the country's first decade radically altered the demographic landscape of Israeli society as well as the balance between Israel and the Jewish diaspora. Many of today's social issues are rooted in this mass migration: Israel's rapid economic growth, social stratification, and the formation of new political frameworks and elites.
Some 688,000 immigrants came to Israel during the country's first three and a half years at an average of close to 200,000 a year. As approximately 650,000 Jews lived in Israel at the time of the establishment of the state, this meant in effect a doubling of the Jewish population, even in light of the fact that some 10 percent of the new immigrants left the country during the next few years. Although immigration declined rapidly during the early 1950s, another 166,000 arrived in the middle of the decade.
The first immigrants to reach the new state were survivors of the Holocaust, some from displaced persons camps in Germany, Austria, and Italy, and others from British detention camps in Cyprus. The remnants of certain communities were transferred virtually in their entirety, for example Bulgarian and Yugoslavian Jewry. Large sections of other communities, such as those from Poland and Rumania, came to Israel during the first years. After the initial influx of European Jews, the percentage of Jews from Moslem countries in Asia and Africa increased considerably (1948 ‑ 14.4%, 1949 ‑ 47.3%, 1950 ‑ 49.6%, 1951 ‑ 71.0%). During 1950 and 1951, special operations were undertaken to bring over Jewish communities perceived to be in serious danger, for example, the Jews of Yemen and Aden (Operation Magic Carpet) and the Jewish community in Iraq (Operation Ezra and Nehemia). During the same period, the vast majority of Libyan Jewry came to the country. Considerable numbers of Jews immigrated from Turkey and Iran as well as from other North African countries (Morocco, Tunisia, and Algeria).
Immigration to Israel (1948‑1951) by Major Countries of Origin (in thousands)
• Poland 106.4
• Yemen and Aden 48.3
• Morocco, Tunisia, Algeria 45.4
• Bulgaria 37.3
• Turkey 34.5
• Libya 31.0
• Iran 21.9
• Czechoslovakia 18.8
• Hungary 14.3
• Germany, Austria 10.8
• Egypt 8.8
• USSR 8.2
• Yugoslavia 7.7
Source: Moshe Sicron, "The Mass Aliyah ‑ Its Dimensions, Characteristics and Influences on the Structure of the Israeli Population," in Mordechai Naor, ed., Olim and Ma'abarot 1948‑1952 (Jerusalem: 1986): 34 (Hebrew). During the period between 1955 and 1957, most (62%) immigrants came from North African countries.
There were considerable differences between the immigrants from European countries and those from Asia and Africa. The survivor population was usually older and contained fewer children. On the other hand, the Jews from developing countries in Asia and Africa tended to have a large number of children but a smaller elderly population. The European immigrants were generally better educated. Neither group however, resembled the profile of pre‑state immigration: a significantly lower percentage of the post‑1948 immigrants were in the primary wage earning group (only 50.4% in the 15‑45 age group as compared to 66.8% in earlier immigration waves) and consequently fewer could participate in the work force of the new state. The newer immigrants had less education: 16% of those aged 15 and above had completed secondary education as compared to 34% among the earlier settlers. Women, especially among the immigrants from Asia and Africa, tended less to work outside the home. The professions of the new arrivals were also different than those of their predecessors: few had engaged in agriculture and most had been either small craftsmen (tailors, cobblers, carpenters, smiths) or traders and peddlers.
Effects on the Israeli Population
First and foremost, the mass migration led to a steep rise in the Israeli Jewish population. Not only was the population doubled within a short period of time, but the high fertility rate of many of the newcomers led to continued population increase in the years ahead. This growth was significant both with regard to the ratio between Jews and non‑Jews in Israel and to the demographic role of Israel in the Jewish world. Secondly, due to the large percentage of immigrants from Asia and Africa and to their higher fertility rate, the mass migration led to a change in the ethnic composition of Israeli society. An indication of this trend can be seen in the rise of the proportion of foreign‑born Israelis who were born in Asia and Africa. In November 1948 this proportion stood at 15.1%, but by the end of 1951 it had risen to 36.9%.
Thirdly, the new state now had to deal with a considerable population that to a large extent lacked agricultural or modern professional skills, or the same degree of modem education as the veteran population. Moreover, due to an under‑representation of that age group that could best adapt vocationally to new social and economic conditions, it was difficult to quickly integrate the new population. One of the most important social issues in Israel resulted from the difficulties involved in absorbing the new immigrants.
A Brief History of the Internet
Barry M. Leiner, Vinton G. Cerf, David D. Clark,
Robert E. Kahn, Leonard Kleinrock, Daniel C. Lynch,
Jon Postel, Larry G. Roberts, Stephen Wolff
- Origins of the Internet
- The Initial Internetting Concepts
- Proving the Ideas
- Transition to Widespread Infrastructure
- The Role of Documentation
- Formation of the Broad Community
- Commercialization of the Technology
- History of the Future
The Internet has revolutionized the computer and communications world like nothing before. The invention of the telegraph, telephone, radio, and computer set the stage for this unprecedented integration of capabilities. The Internet is at once a world-wide broadcasting capability, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals and their computers without regard for geographic location.
The Internet represents one of the most successful examples of the benefits of sustained investment and commitment to research and development of information infrastructure. Beginning with the early research in packet switching, the government, industry and academia have been partners in evolving and deploying this exciting new technology. Today, terms like "[email protected]" and "http://www.acm.org" trip lightly off the tongue of the random person on the street. 1
This is intended to be a brief, necessarily cursory and incomplete history. Much material currently exists about the Internet, covering history, technology, and usage. A trip to almost any bookstore will find shelves of material written about the Internet. 2
In this paper, 3 several of us involved in the development and evolution of the Internet share our views of its origins and history. This history revolves around four distinct aspects. There is the technological evolution that began with early research on packet switching and the ARPANET (and related technologies), and where current research continues to expand the horizons of the infrastructure along several dimensions, such as scale, performance, and higher level functionality. There is the operations and management aspect of a global and complex operational infrastructure. There is the social aspect, which resulted in a broad community of Internauts working together to create and evolve the technology. And there is the commercialization aspect, resulting in an extremely effective transition of research results into a broadly deployed and available information infrastructure.
The Internet today is a widespread information infrastructure, the initial prototype of what is often called the National (or Global or Galactic) Information Infrastructure. Its history is complex and involves many aspects - technological, organizational, and community. And its influence reaches not only to the technical fields of computer communications but throughout society as we move toward increasing use of online tools to accomplish electronic commerce, information acquisition, and community operations.
Origins of the Internet
The first recorded description of the social interactions that could be enabled through networking was a series of memos written by J.C.R. Licklider of MIT in August 1962 discussing his "Galactic Network" concept. He envisioned a globally interconnected set of computers through which everyone could quickly access data and programs from any site. In spirit, the concept was very much like the Internet of today. Licklider was the first head of the computer research program at DARPA, 4 starting in October 1962. While at DARPA he convinced his successors at DARPA, Ivan Sutherland, Bob Taylor, and MIT researcher Lawrence G. Roberts, of the importance of this networking concept.
Leonard Kleinrock at MIT published the first paper on packet switching theory in July 1961 and the first book on the subject in 1964. Kleinrock convinced Roberts of the theoretical feasibility of communications using packets rather than circuits, which was a major step along the path towards computer networking. The other key step was to make the computers talk together. To explore this, in 1965 working with Thomas Merrill, Roberts connected the TX-2 computer in Mass. to the Q-32 in California with a low speed dial-up telephone line creating the first (however small) wide-area computer network ever built. The result of this experiment was the realization that the time-shared computers could work well together, running programs and retrieving data as necessary on the remote machine, but that the circuit switched telephone system was totally inadequate for the job. Kleinrock's conviction of the need for packet switching was confirmed.
In late 1966 Roberts went to DARPA to develop the computer network concept and quickly put together his plan for the "ARPANET", publishing it in 1967. At the conference where he presented the paper, there was also a paper on a packet network concept from the UK by Donald Davies and Roger Scantlebury of NPL. Scantlebury told Roberts about the NPL work as well as that of Paul Baran and others at RAND. The RAND group had written a paper on packet switching networks for secure voice in the military in 1964. It happened that the work at MIT (1961-1967), at RAND (1962-1965), and at NPL (1964-1967) had all proceeded in parallel without any of the researchers knowing about the other work. The word "packet" was adopted from the work at NPL and the proposed line speed to be used in the ARPANET design was upgraded from 2.4 kbps to 50 kbps. 5
In August 1968, after Roberts and the DARPA funded community had refined the overall structure and specifications for the ARPANET, an RFQ was released by DARPA for the development of one of the key components, the packet switches called Interface Message Processors (IMP's). The RFQ was won in December 1968 by a group headed by Frank Heart at Bolt Beranek and Newman (BBN). As the BBN team worked on the IMP's with Bob Kahn playing a major role in the overall ARPANET architectural design, the network topology and economics were designed and optimized by Roberts working with Howard Frank and his team at Network Analysis Corporation, and the network measurement system was prepared by Kleinrock's team at UCLA. 6
Due to Kleinrock's early development of packet switching theory and his focus on analysis, design and measurement, his Network Measurement Center at UCLA was selected to be the first node on the ARPANET. All this came together in September 1969 when BBN installed the first IMP at UCLA and the first host computer was connected. Doug Engelbart's project on "Augmentation of Human Intellect" (which included NLS, an early hypertext system) at Stanford Research Institute (SRI) provided a second node. SRI supported the Network Information Center, led by Elizabeth (Jake) Feinler and including functions such as maintaining tables of host name to address mapping as well as a directory of the RFC's. One month later, when SRI was connected to the ARPANET, the first host-to-host message was sent from Kleinrock's laboratory to SRI. Two more nodes were added at UC Santa Barbara and University of Utah. These last two nodes incorporated application visualization projects, with Glen Culler and Burton Fried at UCSB investigating methods for display of mathematical functions using storage displays to deal with the problem of refresh over the net, and Robert Taylor and Ivan Sutherland at Utah investigating methods of 3-D representations over the net. Thus, by the end of 1969, four host computers were connected together into the initial ARPANET, and the budding Internet was off the ground. Even at this early stage, it should be noted that the networking research incorporated both work on the underlying network and work on how to utilize the network. This tradition continues to this day.
Computers were added quickly to the ARPANET during the following years, and work proceeded on completing a functionally complete Host-to-Host protocol and other network software. In December 1970 the Network Working Group (NWG) working under S. Crocker finished the initial ARPANET Host-to-Host protocol, called the Network Control Protocol (NCP). As the ARPANET sites completed implementing NCP during the period 1971-1972, the network users finally could begin to develop applications.
In October 1972 Kahn organized a large, very successful demonstration of the ARPANET at the International Computer Communication Conference (ICCC). This was the first public demonstration of this new network technology to the public. It was also in 1972 that the initial "hot" application, electronic mail, was introduced. In March Ray Tomlinson at BBN wrote the basic email message send and read software, motivated by the need of the ARPANET developers for an easy coordination mechanism. In July, Roberts expanded its utility by writing the first email utility program to list, selectively read, file, forward, and respond to messages. From there email took off as the largest network application for over a decade. This was a harbinger of the kind of activity we see on the World Wide Web today, namely, the enormous growth of all kinds of "people-to-people" traffic.
The Initial Internetting Concepts
The original ARPANET grew into the Internet. Internet was based on the idea that there would be multiple independent networks of rather arbitrary design, beginning with the ARPANET as the pioneering packet switching network, but soon to include packet satellite networks, ground-based packet radio networks and other networks. The Internet as we now know it embodies a key underlying technical idea, namely that of open architecture networking. In this approach, the choice of any individual network technology was not dictated by a particular network architecture but rather could be selected freely by a provider and made to interwork with the other networks through a meta-level "Internetworking Architecture". Up until that time there was only one general method for federating networks. This was the traditional circuit switching method where networks would interconnect at the circuit level, passing individual bits on a synchronous basis along a portion of an end-to-end circuit between a pair of end locations. Recall that Kleinrock had shown in 1961 that packet switching was a more efficient switching method. Along with packet switching, special purpose interconnection arrangements between networks were another possibility. While there were other limited ways to interconnect different networks, they required that one be used as a component of the other, rather than acting as a peer of the other in offering end-to-end service.
In an open-architecture network, the individual networks may be separately designed and developed and each may have its own unique interface which it may offer to users and/or other providers. including other Internet providers. Each network can be designed in accordance with the specific environment and user requirements of that network. There are generally no constraints on the types of network that can be included or on their geographic scope, although certain pragmatic considerations will dictate what makes sense to offer.
The idea of open-architecture networking was first introduced by Kahn shortly after having arrived at DARPA in 1972. This work was originally part of the packet radio program, but subsequently became a separate program in its own right. At the time, the program was called "Internetting". Key to making the packet radio system work was a reliable end-end protocol that could maintain effective communication in the face of jamming and other radio interference, or withstand intermittent blackout such as caused by being in a tunnel or blocked by the local terrain. Kahn first contemplated developing a protocol local only to the packet radio network, since that would avoid having to deal with the multitude of different operating systems, and continuing to use NCP.
However, NCP did not have the ability to address networks (and machines) further downstream than a destination IMP on the ARPANET and thus some change to NCP would also be required. (The assumption was that the ARPANET was not changeable in this regard). NCP relied on ARPANET to provide end-to-end reliability. If any packets were lost, the protocol (and presumably any applications it supported) would come to a grinding halt. In this model NCP had no end-end host error control, since the ARPANET was to be the only network in existence and it would be so reliable that no error control would be required on the part of the hosts.
Thus, Kahn decided to develop a new version of the protocol which could meet the needs of an open-architecture network environment. This protocol would eventually be called the Transmission Control Protocol/Internet Protocol (TCP/IP). While NCP tended to act like a device driver, the new protocol would be more like a communications protocol.
Four ground rules were critical to Kahn's early thinking:
- Each distinct network would have to stand on its own and no internal changes could be required to any such network to connect it to the Internet.
- Communications would be on a best effort basis. If a packet didn't make it to the final destination, it would shortly be retransmitted from the source.
- Black boxes would be used to connect the networks; these would later be called gateways and routers. There would be no information retained by the gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.
- There would be no global control at the operations level. (A minimal sketch of such a stateless gateway follows this list.)
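Taken together, the second, third and fourth rules describe a stateless, best-effort forwarding element. The sketch below is only an illustration of that idea, not any historical gateway implementation; the routing-table contents and the `deliver` stub are invented for the example.

```python
# Illustrative sketch of Kahn's "black box" gateway: forward each packet
# on a best-effort basis, keep no per-flow state, rely on no global control.
# Table entries and function names are hypothetical, for illustration only.

ROUTING_TABLE = {1: "gateway-a", 2: "gateway-b"}  # network number -> next hop

def deliver(packet: dict, next_hop: str) -> None:
    print(f"forwarding packet for network {packet['dest_net']} via {next_hop}")

def forward(packet: dict) -> None:
    next_hop = ROUTING_TABLE.get(packet["dest_net"])
    if next_hop is None:
        return  # best effort: drop silently; the source will retransmit
    deliver(packet, next_hop)

forward({"dest_net": 2, "payload": b"hello"})  # forwarded via gateway-b
forward({"dest_net": 9, "payload": b"lost"})   # dropped; no state is kept either way
```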
Other key issues that needed to be addressed were:
- Algorithms to prevent lost packets from permanently disabling communications and enabling them to be successfully retransmitted from the source.
- Providing for host to host "pipelining" so that multiple packets could be enroute from source to destination at the discretion of the participating hosts, if the intermediate networks allowed it.
- Gateway functions to allow it to forward packets appropriately. This included interpreting IP headers for routing, handling interfaces, breaking packets into smaller pieces if necessary, etc.
- The need for end-end checksums, reassembly of packets from fragments and detection of duplicates, if any (a checksum sketch follows this list).
- The need for global addressing.
- Techniques for host to host flow control.
- Interfacing with the various operating systems.
- There were also other concerns, such as implementation efficiency and internetwork performance, but these were secondary considerations at first.
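To make the checksum item concrete, here is a sketch of the 16-bit one's-complement checksum that TCP and IP eventually standardized on (the form later documented in RFC 1071). It is offered as an illustration of an end-end checksum, not as the exact algorithm of the 1973 design.

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words with end-around carry,
    the end-end checksum style TCP and IP eventually adopted (cf. RFC 1071)."""
    if len(data) % 2:                 # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

# Receiver check: summing the data plus the transmitted checksum yields zero.
payload = b"internetting"
check = internet_checksum(payload).to_bytes(2, "big")
assert internet_checksum(payload + check) == 0
```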
Kahn began work on a communications-oriented set of operating system principles while at BBN and documented some of his early thoughts in an internal BBN memorandum entitled "Communications Principles for Operating Systems". At this point he realized it would be necessary to learn the implementation details of each operating system to have a chance to embed any new protocols in an efficient way. Thus, in the spring of 1973, after starting the internetting effort, he asked Vint Cerf (then at Stanford) to work with him on the detailed design of the protocol. Cerf had been intimately involved in the original NCP design and development and already had the knowledge about interfacing to existing operating systems. So armed with Kahn's architectural approach to the communications side and with Cerf's NCP experience, they teamed up to spell out the details of what became TCP/IP.
The give and take was highly productive and the first written version 7 of the resulting approach was distributed at a special meeting of the International Network Working Group (INWG) which had been set up at a conference at Sussex University in September 1973. Cerf had been invited to chair this group and used the occasion to hold a meeting of INWG members who were heavily represented at the Sussex Conference.
Some basic approaches emerged from this collaboration between Kahn and Cerf:
- Communication between two processes would logically consist of a very long stream of bytes (they called them octets). The position of any octet in the stream would be used to identify it.
- Flow control would be done by using sliding windows and acknowledgments (acks). The destination could select when to acknowledge and each ack returned would be cumulative for all packets received to that point.
- It was left open as to exactly how the source and destination would agree on the parameters of the windowing to be used. Defaults were used initially.
- Although Ethernet was under development at Xerox PARC at that time, the proliferation of LANs was not envisioned at the time, much less PCs and workstations. The original model was national level networks like ARPANET of which only a relatively small number were expected to exist. Thus a 32 bit IP address was used of which the first 8 bits signified the network and the remaining 24 bits designated the host on that network. This assumption, that 256 networks would be sufficient for the foreseeable future, was clearly in need of reconsideration when LANs began to appear in the late 1970s.
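The sliding-window, cumulative-acknowledgment scheme in the list above is easy to make concrete. The sketch below is an illustration only, not the actual TCP algorithm: the class and method names are invented, and real TCP adds sequence-number wraparound, advertised windows, and retransmission timers.

```python
# Minimal sketch of cumulative acknowledgment: the receiver always acks
# the highest contiguous octet position seen, so a single ack covers
# every packet received up to that point. Bookkeeping only; no payload
# is actually delivered here.

class Receiver:
    def __init__(self):
        self.next_expected = 0   # first octet not yet received in order
        self.held = {}           # offset -> segment held for reassembly

    def on_segment(self, offset, data):
        if offset <= self.next_expected < offset + len(data):
            # Segment extends the in-order stream; drain any held
            # segments that have now become contiguous.
            self.next_expected = offset + len(data)
            while self.next_expected in self.held:
                self.next_expected += len(self.held.pop(self.next_expected))
        elif offset > self.next_expected:
            self.held[offset] = data   # arrived early; hold it
        return self.next_expected      # the cumulative ack value

r = Receiver()
r.on_segment(0, b"abcd")            # ack advances to 4
r.on_segment(8, b"ijkl")            # gap at 4-7, ack stays at 4
print(r.on_segment(4, b"efgh"))     # gap filled, ack jumps to 12
```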
The original Cerf/Kahn paper on the Internet described one protocol, called TCP, which provided all the transport and forwarding services in the Internet. Kahn had intended that the TCP protocol support a range of transport services, from the totally reliable sequenced delivery of data (virtual circuit model) to a datagram service in which the application made direct use of the underlying network service, which might imply occasional lost, corrupted or reordered packets.
However, the initial effort to implement TCP resulted in a version that only allowed for virtual circuits. This model worked fine for file transfer and remote login applications, but some of the early work on advanced network applications, in particular packet voice in the 1970s, made clear that in some cases packet losses should not be corrected by TCP, but should be left to the application to deal with. This led to a reorganization of the original TCP into two protocols, the simple IP which provided only for addressing and forwarding of individual packets, and the separate TCP, which was concerned with service features such as flow control and recovery from lost packets. For those applications that did not want the services of TCP, an alternative called the User Datagram Protocol (UDP) was added in order to provide direct access to the basic service of IP.
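That split is still visible in the socket interfaces of modern systems. The sketch below uses Python's standard socket module to show the two services side by side; the host name and port numbers are placeholders chosen for illustration.

```python
import socket

# Reliable, ordered byte stream (TCP): losses and reordering are
# recovered transparently, at the cost of connection state.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))
tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
tcp.close()

# Best-effort datagrams (UDP): each send is one independent packet that
# may be lost, duplicated, or reordered -- the application must cope.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"one datagram", ("example.com", 9))
udp.close()
```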
A major initial motivation for both the ARPANET and the Internet was resource sharing - for example allowing users on the packet radio networks to access the time sharing systems attached to the ARPANET. Connecting the two together was far more economical than duplicating these very expensive computers. However, while file transfer and remote login (Telnet) were very important applications, electronic mail has probably had the most significant impact of the innovations from that era. Email provided a new model of how people could communicate with each other, and changed the nature of collaboration, first in the building of the Internet itself (as is discussed below) and later for much of society.
There were other applications proposed in the early days of the Internet, including packet based voice communication (the precursor of Internet telephony), various models of file and disk sharing, and early "worm" programs that showed the concept of agents (and, of course, viruses). A key concept of the Internet is that it was not designed for just one application, but as a general infrastructure on which new applications could be conceived, as illustrated later by the emergence of the World Wide Web. It is the general purpose nature of the service provided by TCP and IP that makes this possible.
Proving the Ideas
DARPA let three contracts to Stanford (Cerf), BBN (Ray Tomlinson) and UCL (Peter Kirstein) to implement TCP/IP (it was simply called TCP in the Cerf/Kahn paper but contained both components). The Stanford team, led by Cerf, produced the detailed specification and within about a year there were three independent implementations of TCP that could interoperate.
This was the beginning of long term experimentation and development to evolve and mature the Internet concepts and technology. Beginning with the first three networks (ARPANET, Packet Radio, and Packet Satellite) and their initial research communities, the experimental environment has grown to incorporate essentially every form of network and a very broad-based research and development community. [REK78] With each expansion has come new challenges.
The early implementations of TCP were done for large time sharing systems such as Tenex and TOPS 20. When desktop computers first appeared, it was thought by some that TCP was too big and complex to run on a personal computer. David Clark and his research group at MIT set out to show that a compact and simple implementation of TCP was possible. They produced an implementation, first for the Xerox Alto (the early personal workstation developed at Xerox PARC) and then for the IBM PC. That implementation was fully interoperable with other TCPs, but was tailored to the application suite and performance objectives of the personal computer, and showed that workstations, as well as large time-sharing systems, could be a part of the Internet. In 1976, Kleinrock published the first book on the ARPANET. It included an emphasis on the complexity of protocols and the pitfalls they often introduce. This book was influential in spreading the lore of packet switching networks to a very wide community.
Widespread development of LANs, PCs and workstations in the 1980s allowed the nascent Internet to flourish. Ethernet technology, developed by Bob Metcalfe at Xerox PARC in 1973, is now probably the dominant network technology in the Internet and PCs and workstations the dominant computers. This change from having a few networks with a modest number of time-shared hosts (the original ARPANET model) to having many networks has resulted in a number of new concepts and changes to the underlying technology. First, it resulted in the definition of three network classes (A, B, and C) to accommodate the range of networks. Class A represented large national scale networks (small number of networks with large numbers of hosts); Class B represented regional scale networks; and Class C represented local area networks (large number of networks with relatively few hosts).
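A small sketch of the classful decoding rule just described, assuming the historical convention that the leading bits of the 32-bit address select the class; the function name and return format are invented for illustration.

```python
# Decode a 32-bit IPv4 address under the old classful rules.
def classify(addr):
    if addr >> 31 == 0b0:       # 0xxxxxxx... -> Class A: 8-bit net, 24-bit host
        return "A", addr >> 24, addr & 0x00FFFFFF
    if addr >> 30 == 0b10:      # 10xxxxxx... -> Class B: 16-bit net, 16-bit host
        return "B", addr >> 16, addr & 0x0000FFFF
    if addr >> 29 == 0b110:     # 110xxxxx... -> Class C: 24-bit net, 8-bit host
        return "C", addr >> 8, addr & 0x000000FF
    return "other", addr, 0     # classes D/E: multicast and reserved

# Network 18, host 1 (an address in a Class A network) -> ("A", 18, 1)
print(classify((18 << 24) | 1))
```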
A major shift occurred as a result of the increase in scale of the Internet and its associated management issues. To make it easy for people to use the network, hosts were assigned names, so that it was not necessary to remember the numeric addresses. Originally, there were a fairly limited number of hosts, so it was feasible to maintain a single table of all the hosts and their associated names and addresses. The shift to having a large number of independently managed networks (e.g., LANs) meant that having a single table of hosts was no longer feasible, and the Domain Name System (DNS) was invented by Paul Mockapetris of USC/ISI. The DNS permitted a scalable distributed mechanism for resolving hierarchical host names (e.g. www.acm.org) into an Internet address.
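The essence of that hierarchy is delegation from the rightmost label inward. In the sketch below a nested dictionary stands in for the chain of name servers, and the returned address is a documentation placeholder, not acm.org's real one.

```python
# Toy resolver: each dictionary level models one zone delegating the
# next label, mirroring how DNS walks org -> acm -> www.
zones = {"org": {"acm": {"www": "192.0.2.1"}}}   # placeholder address

def resolve(name):
    node = zones
    for label in reversed(name.split(".")):      # most-significant first
        node = node[label]                       # follow one delegation
    return node

print(resolve("www.acm.org"))                    # -> 192.0.2.1
```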
The increase in the size of the Internet also challenged the capabilities of the routers. Originally, there was a single distributed algorithm for routing that was implemented uniformly by all the routers in the Internet. As the number of networks in the Internet exploded, this initial design could not expand as necessary, so it was replaced by a hierarchical model of routing, with an Interior Gateway Protocol (IGP) used inside each region of the Internet, and an Exterior Gateway Protocol (EGP) used to tie the regions together. This design permitted different regions to use a different IGP, so that different requirements for cost, rapid reconfiguration, robustness and scale could be accommodated. Not only the routing algorithm, but the size of the addressing tables, stressed the capacity of the routers. New approaches for address aggregation, in particular classless inter-domain routing (CIDR), have recently been introduced to control the size of router tables.
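The table-size saving CIDR provides is easy to demonstrate with Python's standard ipaddress module: contiguous prefixes that share a route collapse into a single shorter prefix. The networks below are arbitrary examples.

```python
import ipaddress

# Two adjacent /24 routing-table entries...
routes = [ipaddress.ip_network("192.168.0.0/24"),
          ipaddress.ip_network("192.168.1.0/24")]

# ...aggregate into one /23 entry under CIDR.
print(list(ipaddress.collapse_addresses(routes)))   # [IPv4Network('192.168.0.0/23')]
```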
As the Internet evolved, one of the major challenges was how to propagate the changes to the software, particularly the host software. DARPA supported UC Berkeley to investigate modifications to the Unix operating system, including incorporating TCP/IP developed at BBN. Although Berkeley later rewrote the BBN code to more efficiently fit into the Unix system and kernel, the incorporation of TCP/IP into the Unix BSD system releases proved to be a critical element in dispersion of the protocols to the research community. Much of the CS research community began to use Unix BSD for their day-to-day computing environment. Looking back, the strategy of incorporating Internet protocols into a supported operating system for the research community was one of the key elements in the successful widespread adoption of the Internet.
One of the more interesting challenges was the transition of the ARPANET host protocol from NCP to TCP/IP as of January 1, 1983. This was a "flag-day" style transition, requiring all hosts to convert simultaneously or be left having to communicate via rather ad-hoc mechanisms. This transition was carefully planned within the community over several years before it actually took place and went surprisingly smoothly (but resulted in a distribution of buttons saying "I survived the TCP/IP transition").
TCP/IP was adopted as a defense standard three years earlier in 1980. This enabled defense to begin sharing in the DARPA Internet technology base and led directly to the eventual partitioning of the military and non-military communities. By 1983, ARPANET was being used by a significant number of defense R&D and operational organizations. The transition of ARPANET from NCP to TCP/IP permitted it to be split into a MILNET supporting operational requirements and an ARPANET supporting research needs.
Thus, by 1985, Internet was already well established as a technology supporting a broad community of researchers and developers, and was beginning to be used by other communities for daily computer communications. Electronic mail was being used broadly across several communities, often with different systems, but interconnection between different mail systems was demonstrating the utility of broad based electronic communications between people.
Transition to Widespread Infrastructure
At the same time that the Internet technology was being experimentally validated and widely used amongst a subset of computer science researchers, other networks and networking technologies were being pursued. The usefulness of computer networking - especially electronic mail - demonstrated by DARPA and Department of Defense contractors on the ARPANET was not lost on other communities and disciplines, so that by the mid-1970s computer networks had begun to spring up wherever funding could be found for the purpose. The U.S. Department of Energy (DoE) established MFENet for its researchers in Magnetic Fusion Energy, whereupon DoE's High Energy Physicists responded by building HEPNet. NASA Space Physicists followed with SPAN, and Rick Adrion, David Farber, and Larry Landweber established CSNET for the (academic and industrial) Computer Science community with an initial grant from the U.S. National Science Foundation (NSF). AT&T's free-wheeling dissemination of the UNIX computer operating system spawned USENET, based on UNIX' built-in UUCP communication protocols, and in 1981 Ira Fuchs and Greydon Freeman devised BITNET, which linked academic mainframe computers in an "email as card images" paradigm.
With the exception of BITNET and USENET, these early networks (including ARPANET) were purpose-built - i.e., they were intended for, and largely restricted to, closed communities of scholars; there was hence little pressure for the individual networks to be compatible and, indeed, they largely were not. In addition, alternate technologies were being pursued in the commercial sector, including XNS from Xerox, DECNet, and IBM's SNA. [8] It remained for the British JANET (1984) and U.S. NSFNET (1985) programs to explicitly announce their intent to serve the entire higher education community, regardless of discipline. Indeed, a condition for a U.S. university to receive NSF funding for an Internet connection was that "... the connection must be made available to ALL qualified users on campus."
In 1985, Dennis Jennings came from Ireland to spend a year at NSF leading the NSFNET program. He worked with the community to help NSF make a critical decision - that TCP/IP would be mandatory for the NSFNET program. When Steve Wolff took over the NSFNET program in 1986, he recognized the need for a wide area networking infrastructure to support the general academic and research community, along with the need to develop a strategy for establishing such infrastructure on a basis ultimately independent of direct federal funding. Policies and strategies were adopted (see below) to achieve that end.
NSF also elected to support DARPA's existing Internet organizational infrastructure, hierarchically arranged under the (then) Internet Activities Board (IAB). The public declaration of this choice was the joint authorship by the IAB's Internet Engineering and Architecture Task Forces and by NSF's Network Technical Advisory Group of RFC 985 (Requirements for Internet Gateways), which formally ensured interoperability of DARPA's and NSF's pieces of the Internet.
In addition to the selection of TCP/IP for the NSFNET program, Federal agencies made and implemented several other policy decisions which shaped the Internet of today.
- Federal agencies shared the cost of common infrastructure, such as trans-oceanic circuits. They also jointly supported "managed interconnection points" for interagency traffic; the Federal Internet Exchanges (FIX-E and FIX-W) built for this purpose served as models for the Network Access Points and "*IX" facilities that are prominent features of today's Internet architecture.
- To coordinate this sharing, the Federal Networking Council [9] was formed. The FNC also cooperated with other international organizations, such as RARE in Europe, through the Coordinating Committee on Intercontinental Research Networking, CCIRN, to coordinate Internet support of the research community worldwide.
- This sharing and cooperation between agencies on Internet-related issues had a long history. An unprecedented 1981 agreement between Farber, acting for CSNET and the NSF, and DARPA's Kahn, permitted CSNET traffic to share ARPANET infrastructure on a statistical and no-metered-settlements basis.
- Subsequently, in a similar mode, the NSF encouraged its regional (initially academic) networks of the NSFNET to seek commercial, non-academic customers, expand their facilities to serve them, and exploit the resulting economies of scale to lower subscription costs for all.
- On the NSFNET Backbone - the national-scale segment of the NSFNET - NSF enforced an "Acceptable Use Policy" (AUP) which prohibited Backbone usage for purposes "not in support of Research and Education." The predictable (and intended) result of encouraging commercial network traffic at the local and regional level, while denying its access to national-scale transport, was to stimulate the emergence and/or growth of "private", competitive, long-haul networks such as PSI, UUNET, ANS CO+RE, and (later) others. This process of privately-financed augmentation for commercial uses was thrashed out starting in 1988 in a series of NSF-initiated conferences at Harvard's Kennedy School of Government on "The Commercialization and Privatization of the Internet" - and on the "com-priv" list on the net itself.
- In 1988, a National Research Council committee, chaired by Kleinrock and with Kahn and Clark as members, produced a report commissioned by NSF titled "Towards a National Research Network". This report was influential on then Senator Al Gore, and ushered in high speed networks that laid the networking foundation for the future information superhighway.
- In 1994, a National Research Council report, again chaired by Kleinrock (and with Kahn and Clark as members again), entitled "Realizing The Information Future: The Internet and Beyond", was released. This report, commissioned by NSF, was the document in which a blueprint for the evolution of the information superhighway was articulated and which has had a lasting effect on the way to think about its evolution. It anticipated the critical issues of intellectual property rights, ethics, pricing, education, architecture and regulation for the Internet.
- NSF's privatization policy culminated in April, 1995, with the defunding of the NSFNET Backbone. The funds thereby recovered were (competitively) redistributed to regional networks to buy national-scale Internet connectivity from the now numerous, private, long-haul networks.
The backbone had made the transition from a network built from routers out of the research community (the "Fuzzball" routers from David Mills) to commercial equipment. In its 8 1/2 year lifetime, the Backbone had grown from six nodes with 56 kbps links to 21 nodes with multiple 45 Mbps links. It had seen the Internet grow to over 50,000 networks on all seven continents and outer space, with approximately 29,000 networks in the United States.
Such was the weight of the NSFNET program's ecumenism and funding ($200 million from 1986 to 1995) - and the quality of the protocols themselves - that by 1990 when the ARPANET itself was finally decommissioned [10], TCP/IP had supplanted or marginalized most other wide-area computer network protocols worldwide, and IP was well on its way to becoming THE bearer service for the Global Information Infrastructure.
The Role of Documentation
A key to the rapid growth of the Internet has been the free and open access to the basic documents, especially the specifications of the protocols.
The beginnings of the ARPANET and the Internet in the university research community promoted the academic tradition of open publication of ideas and results. However, the normal cycle of traditional academic publication was too formal and too slow for the dynamic exchange of ideas essential to creating networks.
In 1969 a key step was taken by S. Crocker (then at UCLA) in establishing the Request for Comments (or RFC) series of notes. These memos were intended to be an informal fast distribution way to share ideas with other network researchers. At first the RFCs were printed on paper and distributed via snail mail. As the File Transfer Protocol (FTP) came into use, the RFCs were prepared as online files and accessed via FTP. Now, of course, the RFCs are easily accessed via the World Wide Web at dozens of sites around the world. SRI, in its role as Network Information Center, maintained the online directories. Jon Postel acted as RFC Editor as well as managing the centralized administration of required protocol number assignments, roles that he continues to this day.
The effect of the RFCs was to create a positive feedback loop, with ideas or proposals presented in one RFC triggering another RFC with additional ideas, and so on. When some consensus (or at least a consistent set of ideas) had come together, a specification document would be prepared. Such a specification would then be used as the base for implementations by the various research teams.
Over time, the RFCs have become more focused on protocol standards (the "official" specifications), though there are still informational RFCs that describe alternate approaches, or provide background information on protocols and engineering issues. The RFCs are now viewed as the "documents of record" in the Internet engineering and standards community.
The open access to the RFCs (for free, if you have any kind of a connection to the Internet) promotes the growth of the Internet because it allows the actual specifications to be used for examples in college classes and by entrepreneurs developing new systems.
Email has been a significant factor in all areas of the Internet, and that is certainly true in the development of protocol specifications, technical standards, and Internet engineering. The very early RFCs often presented a set of ideas developed by the researchers at one location to the rest of the community. After email came into use, the authorship pattern changed - RFCs were presented by joint authors with common view independent of their locations.
Specialized email mailing lists have long been used in the development of protocol specifications and continue to be an important tool. The IETF now has in excess of 75 working groups, each working on a different aspect of Internet engineering. Each of these working groups has a mailing list to discuss one or more draft documents under development. When consensus is reached on a draft document it may be distributed as an RFC.
As the current rapid expansion of the Internet is fueled by the realization of its capability to promote information sharing, we should understand that the network's first role in information sharing was sharing the information about its own design and operation through the RFC documents. This unique method for evolving new capabilities in the network will continue to be critical to future evolution of the Internet.
Formation of the Broad Community
The Internet is as much a collection of communities as a collection of technologies, and its success is largely attributable to both satisfying basic community needs and utilizing the community in an effective way to push the infrastructure forward. This community spirit has a long history beginning with the early ARPANET. The early ARPANET researchers worked as a close-knit community to accomplish the initial demonstrations of packet switching technology described earlier. Likewise, the Packet Satellite, Packet Radio and several other DARPA computer science research programs were multi-contractor collaborative activities that heavily used whatever available mechanisms there were to coordinate their efforts, starting with electronic mail and adding file sharing, remote access, and eventually World Wide Web capabilities. Each of these programs formed a working group, starting with the ARPANET Network Working Group. Because of the unique role that ARPANET played as an infrastructure supporting the various research programs, as the Internet started to evolve, the Network Working Group evolved into the Internet Working Group.
In the late 1970's, recognizing that the growth of the Internet was accompanied by a growth in the size of the interested research community and therefore an increased need for coordination mechanisms, Vint Cerf, then manager of the Internet Program at DARPA, formed several coordination bodies - an International Cooperation Board (ICB), chaired by Peter Kirstein of UCL, to coordinate activities with some cooperating European countries centered on Packet Satellite research, an Internet Research Group which was an inclusive group providing an environment for general exchange of information, and an Internet Configuration Control Board (ICCB), chaired by Clark. The ICCB was an invitational body to assist Cerf in managing the burgeoning Internet activity.
In 1983, when Barry Leiner took over management of the Internet research program at DARPA, he and Clark recognized that the continuing growth of the Internet community demanded a restructuring of the coordination mechanisms. The ICCB was disbanded and in its place a structure of Task Forces was formed, each focused on a particular area of the technology (e.g. routers, end-to-end protocols, etc.). The Internet Activities Board (IAB) was formed from the chairs of the Task Forces. It of course was only a coincidence that the chairs of the Task Forces were the same people as the members of the old ICCB, and Dave Clark continued to act as chair.
After some changing membership on the IAB, Phill Gross became chair of a revitalized Internet Engineering Task Force (IETF), at the time merely one of the IAB Task Forces. As we saw above, by 1985 there was a tremendous growth in the more practical/engineering side of the Internet. This growth resulted in an explosion in the attendance at the IETF meetings, and Gross was compelled to create substructure to the IETF in the form of working groups.
This growth was complemented by a major expansion in the community. No longer was DARPA the only major player in the funding of the Internet. In addition to NSFNet and the various US and international government-funded activities, interest in the commercial sector was beginning to grow. Also in 1985, both Kahn and Leiner left DARPA and there was a significant decrease in Internet activity at DARPA. As a result, the IAB was left without a primary sponsor and increasingly assumed the mantle of leadership.
The growth continued, resulting in even further substructure within both the IAB and IETF. The IETF combined Working Groups into Areas, and designated Area Directors. An Internet Engineering Steering Group (IESG) was formed of the Area Directors. The IAB recognized the increasing importance of the IETF, and restructured the standards process to explicitly recognize the IESG as the major review body for standards. The IAB also restructured so that the rest of the Task Forces (other than the IETF) were combined into an Internet Research Task Force (IRTF) chaired by Postel, with the old task forces renamed as research groups.
The growth in the commercial sector brought with it increased concern regarding the standards process itself. Starting in the early 1980's and continuing to this day, the Internet grew beyond its primarily research roots to include both a broad user community and increased commercial activity. Increased attention was paid to making the process open and fair. This coupled with a recognized need for community support of the Internet eventually led to the formation of the Internet Society in 1991, under the auspices of Kahn's Corporation for National Research Initiatives (CNRI) and the leadership of Cerf, then with CNRI.
In 1992, yet another reorganization took place: the Internet Activities Board was re-organized and re-named the Internet Architecture Board, operating under the auspices of the Internet Society. A more "peer" relationship was defined between the new IAB and IESG, with the IETF and IESG taking a larger responsibility for the approval of standards. Ultimately, a cooperative and mutually supportive relationship was formed between the IAB, IETF, and Internet Society, with the Internet Society taking on as a goal the provision of service and other measures which would facilitate the work of the IETF.
The recent development and widespread deployment of the World Wide Web has brought with it a new community, as many of the people working on the WWW have not thought of themselves as primarily network researchers and developers. A new coordination organization was formed, the World Wide Web Consortium (W3C). Initially led from MIT's Laboratory for Computer Science by Tim Berners-Lee (the inventor of the WWW) and Al Vezza, W3C has taken on the responsibility for evolving the various protocols and standards associated with the Web.
Thus, through the over two decades of Internet activity, we have seen a steady evolution of organizational structures designed to support and facilitate an ever-increasing community working collaboratively on Internet issues.
Commercialization of the Technology
Commercialization of the Internet involved not only the development of competitive, private network services, but also the development of commercial products implementing the Internet technology. In the early 1980s, dozens of vendors were incorporating TCP/IP into their products because they saw buyers for that approach to networking. Unfortunately, they lacked both real information about how the technology was supposed to work and information about how the customers planned to use this approach to networking. Many saw it as a nuisance add-on that had to be glued on to their own proprietary networking solutions: SNA, DECNet, Netware, NetBios. The DoD had mandated the use of TCP/IP in many of its purchases but gave little help to the vendors regarding how to build useful TCP/IP products.
In 1985, recognizing this lack of information availability and appropriate training, Dan Lynch in cooperation with the IAB arranged to hold a three day workshop for ALL vendors to come learn about how TCP/IP worked and what it still could not do well. The speakers came mostly from the DARPA research community who had both developed these protocols and used them in day to day work. About 250 vendor personnel came to listen to 50 inventors and experimenters. The results were surprises on both sides: the vendors were amazed to find that the inventors were so open about the way things worked (and what still did not work) and the inventors were pleased to listen to new problems they had not considered, but were being discovered by the vendors in the field. Thus a two way discussion was formed that has lasted for over a decade.
After two years of conferences, tutorials, design meetings and workshops, a special event was organized that invited those vendors whose products ran TCP/IP well enough to come together in one room for three days to show off how well they all worked together and also ran over the Internet. In September of 1988 the first Interop trade show was born. 50 companies made the cut. 5,000 engineers from potential customer organizations came to see if it all did work as was promised. It did. Why? Because the vendors worked extremely hard to ensure that everyone's products interoperated with all of the other products - even with those of their competitors. The Interop trade show has grown immensely since then and today it is held in 7 locations around the world each year to an audience of over 250,000 people who come to learn which products work with each other in a seamless manner, learn about the latest products, and discuss the latest technology.
In parallel with the commercialization efforts that were highlighted by the Interop activities, the vendors began to attend the IETF meetings that were held 3 or 4 times a year to discuss new ideas for extensions of the TCP/IP protocol suite. Starting with a few hundred attendees mostly from academia and paid for by the government, these meetings now often exceed a thousand attendees, mostly from the vendor community and paid for by the attendees themselves. This self-selected group evolves the TCP/IP suite in a mutually cooperative manner. The reason it is so useful is that it comprises all stakeholders: researchers, end users and vendors.
Network management provides an example of the interplay between the research and commercial communities. In the beginning of the Internet, the emphasis was on defining and implementing protocols that achieved interoperation. As the network grew larger, it became clear that the sometimes ad hoc procedures used to manage the network would not scale. Manual configuration of tables was replaced by distributed automated algorithms, and better tools were devised to isolate faults. In 1987 it became clear that a protocol was needed that would permit the elements of the network, such as the routers, to be remotely managed in a uniform way. Several protocols for this purpose were proposed, including the Simple Network Management Protocol or SNMP (designed, as its name would suggest, for simplicity, and derived from an earlier proposal called SGMP), HEMS (a more complex design from the research community) and CMIP (from the OSI community). A series of meetings led to the decision that HEMS would be withdrawn as a candidate for standardization, in order to help resolve the contention, but that work on both SNMP and CMIP would go forward, with the idea that SNMP could be a more near-term solution and CMIP a longer-term approach. The market could choose the one it found more suitable. SNMP is now used almost universally for network based management.
In the last few years, we have seen a new phase of commercialization. Originally, commercial efforts mainly comprised vendors providing the basic networking products, and service providers offering the connectivity and basic Internet services. The Internet has now become almost a "commodity" service, and much of the latest attention has been on the use of this global information infrastructure for support of other commercial services. This has been tremendously accelerated by the widespread and rapid adoption of browsers and the World Wide Web technology, allowing users easy access to information linked throughout the globe. Products are available to facilitate the provisioning of that information and many of the latest developments in technology have been aimed at providing increasingly sophisticated information services on top of the basic Internet data communications.
History of the Future
On October 24, 1995, the FNC unanimously passed a resolution defining the term Internet. This definition was developed in consultation with members of the internet and intellectual property rights communities. RESOLUTION: The Federal Networking Council (FNC) agrees that the following language reflects our definition of the term "Internet". "Internet" refers to the global information system that -- (i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; (ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein.
The Internet has changed much in the two decades since it came into existence. It was conceived in the era of time-sharing, but has survived into the era of personal computers, client-server and peer-to-peer computing, and the network computer. It was designed before LANs existed, but has accommodated that new network technology, as well as the more recent ATM and frame switched services. It was envisioned as supporting a range of functions from file sharing and remote login to resource sharing and collaboration, and has spawned electronic mail and more recently the World Wide Web. But most important, it started as the creation of a small band of dedicated researchers, and has grown to be a commercial success with billions of dollars of annual investment.
One should not conclude that the Internet has now finished changing. The Internet, although a network in name and geography, is a creature of the computer, not the traditional network of the telephone or television industry. It will, indeed it must, continue to change and evolve at the speed of the computer industry if it is to remain relevant. It is now changing to provide such new services as real time transport, in order to support, for example, audio and video streams. The availability of pervasive networking (i.e., the Internet) along with powerful affordable computing and communications in portable form (i.e., laptop computers, two-way pagers, PDAs, cellular phones), is making possible a new paradigm of nomadic computing and communications.
This evolution will bring us new applications - Internet telephone and, slightly further out, Internet television. It is evolving to permit more sophisticated forms of pricing and cost recovery, a perhaps painful requirement in this commercial world. It is changing to accommodate yet another generation of underlying network technologies with different characteristics and requirements, from broadband residential access to satellites. New modes of access and new forms of service will spawn new applications, which in turn will drive further evolution of the net itself.
The most pressing question for the future of the Internet is not how the technology will change, but how the process of change and evolution itself will be managed. As this paper describes, the architecture of the Internet has always been driven by a core group of designers, but the form of that group has changed as the number of interested parties has grown. With the success of the Internet has come a proliferation of stakeholders - stakeholders now with an economic as well as an intellectual investment in the network. We now see, in the debates over control of the domain name space and the form of the next generation IP addresses, a struggle to find the next social structure that will guide the Internet in the future. The form of that structure will be harder to find, given the large number of concerned stakeholders. At the same time, the industry struggles to find the economic rationale for the large investment needed for the future growth, for example to upgrade residential access to a more suitable technology. If the Internet stumbles, it will not be because we lack for technology, vision, or motivation. It will be because we cannot set a direction and march collectively into the future.
1 Perhaps this is an exaggeration based on the lead author's residence in Silicon Valley.
2 On a recent trip to a Tokyo bookstore, one of the authors counted 14 English language magazines devoted to the Internet.
3 An abbreviated version of this article appears in the 50th anniversary issue of the CACM, Feb. 97. The authors would like to express their appreciation to Andy Rosenbloom, CACM Senior Editor, for both instigating the writing of this article and his invaluable assistance in editing both this and the abbreviated version.
4 The Advanced Research Projects Agency (ARPA) changed its name to Defense Advanced Research Projects Agency (DARPA) in 1971, then back to ARPA in 1993, and back to DARPA in 1996. We refer throughout to DARPA, the current name.
5 It was from the RAND study that the false rumor started claiming that the ARPANET was somehow related to building a network resistant to nuclear war. This was never true of the ARPANET, only the unrelated RAND study on secure voice considered nuclear war. However, the later work on Internetting did emphasize robustness and survivability, including the capability to withstand losses of large portions of the underlying networks.
6 Including amongst others Vint Cerf, Steve Crocker, and Jon Postel. Joining them later were David Crocker who was to play an important role in documentation of electronic mail protocols, and Robert Braden, who developed the first NCP and then TCP for IBM mainframes and also was to play a long term role in the ICCB and IAB.
7 This was subsequently published as V. G. Cerf and R. E. Kahn, "A protocol for packet network interconnection" IEEE Trans. Comm. Tech., vol. COM-22, V 5, pp. 627-641, May 1974.
8 The desirability of email interchange, however, led to one of the first "Internet books": !%@:: A Directory of Electronic Mail Addressing and Networks, by Frey and Adams, on email address translation and forwarding.
9 Originally named Federal Research Internet Coordinating Committee, FRICC. The FRICC was originally formed to coordinate U.S. research network activities in support of the international coordination provided by the CCIRN.
10 The decommissioning of the ARPANET was commemorated on its 20th anniversary by a UCLA symposium in 1989.
P. Baran, "On Distributed Communications Networks", IEEE Trans. Comm. Systems, March 1964.
V. G. Cerf and R. E. Kahn, "A protocol for packet network interconnection", IEEE Trans. Comm. Tech., vol. COM-22, V 5, pp. 627-641, May 1974.
S. Crocker, RFC001 Host software, Apr-07-1969.
R. Kahn, Communications Principles for Operating Systems. Internal BBN memorandum, Jan. 1972.
Proceedings of the IEEE, Special Issue on Packet Communication Networks, Volume 66, No. 11, November, 1978. (Guest editor: Robert Kahn, associate guest editors: Keith Uncapher and Harry van Trees)
L. Kleinrock, "Information Flow in Large Communication Nets", RLE Quarterly Progress Report, July 1961.
L. Kleinrock, Communication Nets: Stochastic Message Flow and Delay, McGraw-Hill (New York), 1964.
L. Kleinrock, Queueing Systems: Vol II, Computer Applications, John Wiley and Sons (New York), 1976.
J.C.R. Licklider & W. Clark, "On-Line Man Computer Communication", August 1962.
L. Roberts & T. Merrill, "Toward a Cooperative Network of Time-Shared Computers", Fall AFIPS Conf., Oct. 1966.
L. Roberts, "Multiple Computer Networks and Intercomputer Communication", ACM Gatlinburg Conf., October 1967.
Barry M. Leiner is Director of the Research Institute for Advanced Computer Science.
Vinton G. Cerf is Senior Vice President, Internet Architecture and Technology, at MCI WorldCom.
David D. Clark is Senior Research Scientist at the MIT Laboratory for Computer Science.
Robert E. Kahn is President of the Corporation for National Research Initiatives.
Leonard Kleinrock is Professor of Computer Science at the University of California, Los Angeles, and is Chairman and Founder of Nomadix.
Daniel C. Lynch is a founder of CyberCash Inc. and of the Interop networking trade show and conferences.
Jon Postel served as Director of the Computer Networks Division of the Information Sciences Institute of the University of Southern California.
Lawrence G. Roberts is Chairman and CTO of Caspian Networks.
Stephen Wolff is with Cisco Systems, Inc.
A Brief History of the Internet, version 3.31
Last revised 4 Aug 2000
Send any comments to Barry Leiner or any of the authors.
CJ 103 Criminal Justice Report Writing • 5 Cr.
Presents the fundamentals of written communication, using study guides and practice in mechanics and processes. Activities concentrate on preparing professional documents with appropriate sentence and paragraph structure. Writing models are used to demonstrate effective rhetorical strategies and stylistic options.
After completing this class, students should be able to:
- In a timed classroom situation, conduct a five-minute interview for data gathering, producing appropriate notes for use in a report.
- As an assignment, write a report from notes and resources that meets professional criteria for format, grammar, punctuation, and spelling.
- As an assignment, write a professional resume in an accepted format.
- As an assignment, edit raw text into grammatically correct English with 80% accuracy.
- As an assignment, write a test application that is grammatically sound and appropriately targeted.
"date": "2015-12-01T07:38:59",
"dump": "CC-MAIN-2015-48",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398465089.29/warc/CC-MAIN-20151124205425-00354-ip-10-71-132-137.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9077135324478149,
"score": 2.859375,
"token_count": 205,
"url": "https://www.bellevuecollege.edu/classes/All/CJ/103"
} |
A Bit of Herbal Mythology Part II
Many myths have made their way through the years in association with a variety of herbs. Here is part two of our slight history lesson regarding the myths and legends and stories associated with the use of herbs.
Lavender:
• Legend says that the pleasant smell of lavender comes from the baby Jesus. After washing his swaddling clothes, Mary hung them to dry on a lavender bush. This gave the plant the scent of Heaven.
• In the Middle Ages, it was believed that couples who placed lavender flowers between their bed sheets would never fight.
Mint:
• According to myth, Hades had developed a lust for a nymph named Minthe. Hades' wife Persephone found out about the lust and angrily transformed Minthe into a plant to be trampled on. Hades could not undo the spell, but he was able to ease it by giving Minthe a wonderfully sweet fragrance, one which would be released whenever her leaves were trampled on.
Oregano:
• The ancient Greeks believed that Aphrodite created oregano. They believed that if it grew around a grave, the deceased would have eternal happiness.
• In Germany, oregano was hung over doorways to protect against evil spells.
• In the Middle Ages, oregano symbolized happiness and love.
Roses:
• According to myth, the first roses did not have thorns. While Venus' son Cupid was smelling a rose, a bee came out and stung him on the lip. Venus then strung his bow with bees. She removed their stingers and placed them on the stems of the roses.
• Myth also says that all roses were originally white until Venus tore her foot on a briar and all the roses were dyed red with her blood.
• In Christian lore, the red color of roses comes from the blood of Christ.
Rosemary:
• From the times of ancient Greece through the Middle Ages, it was believed that rosemary strengthened the brain and memory. When students needed to take exams, they braided rosemary into their hair in order to help their memory.
• The ancient Greeks burned rosemary in order to repel evil spirits and illness.
• In some parts of Europe, it was believed that if an unmarried woman placed rosemary under her pillow, her future husband would be revealed to her in her dream.
Sage:
• The Romans believed that sage was a sacred herb which gave immortality.
• Up until the 18th century, it was believed that sage increased fertility.
• It was also believed that sage strengthened the mind.
Thyme:
• During the Middle Ages it was believed that the scent of thyme inspired bravery. Knights wore scarves with thyme leaves sewn on them during tournaments.
• In English lore, if a person collected thyme flowers from hillsides where fairies lived, and rubbed the flowers on their eyelids, they would be able to see the fairies.
So now we know. I hope you enjoyed this two-part series, and if for no other reason, at least it will provide some fun discussions at your next party or get-together!
Elizabeth Krause is owner of http://www.simpleitaliancooking.com, a website featuring many family Italian recipes which incorporate some of the spices and herbs mentioned. Sign up for her weekly newsletter where she gives additional recipes and cooking tips perfect for easy lunches and dinners!
"date": "2018-06-21T18:16:31",
"dump": "CC-MAIN-2018-26",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864256.26/warc/CC-MAIN-20180621172638-20180621192638-00336.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9761778116226196,
"score": 2.5625,
"token_count": 701,
"url": "https://theessentialherbal.blogspot.com/2010/03/bit-of-herbal-mythology-part-ii.html"
} |
Waterfowl damage and control methods in ripening grain: an overview.
Damage to swathed grains by ducks, geese, and cranes is a long-standing problem in many parts of central North America. We describe the history of the problem, its nature and extent, its causes, and control tactics used; we also make recommendations for research and management. The problem was first recognized in the early 1900's from a growing conflict involving increased agricultural use of the land, a perceived reduction of waterfowl habitat, and increasing populations of birds. The most damage occurred to swathed grain and frequently coincided with waterfowl migration and changeable weather conditions.

Damage occurs by direct consumption, contamination by feces, and trampling of swaths. More grain is trampled than consumed by waterfowl, the ratio being as much as 5:1. One Canadian researcher has estimated Canadian prairiewide losses of $6-$10 million annually. Losses to waterfowl on the northern Great Plains of the United States are largely undetermined.

Waterfowl tend to select high points of large rolling fields that provide unobstructed views near bodies of water. Most grain farmers never suffer waterfowl damage; those that do usually tolerate it within reason. Tolerance to damage seems to be declining in a depressed farm economy. Most farmers are willing to alleviate the problem themselves unless a local situation becomes too severe.

Many methods are available to reduce losses, but success varies. Methods include permanent and temporary diversionary feeding programs such as baiting stations (United States and Canada) and lure-crops (Canada) on government and private land; hazing with exploders, shotguns, rifles, and pyrotechnic devices; scarecrows of many descriptions, and aircraft. Chemical agents such as repellents and soporifics have been tested sparingly and with limited success. New farming practices, such as planting overwintering grains, straight-combining standing grain, delayed plowing of grain stubbles, and no-till farming, show potential for reducing losses to waterfowl if birds are allowed to feed in these fields undisturbed.

Public relations should include better use of the media for disseminating information about scare methods and tactics and forecasting migratory waterfowl movements. These forecasts would alert farmers to the potential for damage so they can implement scare tactics at the earliest possible time, thereby increasing their chances of success.

We summarize the background of depredation insurance and damage compensation programs in Canada, their successes, and pitfalls. Both methods seem to be relatively expensive and controversial even though they serve a need. Several potential sources of revenue are suggested to cover the cost of waterfowl damage prevention and damage abatement or mitigation programs, including use of the U.S. Federal Crop Insurance Program. Foremost among recommendations made for wildlife managers and researchers in the United States are problem definition and quantification, use of the media to relay information to the agricultural community, implementation of lure-crops and bait stations, possible changes in farming practices, and research to further develop an environmentally safe and cost-effective chemical deterrent to minimize depredation by waterfowl.
"date": "2018-02-24T05:41:41",
"dump": "CC-MAIN-2018-09",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815435.68/warc/CC-MAIN-20180224053236-20180224073236-00256.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9497829079627991,
"score": 3.609375,
"token_count": 650,
"url": "https://repositories.tdl.org/tamug-ir/handle/1969.3/21065"
} |
Vitamin D is the collective name for cholecalciferol (vitamin D3) and ergocalciferol (vitamin D2). Cholecalciferol is formed in the skin by ultraviolet radiation inducing a photochemical reaction. Ergocalciferol is derived from plants. In order to be activated, both undergo hydroxylation at the 25 position in the liver by cytochrome P450 enzymes and then are further hydroxylated in the kidney at the 1α position, yielding the active metabolites: 1,25(OH)2D3 and 1,25(OH)2D2. These molecules exert their biological effects by binding to the vitamin D receptor (VDR), which is a nuclear receptor highly expressed in organs involved in calcium homeostasis.1,2 The net effect of vitamin D is to increase both serum calcium and serum phosphate by stimulating intestinal absorption, bone resorption, and renal reabsorption. Vitamin D also facilitates phagocytosis by monocytes and monocyte differentiation. Epithelial cells and macrophages increase the expression of antimicrobial peptides (AMPs) on exposure to microbes, which is dependent on the presence of vitamin D.3 Vitamin D also has anti-inflammatory effects by suppressing the pro-inflammatory cytokines IFN-γ, TNF-α, and IL-12. It controls more than 200 genes that are responsible for the regulation of cellular proliferation, differentiation, apoptosis and angiogenesis.4
This paper will address the various roles vitamin D plays in reference to chronic diseases, including periodontitis. It will also discuss how vitamin D deficiency is diagnosed and treated, as well as its effects on other parts of the body besides the skeletal system.
Vitamin D Levels
Vitamin D is a fat-soluble vitamin that acts like a steroid hormone. Although it has long been known that it is essential for bone health, it also affects other organ systems as well as the immune and cardiovascular systems, muscles, and brain. Sources of vitamin D come from sun exposure and food; selected food sources of vitamin D are listed in Table 1.5 Factors such as skin color (amount of melanin that is expressed), age, fat content (being overweight or obese), and living in northern latitudes where sun exposure is usually minimal affect how much vitamin D the body makes and requires.6
A report by the Institute of Medicine (IOM) in November 2010 reviewed the dietary reference intakes (DRI), the first since the 1997-2004 DRIs.7 The DRIs include statistical components of distribution. Estimated average requirement (EAR) reflects the estimated median requirement. Recommended dietary allowance (RDA) is derived from the EAR and meets or exceeds the requirement for 97.5% of the population. Tolerable upper intake level (UL) is the highest average daily intake that is likely to pose no risk of adverse effects to almost all individuals in the general population. As intake increases above this, the potential risk of adverse effects may increase.
The IOM's report recommendations are shown in Table 2 (vitamin D DRI) and Table 3 (calcium DRI). Both sets of recommendations are based on the predicted intakes that meet requirements for the other nutrient. Values were listed for infants and children as well as women who are pregnant or lactating. There is less uncertainty concerning calcium because there is a larger evidence base and the physiology and metabolism are better understood. Two major limitations of estimating vitamin D dietary requirements are: 1) it is also synthesized following sun exposure of the skin; and 2) because it acts like a hormone it undergoes metabolic feedback loops with endocrine and autocrine functions. In the IOM report, it was assumed that sun exposure was at a minimum and sources of vitamin D came from diet.
The measure of serum 25(OH)D has served as a reflection of total vitamin D exposure and is used to determine adequacy or deficiency. The IOM committee’s review of data suggests the following regarding serum 25(OH)D levels:
- < 30 nmol/L = at risk of deficiency
- 30 nmol/L to 50 nmol/L = some, but not all, persons potentially at risk for inadequacy
- ≥ 50 nmol/L = practically all persons sufficient
- > 75 nmol/L = not consistently associated with increased benefit
- > 125 nmol/L = reason for concern
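As a rough illustration of these cut-points, the sketch below classifies a serum value against them. The function is a reading aid, not part of the IOM report; the conversion factor of about 2.5 nmol/L per ng/ml of 25(OH)D is a standard approximation but is stated here as an assumption, since U.S. laboratories often report results in ng/ml.

```python
# Classify a serum 25(OH)D value against the IOM cut-points above.
def classify_25ohd(value, unit="nmol/L"):
    nmol = value * 2.5 if unit == "ng/ml" else value   # assumed conversion
    if nmol < 30:
        return "at risk of deficiency"
    if nmol < 50:
        return "potentially inadequate for some persons"
    if nmol <= 75:
        return "sufficient"
    if nmol <= 125:
        return "above the level of consistent added benefit"
    return "reason for concern"

print(classify_25ohd(20, "ng/ml"))   # 50 nmol/L -> "sufficient"
```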
Subtle symptoms such as loss of appetite, diarrhea, insomnia, and muscular weakness may indicate a mild deficiency, while symptoms such as nausea, vomiting, and sleepiness may indicate too much vitamin D. Rickets and osteomalacia are well-known diseases relating to vitamin D deficiency. Vitamin D deficiency has now been associated with other diseases such as cancer, cardiovascular disease (CVD), diabetes, and periodontitis, as well as fractures from falls. But just as there is reason for concern about vitamin D deficiency, there is also a need to determine levels of excess and toxicity and to avoid misclassification of vitamin D deficiency.
Evaluation and Therapy
Diagnosing vitamin D deficiency is accomplished by measuring serum 25(OH)D levels. However, because the kidney tightly regulates serum 1,25(OH)2D levels, they can be normal even though the levels of 25(OH)D are low. Therefore, even with normal or high levels of the active hormone, the patient may still be vitamin D deficient. The serum 1,25(OH)2D is a measure of endocrine function and does not reflect the body's stores of vitamin D or the autocrine functions of vitamin D.
Treatment of vitamin D deficiency comes from sunlight, artificial ultraviolet B (UVB), or supplements, each with its drawbacks. With both sunlight and artificial UVB, patients should realize that exposure will age the skin and increase the risk of nonmelanoma skin cancers. Toxicity is unlikely with sun exposure. Treatment with oral supplements is more difficult than with light because high doses are required to achieve adequate serum levels of 25(OH)D. The amount needed varies with sunlight exposure, body fat, age, and skin color, and potential toxicity is possible, although rare.
Cholecalciferol (vitamin D3) is available in the United States over the counter and via the Internet in capsules of 400; 1,000; 2,000; 5,000; 10,000; and 50,000 international units (IU). Cholecalciferol 1,000 IU/day raises serum 25(OH)D by about 10 ng/ml over a 3- to 4-month period. Prescription ergocalciferol (vitamin D2) is available as a 50,000 IU capsule. Physicians can give 1 to 2 doses of 50,000 IU weekly for 8 to 16 weeks and then maintain 25(OH)D levels above 40 ng/ml with 50,000 IU every 1, 2, or 4 weeks. Dosing depends on the variables noted above, and ergocalciferol may be less effective than cholecalciferol in raising 25(OH)D levels.8,9 Cod liver oil is not recommended because of the possible risk of vitamin A toxicity. Regular consumption of recommended amounts of vitamin D in a multivitamin or in fortified foods effectively prevents vitamin D deficiency. Because vitamin D metabolism depends on the cytochrome P450 enzymes, drugs that are also metabolized by those enzymes may interact with it.
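As a rough planning illustration only (dosing must be individualized, as the text emphasizes), the dose-response quoted above, roughly 10 ng/ml of 25(OH)D per 1,000 IU/day of cholecalciferol over 3 to 4 months, can be turned into a back-of-the-envelope estimate. The sketch below assumes, purely for illustration, that the response is linear over the range of interest; in reality it flattens at higher levels and varies with body fat, age, and skin color:

    def estimated_daily_iu(current_ng_ml, target_ng_ml, rise_per_1000_iu=10.0):
        # Back-of-the-envelope only: uses the ~10 ng/ml rise per
        # 1,000 IU/day over 3-4 months quoted in the text, and assumes
        # (hypothetically) a linear response over the whole range.
        deficit = max(0.0, target_ng_ml - current_ng_ml)
        return 1000.0 * deficit / rise_per_1000_iu

    # Example: raising 25(OH)D from 18 ng/ml toward 40 ng/ml suggests
    # a supplement on the order of 2,200 IU/day over several months.
    print(estimated_daily_iu(18, 40))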
Besides deficiency, toxicity, although rare, can also occur. Urine calcium and then serum calcium rise when 25(OH)D levels exceed 150 ng/ml, and true toxicity occurs when hypercalcemia calcifies internal organs. Toxicity cannot occur through skin production, because any excess previtamin D3 or vitamin D3 is destroyed by sunlight.4
The only absolute contraindications to vitamin D supplementation are vitamin D toxicity and allergy. Certain dermatological conditions, however, contraindicate sun-derived vitamin D; liver disease is not a contraindication. Relative contraindications are hypercalcemia itself and vitamin D hypersensitivity, which occurs when extrarenal tissues produce 1,25(OH)2D in an unregulated manner, causing hypercalcemia.3
Vitamin D and Systemic Diseases
Vitamin D has anticancer properties: it decreases cell proliferation, induces differentiation and apoptosis, inhibits angiogenesis, and exerts anti-inflammatory effects. Lappe et al10 found that 1,100 IU of vitamin D and 1,500 mg of calcium per day dramatically reduced the relative risk of incident cancers over a 4-year period compared with placebo. Other studies have been less positive: one found that women taking 400 IU of vitamin D3 plus calcium had no lower risk of breast cancer than those taking placebo.11 Another study found that male smokers with higher vitamin D concentrations had an increased risk of pancreatic cancer, with smoking ruled out as a confounding factor,12 while a follow-up study of males and females who were mostly nonsmokers found this association only in those with low sun exposure.13 Given these conflicting results, the IOM has concluded that the evidence is currently too weak to support recommendations on vitamin D as it relates to cancer and that this remains a “work in progress.”14
With respect to CVD, calcitriol, the natural ligand of the vitamin D receptor, has been shown to inhibit vascular smooth muscle cell proliferation, regulate the renin-angiotensin system, decrease coagulation, and exert anti-inflammatory properties.15 A review by Wang et al16 suggests that moderate to high doses of vitamin D supplements may reduce CVD risk, while calcium supplements have minimal effect. Several other studies have found that patients with low vitamin D levels had a higher risk of heart disease, of being diagnosed with hypertension, or of having a heart attack.17-19 Although vitamin D may therefore hold promise in cardiovascular disease, no large-scale randomized trial with CVD as the primary prespecified outcome has been completed; new randomized trials examining the role of vitamin D supplementation in CVD are in progress.15
Several studies concern fractures from falls, autoimmune diseases, and diabetes. Because vitamin D deficiency is associated with muscle weakness, fractures often result from falls; researchers have concluded that “fall risk reduction begins at 700 IU and increases progressively with higher doses.”20 Vitamin D has been theorized to play a protective role in autoimmune diseases because it is an immunomodulator that helps regulate the immune system. A study by Munger et al21 found that people with the highest vitamin D concentrations had a 62% lower risk of developing multiple sclerosis than those with the lowest concentrations. Lastly, some studies have shown a lower risk of type 2 diabetes with higher vitamin D levels,22 but more studies are needed to establish whether there is a definite link.6
In January 2010, recruitment began at Brigham and Women’s Hospital and Harvard Medical School in Boston, Massachusetts, for the Vitamin D and Omega-3 Trial (VITAL), which is investigating whether daily dietary supplements of vitamin D (about 2,000 IU) or fish oil (about 1 gram of omega-3 fatty acids) reduce the risk of developing cancer, heart disease, and stroke in people without a prior history of these illnesses. The primary outcome is the incidence of these diseases after 5 years. With a recruitment goal of 20,000 participants, this study may give more insight into vitamin D supplementation.
Oral Health and Vitamin D
Periodontal disease is a chronic inflammatory disease that affects approximately 35% of US adults over the age of 30.23 Alveolar bone loss is a key feature of periodontitis, and research suggests that osteopenia may be a predisposing factor for periodontal disease by increasing susceptibility to inflammation-mediated oral bone loss.24 Genetic polymorphisms in the vitamin D receptor (VDR) gene have been associated with bone homeostasis and with diseases in which bone loss is manifested.
Genetic variants at multiple loci associated with periodontitis synergistically contribute to the overall disease process. There may be candidate genes that play a role in both chronic and aggressive periodontitis. Many of these gene polymorphisms play a role in immunoregulation or metabolism.25
Many studies have examined different VDR polymorphisms in different ethnic groups. Some found a positive association between the tt genotype and the t allele and what was then referred to as early-onset periodontitis (EOP).26,27 However, contradicting studies found the T allele significantly associated with chronic periodontitis28,29 and the Tt and tt genotypes more prevalent in controls than in chronic periodontitis patients.30 With reference to the B/b genotypes, several studies found no significant difference between periodontitis patients and controls.31-33 Inagaki et al34 found that loss of alveolar bone, clinical attachment, and teeth was greatest in the AA genotype, while Li et al35 found the FF genotype and the frequency of the F allele significantly higher in the generalized aggressive periodontitis (GAP) group. Also in relation to GAP, Park et al36 found the short VDR variant associated with increased risk. A common limitation of many of these studies, however, was a small sample or a population homogeneous in ethnic group, sex, or both.
Periodontal Disease and Vitamin D
Other groups have examined vitamin D intake or serum concentration in relation to periodontitis. One study found that lower serum 25(OH)D3 concentrations were associated with greater attachment loss, which may be explained by the anti-inflammatory effects of vitamin D.37 Krall published two studies. One38 showed no association between vitamin D intake from foods and supplements and the number of teeth with progression of periodontal bone loss. The other39 stated that although the number of studies on the effects of calcium or vitamin D intake on oral outcomes is limited, they suggest that higher intake levels are associated with reduced prevalence of clinical attachment loss and lower risk of tooth loss; data from a prospective study of oral health in men show a similar association between higher calcium intake and reduced alveolar bone loss. In agreement with one of Krall’s studies, Miley et al40 showed a trend toward better periodontal health in patients receiving periodontal maintenance treatment together with vitamin D and calcium supplementation. A recent study found that periodontal disease is more common in women with osteoporosis and is associated with lower vitamin D and higher RANKL and osteoprotegerin.41
Periodontal disease is a multifactorial disease initiated by a bacterial infection that provokes a host response. Hallmarks of the disease are bone loss and an inflammatory immune reaction. Vitamin D plays a role in calcium and bone homeostasis as well as in immune function, and vitamin D and calcium deficiencies lead to bone loss and increased inflammation, both well-known features of periodontal disease.42 Susceptibility to periodontal disease varies among patients, as reflected in the onset, extent, and severity of their disease. Further studies are needed in gender-, ethnic-, and age-specific groups, because prior studies used test populations with limited and narrow characteristics.
In conclusion, larger randomized controlled trials must be performed in both the prevention and the treatment of vitamin D deficiency. Although classically thought of as a “bone hormone,” vitamin D plays a role in other parts of the body. It is a predictor of bone health but also a potential independent predictor of risk for cancer and other chronic diseases. It was once thought that when foods were fortified with vitamin D and rickets was no longer a major problem, the vitamin D issue was resolved. But it now appears that vitamin D has a greater role in nonskeletal as well as skeletal health. By the definition of 25(OH)D < 20 ng/ml, approximately 1 billion people worldwide are vitamin D deficient or insufficient.4 Its effects and uses are still to be explored and elucidated, which may help in the treatment of various chronic diseases, including periodontitis. A randomized controlled trial of vitamin D supplementation with periodontal disease measures as the primary outcome is needed to establish a possible cause-and-effect relationship. Until then, the relationship between vitamin D and periodontitis remains uncertain.
1. Amano Y, Komiyama K, Makishima M. Vitamin D and periodontal disease. J Oral Sci. 2009;51(1):11-20.
2. Yagiela JA, Dowd FJ, Neidle EA. Pharmacology and Therapeutics for Dentistry. 5th ed. St Louis, MO: Mosby; 2004.
3. Cannell JJ, Hollis BW, Zasloff M, Heaney RP. Diagnosis and treatment of vitamin D deficiency. Expert Opin Pharmacother. 2008;9(1):107-118.
4. Holick MF. Vitamin D deficiency. N Engl J Med. 2007;357(3):266-281.
5. U.S. Department of Agriculture, Agricultural Research Service. USDA National Nutrient Database for Standard Reference, Release 24. Washington, DC: 2011. http://www.ars.usda.gov/Services/docs.htm?docid=8964. Accessed January 23, 2012.
6. Gonzalez C. Vitamin D Supplementation: An Update. US Pharm. 2010;35(10):58-76.
7. Institute of Medicine. Dietary reference intakes for calcium and vitamin D. Washington, DC: Institute of Medicine of the National Academies; 2010.
8. Trang HM, Cole DE, Rubin LA, et al. Evidence that vitamin D3 increases serum 25-hydroxyvitamin D more efficiently than does vitamin D2. Am J Clin Nutr. 1998;68(4):854-858.
9. Armas LA, Hollis BW, Heaney RP. Vitamin D2 is much less effective than vitamin D3 in humans. J Clin Endocrinol Metab. 2004;89(11):5387-5391.
10. Lappe JM, Travers-Gustafson D, Davies KM, et al. Vitamin D and calcium supplementation reduces cancer risk: results of a randomized trial. Am J Clin Nutr. 2007;85(6):1586-1591.
11. Chlebowski RT, Johnson KC, Kooperberg C, et al. Calcium plus vitamin D supplementation and the risk of breast cancer. J Natl Cancer Inst. 2008;100(22):1581-1591.
12. Stolzenberg-Solomon RZ, Vieth R, Azad A, et al. A prospective nested case-control study of vitamin D status and pancreatic cancer risk in male smokers. Cancer Res. 2006;66(20):10213-10219.
13. Stolzenberg-Solomon RZ, Hayes RB, Horst RL, et al. Serum vitamin D and risk of pancreatic cancer in the prostate, lung, colorectal and ovarian screening trial. Cancer Res. 2009;69(4):1439-1447.
14. Nicholas J. Vitamin D and cancer: uncertainty persists; research continues. J Natl Cancer Inst. 2011;103(11):851-852.
15. Shapses SA, Manson JE. Vitamin D and prevention of cardiovascular disease and diabetes: why the evidence falls short. JAMA. 2011;305(24):2565-2566.
16. Wang L, Manson JE, Song Y, Sesso HD. Systematic review: vitamin D and calcium supplementation in prevention of cardiovascular events. Ann Intern Med. 2010;152(5):315-323.
17. Wang TJ, Pencina MJ, Booth SL, et al. Vitamin D deficiency and risk of cardiovascular disease. Circulation. 2008;117(4):503-511.
18. Giovannucci E, Liu Y, Hollis BW, Rimm EB. 25-Hydroxyvitamin D and risk of myocardial infarction in men: a prospective study. Arch Intern Med. 2008;168(11):1174-1180.
19. Forman JP, Giovannucci E, Holmes MD, et al. Plasma 25-hydroxyvitamin D levels and risk of incident hypertension. Hypertension. 2007;49(5):1063-1069.
20. Liebman B. From sun & sea: new study puts vitamin D & omega 3s to the test. Nutrition Action Healthletter. November 1, 2009:3-7.
21. Munger KL, Levin LI, Hollis BW, et al. Serum 25-hydroxyvitamin D levels and risk of multiple sclerosis. JAMA. 2006;296(23):2832-2838.
22. Pittas AG, Harris SS, Stark PC, Dawson-Hughes B. The effects of calcium and vitamin D supplementation on blood glucose and markers of inflammation in nondiabetic adults. Diabetes Care. 2007;30(4):980-986.
23. Albandar JM, Brunelle JA, Kingman A. Destructive periodontal disease in adults 30 years of age and older in the United States, 1988-1994. J Periodontol. 1999;70(1):13-29.
24. Jeffcoat MK, Chesnut CH III. Systemic osteoporosis and oral bone loss: evidence shows increased risk factors. J Am Dent Assoc. 1993;124(11):49-56.
25. Yoshie H, Kobayashi T, Tai H, Galicia JC. The role of genetic polymorphisms in periodontitis. Periodontol 2000. 2007;43:102-132.
26. Hennig BJ, Parkhill JM, Chapple IL, et al. Association of a vitamin D receptor gene polymorphism with localized early-onset periodontal disease. J Periodontol. 1999;70(9):1032-1038.
27. Sun JL, Meng HX, Cao CF, et al. Relationship between vitamin D receptor gene polymorphism and periodontitis. J Periodontal Res. 2002;37(4):263-267.
28. Tachi Y, Shimpuku H, Nosaka Y, et al. Vitamin D receptor gene polymorphism is associated with chronic periodontitis. Life Sci. 2003;73(26):3313-3321.
29. Wang C, Zhao H, Xiao L, et al. Association between vitamin D receptor gene polymorphisms and severe chronic periodontitis in a Chinese population. J Periodontol. 2009;80(4):603-608.
30. Brett PM, Zygogianni P, Griffiths GS, et al. Functional gene polymorphisms in aggressive and chronic periodontitis. J Dent Res. 2005;84(12):1149-1153.
31. Yoshihara A, Sugita N, Yamamoto K, et al. Analysis of vitamin D and Fcγ receptor polymorphisms in Japanese patients with generalized early-onset periodontitis. J Dent Res. 2001;80(12):2051-2054.
32. de Brito Júnior RB, Scarel-Caminaga RM, Trevilatto PC, et al. Polymorphisms in the vitamin D receptor gene are associated with periodontal disease. J Periodontol. 2004;75(8):1090-1095.
33. Naito M, Miyaki K, Naito T, et al. Association between vitamin D receptor gene haplotypes and chronic periodontitis among Japanese men. Int J Med Sci. 2007;4(4):216-222.
34. Inagaki K, Krall EA, Fleet JC, Garcia RI. Vitamin D receptor alleles, periodontal disease progression, and tooth loss in the VA dental longitudinal study. J Periodontol. 2003;74(2):161-167.
35. Li S, Yang MH, Zeng CA, et al. Association of vitamin D receptor gene polymorphisms in Chinese patients with generalized aggressive periodontitis. J Periodontal Res. 2008;43(3):360-363.
36. Park KS, Nam JH, Choi J. The short vitamin D receptor is associated with increased risk for generalized aggressive periodontitis. J Clin Periodontol. 2006;33(8):524-528.
37. Dietrich T, Joshipura KJ, Dawson-Hughes B, Bischoff-Ferrari HA. Association between serum concentrations of 25-hydroxyvitamin D3 and periodontal disease in the US population. Am J Clin Nutr. 2004;80(1):108-113.
38. Krall EA, Wehler C, Garcia RI, et al. Calcium and vitamin D supplements reduce tooth loss in the elderly. Am J Med. 2001;111(6):452-456.
39. Krall EA. The periodontal-systemic connection: implications for treatment of patients with osteoporosis and periodontal disease. Ann Periodontol. 2001;6(1):209-213.
40. Miley DD, Garcia MN, Hildebolt CF, et al. Cross-sectional study of vitamin D and calcium supplementation effects on chronic periodontitis. J Periodontol. 2009;80(9):1433-1439.
41. Jabbar S, Drury J, Fordham J, et al. Plasma vitamin D and cytokines in periodontal disease and postmenopausal osteoporosis. J Periodontal Res. 2011;46(1):97-104.
42. Hildebolt CF. Effect of vitamin D and calcium on periodontitis. J Periodontol. 2005;76(9):1576-1587.
About the Authors
Suellan Go Yao, DMD
Columbia University College of Dental Medicine
Department of Periodontics
New York, New York
James Burke Fine, DMD
Associate Dean for Postdoctoral Education
Professor of Clinical Dentistry and Postdoctoral Director of the Division of Periodontics
Columbia University College of Dental Medicine
New York, New York
Attending Dental Surgeon
Presbyterian Hospital Dental Service
New York, New York
Private Practice limited to Periodontics
Hoboken, New Jersey
Volume 9, Issue 7 - July-August 2008
eye on energy
Are You a Lean, Green Sustainable Machine?
Making Lean Manufacturing Part of Your Green Story
by Ric Jackson and David Meier
Simply producing a green product isn’t enough these days. Consumers in the market for green building products are just as concerned about your business’s environmental impact as they are about your product. Companies selling “green” inevitably will be asked if their message is also reflected in their operations and business practices. Sustainability is not a fad; it has become a core business value. Therefore, your products and operations should be in sync when touting your efforts to improve the sustainability of our planet.
Adopting sustainable business practices need not mean a complete overhaul of your operations. Door and window manufacturers employing lean manufacturing methods may already have steps in place toward becoming greener businesses. You can further promote your “lean, green story” by looking for new ways to reduce your impact on the environment and become a better corporate citizen.
How Does Lean Manufacturing Fit In?
Operating lean can have the most impact on your environmental footprint when applied to waste management practices. Waste is costly to an organization—and the planet. Many companies underestimate the costs associated with disposal, collection and even recycling, of waste. Of course, recycling and reusing waste is better than sending it to a landfill, but these options often become too convenient, making it easy to ignore the root of the problem—how to reduce waste in the first place. Lean thinking can help businesses realize a smaller environmental footprint by improving operational efficiencies that place a high emphasis on waste reduction.
As a case in point, Toyota has become the model for lean manufacturing. The Toyota Production System utilizes lean principles to impact the company’s sustainability. Toyota’s lean efforts have led the company to employ a zero impact objective with a goal of generating zero landfill waste. One notable method Toyota employs is using reusable containers to transport materials to suppliers, thereby greatly reducing cardboard use (see related story in the June 2008 issue of DWM, page 32).
Other lean principles, such as moving materials more effectively, minimizing extra handling and improving efficiencies, can lead to a greener business. For example, choose spacers that work interchangeably with various shapes of glass and consider automation to minimize variability. An efficient business and manufacturing process means less embodied energy for your product, as well as less waste in the form of time, costs and materials (see DWM, May 2008, page 8, for a refresher on embodied energy).
Will This Work for Your Business?
The next step would be to determine if the people within your business are prepared to solve problems and improve waste management and other core lean issues. Your workforce needs to be educated on the benefits and objectives of your lean and green initiatives for them to successfully take part in the process.
There are several questions to ask as you consider incorporating lean into your sustainability messages.
Ric Jackson is the director of marketing and business development for Truseal Technologies Inc. He can be reached at [email protected]. David Meier is an internationally recognized authority on lean manufacturing. He can be reached at [email protected]. The views and opinions expressed in this article do not necessarily reflect those of this magazine.
In December 2015 at the Twenty-first Conference of Parties on Climate Change (COP21) in Paris, Vietnam pledged a particularly unambitious “intended nationally determined contribution”: an 8% cut in its greenhouse gas (GHG) emissions by 2030, not from today’s level, but from a “business as usual scenario” that amounts to an increase of 3.2x between 2010 and 2030 to 787m tonnes of carbon dioxide equivalent. With an unspecified amount of international financial support, it pledged an intended reduction of 25% compared to business as usual.
There are three big problems to overcome in the above, apart from the overarching global one that COP21 was only made possible by making all such commitments voluntary and therefore legally and even morally unenforceable. First, Vietnam’s small and uninspiring proposed 8% “contribution” to climate change mitigation, for a country that is likely to directly bear a large brunt of the consequences of global warming. Second, worrying industrial data points that cast great scepticism on even achieving these very modest reductions. And third, the knowledge gap that palpably exists in Vietnam today on its own environmental self-monitoring: most listed companies do not yet know even their own basic (“Scope 1” in the jargon) GHG emissions, while the nation’s environment minister just a few months ago described environmental monitoring in Vietnam as “untimely, inaccurate and infrequent”.
The weight of evidence firmly ranks Vietnam as one of the main countries in the world to be strongly affected by climate change over the remainder of this century. According to the UN’s Intergovernmental Panel on Climate Change, some 1.2 billion people will be directly affected by rising sea levels by 2060, with Asia being hit hardest. The IPCC cites five major countries in particular in this regard: China, India, Bangladesh, Indonesia, and Vietnam. Major littoral cities in these countries (plus Bangkok and Yangon) are squarely at the centre of the likely economic and social impact.
The impact on Vietnam can, for the purposes of an executive summary, be broken down into three major items: food/agriculture, industrial development, and quality of habitat and the human condition. There is considerable overlap between these categories, but at least this gives us a basic framework for thinking about the issues:-
Fisheries & Agriculture. Some studies rate Vietnam first in the world for its sensitivity to the health of its fisheries. Some 24% of the country’s population lives in coastal districts along its 3,200km coastline, fish is a central part of the nation’s diet, and seafood exports were c. USD 8bn in 2015, 5% of total exports and (including domestic consumption) c. 7% of GDP. Coastal mangroves, salt marshes, and coral reefs are each endangered by rising sea levels and tidal surges associated with increasing frequency of typhoons and cyclones, and these are critical to breeding marine life. The warming ocean temperatures (making for lower oxygenation) and rising acidification (mainly from industrial pollution including acid rain) associated with climate change are already causing a northern migration of fish stocks into colder Chinese or China-claimed waters. In the South China sea, coastal fishing grounds have already been depleted to 5-30% of their unexploited stocks.
Inland, 60% of Vietnam’s total river flow and 95% of the mighty Mekong’s come from outside its borders, with upstream damming in China and Laos affecting flows and sediment deposition. The Mekong’s water level in February this year, during the recent El Niño phenomenon, was at its lowest since 1926, with 40-50% of the 2.2m hectares of arable land in the Mekong Delta hit by salinisation (a result of less river water relative to sea level). This affects river fisheries, and even more importantly rice, which is about 75% of total Vietnamese agricultural crop value, an export worth USD 1.6bn in 2015 and a sector worth about 8% of GDP. Together, agriculture and fisheries in 2015 occupied 44% of the nation’s population and 17% of GDP.
Vietnam’s rice yields per hectare – some 4.7 tonnes pa, a high level globally – have risen four times since the 1970s, amid increasingly intensive use of fertilisers and pesticides, but the outlook over the next 40 years for both yields and total plantation area is worrying. One reason is that fertilisers cannot be used any more intensively than they are already; Vietnam already ranks near the top of the world for the amount of them used per hectare. Another is that, with urbanisation and industrialisation, rice plantation area – currently about 4m hectares – is on a long term decline, perhaps of 10% over the coming 20-30 years. Just 28% of Vietnam’s overall land area is suitable for agriculture, with only two thirds of this capable of high-intensity crop production.
But most alarming of all is rising air temperature, the risk of rising sea level and falling river level, which cause drought, salinisation and sea inundation. Average temperature increases of 2-3 degrees centigrade over large parts of Vietnam are forecast over the coming half century, with the frequency of over-35 degree days rising strongly, raising the risk of drought. The risk of too much precipitation during the monsoon is also forecast to rise, as well as extreme weather events – especially in the vital Mekong Delta which produces typically over half of the nation’s rice. The low, flat topography of this region – including half of Ho Chi Minh City itself – means that a one metre rise in sea level would cause the inundation of up to 2m hectares or 33-50% of the region, flooding perhaps 14m residents.
Industrial (and therefore overall economic) development. Vietnam has been an economic development success story since the early 1990s, with the fall in its World Bank-measured poverty rate from 60% in 1993 to 13.5% in 2014 being one of the world’s fastest declines over this period. As in other successful Asian economic developers before it, this has been driven by the strong development of export-driven processing and manufacturing. The majority of Vietnam’s manufacturing capacity is in the southeast of the country, in and near the areas that could be so seriously affected by the phenomena already mentioned. Facilities in some 20 of Vietnam’s 64 provinces could be inundated by that one metre sea level rise, causing great economic damage as well as environmental harm in the form of toxic contamination. Proportionally to its own economic size, when added to the agriculture and fisheries impacts discussed above, Vietnam may thus be the single most badly affected country in Asia by a modest rise in sea level over the coming decades.
Quality of habitat and the human condition. Resulting from the climate and sea level changes are a host of negative effects on human life directly: involuntary migration, pressure on food and land resources, injury, death, malnutrition, disease (including from an expanding habitat area of malaria and dengue), intense heat up and air quality down. Such phenomena, apart from their innate tragedy and badness, could be expected to cause political and social discontent and instability, removing a key enabling factor behind Vietnam’s strong economic growth record and outlook and replacing it with a disabling one.
Vietnam’s contribution to GHG: the cynic and the “poor me” versus the sage
Vietnam’s global contribution to GHG emissions is small: in 2010, some 247m tonnes of carbon dioxide equivalent versus, for example, the US’s 6.7bn tonnes, the EU’s 4.7bn, China’s 9.7bn, Japan’s 1.3bn, South Korea’s 662m, and Thailand’s 346m. However, they are set to keep growing fast. Estimates for future Vietnamese energy consumption generally have it growing by 6-9x over the next 20 years, with resulting GHG emissions tripling over the same period – note the increased GHG-efficiency, but also the vast increase in energy usage to come.
The cynical or “poor us” Vietnamese policy maker or businessman might say: Vietnam is not a major GHG emitter, it is still way too undeveloped, we won’t worry about our GHG amounts at all except for paying the necessary diplomatic lip-service, we know well how to get bilateral and multilateral development aid for any GHG reducing we do – and to supplement the money we will anyway have to spend on mitigating the effects of climate change and adapting to it.
But the sage retorts: what an opportunity for us as Vietnamese policy makers. First, to maximise the assistance we receive on this matter from abroad, we should throw ourselves into GHG reductions; we will receive all the more assistance if we are sincere in our zeal.
Second, this is a classic opportunity for a developing country with abundant wind, solar and hydro resources to perform a leapfrog of its energy sector to a low GHG one. In this respect, the prospect of Vietnam’s demand for coal increasing at a CAGR of 23% over the period 2015-20 (VNHAM’s current forecast) needs to be squeezed dramatically downwards, via an effective package of incentives to promote low-GHG energy (including natural gas) and to dissuade investment in coal-fired energy. Such a policy could lead to a Vietnamese internationally competitive low-GHG energy equipment industry with export potential. As any good schoolboy knows, developing competitive exports in high-growth industries is the holy grail of superior economic growth.
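For scale, simple compounding arithmetic on the forecast just cited: a 23% compound annual growth rate over the five years 2015-20 implies a multiplication of (1 + 0.23)^5 ≈ 2.8, i.e. coal demand nearly tripling in half a decade, which is the magnitude of lock-in such an incentive package would need to forestall.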
Third, the sage well remembers that autocratic government retains an important advantage in its ability to conduct far-reaching policy change, that most democracies can only dream of. Vietnam has the institutional capacity to direct the country forcefully in such a direction. Additional budgetary costs could be amply met by a radical acceleration of privatisation of the remaining 1000-plus state owned enterprises. For a nervous state apparatchik, it might be worth recalling the recent record: over 70% of GDP is not generated by the state or state-owned enterprises these days, and nearly 90% of the national workforce are not employed by it/them. And hey presto: no counter-revolution has threatened to ensue from already-undertaken privatisations, in fact the reverse. Communist party membership is more a hyper-premium LinkedIn account or Rotary Club membership these days than it is a betrothal to the quixotic theorems of Messrs Marx and Engels.
Fourth, China has been moving towards GHG-mitigating industries in recent years, and the sage remembers: we’ve done well by shadowing China, and positioning ourselves as the lower-cost alternative. For this to remain a potent factor, Vietnam needs to continue to move up the value curve industrially, just as China and the prior Asian developers have done. With respect to China, it might also strike Vietnamese policy makers that avoiding the recent horrific environmental stresses of that nation would be wise.
Make the right choices via government fiat and fiscal incentives
Attracting investment in solar panels, large-scale batteries, electricity storage, wind turbines and so forth should be the overriding focus of the Vietnamese government over the coming decade. A country with only 2% automobile penetration should easily be aiming for a majority of such vehicles in 10-15 years’ time to be electrically powered; in the meantime it should be overseeing a programme to electrify half of the existing motorcycles in the country, a measure the German development institution GIZ says would cut Vietnam’s current overall GHG emissions by 4.2m tonnes of carbon dioxide equivalent, or c. 2%. A significant shift from private vehicles to walking, cycling, trains and buses, and from buses to water transport, would cut 13.6m tonnes, or 7%, estimates GIZ. The urban rail projects underway in Hanoi and Ho Chi Minh City will cut 1.6m tonnes, just under 1%. An energy sector set to grow more than sixfold over the next 20 years should be aiming for a vast majority of it to be low-carbon, including a place for gas and nuclear. Mandatory fuel economy standards for passenger cars should be introduced immediately, following on from the new labelling requirement brought in last year. A nation with a flood of enthusiastic inward foreign direct investment should be insisting on best-in-class environmental standards for the new factories being built.
The above measures, and others, should receive top priority. Vietnamese middle class people are reasonably environmentally minded and there is now a critical mass of opinion for such measures to be adhered to and supported. Many of these measures will not only generate better achievement in the rather abstract matter of GHG reduction, but they will also generate noticeable improvement in here-and-now urban pollution levels, including from particulates, benzine, and nitrogen oxides, and sulphur dioxides.
Vietnam’s government has an impressive record of mobilising its people towards shared or imposed objectives, from war victories against the odds to overnight observance of helmets being required whilst on motorcycles. In the interests of sustainable and rapid economic advancement, we urge it to devote policymaking priority to becoming a leading, clean-energy emerging economy. At present, it is poised at a crossroads and could easily go either way, down Cynic Alley or Sage Boulevard.
Data in this article was sourced from the World Bank, IMF, Asian Development Bank, German development institution GIZ, World Resources Institute, the General Statistics Office of Vietnam, McKinsey, Global Insight, and the Center for Climate and Security.
Mark Brown takes a look at new research which is aiming to personalise mental health care with sophisticated algorithms – so more people can get treatment that works for them.
Sometimes when we experience mental health difficulty and seek treatment or support it can feel we are a round peg being hammered into a square hole. We are individuals. We are all made up of idiosyncrasies; each of us a living history of what we have been, how we have lived, where we have travelled and what the world has done to us and with us. We are not only our age; not only our gender; or sexuality; not only our mental health difficulties or our distress. We are all of those things and more. Despite this, in mental health, treatments often feel like a very approximate fit, where we have to ‘suck them and see’ before we know whether they work or not.
I recently spoke to Zach Cohen, a researcher seeking to solve this problem with a project using statistical modelling to predict with greater accuracy which mental health treatments will work for people. Zach and his colleagues have been working on something they are referring to as a Personalised Advantage Index, a way of working out which treatments might be best for an individual based on who they are. To create this personalised treatment suggestion engine they have been mashing up data from existing studies with knowledge of the way that treatments interact with each other and what we know about the ways different people respond to them.
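Published descriptions of the Personalised Advantage Index approach suggest the core computation is straightforward to state, even if building trustworthy models is not: fit one outcome-prediction model per treatment on past trial data, then for a new patient compare the predicted outcomes. The sketch below is an illustration of that general idea only; it is not Zach's actual code, and the features, model choice, and synthetic data are all assumptions made up for the example:

    # Illustrative Personalised Advantage Index (PAI)-style calculation.
    # All names and data here are invented for the example.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))        # e.g., age, baseline severity, prior treatment
    treatment = rng.integers(0, 2, 200)  # 0 = treatment A, 1 = treatment B in past trials
    outcome = rng.normal(size=200)       # observed improvement scores

    # One outcome model per treatment arm, fitted on historical data
    model_a = LinearRegression().fit(X[treatment == 0], outcome[treatment == 0])
    model_b = LinearRegression().fit(X[treatment == 1], outcome[treatment == 1])

    # For a new patient, the index is the predicted difference in outcome:
    # its sign suggests which treatment is expected to work better, and its
    # magnitude suggests how much the choice is expected to matter.
    new_patient = rng.normal(size=(1, 3))
    advantage = model_a.predict(new_patient) - model_b.predict(new_patient)
    print(advantage)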
Mental health treatment doesn’t always make people feel like they are regarded as individuals. Medical professionals are often unlikely to be fully up-to-date with all existing research and are likely to fall back on past experience and recommendations from colleagues, something as true for talking therapies as it is for medications. We know what symptoms a particular treatment or intervention is supposed to work for and have a hazy-at-best knowledge of which kinds of people might respond best to a particular treatment or intervention. What we often don't know is whether this particular treatment will be the most effective for that particular person. What might work for one person with a particular diagnosis might prove actively harmful to another. In mental health, diagnosis alone often overshadows consideration of other factors.
Over the last decade or so the idea of precision medicine has been gathering pace. The US National Research Council defines precision medicine as an approach to medicine that gives practitioners “the ability to classify individuals into subpopulations that differ in their susceptibility to a particular disease, in the biology or prognosis of those diseases they may develop, or in their response to a specific treatment.”
Many factors can lead to being prescribed treatments for a particular diagnosis that might not be helpful, especially in a mental health system where choices might be limited. Too often people feel they must like it or lump it. The intention of Zach’s work is to make it possible to suggest to someone experiencing a mental health difficulty a treatment or intervention will work best for them based on age, gender, life situation, level of prior experience of treatments or other factors.
At present, Zach’s project has a model made from previous research that they are testing against existing large-scale studies to see whether the treatments their Personalised Advantage Index predicts matches with what actually had the best outcomes for people.
The promise of mental health treatments that start with who we are; in all of our weird, lumpy, contradictory glory, is a tantalising one. When experiencing mental health difficulty people complain of either being told that they must endure a period of trial-and-error to find the best fit treatment for them or that they should learn to live with the treatment they are offered even if it doesn’t feel right or doesn’t work as anticipated. Zach’s work could make mental health prescribing more intelligent and responsive to individual situations, histories and circumstances. Zach and his colleagues are working on something that might help to end the curse of one-size-fits-all treatment decisions and bring a new precision to the treatment people are offered.
Last updated: 7 September 2017
• barrister •
Part of Speech: Noun
Meaning: An attorney in the British legal system licensed to argue cases in higher courts, as opposed to a solicitor, who may only prepare cases for a barrister and argue in certain lower courts.
Notes: People have attempted to build a family around today's word over the years, but none seem to have succeeded. In the mid 19th century the adjective barristerial and the noun barristership appeared in print, but no one seems to remember them any more.
In Play: Despite our best intentions, the best known barrister probably remains Horace Rumpole, the cheroot-smoking, cheap-claret-drinking creation of the British TV series written by Sir John Mortimer. Rumpole of the Bailey, as played by the late Leo McKern, is known for his cagey manipulation of the British legal system and his office colleagues, and for his forthrightness in such quotes as, "Crime doesn't pay, but it's a living."
Word History: Today's Good Word is probably a blend of two words, bar (originally barre when freshly borrowed from French) and obsolete legister, an extension of legist "a specialist at law". The root, bar, now refers to a rail and to the practice of law, as in to practice before the bar. The bar in this case originally referred to the railing in the British courtroom that separates the judge, the lawyers, the accused, and witnesses from the rest of the court (audience). To practice before the bar, then, originally meant to practice in front of this railing. (Today's word was the suggestion of Norman Rich of Lewisburg, Pennsylvania, who recently returned from a grand cruise around the British Isles with a new bag of interesting words.)
Come visit our website at <http://www.alphadictionary.com> for more Good Words and other language resources!
Primary Science Color Mixing Glasses
Take a look! These unique child-size glasses and interchangeable lenses let students observe the world while learning about color.

Features 8 easy-to-change lenses: 2 each of red, yellow and blue, plus distortion lenses that let you see the world like a bug
Teaches primary colors as well as mixing to make secondary colors by combining up to 2 lenses per side
Allows additional instruction and kaleidoscopic fun when you use the lenses on their own
Includes color-mixing chart
Durable plastic glasses are sized just right for children, and wipe clean with a damp cloth
Gregory Bresiger writes in this weekend’s Mises Daily:
After World War II, Taft ended his career by questioning the Truman Doctrine—which committed the United States to opposing communism in Greece and Turkey as well as almost anywhere else—and later urged president Dwight Eisenhower not to send troops to Indochina to save the French. Their Asian empire was collapsing in the early 1950s. Although initially supportive of President Truman in the Korean War, Taft later complained that the president had never asked for Congressional authorization in sending troops into war. Taft also questioned the legitimacy of the UN resolution calling for American intervention.
Taft hated the term “isolationist,” but said he accepted it if it meant “isolating the United States from the wars of Europe.” Still, isolationism was a sentiment that was in the political mainstream through a large part of the 20th century.
2007 Schools Wikipedia Selection
The term Viking commonly denotes the ship-borne explorers, traders, and warriors of the Norsemen who originated in Scandinavia and raided the coasts of the British Isles, France and other parts of Europe from the late 8th century to the 11th century. This period of European history (generally dated to 793– 1066) is often referred to as the Viking Age. It may also be used to denote the entire populations of Viking Age Scandinavia and their settlements elsewhere.
Famed for their navigation ability and long ships, Vikings in a few hundred years colonized the coasts and rivers of Europe, the islands of Shetland, Orkney, the Faroe Islands, Iceland, Greenland, Newfoundland circa 1000 , while still reaching as far south as North Africa, east into Russia and to Constantinople for raiding and trading. Vikings are also widely believed to have been early explorers of North America, with putative expeditions to present-day Canada taking place as early as the 10th century. Viking voyages grew less frequent with the introduction of Christianity to Scandinavia in the late 10th and 11th century. The Viking Age is often considered to have ended with the Battle of Stamford Bridge in 1066.
The word viking was introduced to the English language with romantic connotations in the 18th century. However, etymologists assign the earliest use of the word to Anglo-Frankish writers, who referred to "víkingr" as one who set about to raid and pillage. In the current Scandinavian languages the term viking is applied to the people who went away on viking expeditions, be it for raiding or trading. In English it has become common to use it to refer to the Viking Age Scandinavians in general. The pre-Christian Scandinavian population is also referred to as Norse.
The Viking Age
(Image: the Gokstad Viking ship on display in Oslo, Norway.)

The period of North Germanic expansion, usually taken to last from the earliest recorded raids in the 790s until the Norman Conquest of England in 1066, is commonly called the Viking Age. The Normans, however, were descended from Scandinavian Vikings who were granted control of parts of northern France (Normandy) in the 10th century; William the Conqueror himself was a descendant of the Viking Rollo. In that respect, the Vikings continued to have an influence in Europe. Likewise, King Harold Godwinson was descended from Danish Vikings. Many of the medieval kings of Norway and Denmark were married to English and Scottish royalty.
Geographically, a "Viking Age" may be assigned not only to the Scandinavian lands (modern Denmark, Norway and Sweden), but also to territories under North Germanic dominance, mainly the Danelaw, Scotland, the Isle of Man and Ireland. Contemporary with the European Viking Age, the Byzantine Empire experienced the greatest period of stability (circa 800– 1071) it would enjoy after the initial wave of Arab conquests in the mid-7th century.
Viking navigators also opened the road to new lands to the north and to the west, resulting in the colonization of Shetland, Orkney, the Faroe Islands, Iceland, Greenland, and even an expedition to, and a short-lived settlement in, Newfoundland circa 1000.
During three centuries, Vikings appeared along the coasts and rivers of Europe, as traders, but also as raiders, and even as settlers. From 839, there were Varangian mercenaries in Byzantine service (most famously Harald Hardrada, who campaigned in North Africa and Jerusalem in the 1030s). Important trading ports during the period include Birka, Hedeby, Kaupang, Jorvik, Staraya Ladoga, Novgorod and Kiev. Generally speaking, the Norwegians expanded to the north and west, the Danes to England and France, settling in the Danelaw, and the Swedes to the east. But the three nations were not yet clearly separated, and still united by the common Old Norse language. The names of Scandinavian kings are known only for the later part of the Viking Age, and only after the end of the Viking Age did the separate kingdoms acquire a distinct identity as nations, which went hand in hand with their christianization. Thus it may be noted that the end of the Viking Age (9th–11th century) for the Scandinavians also marks the start of their relatively brief Middle Ages.
Archaeological discoveries of loot suggest that the Vikings reached the city of Baghdad. However, the Vikings were far less successful in establishing settlements in the Middle East, owing to the far more centralized and powerful Arab states present, namely the Umayyad and then Abbasid empires.
After trade and settlement, cultural impulses flowed from the rest of Europe. Christianity had an early and growing presence in Scandinavia, and with the rise of centralized authority along with a stiffening of coastal defense in the areas the Vikings preyed upon, the Viking raids became more risky and less profitable. With the rise of kings and great nobles and a quasi-feudal system in Scandinavia, they ceased entirely - in the 11th century the Scandinavians are frequently chronicled as combating "Vikings" from the Baltic, which would eventually lead to Danish and Swedish participation in the Baltic crusades and the development of the Hanseatic League.
The earliest date given for a Viking raid is 787 when, according to the Anglo-Saxon Chronicle, a group of men from Norway sailed to Portland, in Dorset. There, they were mistaken for merchants by a royal official, and they murdered him when he tried to get them to accompany him to the king's manor to pay a trading tax on their goods. The next recorded attack, dated June 8, 793, was on the monastery at Lindisfarne—the "Holy Island"—on the east coast of England. For the next 200 years, European history is filled with tales of Vikings and their plundering.
Vikings exerted influence throughout the coastal areas of Ireland and Scotland, and conquered and colonized large parts of England (see Danelaw). Wales also saw some Viking settlements on its coast; the modern-day city of Swansea takes its name from Sweyn Forkbeard, who was shipwrecked at modern-day Swansea Bay, and the neighbouring Gower Peninsula has many place names of Norse origin (such as Worms Head; "worm" is the Norse word for dragon, as the Vikings believed that the serpent-shaped island was a sleeping dragon). Twenty miles west of Cardiff on the Vale of Glamorgan coast is the semi-flooded island of Tusker Rock, which takes its name from Tuska, the Viking whose people semi-colonised the fertile lands of the Vale of Glamorgan. The Britons of Cornwall allied with the Vikings in an unavailing attempt to expel the Saxons from Cornwall in 838. Vikings travelled up the rivers of France and Spain, and gained control of areas in Russia and along the Baltic coast. Stories tell of raids in the Mediterranean and as far east as the Caspian Sea.
Significantly, during their battles against the Anglo-Saxons, the Celtic nations of Scotland, Ireland, Wales and, in 838, Cornwall chose to ally with the Vikings. Possibly as a result, the modern-day Celtic nations of the British Isles, in particular the cities of Cardiff and Swansea in Wales, and in Ireland the cities of Cork, Dublin, Limerick and Waterford, retain a certain pride in what is perceived as "Viking ancestry".
Adam of Bremen records in his book Gesta Hammaburgensis Ecclesiae Pontificum, (volume four):
- Aurum ibi plurimum, quod raptu congeritur piratico. Ipsi enim piratae, quos illi Wichingos appellant, nostri Ascomannos, regi Danico tributum solvunt.
- "There is much gold here (in Zealand), accumulated by piracy. These pirates, which are called wichingi by their own people, and Ascomanni by our own people, pay tribute to the Danish king."
According to the Anglo-Saxon Chronicles, after Lindisfarne was raided in 793, Vikings continued on small-scale raids across England. Viking raiders struck England in 793 and raided a Christian monastery that held Saint Cuthbert’s relics. The raiders killed the monks and captured the valuables. This raid was called the beginning of the “Viking Age of Invasion”, made possible by the Viking longship. There was great violence during the last decade of the 8th century on England’s northern and western shores. While the initial raiding groups were small, it is believed that a great amount of planning was involved.
During the winter of 840-841, the Norwegians raided in winter rather than in the usual summer, waiting out the season on an island off Ireland. In 865 a large army of Danish Vikings, supposedly led by Ivar, Halfdan and Guthrum, arrived in East Anglia. They proceeded to cross England into Northumbria and captured York (Jorvik), where some settled as farmers. Most of the English kingdoms, being in turmoil, could not stand against the Vikings, but Alfred of Wessex managed to keep them out of his country. Alfred and his successors continued to drive back the Viking frontier and eventually took York.
A new wave of Vikings appeared in England in 947, when Erik Bloodaxe captured York. The Viking presence continued through the reign of Canute the Great (1016-1035), after which a series of inheritance disputes weakened the dynasty's hold on power. The Viking presence dwindled until 1066, when the Norwegians lost their final battle with the English. See also Danelaw.
The Vikings did not have everything their own way. In one instance in England, a small Viking fleet attacked a rich monastery at Jarrow. The Vikings were met with stronger resistance than they expected: their leaders were killed, and the raiders escaped only to have their ships beached at Tynemouth and their crews killed by locals. This was one of the last raids on England for about 40 years; the Vikings instead focused on Ireland and Scotland.
The Vikings conducted extensive raids in Ireland and founded a few towns, including Dublin. At some points, they seemingly came close to taking over the whole isle; however, the Scandinavians settled down and intermixed with the Irish. Literature, crafts, and decorative styles in Ireland and the British Isles reflected Scandinavian culture. Vikings traded at Irish markets in Dublin. Excavations found imported fabrics from England, Byzantium, Persia, and central Asia. Dublin became so crowded by the 11th Century that houses were constructed outside the town walls.
The Vikings pillaged monasteries on Ireland's west coast in 795, and then spread out to cover the rest of the coastline. The north and east of the island were most affected. During the first 40 years, the raids were conducted by small, mobile Viking groups. From 830 on, the raids involved large fleets of Viking ships. From 840, the Vikings began establishing permanent bases on the coasts. Dublin was the most significant settlement in the long term. The Irish became accustomed to the Viking presence; in some cases the two peoples became allies and intermarried.
In 832, a Viking fleet of about 120 ships invaded kingdoms on Ireland's northern and eastern coasts. Some believe that the increased number of invaders coincided with Scandinavian leaders' desire to control the profitable raids on the western shores of Ireland. During the mid-830s, raids began to push deeper into Ireland, as opposed to merely touching the coasts; navigable waterways made this deeper penetration possible. After 840, the Vikings had several bases in strategic locations dispersed throughout Ireland.
In 838, a small Viking fleet entered the River Liffey in eastern Ireland. The Vikings set up a fortified base of a type the Irish called a longphort. This longphort would eventually become Dublin. After this interaction, the Irish experienced Viking forces for about 40 years. The Vikings also established longphorts in Cork, Limerick, Waterford, and Wexford. The Vikings could sail up the main rivers and branch off into different areas of the country.
One of the last major battles involving Vikings was the Battle of Clontarf in 1014, in which Vikings fought both in High King Brian Boru's army and in the Viking-led army opposing the High King. Irish and Viking literature depicts the Battle of Clontarf as a gathering of this world and the supernatural. For example, witches, goblins, and demons were present. A Viking poem portrays the environment as strongly pagan. Valkyries chanted and decided who would live and die.
While there are few records from the earliest period, it is clear that a Scandinavian presence in Scotland increased in the 830s. In 839, a large Viking force believed to be Norwegian invaded the Earn and Tay valleys, which were central to the Pictish kingdom. They slaughtered Eoganan, king of the Picts, and his brother, the vassal king of the Scots. They also killed many members of the Pictish aristocracy. The sophisticated kingdom that had been built fell apart, as did the Pictish leadership. The foundation of Scotland under Kenneth MacAlpin is traditionally attributed to the aftermath of this event.
The isles to the north and west of Scotland were heavily colonised by Norwegian Vikings. Shetland, Orkney, the Western Isles, Caithness and Sutherland were under Norse control, sometimes as fiefs under the King of Norway and other times as separate entities. Shetland and Orkney were the last of these to be incorporated into Scotland, as late as 1468. The Vikings also intermixed with the original inhabitants, as in Galloway, where they became the Gallgaels.
Wales was not colonised by the Vikings as heavily as eastern England and Ireland. The Vikings did, however, settle in the south around St. David's, Haverfordwest, and Gower, among other places. Place names such as Skokholm, Skomer, and Swansea remain as evidence of the Norse settlement. The Vikings, however, were not able to set up a Viking state or control Wales, owing to the powerful forces of Welsh kings, and, unlike in Scotland, the aristocracy was relatively unharmed.
Gaul or West Francia suffered more severely than East Francia during the Viking raids of the ninth century, which destroyed the Carolingian Empire, though it suffered less severely than the Low Countries. The reign of Charles the Bald, whose military record was one of consistent failure, coincided with some of the worst of these raids, though he did take action by the Edict of Pistres of 864 to secure a standing army of cavalry under royal control to be called upon at all times when necessary to fend off the invaders. He also ordered the building of fortified bridges to prevent inland raids.
Nonetheless, the Bretons allied with the Vikings, and at the Battle of Brissarthe in 865 both Robert, the margrave of Neustria (a march created for defence against the Vikings sailing up the Loire), and Ranulf of Aquitaine died. The Vikings also took advantage of the civil wars which ravaged the Duchy of Aquitaine in the early years of Charles' reign. In the 840s, Pepin II called in the Vikings to aid him against Charles, and they settled down at the mouth of the Garonne. Two dukes of Gascony, Seguin II and William I, died defending Bordeaux from Viking assaults. A later duke, Sancho Mitarra, even settled some at the mouth of the Adour in an act presaging that of Charles the Simple and the Treaty of Saint-Clair-sur-Epte, by which the Vikings were settled in Rouen, creating Normandy as a bulwark against other Vikings.
By the mid 9th century, though apparently not before (Fletcher 1984, ch. 1, note 51), there were Viking attacks on the coastal Kingdom of Asturias in the far northwest of the peninsula, though historical sources are too meagre to assess how frequent or how early raiding was. By the reign of Alfonso III Vikings were stifling the already weak threads of sea communications that tied Galicia (a province of the Kingdom) to the rest of Europe. Richard Fletcher attests raids on the Galician coast in 844 and 858: "Alfonso III was sufficiently worried by the threat of Viking attack to establish fortified strong points near his coastline, as other rulers were doing elsewhere." In 968 Bishop Sisnando of Compostela was killed, the monastery of Curtis was sacked, and measures were ordered for the defence of the inland town of Lugo. After Tui was sacked early in the 11th century, its bishopric remained vacant for the next half-century. Ransom was a motive for abductions: Fletcher instances Amarelo Mestáliz, who was forced to raise money on the security of his land in order to ransom his daughters who had been captured by the Vikings in 1015. Bishop Cresconio of Compostela (ca. 1036–66) repulsed a Viking foray and built the fortress at Torres del Oeste (Council of Catoira) to protect Compostela from the Atlantic approaches. The city of Póvoa de Varzim in Northern Portugal, then a town, was settled by Vikings around the 9th century and its influence kept strong until very recently, mostly due to the practice of endogamy in the community.
In the Islamic south, the first navy of the Emirate was called into being after the humiliating Viking ascent of the Guadalquivir in 844, and it was tested in repulsing Vikings in 859. Soon the dockyards at Seville were extended, and the fleet was employed to patrol the Iberian coastline under the caliphs Abd al-Rahman III (912–61) and Al-Hakam II (961–76). By the next century, piracy by Saracens superseded the Viking scourge.
Explanations of the expansion
Why the Viking expansion took place is a much-debated topic in Nordic history, and there are no clear answers.
One common theory is that the Viking homelands were overpopulated. A growing population, or a declining ability of agriculture to support the existing population, would have caused a shortage of land. For a people living near the coast and in possession of good naval technology, it makes sense to expand overseas in the course of a typical youth-bulge effect. One problem with this explanation is that, as a result of the lack of sources, no such rise in population or decline in agricultural production has been proven. The theory is widely accepted as part of the solution, since it is hard to imagine why a people would colonise new territories if there were no shortage of land at home. However, it does little to explain the plundering raids and trading expeditions, or why the expansion went to overseas countries rather than into the big, uncultivated forest areas of the Viking homelands on the Scandinavian peninsula.
Another explanation is that the Vikings exploited temporary weakness in the regions they travelled to. For instance, the Danish Vikings were aware of the internal division of the empire of Charlemagne that began in the 830s and resulted in the splitting up of the empire. The Danish expeditions to England may also have profited from the disunity of the different English kingdoms.
The decline of old trade routes may also be part of the explanation. Trade between western Europe and the rest of the Eurasian continent had suffered a severe decline following the fall of the Roman Empire in the 5th century and the expansion of Islam in the 7th century. At the time of the Vikings, trade on the Mediterranean Sea was at its lowest level. By trading furs and slaves for silver and spices with the Arabs, for instance, and then trading the silver and spices for weapons with the Franks, the Vikings acted as middlemen in international trade, taking up the role the declining Mediterranean trade had previously filled.
Another important factor for trade was the destruction of the Frisian fleet by the Franks, which gave the Vikings the opportunity to take over its old markets. However, both the explanation stressing disunity and the one stressing trade explain how the expansion was possible more than why it occurred. For this reason, some add to the economic factors a further motive for the first Viking raids: resistance to forced Christianization, in particular Charlemagne’s persecutions of pagan peoples, who were made to accept conversion or massacre.
Norse mythology, Norse sagas and Old Norse literature tell us about their religion through tales of heroic and mythological heroes. However, the transmission of this information was primarily oral, and we are reliant upon the writings of (later) Christian scholars, such as the Icelanders Snorri Sturluson and Sæmundur fróði, for much of this. Many of these sagas were written in Iceland, and most of them, even if they had no Icelandic provenance, were preserved there after the Middle Ages due to the Icelanders' continued interest in Norse literature and law codes.
Vikings in those sagas are described as often striking at accessible and poorly defended targets, usually with impunity. The sagas state that the Vikings built settlements and were skilled craftsmen and traders.
Many rune stones in Scandinavia record the names of participants in Viking expeditions. Other rune stones mention men who died on Viking expeditions, among them the around 25 Ingvar stones in the Mälardalen district of Sweden erected to commemorate members of a disastrous expedition into present-day Russia in the early 11th century. The rune stones are important sources in the study of the entire Norse society and early medieval Scandinavia, not only of the 'Viking' segment of the population (Sawyer, P H: 1997).
Runestones attest to voyages to locations such as Bath, Greece, Khwaresm, Jerusalem, Italy (as Langobardland), London, Serkland (i.e. the Muslim world), England, and various locations in Eastern Europe.
There are numerous burial sites associated with Vikings. Some examples are:
- Gettlinge gravfält, Öland, Sweden, ship outline
- Jelling, Denmark, a World Heritage Site
- Hulterstad gravfält, near the villages of Alby and Hulterstad, Öland, Sweden, ship outline of standing stones
The etymology of "Viking" is somewhat vague. One path might be from the Old Norse word vík, meaning "bay," "creek," or "inlet," and the suffix -ing, meaning "coming from" or "belonging to." Thus, a viking would be a "person of the bay," or "bayling" for lack of a better word. In Old Norse, this would be spelled víkingr. It may be noted that Viken was the old name of the region bordering on the Skagerrak, from where the first Norse merchant-warriors originated. Later on, the term viking became synonymous with "naval expedition" or "naval raid", and a víkingr was a member of such expeditions. A second etymology suggests that the term is derived from Old English wíc, i.e. "trading city" (cognate to Latin vicus, "village").
The word viking appears on several rune stones found in Scandinavia. In the Icelanders' sagas, víking refers to an overseas expedition (Old Norse fara í víking, "to go on an expedition"), and víkingr, to a seaman or warrior taking part in such an expedition.
In Old English, the word wicing appears first in the Anglo-Saxon poem "Widsith", which probably dates from the 9th century. In Old English, and in the writings of Adam of Bremen, the term refers to a pirate, and is not a name for a people or a culture in general. Regardless of its possible origins, the word was used more as a verb than as a noun, and connoted an activity and not a distinct group of individuals. To "go viking" was distinctly different from Norse seaborne missions of trade and commerce.
The word disappeared in Middle English, and was reintroduced as viking during 18th-century Romanticism (the "Viking revival"), with heroic overtones of "barbarian warrior" or noble savage. During the 20th century, the meaning of the term was expanded to refer not only to the raiders, but also to the entire period; it is now, somewhat confusingly, used as a noun both in the original meaning of raiders, warriors or navigators, and to refer to the Scandinavian population in general. As an adjective, the word is used in expressions like "Viking age," "Viking culture," "Viking colony," etc., generally referring to medieval Scandinavia.
There were two distinct classes of Viking ships: the longship (the largest also known as "drakkar", meaning "dragon" in Norse) and the knarr. The longship, intended for warfare and exploration, was designed for speed and agility, and was equipped with oars to complement the sail, making it able to navigate independently of the wind. The longship had a long, narrow hull and a shallow draft, in order to facilitate landings and troop deployments in shallow water. The knarr, on the other hand, was a slower merchant vessel with a greater cargo capacity than the longship. It was designed with a short, broad hull and a deep draft, and it lacked the oars of the longship.
Longships were used extensively by the Leidang, the Scandinavian defense fleets. The term "Viking ships" has entered common usage, however, possibly because of its romantic associations (discussed below).
In Roskilde are the well-preserved remains of five ships, excavated from nearby Roskilde Fjord in the late 1960s. The ships were scuttled there in the 11th century to block a navigation channel, thus protecting the city which was then the Danish capital, from seaborne assault. These five ships represent the two distinct classes of the Viking Ships, the longship and the knarr.
Longships are not to be confused with longboats.
See also 19th century Viking revival. Early modern publications, dealing with what we now call Viking culture, appeared in the 16th century, e.g. Historia de gentibus septentrionalibus (Olaus Magnus, 1555), and the first edition of the 13th century Gesta Danorum of Saxo Grammaticus in 1514. The pace of publication increased during the 17th century with Latin translations of the Edda (notably Peder Resen's Edda Islandorum of 1665).
The word Viking was popularized, with positive connotations, by Erik Gustaf Geijer in the poem, The Viking, written at the beginning of the 19th century. The word was taken to refer to romanticized, idealized naval warriors, who had very little to do with the historical Viking culture. This renewed interest of Romanticism in the Old North had political implications. A myth about a glorious and brave past was needed to give the Swedes the courage to retake Finland, which had been lost in 1809 during the war between Sweden and Russia. The Geatish Society, of which Geijer was a member, popularized this myth to a great extent. Another Swedish author who had great influence on the perception of the Vikings was Esaias Tegnér, member of the Geatish Society, who wrote a modern version of Friðþjófs saga ins frœkna, which became widely popular in the Nordic countries, the United Kingdom and Germany.
A focus for early British enthusiasts was George Hickes, who published a Linguarum vett. septentrionalium thesaurus in 1703–05. During the 18th century, British interest and enthusiasm for Iceland and Nordic culture grew dramatically, expressed in English translations as well as original poems extolling Viking virtues, and in increased interest in anything runic that could be found in the Danelaw, rising to a peak during Victorian times.
Nazism and Fascism
Similar to Wagnerian mythology, the romanticism of the heroic Viking ideal appealed to the Germanic supremacist thinkers of Nazi Germany. Political organizations of the same tradition, such as the Norwegian fascist party, Nasjonal Samling, used Viking symbolism and imagery widely in their propaganda. The Viking legacy had an impact in parts of Europe, especially the northern Baltic region, but in no way was the Viking experience particular to Germany. However, the Nazis did not claim to be the descendants of any Viking settlers. Instead, they appealed to the historical and ethnic fact that the Vikings were descendants of other Germanic peoples; this fact is supported by the shared ethnic-genetic elements, and cultural and linguistic traits, of the Germans, Anglo-Saxons, and Viking Scandinavians. In particular, all these peoples also had traditions of Germanic paganism and practiced runelore.
This common Germanic identity became, and remains, the foundation for much National Socialist iconography. For example, the runic emblem of the SS utilized the sig rune of the Elder Futhark, and the youth organization Wiking-Jugend made extensive use of the odal rune. This trend still holds true today (see also fascist symbolism).
Since the 1960s, there has been rising enthusiasm for historical reenactment. While the earliest groups had little claim for historical accuracy, the seriousness and accuracy of re-enactors has increased during the 1990s, including many re-enactment groups concentrating on an accurate representation of the Viking Age.
There is a common conception that the Vikings were very tall and large men. Ibn Fadlan and various European sources mention that the Vikings were of great stature. A number of modern studies show Vikings to have been on average between 66.3 in (168.4 cm) and 69.3 in (176 cm) tall. There is variation, and higher-ranking Vikings tended to be taller (likely due to better nutrition), but the Vikings were, compared to people of today, not exceptionally tall men. Compared to people in other parts of Europe at that time, the Vikings may have been above average in height.
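The quoted stature figures can be sanity-checked with a line of arithmetic; the short Python sketch below assumes nothing beyond the standard conversion factor of 2.54 cm per inch:

    # Check the inch-to-centimetre conversions quoted above.
    CM_PER_INCH = 2.54
    for inches in (66.3, 69.3):
        print(f"{inches} in = {inches * CM_PER_INCH:.1f} cm")
    # Prints:
    # 66.3 in = 168.4 cm
    # 69.3 in = 176.0 cm

Both results agree with the centimetre figures given in the studies cited above.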
Apart from two or three representations of (ritual) helmets with protrusions that may be either stylized ravens, snakes, or horns, no depiction of Viking Age warriors' helmets shows horns, and no actually preserved helmet has them. In fact, the formal close-quarters style of Viking combat (either in shield walls or aboard "ship islands") would have made horned helmets cumbersome and hazardous to the warrior's own side. Therefore it can be ruled out that Viking warriors had horned helmets, but whether or not they were used in Scandinavian culture for other, ritual purposes remains unproven. The general misconception that Viking warriors wore horned helmets was partly promulgated by 19th-century enthusiasts of the Götiska Förbundet, founded in 1811 in Stockholm with the aim of promoting the suitability of Norse mythology as a subject of high art, along with other ethnological and moral aims. The Vikings were also often depicted with winged helmets and in other clothing taken from Classical antiquity, especially in depictions of Norse gods. This was done in order to legitimize the Vikings and their mythology by associating it with the Classical world, which has always been idealized in European culture. The latter-day mythos created by national romantic ideas blended the Viking Age with glimpses of the Nordic Bronze Age some 2,000 years earlier, for which actual horned helmets, probably for ceremonial purposes, are attested both in petroglyphs and by actual finds (see Bohuslän). The cliché was perpetuated by cartoons like Hägar the Horrible and Vicky the Viking.
Despite images of Viking marauders who live for plunder, the heart of Viking society was reciprocity, on both a personal, social level and a broader political level. The Vikings lived in a time when numerous societies engaged in many violent acts, and, put into context, the doings of the Vikings are not as savage as they seem. Others of the period were much more savage than the Vikings, such as the Frankish king Charlemagne, who had 4,500 Saxons beheaded in one day (the Bloody Verdict of Verden), partly because they would not accept the Christian faith. In fact, the Vikings were not nearly as war-crazed as people tend to believe. Most were traders, although some did plunder, often targeting monasteries around Scotland, Wales and England, as these held many valuables in gold and silver.
In the 300-year period in which Vikings were most active, there were only approximately 347 recorded attacks, spread from the British Isles to Morocco, Portugal, and Turkey. This number is far smaller than most people seem to think. In Ireland, where the Vikings are most famous for attacking monasteries, there were only 430 known attacks during this 300-year period.
The use of human skulls as drinking vessels is also ahistorical. The rise of this myth can be traced back to Ole Worm's Runer seu Danica literatura antiquissima (1636), in which warriors drinking ór bjúgviðum hausa [from the curved branches of skulls, i.e. from horns] were rendered as drinking ex craniis eorum quos ceciderunt [from the skulls of those whom they had slain]. (Scandinavian skalli/skalle: skal means simply "shell" and skál/skål "bowl".) The skull-cup allegation may also have some history in relation to other Germanic tribes and Eurasian nomads, such as the Scythians and Pechenegs.
The image of wild-haired, dirty savages sometimes associated with the Vikings in popular culture is a distorted picture of reality. Non-Scandinavian Christians are responsible for most surviving accounts of the Vikings, and consequently a strong bias exists. This attitude is likely attributable to Christian misunderstandings regarding paganism. Viking activities were often misreported, and the work of Adam of Bremen, among others, told largely disputable tales of Viking savagery and uncleanliness.
However, it is now known that the Vikings used a variety of tools for personal grooming such as combs, tweezers, razors or specialized "ear spoons". In particular, combs are among the most frequent artifacts from Viking Age excavations. The Vikings also made soap, which they used to bleach their hair as well as for cleaning, as blonde hair was ideal in the Viking culture.
The Vikings in England even had a particular reputation of excessive cleanliness, due to their custom of bathing once a week, on Saturdays (as opposed to the local Anglo-Saxons). To this day, Saturday is referred to as laugardagur/laurdag/lørdag/lördag "washing day" in the Scandinavian languages, though the original meaning is lost in modern speech in most of the Scandinavian languages ("laug" still means "bath" or "pool" in Icelandic).
As for the Rus', who had later acquired a subjected Varangian component, Ibn Rustah explicitly notes their cleanliness, while Ibn Fadlan is disgusted by all of the men sharing the same vessel to wash their faces and blow their noses in the morning. Ibn Fadlan's disgust is probably motivated by ideas of personal hygiene particular to the Muslim world, such as running water and clean vessels. While the example was intended to convey his disgust about the customs of the Rus', at the same time it records that they did wash every morning.
Vikings and the Romanticist Viking Revival have inspired many works of fiction, from historical novels directly based on historical events like Frans Gunnar Bengtsson's The Long Ships to loosely historical fantasies like Michael Crichton's Eaters of the Dead to the outright silly, like Erik the Viking.
- Sweyne Forkbeard of Swansea - (the man who founded Swansea in Wales)
- Askold and Dir (legendary Varangian conquerors of Kiev)
- Björn Ironside (pillaged in Italy and son of Ragnar Lodbrok)
- Brodir the Dane (Danish Viking responsible for killing the High King of Ireland, Brian Boru)
- Egill Skallagrímsson (Icelandic warrior and popular skald, see also Egils saga)
- Eirik Blodøks (Erik Bloodaxe)
- Erik the Red (discoverer of Greenland)
- Leif Ericson (discoverer of America/Vinland, son of Erik the Red)
"date": "2015-04-01T17:51:34",
"dump": "CC-MAIN-2015-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131305143.93/warc/CC-MAIN-20150323172145-00058-ip-10-168-14-71.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9682604074478149,
"score": 3.90625,
"token_count": 7848,
"url": "http://www.cs.mcgill.ca/~rwest/wikispeedia/wpcd/wp/v/Viking.htm"
} |
English (Paper-A) Punjab University, Guess Papers,
For BA/BSc Examination, 2012
1. Explain with reference to the context the following passages.
1. No time to turn at Beauty’s glance,
And watch her feet how they can dance
2. Her trembling lakes, like foamless seas,
Her bird-delighting citron trees
In every purple vale!
4. Fall gently, snowflakes
Cover me with white
Cold icy kisses.
4. I will drink
Long draughts of quiet
As a purgation
(New Year Resolutions)
5. In the company of dog lovers,
The rebel expresses a preference for cats
6. I go in the rain, and more than needs
A rope cuts both my wrists behind
(Patriot into Traitor)
7. No nightingale did ever chaunt
More welcome notes to weary bands
8. They killed him with sword and spear
Then the skull opened its mouth,
Huntsman, how did you come here?
9. I listened, motionless and still;
And, as I mounted up the hill,
The music in my heart I bore,
Long after it was heard no more
10. Then practice losing farther, losing faster
Places and names and where it was you meant
11. Jealous in honour
Sudden and quick in quarrel
(All the world's a stage)
12. To hopeful eye of youth it still appears
A lane where the roses and hawthorns grow
(Arrival or Departure)
13. There is not any book
Or face of dearest look
That I would not turn from now
14. I cannot rub the strangeness from my sight
I got from looking through a pane of glass
(After Apple Picking)
15. So little happens, the black dog
Cracking his fleas in the hot sun
16. We slowly drove. He knew no haste
For his civility
(Because I could not stop for Death)
17. And when I feel, fair creature of an hour,
Never have relish in the faery power
Of unreflecting love!
(When I have fears)
18. I walked abroad
And saw the ruddy moon lean over a hedge
Like a red faced farmer
19. In the village churchyard there grows an old yew
Every spring it blossoms anew
Old passports can't do that, my dear
(Say this city has ten million souls)
20. Beware! Beware!
His flashing eyes, his floating hair
Weave a circle round him thrice
And close your eyes with holy dread
21. The fog comes
On little cat feet
It sits looking
Over the harbour and city
On silent haunches
And then moves on
22. And maybe what they say is true
Of war and war's alarms
But O that I were young again
And held her in my arms.
23. A state of mind… "Husband died seven months ago!" Must I pay the
interest or not? I ask you: must I pay, or must I not? Suppose your
husband is dead and you have got a state of mind and nonsense of that sort
and your steward's gone away somewhere, devil take him, what do you
want me to do?
24. Three times I have fought duels on account of women. I have refused
twelve women and nine have refused me. Yet there was a time when I
played the fool, scented myself, used honeyed words, wore jewellery, made …
25. People always say we’re the oldest and dullest family in the country.
Nothing ever happens to Sydneys. We never run away with men's wives or
their money or other things.
(Something to talk about)
26. Go out naked? No, you may not think it, my dear, but I do pay some
attention to respectabilities.
27. Can anyone fight life ____ successfully? Life is cunning and it's underhand,
and you fight straight yourself, and you fancy you are doing something
about it that's rather fine. But life is crooked and fights back ___ crooked.
28. Well, I will do what I can, sir. But breakfast at eight is the master’s rule
just as it used to be before you went away to the war.
(The Boy comes home)
29. It was an accident that I fell in love with John. I did not go man-hunting.
But I do say, Lucy, if I'd fallen for, well, for a Charles, you'd have had
the right to exert your influence. I mean every influence. But John is a …
30. Now, understand once and for all, Philip: while you remain in my house I
expect not only punctuality, but also civility and respect.
(The Boy comes Home)
Questions Based on Poems
1. William Davies in his poem 'Leisure' tells us that it is our own attitude
towards life that makes it so full of worries. How can we possibly avoid this?
2. How does the poet fulfil his dream of getting power on earth? The hero's
dreams are pure and innocent.
3. 'New Year Resolutions' is actually a piece of advice on leading a good life in
this world. (Message of the poem)
4. What are the three resolutions which Elizabeth Sewell wants to make at
the arrival of the New Year?
5. What are the dreams of the American woman as depicted in the poem
6. a) The protagonist in the Rebel is fake, not real.
b) Discuss the element of humour and satire in the poem. (The Rebel)
7. a) 'Patriot into Traitor' discusses the reversal of the fortune of a political leader.
b) Discuss the poem as a dramatic monologue bringing out the irony and
pathos of the situation.
8. Do you think that the huntsman is responsible for his own death?
9. How do feelings of anger, jealousy and hatred spoil human happiness in
'A Poison Tree' by William Blake?
10. What are the seven stages of man’s life?
11. The poetess views death as a gentle and comfortable companion to man.
Do you agree?
12. How does the apple picker pick the apples? What ideas haunt the farmer
(poet) when he is picking apples?
13. Draw a picture of the deserted village in your own words.
14. What are the views of Keats on Love and Beauty? (I have fears)
15. Discuss the fantastic and dream-like atmosphere of the poem. (Kubla Khan)
16. Why does the poet want to become young again? Answer in the light of the poem 'Politics'.
17. Why does the poet decide to kill the snake? (Snake)
18. Do you agree with the poetess that the art of losing is not hard to master?
How can we master this art?
Expected Short Stories
1.(a) Who are the killers? Why do they want to kill Ole Andreson?
'The Killers' is a thriller and crime story.
(b) Write short notes on
1. Ole Andreson 2. Nick Adam 3. George
2.(a) It is said about Rappaccini that he cares more for science than for
mankind. (Rappaccini as a scientist)
(b) Do you think that Beatrice is true and sincere in her love?
(c) Write short notes on the following:
1. Beatrice 2. Giovanni 3. Baglioni
3.(a) What were the expectations of Ustad Mangu on 1st April?
(b) Why did Ustad Mangu hate the English?
(c) What were the feelings of Ustad Mangu about 1st April?
(d) Ustad Mangu’s row with a Gora soldier.
4.(a) What is the real cause of conflict between Eva and Rosen?
(b) Do you think that Take Pity is a story of unusual heroism and
(c) Compare and contrast the character of Eva and Rosen.
5.(a) Steinbeck remarks, when he thinks of the breakfast taken with the cotton
pickers, 'It makes the rush of warmth.' What impressed him most?
(b) What has made the boss in the Fly so desperate?
(c) What does the fly mean in the story? (Symbolic meaning)
6.(a) The Happy Prince tells the Swallow, "There is no mystery as great as misery."
(b) Towards the end of the story, the mayor calls ‘Happy Prince’ little better
than a beggar. Discuss it.
(c) What is the role of swallow in the story?
7.(a) The necklace is a tragedy of a vain, proud and showy woman.
(b) She replaces the necklace at the cost of her life and marriage. Discuss
(Life after the loss of Necklace)
(c) She was born into a family of clerks; why then did she dream of large
drawing rooms and the like? What are her dreams?
8.(a) How does the story Duchess and the Jeweller reflect the moral decadence
of the English aristocracy?
(b) How did the Duchess deceive the Jeweller?
(c) The Duchess is more loathsome than the Jeweller.
9. Discuss the conflict and tension in the story 'The Shadow in the Rose Garden'.
10.(a) Why did the soldier kill the panther?
(b) Write a brief sketch of Lisby?
(c) Simon keeps the little willow as a token of Lisby’s love which gives him
strength till his death. (Importance of willow tree)
(d) Who was Simon Byrne? How did he fall in love with Lisby?
1.(a) Write a note on the comic element in the play 'The Bear'.
(b) How does Popva behave after the death of her husband?
(c) Write an account of the quarrel between Popova and Smirnov. (Title of the play)
(d) Write notes on
i. Smirnov ii. Popova
2.(a) How is the conflict between Philip and his uncle resolved?
(b) Write a character sketch of i. Philip ii. James
3.(a) What is the role of Bishop in the play “Something to talk about”?
(b) Wolf is a sheep in Wolf’s clothing.
(c) The play ‘Something to talk about’ is a comedy. Discuss it.
(d) Write notes on
i. Guy Sydney ii. Bishop iii. Lady Redchester
4.(a) Who is Prim Rose? Do you hate Prim Rose as an ill-mannered girl?
(b) What kind of marriage did Lucy have?
7. Why did Prim Rose decide to marry the ugliest man of the city?
A Selection of Modern Essays
1.(a) What were the religious and cultural differences between the Muslims and
the Hindus that ultimately led to the creation of Pakistan?
(b) How can America contribute to the progress of Pakistan?
(c) What are the views of Liaquat Ali Khan on freedom.
2.(a) Discuss in brief the writer’s comparison of the situation in England before
the solar eclipse, during and after.
(b) Give an account of the disaster that occurred on August 9, 1945 at Nagasaki.
3.(a) How did the writer behave during the destruction of Nagasaki by the atom bomb?
(b) How does the writer compare and contrast spring season with winter?
(Description of spring or Description of winter).
(c) Whistling of Birds shows its writer’s love of Nature.
4.(a) What details does Gloria Emerson describe about the operation of the
parachute, the descent and landing?
(b) Describe the journey of the writer to the moon.
5.(a) Write a short note on W.B. Yeats's grandfather. How was he different from others?
(b) Why didn’t the writer and his uncles think it wrong to outwit the
grandfather’s violence and rigour?
6.(a) Write a note on the importance of ‘saying please’?
(b) If bad manners are infectious, so are good manners. Discuss it.
(c) Compare and contrast the characters of the conductor and the liftman.
7.(a) What are qualities of a good guest?
(b) Comment on Beerbohm's statement: 'I take it that the virtue of hospitality
stands midway between churlishness and mere ostentation.'
8.(a) Who are other healers in society?
(c) ‘Doctoring is an art, not a science’. Discuss it.
9.(a) Why does Leacock regard the tailor as immortal? Describe the writer’s
feelings when he found that his tailor had died?
(b) There is, I am certain, a deep moral in this. But I will not try to draw it.
(c) What kind of character of the tailor does Stephen Leacock draw in 'My Tailor'?
10.(a) Discuss the factors responsible for the development of beauty industry.
(b) Why does Huxley compare a woman to porcelain jar?
11.(a) What is the dilemma of a bachelor as depicted in Herbert Gold's 'The Bachelor's Dilemma'?
(b) What are the views of E.M. Forster on (Tolerance)?
(c) What is the Nazi solution according to Huxley?
12.(a) What is gossip? How does it differ from information?
(b) What is the analytical aspect of gossip.
13.(a) Discuss which good things science can increase and which bad things it
(b) Discuss Chesterton’s offence for which he was interrogated by the
(c) Discuss the utility of vitamins for the human body.
The Old Man and the Sea
1. Why did Santiago not mind when other fishermen made fun of him?
2. 'There are many good fishermen and some great ones. But there is only
you,' says Manolin. Give an estimate of Santiago as a fisherman. (The old
man as a skilful fisherman)
3. How is Santiago different from other fishermen?
4. How did the old man hook the fish?
5. Which is the greater challenge for the old man: struggle with the fish or
fight with sharks?
6. "Have they beaten me?" he thought. "I am too old to club sharks to death.
But I will try as long as I have the oars and the short club and the tiller."
7. Who was the biggest challenge for the fisherman? Or: At the end of the
novel, the old man asks Manolin to prepare the fishing gear once again.
Why does he do so? Or: The old man in his struggle against the marlin
does not despair, because that would be worse than death.
8. How does Hemingway show that hope and confidence are two of the
pillars of success, along with faith? (Santiago's optimism)
9. Describe Santiago's trial of strength with the negro in the tavern at Casablanca.
10. What dreams did the old man dream in his sleep?
11. Write short notes on
a. Santiago’s justification for killing the fish.
b. Santiago’s love for Di Maggio.
12. The old man looks upon Manolin not only as his apprentice but also as his
friend and equal. (A character sketch of Manolin)
13. The Sea plays an important part in the novel. How?
14. The old man says, "Fish, I love and respect you, but I will kill you before
this day ends." Explain. (A character sketch of the fish)
15. What are the main principles by which the old man lives his life? (Santiago as a moral man)
"date": "2014-03-12T09:36:44",
"dump": "CC-MAIN-2014-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021587780/warc/CC-MAIN-20140305121307-00006-ip-10-183-142-35.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9319572448730469,
"score": 2.953125,
"token_count": 3519,
"url": "http://www.notesmela.com/english-paper-a-punjab-university-guess-papers-for-babsc-examination-2012/"
} |
Les Cenci (The Cenci) is Artaud’s only known play based on the guidelines of the Theatre of Cruelty. The play relates Artaud’s version of the story of the late-sixteenth-century Roman nobleman, Francesco Cenci, and his daughter Beatrice. Written in a style meant to overwhelm the audience’s moral preconceptions, The Cenci dramatizes the torture that the cruel Count Cenci invoked upon his family; the family’s plot to have him murdered; and the family’s torture and execution by Catholic authorities. On stage, The Cenci involves a spectacle of light and sound. Artaud directed and starred as Cenci in the original production of the play in 1935. The play shocks the audience not only because of its cruelty, violence, incest, and rape, but because its characters seem to speak strangely and artificially. This is because the theory behind the play, which is influenced by the surrealist movement and by Balinese dance theatre, calls for the characters to represent universal forces instead of realistic individuals. | <urn:uuid:b3639f0e-e187-489f-ae67-a4a9f32d6326> | {
"date": "2017-01-17T09:39:22",
"dump": "CC-MAIN-2017-04",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00276-ip-10-171-10-70.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9532214403152466,
"score": 2.984375,
"token_count": 228,
"url": "https://www.enotes.com/topics/cenci"
} |
I've always loved this Christmas song: "O Come, O Come Immanuel." This article made it relevant to me today.
This article was written by Dave Boehi for Family Life.
The music of Christmas is one of my favorite parts of the season. I listen to CDs with Christmas music in all kinds of styles—jazz, piano, harp, bluegrass, big band, classical. I’ve got everything from “Christmas in the Mood” to “A Music Box Christmas” to “White Christmas” with Martina McBride.
While driving to work recently, I found myself absorbed in the old hymn, “O Come, O Come Immanuel.” For some reason I thought, These are words that people need to hear today.
At a time of economic uncertainty and rising religious tension—and a time when many marriages and families are feeling the impact of these events—the words of this song speak of hope and joy:
O come, O come Immanuel
and ransom captive Israel
who mourns in lonely exile here
until the Son of God appear
Rejoice, rejoice, Immanuel
shall come to thee, O Israel
I’ve been thinking about the phrase, “… and ransom captive Israel, who mourns in lonely exile here …” When Jesus was born, God’s people literally lived in captivity—they were ruled by the Romans, and they were hoping for a Savior to free them. They wanted relief from their physical suffering.
And yet their captivity and exile was spiritual as well, for they had gone 400 years without hearing from God through prophets or through inspired Scripture. They were not experiencing the blessings of God’s guidance, provision, and presence.
So I find it interesting that, when Immanuel (which means “God with us”) finally did appear, He came as a baby born in lowly circumstances to a poor family. Jesus lived His entire life under the rule of an ungodly and despotic foreign power. And during His public ministry He focused on setting the people of Israel free from spiritual exile rather than physical captivity.
We are like Israel, in that we think our biggest problems are in the physical realm. On a big level, we want relief from economic hardship and terrorism. In our daily lives, we want relief from conflict with a spouse … from problems in raising children … from relational difficulties with parents or siblings or cousins … from an oppressive employer, or a hostile co-worker.
Yet our biggest problems are actually spiritual in nature. In a sense, we all mourn “in lonely exile” when we are not connected to God, when He is not “with us.”
Jesus did not come to liberate us from suffering, but to free our spirits as we go through the suffering that is part of life. He makes it possible for us to connect with God—to know Him personally. For those who have received Christ as Lord and Savior, the Holy Spirit lives within them to guide, comfort, and strengthen them, no matter what their circumstances.
Think of the people you know who have experienced trials and suffering over the last year. People who have lost loved ones, or felt betrayed by a spouse or someone they trusted, or experienced significant sickness or injury. Think of the suffering or heartache you’ve faced.
Aren’t you glad you have a Savior who experienced the same hardships, and suffered so that we could know God?
That’s why we should rejoice at Christmas time. It reminds us of Immanuel, the God who is with us. "Rejoice, rejoice, Immanuel shall come to thee, O Israel!" | <urn:uuid:53198be7-2e89-487d-85a2-fe0ec67f829f> | {
"date": "2015-01-28T09:09:48",
"dump": "CC-MAIN-2015-06",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122691893.34/warc/CC-MAIN-20150124180451-00153-ip-10-180-212-252.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9714177250862122,
"score": 2.53125,
"token_count": 765,
"url": "http://annelivinglife.blogspot.com/2009/12/my-favorite-christmas-song.html"
} |
An enormous intellectual vigor allowed him to follow up hypotheses without becoming wedded to them. Never a writer of small papers, he looked for the larger significance. It may be said that Coon's major contributions to science were the fruitful formulations that followed from his assimilation and organization of massive amounts of information.
PHYSICAL ANTHROPOLOGY: RACIAL ADAPTATIONS
Carleton Coon's The Races of Europe (1939) began as a revision of W. Z. Ripley's 1900 work but ended as a new opus that used every scrap of published information on living populations and prehistoric human remains — and much recorded history besides. Though some of Coon's hypotheses seem dubious today, they allowed him to structure a mass of material in a way that remains impressive. This book was reprinted some years later and is still regarded as a valuable source of data.
Coon's desire was to use Darwinian adaptation to explain the physical characteristics of race. He defined these as the physical features that distinguish modern populations and in 1950 published, with S. M. Garn and J. B. Birdsell, Races: A Study of the Problems of Race Formation in Man. He was exasperated by what he called the "hide-race" attitude of people who, from social or philosophical motives, seemed to deny the existence of obvious biological differences. He became indignant at any suggestion that his interest in race derived from racist motives. Although a good many articles had been written about environmental adaptation of such traits, this book was the first to address the problem as a whole.
After holding several serious ailments at bay for some years, Carl died on June 3, 1981, at his West Gloucester home, shortly before his seventy-seventh birthday. His brilliance left a lasting mark on a generation of anthropologists.
W. W. Howells. "Biographical Memoirs V.58". National Academy of Sciences, 1989. | <urn:uuid:51e4fd5b-068e-446f-9c21-2b7810259413> | {
"date": "2015-03-05T02:30:55",
"dump": "CC-MAIN-2015-11",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463679.20/warc/CC-MAIN-20150226074103-00109-ip-10-28-5-156.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9788559079170227,
"score": 3.609375,
"token_count": 396,
"url": "http://racialreality.blogspot.com/2011/01/coons-work-remains-valuable.html"
} |
High amounts of solid waste are mostly a problem in urban places. Rural communities usually produce less waste because they produce smaller amounts of manufactured goods and have lower populations. Solid waste can be harmful to the environment in countless ways. Though some waste will eventually rot, some will not; rotting waste smells and produces explosive, harmful methane gas. Solid waste can also lead to contaminated water, diseases from hazardous pathogens, and toxic gases such as hydrogen sulphide. In the past, the idea of managing solid waste was not common, because of the little impact that humans had on the environment in the 18th century. As time went on and nations became more industrialized, populations grew and cities were soon overcrowded. When people discovered what solid waste was doing to the environment, the lack of waste management became a large problem. As of now, the world generates 1.3 billion tons of waste each year, and the amount of globally produced solid waste is estimated to double by the year 2025. Since the amount of natural resources is depleting, the idea of managing waste is exceedingly common, and worldwide actions are being taken in order to reduce waste.

Nationally, China has made a series of attempts to reduce the amount of waste within its region, some effective and others failing to achieve their goals. In 2008, China banned the use of thin plastic bags and plastic foam, but enforcement was not strict enough to stop people from manufacturing those goods, and the prohibition was lifted in 2013. In that same year, China also established strict restrictions on electronic waste, which brought a cleaner atmosphere. In early 2017, China placed another ban on 24 types of solid waste materials being imported through China's seaports, which took effect in January 2018. This ban included all plastic materials, as well as waste paper and recyclables. China believes that waste from different countries is hazardous and will increase the amount of pollution in south-east Asian countries.

There have also been several actions taken internationally, and the UN has done many things regarding waste management. In 2016, the United Nations Environment Programme (UNEP) met in Beijing, China, to discuss waste disposal and other problems regarding waste at the 11th International Conference on Waste Management and Technology. UNEP also negotiated with 15 different countries to ratify the Minamata Convention on Mercury, which protects humans from toxic mercury emissions coming from waste. In 2012, UNEP met with several different governments from Africa and addressed the devastating e-waste problem in the countries of Africa. They formed a "Call To Action" initiative, which improved the problem by a large margin; it called on many of the governments to develop national systems to improve the collection, recycling, and disposal of many types of e-waste. UNEP also increased the number of governments addressing waste management by 50% in the last 5 years.

As the country with the highest population in the world, China is the largest generator of waste in the world, and its rate of waste production is rapidly increasing day by day. For many years, the world has exported billions of tons of waste to China in order to recycle it in China's facilities. Governments in the U.K. have relied on China's large recycling capacity as a way to process their own large amounts of waste.
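To put the headline figure above in perspective, a rough per-capita rate can be derived from it. The short Python sketch below is only illustrative: the 1.3-billion-ton annual figure comes from this essay, while the world population of roughly 7 billion is an assumption not stated in the text:

    # Back-of-the-envelope per-capita estimate from the figure quoted above.
    ANNUAL_WASTE_TONNES = 1.3e9  # 1.3 billion tons per year (from the essay)
    WORLD_POPULATION = 7.0e9     # assumed round figure; not given in the essay
    kg_per_person_per_day = ANNUAL_WASTE_TONNES * 1000 / WORLD_POPULATION / 365
    print(f"~{kg_per_person_per_day:.2f} kg per person per day")
    # Prints: ~0.51 kg per person per day

On those assumptions, the headline figure works out to about half a kilogram of solid waste per person per day.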
China is home to some of the world's largest landfills, in cities such as Shanghai and Tianjin, but many of them are predicted to reach their full capacity by 2020. Also, most landfills cannot stop harmful chemicals from seeping through the ground and contaminating groundwater. Incinerating waste, however, causes another set of problems. When waste is incinerated, it is burned at high temperatures until it is converted into residues and gas. Burnt plastics generate harmful substances such as dioxins and other gases, which cause pollution and acid rain. Fumes from incinerators also contribute to the pollution of China's big cities. Because of this, the Chinese government would prefer to incinerate waste in clean and safe ways, but the process would require substantial funding. Consequently, big cities like Beijing and Shanghai burn it cheaply in order to save money.

China believes that there are many measures that can be taken by governments around the world to reduce amounts of solid waste. First, creating laws that require waste-management companies to better sort recyclables from non-recyclable items would allow more discarded items to be recycled. Also, using more tax dollars to fund the clean incineration of waste would allow waste to be taken care of without harming the environment with toxic gases and substances. In addition, programs that require citizens to pay for their garbage are a proven way to decrease the amount of waste thrown away and increase the rate of items being recycled. Most importantly, educating students about the importance of reducing, reusing, and recycling will help raise awareness about waste management. Though the amount of waste in the world is rapidly increasing each day, a solution will appear with enough time and effort.
"date": "2019-12-11T02:45:10",
"dump": "CC-MAIN-2019-51",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540529745.80/warc/CC-MAIN-20191211021635-20191211045635-00496.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.8784291744232178,
"score": 4,
"token_count": 3228,
"url": "https://slumdoctor.org/high-lead-to-contaminated-water-and-diseases-from/"
} |
March 17, 1977 “Rails of the World: Paintings by J. Fenwick Lansdowne” opens at the National Museum of Natural History. The exhibition contains 42 paintings, representing 132 species of birds, combining art and science with meticulous realism in watercolors by the artist and naturalist. A widely distributed family of long-toed marsh birds, rails include coots, gallinules, crakes and soras. The paintings were created to illustrate the book “Rails of the World,” written by Smithsonian Secretary S. Dillon Ripley.
Posted: 17 March 2017 | <urn:uuid:d14639f3-33ac-4b91-b7e7-205686281f6c> | {
"date": "2017-11-24T00:12:09",
"dump": "CC-MAIN-2017-47",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807044.45/warc/CC-MAIN-20171123233821-20171124013821-00056.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9565115571022034,
"score": 2.59375,
"token_count": 124,
"url": "http://www.e-torch.org/2017/03/march-17-1977/"
} |