Wisconsin's climate is changing. Wisconsin's cities and towns must also change how they manage their water resources if they are to adapt to the increases in rainfall and groundwater elevation we are already seeing. The Stormwater Working Group has brought together Wisconsin water resource managers to find ways to reduce risk to our communities and improve our stormwater management infrastructure.

Recent analysis of historical data, combined with climate model downscaling, suggests that the southern Wisconsin precipitation events of 2008 are part of a trend toward wetter conditions and more intense rainfall. Climate models also suggest that increased winter snowpack and late-winter rainfall may result in high regional groundwater tables and lake levels, and saturated soil conditions.

Local and state governments and private-sector developers make significant investments in long-lived infrastructure that controls, or is affected by, stormwater runoff from large rainfalls. Likewise, municipal wastewater treatment plant operators make substantial long-term investments in system capacity that anticipate development, but not increased stormwater inflow and groundwater infiltration. This infrastructure is designed using standards based on rainfall data from the latter half of the 20th century. By having assumed "stationarity" of climate in the design of our infrastructure, we are now vulnerable to impacts from more intense rainfall events and elevated groundwater.

In summary, our previous investment in public safety and environmental protection risks being overwhelmed by precipitation impacts beyond those anticipated by past infrastructure designers and water resource managers. While recent analyses of regional climate and rainfall data have provided insights into changes in climate over the last several decades, our ability to anticipate future conditions, and to adopt appropriate adaptation strategies, will require more and better data about precipitation in Wisconsin.

There is a growing consensus that scientific knowledge about the potential increase in magnitude and frequency of large rainfalls is sufficient to warrant immediate changes in the methods used to design and manage stormwater-related infrastructure, and the Stormwater Working Group has identified specific steps toward that end.

Please contact Ken Potter or David Liebl if you have any comments or concerns regarding Wisconsin's stormwater and our changing climate, or if you would like to have someone from the Stormwater Working Group make a presentation to your group or town.
<urn:uuid:a1fb84cf-1ff3-42a0-9234-85a762df8750>
{ "date": "2015-10-08T21:53:42", "dump": "CC-MAIN-2015-40", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737904854.54/warc/CC-MAIN-20151001221824-00116-ip-10-137-6-227.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.93617182970047, "score": 2.65625, "token_count": 444, "url": "http://www.wicci.wisc.edu/stormwater-working-group.php" }
The History of First United Methodist Church of Glendale

The First United Methodist Church of Glendale was established in 1894, just two years after the original Glendale town-site was platted. The church has served the community without interruption since its early years, when it served Methodists from both Glendale and Phoenix. The first worship services were held at the Alhambra School, but people living in Glendale wanted a closer place to worship, and soon families were gathering at the present location. On October 4, 1894, Reverend J.A. Crouch was appointed as the first pastor for the Alhambra and Glendale circuit. The church was incorporated on June 18, 1897, and property was purchased for one dollar. It was at the present location that the congregation's first house of worship, a small white frame building, was built at a cost of $2,700. The First Methodist Church of Phoenix donated the windows, which were lined with paper that simulated stained glass. Two years later, in 1899, the church purchased adjacent property for a cost of one dollar.

The early 1900s proved very challenging for the church. In 1903, a carload of gunpowder exploded on the Santa Fe tracks two blocks from the church, breaking some of the church windows. Then, in 1904, the area experienced a severe drought and many families moved away. At its low point, the congregation was reduced to six families. Many pastors came and went as the church struggled to continue during this time.

In 1912, the Reverend David Roberts was appointed pastor and the church began to experience a period of rapid growth and greater involvement in the community. Reverend Roberts organized the Boy Scouts in Glendale. By 1917, the small frame church was deemed too small, and in September 1918, a committee was appointed to solicit funds to build a new church. Mr. M.L. Fitzhugh drafted the architectural plans for a new building, a grand two-story Gothic edifice, measuring 100 x 70 feet, with a three-story buttressed tower, that would occupy the full ground space at the northwest corner of present-day Glenn and 58th Drives. In the meantime, in March 1920, the small church was sold to the Seventh-Day Adventists and hauled away to their church site by tractor. The congregation met at the Woman's Club and the grammar and high schools while the new building was being constructed.

The church board rejected the bids on the original design, saying the cost of $60,000 to $80,000 was too high, but contracted to begin construction of the basement and Sunday school rooms. Because of cotton crop failures and other delays, the original cornerstone was engraved April 6, 1920, but was not laid until May 4, 1923. At the dedication on May 4, 1923, Reverend Dr. Atkinson, Arizona District Superintendent of the Methodist Episcopal Church, spoke regarding the church and its relation to the community. He said, in part: "The church means more than mere brick and mortar and a building to grace the street and town— it is a monument to the spiritual and moral worth of the community. It is an Educational institution, where youth learn not the three "Rs" but the greater truth of God and man. It is a social institution where fellowship is furnished in the highest form. It is an institution which the community cannot and will not get on without, therefore it is founded on something higher than mere material, it is founded on the Chief Corner Stone which is the Head of the Corner Jesus Christ" (The Glendale News – May 11, 1923).
At the time of the dedication, only the basement and first story were completed. The building was roofed, the brick walls were capped, and construction stopped. A miscalculation in the design would have rendered the building unsafe if it had been completed as planned. The congregation moved into this building in July 1923, and met in the basement for six years.

The present sanctuary was designed by architects G.A. Faithful and L.B. Baker, son of Bishop L.B. Baker of Los Angeles, and W.M. Mullen of Glendale was the contractor. Construction proceeded when money was available, and halted when money ran out. The cost of the sanctuary as listed in the January 27, 1929, church bulletin was $22,960.70. This included the cost of moving the parsonage and furnishing the new building. The new church was hailed as the most beautiful building in Glendale. A dedication took place on February 3, 1929.

The sanctuary was built of brick purchased from Dolan Brickyard on Grand Avenue. Granite columns and granite arches over the double-door entry were constructed, and a 50-foot tower rose on the northeast corner of the sanctuary. The church had no cooling system, but different businesses provided hand-held paper fans with advertisements. A furnace in the basement pushed warm air through long ducts and out a register on each side of the front of the church. Later, two stoves were installed, and finally a gas-fired furnace with a blower. After World War II, three big evaporative coolers, each rated at 20,000 c.f.m., were installed. It wasn't until a few years after the Fellowship Hall was built in the mid-1960s that the current system of heating and cooling was installed.

The church was challenged again in the 1930s with a decrease in membership, when the Great Depression caused many families to seek work elsewhere. During this period, there was talk of selling the property to the Church of Latter Day Saints. However, the sale was prevented by a number of members who made personal loans of $1,000 each to meet the mortgage payments. The women of the church also cooked and served Rotary Club dinners each week for eight years, donating all proceeds to the church fund. It was also in the 1930s that the church name was changed to First Methodist Church, eliminating the word Episcopal. Later, in 1968, when the Methodist Church and the Evangelical United Brethren Church merged, the church became known as the First United Methodist Church of Glendale.

A major change to the sanctuary occurred during the 1970s, when the original green opaque-glass church windows were replaced with thick, sand-cast, brilliantly colored glass windows that depict scenes of Jesus' ministry and illustrations of some of his parables. The windows on the north side of the sanctuary depict the teachings of Jesus, while the windows on the south side depict the experiences of Jesus. The windows were donated by church families in memory or in honor of loved ones. They were designed by Herbert Menke and fabricated by Judson Studios of Pasadena, California. The mahogany pews and the cylinder lamps that hang from the exposed wooden beams are some of the original furnishings of the church. And each Sunday morning, the peal of the bell in the bell tower calls us to worship.

The First United Methodist Church of Glendale is one of the oldest churches in Glendale. It was listed on the National Register of Historic Places in January 2006. However, as Reverend Atkinson pointed out in his message in 1923, a church is more than brick and mortar.
The church congregation, through its members involved in United Methodist Women, United Methodist Men, and other church groups, has served the community continuously since it was first chartered in 1894. Community programs supported by the church, such as the Westside Food Bank, the Glendale Family Development Center, Boy Scout Troop #62, the Brad Riner Assistance Office, Wesley Community Center, the New Day Center, Justa Center, the Phoenix Homeless Shelter, and Alcoholics Anonymous, have made a difference in our community. We have been proud to be part of these programs and strive to continue our support for them. This congregation also helped to start two sister churches in our conference, Epworth United Methodist Church and Trinity United Methodist Church. Additionally, mission work teams from our church have visited Mexico, Alaska, Africa, Australia, and Fiji.
<urn:uuid:558d955c-2d3c-4804-9bc2-d877a6cb6486>
{ "date": "2018-12-11T05:19:09", "dump": "CC-MAIN-2018-51", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823565.27/warc/CC-MAIN-20181211040413-20181211061913-00616.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9819143414497375, "score": 2.546875, "token_count": 1647, "url": "http://glendalefirstumc.com/our-history/" }
Image credit: CDC PHIL

With West Africa's Ebola epidemic spiraling out of control in Guinea, Sierra Leone and Liberia – and making inroads into Nigeria and Senegal – the expectation is that it is simply a matter of time before a few cases are 'exported' to countries outside of Africa – possibly including countries in North America and Europe.

Yesterday, Canada's PHAC published an Ebola-centric issue of their CCDR (Canada Communicable Disease Report), with overviews and guidance documents for health care providers. For readers interested in the PDF version, the document is available for download or viewing: CCDR: Volume 40-15, September 4, 2014 (PDF document - 665 KB - 1 page)

Special theme issue: Ebola preparedness in Canada

This issue is focused on steps that can be taken to prepare for the possibility of caring for a patient with Ebola virus disease (EVD) and provides links to key documents recently posted on the Public Health Agency of Canada website. This guidance is based on currently available scientific evidence and expert opinion and is subject to change as new information becomes available. It should be read in conjunction with relevant provincial, territorial and local legislation, regulations and policies. The guidance documents identified in this issue have been developed based on the Canadian situation and may differ from those developed by other countries. Clinical guidelines for Canada are in development and should be available in the near future.

What do health professionals need to know about Ebola? Be vigilant for the recognition, reporting and prompt investigation of patients with symptoms of Ebola virus disease (EVD) and other similar diseases that can cause viral haemorrhagic fevers.

Case definition and reporting

National Case Definition: Ebola Virus Disease (EVD) – Accurately identify patients who may be at risk of EVD.

Ebola Virus Disease Case Report Form – Submit this form to public health authorities in the province or territory where the EVD patient is receiving care (PDF document). Provincial/territorial health authorities will notify the Public Health Agency of Canada.

Interim Guidance – Ebola Virus Disease: Infection Prevention and Control Measures for Borders, Healthcare Settings and Self-Monitoring at Home – Establish appropriate precautions for patients who may have EVD. These may need to be adapted to local requirements.

Public Health Management of Cases and Contacts of Human Illness Associated with Ebola Virus Disease (EVD) – Ensure that potential EVD cases and contacts are accurately identified and managed to prevent future transmission of the disease.
<urn:uuid:54aa834f-cf3d-4c44-a5c4-b1ca30218eb3>
{ "date": "2017-03-23T23:55:01", "dump": "CC-MAIN-2017-13", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187227.84/warc/CC-MAIN-20170322212947-00411-ip-10-233-31-227.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9109773635864258, "score": 3.296875, "token_count": 518, "url": "http://afludiary.blogspot.com/2014/09/ccdr-ebola-preparedness-guidance-for.html" }
Open the box

I started with a rectangular piece of card a whole number of centimetres long by a whole number of centimetres wide, one of those numbers being a prime. Then I cut out an identical small square from each corner of the card and discarded the four squares. I folded up the four “flaps” on the remaining piece of card to form an open-topped box, choosing the size of the cut squares so that the volume of this open box was the biggest possible. Having made the box, I found that the length of its rectangular base was four times its width.

What were the dimensions of the original piece of card?

WIN £15 will be awarded to the sender of the first correct answer opened on Wednesday 17 October. The Editor’s decision is final. Please send entries to Enigma 1715, New Scientist, Lacon House, 84 Theobald’s Road, London WC1X 8NS, or to [email protected] (please include your postal address).

Answer to 1709 Not original: The alley is 189 centimetres wide

The winner: Richard Brookfield of Worcester, UK
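For readers who like to attack such puzzles numerically, here is a minimal brute-force sketch. Python is an assumption (the puzzle specifies no language), as are the readings that at least one side of the card is prime and that the 4:1 base ratio must hold exactly at the volume-maximising cut; it is offered as a way to explore the problem, not as the published solution.

```python
from math import sqrt

def is_prime(n):
    # Trial division is plenty for card sizes this small.
    return n > 1 and all(n % d for d in range(2, int(sqrt(n)) + 1))

def best_cut(length, width):
    # Open-box volume: V(x) = x * (length - 2x) * (width - 2x).
    # Setting dV/dx = 12x^2 - 4(length + width)x + length*width = 0 and
    # taking the smaller root gives the cut size that maximises the volume.
    s = length + width
    return (s - sqrt(s * s - 3 * length * width)) / 6

for length in range(2, 200):              # search cap chosen arbitrarily
    for width in range(2, length + 1):
        if not (is_prime(length) or is_prime(width)):
            continue                      # "one of those numbers being a prime"
        x = best_cut(length, width)
        base_long, base_short = length - 2 * x, width - 2 * x
        if base_short > 0 and abs(base_long - 4 * base_short) < 1e-9:
            print(f"card: {length} cm x {width} cm, cut squares of side {x:.4f} cm")
```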
<urn:uuid:576188cf-a404-4074-b916-a94bc9d65aeb>
{ "date": "2018-09-21T15:14:59", "dump": "CC-MAIN-2018-39", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157216.50/warc/CC-MAIN-20180921151328-20180921171728-00456.warc.gz", "int_score": 3, "language": "en", "language_score": 0.938660204410553, "score": 2.90625, "token_count": 242, "url": "https://www.newscientist.com/article/mg21528821-400-enigma-number-1715/" }
The NASA/ESA Hubble Space Telescope has snapped a view of several star generations in the central region of the Whirlpool Galaxy (M51), a spiral galaxy 23 million light-years from Earth in the constellation Canes Venatici (the Hunting Dogs).

The galaxy's massive center, the bright ball of light in the center of the photograph, is about 80 light-years across and has a brightness of about 100 million suns. Astronomers estimate that it is about 400 million years old and has a mass 40 million times that of our Sun. The concentration of stars is about 5,000 times higher than in our solar neighborhood in the Milky Way Galaxy. We would see a continuously bright sky if we lived near the bright center.

The dark "y" across the center is a sign of dust absorption. The bright dot in the middle of the "y" has a brightness of about one million suns, but a size of less than five light-years. Its power and its tiny size suggest that we have located the elusive central black hole that produces powerful radio jets.

Surrounding the center is a much older stellar population that covers a region of about 1,500 light-years in diameter and is at least 8 billion years old, and may be as old as the Universe itself, about 13 billion years. Further away, there is a "necklace" of very young star-forming regions, clusters of infant stars younger than 10 million years, which are about 700 light-years away from the center. Normally, young stars are found thousands of light-years away from the center.

Astronomers believe that stars in the central region were formed when a dwarf companion galaxy - which is not in the photograph - passed close to it, about 400 million years ago, stirring up dust and material for new star birth. The close encounter has been felt for a long time and is believed to be responsible also for the unusually high star formation activity in the bright necklace of young stars.

The color image was assembled from four exposures taken Jan. 15, 1995 with the Wide Field Planetary Camera 2 in blue, green, and red wavelengths.
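As a side note for readers curious about that last step, combining separate filter exposures into a colour picture is conceptually simple. The sketch below (Python with NumPy, Astropy and Matplotlib) shows the general idea for any three-filter data set; the file names and the percentile stretch are illustrative assumptions, not the actual WFPC2 processing used for this release.

```python
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt

def stretch(img, lo=1.0, hi=99.5):
    # Rescale one exposure to the 0..1 range using percentile clipping,
    # so faint structure stays visible without saturating the bright core.
    vmin, vmax = np.percentile(img, [lo, hi])
    return np.clip((img - vmin) / (vmax - vmin), 0.0, 1.0)

# One 2-D image per filter; these file names are hypothetical placeholders.
red = fits.getdata("m51_red.fits").astype(float)
green = fits.getdata("m51_green.fits").astype(float)
blue = fits.getdata("m51_blue.fits").astype(float)

# Stack the three stretched frames into an (ny, nx, 3) RGB array and save it.
rgb = np.dstack([stretch(red), stretch(green), stretch(blue)])
plt.imsave("m51_core_rgb.png", rgb)
```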
<urn:uuid:9880b705-5e0d-49a8-b8a0-eb067ffce476>
{ "date": "2015-05-25T15:45:45", "dump": "CC-MAIN-2015-22", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928520.68/warc/CC-MAIN-20150521113208-00254-ip-10-180-206-219.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9493542313575745, "score": 3.75, "token_count": 474, "url": "http://hubblesite.org/newscenter/archive/releases/1996/17/image/a/" }
The government says the U.S. economy grew at a much faster pace last year than previously estimated. The revised growth figures signal a more sustainable economic recovery and help explain why job growth has accelerated this year. The economy expanded at a 2.8 percent annual rate in 2012, up from a previous estimate of 2.2 percent. Consumers and businesses spent more and governments cut back on their spending less.

The updated growth figures reported Wednesday by the Commerce Department are part of comprehensive revisions going back several decades. The upgrade to 2012 growth helps resolve a disparity that has puzzled economists. Hiring picked up late last year and has remained solid this year. The economy has created more than 200,000 jobs a month on average since last fall. Yet the government had said that economic growth was tepid last year. Faster growth typically drives more hiring. The previously reported growth figures had economists worried that employers would eventually have to slow hiring. But growth is now closer in line with the job gains, a sign that the more robust hiring may endure.

The revisions are part of comprehensive changes, made roughly every five years, to the nation's gross domestic product. GDP is the broadest measure of the output of goods and services and includes everything from restaurant meals to television production to steel manufacturing. The revisions alter the data all the way back to 1929, though the largest changes were made to the past five years. Still, the economy's broad trends are roughly the same as before. The government now says the economy shrank 4.3 percent during the recession, which lasted from December 2007 through June 2009. That's better than the previous estimate of a 4.7 percent decline. But it still remains the deepest downturn since the Great Depression. And the recovery is still subpar. The economy expanded 8.2 percent from June 2009 through the end of last year, up from 7.6 percent, but still the weakest recovery since World War II.

Most of the change in growth rates stems from newly available and updated data from agencies such as the Census Bureau and Internal Revenue Service. Many monthly surveys of manufacturing, retail and other businesses are updated with more comprehensive annual reports. The department has also made substantial alterations in how it defines GDP. Those changes have increased the size of the economy, through 2012, by 3 percent, or $560 billion. They include:

-- Research and development spending is counted as an investment, rather than an expense. That's because it is similar to other investments, such as factories, industrial machinery and housing, Commerce officials say. R&D can have long-lasting benefits and be used in the production of other items.

-- Spending on entertainment and the development of movies, books, music and TV shows is counted as investment. That's because these products can generate sales and profits for years after they've been produced. Only long-lasting TV shows, such as sitcoms and dramas, are counted as investment. Reality shows and game shows, which have shorter shelf lives, aren't.

-- Future pension benefits promised by governments and private companies are counted as income. Previously, only cash payments by companies and government agencies into their pension plans counted as income. This change boosted Americans' savings rate by about 1.5 percentage points in 2011 and 2012 -- to 5.6 percent and 5.7 percent, respectively.
The government says the change better reflects the retirement plans of those Americans with pensions and is less subject to manipulation than the cash payments. Economists say that treating R&D spending as investment recognizes the critical role that intangible assets, such as patents or other intellectual property, now play in the U.S. economy. "It brings the GDP accounts out of the dark ages and into the 21st century," says Joe Carson, an economist at AllianceBernstein. Carson notes that much of the value of a smartphone is in the design, rather than the manufacture of the product, and that wasn't fully captured under previous calculations.
<urn:uuid:be482492-6c65-491e-9ac3-fbc0e4b58fcd>
{ "date": "2015-11-29T16:08:14", "dump": "CC-MAIN-2015-48", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398458553.38/warc/CC-MAIN-20151124205418-00208-ip-10-71-132-137.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9731276631355286, "score": 2.5625, "token_count": 805, "url": "http://www.dailyherald.com/article/20130731/business/707319907/" }
RISK ASSESSMENT

Suitable premises, environment and equipment

Outdoor and indoor spaces, furniture, equipment and toys must be safe and suitable for their purpose.

Specific legal requirements

The provider must conduct a risk assessment and review it regularly – at least once a year or more frequently where the need arises. The risk assessment must identify aspects of the environment that need to be checked on a regular basis: providers must maintain a record of these particular aspects and when and by whom they have been checked. Providers must determine the regularity of these checks according to their assessment of the significance of individual risks. The provider must take all reasonable steps to ensure that hazards to children – both indoors and outdoors – are kept to a minimum.

Statutory guidance to which providers should have regard

The risk assessment should cover anything with which a child may come into contact. The premises and equipment should be clean, and providers should be aware of the requirements of health and safety legislation (including hygiene requirements). This should include informing and keeping staff up-to-date. A health and safety policy should be in place which includes procedures for identifying, reporting and dealing with accidents, hazards and faulty equipment.

Childminders are continually assessing the risks in their surroundings, whether this is at home or when out and about, at toddler groups, parks etc. Every morning childminders check to ensure their homes are safe for caring for children, but most don't document this, so they have no proof that it has been done. Under the EYFS requirements, childminders now need to record their assessments on a regular basis. We have produced some templates that we hope will make completing this task a little easier. There is a detailed risk assessment for the home, along with a set of templates covering a variety of outings.

The Guidance booklet (page 17) states: 'The Statutory Framework for the Early Years Foundation Stage requires providers to conduct a risk assessment and review it regularly. It is essential that children are provided with safe and secure environments in which to interact and explore rich and diverse learning and development opportunities. Providers need to ensure that, as well as conducting a formal risk assessment, they constantly reappraise both the environments and activities to which children are being exposed and make necessary adjustments to secure their safety at all times. Providers must ensure that the premises, indoors and outdoors, are safe and secure. This should include appropriate measures such as including indoor and outdoor security as part of any assessment made. For example, ponds, drains, pools or any natural water should be made safe or inaccessible to children. Staff should be aware which doors are locked or unlocked, how to use door alarms and security systems, intercoms and name badges.'

It then provides a list of the areas a good risk assessment will look at. We have used this list to develop a downloadable sheet for you to adapt and use.
<urn:uuid:459f772b-4802-46e8-bc9a-5c99bbd60fc3>
{ "date": "2014-03-11T02:21:00", "dump": "CC-MAIN-2014-10", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011094911/warc/CC-MAIN-20140305091814-00006-ip-10-183-142-35.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9371700882911682, "score": 2.9375, "token_count": 625, "url": "http://www.bromleycma.org.uk/eyfs/risk.html" }
A normal period lasts between 3 and 5 days. During this time, a woman will lose about 80ml of blood, which is roughly a third of a cup. But there are women who have very heavy periods too. The medical term for heavy bleeding during your period is menorrhagia, and this is not as uncommon as you think. About 20% of the female population of the world is afflicted by menorrhagia. This is when there is bleeding that amounts to more than 80ml. But of course, this is not something you'd figure out by measuring the blood loss. However, every woman has an idea of what amounts to a heavy period.

Symptoms of Heavy Period

- You bleed for more than 7 days, and this occurs more than three months in a row.
- You have to change your tampon or sanitary pad more than five times a day. Women with heavy periods typically have to change every couple of hours, sometimes every hour as well.
- Your flow is so strong that it interferes with your way of life. You cannot go to work or attend social occasions. If you have to change your plans very often because you have your period, or it gets so bad that you have to plan your life around your period, then you have an abnormal flow.
- A heavy flow is usually accompanied by extra period pain. You may also feel excessively tired and emotionally fatigued. You may experience extreme depression or get extremely touchy.

Most women think that heavy bleeding is normal, that there are women with light flows, medium flows and heavy flows, and that they fall into the last category. However, this is not true. Menorrhagia is a medical condition and it can be treated. So if you relate to any of the above symptoms, then you should see your doctor. All women will have a heavy period at some point in their lives. Let's take a look at some of the reasons for a heavy period.

Causes of Heavy Period

- Teens can sometimes have unusually heavy periods as their hormones are still stabilizing. The same goes for women nearing menopause. The hormonal changes that both these groups of women are going through will affect their period flow.
- Contraceptives such as the IUD (intrauterine device) normally cause heavy bleeding. The heavy discharge of blood gradually lessens in a few months. But if the heavy period continues for more than 5-6 months, you should consult your gynecologist.
- The older you get, the greater the chances that the cause of your heavy period is an underlying health condition. For example, heavy bleeding may be a symptom of the following diseases:
– Pelvic inflammatory disease (PID)
– Fibroid tumors in the uterus
– Dysfunctional uterine bleeding (DUB)
– Polyps in the cervix
– Infection of the cervix or uterus
- You may have a bleeding disorder that prevents clotting. This can cause blood to flow constantly as the body is unable to clot it and slow it down. Many women have likened it to opening a tap and feeling a gush of blood.

If you have a recurring heavy period, then it is imperative to see a gynecologist so that you can figure out the cause as soon as possible and take curative steps. As is obvious from the above list, the conditions that cause a heavy flow should not be taken lightly, as they can be debilitating or even fatal.

Complications of Heavy Period

- Although it is not common, there have been cases where women suffering from a prolonged heavy flow during their period become anemic due to depletion of iron in the body.
- In some cases, heavy bleeding has been known to reduce immunity. Affected women were more likely to develop colds or be affected by whatever bug happened to be making the rounds.
- The most frequent complication is that of severe depression. Having a heavy flow causes fatigue and stress not just to the body but to the mind as well. This can in turn affect moods, levels of concentration, and the ability to focus during the simplest tasks.

Risk Factors of Heavy Period

So how do you know if you are more or less likely to suffer from a heavy period at some point? Well, there are some factors that can put you at a higher or lower risk of developing menorrhagia.

Age: The older you get, the greater the chances that you will have heavy periods. This is a frequent occurrence in women approaching menopause, which is when the body is undergoing massive hormonal changes. So if you are over 40, you can expect to be part of the 25% of women in that age group who have heavy bleeding during their period. Another reason your age is a risk factor is because women over the age of 40 are more likely to develop fibroids which, as we have seen earlier, are one of the medical conditions that cause heavy bleeding. A pelvic exam and an ultrasound will be necessary to rule this out.

Pregnancy: Research has shown that women who have had kids will have greater blood loss than women who have never had children. The reason for this is unknown.

Genetics: This is a pretty obvious risk factor. If your mom had heavy periods, then you will probably inherit the condition as it is already encoded in your genes.

Lifestyle: Do you smoke? Then you are putting yourself at risk for period problems even if you do not qualify with any of the above risk factors. Unhealthy habits like smoking literally change the chemistry of your body, so the effects can be pretty long-lasting. Do refer to a separate article on Harmful effects of smoking on women.

Remedies for Heavy Period

The first thing you should do if you have heavy periods is to see your gynecologist. This cannot be stressed enough. A personal visit to a doctor is the only thing that will ensure an accurate diagnosis. It is only then that the underlying condition can be treated properly.

- There is existing medication to reduce blood flow during your period. If you bleed heavily due to a hormonal issue, then you will be prescribed tablets that replenish the supply of that particular hormone in your body. A widely used hormonal tablet is actually the contraceptive pill. It is usually prescribed along with other hormonal tablets to teens and pre-menopausal women. If there is some other condition that is causing heavy periods, then you will be given medication to treat the same.
- If the problem cannot be cured with tablets, then the only option left is surgery.
- A hysterectomy is often suggested to women who are nearing menopause. This is a procedure by which either a part or the whole of the uterus is removed. A woman will have no more periods after this.
- A less extreme step is dilation and curettage, or D&C. D&C involves scraping the lining of the uterine wall to thin it down. However, this is not a long-term solution. It may help reduce the level of bleeding for a few months, but you will have to figure something else out after that if the problem persists.
- Believe it or not, something as simple as taking it easy can work wonders for heavy bleeding. Most of us spend our lives multi-tasking and worrying about what else needs to be done. We therefore lose our connection to our bodies and that's when things go awry. If you can just learn to manage stress better or cut down on the number of things you do on a daily basis, your body will respond by taking care of you better.
- If you prefer treating your body with natural remedies, then try apple cider vinegar. Mix 3 tablespoons of apple cider vinegar into about 8 oz of water and drink at least twice a day. Women all over the world have sworn that this works. It also helps relieve symptoms of PMS such as cramping, moodiness and fatigue.
- There are also several teas and concoctions that help with excessive bleeding. Cinnamon, yarrow, and coriander seeds in particular are known to be quite effective.
<urn:uuid:728a8292-30ca-42d7-bb0a-751717713976>
{ "date": "2018-02-22T05:05:18", "dump": "CC-MAIN-2018-09", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814002.69/warc/CC-MAIN-20180222041853-20180222061853-00256.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9637789726257324, "score": 3.046875, "token_count": 1670, "url": "http://www.glamcheck.com/health/2011/07/30/heavy-period-symptoms-causes-remedies/" }
BALTIMORE, MARYLAND, January 29, 2014 - Robust partnerships between rural community health education centers and academic health care institutions can make substantial strides toward addressing race-, income- and geographically-based health disparities in underserved communities by empowering both the community and leading university institutions, according to newly published research from the University of Maryland School of Medicine.

University of Maryland medical researcher Claudia R. Baquet, MD, MPH, and her team examined 17 years of a partnership between the University of Maryland School of Medicine and a rural health education non-profit on Maryland's Eastern Shore, the Eastern Shore Area Health Education Center (ESAHEC). The research team found that rural communities were more willing to participate in clinical trials and biospecimen donations when long-term partnerships were established between university medical centers and local community health education centers. The paper was published in the most recent issue of Progress in Community Health Partnerships: Research, Education and Action, a journal published by Johns Hopkins University Press.

"Maryland's Eastern Shore has a rich history of ethnic, racial and cultural diversity in its communities," says Dr. Baquet, professor and associate dean for policy and planning and director of the Center for Health Disparities at the University of Maryland School of Medicine. "The Eastern Shore represents populations with unique health disparities that are amenable to targeted interventions," she says. But like many rural regions in the state, the Eastern Shore has unique needs when it comes to health care. "Its residents have higher rates of cancer and chronic disease than those who live in urban areas," she says. "Furthermore, the area lacks public transportation systems to take patients to and from health care. It also has a growing number of older residents who are Medicare-eligible yet are not aware of the services available to them."

The researchers, who formed this partnership, envision that the partnership will become a model for other programs throughout the country, fostering community-engaged research, particularly among rural communities. The partnership between the ESAHEC and the School of Medicine is funded by grants from the National Cancer Institute's (NCI) Center to Reduce Cancer Health Disparities (CRCHD) and the NIH National Institute on Minority Health and Health Disparities (NIMHD).

"Dr. Baquet's research is representative of the kind of study the NCI Center to Reduce Cancer Health Disparities has been promoting since the Center's inception over a decade ago," says Sanya A. Springfield, PhD, CRCHD's director. "It's gratifying to see Dr. Baquet's research reflect how a model of mutual respect and trust can lead to community empowerment, a refocus on healthy lifestyle behaviors, and increased willingness to participate in clinical trials and biospecimen donation among our underserved communities. These are all essential components to building greater capacity, eliminating disparities, and advancing the science of cancer health disparities," Dr. Springfield said.

The researchers describe the relationship between the School of Medicine's Office of Policy and Planning and the ESAHEC, a nonprofit funded by the Health Resources Services Administration and the Maryland health department.
The goal of the ESAHEC is to use educational partnerships to help address shortages in primary care and specialty health professionals in the nine rural counties that make up the Eastern Shore of Maryland. "This ongoing research partnership with the ESAHEC is special in its truly bi-directional nature, in which both partners participate fully in the research process and each benefits from the other's expertise," says Dr. Baquet.

ESAHEC's educational programs include a Bioethics Mini-Medical School for the public and other research and health-topic training for the rural community, as well as continuing education for established health care professionals and a health career education track for children from Grade 8 through Grade 12. The Eastern Shore ESAHEC is one of three partnerships in Maryland and 255 in the nation. Outreach that educates the community and community health professionals about health care and research is core to the issue of improving health care outcomes, increasing access and addressing health disparities, says Baquet. Increasing public trust in research is another major benefit of the program.

Examples of the types of research jointly conducted by the partners include: research on barriers to clinical trial participation, strategies to address biospecimen donation for future research purposes, telehealth training, bioethics barriers to research participation, and patient navigation to cancer screening for rural and urban communities.

"We are hoping that our outreach will encourage greater community trust in academic researchers and greater participation in research, allowing academics to better understand and address the issues facing rural communities," Baquet says. "In turn, we hope that academic health center faculty will become more culturally competent, responsive to community needs and expertise and will learn to include community organizations as meaningful partners in their research. This model has a higher potential for sustainability than the approach that we call 'helicopter research,' where academics conduct studies but do not share their results or return any benefit to the community."

"We do have a truly bidirectional partnership," says Jeanne Bromwell, co-author of the article and deputy director and continuing education coordinator at the ESAHEC. "Dr. Baquet respects our role in the community and we very much respect her knowledge and contacts through the School of Medicine. People here often look at academics as outsiders. With our contacts down here, we are able to bring Dr. Baquet's expertise to the community in a way that does not make them feel threatened. It is a phenomenal relationship."

The research results are used to develop new programs to educate community members about bioethics and the benefits of clinical research, easing their concerns and suspicions about such studies. Community members have participated in research examining the use of community health workers as patient navigators for cancer screening for African-American patients. They also participated in research that found that telehome care patient monitoring of home health patients with certain chronic diseases improves outcomes. The partnership with the medical school also has provided the ESAHEC access to critical funding for which it would not otherwise be eligible.

The program continues to form bonds between the School of Medicine and its students and the residents and health professionals on the Eastern Shore, in keeping with the School's mission, says E. Albert Reece, MD, PhD, MBA, vice president for medical affairs of the University of Maryland and John Z. and Akiko K. Bowers Distinguished Professor and dean of the School of Medicine. "The School of Medicine's mission reaches well beyond Baltimore, throughout the state of Maryland, the nation and, indeed, the world," says Dr. Reece. "We hope that our incredibly valuable partnership with our colleagues on the Eastern Shore will serve as a model for other academic medical institutions across the country, creating a new future for the health of America's rural residents."

About the University of Maryland School of Medicine

Established in 1807, the University of Maryland School of Medicine was the first public medical school in the United States, and the first to institute a residency-training program. The School of Medicine was the founding school of the University of Maryland and today is an integral part of the 11-campus University System of Maryland. On the University of Maryland's Baltimore campus, the School of Medicine serves as the anchor for a large academic health center which aims to provide the best medical education, conduct the most innovative biomedical research and provide the best patient care and community service to Maryland and beyond. http://www.

About the Eastern Shore Area Health Education Center

The Eastern Shore Area Health Education Center (AHEC) is a private, non-profit, 501(c)(3) organization, which became operational in 1997. Governed by a 16-person Board of Directors, AHEC serves the nine counties comprising Maryland's Eastern Shore. AHEC's goal is to increase the number of health care providers who provide services in rural and underserved areas and eliminate health disparities among diverse populations of the Eastern Shore by providing and coordinating programs that improve the health status of all. In the face of rapidly changing demographics in the region, AHEC leverages federal, state and local resources to support health careers promotion, health professions student rotations in underserved rural communities, continuing education, and community health promotion activities.
<urn:uuid:0d1590cc-9a88-410a-80df-c527f651fe04>
{ "date": "2014-12-20T14:26:20", "dump": "CC-MAIN-2014-52", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769894.131/warc/CC-MAIN-20141217075249-00011-ip-10-231-17-201.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.959032416343689, "score": 2.609375, "token_count": 1681, "url": "http://www.eurekalert.org/pub_releases/2014-01/uomm-uom012914.php" }
And after nightfall, in Greek and Trojan camp alike, the warriors gather around their campfires. They eat and drink after a long hard day, recalling their battles. The elation as a spear hit home, taking down their man. Claiming his armour, the spoils of war. The narrow misses, a spear whistling past their head to strike the ground behind them. Or perhaps into a comrade.

They think on the men who fell beside them, spirits flown off to Hades' halls. Or the men wounded on the field, being tended now. How many of those wounded will step out onto the battlefield at their sides tomorrow? A scream pierces the quiet. How many of those wounded will not survive the night?

They think of their parents, their wives and children. Will the Trojans keep them safe behind Troy's high walls? Will the Greeks ever return home to see their families again? One man speaks of his son, just a baby when he left home. He is ten years old today. He does not know what his son looks like. Will he ever see his boy grow into a man?

Around the campfires they polish armour so that it will gleam bright in the sun. Pray it flash bright in the eyes of their enemies and distract them for that one precious second they need to launch a spear. Sharpen swords and spears, make sure the wood is still strong. Check over their shields and count the new dents and scratches where it saved their lives. Ask Zeus that it shall do so again tomorrow.

Then they sleep, or try to. Some cannot. There is too much that haunts their minds. Others, long years and many battles before this have trained them well to shut out all thoughts of war and sleep what few hours they can until rosy-fingered dawn calls them again to battle.

Dawn. Once a thing of beauty and wonder, of calm, now only holds dread anticipation. In the morning they will prepare. Don their bronze armour with the clasps of silver, helmets with the horse hair crests. They will swallow their fear under the rush and din of battle. They cannot afford to let fear and thoughts of families distract them now. As they ride out onto the field upon their chariots, they can think only of battle. The rolling chariot wheels. The weight of a long-shadowed spear in their hands, ready to let fly. The men coming across that wide plain towards them. The war cry ringing from their throats, echoed back by the enemy.

And so too do we storytellers prepare for our day on the battlefield of Troy. We polish our words so that they will fly straight and true to the audience. Excitement and anticipation may war within us, but we will try to sleep well tonight, to be ready when rosy-fingered dawn summons us. And we will still our minds, cast aside all fears of forgotten lines, feel only the weight of the spear in our hand and the weight of Homer's words upon our lips.
<urn:uuid:f04e2bb8-fe88-4876-8a39-c0eb068f0493>
{ "date": "2017-10-21T08:17:40", "dump": "CC-MAIN-2017-43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824675.67/warc/CC-MAIN-20171021081004-20171021101004-00756.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9701482653617859, "score": 2.546875, "token_count": 627, "url": "https://nllavigne.wordpress.com/2014/06/" }
In 1986, Alston Chase published his book Playing God in Yellowstone. In it, he argues that Yellowstone had never, in historic times, been in a truly natural state. He points out that when the early European and Anglo explorers first entered Yellowstone, fires were burning everywhere. The fires, he tells us, were set by the Indians in Yellowstone to drive game into areas where they could be trapped and killed. In setting those fires, the Indians were actually controlling and contributing to the ecological balance of the region.
<urn:uuid:bc33095c-5f2c-4d82-8ca8-5d2441ddeeb0>
{ "date": "2017-10-23T04:21:22", "dump": "CC-MAIN-2017-43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825575.93/warc/CC-MAIN-20171023035656-20171023055656-00156.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9723373055458069, "score": 3.03125, "token_count": 108, "url": "http://digitalcommons.unl.edu/podimproveacad/180/" }
John McLean served as associate justice on the U.S. Supreme Court for thirty-two years, one of the longest tenures in the history of the Court.

"In the [Dred Scott v. Sandford] argument, it was said that a colored citizen would not be an agreeable member of society. This is more a matter of taste than law … [for] under the late treaty with Mexico we made citizens of all grades, combinations, and colors."

McLean was born on March 11, 1785, in New Jersey but was raised primarily near Lebanon, Ohio, where his father staked out land that later became the family farm. McLean attended a county school and later was tutored by two schoolmasters, Presbyterian ministers, and paid them with money he earned working as a farm hand. In 1804, at the age of nineteen, he began working as an apprentice to the clerk of the Hamilton County Court of Common Pleas in Cincinnati and also studied law with Arthur St. Clair and John S. Gano, two distinguished Cincinnati lawyers. In 1807 McLean was admitted to the bar, married, and returned to Lebanon to open a printing office. He began publishing the Lebanon Western Star, a partisan journal supporting the Jeffersonian party. Three years later McLean gave his newspaper and printing business to his brother to concentrate full-time on the practice of law. At the same time, McLean, who had been raised Presbyterian, converted to Methodism, an experience that would have a strong impact throughout his life. He was active in church affairs and wrote articles about the Bible, and in 1849 he was named honorary president of the American Sunday School Union.

In 1812, after a year serving as examiner in the U.S. Land Office in Cincinnati, McLean was elected to the U.S. House of Representatives at the age of twenty-seven and was reelected two years later. During his two terms in the House, McLean was a staunch supporter of President James Madison and his efforts to wage the War of 1812. McLean, unhappy with the salary paid to members of Congress and wanting to be closer to his wife and children, chose not to run again in 1816 and returned home. Back in Ohio, McLean easily won election to one of four judgeships on the Ohio Supreme Court, a demanding position that required him to "ride the circuit," or hear cases throughout the state.

In 1822 McLean was again drawn to politics and made an unsuccessful bid for the U.S. Senate. Shortly after McLean lost the election, President James Monroe appointed him commissioner of the General Land Office in Washington, a direct result of McLean's earlier hard work to secure Monroe's nomination for the presidency. The position meant a large increase in salary and led to McLean's appointment the next year to the position of postmaster general. During his six years as postmaster general, McLean expanded the number of routes and deliveries, established thousands of new post offices, and increased the size of the U.S. Postal Service to almost 27,000 employees. Though he served as postmaster under John Quincy Adams, McLean used his considerable political skills to establish ties with Andrew Jackson, who defeated Adams for the presidency in 1828. As a result, McLean was appointed to the U.S. Supreme Court, winning confirmation easily. McLean remained interested in politics during his tenure on the Supreme Court and was even seriously considered as a nominee for the presidency at several national conventions, though his name was withdrawn from consideration each time.
His last bid came in 1860, a year before his death, when he was one of the Republican party's candidates. The nomination instead went to Abraham Lincoln.

While an associate justice on the High Court, McLean wrote a number of significant opinions, including a strong dissent in the Dred Scott case of 1857 (Dred Scott v. Sandford, 60 U.S. 393 (Mem), 19 How. 393, 15 L. Ed. 691). In Dred Scott, a slave sued his master for freedom after he had been taken to live on free soil for several years. The Supreme Court held that African Americans could not be U.S. citizens and that Congress could not pass legislation preventing slavery. McLean, however, who had long opposed slavery, argued that Congress could exclude slavery from the territories and could also liberate slaves living in "free" states.

McLean's most significant majority opinion came in 1834 in Wheaton v. Peters, a dispute between two of the Court's reporters of decisions (33 U.S. 591, 8 Pet. 591, 8 L. Ed. 1055 (Mem.) (U.S. Pa., Jan. Term 1834)). Richard Peters sought to republish decisions that had previously been published by Henry Wheaton, his predecessor. Wheaton, worried that he would sell fewer opinions and thus lose profits, sued Peters, alleging copyright infringement. McLean, writing for the Court, held that the opinions were in the public domain and thus no copyright had been violated.

Though McLean enjoyed a long and distinguished career as a jurist, his personal life was less happy. Three of his four daughters died young, as did a brother, and he also lost his first wife in 1840. He and his second wife had one son who died only a few weeks after birth. Though McLean's own health began to fail as early as 1859, he continued to serve on the Court until his death from pneumonia on April 4, 1861.
<urn:uuid:eeaf17cc-2f3d-428c-be32-4c70f80debdf>
{ "date": "2019-07-21T13:58:20", "dump": "CC-MAIN-2019-30", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527000.10/warc/CC-MAIN-20190721123414-20190721145414-00536.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9870165586471558, "score": 2.90625, "token_count": 1213, "url": "http://legal-dictionary.thefreedictionary.com/McLean%2C+John" }
by Leah Smiley

As schools are seeing more students of color in the classroom, they must begin to think critically about educational reform and ensuring that all students achieve greatness, regardless of their individual differences. It is my proposition that educational reform, then, includes offering curriculum and instruction with:

1. Accurate examples of diverse contributions to U.S. History
2. Opportunities for students to learn more about inspirational diverse role models
3. Positive reinforcement pertaining to experiences with individuals who are different

So why do teachers claim to instruct students in this area of diversity using examples of slavery, rap music, and sports? I'll tell you why– because they don't know any better. They watch too much TV; read too many online blogs about people of color robbing, stealing and doing drugs; and they listen to other teachers who have biases. This results in teachers "stereotyping" the black experience. That's right, I said it: stereotyping. Stereotyping is so much easier than doing a little work to distinguish facts from fiction.

As an example, in 2011 there were a few hundred blacks that became successful rappers or athletes– just a few hundred. On the contrary, according to the Small Business Administration, as of 2011 there are over 2 million black-owned businesses. In a population of 40 million plus, isn't it more likely that a black person would be a business owner than a rapper or athlete? Additionally, the unemployment rate for blacks is 11% (according to the Department of Labor)– which of course is much higher than for other groups, but that still leaves the question: what about the other 89% of people?

African Americans are commonly linked to welfare, public assistance and poverty. But the reality is that roughly 25% of the population lives below the poverty line, according to the U.S. Census Bureau. This figure could be caused by a number of factors, including unemployment; underemployment (where you have a job, but don't make enough to support your family); and bad choices resulting in prison records from illegal activities. But that still means that 75% of the population is not in poverty.

Here are some other facts:

– Washington, DC, which is affectionately called Chocolate City because there are so many black folks, has the highest personal income per capita in the nation, according to the U.S. Census Bureau.
– Queens County, New York is the only county with a population of 65,000 or more where African Americans have a higher median household income than White Americans. Nevertheless, there are dozens of cities and suburbs around the country with affluent and highly educated blacks, such as Atlanta, GA; Prince Georges County, MD; Willingboro, NJ; and more.

I bring these facts up because it is necessary to educate our students about these issues so that they are not under the assumption that what they read online and see on TV is true. Instead, teachers should instruct students to question images that are in the media or that they learned from their parents. Not only does this build a critical skill that is invaluable in the business world, but it helps them to view individuals who are diverse as individuals.

Furthermore, if teachers do a little more research, they may find out that their self-fulfilling prophecy (or notion that diverse parents don't care about education) is also a stereotypical myth.
Perhaps if YOU changed your mind, you would get better results in the area of parental participation– and you just may get some help in teaching students about diversity.
<urn:uuid:f08be2e9-d34a-4632-9073-7152ab2af799>
{ "date": "2017-11-23T07:42:13", "dump": "CC-MAIN-2017-47", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806760.43/warc/CC-MAIN-20171123070158-20171123090158-00256.warc.gz", "int_score": 4, "language": "en", "language_score": 0.965240478515625, "score": 3.625, "token_count": 737, "url": "https://societyfordiversity.wordpress.com/tag/educational-reform/" }
|Zits by Jerry Scott & Jim Borgman (King Features Syndicate)|

While the science of geology has relentlessly made new discoveries and breakthroughs, from plate tectonics to mass extinctions to evidence of ancient climate change, the fundamental concepts that geologists use for these discoveries, such as uniformitarianism (what happens in the present, like lakes slowly filling up with silt, also happened in the past) and superposition (lower layers of rock are older than the ones above them), have been accepted and remained relatively static since at least the late 19th Century. So it's not too often that we get new terms in geology that describe the layers of rock (strata) that are subject to uniformitarianism and superposition. So it's significant, at least within the somewhat insular world of the geoscience community, that a new term has just been proposed - the xenoconformity.

If I've got it right (and chances are good that I don't), a xenoconformity is an interval in the rock strata that represents a fundamental, abrupt, and persistent change in the environmental conditions in which the strata were deposited. As an example, consider the sediment that was filling the lake. Imagine that as the layers of sediment were deposited on the lake floor, sulfur from a nearby volcanic eruption acidified the lake water, causing the extinction of the fish within. In this example, the lower layers of sediment would have lots of fish bones (fossils) and the usual minerals that form in normal pH conditions, while the upper layers of sediment would be devoid of fossilized fish and have low-pH minerals. The transition between the two sets of strata would be a xenoconformity.

For the record, the term was introduced by Alan Carroll of the University of Wisconsin-Madison in the July 2017 issue of Geology (so it's pretty recent), and seconded, as it were, by Galen Halverson of McGill University in the same issue.

So if you want to appear hip, cool, and oh so au courant, especially among geologists, just casually drop the term into a conversation. As it's always been our top priority here at Water Dissolves Water that our readers look cool, here are some examples of what you could say to get you started:

"I'm not as worried about our loss of global leadership from pulling out of the Paris Accords as I am how we will explain the inevitable xenoconformity to future generations."

"Son, if you don't change the filter in that fish tank soon, you'll have a major xenoconformity in the bottom of the aquarium."

"Any more bourbon in that glass of lemon juice and you won't have a whiskey sour so much as an out-of-control xenoconformity."

We think you're starting to get the hang of it, so we'll let you take it from here. Happy geologizing!
<urn:uuid:ed6cb498-800d-450a-9731-e4654809791f>
{ "date": "2017-07-28T06:48:17", "dump": "CC-MAIN-2017-30", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549448095.6/warc/CC-MAIN-20170728062501-20170728082501-00096.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9669560194015503, "score": 3.078125, "token_count": 608, "url": "http://shokai.blogspot.com/2017/07/the-geology-of-georgia-part-whatever-of.html" }
A little over a week ago, Google patented a new technology related to books; what they call the “Storytelling Device.” You may agree with me that this could be a creative title for books in general, and while books are part of this new patent, the way the actual story is read inside of the book is being taken to another level. A digital, transcendent level. Basically, this new technological invention would add audio, visual projections, light sources, and the like through an electronic connection with a book. The point? “To provide story enhancement effects that are correlated to the interactive book.” Give me a break. I am very curious about our society’s obsession with turning books into “interactive,” digital mediums. Guess what: books are interactive! You know what makes them interactive? Your imagination! Understandably, the type of technology Google has patented would have entertainment value, and could be beneficial to students in classrooms who perhaps are better visual learners, or for students who are blind or deaf and need different materials than their classmates. And while I can see more benefits for textbooks that can become outdated quickly, what about fiction and storybooks? Will this technology be more cost effective and therefore a better tool than a digital projector or tablets? Should we not look at what we have now rather than look at what we could have? Maybe my obsession with keeping books the way they are – bound and made of paper – is clouding my judgement. Maybe I’m afraid that if too much technology is used at home, at school, and in early education, that books will fall by the wayside. I feel like an older generation person complaining about how things have changed since “the good old days.” Maybe, I just need to remember that although approximately 83% of teachers who use technology in their instruction are probably like the teachers I had, or know now, and want to offer their students the best education for the best life, that they explore many types of learning; hardcover and paperback books included. Maybe, while protecting my beloved bound books, I can still be tolerant of new advances for covering a more broad learning spectrum in favor of all children being able to and wanting to learn. You may have guessed my opinion of e-readers by this back and forth, but I suppose I could welcome textbooks and math books, new storybooks and lessons to be projected digitally as I shield my precious printed books from going extinct – which they won’t be any time soon, Google (as long as adult coloring books continue being popular, at least)!
<urn:uuid:24417573-148b-4770-ab68-285c5a158007>
{ "date": "2020-01-21T03:12:19", "dump": "CC-MAIN-2020-05", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250601241.42/warc/CC-MAIN-20200121014531-20200121043531-00256.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9678362607955933, "score": 2.765625, "token_count": 538, "url": "https://theressomethingaboutkm.com/2016/03/14/when-a-book-becomes-a-non-book/" }
Remember More with Rhymes and Acronyms We recall things through memory association. Every piece of data in our brain is associated to other pieces of data in a way. For instance, if you have given the word “banana”, what comes into your mind? Maybe something like: Banana: yellow, elongated, sweet, monkey, fruit However, it is very unlikely that we might see “banana” and think of “coconut” unless of course if you recall something humorous in which an elephant looks like a banana. If you are asked what is the third letter of the alphabet, chances are you will not know that C = 3, but you could easily stream into A, B, C, and then you can say C. You have used association to get the answer because you already know that A is the first letter of the alphabet, and then you have processed the series of letters in the sequence until you get the right letter. If there is no possible association of certain things, it will be very difficult to recall the information you need. As an example, suppose you need to recall that the next bus would take off at 4 PM, there is nothing that you can associate the bus that would suggest the number 4. Therefore, the information can be easily forgotten. If our memory works by association, then we can employ active association between two bits of data. As an example, for the bus that you need to catch at 4 PM, you can picture your bus in your mind and notice that it has 4 wheels. Four wheels, 4 PM. Now you have an association. You now have more chance to remember the time after it has instilled in your memory. There are times when memory association comes very easily. As an example, if you are introduced to Mr. Brown who lives on a house near the end of the street with a brown roof, the idea of Mr. Brown under the brown roof is pretty easy. And what if you need to try recalling your classroom number for a Sociology class, and it just turns out that it’s the same as your locker number. Another easy association! When pieces of data are not associated in any way, we should be more creative in relating things to each other. However, it is not too hard. Most people can learn rhymes or acronyms that can help you remember things such as: – I before E except after C. When it sounds like A as in neighbor and weigh – ROY G BIV for recalling the colors of the rainbow in proper order – N-E-W-S (compass directions) Rhymes and acronyms work because they form an easy-to-remember method of relation between the two things. The rule of thumb is to be creative and imaginative. You do not have to be a poet every time you want to recall something. Just think of an image in your mind that associates to a piece of information. It could be something humorous or funny so that it would be more memorable. As an example, if you need to recall that the basketball court is in Raisin Street. You can imagine a basketball player baking raisin cookies! Just be creative and rest assured that you will have a sharper memory.
<urn:uuid:b3e12856-76c7-4624-b659-5739c1b0744e>
{ "date": "2019-07-22T07:55:16", "dump": "CC-MAIN-2019-30", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527828.69/warc/CC-MAIN-20190722072309-20190722094309-00496.warc.gz", "int_score": 4, "language": "en", "language_score": 0.962329089641571, "score": 4.15625, "token_count": 674, "url": "http://makeyourbrainfast.com/can-improve-memory/" }
(AFP) - Scientists unveiled Wednesday a complete genetic panorama of microbes in the human digestive tract - an advance that could help cure ailments like ulcers and inflammatory bowel disease (IBD).

"This completely changes our vision," said Stanislav-Dusko Ehrlich, a researcher at France's National Institute for Agricultural Research, after the study was published in the journal Nature.

Knowing which core bacteria populate a healthy intestine should lead to more accurate diagnosis and prognosis for diseases ranging from ulcers to IBD to Crohn's, which also causes painful inflammation, he said.

"In the future, we should be able to modify the (microbial) flora to optimise health and well being," he told AFP. "This also opens up the possibility of prevention through diet, and treatments tailored to a person's genetic and microbial profile."

More than 100 researchers working over two years found some 3.3 million distinct genes spread across at least 1,000 species of single-celled organisms, virtually all bacteria.

"The study is a blueprint," said co-author Jeroen Raes, a scientist at Vrije University in Brussels. "The vast majority of bacteria found were not known before. But now we can start sorting out what they do in terms of function, and how they might relate to disease," he told AFP.

The intestinal census was carried out on 124 adults - some healthy, others obese or suffering from IBD - from Denmark and Spain. Using new DNA sequencing techniques, scientists gathered a mass of data equivalent to 200 complete human genomes, Raes said.

The number of bacteria discovered is double many previous estimates. But the big surprise was not the diversity, said researchers, but the fact that most humans - despite different diets and environments - appear to share a sizeable least common denominator of microbial flora. Previous studies had suggested that there was relatively little overlap, especially from different corners of the globe.

Each individual in the study had at least 160 different species of micro-organisms, adding up to more than half-a-million separate genes, the researchers found. About 40 percent of these genes were shared with at least half of the other volunteers.

There are 10 times more microbes in the body than there are human cells, with trillions of bacteria concentrated in the mouth, skin, lungs and especially the gut. Microbes are essential to health, helping to break down indigestible foods, activate our immune system, and produce vitamins. But recent research also points to previously unsuspected roles in obesity, heart disease and intestinal disorders such as Crohn's disease.

The new research also sets a benchmark in the methods used to sift through billions of bits of genetic code.

"This enormous sequencing effort - the largest of its kind - was made possible by the use of novel technologies," said Raes. With the so-called Illumina Genome Analyser "you get huge bags of very, very small bits of DNA," he explained. "Putting that puzzle back together again is an enormous task. Many people believed that it would not be possible."

Much of the sequencing was done by a team at the Beijing Genome Institute.
<urn:uuid:6f5f7a2c-ceeb-4c27-b015-e9fc93a8a146>
{ "date": "2017-01-18T00:03:08", "dump": "CC-MAIN-2017-04", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00230-ip-10-171-10-70.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9563184380531311, "score": 3.109375, "token_count": 693, "url": "http://www.independent.co.uk/life-style/health-and-families/health-news/study-raises-hope-for-gut-disease-cures-5526156.html" }
A short history of Malawi's maize seed and fertilizer project called Malawi's Input Subsidy Programme.

Malawi's smallholder farmers enjoyed subsidies on fertilizer and maize seed until the mid '90s—international pressure notwithstanding—and could access credit and sell produce at supported prices to the state agricultural board (Admarc). According to a 2007 New York Times article written by Celia W Dugger: "Malawi's leaders have long favoured fertilizer subsidies, but they reluctantly acceded to donor prescriptions, often shaped by foreign-aid fashions in Washington, that featured a faith in private markets and an antipathy to government intervention."

However, by 1993, the smallholder agricultural credit system was facing collapse, according to the editor of Starter Pack: A Strategy to Fight Hunger in Developing Countries, Sarah Levy. In 1994, Malawi became a democracy with the election of President Bakili Muluzi, and the liberalisation of agriculture accelerated, said Levy. The removal of all government subsidies coincided with market volatility, deteriorating terms of trade and credit scarcity, and smallholder production began to tip downwards.

"It was here that the Farm Inputs Subsidy Programme (Fisp) had its roots," said the logistics chief of the Malawian government's current inputs subsidy programme, Charlie Clarke. It followed "in the thinking of Charles Mann, a Harvard economist, and professor Malcolm Blackie of the Rockefeller Foundation, who were looking at new ways to develop agriculture". He said that Harry Potter, the agricultural adviser for the United Kingdom's department of international development, worked with these ideas in 1998, when he developed a programme called Starter Pack, aimed at distributing a small pack of farm inputs to every Malawian household.

After his election in 2004, President Bingu wa Mutharika and his government expanded the programme into something called the Targeted Inputs Programme, and then further expanded it into what is today called the Farm Inputs Subsidy Programme, which distributes bigger input amounts—100kg of fertilizer, 5kg and 7.5kg bags of maize seed, and a 2kg bag of legume seed—to around 1.5-million beneficiaries, depending on the resources available in a particular year.

The major donor from the outset, and at times the only donor, has been the British government, as the US government took the view that subsidies were undermining their own efforts to promote the role of the private sector in delivering fertilizer and seed. However, a source who was involved in the development of Starter Pack says that the objectives of the British government were not that different from those of the US.

"I've always thought Starter Pack had two objectives: one was certainly to develop smallholder farmers, but there was another slightly more suspect motive, and it concerned something called the smallholder farmer fertilizer revolving fund, which is the state entity responsible for bringing the bulk of fertilizer required for the current farm inputs programme into the country.

"The fund was established during the Mozambique war by the European Union because they were afraid that Malawi was going to become cut off from its nearest sea ports, and that fertilizer was going to run scarce. They built huge storage units in different parts of the country and set about stockpiling fertilizer.
The result was a massive rolling stock of fertilizer, which inhibited the growth of a private fertilizer industry, because private investors feared the release of the stock would be used to manipulate prices. Thus the deal struck between donors and the Malawi government at the start, and which is still reflected in the way that the programme is structured today, was that, while the donors would fund the project costs, the Malawi government would contribute fertilizer. While many people involved had the noblest of motives, there’s no doubt that the donors were more interested in seeing Malawi empty out its fertilizer stores, thereby clearing the way for private industry, the source said. Fisp is entering its seventh consecutive year, and is credited with major surges in the national maize crop. In 2011, in spite of the fact that fuel and forex are scarce, resulting in late delivery of fertilizer and seed required for Fisp, many stakeholders anticipate another successful programme. For more information check out our new site African8.
<urn:uuid:fdbadd6a-8df9-40c0-b9cc-df7ae9810a75>
{ "date": "2014-04-17T09:48:04", "dump": "CC-MAIN-2014-15", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00371-ip-10-147-4-33.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9659970998764038, "score": 3.265625, "token_count": 889, "url": "http://mg.co.za/article/2011-10-20-history-of-malawis-input-subsidy-programme" }
Researchers are naturally interested in knowing how to attract more citations to their papers. Publishing the results of good work helps of course, but everyone knows there are many other factors. Nature news reports on research by Gregory Webster that analyzed the 53,894 articles and review articles published in Science between 1901 and 2000. The advice the study supports is "cite and you shall be cited".

A long reference list at the end of a research paper may be the key to ensuring that it is well cited, according to an analysis of 100 years' worth of papers published in the journal Science. The research suggests that scientists who reference the work of their peers are more likely to find their own work referenced in turn, and the effect is on the rise, with a single extra reference in an article now producing, on average, a whole additional citation for the referencing paper.

"There is a ridiculously strong relationship between the number of citations a paper receives and its number of references," Gregory Webster, the psychologist at the University of Florida in Gainesville who conducted the research, told Nature. "If you want to get more cited, the answer could be to cite more people."

A plot of the number of references listed in each article against the number of citations it eventually received reveals that almost half of the variation in citation rates among the Science papers can be attributed to the number of references that they include. And — contrary to what people might predict — the relationship is not driven by review articles, which could be expected, on average, to be heavier on references and to garner more citations than standard papers.
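In regression terms, the two figures quoted above are a slope of roughly one extra citation per added reference and an R-squared of about 0.5 (the "almost half of the variation" claim). The sketch below is purely illustrative: it fits a simple linear regression to synthetic reference/citation counts, not to Webster's actual Science dataset, just to show how those two quantities are computed.

```python
# Illustrative only: synthetic data standing in for the Science corpus,
# not Gregory Webster's dataset. The noise level is an assumption chosen
# so the fit lands near the slope (~1) and R^2 (~0.5) quoted above.
import random

random.seed(0)

# Fake (references, citations) pairs: citations = references + noise.
papers = []
for _ in range(1000):
    refs = random.randint(5, 60)
    cites = max(0.0, refs + random.gauss(0, 15))
    papers.append((refs, cites))

n = len(papers)
mean_x = sum(r for r, _ in papers) / n
mean_y = sum(c for _, c in papers) / n

sxx = sum((r - mean_x) ** 2 for r, _ in papers)
syy = sum((c - mean_y) ** 2 for _, c in papers)
sxy = sum((r - mean_x) * (c - mean_y) for r, c in papers)

slope = sxy / sxx                    # extra citations per added reference
r_squared = sxy ** 2 / (sxx * syy)   # share of citation variance explained

print(f"slope     = {slope:.2f} extra citations per added reference")
print(f"R-squared = {r_squared:.2f}")
```

Run on data generated this way, the slope comes out near one and the R-squared near one half, which is all that the reported relationship between reference-list length and citation counts amounts to.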
<urn:uuid:5c29fc28-80f3-4af8-9761-041b4ad4843b>
{ "date": "2017-12-18T20:19:48", "dump": "CC-MAIN-2017-51", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948623785.97/warc/CC-MAIN-20171218200208-20171218222208-00656.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9586678147315979, "score": 3.328125, "token_count": 331, "url": "http://ebiquity.umbc.edu/blogger/2010/08/15/papers-with-more-references-are-cited-more-often/" }
The U.S. government should seek to convert some civilian research reactors currently using weapons-grade highly enriched uranium (HEU) fuel to lower-enriched HEU fuel as an interim step on the way to fueling the reactors with a new kind of low-enriched uranium (LEU), according to a new report from a committee of the National Academies of Sciences, Engineering, and Medicine. The report, which was released on Jan. 28, also recommends that the White House develop a 50-year strategy that evaluates future U.S. civilian needs for neutrons and “how these can best be provided by reactors and other sources that do not use highly enriched uranium.” The National Nuclear Security Administration (NNSA) said it had “concerns” about the recommendation for an interim conversion step. In a Feb. 24 email to Arms Control Today, a spokesperson for the NNSA, a semiautonomous agency of the Energy Department, said the recommendation “runs counter to” the U.S. policy of pursuing “the minimization of the civilian use of HEU globally.” Research reactors fueled by weapons-grade HEU continue to be used to produce neutrons for research and other civilian applications. Since 1978 the United States has sought to minimize and where possible eliminate the use of HEU in civilian research reactors. Currently, 74 such reactors remain to be converted or shut down, according to the report. Eight reactors in the United States still use HEU fuel. Weapons-grade uranium is enriched to 90 percent uranium-235, and LEU is enriched to less than 20 percent. Converting reactors to LEU use would reduce the risks that this material could be used to make a nuclear explosive device. The report expressed concern that the conversion of the remaining reactors is taking much longer than originally envisioned. It noted that the U.S. goal in 2004 was to complete the conversions by 2014 but that the goal today is to complete them by 2035. “There are significant technical and nontechnical obstacles associated with eliminating HEU from civilian research reactors,” the report said. One obstacle has been that making the higher-density LEU fuel required to convert high-performance research reactors in the United States and Europe has turned out to be much more difficult than anticipated. The report warned that the new fuel might not even be available by 2035. Consequently, in order to reduce the use of weapons-grade HEU in high-performance reactors, the report called for conversion of these reactors to the use of fuel that has an enrichment level of 25 to 45 percent. All but one of the nine high-performance reactors in the United States and Europe that use weapons-grade HEU could be converted in this manner in approximately five years, the report said. Reducing the fuel enrichment to 45 percent “cuts the attractiveness” of the fuel for use in a nuclear explosive device “by about 40 percent,” compared to 90 percent enrichment, according to the report. The report calculated that, by pursuing this approach for the eight reactors, the use of approximately 3,400 kilograms of weapons-grade HEU could be avoided until the reactors are converted to run on LEU in roughly 12 to 17 years. In a Feb. 3 posting on the website of Harvard University’s Belfer Center for Science and International Affairs, William Tobey, a senior fellow at the center and a member of the National Academies panel that produced the report, wrote that the interim-step recommendation “should in no way be construed as lack of support for eliminating civilian use of HEU. 
Rather, it is recognition of the fact that estimates of when conversion would be possible have changed, and an effort to minimize risk in light of those changes.” But another expert disagreed with the recommendation, expressing a concern similar to the NNSA’s about the impact on the global effort to phase out civilian use of HEU. In a Feb. 24 email, Alan Kuperman, a political scientist at the University of Texas, said one problem with the National Academies’ approach is that it “would reduce the prospect of converting existing high-performance research reactors to LEU fuel since their operators would resist the expense and inconvenience of having to convert twice.” The most significant nontechnical obstacle to converting the remaining HEU-fueled civilian reactors identified by the report is that more than 40 percent of these reactors are located in Russia. The report said that “conversion of its domestic research reactors is not a high national priority for Russia” and noted that “Russian-U.S. collaboration on research reactor conversion...has all but ceased during the past year” due to the downturn in relations between the two countries over Russia’s actions in Ukraine. Due to the downturn, the NNSA is no longer planning to support the conversion of 41 reactors in Russia, according to the department’s fiscal year 2017 budget submission. This figure includes both civilian reactors and reactors used to power Russian icebreaker ships. The National Academies report recommended that the United States seek to engage “Russian scientists and engineers to continue scientific exchanges and interactions that formed the basis for previous progress in....HEU minimization,” primarily in countries that used to be members of the Soviet Union. In 2012, Congress tasked the National Academies with assessing the progress toward eliminating all worldwide use of HEU in research reactor fuel and medical isotope production facilities. The January report examined the status of conversion of research reactors to LEU use. Another report examining the status of medical isotope production without HEU targets is to be issued later this year. Julia Phillips, former vice president and chief technology officer at Sandia National Laboratories, chaired the committee that authored the January report.
<urn:uuid:120aa561-3884-42ad-aa26-2b4c205d0433>
{ "date": "2016-08-26T06:42:47", "dump": "CC-MAIN-2016-36", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982295358.50/warc/CC-MAIN-20160823195815-00222-ip-10-153-172-175.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9498934149742126, "score": 2.578125, "token_count": 1197, "url": "https://www.armscontrol.org/ACT/2016_03/News/Interim-Step-on-HEU-Reactors-Proposed" }
Summary: Jesus told the story to reveal God's love to saints and sinners alike

NR 17-03-07 ES Lk 15:1-3 and 11-32

If we really want to understand the Parable of the Prodigal Son, we need to look at the context of the story. Why did Jesus tell the story? We find the answer to that question in the opening verses of Luke 15. Let me read them to you: Tax collectors and other notorious sinners often came to listen to Jesus teach. This made the Pharisees and teachers of religious law complain that he was associating with such despicable people – even eating with them.

Jesus told this story because people complained about the company he kept. For the Jew, such people were beyond the pale.

Why did the Jews not only dislike tax collectors but also look down on them?

1. Because they helped the hated Romans collect taxes – and so were seen as traitors or collaborators with the enemy.

2. Because they took more than they should have – to make themselves rich.

The other notorious sinners would be people who were i) immoral – like prostitutes – and ii) those who had jobs that the religious Jew considered "unclean", like people who sold pork to the Greeks and Romans living in Israel at the time.

Jesus told the parable of the Prodigal Son to show that God loved both those whom the Jews despised – the tax collectors and the immoral – AS WELL AS God's chosen people, the Jews.

In the Parable, the tax collectors and prostitutes – the immoral – are represented by the Prodigal Son, and the Jews themselves – the very ones who were complaining about the company that Jesus kept, the upright – are represented by the elder brother. And of course the Father in the Parable is God himself.

And the first thing that strikes us is the standing of the immoral and the upright. They are both sons of the Father! For no one comes to God unless he comes through Jesus Christ. Do you remember what Jesus said: I am the Way, the Truth and the Life. No one comes to the Father except through me (Jn 14:6).

So both sinner and saint have the same standing in God's eyes – both are loved as much by the Father.

And the key for understanding the force of the story is that the Prodigal Son repented!! God is willing to have a relationship with anyone who turns from his wrong deeds. Jesus told the story of the Prodigal Son to show what God's love is for all of us – regardless of what we have done. And the message is: no one, not even the immoral, is beyond redemption.

The story of the Prodigal Son is so well known to us that we are liable to miss the fact that some of the elements in the story were countercultural to Jesus' hearers. There would have been three elements that would have been profoundly uncomfortable to Jesus' hearers:

1. The first shock of the parable to a Jewish audience would have been the scandal of the idea of the father agreeing to divide the inheritance BEFORE his death. It has even been suggested that the Prodigal Son's request would have been tantamount to telling his Father that he wished him dead. And the father's reaction in giving the son his inheritance was simply contrary to conventional Jewish wisdom. You just would not do it!

In the OT apocryphal book of Sirach, we read this for example: "To son or wife, to brother or friend, do not give power over yourself as long as you live, and do not give your property to another in case you change your mind and must ask for it… For it is better that your children should ask from you than you should look to the hand of your children."
(Sirach 33:20 & 22)

Nevertheless, in this parable the father grants the son's request. What a beautiful picture of God letting his children make their own mistakes – even at a cost to himself. And we know that the cost was Jesus' death on the Cross.

2. The second shock of the parable to Jesus' Jewish audience was the reaction of the father when he saw his prodigal coming home. No self-respecting Jewish father would have run to greet his son – let alone one who had disgraced the family by frittering away the family fortune. That would have been too undignified for the head of the family. The father's actions in the story broke all Middle Eastern protocol.

But – as is often the case in Jesus' parables – it is the twist in the story that makes the point. The father is so pleased, so thrilled to see his prodigal son return that he literally "drapes himself on the neck of the prodigal".
<urn:uuid:ca047529-19ea-4adb-b553-e39ef9318d23>
{ "date": "2018-10-17T09:57:15", "dump": "CC-MAIN-2018-43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511122.49/warc/CC-MAIN-20181017090419-20181017111919-00176.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9744259715080261, "score": 2.78125, "token_count": 1035, "url": "https://www.sermoncentral.com/sermons/the-prodigal-son-revd-martin-dale-sermon-on-parable-prodigal-son-104069" }
You don’t understand what’s happening to your best friend. He survived a serious car accident and is already back at home, but the crash left him with a serious head injury. He says inappropriate things, yells at you when you visit, and is always agitated—even though he doesn’t seem to have any reason to be upset. Is there any way you can help him calm down? TBI Victims Will Often Succumb to Emotional Outbursts Sudden explosive behavior is very common after a brain injury, especially if the extent of the trauma was severe. One reason that these outbursts occur is from the brain damage itself. The brain is responsible for perceiving, processing, and reacting to information. A damaged brain may have trouble reading facial cues, understanding language, or responding to questions in socially acceptable ways. The second reason victims may respond violently to stimuli is that they are reacting to all of the negative implications of their injury. It’s important to remember that your friend is not merely angry, he is likely feeling confused, victimized, and even depressed about the limitations of his condition. It may take some time before he is able to fully cope with the changes. The injured person’s environment is another major factor in determining his emotional response. You may think that taking your loved one to see a T-Bones game will help him get out of the house and have fun, but he may not be able to react appropriately to noises and crowds—he may even be uncomfortable traveling in the car down I-435. How Can I Help My Friend to Respond Appropriately in the Future? Family and friends can take several steps to help a brain injury victim feel more comfortable, lessening the likelihood of an emotional outburst. A few of these practices include: Try to maintain a consistent daily schedule to help your family member adapt. Ensure that your family member has easy access to common knowledge, such as the date, time, his address, and daily schedule. Provide a watch, clock, or calendar as memory aids. Your family member will likely need to rest more often or take breaks from long-term activities to avoid fatigue and frustration. Allow the individual to choose which activities they participate in, taking care to minimize their exposure to dangerous, overwhelming, or over-stimulating environments. Talk to your family member about past events, hobbies and interests, future goals, and familiar friends. Speak in a manner that the patient can understand and that he finds encouraging. Have You Or A Loved One Suffered A Brain Or Spinal Cord Injury? If you've suffered a brain or spinal cord injury you need to speak with an experienced attorney as soon as possible. Please contact us online or call our office directly at 816.471.5111 to schedule your free consultation.
<urn:uuid:a9c1e99f-8c31-4d87-a04b-5a90c93739af>
{ "date": "2019-06-25T18:31:09", "dump": "CC-MAIN-2019-26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999876.81/warc/CC-MAIN-20190625172832-20190625194832-00376.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9679171442985535, "score": 2.796875, "token_count": 582, "url": "https://www.kansascityaccidentinjuryattorneys.com/library/how-to-help-tbi-victims-suffering-from-emotional-outbursts.cfm" }
Social Cultural Contexts of Language and Literacy (essay sample)

A language survey is said to be one of the most important elements in learning, as it helps us analyze different types of languages. It all depends on the beliefs one holds on issues such as the relationship between the subject language and social identity, one's stand on linguistic determinism, and a rigid attitude towards the language (Routledge, 1997). For example, students in my current classroom only slightly agree with me when I say that a very good language is supposed to have more grammatical rules. According to them, that is not so necessary, as they think that gone are the days when the language used to be pure; it has changed over time.

My grammar and speech should always interconnect. If I change my opinion on this, then this might mean that I am contributing towards hindering my students' language development. Emphasis should be put on ensuring that the language is kept pure for its development (Sumara, 2002).

Sometimes you find that the roles one plays in different aspects of life may make you change the pronunciation of words and bring in new terms as you fight to fit in that place. Failure to change your language in one of these roles might give you a hard time in communication, since some of the people you are dealing with might not be very conversant with your language. You are forced by the circumstances to change your language so as to have an easy time with the people you are dealing with. In case my students do not use language appropriate to their roles, this might bring about poor preparation for their respective places of expertise. It is very important to ensure that appropriate language is used for excellent delivery.

In conclusion, I can say that a prescriptive attitude is an important thing in language evaluation.
<urn:uuid:22f0fc96-126c-439c-8e21-8b292e7b6a3f>
{ "date": "2019-08-24T16:47:10", "dump": "CC-MAIN-2019-35", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321160.93/warc/CC-MAIN-20190824152236-20190824174236-00256.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9591873288154602, "score": 2.796875, "token_count": 373, "url": "https://primewritings.com/essays/evaluation/social-cultural-contexts-of-language-and-literacy.php" }
Not enough transparency on invisible trans fats Australia should label dangerous trans fats as the USA does. CHOICE tests have shown manufacturers are still using hazardous levels of unnecessary trans fats in processed foods such as pies, cakes and doughnuts - without the need for any disclosure on labels. Of the 32 foods CHOICE analysed 12 contained more than 4% trans fats as a percentage of the total fat – twice the limit that is permitted for food products sold in Denmark and Switzerland. The list of trans-fat offenders includes Four’n Twenty Traditional Meat Pies, Pampas Short Crust Pastry and MacDonald’s MacCafe Iced Donuts. The industrially produced trans fats are mostly found in deep-fried fast food and processed foods made with margarine or shortening. Trans fats can increase the risk of heart disease and sudden death from heart-related causes. They are worse than saturated fats for health and there’s evidence they can also increase the risk of developing diabetes. But unlike consumers in the US and Canada, where trans fats must be listed on food labels, Australians have no way of avoiding doughnuts, pastries, pies and snacks that contain high levels. Since CHOICE last tested food products for trans fats in 2005 there have been some improvements made. But at least one snack food contains even higher amounts. Burns & Ricker Bagel Crisps now contain a greater percentage of trans fats but their importer told CHOICE they were working with the US supplier to reduce the amount. In the US the use of trans fats in foods has decreased 50% since labelling was introduced in 2006. In New York restaurants and fast food outlets aren’t allowed to serve dishes with more than 0.5% of trans fats per serving. Australia’s regulator Food Standards Australia New Zealand (FSANZ) wants to see industrially produced trans fats removed from the food supply but is not in favour of labelling trans fats. CHOICE says Australian consumers deserve better. “We believe the regulator is being too complacent as to the well-established health risks from consuming too many trans fats. Labelling has had a substantial impact on reducing their use in the US and Canada and there’s no reason why it shouldn’t in Australia,” said CHOICE spokesman Christopher Zinn.
<urn:uuid:864cc64b-f7c6-4317-a690-91cc1954bbb1>
{ "date": "2013-12-08T18:31:59", "dump": "CC-MAIN-2013-48", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163785316/warc/CC-MAIN-20131204132945-00002-ip-10-33-133-15.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9406770467758179, "score": 2.609375, "token_count": 483, "url": "http://www.choice.com.au/media-and-news/media-releases/2009%20media%20release/not-enough-transparency-on-invisible-trans-fats.aspx" }
Dutch pianist Miriam Keesing never expected to research Jewish emigrant children who fled Germany for the Netherlands between 1938 and 1940. It began when she found a photo of a young boy in her family attic while looking for clues about her grandfather, whom she’d never met. Her aunt told her the boy was Uli, a German-Jewish refugee. Gerhard Ulrich Herzberg – Uli – was just 12 years old when he traveled from Germany to live with Keesing’s grandparents in November 1939. He stayed with them until June 1941. In early 1942, the Keesings left for Cuba, where they stayed until the end of the war. It is well known that nearly 10,000 children traveled from Nazi-occupied countries to Great Britain as part of the so-called Kindertransport. Less has been documented about the Dutch Kindertransport, which Keesing spent the last seven years investigating. She’s published dossiers on nearly 2,000 refugee children on her recently launched website, Dokin.nl. It is the first comprehensive source of information on these refugee children. Keesing started her search for Uli by visiting The Hague, where the Dutch national archives and the International Federation of Red Cross and Red Crescent Societies is headquartered. From those records, she learned that he’d gone to live with another family after he left her grandparents’ – they were not allowed take him with them to Cuba. He was arrested in March 1943, sent to the Sobibor concentration camp and murdered. It was that tragic discovery that set Keesing on her path. “I started this by accident,” she says. “I just wanted to learn about Uli, and I began learning about other children, and it started getting bigger and bigger, so I made a database.” Seven years later, she has her website, a literary agent in New York, and is considering studying for a doctorate. She has documented as much as she can on each child, including copies of original paperwork, photos and correspondence from family members. It’s tough work, emotionally. Even those children who survived the war often lost their entire families. “In the beginning, I used to cry every day. It’s the most depressing thing you can think of,” Keesing says. “But I want people to know that this happened.” She’s also discovered children who came to Holland and then seemingly disappeared without a trace, such as Artur-Meinhard Natt, who was 15 when he arrived from Berlin in 1938. Keesing searched until she found a newspaper article that revealed that he’d been shot and killed by Germans in 1940 in Arnhem, the Netherlands. It’s bittersweet, but because of her relentless research, 40 names have been added to the Red Cross’ victims list. “If I were a mother in heaven, and I had a child who died and nobody knew about it, because there was nobody left in the family to look for him, I would want someone to remember him,” she says. She is also revealing important details about places, such as the historic Burgweeshuis, now the Amsterdam History Museum, which served as an orphanage for Jewish refugee children beginning in March 1939. “There were 100 children living there for 14 months,” she says. “That is not nothing. And now finally that fact is known.” The most poignant source for Keesing’s work is firsthand accounts by survivors, whom she has met in the United States and Israel. “I have been amazed by the trust people have in what they share with me,” she says. 
“They give me old letters and really open up.” She has, for example, a stack of letters written by Elisabeth Eylenburg from Berlin, who had sent her son, Walter, to live in the Netherlands in 1939. The letters she wrote to the woman who fostered Walter, a Mrs. Wijsenbeek, reveal the anxiety, conflict, gratitude and despair of a mother who has had to send her child away to live with strangers, not knowing if she’d see him again. According to the Red Cross archives, Walter’s parents were transported to the Terezin ghetto on Aug. 4, 1943, and Walter was sent the next day to be with them at their request, arriving on Sept. 10, 1943. On Oct. 19, 1944, Walter and his parents were sent to Auschwitz and were gassed on arrival. On the day she launched her site, Keesing received a flood of responses, including a photo of Elisabeth Eylenburg from someone who knew her. “It was the first time I saw a picture of Elisabeth,” Keesing writes on her website, “whose letters I translated and to whom I’ve felt so close.” In 2011, Keesing had the privilege of meeting Hans, Uli’s brother, who’d gone to America in 1939. She had found his phone number online. “It was very difficult to make the call,” she says. “He answered the phone and I said, ‘My name is Miriam Keesing,’ and that was as far as I got. He was 89 years old but the Keesing name was enough for him to know who I was.” She later visited Hans at his home in Chicago and was able to read letters Uli had written to him during his stay with her grandparents. Keesing is inspired by the spirit of many survivors. “What gives me comfort is that 90 percent of these people whom I’ve met and with whom I’ve spoken who lived through this time are nice, very kind, very friendly,” she says. “They have been scarred, but they have no desire for revenge. They are not bitter about the world. I feel blessed to get to know them.” And Keesing, 47, is finding some peace of her own. As it is for many Europeans of the postwar generation, particularly those with Jewish roots, the Holocaust is a difficult subject for Keesing to confront. “My research made me go to Germany for the first time,” she says. “I had passed through it, but I never wanted to be there. I would see beautiful villages and landscapes and think, ‘How could it have happened?’” Even the language was distasteful. “I hated learning German,” she says, “but I had to in school. And until three years ago, I refused to write in German. It feels really strange for me still, but I do it. Maybe that’s another step in my personal progress.” Still, her outlook remains dark. Asked if she feels in the end that there is more good than evil in the world, she’s resolute. “No,” she says. “I’m sorry. I wish. But I’m afraid not.” For her, the awfulness of what took place is what drives her. “My motivation is my pain,” she says. “I put it away so far, so deep, and it’s there and it keeps me going and gives me drive. It’s so painful, what happened. I think for the whole of Europe it’s still painful.”
<urn:uuid:fab7478d-1fa8-481e-b7c5-647323ddb968>
{ "date": "2016-09-27T03:44:56", "dump": "CC-MAIN-2016-40", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660957.45/warc/CC-MAIN-20160924173740-00104-ip-10-143-35-109.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9867521524429321, "score": 2.5625, "token_count": 1601, "url": "http://www.mcclatchydc.com/news/nation-world/world/article24762511.html" }
Understanding the Oracle Java CAPS Match Engine (Java CAPS Documentation)

The standardization configuration files define additional logic used by the Oracle Java CAPS Match Engine to standardize specific data types. This logic helps define how fields in incoming records are parsed, standardized, and classified for processing. Standardization files include data patterns files, category files, clues files, key type tables, constants files, and reference files. The standardization configuration files are stored in the master index project and appear as nodes in the Standardization Engine node of the project. Several standardization files are common to all implementations of the Oracle Java CAPS Match Engine, but each national domain uses a subset of unique files. The common files are listed directly under the Standardization Engine node of the master index project; the files unique to each national domain are listed in individual sub-folders under the Standardization Engine node.

The standardization configuration files for the Oracle Java CAPS Match Engine must follow certain rules for formatting and interdependencies. The following topics provide an overview of the types of configuration files provided for standardization. Several different types of configuration files are included with the Oracle Java CAPS Match Engine, each providing specific information to help the engine standardize and match data according to requirements. Several of these files are common to all supported nationalities, but a small subset is specific to each.

Category Files - The Oracle Java CAPS Match Engine uses category files when processing person or business names. These files list common values for certain types of data, such as titles, suffixes, and nicknames for person names or industries and organizations for business names. Category files also define standardized versions of each term or classify the terms into different categories, and some files perform both functions. When processing address files, category files named "clues files" are used.

Clues Files - The Oracle Java CAPS Match Engine uses clues files when processing address data types. These files list general terms used in street address fields, define standardized versions of each term, and classify the terms into various component types using predefined address tokens. These files are used by the standardization engine to determine how to parse a street address into its various components. Clues files provide clues in the form of tokens to help the engine recognize the component type of certain values in the input fields.

Constants Files - The Oracle Java CAPS Match Engine refers to constants files for information about the standardization files, such as the maximum length of the files. For the address data type, the constants file also describes input and output field lengths.

Patterns Files - The patterns files specify how incoming data should be interpreted for standardization based on the format, or pattern, of the data. These files are used only for processing data contained in free-form text fields that must be parsed prior to matching (such as street address fields or business names). Patterns files list possible input data patterns, which are encoded in the form of tokens. Each token signifies a specific component of the free-form text field. For example, in a street address field, the house number is identified by one token, the street name by another, and so on.
Patterns files also define the format of the output fields for each input pattern. Key Type Files - For business name processing, the Oracle Java CAPS Match Engine refers to a number of key type files for processing information. These files generally define standard versions of terms commonly found in business names and some classify these terms into various components or industries. These files are used by the standardization engine to determine how to parse a business name into its different components and to recognize the component type of certain values in the input fields. Reference Files - Reference files define general terms that appear in input fields for each data type. Some reference files define terms to ignore and some define terms that indicate the business name is continuing. For example, in business name processing “and” is defined as a joining term. This helps the standardization engine to recognize that the primary business name in “Martin and Sons, Inc.” is “Martin and Sons” instead of just “Martin”. Reference files can also define characters to be ignored by the standardization engine. By default, the Oracle Java CAPS Match Engine supports addresses and names originating from Australia, France, Great Britain, and the United States. Each national domain uses a set of common standardization files and a smaller set of unique, domain-specific files to account for international differences in address formats, names, and so on. You can process with your data using the standardization files for a single domain or you can use multiple domains depending on how the Match Field file is configured.
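To make the token mechanism concrete, here is a small, purely hypothetical sketch of how a clues-table-driven parser might classify the pieces of a free-form street address. The table entries, token names, and function are invented for illustration and are not the Oracle Java CAPS file formats or APIs; they only mimic the idea that a clues file maps recognized terms to standardized forms and component tokens, after which patterns of tokens decide which piece is the house number, street name, and so on.

```python
# Hypothetical illustration of clues-file-style parsing; the token names and
# table contents are invented, not Oracle Java CAPS formats or APIs.
CLUES = {
    # term      (standard form, token)
    "n":        ("N",   "DIRECTION"),
    "north":    ("N",   "DIRECTION"),
    "st":       ("St",  "STREET_TYPE"),
    "street":   ("St",  "STREET_TYPE"),
    "ave":      ("Ave", "STREET_TYPE"),
    "apt":      ("Apt", "UNIT_DESIGNATOR"),
}

def tokenize(address: str):
    """Assign each word a standardized form and token using the clues table."""
    parsed = []
    for word in address.split():
        key = word.strip(".,").lower()
        if key in CLUES:
            standard, token = CLUES[key]
        elif key.isdigit():
            standard, token = key, "NUMBER"          # house or unit number
        else:
            standard, token = word.title(), "ALPHA"  # unclassified word
        parsed.append((standard, token))
    return parsed

print(tokenize("123 north MAIN st. apt 4"))
# [('123', 'NUMBER'), ('N', 'DIRECTION'), ('Main', 'ALPHA'),
#  ('St', 'STREET_TYPE'), ('Apt', 'UNIT_DESIGNATOR'), ('4', 'NUMBER')]
```

In this sketch, something playing the role of a patterns file would then take over: a token sequence such as NUMBER DIRECTION ALPHA STREET_TYPE would be matched against known patterns so the ALPHA element is labeled as the street name and each piece is routed to the appropriate output field.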
<urn:uuid:f5958ffe-6481-4258-ba4e-b8c64a5691af>
{ "date": "2017-04-30T23:30:21", "dump": "CC-MAIN-2017-17", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125881.93/warc/CC-MAIN-20170423031205-00591-ip-10-145-167-34.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8411361575126648, "score": 2.703125, "token_count": 969, "url": "http://docs.oracle.com/cd/E21454_01/html/821-2565/ref_sme-standardization_c.html" }
The terms "left-brained" and "right-brained" have come to refer to personality types in popular culture, with an assumption that people who use the right side of their brains more are more creative, thoughtful and subjective, while those who tap the left side more are more logical, detail-oriented and analytical. But there's no evidence for this, suggest findings from a two-year study led by University of Utah neuroscientists who conducted analyses of brain imaging (PLOS One, Aug. 14). The researchers analyzed resting brain scans of 1,011 people ages 7 to 29, measuring their functional lateralization — the specific mental processes taking place in each side of the brain. Turns out, individual differences don't favor one hemisphere or the other, says lead author Jeff Anderson, MD, PhD. "It's absolutely true that some brain functions occur in one or the other side of the brain," Anderson says. "Language tends to be on the left, attention more on the right. But people don't tend to have a stronger left- or right-sided brain network." — Amy Novotney Letters to the Editor - Send us a letter
<urn:uuid:506c3eef-b477-43c3-b8aa-e09072057b88>
{ "date": "2014-08-23T07:32:11", "dump": "CC-MAIN-2014-35", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500825341.30/warc/CC-MAIN-20140820021345-00008-ip-10-180-136-8.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9383293390274048, "score": 3.21875, "token_count": 241, "url": "http://www.apa.org/monitor/2013/11/right-brained.aspx" }
Along some creek and pond shores in the Sierra Nevada of California there lives a pretty, spotted Bembidion, and this Bembidion has no name. It belongs to the subgenus Liocosmius, a group of Bembidion that range from BC to Baja California, east to New Mexico. There are three described species in the subgenus (B. mundum, B. horni, and B. festivum), and three undescribed (one species in western California, one in Arizona and New Mexico, and the one in the Sierra Nevadas). I have names figured out for two of the undescribed species, but the third one (the one that lives in the Sierra Nevada) eludes me. Here’s what a male of that one looks like. The spots look brighter when it is alive. This species is restricted to the Sierra Nevada of California, and it is unusual within the subgenus in its restriction to mountainous areas. Below is a picture of one of its habitats in the Sierras. At this site it was found along with Bembidion iridescens, B. wickhami, Lionepha pseudoerasa, L. osculans, L. sequoiae, L. erasa, and other species of Bembidiina. What should I call this species? I’ve thought about several possibilities. In looking for names of species, my first stop is usually Roland Brown’s lovely book “Composition of Scientific Words”, which you can download here. While that has given me some ideas, nothing has gelled yet, and I would be delighted to have suggestions. I would like a name that mentioned mountains or the Sierras, or higher elevation. I also would like a name that evokes the brightly spotted pattern, and gives a sense of how sprightly these little beetles look as they run around on the shore. There are many Greek or Latin words that could be used to mean “mountain” (e.g., “mons”, “alpinus”, “oros”); the Spanish “sierra” could also be used. “Spotted” could be included in the name through “guttula”, or “macula”, or “ocellatus”. Even better would be if it makes one thinks of twinkling stars; the Greek “astralos”, meaning spotted with stars, or the Latin “stellatus”, meaning “starred”, would work nicely. Combining a word meaning mountain and another meaning spots or stars would be even better, to give the sense of these beetles as the little stars that live in the mountains; ideally such a combination would be grammatically well-formed, too.
<urn:uuid:9b6da860-aa21-4b7d-925b-5e7daa35666a>
{ "date": "2017-09-25T11:21:04", "dump": "CC-MAIN-2017-39", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818691476.47/warc/CC-MAIN-20170925111643-20170925131643-00456.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9596399664878845, "score": 3.1875, "token_count": 605, "url": "https://subulatepalpomere.com/2013/04/18/what-should-i-name-this-beetle/" }
The last decade has witnessed a flowering of interest in the history of women and cancer, alongside studies on the history of cancer and related topics.(1) While there might be historical trends that explain the attention paid to certain topics in medical history at particular times, the literature on the history of cancer deals with an inherently controversial disease, with historians themselves risking being caught up in the controversy. Most diseases can be described as complex entities, but few of them can be defined as having a pre-disease form that has itself become a separate entity, although one with diffused boundaries. The idea of a pre-cancer stage, or the acknowledgement of the existence of pre-cancerous cells, dates back to the end of the 19th century, yet achieving a clear identification of these cells and debating their potential to become cancerous occupied the whole of the 20th century and, in many respects, diagnostic uncertainties persist in today’s oncology. All three books under review attest to just how controversial the control of cancer and its precursor stages has been, and in so doing, they provide an insightful historical perspective to current policy debates. This is not a concerted effort to prove history’s utility, like that which historians are now compelled to make to prove professional ‘impact’, which may risk imbuing past events with improbable connections to present problems. However, a historical account can challenge the ‘selective’ use of history by policy makers, especially when they use past events as a powerful rationale to support programmes that have a formidable impact on health issues. From international policy makers to health activists, the resort to historical events, as Anne-Emanuelle Birn rightly affirms, is ‘often selectively invoked − to push forward particular agendas based on (mis)perceived successes of the past’.(2) The role of health or medical historians is crucial in these cases as their examination of medical and health interventions can question the validity of specific policies that have been tailored on a distorted appreciation of past events. One of the most alluded-to examples is the ‘successful’ smallpox eradication programme (1980), which today is often cited to support global health policies that propose technical interventions (vaccines, drugs, diagnostic tests, etc.) as the best approach to deal with countries’ main diseases. As shown by Sanjoy Bhattacharya’s detailed analysis of smallpox in India, accounts of its effective eradication have often glossed over the enormous complexities and diversity of actors involved in the programme.(3) These studies suggest that a thorough assessment of past health initiatives could at most assist global policy makers in a more comprehensive reading of what the past can teach, or at least alert them to the existence of a less complacent story of previous health interventions. By engaging with historical and contemporary analyses of female cancers, the books under review show the value of rounded histories: they show that past health interventions need careful historical contextualisation to be meaningful today and to be fully comprehensive. They also have an additional strength: unlike many disease-specific histories whose broader implications are sometimes difficult to deduce, cancer appears here as a hub to explore and reflect on contemporary medical practices, policies and technologies, and individuals and patients’ groups.
This opens up avenues for further enquiries, some of which will be suggested in the next sections.
2. Uncertain entities, tentative diagnosis
Women’s Bodies and Medical Science. An Inquiry into Cervical Cancer examines the events that led to the development of the 1987–8 Cartwright Inquiry in New Zealand, a government inquiry set up to investigate the responsibility of a group of gynaecologists at the prestigious National Women’s Hospital (NWH) in Auckland for the (mis)management of cervical cancer patients between the 1960s and the 1980s. Portrayed by the press as ‘the biggest medical scandal of the century’, the Inquiry was instigated by an article that appeared in the magazine Metro (1987) co-written by feminist activist and journalist Sandra Coney and sociologist Phillida Bunkle. They denounced what they thought was ‘an unfortunate experiment’ conducted by Dr Herbert Green at the NWH in 1966, who in his (supposed) attempt to study the natural course of cervical cancer deliberately withheld conventional treatment from women diagnosed with early stages of the disease [carcinoma in situ (CIS), or CIN3]. The exposé was based on a 1984 medical study – elaborated by doctors in opposition to Green’s ideas – which retrospectively followed 948 cases of CIS from 1955 to 1976. The study concluded that those women treated by Green’s conservative approach who had two years of positive Pap smears were 25 times more likely to develop invasive cancer than those treated by hysterectomy and whose smear tests were negative. In these groups deaths were four and eight respectively, a rather low rate considering international statistics of the time. The aim of the medical report was to provide evidence for a prompt surgical treatment of all in situ cancers as the authors thought they had considerable invasive potency. Yet the study had significant consequences when it was reinterpreted by the Metro article and the Cartwright Inquiry as an ‘experimental study’ to ‘differently treat’ women diagnosed with CIS. The latter was denied not only by Green but by the authors of the study itself. A core aim of Bryder’s book is to explain this misunderstanding. In Green’s conservative treatment of CIS, there was no experiment, nor a division between two groups of patients that were treated differently in order to prove a medical hypothesis or ‘personal belief’ about the nature of CIS. Beyond the controversies arising in the 1980s from this specific group of patients, and the extended repercussions it had in ethical terms, one of the elements that Bryder as a medical historian successfully contributes in her thorough reconstruction of the case is the historical context in which Green’s ideas were framed and developed.
One allegation in the Cartwright Inquiry was that Green and his colleagues at NWH were aware of and dismissive of a ‘world view that CIS was a precancerous condition’ (p. 26). Bryder’s detailed research demonstrates that, on the contrary, the Inquiry had ‘inadvertently revealed a profession divided’ (p. 74), because from the 1960s until the 1980s there was no such thing as a ‘universally accepted’ or a ‘world consensus’ about the diagnosis or the treatment of CIS. Throughout chapters one to six, the book provides a rich body of medical literature that points to this disagreement within the medical profession, from pathologists and gynaecologists to epidemiologists and cytologists, revealing a long-simmering controversy over the potential of CIS to become invasive. Löwy’s books also elaborate extensively on the conundrums of identifying and legitimating a specific pre-cancerous lesion of the cervix and the breast, thus confirming from a historiographical point of view that the quest for cancer diagnosis has provided an impetus for scholarly research on the history of this disease, and constituting indispensable reference works for every scholar of the subject. At an international level, discrepancies about CIS were on three significant levels: a) diagnosing (classifying) intraepithelial lesions of the cervix into a series of borderline categories (mild, intermediate, or severe dysplasia, and CIS); b) the prognosis from CIS to invasive cancer; and c) the management of CIS lesions. An eminent American pathologist, Leopold Koss, made a legendary comment in 1979 that summarises the differences observed amongst pathologists when assessing abnormal lesions of the cervix: ‘one man’s dysplasia is another man’s carcinoma in situ’ (p. 75). Long-term follow-up studies of women with in situ cancers that were not treated but under careful observation returned inconclusive responses as to the natural course of the disease: In one study conducted in Denmark (1955) 35 per cent of women developed invasive cancer, leading the author to consider CIS as a slow-evolving disease, for which immediate treatment was not necessary. Another 1963 American study considered that in 25 per cent of cases the lesion disappeared, while in 6 per cent of cases it developed into invasive cancer. Yet in the latter the performed biopsy was thought to be an important cause for the lesion’s disappearance, thus suggesting that a minor intervention could be sufficient to deal with this type of cervical lesion. This view was shared by Green when he proposed a protocol to treat women with CIS by ‘lesser procedures’ which included follow up with Pap smears, colposcopy and punch biopsy, that is, a lesser surgical excision than the most common cone biopsy, which had a higher associated morbidity. Although in the 1950s and 1960s many doctors took a radical approach by treating CIS with hysterectomy, the idea was far from being ubiquitous. Bryder’s analysis focuses mainly on the literature that circulated in the Anglo-Saxon world to support this view. What strikes this reviewer as being less evident is not the conservative treatment and follow up in itself, but the use of colposcopy as a diagnostic tool, a technique that in Anglo-Saxon countries was far less widespread at the time. Increasing attention to CIS and dysplasia was due to the dissemination of diagnostic tools for the early detection of cervical cancer.
The Pap smear test was heavily promoted by the American Cancer Society from 1945, and its rapid internationalisation was secured by training programmes for foreign doctors alongside the availability of American grants. Parallel to the growing enthusiasm for the Pap test, another diagnostic technique, colposcopy, was being disseminated in South America. Both diagnostic tools aimed at detecting cancer of the cervix in its early stages, but they differed greatly in the way they were performed, the people involved, and the skills required, all of which induced a different management of the disease. By the mid 1950s, in parts of Germany, Austria, Switzerland, Brazil and Argentina, colposcopy became routinely used in gynaecological clinics, while in the main European and United States centres the Pap test was the preferred tool. Interestingly enough, from the 1950s onwards, what we see in those countries that adopted colposcopy is a conservative treatment of CIS. Follow up using the colposcope alongside a rigorous histopathological analysis of the removed tissue gave gynaecologists trained in colposcopy a confidence in the management of CIS that the Pap test alone lacked. In times of uncertainty about the diagnosis of CIS, those who became familiar with the colposcope thought that they had a safer scientific instrument for monitoring the status of those ambiguous cells, in most cases with the added reassurance of a Pap test. Those who thought that colposcopy training was too demanding and time-consuming relied on the simplicity and growing popularity of the Pap test, and managed the disease following the cytologist’s dictum. In the case of the NWH, Bryder notes that there existed personality clashes and strong rivalries between the pathologist, and crucially the colposcopist, and Dr Green who was leading the treatment. It seems there existed a division in the work of diagnosis that simply did not work. For historians interested in medical technologies and their competitive introduction into hospital services, as in this case with the Pap test and colposcopy, a focus on personal disputes may fall short as an explanation. From this perspective, Bryder leaves one wondering if the aura of hope, confidence and technical zeal that colposcopy inculcated in its users in other parts of the world would also apply to its adoption in New Zealand, and if so, whether that was a triggering factor for the controversies that followed or something that precipitated a pre-existent awkward scenario (that is, divided diagnostic labour and personal rivalries).
3. Women and cancer
As Bryder documents, the perception that Green experimented on women’s bodies was all-pervasive after the Metro publication and was judged against somewhat extemporaneous notions of medical ethics, patient’s informed choice, and doctor-patient relationships which she analyses in chapter four. These notions were increasingly debated from the 1970s, which saw the rise of the international consumer movement, driven by patient groups and women’s health activists, and paved the way for the figure of the patient-consumer.
In addition to the investigation of the alleged ill-treatment of carcinoma in situ, several aspects were included in the terms of reference of the Inquiry, namely, the protection of patient rights in any research conducted at the NWH, patient information about treatment options, the training and teaching of medical students about CIS, doctors’ perspectives on cervical cancer screening, and vaginal examinations on anesthetised women without consent. These are analysed in chapter eight. The broadening of issues thus discussed at the Inquiry, according to Bryder, ‘suited the feminist lobby who saw it as a unique opportunity to canvass those issues relating to women’s health for which they had been campaigning and about which they felt so passionate’ (p. 127). The report concluded that the medical profession had failed patients, and many important recommendations were subsequently implemented, most notably patient advocacy and a national screening programme for cervical cancer. In relation to the former, it is instructive to see certain developments that took place at the time in the UK, a comparison maintained elsewhere by the book given New Zealand’s close relationship and integration within British medical professional bodies. While in the 1980s patient consumer groups in the UK campaigned for more information about health services and disease prognosis, in order to become active partners in their own treatment, statutory acknowledgement was not straightforward. When the Patient’s Charter was sanctioned by the UK government in 1991, it reflected less the principles of patient’s rights and choice as a collective group and more the figure of the patient as an individual consumer of health services within the internal market introduced by the Conservative government. As explained by Alex Mold, both the changing nature of the concept of the patient-consumer and the different groups that historically have represented them should ‘raise doubts about any authority that claims to know what patient-consumers ‘really’ want’.(4) Bryder paints a similar picture in New Zealand, when she argues that the different feminist groups that supported the Inquiry did not represent all women’s voices in relation to the services provided at NWH or to Green’s treatment, nor were the views offered by organisations representing nurses and Maori women unified. While in chapter seven Bryder frames the demands of Coney and Bunkle as part of a longstanding battle of feminist health activists against male dominance at NWH, rather more consideration of the development of other patient groups acting in New Zealand, beyond women’s hospitals, would have been welcomed, as would greater reflection on the extent to which the patient-consumer was shaped by the Cartwright report alone. The implementation of a national screening programme in 1990 was another measure celebrated as a direct result of the Inquiry, but this too is subject to historical scrutiny. On the one hand, Bryder argues, New Zealand’s Health Department had been planning the implementation of screening programmes before the Inquiry, and on the other, discussions of the pros and cons of Pap smear screening (discussed in particular in chapter six), to which Löwy (2011) also offers a rich international literature, were concealed at the time due to feminists’ portrayal of any questioning of screening as being against women’s interests. In addition, feminist views in the late 1980s of screening programmes were not uniform.
Many viewed it as a paternalistic medical intervention, which ignored women’s choice and championed the lab’s commercial interests over the physical and emotional effects of the test on women; other groups, as in the case of New Zealand, embraced it as a woman’s right, and blamed an already under-fire male medical profession for its lack of sensitivity towards the prevention of the disease. It is true, however, that at that time, cervical cancer was not perceived as a woman’s scourge because of the persistent decline of morbidity and mortality rates in Western societies. Unlike breast cancer, which remained high in the statistics of industrialised countries and led to the formation of powerful women’s movements in the 1990s, cervical cancer plausibly lacked representativeness, but this also obscured debates around pre-cancer management (screening and treatment) and its likely outcomes: discomfort during the test, anxiety, preterm birth, and loss of quality of life. In recent years, the consolidation of consumer groups and the incorporation of guidelines directing clinicians to involve the patient’s decision in their care, alongside studies on women’s preferences and perspectives in breast cancer screening, have promoted awareness of the importance of incorporating the patient’s view in the evaluation of screening programmes.(5) Surprisingly, this has such a short history thus far that to unravel its implications in a comprehensive way is still a task for historians in the future. The issue of informed consent for screening (providing patients with information about both benefits and potential harms) is being discussed in some industrialised countries and subject to contested views. Since 2009, women can express in writing ‘informed dissent’, in order to withdraw from cancer screening programmes in the UK, and it is possible that other countries will follow suit and mandate balanced information available to patients. The last two chapters of Women’s Bodies and Medical Science foreground the implementation of the Cartwright Inquiry and the enduring impact this exemplary case had, and still has, on the protagonists and the medical profession more broadly. Both chapters demonstrate that the relationship between consumerism, women’s health movements and the medical profession can enhance, as well as detract from, more conventional patient-doctor relationships. Polarized attitudes, as analysed by the book, namely, a complete mistrust of the medical profession, or a blind confidence in medicine and technology, increase the difficulty of elaborating a constructive, balanced and rounded critique of medical interventions and health care practices, but, as Bryder’s account shows, they do not render it impossible.
4. Breast cancer genetics: from family to ethnicity
The analysis of the notion of CIS and borderline lesions that troubled pathologists, cytologists and gynaecologists constitutes the core theme of Ilana Löwy’s book Preventive Strikes. Women, Precancer, and Prophylactic Surgery. She offers a holistic and formidable account of pre-cancer in comparative perspective, including Britain, USA and France. Her register is less an illustration of different approaches and more a reflection on the local basis of a presupposed universality of medical knowledge and its hegemonic claims. ‘Medical cultures vary at least as much as national cultures do’ (p. 35), she anticipates.
In her study of cervical cancer, French gynaecologists, for example, appeared less prone to perform hysterectomies in cases of CIS, because the sterilisation of fertile women was not a welcome idea in a country that endorsed pro-natalist policies. The development of X-ray and curietherapy technology in the Radium Institute of Paris, which soon became a world-renowned specialised institute, provided radiologists with a share in the treatment of cancer which largely differed from the surgical-oriented management observed in the US. In France, the combination of therapy (surgery and radiotherapy) led specialists on occasions to perform radiotherapy as the treatment of preference, especially for cervical cancer, which unlike breast cancer, proved to be highly radiosensitive. In this volume Löwy introduces a very detailed analysis of cervical and mainly breast cancer, their changing histological notions, their modes of detection and treatment. Chapters one and two trace the definition of premalignant lesions by pathologists and with them the notion of cancer as a disease of transformed cells and tissues, that spurred in turn the dogma of ‘early detection’. Radical surgical treatments, which also pervaded the field of breast cancer diagnosis when mastectomy was performed as a diagnostic technique (chapter three), corresponded to an era that conceived cancer as a localised disease that later expanded, invading distant organs. The generalisation of the notion of in situ cancers discussed in chapter four highlights the different approaches that informed treatment: while in the case of cervical cancer hysterectomy was gradually abandoned in favour of conservative methods, in situ breast cancers continued to be treated with radical interventions (mastectomies) well on until the 1980s. The role of screening methods such as the Pap test led to an increase in overtreatment which both transformed the meaning of the test, from diagnosing cancer, to a pre-diagnostic test that indicates the existence of cell abnormalities, whose extirpation concealed in turn the need to establish an accurate diagnosis. These aspects alongside the development of screening campaigns are the subject of chapters five and six, which also introduce the use of mammography screening alongside the controversies arising from the setting up of programmes at population level. Emphasis on the latter leads the author to a broader discussion of screening extending the analysis to three other cancers (prostate, lung, and colon), which appears as a departure from the focus of the book. The section on mammography, on the other hand, offers little discussion in terms of what the technology, as a diagnostic tool, was introducing, and in terms of the novel cancer notions radiologists incorporated as they entered into the field of cancer diagnosis. Overall Löwy’s well-selected case notes extracted from pathology records of different hospitals in the US, UK and France add depth and detail to a history that has been reconstructed at a national level, most successfully by Barron Lerner (2001) and Robert Aronowitz (2007) in the case of the US. Her attention to local variations allows Löwy to formulate a critical and more nuanced perspective on the foundations of cancer diagnosis. Yet variations in diagnosis, and the divergent ways in which pathologists working in different settings have correlated breast lesions with a prediction of their malignancy, are only part of the story.
There was also a series of strategies that have variously attempted to curb a breast cancer death toll that for decades has remained stubbornly high, but not without consequences for those who became survivors of the disease. ‘Preventive strikes’ could be read as a series of reactions/responses to the fear of a devastating disease whose course and exact development could not be anticipated accurately even in the era of evidence-based medicine. In case of doubt, doctors often stated: ‘we cut it out’ or ‘burn it out’, or it is ‘better to err on the side of caution’. These ideas underpinned the professional attitudes that somehow entered into the deontology of medical practice, unsettling the balance of decision making: it presupposed medical intervention as radical ‘action’ as opposed to conservative treatment and follow up as ‘neglect’. This is not to say that conservative treatment and watchful care of cancer patients did not exist as an acceptable treatment at particular points in time. It is simply that the latter, as demonstrated by the case of the NWH in New Zealand, was far more questioned than the former. The idea of agency behind an anticipated response to an unexpected outcome of the disease has engulfed patients too, who were also induced to develop ‘preventive strikes’ to tame risk. This is more clearly revealed in the last two chapters of the book through a focus on the developments of oncogenetics in the 1980s and 1990s, which transformed the hereditary suspicion of breast cancer from a disease that ran in the family to an identified localised gene mutation: BRCA (1) and (2). Women carriers of these mutations have a higher than average risk of developing a tumour early in life, bilateral tumours and ovarian cancer. BRCA-positive women are encouraged to undergo prophylactic surgery (bilateral mastectomies, ovariectomy) or to live a life of constant follow up tests (mammography) and above all, live with the threat and anxiety about the ‘higher than average’ appearance of the disease. Their prospects do not differ much from those of women diagnosed with CIS; as Löwy puts it, they ‘enter a limbo between health and disease (becoming a “healthy ill”), which change the way they feel about their dangerous body parts (“living with a ticking time bomb”), and lead to a split between the self and the treacherous part of the body’ (p. 4). Despite the standardisation of genetic tests, ‘the meaning of hereditary risk of breast cancer is shaped locally’ (p. 181), Löwy argues, depending on the organisation of health services, the existence or not of a universal healthcare system, intellectual property rules, and the broader configurations of cancer management in particular places. The discovery that women of Ashkenazi Jewish origin had a higher incidence of a specific mutation of BRCA led, however, to different approaches about the role of ethnicity in medical testing. In the US, home to the first laboratory that patented the BRCA genes and subsequently its test, the combination of an aggressive testing campaign – initiated by the laboratory which also offered a cheaper test to scrutinise Ashkenazi mutations in Jewish women – is coupled with a perceived anxiety within this community about the predisposition to certain diseases. Thus, medical intervention counted on the support of the Jewish community and health groups that embraced a screening culture for breast cancer grounded in their ethnic traits.
By contrast, in France, ethnicity does not feature as an independent category in risk assessment, and (any) woman with a strong family history of breast cancer can be referred to a cancer genetic service for a test. A similar pattern is followed in the UK. The persistence of the association of breast cancer genes with ethnic groups in the US, however, points to further directions, making the one analysed by Löwy and the Jewish community merely a start. The observation that Hispanic-Latino women in the US have a higher mortality rate compared to white women led to population studies seeking to elucidate the ethnic profile that helps explain differences in breast cancer outcomes: they found that Latinas have a higher incidence of advanced stages of the disease, tumours of a larger size, developed at a younger age, and with a higher incidence of triple-negative breast cancers (these are negative for hormone therapy and have a poorer prognosis), and a higher incidence of pathogenic BRCA1 mutations.(6) These studies have prompted the analysis of the genotype of breast cancer in Latino women, currently being investigated by a multi-site study sponsored by the Susan G. Komen for the Cure® – the world’s largest charity devoted to the fight against breast cancer – the National Cancer Institute and five Latin American countries. Other studies, on the contrary, have pointed to the existence of health inequalities as an essential factor behind differences in health outcomes. More specifically, Latino women living in the US have higher poverty rates, are less educated, are largely uninsured and in many cases their undocumented status prevents them from accessing health care at all compared to white women.(7) If, as Löwy has noted for the case of Ashkenazi Jewish women ‘the relative wealth of this population, its elevated level of education, and its high level of health consciousness, made it an excellent target for the marketing of tailored services [BRCA test]’ (p. 192), one may presume that the same will apply to a selected group of Latino women, while in the vast majority it may lead to an increase in health inequalities. In addition, the impact on employability and on access to life and health insurance has been noted by various observers who cautiously look at the surge of genetic tests and the increased commodification of genome products within the broader context of health inequalities in specific healthcare systems. At this juncture, historians such as Daniel Kevles have drawn a connection between new genetic screening policies and the long history of eugenics.(8) Social and medical historians have yet to contribute much to this new association of medical genetics and ethnicity and its impact on health policies, public and lay perceptions of immigration, risk, and identity, and crucially, in bringing to the fore discussion on the social determinants of health alongside (re-)emerging propositions of biological determinism.
5. Secrecy, exposure, and locality: cervical cancer and the politics of visibility
Löwy opens her latest book, A Woman’s Disease. The History of Cervical Cancer (2011), by reviewing the story of three famous figures of the 19th, 20th, and 21st centuries: mathematician and computing pioneer Ada Lovelace, politician and First Lady Eva Perón, and TV celebrity Jade Goody.
Women of completely different traditions, they share a young and painful death from cervical cancer, and their distinct experiences with the disease provide the author with a springboard to reflect on the changing perception of cervical cancer at different historical times: from a trait of women’s weak constitution, to denial and secrecy, to TV and media exposure. The public perception of the disease mirrors the forms of medical knowledge as well as the fate of its victims. The political secrecy that surrounded Eva’s illness (she was never told she had cancer) seems to escape comparison with the mediatisation of Goody’s disease, whose chemotherapy sessions featured in the media on a daily basis until her death. She was not immune to a political touch when PM Gordon Brown expressed his sympathy with the ‘courageous woman’. Just fifty years or so separates her case from Eva’s, but the formulations of Western medicine and patients’ attitudes (cancer-fighters) seem to have changed dramatically. During the 20th century it was not uncommon to withhold diagnosis of cancer from patients, yet it is difficult to think that had Eva been diagnosed with any other deadly disease, her illness would have had a different patient/public treatment. Eva’s corporality in Argentina was as present in life as in death. Her body did not ‘leave’ political life, initially, as a part of the unprecedented ritualisation that characterised the paraphernalia of peronismo which secured her posterity by the technique of embalmment. Subsequently, her body fell prey to the country’s political instabilities, feeding Argentinian political divisions during the 25 years in which it was lost (stolen and hidden), until finally buried in a family pantheon in Buenos Aires. Secrecy about her disease and the most advanced treatment she received – she is reported to be the first patient treated with chemotherapy in Argentina – endured in the medical profession even after her death. Revealingly, the First International Congress in Antibiotics and Chemotherapy was held in Buenos Aires just months after Eva’s death in 1952, and although her portrait presided over the inaugural session, no reference was made to the pioneering chemotherapy treatment she received.(9) The politics of secrecy in Eva Peron’s case may not epitomise that for women in general at the time, but it does serve to illustrate how cancer was perceived as a fatal, outrageous disease as opposed to the contemporary notion of pre-cancer as a disease that can be diagnosed and prevented. To explore the meaning of these transformations, Löwy engages with the longer history of cervical cancer, offering a review of one-and-a-half centuries of preventive policies, treatments, and changing explanatory theories, balancing ruptures and continuities while offering a perspective to interpret its successes and failures in a convincing and much needed elaboration for current debates. The book is shaped in a very readable way and remains accessible to any newcomer to the field. The first three chapters are devoted to cervical cancer conceptualisation (irritation theory) and treatment (surgery and radiotherapy) and the expectations and frustrations that therapeutic approaches generated in the European and US context until the 1930s. Although chronic irritation as an explanatory theory for cancer waned in the post-Second World War era, the real breakthrough came in the mid 1960s from the field of virology.
The relationship of viruses as causal agents of cancer in humans was first established after a tumour virus was identified (Epstein-Barr Virus) from a common childhood tumour in central Africa (Burkitt’s lymphoma). Since then, six human viruses have been identified as etiologic agents of human cancers, including human papilloma virus (HPV), linked to cervical cancer in the early 1980s. The latter inaugurated a new era for the perception of the disease, transforming cervical cancer into a sexually transmitted disease. ‘In the 1980s, as in 1826, this disease was linked with a ‘greater moral laxity’ – or, in today’s terms, ‘promiscuity’ – of women from lower social classes’ (p. 141), says Löwy. Although the ‘promiscuity’ hypothesis has lost its class-association after the sexual revolution, it has retained its explanatory value for deprived women who show a tendency toward both earlier and more numerous pregnancies. Today experts agree that HPV strains 16 and 18 account for around 70 per cent of cervical cancer, that only a percentage of women infected with HPV will develop the disease, while in the majority of cases the infection regresses spontaneously; and that a period of ten to 15 years elapses between the detection of premalignant lesions and the development of invasive cancer. Statistics show that in industrialised countries, the incidence and mortality of cervical cancer have declined to the extent that the disease no longer constitutes a serious public health problem. Yet the perspective for women, Löwy argues, is still somber. The new visibility provided by HPV infection and the presence of ‘atypical squamous cells of unknown significance’ (ASCUS) only triggers further examinations (HPV tests, Pap tests, colposcopy and biopsies), which extend the uncertainty of diagnosis to women’s perception of their body as a nebulous health-ill state. The visibility of diagnostic tests and screening programmes from the 1950s onwards (explored in chapters four and five) contrasts with the less publicised psychological and physical effects on women’s lives. In this sense, A Woman’s Disease seems to retain, and single out from its title, the individual, personal account behind the more general, statistical, average idea of cervical cancer as ‘a women’s disease’. The book is intercalated with stories and narratives of women that navigate the sequence of diagnostic tools in search of the illumination of a clear diagnosis, while others, despite their loyal compliance with tests, succumb to the disease. Cervical cancer and the politics of visibility can easily be extended to Bryder’s account of the NWH and the Cartwright Inquiry, not least because of the role of the media in conveying simplified and distorted versions of the events, but more broadly, because the whole case seems to cast a shadow on cervical cancer and the politics of visibility, one that synthesises the encounter of two temporalities. On the one hand, the incessantly promoted effectiveness of diagnostic tools, screening, prevention, treatment, evidence-based medicine, and ethical (and shared) decision-making; and on the other, the nature of the medical encounter, where doctors and patients reach decisions, sometimes together and at other times unilaterally, balancing different fears and hopes, elaborating on risk, and living with their life-changing decisions. The latter points to an increasingly occluded story, and an altogether alternative, less progressive, history.
An excessive confidence in the former can easily make the latter sound arbitrary, backward or unreasonable. The last chapter of A Woman’s Disease addresses the persistent burden of cervical cancer in developing countries, where management and prevention programmes reflect a quite different debate. Focussing on Brazil, Löwy provides a further reminder of the limitations that new interventions may have in the population. As Pap smear screening is considered to have failed in low-resource settings, new approaches to cervical cancer screening are being tested (HPV test, visual inspection with acetic acid and treatment with cryosurgery) which reveal a global, technical approach instead of the delineation of health policies that will ‘act upon economic and social conditions that hamper the diffusion of preventive measures’ (p. 173). Arguably, there is much more historical work needed in the assessment of Pap smear ‘failure’ in developing countries. Although there is some substance to this repeated claim by policy makers, they paint a very partial picture. As historians came to recognize, and as all three books here attest, the diversity and dynamism of local contexts, issues such as ethnicity, health inequalities, and local medical configurations amount to a more complex scenario for the adoption of technological interventions. And, as in the case of the selective historical ‘success’ of smallpox eradication, it seems that the past ‘failure’ of Pap smear screening in developing countries also merits a closer historical examination. There is great additional value to be garnered from reading these books collectively. Considered side by side, these volumes point to a much deeper understanding of the complex interdependencies that exist between women’s bodies, medicine, technologies, policy makers, health activists, the health industry, and the press. Their work is clearly of relevance to scholars in a number of fields, certainly beyond that of medical history.
1. On women and cancer, see for example, Barron Lerner, The Breast Cancer Wars: Hope, Fear, and the Pursuit of a Cure in Twentieth-Century America (New York, NY, 2001); James Olson, Bathsheba's Breast: Women, Cancer, and History (Baltimore, MD, 2005); Robert Aronowitz, Unnatural History: Breast Cancer and American Society (Cambridge, 2007); Kirsten Gardner, Early Detection: Women, Cancer, & Awareness Campaigns in the Twentieth-Century United States (Chapel Hill, NC, 2006); Three Shots at Prevention: The HPV Vaccine and the Politics of Medicine's Simple Solutions, ed. Keith Wailoo et al. (Baltimore, MD, 2010); Keith Wailoo, How Cancer Crossed the Color Line (New York, NY, 2011); Cancer in the Twentieth Century, ed. David Cantor (Baltimore, MD, 2008).
2. Anne-Emanuelle Birn, ‘The stages of international (global) health: Histories of success or successes of history?’, Global Public Health, 4, 1 (2009), 50–68, 51.
3. Sanjoy Bhattacharya, ‘The World Health Organization and global smallpox eradication’, Journal of Epidemiology and Community Health, 62 (2008), 909–12.
4. Alex Mold, ‘Patient groups and the construction of the patient-consumer in Britain: an historical overview’, Journal of Social Policy, 39, 4 (2010), 505–521, 518.
5. Jolyn Hersch, et al., ‘How do we achieve informed choice for women considering breast screening?’, Preventive Medicine, 53, 3 (2011), 144–6.
6. Tejal Patel et al., ‘Breast cancer in Latinas: Gene expression, differential response to treatments, and differential toxicities in Latinas compared with other population groups’, The Oncologist, 15 (2010), 466–75.
7. Ibid.
8. Daniel Kevles, ‘From eugenics to patents: genetics, law, and human rights’, Annals of Human Genetics, 75 (2011), 326–33.
9. Primer Congreso Internacional de Antibióticos y Quimioterápicos (Buenos Aires, 1953).
Ilana Löwy is happy to accept this review and does not wish to comment further.
<urn:uuid:6ea216f7-b609-4f2d-bd48-d6c49c3c3c85>
{ "date": "2017-04-29T14:01:00", "dump": "CC-MAIN-2017-17", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123491.79/warc/CC-MAIN-20170423031203-00590-ip-10-145-167-34.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9586920738220215, "score": 2.765625, "token_count": 8481, "url": "http://www.history.ac.uk/reviews/print/review/1239" }
It looks incongruous across the dank, misty farmland north of Ypres. A large party marquee erected amongst the winter stubble; but it marks one of the most ambitious battlefield archaeology projects ever attempted. I last visited this farm a year ago - on that occasion soggy from recent rain and swept by chilly easterly winds. Across that landscape a small survey team were mapping what lay below, using ground-penetrating radar. Ninety years after Flanders was torn apart by war, most of the battlefield has now disappeared. Yet, beneath the soil hidden reminders lie undisturbed. Daily life under fire forced the warring armies to seek safety underground; hundreds of shelters and headquarters were constructed in this sector alone. The archaeologists have spent years searching for one such example - the Vampire Dugout - from where a Brigadier General and his staff planned attacks that so often proved futile, and costly. The team have used every technique available to them - from radar, to dowsing, from spades to excavators. Finally, it was local information that led to a crucial discovery. The army tunnellers who spent three months digging the shelter did so using a 40-foot deep shaft. Today it's open once more. Now British experts, with their Belgian counterparts, are preparing to enter the tunnel complex. Gazing down at the tiny figures at the base of the shaft, Peter Barton, whose research has been central to the project, pointed out timber that looked in remarkable condition. "I've never seen anything like this. This shaft was constructed more than 90 years ago, and you wouldn't know it. We now know that the tunnels are lined with steel, and have survived intact." Buckets of silt are still being winched to the surface yielding the first evidence of those who worked and slept here: a shiny clip of British rifle ammunition, a water container, machine parts, even a brass safety pin. Far more lies beyond, but there are hazards to be overcome. Outside the tent, at a safe distance, a pile of rusting unexploded shells awaits disposal. In the tunnels where pumps once ran night and day, thousands of gallons of water have accumulated, a lake that needs to be dry before the real archaeology can begin. The team describe it as like exploring an undersea wreck without the diving suits. Their work, deep below the old trench lines, has barely begun.
<urn:uuid:8e9f32e7-e225-47e0-be51-e1f5f2d2fef0>
{ "date": "2016-06-30T19:51:47", "dump": "CC-MAIN-2016-26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00098-ip-10-164-35-72.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9561925530433655, "score": 2.953125, "token_count": 503, "url": "http://news.bbc.co.uk/2/hi/uk_news/7246038.stm" }
Higher Music, Baroque
Higher Music, JYC
A single bass line (for example cello) with a keyboard part (for example harpsichord) filling in the harmonies.
Baroque
A small group of soloists (concertino) contrasts with a larger group of instruments (ripieno).
Baroque
The small group of soloists in a concerto grosso.
Baroque
The main group of instruments in a concerto grosso.
Baroque
Little return. In a concerto grosso, the ritornello is the main theme played by the ripieno. The ritornello may return frequently throughout the movement.
Variations over a ground bass.
<urn:uuid:76cbad7e-1552-4dd0-a2db-0a7d1f22dfbc>
{ "date": "2018-05-20T21:45:23", "dump": "CC-MAIN-2018-22", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863689.50/warc/CC-MAIN-20180520205455-20180520225455-00216.warc.gz", "int_score": 3, "language": "en", "language_score": 0.7423558831214905, "score": 2.640625, "token_count": 150, "url": "https://quizlet.com/46441768/higher-music-baroque-flash-cards/" }
Virus Name: Swap
Aliases: Falling Letters Boot, Israeli Boot
V Status: Rare
Discovered: August, 1989
Symptoms: Graphic display; BSC (floppy only); TSR; bad cluster
Eff Length: N/A
Type Code: RsF - Resident Floppy Boot Sector Infector
Detection Method: [Not in Certification Set]
Removal Instructions: MDisk, or DOS SYS command

The Swap virus, or Israeli Boot virus, was first reported in August 1989. This virus is a memory resident boot sector infector that only infects floppies. The floppy's boot sector is infected the first time it is accessed. One bad cluster will be written on track 39, sectors 6 and 7, with the head unspecified. If track 39, sectors 6 and 7, are not empty, the virus will not infect the disk. Once the virus is memory resident, it uses 2K of RAM. The actual length of the viral code is 740 bytes.

The Swap virus activates after being memory resident for 10 minutes. A cascading effect of letters and characters on the system monitor is then seen, similar to the cascading effect of the Cascade and Traceback viruses.

The virus was named the Swap virus because the first isolated case had the following phrase located at bytes 00B7-00E4 on track 39: "The Swapping-Virus. (C) June, 1989 by the CIA". However, this phrase is not found on diskettes which have been freshly infected by the Swap virus.

A diskette infected with the Swap virus can be easily identified by looking at the boot sector with a sector editor, such as Norton Utilities. The error messages which normally occur at the end of the boot sector will not be there; instead the start of the virus code is present. The remainder of the viral code is located on track 39, sectors 6 and 7.
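As a rough illustration of the kind of inspection described above, the sketch below reads a raw floppy image and reports the two traits the entry mentions: whether readable error text is present at the end of the boot sector, and whether anything is stored on track 39, sectors 6 and 7. The 360 KB geometry, the DOS sector ordering, and the search strings are assumptions made for illustration; this is not a validated signature for the Swap virus.

```python
# Minimal sketch: inspect a raw floppy image for traits the entry describes.
# Assumes a 360 KB image (40 tracks x 2 heads x 9 sectors of 512 bytes).
import sys

SECTOR = 512
HEADS, SPT = 2, 9  # assumed geometry

def sector_offset(track: int, head: int, sector: int) -> int:
    """Byte offset of a physical sector in a raw image (DOS ordering)."""
    return ((track * HEADS + head) * SPT + (sector - 1)) * SECTOR

def inspect(path: str) -> None:
    with open(path, "rb") as f:
        image = f.read()

    boot = image[:SECTOR]
    # A normal DOS boot sector ends with readable error text; on an infected
    # diskette that text is reportedly replaced by the start of the virus code.
    has_boot_text = b"disk" in boot.lower() or b"boot" in boot.lower()
    print("boot-sector error text present:", has_boot_text)

    # The entry leaves the head unspecified, so check both.
    for head in (0, 1):
        for sector in (6, 7):
            off = sector_offset(39, head, sector)
            chunk = image[off:off + SECTOR]
            nonzero = sum(b != 0 for b in chunk)
            print(f"track 39 head {head} sector {sector}: {nonzero} non-zero bytes")

if __name__ == "__main__":
    inspect(sys.argv[1])
```

A non-empty track 39 region together with missing boot-sector text would only be a hint worth examining with a sector editor, not a diagnosis.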
<urn:uuid:04c1149a-e0db-451e-b6c4-66919009a49b>
{ "date": "2018-09-19T23:07:22", "dump": "CC-MAIN-2018-39", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156311.20/warc/CC-MAIN-20180919220117-20180920000117-00216.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8907788395881653, "score": 2.59375, "token_count": 416, "url": "http://wiw.org/~meta/vsum/view.php?vir=1334" }
We have evidence that we are at our planet’s ecological tipping point. Our consumer-based society has largely led us to this crisis. The fossil fuel consumption that powers our consumer-based, mono-culture dominated, Monsanto-bullied agricultural system is clearly unsustainable, and each day we are becoming more uncertain as to how long we will have cheap fuel. If environmental costs aren’t enough to change our ways, what about human costs? Do we participate in a system that engages in foreign conflict based on a desire to secure that country’s energy resources? What would a modern society that used much, much less fossil fuels look like? Of course, it would be based on a local agricultural system. We would end our reliance on our large-scale food system where our food travels, on average 1500 miles before it reaches us by way of refrigerated trucks. We would return to subsisting off of the goods that can be produced in our local region. To take this concept further, we would move away from our current unsustainable modes of food preservation. Some may have given up hope on the establishment of a more sustainable way of life that can prevent the ecological collapse that looms around us. But others, out of passion for sustainable-living principles, or out of a propensity toward survival, will be interested in learning the techniques that are necessary for not only short-term survival without fossil fuels, but methods of food cultivation and preservation that adequately nourish and sustain the body over time–a way of living and eating that provides for healthy peoples over generations. What would a food system look like that persists despite the decline of the fossil fuel supply? It may not include the household refrigerator and freezer. Energy intensive canning that drastically reduces the vitamin content of foods may be replaced by drying food in low-tech solar dehydrators, or preservation by lactic-acid fermentation, a process where food is transformed by the action of beneficial microorganisms. This is not new technology, but ancient technology used by all of the cultures of humanity that maintained healthy populations over many generations. Milk is preserved by transforming it to more stable products such as kefir, yogurt, cheese, and butter. Meat from large animals is preserved by drying into jerky, allowing to age by hanging, or curing by way of sausage in its various forms. Each home or small community would have small animals such as chickens, quail, or guinea fowl that would be harvested as needed. Food would be fresh, systems would be small, and each person in society would have some food specialty to contribute to the household, or offer foods for sale or barter. All cultures that have stood the test of time have had a way of preserving food that harnessed the ability of lactic acid bacteria to ferment food. For Germans it was sauerkraut – sour greens or sauerruben – sour roots. Koreans have kimchi. Italians have prosciutto and salami, preserved meats with the help of lactic acid bacteria. Not only do these preservation methods preserve food, but they provide increased nutrition, enzymes, beneficial bacteria, easier digestion, and enhanced flavor.
<urn:uuid:ed86a12f-8433-46ea-91cc-43c5868438cd>
{ "date": "2017-07-20T22:34:57", "dump": "CC-MAIN-2017-30", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423512.93/warc/CC-MAIN-20170720222017-20170721002017-00456.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9600486755371094, "score": 3.328125, "token_count": 643, "url": "https://wellwithmargaret.com/2012/06/17/keep-culture-alive/" }
Research finds lawn chemicals raise cancer risk in Scottish terriers WEST LAFAYETTE, Ind. - Exposure to herbicide-treated lawns and gardens increases the risk of bladder cancer in Scottish terriers, a discovery that could lead to new knowledge about human susceptibility to the disease, according to Purdue University scientists. A team of veterinary researchers including Lawrence T. Glickman has found an association between risk of transitional cell carcinoma of the urinary bladder in Scottish terriers and the dogs' exposure to chemicals found in lawn treatments. The study, based on a survey of dog owners whose pets had recently contracted the disease, may be useful not only for its revelation of potentially carcinogenic substances in our environment, but also because studying the breed may help physicians pinpoint genes in humans that signal susceptibility to bladder cancer. "The risk of transitional cell carcinoma (TCC) was found to be between four and seven times more likely in exposed animals," said Glickman, a professor of epidemiology and environmental medicine in Purdue's School of Veterinary Medicine. "While we hope to determine which of the many chemicals in lawn treatments are responsible, we also hope the similarity between human and dog genomes will allow us to find the genetic predisposition toward this form of cancer found in both Scotties and certain people." The research, which Glickman conducted with Malathi Raghavan, Deborah W. Knapp and Patty L. Bonney, all of Purdue's School of Veterinary Medicine, and Indianapolis veterinarian Marcia H. Dawson, appears in the current (4/15) issue of the Journal of the American Veterinary Medicine Association. According to the National Cancer Institute, about 38,000 men and 15,000 women are diagnosed with bladder cancer each year. Only about 30 percent of human bladder cancers develop from known causes. As Scottish terriers - often called Scotties - have a history of developing bladder cancer far more frequently than other breeds, Glickman and his team decided to examine the dogs' diet, lifestyle and environmental exposures for a possible link to bladder cancer. In an earlier study, Glickman and his colleagues found Scotties are already about 20 times more likely to develop bladder cancer than other breeds. "These dogs are more sensitive to some factors in their environment," Glickman said. "As pets tend to spend a fair amount of time in contact with plants treated with herbicides and insecticides, we decided to find out whether lawn chemicals were having any effect on cancer frequency." Glickman's group obtained their results by surveying the owners of 83 Scottish terriers. All of the animals had bladder cancer and were of approximately the same age. Based on an 18-page questionnaire, owners documented their dogs' housing, duration of exposure to the lawn or garden and information on the particular lawn treatment used (dog owners provided either the label from the treatment bottle or, if a company sprayed the lawns directly from a truck, the name of the lawn service). The results were then compared with a control group of 83 unexposed Scottish terriers of similar age that were undergoing treatment for unrelated ailments. "We found that the occurrence of bladder cancer was between four and seven times higher in the group exposed to herbicides," Glickman said. "The level of risk corresponded directly with exposure to these chemicals: The greater the exposure, the higher the risk."
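In a case-control design like this, the "four to seven times" figure is an odds ratio computed from a 2×2 table of exposure in cases versus controls. The sketch below shows how such an estimate and its confidence interval are typically derived; the counts used are hypothetical, since the article does not report the underlying exposure tallies.

```python
# Illustrative only: odds ratio and 95% CI from a 2x2 case-control table.
# The counts below are made up to mimic 83 cases and 83 controls.
from math import exp, log, sqrt

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """a=exposed cases, b=unexposed cases, c=exposed controls, d=unexposed controls."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lower, upper = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lower, upper

# Hypothetical exposure counts, not the study's data
or_, lo, hi = odds_ratio_ci(a=50, b=33, c=20, d=63)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```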
Glickman said it is possible the active ingredient in most lawn and garden sprays - a compound known by its chemical name of 2,4-D - was to blame, although it has been thoroughly tested by the FDA for carcinogenicity. However, he said, it also is possible that one of the so-called inert ingredients in the mixture - ingredients which often make up nearly two-thirds of a treatment's volume - could be responsible for the increased risk. "These other ingredients are thought to be inert and, therefore, are not tested or even listed on the product label," Glickman said. "But 4 billion pounds of these other untested chemicals reach our lawns and gardens every year, and we theorize they are triggering cancer in these animals, which are already at risk because of a peculiarity in their genome." Scottish terriers' genetic predisposition toward developing bladder cancer makes them ideal as "sentinel animals" for researchers like Glickman because they require far less exposure to a carcinogen than other breeds before contracting the disease. "You might compare them to the canaries used in coal mines a century ago," he said. "The difference is that we don't deliberately place our research animals in harm's way. We study animals that have already contracted diseases, bring them to the hospital and then try to find out what combination of genetic predisposition and environmental influence added up to make them ill." Glickman said the similarity between dog and human genomes could lead researchers to find the gene in humans that makes them susceptible to developing bladder cancer. "If such a gene exists in dogs, it's likely that it exists in a similar location in the human genome," Glickman said. "Finding the dog gene could save years in the search for it in humans and could also help us determine which kids need to stay away from lawn chemicals." But Glickman emphasized that because the effect was a combination of chemical and genetic predisposition, the results do not suggest that everyone should avoid treated lawns. "We don't want to indicate that every person is susceptible," he said. "Because this study shows that exposure to the chemicals exacerbates a genetic predisposition in Scotties towards developing TCC, it's likely that only a segment of the human population would be in similar danger. "But we still need to find out who those individuals with the same predisposition are. Until we do, we won't know who's safe and who isn't." As a next step, Glickman will survey children, as well as dogs, in households that have treated lawns and compare the chemicals in their urine samples with those from households where lawns have not been treated. "It's important to find out which lawn chemicals are being taken up by both children and animals," he said. "We hope to start this spring." Source: Eurekalert & others
<urn:uuid:b037fd5e-bace-47d3-a974-20a6db5e0cbc>
{ "date": "2015-04-21T19:02:26", "dump": "CC-MAIN-2015-18", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246643088.10/warc/CC-MAIN-20150417045723-00095-ip-10-235-10-82.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9711636304855347, "score": 2.640625, "token_count": 1316, "url": "http://psychcentral.com/news/archives/2004-04/pu-rfl042004.html" }
To evaluate the safety, acceptability, and accuracy of the minimally invasive test compared with endoscopy for the diagnosis of Barrett's esophagus, the researchers enrolled 1110 individuals attending 11 UK hospitals for investigational endoscopy for dyspepsia and reflux symptoms. They found that the new test correctly identified 79.9% of the 647 individuals with endoscopically diagnosed Barrett's esophagus, and that 92.4% of the 463 individuals unaffected by Barrett's esophagus were correctly identified as being unaffected. The sensitivity of the test increased to 87.2% for patients with circumferential Barrett's segments of more than 3 cm, which are known to confer a higher cancer risk. Nearly 94% of the participants swallowed the sampling device (Cytosponge) successfully, there were no adverse effects attributed to the device, and participants who swallowed the device generally rated the experience as acceptable. While the findings indicate that this new cell sampling device might provide a simple, minimally invasive way to identify those patients with reflux symptoms who warrant endoscopy to diagnose Barrett's esophagus, randomized controlled trials of the test are needed to assess its suitability for clinical implementation. Moreover, because most people with Barrett's esophagus never develop esophageal cancer, additional biomarkers ideally need to be added to the test to identify those individuals who have the greatest risk of esophageal cancer, and thereby avoid overtreatment of Barrett's esophagus. The authors say: "The Cytosponge-TFF3 test can diagnose [Barrett's esophagus] in a manner that is acceptable to patients and logistically feasible across multiple centers. This test may substantially lower the threshold for investigating patients with reflux, as part of a strategy to reduce population mortality from esophageal adenocarcinoma." Source: http://www.sciencedaily.com/releases/2015/01/150129143034.htm
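As a rough illustration of the accuracy figures quoted above, the snippet below recomputes sensitivity and specificity from approximate counts back-calculated from the reported percentages (79.9% of the 647 affected and 92.4% of the 463 unaffected participants); the exact cell counts are assumptions, not figures taken from the paper.

```python
# Back-of-the-envelope check of the reported accuracy of the Cytosponge-TFF3
# test; counts are approximations derived from the quoted percentages.

def sensitivity(true_pos, false_neg):
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    return true_neg / (true_neg + false_pos)

true_pos = 517             # ~79.9% of the 647 patients with Barrett's esophagus
false_neg = 647 - true_pos
true_neg = 428             # ~92.4% of the 463 unaffected patients
false_pos = 463 - true_neg

print(f"Sensitivity: {sensitivity(true_pos, false_neg):.1%}")  # ~79.9%
print(f"Specificity: {specificity(true_neg, false_pos):.1%}")  # ~92.4%
```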
<urn:uuid:cca55f7c-54ea-4177-b772-e134dd321951>
{ "date": "2018-06-19T00:44:48", "dump": "CC-MAIN-2018-26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267861641.66/warc/CC-MAIN-20180619002120-20180619022120-00096.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9077000617980957, "score": 2.6875, "token_count": 426, "url": "https://cancerlive.net/cancer-knowledge/new-minimally-invasive-test-identifies-patients-for-barretts-esophagus-screening/" }
Health Surveillance and Disease Management / Communicable Diseases / Antimicrobial Resistance Recommendations of a Group of Experts: Standards for the Use of Automated Identification Systems for Bacteria and Susceptibility to Antimicrobials (Brasilia, Brazil, 26–28 October 2004) Full Text (in Spanish, 15 pp, PDF, 296 Kb; chapter headers translated for user orientation) General objective of the Expert Committee: Define the processes that guarantee the quality of the information generated by automated systems for identifying bacteria and testing susceptibility to antimicrobial drugs. In 1995, due to a regional alert on the importance of emerging and reemerging diseases, among which resistance to antibiotics is included, PAHO strengthened its activities in this area. Thus, a surveillance network was developed for the susceptibility to antibiotics of isolates of Salmonella spp, Shigella spp, and Vibrio cholerae. These microorganisms are important etiologic agents of diarrheal diseases that may sometimes require antibiotic treatment. Their importance transcends the individual medical aspects, since their epidemic presentation transfers the problem to a public health dimension. Furthermore, the importance of food contamination, sometimes at the source itself due to infection of farm animals, transforms an individual medical problem into an epidemiological problem with serious economic and social implications. The same thing occurs when these etiologic agents cause outbreaks in countries that obtain resources from tourism. Thus, a problem is created with much broader economic and political impact than that of the original medical problem. The surveillance network for the etiologic agents of enteric diseases sponsored by PAHO began to function in 1996, with the involvement of the National Reference Laboratories (NRLs) of Argentina, Brazil, Chile, Colombia, Costa Rica, Mexico, Peru, and Venezuela. Each of these laboratories would be the head of a local network in which several of the thousands of laboratories in the region would participate, for the purpose of carrying out microbiological analyses. After all, the activities of those laboratories depend on the isolation, identification, and determination of the susceptibility to antibiotics of the species subject to surveillance. The participating countries concluded that, in order to have confidence in the results obtained, it would be necessary to strengthen the quality assurance of the internal practices of each laboratory and to establish a system to allow for periodic performance evaluation, both of the National Reference Laboratory and of the laboratories participating in every country's network. Hence, they accepted that their contribution to the network was conditional upon surveillance activities in the national laboratories being carried out in accordance with quality assurance principles that ensured the veracity of the results obtained. Based on those results, greater rationalization could be achieved in terms of both the empirical treatment of the individual case and potential control measures of importance to the community. The National Laboratory for Enteric Pathogens (LNPE) of Canada agreed to serve as the laboratory organizing the system, which was subsequently joined by laboratories from five Caribbean countries: Bahamas, Barbados, Jamaica, Saint Lucia, and Trinidad and Tobago in 1998, and Cuba in 1999.
With support from the Agency for International Development of the United States of America (USAID), six more Latin American countries were also incorporated in the network in 1999: Bolivia, Ecuador, El Salvador, Guatemala, Nicaragua, and Paraguay. The countries participating in the network are committed to providing ongoing support to the corresponding National Reference Laboratory (NRL). In turn, the NRL would function as the head of the network, compiling national information on the identification of the species isolated and their susceptibility to antibiotics. Furthermore, it would supervise the enforcement of principles of quality assurance in each laboratory in the network by means of supervisory visits and would be responsible for carrying out performance evaluations of each laboratory. In this way, the information could be used to the extent that it is reliable. Subsequently, other community species were added to network monitoring: Streptococcus pneumoniae (invasive), Haemophilus influenzae (invasive), Neisseria meningitidis, and Escherichia coli (urinary infection), as well as species isolated in hospital-acquired infections, such as Staphylococcus aureus, Pseudomonas aeruginosa, Acinetobacter spp, Enterococcus spp (E. faecalis and E. faecium), Klebsiella spp and Enterobacter spp. Surveillance of these bacteria calls for an external performance evaluation program carried out by the National Institute of Infectious Diseases (Instituto Nacional de Enfermedades Infecciosas / INEI) of the Carlos Malbrán Institute of Argentina. Accordingly, it was agreed that the mission of the Latin American Network for Surveillance of Resistance to Antimicrobial Drugs would be to obtain reliable, timely, and replicable microbiological data to be used to improve patient care and strengthen surveillance with the establishment of sustainable quality assurance programs. The efficiency of the monitoring activities in each country depends on the increase in the geographical coverage of surveillance activities; the increase in the number of laboratories participating in the network (Sentinel Centers); the increase in the number of isolations; the improvement in the results of the external performance evaluation; the availability and dissemination of local, national and regional information; and the results of supervisory visits. As a complement to the criteria established by the Expert Committee that met in Santiago, Chile, from 24 to 26 February 2003 to define standards for performance evaluation using the Kirby-Bauer antibiogram (PAHO document Recommendations of an Expert Committee: Performance Evaluation Standards for the Kirby-Bauer Antibiogram [Areas of Inhibition or Interpretation]), we propose defining standards for the use of automated systems for bacterial identification and antimicrobial susceptibility testing. The Expert Committee that met in Brasilia, Brazil, in October 2004, undertook this project, the product of which is this report.
<urn:uuid:766b630f-f1b6-44e5-acd9-662abd408efe>
{ "date": "2014-11-26T12:45:32", "dump": "CC-MAIN-2014-49", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931006855.76/warc/CC-MAIN-20141125155646-00228-ip-10-235-23-156.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9183843731880188, "score": 2.875, "token_count": 1210, "url": "http://www1.paho.org/English/AD/DPC/CD/amr-GuiaAutoEvalDesemp.htm" }
... the aesthetic excitement of science ... 'There are probably as many reasons to do science as there are scientists. Too often, the aesthetic excitement of science is sacrificed to its undoubted utilitarian value, a trend that seems to be intensifying. Still, many scientists remain who appreciate the aesthetics of the process from discovery to understanding, and for them a treat is in store. The elements of beautiful science are familiar: first the confrontation of the human mind with a natural phenomenon, then its investigation through observations and experiments, and finally, in the best case, the convincing demonstration of the validity of one of the theories through confirmation of its specific predictions. The process can take only a few years and involve only a few scientists or it can span centuries and involve many. The practical consequence may be revolutionary and change the course of history (...) or it may have little or no use. In either case, a full scientific story, especially one that has been unfolding over historic times, can be a lovely thing, like a classical symphony or a gothic cathedral.' David Botstein, in a "Perspective" article on contributions to the knowledge of the molecular biology of colour vision, 11 April 1986, page 142. Jos van Geffen -- created: 18 April 1995, last modified: 26 February 2000
<urn:uuid:d3b14258-5ccb-446c-993b-d0a31d3e4d74>
{ "date": "2017-05-24T08:09:55", "dump": "CC-MAIN-2017-22", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607806.46/warc/CC-MAIN-20170524074252-20170524094252-00637.warc.gz", "int_score": 3, "language": "en", "language_score": 0.91072016954422, "score": 2.96875, "token_count": 287, "url": "https://josvg.home.xs4all.nl/pgs/science.html" }
Battery Storage Revolution Could 'Sound the Death Knell for Fossil Fuels' If we want to accelerate the world's renewable energy transition, we'll have to modernize the electric grid and we'll need much better batteries. Just look at Germany, which generates so much clean energy on particularly windy and sunny days that electricity prices are often negative. Sure, this is good news for a German person's wallet, but as the New York Times noted, "Germany's power grid, like most others around the world, has not yet adapted to the increasing amounts of renewable energy being produced." The problem is that the electrical grid was designed for fossil fuel use, meaning it can struggle to manage all the renewable energy being added to the grid. For instance, California sometimes produces so much solar power that it has to pay neighboring Arizona to take the excess electricity that Californians aren't using to avoid overloading the power lines. Meanwhile, battery storage capacity is not yet advanced enough to take in the surplus generation. Thankfully, a sea change appears to be well underway. WIRED UK reported that 2018 will see energy storage for home use becoming more commonplace. Investors will also increasingly look towards renewable energy storage solutions rather than supply. "We will see a tipping point," Alasdair Cameron, renewable energy campaigner at Friends of the Earth, told WIRED. "Even IKEA has launched a renewable solar battery power storage for domestic use." Coupled with Tesla's Powerwall domestic battery, Cameron added, "storage is moving from the grid to the garage to the landing at home." Furthermore, WIRED pointed out, companies such as EDF Renewable Energy, electric services company E.ON and Dyson are investing in storage development. Energy giants ExxonMobil, Shell and Total are also coming on board with renewable battery systems. Other examples of the battery storage revolution include South Australia, which recently switched on the world's largest battery storage farm. Tesla CEO Elon Musk famously built the massive facility in less than 100 days to help solve the state's energy woes. Musk's battery already proved itself late last month after responding to power outages within milliseconds. In November 2016, Ta'u, an island in American Samoa, turned its nose up at fossil fuels and is now almost 100 percent powered with solar panels and batteries thanks to technology from Tesla and SolarCity. And this past October, Scotland switched on the Hywind Scotland, the world's first floating wind farm, which is linked with Statoil's Batwind, a lithium battery that can store one megawatt-hour of energy to help mitigate intermittency and optimize output. All that said, 2018 could be a major year for batteries. As WIRED reported: "According to Hugh McNeal of the wind industry's trade body RenewableUK and solar expert Simon Virley of KPMG, this storage revolution is capable of transforming the industry. In 2018, it will become even more competitive and reliable—and will sound the death knell for fossil fuels in the process."
<urn:uuid:9dbf8cae-4528-49e7-b75c-bd1f622978f8>
{ "date": "2019-07-21T10:48:34", "dump": "CC-MAIN-2019-30", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526948.55/warc/CC-MAIN-20190721102738-20190721124738-00096.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9460715651512146, "score": 2.8125, "token_count": 923, "url": "https://www.ecowatch.com/battery-storage-renewable-energy-2526629658.html" }
Twenty percent prevalence of West Nile virus antibody was found in free-ranging medium-sized Wisconsin mammals. No significant differences were noted in antibody prevalence with regard to sex, age, month of collection, or species. Our results suggest a similar route of infection in these mammals. Additional publication details: West Nile virus antibody prevalence in wild mammals, southern Wisconsin
<urn:uuid:86c314b1-d799-4cff-bb4e-163cbc727de6>
{ "date": "2015-02-02T01:49:40", "dump": "CC-MAIN-2015-06", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122220909.62/warc/CC-MAIN-20150124175700-00177-ip-10-180-212-252.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9160290956497192, "score": 2.546875, "token_count": 70, "url": "http://pubs.er.usgs.gov/publication/1004023" }
Helicopter pilots are primarily responsible for ensuring the smooth and safe operation of helicopters before, during, and after flight missions. They work in a variety of areas, such as health and safety (including in rescue work, forest fire fighting, and health transport operations), law enforcement, news gathering, tourism, and commercial transport. Helicopter pilots are ultimately responsible for ensuring that their vehicles are mission-ready, which involves examining maintenance and regulatory compliance records and checking that their helicopters meet industry, federal, and state standards. Pilots must seek out repairs and inspections as needed. In flight, helicopter pilots guide the vehicle and communicate with air traffic personnel, following all guidelines for safe travel in the skies. Additionally, they ensure that all passengers and cargo are properly secured prior to takeoff and at all times during flight. In cases of inclement weather, helicopter pilots must monitor conditions and determine whether it is safe to fly. Helicopter pilots work in several environments, principally in the cockpit of their vehicle and in the hangar or landing pad preparing the vehicle for flight or maintenance. They may also meet with clients, co-workers, and regulatory officials to give information about their own and their vehicles' capabilities and status. Helicopter pilots work a variety of schedules; they may work rotating shifts, by contract, or on a fixed schedule. Helicopter pilots are required to hold a substantial number of certifications and specific relevant experience. They must have an appropriate helicopter pilot's license, as well as any industry-specific certifications (e.g., a medical certificate to transport patients or organs for transplant). They must have a minimum number of flight hours, including aided and unaided time in the air. In some positions, they may be required to have night flight experience. Helicopter Pilot Tasks Inspect and conduct pre-flight tests and fly helicopters to designated locations. Communicate with ground control, home office and other aircraft. Monitor navigational aids and flight instrumentation. Register flight plans, load helicopters, calculate weight and monitor and adjust fuel levels.
<urn:uuid:4e022154-ef47-4a3c-ba46-ee08a82fba85>
{ "date": "2017-03-30T14:42:01", "dump": "CC-MAIN-2017-13", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218194601.22/warc/CC-MAIN-20170322212954-00546-ip-10-233-31-227.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9519068002700806, "score": 2.90625, "token_count": 418, "url": "http://www.payscale.com/research/US/Job=Helicopter_Pilot/Salary" }
Teenage alcohol use is a serious and ongoing problem in the United States. Alcohol is the most frequently used substance among teens, and alcohol intoxication can result in injury, health issues, and even death. When a teen under the age of 21 imbibes alcohol, this is known as underage drinking. Teen alcohol statistics show that underage drinking is very common. Facts About Underage Drinking Here are a few teenage drinking facts that reveal how many teens drink, and why adolescent alcohol consumption is so dangerous. - 61 percent of high school students have consumed alcohol (more than just a few sips) by the end of high school, and 23 percent of high school students have consumed alcohol by 8th grade. - 46 percent of 12th graders have been drunk at least once in their life, and 9 percent of 8th graders have been drunk at least once in their life. - Alcohol plays a role in more than 30 percent of teenage deaths involving accidents, homicide or suicide. In addition, teen alcohol abuse is linked to 189,000 emergency room visits by adolescents under age 21. - More than 1.6 million young people between the ages of 12 and 20 reported driving under the influence of alcohol in 2014. - In a national survey on teenage binge drinking, one in eight college students (12 percent) reported having 10 or more drinks in a row within a two-week period. Moreover, one in 25 reported having 15 or more drinks in a row at least once in those two weeks. - Statistics on underage drinking show that youth who drink are more likely to carry out or be the victim of a physical or sexual assault after drinking than others their age who do not drink. Additionally, there are other dangerous consequences of underage drinking, which we will look at later. Risk Factors for Teenage Alcohol Abuse First, let's look at why and how teens start abusing alcohol. There are environmental, emotional, and behavioral factors that contribute to underage drinking. Risk factors for teenage drinking problems include: - Not enough parental supervision - Spending time with peers who abuse alcohol - Availability of alcohol in the home, or through family members or friends - Experiencing higher levels of impulsiveness, novelty seeking, or aggressive behavior - Having conduct or behavior problems - Difficulty looking at the possible negative consequences of one's actions. Even without those risk factors, many teens use alcohol. Here are some of the reasons why kids drink: - Curiosity about what it's like to be drunk - Feeling stressed out and wanting to relax - Peer pressure - Looking for a way to self-medicate the pain of mental health conditions or emotional problems - Thinking that drinking is cool and sophisticated - Wanting to be more independent. Warning Signs of Teenage Alcoholism Signs of underage drinking or alcohol abuse include the following: - Increased anger and irritability - Academic and/or behavioral problems at school - Acting rebellious and defiant - Ignoring responsibilities, such as school, sports, or clubs - Finding a new group of friends - Low energy - Decreased interest in activities they used to enjoy - Not seeming to care about their appearance - Problems concentrating and/or remembering - Slurred speech - Smell of alcohol on their breath - Coordination problems. When a teen connects drinking with their emotional state, this can be a sign of alcoholism. In other words, they may be using alcohol to suppress or self-medicate feelings of anger, sadness, anxiety, or depression.
If you are concerned that your child is abusing alcohol, there is professional help available. Don’t hesitate to reach out. The Dangers and Consequences of Underage Drinking According to the Centers for Disease Control and Prevention, there are many negative consequences to alcohol abuse in adolescence. Teens who drink are more likely to experience the following issues: - Problems at school, such as more absences and more failing grades - Social conflicts, including fighting and isolation from activities and peers - Legal problems, such as being arrested for driving while under the influence, or attacking someone while intoxicated - Unwanted, unplanned, and/or unprotected sexual activity - Disruption of normal growth, brain development, and sexual development - Physical and sexual assault - Suicide or homicide - Alcohol-related car crashes and other unintentional injuries, such as burns, falls, and drowning - Issues with memory and cognition - Abusing other drugs - Death from alcohol poisoning. Furthermore, teen alcohol abuse can also increase a teenager’s chances of becoming physically dependent on alcohol as an adult. According to a report by the Substance Abuse and Mental Health Services Administration, 74 percent of adults participating in a substance abuse treatment program started using alcohol or drugs before the age of 17. The Physical Effects of Teen Drinking Underage alcohol intoxication can also cause long-term physical issues. For one, teen alcohol abuse can cause delays in sexual development. Moreover, frequent drinking can cause weight gain, which may eventually put teens at risk for developing high blood pressure and diabetes. Teenagers who keep drinking into adulthood have a higher risk of developing liver problems. The liver helps metabolize nutrients and rid the system of harmful toxins. The liver also metabolizes alcohol. Therefore, excessive drinking can put a tremendous strain on this vital organ. According to the University of Maryland Medical Center, long-term drinkers are more likely to get certain types of cancer. Specifically, alcohol consumption is linked to higher risks of cancer of the head, neck, stomach, and breasts. Alcohol can also harm the pancreas, causing a severely painful condition called pancreatitis. Alcohol and the Teenage Brain Underage drinking consequences include damage to the brain. A teenager’s brain is at a vulnerable stage of development. Therefore, alcohol interferes with this development, causing permanent changes in the ability to learn and remember. Research conducted by neuropsychologists at Duke University indicates that teenage drinking may damage the hippocampus, the part of the brain that enables us to learn and remember. Studies conducted on adolescent rats showed that younger drinkers may be even more likely to suffer neurocognitive deficits than older adults who drink. Moreover, this is especially true if they drink to the point of blacking out. The Duke research team also found that an alarming number of college-age drinkers experience blackouts during heavy episodes of alcohol intoxication. In an electronic survey of almost 800 college students, the researchers found that just over half of the students who responded reported that they had blacked out while drinking. Protecting Kids Against Teen Alcoholism Parents’ actions can significantly impact a teenager’s attitude toward drinking and their willingness to try alcohol while underage. Here are a few steps parents can take to prevent teen alcohol abuse. 
- Communicate with your teen about the dangers of drinking. - If you drink alcohol, always do so responsibly. - Do not give your teen access to alcohol. - Make a point of getting to know your teen’s friends. - Supervise parties to make sure there is no alcohol available. - Stay in touch with what’s happening in your teen’s life. - Help your teen find hobbies and activities that give them a sense of fulfillment and accomplishment. Research shows that children whose parents are actively involved in their lives are less likely to drink alcohol. Parents can make a real and powerful difference.
<urn:uuid:7b6ba012-78e8-475d-b421-a2bcd58c4957>
{ "date": "2017-12-11T14:51:45", "dump": "CC-MAIN-2017-51", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948513611.22/warc/CC-MAIN-20171211144705-20171211164705-00656.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9408753514289856, "score": 3.359375, "token_count": 1548, "url": "https://www.newportacademy.com/resources/substance-abuse/underage-drinking-dangerous-teens/" }
Over two and a half years after the Climategate scandal fundamentally undermined public confidence in the theory of manmade climate change, questions are continuing to be raised regarding the means used for collecting data for evaluating global warming, and the process of peer review that evaluates the climate studies. The latest challenge confronting advocates of the theory of global warming is a study coauthored by Anthony Watts, a former television meteorologist, president of IntelliWeather, and a "convert" to the ranks of the skeptics of manmade global warming. In 2007, Watts founded SurfaceStations.org, a site which evaluates the weather stations gathering data used to model changes in global temperatures, because of concerns regarding the accuracy of the data. Why would the location of the stations matter? Because the growth and spread of the population of the United States could cause localized changes in temperature without having a larger — even global— effect. For example, measurements from a location that was once in the middle of a field might now be surrounded by blacktop; in such a situation, the world has not necessarily gotten warmer but the area around the monitoring equipment certainly has. The existence of such poorly-placed monitoring equipment is far from hypothetical: an article for FoxNews.com cited several examples: That problem of poorly sited stations thanks to “encroaching urbanity” — locations near asphalt, air conditioning and airports — is well established. A sensor in Marysville, Calif., sits in a parking lot at a fire station next to an air conditioner exhaust and a cell tower. One in Redding, Calif., is housed in a box that also contains a halogen light bulb, which could emit warmth directly onto the gauge. The study conducted by Watts and his colleagues (An area and distance weighted analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends) draws on the SurfaceStation data to reach several significant conclusions, including the following points: • The analysis demonstrates clearly that siting quality matters. Well sited stations consistently show a significantly cooler trend than poorly sited stations, no matter which class of station is used for a baseline, and also when using no baseline at all. … • It is demonstrated that stations with poor microsite (Class 3, 4, 5) ratings have significantly higher warming trends than well sited stations (Class 1, 2): This is true for, all nine geographical areas of all five data samples. The odds of this result having occurred randomly are quite small. ... • Not only does the NOAA USCHNv2 adjustment process fail to adjust poorly sited stations downward to match the well sited stations, but actually adjusts the well sited stations upwards to match the poorly sited stations. • In addition to this, it is demonstrated that urban sites warm more rapidly than semi-urban sites, which in turn warm more rapidly than rural sites. Since a disproportionate percentage of stations are urban (10%) and semi-urban (25%) when compared with the actual topography of the U.S., this further exaggerates Tmean trends. • NOAA adjustments procedure fails to address these issues. Instead, poorly sited station trends are adjusted sharply upward (not downward), and well sited stations are adjusted upward to match the already-adjusted poor stations. Well sited rural stations show a warming nearly three times greater after NOAA adjustment is applied. 
In other words, the study determined that not only are many monitoring stations poorly placed, the erroneous data generated by the poorly-placed urban sites is actually being used to adjust the data gathered at better-situated rural sites. What is the result? “The new analysis demonstrates that reported 1979-2008 U.S. temperature trends are spuriously doubled, with 92% of that over-estimation resulting from erroneous NOAA adjustments of well-sited stations upward.” Undoubtedly the new study will draw criticism from advocates of the theory of manmade climate change because it calls into question the reliability of the data upon which the theory has purportedly been based. Consider, for example, one of the critics of “climate change deniers”: Richard Muller, a professor of physics at the University of California at Berkeley who was himself quite recently among those “deniers.” According to a recent opinion article which he wrote for the New York Times (“The Conversion of a Climate Change Skeptic”), Prof. Muller cites the increase in surface temperatures as the reason for his “conversion”: Last year, following an intensive research effort involving a dozen scientists, I concluded that global warming was real and that the prior estimates of the rate of warming were correct. I’m now going a step further: Humans are almost entirely the cause. My total turnaround, in such a short time, is the result of careful and objective analysis by the Berkeley Earth Surface Temperature project, which I founded with my daughter Elizabeth. Our results show that the average temperature of the earth’s land has risen by two and a half degrees Fahrenheit over the past 250 years, including an increase of one and a half degrees over the most recent 50 years. Moreover, it appears likely that essentially all of this increase results from the human emission of greenhouse gases. In a sense, there is a point of agreement between the two studies: there has been an increase in temperature at many of the monitoring stations, and that increase has been caused by humans — but there is reason to believe that the temperature change is extremely localized, and that the poor placement of monitoring equipment has proven to be a very poor guide to worldwide trends — a doubling of the temperature change, if Watts, et. al., are correct. The study which Muller cites as the cause for his “conversion” is drawing criticism even from others who would normally be critical of “deniers.” For example, Judith Curry, a climatologist and chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology, is quite critical of Muller’s findings: Judged by standards set by the IPCC and the best of recent observation-based attribution analyses, in my opinion the Rhode, Muller et al. attribution analysis falls way short. … Looking at regional variations provides substantial insights into the attribution. No one that I listen to questions that adding CO2 to the atmosphere will warm the earth’s surface, all other things being equal. The issue is whether anthropogenic activities or natural variability is dominating the climate variability. If the climate shifts hypothesis is correct (this is where I am placing my money), then this is a very difficult thing to untangle, and we will go through periods of rapid warming that are followed by a stagnant or even cooling period, and there are multiple time scales involved for both the external forcing and natural internal variability that conspire to produce unpredictable shifts. 
The SurfaceStations data raises fundamental questions about the existence of much of the purported warming — let alone the source of any global warming. The fundamental challenges to the science behind global warming have arisen since the Climategate revelations — certainly the conclusions of the Intergovernmental Panel on Climate Change (IPCC) have been fundamentally undermined as the IPCC’s methodology has been subjected to outside scrutiny. The time seems near when the “global warming” of the past 20 years will go the way of the “new ice age” of the 1970s. Photo: a weather monitoring station in an open field at the Bloom Dairy Farm near coldwater Mich.: AP Images
<urn:uuid:d51e4df1-720e-4769-a89a-db63004f8d54>
{ "date": "2016-07-30T11:12:22", "dump": "CC-MAIN-2016-30", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257836397.31/warc/CC-MAIN-20160723071036-00307-ip-10-185-27-174.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9455583095550537, "score": 2.75, "token_count": 1541, "url": "http://www.thenewamerican.com/tech/environment/item/12288-study-shows-global-warming-data-skewed-by-bad-monitoring/12288-study-shows-global-warming-data-skewed-by-bad-monitoring?start=3" }
Name: Donnell Hicks Title: Hunger & Homelessness in America The richest country in the world is battling a major epidemic. It is not HIV/AIDS, high unemployment, or human rights. It is two diseases called hunger and homelessness. America is facing a dark issue that must be resolved by coming together. No matter if you are a Republican or a Democrat, rich or poor, we must come together to make a change to end hunger and homelessness for the adults and children who are currently living in pure destitution. The rise in hunger in the United States of America is due in part to low wages, children being raised in single-parent households, and the socioeconomic gap. Hunger affects all urban communities in America. The reality is that African-American families are battling the idea of putting food in the household, and children across the nation who are poor are experiencing the hunger epidemic. A study in 2011 shows that 46.2 million people live in poverty; 26.5 million people between ages 18-64 years old are living in poverty; 16.1 million children under eighteen are living in poverty; and another 3.6 million seniors 65 years and older are living in poverty. Many people will suggest that poverty plays a major role in the wave of hunger. According to the U.S. Department of Agriculture, another main reason for hunger in America is the shortage of food supplies and the rising cost of food. The percentage of people receiving food assistance in Florida is 16.2%, while the national average in the United States is 14.7%. However, 57.2% of household participants receiving food assistance are enrolled in at least one of three major federal food assistance programs. These include the Supplemental Nutrition Assistance Program (formerly known as food stamps), the National School Lunch Program, and the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC). On the other hand, nearly 14 million children are served by Feeding America, and over 3 million are ages five and under. (Source: www.feedingamerica.org) Amongst African-Americans living in urban communities, one of the most difficult tasks that African-American families face every day is survival - for the apparent reason that 30% of children live below the poverty line. A study dating back to 1991 shows that 46% of black children were chronically hungry compared to 16% of white children. There is no gleam of hope when hunger takes part in the death of infants. The U.S. is ranked 23rd in infant mortality. Nonetheless, black infants are dying at nearly twice the rate of white infants. (Source: www.rollingout.com) Politicians who serve in local, state, and federal governments don't take people who are homeless seriously. Politicians either ignore the need to aid the homeless, or aid to the homeless isn't a number one factor on their "to-do list." Little do they know the poverty rate for black children is 32.8% and 32.3% for Hispanic children, compared to 17% for whites and 3% for Asian children. (www.apa.org) Homelessness in America is more prevalent in urban areas, with 71% of the homeless living in central cities, 21% in suburbs, and 9% in rural areas. Moreover, 1.6 million people live in emergency shelters or transitional housing. Homelessness in America remains an issue of deep concern as we advance in the 21st century. We must come together and do something about it.
<urn:uuid:7f072013-db22-4599-a777-2e7400d1382f>
{ "date": "2019-02-18T03:59:58", "dump": "CC-MAIN-2019-09", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247484648.28/warc/CC-MAIN-20190218033722-20190218055722-00136.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9529153108596802, "score": 2.734375, "token_count": 730, "url": "https://www.dreampathways.org/dpm-archives/hunger-homelessness-in-america" }
On a recent collecting trip, I went over to Chalk Bluffs Natural Area in the Mississippi Alluvial Plain of northeastern Arkansas. My quarry was a population of Cylindera cursitans (ant-like tiger beetle) that has been reported from the site—one of the only known sites for the species in Arkansas. While I was there, I noticed some movement on the trunk of a tree, and a closer look revealed that what appeared to be a piece of bark was actually a beetle—a longhorned beetle to be precise. The elevated gibbosities of the pronotum and white, transverse fasciae of the elytra immediately identify it as Acanthoderes quadrigibba, a not uncommon species in the eastern U.S., but one that I still get excited about whenever I encounter it. Judging by the number and diversity of plant genera that have been recorded as larval hosts for this species—Linsley and Chemsak (1984) recorded Acer, Betula, Carya, Castanea, Celtis, Cercis, Fagus, Ficus, Quercus, Salix, Tilia, and Ulmus—you could be forgiven for thinking that this is one of the most common and abundant species of longhorned beetle in North America. I have not found this to be the case, and I don’t think it is because I’m simply missing it due to its cryptic appearance. Longhorned beetles in the tribe Acanthoderini are, like many species in the family, quite attracted to lights at night, and I’ve done plenty of lighting over the years. What I have noticed is that nearly all of my encounters with this species have been in the Mississippi Alluvial Plain—an area rich with wet, bottomland forests that contrast markedly from the dry to dry-mesic upland forests that cover much of the southern two-thirds of Missouri. I’ve also reared the species a few times from Salix, one of the host genera recorded by Linsley and Chemsak (1984). In both cases, the wood was not freshly dead (as is commonly preferred by many other longhorned beetles), but a little past its prime and starting to get somewhat moist and punky. In the case of this beetle, I suspect that the nature of the host wood may be more important than the species, the preference being for longer dead wood in moister environments. Of course, observations by another collector in another state may completely obliterate my idea, but for now it sounds good. A closeup photograph of the elytral markings of this beetle was the subject of ID Challenge #9, to which a record 18 participants responded (thanks to all who played!). Troy Bartlett takes the win with 12 points (and attention to detail), while Dennis Haines, Max Barclay, Mr. Phidippus, and Josh Basham all score double-digit points. Troy’s win moves him into the top spot in the overall standings of the current BitB Challenge Session with 23 pts, but Dave is breathing down his neck with a deficit of just a single point. Tim Eisele and Max Barclay have also moved to within easy striking distance with 19 and 18 points, respectively, and several others could make a surprise move if the leaders falter. I think I’ll have one more challenge in the current session before deciding the overall winner—look for it in the near future. Linsley, E. G. and J. A. Chemsak. 1984. The Cerambycidae of North America, Part VII, No. 1: Taxonomy and classification of the subfamily Lamiinae, tribes Parmenini through Acanthoderini. University of California Publications in Entomology 102:1–258. Copyright © Ted C. MacRae 2011
<urn:uuid:2a71ac84-2182-497f-871b-735404126034>
{ "date": "2019-05-19T20:40:30", "dump": "CC-MAIN-2019-22", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255165.2/warc/CC-MAIN-20190519201521-20190519223521-00136.warc.gz", "int_score": 3, "language": "en", "language_score": 0.957911491394043, "score": 3.21875, "token_count": 806, "url": "https://beetlesinthebush.wordpress.com/2011/07/07/four-humped-longhorned-beetle/" }
What does it take to transform a microbe normally found in the intestinal tract into a cancer-killing machine? These and other questions about the applicability of synthetic biology abound. For cancer research in particular, the field of syn bio could both reveal new insights into the disease and potentially lead to new treatments. Despite exciting developments in synthetic biology, however, whether the NIH will found and dedicate a specific new program to this field remains to be seen. NCI and the National Institute of General Medical Sciences held a workshop in April 2010 with an eye toward understanding the opportunities for biomedical research in synthetic biology, J. Jerry Li, M.D., Ph.D., Division of Cancer Biology program director, told GEN. "We invited program managers from other agencies including the National Science Foundation and the Department of Energy. As of 2010, those agencies had dedicated programs to support synthetic biology but NIH as a whole did not," he remarked. "We asked ourselves whether it was time to put together a synthetic biology centric program." NIH held the workshop because recent advances in synthetic biology had created a receptive, anticipatory climate for this new field, Dr. Li added. J. Craig Venter, Ph.D., published work on the first synthetic bacterial genome, and the FDA had just approved artemisinin, an antimalaria drug produced using engineered bacteria and yeast. Dr. Li explained that NIH had used the investigator-initiated, nonsolicited R01 programs among its various institutes as the main funding mechanism to support synthetic biology research efforts and continues to "believe it's the right avenue to get these people supported."
<urn:uuid:a01ba9bc-fe73-46aa-b118-7d960c95a83e>
{ "date": "2015-03-27T13:00:38", "dump": "CC-MAIN-2015-14", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131296383.42/warc/CC-MAIN-20150323172136-00126-ip-10-168-14-71.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9559581875801086, "score": 2.515625, "token_count": 341, "url": "http://www.genengnews.com/insight-and-intelligenceand153/synthetic-biology-delivers-cool-tools-but-new-therapeutics-are-a-ways-off/77899473/?kwrd=Synthetic%20Biology" }
Your diet can make you smarter, and you can bet on it: there is a link between diet and cognition. It has long been known that eating well is vital for our physical health, but it is now clear that a healthy diet is crucial for mental health as well. A good diet keeps your brain cells healthy, keeps your gray matter happy, and makes you smarter. New research on the influence of diet on the brain reveals that what we eat has a great effect on everyday brain skills and cognition. So be careful about what you choose to eat. In fact, our brain ages with us, but adding smart, brain-healthy foods to our menu can improve memory as well as other brain functions that tend to diminish as we age. But remember, diet is not just a means of becoming smarter; it can also help prevent many brain disorders, including Alzheimer's disease and dementia, which usually appear as we get older. Eating well doesn't mean eating more; quantity doesn't matter as much as the quality of what you eat and how suitable it is for your health and the health of your brain. Following is a list of 10 easily available foods that boost brainpower, improve cognition, enhance learning, and make you smarter. So add them to your menu. 1. Fishes and Seafood Seafood, especially oily fishes like wild salmon, tuna, and sardines, is rich in omega-3 fats, which include EPA (eicosapentaenoic acid) and DHA (docosahexaenoic acid); both are essential fatty acids, meaning they cannot be produced in the body. DHA is one of the main fatty acids required for healthy development of the brain and eyes: it makes up about 40% of the fatty acids in the cell membranes of neurons and is needed for transmitting nerve impulses between brain cells. Research shows a link between low DHA levels and the risk of developing dementia (memory loss) and Alzheimer's disease (a neurodegenerative disorder), and those who consume fish at least three times a week have higher levels of DHA in their bodies, which reduces the risk of developing Alzheimer's disease in later life by 39%. 2. Leafy Green and Cruciferous Veggies Green leafy and cruciferous vegetables, including cauliflower, broccoli, cabbage, kale, and Brussels sprouts, are packed with antioxidants like vitamin A, vitamin C, and carotenoids (plant compounds), which are powerful brain protectors that prevent brain-cell damage from free radicals. These vegetables are also rich in vitamin K, which is recognized to improve cognitive function and brainpower. 3. Avocado, Oils, Nuts, and Seeds Avocado is as good as green leafy veggies, since it is, along with vitamins A and C, a good source of vitamin E, another powerful antioxidant; those who consume a moderate quantity of vitamin E-rich foods are 67% less likely to develop dementia and Alzheimer's disease. Another benefit of avocado is its monounsaturated fat, which improves blood circulation throughout the body, including the brain. Improved circulation leads to improved cognition and a healthier brain. 4. Whole grains Comprising only 2% of body weight, the brain consumes 25% of blood glucose to get the energy required for its steady work. To meet the brain's energy requirement, whole grains are a better source than refined sugar and carbohydrates, because refined sugar is digested and used quickly and does not meet the energy requirement for the whole day. Whole grains, including oatmeal, brown pasta, and wheat bran, are rich in glucose but have a low GI (Glycemic Index), releasing glucose gradually into the blood and giving you a steady supply. This keeps you and your brain focused and mentally alert throughout the day. 5.
Dark chocolate Dark chocolate contains flavonoids, powerful antioxidants, along with natural brain stimulants such as caffeine. These ingredients enable dark chocolate to protect the brain from free radicals, while the caffeine improves focus and concentration and also enhances the production of endorphins (naturally occurring painkillers). A study of 16 healthy adults who consumed 150 mg of chocolate daily for five days showed improved circulation to the brain, which leads to improved brain function (cognition) [source]. But keep in mind that "more can be dangerous," as chocolate can cause many health problems, including acne and an increased occurrence of migraine headaches (due to tyramine), and can promote weight gain and obesity. 6. Berries Berries are considered a powerhouse of antioxidants. Berries, including elderberries, are loaded with quercetin, a flavonoid antioxidant vital to your brain's health that helps build healthy connections between neurons (brain cells). Blueberries, on the other hand, have a proven effect on improving memory, delaying the loss of short-term memory, and reducing the risk and effects of age-related (senile) dementia and Alzheimer's disease. Studies on animals show that consumption of berries considerably enhanced the learning capacity and motor skills of aging rats, making them mentally equivalent to much younger rats [source]. 7. Nuts and seeds Nuts and seeds (including walnuts, Brazil nuts, hazelnuts, peanuts, almonds, cashews, sunflower seeds, flax seeds, etc.), along with green leafy vegetables, are another good source of vitamin E, which is a proven brain booster and brain protector as well. Vitamin E has been associated with strong protective and promotive effects on the brain; thus, a good intake of these vitamin E-loaded foods is crucial and helps you prevent cognitive decline as you get older. 8. Vegetable Juice Vegetables are reservoirs of essential vitamins, minerals, and antioxidants, true gold mines for the health of your brain. Although you may consume vegetables on a daily basis in cooked form, too much cooking and overheating may destroy much of their nutritional worth. Vegetable juice is a way to get all the essential nutrients from these precious foods, although you can eat the vegetables themselves if you like their taste. I personally like tomato juice, both for its better taste and its nutritional value (and of course for its bright red color). Tomatoes contain the powerful antioxidant lycopene, along with vitamins A and C, which could help prevent the development of Alzheimer's disease and dementia by disarming the effects of free radicals. Try to make juice on your own at home, as it is easy. Commercial juice products can have artificial flavor and added sugar, and may not be fresh. 9. Curry After all the meats and sweets, it is time to boost your brain with a traditional spicy curry. Curry is often used in Indian cuisine, and its active ingredients include turmeric and curcumin, which are potent antioxidants that also enhance the role of the body's own antioxidants. These substances also destroy Alzheimer's-causing proteins (amyloid plaques) in the brain, preventing the condition from occurring. Research on animals shows that curcumin enhances the birth of new brain cells (neurogenesis) by increasing the level of Brain-Derived Neurotrophic Factor (BDNF, a type of growth hormone) in the brain, and promotes connections among existing brain cells.
With these properties, curry and its ingredients could help improve memory and fight neurodegenerative processes in the brain. 10. Water Water is vital for health, as it is a universal solvent required for all kinds of metabolism to take place. About 75% of your brain is water, so a steady water intake is required for the proper functioning of the brain. A study on the effects of water on the brain conducted at Ohio University reveals that people who were well hydrated scored significantly better on cognitive tests compared with those who weren't drinking plenty of water. Another study, from the University of East London, evaluated the effects of water on brainpower and shows that drinking three cups of water at the beginning of a task can increase the brain's reaction time by 14%.
<urn:uuid:e6b37c79-9fae-49ad-bb9c-ca94b9767bc9>
{ "date": "2019-10-15T01:53:46", "dump": "CC-MAIN-2019-43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986655735.13/warc/CC-MAIN-20191015005905-20191015033405-00336.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9454752206802368, "score": 2.984375, "token_count": 1630, "url": "http://www.crizmo.com/10-brain-healthy-foods-to-boost-your-brain-make-you-smarter.html" }
Revolights revolutionized bicycle lighting when it tossed aside the conventional single lamp and designed a system of LEDs that mount right onto the wheels. The result is one of the smartest inventions to hit the cycling industry in years—and it has the potential to drastically improve the safety of cyclists around the world. But are the function and design practical enough to have a lasting impact on cyclists of all types? Let's take a look at the good and bad to see if this creative new design is right for you. How It Works Two plastic arcs attach to each side of a bicycle rim. Each plastic arc contains 12 LED lights and is connected to a battery that connects around the hub. A magnet that attaches to the frame of the bicycle operates a timing mechanism that keeps the lights illuminated on the front portion of the wheel only (and the reverse on the rear wheel). This creates an arc of light from the wheel that shines on the road to the front and rear while also increasing side visibility, designed to prevent one of the more common causes of car-bicycle related accidents. 1. The light these 24 LEDs create on each wheel was much brighter than expected. Because the light source is so close to the road, the LEDs don't need to be as bright to create a sufficient amount of light to see the road in front of you, which cuts cost considerably. Compared to the Lupine Betty R 9, which costs $1,095, these lights are a steal at $200. 2. The side visibility is second to none. It's nearly impossible for a car not to see you when these are attached to your wheels. There is a true 360 degrees of visibility that provides a level of safety that other lights can't match. 3. The Tron effect of the light arcs looks pretty cool. 4. If you're cramped for training time, especially during the winter when daylight is shortened, these lights will allow you to train safely for longer amounts of time. 6. Once the lights are on, they are really fun. Not only do they attract attention from motorists (which is a good thing), they change the cycling experience. I've had zero close calls using these lights for over a month riding at night, which gave me a feeling of safety that I hadn't felt before when riding in the dark.
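A simplified way to picture the timing mechanism described under "How It Works": treat each LED as sitting at a fixed angle on the rim, track the wheel's rotation (the reference the frame-mounted magnet provides), and light only the LEDs currently inside a forward-facing arc. The angles and arc width below are assumptions for illustration, not Revolights specifications.

```python
# Toy model of wheel-synchronized lighting: only LEDs currently in the
# forward-facing arc are switched on. Arc width and LED count per ring are
# illustrative assumptions, not product specifications.

NUM_LEDS = 12          # LEDs on one ring
ARC_CENTER_DEG = 0.0   # forward direction for the front wheel
ARC_WIDTH_DEG = 120.0  # assumed width of the lit arc

def lit_leds(wheel_angle_deg):
    """Indices of LEDs that should be on at the given wheel rotation."""
    lit = []
    for i in range(NUM_LEDS):
        led_angle = (wheel_angle_deg + i * 360.0 / NUM_LEDS) % 360.0
        offset = abs(led_angle - ARC_CENTER_DEG)
        offset = min(offset, 360.0 - offset)  # angular distance from forward
        if offset <= ARC_WIDTH_DEG / 2:
            lit.append(i)
    return lit

# As the wheel rotates, different LEDs take over, but the lit arc stays forward.
for angle in (0, 90, 180, 270):
    print(f"wheel at {angle:3d} deg -> LEDs on: {lit_leds(angle)}")
```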
<urn:uuid:8b2073f5-c93a-435e-8a97-4bbc4713d00f>
{ "date": "2016-10-22T07:27:33", "dump": "CC-MAIN-2016-44", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718840.18/warc/CC-MAIN-20161020183838-00551-ip-10-171-6-4.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9638610482215881, "score": 2.578125, "token_count": 480, "url": "http://www.active.com/cycling/articles/revolights-the-future-of-bicycle-lights" }
Robotics Industry Insights - This industry insights is filed under: Clean Machines: Clean Room Robotics by Bennett Brumson, Contributing Editor Robotic Industries Association Posted 08/17/2006 With the miniaturization of electronic components comes challenges for people to assemble their tiny constituent parts. Humans have difficulty seeing and manipulating the impossibly small elements that make up electronic devices, so they are a good candidate for robotics. However infiltration of dust that is shed by robots and other equipment could damage electronics while in production. The remedy to this: Manufacturers use a special class of robots, those that are designed for use in clean rooms. ‘‘Particle disbursement is the most important issue in clean room robotics. In order to prepare a robot for clean room applications, there is a basic change in its construction, as well as in the way it interfaces with its environment,’‘ says Peter Cavallo, Sales Manager for Robotics at DENSO Robotics, Long Beach California. Clean room is a subset of robotics for electronic assembly and other applications. To prevent the robot itself from becoming a source of contamination, special features are incorporated into them. Like any mechanism, robots shed particulates from belts, gasses from hoses and cables, and dust particles from the movement of end effectors. All of these can be a source for contamination for fragile electronics such as hard disk drives and semiconductors. ‘‘The robot, belts and other moving parts have to be made out of special non-dusting materials,’‘ says Wolfgang Jeutter. ‘‘Vacuum chucks that handle electronics have to be made out of materials like PEEK because some rubber materials gas out and leave a residue on parts.’‘ Jeutter is Vice President at Manz Automation, Inc., North Kingstown, Rhode Island. One method to ensure that particles are not generated by the robot in clean room environments is to apply a vacuum to the arm’s interior. Philip Baratti, Application Engineering Manager at Epson Robots, Carson, California, speaks about how clean rooms are protected from particle shedding through the use of a vacuum. ‘‘To keep particulate from being generated from the robot’s motion, a vacuum is drawn on internal cavities,’‘ Baratti says. ‘‘When under a vacuum, particulate will not be able to move down toward the end of arm tooling.’‘ Baratti adds that all the joints are sealed up tightly so the vacuum remains constant and no contaminates can leak out. Baratti points out that while a non-clean room robot might emit a relatively small amount of particulate, even that much could add up over the working life of a robot or its gripper. ‘‘As grippers open and close 10,000 times over its lifetime, they are going to create some particulate,’‘ Baratti says. Emitting particles onto hard disk drives while their parts are manufactured, assembled and tested could render them useless. Clean room robotics is the perfect means by which parts of hard disk drives are assembled and tested. Brian Carlisle, President of Precise Automation, LLC, Los Altos, California, speaks of the need for minimal particle emission as well as the need for high-speed movement in hard disk drive fabrication. ‘‘When making hard disk drives, the robot must be clean and move several meters at high speeds without emitting particles,’‘ Carlisle says. Carlisle states that another way that particle discharge is kept to a minimum is by the use of high efficiency particulate absorbing filters (HEPA) in clean rooms. 
‘‘Clean room work cells are designed to have HEPA filters so only cleaned air is blown in. The robot and grippers are positioned below parts so if any particles do come off them, those particles will not fall onto the parts.’‘ Carlisle went on to say that it is vital to keep any equipment that might emit particles underneath hard disk drives or semiconductors, or to cover them. ‘‘Cell layout is important,’‘ Carlisle concludes. Likewise, Douglas Dalgliesh, Vice President and General Manager with Yamaha Robotics, Edgemont, Pennsylvania, stresses: ‘‘In wafer fabrication, the robot arm is mounted below so contamination will fall away from the product being manufactured.’‘ Dalgliesh adds that there are some exceptions to robotic arm placement in wafer production. If the arm is mounted above the wafer during fabrication, the arm’s cleanliness standard becomes much higher. The robot and its arm are not the only elements in the work cell that need to be kept ultra-clean. It is necessary to have other surfaces be as inert and cleanable as possible. Klaus Papendorf of Stäubli Robotics addresses the need for surfaces to be kept sterile. Papendorf is Worldwide Account Manager for Semiconductors at Stäubli, Duncan, South Carolina. ‘‘Clean room robotics requires special surfaces, like polyurethane and three-lip seals,’‘ Papendorf says. Polyurethane, that tough and chemicals-resistant thermosetting plastic, is used to coat some clean room work cell surfaces. Getting parts to the place when and where they are needed in a clean room work cell without particle dispersion is a challenge that must be overcome when manufacturing sensitive microelectronics. If parts are blown-fed into the work cell, that could be a source of dust contamination. Epson’s Philip Baratti explains: ‘‘Integrators should avoid configuring clean room work cells to blow the screws in for final assembly on hard disk drives because there is a fair amount of particulate that also gets blown through. There are too many opportunities to get unwanted particulate in that process,’‘ Baratti warns. ‘‘Rather than blow feeding screws for assembly, integrators might use a vacuum chuck to pick up screws to put them in place before final assembly of the hard disk drive. That adds to cycle time,’‘ Baratti says. He concludes there are a lot of challenges in the tooling that goes into clean room environments, but these are challenges robotics readily meet. Mark Handelsman, Manager of Industry Marketing at FANUC Robotics America, Inc., Rochester Hills, Michigan, has an interesting view on how robots meet clean room requirements. ‘‘The solutions to preventing particles, liquids, and gases from getting into robots apply to preventing them from leaving the robot.’‘ In other words, how fragile electronics are kept uncontaminated in clean rooms is similar to how robots are protected from hazardous elements when they are working with explosive or radioactive materials. Electrostatic discharge (ESD) is another variable that clean room robotics must deal with if they are to be used efficiently in manufacturing electronic components. Electrostatic discharge is stationary electric charges that could cause electronics to short out during manufacturing, relegating the device to scrap. Steps must be taken to prevent static electricity from building up in a robotic work cell. Most often, this is accomplished by utilizing materials that are non-conductive or by grounding the robot and its peripheral equipment. 
‘‘Manz uses materials like carbon fiber, from which we make our gripper heads,’‘ says Wolfgang Jeutter. Manz provides nonconducting carbon grippers to manufacture flat panel displays (LCD’s) and solar panel components. Jeutter went on to say that carbon grippers are necessary because it has to interface with cassettes, conveyors, coating machines and test equipment, as well as hold the LCD substrates. Any metal-based equipment poses a danger of producing static electricity that could harm electrical components. Sealing or grounding the equipment is the primary means to prevent electrostatic discharge, says Brian Carlisle of Precise Automation. ‘‘Electrostatic discharge is bad for hard disk drives and other clean room electronics. Metal surfaces are painted to prevent electrostatic buildup, and components are often made from stainless steel which is grounded,’‘ Carlisle says. ‘‘The air going into the work cell is strictly controlled to reduce ions and humidity to prevent electrostatic build up.’‘ By the same token, Peter Cavallo of DENSO says that the ability to deal with electrostatic discharge is a requirement of clean room robots used in the electronics industry. ‘‘Electrostatic discharge is handled by modifying the robot such as increased grounding or different construction techniques, so the robot will not build up static electricity.’‘ Epson’s Phil Baratti maintains that protection from electrostatic discharge and clean room precautions are linked. ‘‘Electrostatic discharge requirements and clean room requirements for electronic manufacturing go hand-in-hand. There was a time when you could specify either that your robot will be EDS protected or for clean rooms, or both. Now, when you get a clean room robot, it is ESD certified as well,’‘ asserts Baratti. ‘‘It does not make sense to have a clean robot that has electrostatic ‘hot spots’ because they can damage electronic product as easily as a non-clean robot can.’‘ How Clean is Clean? There is a classification system to determine what level of cleanliness is required for a clean room work cell. Cleanliness classes 1 to 100,000 are a scale that measures the particle count per cubic foot of a clean room’s volume. A class 1 clean room has no more than one 0.5 micron particle per cubic foot of air, a class 100 has no more than 100 particles 0.5 micron in size per cubic foot of air. A potential pitfall in planning and integrating a clean room is using equipment that is too clean for a particular application. This problem is addressed by DENSO’s Peter Cavallo. ‘‘From a robotics standpoint, the question is: ‘What does my clean room actually need?’ It is always best to match your requirements to the equipment you are going to need,’‘ Cavallo advises. ‘‘If you are careful about what your requirements are, you can build an efficient yet less costly system.’‘ Cavallo went on to say that during the planning process, requirements for clean room work cells are often not yet fully defined and there is a tendency to require a more stringent cleanliness class than is actually needed. ‘‘It is helpful in controlling costs to have a better idea of what cleanliness level is required. That is more efficient for the cost of equipment and for providing for a larger range of potential product,’‘ Cavallo says. As the requirements become more stringent, the range of product that can be manufactured in that cell get smaller. ‘‘If you narrow the funnel, the view is much smaller,’‘ Cavallo concludes. 
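The classification arithmetic described above is simple enough to express in a few lines of code. The following sketch is illustrative only and is not drawn from the article or from any vendor's software; it uses the older FED-STD-209-style convention quoted here, in which the class number is itself the maximum count of 0.5 micron particles per cubic foot, and the sample counts are invented.

```python
def meets_class(particle_counts, cleanliness_class):
    """Return True if every sampled count of 0.5 micron particles per cubic foot
    is within the limit implied by the class number (Class 1 -> 1 particle,
    Class 100 -> 100 particles, Class 10,000 -> 10,000 particles, and so on)."""
    return all(count <= cleanliness_class for count in particle_counts)

# Counts measured at several points around a hypothetical robot work cell.
counts = [42, 87, 65, 90]
print(meets_class(counts, 100))   # True  -> acceptable for a Class 100 cell
print(meets_class(counts, 10))    # False -> too dirty for a Class 10 cell
```

As the article notes, the practical question is choosing the loosest class the product can tolerate, since each step cleaner narrows the range of usable equipment and raises the cost.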
Cavallo’s sentiment is echoed by Douglas Dalgliesh of Yamaha Robotics: ‘‘Many end-users need a clean room robot but have not developed what specifications are necessary. Many companies tell us that they could use a Class 10 or 100 robot, but have not nailed it down yet,’‘ Dalgliesh says. ‘‘They need to specify what cleanliness level they are looking for. Otherwise, they spend a lot of money on a Class 10 when they really need a Class 100 or Class 1,000 clean room.’‘ Typically, semiconductors need the cleanest environment, Class 1, while most other electronics such as LCD’s and hard disk drive assembly require Class 10 or Class 100. Bright and Clean Outlook As electronics get smaller yet ever more powerful, there will be an increased need for clean room robotics. One need only look at the evolution of smaller cells phones and MP-3 music players to see that. This trend was summed up by Peter Cavallo: ‘‘The size of components are getting down to extremely tiny levels, so any kind of contamination creates problems. As manufacturing environments get cleaner, people will not be able to work in them, so robots will be taking a more important role.’‘ Editor’s Note – For more information, you may contact any of the experts listed in this article or visit Robotics Online, Tech Papers. DENSO Robotics: Peter Cavallo, Sales Manager for Robot Dept., 310-513-7343, [email protected] Epson Robots: Philip Baratti, Applications Engineering Manager, 562-290-5931, [email protected] FANUC Robotics America, Inc. Mark Handelsman, Manger of Industry Marketing, 248-377-7000, [email protected] Manz Automation, Inc., Jeutter Wolfgang, Vice President, 401-295-2150, Precise Automation, LLC, Brian Carlisle, President, 530-888-6256, [email protected] Stäubli Robotics, Klaus Papendorf, Worldwide Key Account Manager Semiconductors 49-921-883-378, k.papendorf@@staubli.com Yamaha Robotics, Douglas Dalgliesh, Vice President and General Manager, 610-325-9940, [email protected] Originally published by RIA via www.robotics.org on 08/17/2006
<urn:uuid:c40adc81-584c-44eb-bcf1-7d7bfaba80bc>
{ "date": "2016-10-23T06:09:24", "dump": "CC-MAIN-2016-44", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719155.26/warc/CC-MAIN-20161020183839-00314-ip-10-171-6-4.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9185246825218201, "score": 2.9375, "token_count": 2847, "url": "http://www.robotics.org/content-detail.cfm/Industrial-Robotics-Featured-Articles/Clean-Machines-Clean-Room-Robotics/content_id/1046" }
"Enduring exile" - Address by Mr. António Guterres, United Nations High Commissioner for Refugees, to the High Commissioner's Dialogue on Protection Challenges: Protracted Refugee Situations, Geneva, 11 December 2008 HC Statements, 1 December 2008 Refugees are a symbol of our turbulent times. As each new conflict erupts, the world's newspapers and television screens are filled with pictures of masses on the move, fleeing from their own country with just the clothes on their back and the few possessions they are able to carry. Those who survive the journey depend on the willingness of neighbouring states to open their borders and the ability of humanitarian organizations to provide the new arrivals with food, shelter and other basic needs. But what happens once the exodus is over, the journalists have packed their bags and the world has turned its attention to the next crisis? In the vast majority of cases, the refugees are left behind, obliged to spend the best years of their lives in shabby camps and shanty settlements, exposed to all kind of dangers and with serious restrictions placed upon their rights and freedoms. The problem of protracted refugee situations has reached enormous proportions. According to UNHCR's most recent statistics, some six million people (excluding the special case of more than four million Palestinian refugees) have now been living in exile for five years or longer. More than 30 of these situations are to be found throughout the world, the vast majority of them in African and Asian countries which are struggling to meet the needs of their own citizens. Many of these refugees are effectively trapped in the camps and communities where they are accommodated. They cannot go home because their countries of origin – Afghanistan, Iraq, Myanmar, Somalia and Sudan for example – are at war or are affected by serious human rights violations. Only a tiny proportion have the chance of being resettled in Australia, Canada the USA or another developed country. And in most cases, the authorities in their countries where they have found refuge will not allow them to integrate with the local population or to become citizens of those states. During their long years in exile, these refugees are confronted with a very harsh and difficult life. In some cases they have no freedom of movement, do not have access to land and are forbidden from finding a job. As time passes, the international community loses interest in such situations. Funding dries up and essential services such as education and health care stagnate and then deteriorate. Packed into overcrowded settlements, deprived of an income and with little to occupy their time, these refugee populations are afflicted by all kinds of social ills, including prostitution, rape and violence. Unsurprisingly, and despite the restrictions placed upon them, many take the risk of moving to an urban area or trying to migrate to another country, putting themselves in the dangerous hands of human smugglers and traffickers. Refugee girls and boys suffer enormously in such circumstances. A growing proportion of the world's exiles have been born and raised in the artificial environment of a refugee camp, their parents unable to work and in many cases reliant upon the meagre rations provided by international aid agencies. And even if peace returns to their country of origin, these youngsters will go back to a 'homeland' which they have never seen and where they may not even speak the local language. 
I consider it intolerable that the human potential of so many people is being wasted during their time in exile and imperative that steps are taken to provide them with a solution to their plight. First, a concerted effort is required to halt the armed conflicts and human rights violations that force people to flee from their country and oblige them to live as refugees. In this respect, the UN has a particularly important role to play, whether by means of mediation, negotiation, the establishment of peacekeeping missions or the punishment of those who are found guilty of war crimes. Second, while funding may be scarce as a result of the financial crisis, every effort must be made to improve conditions for the world's long-term refugees, whether they are living in camps, rural or urban areas. Particular emphasis should be placed on providing exiled populations with livelihoods, education and training. With these resources at their disposal, refugees will be able to live a more productive and rewarding life and prepare for their future, wherever that might be. Finally, while we will not solve the world's protracted refugee situations by moving all of the people concerned to the more developed regions of the world, the richer nations should demonstrate their solidarity with countries that host large numbers of refugees by resettling a proportion of them, especially those whose security and welfare is at greatest risk. The refugee problem is a responsibility of the international community as a whole, and can only be effectively tackled by means of collective and cooperative action. We must ensure that the assistance provided to refugees also brings tangible benefits to local populations. We must encourage the international community to provide adequate support to those countries that are prepared to naturalize refugees and give them citizenship. And we must establish more effective approaches to the return and reintegration of refugees in their countries of origin, thereby enabling them to benefit from and contribute to the peacebuilding process.
<urn:uuid:e455dfe4-487f-4957-9996-8f37cb4fefba>
{ "date": "2014-03-10T14:58:59", "dump": "CC-MAIN-2014-10", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010851505/warc/CC-MAIN-20140305091411-00006-ip-10-183-142-35.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9652819037437439, "score": 2.890625, "token_count": 1051, "url": "http://unhcr.org/4948c81f2.html" }
Bonding and sealing have become indispensable techniques for joining and/or sealing two or more substrates to each other, not only in industry but also in everyday life. Bonding allows the production of laminated materials, facilitates mobility and communications, positively influences the handling of foods, supports health and hygiene and improves the quality of our lives. Moreover, many innovative products could not be manufactured without the use of bonding techniques. Sealing allows the infilling of gaps between two or more substrates and is a vital component in building and construction. Today it is an essential part of modern engineering, including the automotive and aerospace industries. More than 2,300,000 tonnes of adhesives and sealants are produced and used in Europe each year and this volume is on the increase. Adhesive manufacturers offer more than 250,000 different products for the most diverse applications – and these products are customised for virtually every purpose.
Materials and bonding technology
The world around us, and hence our lifestyle and the way we work, are changing at breakneck pace. Who would have thought 15 years ago that computers and mobile phones would now be a part of everyday life? And who could have dreamed of detachable adhesive strips that do not tear away the wallpaper when a poster is removed? The constantly increasing requirements being put upon new consumer products are the driving force for technological progress: nowadays, each new product that is developed must – as in the past – not only be better and more favourably priced than its predecessor but must also meet the requirement of sustainability. The consideration of environmental aspects means that the development of new products is becoming ever more demanding and that manufacturers must take into consideration more complex requirements for their new products. The increasing requirements put upon products have since time immemorial been the key driving force for the development of advanced and new materials. In addition to the classic metals, these materials include special alloys, plastics and also ceramics and glass. So-called composite materials, produced by combining different materials, have played a major role in this development. Reinforced concrete is a well-known composite material that has been around a long time. Newer composite materials include glass-fibre reinforced plastics and carbon fibre reinforced plastics, which are used, for example, for constructing speed boats and yachts and increasingly also for car, rail vehicle and aircraft manufacture. Another good example of the development and use of new materials is the wheel and tyre. Spoked wheels made of wood met the requirements of the ancient Egyptians. Today, the manufacture of tyres for modern means of transport can no longer be achieved even with natural rubber. The high speeds we now expect of a car can only be achieved using composites of different materials – and a car tyre is nothing more than that. The development of new materials with diverse applications poses additional challenges for processing technology. This is particularly so when different materials have to be joined to make components which retain their individual beneficial properties in the composite product. This raises the question: Which joining technique is able to join these different materials in such a way that their specific properties are retained? Traditional joining techniques have well-known disadvantages.
With thermal techniques such as welding, the specific properties of the material alter within the heat-affected zone. Mechanical techniques such as riveting or the use of screws, in their turn, only allow force to be transferred at discrete points; in addition, it is necessary to drill holes in the workpieces being joined, and this "damages" and hence weakens the materials. In contrast, bonding technology will assume an ever more important role in industry and the handicraft sector in the future, because it permits the joining of disparate materials and does not create stresses and weaknesses in the composite structures. This is critical in many modern products, where sustainability, speed of manufacture, recycling and long-term performance are the norm.
<urn:uuid:862a9c6c-8492-4c0b-ac78-5252ae63502d>
{ "date": "2014-04-20T23:26:37", "dump": "CC-MAIN-2014-15", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00459-ip-10-147-4-33.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9531924724578857, "score": 3.140625, "token_count": 776, "url": "http://www.feica.com/our-industry" }
23rd Prime Minister of Australia
Elections: 1983, 1984, 1987, 1990
In office: 11 March 1983 – 20 December 1991
Deputy: Lionel Bowen (1983–1990), Paul Keating (1990–1991), Brian Howe (1991)
Preceded by: Malcolm Fraser
Succeeded by: Paul Keating

The Hawke Government refers to the federal Executive Government of Australia led by Prime Minister Bob Hawke of the Australian Labor Party from 1983 to 1991. The Government followed the Liberal-National Coalition Fraser Government and was succeeded by another Labor administration: the Keating Government, led by Paul Keating after an internal party leadership challenge in 1991. Keating served as Treasurer through much of Hawke's term as Prime Minister, and the period is sometimes termed the Hawke-Keating Government despite the fact that there were fundamental differences between the two men and their policies.

Contents
- 1 Background
- 2 Terms in office
- 3 See also
- 4 References
- 5 Bibliography
- 6 Further reading

Background
Bob Hawke served as president of the Australian Council of Trade Unions (ACTU) from 1969 to 1980. On 14 October 1980, he was preselected as the Australian Labor Party candidate for the seat of Wills and resigned from the ACTU. Hawke won the seat at the 1980 Election and was appointed as Shadow Minister for Industrial Relations, Employment and Youth Affairs by Opposition Leader Bill Hayden. In 1982, amid the early 1980s recession, he initiated a leadership challenge against Hayden, and narrowly lost. At the February 1983 funeral of former Labor Prime Minister Frank Forde, Hayden was persuaded by colleagues to step down, leaving the way open for Hawke to assume leadership of the ALP. In announcing his resignation, Hayden famously remarked that, given the electoral climate, "a drover's dog could lead the Labor Party to victory". Long-serving Liberal Prime Minister Malcolm Fraser announced an election that same day, with a date set for 5 March. Hawke served just one month as Opposition Leader before taking the ALP to victory against Fraser at the 1983 Election. Labor had been out of office since the divisive Dismissal of the Whitlam Government in 1975.

Terms in office
Hawke led the Australian Labor Party to a landslide victory against Malcolm Fraser's Liberal-National Coalition Government at the 1983 Australian Federal Election, with Labor seizing 75 seats in the Australian House of Representatives against the Liberal Party's 33 and the National Party's 17. He went on to become Australia's longest serving Labor prime minister and remains the third longest serving Australian prime minister after Robert Menzies and John Howard. Hawke again led the party to the 1984 Election and was returned with a reduced majority in an expanded House of Representatives, with Labor taking 82 seats to the Coalition's 66. Labor went on to a third straight victory at the 1987 Election and increased its majority from 16 to 24 seats. Hawke fought his final election in 1990, with Labor winning a nine-seat majority. Hawke retired from Parliament in February 1992, following the December 1991 leadership spill which saw him replaced as leader by Paul Keating. The inaugural days of the Hawke government were distinctly different from those of the Whitlam era. Rather than immediately initiating extensive reform programmes, Hawke announced that Fraser's pre-election concealment of the budget deficit meant that many of Labor's election commitments would have to be deferred.
Hawke convinced the Labor caucus to divide the ministry into two tiers, with only the most important Ministers attending regular cabinet meetings. This was to avoid what Hawke viewed as the unwieldy nature of the 27-member Whitlam cabinet. The caucus under Hawke exhibited a much more formalised system of parliamentary factions, which significantly altered the dynamics of caucus operations. Hawke and Keating formed an effective political partnership despite their differences. Hawke was a Rhodes Scholar; Keating left high school early. Hawke's enthusiasms were cigars, horse racing and sport whereas Keating preferred classical architecture, Mahler symphonies, and antique collecting. Hawke was consensus-driven whereas Keating revelled in debate. Hawke was a lapsed Protestant and Keating was a practising Catholic. While the impetus for economic reform largely came from Keating, Hawke took the role of reaching consensus and providing political guidance on what was electorally feasible and how best to sell it to the public. In his first term, Hawke set the record for the highest approval rating on the ACNielsen Poll (a record which still stands as of 2008). The government benefited from the disarray within the Liberal opposition after the resignation of Fraser. The Liberals were divided between supporters of John Howard and Andrew Peacock. The conservative Premier of Queensland, Sir Joh Bjelke-Petersen, also helped Hawke with his "Joh for Canberra" campaign in 1987, which proved highly damaging for the conservatives. Exploiting these divisions, Hawke led the Labor Party to comfortable election victories in 1984 and 1987. The Hawke Government came to power in 1983 amidst an economic downturn, but pursued a number of economic reforms that assisted in a strong recovery through the 1980s. Economic factors at play during the Hawke government were globalisation, micro-economic reform and industrial relations reform, as well as the opening of Australian finance and industry to international competition and adjustments to the role of trade unions. Hawke concluded his term as Prime Minister with Australia in the midst of its worst recession since the Great Depression. Economic reform included the floating of the Australian dollar, deregulation of the financial system, dismantling of the tariff system, privatised state sector industries, ended subsidisation of loss-making industries, and the sale of the state-owned Commonwealth Bank of Australia. A fringe benefits tax and a capital gains tax were implemented. Hawke's Prime Ministership saw friction between himself and the grassroots of the Labor Party, who were unhappy at what they viewed as Hawke's iconoclasm and willingness to co-operate with business interests. The Socialist Left faction, as well as prominent Labor figure Barry Jones, offered severe criticism of a number of government decisions. He has also received criticism for his 'confrontationalist style' in siding with the airlines in the 1989 Australian pilots' strike. The Hawke Government did, however, significantly increase the social wage as part of its Accord with the trade unions, a social democratic policy continued by the Keating Government. Improvements to the social wage included improved affordability of and access to key services such as health and child-care and health, together with large increases to payments for low-wage and jobless families with children. 
Indexation of child payments was also introduced, while coverage of occupational superannuation pensions was also widened significantly, from 46% of employees in 1985 to 79% in 1991. During the course of the Eighties and early Nineties, government benefits substantially improved the incomes of the bottom 20% of households, with rent assistance, family payments, and sole parent benefits all substantially boosted in real terms. According to some historians, when examining the economic reforms carried out during the Eighties in both Australia and New Zealand, “some modest case can be mounted for Labor in Australia as refurbisher of the welfare state”. From 1983 to 1996, improved service provision, higher government transfer payments, and changes to the taxation system “either entirely offset, or at the very least substantially moderated, the increase in inequality of market incomes over the period”. During the period 1983 to 1996, Australia was one of the leading OECD countries in terms of social expenditure growth, with total social spending increasing by more than four percentage points of GDP compared to an OECD average of around 2.5 percentage points. "Active society" measures were also introduced in an attempt to limit the growth of poverty and inequality. From 1980 to 1994, financial assistance for low-income families in Australia increased from 60% of the OECD average in 1980 to 140% in 1994, and it is argued that the social and economic policies delivered under the government-trade union Accord had some substantial success in reducing family poverty, as characterised by reductions in child poverty from the early Eighties onwards. According to the OECD, the percentage of Australians living in poverty fell during the Hawke Government's time in office, from 11.6% of the population in 1985–86 to 9.3% in 1989–90. Child poverty also fell dramatically under the Hawke-Keating Government, with the percentage of children estimated to be living in poverty falling from nearly 16% in 1985 to around 11% by 1995. As noted by Brian Howe, social policy under Hawke was effective in reducing poverty and protecting those most vulnerable to massive social and economic change. According to some observers, "improvements in government policies and programs in income support payments, and services such as education, health, public housing and child care, and the progressive nature of the income tax system, have all contributed to the result that Australia appears to have become a more equal society over the period from 1981–82 to 1993–94". In 1984, the government introduced its three mine policy to limit the number of uranium mines in the Australia to three. The policy resulted from the strength of the anti-nuclear movement within Labor. The 1987 budget extended rental assistance to all Family Allowance Supplement recipients together with longer-term unemployment benefit beneficiaries. A family package was introduced that same year, designed not only to improve the adequacy of welfare payments for low-income families, but was also designed to ensure that participating in part-time work or full-time work didn’t lead to a loss in income support. The Hawke Government’s achievements in boosting financial support to low-income households were substantial, with the family assistance package bringing significant benefits to millions of low-income families in the years ahead. 
As noted by Ann Harding at the University of Canberra: “To appreciate the scale of these changes, let us look at the Browns, a hypothetical family. Mr Brown works for a low wage, Mrs Brown looks after two children, and they rent their home. In late 1982 the Browns received just under $13 a week in family allowance – about $25 per week in 1995–96 dollars. In contrast, in January 1996 a family like the Browns would receive $93.10 in family payment and up to $40 a week in rent assistance. To put this in perspective, such a family would have received assistance worth about 4 per cent of average weekly ordinary time earnings in November 1982, but 20 per cent of such earnings in early 1996. We are thus talking about very major changes in the amount of assistance available to low-income working families with children”. The Hawke Government carried out a series of other measures during its time in office. Upon taking office in 1983, a Community Employment Program was set up, providing a large number of work experience opportunities in the public and non-profit sectors. Together with smaller programs such as the Community Youth Support scheme (CYSS), this played a major role in both alleviating and reversing the effects of the 1982 economic recession. A Home and Community Care Program (HACC) was established to provide community-based services for frail aged people and people with disabilities, while to combat homelessness a Supported Accommodation Assistance program was introduced to assist those who are homeless, at risk of homelessness, or escaping domestic violence. A bereavement payment equivalent to fourteen weeks' pension for the surviving member of a pensioner couple was also introduced, together with an Asylum Seeker Assistance scheme to provide help to applicants for refugee status in need. A wide range of measures were introduced to protect the environment, such as a Landcare program, which was established to promote environmental conservation. In addition, spending on housing, education, and health was increased, while an anti-poverty trap package was introduced in the 1985 budget. That same year, rent assistance was extended to include unemployed and low-income working families. The 1985 Tax Summit led to a reduction of loopholes and distortions in the tax system, while the Family Assistance Package (introduced in 1987) significantly strengthened the amount of income support for hundreds of thousands of low-income families. Some sole parents and unemployed persons benefited from other measures designed to reduce barriers to workforce participation, deal with their housing costs, and increase their incomes. In addition, a new Child Support Agency was established, designed to provide a more efficient system of maintenance and tackle child poverty. Funding for public housing and disadvantaged students was also considerably increased. Various measures were also introduced which enhanced the rights of women in the workplace. The Sex Discrimination Act of 1984 prohibited sex discrimination in employment, while the Affirmative Action (Equal Employment Opportunity for Women) Act of 1986 required all higher education institutions and all private companies with more than 100 employees to introduce affirmative action programmes on behalf of women. A year later, equal opportunity legislation for the Commonwealth Public Service was introduced. In 1986, a Disability Services Act was passed to expand opportunities for the participation of disabled persons in local communities.
A major cash benefit for low-income working households, known as the Family Allowance Supplement, was introduced which reduced poverty and provided a better-graduated system of family income support. This new benefit significantly boosted the level of income support for families principally dependent on social welfare benefits. The supplement was also made fully payable, tax-free, to low-income families who were principally reliant on wages, albeit for those who earned below a certain amount. Above that amount, the payment rate fell by 50 cents for every additional dollar of other income until it vanished entirely from families approaching the middle-income range. In addition, the social security rent allowance was extended to these families if they lived in private rental accommodation. The rates of payment were also index-linked to inflation, while additional benchmarks were fixed to help achieve and maintain relativities with community earnings levels. As a result of the FAS, major improvements were made in the financial position of working families on low incomes. In his memoirs, Hawke described this as "the greatest social reform of my Government, and perhaps of all Labor governments". To increase workforce participation, a Jobs, Education and Training Program (JET) for sole parents was launched, comprising a package of measures aimed at liberalising income tests measures, ensuring access to child care, and upgrading the skills of single parents. This reform (which haws introduced with the intention of combating high levels of poverty amongst single parents) helped to enable many single parents to take on part-time work and increase their earnings. Between 1986 and 1996, according to one estimate, the percentage of single parents receiving 90% or more of their income from benefits fell from 47% to less than 36%. Other important social security initiatives introduced for the unemployed included the introduction of the New Employment Entry payment, while some administrative obstacles and income tests were relaxed. In October 1987, the international Stock Market Slump saw markets crash around the world. The crisis originated when Japan and West Germany pushed up interest rates, pressuring US rates also to rise, triggering a massive sell off of US shares. Global share prices fell an average of 25%, but Australia saw a 40% decline. The Hawke Government responded to the crisis initially by asking the Conciliation and Arbitration Commission to defer its national wage case. Treasurer Keating was advised to tighten monetary policy, but, with forthcoming by-elections and a state election in New South Wales, the Government opted to delay the potentially unpopular move, which would raise interest rates. Commodity prices dropped and the Australian dollar sharply declined. The Reserve Bank conducted a $2 billion intervention to hold the dollar at 68c but it crashed to 51c. In December 1987, Keating said that the Australian economy would weather the storm because the Hawke Government had already balanced its Budget and brought down inflation. The Government postponed policy adjustments, planning a mini-Budget for May. Hawke wrote to US President Reagan calling on the US to reduce its Budget deficit. The Business Council called for wage reductions, decreased government expenditure, a lower dollar and deregulation of the labour market. 
Seven months into the crisis, Hawke told the State Premiers that the "savings of Australia must be freed" to go into business investment for export expansion, and funding to the States was cut. A phase-out of tariff protections was continued and company tax was cut by 10% to 39%. In the May mini-Budget, payment to the states was cut by $870 million and tax cuts deferred. The Government declared cost-cutting was completed. A surge in commodity prices began in 1986 and assisted the economy to a small 1987 surplus of $2.3 billion. With commodity prices now over their peak, economic conditions were entering a decline, with high interest rates, a growing current account deficit, declining demand, increasing foreign debt and a wave of corporate collapses. Furthermore, the collapse of the Eastern Bloc economies was to see wool and wheat prices decline, savaging Australia's agricultural sector. Keating budgeted a record $9.1 billion surplus for 1989–90, and Labor won the 1990 election, aided by the support of environmentalists. To court the green vote, environment minister Graham Richardson had placed restrictions on mining and logging, which had a further detrimental effect on already rising unemployment. David Barnett wrote in 1997 that Labor fiscal policy at this time was "self-defeating", as "with one hand it was imposing a monetary squeeze, while on the other it was encouraging spending with wage increases and tax cuts". By July 1990, Australia was entering severe recession. Initially, the Treasurer had insisted Australia would face a "soft landing", but after receiving the September quarter accounts indicating a large contraction of 1.6 per cent, he adopted a different political strategy, instead arguing that the downturn was a necessary correction, opening a press conference in November as follows:

"The first thing to say is, the accounts do show that Australia is in a recession. The most important thing about that is that this is a recession that Australia had to have." – Treasurer Paul Keating, November 1990.

The popularity of Hawke's prime ministership, along with the health of the Hawke-Keating political partnership, deteriorated along with the Australian economy, and Keating began to position himself for a challenge. The Government promised economic recovery for 1991 and launched a series of asset sales to increase revenue. GDP sank, unemployment rose, revenue collapsed and welfare payments surged. The Opposition turned to economist John Hewson as its new leader. Hewson argued that the nation was in economic crisis. He said the Hawke-Keating government had increased the severity of the recession by initially encouraging the economy to boom post-stock crash as elections were approaching, which necessitated higher interest rates and tighter monetary policy than would otherwise have been necessary. Hewson called for a radical reform program and formulated a package which included a consumption tax policy and industrial relations reform to address the poor economic situation. The Fightback! policy was launched in November 1991. The comprehensive plan further destabilised Hawke's leadership. The ACTU campaigned for a wage increase. Hawke brokered an increase for waterside workers and public servants. By April 1991, unemployment was nearing 10% and rising. On 3 June, Keating challenged Hawke for the leadership, but lost the vote and became a destabilising presence on the back bench.
The new treasurer, John Kerin, and deputy prime minister Brian Howe blamed Keating's 1990 economic policy for the poor state of the Australian economy. Industrial Relations Minister Peter Cook indicated an intention to introduce a more flexible wage system. In his July budget, Kerin forecast a deficit of $4.7 billion. In a press conference, Kerin was unable to recall what GOS – Gross Operating Surplus – stood for. In December, shortly before Keating's successful second challenge against Hawke, Kerin was removed as Treasurer and appointed Minister for Transport and Communications, and the Minister for Finance, Ralph Willis, became Treasurer. Hawke attributed the change to a loss of confidence in communication. By 1992, shortly after Hawke lost office, unemployment had reached 11 per cent, the highest level in Australia since the Great Depression of the 1930s. In health, the Whitlam Government's universal health insurance system (Medibank), which had been dismantled by Fraser, was restored under a new name, Medicare, while a Pharmaceutical Allowance was also introduced to help pay towards the cost of prescription medicines. The government's response to the AIDS concern is also considered to have been a success. In addition, nursing education was transferred from hospital-based programs to the tertiary education sector, while Australia's first ever national mental health policy was proclaimed.
Australia Act and national symbolism
In April 1984, the Hawke Government proclaimed Advance Australia Fair as Australia's national anthem, settling an ongoing debate, and at the same time declared green and gold as the national colours of Australia. The Hawke government secured passage of the Australia Act in 1986, severing remaining constitutional ties to Britain: ending the inclusion into Australian law of British Acts of Parliament, and abolishing remaining provisions for appeals to the Privy Council in London. Canberra's New Parliament House was officially opened by Queen Elizabeth II in a grand ceremony in May 1988, and the Australian Bicentenary was celebrated with huge pomp and ceremony across Australia, marking the anniversary of the arrival of the First Fleet of British ships at Sydney in 1788. The government refused to fund the tall ship First Fleet Re-enactment Voyage which was staged on Sydney Harbour on Australia Day because it believed this might offend indigenous Australians. In the later years of Hawke's prime ministership, Hawke spoke of the idea of a treaty between Aborigines and the government. No such treaty was ever concluded, though subsequent events, including the Mabo court decision during the tenure of the Keating Government, did progress legal recognition of indigenous land rights. In 1984, Hawke appointed Charles Perkins as Secretary of the Department of Aboriginal Affairs, making him the first Indigenous Australian to head a Commonwealth department. In 1989 the Hawke Government replaced the Department of Aboriginal Affairs with an Aboriginal and Torres Strait Islander Commission as the main administrative and funding agency for Indigenous Australians. The Aboriginal and Torres Strait Islander Commission began work in March 1990. In 1985, the Hawke government officially returned ownership of Uluru (formerly known as Ayers Rock), with Governor General Sir Ninian Stephen presiding over the ceremony handing the title deeds to the local Pitjantjatjara Aboriginal people.
The transfer was done on the basis that a lease-back to the National Parks and Wildlife Service and joint management by members of the local Mutijulu community would be settled upon. In the final year of Hawke's prime ministership, the Royal Commission into Aboriginal Deaths in Custody released its final report, having investigated some 99 deaths between 1980 and 1989. In education, the Hawke Government sought to significantly widen educational opportunities for all Australians. Increased funds were made available for most schools, while both TAFE and higher education were expanded. Measures were taken to improve educational opportunities for Aborigines, as demonstrated by the government providing funding of almost $100 million from 1984 to 1992 for parental education, student support and tutorial assistance through its Aboriginal Education Direct Assistance Program. In addition, an Aboriginal and Torres Strait Islander Capital Grants Program was established to construct and renovate school buildings in remote area communalities. Government expenditure on education under Hawke also rose significantly. On a per-student basis, the increase in Commonwealth funding amounted to 136% for government schools and 71% for non-government schools. A Participation and Equity Program was also established which provided around $250 million mainly to schools with low retention to the end of secondary education from 1983 to 1987. Student numbers in training and vocational education (mainly in TAFE colleges) rose by over 25% under Hawke. University enrolments rose by almost 57%, from 357,000 in 1984 to 559,000 in 1992. The percentage of students in secondary education rose substantially, from 35% in 1982 to 77% in 1992, partly as a result of greater financial assistance to students from low-income backgrounds. Bill Hayden served as Minister for Foreign Affairs in the Hawke Government from 1983, until his 1988 resignation from Parliament to take up the position of Governor General of Australia. The portfolio then passed to Gareth Evans. Hawke sought to raise Australia's international profile in the United States, Russia, China, Japan and south-east Asia and also took an interest in the Israeli–Palestinian conflict. With Soviet president Mikhail Gorbachev running his policy of Perestroika and Glasnost, Hawke visited Moscow in 1987 for discussions on trade and foreign policy. The Hawke Government was the last Australian Government to operate within the international climate of the Cold War, which came to a conclusion in the aftermath of the 1989 Fall of the Berlin Wall. Hawke developed warm relations with Republican Party Presidents Ronald Reagan and George H W Bush, as well as Secretary of State George Shultz. By Hawke's own account, he was an enthusiastic supporter of the US Alliance, though, on various occasions, he had to persuade less enthusiastic members of his caucus to toe the party line. In 1985, the MX Missile controversy saw Hawke, under pressure from within the Labor Party, withdraw support for the splash down and monitoring of long range missile tests planned by the United States in Australian waters. That same year, the ANZUS Alliance was shaken by the decision of New Zealand to block visits by nuclear ships of the United States Navy at New Zealand ports. Hawke unsuccessfully lobbied New Zealand Prime Minister David Lange to change the policy and the ANZUS Treaty faced its most serious test. 
As part of a policy of cultivating ties with neighbouring Indonesia, the Hawke Government negotiated a zone of co-operation in an area between the Indonesian province of East Timor and northern Australia, known as the Timor Gap Treaty, signed between the governments of Australia and Indonesia. The signatories to the treaty were then Australian Foreign Affairs Minister Gareth Evans and then Indonesian Foreign Minister Ali Alatas. The treaty was signed on 11 December 1989 and came into force on 9 February 1991. It provided for the joint exploitation of petroleum resources in a part of the Timor Sea seabed which were claimed by both Australia and Indonesia and was considered controversial for its overt recognition of Indonesia's sovereignty over East Timor. In the biggest mobilisation of Australian Forces since the Vietnam War, the Government committed Australian naval forces to the 1991 Gulf War in support of the United States led coalition against the regime of Saddam Hussein, following the invasion of oil-rich Kuwait by Iraq on 2 August 1990. The United States amassed a 30 nation coalition of some 30,000 troops and the UN Security Council issued an ultimatum to Iraq for the withdrawal. Operation Desert Storm, an air bombardment, followed by a 43-day war followed Iraq's failure to withdraw. The Royal Australian Navy (RAN) provided vessels for the multi-national naval force, patrolling the Persian Gulf to enforce the UN sanctions. The Government elected to maintain an Australian naval presence in the Gulf following the surrender of Iraq and 1991 Peace Treaty. Ultimately, though Iraq withdrew from Kuwait, its failure to adhere to other conditions of the 1991 Treaty led to the second Iraq War a decade later. In an address to the nation explaining Australia's involvement, Hawke said that to protect small nations, sometimes "tragically", we must fight for peace. Keating replaces Hawke In 1990, a looming tight election saw a tough political operator, Graham Richardson, appointed Environment Minister, whose task it was to attract second-preference votes from the Australian Democrats and other environmental parties. Richardson claimed this as a major factor in the government's narrow re-election, and Hawke's last, in 1990. During Hawke's last months in office, employment assistance programs were expanded, while a Building Better Cities program was launched, promising higher investment in transport and other infrastructure, mainly in outer urban and regional areas. Paul Keating became deputy Prime Minister following the retirement of Lionel Bowen in 1990. New economic challenges emerged in the wake of the 1987 New York Stock Market slump and the ALP lost ground at the 1990 Election. At Kirribilli House in 1988, Hawke and Keating had discussed the possibility of Hawke retiring after the 1990 Election and when Hawke refused to do so, Keating began to publicly hint at dissatisfaction at Hawke's leadership. By 1991 Australia was in deep recession. By now the successful Hawke-Keating political partnership had fractured. Hawke's popularity had declined along with economic conditions. On 3 June, Keating challenged Hawke for the leadership, lost the party ballot and went to the backbench. John Kerin replaced Keating as Treasurer after Keating resigned, although Bob Hawke himself was treasurer for a day after Paul Keating resigned. Kerin had been Minister for Primary Industry but his period as Treasurer was a difficult one, not least because of the ongoing tension between Bob Hawke and Paul Keating. 
Kerin resigned as Treasurer shortly before Keating's second, successful, bid for leadership in December 1991. On 12 December 1991, a group of Hawke's senior ministers – Kim Beazley, Michael Duffy, Nick Bolkus, Gareth Evans, Gerry Hand and Robert Ray – approached Hawke and asked him to resign. Hawke refused, but was persuaded to call another leadership spill for 19 December 1991. This time, Keating won a narrow victory, winning the leadership of the Labor Party and becoming Prime Minister of Australia on 20 December 1991. - First Hawke Ministry - Second Hawke Ministry - Third Hawke Ministry - Fourth Hawke Ministry - First Keating Ministry - Second Keating Ministry - "Before office – Robert Hawke – Australia's PMs – Australia's Prime Ministers". Primeministers.naa.gov.au. - "Elections – Robert Hawke – Australia's PMs – Australia's Prime Ministers". Primeministers.naa.gov.au. - Kelly, P., (1992), p.57 - Edwards, J.,(1996), p.44 - Edwards, J.,(1996), p.6, p.48 - "The biggest hammering in history". Sydney Morning Herald. 20 May 2008. Retrieved 20 May 2008. - "In office – Robert Hawke – Australia's PMs – Australia's Prime Ministers". Primeministers.naa.gov.au. - Kelly, P., (1992), p.544 - Social Welfare in Developed Market Countries – Google Books. Books.google.com. - Welfare and Work in the Open Economy: Volume II: Diverse Responses to Common ... – Google Books. Books.google.com. - New Voices for Social Democracy: Labor Essays 1999–2000 by Dennis Glover and Glenn Anthony Patmore - Archived 18 August 2008 at the Wayback Machine. - Archived 31 December 2011 at the Wayback Machine. - The Hawke Government: A Critical Retrospective, edited by Susan Ryan & Tony Bramston - Prospect or suspect – uranium mining in Australia Australian Academy of Science, accessed: 18 February 2011 - Mike Steketee: Fierce ALP brawl on uranium policy The Australian, author: Mike Steketee, published: 26 April 2006, accessed: 18 February 2011 - The Australian welfare state: key documents and themes by Jane Thomson and Anthony McMahon - Domestic Violence in Rural Australia by Sarah Wendt - https://web.archive.org/web/20120325015646/http://www.jennymacklin.fahcsia.gov.au/statements/Pages/centenary_age_pension_05june08.aspx. Archived from the original on 25 March 2012. Retrieved 28 March 2012. Missing or empty - Ecotourism: a practical guide for rural communities by Sue Beeton - Welfare reform in rural places: comparative perspectives by Paul Milbourne - Work, family and the law by Jill Murray - Working out: new directions for women's studies by Hilary Hinds, Ann Phoenix, and Jackie Stacey - 2000 Year Book Australia No. 82 by the Australian Bureau of Statistics - The Hawke Government: A Critical Retrospective – Google Books. Books.google.com. - The Hawke Memoirs by Bob Hawke - Barnett & Goward; John Howard Prime Minister; Viking; 1997; Ch 12 - Ross Gittins (6 June 2011). "This time it's a recession we don't have to have". Smh.com.au. - "Before office – Paul Keating – Australia's PMs – Australia's Prime Ministers". Primeministers.naa.gov.au. - "Hawke Government events – 1990". Library.unisa.edu.au. - Barnett & Goward; John Howard Prime Minister; Viking; 1997; Ch 13 - Lewis, Steve (15 August 2011). "ALP elder Kerin quits in disgust". The Courier-Mail. - "Hawke Government events – 1991". Library.unisa.edu.au. - (PDF) https://web.archive.org/web/20080719012751/http://www.tonyburke.com.au/documents/info_kits/Pensioner_Kit06.pdf. Archived from the original (PDF) on 19 July 2008. Retrieved 28 March 2012. 
<urn:uuid:7583dd59-e5e0-437a-843c-5f829d90c229>
{ "date": "2017-06-23T11:02:51", "dump": "CC-MAIN-2017-26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320049.84/warc/CC-MAIN-20170623100455-20170623120455-00576.warc.gz", "int_score": 3, "language": "en", "language_score": 0.956546425819397, "score": 2.703125, "token_count": 7304, "url": "https://en.wikipedia.org/wiki/Hawke_Government" }
As with many scientific topics, there are two sides to this debate. Those against supplements claim they are not necessary whatsoever and that eating a balanced diet with whole foods can provide even the most elite athletes with the nutrients necessary for optimum performance. On the other side are supplement supporters, who claim supplements can be used safely and effectively and are key additions to a whole foods-based sports nutrition program. It is important to note that both sides have science in hand to support their stances.

Some players take protein shakes with hopes of losing fat and getting in shape for a season. Contrary to the claims made by supplement companies, the key to fat loss is not consuming whey protein, or plant protein, for that matter. You must find yourself in a caloric deficit by exercising and eating a clean diet. Protein simply helps preserve your lean mass while shedding those unwanted pounds. In this case, whole food lean protein sources may be your best option to ensure you also consume plenty of fat-boosting vitamins and minerals.

The main reasons athletes consume protein shakes are to improve performance, recover faster and grow lean mass. Scientifically, this narrows down to one question: "What is the best type of protein source to consume post-workout to promote muscle protein synthesis?" Let's take a look at the science.

A large body of scientific evidence exists that supports the various benefits of protein supplementation. In contrast to slower-digesting whole food protein sources such as Greek yogurt, lean meats, nuts and seeds, protein powders are processed quickly by our bodies. Many studies support the belief that it is not only the quality and amino acid profile of protein we ingest around exercise that matters, but how fast it is integrated into the muscle cell that ultimately determines growth and recovery. Furthermore, instead of ingesting small "snacks" for protein after a match, it is more beneficial to ingest a larger quantity of fast-digesting protein once you get off the court.

A recent study at McMaster University in Ontario, Canada, helped support this theory. Subjects were fed either ten intermittent small doses of whey protein, to mimic a slower-digesting source, or one large bolus of whey protein. The individuals who consumed the large single dose of whey protein had a larger increase in blood amino acid concentration compared to the lower, yet sustained, concentrations found in the ten-mini-dose group. Which was better? The single, large spike in blood amino acids resulted in more muscle protein synthesis and other anabolic signals [1]. The take-home message is that athletes should aim to produce a quick peak in blood amino acids post-game to more effectively repair muscle tissue.

Both sides of the argument are partially correct. With respect to whole foods, every athlete should strive to consume a balanced diet founded upon whole foods. No food group should be omitted, as they are all important to ideal health as a player. Healthy fats, starches and protein sources all contain a plethora of vitamins, minerals and trace minerals in addition to their macronutrients. These are undeniably vital to performance. The majority of athletes can stop here. For most of us, a balanced diet customized to our sport is more than enough to achieve our performance goals. However, athletes aiming to perform at the highest level could benefit from faster protein absorption around exercise.
If a recovery time that is 12 hours shorter matters to you, or you need an extra half inch of vertical leap after a grueling eight-week offseason, you may be a player who would benefit from a protein supplement. At the highest level, the slightest advantage separates champions from those who fail to succeed, and thankfully there are many safe, effective ways to naturally push your body to the limit. Maximizing muscle repair and growth is one of those ways.

When making a post-game or post-workout protein shake, keep it simple! Fiber and fat slow digestion, which is not ideal, so include simple, fast-digesting carbohydrates and proteins only.

Note: This recovery recipe contains fast-absorbing protein (amino acids) combined with fast-digesting high-molecular-weight starch, glucose and other simple sugars to help replenish muscle glycogen stores.

[1] West et al., American Journal of Clinical Nutrition (2011) 94, 795-803. "Rapid aminoacidemia enhances myofibrillar protein synthesis and anabolic intramuscular signaling responses after resistance exercise."

Originally published in February 2012
<urn:uuid:0c82c0f2-06af-4282-b0f6-d16dd22ac6c8>
{ "date": "2014-09-19T09:47:03", "dump": "CC-MAIN-2014-41", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657131238.51/warc/CC-MAIN-20140914011211-00017-ip-10-196-40-205.us-west-1.compute.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9442090392112732, "score": 2.5625, "token_count": 899, "url": "http://volleyballmag.com/articles/42613-protein-drinks-and-volleyball-are-they-really-necessary" }
Understanding winter sodium deposition in Taranaki, New Zealand
- Authors: Yates, L.J., Hedley, M.J.
- Source: Soil Research 2008 v.46 no.7 pp. 600-609
- Subjects: algorithms, calcium, coasts, computer software, correlation, dairy farming, equations, magnesium, nutrient management, potassium, rain, rain gauges, sodium, soil, wind direction, wind speed, New Zealand
- Abstract: Research conducted in a limited number of regions has identified that Na deposition rate (kg Na/ha) is strongly influenced by 4 main factors: distance from coast, rainfall, wind speed, and wind direction. Despite the potential importance of Na deposition to the productivity of dairy farms, no comprehensive research has been conducted in Taranaki, New Zealand. Na, K, Ca, and Mg concentrations were determined in weekly rainwater samples collected in standard rain gauges erected at 15 sites, along 4 transects around Taranaki, between May and September 2006. Recorded Na concentrations ranged between 0.40 and 38 mg/L. High Na concentrations were associated with low rainfall volumes and proximity to the coast first receiving the prevailing wind, which was, during this period, the southern Taranaki coast. Na deposition ranged between 0.04 and 25 kg/ha per week. Equations were derived to predict the average Na concentration in rainwater and Na deposition in Taranaki for the 2006 winter period. The most influential factor explaining the variation in average Na concentration was the distance of the collector from the southern coast. Na and Mg depositions were highly correlated (R²=0.93; P<0.01; n=155), whereas correlations of Na with K or Ca were not as strong (R²=0.49 and 0.61, respectively). Measured Na deposition rates exceed those predicted by algorithms used in current nutrient budgeting software and could be used to improve this nutrient management software.
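Weekly deposition follows from the gauge measurements by a standard unit conversion: 1 mm of rain falling on 1 ha is 10,000 L of water, so deposition in kg/ha is concentration (mg/L) × rainfall (mm) × 0.01. The paper's actual predictive equations for Taranaki are not reproduced in this record, so the sketch below only illustrates that conversion; the sample values are placeholders chosen to span the reported concentration range, not data from the study.

```python
# Minimal sketch of the concentration-to-deposition arithmetic (not the paper's model).
def na_deposition_kg_per_ha(concentration_mg_per_l: float, rainfall_mm: float) -> float:
    """Weekly Na deposition (kg/ha) for one rain-gauge sample.

    1 mm of rain on 1 ha = 10,000 L of water, and 1 kg = 1e6 mg, so the factor is 0.01.
    """
    return concentration_mg_per_l * rainfall_mm * 0.01

# Placeholder samples (mg/L, mm of rain in the week) - illustrative only.
samples = [(0.40, 60.0), (5.0, 25.0), (38.0, 8.0)]
for conc, rain in samples:
    dep = na_deposition_kg_per_ha(conc, rain)
    print(f"{conc:5.2f} mg/L with {rain:4.1f} mm rain -> {dep:5.2f} kg Na/ha for the week")
```

Summing such weekly values over a winter, and relating them to distance from the southern coast, rainfall and wind records, is the kind of calculation the abstract's predictive equations formalise.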
<urn:uuid:1d40066b-56fa-4852-885a-f493f3dcfe28>
{ "date": "2019-09-16T16:41:18", "dump": "CC-MAIN-2019-39", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572879.28/warc/CC-MAIN-20190916155946-20190916181946-00536.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9254322052001953, "score": 2.734375, "token_count": 399, "url": "https://pubag.nal.usda.gov/catalog/2167552" }
What is wrong, Zeus? The humans have upset me for the last time! Run, Charles, run!

Zeus was angry with the humans, so he decided to attack them and teach them a lesson, because he is a god and he does what he wants. They learnt their lesson never to mess with the gods ever again and not to mess with natural causes.

Socrates said that there were no gods and that they believed in nonsense. Socrates was executed for not believing in the gods, because of the king's orders.

In Athens they were well educated and flourished. But in Sparta you were a part of the army or nothing.
<urn:uuid:305af991-e57a-4069-86a7-e48da2499a0b>
{ "date": "2018-01-19T07:59:54", "dump": "CC-MAIN-2018-05", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887832.51/warc/CC-MAIN-20180119065719-20180119085719-00256.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9509775638580322, "score": 2.578125, "token_count": 188, "url": "http://www.storyboardthat.com/storyboards/dakotaberger/greece" }
An Introduction to Buddhism: Teachings, History and Practices (Introduction to Religion, 2nd Revised edition)
By: Peter Harvey (author), Paperback

In this second edition of the best-selling Introduction to Buddhism, Peter Harvey provides a comprehensive introduction to the development of the Buddhist tradition in both Asia and the West. Extensively revised and fully updated, this edition draws on recent scholarship in the field, exploring the tensions and continuities between the different forms of Buddhism. Harvey critiques and corrects some common misconceptions and mistranslations, and discusses key concepts that have often been over-simplified and over-generalised. The volume includes detailed references to scriptures and secondary literature, an updated bibliography and a section on web resources. Key terms are given in Pali and Sanskrit, and Tibetan words are transliterated in the most easily pronounceable form, making this a truly accessible account. This is an ideal coursebook for students of religion, Asian philosophy and Asian studies, and is also a useful reference for readers wanting an overview of Buddhism and its beliefs.

Peter Harvey is Emeritus Professor of Buddhist Studies at the University of Sunderland. He is author of An Introduction to Buddhist Ethics: Foundations, Values and Issues (Cambridge University Press, 2000) and The Selfless Mind: Personality, Consciousness and Nirvana in Early Buddhism (1995). He is editor of the Buddhist Studies Review.

Contents: Introduction; 1. The Buddha and his Indian context; 2. Early Buddhist teachings: rebirth and karma; 3. Early Buddhist teachings: the four true realities for the spiritually ennobled; 4. Early developments in Buddhism; 5. Mahayana philosophies: the varieties of emptiness; 6. Mahayana holy beings, and Tantric Buddhism; 7. The later history and spread of Buddhism; 8. Buddhist practice: devotion; 9. Buddhist practice: ethics; 10. Buddhist practice: the Sangha; 11. Buddhist practice: meditation and cultivation of experience-based wisdom; 12. The modern history of Buddhism in Asia; 13. Buddhism beyond Asia; Appendix on canons of scriptures; Web resources; Bibliography; Index.

ISBN: 9780521676748
<urn:uuid:322fcd2d-2086-4138-9b0f-533939f395c2>
{ "date": "2017-02-28T07:09:07", "dump": "CC-MAIN-2017-09", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00072-ip-10-171-10-108.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8325524926185608, "score": 2.5625, "token_count": 542, "url": "https://www.whsmith.co.uk/products/an-introduction-to-buddhism-teachings-history-and-practices-introduction-to-religion-2nd-revised-edition/9780521676748" }
U.S. farmers depend on a 50-year-old highway system, a 70-year-old inland waterway system and a railway network built in the late 1800s to move their products from the fields to end users. This aging transportation system has been providing U.S. soybean farmers a competitive advantage in the global market, but a recent study funded by the United Soybean Board's (USB's) and soy checkoff's Global Opportunities (GO) program supports the growing evidence that this advantage continues to be threatened by the deterioration of U.S. highways, bridges, rails, locks and dams.

The study, "Farm to Market - A Soybean's Journey," analyzed how soybeans and other agricultural products move from the farm gate to customers, highlighting weaknesses found in the system along the way. The study was recommended by the checkoff-funded Soy Transportation Coalition.

"The entire transportation network has been vital to the U.S. soy industry, not only in moving our product to domestic processors but also in delivering U.S. soy to our international customers as well," says Dale Profit, soybean farmer from Van Wert, Ohio, and USB director. "We need to protect this advantage if the United States is going to remain the preferred source for soy throughout the world."

The U.S. inland waterway system remains a precarious leg of a soybean's journey. The deteriorating lock system remains at risk of failure, and dredging needs to be done to encompass new larger ships that will be possible with the expansion of the Panama Canal, due to open in late 2014. The U.S. Army Corps of Engineers has the responsibility to maintain a depth of 45 feet on the lower Mississippi River, but, due to funding issues, has not been able to dredge to maintain an adequate navigable channel, limiting ships to a 42-foot draft, meaning the vessel holds fewer soybeans. If U.S. waterways cannot accommodate these larger ships, the U.S. soy industry may not be able to capitalize on the potential advantages that the expanded Panama Canal will offer. The checkoff-funded study also shows that limiting the volume of soy that can be in one shipment could lead to higher freight costs.

The U.S. railway network has also been under pressure, especially as more U.S. soybeans have made their way to China. The industry has seen an increase in rail movement from the western Soybean Belt to the Pacific Northwest. In 2009-10, 68 percent of U.S. soybeans traveling by rail ended their U.S. journey in the Pacific Northwest. The study predicts that China's import of U.S. soy will continue to grow, doubling by 2020-21.

"Brazil has several proposed infrastructure projects that haven't been completed yet," adds Profit. "But if those improvements are made in Brazil, it would put them on par with U.S. soybean farmers as far as transportation costs, and we would lose that advantage."

Improvements to the transportation infrastructure would make the movement of U.S. soy and other agricultural products more efficient, totaling expected cost savings to the U.S. soybean and grain industries of $145.9 million annually, according to the study. U.S. farmers wouldn't be the only ones to benefit from improved infrastructure. Several U.S. industries remain fully dependent on oilseeds and grain. These industries annually provide 1.5 million jobs and more than $352 billion in U.S. output, $41 billion in labor earnings and $74 billion in value added to the U.S. economy.

The 69 farmer-directors of USB oversee the investments of the soy checkoff to maximize profit opportunities for all U.S. soybean farmers. These volunteers invest and leverage checkoff funds to increase the value of U.S. soy meal and oil, to ensure U.S. soybean farmers and their customers have the freedom and infrastructure to operate, and to meet the needs of U.S. soy's customers. As stipulated in the federal Soybean Promotion, Research and Consumer Information Act, the USDA Agricultural Marketing Service has oversight responsibilities for USB and the soy checkoff. For more information on the United Soybean Board, visit www.unitedsoybean.org.
<urn:uuid:199180cc-5ab9-4e77-98cb-f40378344608>
{ "date": "2014-04-16T04:38:34", "dump": "CC-MAIN-2014-15", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00011-ip-10-147-4-33.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9523022174835205, "score": 2.734375, "token_count": 888, "url": "http://www.wisfarmer.com/editorial/infrastructure-investments-could-save-us-farmers-millions-----jcpg-289851-168777976.html" }
Folks caught on pretty quick to what I was showing in the previous post if the comments are any guide: this feature, seen anywhere in the world by any trained geologist, would be identified readily as a doubly plunging syncline, sedimentary layers twisted into an elongated bowl shaped form. But this particular fold in the rock carries a lot of emotional baggage, located as it is just a few miles from Mt. Ararat in Turkey. After a fuzzy picture of the feature appeared in LIFE magazine in 1960, a number of individuals quickly decided the feature had to be Noah's Ark, and a small cottage industry has grown around the selling of the idea. As mentioned before, I have discussions with students about religion and science, and especially about the age of the Earth and evolutionary theory. They sometimes bring up items like this, in the full confidence that they have discovered for themselves the absolute proof of Noah, a worldwide flood, and a 6,000 year old earth. It's hard to be patient sometimes, to explain that a feature like this is easily explained by science, and that maybe, just maybe, the promoters of the "evidence" on the internet might not have the most honorable of motivations. Here are some arguments about the age of the Earth and evolution you might not want to bring up: 1. Moon dust proves a young moon. 2. NASA computers, in calculating the positions of planets, found a missing day and 40 minutes, proving Joshua’s “long day” (Joshua 10) and Hezekiah’s sundial movement. 3. There are no beneficial mutations. 4. Darwin recanted on his deathbed. 5. Woolly mammoths were flash frozen during the Flood catastrophe (with buttercups in their mouths!). 6. No new species have been produced. 7. Ron Wyatt has found much archaeological proof of the Bible, including the Ark of the Covenant, and also the Ark of Noah, the subject of the picture above. 8. Evolution is just a theory (I've discussed the meaning of theory in the past). 9. Microevolution is true but not macroevolution. 10. The Paluxy tracks in Texas prove that humans and dinosaurs co-existed. 11. The Japanese trawler Zuiyo Maru caught a dead plesiosaur near New Zealand. 12. The speed of light has decreased over time. 13. Archaeopteryx is a fraud. I'm not going to waste my time or yours explaining why these phenomena and assertions are unusable in an argument over the age of the Earth or evolution. I don't have to. I derived this list from a young-earth creationist organization website. It is they who say these shouldn't be used in an argument with scientists. Even they know these arguments are bogus (you are welcome to Topeka, er, uh, Google "creationists arguments that shouldn't be used" if you are interested). There are plenty of other assertions made by young-earth creationists that can be used in a debate that are equally untrue, but at least the YEC folks believe them. To use "facts" like those on the list above reveals a distinct lack of basic internet research skills and a lack of critical thinking. When a person "wants to believe", they are easily duped...
<urn:uuid:a71cff78-12a2-4d2b-80d5-31fb8ad1176f>
{ "date": "2016-10-21T16:39:32", "dump": "CC-MAIN-2016-44", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718285.69/warc/CC-MAIN-20161020183838-00087-ip-10-171-6-4.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9528669714927673, "score": 2.84375, "token_count": 696, "url": "http://geotripper.blogspot.com/2010/04/friday-fun-foto-part-2-please-dont-try.html?showComment=1271884618359" }
U.N. Reports Examine Trends, Causes Of Maternal Deaths

"New United Nations data show a 45 percent reduction in maternal deaths since 1990," according to a WHO press release. A new study from the U.N. discusses the steady progress made in estimating deaths and female births, emphasizing the need for accurate data. Another report, from the WHO and published in The Lancet Global Health, examines data on the causes of maternal deaths. The report "finds that more than one in four maternal deaths are caused by pre-existing medical conditions such as diabetes, HIV, malaria, and obesity, whose health impacts can all be aggravated by pregnancy. … 'Together, the two reports highlight the need to invest in proven solutions, such as quality care for all women during pregnancy and childbirth, and particular care for pregnant women with existing medical conditions,' says Dr. Flavia Bustreo, assistant director-general of family, women's and children's health at WHO…" (5/6).
<urn:uuid:02e0b494-2a41-4474-a90c-38c1d184298f>
{ "date": "2014-11-23T20:11:38", "dump": "CC-MAIN-2014-49", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400379916.51/warc/CC-MAIN-20141119123259-00224-ip-10-235-23-156.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9450536966323853, "score": 2.765625, "token_count": 212, "url": "http://kff.org/news-summary/u-n-reports-examine-trends-causes-of-maternal-deaths/" }
There are two major groups of dinosaurs that can be divided into the more famous seven groups of dinosaurs.

Saurischia ("lizard-hipped", not really lizard-like but close enough)
* Theropods (two-legged meat-eaters, and birds)
* Sauropodomorphs (two- and four-legged plant-eaters with long necks)

Ornithischia ("bird-hipped", not really bird-like but close enough)
* Ornithopods (two-legged plant-eaters)
* Stegosaurs (plated dinosaurs)
* Ankylosaurs (armored dinosaurs)
* Pachycephalosaurs (dome-headed dinosaurs)
* Ceratopsians (horned dinosaurs)

The word dinosaur comes from the ancient Greek words deinos ("fearfully great", as in awe-inspiring) and sauros (a lizard). The name was invented in 1842 by Richard Owen, who knew they were not lizards, but thought that they may have been derived from them. Owen used the term deinos as a superlative. It was sometimes interpreted as "terrible" in the sense of "inspiring terror". Unfortunately, in more recent decades, this word has come to mean "bad" or "awful", a sense Owen never intended.

Dinosaurs lived on every continent, including Antarctica. This does not mean that dinosaurs lived in polar wastelands. During the time of the dinosaurs, the world was much warmer, and Antarctica had forests. Dinosaurs also lived in many different Mesozoic environments, including deserts, forests, and coastal swamps.

The dinosaurs from the Mesozoic Era lived from 230 to 65 million years ago, from the Late Triassic Period through the Late Cretaceous Period. The dinosaurs from the Cenozoic Era (most birds, which are a subgroup of theropod dinosaurs) lived from 65 million years ago to today.

If you find what you think is a dinosaur bone, this is what you should do. Leave it where it is, even though it's tempting to take it home. Instead, take a picture of it, mark its location on a map, and tell someone at a natural history museum. They will use tools that allow the fragile fossil to be safely collected.

If you're wondering "what are the best sources for info on dinosaurs", then here they are: the library (first and foremost!), followed by natural history museums (second), then the internet, and finally (and certainly last) TV programs.

Over 1,000 Mesozoic (non-bird) dinosaurs have been given scientific names since 1824, when the first dinosaur was named. About half of all the dinosaurs named may not be valid because they were given names based on bits and pieces that do not stand up under modern scientific tests. Nonetheless, it has been estimated that fewer than 10% of all the dinosaurs that actually existed have been found. This is because the fossil record is not complete, and many rocks from the Mesozoic Era are still buried where we cannot get to them. In addition, many dinosaurs must have lived in environments where fossils did not form, so we have no record of them at all.

If you're wondering "why dinosaurs were so big", these are the answers. There are several reasons:
* Size is the best defense.
* Dinosaurs, like several groups of reptiles, had indeterminate growth - which means they continued to grow throughout life, as long as they were healthy and had access to food.
* Dinosaurs laid eggs, and did not have to nurse their young (like mammals), so they could devote more of their biological "energy budget" to growth.

If you're wondering "Are dinosaurs extinct?", this is the answer. Yes and no. All the dinosaur species that lived during the Mesozoic Era are extinct. And all of the "traditional" dinosaurs (meaning all the dinosaurs that aren't birds) are extinct.
Seven of the eight major groups of dinosaurs are extinct, leaving only the theropod dinosaurs (see above). One of 40+ subgroups of theropod dinosaurs evolved into birds, so that one major group is not extinct. You can say that dinosaurs are 7/8 extinct! My favorite dinosaur - the Spinosaurus. Special thanks to Smithsonian for the info!
<urn:uuid:2e6f44c2-bb16-4701-9332-7ad02bf0c1bb>
{ "date": "2017-03-26T22:52:34", "dump": "CC-MAIN-2017-13", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189313.82/warc/CC-MAIN-20170322212949-00471-ip-10-233-31-227.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9694909453392029, "score": 3.171875, "token_count": 898, "url": "http://spencersday2011.blogspot.com/2011/04/april-25-2011-information-about.html" }
Sharks Teeth Treasure “Ever been to Jekyll Island’s Shark’s Tooth Beach?” Tom rode his kayak in the eddy at the margin where the marsh grass met the river, and kind of leered at me, wiggling his eyebrows. I was with a small party of other sea kayak novices on a float trip out of Tidelands Nature Center. Tom Woolf was our guide. The green of Jekyll Island’s maritime forest spread to starboard, endless marsh to port. We were resting, and I’d just told Tom that our family came to Jekyll every time we could pry ourselves away from work and school. “We know this place like the back of our hands,” I’d boasted. Apparently the island had a few secrets I wasn’t privy to. “Shark’s Tooth Beach?” I said. “That’s a new one. Where’s it located?” Tom told me how to get there – and what, if I was lucky, I might find. As it happened, Shark’s Tooth Beach turned out to be a memorable adventure, and served to remind me once again – on this island paradise, there’s always something new to see and do. Prehistoric Sharks Teeth Tom told me that Sharks Tooth Beach was a great place to find prehistoric sharks teeth - the teeth of the Megalodon, an almost-legendary shark that lived during the Cenozoic era, from 1.5 to 28 million years ago. Megalodon grew to more than 58' long. Makes a great white look like a minnow. No Megalodons swim in the waters off Jekyll Island nowadays (at least I don't think they do!), but they left their calling cards - their fossilized teeth, which themselves could reach over 7" long. A 52' Megalodon had a bite force of 24,395 lbs., compared to 4,000 lbs. for a typical great white shark (from Wikipedia article entitled "Megalodon"). Megalodon sharks teeth have been found all over the world, but the teeth you'll most likely run across on Sharks Tooth Beach belong to the great white shark. First Aborted Attempt to Reach Sharks Tooth Beach My first try at reaching Sharks Tooth Beach was a washout - literally. Tom had told me that to reach the beach, you had to hike a bit. Following Tom's instructions, I found the trail head just past the entrance to Summer Waves Water Park. It's gated to keep cars out, but a space was left just wide enough for a hiker or biker to get through. I took off on a cloudy afternoon (the last day of our vacation) while Martha napped in the house. I was thinking I'd be on the beach in a few minutes. Not so. The trail winds through maritime forest bordering virgin marshland for more than a mile, so it's a real hike. Hiking boots aren't needed, but they help. The first section was pretty open, but then the trail narrowed and vegetation started to encroach. I was eager to reach the beach. Tom had said that the best time to search for sharks teeth was at extreme low tide. Dawn Zenkert, director of Tidelands Nature Center, confirmed his advice. "Sharks Tooth Beach is covered by a layer of naturally occurring oyster shells," Dawn said. The best time to find anything is at the lowest point of the ebb tide, when a narrow strip of the muddy bottom is exposed. About halfway to the beach, Mother Nature put a damper on my expedition. Thunderstorms rolled in, accompanied by driving rain - and lightning. Hiking across the lowlands is not much fun when you can see lightning striking the marsh on all sides of you. I turned tail and retreated, as fast as I could, back to the car, feeling lucky to get back with hide intact. Sharks Teeth Here I Come The next time we visited Jekyll Island, I was eager to finish what I'd started. 
I'd talked more with Dawn, and she told me that sharks teeth weren't the only treasures an intrepid explorer could find. She said that others had found shards of pottery left over from when the coastal Indians inhabited the area, as well as tools, weapons, and debris from shipwrecks. So, with visions of treasure dancing in my head, I hit the trail. This time Martha accompanied me (she was born to hike!).

No rain this time, but by the time we reached the halfway point, I was wishing for some. It was about 95 degrees out, and humid as the inside of a greenhouse. We passed the point where I'd turned around the year before, and the trail narrowed considerably, with lush vegetation crowding the path in places. Hikers could get through fairly easily, but a biker would need to leave the bike and continue on foot. Off in the distance we could see the towers of the water slides at Summer Waves, with tiny people waiting their turns to slide. I wondered if they could spot us from their vantage point, and, if they could, were they thinking, "What are those idiots doing down there slogging around in the heat?"

Finally we emerged on the crescent-shaped Sharks Tooth Beach. Dawn was right. It wasn't your typical sandy beach. Oyster shells hid the sand except right at water's edge. We'd timed our visit to coincide with low tide, and had hit it perfectly. A strip of mud about 2' in width lay exposed at our feet. We dug around in the mud for about 30 minutes before calling it quits without a tooth to be found. The heat and my rumbling stomach called a halt to our Indiana Jones impressions, and we slogged back to the car, disappointed that we hadn't discovered any sharks teeth, Megalodon or otherwise - but determined to come back again, in cooler weather, armed with digging tools and sieves, to once again try our luck at finding our own version of buried treasure.

If you want to experience the thrill of the hunt, you can find the Sharks Tooth Beach trail head off Riverview Drive, just past Summer Waves Water Park. If it's hot, take plenty of water with you. It'll take you about an hour of steady hiking to get there, maybe longer with small kids. Bug spray may be necessary on the way in and out, and sun block when you reach the beach. Might want to pack a lunch. Martha and I took a pack of crackers, and it wasn't nearly enough.

Good hunting, and if you discover any sharks teeth (or other treasures), we'd like to hear your story!
<urn:uuid:64bbe03a-d665-4e5e-a4c9-53d2a40297f4>
{ "date": "2017-10-23T02:33:17", "dump": "CC-MAIN-2017-43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825510.59/warc/CC-MAIN-20171023020721-20171023040721-00056.warc.gz", "int_score": 3, "language": "en", "language_score": 0.973537802696228, "score": 2.578125, "token_count": 1424, "url": "http://www.jekyll-island-family-adventures.com/sharks-teeth.html" }
This eye-catching computer-generated animation by Glenn Marshall was created in the open-source programming language Processing. Marshall writes that after creating the application, “I just let the program run till the end of the music, I felt reluctant to interfere too much by trying to sculpt an ending, and just let the code run its own natural course.” Glenn offers more details about the process on his blog. While the movement in the piece above was not created frame-by-frame, the results on the screen are controlled by the artist who designs the application and sets the variables that determine the look of the piece. In most digital animation (CG, Flash), allowing a computer to generate movement is a rote affair that comes in the form of tweening or other types of automation which are designed to make the movement easier to create, not more interesting to watch. Generative animation, however, allows the computer to be a creative partner alongside the artist with resulting movement that would be impossible for either an artist or computer to create by itself. Readers, feel free to share other interesting examples of generative animation that you’ve run across recently. (via Motion Design)
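Marshall's sketch itself was written in Processing (a Java-based environment) and is not reproduced in the post. Purely to illustrate the generative approach described above - the artist fixes the rules and variables, and the program then produces every frame on its own - here is a rough Python analogue using the Pillow imaging library; all of the names and parameter values are assumptions made for this sketch, not anything taken from Marshall's code.

```python
# A minimal generative-animation sketch in Python (illustrative analogue only).
# The "artist" chooses the rules and parameters below; the code then produces
# every frame on its own, so the exact motion is never drawn by hand.
import math
import random
from PIL import Image, ImageDraw  # requires the Pillow package

WIDTH, HEIGHT, FRAMES, N_PARTICLES = 480, 480, 120, 60
random.seed(7)  # fixing the seed makes the "performance" repeatable

# Each particle gets its own radius, speed, phase and size - these are the
# variables the artist tunes to shape the look of the piece.
particles = [
    {
        "radius": random.uniform(40, 220),
        "speed": random.uniform(0.01, 0.05),
        "phase": random.uniform(0, 2 * math.pi),
        "size": random.uniform(1.5, 4.0),
    }
    for _ in range(N_PARTICLES)
]

for frame in range(FRAMES):
    img = Image.new("RGB", (WIDTH, HEIGHT), "black")
    draw = ImageDraw.Draw(img)
    for p in particles:
        # Simple rule: particles orbit the centre while their radius slowly breathes.
        angle = p["phase"] + p["speed"] * frame
        r = p["radius"] * (1 + 0.2 * math.sin(frame * 0.05 + p["phase"]))
        x = WIDTH / 2 + r * math.cos(angle)
        y = HEIGHT / 2 + r * math.sin(angle)
        s = p["size"]
        draw.ellipse([x - s, y - s, x + s, y + s], fill=(180, 220, 255))
    img.save(f"frame_{frame:04d}.png")  # assemble the frames into a movie afterwards
```

Stitching the numbered frames together (for example with ffmpeg) yields a short clip whose motion was never keyframed by hand; changing the seed or any of the parameters changes the "performance" while the rules stay the same.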
<urn:uuid:a1587239-e69b-42d9-8e85-59db6a9a431f>
{ "date": "2014-10-02T16:09:55", "dump": "CC-MAIN-2014-41", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663754.5/warc/CC-MAIN-20140930004103-00256-ip-10-234-18-248.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9567965269088745, "score": 2.640625, "token_count": 241, "url": "http://www.cartoonbrew.com/cgi/music-is-math-by-glenn-marshall-7718.html" }
It’s fortuitous timing that the first statutory holiday weekend of the school year is Thanksgiving. After the initial rush of back to school, and a month of settling into routines, Thanksgiving comes at just the right time to allow everyone to catch their collective breath and to consider what they have to be grateful for. In fact, gratitude can be a powerful tool in learning, particularly effective in helping to focus positive student responses towards school and learning. In his post “Gratitude: A Powerful Tool for Your Classroom” Owen Griffiths describes how having students participate in a gratitude journal helps “harness positive thinking to increase grades, goals and quality of life”. The act of recording gratitude in a journal has positive effects for both students and adults with some of the outcomes including better sleep, more positive outlooks on life and greater social satisfaction. Students are better able to cope with adversity and challenge when they can identify the positives in their lives rather than dwelling on things that bring them down. Heart Mind Online suggests that gratitude can be appreciated by children as young as seven years of age. Coupling gratitude with the act of giving thanks can help benefit students both ways. Research shows that expressed gratitude positively affects both the giver and the receiver of the thanks. The observance of Thanksgiving provides us all with both a welcome holiday break and a reminder that gratitude is a gift that is always in season!
<urn:uuid:19c73cc3-87b9-4f66-b6f3-fbdd4dfea9e0>
{ "date": "2019-06-18T16:36:30", "dump": "CC-MAIN-2019-26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998808.17/warc/CC-MAIN-20190618163443-20190618185443-00016.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9563749432563782, "score": 3.796875, "token_count": 286, "url": "https://educationmatters2.school.blog/2018/10/09/gratitude-in-learning/" }
Compare book prices at 110 online bookstores worldwide for the lowest price for new & used textbooks and discount books! 1 click to get great deals on cheap books, cheap textbooks & discount college textbooks on sale.

Gottfried Wilhelm Leibniz (1646-1716) was hailed by Bertrand Russell as 'one of the supreme intellects of all time'. A towering figure in seventeenth-century philosophy, his complex thought has been championed and satirized in equal measure, most famously in Voltaire's Candide. In this outstanding introduction to his philosophy, Nicholas Jolley introduces and assesses the whole of Leibniz's philosophy. Beginning with an introduction to Leibniz's life and work, he carefully introduces the core elements of Leibniz's metaphysics: his theories of substance, identity and individuation; monads and space and time; and his important debate over the nature of space and time with Newton's champion, Samuel Clarke. He then introduces Leibniz's theories of mind, knowledge, and innate ideas, showing how Leibniz anticipated the distinction between conscious and unconscious states, before examining his theory of free will and the problem of evil. An important feature of the book is its introduction to Leibniz's moral and political philosophy, an overlooked aspect of his work. The final chapter assesses legacy and the impact of his philosophy on philosophy as a whole, particularly on the work of Immanuel Kant. Throughout, Nicholas Jolley places Leibniz in relation to some of the other great philosophers, such as Descartes, Spinoza and Locke, and discusses Leibniz's key works, such as the Monadology and Discourse on Metaphysics.
<urn:uuid:131cb9f1-2a39-42ba-a82e-8d06c6b459c6>
{ "date": "2013-12-06T19:43:06", "dump": "CC-MAIN-2013-48", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163052462/warc/CC-MAIN-20131204131732-00002-ip-10-33-133-15.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.7982609272003174, "score": 2.546875, "token_count": 1983, "url": "http://www.alldiscountbooks.net/_041528337X_i_.html" }
Experimental Anti-Cancer Drug Made From Corn Lillies Kills Brain Tumor Stem Cells August 30, 2007 A drug that shuts down a critical cell-signaling pathway in the most common and aggressive type of adult brain cancer successfully kills cancer stem cells thought to fuel tumor growth and help cancers evade drug and radiation therapy, a Johns Hopkins study shows. In a series of laboratory and animal experiments, Johns Hopkins scientists blocked the signaling system, known as Hedgehog, with an experimental compound called cyclopamine to explore the blockade’s effect on cancer stem cells that populate glioblastoma multiforme. Cyclopamine has long been known to inhibit Hedgehog signaling. They reported their findings in the journal Stem Cells published online on July 19. “Our study lends evidence to the idea that the lack of effective therapies for glioblastoma may be due to the survival of a rare population of cancer stem cells that appear immune to conventional radiation and chemotherapy,” says Charles G. Eberhart, M.D., Ph.D., associate professor of pathology, ophthalmology and oncology, who led the work. “Hedgehog inhibition kills these cancer stem cells and prevents cancer from growing and may thus develop into the first stem cell—directed therapy for glioblastoma.” Eberhart cautioned that while his study appears to prove the principle of Hedgehog blocking, much work remains before cyclopamine or any similar drug can be tested in patients. Scientists must determine whether the drug can be effectively and safely delivered to the whole body or whether it must go into the brain, and what if any adverse impact on normal stem cells the treatment might cause. “Once you’ve answered those questions in animals, the next step would be starting phase I clinical trials in humans,” Eberhart said. The new study adds to the growing evidence that only a small percentage of cancer cells — in this case stem cells — are capable of unlimited self-renewal and that these cells alone power a tumor’s growth. Eberhart focused on two pathways important to the survival of normal brain stem cells—Hedgehog and Notch—suspecting that brain cancer stem cells cannot live without them. The Hedgehog gene, first studied in fruit flies, got its name because during embryonic development, the mutated version causes flies to resemble a spiky hedgehog. The pathway plays a major role in controlling normal fetal and postnatal development, and, later in life, helping normal adult stem cells function and proliferate. The Johns Hopkins scientists first tested 19 human glioblastomas removed during surgery and frozen immediately, and found Hedgehog active in five at the time of tumor removal. They also found Hedgehog activity in four of seven glioblastoma cell lines. Next, the team used cyclopamine, chemically extracted from corn lilies that grow in the Rocky Mountains, to inhibit Hedgehog in cells lines growing on plastic or as neurospheres, round clusters of stems cells that float in liquid nutrients. This reduced tumor growth in the cell-laden plastic by 40 to 60 percent, and caused the neurospheres to fall apart without any new growth of the cell clusters. The researchers also pretreated mice with cyclopamine before injecting human glioblastoma cells into their brains, resulting in cancer cells that failed to form tumors in the mice. Other researchers have shown that radiotherapy fails to kill all cancer stem cells in glioblastomas, apparently because many of these cells can repair the DNA damage inflicted by radiation. 
The Hopkins team suggests that blocking the Hedgehog pathway with cyclopamine kills these radiation-resistant cancer stem cells. In previous laboratory experiments, Eberhart used cyclopamine to block Hedgehog using medulloblastoma cells, the most common brain cancer occurring in children. Along with childhood brain cancers, cyclopamine has shown early promise in treating skin cancer; rhabdomyosarcoma, a muscle tumor; and multiple myeloma, a cancer of the white blood cells in bone marrow. “What excites me is that we have taken things we learned about Hedgehog signaling in these relatively rare childhood brain tumors and translated them into an even more aggressive adult tumor,” Eberhart said. More than 10,000 Americans die annually from glioblastomas. Radiation is the standard therapy for the disease, and several years ago, the U.S. Food and Drug Administration approved adding the drug temozolomide to radiotherapy because the combination provided a small survival increase. “This is an incredibly difficult tumor to treat,” says first author Eli E. Bar, Ph.D., a postdoctoral fellow. “Survival for glioblastoma has not changed much in 30 years. With the addition of temozolomide, survival got bumped from 12 months to 14 or 15 months.” This study was funded by the nonprofit Brain Tumors Funders’ Collaborative, which is supported by eight private philanthropic and advocacy organizations. Additional authors are Aneeka Chaudhry, Alex Lin, Xing Fan, Karisa Schreck, William Matsui and Alessandro Olivi from Johns Hopkins; Angelo L. Vescovi of the University of Milan Bicocca in Milan, Italy; and Francesco DeMeco of the Istituto Nazionale Neurologico “Carlo Besta” in Milan.
<urn:uuid:a1e48a04-846c-4a48-83db-4b360cb30b2c>
{ "date": "2014-03-07T09:00:33", "dump": "CC-MAIN-2014-10", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999639602/warc/CC-MAIN-20140305060719-00006-ip-10-183-142-35.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.914891242980957, "score": 2.671875, "token_count": 1144, "url": "http://www.hopkinsmedicine.org/news/media/releases/experimental_anti_cancer_drug_made_from_corn_lillies_kills_brain_tumor_stem_cells" }
April 12, 2010 This Cassini movie -- the first of its kind -- shows lightning on Saturn's night side flashing in a cloud that is illuminated by light from Saturn's rings. The cloud, whose longest dimension is about 3,000 kilometers (1,900 miles), does not change perceptibly over the 16 minutes of observations covered by the 10-second movie. The lightning flashes are the bright spots within the cloud, and are about 300 kilometers in diameter. The lightning strikes last for short periods of time (less than one second before the time line of the movie was compressed). The energy output of the visible light from the lightning is comparable to the brightest lightning flashes on Earth. At Saturn, there are three types of clouds that might produce lightning. The top layer is made of ammonia ice; the middle layer is made of a compound of hydrogen sulfide and ammonia; the bottom layer is water. The light has to diffuse up through this cloud system, which is over 100 kilometers (60 miles) thick. The width of the lightning spot at the top of the cloud is proportional to the depth where the flash originated. The observed widths indicate that the lightning is originating either in the hydrogen-sulfide-ammonia cloud or in the water ice cloud. The lightning does not appear to originate at the deepest levels of the cloud system, where water is liquid. Also included here are a single still image from the movie and a three-by-three montage of nine frames showing some of the lightning flashes. This movie uses data from two Cassini instruments: the visible light cameras of the imaging science subsystem (ISS) and the radio and plasma wave science (RPWS) instrument. The movie compresses 16 minutes of narrow-angle-camera ISS images down to 10 seconds. The images show the storm cloud and its surroundings, but changes in the shape of such a large cloud over such a short time are imperceptible. The lightning flashes appear as short bursts of light within the cloud. The sound track gives synthetic lightning sounds at the times the radio signals from the lightning were recorded by the RPWS instrument. The radio signals themselves are at frequencies above the range detectable by the human ear, so the sound of thunder would not be appropriate. The imaging team instead chose electrical spark sounds to represent the radio signals. Both the ISS team and RPWS team have gaps in their observations during the 16 minutes the movie covers, so some radio signals do not have a flash to go with them and vice versa. The ISS instrument saw lightning for the first time during the August 2009 northern spring equinox. The RPWS instrument has been detecting lightning at radio wavelengths since Cassini's arrival at Saturn in 2004. Now, seeing the lightning allows scientists to pinpoint its location and measure the optical properties of the flash. Saturnian lightning has interesting differences from lightning on Earth. While Cassini has been observing, only one storm has been active at any one time, and all the storms have been at the same latitude, around 35 degrees south latitude. The storms turn on and off on a timescale of several months. The storm in this movie occurs at this same latitude even after the change of Saturnian seasons at equinox. This movie is a concatenation of nine images taken in visible light with the Cassini spacecraft narrow-angle camera on Nov. 30, 2009. This view is centered on terrain at about 35 degrees south latitude, 45 degrees west longitude. 
The view was obtained at a distance of approximately 2.6 million kilometers (1.6 million miles) from Saturn. The images were re-projected to a simple cylindrical map projection with a scale of 30 kilometers (19 miles) per pixel. The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate in Washington. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging team is based at the Space Science Institute, Boulder, Colo. The radio and plasma wave team is based at the University of Iowa, Iowa City. For more information about the Cassini-Huygens mission visit http://www.nasa.gov/cassini and http://saturn.jpl.nasa.gov. The Cassini imaging team homepage is at http://ciclops.org. The radio and plasma wave team homepage is at http://www-pw.physics.uiowa.edu/cassini/. Credit: NASA/JPL-Caltech/SSI/University of Iowa
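As a rough sanity check on the scales quoted in the caption, the physical sizes can be converted into pixel sizes on the re-projected map, which is stated to be 30 kilometers per pixel. The snippet below is only that back-of-envelope arithmetic, using the figures given above.

```python
# Back-of-envelope conversion from physical size to map pixels (30 km per pixel, per the caption).
KM_PER_PIXEL = 30.0

def size_in_pixels(size_km: float) -> float:
    return size_km / KM_PER_PIXEL

print(f"lightning flash (~300 km across):  ~{size_in_pixels(300):.0f} pixels")
print(f"storm cloud (~3,000 km long axis): ~{size_in_pixels(3000):.0f} pixels")
```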
<urn:uuid:871cc585-afe0-4b11-8669-c348fbae1f67>
{ "date": "2017-01-23T04:23:43", "dump": "CC-MAIN-2017-04", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00550-ip-10-171-10-70.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9168620109558105, "score": 3.515625, "token_count": 969, "url": "https://saturn.jpl.nasa.gov/resources/710/" }
The Energy Department on February 7 announced that it is offering up to $7 million in funding to advance the design of technologies that will help communities become more adaptive and prepared for power outages caused by severe weather and other events. Microgrids are localized grids that are normally connected to the more traditional electric grid but can disconnect to operate autonomously, manage and control the flow of electricity and help mitigate grid disturbances. Microgrids also have the ability to cost-effectively integrate storage and distributed generation such as renewable energy, while also supporting demand management programs. The Microgrid Research, Development, and System Design funding opportunity targets teams of communities, technology developers and providers, and utilities to develop advanced microgrid controllers and system designs that will help communities take an innovative and comprehensive approach to microgrid design and implementation. Each applicant will be required to work with an entity or community to design microgrid systems of up to 10 megawatts, which is enough to power a small community. Additionally, applicants will be encouraged to design systems that protect critical infrastructure such as hospitals and water treatment plants. See the Energy Department news release.
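To make the announcement's headline requirements concrete, a design team might start by encoding them as a simple checklist. The sketch below is purely illustrative: the type, field names and pass/fail logic are assumptions made for this example, not the Energy Department's actual evaluation criteria for the funding opportunity.

```python
# Toy illustration only - not the Energy Department's real review criteria.
# It encodes the constraints mentioned in the announcement: designs of up to
# 10 MW, the ability to disconnect ("island") from the main grid, and coverage
# of critical infrastructure such as hospitals and water treatment plants.
from dataclasses import dataclass, field

@dataclass
class MicrogridDesign:
    name: str
    capacity_mw: float
    can_island: bool                      # can disconnect and run autonomously
    critical_loads: list = field(default_factory=list)

def headline_check(design: MicrogridDesign) -> list:
    """Return a list of problems; an empty list means the sketch passes."""
    problems = []
    if design.capacity_mw > 10:
        problems.append("capacity exceeds the 10 MW scale described in the announcement")
    if not design.can_island:
        problems.append("design cannot operate autonomously when the main grid goes down")
    if not design.critical_loads:
        problems.append("no critical infrastructure (e.g. hospital, water treatment) is covered")
    return problems

example = MicrogridDesign("Riverside community microgrid", 8.5, True,
                          ["hospital", "water treatment plant"])
print(headline_check(example) or "passes the headline checks")
```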
<urn:uuid:ea8bfd56-25ed-4742-8b23-33ca861a4cfd>
{ "date": "2016-07-31T05:50:13", "dump": "CC-MAIN-2016-30", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469258950570.93/warc/CC-MAIN-20160723072910-00037-ip-10-185-27-174.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9547251462936401, "score": 2.703125, "token_count": 222, "url": "http://www.energy.gov/eere/amo/articles/energy-department-offers-funding-improve-electric-grid" }
"The evangelical church is steadily becoming a visible presence in Mexican society." For nearly 400 years, the evangelical faith and the study of the Bible were prohibited here. As early as the midsixteenth century, Lutherans who had come with the Spanish conquerors suffered persecution, and the Holy Inquisition was in force in Mexico longer than in many other countries. Eventually, under President Benito Juarez (1806-72), a growing reaction to the Catholic church's power led the government to enact anticlerical legislation, which remained in force until this decade and declared the following restrictions: (1) No church could legally own property; (2) foreigners could not serve as priests or pastors; (3) worship services should be held exclusively in temples or churches, not in public buildings; (4) clergy could not directly or indirectly criticize government authorities; (5) clergy could not vote or participate in politics; (6) mass media should not be used to promote religion; and (7) government leaders supposedly should never participate in religious ceremonies. But in the early 1990s, President Carlos Salinas de Gortari succeeded in reforming the Constitution. As a result, any religious association may now bring in foreign missionaries or pastors provided they are officially affiliated with the church they serve, have their financial support guaranteed for the duration of their service, and fulfill the requirements of the laws of immigration (which are liberally applied). Compare this to the time when foreign missionaries ministered for decades by returning as "tourists." Also, churches now can hold evangelistic campaigns or healing services in public places. Recently, for example, an evangelical group conducted a ...1
<urn:uuid:fcd21a47-5eb5-465f-8065-1b0c2d96008e>
{ "date": "2017-07-23T05:15:21", "dump": "CC-MAIN-2017-30", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424247.30/warc/CC-MAIN-20170723042657-20170723062657-00016.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9637051820755005, "score": 3.140625, "token_count": 335, "url": "http://www.christianitytoday.com/ct/1998/november16/8td072.html" }
Alexandra Levy: All right, we’re here on December 28, 2012 with Lawrence Litz. First please say your name and spell it. Lawrence Litz: L-A-W – it’s Lawrence Litz, L-A-W-R-E-N-C-E, L-I-T-Z. Levy: So what was it like working in the war on the Manhattan Project? Litz: It was very exciting and I felt that I was doing something worthwhile. Levy: You went to the University of Chicago. What did you study? Litz: I majored in chemistry and minored in mathematics. I also did a lot of physics. Levy: When did you graduate? And did you get married soon afterwards? Litz: I graduated on June 12, 1942 and got married the next day. Levy: What did you do after you graduated? Litz: I went to work in Florence, Alabama in an ammonia plant, which is—and the ammonia is used to make explosives for the war effort back then. Levy: Did you go back to Chicago? Litz: Yes, I wanted to look for another job and they were interested in me because I already had some industrial experience. I left because we had shifted the lab from twelve men and one woman to twelve women and one man, and the man was me. And I was studying the effect of nuclear radiation on materials having to do with the nuclear reactors, which were being built at Hanford, Washington. Then I went to Los Alamos. Levy: What did you do at—or going from Chicago to Los Alamos, was that a strange experience because of the geography? Litz: Very different, particularly because Los Alamos is at seven thousand feet elevation and Chicago was at sea level, so I learned to breathe properly. Levy: So what kind of work did you do at Los Alamos? Litz: I worked on technology involving plutonium and it was high vacuum technique such that we had put the plutonium metal—the plutonium chloride as a salt and reactor was magnesium metal to make magnesium chloride which then could be boiled off and left—and then we were left with a little button of plutonium metal about an eighth of an inch in diameter. This was our first creation of the plutonium metal in the world; so we were the first ones to see it. Levy: So you were the first person to see metallic plutonium? Levy: Who was the second person? Litz: My wife. My wife was working in an adjacent lab and I invited her into my lab to look through the telescope at the reactor which produced the plutonium metal. Levy: And what was the—what is the crucible? Litz: Well the crucible is—kind of a metal crucible, but it is actually cerium monosulfide, which allowed us to melt the plutonium without contaminating it and we—so we had a high vacuum system and I don’t know whether you can see the—this is the—one of the technical pieces for high vacuum system. Levy: And so what—that’s one of the pieces? Litz: Yeah, used to create a high vacuum. Levy: What is it made out of? Litz: This is brass. The crucible is cerium monosulfide. Levy: So as you had to—as more Plutonium was needed, did you have to find new methods? Litz: Well yes, because plutonium as a solid exists in six different crystalline phases between room temperature and its melting point of about 800 degrees centigrade. And of course if you have something that’s changing crystal phase, it changes shape. And the problem—you can’t make a bomb out of something which is changing temperature—which is changing shape as it got hotter. So we learned that we could alloy it with a small amount of gadolinium and then that would stabilize a single phase and then we could cast it in different forms. Levy: Was that well known or was that secret at the time? 
Litz: It was not well known because plutonium was not known to exist even until we made it in our research lab. Levy: So the gadolinium mixture, that was top secret? Levy: So what was the bomb that you helped develop? Litz: Well, in order to make the bomb, we had to make the plutonium metal in a sphere. And in order to get a sphere we had to cast a—each half of the sphere, and then the sphere would be about eight inches diameter. We put something in the middle of this eight diameter sphere which would release neutrons and would cause the plutonium to undergo fission and release very large amounts of energy. Levy: So did you help cast the plutonium sphere? Litz: Yes. So this was all done in my vacuum to keep the things pure. Levy: Was that dangerous work or was it fairly safe? Litz: Well in a sense it was dangerous because we were working with something that was radioactive. Levy: So did you know that it was for a bomb? Litz: Yes, definitely we were aware. We knew what we were about to do and realized that it was something that was important to the military. Levy: When did you find out that the bomb had been dropped on Japan? Did you find out from the newspaper? Litz: No, actually we had correspondence from the people who actually dropped the bomb because we were—we had to be prepared to make more bombs as necessary. Levy: How did you feel when you found out the bomb had been dropped? Litz: Well we thought that we were going to make the war come to an end, which we did, and that was a very happy feeling. Levy: So why was the bomb important? Litz: Well, because it was a simple way to stop the fighting. Levy: And was that important for you personally? Litz: It was certainly important personally, but also from the scientific point of view, it was a new technology. Levy: Did you have any other relatives in the Army? Litz: My brother was in the Army. Levy: And would he have been involved in the invasion of Japan had it happened? Litz: Yes, I’m sure that had the war stopped—not stopped, then he would have been in the invasion. Levy: And you mentioned that Oppenheimer had spoken to you the day before—or the day the bomb was being dropped or being shipped to Tinian. Do you remember what he said? Litz: Yeah, Oppenheimer actually spoke to us for about two hours on the day before the bomb was shipped to Tinian Island near Japan where it was set to be launched, dropped on Hiroshima. And he stressed that hopefully it would end the war. Levy: Great. After the war, what—did you stay on with the Manhattan project? Litz: Only for about another month, and then I went to graduate school at Ohio State. Levy: How did you react to working in the war? Litz: Well as a scientist, I was happy to do anything in which I had knowledge. And most of my science has to do with chemistry and water and types of solutions. I had very little training in metallurgy, but the fact that I can build high vacuums, I was the right guy to take on this project. Levy: How old were you? Litz: I was only 22 years old then. Levy: Were you one of the younger people? Litz: Yes, definitely. Levy: So how did you get to Los Alamos? Or did you know—when you went did you know what was being done there? Litz: Well, I actually didn’t know. The secrecy was so high we knew only that they were working on radioactive materials. We—I didn’t even know where Los Alamos was. And the people—when we had to go, they just bought the railroad tickets for myself and my wife and took our little puppy with us on the Sante Fe Chief. 
Levy: So did you end up then going to Lamy, New Mexico? Levy: Which is the closest stop. Litz: Right, the train—the closest stop for the train was Lamy, New Mexico, and it’s about eighteen miles south of Sante Fe. We got off the train in Lamy and then we had to wait for the people to take us. And so we had to just sit and wait on the platform, my wife and I and our little puppy. Levy: Was Lamy very big? Litz: No, Lamy was small. It was sort of built around a train station. About half a block away there was a parking lot in which there were few military vehicles. After we’d been sitting there at the train stop for about half hour one of the men got out and asked if we—if I had been at the Met Lab in Chicago because he recognized me. Then he asked me if I was going to the Hill. I didn’t know but said, “I guess so.” And they drove us to the center of Sante Fe and then we got into another car, which took us to Los Alamos. When we got to Los Alamos they said, “You weren’t supposed to bring your wife with you,” but they should have known since the people in Chicago had bought a ticket for her as well as for me. But since they weren’t expecting a couple they only had a small apartment available for us, but they managed to get us a couple of beds and several lamps so we could at least have a place to sleep that night. Levy: So what was your early work at Los Alamos like? Litz: Well for about two months I worked on the water boiler, a nuclear reactor surrounded with water, which would deflect neutrons back into the material. Then they transferred me over to the Met Lab. Levy: So as a young scientist, how did you feel about working at Los Alamos? Litz: As a young scientist I was really interested in doing anything which would take advantage of my skills. Levy: What did your superiors think about your work there? Litz: They thought I could do almost anything. Levy: So what was a typical day like at Los Alamos? Litz: Well typically we would start working in the lab at about eight in the morning, take a break for lunch—and then Evelyne and I would break together and then go home and have lunch and then we would come back to the laboratory and work until about five. Levy: Did you work every day, even on the weekends? If you don’t remember that’s fine. Litz: I don’t. Levy: What was—how did—do you remember working on casting the plutonium for the third bomb? Litz: The particular day that remembers—that remains in my memory was the day that we cast the plutonium for the third bomb because we weren’t sure that the Japanese would surrender even after the second bomb was dropped. We had to cast the atmospheres for the third, and because time was short we had to cast the two hemispheres at the same time. But it was dangerous to cast them in the same laboratory at the same time so we set up two adjacent laboratories with the high vacuum apparatus and the—so we could cast one hemisphere in each one of the two labs. Levy: How long did that take to cast? Litz: About twenty-four hours and we had to work straight through. Levy: So what did you do when you didn’t need to use the plutonium for a third bomb? Litz: Well after we found that we didn’t need to use the third bomb we decided to use the hemispheres for research. We designed an array, which took the neutrons that came out of the sphere back into the sphere and would keep the neutron radiation away from the scientist who is doing the work experiment. 
Now one of the men who was working on the experiment accidentally bumped the array, exposing himself to the radiation, and died two weeks later from the radiation. Levy: Do you remember—was that man’s name Louis Slotin? Litz: I don’t recall. Levy: Okay. Were you ever worried about being exposed to toxic materials? Litz: Well of course we were worried about it but that was the job to be done. We always tried to be safe. I worked with rubber gloves and a dry box but I also had to clean up the high vacuum system every three to four weeks, so by the end of the war I end up with what was termed at that time to be exposure to the maximum tolerable dose of radiation. Levy: And how long does plutonium—how long is the half-life? Litz: Plutonium half-life is five thousand years, so it will be around long after I’m gone. Levy: So what was life like in Los Alamos? Did you make a lot of friends? Litz: Yes, we made a lot of very good friends, one of whom I remember was [Richard] Feynman, was one of our neighbors. Levy: Do you have any funny stories about Feynman, or do you remember any conversations you had with him? Litz: Not particularly at this stage. Levy: Okay. Did you keep in touch with any of the other scientists? Litz: We were not allowed to correspond with other scientists at all. In late 1944 we got permission to visit our family in Chicago and I was told that I was not allowed to say anything about what I was working on and I knew that I was being followed by G-men to make sure I didn’t say anything. Levy: Who were G-men? Litz: These were, I guess, engineers who took care of the secrecy information. Levy: So how much longer was the information on the bomb classified for? Litz: Well it was partially declassified about four to five years after the bomb was dropped, but most—but the fine details were not declassified until about 2005. Levy: What can you tell me about J. Robert Oppenheimer? Litz: He was a very positive, encouraging man and very caring about our work. He made sure that we had a complete understanding of the importance of what we were doing. That undoubtedly it would kill many people but this loss of life would end up saving millions of lives. Levy: So what did he tell you in the two hours on the day the bomb was dropped? Litz: Well he told us exactly where the bomb was going to be in Japan, and even though we’re not pleased with the moral aspects of killing people that we need to understand how many lives it would save. Levy: Did you keep in touch with Oppenheimer after the war? Litz: No, I didn’t correspond with him after I left because much of my work, which was with silene and fuel cells, were not of common interest to what I—to what he was doing. Levy: What did you think about during the Cold War when he was accused of being a Communist? Litz: I was very disturbed when he was accused of being a Communist. There was never any indication during the war that he was anything but an extremely loyal American citizen. And as a consequence of those hearings that they had he was really destroyed emotionally. Anyone who knew him during the war had to be extremely disturbed. Levy: So Oppenheimer was someone you admired, then? Levy: So what did—what kind of work did you do after the war? Litz: I went—you have some— Levy: Did you have—did you receive any awards after the war? Litz: I was given an award as a pioneer in the semiconductor field. Levy: Great. So are you proud of your work on the Manhattan Project? Litz: Very definitely. Yes. Levy: Great. 
Do you have any other stories you’d like to share with your work on the Manhattan Project or your work as a scientist? Litz: Well, I was a very diffuse scientist. I worked with many, many different projects, different technical areas, and I actually have forty-two patents in the areas that I worked on. Levy: Wow, that’s terrific. What kinds of patents are they for? Semiconductors? Litz: Some were semiconductors. I don’t remember all the technicals. Levy: That’s great. So you really enjoyed working as a scientist? Litz: I definitely did. Levy: What did you enjoy the most? Litz: I guess accomplishing things, being able to solve problems. Levy: Did you have a favorite project you worked on? Litz: Not a particular one. I just enjoyed whatever I was doing. Levy: Do you think your work on the Manhattan Project helped you after the war in terms of overcoming challenges and research and teaching you about many things? Litz: Well the sense that working on any project helped—helps you broaden your capabilities, and I think in that context it was important. Levy: Was the Manhattan Project one of the most difficult projects you worked on? Litz: I think it was; yes. Levy: Why do you think that? Litz: Just sort of hazily recollecting one versus the other. Levy: Do you have anything else you’d like to share? Litz: No, I think that’s it. Levy: Great. Well you were terrific, thank you so much. Litz: My pleasure, thank you.
<urn:uuid:4293235f-330b-4668-964d-95640e95c177>
{ "date": "2014-10-25T14:28:05", "dump": "CC-MAIN-2014-42", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119648297.22/warc/CC-MAIN-20141024030048-00196-ip-10-16-133-185.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9867331981658936, "score": 2.53125, "token_count": 3861, "url": "http://www.manhattanprojectvoices.org/oral-histories/lawrence-litzs-interview-2012" }
from The Century Dictionary.
- noun A thermometer in which air is used instead of mercury.
Example usages:
- A new air-thermometer has been invented by M. Pouillet, for the purpose of measuring degrees of heat in very high temperatures; an object hitherto of very difficult attainment.
- The apparatus is, in a word, a large air-thermometer, inside the bulb of which the subject is sitting.
- Put another kind of test of heat beyond it and it appears; coat the air-thermometer with a bit of black cloth, and that will absorb heat and reveal it.
- Watch your air-thermometer, on which the beam of heat is pouring, for the result.
- While incompetent to produce the faintest glimmer of light, or to affect the most delicate air-thermometer, they will inflame paper, burn up wood, and even ignite combustible metals.
<urn:uuid:d6639d66-a70c-480b-b06a-0b62933ab0c6>
{ "date": "2019-11-13T02:34:26", "dump": "CC-MAIN-2019-47", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665976.26/warc/CC-MAIN-20191113012959-20191113040959-00336.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8885583877563477, "score": 2.953125, "token_count": 197, "url": "https://www.wordnik.com/words/air-thermometer" }
Most pawed animals have ten fingers. One of the main exceptions is the little mole: It has an extra “thumb”, which it rests upon while digging and thus increases the size of its digging apparatus. Polydactyly – the presence of supernumerary fingers – is a phenomenon that has already been observed in various land animals as far back as the Devonian period and is also fairly common in humans, dogs and cats. Land vertebrates appear to possess a silent developmental program for polydactyly, which is only activated under certain conditions. In moles, however, polydactyly is the norm, which means the program is constantly activated during embryogenesis. An international team of researchers headed by Marcelo Sánchez-Villagra, a professor of paleontology at the University of Zurich, has studied the molecular-genetic origin and development of the extra thumb in moles. As the scientists reveal in their recent article published in the journal Biology Letters, the additional thumb develops later and differently during embryogenesis than the real fingers. The studies were funded by the Swiss National Science Foundation.
Unlike the other fingers on the mole’s hand, the extra thumb does not have moving joints. Instead, it consists of a single, sickle-shaped bone. Using molecular markers, the researchers can now show for the first time that it develops later than the real fingers from a transformed sesamoid bone in the wrist. In shrews, however, the mole’s closest relative, the extra thumb is lacking, which confirms the researchers’ discovery.
Male hormones linked to polydactyly
The researchers see a connection between the species-specific formation of the extra thumb in the mole and the peculiar “male” genital apparatus of female moles. In many mole species, the females have masculinized genitals and so-called “ovotestes”, i.e. gonads with testicular and ovary tissue instead of normal ovaries. Androgenic steroids are known to influence bone growth, transformation and changes, as well as the transformation of tendons in joints. A high level of maternal testosterone is also thought to be one of the causes of polydactyly in humans.
Christian Mitgutsch, Michael K. Richardson, Rafael Jiménez, José E. Martin, Peter Kondrashov, Merijn A. G. de Bakker, Marcelo R. Sánchez-Villagra: "Circumventing the polydactyly 'constraint': The mole's 'thumb'." Biology Letters (The Royal Society), 2011, doi: 10.1098/rsbl.2011.0494
<urn:uuid:4f62b851-5b50-419d-b7d2-735951fa4a56>
{ "date": "2018-01-17T12:44:16", "dump": "CC-MAIN-2018-05", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886939.10/warc/CC-MAIN-20180117122304-20180117142304-00056.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9203441739082336, "score": 3.421875, "token_count": 563, "url": "http://www.media.uzh.ch/en/Press-Releases/archive/2011/maulwurf.html" }
For those who do not follow British politics closely, the following articles provide some historical perspective on the UK's ambivalent relationship with the EU and the impetus for holding a referendum on EU membership. The following resources summarize the process of withdrawing from the EU, the UK's options for retaining access to the EU's single market, and the constitutional implications for the UK's future as a unified state. Source: Brythones via Wikimedia Commons CC BY-SA 3.0 License Areas that voted to leave the EU are shown in shades of blue. Areas that voted to remain are shown in shades of yellow and gold. The map reflects the high degree of polarization between areas that voted strongly to leave (the Thames estuary, the English Midlands, and economically struggling, post-industrial regions) and those that voted strongly to remain (London, Brighton, Bristol, Cambridge, Cardiff, Liverpool, Manchester, Oxford, Reading, York, and other prosperous urban centers, as well as Scotland and Northern Ireland).
<urn:uuid:a5aa787e-11ef-46fc-86a8-130040ea89ee>
{ "date": "2018-03-19T03:30:35", "dump": "CC-MAIN-2018-13", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646213.26/warc/CC-MAIN-20180319023123-20180319043123-00656.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9323055744171143, "score": 2.65625, "token_count": 203, "url": "http://guides.ll.georgetown.edu/c.php?g=365741&p=3814531" }
This book is an examination of the island of St Helena’s involvement in slave trade abolition. After the establishment of a British Vice-Admiralty court there in 1840, this tiny and remote South Atlantic colony became the hub of naval activity in the region. It served as a base for the Royal Navy’s West Africa Squadron, and as such became the principal receiving depot for intercepted slave ships and their human cargo. During the middle decades of the nineteenth century over 25,000 ‘recaptive’ or ‘liberated’ Africans were landed at the island. Here, in embryonic refugee camps, these former slaves lived and died, genuine freedom still a distant prospect. This book provides an account and evaluation of this episode. It begins by charting the political contexts which drew St Helena into the fray of abolition, and considers how its involvement, at times, came to occupy those at the highest levels of British politics. In the main, however, it focuses on St Helena itself, and examines how matters played out on the ground. The study utilises documentary sources (many previously untouched) which tell the stories of those whose lives became bound up in the compass of anti-slavery, far from London and long after the Abolition Act of 1807. It puts the Black experience at the foreground, aiming to bring a voice to a forgotten people, many of whom died in limbo, in a place that was physically and conceptually between freedom and slavery.
<urn:uuid:49209ddf-427c-4d41-aab3-11704b055035>
{ "date": "2019-03-19T00:32:25", "dump": "CC-MAIN-2019-13", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201812.2/warc/CC-MAIN-20190318232014-20190319014014-00256.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9756608009338379, "score": 3.140625, "token_count": 300, "url": "https://www.liverpooluniversitypress.co.uk/books/isbn/9781781382837/" }
The fovea is the central two degrees of vision in which information is processed during a fixation. The parafovea is the five degrees of vision on either side of the fovea, and the periphery is the remaining vision on either side of the parafovea. The majority of information processing happens in the fovea; only low-level information can be gleaned from the parafovea and the periphery.
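To get a feel for what those angles mean on the page, here is a rough back-of-the-envelope sketch. The 40 cm viewing distance is an assumed, typical reading distance (it is not stated in the post above), and the flat-page geometry is a simplification:

```python
import math

def span_on_page_mm(visual_angle_degrees, viewing_distance_mm=400):
    # Linear extent on the page subtended by a given visual angle,
    # assuming a flat page held at the given viewing distance (~40 cm).
    half_angle = math.radians(visual_angle_degrees / 2)
    return 2 * viewing_distance_mm * math.tan(half_angle)

print(round(span_on_page_mm(2)))           # fovea: central 2 degrees -> about 14 mm
print(round(span_on_page_mm(2 + 2 * 5)))   # fovea plus 5 degrees either side -> about 84 mm
```

At ordinary text sizes that central 14 mm or so covers only a handful of characters, which is why detailed letter recognition is confined to the fovea while the parafovea and periphery supply only coarse cues such as word length.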
<urn:uuid:c0acd7c7-4ad5-4824-9dbe-f56e16abff0e>
{ "date": "2014-03-11T13:52:40", "dump": "CC-MAIN-2014-10", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011205602/warc/CC-MAIN-20140305092005-00006-ip-10-183-142-35.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9179556369781494, "score": 3.109375, "token_count": 106, "url": "http://typophile.com/node/39291" }
September 16, 2010
Serengeti Threatened By Proposed Highway
The Tanzanian government's plan to build a 31-mile highway into the Serengeti would devastate one of the planet's last great wildlife sanctuaries, biologists warned Wednesday. Twenty-seven biodiversity experts, in a commentary published in the journal Nature, said "the road will cause an environmental disaster." The experts urged Tanzanian officials to use an alternate route that runs further south of the Serengeti. The alternate route would be around 155 miles farther south, below the Ngorongoro Conservation Area. The planned road cuts right through the migratory route more than a million wildebeest use annually, part of one of the last great mass journeys of animals on Earth, they said. Wildebeest play an important role in the fragile ecosystem, maintaining the vitality of the grasslands and sustaining threatened predators such as lions and cheetahs. Simulations suggest that "if wildebeest access to the Mara river in Kenya is blocked, the population will fall to less than 300,000," said the experts. "This would lead to more grass fires, which would further diminish the quality of grazing by volatilizing minerals, and the ecosystem could flip into being a source of atmospheric CO2," they added. Government officials have had the idea of linking coastal Tanzania to Lake Victoria and Uganda, Rwanda, Burundi and the Democratic Republic of Congo for nearly twenty years. In other parks around the world, fences and roads along migratory routes have caused a collapse in the ecosystem, scientists said. Increasing foreign interest in exploring the natural wealth of central Africa has fueled the government's interest in prioritizing the highway.
<urn:uuid:b6fa63e0-a929-4ee7-9db6-a03b29bc6856>
{ "date": "2017-10-23T08:29:18", "dump": "CC-MAIN-2017-43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825812.89/warc/CC-MAIN-20171023073607-20171023093607-00356.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9437393546104431, "score": 2.84375, "token_count": 367, "url": "http://www.redorbit.com/news/science/1918104/serengeti_threatened_by_proposed_highway/" }
Large bells such as this were common in the Edo period to mark time for communities. They were often paid for by collecting coins from parishes and locales, and then melted down for the metal. These bells are clapper-less and were struck with a large wooden beam. With the introduction of Western clocks into Japan, fewer large bells, like this one, were needed. Modernity also called for replacing the traditional calendar based on the zodiac with a January to December year. Bells continued to be made, but their use was more commemorative and ceremonial than practical.
<urn:uuid:fefc08ac-d790-4b6a-8727-f55082115f2d>
{ "date": "2016-09-27T08:41:11", "dump": "CC-MAIN-2016-40", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660996.20/warc/CC-MAIN-20160924173740-00182-ip-10-143-35-109.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9956056475639343, "score": 3.546875, "token_count": 117, "url": "http://crowcollection.org/gallery-item/bell/" }
All you need for NCEA Level 2 Chemistry
Beginning Chemistry Workbook is your complete package for NCEA Level 2 Chemistry. This fun and easy-to-read workbook features concise notes that are clearly written, with the important ideas highlighted and a wealth of diagrams and photos illustrating the concepts – Chemistry has never been easier to learn!
Supporting digital resources are available online.
Developed by Anne Wignall and Terry Wales, and newly updated and expanded by Rachel Heeney and Gina O’Sullivan, Beginning Chemistry Workbook provides the reading and practice New Zealand students need to succeed. A new edition of Continuing Chemistry is also planned for late 2017.
<urn:uuid:951a1127-1b4b-4c6c-b579-1c0b5201de41>
{ "date": "2017-04-28T08:07:38", "dump": "CC-MAIN-2017-17", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122886.86/warc/CC-MAIN-20170423031202-00295-ip-10-145-167-34.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8904787302017212, "score": 2.984375, "token_count": 143, "url": "http://www.edify.co.nz/shop/secondary/9780947496371" }
Peddling Fear of an Icelandic Volcanic Eruption
- By Erik Klemetti
- April 12, 2012
As we approach the second anniversary of the Eyjafjallajökull eruption that created air travel havoc across Europe, I suppose it comes as no surprise that the news media has decided to mark the anniversary with fear. I’ve seen a flurry of articles come out over the past few days all pushing the idea that a new eruption in Iceland, bigger and badder than Eyjafjallajökull, is around the corner, waiting to mug you and steal your wallet. Let’s take a quick tour of some of the headlines, shall we?
- “Fears of a huge new eruption in Iceland” – Daily News India
- “Signs point to imminent major eruption in Iceland” – Vancouver Sun
- “Huge Iceland Volcano Showing Activity” – UPI
- “Iceland volcano: And you thought the last eruption was bad” – Telegraph
Really pitching the soft sell, aren’t they? And guess what? Almost every one of these articles focuses on the Big Bad Wolf of Iceland, Katla. Sure, other volcanoes also show signs of activity (see Askja or even the 2011 eruption of Grimsvötn), but Katla is the media darling. Katla has definitely had large eruptions in the past, but it isn’t even the standard for large eruptions in Iceland (I think the Laki eruption might have something to say about that). However, Katla is (a) near Eyjafjallajökull; (b) hasn’t erupted in a long time; and (c) easier to pronounce. Now listen, Iceland is a very geologically active place. It sits on the Mid-Atlantic Ridge, where new oceanic crust is born, pushing North America and Europe further apart. It also sits on top of a mantle plume, where hot, buoyant mantle material rises and melts as it decompresses. Both of these factors mean that Iceland has a lot of volcanic activity. It also means many of the volcanoes will appear “restless” as magma moves in conduits under the volcano, sometimes at depths of 30 or more kilometers below the surface – and although magma is moving, it doesn’t mean an eruption is going to happen next week. Volcanoes are dynamic features that are always responding to new intrusions of magma, but remember this key fact: volcanoes spend much more time not erupting than erupting. This key idea is what makes volcano monitoring such a challenge – we can see the signs of activity, like earthquakes, degassing, warming of the Earth’s surface, steam explosions, deformation, but deciding that volcano X will erupt on a specific date far in the future is just not possible. Sure, we can say the probability is higher that a volcano will erupt if it shows some of these signs, but really, for any active volcano, for each day that passes, we are closer to its next eruption (whenever that might be). Katla will erupt again, but do we need to rehash the fear of total Airtravelopocalypse each time it hiccups? I sure hope not. The two things we really don’t know about the next eruption of Katla: (1) when it is going to happen and (2) how big it will be. Without this knowledge, all this wailing and gnashing of teeth is for one reason only – to get people to read your article. There is no scientific basis for you to be any more afraid of Katla now than at any time – and even if the signs of activity increase, the fear shouldn’t come with it. As an example, the 2011 eruption of Grimsvötn was, in many ways, larger than the Eyjafjallajökull eruption – taller plume, higher rate of eruption (initially) – but it did not cause anywhere close to the chaos that Eyjafjallajökull caused in European/North American air traffic.
What I’m trying to get across is this: Every eruption in Iceland is not doom. Every rumble of a volcano is not a sign of a “huge new eruption”. We live on a geologically active planet and significant geologic events are going to happen (just look at the earthquakes in Indonesia and Mexico yesterday). However, living in fear of that big eruption or that big earthquake isn’t going to help us be prepared for the next one. Image: Aqua image of Eyjafjallajokull erupting on May 8, 2010. Image by NASA/GSFC/Jeff Schmaltz/MODIS Land Rapid Response Team
<urn:uuid:d6eeb689-ac78-423b-b9c3-4f879a84c8d5>
{ "date": "2013-12-07T10:56:07", "dump": "CC-MAIN-2013-48", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163054000/warc/CC-MAIN-20131204131734-00002-ip-10-33-133-15.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9409902095794678, "score": 2.59375, "token_count": 1010, "url": "http://www.chromographicsinstitute.com/tag/the-media/" }
The Australian Defence Force is set to spend more than $5 million to develop battlefield robots which have a sense of morality. It's commissioned the University of New South Wales in Canberra and the University of Queensland to find ways to make machines behave in an ethical way in wartime. The universities will spend an additional $3.5 million in what would be world-leading research. Robots are increasingly being used by the military but there is a fear that, because they have no sense of right and wrong, they might commit atrocities when left to their own devices. The idea is to identify what humans believe to be right and wrong and then program that into machines so they behave in the way humans would want them to. If a human would not shoot at schoolchildren crossing a market square, the researchers want to find a way of getting robot soldiers to hold their fire in the same way. It might be possible, for example, to teach killer robots to identify a Red Cross symbol on a vehicle and decide not to shoot at it. A team of ethicists and engineers is being assembled at UNSW's Canberra campus to work out technical ways of embedding human morality into machines. The work will involve surveying members of the public and the military to see what they think is acceptable behaviour. The work is being led by Dr Jai Galliott, who has a background both as a philosopher and as a military man in the Royal Australian Navy. The money is being channelled through Australia's Defence Cooperative Research Centre. Many ethical dilemmas tax philosophers when they think about war, particularly the question of how many collateral deaths may be acceptable to destroy an important military target. Dr Galliott cited a case where two NATO rockets hit a train packed with civilians as it crossed a targeted bridge in Serbia in 1999. The rockets had no sense of right or wrong, and so didn't abort the attack with the sudden appearance of civilians on the target - even if the technology had allowed them to do so. Dr Galliott said there might be a way to program the missiles of the future to recognise large, moving civilian objects and not hit them if they suddenly come into view. There are two parts to the problem: working out what is right and wrong on a battlefield and, secondly, finding ways of putting that into machines. Accordingly, the work will involve philosophers as well as computer coders and engineers. The engineers would develop technologies like pattern recognition, so that war robots could recognise shapes and movements to better identify targets and non-targets. The other side is the human element. "The idea is to figure out when a human would say 'stop', and build that into the system," Dr Galliott said. As artificial intelligence develops, there have been increasingly loud concerns from some of the world's leading scientists about its potential implications. Might a machine become so intelligent it could override its human designer? In the past, this was a question for the world of science fiction. Think of the movie Robocop, in which a company devlops a heavily armed robot police officer which (spoiler alert) turns on its board of directors in the final scene. That world is now much nearer. There are already robot sentries on the border between North and South Korea, for example. Their full automation has been turned off, according to the South Korean government, to prevent them hitting innocent, non-threatening people. Their guns can only be triggered by human soldiers. 
But there are many "lethal autonomous weapons" which can independently search and engage targets - albeit, usually, with a human pulling the trigger (whether on a battlefield or from a monitor in, for example, Nevada). As technology moves, the human element may become less necessary. Robots are becoming more autonomous - more intelligent. The task of the researchers is to program in more constraints to stop tragedies happening. The Australian Defence Force is now at the forefront of developing that technology.
<urn:uuid:b37b661a-3b1f-4fba-a26b-602c1493ab0e>
{ "date": "2019-03-23T19:32:57", "dump": "CC-MAIN-2019-13", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202924.93/warc/CC-MAIN-20190323181713-20190323203713-00456.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9586387872695923, "score": 2.890625, "token_count": 810, "url": "https://www.smh.com.au/technology/the-defence-force-is-developing-killer-robots-with-hearts-of-gold-20190228-p510xj.html?utm_campaign=Andy%20Abramson&utm_medium=email&utm_source=Revue%20newsletter" }
Science of Parenting: Children and Pets AMES, Iowa — When the kids start clamoring for a kitten or a dog or a hamster, how should parents respond? During April, family life specialists Donna Donald and Lori Hayungs talk about when and whether children should have pets in the Science of Parenting blog from Iowa State University Extension and Outreach. “Children and animals seem like a perfect match. Many of us adults remember the special bonds we had with pets when we were children,” Donald said. “It’s hard to resist the pleas of our kids when it comes to the adorable kittens and puppies and other little critters. But we also realize it is a major commitment to bring a pet into our home.” During April Donald and Hayungs will address the big question: Is a pet worth the effort? “We’ll share research in human-animal interaction and talk about how to determine the right age for children to have pets, how to make choices about pets and what pets can teach our children,” Hayungs said. Through the Science of Parenting, www.scienceofparenting.org, ISU Extension and Outreach specialists share and discuss research-based information and resources to help parents rear their children. Parents can join in the conversation and share thoughts and experiences, as well as how they handle parenting responsibilities.
<urn:uuid:ddcf0741-bbc2-414b-ae0b-1bc49640ecc7>
{ "date": "2016-02-12T18:40:39", "dump": "CC-MAIN-2016-07", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701165070.40/warc/CC-MAIN-20160205193925-00230-ip-10-236-182-209.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9339199066162109, "score": 2.953125, "token_count": 323, "url": "http://www.extension.iastate.edu/story/node/17040" }
Get ready for back to school with these free printable daily calendars, perfect to use alone or in a daily calendar notebook. These daily calendars are specifically made for Kindergarten-age children who are learning the days of the week and month, but are still unable to correctly form all their letters. Depending on your child, these may also be perfect for your Preschool and 1st grade child too.
Kindergarten Calendar Notebooks
These free printable worksheets are perfect for Kindergartners. Just print these black and white sheets and you are ready to go. No prep required. I suggest laminating them so they can be reused all month long with dry erase markers. The following skills are covered in these Daily Kindergarten Calendars:
- tracing months
- tracing days of the week
- learning seasons
- telling time to the hour
- tens, ones
- counting to 100
See our Kindergarten Daily Calendars in action:
- 50 Books Kindergartners can Read Themselves (printable bookmarks)
- Alphabet Dot to Dot
- Alphabet Playdough PLAYmats
- Alphabet Book, Cut & Paste
- Phonics Coloring Sheets
- Kindergarten Spelling Practice Activity
- Hershey Kiss Word Families
- Kindergarten Sliders
- Crazy Roads Sight Words Game
- Reading the Easy Way 2 – Kindergarten Dolch Sight Word 12 week Reading Program
- (6 sight words games, 32 sight words worksheets, 10 sight word readers, and more!)
Download Kindergarten Daily Calendars
- By downloading from my site you agree to the following:
- This is for personal use only (to use in a co-op or classroom please purchase a classroom licensed edition from my TPT store)
- This may NOT be sold, hosted, reproduced, or stored on any other site (including blog, Facebook, Dropbox, etc.)
- Graphics Purchased and used with permission from Scrappin Doodles #94836, Little Red, and Ashley Hughes
- I offer free printables to bless my readers AND to provide for my family. Your frequent visits to my blog & support purchasing through affiliate links and ads keep the lights on so to speak. Thank you!
<urn:uuid:8b2368ad-a0fc-484a-8813-3396734aa4e8>
{ "date": "2017-03-23T22:01:22", "dump": "CC-MAIN-2017-13", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187225.79/warc/CC-MAIN-20170322212947-00076-ip-10-233-31-227.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8825070858001709, "score": 2.5625, "token_count": 467, "url": "http://www.123homeschool4me.com/2015/08/free-kindergarten-daily-calendar.html" }
For beginners, have the students put two rhyming words on 2 leaves. (See examples below.) Then have them color each set of rhyming words the same color. You can limit the number of rhyming pairs by telling them which colors they can use (red, orange, yellow, green, brown). Then, have them trace their arm and open hand onto a sheet of drawing or construction paper to resemble a tree. (See my pathetic example below. :) Cut out the leaves and glue onto The Rhyming Tree. Now this is where you can get creative. You can also make this an Opposites Tree. Or anything where you would get a pair of different colored fall leaves to place on the tree. You're only restricted by your imagination!
<urn:uuid:e6eaa078-3cb2-4dc7-ad9e-5f6292e76f11>
{ "date": "2017-06-22T12:03:05", "dump": "CC-MAIN-2017-26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128319265.41/warc/CC-MAIN-20170622114718-20170622134718-00056.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9452065825462341, "score": 3.546875, "token_count": 158, "url": "http://kindermooney.blogspot.com/2014/11/the-rhyming-tree.html" }
One of the most common types of hacking attacks, and one frequently aimed at business rivals, is the DDoS attack. This type of attack sends a high frequency of requests to network resources or services, creating an excessive load on them. As a result, access to them becomes difficult or is suspended for some time. Particularly relevant today is the issue of protection against DDoS attacks on such network resources as:
This list is not complete; it only reflects the purpose of such an attack: to reduce the popularity of a competitor by blocking access to their Internet resource through the high loads generated by botnets.
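As a very rough illustration of the principle behind many mitigation tools, the sketch below counts each client's requests in a sliding time window and rejects clients that exceed a threshold. The window length and limit are arbitrary example values, and this is not taken from any particular protection product:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10          # length of the sliding window (example value)
MAX_REQUESTS = 100           # allowed requests per client per window (example value)

recent = defaultdict(deque)  # client address -> timestamps of recent requests

def allow_request(client_addr, now=None):
    """Return False when a client exceeds the per-window request budget."""
    now = time.time() if now is None else now
    window = recent[client_addr]
    window.append(now)
    # Discard timestamps that have slid out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) <= MAX_REQUESTS
```

Application-level throttling like this only goes so far: a large botnet can exhaust bandwidth or connection capacity before such code ever runs, which is why serious DDoS protection is usually applied upstream, at the network edge or by a dedicated filtering service.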
<urn:uuid:4606202c-db3a-4361-8b62-e9c20578bcf2>
{ "date": "2019-02-17T01:42:32", "dump": "CC-MAIN-2019-09", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481428.19/warc/CC-MAIN-20190217010854-20190217032854-00456.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9604959487915039, "score": 2.53125, "token_count": 119, "url": "https://en.protectmaster.org/tags/DDoS+protection/" }
Bright green and frilly, Boston ferns (Nephrolepis exaltata "Bostoniensis") bring the rain forest indoors on chilly winter days and lend grace to summer Victorian porches -- that is, until their fronds begin dying back and dropping hundreds of little brown leaves on the floor. Take cues from your fern’s native environment to bring your ailing fern back to life. Boston Fern Basics The family of ferns contains plants hardy to the coldest climates of the U.S., but the Boston fern is tropical, growing outdoors only in U.S. Department of Agriculture plant hardiness zones 10 through 12. Its ancestors dwelt on forest floors, in high humidity, moist soil, filtered shade and low temperatures. The lovely plant on your porch descends from an unusual plant with long, drooping fronds discovered in a shipment received by a Massachusetts florist. What Boston Ferns Need When you bring Boston ferns indoors or plant them in the garden, they need organically rich soil that holds moisture but drains water. Indoor ferns crave humidity and a tray of pebbles filled halfway up with water or double-potting with moist sphagnum moss will raise humidity around the plant. During the winter, they’ll tolerate surface dryness on their soil, but still need bright light. A north-facing window -- or near an east window with sheer curtains -- in a spare room with temperatures from 50 to 55 degrees Fahrenheit comes close to their native environment. Boston ferns thrive outdoors in a Mediterranean-type climate, as long as they're in bright shade and their soil is kept moist. Failure of Boston ferns typically occurs when they suddenly move to a warm, dry, relatively dark interior for the winter, when temperatures drop to around 40 F or when frosty nights rob the air of moisture. The environmental shock causes browning, from the tips on the ends of the fronds inward. By winter’s end, even surviving plants may be shedding leaves. Unless the plants show signs of infestation by scale insects or mealybugs, they might still be saved. Throw out shocked plants infested with pests. Restoring By Renewal The worst technique to use on a shocked plant is to shock it further -- and repotting is a major shock. If your Boston fern needs a larger pot, wait until it has recovered and is actively growing midspring. Trim the drooping fronds back to about 2 inches long and leave any healthy upright fronds in the center of the plant intact. If all fronds are drying and dying, trim them all to 2 inches. Clean out the dead leaves and check the soil for offsets -- baby ferns -- which can be separated and planted in their own pots. Boston ferns tend to sterility and propagate using stolons as well as spores. Soak your fern well and allow all the water to run out of the drainage hole before setting the pot on a tray in bright light -- outdoors if nighttime temperatures have reached 50 F.
<urn:uuid:61e0e339-f373-43f7-ab18-faf5100f84c6>
{ "date": "2019-04-23T23:56:44", "dump": "CC-MAIN-2019-18", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578616424.69/warc/CC-MAIN-20190423234808-20190424020808-00536.warc.gz", "int_score": 3, "language": "en", "language_score": 0.90776526927948, "score": 3.359375, "token_count": 638, "url": "https://homeguides.sfgate.com/can-revive-almost-dead-boston-ferns-92165.html" }
Members of Congress are considering several bills designed to combat climate change. Chief among them is Senate bill 2191--America's Climate Security Act of 2007--spearheaded by Joseph Lieberman (I-CT) and John Warner (R-VA). This bill would set a limit on the emissions of greenhouse gases, mainly carbon dioxide from the combustion of coal, oil, and natural gas. Since energy is the lifeblood of the American economy, 85 percent of which comes from these fossil fuels, S. 2191 represents an extraordinary level of economic interference by the federal government. For this reason, it is important for policymakers to have a sense of the economic impacts of S. 2191 that would go hand in hand with any possible environmental benefits. This Center for Data Analysis (CDA) report describes and quantifies those economic impacts. Our analysis makes clear that S. 2191 promises extraordinary perils for the American economy. Arbitrary restrictions predicated on multiple, untested, and undeveloped technologies will lead to severe restrictions on energy use and large increases in energy costs. In addition to the direct impact on consumers' budgets, these higher energy costs will spread through the economy and inject unnecessary inefficiencies at virtually every stage of production and consumption--all of which will add yet more financial burdens that must be borne by American taxpayers. S. 2191 extracts trillions of dollars from the millions of American energy consumers and delivers this wealth to permanently identified classes of recipients, such as tribal groups and preferred technology sectors, while largely circumventing the normal congressional appropriations process. Unbound by the periodic review of the normal budgetary process, this de facto tax-and-spend program threatens to become permanent--independent of the goals of the legislation. The recent experience with ethanol mandates illustrates some of the costs and risks created when a government imposes significant new regulations on the energy market. Ethanol production has been bedeviled by unintended impacts on world food prices, unexpected environmental degradation from expanding acres under cultivation, and frustratingly slow progress in commercializing cellulosic ethanol production. In spite of tremendous expense, the production goals set for ethanol are unlikely to be met, and the hoped-for environmental improvements are even less likely to occur. Yet the challenges posed by the ethanol program are a small fraction of those posed by S. 2191. S. 2191 imposes strict upper limits on the emission of six greenhouse gases (GHGs) with the primary emphasis on carbon dioxide (CO2). The mechanism for capping these emissions requires emitters to acquire federally created permits (allowances) for each ton emitted. The cost of the allowances will be significant and will lead to large increases in the cost of energy. Because the allowances have an economic effect much like the effect of an energy tax, the increase in energy costs creates correspondingly large transfers of income from private energy consumers to the government and the other recipients of the allowance proceeds.
Implementing S. 2191 will be very costly, even given the most generous assumptions. To put a firm floor under the cost estimates, we assume that all of the problems of meeting currently enacted federal, state, and local legislation are overcome. A further unlikely condition is added; namely, that a critical but unproven technology--carbon capture and sequestration--will be ready for full-scale commercial use in just 10 years.
Making a more reasonable assumption about just this one technology leads to dramatically higher (but by no means worst-case) costs. We use these two cases to bracket our cost projections of S. 2191:
- Cumulative gross domestic product (GDP) losses are at least $1.7 trillion and could reach $4.8 trillion by 2030 (in inflation-adjusted 2006 dollars).
- Single-year GDP losses hit at least $155 billion and realistically could exceed $500 billion (in inflation-adjusted 2006 dollars).
- Annual job losses exceed 500,000 before 2030 and could approach 1,000,000.
- The annual cost of emission permits to energy users will be at least $100 billion by 2020 and could exceed $300 billion by 2030 (in inflation-adjusted 2006 dollars).
- The average household will pay $467 more each year for its natural gas and electricity (in inflation-adjusted 2006 dollars). That means that the average household will spend an additional $8,870 to purchase household energy over the period 2012 through 2030.
Our analysis does not extend beyond 2030, at which point S. 2191 mandates GHG reductions to 33 percent below the 2005 level. However, it should be noted that the mandated GHG reductions continue to become more severe and must be 70 percent below the 2005 level by 2050. In addition to taking a bite out of consumers' pocketbooks, the high energy prices throw a monkey wrench into the production side of the economy. Contrary to the claims of an economic boost from "green investment" and "green-collar" job creation, S. 2191 reduces economic growth, GDP, and employment.
Though there are some initial years during which S. 2191 spurs additional investment, this investment is completely undermined by the negative effects of higher energy prices. Investment contributes to the economy when it increases future productivity and income. The greater and more effective the investment, the greater the increase in future income. Since income (as measured by GDP) drops as a result of S. 2191, it is clear that more capital is destroyed than is created. The cumulative GDP losses for the period 2010 to 2030 fall between $1.7 trillion and $4.8 trillion, with single-year losses reaching into the hundreds of billions. The hope for "green-collar" jobs meets a similar fate. Firms are saddled with significantly higher energy costs that must be reflected in their product prices. The higher prices make their products less attractive to consumers and thus less competitive. As a result, employment drops along with the drop in demand.
With S. 2191, there is an initial small employment increase as firms build and purchase the newer, more CO2-friendly plants and equipment. However, any "green-collar" jobs created are more than offset by other job losses. The initial uptick is small compared to the hundreds of thousands of lost jobs in later years. Table 1 shows the high and low projections of the employment and income effects of S. 2191. A less prominent part of S. 2191 subjects all imported goods to GHG emission rules. An understandable attempt to limit our loss of international competitiveness, this provision opens yet another area of uncertainty. For all imported goods, it will be necessary to measure the GHG footprint, compare the relative aggressiveness of national GHG limiting programs, and assign a possible emissions tariff. The inherent imprecision involved with such calculations leaves international trade vulnerable to bureaucratic caprice and increased trade tensions.
Description of the Legislation
S. 2191 is a cap-and-trade bill.
It caps greenhouse gas emissions from regulated entities beginning in 2012. At first, each power plant, factory, refinery, and other regulated entity will be allocated allowances (rights to emit) for six greenhouse gases. However, only 40 percent of the allowances will be allocated to these entities. The remaining 60 percent will be auctioned off or distributed to other entities. Most emitters will need to purchase at least some allowances at auction. For instance, firms that reduce their CO2 emissions in order to meet the S. 2191 targets will still have to purchase 60 percent of the needed allowances in 2012 and an even higher fraction in subsequent years. Emitters who reduce their emissions below their annual allotment can sell their excess allowances to those who don't--the trade part of cap-and-trade. Over time, the cap is ratcheted down from a freeze at 2005 emissions levels in 2012 to a 70 percent reduction below those levels by 2050. In addition, the fraction of allowances that are given to the emitters is reduced, and a larger fraction is auctioned to the highest bidder. The primary man-made greenhouse gas is carbon dioxide, and it is the main focus of this analysis.

Distribution of Auction Proceeds

S. 2191 specifies how the auction proceeds will be spent, with constant percentages from 2012 to 2036. The auction process depends on the creation of a new nonprofit corporation called the Climate Change Credit Corporation to initiate and complete the auctioning of allowances. Eleven percent will be allocated to an advanced-technology vehicles-manufacturing incentive. While 44 percent is to be spent on low-carbon energy technology, advanced coal and sequestration programs, and cellulosic biomass ethanol technology programs, 45 percent is to be spent on assisting individuals, families, firms, and organizations in the transition to a low-carbon regime. This includes 20 percent allocated to an Energy Assistance Fund, 20 percent allocated to an Adaptation Fund, and 5 percent allocated to a Climate Change Worker Training Fund. Specifically, the training fund would attempt to provide quality job training to workers displaced by this bill, provide temporary wages and health care benefits to those who are displaced, and provide funding for state-managed worker-training programs. Especially given the very wide range of projected auction proceeds, earmarking them for decades into the future risks creating additional de facto entitlement programs.

Proponents of cap and trade describe it as a flexible and market-based approach that allows the private sector to find the most cost-effective means of reducing greenhouse gas emissions. They expect the program to motivate fossil energy producers and users to reduce their carbon dioxide emissions through improvements in energy efficiency, expanded use of energy sources with fewer or no carbon emissions, or new carbon capture and sequestration (CCS) technologies that allow such emissions to be stored underground rather than released into the atmosphere. In contrast, critics fear that many of the necessary advances are decades away from being technologically and economically viable and that, in the interim, the caps in S. 2191 can be met only with severe reductions in energy use, which would drive up energy costs significantly--and would be, in effect, a massive energy tax.
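To make the allowance accounting concrete, the following short Python sketch works through one year for a single hypothetical emitter under the mechanism described above. The plant and its emissions are invented for illustration; only the structure--a declining free allocation, purchase of the remaining allowances, and sale of any surplus--comes from the bill summary, and the 40 percent free share and $20-per-ton price correspond to the 2012 figures cited in this report.

# Hypothetical illustration of the cap-and-trade accounting described above.
# The plant and its emissions are invented; the mechanics follow the text:
# a covered emitter receives some allowances free, buys the rest at the
# allowance price, and can sell any surplus if it emits below its allotment.

def emitter_position(emissions_tons, free_allowances, allowance_price):
    """Return (tons to buy, tons to sell, net allowance cost) for one year."""
    shortfall = max(emissions_tons - free_allowances, 0)
    surplus = max(free_allowances - emissions_tons, 0)
    net_cost = (shortfall - surplus) * allowance_price
    return shortfall, surplus, net_cost

# A plant emitting 1,000,000 tons of CO2 in 2012, when roughly 40 percent of
# allowances are allocated free and allowances are assumed to trade at $20/ton:
to_buy, to_sell, cost = emitter_position(1_000_000, 400_000, 20)
print(to_buy, to_sell, cost)    # 600000 0 12000000 -- a $12 million outlay

Even in this toy example the allowance behaves like a tax: whatever the firm cannot cover with free allowances becomes a cash outlay that scales directly with the allowance price.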
Proponents of these reductions point to the success of a similar cap-and-trade program in the 1990 Clean Air Act amendments to restrict sulfur dioxide emissions from coal-fired power plants. This program led to emissions reductions at a cost lower than anticipated. Critics question the success of this program as well as its relevance to the far more difficult task of regulating CO2. The comparison has a more fundamental flaw, however. In contrast to the undeveloped and speculative state of current CCS technology, the technology for reducing sulfur dioxide emissions was already commercialized and widely implemented before the 1990 Clean Air Act amendments were passed. Critics also point to the substantial difficulties that the European Union has faced since implementing its greenhouse gas cap-and-trade program in 2005 in order to comply with the Kyoto Protocol, the multilateral treaty on emissions that the United States declined to ratify.

The cap-and-trade specifics of S. 2191--the overall targets and timetables, the types of emissions and economic sectors covered, the method of allocating allowances, the measures designed to add flexibility, the provisions affecting trade, and many other factors--will determine the extent and distribution of the costs and, indeed, whether the goals are realistically achievable. These specifics are explained in more detail below.

In addition to the provisions of the bill, the many baseline assumptions about the future also affect the projected costs of S. 2191. They include assumptions about the pace of technological advances, especially those regarding the CCS breakthroughs that will be necessary for the continued use of coal, the energy source with the highest CO2 emissions per unit of energy. Continued use of coal is critical because it provides half of the nation's electricity. Assumptions about America's economic growth and concomitant energy needs are also of great importance, as are assumptions about the effect of previously enacted energy legislation, particularly the Energy Independence and Security Act of 2007.

This CDA report discusses three different views of this country's economic future, each shaped by different policies designed to reduce atmospheric carbon dioxide and, presumably, to reduce the warming trend in global climate change. Policymakers and others who follow the climate change debate closely should find each of these three views helpful in understanding the policy alternatives currently before us. These three views are:

- The current-law baseline. Presented here is a highly detailed, 30-year economic forecast that incorporates the principal elements of energy and climate change policies signed into law last year.
- Simulation of S. 2191, America's Climate Security Act of 2007, sponsored by Senators Joseph Lieberman (I-CT) and John Warner (R-VA). The simulation builds on the detailed baseline and assumes that critical technologies are fully developed.
- An alternative, more realistic scenario in which critical technology does not materialize over the 20-year forecast period.

Key Assumptions. The baseline for the Lieberman-Warner simulations builds on the Global Insight (GI) November 2007 long-term-trend forecast. The GI model assumes: "[T]he economy suffers no major mishaps between now and 2037. It grows smoothly, in the sense that actual output follows potential output relatively closely. This projection is best described as depicting the mean of all possible paths that the economy could follow in the absence of major disruptions. Such disruptions include large oil price shocks, untoward swings in macroeconomic policy, or excessively rapid increases in demand."

The GI long-term model forecasts the trend of the U.S. economy.
"Trend" means the most likely path that the economy will follow if, for instance, it is not disturbed by a recession, extremely high oil prices, or the collapse of major trading partners. One way to think about the long-term trend is to imagine a pathway through the cyclical patterns of our economy, as well as the effects of cyclical patterns in foreign economies on the U.S. economy. Given the fiscal and economic challenges facing the United States (particularly the mounting federal deficits stemming from the long-expected explosion in Social Security, Medicare, and Medicaid outlays), the long term already has significant risks. The baseline assumes that the economy successfully avoids any sharp drops. At the same time, there is no inclusion of similarly large, potentially positive, shocks to the economy. Energy prices, patterns of use, and supply change continuously in response to legislation and market conditions. To evaluate the economic impact of S. 2191, we must establish what would be the expected levels of emissions and available technology over the bill's proposed lifetime in the absence of its passage. Only with the baseline situation determined can the costs of meeting the goals and constraints of S. 2191 be estimated. Two fundamental trends establish the baseline path of CO2 emissions. First, aggregate income growth leads to greater demand for power across all sectors of the economy. Most of this power is generated by burning fossil fuels. Partially offsetting the associated increase in CO2 emissions is the second trend of increasing carbon efficiency in the energy sector. The improved efficiency comes from a variety of changes in both production and consumption, including power-generating technology that increases the yield of useable power for each ton of CO2 emitted; continual improvements in the energy efficiency of appliances, new homes, and light vehicles; more use of renewable fuels; and greater generation and use of nuclear power. Government mandates--federal, state, and local--continue to force additional energy efficiency and limit CO2 emissions, which helps to achieve the goals of S. 2191. These mandates may work in parallel with S. 2191, and they create compliance costs, but since these compliance costs are already in force without the passage of S. 2191, they are not attributable to the Examples of the baseline costs necessary for meeting the S. 2191 goals but attributable to other legislation include: - Manufacturing cars and trucks that satisfy the much higher fuel-economy standards mandated for the next 20 years, - Producing 36 billion gallons of biofuels including 16 billion gallons of cellulosic ethanol, - Complying with expensive new building codes, and - Producing ever more energy-efficient household appliances. Aggregate Energy Use. Continued gains in energy efficiency will restrain the growth of energy demand below the rates of economic growth and below the rates experienced in the past half-century--roughly 1.5 percent per year. These efficiencies are driven by both markets and mandates. We project baseline primary energy demand to grow at 0.5 percent each year through 2030. Petroleum. As always, higher prices push back on quantities demanded. Though petroleum prices should come down from the current record levels as supply disruptions and bottlenecks ease, they will remain well above 1990 prices. According to baseline assumptions, petroleum prices will settle around $70 a barrel in nominal terms and decline to $46 a barrel (in 2006 dollars) by 2030. 
Even in the absence of Corporate Average Fuel Economy (CAFE) limit changes, higher prices induce consumers to move to more efficient vehicles. On the mandates side, the Energy Independence and Security Act of 2007 (EISA) raises the bar for vehicle fuel efficiency. The CAFE standard rises to 35 miles per gallon by 2020 for all light vehicles. For subsequent years, the EISA mandate reads: "For model years 2021 through 2030, the average fuel economy required to be attained by each fleet of passenger and non-passenger automobiles manufactured for sale in the United States shall be the maximum feasible average fuel economy standard for each fleet for that model year." The expected CAFE standards are 47.5 miles per gallon for new passenger cars and 32 miles per gallon for new trucks by 2030, and the average for all light vehicles, whether new or old, will be 33 miles per gallon. Overall, petroleum consumption will grow by 0.6 percent per year between 2005 and 2030.

Natural Gas. In the baseline scenario, gas prices settle just below $7 per million British thermal units (Btus). This is less than the current price but well above the 1990s levels. Alaskan pipeline deliveries will not start until 2025, at which point they will help to offset supply reductions in the Lower 48 as well as imports from Canada. Nearly 100 gigawatts of old natural-gas-steam capacity is retired, and 50 gigawatts of the more efficient "natural gas combined cycle" (NGCC) plants are built. Total natural gas consumption grows by 0.4 percent per year through 2030.

Coal. In the baseline case, coal use is restrained by slower growth of energy demand and increasing generation of nuclear and renewable power. Demand will grow by an average of 0.2 percent each year through 2030. One hundred gigawatts of old, inefficient coal-fired capacity is retired. Sixty-five gigawatts of new and replacement coal-fired power-generation plants will be added using the "integrated gas combined cycle" (IGCC) or advanced pulverized-coal technologies. These more efficient technologies use less coal and emit less CO2 per unit of electricity generated and are ready to be fitted for carbon capture and sequestration. Because of the additional cost, there is no use of CCS technology in the baseline case. Better and more widely adapted scrubbing technology allows broader use of high-sulfur coal. This will open up more sourcing options and lower the average cost of coal in the energy sector. In real dollars, coal prices will settle near the levels observed in the 1990s.

Nuclear Energy. Though there are no significant CO2 emissions from nuclear power generation, it is not considered "renewable" for the purpose of meeting existing state-imposed targets. Nevertheless, federal incentives are already in place for additional nuclear power capacity. There will be 12 gigawatts of new capacity built and 3 gigawatts of uprated additional capacity added at existing plants. Resolving the problems with waste disposal is a major hurdle in expanding nuclear power generation. The baseline assumption is that nuclear power plants will continue to store the waste on site. Given the already high use of available capacity, electricity generated by nuclear power is projected to grow by only 0.5 percent per year through 2030.
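The baseline growth rates quoted above look small, but compounded over the 25 years from 2005 to 2030 they still imply meaningfully higher consumption. The short Python sketch below simply compounds the stated annual rates; the rates come from the preceding paragraphs, while the compounding arithmetic is only an illustration and may differ from the report's underlying tables.

# Compound the baseline annual growth rates cited above over 2005-2030.
baseline_growth = {
    "primary energy demand": 0.005,
    "petroleum consumption": 0.006,
    "natural gas consumption": 0.004,
    "coal demand": 0.002,
    "nuclear generation": 0.005,
}

years = 2030 - 2005
for fuel, rate in baseline_growth.items():
    cumulative = (1 + rate) ** years - 1
    print(f"{fuel}: roughly {cumulative:.1%} higher in 2030 than in 2005")

# 0.5 percent per year compounds to about 13 percent over 25 years,
# while coal's 0.2 percent per year adds only about 5 percent.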
Renewable Energy Sources. Federal and state initiatives already in place seek to increase the use of renewable energy sources. The definition of "renewable" varies from state to state but generally includes biomass, wind, and solar power. Higher fuel prices along with state and federal mandates cause renewable fuel use to grow at 5.5 percent per year through 2030. We assume that producers will be able to meet the ethanol (corn-based and cellulose-based) targets set by the EISA, though experience thus far suggests otherwise.

Simulations of Lieberman-Warner

Key Assumptions. Responding to concerns about adverse environmental impacts of anthropogenic greenhouse gas emissions, S. 2191 sets ever more stringent caps on emissions of these gases. Using previous emission levels as yardsticks, the 2012 cap is set at the 2005 emission level. The cap drops to 15 percent below the 2005 emission level by 2020 and 33 percent below by 2030. By 2050, the goal is to have man-made GHG emissions at 70 percent below those of 2005. Though the main focus for the emissions targets is CO2, Lieberman-Warner rules apply to six greenhouse gases: carbon dioxide, methane, nitrous oxide, sulfur hexafluoride, perfluorocarbon, and some byproduct hydrofluorocarbons (HFCs). All emissions are measured in terms of the warming potential of carbon dioxide. Some of these other gases have much higher greenhouse effects per ton of emissions than does CO2. However, these gases are emitted in much smaller volumes by human activity. CO2 creates about 85 percent of the man-made GHG warming; therefore, this study examines only the economic impact of constraints on CO2 emissions.

Under the Lieberman-Warner bill, producers of petroleum products, producers of natural gas, and consumers of coal must have CO2 allowances in proportion to the output (or consumption, in the case of coal) of these fuels. The quantity of allowances available each year is equal to the cap on CO2 emissions for that year. Some activities and technologies that reduce emissions of greenhouse gases can earn allowance credits, which can then be sold or used to offset required allowances. There are provisions that allow unused allowances to be saved for future years and, within limits, to borrow future allowances. The costs of borrowing are so high and the rewards of saving are so distant and uncertain that our analysis assumes no borrowing or saving of allowances.

S. 2191 creates the Climate Change Credit Corporation to administer the distribution of allowances and to track their ownership. In the first phase of implementation, 40 percent of the allowances are issued to current emitters. This fraction declines until 2025, at which point emitters receive zero allowances and must purchase 100 percent of the allowances they need.

Barriers to Trade: Title VI, Global Effort to Reduce Greenhouse Gas Emissions

Title VI of S. 2191 is part of a global effort to reduce greenhouse gas emissions and ensures that emitting GHG in other countries does not undermine U.S. efforts to reduce GHG. The bill's supporters hope to encourage international action on GHG reduction. To this end, the bill includes the suggestion that the President establish an interagency group to determine whether or not other countries have taken similar action to limit their release of GHG. The interagency group will be responsible for creating a reserve of international allowances, and any U.S. importer of covered goods must submit international allowances as a condition for the trade. Thus, importers of covered goods must submit emissions allowances that are equal in value to those required for those goods in our system.
For instance, if the production of a product generates two tons of CO2, importers of this product need two tons of allowances for each product they import. An importer must also submit a written declaration to the administrator of U.S. Customs and Border Protection for each import. Failure to make a CO2 emissions declaration bars the importation of a good into the United States. The only exceptions will be for countries that have taken similar action to reduce GHG and countries that are identified by the United Nations as the least-developed countries. Though perhaps well-intentioned, Title VI has the potential to do serious harm to international commerce. Complex and ambiguous, it could prove to be a loose cannon--destroying trade relations instead of reducing environmental damage.

Coal Technology. Due to its abundance, coal is the cheapest source of energy and fuels about half of America's electricity supply. Carbon capture and sequestration is a promising but not yet commercialized technology for dramatically reducing CO2 emissions from coal-powered electricity. Of course, CCS technology has additional costs, which are higher when retrofitting existing plants than when building the technology into new plants. Even with the additional costs, CCS becomes viable in new plants when allowance costs exceed $50 per ton of CO2. Initial modeling showed that this $50 threshold will be reached faster than CCS technology is likely to become available. Therefore, we assume that CCS technology is adopted as soon as it is practical. That date cannot be predicted with any certainty. The costs of meeting the CO2 reductions mandated by S. 2191 are very sensitive to changes in the rate at which CCS technology is developed. Our generous scenario operates on the assumption that any coal-fired plant built after 2018 uses CCS. A second scenario assumes that the significant technological and political hurdles prevent CCS adoption before 2030.

Natural Gas. Because of its higher cost, natural gas is not competitive with coal in the baseline case of zero CO2 restrictions. Though natural gas generates less CO2 per Btu than does today's coal, it is not competitive when coal generators use CCS. In the in-between case, with some CO2 restrictions and no CCS, high allowance prices make coal more expensive and natural gas relatively more attractive. The in-between case drives up natural gas prices and is the most costly of all for the consumer. For carbon-allowance prices in the $30 to $40 range, replacing old steam plants with combined-cycle natural gas plants makes sense. When allowance prices exceed $50, coal plants with CCS are more competitive. Regional price differences and the long lag times in replacing power plants ensure that electricity will be generated by both coal and gas for the foreseeable future.

Nuclear Energy. The projection is for no additional nuclear power beyond the base case.
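The allowance-price thresholds discussed above amount to a simple decision rule for new generating capacity. The Python sketch below encodes one reading of those thresholds--combined-cycle natural gas becomes attractive somewhere in the $30-$40 range, and new coal with CCS above roughly $50 per ton--purely as an illustration; the actual choice in the underlying models depends on regional prices, plant vintages, and lag times, as the text notes.

# One reading of the generation-choice thresholds described above.
# The dollar thresholds come from the text; the single-variable decision
# rule is a deliberate simplification for illustration only.

def preferred_new_capacity(allowance_price_per_ton):
    if allowance_price_per_ton > 50:
        return "new coal with carbon capture and sequestration"
    if allowance_price_per_ton >= 30:
        return "natural gas combined cycle (replacing old steam plants)"
    return "conventional coal (the CCS premium is not yet justified)"

for price in (20, 35, 55, 70):
    print(price, "->", preferred_new_capacity(price))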
Allocation of Allowances (Required Permits for Emitting CO2)

The largest initial allocations go to two covered entities: power (electricity producers) and industry (such as manufacturers).

- Allocation to power includes new entrants, rural electric cooperatives, and incumbents.
- Allocation to industry includes new entrants, incumbents, and revocation of distribution upon facility shutdown. If a facility is permanently shut down, it must return the difference of carbon dioxide equivalents emitted and the number of allowances received from the Environmental Protection Agency (EPA).

For both of these covered entities, allocation equals 20 percent of allowances from 2012 to 2016 and then decreases by 1 percent per year until it reaches zero in 2036. Ten percent of allocated allowances would go to load-serving entities, such as electric and gas distributors and demand-side management programs. Entities receiving allowances would be forced to pass the value of the allowance on to their customers in an attempt to mitigate the economic impact on lower-income and middle-income families. More specifically, the proceeds can mitigate the economic impact on low-income and middle-income users by reducing transmission charges and issuing rebates. On the other hand, the proceeds can be used to promote energy efficiency on the part of the consumer.

Under S. 2191, the EPA would be responsible for allocating emission allowances and distributing auction proceeds. The EPA would allocate up to 9 percent of allowances to states between 2012 and 2050 at rates reflecting efficiency measures, building efficiency compliance, enactment of stringent measures, Low Income Home Energy Assistance Plan (LIHEAP), population size, and the local economy's carbon intensity. States will receive a minimum of 5 percent and an additional 1 percent to 4 percent based on the measures they take to reduce emissions. Mitigating the economic impact on low-income families is only one of 12 ways the proceeds can be used. Others include:

- Reducing use of electricity and natural gas, minimizing waste;
- Investing in non-emitting electricity technology;
- Improving public transportation and rail services;
- Using advanced technology to reduce or sequester GHG;
- Addressing local and regional impacts--including relocation of communities affected by climate change;
- Mitigating obstacles to electricity investment by new entrants;
- Providing assistance to displaced workers;
- Mitigating impacts on energy-intensive industries in internationally competitive markets;
- Reducing hazardous fuels and preventing and suppressing wildfires; and
- Funding rural, municipal, and agricultural water projects.

Other Allowances. Eight percent of allocated allowances are designated for agriculture and forestry sequestration programs, while another 4 percent is generically allocated to support the development of CCS as well as geological sequestration. Five percent of allowances are awards for early action for covered entities, including facilities attempting to lower GHG emissions since 1994, and would decline by 1 percent each year until they reach zero in 2017.

Auction of Allowances. By 2012, 18 percent of the allowances will be auctioned as part of the annual auction program. This number will increase by 3 percent per year until 2017 and then increase by 2 percent per year until 2035, when it reaches 67 percent. From 2035 to 2036, it will jump to 73 percent and remain at that level until 2050, the sunset date for S. 2191. Additionally, the Lieberman-Warner bill requires an "early auction" within 180 days of enactment of the bill. At this time, 6 percent of the 2012 allowances, 4 percent of the 2013 allowances, and 2 percent of the 2014 allowances will be auctioned. The total cost of allowances will be passed on to energy consumers and represents an unprecedented tax hike. The annual cost of this tax (adjusted for inflation to 2006 dollars) will be at least $100 billion and could well exceed $300 billion per year by 2030.
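Because the auctioned share changes almost every year, it is easy to lose track of how quickly the free allocation disappears. The Python sketch below reproduces one reading of the auction schedule just described: 18 percent of allowances auctioned in 2012, rising to 67 percent by 2035 and 73 percent from 2036 through 2050. The endpoints are taken from the text; the year-by-year path in between is an interpolation and may differ slightly from the bill's own tables.

# One reading of the auction schedule described above (percent of
# allowances auctioned each year). Endpoints come from the text; the
# intermediate years are interpolated.

def auctioned_share(year):
    if year < 2012 or year > 2050:
        raise ValueError("the auction program runs from 2012 through 2050")
    if year >= 2036:
        return 73
    if year <= 2017:
        return 18 + 3 * (year - 2012)        # rises 3 points per year to 33 in 2017
    return min(33 + 2 * (year - 2017), 67)   # then 2 points per year, capped at 67

for y in (2012, 2017, 2025, 2035, 2036, 2050):
    print(y, auctioned_share(y))
# 2012: 18, 2017: 33, 2025: 49, 2035: 67, 2036 onward: 73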
Renewable Energy Sources. Current state and federal legislation calls for more than tripling the amount of renewable energy in power generation and increasing transport biofuels by more than 1,000 percent. This includes 16 billion gallons per year of corn-based ethanol and biodiesel and 20 billion gallons per year of cellulosic ethanol and biodiesel. Again, our assumption is that cellulosic biofuels become commercially feasible in time to meet the mandates that are already planned. While S. 2191 has no additional mandates for biofuels, the costs of allowances for fossil fuels lead to greater use of biofuels. At this time, there is no commercially feasible cellulosic ethanol production. If this technology fails to deliver as projected, energy prices will have to rise enough to reduce the quantity of energy demanded by the amount of missing cellulosic ethanol.

Economic Costs of the Lieberman-Warner Bill

The Lieberman-Warner bill affects the economy directly through higher prices for carbon-based energy, which reduces quantity demanded and, thus, the quantity supplied of energy from carbon sources. Energy prices rise because energy producers must pay a fee for each ton of carbon they emit. The fee structure is intended to create an incentive for producers to invest in technologies that reduce carbon emissions during energy production. The bill's sponsors and supporters hope that the fees are sufficiently high to create a strong incentive and demand for cleaner energy production and for the widespread adoption of carbon capture and sequestration technology.

The economic model we use to estimate the bill's broad economic effects treats the fees like a tax on energy producers. Thus, energy prices increase by the amount of the fee or tax. The demand for energy, which largely determines the consumption and, thus, the taxes collected, responds to higher energy prices both directly and indirectly. The direct effect is a reduction in the consumption of carbon-based energy and a shift, where possible, to substitutes that either do not require the fee or require a smaller one.

The indirect effects are more complex. Generally speaking, the carbon fees reduce the amount of energy used in producing goods and services, which slows the demand for labor and capital and reduces the rate of return on productive capital. This "supply-side" impact exerts the predictable secondary effects on labor and capital income, which depresses consumption. These are not unexpected effects. Carbon-reduction schemes that depend on fees or taxes attain their goals of lower atmospheric carbon by slowing carbon-based economic activity. Of course, advocates of this approach hope that other energy sources will arise that can be used as perfect substitutes for the reduced carbon-based energy.

Our first simulation of S. 2191 attempts to make everything happen just as the authors of the legislation envision. We call this simulation the "generous assumptions" simulation, as discussed above in our assumptions section. That is, assuming the carbon-reduction targets discussed above, the implementation of CCS as well as expanded and new low-carbon fuels occurs just as planned and on time. The process is assumed to be unhampered by lawsuits or bureaucratic inefficiencies in the deployment of technology grants and consumption subsidies. Everything is "by the book."

Our second simulation relaxes the assumption that CCS technology is implemented and increases the value of carbon fees by approximately 30 percent each year after 2018.
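Read alongside the fee figures given later in the report (roughly $50 per ton in 2020 and about $68-$70 per ton in 2030 when CCS is available, versus $65 and $88 when it is not), that 30 percent adjustment appears to mean that each year's fee after 2018 sits about 30 percent above the corresponding generous-assumptions fee, not that fees compound by 30 percent annually. The Python sketch below states that interpretation explicitly; it is our reading of the text, not language from the bill.

# Our reading of the "reasonable assumption" fee adjustment: after 2018,
# each year's allowance fee is roughly 30 percent above the corresponding
# generous-assumptions (CCS-available) fee for the same year. The
# generous-case fees below are the report's round numbers.

GENEROUS_FEES = {2012: 20, 2020: 50, 2030: 68}   # dollars per metric ton, 2006 dollars

def reasonable_fee(year):
    fee = GENEROUS_FEES[year]
    return fee * 1.3 if year > 2018 else fee

for year in sorted(GENEROUS_FEES):
    print(year, GENEROUS_FEES[year], "->", round(reasonable_fee(year)))
# 2012: 20 -> 20, 2020: 50 -> 65, 2030: 68 -> 88, matching the $20-per-ton
# increase (from $68 to $88) quoted in the appendix.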
We call this simulation the "reasonable assumption" simulation. Every other assumption of our first simulation is retained. Table 3 shows the carbon fees per ton at five-year intervals in our two simulations. Displayed alongside these values are the fees determined in other simulations of S. 2191. If we have succeeded in these two efforts, then policymakers can expect something like the following economic effects:

Economic Output Declines. The broadest measure of economic activity is the change in GDP after accounting for inflation. GDP measures the dollar value of all goods and services produced in the United States during the year for final sale to consumers. In the generous-assumptions simulation, GDP increases slightly during the first few years as, for instance, energy producers decommission power plants and build new ones that are capable of accommodating CCS. This investment-driven burst of GDP subsides after 2018. Higher energy prices decrease the use of carbon-based energy in production of goods, incomes fall, and demand for goods subsides. GDP declines in 2020 by $94 billion, in 2025 by $129 billion, and in 2030 by $111 billion (all, again, after inflation). When CCS is not implemented, the higher carbon fees produce more adverse economic effects. GDP is $330 billion below its baseline levels by 2025 and $436 billion below its baseline levels by 2030.

This slowdown in GDP is seen more dramatically in the slump in manufacturing output. Again, manufacturing benefits from the initial investment in new energy production and fuel sources, but the sector's declines are sharp thereafter. Indeed, by 2020, manufacturing output in this energy-sensitive sector is 2.4 percent to 5.8 percent below what it would be if S. 2191 never becomes law. By 2030, the manufacturing sector has lost $319 billion to $767 billion in output when compared to our baseline; that is, when compared to the economic world without S. 2191.

Number of Jobs Declines. The loss of economic output is the proverbial tip of the economic iceberg. Below the surface are economic reactions to the legislation that led up to the drop in output. Employment growth slows sharply following the boomlet of the first few years. Potential employment (or the job growth that would be implied by the demand for goods and services and the relevant cost of capital used in production) slumps sharply. In 2025, nearly a half-million jobs per year fail to materialize. The job losses expand to more than 600,000 in 2026. Indeed, in no year after the boomlet does the economy under Lieberman-Warner outperform the baseline economy where S. 2191 never becomes law.

For manufacturing workers, the news is grim indeed. That sector would likely continue declining in numbers thanks to increased productivity: Our baseline contains a 9 percent decline between 2008 and 2030. Lieberman-Warner accelerates this decrease substantially: Under our generous-assumptions simulation, employment in manufacturing declines by 23 percent over that same time period, or more than twice the rate without S. 2191.

Other, less energy-intensive sectors, however, do not suffer such decreases. Employment in retail establishments ends the 22-year period 2 percent ahead of its 2008 level, despite significant cutbacks on household consumption levels. Employment in information businesses grows by 29 percent over this same time period. Because the distribution of energy-intensive jobs across the country is unequal, some states and congressional districts will be hit particularly hard.
Notable among the most adversely affected states are Wisconsin, New Hampshire, and Illinois, among others.

Energy Prices Rise. Higher energy prices, of course, are the root cause of the slower economy. As Chart 7 shows, consumer prices for electricity, natural gas, and home heating oil increase significantly between 2015 and 2030. Indeed, by the last year of our simulation, the average American consumer has spent a cumulative $8,870 more to purchase household energy since 2012.

Incomes and Consumption Decline. Declining demand for energy-intensive products reduces employment and incomes in the businesses producing these products. Workers and investors earn less, and household incomes decline. Reductions in income in these sectors spread and cause declines in demand for other sectors of the economy. Our simulation captures this effect of higher energy prices. Under the generous-assumptions simulation, the income that individuals have after taxes declines by $47 billion (after inflation) in 2015 and by $50.7 billion in 2030. Our reasonable-assumptions simulation contains worse news: Disposable personal income falls $120 billion below baseline in 2015 and averages $68 billion below baseline over the entire period of 2008 to 2030.

Consumption outlays by individuals and households follow the pattern of lower income. In 2020, consumption expenditures are $52 billion lower than they would be in an economic world in which S. 2191 is not the law. Personal consumption outlays (after inflation) are $67 billion lower by 2030 and average $54 billion below baseline over the entire 22-year forecast period. Under a more reasonable assessment of the likelihood of standard use of CCS, consumption expenditures by individuals average $113 billion lower over the 22-year forecast period. These declines in consumption are particularly dramatic in those parts of the economy that are sensitive to economic shocks: consumer durables, financial services, and discretionary medical services, among others. Chart 8 shows the effects of the decline in personal consumption outlays.

The Lieberman-Warner climate change bill is, in many respects, an unprecedented proposal. Its limits on CO2 and other greenhouse gas emissions would impose significant costs on virtually the entire American economy. In addition, complicated tariff rules, dependent on evaluating the GHG restrictions of all trading partners, add another unknowable dimension to the costs, fueling the overall uncertainty. The problems for our economy are increased by S. 2191's reliance on complex and costly technologies that have yet to be developed. The fact that this large-scale transformation of the economy must occur over relatively tight timeframes only amplifies the costs and uncertainties. The impacts would be felt by every American.

Even under a fairly optimistic set of assumptions, the economic impact of S. 2191 is likely to be serious for the job market, household budgets, energy prices, and the economy overall. The burden will be shouldered by the average American. The bill would have the same effect as a major new energy tax--only worse. In the case of S. 2191, increases in the tax rate are set by forces beyond legislative control. Under a more realistic set of assumptions, the impact would be considerably more severe. More significant than the wealth destroyed by S. 2191 is the wealth transferred from the energy-using public to a list of selected special interests. Overall, S. 2191 would likely be--by far--the most expensive environmental undertaking in history.
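As a quick arithmetic check on the household figures cited above, the $467 average annual increase and the cumulative $8,870 are consistent if the total runs over the 19 calendar years from 2012 through 2030. The report does not spell out that year count, so the short Python sketch below is simply our reconciliation of the two numbers.

# Reconciling the two household energy-cost figures quoted in the report:
# an average of $467 more per year and a cumulative $8,870 over 2012-2030.
average_annual_increase = 467     # dollars per household per year (2006 dollars)
years = 2030 - 2012 + 1           # 19 calendar years, 2012 through 2030 inclusive

cumulative = average_annual_increase * years
print(cumulative)                 # 8873 -- essentially the report's $8,870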
William W. Beach is Director of the Center for Data Analysis; David W. Kreutzer, Ph.D., is Senior Policy Analyst for Energy Economics and Climate Change in the Center for Data Analysis; Ben Lieberman is Senior Policy Analyst in Energy and the Environment in the Thomas A. Roe Institute for Economic Policy Studies; and Nicolas D. Loris is a Research Assistant in the Roe Institute at The Heritage Foundation.

Analysts at The Heritage Foundation and Global Insight, Inc., employed a wide array of analytical models to produce the micro- and macroeconomic results reported in this paper. This section describes the models and the major steps taken by these analysts in shaping the modeling results.

U.S. Energy Model (Long-Term)

Global Insight's U.S. Energy Model has been designed to analyze the factors that determine the outlook for U.S. energy markets. A staff of more than 15 energy professionals supports the model and forecasting effort. The model is constructed as a system of several models that can be used to assess intra-market issues independently of each other. The integrated system is used to produce Global Insight's baseline Energy Outlook and allows users to simulate changes in domestic energy markets. The U.S. Energy Model is an integrated system of fuel and electric power models and the End-User Demand Model. The solution is achieved through an iterative procedure. Also, monthly models of petroleum and natural gas prices use the framework of the long-term forecast with additional weekly and monthly information to analyze seasonal fuel prices and update the price forecasts on a monthly basis. The major models of the Energy Model and their interrelationships are described below.

End-Use Demand Model. Demand for final-use energy is modeled by sector, fuel, and census region, based on the competitive position of each fuel in its end-market. The total demand for energy is estimated as a function of the stock of energy equipment, technology change, prices of competing final energy sources, and economic performance. The initial demand profile by region of the U.S. for each fuel is then integrated with the U.S. Petroleum, Natural Gas, Coal, and Electric Power Models, each of which consists of three major sub-modules--a supply and transformation module, a transportation/transmission/distribution module, and a wholesale/retail price module.

Petroleum Model. The U.S. Petroleum Model uses the world oil price projection from Global Insight's Global Oil Outlook. The model then determines refined petroleum product prices to end-users by adding refining markups, inventory, and transportation costs. For selected products, federal, state and local taxes are also accounted for in the model. The U.S. Petroleum Model also provides a baseline projection of U.S. crude and natural gas production that is based on an annual review of data and literature on U.S. reserves, production, and technological progress. A simulation block for investigating the supply response under alternative assumptions is part of this model. Imported supplies of crude and petroleum products are developed by the difference between domestic production and the sum of the direct consumption of petroleum by consumers and the transformation demand for petroleum by the power sector.

Natural Gas Model. The Natural Gas Model consists of three major sub-modules: a supply module, a transmission/distribution module, and a spot-pricing module. The supply module projects production based on analysis of U.S. reserve data, exploratory and development drilling, and technological progress.
A simulation block for investigating supply responses under alternative assumptions is part of this module. The transmission/distribution module projects cost by region. The spot-pricing model integrates the results of the End-User Demand Model, the natural gas demand by the power sector from the Electric Power Model, and the embedded supply and transmission/distribution modules to determine producer prices by basin. A conclusive solution is developed through an interactive process.

Coal Model. The Coal Model is a simulation model designed to replicate the market response of this sector under alternative scenarios. Finalized through the interactive process, the baseline market analysis is provided by JD Energy (an affiliated coal and power consulting firm) that includes analysis and forecasts of coal production, rail costs, coal flows, and coal prices.

Electric Power Model. The U.S. Electric Power Model is a detailed, regional (census region) model of the power-generation sector combined with a more aggregate module of the regional transmission and distribution sector. The preliminary demand for regional generation is determined as a function of the demand for electricity determined in the End-User Demand Model, transmission losses, and trade. Generation requirements are met through the capacity module, which projects capacity decisions based on fuel prices, operating and maintenance costs, and technological progress. Usage is projected as a function of load and marginal production cost. Through this analysis, a preliminary demand for a certain fuel by the power sector is developed that is finalized in the iterative process.

Energy Balances Model. The Energy Balances Model completes the process. This model provides national and regional summations of energy use across all fuel types and customer classes.

Operation of the Energy Models. Lieberman-Warner sets very aggressive carbon-reduction targets between 2012 and 2050 for the covered sectors. Using the energy models described above, simulation resulted in carbon dioxide allowances rising swiftly from $20 per metric ton in 2012 to $50 in 2020 and $70 in 2030 (all in 2006 prices). These allowances significantly raise energy prices for consumers. Allowed offsets were applied to the targets, which influenced the estimation of required fees. In addition, Lieberman-Warner lays out two other mechanisms for achieving the carbon-reduction targets: increasing energy from non-carbon sources and implementation of carbon capture and sequestration.

The absolute gains from additional non-carbon energy sources are relatively small, given the significant incentives already in place for this growth from EISA. For CCS, we assumed that its use in energy production became competitive with energy produced with natural gas only when the allowance fee rose above $50 per ton. For the generous-assumption simulation, we also assumed that the technology of carbon capture and storage was available for widespread use when the fee rose to this level. We also took into account the new-build and retirement and replacement options, which were inputs to the energy models that estimated the allowance fees.

Global Insight Long-Term U.S. Macroeconomic Model

The Global Insight long-term U.S. macroeconomic model is a large-scale 30-year (120-quarter) macroeconometric model of the U.S. economy. It is used primarily for commercial forecasting. Over the years, analysts at The Heritage Foundation's Center for Data Analysis have worked with economists at Global Insight to adapt the GI model to policy analysis.
In simulations, CDA analysts use the GI model to evaluate the effects of policy changes not just on disposable income and consumption in the short run, but also on the economy's long-run potential. They can do so because the GI model imposes the long-run structure of a neoclassical growth model but makes short-run fluctuations in aggregate demand a focus of analysis. The Global Insight model can be used to forecast over 1,400 macroeconomic aggregates. Those aggregates describe final demand, aggregate supply, incomes, industry production, interest rates, and financial flows in the U.S. economy. The GI model includes such a wealth of information about the effects of important changes in the economic and policy environment because it encompasses detailed modeling of consumer spending, residential and non-residential investment, government spending, personal and corporate incomes, federal (and state and local) tax revenues, trade flows, financial markets, inflation, and potential gross domestic product.

Consistent with the rational-expectations hypothesis, economic decision-making in the GI model is generally forward-looking. In some cases, Global Insight assumes that expectations are largely a function of past experience and recent changes in the economy. Such a retroactive approach is taken in the model because GI believes that expectations change little in advance of actual changes in the economic and policy variables about which economic decision-makers form expectations.

Operation of the U.S. Macroeconomic Model. The policy changes contained in Lieberman-Warner and implemented in the U.S. Energy Model (as described above) resulted in over 71 changes in the U.S. Macroeconomic Model. These changes ranged from energy-source variables (such as the price of West Texas Intermediate crude oil, an industry benchmark price series) to the carbon tax rate per ton of coal. These energy model results were introduced into the macro model in the following areas.

Energy Price Effects. Heritage analysts used the market price changes in the refiner's acquisition price for oil (West Texas Intermediate) and in natural gas prices at the wellhead (Henry Hub) directly from the energy model. The macro model contains a host of producer prices that are changed through their interaction with other variables in this model. However, the policy changes in Lieberman-Warner affect producer prices in the energy sectors directly. Thus, the energy model's settings for these producer prices were used instead of those in the macro model. Technically, energy producer prices were exogenous and driven by corresponding prices from the energy model. The following producer price categories were affected: coal, natural gas, electricity, petroleum products, and residual fuel oil.

We employed a similar procedure in implementing changes in consumer prices. In this case, the variables affected were all consumption-price deflators. Once again, we substituted energy-model settings for these variables for their macro-model counterparts. The following consumption price deflators were affected: fuel oil and coal, gasoline, electricity, and natural gas.

Energy Consumption Effects. Both the energy model and the macro model contain equations that predict changes in demand for energy, given changes in energy prices, but the energy model contains a more detailed treatment of demand. Preferring details over generality, we lined up the demand equations in both models and substituted settings from the energy model for those in the macro model.
Specifically, we lined up these demand equations:

- Total energy consumption,
- Total end-use consumption for petroleum,
- Total end-use consumption for natural gas,
- Total end-use consumption for coal, and
- Total end-use consumption for electricity.

One key transformation that took place dealt with the differing demand units used between the two models in calculating residential consumption. The energy model expresses demand in trillions of British thermal units, while the macro model projects demand in billions of constant dollars. Another key transformation focused on consumer spending on gasoline. The energy model does not contain a separate forecast for spending on gasoline or other motor fuels. To overcome this, we projected the change in consumer spending on gasoline based on the energy model's change in total highway fuel consumption.

Revenue Estimates. The energy model produces estimates of carbon emissions and of the carbon fee in dollars per metric ton. It is a simple matter to multiply emissions by the carbon fee to obtain the "revenue" from the emissions permits. Heritage analysts assumed that the revenue value of permits equals the entire value of these permits as government revenue, whether or not they are formally auctioned. If the government chooses to transfer ownership of the permits to other entities, then that would be reflected as a transfer payment in the national income accounts. The macro model permits allocation of permit revenues to the states, which was accomplished by multiplying total permit revenue by the statutory state percentage for each year. These revenues then were allocated to various specified functions as follows:

- Revenues for general state needs other than low-income support,
- Revenues for low-income support administered by the states,
- Revenues allocated to electricity and gas distributors for consumer relief,
- Revenues allocated to covered entities, and
- Revenues for federal government consumption.

Capital Spending. The energy model calculates capital spending by electric utilities in the base case and in the Lieberman-Warner case. Spending is higher (at least initially) and costlier in the Lieberman-Warner case because higher-cost power plants are built or because old plants are refurbished. The change in spending was applied to the macro model variable for real spending on utility investment after conversion to the appropriate units. The analysts then calculated what amount of spending would have been required to produce the same level of electricity capacity had the mix of spending been the same as the baseline. The purpose here is to measure the extra resources that had to go into utility construction simply because of the carbon fee--resources that will produce lower emissions but will not produce extra GDP.
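The revenue step described above is mechanically simple, and the short Python sketch below retraces it: total permit "revenue" is covered emissions times the carbon fee, and each state's receipt is that total times its statutory percentage. The emissions level and the state percentages used here are placeholders invented for illustration; only the $20-per-ton 2012 fee is a figure quoted earlier in this appendix.

# Retracing the permit-revenue arithmetic described above. The emissions
# level and the state percentages are placeholders for illustration; the
# 2012 fee of $20 per metric ton is quoted earlier in this appendix.

carbon_fee_2012 = 20.0              # dollars per metric ton of CO2 (2006 dollars)
covered_emissions_2012 = 5.0e9      # metric tons of CO2 -- assumed, not from the report

total_revenue = covered_emissions_2012 * carbon_fee_2012    # $100 billion in this example

# Hypothetical statutory state percentages (invented for illustration).
state_share = {"Illinois": 0.004, "Wisconsin": 0.002, "New Hampshire": 0.001}

for state, share in state_share.items():
    print(f"{state}: ${total_revenue * share / 1e9:.1f} billion")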
Operation of the U.S. Macroeconomic Model for Lieberman-Warner with Reasonable Assumptions

The Lieberman-Warner simulation with reasonable assumptions builds on the generous-assumptions simulation by relaxing the CCS implementation schedule. As discussed in the assumptions section of this report, there are many reasons to doubt that CCS will be implemented over the forecast period. Relaxing the CCS implementation schedule provides policymakers with an alternative that increases the economic costs of S. 2191 without significantly altering the legislation's other key assumptions. That is, the alternative or reasonable simulation attempts to portray the economic effects of carbon fees that are higher than in the generous-assumptions simulation while leaving nearly all of the other policy assumptions untouched.

We have calculated that carbon fees would have to increase $20 per metric ton, from $68 to $88 (adjusted for inflation), by 2030 to compensate, through decreased energy consumption, for carbon reductions that otherwise would be attained through carbon capture and sequestration. These higher carbon fees would begin in 2015, or about the time that CCS implementation is projected to result in a slowing of carbon-fee growth in the generous-assumptions simulation. For example, with CCS, carbon fees would be $50 in 2020 instead of $65. To implement the assumption of higher carbon fees, analysts adjusted the settings of the generous-assumptions simulation described above. We left in place the energy input prices (oil, natural gas, coal, and so forth) that were used in the basic, or generous, simulation. Likewise untouched were all of the assumptions about energy production and demand contained in the generous-assumptions simulation.

State-by-State Employment Losses
<urn:uuid:70192a95-a0e9-4408-bbef-07ff20e8012d>
{ "date": "2015-07-02T16:33:24", "dump": "CC-MAIN-2015-27", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095632.21/warc/CC-MAIN-20150627031815-00136-ip-10-179-60-89.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9035349488258362, "score": 2.765625, "token_count": 12192, "url": "http://www.heritage.org/research/reports/2008/05/the-economic-costs-of-the-lieberman-warner-climate-change-legislation" }
The back is a vital component part of the human body, consisting of the bones and muscles which allow us to stand upright, walk and lift. The spine consists of small bones called vertebrae that are stacked on top of one another, separated by soft tissue, referred to as disks. The lower back consists of five (lumbar) vertebrae that are connected by stabilizing ligaments, allowing for flexibility and movement. The majority of back sports injuries occur in the muscles, ligaments and vertebrae in the lower back. Almost all athletes experience back pain at some point during their athletic careers. According to the 2012 Midwest Orthopaedics at Rush/Illinois Athletic Trainers Association survey of Illinois certified athletic trainers, back injuries were reported among the top five most common high school sports injuries. What are common back injuries among young athletes? The most common back injuries among young athletes are ligament sprains or muscle strains in the lower back. These injuries can occur from overuse, improper mechanics, insufficient conditioning or stretching and trauma. However, more serious injuries, such as spondylolysis and spondylolisthesis can occur that may mimic a sprain or strain, so it is always important to see a sports medicine or orthopedic specialist if you experience persistent lower back pain. Spondylolysis is a stress fracture to a vertebra, usually the fourth or the fifth, in the lower back, typically due to excessive stress and pressure on the lower back. When the stress fracture weakens the spine too much, it may cause spondylolisthesis, in which the vertebra begins to slip and shift out of place. Some sports and activities may put the athlete at higher risk for development of these conditions. High risk sports include swimming/diving, gymnastics and wrestling. Which athletes most commonly get back problems? Athletes who participate in sports in which significant force is exerted on the lower back such as running, cycling, football and skiing are more prone to back injury, as well as athletes in sports that involve twisting such as golf, tennis, baseball and gymnastics. What are the symptoms of back injuries? The athlete will feel pain in his or her lower back which worsens with activity. In some cases, the symptoms of spondylolysis and spondylolisthesis are not obvious and may feel like a muscle strain. Pain may spread across the lower back and worsen as the back is arched. If the spondylolisthesis becomes more serious, the slipping vertebra may begin to press against the nerves and stiffen the lower back and hamstring muscles. Pain radiating down the leg or numbness in the leg or foot may represent a lumbar disc problem. What is the recommended treatment for back injuries? Typically, the recommended treatment for back pain is rest from activity, ice and anti-inflammatory pain medications. Heating pads may also be helpful in relieving pain. If symptoms persist, physical therapy may enhance recovery. Additional treatment may include: electric stimulation, massage, stretching and exercises to strengthen the abdominals and back. In some cases of spondylolysis and spondylolisthesis, a brace may be needed to stabilize the lower back. In severe cases of spondylolisthesis, surgery may be necessary if the vertebra continues to shift and does not respond to standard treatment. If an athlete’s vertebra slips more than 50 percent, he or she may be encouraged to participate in a sport that is less stressful on their back. 
Athletes with this condition should be examined periodically by an orthopedic physician to ensure the vertebra does not slip further. What are some strategies and exercises for preventing back injuries? For additional information about the Midwest Orthopaedics experts in the field of spine and neck surgery, call 877 MD BONES (877.632.6637). MOR is proud to be designated as a Blue Cross Blue Shield Blue Distinction® Center for Spine Surgery. Blue Distinction Centers demonstrate an expertise in quality care, resulting in better overall outcomes for patients, by meeting objective clinical measures developed with input from expert physicians and medical organizations.
<urn:uuid:c515c026-e3fc-472b-9272-436ce86d4489>
{ "date": "2019-07-18T11:46:54", "dump": "CC-MAIN-2019-30", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525627.38/warc/CC-MAIN-20190718104512-20190718130512-00136.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9443155527114868, "score": 3.265625, "token_count": 862, "url": "https://sportsmedicineweekly.com/2015/07/09/back-injuries-2/" }
The Radicals' Rancorous Rage Radicals' Ideas and Methods Continue to Torment America JUNE 01, 2005 by BECKY AKERS In a revolution for liberty, they sought power. In an age of individuality and self-reliance, they demanded obedience. In a century of personal excellence, they relished “leveling.” They called themselves Radical Patriots, as though the troops who starved and froze at Valley Forge weren’t patriotic enough. But these eighteenth-century politicians had about them little that was either radical or patriotic. They tried to subvert the truly radical revolution raging round them because, as one Loyalist bitterly summarized it, they “hate Tyranny, but . . . their meaning is they hate Tyranny when themselves are not the Tyrants.”1 The Radicals first roared to power in Philadelphia in the 1770s. They were establishing themselves, flexing their muscles, when the British sent them flying and occupied the town during the winter of 1777–78. Philadelphia’s reprieve ended with the British withdrawal that June. The Radicals returned, with policies so disastrous that they brought the city to the brink of financial ruin and civil war. Nevertheless, their influence seeped throughout the state because their ideology had been codified in Pennsylvania’s constitution. That document extolled government as a benign agent for progress, declaring that God “alone knows to what degree of earthly happiness mankind may attain by perfecting the arts of government. . . .”2 From Pennsylvania, the Radicals ascended to the Continental Congress. They never achieved their dream of ruling America, but for a few heady months they ruled Congress. Fortunately, the Radicals as a political party faded with the war. Unfortunately, their legacy lingers to this day. Their rapid rise was helped by the desperate circumstances the American Revolution inflicted on Philadelphia. Before the war, Philadelphia had been one of the New World’s loveliest cities. Its wide streets were paved, a contrast to the dirt lanes in other towns, and they lay at right angles in a spacious, logical grid. Lining them were elegant brick homes and churches, general stores, specialty shops, and even a few theaters, despite Quaker objections. Boasting roughly 30,000 inhabitants, Philadelphia was the largest city in the British empire after London (with 1,000,000). Then came the war. Philadelphia’s glory sank beneath the twin blows of inflation and invasion. Under the Crown, the 13 colonies had been forbidden to coin silver and gold. That meant the newly “free and independent States” had few mines, no dies for coining, and consequently no hard money for prosecuting the war. Congress turned to the printing presses, whose abundance in literate America proved a curse when paper money flooded forth. The resulting inflation crippled the revolution as seriously as a military defeat. Everyone suffered as markets emptied and necessities became luxuries. But at least those Americans who farmed would not starve. Philadelphians, on the other hand, were unable to grow the food and firewood they could no longer buy. In September 1777 British and Hessian troops under General Sir William Howe captured Philadelphia. They would make the city their winter quarters for the next nine months. While civilians scrambled for scraps, the enemy feasted at banquets, threw parties, gambled, and attended theater, often in the company of Philadelphia’s young belles. 
Some of these girls were Loyalists; most probably cared little about politics, especially when a party was in the offing. A few may have been Patriots stranded in the city, though many Patriots, real and Radical, fled their homes. The British officers who took over those abandoned houses did not trouble themselves to preserve rebel property. They chopped holes in parlor floors so that privies could drain into cellars. They fed furniture and fences to their fires. They looted valuables and trampled gardens. They converted churches into riding schools after cooking dinner over the pews and pulpits. With callous irony, they degraded the State House, which had seen the signing of the Declaration, by imprisoning captured American officers there. When the army evacuated the following June, both varieties of Patriots returned to a city and to homes devastated almost beyond recognition. The officers and troops who had wreaked such damage were gone, beyond the homeowners’ revenge. But large numbers of Philadelphians in addition to the flirting ladies had remained in town through the winter. Whether they were too old or weak to leave, or whether they were Loyalists glad to welcome His Majesty’s government into the rebels’ capital, these folks had accommodated the troops, sometimes by choice, other times by compulsion. That made them all Loyalists to the furious Patriots now seeing their ruined homes for the first time. The Radicals, consummate politicians, manipulated this explosive situation to increase their power. They welcomed citizens’ demands that revenge be taken for the destruction and dissipation the British had left in their wake. Radicals promised that their government would enforce morality while rooting out the corrupt culture the British had foisted on their city. Coincidentally, that meant rooting out anyone who enjoyed British fashions, books, victuals, or friends. The Radicals also promised a solution to the worsening inflation. They had already tried their hand at this in 1776, when they passed laws to save the credit of the Continental dollar—which succeeded as well as if they had legislated that the Continental Army would no longer lose battles. Nevertheless, blithe in the face of failure, the Radicals now tried fixing prices and wages. Though the Radicals had no authority to do so, they appointed a “Committee of Inspection” to spy on merchants and guarantee that they were cheating themselves in accordance with the new policies. The committee was soon poking its nose into all sorts of private transactions. Merchants suspected of selling goods for more than the Radicals liked were hauled before the committee and threatened with seizure of their stock—or worse. 
Though one leading Radical disapproved of these extralegal shenanigans, he wanted to monitor those “suspected characters” whose “spirit of Aristocracy and Pride of Wealth” prompted them to sell their goods for a profit.3 Goods went from scarce to nonexistent as merchants packed up their wares and sought saner markets in states where price-fixing was still the stuff of madness and “inconsistent with the principles of liberty.”4 The Radicals retaliated by condemning the entire class of merchants, cursing them as “forestallers” and “monopolists.” Price Controls Violate Property Rights In 1779, with hunger still haunting Philadelphia, 80 of those forestallers and monopolists argued before the Pennsylvania Council that requiring anyone to accept an arbitrary price for his goods destroyed property rights: “The limitation of prices is in the principle unjust, because it invades the laws of property, by compelling a person to accept of less in exchange for his goods than he could otherwise obtain, and therefore acts as a tax upon part of the community only.”5 The merchants pointed out that price-fixing had accomplished exactly the opposite of its proponents’ claims: far from reducing costs, it had instead made the fixed goods scarce while raising prices on those goods that had thus far escaped the government’s control. Anyone who could afford to was hoarding in anticipation of further scarcity. Also bewailing Radical economics was General John Cadwalader, a merchant whose service with Pennsylvania’s militia had nevertheless not been enough to redeem him in Radicals’ eyes. He warned that controlling prices “must inevitably produce immediate ruin to the merchants and mechanics [the working class]; and a scarcity, if not a want of every necessary of life, to the whole city.” Worse, there was no natural famine, only the shortage that results when government interferes with supply and demand: “A plentiful harvest has filled the country with an abundance . . . and a market would bring such quantities to the city, that there would be no want of these necessaries in the future.”6 Pennsylvania’s delegate to Congress, James Wilson, protested price-fixing schemes to that body: “There are certain things . . . which absolute power cannot do. The whole power of the Roman emperors could not add a single letter to the alphabet. Augustus could not compel old bachelors to marry,” and government could neither improve nor prevent the give-and-take of the market.7 But it would take more than a ruined city to dent Radical arrogance. Even after witnessing the misery to which their policies had reduced a once wealthy town, they refused to admit their mistakes. They remained true to the Politicians’ Creed––“I believe it’s everyone else’s fault, not mine”—and excused Philadelphia’s empty pantries by proclaiming, “If goods have been removed, we are not the persons who have removed them; and if those who have been guilty of such practises, should plead in excuse that they did it because they could get a few pounds more in other places, what is it but to confess they care nothing for the welfare of the community among whom they reside, and that avarice and self-interest are their only principles.”8 “Avarice and self-interest” were the worst sins a Radical could conceive, far more heinous than stealing Loyalist estates or hanging political opponents. 
One Radical even fumed that “to induce persons to lend money [to the Continental Army] by promises of exorbitant interest, is not only to dishonour a virtuous cause by applying to our vices for support, but is adding distress to our country, by fueling the disease which occasioned it.”9 Radicals saw wealth as corrupting—unless, of course, it was theirs. Wealth was a mark not of ambition, foresight, discipline, and self-restraint, but of wickedness, while those who created wealth, who owned businesses or land, were evil. Making money, per se, was evil too. The Radicals strove to reform those showing self-interest, the wealthy and those trying to become wealthy, by vilifying their “greed” and hobbling them with regulations. The Radicals expected citizens to injure themselves in favor of the “common good,” which, as defined by the Radicals, meant their regulations: “the social compact in a state of civil society . . . requires that every right or power claimed or exercised by any man or set of men, should be in subordination to the common good.”10 Then, as astute officials often do, the Radicals redefined their terms. Rather than a market’s being free when left alone by government, it was free, they declared, when it guaranteed “the right of everyone to partake of it, and to deal to the best advantage he can, on just and equitable principles, subordinate to the common good; and as soon as this line is encroached on, either by the one extorting more for an article than it is worth, or the other for demanding it for less than its value, the freedom is equally invaded and requires to be regulated.”11 Obviously, only Radical bureaucrats could decide whose principles were just and equitable, when private deals violated the common good, and what sorts of regulation would best redress extortionate prices, as well as the point at which those prices became extortionate. Radicals further controlled the economy by branding certain transactions moral and others sinful. Men selling shoddy wares at low, Radical-approved prices were good. Men smuggling rare goods into Philadelphia for sale on the black market were bad because they charged high prices to cover their risk and trouble. Radicals expected Philadelphians to content themselves with moldy bread and sour butter, sold at controlled prices, rather than hanker for good but expensive beef and pork. The Radicals did nothing by halves: they loathed and loved with equal ferocity.They hated wealthy men, extravagance with one’s own money, frugality with the public’s money, free markets, monarchy. They loved government (providing they ran it), mobs, demagoguery, and, amazingly, the Revolutionary War. That last might have been their one virtue, had their fanaticism not turned it into a vice. They persecuted, sometimes to death, anyone whose support for the war they deemed lukewarm. The words to describe Radical ideology would not be coined until a later century’s horrific experiments in totalitarianism, but they were fascists in their itch for control, socialists in their economics, and Marxists in their humorless sanctimony. They were also utopians who cared little for their victims as they struggled to remake the world to their Spartan specifications. Their version of nirvana was frighteningly modern: a strong government regulating social and economic interactions while forcing citizens to be virtuous—or at least to cultivate those “virtues” the Radicals approved. 
These consisted primarily of veneration for the state, simplicity in manners and fashion, disdain for luxury, and thrift. The Radicals also expected every citizen to “feel for the public as for himself.”12 Those who “felt” for family and friends ahead of the abstract “public,” who were wealthy or aspired to be, who were ambitious and self-interested, and who defined the Radicals’ virtues differently or prized other virtues more were enemies of the state. Also high in the Radical pantheon were equality and democracy. And, as many Americans still do, the Radicals stretched these strictly political ideas to cover all of life. Anyone who considered himself a notch above his fellows, even if he had earned such distinction, could hardly be a good Patriot. Most likely, he was not a Patriot at all. It wasn’t long before anyone of great learning or wealth or excellence in any area was suspected, even hated. That applied particularly to some of the wealthiest folks in the world, the British king and nobility. Hating them was a Radical duty, if not a downright pleasure. Indeed, the Radicals so savored the hating that they extended it to all things British. The revolution, then, became a war aimed at the British rather than the British government. That distortion, immortalized in countless textbooks and taught in countless classrooms, allows the significance of a rebellion against the statist muck miring mankind to slip past unnoticed. Despite their catastrophic reign, the Radical Patriots have escaped all censure. This may be due to the legitimacy that men who should have known better, such as Benjamin Franklin and Thomas Paine, lent them by helping them write Pennsylvania’s constitution. But many lesser-known Radicals are also revered as heroes. Joseph Reed, for example, a leading Radical who became president of Pennsylvania, began the war as a lackluster officer on General Washington’s staff. But Reed benefited from something more telling than courage: an admiring descendant wrote his biography. He whitewashed Reed’s record with the army and also papered over blemishes in his career with the Radicals. President Reed could sound positively Robespierrian at times—he once called two citizens whom he was about to hang “animals” and expressed hopes for their “speedy execution”13—but his biographer ignored such outbursts. Then, too, the Radicals have been almost entirely forgotten. Out of the extensive body of literature on the American Revolution—Amazon.com carries almost 4,000 books on George Washington alone—perhaps a handful of volumes mention them at all, and only one is devoted to them. That study was written by a Marxist who openly admitted his admiration for his subject.14 But though the Radicals have disappeared so completely not even footnotes disclose them, their ideas continue to torment the country—as do their methods: what worked on eighteenth-century Americans works as well today, and politicians, seldom original in their evil, merely recycle Radical tricks. During their tenure in Philadelphia, the Radicals pulled stunts still popular in the political repertoire, whether setting wage and price controls or banning anything fun, specifically theater, horse-racing, and gambling. They stifled dissent by dismissing their critics as “Loyalists” in cahoots with the British, just as the President’s critics today are slandered as soft on terrorism. 
Not surprisingly, many Philadelphians with choice estates turned out to be Loyalists whether they protested Radical measures or not, and their properties were confiscated in an early version of asset forfeiture. They were the lucky ones: a few “Loyalists” who especially irritated the Radicals were hanged. Finally, as they committed their worst outrages, the Radicals canted about liberty. Like modern leaders, they used the same words other Americans did but first took care to twist them to their purposes.The Radicals called for “freedom” loudly and often, but they meant freedom through government, not freedom from government. Nor were they concerned that they thereby spoke not of freedom at all but of slavery. They were perhaps the first American politicians to use the rhetoric of liberty to destroy liberty. The beggary the Radicals inflicted on eighteenth-century Americans warns 21st-century Americans against the state. Neither original nor unique in their folly, the Radicals were the usual run of rulers, mouthing the same tired lies, hiding behind the same old excuses. Like today’s politicians, the Radicals claimed they could manage markets better than those participating in them. When that failed, they played one group of citizens against another, consumers against merchants, Patriots against Loyalists, persuading each that the other was an enemy from whom only government could save them. The cooperation inherent in free markets vanquishes such paranoia, but many folks, then and now, listen to the demagogues instead of trusting their own experiences in the marketplace. And because revolutionary Americans nearly worshipped political freedom, the Radicals couched even their most dictatorial laws and ideas in the language of liberty. However, they subtly and without fanfare reinterpreted terms until their words meant the opposite of what their audience actually heard. So it goes today. Politicians speak of “security” when they mean surveillance by government, “gun rights” when they mean gun registration, and “equality” when they mean that some groups will be favored over others. A poet who survived the Radicals’ rampage described their tactics, still in use today: The Mob tumultuous instant Seize With Rancrous Rage, on whom they please. The People Cannot Err. Can it be wrong in Freedom’s cause To Tread down Justice, Order, Law When all the Mob concur?15 1. Samuel to Hannah Peters, n.d., Samuel Peters, Papers, Connecticut Historical Society,VIII, 24. 2. Pennsylvania Constitution, 1776. 3. William B. Reed, The Life and Correspondence of Joseph Reed, 2 vols. (Philadelphia: Lindsay & Blakiston, 1847), vol. 2, p. 139. 4. Quoted in Thomas Fleming, Liberty! The American Revolution (New York: Viking, 1997), p. 285. 5. Pennsylvania Packet, September 10, 1779. 6. General John Cadwalader, Pennsylvania Packet, July 31, 1779, quoted in Sam Bass Warner, Jr., The Private City: Philadelphia in Three Periods of its Growth (Philadelphia: University of Pennsylvania Press, 1968) p. 41. 7. James Wilson, quoted in Page Smith, A New Age Now Begins: A People’s History of the American Revolution, 2 vols. (New York: McGraw-Hill Book Company, 1976), p. 1364. 8. Pennsylvania Packet, September 25, 1779. 9. Massachusetts Historical Society, Proceedings (Boston, 2d series, vol. III [1855–58]), p. 15. 10. Pennsylvania Packet, September 10, 1779. 11. 
Steven Rosswurm, Arms, Country, and Class: The Philadelphia Militia and the “Lower Sort” During the American Revolution (New Brunswick, N.J., and London: Rutgers University Press, 1987), p. 196. 12. Principles and Articles of the Constitutional Society (a Radical political club), Pennsylvania Packet, April 1, 1779. 13. Pennsylvania Packet, November 7, 1778. 14. Robert Brunhouse, The Counter-Revolution in Pennsylvania, 1776–1790 (Harrisburg, Pa.: Pennsylvania Historical and Museum Commission, 1971 ). 15. Joseph Stansbury, “Historical Ballad of the Proceedings at Philada 24th & 25th of May.” MS. 1491–1492, Chester County Historical Society.
What is Cub Scouting? The mission of the Boy Scouts of America is to prepare young people to make ethical and moral choices over their lifetimes by instilling in them the values of the Scout Oath and Law. Since 1930, the Boy Scouts of America has helped younger boys through Cub Scouting. It is a year-round family program designed for boys who are in the first grade through fifth grade (or 7, 8, 9, and 10 years of age). Parents, leaders, and organizations work together to achieve the purposes of Cub Scouting. Currently, Cub Scouting is the largest of the BSA's three membership divisions with membership over 1 million. (The others are Boy Scouting and Venturing.) The ten purposes of Cub Scouting are: (1) Character Development (2) Spiritual Growth (3) Good Citizenship (4) Sportsmanship and Fitness (5) Family Understanding (6) Respectful Relationships (7) Personal Achievement (8) Friendly Service (9) Fun and Adventure (10) Preparation for Boy Scouts Cub Scouting members join a Cub Scout pack and are assigned to a den, usually a neighborhood group of six to nine boys. Tiger Cubs (first-graders), Wolf Cub Scouts (second graders), Bear Cub Scouts (third graders), and Webelos Scouts (fourth and fifth graders) meet weekly. Once a month, all of the dens and family members gather for a pack meeting under the direction of a Cubmaster and pack committee. The committee includes parents of boys in the pack and members of the chartered organization.
Depression is a risk for women during, and especially after pregnancy … a condition called postpartum depression. By some estimates, postpartum depression affects as many as one in four new mothers in the first year after childbirth. It can make a mother feel sad, worthless, and hopeless, and make it more difficult to care for and bond with her baby. Back in 2007, an expert panel appointed by the American Psychiatric Association concluded that people who consume higher amounts of omega-3s from fish (EPA and DHA) generally enjoy reduced risks of depression and other mood disorders. For more on that, see “Top Psych Panel Says Omega-3s Deter Depression, Bipolar Disorder.” But what about postpartum depression? Although most of the several epidemiological and clinical studies published to date produced positive evidence, the results are considered encouraging but inconclusive … in part because of the paucity of sound, reliable studies. As a researcher at Emory University put it in a review published in April of 2011, “The results are mixed, but one recently completed large trial found no evidence of benefit among women who received DHA during pregnancy.” (Ramakrishnan U 2011) The authors of two other recent evidence reviews reached similar conclusions: “In conclusion, the question of whether EPA and DHA administration is effective in the prevention or treatment of perinatal depression cannot be answered yet. The quality of research in this area needs to improve.” (Jans LA et al. 2010) “Overall, results have been inconclusive, but further investigation of omega-3 fatty acids is warranted because they did improve depression scores and appeared to be safe during pregnancy.” (Borja-Hart NL, Marino J 2010) (The perinatal period is the time immediately before and after birth … starting at about the 20th week of gestation and ending about one month after birth.) Recently, a small clinical trial from the University of Connecticut (UConn) found that omega-3 fish oil appears to reduce symptoms of postpartum depression, adding more pressure to conduct large, well-controlled trials. As the authors wrote, “These results offer a basis for guidelines for DHA consumption by pregnant women and for community-based efforts to increase awareness of the value of DHA/fish consumption for maternal mental health.” (Judge MP et al. 2011) UConn study finds positive indications For the past several years, Michelle Price Judge, an assistant professor-in-residence at the UConn School of Nursing, has been looking into how omega-3s derived from fish impact maternal and infant health. In a recent study, Judge focused on whether DHA – the omega-3 essential to brain development and function – lowers the risk of postpartum depression when it is consumed during pregnancy (Judge MP et al. 2011). Her coauthors included UConn Professor Cheryl Beck – an international expert on postpartum depression – and Carol Lammi-Keefe of Louisiana State University. They conducted a randomized, double-blind study involving 42 pregnant women, monitored from the 24th week of pregnancy to birth. The results were presented at Experimental Biology 2011 in Washington, D.C. in April. Dr. Judge’s team found that, compared with women who took a placebo pill, those who took 300mg of DHA five days a week had lower scores on a standardized postpartum depression screening scale (developed by Beck). 
The women in the fish oil group had significantly lower scores for symptoms of anxiety/insecurity, emotional liability (characterized by excessive emotional reactions and frequent mood changes), and “sense of loss of self”. However, because their study was quite small, Judge said that her group could not conclude that fish oil supplements reduce the risk or severity of major postpartum depression. Omega-3s, fish fat, and pregnancy In some animals, a deficiency of omega-3 DHA has been associated with lower brain levels of important neurotransmitters such as dopamine and serotonin, which play key roles in mood elevation. Additionally, high blood levels of omega-3s can reduce levels of certain messenger proteins “cytokines” that promote systemic inflammation, which also is considered a factor in depression. Dr. Judge made these cogent comments: “Generally, experts agree that the omega-3 fatty acids derived from fish are beneficial to maternal and infant health. Yet on average, pregnant women consume less than half of the level considered optimal during pregnancy. If women consume 12 ounces (two to three servings) of fish weekly, there is no need for fish oil supplementation. Women who consume very little or no fish should consider supplementation.” (UConn 2011) Her opinion – that the rewards of fishy diets to children and pregnant/nursing mothers far outweighs any hypothetical risk – has ample support … see “Experts Urge an End to Fishy U.S. Advice for Mothers, Children” and “FDA Analysis Supports More Fish for Moms and Kids.” Worldwide, health authorities recommend that pregnant women consume at least 200 mg of DHA daily. In fact, experts recommend from 260mg to 660mg per day for all adult women, pregnant or otherwise. Fatty fish such as wild salmon, sardines, herring, tuna, including canned light tuna, are excellent sources of DHA and EPA … the omega-3 fatty acids found only in fish oil. Some fish, such as shark, swordfish, king mackerel, and marlin can contain high amounts of mercury, and should be avoided. Fish oil supplements provide a safe alternative, either because they have been chemically refined to remove almost all mercury, or because they come from naturally pure fish such as wild Alaskan salmon. Prior research has shown that the omega-3 fatty acids found in the primary fat of fish like salmon and tuna are preferentially transferred through the placenta during the later stages of pregnancy in order to help the baby grow and mature. As a result, expectant mothers often show a depletion of maternal stores of omega-3s in their bodies. According to Dr. Judge, this lack of DHA in mothers is compounded by the fact that pregnant women tend to eat only a fraction of the amount of fish and DHA considered optimal during pregnancy. The research was funded by the Patrick and Catherine Weldon Donaghue Medical Research Foundation. Borja-Hart NL, Marino J. Role of omega-3 Fatty acids for prevention or treatment of perinatal depression. Pharmacotherapy. 2010 Feb;30(2):210-6. Review. Freeman MP. Complementary and alternative medicine for perinatal depression. J Affect Disord. 2009 Jan;112(1-3):1-10. Epub 2008 Aug 8. Review. Jans LA, Giltay EJ, Van der Does AJ. The efficacy of n-3 fatty acids DHA and EPA (fish oil) for perinatal depression. Br J Nutr. 2010 Dec;104(11):1577-85. Epub 2010 Nov 16. Review. Judge MP et al. Maternal docosahexaenoic acid (DHA, 22:6n-3) consumption during pregnancy decreases postpartum depression (PPD) symptomatology. The FASEB Journal. 
2011;25:349.7 Accessed at http://www.fasebj.org/cgi/content/meeting_abstract/25/1_MeetingAbstracts/349.7?sid=722a0ce8-35e9-4a7e-b5ae-625ef03534f3 Judge MP, Harel O, Lammi-Keefe CJ. A docosahexaenoic acid-functional food during pregnancy benefits infant visual acuity at four but not six months of age. Lipids. 2007 Mar;42(2):117-22. Epub 2007 Jan 19. Judge MP, Harel O, Lammi-Keefe CJ. Maternal consumption of a docosahexaenoic acid-containing functional food during pregnancy: benefit for infant performance on problem-solving but not on recognition memory tasks at age 9 mo. Am J Clin Nutr. 2007 Jun;85(6):1572-7. Ramakrishnan U. Fatty acid status and maternal mental health. Matern Child Nutr. 2011 Apr;7 Suppl 2:99-111. doi: 10.1111/j.1740-8709.2011.00312.x. Review. University of Connecticut (UConn). Fish Oil May Reduce Postpartum Depression Symptoms. June 1, 2011. Accessed at http://today.uconn.edu/blog/2011/06/fish-oil-may-reduce-postpartum-depression-symptoms/
Section 3 - Frequency of Primes

In this section, I will present more specific results concerning the frequency of occurrence of primes, in terms of formulae that provide infinite supplies of primes and formulae that provide a high density of primes. I will also consider the counting of primes of various forms and investigate prime gaps.

We have already established the infinitude of primes. Since all but the first prime are odd, it is obvious that there is an infinity of primes of the form 2n+1 (or 2n-1), since all odd numbers can be represented in this form. What if we restrict consideration further to numbers of the form 4n+1, 4n+3, 8n+1, etc.? In the final analysis, it was proved by Dirichlet that for any positive integers a and b such that gcd(a, b) = 1, there is an infinity of primes of the form an+b. However, the proof of this involves mathematics too advanced for inclusion here. Nevertheless, simpler maths can still be used to provide short proofs for particular forms, as we shall see. First, I need to develop some additional theory.

Definition: Let p be an odd prime and gcd(a, p) = 1. If the congruence x^2 ≡ a (mod p) has a solution, then a is a quadratic residue of p, and if not then a is a quadratic non-residue of p.

This concept arises naturally from attempts to find solutions of quadratic congruences of the more general form ax^2 + bx + c ≡ 0 (mod p).

Lemma 3.1 (Euler's Criterion). Let p be an odd prime and gcd(a, p) = 1. Then a is a quadratic residue of p if and only if a^((p-1)/2) ≡ 1 (mod p).

Proof: Suppose a is a quadratic residue. Then x^2 ≡ a (mod p) for some x. Since gcd(a, p) = 1, we have gcd(x, p) = 1, so by Fermat's Little Theorem, a^((p-1)/2) ≡ x^(p-1) ≡ 1 (mod p), as required. Now suppose that a^((p-1)/2) ≡ 1 (mod p), and let r be a primitive root of p. Then a ≡ r^k (mod p) for some k, so r^(k(p-1)/2) ≡ 1 (mod p). Now, by Lemma 2.13(ii), p-1 divides k(p-1)/2, so k must be an even number. Let k = 2j. Then (r^j)^2 ≡ a (mod p), so r^j is a solution of x^2 ≡ a (mod p). Hence result.

Definition: Let p be an odd prime and gcd(a, p) = 1. The Legendre symbol (a/p) is defined as follows: if a is a quadratic residue of p, then (a/p) = 1, otherwise (a/p) = -1.

The Legendre symbol is a very convenient and condensed notation, and allows us to develop mathematical relations involving it as a Boolean operator.

Lemma 3.2: Let p be an odd prime and gcd(a, p) = gcd(b, p) = 1. Then (i) if a ≡ b (mod p), then (a/p) = (b/p); (ii) (a^2/p) = 1; (iii) (a/p) ≡ a^((p-1)/2) (mod p); (iv) (ab/p) = (a/p).(b/p).

Proof: (i) If x^2 ≡ a (mod p) and a ≡ b (mod p) then x^2 ≡ b (mod p) by the transitivity of congruence. (ii) Take x = a. (iii) This is a reformulation of Euler's Criterion.

Proof of (iv): (ab/p) ≡ (ab)^((p-1)/2) ≡ a^((p-1)/2).b^((p-1)/2) ≡ (a/p).(b/p) (mod p). Each of (ab/p), (a/p) and (b/p) is either 1 or -1, so the difference (ab/p) - (a/p).(b/p) is either 0, 2, or -2. Since p is an odd prime, this difference must be 0. Hence result.

In (ii) above, we can take a = 1, so that (1/p) = 1. Also, by Lemma 3.2(iii), (-1/p) ≡ (-1)^((p-1)/2) (mod p), and since (-1/p) is either 1 or -1, we must have (-1/p) = (-1)^((p-1)/2). Since (p-1)/2 is even if p is of the form 4n+1 and odd if p is of the form 4n+3, this can be rephrased as:

Lemma 3.3. (-1/p) = 1 if p ≡ 1 (mod 4) and (-1/p) = -1 if p ≡ 3 (mod 4).

We can find such explicit expressions in other cases, for instance:

Lemma 3.4. (2/p) = (-1)^((p^2-1)/8), i.e. (2/p) = 1 if p ≡ ±1 (mod 8) and (2/p) = -1 if p ≡ ±3 (mod 8).

Proof: Consider the congruences:
p - 1 ≡ 1.(-1)^1 (mod p)
2 ≡ 2.(-1)^2 (mod p)
p - 3 ≡ 3.(-1)^3 (mod p)
4 ≡ 4.(-1)^4 (mod p)
etc.
up to the halfway point r ≡ ((p-1)/2).(-1)^((p-1)/2) (mod p), where r is either (p-1)/2 or p - (p-1)/2, depending as p ≡ 1 or 3 (mod 4), respectively. The left hand side of each of these congruences is even, and together the left hand sides consist of every even number up to p-1. Hence, multiplying the congruences, we obtain 2^((p-1)/2).((p-1)/2)! ≡ ((p-1)/2)!.(-1)^(1+2+…+(p-1)/2) (mod p). Cancelling the common term, and noting that 1+2+…+(p-1)/2 = (p^2-1)/8, we get 2^((p-1)/2) ≡ (-1)^((p^2-1)/8) (mod p), as required, by Lemma 3.2(iii) and the fact that both sides are either 1 or -1.

We can now give our first improvement over the 2n+1 formula.

Lemma 3.5. (i) There is an infinity of primes of the form 4n+3. (ii) There is an infinity of primes of the form 4n+1.

Proof of (i): This involves only very basic theory. Assume the statement is false, and let N be the number obtained by subtracting 1 from four times the product of all primes of the form 4n+3. Then N is itself of the form 4n+3. Now, N is odd and so all its divisors are of the form 4n+1 or 4n+3. However, the product of two or more integers of the form 4n+1 must also be of the form 4n+1, so N must be divisible by a prime q of the form 4n+3. Being of that form, q is one of the primes in the product, so q also divides N+1, whence q divides 1, a contradiction. Hence result.

Proof of (ii): Assume the statement is false. Let P be the product of all primes of the form 4n+1, and let N = (2P)^2 + 1. Now, N is odd, so let p be an odd prime that divides N. Then (2P)^2 ≡ -1 (mod p), so -1 is a quadratic residue of p, i.e. (-1/p) = 1. By Lemma 3.3, p is therefore of the form 4n+1, and so p divides P. But then p also divides N-1 = (2P)^2, so p divides 1, a contradiction. Hence result.

Lemma 3.6. There is an infinity of primes of the form 8n-1.

Proof: Assume the statement is false. Let P be the product of all primes of the form 8n-1, and let N = (4P)^2 - 2. Let p be any odd prime that divides N. Then (4P)^2 ≡ 2 (mod p), so 2 is a quadratic residue of p. By Lemma 3.4, we must have p ≡ ±1 (mod 8). Now the odd part of N, namely 8P^2 - 1, is of the form 8n-1, and the product of two or more integers of the form 8n+1 must also be of the form 8n+1, so N must be divisible by a prime q of the form 8n-1. But q divides P, so q also divides N+2 = (4P)^2, whence q divides 2, a contradiction. Hence result.

Such arguments as those just given can be repeated and extended for other forms, especially since we can use the following famous result.

Lemma 3.7 (Gauss' Quadratic Reciprocity Law). If p and q are distinct odd primes, then (p/q).(q/p) = (-1)^(((p-1)/2).((q-1)/2)).

The proof of this result is not difficult conceptually, but does involve a lot of formatting of mathematical equations and so I shall leave it out. It can be found in the reference texts.

Consider the prime 3. We have 1^2 ≡ 2^2 ≡ 1 (mod 3), so (p/3) = 1 if p ≡ 1 (mod 3) and (p/3) = -1 if p ≡ 2 (mod 3). Using Lemma 3.7, we have (3/p) = (p/3) if p ≡ 1 (mod 4) and (3/p) = -(p/3) if p ≡ 3 (mod 4). Combining these, we obtain the following additional explicit result:

Lemma 3.8. (3/p) = 1 if p ≡ ±1 (mod 12) and (3/p) = -1 if p ≡ ±5 (mod 12).

This result will be of use when we consider Fermat numbers.

Let us now consider counting prime numbers.

Definition: Let π(x) be the number of primes less than or equal to x.

It has been known for over one hundred years that π(x) ~ x / log(x) (the Prime Number Theorem, proved independently by Hadamard and de la Vallée Poussin), with the probability that a particular number p is prime being approximately 1 / log(p). The proof of these facts is well beyond the scope of these pages, in the realms of analytic number theory, but we shall make use of the results from time to time. Obviously, we can verify the value of π(x) for small x by brute force, and this has been done in the past at least up to x = 10^12.
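For small x this brute-force approach is easy to reproduce. The following minimal Python sketch (illustrative only; the function name is mine and no attempt is made at efficiency) counts primes with a sieve of Eratosthenes and compares the result with the first-order estimate x / log(x):

```python
# A minimal brute-force sketch: pi(x) via a sieve of Eratosthenes.
import math

def prime_pi(x):
    """Count the primes <= x with a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"                     # 0 and 1 are not prime
    for n in range(2, int(x ** 0.5) + 1):
        if sieve[n]:
            sieve[n * n::n] = bytearray(len(sieve[n * n::n]))
    return sum(sieve)                            # entries are 0 or 1

for x in (10 ** 3, 10 ** 6):
    print(x, prime_pi(x), round(x / math.log(x)))
# 1000      168     145
# 1000000   78498   72382
```

The exact counts (168 and 78498) sit noticeably above x / log(x), which is typical for ranges of this size.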
However, a method exists, originated by Meissel and refined in turn by Lehmer, then Lagarias, Miller and Odlyzko, then Deleglise and Rivat, that allows the calculation of π(x) using only the knowledge of explicitly calculated values up to √x, plus some additional calculations on the number of integers surviving trial division by the first π(√x) primes. This has been used to calculate π(x) to much higher values. The current record is π(4×10^22) = 783,964,159,852,157,952,242. The pi-x project is an active collaboration taking place to calculate π(10^23). Check the records section for a table of prime counts to various powers of 10.

All odd primes are either of the form 4n+1 or 4n+3, and we have seen that both of these arithmetic progressions contain an infinity of primes. However, this gives no indication as to the relative growth in the numbers of primes of each form. Let us expand the definition of the counting function, as follows.

Definition. Let π_{d,a}(x) be the number of primes less than or equal to x in the arithmetic progression dn + a, where gcd(a, d) = 1.

Consider d = 4. Then we can take a = 1 and a = 3. We observe that π_{4,1}(x) ≤ π_{4,3}(x) for all small x, and it is not until x = 26861 that π_{4,1}(x) > π_{4,3}(x). Similarly, apart from p = 3, all odd primes are either of the form 3n+1 or 3n+2, and we can check that π_{3,1}(x) ≤ π_{3,2}(x) for all small x. This time, however, it is not until x = 608981813029 that π_{3,1}(x) > π_{3,2}(x). Needless to say, it has been proved that primes in arithmetical progressions to the same modulus have the same density in the long run, namely that π_{d,a}(x) ~ x / [φ(d).log(x)], noting that the right hand side of this formula does not depend on a.

Primes seem to be fairly commonly occurring, and the Prime Number Theorem gives an idea of how common. In fact, there is still a 1% chance of a random number near 10^43 being prime (a percentage which increases if we restrict our selections to those where the units digit is 1, 3, 7 or 9), so let us make some reasonable conjectures:

A. For any n > 1, there is always a prime p such that n < p < 2n.

B. For any n ≥ 1, there is always a prime p such that n^2 < p < (n+1)^2.

In fact, Conjecture A has the status of a theorem (Bertrand's Postulate), whereas Conjecture B remains unproved. However, it is an entirely reasonable assumption, I'm sure you would agree. Now (n+1)^2 = n^2 + 2n + 1. The following conjecture is even tighter.

C. For any n ≥ 1, there is always a prime p such that n^2 < p < n^2 + n.

Casual observation would tend to sustain the idea that wherever we are in the integers, we are never too far away from a prime. In fact, even though the above conjectures are reasonable, and may well be true, it is easy to produce arbitrarily long sequences of integers, none of which is a prime. For instance, the sequence starting at n!+2 and ending at n!+n has n-1 consecutive composites, and extending the sequence on both sides may continue to provide composites for some time, although ultimately we know that a prime will be found. On the other hand, for this sequence to produce 1000 composites in a row, we must consider n to be near 1000, and 1000! is a number of 2568 digits. Similarly, the sequence starting at p#+2 and ending at p#+p, where p is a prime, has p-1 consecutive composites, and 1000# has 416 digits. Can we find runs of n composites among significantly smaller numbers? This problem is often viewed in terms of prime gaps, in the following manner.

Definition: Let d_n = p_{n+1} - p_n be the gap between successive primes.

Obviously, since p_1 = 2 is the only even prime, we have that d_n is even for all n > 1. Also, a gap of d_n provides a sequence of d_n - 1 consecutive composite integers.
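The opening values of d_n are cheap to generate from a sieve; here is a small Python sketch (standard library only, the helper name is mine):

```python
# Listing the first prime gaps d_n = p_{n+1} - p_n.
def primes_up_to(x):
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for n in range(2, int(x ** 0.5) + 1):
        if sieve[n]:
            sieve[n * n::n] = bytearray(len(sieve[n * n::n]))
    return [n for n in range(2, x + 1) if sieve[n]]

primes = primes_up_to(100)
gaps = [q - p for p, q in zip(primes, primes[1:])]
print(gaps[:17])   # [1, 2, 2, 4, 2, 4, 2, 4, 6, 2, 6, 4, 2, 4, 6, 6, 2]
```

Its output matches the list of first gaps quoted below.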
The first gaps are 1, 2, 2, 4, 2, 4, 2, 4, 6, 2, 6, 4, 2, 4, 6, 6, 2, …, and in general there are two issues at hand: (i) for a given even number 2k, where does a gap of size 2k first occur? (ii) for a given x, how large is the largest gap between successive primes up to x?

Point (i) presupposes that there is always a gap of size 2k for any k. The following conjecture goes even further:

D. For any k, there are infinitely many values of n such that d_n = 2k.

This conjecture has not even been proved for k = 1, though it is thought to be true. While evaluating π(x) by explicitly calculating every prime less than or equal to x, both the first occurrences for various 2k, and the largest gap up to that point, can be read off immediately. Occasionally, a first gap of size 2k will be found before a first gap of size 2m, where k > m. In this case, the first occurrence of 2m-1 consecutive composite numbers occurs within the earlier gap. The smallest example of this is the first occurrence of a gap of size 10, following the prime 139, which occurs later than the first gap of size 14, which follows the prime 113.

There are two main methods used for finding large prime gaps: theory-based examination of specially constructed sequences with built-in divisibility properties, or brute force, the latter encouraged by the advent of efficiently implemented sieving and pseudoprime tests, by searching for large runs of composites flanked by probable primes. For convenience I refer to these as pseudoprime gaps. They may be converted to prime gaps once deterministic tests are performed on the endpoints. Check the records section for the current best.

Since we can always find a run of composites of any length, it is of interest to measure how much better these runs are than the pathological examples given above. If we consider that a run of n composites can be found just after n#, then a good measure is the ratio g : log(p), where p is the probable prime at the lower end of the run and g is the gap. The larger this ratio, the larger the gap in comparison to the expected gap size at that magnitude. Since log(n#) ≈ n, we can consider the pathological case as giving a ratio of g : g = 1. Dubner has noted that if p#+1 and p#-1 are both composite, a common condition, then the range p#-p to p#+p has a run of 2p+1 composites at least, and therefore we should consider the pathological case as giving a ratio (or score) of 2g : g = 2. The constructed solutions tend to give the best scores. Paul Leyland keeps a record list of the gaps that score highest in this manner.

The special case d_n = 2 has come under a lot of scrutiny.

Definition. If p and p+2 are both prime, then they are called twin primes.

The first few examples of twin primes are (3,5), (5,7), (11,13), (17,19), (29,31), (41,43), etc. As alluded to earlier, it has not been proved that the number of twin prime pairs is infinite, but there is no-one who actually doubts this to be true. On the other hand, while the sum of the inverses of all primes diverges, the sum of the inverses of all twin primes (primes in more than one pair counting twice) converges to Brun's constant (B ≈ 1.902160577783278…). Additionally, it has been proved that there are arbitrarily long sequences of primes not containing a twin prime pair. The number of twin prime pairs with smallest member less than or equal to x is denoted π_2(x). The current record is π_2(3×10^15) = 3310517800844. See the records section for the top-10 known twin prime pairs, and a table of twin prime counts for various powers of 10. Comprehensive lists of prime gaps and twin prime counts are available on Thomas Nicely's website.
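Both the first-occurrence anomaly and the twin prime counts just described can be checked with the same kind of sieve. The following Python fragment is again only a sketch (helper name mine; the approximate value of the partial Brun sum is my own estimate, not taken from the text):

```python
# First occurrences of gap sizes, plus twin prime counting below 10^6.
def primes_up_to(x):
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for n in range(2, int(x ** 0.5) + 1):
        if sieve[n]:
            sieve[n * n::n] = bytearray(len(sieve[n * n::n]))
    return [n for n in range(2, x + 1) if sieve[n]]

primes = primes_up_to(10 ** 6)

# First occurrence of each gap size: gap 14 (after 113) precedes gap 10 (after 139).
first_gap = {}
for p, q in zip(primes, primes[1:]):
    first_gap.setdefault(q - p, p)
print(first_gap[14], first_gap[10])     # 113 139

# Twin prime pairs below 10^6 and a partial sum of Brun's series.
twins = [(p, q) for p, q in zip(primes, primes[1:]) if q - p == 2]
print(len(twins))                        # pi_2(10^6) = 8169
print(sum(1 / p + 1 / q for p, q in twins))   # roughly 1.71; convergence to
                                              # B = 1.9021... is extremely slow
```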
There is also competition to find the largest pair of twin primes. The form k·2^n ± 1 is typical of the largest twin primes, again because of the availability of suitable primality tests. There is even an interest in the largest known number of consecutive twin primes, that is, a sequence of 2n consecutive primes providing n pairs of twins. This currently stands at n = 8, the smallest sequence starting at the prime 1107819732821 and finishing at the prime 1107819733063 (deVries, 2001).

I have already mentioned Dirichlet's Theorem, which states that arithmetic progressions of the form a + kd contain an infinity of primes whenever gcd(a, d) = 1. A corollary of this is the following:

Lemma 3.9. If gcd(a, d) = 1 and 2 divides ad, then there exists an infinite sequence of pairwise relatively prime integers n_1, n_2, …, such that a + n_i·d is prime.

Proof: Let a_0 = a+d and d_0 = 2d(a-d). Then gcd(a_0, 2) = gcd(a_0, d) = gcd(a_0, a-d) = 1. Hence gcd(a_0, d_0) = 1. By Dirichlet, there is an m_1 such that a_0 + m_1d_0 = p_1 is a prime. Now, a_0 + m_1d_0 = a + n_1d where n_1 = 1 + 2m_1(a-d), so n_1 ≡ 1 (mod (a-d)). Assume that k ≥ 1 and n_1, n_2, …, n_k have already been found, and let N_k = n_1n_2…n_k. Then N_k ≡ 1 (mod (a-d)). Let a_k = a - d + 2dN_k and d_k = 2dN_k(a-d). Then gcd(a_k, 2) = gcd(a_k, d) = gcd(a_k, N_k) = gcd(a_k, a-d) = 1 (using gcd(a-d, N_k) = 1), and hence gcd(a_k, d_k) = 1. By Dirichlet, there exists m_{k+1} such that a_k + m_{k+1}d_k = p_{k+1} is prime. Now, a_k + m_{k+1}d_k = a + n_{k+1}d where n_{k+1} = 2N_k - 1 + 2m_{k+1}N_k(a-d), so n_{k+1} ≡ 1 (mod (a-d)), and n_{k+1} ≡ -1 (mod N_k), so gcd(n_{k+1}, n_j) = 1 for all j ≤ k. Hence result.

The primes themselves form an infinite sequence of pairwise relatively prime integers. This leads to the following conjecture.

E. If gcd(a, d) = 1 and 2 divides ad, then there exist infinitely many primes q_1, q_2, …, such that a + q_i·d is prime.

The connection with twin primes is obvious - just take a = 2 and d = 1. Taking a = 1 and d = 2, conjecture E provides the following:

F. There are infinitely many primes q such that 2q+1 is also prime.

Definition: Such primes q as in Conjecture F are called Sophie-Germain primes.

Are there infinitely many Sophie-Germain primes? Probably, but no proof exists. However, whenever a very large prime is found, it is usually checked to see if it is one of a Sophie-Germain prime pair. It is also convenient to sieve numbers of the forms k·2^n - 1 and k·2^(n+1) - 1 simultaneously to search for large Sophie-Germain primes. Check the records section for the current top-10. Needless to say, primes of this form would not be given a name if they did not have some additional importance, one of which will become apparent in Section 4.

Concerning arithmetic progressions directly, there are two issues worth investigating: (i) arithmetic progressions whose terms are all prime, and (ii) arithmetic progressions of consecutive primes. In the first case, we try to find snapshots of consecutive values in the sequence a + kd which are all prime. In the second case, the primes must be consecutive. It seems reasonable to expect progressions of type (ii) to have small d, while those of type (i) can be more flexible. This is given some justification by the following.

Lemma 3.10. If gcd(a, d) = 1, d ≥ 2 and a, a+d, a+2d, …, a+(n-1)d is a sequence of n primes in arithmetic progression, and q is the largest prime less than or equal to n, then either the product of the primes up to q divides d, or a = q and the product of the primes less than q divides d.

Proof: If p is a prime not dividing d, then the numbers a, a+d, a+2d, …, a+(p-1)d are pairwise incongruent modulo p, and one of them is divisible by p.
Assume that the product of the primes up to q does not divide d, and let p be the smallest prime such that p ≤ n and p does not divide d. Then p divides a+kd for some 0 ≤ k < p. But a+kd is prime, so p = a+kd. Now a is a prime, and if k ≠ 0 then a < p, so by the definition of p, a divides d, which is false; so k = 0, i.e. p = a. If p < q, then p < n, so a+pd is a term of the progression and is prime, which is false since it is divisible by p. Hence p = q. By the definitions of p and q, either p does not exist, in which case all the primes up to q divide d, or p = a = q, and the primes smaller than p (and therefore q) divide d, as required.

Here's a simple example: a = 5, d = 6 gives a progression of length n = 4 (namely 5, 11, 17, 23), so q = 3. Now, either a = q, which is not true in this case, or 2·3 divides d, which is true.

Simply writing out the first few primes in a grid, it is easy to see that there is no shortage of primes in arithmetic progression (PAPs). PAPs of length 2 have been considered already in the form of prime gaps. It has been proved that there is an infinity of PAPs of length 3. However, for longer PAPs, we are left with the following conjecture:

G. For any n ≥ 4, there are infinitely many PAPs of length n.

As usual, we are always interested in the longest and largest PAPs. Check the records section for the largest known 3-PAPs, 4-PAPs, 5-PAPs and 6-PAPs. In contrast, the longest PAP currently known has length 22: 11410337850553 + 4609098694200·k, for 0 ≤ k ≤ 21 (Pritchard, Moran, Thyssen, 1993). In this case, the common difference is divisible by the product of a lot of small primes, a common occurrence for long PAPs, and one which Lemma 3.10 explains. The longest PAP currently known consisting of consecutive primes has length 10 (Toplic, Forbes, 1998), and is rather amazing: 100996972469714247637786655587969840329509324689190041803603417758904341703348882159067229719 + 210, 420, 630, 840, 1050, 1260, 1470, 1680, 1890. Check the records section for the largest known 3-CPAPs and 4-CPAPs.

I mentioned above that π(x) ~ x / log(x) and that the likelihood that x is prime is approximately 1 / log(x). But what if x (or p, since we are interested in primes) has already survived a sieve to a limit q? Now, it is obvious that the proportion of integers surviving a sieve to q is s(q) = ∏_{p ≤ q} (1 - 1/p), the product being taken over all primes p ≤ q. If p has survived a sieve to q, then the likelihood that p is a prime therefore becomes 1/[s(q).log(p)], or t(q)/log(p), where t(q) = 1/s(q), and the expected number of survivors required in order to obtain a prime is the inverse of this. By a classical result of Mertens, the value of t(q) grows like e^γ.log(q), where γ is Euler's constant and e^γ ≈ 1.781, so for large values of q, increasing the trial division limit by small percentages does not give us any real benefits. On the other hand, when q is low, adding a few extra primes to the sieve makes a big difference. Setting q = 257 reduces the chances of any number surviving trial division by over 90%.

We have considered gaps between primes. However, an interesting extension is to consider gaps between survivors of trial division to a particular limit. The following conjecture appeared in the primenumbers mailing list.
As with many other conjectures, this seems reasonable when investigated for small p. It is obvious that the combined divisibility pattern will repeat modulo p#, so we can use this as a search limit. The conjecture is true for all primes p £ 19, with maximum allowable gap being reached in all cases. However, for p = 23, the conjecture implies that the biggest gap is 38, whereas in reality, the largest gap found is 40. The estimate then holds good for p = 29 and p = 31, but then fails again at p = 37. In fact, p = 41 is the only other value for which the statement holds, with actual results diverging more and more from the estimate. It is certainly possible that loose estimates may be found heuristically for upper bounds on the gap sizes in this case, but I shall leave that as an exercise. Having established the fact that primes occur frequently but perhaps with hidden pattern, in Section 4, we will investigate some more advanced primality tests.
The Gacaca court (Kinyarwanda: [ɡɑtʃɑtʃɑ]) is a system of community justice inspired by Rwandan tradition where gacaca can be loosely translated to "justice amongst the grass". This traditional, communal justice was adapted in 2001 to fit the needs of Rwanda in the wake of the 1994 Rwandan Genocide (also known as "Hutu vs Tutsi" ) where an estimated 800,000 people were killed, tortured and raped. After the genocide, the new Rwandan Patriotic Front's government struggled to pursue justice on such a massive scale, and therein to develop just means for the humane detention and prosecution of the more than 100,000 people accused of genocide, war crimes, and related crimes against humanity. By 2000, approximately 130,000 alleged genocide perpetrators populated Rwanda's prisons (Reyntjens & Vandeginste 2005, 110). Using the justice system Rwanda had in place, the trial of such massive numbers of alleged perpetrators would take well over 100 years during which Rwanda's economy would crumble as a massive amount of their population awaited trial in prison. For this reason they chose to adapt and create a large-scale justice system, which would work alongside the International Criminal Tribunal for Rwanda, in order to heal as a people and to thrive as a country. In response, Rwanda implemented the Gacaca court system, which necessarily evolved to fit the scenario from its prior form of traditional cultural communal law enforcement procedures. The Gacaca courts are a method of transitional justice and are designed to promote communal healing and rebuilding in the wake of the Rwandan Genocide. Rwanda has especially focused on community rebuilding placing justice in the hands of trusted citizens. However, the system has come under criticism from a number of sources, including the Survivors Fund, which represents survivors of the genocide, due to the danger that it poses to survivors and there have been a number of reports on survivors being targeted for giving evidence at the courts. However, the Rwandan government maintains the success of Gacaca Courts citing their present success as a country. History of Gacaca Within 17th century Rwanda, prior to colonization, the extended lineage or family (umuryungo), which encompassed several households (inzu), was the main unit of social organization within Rwandan society. The status of people within families were based upon the age and sex of the person. Only aged married men, without living parents, were independent while all others, especially women, were dependent upon what the men dictated. The family lineage controlled arranged marriages, ancestral traditions and ceremonies, the payment or retrieval of debts, and was the primary source of security for people. Ruling over these lineages were Kings (mwami). Within Rwanda, Kings ruled over many different sections of Rwanda. The king, within Rwandan society, embodied power, justice, and knowledge and was the mediator of any major dispute within their region. However, before disputes were brought to the kings, they were heard locally by wise men as what is referred to as Gacaca. The name Gacaca is derived from the Kinyarwanda word umucaca meaning “a plant so soft to sit on that people prefer to gather on it”. Originally, Gacaca gatherings were meant to restore order and harmony within communities by acknowledging wrongs and having justice restored to those who were victims. However, with the colonization of Rwanda and the arrival of western systems of law, Rwandan society soon began to change as a whole. 
With this implementation and usage of western legal systems, Rwandans began to go to courts to deal with their disputes. In turn, Kings and wisemen soon began to lose their legitimacy within Rwandan society. And with this loss of legitimacy, Gacaca courts began to dwindle down in numbers. After the conclusion of the Rwandan genocide, the new Rwandan government was having difficulty prosecuting approximately 130,000 alleged perpetrators of the genocide. Originally, perpetrators of the genocide were to be tried in the ICTR (International Criminal Tribunal for Rwanda); however, the vast number of perpetrators made it highly improbable that they would all be convicted. Given that there was insufficient resources to organise first-world courts then the Gacaca system had to be preferred over the only alternative to the Gacaca system for local communities which might have been revenge. To deal with this problem, Gacaca courts were installed, the goal of which was to: - Establish truth about what happened - Accelerate the legal proceedings for those accused of Genocide Crimes - Eradicate the culture of impunity - Reconcile Rwandans and reinforce their unity - Use the capacities of Rwandan society to deal with its problems through a justice-based Rwandan custom. The categorization of Gacaca courts in Rwanda is based on the concept of a cell and a sector. A cell is equivalent to a small community while a sector is equivalent to a small group of cells making up a village. Within these two categories, there were 9,013 cells and 1545 sectors, with over 12,103 Gacaca courts established nationwide. Presiding over the Gacaca meetings are judges known as inyangamugayo. These judges are elected to serve on a nine-person council. During the Gacaca process, there were two phases which took place. Starting between 2005-2006, information was taken from those who were accused from all Gacaca cells. The approximate number of those who were accused was 850,000 with about 50,000 of those being deceased. The categorization of crimes committed by these 850,000 is as follows: June 2004-March 2007 |Type||Category 1||Category 2 (1st & 2nd)||Category 2 (3rd)||Category 3| |Crime:||1. Planners, organizers, supervisors, ringleaders 2. Persons who occupied positions of leadership 3. Well-known murderers 4. Torturers 5. Rapists 6. Persons who committed dehumanizing acts on a dead body||1. ‘Ordinary killers’ in serious attacks 2. Those who committed attacks in order to kill but without attaining this goal||3. Those who committed attacks against others, without the intention to kill||Those who committed property offences| |Court:||Ordinary Court||Sector Gacaca||Sector Gacaca||Cell Gacaca| |Sentence:||No data||No data||No data||No data| |Without Confession:||Death Penalty or Life imprisonment||25-30 Years||5-7 Years||Civil Reparation| |Confession before appearance on the list of suspects||25-30 Years||7-12 Years||1-3 Years||Civil Reparation| |Confession after appearance on the list of suspects||25–30 years||12-15 Years||3-5 Years||Civil Reparation| |Accessory sentence||Perpetual and total loss of civil rights||Permanent loss of a listed number of civil rights||/||/| March 2007 Onwards |Type:||Category 1||Category 2 (1st, 2nd, & 3rd)||Category 2 (4th&5th)||Category 2 (6th)||Category 3| |Crime:||1. Persons who occupied positions of leadership 2. Rapists||1. Well-known murderers 2. Torturers 3. Persons who committed dehumanizing acts on a dead body||1. ‘Ordinary killers’ in serious attacks 2. 
|Crime:||1. Persons who occupied positions of leadership 2. Rapists||1. Well-known murderers 2. Torturers 3. Persons who committed dehumanizing acts on a dead body||1. ‘Ordinary killers’ in serious attacks 2. Those who committed attacks in order to kill but without attaining this goal||Those who committed attacks against others, without the intention to kill||Those who committed property offences|
|Court:||Ordinary Court||Sector Gacaca||Sector Gacaca||Sector Gacaca||Cell Gacaca|
|Sentence:||No data||No data||No data||No data||No data|
|Without Confession:||Death Penalty or Life imprisonment||30 years or Life imprisonment||15-19 Years||5-7 Years||Civil Reparation|
|Confession before appearance on the list of suspects||20-24 Years||20-24 Years||8-11 Years||1-2 Years||Civil Reparation|
|Confession after appearance on the list of suspects||25-30 Years||25-29 Years||12-14 Years||3-4 Years||Civil Reparation|
|Accessory sentence||Permanent loss of a listed number of civil rights||No confession: permanent loss - Confession: temporary loss of a listed number of civil rights||No confession: permanent loss - Confession: temporary loss of a listed number of civil rights||/||/|

The approximate number of people who were to be tried in these three categories:
Category 1: 77,269
Category 2: 432,557
Category 3: 308,739

Gacaca's Predecessors and Partners in Justice

The spontaneous emergence of the Gacaca activities and the gradual support for Gacaca by the authorities were clearly motivated by the fact that the ordinary justice system was virtually non-existent after the genocide. The Gacaca had to do what it did before—relieve the pressure on the ordinary courts. These were now not merely working slowly, as they did before, but not working at all. Once they started to work, they were quickly overloaded with the cases of genocide suspects who were filling the prisons. This new form of justice was bold, but not unprecedented: this becomes evident when one considers the growing number of Truth and Reconciliation Commissions (TRCs), such as that in South Africa. The slogan of the South African TRC, "Revealing is Healing", and its argument that truth-telling serves a "therapeutic function" underline this assumption. The TRC format was suggested to the Rwandan government, but ultimately it chose to pursue mass justice through Gacaca, a system in which the country had roots and familiarity.

Another form of Rwandan justice which has worked alongside Gacaca is the International Criminal Tribunal for Rwanda (ICTR). The United Nations Security Council established the International Criminal Tribunal for Rwanda to "prosecute persons responsible for genocide and other serious violations of international humanitarian law committed in the territory of Rwanda and neighbouring States, between 1 January 1994 and 31 December 1994". The Tribunal is located in Arusha, Tanzania, and has offices in Kigali, Rwanda. Its Appeals Chamber is located in The Hague, Netherlands. Since it opened in 1995, the Tribunal has indicted 93 individuals whom it considered responsible for serious violations of international humanitarian law committed in Rwanda in 1994. The ICTR has played a pioneering role in the establishment of a credible international criminal justice system and is the first ever international tribunal to deliver verdicts in relation to genocide, and the first to interpret the definition of genocide set forth in the 1948 Genocide Convention. It also is the first international tribunal to define rape in international criminal law and to recognise rape as a means of perpetrating genocide.
The ICTR delivered its last trial judgement on 20 December 2012 in the Ngirabatware case. Following this milestone, the Tribunal's remaining judicial work now rests solely with the Appeals Chamber. As of October 2014, only one case comprising six separate appeals is pending before the ICTR Appeals Chamber. One additional appeal from an ICTR trial judgement was delivered in December 2014 in the Ngirabatware case by the appeals chamber of the Mechanism for International Criminal Tribunals, which started assuming responsibility for the ICTR's residual functions on 1 July 2012. The ICTR's formal closure is scheduled to coincide with the return of the Appeals Chamber's judgement in its last appeal. Until the return of that judgement in 2015, the ICTR will continue its efforts to end impunity for those responsible for the Genocide through a combination of judicial, outreach, and capacity-building efforts. Through these efforts, the ICTR will fulfil its mandate of bringing justice to the victims of the Genocide and, in the process, hopes to deter others from committing similar atrocities in the future.

The criticisms of Rwanda's pursuit of justice through Gacaca are neither few nor to be ignored; however, the successes of Gacaca also deserve attention. Rwanda's experiment in mass community-based justice has been a mixed success. Many Rwandans agree that it has shed light on what happened in their local communities during the 100 days of genocide in 1994, even if not all of the truth was revealed. They say it helped some families find murdered relatives' bodies, which they could finally bury with some dignity. It has also ensured that tens of thousands of perpetrators were brought to justice. Some Rwandans say that it has helped set in motion reconciliation within their communities. The majority of praise for Gacaca has come from Rwanda's government and the Rwandan citizens who have direct experience with the system. Naturally these are biased sources; however, it is important to note that it is those most affected by the Rwandan genocide who are offering praise, citing a sense of closure, acceptance, and forgiveness following Gacaca trials. The Gacaca trials also served to promote reconciliation by providing a means for victims to learn the truth about the deaths of their family members and relatives. They also gave perpetrators the opportunity to confess their crimes, show remorse and ask for forgiveness in front of their community. In addition to success on a more personal level, the scale of the operation speaks volumes: more than 12,000 community-based courts tried more than 1.2 million cases throughout the country. Furthermore, the overall cost of these Gacaca trials was approximately 4 million. These numbers become even more impressive when held up against those relating to the International Criminal Tribunal for Rwanda, which indicted only 93 people and sentenced only 61 at a cost of 1 billion.

The casual format of Gacaca has led to many legal criticisms, which include the following: no right to a lawyer, no right to be presumed innocent until proven otherwise, no right to be informed of the charges brought against you, no right to case/defense preparation time, no right to be present at one's own trial, no right to confront witnesses, no right against self-incrimination, no right against double jeopardy, and no right against arbitrary arrest and detention; furthermore, there is vast evidence of corruption among officials. "You have to give money.
Gacaca judges aren't paid so they make arrangements to get money from those who are accused," said a man accused of genocide who said he had paid a bribe to gacaca judges. The lack of legal representation is in large part a result of the genocide itself, in which the vast majority of people in the legal professions were casualties. This is perhaps the biggest issue with gacaca: the lack of legal representation. Gacaca functions using "persons of integrity" as judges, lawyers, and the jury. Not only are some of these people perpetrators themselves, but the lack of financial compensation for the position and the lack of training make them susceptible to bribery and to conducting unfair trials. Senior Human Rights Watch adviser Alison Des Forges said the lack of legal representation was a serious concern. "The authorities' view is that this is a quasi-customary kind of procedure, and there never used to be lawyers, so there's no need for lawyers now. The problem with that is that little is the same except for the name. In this system, there is considerable weight given to the official side. The office of the prosecutor provides considerable assistance to the bench [of judges] in terms of making its determination, so you no longer have a level playing field." There may, however, be no alternative to the Gacaca trials, she added. "Obviously the problem of delivering justice after the genocide is an overwhelming problem. Gacaca may not be ideal but there is at this point no alternative.... The official explanation I think is that people did not speak openly until the Gacaca process and now many more accusations are surfacing. Also, the concession program, which requires the naming of all those who participated along with the accused [in return for a lighter sentence], has led to a multiplication of names. How many of these are well-founded, what is the credibility of the evidence, these are very serious concerns."

There are criticisms and controversy surrounding the decision to implement Gacaca courts. Human rights groups worry about fairness, since trials are held without lawyers, which means there is less protection for defendants than in conventional courts. In addition, conventional trials have seen false accusations and intimidation of witnesses on both sides; issues of revenge have been raised as a concern. The acquittal rate has been 20 percent, which suggests a large number of trials were not well-founded. Also, because the trials are based on witness testimony, the length of time between the crime and the trial heightens the risk that witnesses' memories will be unreliable.

Removal of RPF crimes

The government's decision to exclude crimes committed by soldiers of the current ruling party, the RPF, from gacaca courts' jurisdiction has left victims of their crimes still waiting for justice, Human Rights Watch said. Soldiers of the RPF, which ended the genocide in July 1994 and went on to form the current government, killed tens of thousands of people between April and December 1994. In 2004, the gacaca law was amended to exclude such crimes, and the government worked to ensure that these crimes were not discussed in gacaca. "One of the serious shortcomings of gacaca has been its failure to provide justice to all victims of serious crimes committed in 1994", Bekele said. "By removing RPF crimes from their jurisdiction, the government limited the potential of the gacaca courts to foster long-term reconciliation in Rwanda." "The biggest problem with gacaca is the crimes we can't discuss.
We're told that certain crimes, those killings by the RPF, cannot be discussed in gacaca even though the families need to talk. We're told to be quiet on these matters. It's a big problem. It's not justice," said a relative of a victim of crimes committed by soldiers of the current ruling party.

Because gacaca's original purpose was not to handle crimes as severe as those committed during the genocide, the punishments associated with a finding of guilt often do not fit the crime and require continued proximity and intimacy between the perpetrator and victim. Despite its restorative nature, gacaca is a legal process, and with this in mind punishment constitutes a major element of the gacaca courts. Perpetrators found guilty are sentenced to some form of punishment, but it is important to note that this rarely takes the form of a jail sentence and instead demands tasks such as the rebuilding of victims' homes, working in their fields or other variations of community service. Thus, despite gacaca's clear punitive and legal elements, in many ways the nature of punishment remains within a restorative framework of repairing the harm done through practical measures.

- Court of law
- Crime against humanity
- Ethnic cleansing
- History of Rwanda
- Politics of Rwanda
- Rwandan Genocide
- Survivors Fund
- International Criminal Tribunal for Rwanda
- "Transitional Justice and DDR: The Case of Rwanda". Lars Waldorf, International Center for Transitional Justice
- "What is transitional justice?". International Center for Transitional Justice
- McVeigh, Karen (2006-03-12). "Spate of killings obstructs Rwanda's quest for justice". London: The Observer. Retrieved 2006-03-12.
- Ingelaere, Bert (2008). "Traditional Justice and Reconciliation after Violent Conflict: Learning from African Experiences" (PDF). International Institute for Democracy and Electoral Assistance 2008.
- Remembering Rwanda's genocide, Catherine Wambua, 1 June 2012, Al Jazeera. Retrieved 2 March 2016.
- "The Gacaca Courts in Rwanda" (PDF). Retrieved April 28, 2015.
- "Gacaca Courts and Restorative Justice in Rwanda". E-International Relations. Retrieved April 28, 2015.
- "The ICTR in Brief". The United Nations. Retrieved April 28, 2015.
- "Justice Compromised" (PDF). Human Rights Watch. Retrieved April 28, 2015.
- "Background Information on the Justice and Reconciliation Process in Rwanda". The United Nations. Retrieved April 28, 2015.
- "The ICTR in Brief". The United Nations. Retrieved April 28, 2015.
- "Rwanda: Mixed Legacy for Community-Based Genocide Courts". Human Rights Watch. Retrieved April 28, 2015.
- Vasagar, Jeevan (2005-03-17). "Grassroots justice". The Guardian. London. Retrieved 2010-05-03.
- Clapham, Charlotte (2012). Gacaca: A Successful Experiment in Restorative Justice? <http://www.e-ir.info/2012/07/30/gacaca-a-successful-experiment-in-restorative-justice-2/>
- Harrell, Peter E., Rwanda's Gamble: Gacaca and a New Model of Transitional Justice. New York: Writer's Advantage Press, 2003.
- Human Rights Watch. 2004. Struggling to Survive: Barriers to Justice for Rape Victims in Rwanda. New York: Human Rights Watch. Available: http://hrw.org/reports/2004/rwanda0904/rwanda0904.pdf.
- Reyntjens, Filip and Stef Vandeginste. 2005. "Rwanda: An Atypical Transition." In Roads to Reconciliation, edited by Elin Skaar, et al. Lanham, MD: Lexington Books.
- Stover, Eric and Weinstein, Harvey (2004). 
My Neighbor, My Enemy: Justice and Community in the Aftermath of Mass Atrocity. Cambridge: Cambridge University Press. ISBN 0-521-54264-2. - Clark, Phil (2012) How Rwanda judged its genocide London: Africa Research Institute - Clark, Phil (2010) The Gacaca Courts, Post-Genocide Justice and Reconciliation in Rwanda: Justice Without Lawyers. Cambridge: Cambridge University Press. - Susanne Buckley-Zistel (2006): 'The Truth Heals?' Gacaca Jurisdictions and the Consolidation of Peace in Rwanda. Die Friedens-Warte Heft 1-2, pp. 113–130. - Simon Gabisirege/Stella Babalola (2001): Perceptions about the Gacaca Law in Rwanda. Baltimore: Johns Hopkins University. - Stover, Eric and Harvey Weinstein (eds) (2004) My Neighbor, My Enemy: Justice and Community in the Aftermath of mass Atrocity. Cambridge: Cambridge University Press. - National Service of Gacaca Jurisdictions Official Rwandan government website.
<urn:uuid:04840a57-9df6-4fd6-a0e7-203d87b47865>
{ "date": "2016-09-27T14:34:33", "dump": "CC-MAIN-2016-40", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661087.58/warc/CC-MAIN-20160924173741-00282-ip-10-143-35-109.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9452968835830688, "score": 3.90625, "token_count": 4811, "url": "https://en.wikipedia.org/wiki/Gacaca_court" }
The Nephites prosper—Pride, wealth, and class distinctions arise—The Church is rent with dissensions—Satan leads the people in open rebellion—Many prophets cry repentance and are slain—Their murderers conspire to take over the government. About A.D. 26–30.

1 And now it came to pass that the people of the Nephites did all return to their own lands in the twenty and sixth year, every man, with his family, his flocks and his herds, his horses and his cattle, and all things whatsoever did belong unto them.

2 And it came to pass that they had not eaten up all their provisions; therefore they did take with them all that they had not devoured, of all their grain of every kind, and their gold, and their silver, and all their precious things, and they did return to their own lands and their possessions, both on the north and on the south, both on the land northward and on the land southward.

3 And they granted unto those robbers who had entered into a covenant to keep the peace of the land, who were desirous to remain Lamanites, lands, according to their numbers, that they might have, with their labors, wherewith to subsist upon; and thus they did establish peace in all the land.

4 And they began again to prosper and to wax great; and the twenty and sixth and seventh years passed away, and there was great order in the land; and they had formed their laws according to equity and justice.

5 And now there was nothing in all the land to hinder the people from prospering continually, except they should fall into transgression.

7 And it came to pass that there were many cities built anew, and there were many old cities repaired.

8 And there were many highways cast up, and many roads made, which led from city to city, and from land to land, and from place to place.

9 And thus passed away the twenty and eighth year, and the people had continual peace.

10 But it came to pass in the twenty and ninth year there began to be some disputings among the people; and some were lifted up unto pride and boastings because of their exceedingly great riches, yea, even unto great persecutions;

12 And the people began to be distinguished by ranks, according to their riches and their chances for learning; yea, some were ignorant because of their poverty, and others did receive great learning because of their riches.

13 Some were lifted up in pride, and others were exceedingly humble; some did return railing for railing, while others would receive railing and persecution and all manner of afflictions, and would not turn and revile again, but were humble and penitent before God.

14 And thus there became a great inequality in all the land, insomuch that the church began to be broken up; yea, insomuch that in the thirtieth year the church was broken up in all the land save it were among a few of the Lamanites who were converted unto the true faith; and they would not depart from it, for they were firm, and steadfast, and immovable, willing with all diligence to keep the commandments of the Lord.

15 Now the cause of this iniquity of the people was this—Satan had great power, unto the stirring up of the people to do all manner of iniquity, and to the puffing them up with pride, tempting them to seek for power, and authority, and riches, and the vain things of the world.

16 And thus Satan did lead away the hearts of the people to do all manner of iniquity; therefore they had enjoyed peace but a few years.
17 And thus, in the commencement of the thirtieth year—the people having been delivered up for the space of a long time to be carried about by the temptations of the devil whithersoever he desired to carry them, and to do whatsoever iniquity he desired they should—and thus in the commencement of this, the thirtieth year, they were in a state of awful wickedness.

19 And now it was in the days of Lachoneus, the son of Lachoneus, for Lachoneus did fill the seat of his father and did govern the people that year.

20 And there began to be men inspired from heaven and sent forth, standing among the people in all the land, preaching and testifying boldly of the sins and iniquities of the people, and testifying unto them concerning the redemption which the Lord would make for his people, or in other words, the resurrection of Christ; and they did testify boldly of his death and sufferings.

21 Now there were many of the people who were exceedingly angry because of those who testified of these things; and those who were angry were chiefly the chief judges, and they who had been high priests and lawyers; yea, all those who were lawyers were angry with those who testified of these things.

22 Now there was no lawyer nor judge nor high priest that could have power to condemn any one to death save their condemnation was signed by the governor of the land.

23 Now there were many of those who testified of the things pertaining to Christ who testified boldly, who were taken and put to death secretly by the judges, that the knowledge of their death came not unto the governor of the land until after their death.

24 Now behold, this was contrary to the laws of the land, that any man should be put to death except they had power from the governor of the land—

25 Therefore a complaint came up unto the land of Zarahemla, to the governor of the land, against these judges who had condemned the prophets of the Lord unto death, not according to the law.

26 Now it came to pass that they were taken and brought up before the judge, to be judged of the crime which they had done, according to the law which had been given by the people.

27 Now it came to pass that those judges had many friends and kindreds; and the remainder, yea, even almost all the lawyers and the high priests, did gather themselves together, and unite with the kindreds of those judges who were to be tried according to the law.

28 And they did enter into a covenant one with another, yea, even into that covenant which was given by them of old, which covenant was given and administered by the devil, to combine against all righteousness.

29 Therefore they did combine against the people of the Lord, and enter into a covenant to destroy them, and to deliver those who were guilty of murder from the grasp of justice, which was about to be administered according to the law.

30 And they did set at defiance the law and the rights of their country; and they did covenant one with another to destroy the governor, and to establish a king over the land, that the land should no more be at liberty but should be subject unto kings.
<urn:uuid:db6775e4-e153-4b62-8a78-e5b0e95eb444>
{ "date": "2018-12-16T17:03:55", "dump": "CC-MAIN-2018-51", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827963.70/warc/CC-MAIN-20181216165437-20181216191437-00496.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9918502569198608, "score": 2.640625, "token_count": 1487, "url": "https://www.lds.org/scriptures/bofm/3-ne/6.21?lang=eng&country=hk" }
Although most osteoporosis risk factors like age and health history cannot be controlled, there are steps you can take to help prevent or at least slow down bone loss. Changes in diet and exercise can increase your calcium levels and help maintain the bone mass already in place.

Anyone at risk for osteoporosis should monitor their calcium and vitamin D intake. Calcium helps build and maintain bone, and vitamin D helps your body absorb calcium. You can get calcium from foods, supplements, or a combination of the two. If you use supplements, keep in mind that calcium is best absorbed in individual doses no larger than 500 to 600 milligrams (mg). It's best for adults to take these doses two or three times per day. See the chart below for recommendations from the National Institutes of Health regarding daily calcium and vitamin D intake. (Note: "IU" refers to international units.)

|Age||Calcium (daily)||Vitamin D (daily)|
|0 to 6 months||210 mg||200 IU|
|7 to 12 months||270 mg||200 IU|
|1 to 3 years||500 mg||200 IU|
|4 to 8 years||800 mg||200 IU|
|9 to 18 years||1,300 mg||200 IU|
|19 to 50 years||1,000 mg||200 IU|
|51 to 70 years||1,200 mg||400 IU|
|Over 70 years||1,200 mg||600 IU|

There are also many excellent dietary sources of calcium and vitamin D:
- dairy: Dairy products such as milk, cheese and yogurt are rich in calcium and vitamin D.
- fortified foods: Certain common foods and beverages are often fortified with calcium and vitamin D. They include certain brands of breakfast cereals, juice, and bread. Check the label to see if these nutrients have been added.
- leafy green vegetables: Kale, broccoli, okra, collard greens, mustard greens, Chinese cabbage, and turnip greens all contain calcium.

Bones need resistance to grow strong, which is why weight-bearing exercises are not only good for your muscles, but also your bones. Activities and fitness equipment that help strengthen bone include:
- free weights
- weight machines
- resistance bands, which you can use at the gym, at home, and while traveling
- walking or jogging
- low-impact aerobics (elliptical training, swimming, or biking)

Both excessive intake of alcohol and smoking increase your risk of osteoporosis. It's best to limit alcohol consumption to two drinks per day. A 6-ounce glass of wine, 12-ounce bottle of beer, or a 1.5-ounce glass of hard liquor are all considered one standard drink. If you are a smoker, get some assistance to kick the habit as soon as possible. According to the American Academy of Orthopaedic Surgeons, older adults who smoke are 30 to 40 percent more likely to break their hips.

If you already have osteoporosis, take the above preventive measures and try to avoid unnecessary risks of fractures. Hire someone to clean the house gutters so you won't risk falling from a ladder. Clear the pathways through your home to eliminate any tripping hazards. Be wary of slippery steps and walkways. And use a cane or walker if you feel it would keep you safer. You can check the National Osteoporosis Foundation for gentle sitting, turning, and standing postures that can protect your hips and spine. The site is also a good source for finding bone-friendly exercise activities.
<urn:uuid:c95b2311-0dde-407a-8285-fd52640fc3d7>
{ "date": "2015-08-29T03:07:51", "dump": "CC-MAIN-2015-35", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064167.28/warc/CC-MAIN-20150827025424-00114-ip-10-171-96-226.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9189639687538147, "score": 2.921875, "token_count": 736, "url": "http://www.healthline.com/health/osteoporosis-prevention" }
The climate of the coast and fog zone in the Tarapacá region, Atacama Desert, Chile

|Article code||Year of publication||English article||Persian translation||Word count|
|66981||2008||11-page PDF||Available to order||6,417 words|

Publisher: Elsevier - Science Direct
Journal: Atmospheric Research, Volume 87, Issues 3–4, March 2008, Pages 301–311

The weather is warmer near sea level, with an annual average temperature of 18 °C. At high elevation sites like Alto Patache, the temperature decreases at a rate of 0.7 °C for every 100-m increase in altitude. The average annual minimum temperature often approaches 1 °C in winter, while the mean annual temperature range is significant (8.3 °C in Los Cóndores). The mean monthly relative humidity in Alto Patache is over 80%, except during the summer months. During autumn, winter and spring, high-elevation fog is present in the study area at altitudes ranging from 650 m up to 1060 m, giving annual water yields of 0.8 to 7 L m⁻² day⁻¹. If vegetation is used as an indicator, the foggy zone lies between 650 m a.s.l. and 1200 m a.s.l. About 70% of the mountain range experiences the foggy climate, as opposed to the coastal plains that are characterized by a cloudy climate.
<urn:uuid:a8132f0a-194e-4f43-bf39-9fa66bda6912>
{ "date": "2018-02-21T01:47:38", "dump": "CC-MAIN-2018-09", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813187.88/warc/CC-MAIN-20180221004620-20180221024620-00456.warc.gz", "int_score": 3, "language": "en", "language_score": 0.7015436887741089, "score": 2.890625, "token_count": 438, "url": "http://isiarticles.com/article/66981" }
A third of people treated for cancer develop adverse side effects within their mouth. But while these effects can be devastating to teeth and gums, there are ways to minimize the damage. Treatments like chemotherapy and radiation work by destroying cancer cells. Unfortunately, they may also destroy normal cells. The accumulation of this “collateral damage” ultimately affects uninvolved areas and organ systems of the body. Chemotherapy, for example, can interrupt bone marrow blood cell formation and decrease the body's ability to fight infection. These ripple effects can eventually reach the mouth. It's not uncommon for cancer patients to develop mouth sores or see an increase in tooth decay or periodontal (gum) disease. The treatments may also inhibit saliva flow: because saliva neutralizes acid and provides other benefits that lower disease risk, dental disease is more likely to develop when the salivary flow is reduced. The first step to minimizing these effects is to improve oral health before cancer treatment begins. An unhealthy mouth vastly increases the chances for problems during treatment. Cooperating with your cancer physicians, we should attempt to treat any diseases present as soon as possible. During cancer treatment we should also monitor your oral health and intervene when appropriate. If at all possible, you should continue regular dental visits for cleaning and checkups, and more so if conditions warrant. We can also protect your teeth and gums with protective measures like antibacterial mouth rinses, saliva stimulation or high-potency fluoride applications for your enamel. What's most important, though, is what you can do for yourself to care for your mouth during the treatment period. Be sure to brush daily with a soft-bristle brush and fluoride toothpaste. You can use a weak solution of one-quarter teaspoon each of salt and baking soda to a quart of warm water to rinse your mouth and soothe any sores. And be sure to drink plenty of water to reduce dry mouth. While you're waging your battle against cancer, stay vigilant about your teeth and gums. Taking care of them will ensure that after you've won your war against this malignant foe your mouth will be healthy too. If you would like more information on taking care of your teeth and gums during cancer treatment, please contact us or schedule an appointment for a consultation. You can also learn more about this topic by reading the Dear Doctor magazine article “Oral Health During Cancer Treatment.” Probably a day doesn’t go by that you don’t encounter advertising for dental implants. And for good reason: implants have taken the world of dentistry by storm. Since their inception over thirty years ago, implants have rocketed ahead of more conventional tooth replacements to become the premier choice among both dentists and patients. But what is an implant—and why are these state-of-the-art dental devices so popular? Resemblance to natural teeth. More than any other type of dental restoration, dental implants mimic both the appearance and function of natural teeth. Just as teeth have two main parts—the roots beneath the gum surface and the visible crown—so implants have a similar construction. At their heart, implants are root replacements by way of a titanium metal post imbedded in the jawbone. To this we can permanently attach a life-like porcelain crown or even another form of restoration (more about that in a moment). Durability. 
Implant materials and unique design foster a long-term success rate after ten years in the 95-plus percentile. They achieve this longevity primarily due to the use of titanium as the primary metal in the implant post. Because bone has an affinity for titanium, it will grow and adhere to the post over time to create a well-anchored hold. With proper maintenance and care implants can last for decades, making them a wise, cost-effective investment. Added stability for other restorations. While most people associate implants with single tooth replacements, the technology has a much broader reach. For example, just a few strategically-placed implants can support a removable denture, giving this traditional restoration much more security and stability. What’s more, it can help stop bone loss, one of the main drawbacks of conventional dentures. In like fashion, implants can support a fixed bridge, eliminating the need to permanently alter adjacent teeth often used to support a conventional bridge. With continuing advances, implant technology is becoming increasingly useful for a variety of restorative situations. Depending on your individual tooth-loss situation, dental implants could put the form and function back in your smile for many years to come. If you would like more information on dental implant restorations, please contact us or schedule an appointment for a consultation. You can also learn more about this topic by reading the Dear Doctor magazine article “Dental Implants: Your Best Option for Replacing Teeth.” If there's anything that makes Alfonso Ribeiro happier than his long-running gig as host of America's Funniest Home Videos, it's the time he gets to spend with his family: his wife Angela, their two young sons, and Alfonso's teenaged daughter. As the proud dad told Dear Doctor–Dentistry & Oral Health magazine, "The best part of being a father is the smiles and the warmth you get from your children." Because Alfonso and Angela want to make sure those little smiles stay healthy, they are careful to keep on top of their kids' oral health at home—and with regular checkups at the dental office. If you, too, want to help your children get on the road to good oral health, here are five tips: - Start off Right—Even before teeth emerge, gently wipe baby's gums with a clean, moist washcloth. When the first teeth appear, brush them with a tiny dab of fluoride on a soft-bristled toothbrush. Schedule an age-one dental visit for a complete evaluation, and to help your child get accustomed to the dental office. - Teach Them Well—When they're first learning how to take care of their teeth, most kids need a lot of help. Be patient as you demonstrate the proper way to brush and floss…over and over again. When they're ready, let them try it themselves—but keep an eye on their progress, and offer help when it's needed. - Watch What They Eat & Drink—Consuming foods high in sugar or starch may give kids momentary satisfaction…but these substances also feed the harmful bacteria that cause tooth decay. The same goes for sodas, juices and acidic drinks—the major sources of sugar in many children's diets. If you allow sugary snacks, limit them to around mealtimes—that gives the mouth a chance to recover its natural balance. - Keep Up the Good Work—That means brushing twice a day and flossing at least once a day, every single day. If motivation is an issue, encourage your kids by letting them pick out a special brush, toothpaste or floss. 
You can also give stickers, or use a chart to show progress and provide a reward after a certain period of time. And don't forget to give them a good example to follow! - Get Regular Dental Checkups—This applies to both kids and adults, but it's especially important during the years when they are rapidly growing! Timely treatment with sealants, topical fluoride applications or fillings can often help keep a small problem from turning into a major headache. Bringing your kids to the dental office early—and regularly—is the best way to set them up for a lifetime of good checkups…even if they're a little nervous at first. Speaking of his youngest child, Alfonso Ribeiro said "I think the first time he was really frightened, but then the dentist made him feel better—and so since then, going back, it's actually a nice experience." Our goal is to provide this experience for every patient. If you have questions about your child's dental hygiene routine, call the office or schedule a consultation. You can learn more in the Dear Doctor magazine article “How to Help Your Child Develop the Best Habits for Oral Health.” All crowns are designed to restore functionality to a damaged tooth. But crowns can differ from one another in their appearance, in the material they’re made from, and how they blend with other teeth. A crown is a metal or porcelain artifice that’s bonded permanently over a decayed or damaged tooth. Every crown process begins with preparation of the tooth so the crown will fit over it. Afterward, we make an impression of the prepared tooth digitally or with an elastic material that most often is sent to a dental laboratory to create the new crown. It’s at this point where crown composition and design can diverge. Most of the first known crowns were made of metal (usually gold or silver), which is still a component in some crowns today. A few decades ago dental porcelain, a form of ceramic that could provide a tooth-like appearance, began to emerge as a crown material. The first types of porcelain could match a real tooth’s color or texture, but were brittle and didn’t hold up well to biting forces. Dentists developed a crown with a metal interior for strength and a fused outside layer of porcelain for appearance. This hybrid became the crown design of choice up until the last decade. It is being overtaken, though, by all-ceramic crowns made with new forms of more durable porcelain, some strengthened with a material known as Lucite. Today, only about 40% of crowns installed annually are the metal-porcelain hybrid, while all-porcelain crowns are growing in popularity. Of course, these newer porcelain crowns and the attention to the artistic detail they require are often more expensive than more traditional crowns. If you depend on dental insurance to help with your dental care costs, you may find your policy maximum benefit for these newer type crowns won’t cover the costs. If you want the most affordable price and are satisfied primarily with restored function, a basic crown is still a viable choice. If, however, you would like a crown that does the most for your smile, you may want to consider one with newer, stronger porcelain and made with greater artistic detail by the dental technician. In either case, the crown you receive will restore lost function and provide some degree of improvement to the appearance of a damaged tooth. Unlike our primitive ancestors, our teeth have it relatively easy. Human diets today are much more refined than their counterparts from thousands of years ago. 
Ancient teeth recovered from those bygone eras bear that out, showing much more wear on average than modern teeth. Even so, our modern teeth still wear as we age—sometimes at an accelerated rate. But while you can't eliminate wearing entirely, you can take steps to minimize it and preserve your teeth in your later years. Here are 3 things you can do to slow your teeth's wearing process. Prevent dental disease. Healthy teeth endure quite well even while being subjected to daily biting forces produced when we eat. But teeth weakened by tooth decay are more susceptible to wear. To avoid this, you should practice daily brushing and flossing to remove disease-causing dental plaque. And see your dentist at least twice a year for more thorough dental cleanings and checkups. Straighten your bite. A poor bite, where the top and bottom teeth don't fit together properly, isn't just an appearance problem—it could also cause accelerated tooth wear. Having your bite orthodontically corrected not only gives you a new smile, it can also reduce abnormal biting forces that are contributing to wear. And don't let age stop you: except in cases of bone deterioration or other severe dental problems, older adults whose gums are healthy can undergo orthodontics and achieve healthy results. Seek help for bruxism. The term bruxism refers to any involuntary habit of grinding teeth, which can produce abnormally high biting forces. Over time this can increase tooth wear or weaken teeth to the point of fracture or other severe damage. While bruxism is uncommon in adults, it's still a habit that needs to be addressed if it occurs. The usual culprit is high stress, which can be better managed through therapy or biofeedback. Your dentist can also fashion you a custom guard to wear that will prevent upper and lower teeth from wearing against each other. If you would like more information on minimizing teeth wear, please contact us or schedule an appointment for a consultation. You can also learn more about this topic by reading the Dear Doctor magazine article "How and Why Teeth Wear."
<urn:uuid:d85f99d3-69d1-4965-b0bd-1e4b8f567cc1>
{ "date": "2019-06-18T23:19:09", "dump": "CC-MAIN-2019-26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998844.16/warc/CC-MAIN-20190618223541-20190619005541-00336.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9502636790275574, "score": 3.09375, "token_count": 2681, "url": "https://www.blackwoodorthodontics.com/blog.html" }
Write legibly in manuscript and in cursive. 0301.1.12

Links verified on 1/1/2018
- Cursive: Lowercase - Alphabet Animation - To see the animation, move your mouse over a letter on this page. (from the site, Handwriting for Kids)
- Writing Wizard - Makes English handwriting practice worksheets. Type a word or short sentence and then set a number of display options.
- Zaner-Bloser Writing Practice - Dotted practice, letters to color and trace and more from abcteach.com
- D'Nealian Basics Writing Practice - Dotted practice, letters to color and trace and more from abcteach.com
- More resources linked on another I4C page - many practice sheets available here.
<urn:uuid:4c03af11-c919-484e-b571-77192e954925>
{ "date": "2019-01-21T08:23:42", "dump": "CC-MAIN-2019-04", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583763839.28/warc/CC-MAIN-20190121070334-20190121092334-00376.warc.gz", "int_score": 4, "language": "en", "language_score": 0.7794712781906128, "score": 3.671875, "token_count": 195, "url": "https://www.internet4classrooms.com/grade_level_help/write_legibly_language_arts_third_3rd_grade.htm?sl=newsletter_jul_2016" }
Ronald Gillespie - Molecule Essay Example

It's been over fifty years since Ronald Gillespie first proposed the basic idea of the VSEPR (Valence Shell Electron Pair Repulsion) theory. Since then he has been making great contributions to the world of chemistry. Ronald J. Gillespie was born August 21, 1924, in London, England. He attended the University of London, graduating with his B.Sc. in 1945 and a Ph.D. in 1949. After graduating, he became an Assistant Lecturer and then a Lecturer in the chemistry department. He moved to Canada in 1958, where he became a professor at McMaster University in Hamilton, Ontario.

First developed in 1957 with Ronald Nyholm, the VSEPR model of molecular geometry is an idea that Gillespie has worked extensively to expand. The theory they created is much more effective at predicting, explaining and describing 3D molecular shapes (linear, pyramidal, cubical, etc.) based on the number of electron pairs found in the outer shell. Their theory is based on the electron repulsion of bonded and unbonded electron pairs.

Given Gillespie's interest in chemical education, he had originally developed the VSEPR theory as an aid for teaching. He has been recognized for his work by The Manufacturing Chemists' College Chemistry, the Chemical Institute of Canada and the McMaster Student's Union. Gillespie retired in 1989, but still continues his research. He is determined to understand the exceptions to the VSEPR model. Together with his students, he is researching full time to meet this goal.
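To make the essay's core claim concrete, that VSEPR predicts a molecule's three-dimensional shape from the electron pairs around a central atom, here is a small illustrative sketch in Python. It is not taken from Gillespie's work or from this essay; the lookup only covers the common textbook cases, ignores the exceptions Gillespie studied, and the names and structure are my own assumptions.

# Minimal illustration of the VSEPR idea: molecular shape follows from
# how many electron domains (bonding groups + lone pairs) surround the
# central atom. A simplified sketch, not the full model (it ignores
# multiple-bond domains, distortions, and exceptions).

VSEPR_SHAPES = {
    # (bonding domains, lone pairs): idealized molecular geometry
    (2, 0): "linear",
    (3, 0): "trigonal planar",
    (2, 1): "bent",
    (4, 0): "tetrahedral",
    (3, 1): "trigonal pyramidal",
    (2, 2): "bent",
    (5, 0): "trigonal bipyramidal",
    (4, 1): "seesaw",
    (3, 2): "T-shaped",
    (2, 3): "linear",
    (6, 0): "octahedral",
    (5, 1): "square pyramidal",
    (4, 2): "square planar",
}

def predict_shape(bonding_domains: int, lone_pairs: int) -> str:
    """Return the idealized VSEPR geometry for a central atom."""
    return VSEPR_SHAPES.get((bonding_domains, lone_pairs),
                            "not covered by this simple table")

# Methane CH4 -> tetrahedral, ammonia NH3 -> trigonal pyramidal, water H2O -> bent.
print(predict_shape(4, 0))
print(predict_shape(3, 1))
print(predict_shape(2, 2))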
<urn:uuid:9e346524-f88e-48c7-a8b9-34bfb3a0611a>
{ "date": "2017-11-18T17:22:18", "dump": "CC-MAIN-2017-47", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805008.39/warc/CC-MAIN-20171118171235-20171118191235-00656.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9611684679985046, "score": 2.5625, "token_count": 365, "url": "https://graduateway.com/ronald-gillespie/" }
Beauty and the yeast

Rebecca Newman, quality control manager with Dogfish Head Craft Brewery, says the microbiology of yeast is crucial to a beer's taste. (4:02)

WASHINGTON - A beer drinker looking to quench his thirst might not give a second thought to what microbiologists call "the master ingredient" in beer. "I think the typical consumer doesn't really think about the yeast, but if it goes wrong they'll definitely know it was a yeast problem," says Rebecca Newman, quality control manager for Dogfish Head Craft Brewery.

Newman was recruited by Anheuser-Busch right out of college in the mid-'80s, armed with a degree in food science and technology. She and Charlie Bamforth, Ph.D., Anheuser-Busch Endowed Professor of Malting and Brewing Sciences, University of California - Davis, will be speaking Thursday evening at the headquarters of the American Society for Microbiology, in an event called "The Microbiology of Beer." The American Academy of Microbiology produced a report entitled "If the yeast ain't happy, ain't nobody happy."

"I look at yeast as being the conductor of an orchestra, with all the ingredients as the instruments that would go into making the different beers," says Newman. "And I look at the yeast as conducting all those ingredients to come up with a final beer flavor," says Newman.

Newman says unique strains of yeast make unique flavors.

The yeast's the thing

"A Belgian-style yeast makes a beer that has the flavors of green bananas and cloves. The beer doesn't have green bananas and cloves in it, it's just the flavors from the yeast," says Newman. A different kind of beer would require a different yeast, according to Newman. "If you were drinking an Oktoberfest beer, you would want a yeast that has a mid-palate resonance, you wouldn't want it yeast-forward, because an Oktoberfest is very malt-forward," says Newman.

Yeasts are members of the fungal kingdom, and there are approximately 1500 species of yeast. The two major kinds of brewer's yeast are ale yeast and lager yeast. "The brewer has lots of tools to modify the flavor," says Newman. "It's not only the ingredients, it's the temperature of the fermentation. Too fast, too hot a fermentation and the yeast will stall out or produce off-flavors."

"You want to coddle your yeast to start, and then watch them through their fermentation profile," says Newman. Newman says home brewers often compare notes on strains of yeast and fermentation. Professional brewers are less likely to share the yeast details. "Sometimes you just don't discuss what your proprietary strain is, because it's a strain you've worked on to develop, in house," says Newman. "It's your magic - your own personal conductor."
<urn:uuid:60644fc2-3391-4177-80d8-121491f3f2f1>
{ "date": "2014-07-25T14:33:06", "dump": "CC-MAIN-2014-23", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894275.63/warc/CC-MAIN-20140722025814-00064-ip-10-33-131-23.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9742812514305115, "score": 2.609375, "token_count": 611, "url": "http://www.wtop.com/1230/3475323/Got-beer-Thank-a-microbiologist" }
One thing every culture has in common is a set of folk tales and myths, and this coloring series features mythological creatures from all over the world. Help your child find ones that come from his own heritage, or encourage him to learn about one he's never heard of before! In Chinese culture, the dragon is a symbol of power and good luck. If you color in this Chinese dragon coloring page, maybe you'll find a little good fortune! Find more mythical creatures here.
<urn:uuid:204b1a70-f362-4a8f-afb7-e89843f61589>
{ "date": "2019-10-14T09:35:47", "dump": "CC-MAIN-2019-43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986649841.6/warc/CC-MAIN-20191014074313-20191014101313-00256.warc.gz", "int_score": 3, "language": "en", "language_score": 0.96547931432724, "score": 2.734375, "token_count": 119, "url": "https://www.education.com/worksheet/article/mythical-creatures-chinese-dragon/" }
We're gonna take a look at what tilt-shift photography is, the various ways to do it, and some amazing video examples of multiple images used to create an animated effect.

What Is Tilt-Shift Photography?

According to Wikipedia, "Tilt-shift photography" refers to the use of camera movements on small- and medium-format cameras, and sometimes specifically refers to the use of tilt for selective focus, often for simulating a miniature scene. Sometimes the term is used when the shallow depth of field is simulated with digital postprocessing; the name may derive from the tilt-shift lens normally required when the effect is produced optically.

1.) The first method is very simple. If you are a photographer then simply buy a tilt-shift lens. Yes, there is a catch. They probably cost more than your camera, so if you have a grand or two to spare then go for it. Tilt-shift lenses for sale.

2.) For method two we can get a little creative and build our own tilt-shift lens. Creative Pro wrote an awesome article a while back on how to make your own here.

3.) Finally, for method three we can just skip the lens altogether and worry about creating the effect in the comfort of our own home with Photoshop. Yeah, it's cheating and not a real tilt-shift shot, but I won't tell if you won't. Let's take a look and see how this is done.

Tutorial – How to fake tilt-shift in Photoshop

Photography Is Cool But Video Is Amazing

Now that we have a good idea of what tilt-shift photography is, let's take a look at some wonderful video examples. Some are done using many photographs to create an animated look and others are done in After Effects. Why do one image when you can do 30fps?

Nisha is the head blogger for Slodive.com. She loves tattoos and inspirational quotes. Check her out on Google Plus: https://plus.google.com/u/0/116437517919411097994
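If you would rather script the Photoshop-style fake from method three than click through it, here is a rough sketch of the same idea in Python. This is not from the tutorial linked above; it assumes the Pillow and NumPy libraries are installed, and the file names, blur radius and focus-band settings are placeholder values you would tune per photo.

# A rough sketch of faking the tilt-shift "miniature" look in code:
# keep a horizontal band sharp, blur everything else, boost the colors.

import numpy as np
from PIL import Image, ImageFilter, ImageEnhance

def fake_tilt_shift(path_in, path_out, focus_center=0.55, focus_width=0.15):
    img = Image.open(path_in).convert("RGB")
    w, h = img.size

    # Heavily blurred copy of the whole frame.
    blurred = img.filter(ImageFilter.GaussianBlur(radius=8))

    # Vertical mask: 255 inside the "in focus" band, fading to 0 above and
    # below it, so the blur ramps in gradually instead of cutting off.
    y = np.linspace(0.0, 1.0, h)
    dist = np.abs(y - focus_center)
    sharpness = np.clip(1.0 - (dist - focus_width) / focus_width, 0.0, 1.0)
    mask_col = (sharpness * 255).astype(np.uint8)
    mask = Image.fromarray(np.tile(mask_col[:, None], (1, w)), mode="L")

    # Where the mask is white keep the sharp image, elsewhere use the blur.
    combined = Image.composite(img, blurred, mask)

    # Miniatures usually get a saturation and contrast push as well.
    combined = ImageEnhance.Color(combined).enhance(1.4)
    combined = ImageEnhance.Contrast(combined).enhance(1.1)
    combined.save(path_out)

fake_tilt_shift("street_scene.jpg", "street_scene_tiltshift.jpg")

The whole trick is the mask: one band of the frame stays sharp while everything above and below fades into the blurred copy, which is what sells the miniature look.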
<urn:uuid:a69a30c7-25ce-4243-8d88-ed026be08256>
{ "date": "2015-11-29T01:33:55", "dump": "CC-MAIN-2015-48", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398455135.96/warc/CC-MAIN-20151124205415-00064-ip-10-71-132-137.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8984284400939941, "score": 2.578125, "token_count": 445, "url": "http://slodive.com/design/tilt-shift-videos/" }
4 Quarters Make 1 Dollar

Welcome to another edition of Be A Better Rapper Now! You may be wondering how to start rapping. One of the first things you need to learn how to do is hack the beat. What does hacking the beat mean? I'm not talking about stealing a beat from another producer or artist. That's called beat jacking, which is another topic. What I mean by hacking the beat is understanding the core fundamentals of the beat that you are writing to.

In the real world we measure things by width, depth, inches, centimeters, yards, feet, miles, etc. In music you have measurements as well, and they are pretty basic. The term used to measure length in a song is, fittingly, the "measure", and in hip hop measures are also referred to as "bars". Bars and measures are exactly the same thing. All they are is a small piece of space in musical time. For example, in Hip Hop a bar is typically 4 beats. When I say beats I'm not talking about Dr. Dre headphones or hip hop instrumentals. Beats are pretty much little checkpoints within each bar. Each bar will typically have 4 beats in it. Think of it like this: 4 quarters make 1 dollar, so 4 beats make one measure. These 4 beats are typically referred to as quarter notes. So again, 4 quarter notes make 1 measure or bar. I am sure you have heard people in music, especially live bands, count 1, 2, 3, 4 and then start playing. What they are doing is what you call a pre-count, which establishes 1 bar or measure at the tempo of the song so the band can start on time and in tempo with one another. How fast the tempo is determines how quickly that 1, 2, 3, 4 count goes by.

Dancing With The Beat

When learning how to rap, understanding this basic structure of musical length will allow you to hack any beat in hip hop. Typically the kick drum hits on the 1st and 3rd quarter note or beat, and the snare drum typically hits on the 2nd and 4th quarter note or beat. Knowing this information will give you a clear indication of where you are in the actual bar. The reason this is important to know is that rapping is mimicking the percussive elements of the beat (instrumental). This allows you to ensure that when you are writing your lyrics or rapping, you are on beat. I am sure you have heard the term "on beat" before, and that's exactly what it means. It means that you are in time and in sync with the tempo of the track, and the drums typically carry the rhythm. So when you are rapping you want to be verbally dancing with the beat, with your words landing on top of the kick and snare drums most of the time. You should be complementing the beat and adding to it instead of taking away from it by being off beat. Your rhymes should flow so well with the beat that they seem as if they are a part of the drum kit. This is going to make your flow sound tight and professional as opposed to sounding like an amateur.

Speaking The Same Language

Another reason why it is important to know how to count bars is to be able to speak the same musical language as other artists. For example, let's say an artist hits you up asking you to jump on a track with them and they say they need 16 bars. You need to be able to know how long 16 bars is. It would be really embarrassing for you to write 24 bars only for them to hit you back and be like "yo, I asked for 16 bars, your verse is too long for the song". Then you have to go back and do double work to fit your verse into 16 bars.
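If it helps to see the math behind "how long is 16 bars", here is a quick sketch in Python. It is not part of the original lesson; it simply assumes the 4-beats-per-bar convention described above, and the tempos in the example are arbitrary numbers, not tied to any particular track.

# Quick sanity-check math for counting bars: at a given tempo (BPM),
# one beat lasts 60/BPM seconds, and with 4 beats to the bar a bar
# lasts 4 * (60/BPM) seconds.

def beat_seconds(bpm: float) -> float:
    return 60.0 / bpm

def bar_seconds(bpm: float, beats_per_bar: int = 4) -> float:
    return beats_per_bar * beat_seconds(bpm)

def section_seconds(bars: int, bpm: float, beats_per_bar: int = 4) -> float:
    return bars * bar_seconds(bpm, beats_per_bar)

for bpm in (85, 95, 140):
    verse = section_seconds(16, bpm)   # a typical 16-bar verse
    hook = section_seconds(8, bpm)     # a typical 8-bar hook
    print(f"{bpm} BPM: bar = {bar_seconds(bpm):.2f}s, "
          f"16 bars = {verse:.1f}s, 8 bars = {hook:.1f}s")

So at a slower tempo your 16 bars take noticeably longer in real time, which is exactly why counting bars beats counting seconds.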
Tempos vary and some tracks are going to be faster than others, so it's really important that you are able to count your bars. Typically in Hip Hop, verses are 16 bars and hooks are 8 bars.

1 Of The Core Fundamentals

Learning how to rap is made up of several core fundamentals, and this is definitely one of them. Having this knowledge allows you to be much more structured and systematic with establishing your rhythms as you begin to figure out how you are going to flow to the beat. Hacking the beat will act as a guide to ensure that you are not off tempo and that you are able to measure your bars correctly.

Please share your thoughts in the comments section below. I would love to hear your take on hacking beats. If you found this article helpful and would like to receive updates when new editions to this series are released & also receive a free copy of my e-book called The #1 Fundamental To Rapping then make sure you sign up here.
<urn:uuid:73c99402-6011-492c-9104-5a4d98e0c5f4>
{ "date": "2018-09-20T22:06:57", "dump": "CC-MAIN-2018-39", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156622.36/warc/CC-MAIN-20180920214659-20180920235059-00496.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9650494456291199, "score": 2.640625, "token_count": 989, "url": "http://colemizestudios.com/how-to-start-rapping/" }