Dataset columns:
text: string (lengths 199 - 648k)
id: string (lengths 47 - 47)
dump: string (1 class, 1 value)
url: string (lengths 14 - 419)
file_path: string (lengths 139 - 140)
language: string (1 class, 1 value)
language_score: float64 (0.65 - 1)
token_count: int64 (50 - 235k)
score: float64 (2.52 - 5.34)
int_score: int64 (3 - 5)
One of the fundamental measures of traffic on a road system is the volume of traffic using the road in a given interval of time. This is also called the flow and is expressed in vehicles per hour or vehicles per day. When the traffic is composed of a number of types of vehicles, it is common practice to convert the flow into equivalent passenger car units (PCUs) using certain equivalency factors. The flow is then expressed as PCUs per hour or PCUs per day. This means that the vehicle count needs to be collected by vehicle class and type. Another aspect of traffic flow is its variation. Due to various external reasons, the rate of traffic flow on a stretch of road is not constant. Understanding this variation is important for various traffic engineering applications. For example, the variation of traffic flow within an hour is important for traffic signal design. Its implication is that traffic flow data need to be collected at short intervals, typically 5, 10, or 15 minutes. Needless to say, the finer the count interval, the more accurate the modeling of the variation. The variation of traffic within an hour is captured by the peak hour factor, which is defined as the ratio of the total hourly volume to the peak flow rate within that hour. A peak hour factor of 1 implies that the flow rate is constant, and typically occurs in highly congested traffic.
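As a worked illustration, the peak hour factor can be computed from four 15-minute counts as the hourly volume divided by four times the peak 15-minute count. This is a minimal sketch; the class name and the vehicle counts are hypothetical, chosen only to show the arithmetic.

```java
// Peak hour factor (PHF) from four 15-minute vehicle counts.
// PHF = hourly volume / (4 * peak 15-minute count), so PHF <= 1,
// with PHF = 1 only when flow is perfectly uniform across the hour.
public class PeakHourFactor {

    static double phf(int[] counts15min) {
        int hourlyVolume = 0;
        int peakCount = 0;
        for (int c : counts15min) {
            hourlyVolume += c;
            peakCount = Math.max(peakCount, c);
        }
        return hourlyVolume / (4.0 * peakCount);
    }

    public static void main(String[] args) {
        // Hypothetical counts for one hour, in vehicles per 15 minutes.
        int[] counts = {180, 210, 250, 160};
        System.out.printf("PHF = %.3f%n", phf(counts)); // 800 / (4 * 250) = 0.800
    }
}
```

A PHF well below 1, as in this example, signals pronounced peaking within the hour, which is exactly the variation that signal timing must accommodate.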
<urn:uuid:489a2e17-6542-45fd-983b-126c4e6e7ff4>
CC-MAIN-2016-26
http://iitb.vlab.co.in/?sub=42&brch=132&sim=464&cnt=1
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397749.89/warc/CC-MAIN-20160624154957-00071-ip-10-164-35-72.ec2.internal.warc.gz
en
0.969153
273
3.78125
4
The Soviet Union was by no means satisfied with the actions of the western Allies in 1943. The Soviets wanted very much for the western Allies to open a true “second front” in the west and were very unhappy when the Allies did not do so. From 1941 to 1944, the Soviets were doing the huge majority of the actual fighting against the Germans. The German invasion of the USSR in 1941 involved tremendous numbers of people. The Soviets took heavy casualties and were close to being defeated at least until early 1943. Meanwhile, the Germans were essentially not being challenged in the west. The Soviets wanted a major invasion of Europe to divert German attention from them and ease the pressure on them. Because of this, the Soviets were not particularly happy with what the western Allies actually did. They felt that the action in North Africa and the later invasion of Italy were insufficient to truly draw the Germans’ attention. They felt that a more massive and more threatening invasion was needed. This is why the Soviets were not satisfied with the actions of their allies in 1943.
<urn:uuid:9563c915-ec14-43b1-a98d-f2c16805665e>
CC-MAIN-2016-26
http://www.enotes.com/homework-help/what-was-strategy-allies-go-across-english-channel-422641
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400572.45/warc/CC-MAIN-20160624155000-00164-ip-10-164-35-72.ec2.internal.warc.gz
en
0.987043
241
3.4375
3
Born of the author's own experience working with teachers and principals, Action Research, Fourth Edition, provides a research-based, step-by-step outline of how to do action research. The author guides teachers and administrators through the action research process via numerous concrete illustrations, positioning it as a fundamental component of teaching. Action Research helps to develop teachers and administrators with professional attitudes, who embrace action, progress, and reform.
Features:
- Balanced coverage of quantitative data collection and analysis techniques. Chapter 4, Data Collection Techniques, covers collection techniques for the most frequently used qualitative and quantitative data, including observations, interviews, teacher-made tests, and standardized test data. Chapter 6, Data Analysis and Interpretation, guides students through data analysis and provides techniques, coding guidelines, and examples for analyzing both quantitative and qualitative data. Additional coverage of mixed methods research has been added throughout the book.
- A focus on producing critical consumers of action research. A new chapter, Evaluating Action Research (Chapter 9), helps students become critical consumers of research. Included in Chapter 9 is an article from an action research journal that is analyzed using the new criteria for evaluating action research. Appendix A, Action Research in Action, contains an extended example and evaluation of an action research case study.
- Expanded coverage of ethics. Chapter 2, Ethics, provides an expanded discussion of ethical guidelines and guidance for seeking and obtaining Institutional Review Board (IRB) approval.
- Integration with the MyEducationLab for Action Research website. The fourth edition includes margin-note integration with MyEducationLab for Action Research, a dynamic online learning environment that gives students the opportunity to build a better understanding of action research through engagement with real products from the research process.
- A user-friendly format. Chapter objectives give students targets to shoot for as they read and study; Key Concept boxes provide an efficient review of important vocabulary and theory; Research in Action checklists provide guidelines to use in each stage of the action research process.
Action Research, 4th edition, by Geoffrey E. Mills is published by Pearson.
<urn:uuid:9cee37a2-32b7-4ab9-b8c5-41d766fd0200>
CC-MAIN-2016-26
http://www.chegg.com/textbooks/action-research-4th-edition-9780138020217-0138020213
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395613.65/warc/CC-MAIN-20160624154955-00006-ip-10-164-35-72.ec2.internal.warc.gz
en
0.905275
460
3.015625
3
The following is important context when using the tool:
- Earnings data is for graduates found working in Texas for the equivalent of full time for a full year.
- The job market, employment rates, and earnings vary across the state; therefore, the location of employment may contribute to salary variances.
- The age and work-related experience of graduates, as well as individual strengths and goals, contribute to employment and earnings outcomes.
- First-year and even fifth-year earnings may not represent long-term income prospects. Tenth-year earnings may not fully represent long-term income prospects.
- Past performance is not a guarantee of future performance. It is, however, a useful guide.
- An education and degree have value beyond that of income.
<urn:uuid:eee46977-5490-4b12-90df-4eeebcb4cd07>
CC-MAIN-2016-26
http://www.utsystem.edu/seekut/Terms.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391634.7/warc/CC-MAIN-20160624154951-00095-ip-10-164-35-72.ec2.internal.warc.gz
en
0.94942
157
2.65625
3
Our oceans face a grim outlook in the coming decades. Ocean acidification, loss of marine biodiversity, climate change, pollution and over-exploitation of resources all point to the urgent need for a new paradigm on caring for the earth’s oceans—”business as usual” is simply not an option anymore, experts say. The extreme rate of acidification – the term used to describe the decrease in ocean pH levels caused by man-made CO2 emissions – has happened before, Carol Turley of Plymouth Marine Laboratory said, a claim that might have been comforting if she hadn’t been referring to the time when dinosaurs died out. This is a “huge environmental crisis,” she told attendees at an information session at European Parliament this month, addressing challenges and solutions for the world’s oceans months ahead of the United Nations Conference on Sustainable Development, Rio+20, slated to be held in Brazil in June. Turley joked that she’s often called the “acid queen” because of her bleak message, though the plight of more than 70 percent of the earth’s surface is not in the least bit humorous. Each year, the ocean absorbs roughly 26 percent of total CO2 emissions, which have increased by 30 percent since the beginning of the Industrial Revolution in 1750, according to the International Ocean Acidification Reference User Group. Ocean acidification affects marine life with calcium carbonate skeletons and shells, making them sensitive to even small changes in acidity. Acidification also reduces the availability of calcium for plankton and shelled species, which constitute the base of the entire marine food chain, creating a disastrous domino effect that could wipe out entire ecosystems. The oceans could be 150 percent more acidic by 2100, she added. This means drastic decreases in yields from fisheries, and mass extinction of marine life.
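For context on the “150 percent more acidic” figure: acidity here means hydrogen-ion concentration, which relates to pH logarithmically. The sketch below assumes the commonly cited surface-ocean pH values of about 8.1 today and roughly 7.7 projected for 2100; neither number appears in the article itself, so treat them as illustrative only.

```java
// Percent increase in acidity (hydrogen-ion concentration) for a pH drop.
// pH = -log10[H+], so the concentration ratio is 10^(phOld - phNew).
public class AcidityChange {

    static double percentIncrease(double phOld, double phNew) {
        double ratio = Math.pow(10.0, phOld - phNew);
        return (ratio - 1.0) * 100.0;
    }

    public static void main(String[] args) {
        // Assumed values: ~8.1 today, ~7.7 projected for 2100.
        // A 0.4-unit pH drop gives ~151%, matching the "150 percent" figure.
        System.out.printf("%.0f%% more acidic%n", percentIncrease(8.1, 7.7));
    }
}
```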
The world is currently losing natural resources at a rate humans haven’t even begun to describe, she said.
Changing public opinion
Sadly, rallying the public behind the necessity of ocean preservation has proved difficult. Global attention has largely been focused on the economy, particularly on the latest bout of economic chaos in the United States and Europe. “Our greatest challenge is to convince citizens that environmental targets (don’t go) against economic progress,” European Union Commissioner for Maritime Affairs and Fisheries Maria Damanaki stressed. For some, it’s a problem of “out of sight, out of mind,” said Watson-Wright, arguing that people disregard oceans as a priority since they live on land. But even landlocked countries have a great stake in ocean sustainability, she stressed. With Rio+20, designed to commemorate the 20th anniversary of the first United Nations Conference on Environment and Development, only a few months away, it is past time to discuss solutions. Raphaël Billé, program director for biodiversity and adaptation at the Institute for Sustainable Development and International Relations (IDDRI), called for stronger language on environmental goals in order to improve political momentum in the priority themes articulated by the conference organisers. He noted that Rio+20 is less than concrete in terms of political agreements, but is an opportunity to assess progress and renew political commitments, in the hopes of paving the way for hard decisions later.
Can Rio+20 be a game changer?
Rio+20 will feature oceans as one of seven themes, which also include food, energy, cities, water, and disasters. Since the first meeting in Rio 20 years ago, there has been some progress on protections for the oceans, according to UNESCO, including decisions made within the Johannesburg Plan of Implementation, agreed upon during the Earth Summit in 2002.
Plans for the world’s oceans at Rio+20 are outlined as ten proposals under four main objectives, according to UNESCO’s IOC: taking concrete action to reduce stressors and restore the structure and function of marine ecosystems; support for a “Blue-Green” economy; moving toward policy, legal and institutional reforms; and supporting marine research and monitoring, evaluation, and technology. The concerns over our planet’s oceans are not new, IDDRI pointed out in an article submitted to the U.N. in early November 2011; most of these problems have been recognised for decades, and, according to the article, “The only way forward is to recognise the overall failure of oceanic governance, to study the successes at hand, and to develop strategies that seriously take both into account.” The article also mentioned the conflicts between oceanic governance and resistance to making it more sustainable, especially when costs begin to add up. Though various experts have expressed doubt that the meeting in Rio will yield sufficient results for the planet, activists and scientists alike are turning up the heat on conference attendees to leverage political power at the gathering to make tough, lasting decisions that might give the oceans and their essential ecologies a shot at survival.
<urn:uuid:5d83f661-f9d2-40fa-a1dc-281377103afd>
CC-MAIN-2016-26
http://www.ipsnews.net/2012/03/oceans-will-not-survive-lsquobusiness-as-usualrsquo/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393533.44/warc/CC-MAIN-20160624154953-00048-ip-10-164-35-72.ec2.internal.warc.gz
en
0.935139
1,057
3.265625
3
The Volatiles Laboratory
Volatiles are a class of organic compounds that easily become gaseous at room temperature. Some volatiles are known or suspected carcinogens. As pollutants, these compounds may enter the environment in various ways:
- Spread through water as dissolved gases
- Permeate soil
- Pass into the air
Sometimes these chemicals enter our environment through contamination such as spills. Other times volatile chemicals may be introduced as inadvertent side effects of normal water treatment (e.g., chlorine disinfection). Thousands of samples being tested for volatiles pass through the laboratory each year. Most are water samples taken from public utilities in order to monitor their safety. Any source of water deemed a public drinking water supply must be analyzed in the CAS volatiles laboratory. The methods used for the identification and measurement of volatiles employ gas chromatography with a mass spectrometer. This method of analysis enables the laboratory to detect volatiles at extremely low concentrations; most can be detected at concentrations lower than one part per billion. Sample gases are desorbed onto a capillary column. This chromatography column separates the different components from one another, allowing them to be introduced to the mass spectrometer, where they are identified and quantified.
<urn:uuid:18e2bc2e-2f78-4e50-9726-b0d114d60643>
CC-MAIN-2016-26
http://dnr.mo.gov/env/esp/cas/volatiles-analysis.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00016-ip-10-164-35-72.ec2.internal.warc.gz
en
0.928451
261
3.6875
4
rounding double to two decimal places
Is there an easy way to round a double to two decimal places? I.e., 1.98999 would round to 1.99. The Math.round method only rounds to the nearest integer, and the DecimalFormat class converts the number to a string. I figure there must be an easy way to do this...?? Please advise.
Thanks for your help with that post. Am I insane, or does anyone else think that it's crazy not to have a simple built-in function in Java to round a double to a specified number of decimal places? Am I missing something? Why would everyone want to custom code this function?
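For what it's worth, here is one common way to do it, assuming half-up rounding and a double result (rather than a formatted string) is what's wanted. The quick multiply/round/divide trick works for most display purposes but inherits binary floating-point error; BigDecimal is the more robust route.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Two ways to round a double to two decimal places.
public class Round2 {

    // Quick trick: scale up, round to nearest long, scale back down.
    static double roundScale(double v) {
        return Math.round(v * 100.0) / 100.0;
    }

    // Robust route: BigDecimal with an explicit rounding mode.
    static double roundBigDecimal(double v) {
        return BigDecimal.valueOf(v)
                .setScale(2, RoundingMode.HALF_UP)
                .doubleValue();
    }

    public static void main(String[] args) {
        System.out.println(roundScale(1.98999));      // 1.99
        System.out.println(roundBigDecimal(1.98999)); // 1.99
    }
}
```

Note that neither approach is suitable for currency arithmetic; for that, keep values in BigDecimal throughout rather than converting back to double.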
<urn:uuid:84a2a5ee-b9a0-4cb8-958f-0f905d856623>
CC-MAIN-2016-26
http://www.java-forums.org/advanced-java/4130-rounding-double-two-decimal-places-print.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396027.60/warc/CC-MAIN-20160624154956-00082-ip-10-164-35-72.ec2.internal.warc.gz
en
0.921134
143
2.515625
3
Where is the best place to live to avoid seasonal allergies?
There is no easy answer regarding where a person should live to avoid allergies. Researchers are active in this area. Unfortunately, because pollen grains can travel long distances, pollen and other allergy-causing agents cannot be avoided altogether. Until recent years, desert climates, such as parts of Arizona and Nevada, were relatively safe havens from airborne pollen allergies, since native plants in these areas tended to be sparse and consisted mostly of ornamentals that were not wind-pollinated. Insects would take care of transferring the heavy, sticky pollen from one plant to another, keeping the air relatively free of pollen. However, researchers have found that with residential communities popping up throughout these areas, the situation has changed. People moving from other parts of the country would bring along their favorite trees, shrubs, and grasses to make the desert more like home. With those plants came airborne pollen, which changed the content of the air during allergy seasons. Many people relocate to avoid one particular allergy, only to develop sensitivity to another type within the same family of allergens, according to allergy researchers. It is the protein in pollen that causes allergies. Proteins within a plant family are very similar and often highly cross-reactive. It is extremely difficult to completely avoid an entire plant family when moving from one part of the country to another. An individual may move from a location with a dense amount of box elder (Acer negundo), a member of the maple family (Aceraceae), only to find that the new location has plenty of other types of maple. Researchers suggest that after a few months or years, the individual may become sensitized (allergic) to the other types as well. There really is no permanent, easy escape.
General rules of thumb are:
- Our botany researchers say mountainous areas tend to have very little weed pollen (but they do have considerable tree pollen).
- Forests tend to have little weed pollen, but obviously have enormous amounts of tree pollen.
- Areas populated by humans tend to have grass pollen, as do agricultural areas.
- The Pacific Northwest has a smaller amount of ragweed pollen than most other areas of the country (but does have standard amounts of trees, grasses, and other weeds besides ragweed).
Remember that these are generalizations only. Further research into any specific location should be performed to either affirm or disprove these allergy generalizations before assuming they apply. Researchers encourage you to do your own research with Pollen.com's History and Comparison features. One may wish to travel elsewhere during the notoriously high pollen season of a particular plant, but this may not be practical. Monitoring levels and counts of the particular pollen types that affect you, and reacting accordingly by avoiding exposure as much as possible, is about the only thing that can be done. Staying indoors during peak times and medicating properly may be ways to combat the problem. Working with your doctor, you can develop a plan of defense that may help you.
<urn:uuid:96ae3b0b-dd8a-4317-890b-31a5bc310e68>
CC-MAIN-2016-26
http://www.pollenlibrary.com/faq1.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00196-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962662
641
2.8125
3
DAR ES SALAAM, Tanzania — By almost any measure, job prospects for young people in this East African nation should be bountiful. Tanzania ranks among the world’s 30 fastest growing economies and spends a higher percentage of its GDP on education than all but 26 others. In theory, this should correspond to the rapid creation of new jobs and an abundance of well-educated young people to fill them. But Tanzania is facing a youth unemployment crisis rivaled by few other nations in the world. In 2012, Tanzania was home to more unemployed 15 to 24-year-olds per capita than 109 other countries. In a survey by the non-governmental organization Restless Development, out of over 1,000 young people across Tanzania, only 14 percent reported working a formal, wage-earning job. “Most of the young people we’re meeting … they want to be employed in Dar es Salaam, but the problem is they don’t have the qualifications,” said Nicas Nigumba of Restless Development, which works to help young people find better job opportunities. He says many young women are kept out of school and in the home by tradition, despite their longing for an education and a formal job. “Often they cannot discuss these things with their parents so they are coming to us,” he said. The problem is in the numbers: An astonishing 45 percent of Tanzania’s population is under 15, largely the result of high fertility rates and a decrease in child mortality, according to an April report by the World Bank. The result? Each year, 900,000 young Tanzanians enter a job market that is generating only 50,000 to 60,000 new jobs. Researchers say the problem originates with the country’s lackluster educational system, in which 65 percent of students failed the national exam required to pass secondary school last year. What’s more, youth advocates say schools fail to teach the skills and intellectual prowess employers are looking for. 
“The system of education in Tanzania—it teaches people general things, not skills they need for employment,” said Ally Mawanja, Program Coordinator for Restless Development. “We just give them a degree, but it’s hard to use that degree.” The result is that better-educated young men and women who migrate here from Kenya or other neighboring countries are filling these skilled jobs. “There are Kenyans coming to Tanzania and they're getting jobs that Tanzanians should be getting,” said Nigumba. Women in Tanzania face a host of additional challenges to finding a job, the foremost of which is their severe under-education in comparison with men. Indeed, Tanzania placed second to worst out of 68 countries in the 2013 Global Gender Gap report due to a dramatic discrepancy between boys and girls in educational attainment, among other factors. The percentage of girls who attend secondary school is decreasing, down to 45 percent in 2009 from 48 percent four years earlier, according to Tanzania’s education ministry. That Tanzania’s gender divide is as abysmal as its youth unemployment rate is no coincidence: It is largely the result of societal practices that limit girls’ education. Chief among them is the longstanding practice of mandatory pregnancy testing in schools. The practice dates back to before Tanzania’s 1961 independence, and derives from the cultural notion that unmarried girls should not have sex (though there seems to be little stigma against boys who do). Although there is no law that mandates pregnancy testing, many school administrators mistakenly believe there is, and Tanzania’s government has done little to correct them. Girls found to be pregnant are sent home to domestic work rather than finishing their education, which often prevents them from finding formal employment in the future. 
According to a September report by the New York-based Center for Reproductive Rights, an estimated 55,000 girls were forced out of Tanzanian schools between 2003 and 2011 for being pregnant. One woman, Hamida, was forced out of school at age 17 because of a positive pregnancy test. She said she has been unsuccessful in finding even unskilled work in a market. “I would like to be helped to go back to school,” she told researchers, “because life without studying is like ending your opportunities.” Of the several young women interviewed in the recent survey who had been expelled for pregnancy, not one reported full-time employment. And when women are able to find employment, many encounter sexual harassment in the workplace. “There are some old men and teachers who are trying to seduce us; we fail to say no due to fear and envy,” said a woman in the country’s Temeke district on the southern side of Dar Es Salaam in an anonymous testimony collected by Restless Development last year. “At the end we lose our chances to getting education and reaching our goals.” “Most of us are jobless because we have [refused] to offer sex to get jobs,” said another. Organizations like Restless Development are providing job skills training to some of Tanzania’s youth, and they are holding discussions with parents and school administrators nationwide in an attempt to end practices like forced pregnancy testing. Still, these programs are few and far between. More jobs and training will be needed to employ Tanzania’s bulging young generation. More from GlobalPost: The toxic toll of Tanzania's child labor practices
<urn:uuid:f61de70f-6c7f-4edc-a07a-f398fd483329>
CC-MAIN-2016-26
http://www.globalpost.com/dispatches/globalpost-blogs/rights/tanzania-youth-unemployment-crisis
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00188-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962919
1,143
2.609375
3
Habitat monitoring is an important tool for assessing the threat and conservation status of species and protected areas. This can be done at global and regional scales, where data are available. Conservation International (CI) uses habitat-based indicators, among other indicators, to assess the impacts of its conservation work. Donors for conservation projects are also requiring monitoring indicators to assess the impacts of their conservation investments. The UN Convention on Biological Diversity also has developed a list of indicators for monitoring to assess progress towards the 2010 target. Another major global assessment of species threat is the IUCN Red List. Threat levels estimated by the IUCN are dependent on criteria that include habitat extent, fragmentation and rate of change. The lists of indicators prioritized by these and other related consortia, such as the Conservation Measures Partnership and the Cambridge Group, all reinforce the key roles of habitat extent, fragmentation and rate of change in threat assessments. This is in part because of 1) the importance of habitats themselves, both as assemblages of species and as performers of ecological processes, 2) the relationships among a species’ habitat extent, resource availability and potential population, and 3) the relatively low cost and high feasibility of habitat monitoring with satellite data. Forest monitoring is critically important to assessments of biodiversity and conservation status because of the high levels of diversity and endemism in forests. This is especially so in the tropics and non-tropical biodiversity hot spots. In these areas, rates of forest clearance and fragmentation are rapid. They have direct implications for the threat and conservation status of species and protected areas. Monitoring forest change is also important for global climate because of both deforestation emissions and altered land-atmosphere exchanges of energy, water and carbon.
CI and colleagues have begun monitoring forests, although they have the goal of monitoring other, more precisely defined habitat types. This is partly possible globally for cover and fragmentation; however, changes in extent in most places must be monitored at higher spatial resolutions. The regional results presented here are part of an effort to provide a high-resolution baseline of cover, fragmentation and change for the ~1990 to ~2000 time period. This corresponds to the baseline for the indicators of CI and the CBD. They also complement the many high-quality in-country maps of forest and other habitat types, which are far more abundant than in-country products of habitat change. For selected areas the baselines have been updated to ~2005. CI and partners are conducting a series of forest-monitoring studies to estimate changes over time. Because of the small scale of most changes, this must be done at a finer resolution than that of a global analysis of cover. We rely largely on the 30-m resolution data from Landsat, especially the global archives made available for free through NASA’s Geocover program. For all areas listed we have created maps of forest cover and change from ~1990 to ~2000; for some areas additional data from ~1986, ~1975 or earlier are included, and for selected areas the forest cover and change maps have been updated to ~2005. All CI maps have been produced with a common methodology. Some improvements are anticipated; however, there are some key aspects common to all. This is also true for those produced by partners and colleagues. CI’s methodology, and a discussion of other methods used for the products listed here, is provided below. In most cases the maps were produced via collaboration between CI or other US-based researchers and in-country partners.
Data and Methods
The forest cover and change monitoring approach involves two or three epochs of relatively high-resolution satellite imagery, such as Landsat data.
Forest cover and change are mapped by analyzing imagery from circa 1990, 2000, and 2005, in a single process using imagery from two dates at once (1990-2000 or 2000-2005). Forest is defined as closed-canopy, mature natural forest. If Landsat imagery is used, the data are generally acquired at no cost; the Geocover c.2000 Landsat image usually provides the base image, as this product has been ortho-rectified and has the highest locational fidelity. The two other epochs are registered to the Geocover base, and two classifications of forest cover and change, one for each period, are generated. A supervised classification approach is employed. Changes are classified directly within multi-temporal images, and numerous sub-classes are created for each final class. Multiple iterations are run using maximum likelihood classification or See5 [through the Classification and Regression Tree (CART) classification interface], and sub-classes from the final iteration are merged. Analysts delineate training sites for each land cover or change class, based on visual interpretation and by referring to ground reference data and high-resolution imagery, such as Quickbird, available on Google Earth. For 3-date classifications, a matrix is generated to highlight class overlap, and recoding, by referencing the input images, is then performed where necessary to yield the final 3-date classification. Validation of each final product is performed using available aerial photographs and satellite imagery available through Google Earth. The resultant data represent a series of regional or national-level studies, each providing a complete estimate of forest cover and change over time. The final maps are filtered so that patches and clearings as small as 2 hectares are reported. In some cases, sub-classes are included, such as Spiny Forest and Woodland and Mangrove in Madagascar.
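At its simplest, the two-date logic described above reduces to a per-pixel change matrix. The sketch below only illustrates that matrix, assuming forest/non-forest labels already exist per pixel for each epoch; the real workflow classifies the multi-temporal imagery directly with maximum likelihood or See5/CART, and the class names here are invented for the example.

```java
// Illustrative per-pixel change matrix for two-epoch forest monitoring.
// Assumes each pixel already has a forest/non-forest label per epoch.
public class ForestChange {

    enum ChangeClass { STABLE_FOREST, LOSS, GAIN, STABLE_NONFOREST }

    static ChangeClass label(boolean forestEpoch1, boolean forestEpoch2) {
        if (forestEpoch1 && forestEpoch2)  return ChangeClass.STABLE_FOREST;
        if (forestEpoch1)                  return ChangeClass.LOSS;  // forest -> cleared
        if (forestEpoch2)                  return ChangeClass.GAIN;  // regrowth/afforestation
        return ChangeClass.STABLE_NONFOREST;
    }

    public static void main(String[] args) {
        boolean[] epoch1 = {true, true, false, false};
        boolean[] epoch2 = {true, false, true, false};
        for (int i = 0; i < epoch1.length; i++)
            System.out.println(label(epoch1[i], epoch2[i]));
    }
}
```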
For each of the nine areas listed below, a zip file is provided containing a summary document, digital map, and graphics.

CABS and partners have a goal to complete a precise baseline estimate of forest cover, fragmentation and change for the entire tropics, country-by-country. These have been created with in-country partners, and links to all partners and sites with similar products produced by colleagues are provided below. A map of current or completed projects is provided below (global progress map). These data can provide baseline rates of change for various assessments, including the Convention on Biological Diversity. They also form a basis for analysis of threats and conservation status of all protected areas, other priority areas, and forest-obligate species that have estimates of ranges.

Related Analysis of Threat and Conservation

For each of these maps of forest cover and change, standard analyses are conducted to summarize the threat and conservation status of individual species and protected areas. These include data from the IUCN World Database on Protected Areas and global assessments of the IUCN Species Specialist Groups, the Red List consortium and BirdLife International.

We believe it is possible and cost-effective to conduct high-resolution monitoring of all tropical forests and other forested hotspots on a five-year basis. This, however, requires delivery of a low-cost, global ~2005 data set based on Landsat and other data, and an ensured continuation of such programs. For complete forest coverage, and for most other habitats, monitoring can now be conducted at 250 m or coarser resolution on a yearly basis. This would use newer versions of MODIS and SPOT VEGETATION data than those used in the global products analyzed here. Raw data are available, but coordination among implementing agencies and laboratories must be directed towards a new era of near-real-time monitoring of habitat change.

List of Projects and Partners

1. Brazilian Amazon (INPE, NASA LBA)
2. Brazilian coastal forest (S.O.S. Mata Atlantica, INPE, CI)
3. Burma (SI, CI)
4. CARPE landscapes (UMD, CI)
5. Central America Biodiversity Corridor (U.S. AID, NASA, SERVIR)
6. Kenya, Coastal Forests (Sokoine University of Agriculture, CI)
7. Tanzania, Coastal Forests (Sokoine University of Agriculture, CI)
8. Tanzania, Eastern Arc mountains (Sokoine University of Agriculture, Forest and Beekeeping Division of the Ministry of Natural Resources and Tourism)
9. Liberia and Guinea (CI)
10. Madagascar (CI)
11. Meso America (ECOSUR, BTFS, WCS, CI)
12. Non-Brazilian Amazon and Andes (Bolivia, Peru, Ecuador, Colombia, Venezuela)
13. North-east Philippines (CI)
14. Papua New Guinea (CI)
15. Paraguayan forest and woodlands (UMD, Guyra Paraguay, CI)
16. Sichuan Alps (CI)
17. Sumatra (WRI, CI, UMD)

Forest cover and change data for each of the regions below can be accessed here. Associated data descriptions and publications are contained in each zip file on the Data Access page.

China c.1990-c.2000 (Sichuan Province)
Mexico c.1990-c.2000-c.2007 (5 southern states)
Philippines c.1990-c.2000 (selected corridors)
Sumatra, Indonesia c.1990-c.2000
Tanzania (Eastern Arc Mountains) c.1970-c.2000
Tanzania (Eastern Arc Mountains) c.1990-c.2000
Tanzania (Coastal Forests) c.1990-c.2000
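The 2-hectare minimum mapping unit mentioned in the methodology can be sketched as a connected-component sieve. This is illustrative only: the change mask is synthetic, and the ~22-pixel threshold assumes 30 m Landsat pixels (an assumption, not something stated above).

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)

# Synthetic change mask: scattered single-pixel noise plus one real clearing.
change = rng.random((200, 200)) < 0.02
change[50:60, 50:60] = True        # a genuine clearing (~9 ha at 30 m pixels)

# 2 ha at 30 m resolution: one pixel covers 0.09 ha, so 2 ha ~ 22 pixels.
MIN_PIXELS = 22

labels, n_patches = ndimage.label(change)
sizes = ndimage.sum(change, labels, index=np.arange(1, n_patches + 1))
big_enough = np.flatnonzero(sizes >= MIN_PIXELS) + 1   # label ids to keep
filtered = np.isin(labels, big_enough)

print(f"{n_patches} raw patches -> {ndimage.label(filtered)[1]} patches >= 2 ha")
```

The sieve removes speckle from the per-pixel classifier while preserving any clearing large enough to meet the reporting threshold.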
Difference between Dengue Fever and Malaria

Dengue fever and malaria are among the most dreaded tropical diseases known to man. Aside from their tendency to occur in the tropics and their common carrier, however, they are as different as any two diseases can be, as you will see in this comparison article.

Dengue fever is an acute febrile disease that is transmitted mainly by mosquitoes. The disease commonly occurs in the tropics and it can potentially be lethal. Dengue fever is caused by four virus serotypes that are closely related to one another, all of which belong to the genus Flavivirus, family Flaviviridae. The disease was first identified in 1779, and it has also been called breakbone fever, due to its extremely painful symptoms.

Malaria is an infectious disease carried by mosquitoes as well. Like dengue fever, it is quite common in the tropics, although it has also been seen in the Americas and Africa in addition to Asia. The disease is caused by a eukaryotic protist belonging to the genus Plasmodium, and it affects as many as 250 million people every year. In Africa alone, the disease causes the death of anywhere from one to three million people a year, most of whom are children. Malaria is in fact considered a primary contributor to poverty in many countries.

Dengue fever often manifests itself as a fever that occurs very suddenly, which may be accompanied by headache, muscle pain, and pain in the joints that can be very severe. This last symptom has given the disease the name break-bone fever or bone-crusher disease. Dengue fever can also cause the skin to erupt in a hemorrhagic rash characterized by bright red spots that first appear on the lower limbs and the chest, and later spread throughout the rest of the body. Patients may also experience severe pain behind the eyes, abdominal pain, nausea, diarrhea and the vomiting of what looks like coffee grounds but is actually congealed blood.
Some of the symptoms of malaria are fever, chills, joint pain, vomiting, anemia, and convulsions. Patients may also experience a degree of damage to the retina. Malaria typically causes the appearance of a series of characteristic symptoms, starting with a period of sudden coldness followed by fever and sweating that may last from four to six hours. This cycle may repeat itself every two or three days, depending on the particular strain that the patient has.

Although there is no vaccine for dengue fever or malaria, there are medications available that can reduce the risk of contracting the diseases. These medications have to be taken daily or weekly, and they provide a feasible option only for those who are temporary visitors to the area, since local residents are often unable to afford the cost of such medications.

Similarities and Differences

Dengue fever:
- An acute febrile disease that is transmitted mainly by mosquitoes
- Caused by four virus serotypes that are closely related to one another

Malaria:
- An infectious disease carried by mosquitoes
- Affects as many as 250 million people every year
- The infection resides in the liver
Why is Greece in turmoil?

Greek citizens are furious about the austerity measures the government has put in place to try to trim that country’s huge and unmanageable debts. Government wages have been cut, jobs eliminated, pensions trimmed and taxes raised. The government must make even deeper cuts in order to qualify for more emergency loans from the International Monetary Fund and other European countries. In essence, Greece doesn’t have enough money to pay its bills. Without help it could default on its loans, making it the first Western European country to do so in more than 60 years.

What are the origins of the European debt crisis?

Greece and several other European countries have been living beyond their means. They have built up huge sovereign debts, partly to finance stimulus to try to get their economies moving again after the 2008-2009 recession. But the debt problems go back much further. Economists say many countries such as Greece have been overspending for years, or even decades. Worries over debt repayment have prompted lenders to become more conservative, raising interest rates and making borrowing more expensive for those countries that already have crippling debt – creating a spiral that is hard to stop.

How could it spread?

If Greece defaults, the banks it owes money to could teeter, or even topple. Greek, German and French banks hold the most Greek government debt and would be under the most pressure. And Greece isn’t the only problem country. There are concerns over high debt levels in Spain, Belgium and even Italy – the third-largest economy in the euro zone. The problem is exacerbated by the fact that so many European nations share the euro, putting pressure on the healthy countries in the euro zone to prop up the weaker ones. Troubled countries cannot boost their exports by cutting the value of their currency – a move that might have been possible if they didn’t share the euro with all the others.
How much is it going to cost to fix the crisis?

The Europeans have already proposed a rescue fund of €440-billion, to come from the IMF, wealthier euro zone countries, and the European Central Bank. It would be used to bail out countries or recapitalize banks. But that could rise to as much as €2-trillion, the level some economists say will be needed to restore confidence that Europe can handle the crisis. Numbers of that magnitude will put a strain on everyone involved, particularly the countries that have to come up with the bulk of the funds.

What is the worst-case scenario?

If Greece, or another European country, defaults, it could trigger a domino effect of bank failures (those to whom Greece owes money) and possibly prompt others to default. It would likely freeze credit markets, where everyone becomes afraid to lend to everyone else. That’s the kind of credit crunch that plunged the world into recession in 2008 after investment bank Lehman Bros. collapsed.

Why is this important for Canada?

Canadian banks have little direct exposure to Greece, but they do have links to European banks that could be hurt if the situation deteriorates. And a Greek default or other disaster in Europe would likely plunge the world into a downturn. Canada’s export-led economy would not be immune.
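The "spiral that is hard to stop" can be made concrete with a toy debt-dynamics rule: the debt-to-GDP ratio climbs when the interest rate on existing debt outruns nominal GDP growth while the budget still runs a primary deficit. All numbers below are hypothetical, chosen only to show the mechanism; they are not actual Greek figures.

```python
def debt_to_gdp_path(d0, rate, growth, primary_deficit, years):
    """Evolve the debt-to-GDP ratio: d' = d * (1 + rate) / (1 + growth) + deficit."""
    path = [d0]
    for _ in range(years):
        path.append(path[-1] * (1 + rate) / (1 + growth) + primary_deficit)
    return path

# Same starting debt (120% of GDP), same growth and primary deficit; the only
# difference is the interest rate lenders demand (3% vs 6%).
calm = debt_to_gdp_path(1.20, rate=0.03, growth=0.02, primary_deficit=0.01, years=10)
stressed = debt_to_gdp_path(1.20, rate=0.06, growth=0.02, primary_deficit=0.01, years=10)
print(f"after 10 years: calm {calm[-1]:.0%} of GDP, stressed {stressed[-1]:.0%} of GDP")
```

The higher borrowing cost alone pushes the ratio up faster, which in turn worries lenders further — the feedback loop described above.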
- Jump right in! Anybody and everybody is welcome to help others! It is not required that you be a professor, an accredited educator, or a professional tutor. All you need is a familiarity with a topic and the ability to communicate your knowledge. - Encourage learning. The goal of these forums is to help people learn and, as they say, "One best learns by doing." While it is often easiest to simply post "the answer" to an exercise, a student learns nothing from this that wasn't already in the back of the book. Instead, strive to provide hints, helps, leading questions, and other avenues for the learner to investigate. - Support growth. If you're helping, then you already "know how to do it". The goal of these forums is that the learners grow in understanding as you have. Providing the complete hand-in solution to a take-home test might be "fun" for you, but the student (experience shows) learns very little which is positive. - Model good habits. Please speak clearly, display consideration and good manners, and model useful mathematical habits of writing and thought. Explaining similar examples and posting links to on-topic web pages is great; doing a student's homework or posting ads is not. - Be professional. Keep in mind that readers cannot hear your cheerful tone or see your friendly wink. While a friendly tone is certainly preferred, it is often best to exercise caution regarding sarcasm, off-topic humor, and the like. If you're tired, go to bed and come back, refreshed, the next day.
Towards the end of the 14th century a branch of the powerful Gaelic O’Neill family moved from West Ulster into what is now Antrim and Down. Becoming known as the Clandeboye O’Neills, they became increasingly powerful and were seen as a growing threat by the English monarchs, especially Elizabeth. Not only did Gaelic chieftains and powerful Anglo-Norman families contest English rule in Ireland but the country was also seen as a back door into England for her great rivals, France and Spain.

In 1571, Elizabeth, a great believer in colonization, granted her Secretary-of-State Sir Thomas Smith a huge 360,000 acres of East Ulster to plant English settlers in an effort to seize control of the Clandeboye O’Neill territory and control the native Irish. The grant included all of the area we know of today as North Down and the Ards, apart from the southern tip of the Peninsula, which was controlled by the Anglo-Norman Savage family.

Unfortunately for Smith, the booklet he printed to advertise his new lands was read by the Clandeboye O’Neill chief, Sir Brian O’Neill, who just a few years earlier had been knighted by Elizabeth. Furious at her duplicity in secretly arranging for the colonization of O’Neill territory, he burned down all the major buildings in the area, making it difficult for the plantation to take hold. Launching a wave of attacks on these early English settlers, the O’Neills scorched the land Smith claimed, burning abbeys, monasteries and churches, and leaving Clandeboye, ‘totally waste and void of inhabitants’.

With the subsequent collapse of the Smith colony – mismanagement and the death of Smith’s son also contributed to its failure – Elizabeth was forced to agree to a peaceful compromise with Brian O’Neill’s successor, his grandson Con O’Neill, in 1587. Over the next few decades the English stepped up their campaign to rule Ireland, particularly in Ulster.
Despite Smith’s failure, the concept of colonization, or plantation of settlers, continued to appeal. But it would take a Scot, King James I, to give the go-ahead for the first successful plantation, and two Scots, Sir Hugh Montgomery and Sir James Hamilton, to oversee it. Perhaps even more importantly, the settlers this time would not be English, but Scots, far hardier and temperamentally suited, King James believed, for the task ahead.
Along with being Easter, March 31st is Photo Preservation Day

According to the data provided by photo digitization company ScanMyPhotos.com, fewer than 8 percent of all photos taken have been digitized. That means there are trillions of pictures floating around without any sort of backup. While scanned versions of an old picture don't retain the same sort of nostalgia, having a backup is absolutely essential in case of floods, fires or even theft. ScanMyPhotos has also revealed that around 40 percent of people they surveyed have never backed up the data on their computer, which means that even those nicely preserved digital copies could be lost. While the loss of these photos can be a problem, it's one that's easily solved and easily prevented. ScanMyPhotos recommends using the coming weekend as a time to get the family together to dig through old photo books and find just what pictures need to be saved for the future.
The Khitan people, who dominated a large chunk of Manchuria between 916 and 1125 AD, used two different scripts: the "large script", which came into use in about 920 AD, and the "small script", which was reputedly created in about 925 AD by the Khitan scholar Diela, who was inspired by the Uighur alphabet. The two scripts were used in parallel and appear to have little in common in terms of the forms of the characters and the ways they were assembled into compound characters.

The scripts were used to write Khitan, an extinct Altaic language once spoken in Manchuria. The language and the Khitan people were known as 遼 (Liao) in Chinese.

The large script was written in vertical columns running from top to bottom and from right to left. Some of the characters were taken from Chinese, while others were independent inventions. Here are most of the large script characters; the variant forms of some characters are not shown.

The small script consists of 370 characters, including logograms, syllabograms and possibly some phonograms. A selection of small script characters.
More than 150 languages and dialects are spoken by the Indigenous peoples in Brazil today. They are part of the nearly 7,000 languages spoken in the world today (SIL International, 2009). Before the arrival of the Portuguese, however, in Brazil alone that number was probably close to 1,000.

In the process of colonization of Brazil, the Tupinambá language, the most widely spoken along the coast, was adopted by many colonists and missionaries, taught to Indians grouped in the missions and recognized as Língua Geral. Today, many words of Tupi origin are part of the vocabulary of Brazilians. Just as the Tupi languages have influenced the Portuguese spoken in Brazil, contact among peoples ensures that Indigenous tongues do not exist in isolation and change constantly. In addition to mutual influences, languages also share common origins. They are part of linguistic families, which in turn can be part of a larger division, the linguistic branch. And just as languages are not isolated, neither are their speakers. In Brazil there are many Indigenous peoples and individuals who can speak and/or understand more than one language, and it is not uncommon to find villages where several tongues are spoken.

Among such diversity, however, only 25 peoples have more than 5,000 speakers of Indigenous languages: Apurinã, Ashaninka, Baniwa, Baré, Chiquitano, Guajajara, Guarani [Guarani Ñandeva / Guarani Kaiowá / Guarani Mbya], Galibi do Oiapoque, Ingarikó, Kaxinawá, Kubeo, Kulina, Kaingang, Kayapó, Makuxi, Munduruku, Sateré-Mawé, Taurepang, Terena, Ticuna, Timbira, Tukano, Wapixana, Xavante, Yanomami, Ye'kuana.

Getting to know this vast repertoire has been a challenge to linguists. Keeping it alive and well has been the goal of many projects of Indigenous school education. In order to know which languages are spoken by each one of present-day Brazil's 227 Indigenous peoples, access the General table.
March 31, 2004 Road Safety Focus of World Health Day 2004 Road Safety Is No Accident; 1.2 Million Deaths Could Be Prevented Each Year Adnan Hyder, MD, MPH, PhD, an international road traffic safety expert with the Johns Hopkins Bloomberg School of Public Health, will participate in events for the World Health Organization’s (WHO) World Health Day 2004. Dr. Hyder will also address a United Nations forum on April 15. For the first time in its history, WHO will devote World Health Day to road traffic safety. The annual event, held each April 7, will focus on the magnitude of the problem road traffic injuries pose globally and promote successful ways to prevent these accidents. Each year, 1.2 million men, women and children are killed, and millions more are disabled. The rate of injuries and deaths increases in developing nations where pedestrians and vehicles share the same roads. World Health Day 2004’s main event will be an international symposium in Paris, featuring testimonials and panel discussions on traffic safety. The WHO will also release the “World Report on Road Traffic Injury Prevention,” which was co-edited by Dr. Hyder, who is an assistant professor and the Leon Robertson Faculty Development Chair in the School’s Department of International Health. He is also a member of the School’s Center for Injury Research and Policy. Dr. Hyder will also participate in a roundtable discussion on how governments and academic institutions can help reduce road traffic deaths and injuries. Hundreds of organizations around the world will host regional and national events throughout the day to raise awareness about road traffic injuries and their cost to society. “These events have been planned to stimulate a new level of commitment to taking on this issue,” said Dr. Hyder. “It is important to have follow-up after World Health Day 2004 to ensure road traffic injuries aren’t forgotten or simply thought of as infrequent accidents. 
These injuries and deaths are predictable and preventable.” World Health Day 2004 will address the known strategies to prevent these traffic deaths, such as reducing rates of speed, limiting alcohol consumption, wearing proper restraints and ensuring greater visibility of people walking and cycling. A five-year strategy for road traffic injury prevention has been developed to ensure the awareness campaign maintains its momentum after World Health Day 2004. Although the vast majority of road traffic deaths and injuries occur in developing countries, the issue remains a concern for developed countries. In the United States, 71,000 pedestrians were injured and 4,808 were killed in traffic crashes in 2002, according to the U.S. Department of Transportation. These deaths represent 11 percent of all traffic fatalities. In the state of Maryland, the five-year average (1998-2002) of pedestrians involved in accidents is 2,896. More than 100 of those were fatal. Dr. Hyder also authored an editorial on road traffic injuries in the April 2004 issue of Bulletin of the World Health Organization. In it, he states, “These events represent unprecedented opportunities for the global health community to wake up to the preventable devastation caused by road traffic injuries. National players need to take charge and develop plans of action that are evidence-based, specific to their context and practical to implement. It is time to initiate activities which will lead to a sharp decrease in the loss of life and health from road traffic injuries, especially in low and middle income countries.” Dr. Hyder is scheduled to address the United Nations on April 15, at the Stakeholder Forum on Global Road Safety. In addition, he has plans for future road traffic injury prevention studies in Malaysia, Pakistan, Kenya and Uganda.

Public Affairs Media Contacts for the Johns Hopkins Bloomberg School of Public Health: Kenna Brigham or Tim Parsons at 410-955-6878 or [email protected].
Photographs of Adnan Hyder are available upon request.
The Flying Sugar: The Sugar glider (Petaurus breviceps), sometimes called the Flying Sugar, is a small gliding possum native to eastern and northern mainland Australia, New Guinea, and the Bismarck Archipelago, and introduced to Tasmania.

Weights and Measures: The Sugar glider is around 16 to 20 cm (6.3 to 7.5 in) in length, with a tail almost as long as the body, and weighs between 90 and 150 grams (3 to 5.3 oz).

Grey, Cream, Black: The fur is generally pearl gray, with black and cream patches at the base of the black or gray ears. Other color variations include leucistic and albino recessive traits. The tail tapers only moderately and the last quarter of it is black, often with a dark tip. The muzzle is short and rounded. Northern forms tend to be brown colored rather than gray and, as predicted by Bergmann's Rule, smaller.

Flaps for Gliding: The most noticeable features of its anatomy, however, are the twin skin membranes called patagia which extend from the fifth finger of the forelimb back to the first toe of the hind foot.

Hidden Hang Glider: These are inconspicuous when the Sugar glider is at rest — it merely looks a little flabby, as though it had lost a lot of weight recently — but immediately obvious when it takes flight. The membranes are used to glide between trees: when fully extended they form an aerodynamic surface the size of a large handkerchief.

Gliding to Food and Safety: The gliding membranes are primarily used as an efficient way to get to food resources. They may also, as a secondary function, help the Sugar glider escape predators like goannas, introduced foxes and cats, and the marsupial carnivores that foxes, cats, and dingos largely supplanted.

I Believe I Can Glide: The ability to glide from tree to tree is clearly of little value with regard to the Sugar glider's avian predators, however, in particular owls and kookaburras.
50 Meter Dash: Although its aerial adaptation looks rather clumsy and primitive by comparison with the highly specialized limbs of birds and bats, the Sugar glider can glide for a surprisingly long distance — flights have been measured at over 50 meters (55 yd) — and steer effectively by curving one or the other of the patagia. It uses its hind legs to thrust powerfully away from a tree, and when about 3 meters (3 yd) from the destination tree trunk, brings its hind legs up close to the body and swoops upwards to make contact with all four limbs together.

Furry Features: Flying phalangers are typically nocturnal, most being small in size (sometimes around 400 mm, counting the tail), and have folds of loose skin running from the wrists to the ankles. They use this skin to glide from tree to tree by jumping and holding out their limbs spread-eagle. They're able to travel for distances as long as 100 meters. Beside the distinctive skin folds, flying phalangers also have large, forward-facing eyes, short, pointed faces, and long, flat tails which are used as rudders while gliding. All are omnivores, and eat tree sap, gum, nectar, pollen, and insects, along with manna and honeydew. Most flying phalangers appear to be solitary, though the Yellow-bellied glider and Sugar glider are both known to live in groups.

All text is available under the terms of the GNU Free Documentation License
1911 Encyclopædia Britannica/Zähringen (family) ZÄHRINGEN, the name of an old and influential German family, taken from the castle and village of that name near Freiburg-im-Breisgau. The earliest known member of the family was probably one Bezelin, a count in the Breisgau, who was living early in the 11th century. Bezelin's son Bertold I. (d. 1078) was count of Zähringen and was related to the Hohenstaufen family. He received a promise of the duchy of Swabia, which, however, was not fulfilled, but in 1061 he was made duke of Carinthia. Although this dignity was a titular one only Bertold lost it when he joined a rising against the emperor Henry IV. in 1073. His son Bertold II. (d. 1111), who like his father fought against Henry IV., inherited the land of the counts of Rheinfelden in 1090 and took the title of duke of Zähringen; he was succeeded in turn by his sons, Bertold III. (d. 1122) and Conrad (d. 1152). In 1127 Conrad inherited some land in Burgundy and about this date he was appointed by the German king, Lothair the Saxon, rector of the kingdom of Burgundy or Arles. This office was held by the Zähringens until 1218 and hence they are sometimes called dukes of Burgundy. Bertold IV. (d. 1186), who followed his father Conrad, spent much of his time in Italy in the train of the emperor Frederick I.; his son and successor, Bertold V., showed his prowess by reducing the Burgundian nobles to order. This latter duke was the founder of the town of Bern, and when he died in February 1218 the main line of the Zähringen family became extinct. By extensive acquisitions of land the Zähringens had become very powerful in the districts now known as Switzerland and Baden, and when their territories were divided in 1218 part of them passed to the counts of Kyburg and thence to the house of Habsburg. The family now ruling in Baden is descended from Hermann, margrave of Verona (d. 
1074), a son of duke Bertold I., and the grand-duke is thus the present representative of the Zähringens. See E. J. Leichtlen, Die Zähringer (Freiburg, 1831); and E. Heyck, Geschichte der Herzoge von Zähringen (Freiburg, 1891), and Urkunden, Siegel und Wappen der Herzoge von Zähringen (Freiburg, 1892).
Description: The garrison of Wotje, at peak strength in December 1943, consisted of 3,298 men: 2,103 Navy and 429 Army personnel, and 766 civilian (and Korean) construction workers, under the command of Captain (later Rear-Admiral) Nobukazu Yoshimi of the Imperial Japanese Navy. Of these only 1,244, or 37.72%, survived until the surrender in August 1945.

Between mid-1943 and August 1945, US aircraft dropped 3,500 t of bombs and US ships fired 1,000 t of shells onto Wotje. While the first attacks were carrier-based and irregular, daily attacks started after Majuro and Kwajalein had fallen to the US. At the same time, all supply lines to Wotje were cut off, and the Japanese garrison was left to starve. Of the originally 3,300-strong Japanese garrison only 1,200 (37%) survived. Casualties resulted from air raids, diseases, accidents, and suicides, but mainly from starvation.
By Helen Briggs Science reporter, BBC News An ancient ancestor of the elephant from 37 million years ago lived in water and had a similar lifestyle to a hippo, a fossil study has suggested. The animal was said to be similar to a tapir, a hoofed mammal which looks like a cross between a horse and a rhino. Experts from Oxford University and Stony Brook University, New York, analysed chemical signatures preserved in fossil teeth. These indicated that the animal grazed on plants in rivers or swamps. The study, published in Proceedings of the National Academy of Sciences, could shed light on the lifestyle and behaviour of modern elephants. Dr Erik Seiffert, co-author of the study, told BBC News: "It has often been assumed that elephants have evolved from fully terrestrial ancestors and have always had this kind of a lifestyle. "Now we can really start to think about how their lifestyle and behaviour might have been shaped by a very different kind of existence in the distant past. "It could help us to understand more about the origins of the anatomy and ecology of living elephants." DNA evidence suggests that elephants are related to seagoing manatees and dugongs, and another land-based mammal, the rabbit-like hyrax. This led to the theory that elephants and their extinct relatives may have evolved from a water-dwelling ancestor. Scientists in the UK and the US looked at fossil teeth of two species that belong to an extinct family of mammals related to the elephant and, more distantly, the sea cow. They lived in northern Egypt during the Eocene Epoch, about 37 million years ago. Alexander Liu of the University of Oxford and Erik Seiffert of Stony Brook University, New York, analysed the patterns of different oxygen and carbon atoms, or isotopes, laid down in tooth enamel to investigate the lifestyle and diet of the creatures. 
The isotopic signals suggest that Barytherium and Moeritherium, as they are called, were largely aquatic, feeding on freshwater vegetation in rivers or swamps. At the time the deserts of northern Egypt, where the teeth were unearthed, were covered by sub-tropical rainforest and swamps. Dr Erik Seiffert told BBC News: "The isotopic pattern preserved in their teeth is very similar to that of living aquatic mammals. "It supports the hypothesis that, at some point early in the evolution of elephants, these animals were very dedicated to either a fully aquatic or amphibious lifestyle - they probably spent most of their life in water." Co-author Alexander Liu said the animal was not completely aquatic, since it lacked adaptations like a "stream-lined body or flipper-like limbs". He said: "It seems that [Moeritherium] was almost certainly an animal that ate freshwater plants and led a semi-aquatic lifestyle, similar to that of hippos." It is not clear how and why the ancestor of elephants left the water for a life on land. One theory is that a cooling event at the end of the Eocene dried up swamps and rivers, forcing animals out on to the land. "There's little real evidence yet to suggest that's true," said Alexander Liu. "We've got an awful lot of pieces in the puzzle; if we could find one more example of an aquatic or semi-aquatic elephant that would be extremely convincing."
Protection and comfort are no longer mutually exclusive for welders

Joint project produces promising research results on improving the protective effect and wearing comfort of clothing designed to protect welders from splashes of molten metal.

BOENNIGHEIM (on) In a joint AiF research project (IGF-No. 17680 N), the Deutsches Textilforschungszentrum Nord-West gGmbH (DTNW) in Krefeld and the Hohenstein Institute in Boennigheim studied new finishes for protective clothing for welders that would repel splashes of molten metal. The aim was to improve the protective effect and the wearing comfort, while retaining adequate resistance to washing under industrial textile care conditions. They investigated the effect of coatings based on inorganic-organic hybrid polymers and the use of hollow microspheres and ground carbon fibre as additives in organic coatings. As well as improving the protective effect, they also demonstrated that the newly developed finishes do not impair the wearing comfort of the fabric.

Problems with existing protective clothing for welders

Existing protective clothing for welders normally consists of tightly woven cotton material that has a very high weight per unit area and poor "breathability". The splashes of molten metal that occur during welding are in contact with the protective clothing only briefly, but they can be at temperatures above 1600°C and so damage the fibres of the clothing. As a rule, the heavier the fabric, the more effective the insulation or barrier against molten metal. However, the heavy, stiff materials that are used cause increased sweating, so they are not comfortable for welders to wear. Alternative protective clothing for welders, made of high-performance fibres like meta-aramids, does offer better temperature resistance and can therefore offer a similar protective effect with less weight per unit area, but it is very expensive and therefore not widely used on the market.
Starting point for the research project

The researchers' objective was to develop a finish for protective clothing for welders that would be extremely stable and resistant to thermal and oxidative influences. The finish should also encourage the liquid metal to run off as quickly as possible, before the fibres can be damaged. They wanted, therefore, to develop a more efficient finish which could be applied to lighter textiles, so that they were still comfortable to wear.

Structure and results of the research project

In the first stage, the DTNW researchers produced thin layers of a material with a high inorganic share and particularly good thermal resistance. This was done using what is called the sol-gel process. They used metal oxides like silica, alumina or zirconium oxide, which have melting points higher than the temperature of molten steel. Applying these thin, thermally resistant layers did not have any relevant insulating effect, so no significant improvement in the protection for welders under DIN ISO 9150 was recorded. To improve the protective effect, the amount of heat reaching the body has to be reduced. In the light of this, the contact time between the textile and the hot molten metal needs to be shortened. Consequently, in the next stage, functional silanes and other additives were added to the base solutions, resulting in a reduction in the surface energy. This finish enabled textiles in commercial use to be placed in a higher protection class.

As an alternative, tests were carried out at the Hohenstein Institute on finishes using organic polymers such as silicone and fluorocarbon, in which ceramic hollow microspheres and carbon fibres were incorporated. When hollow microspheres were used, there was no improvement in the repellent effect on splashes of molten metal. However, the addition of ground carbon fibres did improve the protective effect on lightweight fabric. The reason for this lies in the high thermal conductivity of the coating.
The findings from the research work so far offer some promising leads for the future development of more lightweight protective clothing. Another positive result was that none of the finishes that were tested had a significant effect on the comfort of the material. However, there is currently still room for improvement regarding the wash resistance of the finishes.

On a trial basis, the researchers have also carried out some preliminary tests involving the production of specially structured surfaces. This allowed the protection class to be improved without the use of fluorinated additives. It can be compared with the so-called "lotus effect", because the splashes of molten metal form "pearls" on the surface of the fabric like drops of water on a lotus plant. A follow-on project by the Deutsches Textilforschungszentrum Nord-West gGmbH (DTNW) and the Hohenstein Institute is due to investigate this new approach as well as the interaction of the thermally conductive coating with organic binder systems.

Protective clothing for welders and UV radiation: In a different research project, a new approach has been developed at the Hohenstein Institute for verifying the effectiveness of the protective clothing that welders wear in protecting against UV radiation. Detailed information can be found via the following link: http://www.hohenstein.de/pr-656-DE
Shenandoah National Park Quarter Shenandoah National Park has more than 500 miles of trails. Some trails lead to a waterfall or scenic view; others go deep into the forest and wilderness. But the Appalachian Trail is probably the most famous...and definitely the longest. It runs all the way from Maine to Georgia! More about that in a minute. This month’s quarter, honoring Shenandoah National Park, features some of the eye-popping landscape you can see from Little Stony Man, a mountain not quite as high as nearby Stony Man, also in the park. Both lie along the Appalachian Trail. Other great hiking destinations in this park are Old Rag Mountain (one of the most popular and challenging) and Limberlost Trail (more friendly to people who have mobility challenges). Limberlost Trail passes through forest and a stand of mountain laurel that they say is stunningly beautiful when it blooms in June. If you’d rather drive than hike, there’s Skyline Drive, though your car will have to share the road with bicycles, pedestrians, and the occasional bear. This paved road meanders through the mountains for the whole length of the park, with several campgrounds and many scenic overlooks along the way. Now back to the Appalachian Trail. This famous trail is just that: a public footpath. It winds for 2,184 miles along the wilderness of the Appalachian Mountains. How many adult footsteps would it take to hike it? About 5 million! It was 1921 when the idea for the trail got rolling. Private citizens built it, finishing in 1937. Who takes care of it? As you might think, since it crosses so many state lines, it takes a combination of resources including thousands of volunteers working with the National Park Service, U.S. Forest Service, Appalachian Trail Conservancy, and state agencies. And one other cool fact: the same Appalachian Trail connects this month’s park with the park of last February’s Coin of the Month, the Great Smoky Mountains National Park. 
The Appalachian Trail runs all the way through both parks and beyond both ends!
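The 5-million-footsteps figure above is easy to verify with a short calculation. The step length used below (about 2.3 feet) is an assumption chosen for the check, not a number from the article:

```python
TRAIL_MILES = 2_184          # length of the Appalachian Trail, from the article
FEET_PER_MILE = 5_280
STEP_FEET = 2.3              # assumed adult step length (not from the article)

trail_feet = TRAIL_MILES * FEET_PER_MILE   # 11,531,520 feet
steps = trail_feet / STEP_FEET             # roughly 5 million steps
print(f"{steps:,.0f} steps")
```

A slightly longer stride gives a slightly smaller count, so "about 5 million" holds for any realistic adult step.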
Health Impact News Editor Comments There is a dirty secret in the vaccine business that is very well documented: the live oral polio vaccine can actually spread polio and causes “non-polio acute flaccid paralysis.” This is very well-known throughout the world, but in the pro-vaccine U.S. mainstream media, this information is seldom, if ever, published, so most Americans are still under the assumption that polio has been eradicated due to vaccines. Unfortunately, that is a false belief not supported by the facts. As usual, we need to look at reports outside the U.S. media to find out what is happening with vaccines around the world. LiveMint in India reported on the polio-free myth in India, explaining how the live polio vaccine was responsible for increases in paralysis. LiveMint is the second largest business newspaper in India and has an exclusive relationship with the Wall Street Journal. So this report was from the “mainstream” media in India. Vidya Krishnan reported the story: India to get polio-free status amid rise in acute flaccid paralysis cases. What is clear from this report, and well documented in peer-reviewed literature, is that the term “polio-free status” is completely meaningless. The designation of a country as “polio-free” is simply a triumphant marketing cry by pharmaceutical interests to continue promoting their live polio vaccines, even in the face of overwhelming evidence that oral polio vaccines do far more harm than good. The oral polio vaccine is banned in the United States and many other countries. Here are some excerpts from Vidya’s article: New Delhi: India will on Monday be accorded “polio-free” status by the World Health Organization (WHO), with not a single case of the crippling disease being reported in the past three years, but studies show the alarming rise of another similar paralytic condition that experts suspect may be a result of increased dosage of polio drops. 
The last case of polio in the country was reported on 13 January, 2011, from West Bengal. Following the “polio-free” status, India will be certified as a polio-free nation by March, leaving Afghanistan, Pakistan, and Nigeria as the remaining polio-endemic countries.

India’s dramatic turnaround in polio eradication, though, has seen a consistent sidelining of the increasing incidence of non-polio acute flaccid paralysis (NPAFP) cases. In the last 13 months, India has reported at least 53,000 cases of NPAFP. Many health activists say the government, in its rush to get the polio-free certification for the country, ignored the increasing incidence of NPAFP. Acute flaccid paralysis (AFP) is a condition in which a patient suffers from paralysis that results in floppy limbs due to reduced muscle tone. While AFP is symptomatic of polio, it can be caused by other diseases such as the Guillain-Barre Syndrome and nerve lesions as well—the primary cause fueling the argument that India is not really free of wild polio virus.

Highest NPAFP rate

Government surveillance data show that while India is set to be tagged as polio-free, it has actually become the nation with the world’s highest rate of NPAFP incidence. In the past 13 months, India has reported 53,563 cases of NPAFP at a national rate of 12 per 100,000 children—way above the global benchmark set by WHO of 2 per 100,000.

Two doctors from Delhi’s St. Stephens Hospital, Neetu Vashisht and Jacob Puliyel, who compiled data from the national polio surveillance project, found a link between the increase in dosage of polio vaccination and the increasing cases of NPAFP. “Most experts will tell you the cases of NPAFP have increased because of better surveillance. This is bunkum,” said Puliyel. “As per global benchmarks, as polio incidence comes down, the rate of NPAFP should also reduce. Instead, AFP cases have been increasing steadily.”

“In 2010, the government reduced the number of pulse polio doses from 10 to 6.
What we found was that between 2010-2013, the number of AFP cases also came down. Our paper argues that other kinds of polio are being caused by the excessive administration of polio dosages,” Puliyel said. “Another proof is that states like Kerala and Goa, where dosages were less, AFP cases was also less. Majority of NPAFP cases are reported from Bihar and UP, where several immunization rounds are held to reach universal coverage. These are figures the government does not want to admit.”

Polio’s global resurgence

“Even if the polio-free certificate was a legitimate success, it is just that—a certificate,” said Deepak Kapoor, head of Rotary International’s national pulse polio committee. Since 2005, there has been a resurgence of polio in Syria, Egypt, Tajikistan, and Israel. So, while India is celebrating the success of its polio campaign, the threat of a resurgence is ongoing and real, Kapoor said.

Gupta of the health ministry said: “India has become the first country to issue travel advisory concerning importation. Having said that, the WHO certification will not be affected by re-importation as it is about not having indigenous wild polio virus in the environment.”

India’s strategy to maintain its polio-free status involves phasing out the oral polio vaccine (OPV) due to adverse effects. To contain the “wild” polio virus, OPV uses viruses which are “attenuated” but still alive. This weakened version of the polio virus activates an immune response in the body. The India expert advisory group on polio has recommended that the country’s immunization programme switch from the trivalent oral polio vaccine and rely only on the oral bivalent variant, reducing the chances of vaccine-derived polio virus infection. The switch will be accompanied by a booster shot of injectable polio vaccine. The WHO strategic advisory group of experts (SAGE) on immunization has called for a global, coordinated withdrawal of type 2-containing OPV by the end of 2016, and a switch to bivalent OPV.
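The incidence figures quoted in the article are internally consistent, which can be checked with the standard rate formula (cases per 100,000 = cases / population × 100,000). A quick sketch; the child-population figure below is derived from the article's numbers, not stated in it:

```python
def rate_per_100k(cases, population):
    """Incidence expressed as cases per 100,000 of the population at risk."""
    return cases / population * 100_000

NPAFP_CASES = 53_563     # reported cases over 13 months (article figure)
NATIONAL_RATE = 12       # per 100,000 children (article figure)
WHO_BENCHMARK = 2        # per 100,000 (global benchmark cited in the article)

# Working backwards: the child population the stated rate implies
# (roughly 446 million, a plausible under-15 population for India).
implied_children = NPAFP_CASES / NATIONAL_RATE * 100_000

# The national rate is six times the WHO benchmark.
times_benchmark = NATIONAL_RATE / WHO_BENCHMARK
```

Plugging `implied_children` back into `rate_per_100k` recovers the article's 12 per 100,000, so the case count and the rate agree with each other.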
Health Impact News Editor Comments So let’s summarize what is reported in this article by two doctors who compiled data from the national polio surveillance project in India, the head of Rotary International’s national pulse polio committee, and someone from the Health Ministry in India: - There is a direct correlation between doses of the live oral polio vaccine and the incidence of non-polio acute flaccid paralysis. - In the past 13 months, India has reported 53,563 cases of “non-polio acute flaccid paralysis,” giving India the distinction of having the highest rate of non-polio acute flaccid paralysis in the world. - Even in other countries that have “polio-free” status, there is a resurgence of polio and that includes India. - The live oral polio vaccine is so dangerous, with so many side effects, that it is being phased out. WHO has called on a complete withdrawal of the vaccine by the end of 2016. The big question is: why are they waiting until the end of 2016 when the dangers of this vaccine are so well known?? Could the fact that UNICEF, the primary agency used to “eradicate polio” by the pharmaceutical manufacturers, purchased 1.7 BILLION doses of the oral polio vaccine in 2013, representing BILLIONS of dollars of revenue for the vaccine manufacturers, have anything to do with not phasing out the oral polio vaccine immediately? We reported late last year how UNICEF used the Philippines Typhoon tragedy, as well as the Syrian refugee tragedies, to buy more live polio vaccines and start giving these vaccines in mass polio vaccination programs, despite the fact that there had been no recorded incidents of polio in the Philippines since 1993, and none in Syria since 1999. They used these tragedies to justify increasing their purchase of the live oral polio vaccine from 1.35 billion doses to 1.7 billion for 2013. 
The financial motive to continue such a lucrative market, where these vaccines are purchased by the United Nations through tax dollars of contributing member countries, the largest of which is the United States, must be a very strong motivation indeed to continue the oral vaccine program, and get ALL countries around the world the “polio-free” certification. To learn more about the known adverse effects of the live polio vaccine, see: - When Can We Stop Using Oral Poliovirus Vaccine? Oxford Journals Clinical Infectious Diseases - Time for a Worldwide Shift from Oral Polio Vaccine to Inactivated Polio Vaccine Oxford Journals Clinical Infectious Diseases - Polio programme: let us declare victory and move on. India Journal of Medical Ethics Confirmed: India’s Polio Eradication Campaign in 2011 Caused 47,500 Cases of Vaccine-Induced Polio Paralysis More on the Polio Vaccine: How Corporate Greed, Biased Science, and Coercive Government Threaten Our Human Rights, Our Health, and Our Children by Louise Kuo Habakus and Mary Holland J.D. FREE Shipping Available!
For taking supersharp pictures of space, the go-to telescope is the Hubble, in orbit above the earth. But astronomers can't just use the space telescope whenever they feel like it; they have to bid for time on the badly oversubscribed instrument. After about 2010, when the aging Hubble starts to fail, astronomers won't be able to go to it at all.

That's why space watchers are always looking for clever ways to take high-resolution images from the ground without the atmospheric blurring that made the Hubble such a good idea. And it's why a recent announcement by Cambridge University and Caltech made scientists take notice. By wedding an innovative electronic light detector to the Hale Telescope at Mount Palomar in California--until 1990, the world's largest--astronomers were able to snap at least one space photo that was literally twice as sharp as a comparable Hubble image and, they bragged, 50,000 times cheaper.

The concept behind the detector, which is known, cutely, as the Lucky Camera, is very simple: the earth's roiling atmosphere acts as a distorting lens, which changes moment by moment as pockets of warmer or cooler air constantly pass in front of a given object. That's why stars twinkle and why ground-based telescopes can be only so sharp.

The stars twinkle for the Lucky Camera too. But it snaps 20 images every second, and every so often one of those images, purely by chance, will be taken through a calm patch of sky--much as a broken clock is right twice a day. So the computer that runs the Lucky Camera saves those rare, perfect images and discards the rest. And because a single 1/20-second exposure of a faint celestial object is almost invisible, the computer combines the good images electronically, ultimately producing a usable one. It's so simple, you'd think astronomers would have thought of it long ago, and you'd be right.
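The select-and-stack idea is compact enough to sketch in code. This is an illustrative reconstruction of the general "lucky imaging" technique, not the Lucky Camera's actual software: it scores each frame by its brightest pixel (a frame caught through calm air concentrates a star's light into a tighter, brighter peak), keeps only the best few, re-centres them on their peaks, and averages.

```python
import numpy as np

def lucky_stack(frames, keep_fraction=0.1):
    """Combine the sharpest fraction of short-exposure frames.

    frames: array of shape (n_frames, height, width)
    keep_fraction: share of frames to keep (0.1 = the best 10%)
    """
    frames = np.asarray(frames, dtype=float)
    n_keep = max(1, int(len(frames) * keep_fraction))

    # Score each frame by its peak pixel value: less atmospheric blur
    # means more light piled into fewer pixels.
    scores = frames.max(axis=(1, 2))
    best = np.argsort(scores)[-n_keep:]

    # Shift each chosen frame so its peak lands at the image centre,
    # then average them -- a minimal "shift and add".
    h, w = frames.shape[1:]
    stacked = np.zeros((h, w))
    for i in best:
        y, x = np.unravel_index(np.argmax(frames[i]), (h, w))
        stacked += np.roll(np.roll(frames[i], h // 2 - y, axis=0),
                           w // 2 - x, axis=1)
    return stacked / n_keep
```

Real systems use finer sharpness metrics and sub-pixel registration, but the economics described in the article come from exactly this discard-most-frames selection step.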
"This technique was first proposed back in 1978," says Craig Mackay, of the Institute of Astronomy at Cambridge, the Lucky Camera's lead scientist, "and we first tried it in 1985." Back then, though, detectors were very slow; it took 10 seconds to snap each exposure, and it took all night on a supercomputer to make one usable image. "Now," says Mackay, "we can make images in real time on a PC." Still, the Lucky Camera isn't a true replacement for the Hubble. Since it has to throw away most of its images, it isn't very efficient. Yes, it took a picture sharper than the Hubble could, but it took a lot longer. The instrument is also limited to a patch of sky only about 1‚ĀĄ120th the width of the full moon; the Hubble's field of view is 150 times as large. And the Hubble can see ultraviolet and infrared light, which the atmosphere blocks. Ultimately, says Mackay, "we're not competing with the Hubble. We're simply trying to provide an alternate for when the Hubble dies."
A dwarf, attractive, flowering cushion succulent, drought tolerant and excellent for rockeries and living walls.

Crassula setulosa grows naturally as a dense mat, forming a convex cushion sometimes up to 40 cm wide and 5-10 cm high (25 cm in flower), usually in crevices and shallow soil pockets on vertical or steep rock faces. The root system is adventitious. The species is very variable in appearance throughout its distribution, especially with regard to its leaves, which vary in size, shape and hairiness but are typically 6-20 mm long and 2-10 mm wide, more often than not with a convex upper leaf surface, and tapering towards a point. Fine hairs are usually present on the upper leaf surface but hairless forms also occur. Leaves vary in colour from bright green to grey green in the very hairy forms. The plant is highly branched, giving rise to the dense mat of foliage.

[Photo: Crassula setulosa foliage]
[Photo: Crassula setulosa inflorescence]

Depending on the form, flowers appear in midsummer through to autumn. They are small and cup-shaped with petal tips spreading from midway up the corolla tube, usually 3 mm in diameter, and white, often tinged red. They cluster to form a dense inflorescence which can be up to 15 cm high. Flowers develop into small capsules which release fine dust-like seed.

There are five recognized varieties: C. setulosa var. setulosa, C. setulosa var. jenkinsii, C. setulosa var. deminuta, C. setulosa var. rubra and C. setulosa var. longiciliata. C. setulosa var. deminuta is considered vulnerable in the 2009 red data listing. The other varieties have no threat status.

Distribution and habitat

C. setulosa is often more prevalent at higher altitudes, particularly over 600 m, and its distribution stretches from the Eastern Cape in the south west, being most prevalent in Lesotho (southern Drakensberg) and the northern Drakensberg mountains, and as far north as the Blouberg in the Limpopo Province.
Derivation of name and historical aspects

The name refers to the leaves being covered in small bristles.

Plants are usually found in rock crevices or shallow soil pockets in protected moist and shaded places on steep or vertical rock faces, and very rarely in undisturbed flat gravel areas. Growing on cliffs is an effective anti-herbivory mechanism. The geology on which they occur is varied and includes sandstone, granite, shale and basalt.

Uses and cultural aspects

There are no known uses for this plant.

Growing Crassula setulosa

Crassulas are amongst the easiest plants to cultivate and this species is no exception. They can grow in virtually any type of soil but prefer soils with half gritty sand and half fine compost. They can grow in light shade or in sunny positions and make excellent cushion plants for rockeries or living walls. They are not demanding with water but will grow faster under moister conditions. Watering should take place at least once a week but they will survive with less.

This species is grown primarily for its very dense carpet-like cushions of bright green or grey foliage, which look very appealing in a rockery or container. Plants can live for many years and will grow to neatly occupy the available space. This species tends not to flower much in warmer climates, if at all, which helps to keep the plants looking neat and tidy. However, if conditions are right they will flower and may self-seed and replenish themselves naturally.

Like many of the closely related Sedum species in Europe, this species is also particularly well suited for use in establishing an artificial living wall. Plants do not require much water and are resilient to dry spells. They also grow in full sun and full shade, making them very versatile plants, and seldom die back in large patches leaving “bald spots”, which makes them particularly suitable for living walls.
Propagation is easy from cuttings, including leaf cuttings, at any time of the year, which means they can be bulked up very easily. Cuttings can be tiny and only need to have a short section of stem with a few nodes and a few leaves. Insert them into a dry gritty growing medium and water them lightly. Keep the pot in a bright situation out of direct sunlight at first. Rooting should take place within 2 weeks. Once the plants show signs of having formed roots, move them to a sunny spot, and before long young plantlets will form, forming small rosettes which will rapidly become dense cushions.

If one could ever acquire seed, this could be sown in spring. Mix the dust-like seed with a small quantity of fine sand. Spread the sand evenly over the surface of the soil, which should be the same as the growing medium. Water immediately, preferably from below by standing the pot in a tray of water every few days. Keep the soil moist like this for the first month or so. Before long, tiny green plantlets should appear on the soil surface. Start to let the soil dry out between waterings. Soon the plants will bulk up and, if one achieves the ideal growing conditions, one can raise many thousands of plants like this. The use of a damping-off fungicide is advisable to prevent rot.

Occasionally plants suffer from fungal infections, which appear as brown blotches on their leaves. This can be treated with a fungicide and good ventilation. Otherwise, they are essentially pest free.

References and further reading

- Rowley, G. 2003. Crassula, a grower's guide. Cactus & Co. libri.
- Tolken, H.R. 1985. Crassulaceae. Flora of Southern Africa, Vol. 14. Botanical Research Institute.

Kirstenbosch National Botanical Garden
BIOTIC Species Information for Ahnfeltia plicata

Researched by: Will Rayment
Data supplied by: MarLIN
Refereed by: Dr Fabio Rindi

Typical food types: Not relevant
Habit: Attached
Bioturbator: Not relevant
Flexibility: High (>45 degrees)
Height: (no value given)
Growth rate: See additional information
Adult dispersal potential: None
Dependency: Independent

General biology, additional information

Growth rate: Maggs & Pueschel (1989) recorded observations on growth of Ahnfeltia plicata from Nova Scotia. Four months after germination of carpospores, tetrasporophyte crusts had grown up to 2.6 mm in diameter. Two months after germination of tetraspores, the basal holdfast had reached 1.1 mm in diameter, with numerous hair-like fronds emerging. After 14 months the axes had grown up to 50 mm in length. In a continuous spray culture with water at 8-11°C and light intensities of 40-60 µE/m²/s, mean apical growth of Ahnfeltia plicata was 17.2 µm/day over 19 days (Indergaard et al., 1986). Permanently immersed plants under the same conditions grew at approximately 7 µm/day. Conversely, percentage biomass increase was greater under the permanent immersion regime: 0.57% increase in mass/day versus 0.20% for the plants in spray culture (Indergaard et al., 1986).

Biology references: Fish & Fish, 1996; Dickinson, 1963; Dixon & Irvine, 1977; Maggs & Pueschel, 1989; Indergaard et al., 1986; Bird et al., 1991.
Parliament’s upcoming monsoon session (5-30 August) is likely to consider a Constitution Amendment Bill for ratification of the 2011 India-Bangladesh Boundary Agreement, aimed at resolving the complex nature of border demarcation between the two countries. It is important legislative business, considering that India shares its longest border with Bangladesh (4,096 kilometers) and that India’s northeast is enclosed by Bangladesh.

During his visit to Bangladesh, Prime Minister Manmohan Singh had on 6 September, 2011 signed with Bangladesh a Protocol on the Land Boundary Agreement (LBA), which has since been ratified by Bangladesh and is awaiting a similar nod from India. The sum and substance of the LBA is that it would amount to India ceding 10,000 acres of territory to Bangladesh, and the projections about the population displacement arising out of this proposed legislation are that not more than 3,500 Bangladeshis would be legally arriving in India, should they choose to do so in the first place.

The LBA agreement has deep foreign policy and strategic connotations for both neighbours, particularly India, and impinges on major Indian national security parameters. There are pros and cons with regard to the LBA agreement that the Indian parliamentarians would have to consider before they eventually determine whether the UPA government’s Constitution Amendment Bill is passed. The parliamentarians would also have to see, before they vote for or against the proposed legislation, whether the move is in the best national interests of India.

The “Ayes” would have the following arguments:

1. It should be supported because it will resolve the decades-old festering boundary dispute between India and Bangladesh that has been causing immense difficulties to the affected populations.

2.
We should support it because the ten thousand acres of territory set to be ceded to Bangladesh, if this legislation were to be passed, is actually territory which India does not control or administer anyway. Therefore, even if we are to sacrifice that big a piece of land (which we are in any case not controlling) for the sake of streamlining the border issues, it is worth a go.

3. No large-scale displacement of human population on either side of the Indo-Bangladesh border is expected if this legislation were to be approved. At the most 3,500 Bangladeshis would be entitled to Indian citizenship, should they choose to do so in the first place, considering the uncertainties involved in such human population transfers. If we can do this at such a small cost for the sake of a structured boundary agreement, it is worth it, considering that tens of millions of Bangladeshis are living in India anyway.

4. If you don’t do it now, you won’t be able to do it ever, because of three factors: (i) the rivers keep changing their courses, (ii) the human population in the said areas keeps shifting, and (iii) the current political will between the two governments may not last long.

The last point needs elaboration. Bangladesh has to elect a new government within six months, or before 24 January, 2014, when the current five-year tenure of the India-friendly Sheikh Hasina government expires. No one can rule out the return of Begum Khaleda Zia, who is known to be close to Islamabad and averse to New Delhi. Hasina has to be armed with tangible things if she hopes to win the forthcoming parliamentary polls. India has a moral duty to bail out its friend rather than leaving her in the lurch. For doing this, she needs two things from India: the LBA and the Teesta River Water Sharing formula. Both are problematic for India and have their downsides for Indian domestic politics.
The LBA has virtually come to fruition and India has to just pick this ripe fruit and deliver it to Hasina. Once this happens, New Delhi and Dhaka can work on Teesta, evidently a more tricky bilateral issue. That is the importance of the LBA and the upcoming Constitution Amendment Bill in the Indian parliament.

The “Nays” of the LBA may have the following arguments:

1. It will be a loss of face and a serious foreign policy disaster for India to cede any territory to anyone in a peacetime situation, when wars are waged for gaining territories. Why should India yield a single inch of its territory?

2. Will it not be tantamount to looking the other way on the serious issue of Bangladeshi infiltration?

The UPA road is heavily mined as far as this proposed constitution amendment bill goes. That is because the main opposition party, the BJP, has already said that it will oppose the move. The BJP has maintained that the UPA move is flawed and has warned that the Bangladeshi infiltrators would be all over the country and the Indian northeast will be in flames. The Trinamool Congress, the ruling party of West Bengal, which shares the longest border with Bangladesh, has also given indications of its opposition to the LBA. In such a situation, it will be a stiff challenge for the UPA government to navigate the LBA constitution amendment bill to the shore.

The question is: is it a good idea to cede 10,000 acres of Indian territory to Bangladesh for long-term strategic, security and foreign policy objectives? The ball would shortly be in the people’s court – or more precisely in the court of the people’s representatives.

The writer is a Firstpost columnist and a strategic analyst who can be reached at [email protected].
<urn:uuid:cc20661d-8722-4821-9b33-6c7251c16ee4>
CC-MAIN-2016-26
http://www.firstpost.com/world/is-it-a-good-idea-for-india-to-cede-10000-acres-to-bangladesh-957987.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00061-ip-10-164-35-72.ec2.internal.warc.gz
en
0.955044
1,152
2.765625
3
This story was part of a special section of the July-August, 2010 edition of the Washington Monthly magazine that was guest edited by Richard Lee Colvin, editor of The Hechinger Report.

No major city in America has worked longer and harder on its dropout problem than Philadelphia. Yet those efforts, going back nearly half a century, have gained traction only in the last ten years. Between 2001 and 2009 the percentage of Philadelphia students who entered ninth grade and graduated in four years increased from 48 percent to 56 percent. Those gains might seem modest, and are clearly insufficient. But the fact that they occurred at all, and at a time when dropout rates nationally have not budged, suggests that Philadelphia is doing something right.

More on dropouts: The Hechinger Report partnered with Washington Monthly to take an in-depth look at the dropout problem and how three cities have tackled the issue.

It's a measure of the complexity of the problem, however, that it is difficult to discern which of the flurry of policies and practices that have been tried here are responsible for the gains. Unlike in New York, Philadelphia has not followed a single blueprint or plan. Instead, the work on the issue has accreted over time, with new reforms and initiatives, most of them privately conceived or supported, added to the mix along the way. In the last five years the city has concentrated on providing students with an ever-growing array of options to the city's traditional high schools—charter schools, small alternative or "accelerated" schools—based on students' needs and inclination. Yet some of the most promising experiments in reform have also occurred in the city's traditional high schools, which the vast majority of its students still attend. But for bureaucratic and budgetary reasons those initiatives have seldom been sustained.
If Philadelphia wants to continue to make progress, it'll have to find a way to do so, and the Obama administration's efforts to combat the dropout problem could provide some real help.

In 1968, Philadelphia's business, political, and civic elite got together to figure out how to get more high school kids to stay in school and prevent them from being swept up in the maelstrom of anger and urban violence touched off by the assassinations of Martin Luther King Jr. and Robert F. Kennedy. The year before, as many as 3,500 African American students demonstrated at school district headquarters demanding better schools, more black teachers, and culturally relevant courses and textbooks.

The big idea the leaders formulated was career academies: subunits within large neighborhood schools that blended academics with vocational training and established stronger relationships between students and their peers and teachers. The first such academy in the nation, focused on preparing students for jobs in the electrical field, opened in 1969 at Thomas Edison High School, which had the highest dropout rate in the city. New career academies were started throughout the 1970s and '80s, and by the mid-'90s there were twenty-nine in the city and several thousand nationwide. Extensive research deemed the academies to be a successful anti-dropout strategy.

Over the next thirty years, with strong support and substantial nudging from the city's foundations and private sector, the school district would attack the dropout problem in a number of other ways. In 1982, when Constance Clayton became superintendent, she looked at the city's neighborhood high schools and saw "lethargy and sameness and undue stability of faculty and administrators." She said in a 1993 interview that she saw good anti-dropout programs, career academies among them, but they reached only a relatively small number of students in what was then a school district of more than 200,000 students.
By the numbers:
48 percent — 2001 on-time graduation rate
56 percent — 2009 on-time graduation rate
38 — High schools in 2002
90 — High schools today (including 29 charters)
15 percent — Poverty rate in city in 1970
24 percent — Poverty rate in city today
More than 75 percent — Students behind academically in 2000

Embracing the efforts of the Philadelphia High Schools Collaborative, an outside organization dedicated to reforming city high schools, she decided to shake things up. The Collaborative effort built on the career academies example and divided the high schools into smaller, semi-autonomous units within one building that focused more attention on incoming ninth graders. Good results were seen almost immediately at three pilot schools—better attendance, more success in classes, a more studious atmosphere. Eventually, twenty-two high schools were using parts of the strategy and 20,000 students were being affected. The goal was to create more intimate, personalized environments for learning, a concept that still drives much of the thinking on how to reduce the dropout rate.

But the kinds of problems that typically squelch major reforms in large urban school districts were present in Philadelphia as well. Skeptics questioned the statistics showing improvement. The Philadelphia teachers union objected to making the smaller units equivalent to separate schools, which affected teachers' seniority and job security. Money problems grew. Clayton also had her differences with the Collaborative; she retired in 1993, and the effort faded.

The "small learning communities" continued to exist, but lost the autonomy that made them effective. In many high schools, they began to function like academic tracks, separating students by ability. Meanwhile, vocational career academies were reduced in number, from twenty-nine in the '90s to only ten today. It's impossible to say what effect these on-again-off-again reforms had on the school district's overall dropout rate.
By narrowly defining who was a dropout, Philadelphia and other school districts had for decades been underreporting their actual attrition rates. Whatever the effect of the anti-dropout measures, they were overwhelmed by the flight of white and black working- and middle-class families to the suburbs and a growing poverty rate in the city, which rose from 15 percent in 1970 to 24 percent today, according to U.S. Census data. Students were promoted in elementary and middle schools even though they weren't learning fundamental skills; by 2000 more than 75 percent of the students who enrolled in the district's neighborhood high schools were far behind academically.

In 1999, Philadelphia's civic community pushed yet another remedy aimed at reworking the high schools that Clayton, more than a decade before, had characterized as outmoded and resistant to change. The Philadelphia Education Fund, which combines money from foundations, wealthy individuals, corporations, and public agencies, persuaded the school district to bring in a new approach to its worst schools. The model, developed at Johns Hopkins University, was called Talent Development High Schools, and its primary goal was to keep ninth graders on track toward graduation by making sure they passed all of their courses. Over the next four years, the model would show progress in seven of the district's high schools. A 2004 evaluation by MDRC, the public policy research organization, found that the Talent Development schools "produced substantial gains in academic course credits earned and promotion rates and modest improvements in attendance."

A new reformer arrives

In 2002, Paul Vallas, the energetic, do-it-all-at-once former CEO of the Chicago Public Schools, was hired as Philadelphia's sixth superintendent in thirty years.
He arrived just after the state had declared the Philadelphia schools financially and academically bankrupt, replaced the mayorally appointed school board with a School Reform Commission with a majority named by the governor, and demanded that the district turn over many of its worst-performing schools to private, sometimes for-profit operators. Vallas embraced the "diverse provider" strategy even as he continued to push for more money and implement his own agenda.

After the MDRC study came out, Vallas said the district could not afford to continue the existing Talent Development High Schools, let alone expand the program intact. Instead, he said all the neighborhood schools would borrow some ideas from Talent Development. James Kemple, the researcher who headed up the study of Talent Development, was at the meeting in which Vallas said he'd do his own take on the model. Kemple cautioned him against trying to do it piecemeal. "I was trying to make the case with Paul that the best research you have … is based on this version of the model," Kemple said. He called Vallas's decision "changing horses midstream," and said that when decisions are not made based on evidence they result in districts implementing the "reform du jour."

Rather than attempt to fix the large neighborhood high schools, Vallas's plan was to create alternatives to them. He started twenty-six new small schools, backed the creation of more charter schools, and created disciplinary schools that were run on contract by private companies. As of 2002, there were thirty-eight public high schools in Philadelphia, with an average enrollment of 1,700 students. By 2007, there were sixty-two schools, including charters. Today there are ninety, twenty-nine of them charters.
As Vallas was deciding to move away from Talent Development, Robert Balfanz and Ruth Curran Neild, two Johns Hopkins researchers, began a retrospective study, paid for by a number of national and local foundations, of the "dropout crisis," covering the years from 2000 to 2005. Their 2006 report, called Unfulfilled Promise, was the first definitive counting of high school dropouts in the district, after decades of policies aimed at stemming the tide. They found that, during the period studied, some 30,000 Philadelphia students had dropped out, and thousands more were "near dropouts" who showed up less than half the time. On a positive note, however, they found evidence of improvement. More than 52 percent of the class of 2005 graduated on time in four years. That was about 4 percentage points higher than the average for the previous four years.

Until that study, "[w]e didn't have a public fix on who was dropping out, where they were dropping out from, and what kind of services they need," said Neild. Because it was one of the first studies to define the graduation rate in terms of cohorts—tracing the fortunes of each entering ninth-grade class and showing how many graduate—"it helped people realize the scale of it," she said.

The researchers discovered that many of those most likely to drop out could be identified beginning in the sixth grade and nearly all of them by the ninth grade. They advised that high schools alone could not fix the problem. The middle school grades would have to do a better job of educating their students. Keeping ninth graders on track needed to be a priority. Also important, however, was that one in five dropouts were older students who had either quit school or entered the juvenile justice system a few credits short of a diploma. The researchers recommended the creation of alternative institutions instead of expecting these youths to reenter the high schools they had already given up on.
This had the potential to bump up the graduation rate quickly without dealing with the messy politics and adult interests that come with the territory in high school reform efforts. Not surprisingly, it was this last recommendation that Vallas seized on, because it was consistent with what he was already doing.

There also was demand. The release of the report had marked the launch of a new advocacy group called the Project U-Turn Collaborative that would help implement some of these recommendations. In the first year after its October 2006 launch, Project U-Turn raised $10 million from public and private sources, and 1,500 dropouts contacted the project to ask for help in getting a diploma. But seats could only be found for 158 in the city's existing alternative schools.

Vallas created the Office of Multiple Pathways to Graduation to expand programs for disengaged youth. He contracted with private companies to run "accelerated" schools that could help students graduate more quickly. Arlene Ackerman succeeded Vallas in 2008, and she has added seats to the network, which now can accommodate 2,200 youths. Under Ackerman the district has also set up a Re-engagement Center, where former students can come and be referred to a school within the expanding network of options. And with funding from the U.S. Department of Labor, Philadelphia community organizations are now helping students who have dropped out earn either a GED or credits toward a diploma.

The traditional high schools have not been abandoned by the new wave of reformers. Since Project U-Turn was created, the city has won about $65 million in grants, also from the Labor Department, for programs in seven neighborhood high schools that were cited as "persistently dangerous." Using some of this money, the district is creating in most of its neighborhood schools "bridge" programs that try to engage ninth graders in the summer before high school, reviving a practice first introduced by Clayton in the late '80s.
Ackerman has a new plan called Renaissance Schools in which some of the worst schools will be converted to charters or slated for turnaround treatment within the district, some directly under her supervision. In the first year, three long-troubled high schools made that list.

The right direction

Though disentangling the effects of all these policies on the city's overall dropout rate isn't easy, the numbers are certainly moving in the right direction. Between 2005 and 2009 the percentage of students who entered ninth grade and graduated in four years increased from 52 percent to 56 percent. And the six-year graduation rate has been steadily inching up—from 57 percent for the class of 2005 to 60 percent for the class of 2007. At least some of that six-year graduation rate increase is attributable to the new "accelerated" schools, according to Project U-Turn data.

This special report was made possible with the generous support of (in alphabetical order): The Boston Foundation, Carnegie Corporation of New York, Nellie Mae Education Foundation, and the William Penn Foundation.

It could be that Vallas and Project U-Turn are right and that taking on dysfunctional high schools was too hard and expensive, at least at the time. But there's a limit to what the alternative schools Vallas and Ackerman have encouraged can do: most of the students entering them have accumulated very few high school credits and have reading and math proficiency that hovers around the fifth-grade level. Even with the improvements, each year more than 8,000 Philadelphia students drop out, most from the neighborhood schools. Project U-Turn's goal is to cut that number by at least 2,000 students by the end of the upcoming school year. Philadelphia Mayor Michael Nutter has set a high bar as well. He has committed city resources to increasing the six-year graduation rate to 80 percent.
To reach those audacious goals, Philadelphia will need to do what it hasn’t succeeded in doing in the past—fix neighborhood schools. And with the Obama administration now pledging billions of federal dollars for school “turnaround” efforts, Philadelphia has another opportunity to keep trying. Dale Mezzacappa, a former reporter for the Philadelphia Inquirer, is a contributing editor of the independent Philadelphia Public School Notebook.
<urn:uuid:34dca235-b74a-4a63-9dcf-8aa36966d720>
CC-MAIN-2016-26
http://hechingerreport.org/content/philadelphia-after-decades-of-effort-a-decade-of-progress_3320/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398216.41/warc/CC-MAIN-20160624154958-00041-ip-10-164-35-72.ec2.internal.warc.gz
en
0.975992
3,214
2.609375
3
Historic Sites: Nathaniel Bowditch, The Practical Navigator

Salem's irascible little "arithmetic sailor" made seamanship a science and left all mariners in his debt

August 1960 | Volume 11, Issue 5

Blunt had heard of Bowditch's genius and begged him to find out what was wrong with the Navigator. Bowditch agreed to do what he could. To begin with, he found that Moore's method for establishing longitude was not fully reliable. Secondly, the calculations in the tables had been done so carelessly that the book was literally a mass of errors. We read in Bowditch's journal: "Another error in Moore's … Eight errors in Moore's today … Five more errors in Moore …" And so on. Eventually, Bowditch was to note the incredible total of over eight thousand mistakes.

Even to revise these existing tables to the point where they could be trusted, Bowditch needed a great deal of time, another long sea voyage if possible. The opportunity was to come sooner than he anticipated. Another necessity dictated his return to the sea. During the period ashore he had married Elizabeth Boardman. He continued to work on the Moore revisions, but he quickly found the world does not pay for effort alone. There was only one way of obtaining another financial stake, and so in August of 1798, five months after his marriage, he again sailed with Captain Prince in the Astrea, this time for Spain.

Bowditch must have kissed his bride good-by with a heavy heart and deep forebodings. Elizabeth was seriously ill with consumption. In Alicante his worst fears were realized. He received word indirectly that she had died. Bowditch had never been popular in Salem, and a plaintive entry in his journal is indicative of the humble status of this heartsick and lonely man: "… none of my friends in Salem have seen fit to notify me or give me any details of the death of my beloved wife." The Astrea returned to Salem in April, 1799.
Bowditch turned his corrections of Moore over to Blunt, and the book was copyrighted in May of the same year. In spite of his lack of personal acceptance in Salem, and in spite of his grief over the death of his wife, there was for Bowditch one ray of light. In certain circles his genius as a mathematician was beginning to be recognized. He was elected to the American Academy of Arts and Sciences. Yet in September of 1799, when his revised edition of Moore's Navigator appeared, the title page didn't even mention Bowditch's name.

While the book was on the press—in July—Bowditch again sailed with Captain Prince in the Astrea, and again for Manila. Anticipating a third edition of Moore's, Blunt had asked Bowditch to continue his corrections, but the day before the Astrea left port, Blunt appeared on board with another idea. The moment seemed right, he said, for a whole new book on navigation. Instead of continuing to revise Moore during the voyage that lay ahead, Blunt suggested that Bowditch do a book that would be truly his own—one that would have everything in it that Moore's lacked and that would, above all, be accurate. Bowditch immediately agreed.

He abhorred the blustering, and to him stupid, men who went to sea. Their inefficiency (by his standards), their heavy-handed intolerance and slavish acceptance of dogma enraged him. On the other hand, he loved the sea deeply and was profoundly moved by the stately grace of the great square-riggers. He was fascinated at the thought of writing a book that could guide them safely about the world by means of careful, accurate mathematics. He was delighted at the idea that something of his could enable the great ships to fulfill their inherent functional loveliness and the precise, marvelous logic that had gone into their construction. Yet Bowditch knew that, except for the vast miscellany of information he had collected in his journals, a book of his own meant starting completely from scratch.
It would take years to finish, but as the coast of America dropped astern of the Astrea, he summoned up all his enthusiasm and set to work.

Ordinarily, American ships outbound to the Orient waited for the shift in the monsoon and then headed north into the Indies with a fair wind. In steady airs, with the wind from the stern, a ship would not have to tack excessively, and positions could be determined accurately enough to give her a reasonable chance for survival in the maze of islands that lay ahead. Following this procedure—waiting for favorable winds—a vessel might take as long as three years to make the round trip between Salem and Manila. But Prince and Bowditch determined to sail the Astrea by the stars alone and have her home in a year or less. Prince was well aware of the chances he was taking; nevertheless, he entrusted the navigation of his ship to the little "arithmetic sailor." When he reached the Indian Ocean, he piled on the canvas and at once headed north into the teeth of the monsoon.

Nowadays it is almost impossible to estimate the very real dangers of such a voyage. During the Astrea's first trip to Manila the wind had been dead astern, blowing the ship swiftly and surely to its destination. This time it was dead ahead; tacking back and forth, the ship had to fight her way mile by mile toward Manila.
<urn:uuid:14336334-ee4f-4651-a670-043f85fb8ace>
CC-MAIN-2016-26
http://www.americanheritage.com/content/nathaniel-bowditch-practical-navigator?page=5&nid=51302
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396222.11/warc/CC-MAIN-20160624154956-00025-ip-10-164-35-72.ec2.internal.warc.gz
en
0.984646
1,173
2.890625
3
Walking often tops the list of good ways to get exercise. But walking isn't easy for everyone. A number of conditions can cause leg pain that makes walking difficult. The June 2008 issue of the Harvard Health Letter looks at treatment and management of four of these non-arthritic conditions. Among them:

Peripheral artery disease. This condition, a form of atherosclerosis, typically affects the arteries that supply the leg muscles. This can cause leg cramps, or the legs may feel heavy or tire easily. Researchers have found that structured, supervised exercise programs can help people with peripheral artery disease increase the amount of walking they can do without pain. These programs usually involve walking until it hurts, resting until the pain goes away, and walking again. The regimen is most effective if people follow it for about 30 minutes several days a week. If peripheral artery disease is serious or doesn't improve with treatment, doctors can reopen a blocked artery in the leg with the same procedures used for coronary artery disease: angioplasty or bypass surgery.

Chronic venous insufficiency. This condition of poor circulation involves the veins and the blood's return trip to the heart and lungs. Symptoms include swelling, skin inflammation, and open wounds on the ankle. Legs may feel achy or heavy. A mild case can be helped by lying on your back and using a pillow to elevate your legs. If you're sitting for long periods, pointing your toes up and down several times can help. More severe cases can be treated with compression stockings. Surgical treatments are reserved for the most serious cases.
<urn:uuid:4e8349fd-9aa2-456a-98d6-f44ec3a4c2ce>
CC-MAIN-2016-26
http://www.health.harvard.edu/press_releases/what-you-can-do-about-leg-pain
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394605.61/warc/CC-MAIN-20160624154954-00075-ip-10-164-35-72.ec2.internal.warc.gz
en
0.939151
351
2.921875
3
Black-handed Spider Monkey Black-handed, or Geoffroy’s, spider monkeys are diurnal (active during the day) and are almost entirely arboreal (tree-dwelling). Their limbs are adapted for agile movements, and their prehensile tails are able to act as a “fifth limb,” which can support their entire body weight. Like most primates, spider monkeys are very social; grooming helps reinforce these bonds. Do you notice any of them picking through the hair of another monkey? This is their way of staying close!
<urn:uuid:1e750228-1dea-4d00-9dba-cc630fec0d45>
CC-MAIN-2016-26
http://www.dallaszoo.com/animals-mammals/black-handed-spider-monkey/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00097-ip-10-164-35-72.ec2.internal.warc.gz
en
0.968439
115
2.890625
3
What Every American Needs To Understand About The Economy

As the United States debates its economic future in light of large government budget deficits, it is important that the public has a clear understanding of how the economy works. A good starting point for understanding how the economy works is to understand how it is measured.

Economies are measured in terms of their Gross Domestic Product, or GDP. GDP is made up of personal consumption expenditure, private investment, net trade (i.e., exports minus imports) and government spending at both the federal level and the state and local level. If the size of each of these components is known, it is only necessary to add them together to find the size of the whole economy.

In 2009, the United States GDP was $14.1 trillion, according to the Bureau of Economic Analysis (BEA). Of that amount, spending on personal consumption accounted for 71% or $10 trillion; private investment 11% or $1.6 trillion; and government spending 21% or $2.9 trillion, with federal government spending of $1.1 trillion and state and local government spending of $1.8 trillion. Net trade deducted 3% or $390 billion from GDP because exports from the US were $390 billion less than imports into the US.
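The expenditure identity described above — GDP equals consumption plus investment plus government spending plus net trade — can be checked against the quoted 2009 figures with a few lines of arithmetic. A minimal sketch in Python; the variable names are my own, and the dollar amounts are the rounded BEA values given in the text:

```python
# GDP by the expenditure approach: C + I + G + (X - M).
# Figures are the rounded 2009 BEA values quoted above, in trillions of dollars.
consumption = 10.0      # personal consumption expenditure (C)
investment = 1.6        # private investment (I)
government = 1.1 + 1.8  # federal plus state and local spending (G)
net_trade = -0.39       # exports minus imports (X - M); negative = trade deficit

gdp = consumption + investment + government + net_trade
print(f"GDP: ${gdp:.2f} trillion")  # close to the reported $14.1 trillion

# Each component's share of the total, matching the article's percentages.
for name, value in [("consumption", consumption), ("investment", investment),
                    ("government", government), ("net trade", net_trade)]:
    print(f"{name:>11}: {value / gdp:+.0%} of GDP")
```

Running this reproduces the article's breakdown: roughly 71% consumption, 11% investment, 21% government, and −3% net trade, summing to about $14.1 trillion.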
<urn:uuid:f7856564-0f0b-454c-a843-8e8e36edb6e1>
CC-MAIN-2016-26
http://revolutionradio.org/?p=11206
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00106-ip-10-164-35-72.ec2.internal.warc.gz
en
0.946933
256
3.609375
4
Searching the Internet and the World Wide Web just got a little easier, thanks to the efforts of UW researchers who have designed electronic "slaves" that can seek out information for you.

The Internet is a collection of computer networks that contain a cornucopia of information. But finding something in a hurry in that electronic haystack is not always easy. Several searching tools have been created to help users find needed information, but they can be difficult to access at peak times of system usage. And even then, a user may need to run more than one type of search to try to locate the desired data.

In 1991, UW computer scientists Oren Etzioni and Daniel Weld set out to create a software robot, or "softbot," that could make searching easier and faster. They gave the softbot a detailed knowledge of the Internet, along with enough artificial intelligence to interpret instructions and to evaluate and screen retrieved information. The Internet softbot was recognized by Discover magazine's 1995 Awards for Technological Innovation as a finalist in the software category. Apple Computer has licensed a component of this technology.

With graduate student Erik Selberg, Etzioni developed a similar softbot for the World Wide Web, called "MetaCrawler," that can operate eight search tools simultaneously, rather than just one at a time as a user would have to do. Moreover, MetaCrawler uses its artificial "noodle" to prune as much as 75% of the material it finds that is irrelevant, outdated, or unavailable, saving users the trouble of sifting through mountains of data themselves.
<urn:uuid:fd0dc324-cb9f-4316-a416-f4a179629b9d>
CC-MAIN-2016-26
http://www.washington.edu/research/pathbreakers/1991a.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397696.49/warc/CC-MAIN-20160624154957-00022-ip-10-164-35-72.ec2.internal.warc.gz
en
0.953439
322
3.390625
3
In patients who have, or might have, an obstruction (blockage) of the kidney, an internal drainage tube called a 'stent' is commonly placed in the ureter, between the kidney and the bladder. This is placed there in order to temporarily relieve the obstruction.

What is a Ureteric Stent?
A ureteric stent is a specially designed hollow tube, made of a flexible plastic material, that is placed in the ureter.

How does a stent stay in place?
The stents are designed to stay in the urinary system by having both ends coiled. The top end coils in the kidney and the lower end coils inside the bladder to prevent its displacement. The stents are flexible enough to withstand various body movements.

How long will the stent stay in the body?
This can range from a few days to months depending on each particular situation. A stent in the right position can stay in for 3-6 months without the need to replace it. STENTS ARE NOT PERMANENT!!! A stent must either be exchanged for a new one or removed after a maximum of 6 months.

How is a stent removed?
Usually a small fiber-optic scope (cystoscope) is advanced into the bladder through the urinary channel (urethra) and the stent is grasped and removed. Sometimes a stent can be left with a thread attached to its lower end that stays outside the body. The doctors can remove such stents by just pulling this thread.

LIVING WITH A URETERAL STENT
Ureteric stents are designed to allow people to lead as normal a life as possible, but they may cause side effects. In placing a stent, there is a balance between its advantages in relieving the obstruction and any possible disadvantages in the form of side effects.

What are the possible side effects associated with a stent?
Many patients do not experience problems with the stents. In those who do, the symptoms can range from very mild to severe. The majority of patients with a stent in place will be aware of its presence most of the time.
Discomfort or pain

Physical activities and sports
There are generally no physical limitations due to the stent; however, you may experience some discomfort in the kidney area and passing of blood in your urine, especially if sports or strenuous physical activities are involved. Some people get tired more quickly when a stent is in place.

You can continue to work normally with the stent inside your body; however, if the work involves a lot of physical activities, you may experience more discomfort and, especially, blood in your urine.

Travel and holidays
It is possible to travel with a stent in place, provided the underlying kidney condition and your general health allow this. There is always a chance, however, that you could have difficulties due to the stent that would then require treatment away from your normal urologist (though this is uncommon).

There are no restrictions on your sex life due to the presence of a stent, unless there is a thread attached, in which case sexual activity should be avoided as it may dislodge the stent.

When should I call for help (785-749-0639)?
<urn:uuid:e688d798-65ee-4a53-b1b7-e7217d71aeb5>
CC-MAIN-2016-26
http://www.lawrenceurology.com/ureteral-stents.shtml
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00152-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950591
671
2.96875
3
Because the motivation to achieve and the fear of failure so strongly affect enjoyment of and performance in achievement situations, researchers have been very interested in understanding how these motives develop. They have found that children who have high levels of achievement motivation but little fear of failure tend to have a history of encouragement and reward for success and independence. Parents of such children tend to emphasize the positive aspects of achievement and to praise their children for their efforts to achieve. Importantly, when their children try hard but nevertheless fail, these parents do not punish or criticize them. Instead, they encourage them to continue their attempts and praise them for their persistence. Because of the emphasis that the parents place on striving to meet standards of excellence, this value is adopted by the children and serves to guide their behavior.

The background of the fear-of-failure child is quite different. These children tend to want to avoid new experiences or activities because of the punishment or rejection associated with previous failures. Their parents tend to focus only on the success or failure experienced by the child, not on the effort the child puts out. They express displeasure with the child when failure occurs but take success for granted and expect it. In some cases, unrealistically high goals are set for the child, and the parents express displeasure when the child does not succeed. It is quite easy to see how such a background would result in a child who has learned to dread failure. Ironically, once the fear develops, its disruptive effects are likely to further decrease the chances of success. What often occurs is a vicious cycle in which failure results in increased anxiety, which in turn helps to ensure future failure.

What Parents Can Do

Sports can be a training ground for the development of positive motivation toward achievement.
Parents and coaches can have an important influence on developing attitudes concerning success and failure. Research on the development of the need for achievement and fear of failure offers some pretty clear guidelines for how you can help your child develop a healthy achievement orientation. The key principles seem to be encouraging the child to give maximum effort and rewarding him or her for that effort. Make sure the achievement standards you set are reasonable and within your child's capabilities. When success occurs, enjoy the success with your child and express appreciation for the effort that went into it. Never be punitive or rejecting if the child tries but does not succeed. Show your child that you understand how disappointed he or she is, and encourage the child to continue trying. Communicate love and acceptance regardless of success or failure. If you want to avoid developing fear of failure, don't give your child a reason to dread failure.
<urn:uuid:be7d46aa-75de-4697-b374-b34f51bac513>
CC-MAIN-2016-26
http://www.usaswimming.org/ViewMiscArticle.aspx?TabId=1729&mid=9576&ItemId=5288
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404405.88/warc/CC-MAIN-20160624155004-00199-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961163
521
3.40625
3
A House Divided: How Ohio Politics Shaped the Civil War
William Medill was the twenty-second governor of Ohio. Though he was born in Delaware, he spent much of his adult life in Ohio, where he became a prominent politician. Medill served as a representative in the Ohio House of Representatives for four consecutive terms and was even elected Speaker of the House in 1836 and 1837. In 1838 Medill, a Democrat, was elected to the U.S. House of Representatives. He was reelected in 1840, but in 1842 he failed to win a third term. Despite this defeat, Medill's political career continued to flourish. In 1845 President James K. Polk appointed him second assistant postmaster general, and a short time later appointed him commissioner of Indian affairs. Medill held the position of commissioner until the end of Polk's presidency and then returned to Ohio. Back in Ohio Medill quickly involved himself in state politics again, becoming a delegate to the Ohio Constitutional Convention in 1850. The convention was responsible for preparing the Ohio Constitution of 1851. For the first time in Ohio history, the new constitution established the position of lieutenant governor, and Medill was the first person elected to hold it. In 1853, when Governor Reuben Wood resigned to accept an appointment as United States Consul to Chile, Medill replaced him as governor. Medill then won election to a full term as governor later in 1853. In 1855 Medill faced a fierce opponent in the next governor's race: Salmon P. Chase, a member of the newly formed Republican Party. Salmon P. Chase was a lawyer and well-known abolitionist. After James Birney was arrested for helping a runaway slave escape, Chase defended him in court. Chase also defended runaway slaves, an activity that prompted Southerners to refer to him as the "Attorney General of Fugitive Slaves." Though Chase was unsuccessful in these cases, he won the veneration of the African American community for his efforts.
Originally a member of the Whig Party, Chase helped develop numerous new political parties focusing on the end of slavery. In the 1840s Chase helped in the creation of the Liberty Party, and in 1848 he helped organize the Free Soil Party in Ohio. Ironically, the Free Soil Party and Democrats worked together to elect Chase to the U.S. Senate in 1850. While in the Senate, Chase worked tirelessly to fight the expansion of slavery. He opposed the Fugitive Slave Act and the Kansas-Nebraska Act. Because of Chase's stance on slavery, he quickly became involved in the forming of yet another political party in Ohio: the Fusion Party. The party soon became known as the Republican Party. In 1855 Chase entered the governor's race in Ohio. He ran against William Medill and former governor Allen Trimble.* Slavery was the predominant issue during the election, and Chase won by a significant margin. Chase served as governor until 1860, when he returned to the U.S. Senate. Chase was only a senator for a couple of days before President Abraham Lincoln appointed him secretary of the treasury. Chase accepted and resigned from his seat in the Senate. It was during Chase's years as secretary of the treasury that the United States began to print "In God We Trust" on all currency. Lincoln and Chase did not always see eye to eye, but they shared a mutual respect for each other. Even though Chase resigned as secretary in 1864, Lincoln quickly appointed him to another position: Chief Justice of the Supreme Court. Chase held this position for only a short time before Lincoln was assassinated. In 1865 Chase was responsible for swearing in Andrew Johnson as president. *Allen Trimble was Ohio's eighth governor. When he was elected governor in 1822 he was a member of the Federalist Party. During the 1855 election he identified himself with the Know-Nothing Party (also known as the American Party).
To find out more about William Medill visit http://www.ohiohistorycentral.org/entry.php?rec=270&nm=William-Medill To find out more about Salmon P. Chase visit http://www.ohiohistorycentral.org/entry.php?rec=92
<urn:uuid:2b261cd9-beab-4224-be47-3b0a703078e4>
CC-MAIN-2016-26
http://www.ohiocivilwar150.org/omeka/exhibits/show/a-house-divided-/--ohio-governors
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397864.87/warc/CC-MAIN-20160624154957-00007-ip-10-164-35-72.ec2.internal.warc.gz
en
0.986426
875
3.890625
4
Q: My 13-year-old cat Patches was recently diagnosed with kidney failure. The veterinarian said that urinary stones have formed in her bladder and ureter on one side, and that these may be contributing to her kidney problems because the ones in the ureter are blocking the flow of urine from the kidney. He also said that removing the stones would be dangerous because there are so many. What would you recommend?
A: As you are probably aware, the kidneys filter the blood of certain waste products and eliminate them in the urine, which flows from the kidney to the bladder through a tube known as the ureter. The urine is then voided through the urethra. Stones can form or lodge anywhere along that tract and can cause a blockage to the flow of urine. If this occurs in the urethra and the cat or dog is completely obstructed, severe damage to the bladder will occur, often within 24 hours. The bladder may rupture or urine may cease to be produced. Either way, the untreated patient will rapidly become toxic and die. In your case, multiple stones are in the ureter, which makes both the diagnosis and determination of treatment a bit more difficult. Anyone who even knows someone who has passed a kidney stone is aware that it is an incredibly painful event. In dogs and cats, if it is painful, they don't tell us. In fact, a ureter can be completely obstructed and result in complete loss of function in the associated kidney and the owner never notices a problem. Only when there is a complete obstruction and infection at the same time do they usually appear sick enough that we know there is a problem, and this can be rapidly fatal. Otherwise, we often find these cases when more subtle signs of kidney failure arise, or through bloodwork or incidentally on X-rays. When a kidney is completely obstructed, it will become permanently damaged due to pressure. Removing an obstruction after one month will only result in about 30-40 percent of normal function regained.
After 6 weeks, less than 3 percent will remain after relieving obstruction. So, recovery is inversely related to the degree and duration of the obstruction. Therefore, early recognition and treatment are critical to preserving or regaining kidney function. In many cases, the stones are not completely blocking the flow of urine but create enough of an obstruction that the back pressure slowly destroys the kidney and compromises its function. This can put a cat with declining kidneys into kidney failure. If there is a single stone, it can often be removed with a relatively simple surgery called a ureterotomy. In these cases, a small incision in the ureter is made and the stone is removed. This is easy when the ureter is enlarged and there is a single stone. When the ureter is not enlarged and/or when there are multiple stones, this surgery becomes more difficult and riskier. So, in a case like this, which sounds like Patches, we would first assess the kidney and see if there is any apparent function. If the kidney is functional, the obstruction should be removed or bypassed to try and salvage whatever kidney function remains. In our hospital, we offer a new alternative to the risks faced with opening the ureter. Our surgeons prefer to pass a stent, which is a small plastic tube, from the kidney to the bladder. This will bypass the stone and re-establish flow without cutting into the ureter. Over time, the stent causes the ureter to dilate and the stone will pass harmlessly into the bladder. This procedure also has the advantage of allowing future stones to pass as well. In dogs, this procedure can be performed in a minimally invasive manner, utilizing fluoroscopy (moving X-rays) or ultrasound to guide us, while avoiding an open surgery. We use a newly developed coated polyurethane stent. This is an innovative product made from a polymer called ThermoStar. This product will begin as a firm structure but will soften at body temperature for long-term comfort.
This type of procedure has given a completely new direction to an old problem in maintaining the kidney function in stone-forming cats and dogs. Dr. Henri Bianucci and Dr. Perry Jameson are with Veterinary Specialty Care LLC. Send questions to [email protected].
<urn:uuid:f5a5e5ba-e7da-4c55-a833-d9acb9ef96d6>
CC-MAIN-2016-26
http://www.postandcourier.com/apps/pbcs.dll/article?AID=/20121228/PC12/121229375/1117/stones-can-damage-kidneys-in-pets
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396224.52/warc/CC-MAIN-20160624154956-00027-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962216
907
3.046875
3
Herpes zoster infection — commonly called shingles — is caused by the varicella zoster virus, which also causes chicken pox. When you get chicken pox, the inactive virus can remain in some of the nerves in your body for many years. Although vaccines for chicken pox and shingles are available, neither guarantees that you will not get the disease. Shingles can cause a chronic pain known as post-herpetic neuralgia. The same virus may cause Ramsay Hunt syndrome, an infection of the facial nerve that can be extremely painful due to irritation and swelling of the nerve.
Pain – The First Sign
The first sign of shingles is usually pain on one side or in one area of the body. It may be accompanied by tingling or burning sensations. Sometimes shingles causes intense itching as well. The area affected will depend on which nerves are involved. The most common location is a strip that wraps around one side or the other of your torso, but it may occur around one eye, one side of the face or neck or along any nerve path in the body. The burning sensation can be extremely painful. The symptoms of shingles can also mimic other health problems such as heart or kidney disease.
Skin Redness and Blisters
Redness of the skin followed by blisters is usually the next symptom in most people. The blisters are small and filled with clear to yellowish fluid. Blisters may occur singly or in clusters, and are usually surrounded by bright red, slightly swollen skin. As the disease progresses, the blisters will begin to break, leaving open areas of raw tissue that ooze clear, yellow or pink-tinged fluid. Eventually, the blisters develop yellow crusts that gradually dry, leaving patches of reddened skin in their wake. Most blisters do not cause permanent scarring, although it may take a few weeks for the redness to fade. Occasionally a person with shingles will have typical pain, itching and burning, but will not develop a rash.
In addition to the rash, some people have what are called systemic symptoms. You might have a fever and chills or feel generally ill, as if you have the flu. In addition to the pain of the blisters, you might develop pain in the abdomen or joints. Some people develop headaches even if the rash is on the torso. Swollen glands, called lymph nodes, are common in areas close to the infection. Fatigue is another common symptom of shingles. Because shingles affects the nerves of the body, you may develop a number of symptoms related to sensation or movement, particularly if the infection is in the facial area. You might have difficulty moving muscles in the face or be unable to move your eyes. Ptosis — a condition in which one eyelid droops and cannot be raised by using the eye muscle – is another possible symptom. You might develop hearing problems if the virus affects nerves in the ear. When the nerves of the eyes are involved, you might have vision problems. Shingles can also cause problems with your ability to taste things. Shingles usually lasts for two or three weeks. Although it is unusual to develop shingles more than once, there are a number of possible complications. Post-herpetic neuralgia is more common in people over the age of 60 and results from damage to the nerves. The pain can be very severe and may last for a long time. Infections from bacteria can occur because there are open areas in the skin. If the bacteria get into the blood stream, you might develop a systemic infection known as sepsis or an infection of the brain called encephalitis. Shingles in the eye can cause blindness and in the ear can cause deafness. You are more likely to develop shingles if you are older than 60, had chickenpox before the age of one or your immune system is weakened. There is no known way to prevent shingles and half of all people who reach the age of 85 may develop the condition. 
Shingles may be contagious to pregnant women and is spread by respiratory droplets in the same way as the common cold, so avoid women of childbearing age and children who have not had chickenpox while the disease is active.
<urn:uuid:b0d888b6-5fab-4216-a7fa-c55a510b4f0d>
CC-MAIN-2016-26
http://medicalhealthwatch.com/shingles-signs-and-symptoms/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395166.84/warc/CC-MAIN-20160624154955-00079-ip-10-164-35-72.ec2.internal.warc.gz
en
0.954787
881
3.734375
4
FDA Approves Truvada to Help Prevent HIV Infections in Healthy Individuals
On Monday, the FDA approved a groundbreaking drug called Truvada, the first medication shown to prevent HIV in people who have sex with those infected with the virus that causes AIDS. In trials where HIV-negative individuals had unprotected sex with multiple partners, including some HIV carriers, the daily pill cut the risk of HIV infection by 42 percent compared with a placebo. And in another trial involving heterosexual couples where one partner was infected — and condoms were regularly used — Truvada reduced the risk of infection by 75 percent. The drug was already approved for use in combination with other drugs for the treatment of HIV, but researcher Dr. Connie Celum, a professor of global health and medicine at the University of Washington, said, "It is exciting to consider the potential impact of this new HIV prevention tool, which could contribute to significantly reducing new HIV infections." FDA Commissioner Dr. Margaret Hamburg echoed those sentiments, adding, "Every year, about 50,000 U.S. adults and adolescents are diagnosed with HIV infection, despite the availability of prevention methods and strategies to educate, test, and care for people living with the disease." The FDA also said in a press release that for prevention purposes, Truvada should be used along with common prevention methods including safe sex practices and regular HIV testing.
<urn:uuid:c65d05d1-c62d-4b48-85e0-40c05e6225e5>
CC-MAIN-2016-26
http://wibx950.com/truvada-hiv-infections/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402746.23/warc/CC-MAIN-20160624155002-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.954927
287
2.671875
3
1. General JN Chaudhuri was born on 10th Jun 1908 and received his early education in Calcutta and at Highgate School, London. He obtained a nomination to the Royal Military College, Sandhurst and was commissioned in February 1928 into the 7 Light Cavalry.
2. During World War II he went overseas with the famous 5 Indian Division and saw active service in Sudan, Eritrea, Abyssinia and the Western Desert and was awarded the OBE and Mentioned in Dispatches thrice. In August 1944, he took over command of the 16 Light Cavalry, to become the first Indian Commanding Officer to lead an armoured Regiment into battle and won great renown for fighting in Central Burma. At the end of the Burma campaign, he saw service in French Indo-China with his regiment in Java.
3. In January 1946 he was appointed as Brigadier-in-Charge, Administration, Malaya Command, and was the third Indian to become a Brigadier in the Indian Army. A year later, he went to England to attend a course at the Imperial Defence College and on his return to India he became Brigadier (Plans) and later Director of Military Operations and Intelligence at Army HQ. In February 1948, he was promoted Major General and became officiating Chief of the General Staff. In May 1948, Gen Chaudhuri took over command of the 1 Armoured Division which played a major role in the Hyderabad Operations, and then was appointed Military Governor of the Hyderabad State for over a year. In January 1952, he became Adjutant General, Army HQ and in January 1953, he again took over as Chief of the General Staff.
4. Gen Chaudhuri served as the Chief of Army Staff of the Indian Army from 20th Nov 1962 to 7th Jun 1966 with great distinction. He passed away on 6th Apr 1983.
<urn:uuid:fda0fd21-3dab-43d3-9a24-ec5c178b8a03>
CC-MAIN-2016-26
http://indianarmy.nic.in/Site/FormTemplete/frmTemp1PTC2C.aspx?MnId=WOzoqF7K1kfnJmjZMvN5Tg==&ParentID=9rswcKqgg5jFJwqbK6wCkA==
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393146.70/warc/CC-MAIN-20160624154953-00032-ip-10-164-35-72.ec2.internal.warc.gz
en
0.992021
387
2.5625
3
First, the new ages for the Spanish handaxes have several implications, some of which are aptly discussed by John Hawks. The first is that how refined some handaxes are is not necessarily a good indicator of their overall age. That is, biface morphology doesn't simply gradually go from coarse to fine over time; biface morphology is influenced by many factors and essentially reflects use considerations at the end of an individual handaxe's use-life (McPherron 2000). This was a well-established fact before this new study, but these dates underscore that lack of correlation especially well. A second implication is that the Acheulean (yup, that's how you spell it!) is therefore likely to be much older than previously assumed. The general consensus has been for some years that this industry first appeared in Europe around 600kya (cf. Monnier 2006). The age of 900kya for an Acheulean assemblage in Spain thus pushes back that date of first appearance by several hundred thousand years. What is more, unless you accept that hominins using Acheulean tools came to Spain directly from Africa (across the Strait of Gibraltar?), this age implies that the Acheulean in more eastern parts of Europe must be even older, though hard evidence of this is currently lacking. The earliest Acheulean site outside of Africa is 'Ubeidiya, in the Jordan Valley, dating to ca. 1.4mya. Assuming a single origin for Acheulean technology, this would mean that the amount of time it took handaxes to diffuse across the European mainland is effectively cut almost in half and now stands at a maximum of about 500,000 years, a long time to be sure, but much less than the previously accepted almost million-year interval. This has some important implications in constraining models of early hominin dispersion in Europe and how that relates to the subsequent development of Neanderthals (e.g., Hublin 2009). And this would make sense, really, given the usefulness of handaxes as a technological innovation.
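The timing argument above is simple arithmetic, and it may help to see it written out (using the approximate dates quoted in this post):

```latex
\underbrace{1.4\,\mathrm{Ma}}_{\text{'Ubeidiya}} - \underbrace{0.6\,\mathrm{Ma}}_{\text{old European date}} = 0.8\,\mathrm{My}
\qquad\longrightarrow\qquad
\underbrace{1.4\,\mathrm{Ma}}_{\text{'Ubeidiya}} - \underbrace{0.9\,\mathrm{Ma}}_{\text{new Spanish date}} = 0.5\,\mathrm{My}
```

That is, the implied window for handaxes to diffuse from the Levant across Europe shrinks from roughly 800,000 years to at most 500,000 years.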
The thing about handaxes is that they are generally described as unchanging over their 1.6my history, although this impression is based on morphology alone and doesn't really reflect the state of thinking among most scholars involved in Lower Paleolithic research. In a nutshell, handaxes were highly polyvalent from a functional perspective and not putting individual occurrences in proper context is what results in this mistaken impression of stasis (Machin 2009, Nowell and Chang 2009). A contextualized approach to handaxe variability is what allows archaeologists to seize on the richness and diversity of Acheulean behavior (Hosfield 2008). It is also what allows us to make sense of outliers like the Lake Makgadikgadi specimens. By any standard, at 30+cm in length these things are frikkin' huge! Strikingly, the press release only mentions that these very large items were found, without any discussion of how their size is unusual and what this distinctiveness might mean. These specific artifacts are of uncertain age, and their function is also uncertain - at that size, it is unclear exactly what practical function they might have served, as they would have been rather unwieldy to use, unless they were somehow hafted, in which case their heft might be an indication of their ultimate function. Most people tend to assume handaxes were made and used as stand-alone hand-held tools. This lithic-centric view has led to some conjecture that the skill manifest in handaxe manufacture might have served as a form of 'advertisement' to potential mates by especially technically proficient knappers (e.g., Kohn and Mithen 1999). This has been challenged on both theoretical and practical grounds, most eloquently by Nowell and Chang (2009), who detail how such a model cannot, in fact, be argued to be founded on evolutionary theory as commonly defined.
Machin (2009:35-36) argues persuasively that handaxe morphology cannot be understood by reference to single-cause explanations since "variability is caused by the differing motivations and constraints – ecological, physiological, biological, cognitive and social – which act upon the individual agent at any given point in time." The sheer timespan and geographical distribution of handaxes certainly agrees with her - it's unlikely that handaxes served the same function in all contexts in which they are found. In a way, handaxes are perhaps best understood as an especially useful and versatile technological innovation that allowed them to be, if not all things to all (pre-)people, at least many things to many (pre-)people. Getting back to the Spanish handaxes described by Scott and Gibert, this raises some interesting questions. The first among these is why, given their recognized usefulness, such implements would be so scarce when they are first documented in the record - at Estrecho del Quípar (the site dating to 900kya), there is only one handaxe in the assemblage, and based on its morphology (Fig. S4: flake scars are present on both sides of the piece, but the flaking is not very extensive at all toward the center of either face) some analysts might consider it a core or bifacially flaked cobble instead of a proper handaxe. To be fair, the authors refer to other studies that show that handaxes are not very frequent in most Acheulean assemblages (i.e., Monnier 2006), and they also describe a contemporary Spanish assemblage that lacks handaxes altogether to explain why an absence or low frequency of bifaces is not necessarily a problem to labeling the assemblage as Acheulean.
However, this begs the question of what an Acheulean assemblage actually is if not one that contains handaxes, a question that Gilliane Monnier has addressed in great detail, concluding that
It is time for a comprehensive revision of the Lower/Middle Paleolithic periodization based upon a synthesis of multiple aspects of the archaeological record, including climate, subsistence, landscape use, mobility and exchange, symbol use, cognition, and biological evolution, in order to determine whether we should maintain a two-phase system [Lower vs. Middle Paleolithic] and, if so, how it should be defined. (Monnier 2006: 729)
If that's the case, what can we really say about the oldest appearance of the Acheulean without an in-depth consideration of these complementary - and necessary - lines of evidence instead of only focusing on the presence of large bifacial artifacts?
Hosfield, R. 2008. Stability or Flexibility? Handaxes and Hominins in the Lower Paleolithic. In Time and Change: Archaeological and Anthropological Perspectives on the Long-Term in Hunter-Gatherer Societies (D. Papagianni, R. Layton and H. Maschner, eds.), pp. 15-36. Oxbow Books, Oxford.
Hublin, J.J. 2009. The Origins of Neandertals. PNAS 106:16022-16027.
Machin, A. 2008. Why Bifaces Just Aren't That Sexy: A Response to Kohn and Mithen (1999). Antiquity 82: 761-769.
Machin, A. 2009. The role of the individual agent in Acheulean biface variability: A multi-factorial model. Journal of Social Archaeology 9: 35-58.
McPherron, S.P. 2000. Handaxes as a Measure of the Mental Capabilities of Early Hominids. Journal of Archaeological Science 27:655-663.
Monnier, G. 2006. The Lower/Middle Paleolithic Periodization in Western Europe: An Evaluation. Current Anthropology 47:709-744.
Nowell, A., and M.L. Chang. 2009. The Case Against Sexual Selection as an Explanation of Handaxe Morphology. PaleoAnthropology 2009: 77-88.
Scott, G.R., and S. Gibert. 2009. The oldest hand-axes in Europe. Nature 461:82-85.
<urn:uuid:e52e48ae-d490-4111-97a5-c24f95eae5bc>
CC-MAIN-2016-26
http://averyremoteperiodindeed.blogspot.com/2009/09/two-sides-to-every-biface.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397562.76/warc/CC-MAIN-20160624154957-00155-ip-10-164-35-72.ec2.internal.warc.gz
en
0.945385
1,711
3.203125
3
February 27, 2013
International Trade Meeting Offers Hope for Greater Protection of Wildlife
by Mark Jones
The world is facing a biodiversity crisis, and many of the plants and animals we share it with are threatened with extinction. There are multiple reasons for this crisis, and for many species the solutions are complex. However, international trade in wild animals and their body parts is one significant threat that can and should be addressed. The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) was formed for exactly that purpose—it brings countries together to regulate the international trade in wildlife.
The upcoming meeting
CITES celebrates its 40th anniversary this year, and its 177 member countries ('Parties') will convene at the 16th meeting of the 'Conference of the Parties' in Bangkok, Thailand, in March to consider the regulation of international trade across a whole range of species and circumstances. As well as representatives of the Parties, a large number of non-governmental organisations, including animal protection organisations and wildlife trade interest groups, will be in attendance. UK government officials will be attending the meeting and the UK will vote as per the agreed line taken by the European Union. HSI/UK has been working tirelessly in the months leading up to the March meeting. Members of our staff have met with Members of Parliament, Ministers, officials from the Department for Environment, Food & Rural Affairs and other stakeholders to raise awareness of the issues and encourage the UK government to support better protection.
Species under discussion
Among the proposals that will be discussed are bans on international commercial trade in polar bears and African manatees (sea cows) and their parts, and restrictions on the trade in a number of species of sharks and rays threatened by the trade in their fins and other body parts.
The Conference will also consider proposals to restrict the export of rhino horn hunting trophies from South Africa in the light of the poaching crisis that is threatening the very future of the world's remaining five species of rhino. Elephants will feature heavily in the discussions, with calls for a strict moratorium on any further sale of ivory stockpiles while efforts are ongoing to bring an end to the massacre of elephants across Africa to supply the seemingly insatiable demand for ivory in the Far East.
Imperfect, but important
CITES is a complex and often frustrating forum in which to work, where the protection of endangered and threatened species is at the mercy of scientific, cultural and political considerations. Nevertheless, it is a vital component to species protection and can make a great contribution to the very future of species and biodiversity as a whole. Humane Society International will be there to lobby CITES Parties to adopt a precautionary approach in support of species conservation and animal welfare.
Mark Jones is executive director of Humane Society International/UK.
<urn:uuid:543d7f65-598a-40d1-b78c-8ca891e229f6>
CC-MAIN-2016-26
http://www.hsi.org/world/united_kingdom/news/news/2013/02/cites_meeting_hopes_022713.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391634.7/warc/CC-MAIN-20160624154951-00145-ip-10-164-35-72.ec2.internal.warc.gz
en
0.935549
591
2.609375
3
Carbon Monoxide in Your Home
Carbon monoxide within the home is responsible for many deaths every year. This deadly and undetectable gas is extremely dangerous when breathed in and can kill in minutes. Those that aren't killed by it can suffer a range of symptoms, from short-term effects to permanent brain and major organ damage, amongst other things. The elderly, those with heart or lung problems, young children, animals, pregnant mothers and unborn babies are particularly susceptible to carbon monoxide poisoning, as are those that already have certain levels of CO in the blood, such as smokers or those exposed to carbon monoxide as an occupational hazard. There are many items and appliances within the home that can be the source of carbon monoxide pollution, probably far more than many people realise. The most common appliances are fuel-burning heaters, such as furnaces, water heaters, butane or gas heaters, stoves and gas ovens, central heating systems, and refrigerators. Using these appliances in poorly vented or enclosed spaces can increase the chances of carbon monoxide pollution, as can blocked vents and chimney flues. In order to decrease the risks of carbon monoxide poisoning within the home, it is important to stick to some basic but very important rules:
- Always have your appliances fitted by a certified and experienced professional
- Have your appliances checked regularly, and have your vents and chimney checked and cleaned on a regular basis
- Always adhere to manufacturers' instructions when using these appliances
- Never use fuel-burning appliances in enclosed and un-vented spaces
- Never use a gas stove or oven to warm your home
- Make sure that you have a high-standard CO detector fitted outside sleeping areas and main living areas. This should be placed high up or on the ceiling as CO rises rapidly.
- Be aware of the symptoms of carbon monoxide poisoning so that you can take appropriate action should the need arise
You should also note that your car exhaust can emit carbon monoxide fumes, and you should never leave the engine idling in an enclosed space such as the garage. This can not only pollute the air in the garage, but the carbon monoxide fumes can also seep quickly into the home, putting everyone in the house at risk as well. Taking the necessary steps to make your home as carbon monoxide-proof as possible could protect you and your family from serious illness, permanent damage and death, and these simple steps can go a long way towards preventing pollution of the air within your home. Remember, carbon monoxide cannot be seen, tasted or smelt, so your only defence against this potential killer is prevention.
<urn:uuid:28c081ca-7b26-46f0-87c3-ecd5c5efd6b6>
CC-MAIN-2016-26
http://www.silentshadow.org/carbon-monoxide-in-your-home.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00038-ip-10-164-35-72.ec2.internal.warc.gz
en
0.943388
545
3.0625
3
May 22, 2008
The Internet Immune System
Metaphors can be useful constructs. When employed properly, they can help us understand something that is complex and confounding by comparing it to something analogous and familiar. In the Taipei deep dive on Security and Society, we tapped into the immune system metaphor, diligently comparing Internet security to the security systems that govern the human body. And the exercise helped us identify some undeniable weaknesses in the world of digital security.
We spent most of our time in Taipei talking about digital security (though we did touch on the intersection of digital and physical security…more on that later). And the immune system analogy is certainly not a new one. After all, we call malicious code “viruses.” Computers get “infected” and need to be “quarantined.” So when participants began comparing network security to the SARS outbreak that hit this area hard 5 years ago, it wasn’t all that surprising. But what was surprising was how the conversation illuminated some of the gaps in today’s digital security, and how we might take a lesson from the marvelous human immune system.
For example, our immune system is not overly concerned with preventing viruses from entering the body. It is concerned, however, with controlling, containing, and assimilating the virus as quickly as possible once it is discovered. One participant called it “an ecological view of security, rather than an absolute view.” By that he meant, we should be focused on maintaining the overall health of the body, keeping the immune system strong, rather than tilting at windmills by trying to prevent any and all attacks. The “body” in this case could be seen as an individual computer system, or the entire network. And the concept is that by allowing a steady series of small attacks on different parts of the system, we gradually strengthen the overall network.
It’s not unlike biological evolution, and you could argue that we are in the midst of an accelerated version of digital Darwinism as we speak.
Another area in which the immune system analogy worked was that of detection and response. When the human body is infected, there is a series of universally recognized signs: fever, cough, sneezing, fatigue, nausea. These symptoms alert us that our immune system has been engaged, and we know to get extra rest, avoid other humans, or go to a doctor. But in the Internet world, victims rarely even know they’ve been victimized. Data gets stolen, PCs are compromised, and credit card numbers are bought and sold, but most people are lucky if they ever find out, let alone with an early warning. The symptoms are subtle, and sometimes undetectable.
If you are one of the lucky ones (and I say that with tongue firmly in cheek), and you are somehow made aware you’ve been victimized online, then what? The human body kicks an elaborate defense system into gear. A virus is reported to the authorities (the immune system) and then immediately acted upon. But where is the analog in the digital world? If you bring your PC to the police station, and file a report that says “someone has accessed my system illegally,” they would probably laugh you out of the station. But why? Who are the authorities on digital crime? And why shouldn’t there be an enforcement body that is as powerful as cops walking the neighborhood beat?
“We really need to work on systems that can alert someone when they have been victimized,” said Rama Subramaniam of Valiant Technologies, a digital forensics company based in Chennai. “The police also need to take on a role so that these crimes can be properly investigated and prosecuted.” This sentiment mirrored the thoughts of Tokyo’s participants: that legislation around digital crime is severely lacking.
It also shed light on the fact that the worlds of digital and physical security are not all that different, but for some reason remain separate. Crimes that take place online have very real consequences in the physical world. Which raises the question of why the same law enforcement agencies that police the physical world should not also be policing the digital world.
We ran this immune system metaphor into the ground before it was all over, but that’s not to say that it wasn’t useful. For instance, one participant noted that right now we have a hodgepodge of security systems for the various constituents on the network. Each has wildly varying levels of quality and effectiveness (not to mention cost). But there is no international immune system, a security system that is looking after the overall health of the system. And that could cost us all dearly some day.
Dan Geer has applied the concept of biological ecosystems to security and has some very insightful things to say on the theme. (http://geer.tinho.net/geer.sourceboston.txt) As I mentioned during the Taipei GIO meet, the new cyber police needs to be put together as a virtual organisation, drawing from the traditional federal police organisations, CERTs, ISPs and Telcos, Web Service Providers and Financial Services Institutions. New linkages have to be set up and made effective quickly.
Posted by: Nandkumar Saravade | May 31, 2008 9:17:14 PM
Issues surrounding cyberspace are fertile ground for analogies of all kinds. My personal favorites are from anthropology and the history of civilization. There are already structures in place that can be utilized to create secure subsets of cyberspace, with little more than a restructuring of ICANN.
My personal feeling is that the mentality of the monolithic cyber-arena, in which restrictions in one area imply restrictions all around, is artificial and hampering security efforts. Maybe we should be looking more closely at analogies to the fortified town, or walled city, and attempting to create subspaces where activity essential to the daily functioning of society is secured, and individuals are protected from predators. Much of the problem in establishing security structures may be conceptual and philosophical, related to the dramatic development of the internet in ways that could not be anticipated. There isn't one internet, or two, but as many internets as there are nodes on the system.
Posted by: Tim R. | Jun 5, 2008 9:46:24 AM
-Nandkumar-, please forgive the spelling error.
Posted by: Tim R. | Jun 5, 2008 9:48:29 AM
<urn:uuid:7e81f873-d3ff-406f-a93f-888557628520>
CC-MAIN-2016-26
http://gio.typepad.com/blog/2008/05/the-internet-im.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.72/warc/CC-MAIN-20160624154955-00158-ip-10-164-35-72.ec2.internal.warc.gz
en
0.953002
1,381
2.828125
3
The Nobel Prize for the discovery and analysis of penicillin was awarded in 1945 [Nobel Laureates: Sir Alexander Fleming, Ernst Boris Chain, Sir Howard Walter Florey]. It was about this time that penicillin became widely available in Europe and North America. By 1946, 6% of Staphylococcus aureus strains were resistant to penicillin. Resistance in other species of bacteria was also detected in the 1940s. By 1960 up to 60% of Staphylococcus aureus strains were resistant, with similar levels of resistance reported in other clinically relevant strains causing a wide variety of diseases (Livermore, 2000).
Penicillins are a class of antibiotics with a core structure called a β-lactam. The different types of penicillin have different R groups on one end of the core structure. A typical example of a penicillin is penicillin G [Monday's Molecule #30]. Other common derivatives are ampicillin and amoxicillin.
The original resistance to this entire class of drugs was caused mostly by the evolution of bacterial enzymes that could degrade them before they could block cell wall synthesis. (Recall that bacteria have cell walls and penicillin blocks cell wall synthesis [How Penicillin Works to Kill Bacteria].) It seems strange that the evolution of penicillin resistance would require a totally new enzyme for degrading the drug. Where did this enzyme come from? And how did it arise so quickly in so many different species?
The degrading enzyme is called penicillinase, β-lactamase, or oxacillinase. These names all refer to the same class of enzyme, which binds penicillins and then cleaves the β-lactam unit, releasing fragments that are inactive. The enzymes are related to the cell wall transpeptidase that is the target of the drug. The inhibition of the transpeptidase is effective because penicillin resembles the natural substrate of the reaction: the dipeptide D-alanine-D-alanine.
In the normal reaction, D-Ala-D-Ala binds to the enzyme and the peptide bond is cleaved, causing release of one of the D-Ala residues. The other one, which is part of the cell wall peptidoglycan, remains bound to the enzyme. In the second part of the reaction, the peptidoglycan product is transferred from the enzyme to a cell wall crosslinking molecule. This frees the enzyme for further reactions (see How Penicillin Works to Kill Bacteria for more information).
Penicillin binds to the peptidase as well, and the β-lactam bond is cleaved, resulting in the covalent attachment of the drug to the enzyme. However, unlike the normal substrate, the drug moiety cannot be released from the transpeptidase, so the enzyme is permanently inactivated. This leads to disruption of cell wall synthesis and death. Resistant strains have acquired mutations in the transpeptidase gene that allow the release of the cleaved drug. Thus, the mutant enzyme acts like a β-lactamase by binding penicillins, cleaving them, and releasing the products.
Although the β-lactamases evolved from the transpeptidase target enzymes, the sequence similarity between them is often quite low in any given species. This is one of the cases where structural similarity reveals the common ancestry [see the SCOP Family beta-Lactamase/D-ala carboxypeptidase]. It's clear that several different β-lactamases have evolved independently but, in many cases, a particular species of bacteria seems to have
Livermore, D.M. (2000) Antibiotic resistance in staphylococci. Int. J. Antimicrob. Agents 16:s3-s10.
<urn:uuid:285c035c-118e-4a02-a305-195deacba5c1>
CC-MAIN-2016-26
http://sandwalk.blogspot.com/2007/06/penicillin-resistance-in-bacteria.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397795.31/warc/CC-MAIN-20160624154957-00151-ip-10-164-35-72.ec2.internal.warc.gz
en
0.935088
806
3.859375
4
PETE Devices Could Harvest Solar’s Wasted Heat
Published on March 20th, 2013 | by SLAC National Accelerator Laboratory
Scientists working at the Stanford Institute for Materials and Energy Sciences (SIMES) have improved an innovative solar-energy device to be about 100 times more efficient than its previous design in converting the sun’s light and heat into electricity.
“This is a major step toward making practical devices based on our technique for harnessing both the light and heat energy provided by the sun,” said Nicholas Melosh, associate professor of materials science and engineering at Stanford and a researcher with SIMES, a joint institute of Stanford University and SLAC National Accelerator Laboratory.
The new device is based on the photon-enhanced thermionic emission (PETE) process first demonstrated in 2010. In a report last week in Nature Communications, the group describes how they improved the device’s efficiency from a few hundredths of a percent to nearly 2%, and said they expect to achieve at least another 10-fold gain in the future.
Turning On the Heat
Conventional photovoltaic cells use a portion of the sun’s spectrum of wavelengths to generate electricity. But PETE uses a special semiconductor chip to make electricity by using the entire spectrum of sunlight, including wavelengths that generate heat. In fact, the efficiency of thermionic emission improves dramatically at high temperatures, so adding PETE to utility-scale concentrating solar power plants, such as multi-megawatt power tower and parabolic trough projects in California’s Mojave Desert, may increase their electrical output by 5%. Those systems use mirrors to focus sunlight into super-bright, blazingly hot regions that boil water into steam, which then spins an electrical generator.
“When placed where the sunlight is focused, our PETE chips produce electricity directly; and the hotter it is, the more electricity it will make,” Melosh said.
The heart of the improved PETE chip is a sandwich of two semiconductor layers: One is optimized to absorb sunlight and create long-lived free electrons, while the other is designed to emit those electrons from the device so they can be collected as an electrical current. A cesium oxide coating on the second layer eases the electrons’ passage from the chip.
Future research is aimed at making the device up to an additional 10 times more efficient by developing new coatings or surface treatments that will preserve the atomic arrangement of the second layer’s outer surface at the high temperatures it will encounter in the concentrating solar power plant. “We expect that other materials, such as those incorporating barium or strontium, will make the surface much more stable up to at least 500 degrees Celsius,” said Jared Schwede, a Stanford graduate student who performed many of the PETE experiments.
An additional challenge will be to engineer the device to withstand the dramatic 500-degree daily temperature swings at solar power plants, as their systems heat up during the day and then cool down at night.
PETE research has received support from Stanford’s Global Climate and Energy Project, the Gordon and Betty Moore Foundation, the Department of Energy’s SunShot Initiative and the Defense Advanced Research Projects Agency.
Source: Mike Ross, SLAC National Accelerator Laboratory
Photos & Video: Brad Plummer, SLAC National Accelerator Laboratory
Citation: J.W. Schwede et al., Nature Communications, 12 Mar 2013 (10.1038/ncomms2577)
<urn:uuid:081503ff-f836-48cb-a29b-953e2cc1345a>
CC-MAIN-2016-26
http://cleantechnica.com/2013/03/20/pete-devices-could-harvest-solars-wasted-heat/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398516.82/warc/CC-MAIN-20160624154958-00201-ip-10-164-35-72.ec2.internal.warc.gz
en
0.902194
798
2.84375
3
Just as many people are making resolutions to stay away from food, birds are using food and other, sometimes odd, techniques to stay warm and survive winter.
“Birds employ a number of methods to survive the adversity of winter,” said John Schaust, Chief Naturalist with Wild Birds Unlimited Nature Shop. Food is the most essential element, providing birds with the energy, stamina and nutrition they need. To stay warm, birds will expend energy very quickly, some losing up to 10% of their body weight on extremely cold nights. An ample supply of high-calorie foods such as black oil sunflower, peanuts and suet is crucial to a bird’s survival.
“We can play a vital role when feeding the birds becomes critical during extremely cold conditions,” stated Schaust. “At these times, a supply of food can mean the difference between life and death for a bird.”
Most birds adjust their feathers to create air pockets to help them keep warm. “You will often notice the birds look fatter or ‘puffed up’ during cold weather,” explained Schaust. “This is because the birds are fluffing up their feathers; the more air space, the better the insulation.”
<urn:uuid:6655df88-2b99-44ba-9f38-889cf5fb3756>
CC-MAIN-2016-26
http://franchise.business-opportunities.biz/2009/01/06/birds-implement-multiple-techniques-to-survive-winter/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402746.23/warc/CC-MAIN-20160624155002-00048-ip-10-164-35-72.ec2.internal.warc.gz
en
0.954307
264
3.890625
4
Understanding the climate of the past is extremely valuable to help put modern weather observations into a long-term context. Although we have considerable records of past weather, especially over land, more data is always welcome. Given the British obsession with the weather, it is perhaps no surprise that more data is available; it is just buried in hand-written logbooks. Transcribing this data is normally a time-consuming and expensive task…
The Old Weather project aims to change all this. It has put online the logbooks from more than 200 Royal Navy warships, from the extended World War 1 period (1914-1923). These ships recorded various aspects of the weather every 4 hours for years at a time! The clever interface allows volunteers to transcribe the observations quickly and accurately. More than 8,000 volunteers are freely giving up their spare time to contribute to our science. In less than 9 months they have contributed more than 3 million new weather observations to our historical records! These observations will allow us to better characterise and understand the causes of past climate variability. Watching the voyages gives a great idea of what is possible using this technology.
Why am I writing about this? Well, Old Weather is planning to expand to use new types of logbooks. I am hoping to utilise Old Weather to extend our records of Arctic climate and sea-ice back to the 18th century – it just depends on the funding…
<urn:uuid:19b85e32-ab85-45cf-80f3-04d6ac3131b1>
CC-MAIN-2016-26
http://www.climate-lab-book.ac.uk/2011/learning-about-past-climate-from-ships-logs/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397842.93/warc/CC-MAIN-20160624154957-00008-ip-10-164-35-72.ec2.internal.warc.gz
en
0.935225
287
3.109375
3
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2004 March 3
Explanation: Was Mars ever wet enough to support life? To help answer this question, NASA launched two rover missions to the red planet and landed them in regions that satellite images indicated might have been covered with water. Yesterday, mounting evidence was released indicating that the Mars Opportunity rover had indeed uncovered indications that its landing site, Meridiani Planum, was once quite wet. Evidence that liquid water once flowed includes the physical appearance of many rocks, rocks with niches where crystals appear to have grown, and rocks with sulfates. Pictured above, Opportunity looks back on its now empty lander. Visible is some of the light rock outcropping that yielded water indications, as well as the rim of the small crater where Opportunity landed. The rover will continue to explore its surroundings and try to determine the nature and extent that water molded the region.
Authors & editors:
NASA Web Site Statements, Warnings, and Disclaimers
NASA Official: Jay Norris. Specific rights apply.
A service of: LHEA at NASA / GSFC & Michigan Tech. U.
<urn:uuid:36096851-e2a3-489f-ab30-84dc7686f054>
CC-MAIN-2016-26
http://apod.nasa.gov/apod/ap040303.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403502.46/warc/CC-MAIN-20160624155003-00180-ip-10-164-35-72.ec2.internal.warc.gz
en
0.942895
255
3.59375
4
One of the ways children show me how they are processing the impressionistic lesson of the formation of the Universe is by how they are interpreting the posters in our room which are used during the presentation. First, the most immature child processes this wonderful impressionistic story very literally. I find this is often a young first year child. This vision is not wrong for where the child is developmentally. I know the child will see it from a different facet as he is older. The second and third year child (and often the first year upper child) is wanting to show what he believes really happened at that moment on the Earth. His drawings are more “photo realistic.” For the older Upper Elementary child, the interpretation is often back to the impression of the work. He understands the work and many of the concepts literally and is now ready to put his individual stamp on Dr. Montessori’s work. AV and JV had become interested in creating their own God With No Hands cards. Well not cards in the case of AV. AV wants to quilt the felt to make a soft poster. JV has been focused on a minimalist approach. JV is using cut paper. Elegant. I find the child’s vision is very helpful for discussions of other Montessori lessons. It provides a window into the child’s thoughts, understandings, and tendencies. I have observed some Montessori classes shading photo copies of the charts as line art sheets and making a book. I would find this difficult for me as a directress, because the meeting of the child with the story is personal and provides such a window into her soul. I wouldn’t want to miss those clues.
<urn:uuid:e911fab6-bb3f-4c7b-b441-5b0d78bf9f27>
CC-MAIN-2016-26
https://eavice.wordpress.com/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00126-ip-10-164-35-72.ec2.internal.warc.gz
en
0.972757
356
2.8125
3
In this chapter we study several closely intertwined topics: property testing, probabilistically checkable proofs of proximity (PCPPs), and constraint satisfaction problems (CSPs). All of our work will be centred around the task of testing whether an unknown boolean function is a dictator. We begin by extending the BLR Test to give a $3$-query property testing algorithm for the class of dictator functions. This in turn allows us to give a $3$-query testing algorithm for any property, so long as the right “proof” is provided. We then introduce CSPs, which are in fact identical to string testing algorithms. Finally, we explain how dictator tests can be translated into computational complexity results for CSPs and we sketch the proofs of some of Håstad’s optimal inapproximability results.
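The BLR Test at the heart of this chapter is easy to sketch in code: pick random x and y, make the 3 queries f(x), f(y), f(x ⊕ y), and accept iff f(x) ⊕ f(y) = f(x ⊕ y). The sketch below is illustrative only (the function names and trial count are my own, and the chapter's dictatorship test adds more on top of plain linearity testing); it shows that a dictator, being linear over F₂, always passes, while a function that is far from linear, such as Maj₃, is rejected with high probability.

```python
import random

def blr_linearity_test(f, n, trials=100):
    """BLR test: query f at x, y, and x XOR y (3 queries per trial);
    accept iff f(x) XOR f(y) == f(x XOR y) on every trial."""
    for _ in range(trials):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False  # violation found: f is far from linear
    return True  # consistent with f being (close to) linear

# A dictator f(x) = x_i is linear over F_2, so it always passes:
dictator = lambda x: (x >> 2) & 1

# Majority of 3 bits is 1/4-far from every linear function, so each
# trial rejects it with probability at least 1/4; over 100 trials it
# passes with probability at most (3/4)^100:
maj3 = lambda x: 1 if bin(x & 0b111).count("1") >= 2 else 0
```

Note that the test is one-sided: linear functions are always accepted, and the BLR analysis bounds the acceptance probability of far-from-linear functions.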
<urn:uuid:3a1a2c58-7ed6-4dc2-b0fe-51ee38e08170>
CC-MAIN-2016-26
http://www.contrib.andrew.cmu.edu/~ryanod/?p=1144
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393332.57/warc/CC-MAIN-20160624154953-00106-ip-10-164-35-72.ec2.internal.warc.gz
en
0.883988
172
2.640625
3
Tutorial: Luna 9
“We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard.” — John F. Kennedy
An often overlooked achievement of the Soviet space program was its series of unmanned Lunik and Luna probes. Sure, most of them failed in various ways, but those which actually completed their objectives were groundbreaking. The Americans might have been the first and only to actually walk on the moon in person, but the first object to reach the moon was Luna 2, the first pictures of the far side of the moon were taken by Luna 3, and the first soft landing on the moon was Luna 9. The Soviets also managed to do three sample-return missions from the moon (Luna 16, 20 and 24). Launching an unmanned spacecraft to another celestial body and returning with a soil sample is a feat not reproduced by anyone else to this day. This feat, however, was diminished by happening after Apollo 11 returned with soil samples from the moon, so the samples' scientific value wasn't as high as anticipated.
In this mission we will re-enact the flight of Luna 9, the first soft landing on the moon. The original Luna 9 landed by deploying a large airbag to cushion its impact. Unfortunately KSP hasn't got anything like that as of version 1.0.5, so we will do a landing with rockets instead.
The probe itself will be as minimalistic as possible. We will use the Probodobodyne QBE as the control unit. It's the most impact-resistant command pod available and thus has the best chance to survive the impact on the Mun surface, should we screw up the landing. We will plaster its surface with OX-STAT Photovoltaic Panels for energy supply. The lack of a battery means that the vessel will be non-functional when in the shade of Kerbin or the Mun. We will have to keep this in mind.
Like most Soviet space missions, the rocket was based on the R-7 design.
The rocket used for the Luna mission was basically the same one that brought Sputnik and Gagarin into space, just with yet another additional stage. I did, however, take the liberty of increasing the tank sizes of the first and third stages and upgrading the third-stage engine (it wouldn't work otherwise), to account for numerous small advances in propulsion technology.
- Payload stage
- Fourth stage
- Third stage
- Second stage
- First stage
Use the first two stages to get into orbit and the third to circularize it and to get rid of any inclination relative to the orbit of the Mun.
Going to the Mun
Get into a transfer orbit just like in the "Going to the Mun" ingame tutorial: Wait until the Mun is 100° ahead of your periapsis and then plan a prograde acceleration maneuver on it which leads you straight to the Mun. Try to find a trajectory which gets you close, but not on a collision course. You can check the closest distance of your course by checking the height of the Mun periapsis. When there is no periapsis marker, you are on a collision course. Remember that our probe won't work in the darkness, so make sure that the maneuver is on the daylight side of Kerbin.
After you have performed the acceleration maneuver, check your trajectory again to make sure that it really points to the Mun, and correct if necessary. The earlier you correct your course, the less fuel it will cost you. Also watch for any out-of-plane (vertical) difference; the best place to correct this is at the ascending/descending node.
Getting into Munar orbit
As soon as you enter the sphere of influence of the Mun (you will notice, because your trajectory suddenly changes), plan a maneuver to get onto a circular orbit. The most efficient way to do this is by boosting retrograde at the Mun periapsis. Again, start your burn a bit early. Our engine isn't the strongest one, so we might need a very long burn duration.
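As a sanity check on the "wait until the Mun is 100° ahead" rule, a rough Hohmann-transfer estimate can be computed from the stock game's published constants (the helper name below is mine, and this is only a sketch: the ideal impulsive-burn answer comes out nearer 110°, which is consistent with the tutorial's eyeballed 100° once finite burn times are taken into account):

```python
import math

# Stock KSP constants (from the game's published specifications):
MU_KERBIN = 3.5316e12        # Kerbin gravitational parameter, m^3/s^2
R_MUN_ORBIT = 12_000_000.0   # Mun's circular orbital radius, m
R_PARK = 700_000.0           # 100 km parking orbit (600 km body radius + altitude), m

def transfer_phase_angle(r1, r2, mu):
    """Hohmann phase angle: how many degrees ahead of you the target
    should be at the moment you burn prograde."""
    # Time for half the transfer ellipse (Kepler's third law):
    t_transfer = math.pi * math.sqrt(((r1 + r2) / 2) ** 3 / mu)
    # How far the target moves during that time:
    omega_target = math.sqrt(mu / r2 ** 3)  # rad/s
    return 180.0 - math.degrees(omega_target * t_transfer)

angle = transfer_phase_angle(R_PARK, R_MUN_ORBIT, MU_KERBIN)
print(f"Burn when the Mun is about {angle:.0f} degrees ahead")  # ~110 degrees
```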
When you don't have much fuel left, you might decide to skip orbiting and go for a straight-on collision course the moment you enter the sphere of influence. Mun has no atmosphere and hasn't got many high mountains, so an orbit of just a few thousand meters is safe and will give you a great view. Maybe you will even spot one of the legendary Mun arches.
After you have enjoyed the view from orbit, it's time to land. Boost retrograde to get onto a course which takes you to the surface. Note that because the Mun has no atmosphere, you can control exactly the location where you want to land, but be sure to correct for the rotation of the Mun. Remember the energy problem of our probe: land on the day side. Also try to land at a very steep angle - it makes the braking maneuver easier.
The most important thing when doing a landing on a planet without an atmosphere is not to lose your nerve and start wasting fuel by braking too early. The faster you go, the less time you spend in the gravity field, and the less speed you will have to kill. The later you brake, the more fuel-efficient it will be. To keep track of how much time you have left, set a maneuver point on the surface. When you start to panic, go into staging view, face the retrograde marker and boost forward. Watch your speed and try to get it gradually lower. A good rule of thumb is to regulate it so you have 1 m/s of descent speed for every 100 meters of altitude.
When you almost touch the ground, kill your engine (if it's on while you separate, it will stay on after separation, fly around uncontrolled and possibly hit and damage your probe), release the satellite and maneuver it to the ground using monopropellant.
Now that you have tasted Mun soil, the next step is obvious: bring a Kerbal to the Mun and back.
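The 1 m/s per 100 m rule of thumb can be written as a tiny guard function (a hypothetical helper for illustration only; KSP itself exposes nothing like this):

```python
def target_descent_speed(altitude_m):
    """Rule of thumb from the tutorial: allow about 1 m/s of descent
    speed for every 100 m of altitude above the surface."""
    return altitude_m / 100.0

def should_brake(altitude_m, descent_speed_m_s):
    """Fire the engine when falling faster than the rule allows."""
    return descent_speed_m_s > target_descent_speed(altitude_m)

# At 5,000 m the rule allows ~50 m/s of descent:
assert target_descent_speed(5000) == 50.0
assert should_brake(5000, 120)       # falling too fast -> brake
assert not should_brake(5000, 30)    # within margin -> keep coasting
```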
<urn:uuid:a4e7347f-5d4c-4c0f-9b36-ef3f3e44fb7e>
CC-MAIN-2016-26
http://wiki.kerbalspaceprogram.com/wiki/Tutorial:_Luna_9
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.56/warc/CC-MAIN-20160624154955-00074-ip-10-164-35-72.ec2.internal.warc.gz
en
0.954786
1,233
2.890625
3
The Battle Of Lake Erie
“With half the western world at stake, See Perry on the middle lake.” —Nineteenth-century ballad
February 1976 | Volume 27, Issue 2
The destruction on the decks of the Lawrence was appalling. The air was filled with iron and great jagged splinters of wood, and the wounded tottered below faster than Usher Parsons, the surgeon, could treat them. “It seemed,” he said, “as though heaven and earth were at loggerheads.” John Brooks, the affable and popular lieutenant of marines and the handsomest man in the fleet, had his hip carried away by a cannonball and lay on the deck in agony, begging for a pistol with which to kill himself. Lying next to him, Samuel Hambleton, the purser, who was also wounded, took a verbal disposition of his will before the lieutenant died. The wounded crawled away to hide, but there was no safe, stout corner in the hastily built brig. Parsons was helping a midshipman to his feet after dressing a wound in his arm when the boy was torn out of his hands by a shot that smashed through the hull. Five cannonballs passed through the cabin where he was working. Blood spilled on the deck faster than men could throw sand on it, and sailors slipped and fell as they strained at the guns. The hammocks were shot apart, and the scraps of cloth that filled them danced in the smoky air like snowflakes. They settled on the bloody head of Lieutenant John Yarnall, Perry’s second-in-command, and gave him the appearance of a huge owl as he kept the guns manned and working. Spars and rigging tumbled down from aloft, round shot hulled the ship again and again, men fell dead and were clawed apart by canister; and through it all the ship’s dog, a small black spaniel, wailed and keened. Courage takes strange forms.
It is said that Perry suffered a psychopathic fear of cows and would splash across a muddy road to avoid going near one of the innocuous beasts; but here he was, in the center of and bearing full responsibility for what was undoubtedly the worst place on earth at the moment, and he was utterly composed. An hour and a half into the chaotic afternoon he appeared at the skylight over the sickbay and calmly asked Parsons to spare him one of his assistants. He returned six times and finally, with all the assistants gone, asked if there were any wounded who could pull a rope. A few men actually dragged themselves back to the deck. But it was no use. By 2:30 P.M., after an almost unbelievable defense, there was not a gun working on the Lawrence, and 80 per cent of her crew were down. And off out of range the Niagara still stood undamaged; Parsons says that many of the wounded cursed her in their last words.
Nobody will ever know what was going through Jesse Duncan Elliott’s mind as he watched his sister ship get hammered into a listing ruin. He was some years older than Perry and felt that he should have had command of the squadron, and his jealousy may have been such that, like John Paul Jones’s mad ally Captain Landais, he stood back waiting for his superior to be killed so that he could come in at the end of the fight and claim the victory. Much later his apologists would give the insufficient explanation that he was simply obeying orders by keeping the line of battle intact. The Caledonia was a slow sailor, and he was stuck behind her, reluctant to leave his station. Whatever the reason, as the Lawrence’s last gun stopped firing, Elliott did leave the line and pass to windward of the ruined flagship. He was sure that Perry was dead, and it is a pity that there is no clear record of his reactions when Perry clambered up over the side of the Niagara and stood facing him.
On board the Lawrence Perry, miraculously unhurt, had determined that there still was a ship’s boat, also miraculously unhurt. He had hauled down the “Don’t Give up the Ship” battle flag—but not the American flag—and took it with him as he climbed into the boat, leaving Yarnall in command of the ship and the nine men still fit for duty. Thickly banked powder smoke covered him for part of the way as he made for the Niagara , but for most of the fifteen-minute journey the water around him was roiled with musketry and round shot. But Perry made it through unhurt. As he climbed aboard Elliott’s ship he saw, with “unspeakable pain,” Yarnall lower the flag of the Lawrence in surrender. But it did not stay lowered for long, and the British never had a chance to take possession of the ship. Perry exchanged a word or two with Elliott, sent him back in the Lawrence’s dinghy to bring up the gunboats, and then, taking command of the Niagara , steered her toward the Detroit .
Source: http://www.americanheritage.com/content/battle-lake-erie-0?page=8
1996 – The U.S. Virgin Islands Division of Fish and Wildlife petitioned to protect Agave eggersiana and Solanum conocarpum under the Endangered Species Act.

November 16, 1998 – The U.S. Fish and Wildlife Service agreed in a 90-day finding that there was credible information supporting listing and committed to issuing a final finding within nine months as to whether listing these species was warranted.

September 1, 2004 – After the Service failed to act for six years, the Center filed suit.

April 26, 2005 – The lawsuit resulted in a settlement agreement and the Service agreed to submit its final finding by February 2006.

2006 – The Service substantially changed its position, disregarded the opinions of its own experts, and published a 12-month “not warranted” finding for Agave eggersiana and Solanum conocarpum precluding any protection for the species under the Endangered Species Act.

September 9, 2008 – The Center filed suit against the Service once again for its failure to list the plants.

August 19, 2009 – The Center reached a settlement with the Service requiring the agency to take concrete steps toward protecting both plants. The Service agreed to propose a listing rule for Agave eggersiana by September 17, 2010, and to propose a listing rule for Solanum conocarpum by February 15, 2009.

September 21, 2010 – The Service announced that while Agave eggersiana warranted Endangered Species Act protection, that protection was precluded by higher-priority actions. The plant was relegated to the candidate list.

February 18, 2011 – The Service announced that while Solanum conocarpum warranted Endangered Species Act protection, that protection was precluded by higher-priority actions. The plant was relegated to the candidate list to join Agave eggersiana and more than 250 other imperiled species. 
July 12, 2011 – The Center reached a landmark agreement with the Fish and Wildlife Service compelling the agency to move forward in the protection process for 757 species, including Agave eggersiana.

October 21, 2013 – Following lawsuits brought by the Center, the Service proposed Endangered Species Act protection for three rare plants from the U.S. Virgin Islands and Puerto Rico. Egger’s agave, island brittleleaf and Puerto Rico manjack are imminently threatened by land development and have been on a waiting list for federal protection since 1980.

September 8, 2014 – Pursuant to a 2011 agreement with the Center for Biological Diversity, the Service announced Endangered Species Act protection for Egger’s agave, island brittleleaf and Puerto Rico manjack.
Source: http://www.biologicaldiversity.org/species/plants/Virgin_Islands_plants/action_timeline.html
Japan will attempt to staunch the massive amounts of contaminated groundwater flowing into the sea from the crippled Fukushima Daiichi nuclear complex with a giant wall of ice. Japan’s Nuclear Regulatory Agency, the government oversight body created after the ongoing Fukushima crisis revealed previous official watchdogs to be ineffective, has signed off on construction of a network of pipes, pumps and compressors designed to freeze the ground and create a mile-long “ice wall” to block the path of water flowing between surrounding mountains and the Pacific Ocean. The plan, adapted from procedures used to dig tunnels near waterways, has been discussed for over a year. No ground-freezing project of this size has ever been attempted, and there is no real sense of how well it would work. The plan also comes with concerns: What if freezing causes the ground to sink? What if the ice and the ensuing expansion and contraction interrupts or further damages drainage in the reactor buildings? What if a heat wave or heat from the plant causes parts of the wall to melt? And, what if there is a prolonged loss of power to this cooling system? And what happens if (or more like “when”) the water goes around the ice wall (because, as they say, water seeks its own level)? The OK on the ice plan comes days after TEPCO, the nominal owners of Fukushima, started dumping water directly into the Pacific that they said had been diverted around the highly radioactive nuclear plant structures. That water is not completely free of radioactive contamination, but TEPCO has assured area fisherman, who had long opposed the dumping, that the amount of radioactivity in this water is low. What constitutes “low,” both in terms of the amounts in each ton of water and what will accumulate and bio-concentrate in sea life, is a matter of much debate. The latest plan also comes less than a week after another failure of the system designed to decontaminate radioactive water. 
The system, built over a year ago to deal with the tens of thousands of tons of water accumulating in aboveground tanks at the site, cannot remove all radioactivity and has never been fully functional. All of these plans and mitigation scenarios easily go with descriptors like “stopgap” and “too little too late” — and that speaks to a broader point about nuclear power. Fukushima supposedly had backup systems and was said to have protocols to handle all emergencies. Clearly, it did not even have the less-than-adequate fail-safes TEPCO claimed were there, but even if it did, what if that still didn’t prevent disaster? (And, indeed, in the case of at least two of the damaged reactors, it probably would not have.) “Defense in depth” is a catchphrase across the nuclear power industry, and it is meant to imply that backups on top of backups will head off the biggest kinds of disasters (station blackout, loss of coolant accidents, loss of containment, core melt-downs and melt-throughs). There is evidence on the grounds of a number of nuclear plants to contradict these confident predictions, but even beyond the evidence, the question that is not posed, the question that Fukushima indicates can still not be answered, is “OK, nothing can go wrong — but what if something does?”
Source: http://america.aljazeera.com/blogs/scrutineer/2014/5/27/at-fukushima-iceisjustanotherbrickinthewallofdenial.html
Here is a thought provoking documentary on the similarities and differences between the Bible and the Book of Mormon. It is a long program, but well worth the watch. It brings out these important points:
- There is no archaeological evidence to support the location of any city named in the Americas listed in the BoM.
- There is no pottery, and there are no pottery shards, to show evidence of any of the tribes named in the BoM.
- There are no independent historical evidences by surrounding civilizations that any of the tribes listed in the BoM ever existed.
- There have been no coins found from any civilization listed in the BoM.
- There are no evidences of any of the great battles fought in the BoM.
- There are a few historical mistakes listed: horses did not exist in the Americas until westerners brought them from Europe; metallurgy did not exist in the Americas until Europeans brought it; machinery did not exist in the Americas until Europeans brought it; wheat and barley did not exist in the Americas until Europeans brought them; elephants did not exist in the Americas at the time of the civilizations in the BoM.
- The BoM says that Jesus was born in Jerusalem.
- The BoM says that Lehi and his sons, being Jewish, built a new Jewish Temple in the Americas. This could not be so, as they, being Jewish, knew that the only true temple could be in Jerusalem. There is no evidence that this temple ever existed in the Americas.
Source: http://www.rickboyne.com/2008/06/bible-vs-book-of-mormon.html
Overweight teens actually eat fewer calories daily on average than their trimmer counterparts, a new study finds. Among 12- to 14-year-old girls in the study, girls who were very obese ate about 300 fewer calories on average daily than obese girls, and obese girls consumed 110 fewer calories daily than healthy-weight girls. When the researchers looked at calories consumed by 15- to 17-year-old boys, they found that obese boys ate about 220 fewer calories a day than boys who were overweight (but not obese). And overweight boys consumed about 375 fewer calories than healthy-weight boys, the study showed. The findings illustrate the difficulty of losing weight by cutting calories alone, especially when the weight is gained early in life, the researchers said. "For older children and teenagers, increasing involvement in physical activity may be more important to weight and health than is their child’s diet," said study researcher Asheley Cockrell Skinner, an assistant professor of health policy and pediatrics at the University of North Carolina at Chapel Hill. "Parents of all children should aim for a healthy diet, but don’t assume that overweight children are eating any worse than their peers," she said. The findings may provide validation for overweight teens facing a frustrating reality: they eat less than their normal-weight peers, yet continue to weigh more. "I think our findings are particularly important from a social perspective," Cockrell Skinner said. "It’s easy for society to make assumptions that kids are eating a lot of junk, which can also imply blame for their obesity, but the research doesn’t bear that out." The findings are published online today (Sept. 10) in the journal Pediatrics.

Eating and obesity

More than a third of children and adolescents are overweight or obese, according to the Centers for Disease Control and Prevention (CDC). In the study, Cockrell Skinner and colleagues analyzed data gathered from 12,650 U.S. 
children during the CDC's National Health and Nutrition Examination Survey between 2001 and 2008. They looked at the number of calories that children reported (for young children, their parents reported calorie intake) consuming daily, based on a detailed, two-day food questionnaire. During a physical exam, researchers noted the children's heights and weights, and used this to calculate their body mass index (BMI). Based on their BMIs, children were considered to be healthy weight, overweight, obese or very obese. Among young children, the researchers were not surprised to find that those who were overweight or obese generally ate more calories daily than healthy-weight children. For example, obese 3- to 5-year-old girls ate an average of 1,670 calories daily, whereas healthy-weight girls consumed 1,578 calories daily. Very obese 6- to 8-year-old boys ate 2,127 calories per day, whereas healthy-weight boys ate 1,978 calories. However, around ages 9 to 11, the pattern turned around — children with higher BMIs ate less than their peers. Several factors contribute to why the change occurs around this age, Cockrell Skinner said. "The body is a complex system, and once a person is overweight, the body tends to want to stay that way," she said. Kids of this age also start to have more control over what they are eating, she said, and may want to eat things similar to their friends. The researchers also found, in line with previous studies, that overweight and obese children tended to be less physically active than healthy-weight kids.

What parents can do

The findings highlight the need to prevent obesity early in life, Cockrell Skinner said. With young children, parents should allow their child to determine when they are full, and not encourage overeating. For weight-loss efforts in older children and teens, "focusing on activity may prove to be a more useful strategy than encouraging caloric restriction," the researchers wrote in their study. 
All parents should aim for their children to have a healthy diet, but not assume that overweight children are eating any worse than their peers, Cockrell Skinner said. "I think the most important thing is that kids become more active," Cockrell Skinner said. "Even in the absence of any weight loss, activity is good for overall health, and cardiovascular health specifically." A sharp reduction in children's calories is not good for their growing, developing bodies, and in addition, such diets aren’t sustainable when a child's peers are eating differently, she said. "Being more active and making healthy food choices are very important to long-term health, and that’s the most important goal," she said. Pass it on: Weight-loss efforts for overweight and obese teens should focus on increasing physical activity.
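The BMI figure the researchers computed from each child's height and weight is simply weight divided by height squared. A minimal sketch of that arithmetic; note the study classified children using CDC age- and sex-specific BMI percentiles, so the fixed cutoffs below are the familiar adult ones, shown only for illustration:

```python
# BMI = weight (kg) / height (m)^2.
def bmi(weight_kg, height_m):
    return weight_kg / height_m**2

# Adult cutoffs only -- children are classified by CDC growth-chart
# percentiles, which this sketch does not implement.
def adult_category(b):
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "healthy weight"
    if b < 30:
        return "overweight"
    return "obese"

b = bmi(70.0, 1.75)
print(round(b, 1), adult_category(b))  # 22.9 healthy weight
```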
Source: http://www.livescience.com/23057-overweight-teens-kids-calories-weight-loss.html
A table lamp contains a simple electrical circuit comprising the conductive route and insulated surrounding parts. The electricity travels via the brass in the plug, along the copper inside the brown ‘live’ wire to the alloys in the electric bulb, and then returns via the blue ‘neutral’ wire. A switch interrupts the circuit. The third prong for the yellow and green ‘Earth’ wire – not always present on a lamp – is a safety route that encourages rogue electricity to travel through it rather than through you if the blue and the brown wires have connected, making a short circuit. Needless to say, understanding how electricity travels is essential in the process of detecting and fixing the fault in any electrical appliance.

You will need:
- Long nose pliers
- A small flat and small crosshead screwdriver
- A multimeter
- Wire cutters and wire strippers
- Electrical tape

1) Before you begin, check it’s not just the bulb that has blown by replacing it with a new one, or switching it with a bulb from a working lamp. Test the wall socket too by plugging in a working appliance.

2) Once you’re sure it’s the lamp at fault, make sure you work in a well-lit area and on a flat surface. Unplug the lamp – never work on an appliance that is plugged in.

3) The problem could be a blown fuse. Open the plug. British plugs are safe, but quite fiddly. Unscrew the top part of the plug (the part with the three prongs) while laying the back of the plug in your other hand. Don’t fully remove any of the screws. Remove the fuse and check it using the multimeter (these cost around £10). The simplest testers to use have two leads with metal probes or clips at the end, and a main body with an indicator needle. Choose the OHM setting on the meter, connect the red probe to either end of the fuse and the black probe to the other. If the fuse is not broken, the meter will register by moving the indicator needle from left to right. Always replace the fuse with one of the same rating.

4) If the fuse is fine, next check the wiring. 
“Bad” wiring is not only incorrect wiring, but also wiring that isn’t tight and secure.

a) Wiring the plug

Check to see that all the wiring is tight and secure. New plugs have instructions on an attached paper – read them first. The wiring is always to the same principle, but all plugs have a cable securing device that differs slightly in some designs. Make sure the white sheathing cable is securely held in the plug, as this will keep the wiring firm if the cable is tugged or tripped over. The brown (live) wire connects to the prong marked “L” with the fuse fitted. The blue wire fits into the prong marked “N” on the plug. If you are rewiring the plug, use wire cutters to ensure that wires are the correct length, as indicated in the instructions, and use wire strippers to expose about 5mm of copper wire. Make sure that all the exposed copper is securely attached under the brass screw on each of the prongs in the plug. Never use kitchen knives or your teeth on bare wires!

b) Wiring the bulb holder

Check the wiring in the bulb holder. Unscrew the fitting and inspect the connections. If in doubt, replace the entire bulb holder with a new one. A metal bulb holder requires a three core, earthed cable. Lamps with plastic fittings need a two core, six amp cable.

c) Replacing the cord

If you see any breaks or exposed wiring, it is advisable to replace the entire cord. Threading a new cable through the inside core of some lamps can be quite tricky, but you can connect a new cord to the end of the old one by joining the exposed wires together after removing the plug – or exposing the wires at a point above any broken part of the cable – and taping them together with electrical tape to create a smooth connection.

5) Lastly, always test for continuity with your multimeter before plugging your lamp back into the socket. Place one tester probe on the brown or blue connector prongs on the plug, and the other probe on one of the spring terminals inside the bulb holder. 
Make sure the probes are not touching any other part of the plug or bulb holder. If the test indicator needle does not swing to the right, move one of the probes to the other prong. Carry out the same procedure for the other prong. If the needle does not swing, check that the switch is switched on. Finally, place both probes on both prongs of the plug. This time, the needle should not swing. This indicates that there is no short circuit. Place a working bulb in the bulb holder and plug the lamp into the wall socket.

Alison Winfield-Chislett is the founder of the Goodlife Centre, an independent practical learning space in Waterloo, London that provides a way for busy, office-bound people to attend evening and weekend beginner workshops in basic DIY, woodwork, upholstery and traditional crafts.

The Live Better Challenge is funded by Unilever; its focus is sustainable living. All content is editorially independent except for pieces labelled advertisement feature.
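Step 3's rule of replacing the fuse "with one of the same rating" presupposes the right rating was fitted in the first place. A common UK rule of thumb, not from the article itself, is to divide the appliance's wattage by the nominal 230 V mains voltage and fit the smallest standard BS 1362 fuse that covers that current:

```python
# Pick the smallest standard UK plug fuse (BS 1362) that covers the
# appliance's current draw. This is a common rule of thumb, not the
# article's own procedure; always follow the maker's rating if given.
STANDARD_FUSES_A = (3, 5, 13)

def fuse_rating(watts, volts=230):
    amps = watts / volts  # current drawn by the appliance
    for rating in STANDARD_FUSES_A:
        if amps <= rating:
            return rating
    raise ValueError("load too large for a plug fuse")

print(fuse_rating(60))    # 60 W table lamp  -> 3
print(fuse_rating(2000))  # 2 kW kettle      -> 13
```

A 60 W lamp draws only about a quarter of an amp, which is why a 3 A fuse is the usual fit.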
Source: http://www.theguardian.com/lifeandstyle/2014/aug/25/how-to-mend-a-faulty-table-lamp
posted on Jan, 24 2004 @ 11:09 PM

At about 12:10am EST Opportunity had a flawless entry and landing on the Mars surface with a 2 to 3g landing, according to NASA. This is a very "light" drop as the unit is rated for landings of up to 40g. Within hours, the lander deployed and began sending back pictures.

Above image was taken by NASA's Odyssey spacecraft, and is a close-up of Opportunity's predicted landing site at a region on Mars called Meridiani.

NASA Press Release...

First Images of Opportunity Site Show Bizarre Landscape
January 25, 2004

NASA's Opportunity rover returned the first pictures of its landing site early today, revealing a surreal, dark landscape unlike any ever seen before. Opportunity relayed the images and other data via NASA's Mars Odyssey orbiter. The data showed that the spacecraft is healthy, said Matt Wallace, mission manager at NASA's Jet Propulsion Laboratory. "Opportunity has touched down in a bizarre, alien landscape," said Dr. Steve Squyres of Cornell University, Ithaca, N.Y., principal investigator for the science instruments on Opportunity and its twin, Spirit. "I'm flabbergasted. I'm astonished. I'm blown away." (more at above link)

[Edited on 25-1-2004 by SkepticOverlord]
Source: http://www.abovetopsecret.com/forum/thread30806/pg1
What is lymphatic filariasis?

Lymphatic filariasis is a parasitic disease that is transmitted to humans, like malaria and yellow fever, via a mosquito bite. However, unlike those acute infections, lymphatic filariasis may not manifest itself for years or decades after the initial infection. Though there are millions of people infected throughout the world, this parasitic disease hasn’t been seen in the U.S. in about 100 years.

Three species of parasites cause lymphatic filariasis: Wuchereria bancrofti, which is the more widely distributed (Asia, Africa, India, South America and some Caribbean islands), and Brugia malayi and B. timori, which are more restricted to parts of Asia. These parasites are transmitted by several species of mosquito (Culex, Anopheles, Aedes and Mansonia), depending on the geographic area.

When the mosquito takes a blood meal on a person, it deposits parasitic larvae onto the skin; the larvae penetrate the bite wound and, in time, develop into adults (females can be up to 100 mm in length) that reside in the lymphatic system of the upper or lower limbs or groin (all species). With W. bancrofti, in human males the adult worms may end up in the lymphatic channels of the spermatic cord. Here the adult male and female worms mate and produce eggs (microfilariae) which circulate in the blood and lymph. The microfilariae only appear in blood at certain times: Wuchereria at night, Brugia during the day.

Most infections are asymptomatic. Any disease present may be due to immune response. If the infection persists, the chronic stages of disease develop, beginning with an inflammatory stage in which lymphedema, orchitis and hydrocele occur. The obstructive stage of the disease is called elephantiasis. In this stage, which may take years to develop, there is a blockage of lymph flow due to masses of worms. Tissue becomes fibrotic and skin thickens. Enlarged legs, arms, mammary glands and genitalia are classic appearances of elephantiasis. 
Diagnosis in the acute stages can be made by finding microfilariae in blood smears. Antigen detection and molecular methods can also be used to diagnose. Microfilariae are not found in persons with elephantiasis. Diethylcarbamazine (DEC) is the drug of choice. The drug kills the microfilariae and some of the adult worms. Later stages of the disease require different treatment. According to the Centers for Disease Control and Prevention (CDC), lymphedema and elephantiasis are not indications for DEC treatment because most people with lymphedema are not actively infected with the filarial parasite. To prevent the lymphedema from getting worse, patients should ask their physician for a referral to a lymphedema therapist so they can be informed about some basic principles of care such as hygiene, exercise and treatment of wounds. Patients with hydrocele may have evidence of active infection, but typically do not improve clinically following treatment with DEC. The treatment for hydrocele is surgery. There is no vaccine to prevent filariasis. Travelers to endemic areas should use mosquito repellent on exposed skin between dusk and dawn.
Source: http://www.theglobaldispatch.com/what-is-lymphatic-filariasis-69166/
NASA's Chandra X-ray Observatory has captured a peculiar event that scientists may never have seen before -- the collision of a dwarf galaxy with a much more massive spiral galaxy. The huge cloud of superheated gas is around 6 million degrees Fahrenheit and located about 60 million light-years from Earth, NASA says. An image combining both X-ray and optical light vividly shows the collision's scene. The Chandra X-ray data, shown in purple, shows the comet-shaped appearance of the gas cloud. This is caused by the dwarf galaxy's motion when it crashed into the larger galaxy, called NGC 1232. Blue and white optical data from the European Southern Observatory’s Very Large Telescope was then layered with the X-ray image, revealing the spiral galaxy. Close to the head of the comet-like shape are several extremely bright areas with very strong X-ray emission that, according to NASA, are believed to mark the formation of powerful stars, sparked by the collision. The X-ray telescope first detected the heat of the collision, which -- due to its extreme temperatures -- only glows in X-ray light. Scientists then began to piece together what exactly had created the superheated ball of gas. If the event is confirmed, it will be the first time such a collision has ever been detected only in X-rays, possibly expanding scientists' understanding of the way galaxies grow from these types of collisions. The collision is expected to continue for around 50 million years, allowing the X-rays to be emitted for tens to hundreds of millions of years after that.
Source: http://www.ibtimes.com/dwarf-galaxy-captured-colliding-large-spiral-galaxy-1386301
Section 5: Slowing Light: Lasers Interacting with Cooled Atoms

So far, we have focused on the motion of atoms—how we damp their thermal motion by atom cooling, how this leads to phase locking of millions of atoms and to the formation of Bose-Einstein condensates. For a moment, however, we will shift our attention to what happens internally within individual atoms. Sodium belongs to the family of alkali atoms, which have a single outermost, or valence, electron that orbits around both the nucleus and other more tightly bound electrons. The valence electron can have only discrete energies, which correspond to the atom's internal energy levels. Excited states of the atom correspond to the electron being promoted to larger orbits around the nucleus as compared to the lowest energy state, the (internal) ground state. These states determine how the atom interacts with light—and which frequencies it will absorb strongly. Under resonant conditions, when light has a frequency that matches the energy difference between two energy levels, very strong interactions between light and atoms can take place.

Figure 11: Internal, quantized energy levels of the atom. Source: © Lene V. Hau.

When we are done with the cooling process, all the cooled atoms are found in the internal ground state, which we call 1 in Figure 11. An atom has other energy levels—for example state 2 corresponds to a slightly higher energy. With all the atoms in state 1, we illuminate the atom cloud with a yellow laser beam. We call this the "coupling" laser; and it has a frequency corresponding to the energy difference between states 2 and 3 (the latter is much higher in energy than either 1 or 2). If the atoms were actually in state 2, they would absorb coupling laser light, but since they are not, no absorption takes place. Rather, with the coupling laser, we manipulate the optical properties of the cloud—its refractive index and opacity. We now send a laser pulse—the "probe" pulse—into the system. 
The probe laser beam has a frequency corresponding roughly to the energy difference between states 1 and 3. It is this probe laser pulse that we slow down. The presence of the coupling laser, and its interaction with the cooled atoms, generates a very strange refractive index for the probe laser pulse. Remember the notion of refractive index: Glass has a refractive index that is a little larger than that of free space (a vacuum). Therefore, light slows down a bit when it passes a window: by roughly 30%. Now we want light to slow down by factors of 10 to 100 million. You might think that we do this by creating a very large refractive index, but this is not at all the case. If it were, we would just create, with our atom cloud, the world's best mirror. The light pulse would reflect and no light would actually enter the cloud. To slow the probe pulse dramatically, we manipulate the refractive index very differently. We make sure its average is very close to its value in free space—so no reflection takes place—and at the same time, we create a rapid variation of the index so it varies very rapidly with the probe laser frequency. A short pulse of light "sniffs out" this variation in the index because a pulse actually contains a small range of frequencies. Each of these frequency components sees a different refractive index and therefore travels at a different velocity. This velocity, that of a continuous beam of one pure frequency, is the phase velocity. The pulse of light is located where all the frequency components are precisely in sync (or, more technically, in phase). In an ordinary medium such as glass, all the components move at practically the same velocity, and the place where they are in sync—the location of the pulse—also travels at that speed. In the strange medium we are dealing with, the place where the components are in sync moves much slower than the phase velocity; and the light pulse slows dramatically. 
The velocity of the pulse is called the "group velocity," because the pulse consists of a group of beams of different frequencies.

Figure 12: Refractive index variation with the frequency of a probe laser pulse. Source: © Reprinted by permission from Macmillan Publishers Ltd: Nature 397, 594-598 (18 February 1999).

Another interesting thing happens. In the absence of the coupling laser beam, the "probe" laser pulse would be completely absorbed because the probe laser is tuned to the energy difference between states 1 and 3, and the atoms start out in state 1 as we discussed above. When the atoms absorb probe photons, they jump from state 1 to state 3; after a brief time, the excited atoms relax by reemitting light, but at random and in all directions. The cloud would glow bright yellow, but all information about the original light pulse would be obliterated. Since we instead first turn the coupling laser on and then send the probe laser pulse in, this absorption is prevented. The two laser beams shift the atoms into a quantum superposition of states 1 and 2, meaning that each atom is in both states at once. State 1 alone would absorb the probe light, and state 2 would absorb the coupling beam, each by moving atoms to state 3, which would then emit light at random. Together, however, the two processes cancel out, like evenly matched competitors in a tug of war—an effect called quantum interference. The superposition state is called a dark state because the atoms in essence cannot see the laser beams (they remain "in the dark"). The atoms appear transparent to the probe beam because they cannot absorb it in the dark state, an effect called "electromagnetically induced transparency." Which superposition is dark—what ratio of states 1 and 2 is needed—varies according to the ratio of light in the coupling and probe beams at each location—more precisely, to the ratio of the electric fields of the probe pulse and coupling laser beam. 
Once the system starts in a dark state (as it does in this case: 100 percent coupling beam and 100 percent state 1), it adjusts to remain dark even when the probe beam lights up. The quantum interference effect is also responsible for the rapid variation of the refractive index that leads to slow light. The light speed can be controlled by simply controlling the coupling laser intensity: the lower the intensity, the steeper the slope, and the lower the light speed. In short, the light speed scales directly with the coupling intensity.
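The link between the steep index slope and the slow pulse is the standard group-velocity relation. In the sketch below, the slope value is an illustrative assumption chosen to land near the ~17 m/s regime reported in the cited Nature paper, not a measured number:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def group_velocity(n, dn_dnu, nu):
    """v_g = c / (n + nu * dn/dnu): the group index is the average index n
    plus a term proportional to how steeply n varies with frequency nu."""
    return C / (n + nu * dn_dnu)

# Average index ~1 (so no reflection), optical frequency ~5.1e14 Hz,
# and a hypothetical slope of 3.5e-8 per Hz of probe detuning:
v_g = group_velocity(n=1.0, dn_dnu=3.5e-8, nu=5.1e14)  # ~17 m/s
```

Halving the coupling intensity steepens the slope and lowers v_g further, which is exactly the control knob described above.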
Posted by Anonymous on Wednesday, February 20, 2013 at 8:59pm.

A 100 L reaction container is charged with 0.724 mol of NOBr, which decomposes at a certain temperature (say between 100 and 150 ºC) according to the following reaction: NOBr(g) ↔ NO(g) + 0.5Br2(g). At equilibrium the bromine concentration is 1.82x10^-3 M. Calculate Kc (in M^0.5). I got 0.067.

At a certain temperature* (probably not 25 ºC), the solubility of silver sulfate, Ag2SO4, is 0.018 mol/L. Calculate its solubility product constant for this temperature. SIG. FIG. (required because the number is small) I got 2.73e-5.

*Solubility product constants are very temperature sensitive. They are generally reported at 25 ºC. Not necessarily using this temperature allows me some flexibility.

At a certain temperature, the solubility of potassium iodate, KIO3, is 53.0 g/L. Calculate its solubility product constant for this temperature.

- Chemistry - DrBob222, Wednesday, February 20, 2013 at 9:56pm
In the first problem, what is (M^0.5)?

- Chemistry - DrBob222, Wednesday, February 20, 2013 at 10:09pm
Ag2SO4 ==> 2Ag^+ + SO4^2-
Ksp = (Ag^+)^2(SO4^2-)
Ksp = (2*0.018)^2*(0.018) = ? Closer to 2.33E-5, I think.

KIO3 ==> K^+ + IO3^-
53.0/214 = about 0.248
Ksp = (K^+)(IO3^-)
Ksp = (0.248)(0.248) = ?

Frankly, I think this is ridiculous. Does anyone use Ksp values for SOLUBLE materials?

- Chemistry - Anonymous, Thursday, February 21, 2013 at 6:31pm
Thanks so much!
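DrBob222's two Ksp calculations can be checked in a few lines. This is only a sketch; the molar mass of KIO3 is taken as 214 g/mol, as in the thread, and the variable names are mine:

```python
# Ag2SO4 -> 2 Ag+ + SO4^2-: for molar solubility s, Ksp = (2s)^2 * s
s_ag2so4 = 0.018                             # mol/L, given
ksp_ag2so4 = (2 * s_ag2so4) ** 2 * s_ag2so4  # ~2.33e-5, as DrBob222 notes

# KIO3 -> K+ + IO3-: convert 53.0 g/L to mol/L first (molar mass ~214 g/mol)
s_kio3 = 53.0 / 214.0    # ~0.248 mol/L
ksp_kio3 = s_kio3 ** 2   # ~6.1e-2
```

The key point in both is the stoichiometry: each mole of Ag2SO4 that dissolves yields two moles of Ag+, hence the (2s)^2 factor.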
Details about Envisioning Landscapes, Making Worlds:

The past decade has witnessed a remarkable resurgence in the intellectual interplay between geography and the humanities in both academic and public circles. The metaphors and concepts of geography now permeate literature, philosophy and the arts. Concepts such as space, place, landscape, mapping and territory have become pervasive as conceptual frameworks and core metaphors in recent publications by humanities scholars and well-known writers.

Envisioning Landscapes, Making Worlds contains over twenty-five contributions from leading scholars who have engaged this vital intellectual project from various perspectives, both inside and outside of the field of geography. The book is divided into four sections representing different modes of examining the depth and complexity of human meaning invested in maps, attached to landscapes, and embedded in the spaces and places of modern life. The topics covered range widely and include interpretations of space, place, and landscape in literature and the visual arts, philosophical reflections on geographical knowledge, cultural imagination in scientific exploration and travel accounts, and expanded geographical understanding through digital and participatory methodologies. The clashing and blending of cultures caused by globalization and the new technologies that profoundly alter human environmental experience suggest new geographical narratives and representations that are explored here by a multidisciplinary group of authors.

This book is essential reading for students, scholars, and interested general readers seeking to understand the new synergies and creative interplay emerging from this broad intellectual engagement with meaning and geographic experience.

Rent Envisioning Landscapes, Making Worlds 1st edition today, or search our site for other textbooks by Stephen Daniels. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Routledge.
A new study suggests that fish oil might be one of the most effective preventative measures against developing Alzheimer's disease for those who aren't genetically inclined to develop the common form of dementia. There is presently no cure for Alzheimer's; it is a progressive disease and eventually leads to death. It is most often diagnosed in those over 65 years of age, and is projected to affect 1 in 85 people globally by 2050. It is presently the sixth leading cause of death in the United States.

Researchers from Rhode Island Hospital studied three groups of adults ages 55-90, utilizing neuropsychological tests and brain magnetic resonance imaging biannually. The participants in the study, all part of the Alzheimer's Disease Neuroimaging Initiative (ADNI), comprised 229 adults with no signs of the disease; 397 who were diagnosed with mild cognitive impairment; and 193 with Alzheimer's. The ADNI study ran from 2003 until 2010.

Results showed that adults who had not displayed any symptoms of Alzheimer's and who took fish oil experienced significantly less decline in cognitive function, and less brain shrinkage, than those who weren't taking the supplement. Cognitive decline was measured using the Alzheimer's Disease Assessment Scale (ADAS-cog) and the Mini Mental State Exam (MMSE).

However, the researchers pointed out that those who are genetically predisposed to developing Alzheimer's, carriers of the APOE (apolipoprotein E) gene, might not be able to metabolize DHA (docosahexaenoic acid), the fatty acid in fish oil thought to promote cognitive benefits. Even so, taking fish oil is suggested regardless, as it might help keep Alzheimer's from being triggered late in life. The most widely available dietary source of DHA comes from cold-water, oily fish, such as salmon, herring, mackerel, anchovies and sardines.
Aside from cognitive benefits, the omega-3 fatty acids in fish oil have been shown to help in preventing heart disease. Other studies have suggested that fish oil might be beneficial to those who suffer from clinical depression, anxiety, cancer, psoriasis and macular degeneration, although these benefits have yet to be proven.
Garlic is no doubt the wonder food. With a plethora of medicinal properties, it is a great food. Legend suggests that garlic may ward off evil spirits, such as vampires. Now scientists are finding that garlic, or a flavour component of the pungent herb, may help ward off carcinogens produced by meat cooked at high temperatures.

Cooking protein-rich foods like meats and eggs at high temperatures releases a chemical called PhIP, a suspected carcinogen. Epidemiological studies have shown that the incidence of breast cancer is higher among women who eat large quantities of meat, although fat and caloric intake and hormone exposure may contribute to this increased risk. Diallyl sulfide (DAS), a flavour component of garlic, has been shown to inhibit the effects of PhIP that, when biologically active, can cause DNA damage or transform substances in the body into carcinogens.

Ronald D. Thomas, Ph.D., and a team of researchers at Florida A&M University in Tallahassee hypothesized that PhIP enhances the metabolism of the enzymes linked to carcinogenesis. They further suggested that the diallyl sulfide derived from garlic might counter this activity.

"We treated human breast epithelial cells with equal amounts of PhIP and DAS separately, and the two together, for periods ranging from three to 24 hours," said Thomas. "PhIP induced expression of the cancer-causing enzyme at every stage, up to 40-fold, while DAS completely inhibited the PhIP enzyme from becoming carcinogenic."

The finding demonstrates for the first time that DAS triggers a gene alteration in PhIP that may play a significant role in preventing cancer, notably breast cancer, induced by PhIP in well-done meats. Thomas noted that no studies have shown a link between cooking vegetables and fruits and PhIP, regardless of the method used.

Source: Florida A&M University
One Ugly Plant!

by Jane Scherer

What do actor Tom Cruise and the desert plant Welwitschia mirabilis have in common? Well, one thing's for certain. . . it isn't good looks. Remember the greenhouse scene in the movie Minority Report? Yep, that wasn't a creeping spider or a pile of trash behind the star. It was Welwitschia, one of the ugliest plants alive!

"It is without question the most wonderful plant ever brought to this country, and one of the ugliest," said the Keeper of the Royal Botanic Gardens in England in 1863 when he was given a specimen. Today, botanic gardens around the world grow Welwitschia, but they are infants compared to the ancient ones growing in the wild.

Welwitschias live in the coastal desert regions of Namibia and Angola, Africa. They get the moisture they need to survive from fog rolling in from the ocean. Carbon-14 dating has placed the age of two of the plants at 1,500-years-plus!

"It's a fascinating plant because it is so bizarre," says Judy Jernstedt, a plant morphologist at the University of California who has traveled halfway around the world to see them. "Basically, Welwitschia has only two ratty-looking leaves that last hundreds if not thousands of years."

The stem of an adult plant is a look-alike for an upside-down traffic cone. From it, two long, straplike leaves grow and grow and never fall off. As the centuries pass, the desert winds whip, shred, and tangle them into a shoulder-high mass of twisted ribbons. An African name for the plant says it all: "long-haired thing."

Named after the explorer Friedrich Welwitsch, Welwitschia bears small cones instead of flowers. Its male and female organs are separated. Where in the scheme of plants does it belong? And there is another mystery. Desert plants grow with little or no water. They can't seal their tissues completely to hold what little there is, because they need to take in carbon dioxide for photosynthesis. As a result, most have no leaves, or tiny ones.
Welwitschia's leaves spread a quarter of a meter and release a liter of water a day! Botanists think it must come from the plant's collection of soil moisture.

A female plant produces some 20,000 seeds each year. In a greenhouse they germinate freely, but in the desert 90 percent of them mold. The 10 percent that survive send down long taproots in just a few weeks.

"Welwitschia combines some traits of gymnosperms such as conifers and also traits of flowering plants, but it still isn't clear to plant biologists exactly where it fits into the emerging picture of plant evolution," says Jernstedt.

- carbon-14 dating: Determining the age of an ancient specimen by the amount of carbon-14 it contains.
- morphologist: A biologist who deals with the form and the structure of organisms, without consideration of function.

- How does the Welwitschia mirabilis gather water? [anno: The plant gathers water from the fog that rolls in off the ocean and probably through the soil.]
- Since the plant lives in a coastal desert region, what in the plant's daily behavior seems odd? [anno: It seems odd that the plant releases a liter of water a day from its leaves.]
- The Welwitschia mirabilis has adapted to live in a hot climate, and it is efficient at collecting moisture. How might a plant adapt to a climate where there is a lot of rain? How might a plant adapt to a region where there is almost no water? Design a plant that has adapted to a climate where there is less sunlight or water than average. What special features would you give your plant to help it survive? Draw a picture of your plant, and label its parts. Include explanations of special features that help your plant adapt and carry out photosynthesis. [anno: Answers will vary. Students should draw a picture of a plant and include labels of various plant features. Drawings should also have explanations of how different features are adapted to a particular environment and help the plant carry out photosynthesis.]
Biological Sciences Division

Researchers Develop, Improve and Enhance Technologies for Rapid Analysis of Complex Samples

FAIMS/IMS is meeting the proteomics throughput challenge

Results: Scientists at Pacific Northwest National Laboratory can now analyze complex biochemical and other samples in minutes, opening doors to scientific study and practical applications in areas that were hamstrung by long waits for results. By coupling FAIMS (Field Asymmetric waveform Ion Mobility Spectrometry) and conventional Ion Mobility Spectrometry (IMS), they've developed a broadly useful new technology capable of quickly separating, identifying and quantifying the components of complex mixtures. Analyses can potentially be done in seconds to minutes rather than hours to tens of hours.

Why it matters: The speed and separation power of FAIMS/IMS (in conjunction with mass spectrometry, MS) create a new realm of research opportunities in areas such as proteomics, metabolomics and characterization of natural fuels and organic matter in soil. The technology could also help determine the health risks of low-level radiation exposure, understand natural bioremediation processes to enable their control, and detect explosives and chemical or biological warfare agents.

Technologies for separating and characterizing ions based on their mobility in gases have been around for more than three decades. Conventional IMS, which distinguishes ions by absolute mobility (a quantity that depends on the collision cross sections of ions with molecules of a buffer gas, such as air or nitrogen), has been used since 1970. The more recent FAIMS technique separates ions by the difference between their mobility in strong and weak electric fields.

Ultra-high sensitivity Field Asymmetric waveform Ion Mobility Spectrometry (FAIMS/IMS) provides an effective separation power equivalent to liquid chromatography, but 100-fold faster. The combination was greatly advanced using PNNL ion funnel technology.
Advantages of the FAIMS/IMS combination are speed (analyses require seconds to minutes instead of hours to tens of hours for condensed-phase alternatives such as liquid chromatographic methods at comparable separation power) and the capability to characterize ion structures. Coupling IMS and FAIMS to electrospray ionization mass spectrometry (ESI-MS) has greatly expanded their utility, enabling new applications in biomedical and nanomaterials research by allowing analyses of more complex mixtures.

"FAIMS is the first but likely not the only example of a nonlinear IMS method. Combining various nonlinear and linear IMS phenomena will allow further novel approaches to analytical separations," notes Alex Shvartsburg, one of the lead scientists for the FAIMS/IMS.

Methods: FAIMS/IMS/MS is used at PNNL for high-throughput proteomic analyses and conformational characterization of proteins. The results of FAIMS/IMS development and application have been published in leading scientific journals. In particular, PNNL scientists have:

- Designed and constructed the most sensitive ESI/IMS/time-of-flight (TOF) MS instrument to date by using electrodynamic ion funnels at both ends of the IMS drift tube for effective ion focusing and accumulation at the ESI/IMS interface and near-perfect ion transmission from IMS to TOF MS.
- Developed the first ESI/FAIMS/IMS/TOF MS platform and demonstrated its utility for high-throughput analyses of complex mixtures and characterization of macromolecular conformations and protein folding.
- Established a comprehensive FAIMS modeling capability and used it to guide the development of advanced FAIMS technology, including a new planar FAIMS design achieving record resolving power, the higher-order differential IMS (HODIMS) concept, and a slit-aperture interface for efficient coupling of planar FAIMS to MS and IMS/MS stages.
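In conventional IMS, the measured mobility is tied to the ion-neutral collision cross section through the Mason-Schamp equation. The sketch below is illustrative only; the example ion mass, cross section, and gas conditions are assumed numbers, not parameters of the PNNL instruments:

```python
import math

E = 1.602176634e-19     # elementary charge, C
KB = 1.380649e-23       # Boltzmann constant, J/K
DA = 1.66053906660e-27  # dalton, kg

def mobility(z, m_ion, m_gas, temp_k, n_density, ccs_m2):
    """Mason-Schamp mobility K (m^2 V^-1 s^-1): charge state z,
    ion/gas masses in Da, gas number density in m^-3, cross section in m^2."""
    mu = (m_ion * m_gas) / (m_ion + m_gas) * DA  # reduced mass, kg
    return (3 * z * E / (16 * n_density)) * math.sqrt(
        2 * math.pi / (mu * KB * temp_k)) / ccs_m2

# A 500 Da singly charged peptide ion in N2 at 300 K and 1 atm,
# with an assumed 250 square-angstrom collision cross section:
n = 101325 / (KB * 300)  # ideal-gas number density, m^-3
k0 = mobility(1, 500.0, 28.0, 300, n, 250e-20)  # on the order of 1 cm^2/V/s
```

The inverse dependence on cross section is what lets a drift-time measurement report on ion structure, the capability highlighted above.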
Acknowledgments: The research team includes Keqi Tang, Alex Shvartsburg, David Prior, Mikhail Belov, Erin Baker, Brian Clowers, and Dick Smith, all at PNNL. Portions of this work were supported by the PNNL Biomolecular Systems Initiative and the National Institutes of Health National Center for Research Resources. The work was done at the W.R. Wiley Environmental Molecular Sciences Laboratory, a U.S. Department of Energy user facility at PNNL.

Shvartsburg AA, F Li, K Tang, and RD Smith. 2007. "Distortion of ion structures by field asymmetric waveform ion mobility spectrometry." Anal. Chem. 79(4):1523.

Belov ME, MA Buschbach, DC Prior, K Tang, and RD Smith. 2007. "Multiplexed ion mobility spectrometry-orthogonal time-of-flight mass spectrometry." Anal. Chem. 79(6):2451.

Shvartsburg AA, F Li, K Tang, and RD Smith. 2006. "Characterizing the structures and folding of free proteins using 2-D gas-phase separations: observation of multiple unfolded conformers." Anal. Chem. 78(10):3304.

Shvartsburg AA, F Li, K Tang, and RD Smith. 2006. "High-resolution field asymmetric waveform ion mobility spectrometry using new planar geometry analyzers." Anal. Chem. 78(11):3706.

Shvartsburg AA, T Bryskiewicz, R Purves, K Tang, R Guevremont, and RD Smith. 2006. "Field asymmetric waveform ion mobility spectrometry studies of proteins: dipole alignment in ion mobility spectrometry?" J. Phys. Chem. B 110(43):21966.

Shvartsburg AA, SV Mashkevich, and RD Smith. 2006. "Feasibility of higher-order differential ion mobility separations using new asymmetric waveforms." J. Phys. Chem. A 110(8):2663.

Tang K, F Li, AA Shvartsburg, EF Strittmatter, and RD Smith. 2005. "Two-dimensional gas-phase separations coupled to mass spectrometry for analysis of complex mixtures." Anal. Chem. 77(19):6381.

Tang K, AA Shvartsburg, H Lee, DC Prior, MA Buschbach, F Li, AV Tolmachev, GA Anderson, and RD Smith. 2005. "High-Sensitivity Ion Mobility Spectrometry/Mass Spectrometry Using Electrodynamic Ion Funnel Interfaces." Anal. Chem. 77(10):3330.
The Convention of 1836 was held at the Texas capitol located at Washington-on-the-Brazos on March 2nd. It ended without the delegates' knowledge of the defeat Texas had suffered during the Battle of the Alamo. As a result, a call went out for individuals to march to San Antonio and relieve those serving at the Alamo.

General Sam Houston had impressed upon the convention delegates to remain in the capitol and continue their work to create Texas's constitution. He was now the sole commander of the Texian troops and made his way to Gonzales to meet up with the 400 volunteers who awaited his arrival prior to departing for San Antonio.

Within hours of Houston's arrival in Gonzales, two men entered the camp and brought him news regarding the Alamo. After learning all Texians had been slain, Houston feared a panic would soon erupt, so he had the two men arrested and charged with being spies. They were later released when Susannah Dickinson, the Alamo's only adult Anglo survivor, reached Gonzales and confirmed the report Houston had previously received. Houston now ordered the area to be evacuated and the army to retreat. The Runaway Scrape which followed sent the new government and most of the Texians east.

Though the Texians had taken out a good number of Mexicans at the Alamo, the troops led by General Antonio López de Santa Anna still outnumbered them by six to one. Believing the Alamo outcome capable of quelling future resistance toward his troops, Santa Anna pressed on towards his next target. Little did he know, the news had exactly the opposite effect. As word spread of the Alamo's fall, volunteers came out of the woodwork, causing Houston's army to swell in number. When Santa Anna learned of Houston's retreat, he knew he must move quickly or else Houston would be able to muster a larger army. Thus, he divided the Mexican soldiers into three different groups.
The first group numbered approximately 1,000 and was sent southwards to restore order in the area's towns and villages. Another 800 troops traveled northward towards present-day Bastrop in an effort to stop Houston's army during their eastward retreat. Santa Anna himself led the remaining 700 men and later joined up with the troops headed north.

Believing the Texians now to be cornered, Santa Anna chose to give his troops a short respite. They made camp on April 19th and would rest two days, then attack on April 22nd. Sensing Houston was nearby, Santa Anna sent out patrols to find him. These patrols stumbled upon their Texian counterparts at New Washington. Soon the Mexicans came under attack by the Twin Sisters, the Texians' two cannons, and beat a hasty retreat without discovering the exact whereabouts of Houston's troops.

As the sun set over southeast Texas on April 20, 1836, no one could have envisioned the dramatic event on the horizon which would later fill the history books. When dawn broke on the 21st, General Sam Houston and his rag-tag army were camped at the mouth of the San Jacinto River. Approximately 900 Texians now prepared to attack Santa Anna's troops.

During the morning hours of April 21st, General Houston conducted a council of war. The majority of officers in attendance voted to wait for Santa Anna to attack. By doing so, they felt it would help leverage their position. After receiving their feedback, Houston made his decision to attack and revealed it to his officers that afternoon.

Somewhere around 4:30 p.m., Santa Anna's men were camped near Lynchburg Ferry. Flushed with victory from the Alamo battle, the Mexicans had become lax with security and failed to post sentries. While enjoying their afternoon siesta, the Mexican troops were suddenly awakened as the Texian soldiers arrived, shouting "Remember the Alamo!" and "Remember Goliad!" as their battle cry. The Texians needed no more than 18 minutes to subdue the Mexicans.
In the process, General Houston had two horses shot out from under him and received a bullet in the ankle, with nine Texians killed. On the Mexican side, 630 soldiers died and 700 surrendered.

The following day, several Mexicans were captured and brought to Sam Houston. Unknown to him at the time, among those captured was Santa Anna himself. Dressed in the uniform of a common foot soldier, the general's identity was revealed when numerous Mexican prisoners began to shout, "El Presidente! El Presidente!" Santa Anna was held as a prisoner of war and signed a peace treaty three weeks later. The Mexican army was forced to leave the region and the Republic of Texas became an independent nation.

After his capture, Santa Anna told Houston, "That man may consider himself born to no common destiny who has conquered the Napoleon of the West. And now it remains for him to be generous to the vanquished." Houston looked at the captured leader and replied, "You should have remembered that at the Alamo!"

The Mexican defeat at the Battle of San Jacinto eventually resulted in the loss of approximately one million square miles of land from Mexican control. The subsequent annexation of Texas by the United States added the land area which later became the states of Texas, New Mexico, Nevada, Utah, Arizona and California, along with a portion of Kansas, Oklahoma, Colorado and Wyoming.

The ripple effect of the Battle of San Jacinto radiated out far beyond the border of the United States. The birth of the new nation, now known as the Republic of Texas, caught the eyes of the powerful nations in Europe and helped to create the distinctive mélange of American culture. Texas would remain a sovereign nation for 10 years prior to being annexed by the United States in 1845. With the annexation, the border of the United States now stretched from coast to coast.
Following the annexation of Texas during President Polk's administration, the eyes of Europe and the United States centered on Latin America. Soon the US and Great Britain were involved in the political and commercial growth of the region, leading up to the Clayton-Bulwer Treaty in 1850. The two nations composed the treaty in an effort to ensure a balance of power, anticipating the creation of the Inter-Oceanic Nicaraguan Canal – which was never built.

Texas had paid a high price to claim victory at the Battle of San Jacinto. The 13-day siege which was the Battle of the Alamo took place between February 23 and March 6, 1836 and resulted in the deaths of all the Texas defenders, among them William Travis, Jim Bowie and Davy Crockett. Though the loss of life was immense, those who fought and died at the Alamo bought Sam Houston the time he needed to prepare for his confrontation with Santa Anna. The conflict at San Jacinto was one of the most decisive battles in the history of the United States and, indeed, the Western world.
FLOODING MIGHT HELP LOWER GAS EMISSION FROM WETLANDS

COLUMBUS, Ohio – River floods and storms that send water surging through swamps and marshes near rivers and coastal areas might cut in half the average greenhouse gas emissions from those affected wetlands, according to recent research at Ohio State University.

A study suggests that pulses of water through wetlands result in lower average emissions of greenhouse gases over the course of the year compared to the emissions from wetlands that receive a steady flow of water. The study compared the emission of methane from wetlands under two different conditions, one with a pulsing hydrology system designed to resemble river flooding and one with a steady, low flow of water. The research showed that in areas of deeper water within the wetlands, methane gas fluxes were about twice as high in steady-flow systems as they were in pulsing systems. Methane emissions from edge zones, which are sometimes dry, were less affected by the different types of conditions.

Methane is the major component of natural gas and is a greenhouse gas associated with global warming. While the Environmental Protection Agency estimates that human activities are responsible for about 60 percent of methane emissions worldwide, wetlands are among the natural sources. Bacteria that produce methane during the decay of organic material cause wetlands to release the gas into the atmosphere.

The study by Ohio State University scientists is part of ongoing research comparing pulsing vs. steady-flow conditions in two experimental wetlands on the Columbus campus. “Pulsing refers to a number of different conditions in wetlands – river pulses that happen on a seasonal basis, two-per-day coastal tides, and the rare but huge ones, like hurricanes or tsunamis,” said William Mitsch, the study’s senior author and director of the Wilma H. Schiermeier Olentangy River Wetland Research Park at Ohio State.
“Our point is that the healthiest systems and the ones with the lowest emissions of greenhouse gases are those that have these pulses and that are able to adapt to the pulses.” The research was published in a recent issue of the journal Wetlands.

Often called the “kidneys” of the environment, wetlands act as buffer zones between land and waterways. They also act as sinks – wetlands filter out chemicals in water that runs off from farm fields, roads, parking lots and other surfaces, and hold on to them for years.

The study examined methane fluxes over a two-year period during which researchers created two different kinds of conditions in two 2.5-acre experimental wetlands. In 2004, scientists used pumps to deliver monthly pulses to create conditions in the wetlands resembling natural marshes flooded with river water. In 2005, researchers pumped approximately the same amount of water but maintained a constant flow of water through the wetlands to mimic less dynamic hydrologic conditions. In addition to methane emissions, the study also investigated other processes such as denitrification, sedimentation, and aquatic productivity.

The pulsing hydrology experiment was maintained and methane levels were measured approximately twice monthly over the two study years by Mitsch, also an environment and natural resources professor at the Olentangy River Wetland Research Park, and study co-author Anne Altor, a former Ohio State graduate student who is now a consultant in Indianapolis.

During both years, more methane was emitted during the summer than during other seasons in all portions of the wetlands, with emissions about four times higher during summer in the edge zones. Consistently wet areas released more gases in the spring than did edge zones under both conditions. Methane is composed of carbon and hydrogen, and its emissions are expressed in terms of the amount of carbon released into the atmosphere.
The emissions were at their highest during the summer of the steady-flow year, when the amount of methane released from the deepest part of the wetlands averaged 18.5 milligrams of carbon per square meter of wetland surface per hour. With these wetlands covering about 5 acres, the emissions amounted to an estimated 20 pounds of carbon per day. That level was twice as high as the summertime methane emissions measured from the deepest area of the wetlands during the year of pulsing conditions. The average levels of methane emissions in the deepest water of the wetlands over the course of the study were 6 pounds of carbon per day in the pulsing year and almost 12 pounds of carbon per day during the steady-flow year. The researchers suggested that slightly warmer soil temperatures and less fluctuation in water levels during the steady-flow year created conditions that promoted the production of methane. A simultaneous study of carbon collection in the wetlands showed that the different water conditions had no significant effect on how much carbon was stored by the wetlands. Many experts suggest that the benefits of wetlands’ carbon storage capacity offset any damage resulting from their methane emissions. Mitsch noted that pulses from storms not only help dissipate one negative effect of wetlands, but also serve as a reminder of how wetlands function to absorb the surge. “If we didn’t have salt marshes and mangroves in subtropical and tropical coastal areas of the United States, it’s safe to say these current storms would have even more damaging effects,” he said. “When you lose wetlands, you’ve lost a place for floodwater to go,” Mitsch noted. “Mother Nature is better at withstanding these pulses than we are. Whether it’s a flooding river or a hurricane, no matter what those pulses are, if there’s a natural ecosystem to absorb them, then we as humans would be safer.” This research was supported by the U.S. 
Department of Agriculture, a Payne Grant from the Ohio Agricultural Research and Development Center, the Wilma H. Schiermeier Olentangy River Wetland Research Park, the U.S. Environmental Protection Agency, and a Rhonda and Paul Sipp Wetland Research Award.
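The article's unit conversion can be double-checked with a short calculation. This sketch is not part of the original story; the flux (18.5 mg of carbon per square meter per hour) and the roughly 5 acres of experimental wetlands are taken from the text above, and the constants are standard conversion factors.

```python
# Back-of-the-envelope check of the reported conversion: a peak summer flux of
# 18.5 mg C per square meter per hour over ~5 acres of wetland should come out
# near the quoted "20 pounds of carbon per day".

M2_PER_ACRE = 4046.8564224   # square meters in one acre
MG_PER_LB = 453592.37        # milligrams in one pound

def daily_carbon_lb(flux_mg_m2_h: float, acres: float) -> float:
    """Convert an areal methane-carbon flux to pounds of carbon per day."""
    area_m2 = acres * M2_PER_ACRE
    mg_per_day = flux_mg_m2_h * area_m2 * 24.0
    return mg_per_day / MG_PER_LB

peak_summer = daily_carbon_lb(18.5, 5.0)
print(f"{peak_summer:.1f} lb C/day")  # ~19.8, matching the article's ~20
```

The same function applied to the steady-flow and pulsing annual averages reproduces the roughly 2:1 ratio (12 vs. 6 pounds per day) quoted above.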
<urn:uuid:3167c517-a13a-45f5-98b5-ddabfb99f488>
CC-MAIN-2016-26
http://researchnews.osu.edu/archive/wetpulse.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00054-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962134
1,234
3.546875
4
When we hear the name "Akhenaten", more than likely for many of us, the first image of the pharaoh that comes to mind is one similar to those displayed on the statues below: The consistency with which each of the sculptures above portrays the Pharaoh's face (for example, the way the statues display a personality with a rather long, slender face) obviously plays a role in the image stamped into folks' minds about the pharaoh. The images are consistent in the way the Pharaoh's cheek bones manifest, the aforementioned long and slender face, the eye-shapes, the nose and lips. Such artistic consistency has even enticed anthropologists to link the Pharaoh to skeletal remains that they feel conform to the long facial profile seen in the New Kingdom statues. Such consistency may even suggest that the images capture the pharaoh as he appeared when he was alive, and in light of such matters the question may seem a dismissive one, but it is worth asking: Is this what the living Pharaoh would have really looked like? The following images, however, may question the prospect that the pharaoh sported that characteristic long face that we all recognize. These are also images that generally see relatively less circulation on the Internet when compared to the examples posted above, thereby giving the impression that the latter are the only ones out there about the Pharaoh. Take a look: In these figurines, it is hard to miss the rounder and shorter facial profile of Akhenaten. The feminine-like abdomen and protruding belly, the artistic convention that took hold under the Pharaoh, have been retained for this rendering, but the long chin and slender face are gone, including the pseudo "divine" goatee. Unlike many of the sculptures seen of the Pharaoh, the figurine features skin tone rendering. Some observers have opined that the couple as portrayed here may have been in their later years, i.e., their more mature years.
What the said observers base such an assessment on isn't entirely clear, other than to perhaps assume that in the case of Akhenaten, the rounder facial profile might have something to do with it. This rounder and shorter facial profile is by no means an aberration or an anomaly, as it recurs in other possibly lesser known sculptures of the pharaoh... Note the chin profile on either of the above sculptures; the chin is not as prominent as those found in the examples posted at the top of this page. The faces are also "fuller" and hence, more rounded in their profile. There are more examples of this theme... More "rounder"-faced Akhenaten renderings... At this point, with enough examples given, showing a person with a fairly long and slender face, along with a goatee, on the one hand, and those with a shorter, perhaps more rounded facial profile, while lacking a goatee, on the other hand, one might then ask: which representations are the closest estimation of the then-living pharaoh? One might turn to the whole sum of the individual statues for some clues. One can't help but notice that the examples shown above, displaying what some call an "elongated" facial profile, along with the pseudo [royal/divine] goatee, almost always have some regalia in their hands, to the extent that the post-cranial portions of their bodies are in view. The pseudo-goatee alone on these renderings may suggest a measure of some idealization theme about them. The feminine-like abdomen profile with a somewhat protruding belly noticeably falls in that direction [idealization or symbolism]. That is just about one of the few features that we notice both on statues sporting the long facial structure and pseudo-goatee and on some of those sporting the rounder and/or shorter facial structure without a pseudo-goatee.
Note that only one example among the few sculptures with the beardless shorter face displays this theme, and it is the one that interestingly also displays the seated Pharaoh holding regalia in one of his hands: the statue in "yellow" tone, which was described by one author as that of Akhenaten as "a young man". The figurine of the Pharaoh standing side by side with his wife [claimed to be Nefertiti], and the example showing the pharaoh at least up to his chest area, do not sport the pharaoh holding any regalia in his hands at all, nor do they sport the characteristic 'divine' goatee; they do, however, retain crowns or royal "soft" head gears. Such smaller details may suggest that the examples with longer faces and goatee had been given relatively more idealization, as sanctioned by the Pharaoh himself. The face, while it may in some ways have stayed true to the living figure, had been exaggerated in a caricature sort of way, as done in political cartoons of ruling and political figures of today, resulting in the somewhat unusually long slender face, exacerbated by the fairly prominent chin that is flanked by a pseudo-goatee. To this end, even the ears of the long-faced statues with goatee seem somewhat more exaggerated in their shape and size than the counterparts with shorter and beardless faces. It has been noted earlier that some researchers have been influenced by the statues of Akhenaten sporting the long face and pseudo-goatee to assign certain remains (mummy) to the pharaoh. Edward Wente of the Oriental Institute of the University of Chicago is one such example that comes to mind. He notes... The craniofacial morphology of the mummy labeled Amenhotep III also made it difficult to place in the position he should occupy as son of Thutmose IV. Of the mummies in the collection only the one supposed to be Amenhotep II is a suitable candidate to have been the father of the Amenhotep III mummy.
Over the years Jim became increasingly intrigued by the Amenhotep III mummy, because it is one of the most severely battered of the royal mummies, having suffered postmortem injuries of a very violent nature, more than what tomb-robbers generally inflicted upon the mummies in search of precious items. Since the publication of the x-ray atlas further study of this mummy has been undertaken by Jim and Dr. Fawzia Hussein, Director of the Anthropological Laboratory of the National Research Center, Cairo; and it has been ascertained that the skull is two standard deviations too large for his body, and its craniofacial characteristics are consonant with sculptured portraits of Akhenaten. The advantage of this shuffling of the mummies is that the close clustering of the mummies of Thutmose IV, Smenkhkare, and Tutankhamun is maintained. If as some have proposed, the skeleton from KV 55 is Akhenaten's and not Smenkhkare's, we would then have a nice father-son-grandson succession: Amenhotep III (represented by the Thutmose IV mummy), Akhenaten (the skeleton from KV 55), and Tutankhamun. The unusual mummy labeled Amenhotep III might then be identified with King Aye, Tutankhamun's successor (Scheme 1). A variant of this reconstruction is to take the skeleton from KV 55 as Smenkhkare's rather than Akhenaten's, in which case Smenkhkare and Tutankhamun would be brothers and either grandsons or sons of Amenhotep III, represented by the mummy labeled Thutmose IV (Scheme 2). The weaknesses of either of these two genealogical reconstructions is that the Thutmose IV mummy is one of the better identified ones, with dockets inscribed both on his mummy and coffin. Moreover, the sequence Amenhotep II - Thutmose IV is biologically less probable than the reverse when taking into consideration the craniofacial characteristics of the entire Thutmoside line. 
Finally, the striking similarity of the Amenhotep III mummy to sculptured portraits of Akhenaten is not explicable if this mummy is identified as Aye's. There is a third, more radical solution to this puzzle that deserves consideration (Scheme 3). Bearing in mind that the most probable sequence of the mummies from the viewpoint of inheritance of craniofacial characteristics is the sequence of the mummies labeled Thutmose IV, Amenhotep II, and Amenhotep III (in fact only the Amenhotep II mummy provides a suitable father to the Amenhotep III mummy), we have suggested that the Thutmose IV mummy is indeed Thutmose IV, that the Amenhotep II mummy is that of Amenhotep III, and the Amenhotep III mummy is that of Akhenaten. - Edward F. Wente, Who Was Who Among The Royal Mummies, 1995. Clearly Edward Wente and his research partner James Harris see the mummy identified as "Amenhotep III" as more likely that of Akhenaten. They had been reportedly aided in that assessment by the facial structure of the "Amenhotep III" mummy, which they say is more in line with "sculptured portraits" of Akhenaten, and by: "the skull is two standard deviations too large for his body". Thus, by their own admission, the mummy that has been nominally assigned to Akhenaten did not really invoke the sort of image as that seen on the "long-faced" sculptures of the pharaoh. Instead, from their assessment, the "Amenhotep III" mummy's facial structure conforms more to this image than that assigned to Akhenaten. They also note the uncertainty surrounding the identity of the mummies, making an exception of the "Thutmose IV" mummy as one of the better identified mummies, which they claim has "dockets inscribed both on his mummy and coffin". It underscores the uncertainties surrounding these mummies, making assignments of the mummies directly to the historical ruling figures tenuous, whether from a genealogical standpoint or a craniofacial one.
Now of course, Wente sought to buttress his estimation of the "Amenhotep III" mummy really being that of Akhenaten even further, by adding that "it is one of the most severely battered of the royal mummies, having suffered postmortem injuries of a very violent nature, more than what tomb-robbers generally inflicted upon the mummies in search of precious items". This is supposed to be significant, in that the pharaoh is generally understood to have stoked much hostility within the ruling circles, who were opposed to his brand of monotheism and to his doing away with the long-held neteru system. The question is, what if Wente's estimation was wrong, and the mummy assigned to Akhenaten happens to be right; what then? Well, the answer would be obvious then: that the most popular sculptures of Akhenaten sporting the long face and pseudo-goatee were highly idealized and exaggerated renditions, if not "likenesses", of the then-living pharaoh. In his estimation, along with his research partner James Harris, even if the "Amenhotep III" mummy were positively assigned to Akhenaten, this would be the likely case: Since neither the skeleton from KV 55 nor Tutankhamun are likely biologic sons of the Amenhotep III mummy or of the Amenhotep II mummy, we come to the possible conclusion that Tutankhamun was not the biologic son of a king. Rather, we suggest that Thutmose IV was the paternal grandfather of Tutankhamun, a conclusion consonant with a literal reading of the text on the Oriental Institute astronomical instrument, and that Amenhotep III was his maternal grandfather. In other words, Tutankhamun was the offspring of a marriage between a son of Thutmose IV and a daughter of Amenhotep III. Historians of the New Kingdom may balk at this solution because of the Amarna block stating that Tutankhuaten was a "king's son of his body." Although in the New Kingdom this expression is generally to be taken literally, the Amarna period does witness many departures from the norm.
It has been suggested that the emphasis on solar worship and the position of pharaoh in relation to the solar deity at Amarna received its inspiration from the Old Kingdom. The Old Kingdom is also the time when the title "king's son of his body" was occasionally used in the extended sense of king's grandson. - Edward F. Wente, Who Was Who Among The Royal Mummies, 1995. As a matter of fact, the whole point of the recent project of extracting certain mummy DNA (the "Amenhotep III" mummy, the "KV 55" and "Tutankhamun" mummy) was to confirm the king-to-son relationship between Tutankhamun and either of the former two, under Hawass' watch. As it turns out, so claim the researchers, the KV 55 was indeed the paternal parent of Tut. If this is to be accepted, then it takes us back to the "Amenhotep III" mummy conforming to the long-faced sculptures of Akhenaten more so than the KV 55 remains, which were just recently (February 2010) said to be those of the father of Tutankhamun and hence claimed to belong to Akhenaten. That, in turn, would confirm that the said long-faced renditions were idealized personifications of the pharaoh, while the shorter-faced and goatee-less counterparts were likely intended to serve as portraits of the pharaoh! The female-like abdomen and protruding belly was a signature artistic convention of the Akhenaten era, and doubtlessly one of the most visible idealized aspects of the rulers being artistically celebrated... *As additional information comes to light, modifications or additions may be made to this post. —Personal notes from August 2005. —Edward F. Wente, Who Was Who Among The Royal Mummies, 1995. —Hawass et al., Ancestry and Pathology in King Tutankhamun's Family, 2010. —*Visual aids from various sources.
<urn:uuid:deb916d2-584d-4940-b38f-1beebc3d8425>
CC-MAIN-2016-26
http://exploring-africa.blogspot.com/2010/09/akhenatens-face.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00059-ip-10-164-35-72.ec2.internal.warc.gz
en
0.960987
2,926
2.546875
3
Indonesian Buddhism in the early 1990s was the unstable product of complex accommodations among religious ideology, Chinese ethnic identification, and political policy. Traditionally, Chinese Daoism (or Taoism), Confucianism (agama Konghucu in Indonesian), and Buddhism, as well as the more nativist Buddhist Perbuddhi, all had adherents in the ethnic Chinese community. Following the attempted coup of 1965, any hint of deviation from the monotheistic tenets of the Pancasila was regarded as treason, and the founder of Perbuddhi, Bhikku Ashin Jinarakkhita, proposed that there was a single supreme deity, Sang Hyang Adi Buddha. He sought confirmation for this uniquely Indonesian version of Buddhism in ancient Javanese texts, and even the shape of the Buddhist temple complex at Borobudur in Jawa Tengah Province. In the years following the 1965 abortive coup, when all citizens were required to register with a specific religious denomination or be suspected of communist sympathies, the number of Buddhists swelled; some ninety new monasteries were built. In 1987 there were seven schools of Buddhism affiliated with the Perwalian Umat Buddha Indonesia (Walubi): Theravada, Buddhayana, Mahayana, Tridharma, Kasogatan, Maitreya, and Nichiren. According to a 1987 estimate, there were roughly 2.5 million followers of Buddhism, with 1 million of these affiliated with Theravada Buddhism and roughly 0.5 million belonging to the Buddhayana sect founded by Jinarakkhita. Other estimates placed Buddhists at around only 1 percent of the population, or less than 2 million. Buddhism was gaining in numbers because of the uncertain status of Confucianism. Confucianism was officially tolerated by the government, but since it was regarded as a system of ethical relations rather than a religion per se, it was not represented in the Department of Religious Affairs.
Although various sects approach Buddhist doctrine in different ways, a central feature of the religion is acknowledgment of the Four Noble Truths and the Eightfold Path. The Four Noble Truths involve the recognition that all existence is full of suffering; the origin of suffering is the craving for worldly objects; suffering ceases when craving ceases; and the Eightfold Path leads to enlightenment. The Eightfold Path invokes perfect views, resolve, speech, conduct, livelihood, effort, mindfulness, and concentration. Buddhism originally was an intellectual creed, and only marginally concerned with the supernatural. However, political necessity, and the personal emotional desire to be shielded from the terrors of the world by a powerful deity, have led to modifications. In many ways, Buddhism is highly individualistic, with each man and woman held responsible for his or her own self. Anyone can meditate alone; no temple is required, and no clergy is needed to act as intermediary. The community provides pagodas and temples to inspire the proper frame of mind to assist the worshippers in their devotion and self-awareness. Source: U.S. Library of Congress
<urn:uuid:8df55ad9-2fc7-4a69-a4c6-fd1d07074496>
CC-MAIN-2016-26
http://countrystudies.us/indonesia/40.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396029.85/warc/CC-MAIN-20160624154956-00003-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961785
641
3.234375
3
Here's another easy way for your young child to make wrapping paper.
- Liquid tempera paint or soap paint (see Frosty Soap Painting)
- Heavy plain paper
- Crumple up some newspaper into a ball and dip it in liquid tempera or soap paint.
- Press the newspaper ball lightly all over the heavy plain paper.
- Use two or three different colors if you like.
- Let dry.
Copyright © 1999 by Patricia Kuffner. Excerpted from The Toddler's Busy Book with permission of its publisher, Meadowbrook Press. To order this book visit Meadowbrook Press.
<urn:uuid:e815e150-46a6-469e-af57-d97a231f97f7>
CC-MAIN-2016-26
http://fun.familyeducation.com/childrens-art-activities/painting/37134.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397428.37/warc/CC-MAIN-20160624154957-00013-ip-10-164-35-72.ec2.internal.warc.gz
en
0.856209
140
3.03125
3
Process Controllers Predict the Future

A feedback controller can steer a process variable toward the desired setpoint only if it can somehow predict the future effects of its current control efforts. A model-based controller does so with the help of a mathematical representation of the process’s behavior. A well-tuned PID loop uses an implicit model characterized by the values of the controller’s P (proportional), I (integral), and D (derivative) parameters. These two techniques compute their control efforts differently, but they both rely on the linearity of the process to anticipate how it is going to respond. A process is said to be linear if the process variable increases by a factor of u when the control effort is increased by the same amount. And if two separate sequences of control efforts are added together and applied to a linear process, the resulting values of the process variable will always equal the sum of the values that would have resulted had the two control efforts been applied separately. This predictability gives rise to the Superposition Principle which governs the behavior of all linear processes. The “Superposition Principle” graphic shows how it works in four situations where a computer-based controller with a cycle time of Δt seconds has applied a different sequence of control efforts to the same linear process. In case A, the controller has applied a single impulse with a magnitude of 1 unit (percent, degree, PSI, whatever) and a width of one cycle time (Δt seconds).
The resulting fluctuations in the process variable are known as the process’s impulse response or, in this particular case, its unity impulse response. The process in this example happens to be somewhat sluggish, so its unity impulse response rises and falls relatively slowly as the effects of the impulse wear off. This could represent any number of industrial processes, such as the temperature in a vat after a heating element has been turned on then off again, or the flow rate in a pipe after a valve has been opened then closed. Case B shows how increasing the magnitude of the impulse increases the magnitude of the impulse response but not its general shape. The second impulse is three times as large as the first, so the magnitude of the impulse response has been tripled. In case C, both impulses have been applied to the process, but at different times. The process’s net response after the second impulse equals the sum of the two impulse responses added together point by point. The second impulse response has been effectively “superimposed” on the first, hence the name of the principle that describes this phenomenon. Case D shows that a contiguous sequence of impulses with magnitudes of u(0), u(1), u(2), ... applied to the process at times 0, Δt, 2Δt, ... has the same additive effect. Each new impulse response adds to the impulse responses already in progress, and the magnitude of each is determined by the magnitude of the impulse that caused it. The process’s net response at any time is the sum of all the impulse responses that have been initiated up to that point. Thanks to the Superposition Principle, a controller can predict how a linear process will respond to any sequence of control efforts, not just impulses.
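The Superposition Principle can be illustrated numerically. The sketch below is not code from the article: the first-order lag standing in for the "sluggish" process, and all function names, are my own assumptions; any linear model would behave the same way.

```python
# Numerical illustration of the Superposition Principle for a linear process,
# following cases A-D: each impulse launches a scaled, shifted copy of the
# unity impulse response, and the copies add point by point.

def impulse_response(n=20, a=0.7):
    """Sampled unity impulse response h(0), h(1), ... of a first-order lag
    (an assumed stand-in for the article's sluggish process)."""
    return [(1 - a) * a**k for k in range(n)]

def respond(h, u):
    """Output of the process to input sequence u: impulse u(k) contributes
    u(k) * h shifted by k samples, and all contributions add (case D)."""
    y = [0.0] * (len(u) + len(h) - 1)
    for k, uk in enumerate(u):
        for j, hj in enumerate(h):
            y[k + j] += uk * hj
    return y

h = impulse_response()
u1 = [1.0, 0.0, 0.0]          # case A: a single unit impulse at time 0
u2 = [0.0, 0.0, 3.0]          # case B/C: a later impulse, three times as large
u_sum = [a + b for a, b in zip(u1, u2)]

y1, y2, y_sum = respond(h, u1), respond(h, u2), respond(h, u_sum)
# Superposition: the response to the combined input equals the point-by-point
# sum of the individual responses.
assert all(abs(ys - (p + q)) < 1e-12 for ys, p, q in zip(y_sum, y1, y2))
```

Scaling works the same way: tripling an impulse triples its response without changing its shape, exactly as case B describes.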
It also gives an algorithm for computing the resulting values of the process variable, as shown in “Calculating the Process Response.” This graphic depicts the same four situations, except that the control efforts and the corresponding process responses are represented by their numerical values rather than trend charts. Each data stream has been sampled and recorded once every Δt seconds, hence the expression sampling interval often used to describe the controller’s cycle time Δt. Case D shows the calculations required to compute the values of the process variable y(0), y(1), y(2), ... that would result from an arbitrary sequence of control efforts u(0), u(1), u(2), ... Specifically,

y(0) = u(0)h(0)
y(1) = u(0)h(1) + u(1)h(0)
y(2) = u(0)h(2) + u(1)h(1) + u(2)h(0)
etc.

Each calculation gets successively longer as more and more impulses figure into the result. Fortunately, there’s a convenient way to organize all these multiplication and addition operations, as shown in the “Convolution” table, where two infinitely long “numbers”

H = h(0), h(1), h(2), ...
U = u(0), u(1), u(2), ...

are “multiplied” together to compute

Y = y(0), y(1), y(2), ...

using the familiar long multiplication algorithm, but with data points h(0), h(1), h(2), ... and u(0), u(1), u(2), ... instead of individual digits. This calculation, known as convolution, is actually the mirror image of long multiplication. The multiplication and addition steps are the same, but it does not involve any carry-over from one column to the next. It is typically written as Y = H*U, where “*” is the convolution operator. Convolution is the basis for an entire mathematical discipline known as linear systems analysis. It gives control engineers a powerful tool for analyzing the behavior of linear processes and designing feedback controllers that can predict the future.

Vance Van Doren, Ph.D., P.E., is senior editor for Control Engineering. He can be reached at [email protected].
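The long-multiplication-without-carries procedure amounts to the standard discrete convolution sum, y(n) = Σ u(k)h(n−k). A minimal transcription (my sketch, not the author's code; the sample values are made up for illustration) is:

```python
# Discrete convolution Y = H * U: long multiplication of the two "numbers"
# H and U, column by column, with no carry-over between columns.

def convolve(h, u):
    """Return y(0), y(1), ... where y(n) = sum over k of u(k) * h(n - k)."""
    y = [0.0] * (len(h) + len(u) - 1)
    for n in range(len(y)):
        for k in range(len(u)):
            if 0 <= n - k < len(h):
                y[n] += u[k] * h[n - k]
    return y

h = [1.0, 0.8, 0.5, 0.2]   # sampled impulse response h(0), h(1), ...
u = [2.0, 1.0, 3.0]        # control efforts u(0), u(1), u(2)
y = convolve(h, u)

# The first terms match the expansion given in the text:
assert y[0] == u[0] * h[0]
assert y[1] == u[0] * h[1] + u[1] * h[0]
```

Each impulse contributes a scaled, shifted copy of h, so this is the same computation a model-based controller performs when it predicts the process variable from a planned sequence of control efforts.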
<urn:uuid:8d85c631-e22c-4322-b138-d23f0125ca13>
CC-MAIN-2016-26
http://www.plantengineering.com/single-article/process-controllers-predict-the-future/db45144a26e310197c5ca66c96cbf112.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402699.36/warc/CC-MAIN-20160624155002-00173-ip-10-164-35-72.ec2.internal.warc.gz
en
0.920823
1,436
3.3125
3
Runaway star moving at 5 MILLION mph leaves a cosmic dust trail that is a record-breaking 37 light years long
- The pulsar - a spinning neutron star - is located in the constellation of Carina
- It is thought to be one of the fastest pulsars ever observed by scientists
- Jet trail is ten times as long as distance between sun and its nearest star
- Tail has a corkscrew pattern indicating pulsar is wobbling like a spinning top

A pulsar moving at five million miles per hour has delivered the longest jet of high energy particles astronomers have ever seen. The jet trail stretches 37 light years, or 218 trillion miles long - ten times as long as the distance between our sun and its nearest star. Pulsars are rotating neutron stars formed when the core of a massive star undergoes gravitational collapse at the end of its life. Nasa's Chandra X-ray Observatory has seen a fast-moving pulsar escaping from a supernova remnant while spewing out a record-breaking jet - the longest of any object in the Milky Way (seen here in the bottom right). This pulsar, known as IGR J11014-6103, and its peculiar behaviour can likely be traced back to its birth in the collapse and subsequent explosion of a massive star. The pulsar is located about 60 light-years away from the centre of the supernova remnant SNR MSH 11-61A in the constellation of Carina. Spotted by Nasa's Chandra X-ray Observatory, its speed is between 2.5 million and 5 million mph, making it one of the fastest pulsars ever observed. ‘We've never seen an object that moves this fast and also produces a jet,’ said Lucia Pavan of the University of Geneva in Switzerland. The jet here is seen with X-rays from Chandra. This pulsar - a spinning neutron star - is moving between 2.5 million and 5 million miles per hour.

WHAT ARE NEUTRON STARS?
When the core of a massive star undergoes gravitational collapse at the end of its life, protons and electrons are scrunched together, leaving behind a neutron star.
Neutron stars can fit roughly 1.3 to 2.5 solar masses into a city-sized sphere perhaps 12 miles across. Matter is packed so tightly that a sugar-cube-sized amount of material would weigh more than 1 billion tonnes. Most known neutron stars belong to a subclass known as pulsars. These relatively young objects rotate extremely rapidly, with some spinning faster than a kitchen blender. They beam radio waves in narrow cones, which periodically sweep across Earth. ‘By comparison, this jet is almost 10 times longer than the distance between the sun and our nearest star.’ As well as its impressive span, it has a distinct corkscrew pattern that suggests the pulsar is wobbling like a spinning top. The pulsar's jet and the pulsar wind nebula are nearly perpendicular to one another, which is baffling scientists. ‘We can see this pulsar is moving directly away from the centre of the supernova remnant based on the shape and direction of the pulsar wind nebula,’ said co-author Pol Bordas, from the University of Tuebingen in Germany. ‘The question is, why is the jet pointing off in this other direction?’ Usually, the spin axis and jets of a pulsar point in the same direction as they are moving, but IGR J11014-6103's spin axis and direction of motion are almost at right angles. ‘With the pulsar moving one way and the jet going another, this gives us clues that exotic physics can occur when some stars collapse,’ said co-author Gerd Puehlhofer, also of the University of Tuebingen. One possibility requires an extremely fast rotation speed for the iron core of the star that exploded. A problem with this scenario is that such fast speeds are not commonly expected to be achievable. The supernova remnant that gave birth to IGR J11014-6103 is elongated from top-right to bottom-left in the image roughly in line with the jet's direction. The strange movements and huge jet could be explained if its parent star's iron core had an extremely fast rotation speed.
However, Nasa said that such fast speeds were not commonly thought to be achievable.
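As a side note, the quoted figure of 218 trillion miles for a 37-light-year jet checks out. This short calculation is mine, not the article's; it uses the standard definition of a light year (the distance light travels in one Julian year).

```python
# Sanity check: 37 light years expressed in miles, using the defined speed of
# light and a Julian year of 365.25 days.

C_M_PER_S = 299_792_458            # speed of light, meters per second
SECONDS_PER_YEAR = 365.25 * 24 * 3600
METERS_PER_MILE = 1609.344

light_year_miles = C_M_PER_S * SECONDS_PER_YEAR / METERS_PER_MILE
jet_trillion_miles = 37 * light_year_miles / 1e12
print(f"{jet_trillion_miles:.0f} trillion miles")  # ~218, as the article says
```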
<urn:uuid:7bbca48c-178f-4eac-85e4-96108a41b6da>
CC-MAIN-2016-26
http://www.dailymail.co.uk/sciencetech/article-2563855/Runaway-star-moving-5-MILLION-mph-leaves-cosmic-dust-trail-record-breaking-37-light-years-long.html?ITO=1490
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397636.15/warc/CC-MAIN-20160624154957-00049-ip-10-164-35-72.ec2.internal.warc.gz
en
0.947069
919
2.984375
3
Across
1. Open-heart surgery in which the rib cage is opened and a section of a blood vessel is grafted from the aorta to the coronary artery to bypass the blocked section of the coronary artery and improve the blood supply to the heart. 5. A very poisonous metallic element that has three allotropic forms. 7. (Islam) The man who leads prayers in a mosque. 11. (prefix) Indicating difference or variation. 12. Date used in reckoning dates before the supposed year Christ was born. 13. A narrow elongated opening or fissure between two symmetrical parts. 14. British artist and writer of nonsense verse (1812-1888). 15. A white metallic element that burns with a brilliant light. 16. (Babylonian) God of storms and wind. 17. Cubes of meat marinated and cooked on a skewer usually with vegetables. 19. Flightless New Zealand birds similar to gallinules. 21. Resinlike substance secreted by certain lac insects. 22. A bachelor's degree in theology. 27. A small cake leavened with yeast. 30. Someone who works (or provides workers) during a strike. 32. Any of numerous local fertility and nature deities worshipped by ancient Semitic peoples. 36. (Akkadian) God of wisdom. 38. Being nine more than ninety. 40. Any of a number of fishes of the family Carangidae. 44. A human limb. 45. A port in southwestern Scotland. 47. The Tibeto-Burman language spoken in the Dali region of Yunnan. 48. In or of the month preceding the present one. 49. Forbidden to profane use especially in South Pacific islands. 50. A flat wing-shaped process or winglike part of an organism.
Down
1. A metal cleat on the bottom front of a horseshoe to prevent slipping. 2. On or toward the lee. 3. Divulge information or secrets. 4. Small goat antelope with small conical horns. 5. The elementary stages of any subject (usually plural). 6. Singing jazz. 7. A republic in the Middle East in western Asia. 8. Some point in the air. 9. A woman hired to suckle a child of someone else. 10. Produced by a manufacturing process. 18. A soft silvery metallic element of the alkali earth group. 20. A Kwa language spoken in Ghana and the Ivory Coast. 23. (astronomy) The angular distance of a celestial point measured westward along the celestial equator from the zenith crossing. 24. (computer science) A computer that is running software that allows users to leave messages and access information of general interest. 25. A bluish-white lustrous metallic element. 26. An amino acid that is found in the central nervous system. 28. Harsh or corrosive in tone. 29. The blood group whose red cells carry both the A and B antigens. 31. A radioactive element of the actinide series. 33. Jordan's port. 34. Of or pertaining to hearing or the ear. 35. (folklore) A corpse that rises at night to drink the blood of the living. 37. Any group or radical of the form RCO- where R is an organic group. 38. A workplace for the conduct of scientific research. 39. (Babylonian) God of wisdom and agriculture and patron of scribes and schools. 41. The products of human creativity. 42. The federal agency that insures residential mortgages. 43. A light touch or stroke. 46. A highly unstable radioactive element (the heaviest of the halogen series).
<urn:uuid:6b786630-2dcd-4787-b6bd-b6138add1c0c>
CC-MAIN-2016-26
http://www.crosswordpuzzlegames.com/puzzles/gt_1868.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399385.17/warc/CC-MAIN-20160624154959-00089-ip-10-164-35-72.ec2.internal.warc.gz
en
0.859657
810
2.59375
3
In research conducted by Sanger, Hux, and Griess (1995), one-third of the school-based speech language pathologists (SLPs) surveyed were found to provide pullout speech language services as the only service delivery option for students with communication impairments. More than a decade later, pullout services remain the predominant service delivery model used across the United States (ASHA, 2010). What is the reason for this long-standing preference for pullout-only service delivery models? Unfortunately, the reason for this preference is not fully understood. Research has identified several administrative barriers and challenges that are believed to be associated with service delivery choices (ASHA, 2010). Large caseloads, excessive workload duties (e.g., paperwork, bus duty), limited planning and collaboration time with teachers, and misperceptions about treatment intensity appear to have the potential to limit service delivery choices and prevent services from being delivered in the least restrictive environment (ASHA, 2010; ASHA, 2002; Roberts, Prizant, & McWilliams, 1995). Clinicians in the field have also encountered teacher and parent resistance and uncertainty about what to do in the classroom to improve communication skills. Despite these barriers and challenges, SLPs are continually being called upon to reassess the appropriateness of traditional pullout-only programs and employ alternative approaches in an inclusive setting. SLPs cannot implement this move away from pullout-only models alone. 
Building-level and district administrators must play an important role in creating and maintaining the systemic change necessary to ensure that appropriate service delivery models are used for students who require speech language services (Achilles, Yates, & Freese, 1991; Beck & Dennis, 1997; Cooper, 1991; Ferguson, 1991; Larson, McKinley, & Boley, 1993; Miller, 1989; Moore-Brown, 1991; Schetz & Billingsley, 1992; Throneburg, Calvert, Sturm, Paramboukas, & Paul, 2000). Service Delivery Models The American Speech Language Hearing Association (ASHA) considers service delivery to be a dynamic concept, and recommends that it change as the student moves through the various stages of therapy (1999). As the student progresses in speech language therapy across time, the service delivery model should be reevaluated and modified to address the unique and changing needs of the student. The “one size fits all” approach to service delivery is not appropriate; therefore, school-based SLPs need to be provided the skills, structure, and support to design and implement a continuum of services to effectively serve students on their caseload in the least restrictive environment (ASHA, 1999). There are several service delivery models that can be used in the schools to provide educationally relevant services to students with communication impairments. The first example, pullout speech language therapy, occurs whenever the SLP works independently and provides small-group or individual services in a setting that is separate from the student’s classroom (e.g., speech therapy room, hallway, etc.). It is important to note that the goals of pullout services do not necessarily coincide with the academic content standards that are established by states (Norris, 1989). These services occur apart from the classroom, teacher, curriculum, and nondisabled peers, and they are often disconnected from the student’s regular daily activities (e.g., lunch, recess, transitions, etc.). 
Services in the pullout therapy room can be made more educationally relevant when intervention is provided on those language underpinnings that negatively impact the student’s progress in the general education curriculum or setting (Ehren, 2000). Materials, textbooks, and concepts from the classroom can also be used in the pullout therapy room in order to address communication goals. In the beginning stages of therapy, nondisabled peer volunteers can also be invited during lunch, recess, or after school to participate in social skill lessons. Despite these efforts to infuse the curriculum and nondisabled peers into pullout therapy, a consistent criticism remains. Pullout therapy alone does not promote skill carryover or generalization (Bellini, Peters, Benner, & Hopf, 2007; Elksnin & Capilouto, 1994; Finn, 2003; Miller, 1989). Speech language services in a therapy room cannot effectively replicate the interactions and activities commonly found in the classroom, which may in turn adversely affect the carryover and generalization of newly learned skills. Speech language services become “decontextualized,” and the student struggles to make connections between what goes on in the therapy room and what needs to occur throughout the rest of the school day (Miller, 1989). Pullout therapy is not the only service delivery option available to SLPs. The second service delivery model that is employed infuses services into the classroom setting and is known by many names. It can be called inclusion or integrated classroom-based, curriculum-based, or classroom-based services, and this model directly impacts the student’s academic and functional performance across educational settings (ASHA, 1999). The SLP provides direct services in the classroom through either co-teaching with the educational staff or leading large or small-group lessons. 
The SLP works with the classroom teacher or aide to select communication skills or strategies that not only benefit the student with the communication impairment but also the rest of the class. The SLP works with the classroom personnel to infuse these strategies or skills into ongoing classroom instruction so that they are carried over throughout the school day and after the SLP leaves the classroom. The SLP models for the teachers and aides the use of the strategy or skill in hopes that the teaching staff will infuse these into classroom instruction. Services of this type are better able to ensure carryover and generalization of newly learned skills across educational settings and communication partners (Bellini et al., 2007; Throneburg et al., 2000; Wilcox, Kouri, & Caswell, 1991). “The goal of introducing alternative models of service delivery [is] not to eliminate pullout services; rather, the goal [is] restriction of the use of pullout services to appropriate cases and the provision of alternative approaches when they best serve students’ needs” (Sanger et al., 1995, p. 80).
<urn:uuid:60328621-6386-4744-8eec-8751f554cddd>
CC-MAIN-2016-26
http://www.speechpathology.com/articles/moving-beyond-pullout-only-service-1671
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397797.77/warc/CC-MAIN-20160624154957-00074-ip-10-164-35-72.ec2.internal.warc.gz
en
0.930166
1,288
2.71875
3
Postoperative wound infections following the preparation of the patient with the usual antiseptics are one of the great trials of surgical practice. Many surgeons feel that the number of infections is too great and investigators have toiled earnestly in the laboratories on the subject of skin disinfection. Various drugs have been studied and recently the use of dyes has been especially recommended, because these substances apparently remain in the field longer than aqueous or alcoholic solutions of the ordinary antiseptics. In this issue of The Journal are two papers on the disinfection of the unbroken skin, and one on the use of antiseptics on mucous membranes. Because of the apparently contradictory statements in these presentations, it may be well to examine carefully the methods used in an attempt to discover the reasons for these discrepancies and to establish the actual clinical value of the germicides in question. Some differences in
<urn:uuid:46996f46-1f09-4fb3-a7d3-f68cb0e7139e>
CC-MAIN-2016-26
http://jama.jamanetwork.com/article.aspx?articleid=259595
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393146.70/warc/CC-MAIN-20160624154953-00201-ip-10-164-35-72.ec2.internal.warc.gz
en
0.935249
179
2.71875
3
Focus on Gender: Women and HIV Even after a quarter century of HIV/AIDS, and despite all of the education programs about how the virus is transmitted and who is vulnerable, many people still regard the disease as affecting mostly gay white men and (usually male) intravenous drug users. One of the most overlooked populations, in everything from education to prevention to treatment, is women. In the U.S. today, depending on where you live, women account for between a quarter and a half of people living with HIV. But few programs are designed with women's specific needs in mind. Worldwide, women are even more vulnerable, and have even fewer options. In this issue, we describe some of the issues involving women who are living with HIV or at risk of infection. In "HIV and Women Around the World," Luis Scaccabarrozzi provides an overview of the epidemic as it affects women in the United States and abroad. His insightful article highlights several specific vulnerabilities women face, from powerlessness to negotiate safer sex practices, to domestic violence, to lack of easily accessible healthcare. The facts are brought to life in a Personal Perspective written by Mary, a brave South African woman living with HIV and stigma and helping other women do the same. Important policy issues are explored in an article by Kimberleigh Smith, while articles by Dr. Mark Brennan and Rosa Bramble Weed examine depression in older HIV-positive women and HIV among immigrant women. Jane Fowler offers a concise listing of some facts and tips for women who are infected or at risk. Finally, one of the thorniest topics for women with HIV is pregnancy. Can an HIV-positive woman have a successful pregnancy and a healthy baby? What about a negative woman whose male partner has the virus? Can a woman with HIV pass the virus to the fetus in her womb? Should she have a vaginal birth, or plan a C-section? What about breastfeeding? 
Vaughn Taylor and Hanna Tessema examine the many complex issues facing pregnant women with HIV and those who are considering having children. And Delia G. shares her deeply personal story of learning first her HIV status and then that she was pregnant, how she coped, and how she went on to build a stable, loving -- and healthy -- family. We hope that this special issue of ACRIA Update will help dispel some myths about women and HIV and offer insights into their special needs. As always, we welcome your thoughts and comments. Daniel Tietz is Editor-in-Chief of ACRIA Update. This article was provided by AIDS Community Research Initiative of America. It is a part of the publication ACRIA Update. Visit ACRIA's website to find out more about their activities, publications and services.
<urn:uuid:71b725d8-8a38-4a03-821b-856c50a05b9e>
CC-MAIN-2016-26
http://www.thebody.com/content/art45499.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00134-ip-10-164-35-72.ec2.internal.warc.gz
en
0.959924
558
3
3
PASADENA, Calif., July 25 (UPI) -- Comet C/2013 A1 Siding Spring won't whiz by Mars for another two-plus months, but NASA is already preparing its Mars orbiters for the flyby. Currently, NASA has two observation craft circling the Red Planet. A third will arrive just a month prior to the arrival of Comet C/2013 A1. "Three expert teams have modeled this comet for NASA and provided forecasts for its flyby of Mars," explained Rich Zurek, chief scientist for the Mars Exploration Program at NASA's Jet Propulsion Laboratory. "The hazard is not an impact of the comet nucleus, but the trail of debris coming from it." Though the risk isn't as great as once thought, even the tiniest pieces of debris -- which will be spewed from the passing comet at a speed of 35 miles per second -- could do serious damage to one of the three orbiters. "Mars will be right at the edge of the debris cloud, so it might encounter some of the particles -- or it might not," said Zurek. NASA will make slight adjustments to the orbiters' paths to minimize the risk of the comet's debris hitting the spacecraft. But the three probes will still be in prime position to capture hopefully impressive footage of the flyby. The comet will pass Mars at one-tenth the closest distance a comet has ever come to Earth.
<urn:uuid:4ae4aff4-1d7b-4077-a464-6d9f03ec46b2>
CC-MAIN-2016-26
http://www.upi.com/Science_News/2014/07/25/NASA-preparing-to-protect-Mars-orbiters-from-comet-close-call/4161406312922/?spt=sec&or=sn
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395992.75/warc/CC-MAIN-20160624154955-00086-ip-10-164-35-72.ec2.internal.warc.gz
en
0.940269
295
2.953125
3
This leads to a startling possibility, because red dwarfs have a characteristic that distinguishes them from stars like the sun: longevity. Small stars, like small dogs, live longer. Our own sun has been boiling away for nearly 5 billion years; it has another 5 billion to go before it starts to shudder and die. But a red dwarf would offer much more time for development, 100 billion years or more, because these dim bulbs are parsimonious with their fuel. If life, and occasionally intelligent life, exists elsewhere, then the most ancient civilizations are surely encamped around the oldest stars; and the oldest that still shine are red dwarfs. Of course, 14 billion years after the big bang, even the most aged of red dwarfs are still teenagers. But if some have planets on which biology bloomed early, that life has a history that is two or three times as long as the span between Earth's earliest microorganisms and the ascent of man. Such an ancient society, with far more time to exploit science, might easily be able to betray its existence. No, we haven't found evidence for such civilizations yet, but if we do, it's conceivable that they developed on a world of which the rock found around Gliese 876 is merely a first example. That overheated planet might be the first signpost of myriad worlds where life could flourish.
<urn:uuid:e9b0c354-a3e2-4dbc-950c-fec0ec1e6e65>
CC-MAIN-2016-26
http://www.taipeitimes.com/News/feat/archives/2005/06/28/2003261275/2
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397797.77/warc/CC-MAIN-20160624154957-00120-ip-10-164-35-72.ec2.internal.warc.gz
en
0.959336
278
3.40625
3
Hiking is an outdoor activity which consists of walking in natural environments, often on hiking trails. Hiking may be broadly grouped into two categories: - Wilderness backpacking involves a multi-day hiking expedition where participants carry the required supplies for overnight stay and two or more days of survival in the wilderness, and camp en route. - Day hiking involves distances of less than a mile up to longer distances that can be covered in a single day. For a day hike along an easy trail little preparation is needed, and any moderately fit person can enjoy it. Families with small children may need more preparation, but a day outdoors is easily possible even with babies and pre-school children. Hiking can often be done near home even if you live in a big city. If you or your family are not used to hiking, staying near home is often the best way to start; getting away is easier, and if something goes wrong or you simply do not enjoy your time, you can go home and do it differently the next time. For some, having a big experience the first time may feel important, but especially for children this is not a good option: they will be fascinated by even the smallest new experiences. If you do not have a wood behind your house, a picnic at some nearby destination with trails and a campfire site may be ideal until you know everybody will be comfortable with more demanding adventures. At many hiking destinations there are easy-to-follow trails, such that knowing how to use a map and compass is not essential (although recommended), and there may be lodges with food and accommodation. Some such hikes offer the possibility to see the wilderness without too much skill and effort. The requirements vary, though. If you are not used to walking a few kilometres, a ten-kilometre mountain hike will certainly be very hard. And on some trails you may find that the trail is anything but easy to follow, or that the creek you have to cross has transformed into a fast-flowing river. 
Always check what to expect. There are long and demanding trails in the wilderness with possibilities for comfortable lodging, but at this end of the scale there may instead be unmanned Spartan shelters or only a spot to put your tent. Wilderness backpacking often assumes you will get along without any infrastructure, even trails, with what you carry and perhaps fish from the streams and berries you pick. And if you need help, you may have to go and fetch it yourself. If you want to feel like returning to days long past or truly immerse yourself in the natural environment, this may be what you should aim for. City folks are usually not accustomed to long walks with heavy packs. Even if you are fit, you should try long walks in hilly terrain before you go on any demanding hike. If aiming for real wilderness or long-distance hiking, you should start with hikes you can interrupt more easily. Ideally you build up your skills and endurance little by little, from year to year, from picnics to long wilderness hikes. If you have to train more quickly, remember to start gently anyway. You should also get acquainted with your equipment: footwear, clothes, backpack, camping stove, food, etcetera. You want to know how to handle your tent in storm and rain and – on demanding hikes – how to repair it with the tools you will have. Footwear also has to get acquainted with you. You need as versatile equipment as possible, to be able to leave as much as possible out, and a simple tool you know well is often more versatile than a complicated one. The backpack will be heavier than ideal in any case. Plan your route so that mishaps do not ruin your trip, and so that you have time for enjoying it. On longer hikes it is usually advisable to have a resting day now and then. Weather is one of the main factors in preparing for any hike; check weather forecasts and ensure you have a good weather window, with lots of time to spare. 
Be aware that weather in mountainous or coastal areas can change dramatically, and adjust your equipment accordingly. Heavy rainfall can cause rapid flooding of rivers. River crossings are one of the main causes of death and injury when hiking. It is almost always best to be prepared to wait for river levels to go down rather than to cross a river in flood. Also be aware of daylight hours. It is never a good idea to be caught out hiking at dusk or at night. Watch your time and don't underestimate the length of the trail. Get advice from other hikers, talk to your local hiking clubs, visit local equipment retailers and outfitters. There are some excellent books available on safety in the mountains as well as guides to weekend or day hikes. Start small and build up experience. See also Appalachian Trail#Prepare, about a demanding long-distance trail. Doing trail sections with two cars Many well-known hiking trails cover long distances, more than many people can tackle on a single trip. One method to do these is to walk in stages, using two vehicles to get between the start and finish points of a day hike. The method is simple once pointed out. You drive with two cars to the end of the trail, parking one of them there. Then everyone gets into the other car and drives to the start of the section you are walking. At the end of the hike you pick up the first parked car and drive back to the start to get the other. If you can use public transport (and perhaps a taxi for getting near the trailhead) you avoid the extra driving and the hassle of handling the cars. Australia and New Zealand - Tramping in New Zealand; New Zealand is a Mecca for hiking, both day hikes and multi-day hikes, with a network of trails and huts to cater for most abilities. The country has a number of Great Walks which offer both private and public accommodation as well as guided hiking. These include the following: - Grande Randonnée, Long distance walking in Europe, pilgrimages - United Kingdom
<urn:uuid:4194d501-c1f2-4da7-9ec6-12fac0263df3>
CC-MAIN-2016-26
https://en.wikivoyage.org/wiki/Hiking
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402699.36/warc/CC-MAIN-20160624155002-00052-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962031
1,233
3.4375
3
I find myself increasingly interested in materials science and how quantum leaps in energy efficiency are made possible by rethinking some of the most basic materials inside a device. In the case of the LCD screen, that component is a thin film called a polarizer. All LCD screens, whether they be on your cell phone, computer or television, require a polarizing film to convert the display's backlight into the image that you see on your screen. Unfortunately, that polarizer is very thick and only allows 50 percent of the total light to shine through. The PolarBrite technology by California-based startup Agoura replaces that polarizer film with a more transparent film that uses a lattice of tiny wires to do the filtering. This simple solution may ease the panic taking place right now in the television industry as a result of a new efficiency law currently under review in California, which would force television manufacturers to make their devices 33 percent more efficient by 2011 and 49 percent more efficient by 2013. The Consumer Electronics Association is vehemently opposed to the law, saying that the exorbitant costs associated with making their televisions compliant would hurt retailers and limit consumer choice. But now they may not have such an easy excuse. The polarizer film developed by Agoura is cost neutral, and simply by allowing more light to pass through to the front of the screen, it means much less power is required to produce the light display. Swapping out the old film for the new wire polarizer would result in a 30 percent energy savings on a typical screen, which takes it just short of the 2011 standard at no cost. For a very detailed explanation about how the technology works check out the full white paper on Wire Grid Polarizers (PDF).
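The arithmetic behind that 30 percent figure can be sketched in a few lines. The wire-grid transmission value below is an assumed illustrative number (the article only states the 50 percent baseline and the 30 percent savings); this is a back-of-the-envelope sketch, not Agoura's actual data:

```python
# Sketch (assumed numbers): for a fixed on-screen brightness, backlight
# power scales inversely with polarizer transmission.

def backlight_power(target_brightness, transmission):
    """Relative power needed to reach target_brightness through a polarizer."""
    return target_brightness / transmission

conventional = backlight_power(1.0, 0.50)   # absorptive film: 50% of light lost
wire_grid = backlight_power(1.0, 0.715)     # hypothetical wire-grid transmission

savings = 1 - wire_grid / conventional
print(f"Energy savings from the swap: {savings:.0%}")  # ~30%
```

Because power scales as 1/transmission, raising transmission from 0.50 to roughly 0.715 cuts backlight power by about 30 percent, which matches the claimed savings.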
<urn:uuid:c8cf5240-dd3d-43e4-9815-1c349ff8fe4c>
CC-MAIN-2016-26
http://www.mnn.com/green-tech/gadgets-electronics/blogs/space-age-polarizer-film-to-cut-tv-energy-use-by-30
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397567.28/warc/CC-MAIN-20160624154957-00149-ip-10-164-35-72.ec2.internal.warc.gz
en
0.945307
354
3.03125
3
The National Oceanic and Atmospheric Administration (NOAA)/ National Ocean Service's (NOS) Center for Operational Oceanographic Products and Services (CO-OPS) manages the National Current Observation Program (NCOP) to collect, analyze, and distribute observations and predictions of currents. The program's goals are to ensure safe, efficient and environmentally sound maritime commerce, and to support environmental needs such as HAZMAT response. The principal product generated by this program is information used to maintain and update the Tidal Current Tables. NOAA and its predecessor agencies have collected information on currents in various ports and harbors, and in the Gulf Stream, since the mid-1800's. The Coast and Geodetic Survey first published tidal current predictions for the use by mariners on the East Coast in 1890 and for those on the West Coast in 1898. By 2002, Tidal Current Tables contained predictions for over 2,700 locations throughout the USA. Most of the data presently in use was collected between 1930 and 1980 when significant resources were dedicated to the program. From the 1960s through the mid-1980s, two NOAA ships (the McARTHUR on the West Coast and Alaska, and the FERREL on the East Coast) and numerous staff oceanographers and technicians were dedicated full-time to the collection, processing, and analysis of tidal current data. These complete comprehensive physical oceanographic surveys measured currents, water levels, water temperatures and salinity, and meteorological data. Many were the first complete physical studies ever conducted on major U.S. estuaries. Due to budget cuts and ship reassignments in the late 1980s, the program was reduced significantly. Since the mid-1990s, the National Current Observation Program has been recognized as fulfilling a vital mission of national interest to both the maritime industry and environmental stewardship. 
As a result, many organizations strongly recommend that it is time for the program's data to be updated. Approximately 70 percent of the stations in the 2001 Tidal Current Tables are over 30 years old. Many of these stations are based on analyses of less than 7 days of data (the data duration is known for 24% of all stations). Channel dredging and changes in the configuration of ports and harbors over the years have significantly altered the physical oceanography of many of the nation's estuaries. Reports from local users indicate that many of NOS's tidal current predictions may be inaccurate. NOS intends to address these deficiencies by rebuilding the program and resampling the currents at every major port and estuary within the next 20 years. The majority of work to deploy, recover, and maintain the program's sensors is likely to be conducted by contractors overseen by NOS staff.
<urn:uuid:b4a00879-aacd-4c47-9ac2-722294e7ad57>
CC-MAIN-2016-26
http://tidesandcurrents.noaa.gov/ncop.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00074-ip-10-164-35-72.ec2.internal.warc.gz
en
0.949667
553
3.34375
3
De Rham Theory (part 2) Continuing from last time, we will now introduce the duality between forms and chains, expressed by the Stokes theorem, which is of fundamental importance in math and physics. Recalling that ∂ represents the boundary and d is the exterior derivative, George Stokes proved the following identity: the integral of dω over a domain equals the integral of ω over the boundary of that domain. We can introduce an inner product between a cycle Ω and a cocycle ω, < Ω , ω >, as the integral of ω over the domain Ω. Stokes' theorem then simply states: < ∂Ω , ω > = < Ω , dω > Why do we state it like this? Because the boundary of a boundary is zero: ∂∂ = 0 implies that dd = 0, as follows: 0 = < ∂∂Ω , ω > = < ∂Ω , dω > = < Ω , ddω > Recall that for boundaries we have an exact sequence (which is also called a chain complex). For the usual 3D space this means: 0 → C3 → C2 → C1 → C0 → 0, with ∂ as the maps. Then we have a De Rham cochain complex (in cochain complexes the arrows point in the reversed direction): 0 → Ω0 → Ω1 → Ω2 → Ω3 → 0, with d as the maps, and from dd = 0 we recognize the usual identities: · Curl of a gradient is zero · Divergence of a curl is zero. Finally we arrive at the De Rham theorem. From chain complexes we extract the Ker/Image homology group: Hp = Zp/Bp. From cochain complexes we get the Ker/Image cohomology group: Hp_DR = {α | dα = 0}/{α | α = dβ}. De Rham's theorem tells us that the two groups are isomorphic: Hp ~ Hp_DR. This means that we can explore the topological properties of the space by looking at the solutions of differential equations on that space. For example, an electric charge generates an electric field around it. The spatial distribution of the electric field can tell us the location of the charge. That is why the second cohomology group H2_DR can be interpreted as an electric charge! In general, cohomology classes have a physical interpretation. Next time we will venture into the related wonderful world of Hodge theory.
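The two 3D identities encoded by dd = 0 (curl of a gradient vanishes, divergence of a curl vanishes) can be sanity-checked numerically. Below is a small pure-Python sketch; the test fields, sample point, step size, and tolerance are all my own illustrative choices, not anything from the post:

```python
# Numerical check of the dd = 0 identities in 3D vector calculus:
# curl(grad f) = 0 and div(curl F) = 0, via central finite differences.
import math

H = 1e-4  # central-difference step

def pderiv(g, p, i):
    """Central-difference partial derivative of scalar function g at p along axis i."""
    hi, lo = list(p), list(p)
    hi[i] += H
    lo[i] -= H
    return (g(hi) - g(lo)) / (2 * H)

def grad(f, p):
    return [pderiv(f, p, i) for i in range(3)]

def curl(F, p):
    # F maps a point to [Fx, Fy, Fz]; d(comp, axis) = ∂F[comp]/∂x[axis]
    d = lambda comp, axis: pderiv(lambda q: F(q)[comp], p, axis)
    return [d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1)]

def div(F, p):
    return sum(pderiv(lambda q: F(q)[i], p, i) for i in range(3))

# Arbitrary smooth scalar and vector test fields
f = lambda p: p[0] ** 2 * math.sin(p[1]) + p[2] ** 3
F = lambda p: [p[0] * p[1], p[1] * p[2], p[2] * p[0]]

p0 = [0.7, -1.3, 2.1]
cg = curl(lambda q: grad(f, q), p0)  # curl of a gradient
dc = div(lambda q: curl(F, q), p0)   # divergence of a curl

assert all(abs(c) < 1e-5 for c in cg) and abs(dc) < 1e-5
print("dd = 0 holds numerically: curl(grad f) ≈ 0, div(curl F) ≈ 0")
```

Central differences keep the truncation error at O(H²), far below the 1e-5 tolerance used here, so the residuals are dominated by rounding noise.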
<urn:uuid:1e0806e1-cef8-4ba6-a3eb-8f415713499f>
CC-MAIN-2016-26
http://fmoldove.blogspot.com/2014/06/de-rham-theory-part-2-intuitive.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392527.68/warc/CC-MAIN-20160624154952-00126-ip-10-164-35-72.ec2.internal.warc.gz
en
0.905988
469
2.703125
3
PUBLIC HEALTH ASSESSMENT ENVIRONMENTAL CONTAMINATION AND OTHER HAZARDS This section presents the contaminants identified at the site and selects which of these contaminants are of potential health concern in each environmental medium. The environmental sampling conducted to date has detected many different contaminants. Certain contaminants in each medium are selected from all contaminants detected at the site in order to focus the public health assessment on contaminants most likely to pose a health risk. Their selection does not necessarily mean that a health threat exists, but only that they will be evaluated further in the assessment. Subsequent sections will evaluate whether individuals have been or could be exposed to the contaminants of concern and will determine whether such exposures have public health significance. The existence of a public health hazard is dependent on the magnitude of contamination in the various environmental media and not the source; it is not our intent to attribute the contamination to particular sources. Contaminants of concern at the site are selected primarily based on a comparison of detected concentrations with comparison (i.e., health guidance) values. Other criteria for selection include concentrations of contaminants on-Property and off-Property and frequency of detection, field data quality, laboratory data quality, sample design, and community health concerns. Comparison values used to select contaminants for further evaluation at the Frontier Fertilizer site include the following: |EMEG||Environmental Media Evaluation Guide| |RMEG||Reference Dose Based Media Evaluation Guide| |MCL||Maximum Contaminant Level| |CREG||Cancer Risk Evaluation Guide| |LTHA||U.S. EPA's Drinking Water Lifetime Health Advisory| EMEGs are media specific values developed by ATSDR to serve as an aid in selecting environmental contaminants that need to be further evaluated for potential health impacts. 
EMEGs are based on non-carcinogenic end-points and do not consider carcinogenic effects. EMEGs are based on an ATSDR Minimal Risk Level (MRL). RMEGs are equivalent to EMEGs, but are derived from a U.S. EPA Reference Dose (RfD) instead of an MRL, according to ATSDR guidance. Both the MRL and the RfD are estimates of daily exposure to a chemical that is unlikely to cause adverse, non-carcinogenic, health effects. MCL's are enforceable standards for contaminants in drinking water, developed by U.S. EPA under the authority of the Safe Drinking Water Act. In addition to health factors, MCL's are also required by law to consider the technological and economic feasibility of removing the contaminant from the water supply. The limit set must be feasible given the best available technology and treatment techniques. MCL's may be used as a comparison value if no other comparison values exist, or if the MCL is the most conservative of all existing comparison values for a particular contaminant. U.S. EPA's Lifetime Drinking Water Advisory (LTHA) defines a concentration in drinking water at which non-cancer adverse health effects would not be expected to occur. In general, if a chemical is known or believed to cause cancer, a comparison value based on the carcinogenic properties of the chemical will be substantially lower than a comparison value based on its non-carcinogenic adverse effects. This is because it is generally assumed that there is no absolutely "safe" exposure levels to carcinogens (known as the no-threshold assumption), whereas for non-carcinogens, there are levels of exposure (i.e., thresholds), below which no adverse effects are expected to occur. Carcinogenic chemicals detected in soil at the site were evaluated and selected for follow-up using Cancer Risk Evaluation Guide (CREG) values for soil. CREGs are media specific values which serve as an aid in selecting contaminants of concern that are potential carcinogens. CREGs are derived from U.S. 
EPA cancer slope factors according to ATSDR guidance (15). Cancer slope factors give an indication of the relative carcinogenic potency of a particular chemical. CREG values represent media concentrations which are thought to be associated with an extra lifetime cancer risk of one-in-a-million.

A. TOXIC CHEMICAL RELEASE INVENTORY (TRI) SEARCH

The Toxic Release Inventory (TRI) maintained by the U.S. EPA contains information on estimated annual releases of toxic chemicals from active industrial facilities from 1987 to present. TRI data can be used to get a general idea of the current environmental emissions occurring in the area surrounding a site, and whether they may be causing additional environmental burden to the community. We searched the TRI for the years 1987 through 1992 (the years for which TRI data were available at the time this public health assessment was written). The TRI data are organized by zip code. We searched for data from the two zip codes existing in the site vicinity, 95616 (Davis) and 95618 (El Macero). No records were found for zip code 95618 for any year.

Only one facility in the City of Davis zip code 95616 reported releases to the TRI during 1987-1992. This facility is located about 3-1/2 miles west of the Frontier Fertilizer Property. The facility reported a total release to land of 150,369 pounds of chemicals in 1987: 132,021 pounds of sodium hydroxide solution and 18,348 pounds of chlorine. The company reported zero releases to land during the years 1988-1992. The company reported a total release to air of 3,000 pounds during 1987-1992, resulting from a release of 750 pounds of chlorine to air each year during the period 1989-1992. No air releases were reported for the years 1987 and 1988. The company reported zero releases to water during the period 1987-1992. Due to the distance of this facility from the Frontier Fertilizer Property, releases as reported in TRI are not expected to impact the community near the site.

B.
ON-PROPERTY CONTAMINATION

CONTAMINANTS IN SOIL ON PROPERTY

Several soil sampling investigations have been conducted at the site by different contractors and agencies since August 1983 (2). The soil samples have been analyzed for various organic and inorganic chemicals. Organic chemicals analyzed for have included EDB, DBCP, other halogenated volatile organics, carbamates, phenoxyacid herbicides, and organophosphate and organochlorine pesticides. The inorganic chemical analytes have included arsenic, barium, cadmium, cobalt, chromium, copper, nickel, lead, selenium, and zinc (2).

Multiple pesticides have been found in soil samples. The principal area where pesticides have been detected is at or near the former pesticide disposal basin. In addition, pesticide residues have been detected near the wash pad near the southern end of the pole barn, the ditch north of Second Street, and the ditch south of Second Street (2). The vast majority of soil samples collected over the years have been from depths of one foot below ground surface or deeper. The limited existing surface soil data are summarized below.

In June 1984, DTSC collected a surface soil sample at the center of the disposal basin (2). Although deeper soil samples contained high levels of EDB and other pesticides, no EDB was detected in the surface sample. Disulfoton and trifluralin were detected, but at levels below health comparison values (2). The detection limits and a description of the quality of the data were not reported in the cited report.

In February 1985, during the NEIC/FBI investigation, 12 soil samples were collected, some apparently at the surface (2). From the literature available for review, it is not known how many surface samples were taken, or the levels present. The NEIC/FBI detection limits for EDB and DBCP exceeded the health comparison values for these contaminants. In a sample collected at six inches below ground surface in May 1985, no pesticides were detected (2).
The cited report did not indicate the detection limits, describe the quality of the data, or indicate who collected and analyzed the samples. Four samples collected in July 1985 at a depth of about six inches below ground surface detected EDB, disulfoton, and parathion (2); the levels were below health comparison values. In February 1987, four shallow soil samples (surface to six inches) were collected by DTSC around the former disposal basin and the pole barn. Several pesticides were detected, including endosulfan, 2,4-D, benefin, methomyl, and carbaryl (2). However, the levels were below health comparison values.

In September 1994, in preparation for widening Second Street onto part of the Frontier Fertilizer Property, the City of Davis conducted soil sampling in order to determine if any contaminants have migrated off the southern border of the Property via surface water runoff. The City of Davis's contractor, Kennedy/Jenks Consultants, collected a total of 79 soil samples about 20 feet north of the fence which marks the southern boundary of the Property. The soil sampling area was divided into two areas: Area A lies along the southwest portion of the Property and Area B lies along the southeast portion of the Property. Each area was divided into a 40-foot sampling grid system; samples were collected at the center of each sampling grid, unless overhead or subsurface obstruction prevented the collection. Soil samples were collected from depths of 1.0 - 1.5 feet bgs and 2.5 - 3.0 feet bgs in Area A and from depths of 0.5 - 1.0 feet bgs in Area B (30). The soil samples were analyzed for EDB, DBCP, volatile organic compounds (VOCs), organochlorine pesticides, organophosphorus pesticides, and carbamate pesticides. The only contaminant detected was DDT (at 0.16 ppm) in a soil sample collected in Grid 10 (near the center of the sampling area); however, the level was below the health comparison value. See Figure 8.
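As the NEIC/FBI results above illustrate, a non-detect is only conclusive when the laboratory's detection limit is at or below the health comparison value. The sketch below shows that screening logic in minimal form; the numeric values in the example call are hypothetical, since the actual detection limits were not reported in the site literature.

```python
def screen_result(detected, value_ppm, comparison_value_ppm):
    """Classify one soil analysis result against a health comparison value.

    value_ppm is the measured concentration when detected=True; otherwise
    it is the reporting/detection limit for the non-detect.
    """
    if detected:
        return "exceeds CV" if value_ppm > comparison_value_ppm else "below CV"
    if value_ppm <= comparison_value_ppm:
        # The method could have seen the contaminant at the comparison value,
        # so a non-detect is a clean result.
        return "non-detect (conclusive)"
    # The method could not see the contaminant until well above the
    # comparison value, so the non-detect tells us little.
    return "non-detect (inconclusive: DL above CV)"

# Hypothetical detection limit (0.5 ppm) vs. comparison value (0.03 ppm).
print(screen_result(False, value_ppm=0.5, comparison_value_ppm=0.03))
# -> non-detect (inconclusive: DL above CV)
```

This is why samples with detection limits above comparison values, such as the 1985 NEIC/FBI surface samples, cannot be used to rule out contamination.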
Soil sampling was first carried out at the site by YCDPH in August 1983, when two soil samples were collected from the disposal basin. These samples contained 1,676 ppm disulfoton and 1,056 ppm EDB (1). Twenty-two soil samples collected from eight locations by YCDPH in March 1984 contained EDB, 1,2-DCP, DBCP, and other pesticides and herbicides (2). The inorganic chemical analysis revealed naturally occurring elements, such as arsenic, barium, cadmium, cobalt, chromium, copper, nickel, lead, selenium, and zinc, in the soil samples taken at depths between one and four feet (2). All levels, with the exception of cadmium, were within the range of levels commonly found in soils in the western United States.

In June 1984, CDHS collected soil samples from the center of the pesticide disposal basin at six depths ranging from the surface to five feet below ground surface. Although, as indicated above, no EDB and only low levels of other pesticides were detected in the surface sample, subsurface samples contained up to 2,770 ppm EDB, up to 3,250 ppm disulfoton, and up to 166 ppm trifluralin (2). The inorganic chemical analysis revealed that naturally occurring elements were within the range of levels commonly found in soils in the western United States.

In November 1984, CDHS collected soil samples from three depths within the disposal basin. At 4.5 feet below ground surface, EDB was detected at 215 ppm, along with 63 ppm disulfoton and 7 ppm ethyl parathion. At 8 feet below ground surface, EDB was found at 18 ppm; at 16 feet below ground surface, EDB was found at 11 ppm (2).

The NEIC/FBI samples collected in February 1985 from 12 locations contained disulfoton up to 3,300 ppm, pebulate up to 3,000 ppm, trifluralin up to 480 ppm, alachlor up to 3,300 ppm, parathion up to 1 ppm, endosulfan up to 13 ppm, EDB up to 11,000 ppm, and DBCP up to 550 ppm (2).
Post Excavation Data

In 1985, much of the soil contamination in the pesticide disposal basin was excavated and taken off the Property. Soil sampling since then, however, has shown that subsurface soil contamination still exists in the area of the former disposal basin. The following discussion summarizes post excavation levels of subsurface soil contamination.

In December 1989, GTI collected soil samples from 53 locations on site, mostly near the pesticide basin (1). Samples were only analyzed for EDB, DBCP, and 1,2-DCP. The results revealed limited horizontal migration of these contaminants, with high levels in and near the former disposal pit dropping to non-detect 20 feet away. Subsurface soils contained EDB up to 3.5 ppm, 1,2-DCP up to 34 ppm, and DBCP up to 0.69 ppm.

In March 1993, U.S. EPA's contractor E&E collected 105 soil samples at four depths from 27 locations from the former pesticide disposal basin area (see Figure 4 and sample locations F01-F27 on Figure 5) (10). Depth intervals were 1-2, 8-9, 18-19, and 26-27 feet below ground surface. Samples were analyzed for carbon tetrachloride, EDB, DBCP, 1,2-DCP, and 1,3-DCP. In April and May 1993, E&E collected an additional 141 soil samples from 36 different locations (see sample locations F28-F62 and F65 on Figure 5 and Figure 6) (10). The depths were the same as the March sampling event. In addition to analyzing for EDB, DBCP, 1,2-DCP, and 1,3-DCP, these samples were analyzed for volatile organic compounds, organochlorine and organophosphate pesticides, and carbamate/urea pesticides.

EDB was found in subsurface soil at levels exceeding its CREG at 35 of the 63 locations tested by E&E. All 35 locations are in the general vicinity of the former disposal basin. EDB was also found at levels exceeding its CREG in the 1989 GTI investigation, also near the former disposal basin. Therefore, EDB is selected for follow-up in subsurface soil.
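The CREG screening applied above, deriving a soil concentration from a cancer slope factor at a one-in-a-million risk level and then comparing the maximum detected concentration against it, can be sketched as follows. The slope factor, soil intake rate, and body weight below are illustrative assumptions for the sketch, not the values ATSDR actually used for EDB.

```python
def soil_creg_mg_per_kg(slope_factor, body_weight_kg=70.0,
                        soil_intake_kg_per_day=1e-4, target_risk=1e-6):
    """Soil concentration associated with the target lifetime cancer risk.

    dose (mg/kg/day) = C (mg/kg) * intake (kg/day) / body weight (kg)
    risk ~ dose * slope factor  =>  C = risk * BW / (SF * intake)
    """
    return target_risk * body_weight_kg / (slope_factor * soil_intake_kg_per_day)

# Hypothetical oral slope factor in (mg/kg/day)^-1, for illustration only.
creg = soil_creg_mg_per_kg(slope_factor=85.0)    # ~0.008 mg/kg (ppm)

# EDB maximum in the 1989 GTI subsurface samples, in ppm (mg/kg).
max_detected = 3.5
selected_for_follow_up = max_detected > creg     # True
```

Because slope factors for potent carcinogens like EDB are large, the resulting CREGs are very small, which is why even modest soil concentrations trigger follow-up.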
DBCP was found in subsurface soil at levels exceeding its CREG at five of the 63 locations tested by E&E. All five locations are in the general vicinity of the former disposal basin. Therefore, DBCP is selected for follow-up in subsurface soil.

In the E&E study, neither 1,2-DCP nor 1,3-DCP was found at any depth at any location at levels exceeding their CREG values. The 1989 GTI investigation did find 1,2-DCP at a level exceeding its CREG near the former pit area; therefore, this contaminant is selected for follow-up.

Carbon tetrachloride was not detected in any soil samples. The detection limit for the March samples was 2 ppm; the April and May samples included samples collected near the concrete sump suspected of being a carbon tetrachloride source; the detection limit for these samples was 0.025 ppm. The CREG for carbon tetrachloride in soil is 5 ppm; therefore, this chemical was not selected for follow-up in this public health assessment based on its absence in the soil. However, it was selected based on its presence in groundwater (see next section).

Multiple other pesticides were detected on the Property during the E&E investigation. Most of these contaminants were found at low levels and below health comparison values. Three detected pesticides, siduron, barban, and methiocarb, were selected for follow-up because there are no comparison values for these compounds. These pesticides were also found in the general area of the former pesticide disposal basin.

In summary, a total of six contaminants are selected for follow-up in subsurface soil, and all contamination of concern is within the general area of the former disposal basin. Contaminants selected for follow-up in subsurface soil on-Property, along with their maximum concentrations detected and health comparison values, are summarized in Table 1.

|Chemical||Date Detected||Max. Conc. Sample Depth (feet)||Maximum Concentration (ppm)||Comparison Value1 (ppm)||Comparison Value Source|

1 See text above for description of the types of comparison values used.
2 CREG calculated using slope factor obtained from IRIS.
3 CREG calculated using slope factor obtained from HEAST.

CONTAMINANTS IN GROUNDWATER ON PROPERTY

Between June 1985 and October 1987, a total of 13 monitoring wells were installed on Property to investigate the vertical and horizontal extent of groundwater contamination resulting from the former disposal basin. The locations of the wells are shown in Figure 7. Eight monitoring wells (AW3, AW4, AW5, AW6, MW-3A, MW-4A, MW-5A, and MW-5B) are screened in the S1 zone on the Property. Three monitoring wells (MW-4B, MW-5C and MW-33) are screened in the S2 zone on the Property. Two monitoring wells (MW-3C and MW-4C) are screened in the A1 zone on the Property. There are no monitoring wells screened in the A2 zone on Property; limited data on contamination of on-Property groundwater in the A2 zone are available from water samples collected from the former Labor Camp and ANDCO water supply wells.

Since June 1985, several groundwater sampling investigations have been conducted by different contractors and agencies. Samples have been analyzed for pesticides and other contaminants. Nine contaminants have been detected in on-Property groundwater at levels exceeding health comparison values, and were selected for follow-up. These nine contaminants, their maximum concentrations detected, and health comparison values are presented in Table 2. Several other contaminants have also been detected in on-Property groundwater over the years, but at levels below health comparison values.
These contaminants include benzene, chloroform, diazinon, dibromomethane, 1,2-dichlorobenzene, 1,3-dichloropropene, 1,2-dichloroethene, disulfoton, methylene chloride, naphthalene, tetrachloroethylene, toluene, 2,4,5-TP (Silvex), 1,2,3-trichloropropane, trichlorotrifluoromethane (Freon 113), and xylenes (1, 2, 9, 10, 16-21).

Results from E&E's 1993 groundwater investigation indicated that the concentrations of the majority of the contaminants have decreased in most of the wells since the previous groundwater investigation conducted in 1991 (10). In 1994, ICF Kaiser conducted a groundwater investigation which also concluded that the concentrations of the majority of contaminants have decreased in most of the wells; however, in a few monitoring wells, the levels of contamination have increased (17, 37). Contamination is predominantly confined to the S1 and S2 zones; however, contaminant levels in the A1 aquifer have been rising; therefore, the groundwater pump-and-treat system has been upgraded in order to address this situation (37).

The Labor Camp well and the ANDCO well were sampled in 1984, 1985, and 1992 by different regulatory agencies. EDB and 1,2-DCP were detected in the Labor Camp well at levels exceeding health comparison values. Trace levels of several other contaminants were detected in the Labor Camp well, including 1,3-dichloropropene, chloroform, 1,2-dichloropropane, and Freon 113 (2, 17). In the ANDCO well, three chemicals were found at levels exceeding health comparison values: 1,2-DCP at 8.3 ppb, 1,2-dichloroethane (1,2-DCA) at 3.3 ppb, and carbon tetrachloride at 2.7 ppb. Contaminants detected at levels below health comparison values included chloroform, benzene, toluene, 1,2,3-trichloropropane, 1,1,1-trichloroethane, trichlorotrifluoromethane (Freon 113), disulfoton, and 2,4,5-TP (2, 16, 18).
Inorganic chemical analysis revealed naturally occurring elements, such as boron, calcium, magnesium, manganese, nickel, potassium, silicon, sodium, and zinc, in the Labor Camp well (18). For both the ANDCO and Labor Camp wells, the levels of the naturally occurring elements detected were within the range of levels commonly found in groundwater in the United States.

|Chemical||Maximum Concentration In Each Zone (ppb)||Comparison Value (ppb)||Comparison Value Source|
|S1 Zone||S2 Zone||A1 Zone||A2 Zone|

ND = Not Detected
NA = Not Available
NM = Not Monitored
MCL = Maximum Contaminant Level
CREG = Cancer Risk Evaluation Guide
MCPP = 2-(4-chloro-2-methylphenoxy)propionic acid
Propoxur = 2-(1-methylethoxy)phenyl methylcarbamate

* 1,2-DCP was detected in the Labor Camp well at a level of 13 ppb. The Labor Camp well was screened in both the A1 and A2 zones. The maximum level of 1,2-DCP detected in the ANDCO well, which was screened only in the A2 zone, was 8 ppb.
** EDB was detected in the Labor Camp well at a level of 14 ppb. The Labor Camp well was screened in both the A1 and A2 zones.

CONTAMINANTS IN AIR ON PROPERTY

Prior to the April 1985 excavation of soil from the disposal basin, limited on-Property air monitoring was carried out to determine if workers needed to wear respirators during the excavation. The highest level of EDB in the ambient air samples collected near the pesticide disposal basin was 0.002 ppb. Air samples collected by personnel air monitors during well construction and soil sampling activities ranged from below the detection limits to 0.0001 ppb EDB. According to the California Health and Safety code (Title 8, section 5219), if EDB is greater than 0.1 ppm and up to 1 ppm, workers are required to wear supplied air respirators or self-contained breathing apparatus. Since the levels of EDB were below 0.1 ppm, workers were not required to wear respirators. The air CREG for EDB is 0.0006 ppb.
This value is meant as a screening tool to protect the general population. Since the excavation action was a temporary action, and since air concentrations did not exceed worker safety guidelines, no contaminants in on-Property air were selected for follow-up. Of more concern would be longer-term exposures to on-Property workers when the site was active. Since there are no data on such exposure, it is considered a potential pathway and is discussed in the Pathways Analysis section.

On April 18 and 19, 1995, air monitoring was conducted during drilling activities at the Frontier Fertilizer site. Personal and ambient air samples were collected by Gillian air sampling pumps for EDB and DBCP. All results were non-detect (i.e., less than 1 ppbv) (37).

Soil Gas On Property

In 1992, Harding Larson Associates (HLA) conducted a soil gas survey to evaluate the distribution of carbon tetrachloride in the soil. Eight soil gas samples were collected on Property. The maximum concentration detected was 0.2 ppb (samples were collected in the former pesticide disposal basin and near the concrete sump). Although the level of carbon tetrachloride exceeded the health comparison value (the air CREG is 0.01 ppb), carbon tetrachloride was not selected for follow-up in air. The air comparison values are media-specific concentrations that are used to select environmental contaminants for further evaluation. Because the contaminants were detected in the pore space surrounding the soil and not in the ambient air, it would be inappropriate to use the air CREG as a guideline.

C. OFF-PROPERTY CONTAMINATION

CONTAMINANTS IN SOIL OFF PROPERTY

In April 1985, approximately 1,100 cubic yards of contaminated soil from the pesticide disposal basin was excavated and treated at an agricultural field located three miles east of the site. Treatment consisted of spreading the excavated soil over 15 acres to a thickness of about 1-1/2 inches.
The treatment enabled volatile pesticides to rapidly dissipate and aided degradation. This had been shown in a field pilot study completed in November 1984. In this pilot study, the concentration of EDB in the treated soil dropped from an initial level of 49,000 ppb to 276 ppb after about 2 weeks and to about 43 ppb after 4 weeks (2).

In 1993, U.S. EPA's contractor E&E collected soil samples from six locations about 10 feet to the north of the fence which marks the northern boundary of the Property, just north of the former disposal basin. The shallowest samples were taken at a depth of one foot below ground surface. No EDB, DBCP, 1,2-DCP, or 1,3-DCP was detected in these samples. Disulfoton and linuron were detected in these samples, although at levels below health comparison values. EDB, but not DBCP or DCP, was detected at levels exceeding health comparison values in deep (about 26 feet below ground surface) soil at about the water table in five of these samples.

CONTAMINANTS IN GROUNDWATER OFF PROPERTY

Several investigations have been conducted by different contractors and agencies over the past nine years to evaluate the extent of groundwater contamination off-Property. A total of 26 monitoring wells have been installed off Property to characterize the horizontal and vertical extent of groundwater contamination. The locations of the wells are shown in Figure 7. Twelve off-Property monitoring wells (AW1, AW2, MW-6A, MW-6B, MW-7A, MW-7B, MW-8A, MW-9A, MW-10A, MW-11A, MW-12A, and MW-13A) are screened in the S1 zone. Eight off-Property monitoring wells (MW-6C, MW-7C, MW-8B, MW-9B, MW-10B, MW-11B, MW-12B, and MW-13B) are screened in the S2 zone. MW-2A is screened in both the S1 and S2 zones (31). MW-13C is screened in the A1 zone (31). Four off-Property monitoring wells (MW1, MW-2B, MW-7D, and MW-9C) are screened in the A1 zone (1). There are no monitoring wells screened in the A2 zone off-Property.
A total of 17 contaminants have been detected in off-Property groundwater at levels exceeding health comparison values and were selected for follow-up. These 17 contaminants, their maximum concentrations detected, and health comparison values are presented in Table 3. Several other contaminants have been detected in off-Property groundwater as well, but at levels below their health comparison values, and were not selected for follow-up. These contaminants include carbon disulfide, 1,2-dichlorobenzene, 1,3-dichlorobenzene, 1,4-dichlorobenzene, 1,1-dichloroethane, 1,2-dichloroethene, dichloromethane, ethylbenzene, toluene, 1,1,1-trichloroethane, trichloroethylene, trichlorofluoromethane, 1,2,3-trichloropropane, trichlorotrifluoromethane (Freon 113), and xylenes (1, 2, 9, 10, 16, 17, 18, 19, 20).

As was the case for on-Property contamination, recent sampling events indicate that the levels of most of the contaminants in off-Property groundwater have been decreasing (10, 17). However, the concentrations of EDB and 1,2-DCP have increased in MW-11B, which is screened in the S2 zone and located approximately 450 feet north of the pesticide basin (17, 21). In MW-7C, which is screened in the S2 zone and located about 100 feet north of the former disposal basin, the levels of EDB, 1,2-DCP, and DBCP have also increased (21). The level of EDB has also increased in two monitoring wells, MW-7D and MW-1 (37).

Contamination is predominantly confined to the S1 and S2 zones off Property. 1,2-DCP has been detected in two of the four off-Property monitoring wells screened in the A1 zone, and EDB has been detected in three of the four off-Property monitoring wells screened in the A1 zone (37). To date, no monitoring wells have been installed in the A2 zone because of the relatively low levels of contamination detected in the A1 zone (13). In May 1995, U.S. EPA installed three temporary monitoring wells (called hydropunches) in the A1 zone (37).
Private and Municipal Wells

There are two private wells in the vicinity of the Property: the Anderson well, located near the Anderson office building between 2nd Street and Mace Boulevard, and the Mizuguchi well, located approximately 50 feet west of the Property. During May 1984, the DTSC and the State Water Resources Control Board collected water samples from the Mizuguchi and Anderson wells, and the wells were sampled again by Frontier Fertilizer's contractors, Luhdorff and Scalmanini, in April 1985 and July 1985. No contaminants of concern were detected in either well (13, 34, 35, 36). The Mizuguchi well is no longer in use, whereas the Anderson well is still being used (13). It is not clear whether the Anderson well is still being tested.

There are twenty-one City of Davis municipal wells; they are monitored approximately every 18 months. The closest well is located a quarter mile from the Property, whereas the farthest well is located more than four miles from the Property. None of these wells are contaminated (32).

The principal contaminants at the site are EDB and 1,2-DCP. Based on recent investigations, the leading edge of the EDB contamination extends beyond MW-11 in the S1 and S2 zones north of the site and beyond MW-1 in the A1 zone, also north of the site. EDB does not appear to exist south of MW-4 (37). The 1,2-DCP contamination also extends beyond MW-11 north of the site in the S1 and S2 zones and beyond MW-1 to the north in the A1 zone, and it just appears at AW-6 on the southern boundary of the site (37).

Soil Gas Plume

Two plumes of carbon tetrachloride have been partially characterized. The highest concentration of carbon tetrachloride has been found near well cluster MW-12, located about 400 feet north of the Property. The larger of the two plumes apparently extends about 600 feet north of the Property; its southern boundary has not been clearly defined.
The larger plume most likely originated on the Property, possibly from a concrete sump located near the labor camp. However, soil and soil-gas investigations have been unable to identify a source (10). A second, smaller plume was found near the former disposal basin. The smaller plume extends about 25 feet north of the Property and approximately 50 feet into the Property. The source of the smaller plume was most likely the former disposal basin.

CONTAMINANTS IN AIR OFF PROPERTY

In October 1990, representatives from the Regional Water Quality Control Board (RWQCB) conducted a walk-around survey of the property to identify the location and the condition of the monitoring wells on property and off property. During the walk-around survey, the ambient air was monitored using a Gillian pump for EDB and DBCP. The levels of EDB and DBCP were less than 0.26 ppb and less than 0.32 ppb, respectively (23).

In 1992, during the soil gas investigation to evaluate the potential source area of carbon tetrachloride, HLA collected 5 ambient air samples. Four of the five air samples were collected at random locations north of the property in the vicinity of the monitoring wells. The highest level of carbon tetrachloride detected in the ambient air was 0.001 ppb (24). The air CREG for carbon tetrachloride is 0.01 ppb; therefore, carbon tetrachloride was not selected for follow-up in off-Property air.

Open Borehole and Well Head Space

In 1990, GTI collected 20 vapor samples in open boreholes during soil sampling and well installation activities. 1,2-DCP was detected in MW-7D at 39 ppb. No contaminants were detected in the other samples. In May 1991, representatives from the RWQCB collected air samples from the well head space of two wells during an inspection of the monitoring wells. The two monitoring wells were selected for sampling because they had been shown in previous groundwater investigations to contain the highest levels of contamination.
EDB and DBCP were detected in the head space of MW-7A at levels of 11.2 ppb and 2.4 ppb, respectively (25). In MW-7B, only EDB was detected (at a level of 3.7 ppb). In 1993, URS conducted a similar investigation for DTSC. EDB and 1,2-DCP were detected in MW-7C at 11 ppb and 350 ppb, respectively.

|Chemical||Maximum Conc. In Each Zone (ppb)||Comparison Value (ppb)||Comparison Value Source|
|S1 Zone||S2 Zone||A1 Zone||A2 Zone|

NM = Not Monitored
NA = Not Available
ND = Not Detected
CREG = Cancer Risk Evaluation Guide
MCL = Maximum Contaminant Level
child RMEG = Child Reference Dose Media Evaluation Guide
LTHA = Lifetime Health Advisory for Drinking Water

* Higher levels were reported in the Ecology and Environment, Inc. report; however, these were estimated quantities due to exceeded holding times (22).
** The estimated levels from the Ecology and Environment, Inc. report were 150 ppb (22).
*** U.S. EPA's highest samples were a full order of magnitude below these values (which were obtained by the State of California Regional Water Quality Control Board, Central Valley Region).

Although DBCP, 1,2-DCP, and EDB were detected in the well head space at levels exceeding their health comparison values (0.2 ppb for DBCP; 0.87 ppb for 1,2-DCP; and 0.0006 ppb for EDB), they were not selected for follow-up. The air comparison values are media-specific concentrations that are used to select environmental contaminants for further evaluation. Because the contaminants were detected in the well head space and not in the ambient air, it would be inappropriate to use such values as a guideline. Furthermore, standard operating protocols mandate that monitoring wells must be capped at all times, except during sampling; therefore, there would not be any exposure to the community.

Several soil gas surveys have been conducted by different contractors and agencies.
The purpose of a soil gas survey is to provide preliminary data to aid in the identification of the location of sources of contamination in soils; however, it does not provide quantitative data on soil and groundwater contaminant levels.

In 1990, GTI collected two soil gas samples directly south of the MW-7 cluster. The samples were analyzed for EDB, 1,2-DCP, and DBCP. At four feet below ground surface, no contaminants were detected. At 20 feet below ground surface, 1,2-DCP was detected at 720,000 ppb and EDB was detected at 1,900 ppb.

In 1992, HLA conducted a soil gas survey to investigate the carbon tetrachloride plume and to try to identify its source. Thirty-one soil gas samples were collected off Property. The maximum concentration of carbon tetrachloride detected was 9.5 ppb, from a sample collected near MW-12.

In 1993, Entrix, Inc. conducted a soil gas investigation in order to resolve the issue of potential health risks posed by soil gas contaminated with EDB, 1,2-DCP, and benzene. Soil gas samples were collected within a 15-foot radius of MW-7 (the location of greatest groundwater contamination) and at depths down to 20 feet. No EDB, 1,2-DCP, or benzene was detected in the 15 samples analyzed.

Although these investigations detected EDB, 1,2-DCP, and carbon tetrachloride at levels exceeding their health comparison values (the air CREGs are 0.0006 ppb, 0.87 ppb, and 0.01 ppb, respectively), they were not selected for follow-up. The air comparison values are media-specific concentrations that are used to select environmental contaminants for further evaluation. Because the contaminants were detected in the pore space surrounding the soil and not in the ambient air, it would have been inappropriate to use the air CREG as a guideline. Exposure to volatiles released to the ambient air or indoor air from subsurface contamination will be discussed in the Pathways Analysis section.

D.
QUALITY ASSURANCE AND QUALITY CONTROL

In preparing this public health assessment, ATSDR and CDHS rely on the information provided in the referenced documents and assume that adequate quality assurance and quality control measures were followed with regard to chain-of-custody, laboratory procedures, and data reporting. The accuracy of the conclusions contained in this public health assessment is determined by the completeness and reliability of the referenced information.

E. PHYSICAL AND OTHER HAZARDS

Access to the site is limited by a fence surrounding the property. Physical hazards were noted on the site during the site visit. Broken and rusted farm equipment was abandoned south of the pole barn and north of the labor camps. There was a large pile of concrete rubble located south of the anhydrous ammonia tanks.

PATHWAYS ANALYSIS

This section addresses the pathways by which human populations in the area surrounding the site could be exposed to contaminants at, or migrating from, the site. If it is determined that exposure to chemicals not necessarily related to the site is also of concern, this exposure will be evaluated as well.

When a chemical is released into the environment, the release does not always lead to exposure. Exposure only occurs when a chemical comes into contact with and enters the body. In order for a chemical to pose a human health risk, a complete exposure pathway must exist. A complete exposure pathway consists of five elements: 1) a source and a mechanism of chemical release to the environment; 2) a contaminated environmental medium (e.g., air, soil, water); 3) a point of human contact with the contaminated medium (known as the exposure point); 4) an exposure route (e.g., inhalation, dermal absorption, ingestion) at the exposure point; and 5) a human population at the exposure point (15).

Exposure pathways are classified as either completed, potential, or eliminated. Completed exposure pathways require that all five elements exist.
Potential exposure pathways are either: 1) not currently complete but could become complete in the future, or 2) indeterminate due to lack of information. Pathways are eliminated from further assessment if they are determined to be unlikely to occur. A time frame given for each pathway indicates whether the exposure occurred in the past, is currently occurring, or will occur in the future. For example, a completed pathway with only a past time frame indicates that exposure did occur in the past, but does not currently exist and is not likely to exist in the future. Human exposure pathways are evaluated for each environmental medium possibly impacted by site-related chemicals. The toxicological implications of any completed exposure pathways identified are evaluated in the Public Health Implications section.

A. COMPLETED EXPOSURE PATHWAYS

The only completed exposure pathway identified in this public health assessment involves the past use of water from the Labor Camp well on the Property (see Table 4). Since this well was decommissioned in 1992, this pathway existed in the past only; no current completed exposure pathways are identified. The exposed population consists of people who drank from or otherwise used water from the Labor Camp well from the early 1970s until 1992. The Labor Camp structures were used to house farm workers from the early 1950s until 1972. Since pesticide operations and disposal at the Property did not begin until 1971, contamination resulting from these activities is not expected to have impacted the Labor Camp well water until about the mid 1970s. Therefore, exposed individuals probably consisted of workers on the Property beginning about the mid 1970s. Information on the number of potentially exposed individuals and on well usage (such as whether use continued after contamination was detected in 1984) was not located in the site literature available for review.
This information was requested during preparation of this assessment, but was not received prior to completion of the report. Only limited data exist on the nature and extent of contamination of water from the Labor Camp well. Two contaminants were detected in this well at levels exceeding comparison values: EDB was detected at levels up to 14 ppb, and 1,2-DCP was found at levels up to 13 ppb. Using several assumptions about factors such as the concentration of contaminants over time, exposure frequency, and exposure duration, we calculate a quantitative estimate of exposure dose (i.e., an estimate of a daily exposure level) in the Toxicological Evaluation section, according to ATSDR guidance (15).

Only completed exposure pathways are evaluated for their toxicological risks. For the Frontier Fertilizer site, only one completed pathway was identified in the Pathways Analysis section. This pathway involved past exposure (no completed exposure pathways currently exist or are expected to exist in the future) of workers on the Property who drank from or otherwise used water from the Labor Camp well from the mid 1970s until 1992. Two contaminants were detected in this well at levels exceeding comparison values: EDB at levels up to 14 ppb and 1,2-DCP at levels up to 13 ppb. There are insufficient data on contaminant concentrations and water usage to accurately estimate exposures and risks from this pathway. However, likely maximum doses based on existing data were estimated to get a rough idea of whether the chemicals at the concentrations found could have affected the health of Labor Camp well water users. Because it is thought that only workers, and not residents, were exposed to contaminants in the Labor Camp well, we only evaluated exposure and associated risk from drinking the water (e.g., we did not evaluate exposure from other domestic uses such as bathing).
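The dose estimation referred to above follows the standard ingestion formula given later in Table 7: Dose = (CW x IR x EF x ED) ÷ (BW x AT). The sketch below uses the EDB concentration (0.014 mg/L), body weight (70 kg), and averaging times from Table 7; the intake rate (2 liters/day) and exposure frequency (250 days/year) are illustrative assumptions, since the values actually used in the assessment were not available in the material reviewed.

```python
def ingestion_dose(cw_mg_per_l, ir_l_per_day, ef_days_per_year,
                   ed_years, bw_kg, at_days):
    """Estimated daily dose in mg/kg/day: (CW x IR x EF x ED) / (BW x AT)."""
    return (cw_mg_per_l * ir_l_per_day * ef_days_per_year * ed_years) / (bw_kg * at_days)

# From Table 7: CW for EDB = 0.014 mg/L, BW = 70 kg,
# AT = 10 yr x 365 d (non-cancer) or 70 yr x 365 d (cancer).
# IR = 2 L/day and EF = 250 d/yr are assumed here for illustration only.
edb_noncancer = ingestion_dose(0.014, 2, 250, 10, 70, 10 * 365)
edb_cancer    = ingestion_dose(0.014, 2, 250, 10, 70, 70 * 365)
print(f"{edb_noncancer:.2e} mg/kg/day (non-cancer averaging time)")
print(f"{edb_cancer:.2e} mg/kg/day (lifetime averaging time)")
```

Note that the same total intake yields two different doses: for cancer risk the intake is averaged over a 70-year lifetime rather than over the 10-year exposure period, which is why the two doses differ by a factor of seven.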
Additional information about actual usage of water from the Labor Camp well would allow a more realistic estimation of doses and associated risks from this pathway. The exposure and risk estimates for this pathway, along with the assumptions used, are provided in Tables 7-9. The toxicological implications of exposure to 1,2-DCP and EDB from drinking water from the Labor Camp well are discussed separately for each contaminant.

Table 4. Completed exposure pathway.

| Contaminated Environmental Medium | Time Frame | Exposure Point | Exposure Route | Exposed Population |
|---|---|---|---|---|
| Groundwater | Past | On Property | Ingestion | People drinking from the Labor Camp well from the early 1970s until July 1992 |

B. POTENTIAL EXPOSURE PATHWAYS

Three potential exposure pathways were identified; they are summarized in Table 5. Two of the pathways involve workers and others on the Property in the past, while pesticide operations were taking place. These individuals were potentially exposed to contaminants via skin absorption, incidental ingestion, and inhalation. There was insufficient information in the literature available for review on the nature of possible exposures to evaluate these pathways.

The third potential exposure pathway is the potential future inhalation of carbon tetrachloride vapors that have migrated up from the plume of carbon tetrachloride-contaminated groundwater. This plume could potentially impact workers in the future light industrial development planned for the land within about 600 feet north of the Property. Potential exposures to EDB and 1,2-DCP from vapors migrating from contaminated groundwater were evaluated in previous risk assessments and by DTSC in its border zone determination. Carbon tetrachloride has been documented in soil gas in the area north of the Property proposed for light industrial development. However, based on the material available for review, potential exposures and associated risks were not evaluated for carbon tetrachloride.
Table 5. Potential exposure pathways.

| Contaminated Environmental Medium | Time Frame | Exposure Point | Exposure Route | Exposed Population | Comments |
|---|---|---|---|---|---|
| Soil | Past | On Property | Skin absorption, incidental ingestion | People on Property | Nature and magnitude of exposure occurring in the past is unknown |
| Air | Past | On Property | Inhalation | People on Property | Nature and magnitude of exposure occurring in the past is unknown |
| Air | Future | Light industrial area above groundwater plume within about 400 feet north of Property | Inhalation | Workers at exposure point | Exposure to carbon tetrachloride not evaluated in existing risk assessments |

C. ELIMINATED EXPOSURE PATHWAYS

Three exposure pathways were evaluated in this public health assessment but were eliminated from further review because it was determined that they were unlikely to exist. Eliminated pathways are summarized in Table 6. Two of the eliminated pathways involve current or future exposure of workers on the Property to contaminants via skin absorption, incidental ingestion, or inhalation of vapors from soil contamination. The limited surface soil data do not indicate soil contaminants at levels of concern.

The lack of substantial contamination of soil at the surface is expected. Because the principal contaminants such as EDB and 1,2-DCP do not adsorb tightly to soil particles and do dissolve in water, these chemicals tend to move down through the soil to the groundwater. In the years since pesticide operations stopped at the site, a combination of downward migration, volatilization to the air, and degradation by sunlight and microorganisms has substantially reduced contamination of surface soil. Fairly rapid reductions in contaminant levels were shown in the sampling conducted as part of the 1985 excavation and "land treatment" described in the Environmental Contamination and Other Hazards section. On the other hand, contaminant levels are of concern in subsurface soil.
Construction or excavation workers on the Property could potentially come into contact with the contaminants. However, significant exposure via this pathway is not expected to occur as long as personal protection equipment is used.

The only water supply well in use that has been impacted by groundwater contamination is the Labor Camp well, the sole water supply well used on the Property. Exposure to contaminants from this well was discussed under completed exposure pathways above. Other water supply wells in the area include the City of Davis municipal water supply wells, the Barthel Mobile Home Park well, and a few private wells within several miles of the site. Contamination has not been documented in any of these drinking water wells. Currently, there are no drinking water supply wells in the area potentially impacted by the groundwater contaminant plumes. There are deed restrictions for both the Frontier Fertilizer Property and the Mace Ranch Park property restricting the placement of groundwater wells. Given these deed restrictions, the groundwater monitoring program, and the establishment of a groundwater cleanup program at the site, future exposure to contaminants in groundwater is considered unlikely to occur. Furthermore, the City of Davis is proposing an ordinance that will strongly discourage private citizens from installing private groundwater wells (32).

Table 6. Eliminated exposure pathways.

| Contaminated Environmental Medium | Time Frame | Exposure Point | Exposure Route | Exposed Population | Comments |
|---|---|---|---|---|---|
| Soil | Current, future | On Property | Skin absorption, incidental ingestion, inhalation | People on Property coming into contact with subsurface soil | Significant exposure via this pathway is not expected if personal protection equipment is used |
| Groundwater | Future | Off Property | Ingestion | Users of well water in vicinity of Property | Exposure via this pathway will not occur as long as site remedial efforts are continued |

A. TOXICOLOGICAL EVALUATION

When individuals are exposed to a hazardous substance, several factors determine whether harmful effects will occur and the type and severity of those health effects. These factors include the dose (how much), the duration (how long), the route by which they are exposed (breathing, eating, drinking, or skin contact), the other contaminants to which they may be exposed, and individual characteristics such as age, sex, nutrition, family traits, lifestyle, and state of health. The scientific discipline that evaluates these factors and the potential for a chemical exposure to adversely impact health is called toxicology. This section evaluates the toxicological risks from each completed exposure pathway identified in the Pathways Analysis section.

The approach used to evaluate the potential for non-carcinogenic (i.e., non-cancer) adverse health effects to occur in an individual or population assumes that there is a level of exposure (i.e., a threshold level) below which non-cancer adverse health effects are unlikely to occur. The approach compares the exposure level (referred to as the dose estimate) with the threshold level (referred to as the toxicity value). The dose estimate is an estimate of exposure expressed in terms of the amount of contaminant (either in contact with or absorbed into the body) per unit body weight per unit time (e.g., mg contaminant per kg body weight per day, or mg/kg/day). When the dose estimate for a contaminant exceeds the toxicity value for that contaminant, there may be concern for potential non-cancer adverse health effects from that contaminant. However, dose estimates and toxicity values are developed using conservative assumptions of exposure and toxicity in order to be protective of human health. Although a dose estimate exceeding a toxicity value indicates that a health concern may exist, it does not necessarily mean that observable health effects in the exposed individuals are likely to exist.
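The comparison described above (dose estimate versus toxicity value) is the screening ratio reported later in Table 9 as the "Ratio of Dose to Toxicity Value." A minimal sketch follows, using the 1,2-DCP chronic MRL of 0.09 mg/kg/day cited in this assessment; the dose value shown is illustrative, chosen to be consistent with the 0.24 ratio reported for 1,2-DCP in Table 9 (the underlying dose table did not survive in the material reviewed).

```python
def dose_to_toxicity_ratio(dose_mg_kg_day, toxicity_value_mg_kg_day):
    """Ratio of the estimated dose to the MRL or RfD.
    A ratio above 1 flags a potential non-cancer health concern;
    a ratio below 1 means non-cancer effects are unlikely."""
    return dose_mg_kg_day / toxicity_value_mg_kg_day

MRL_12DCP = 0.09  # ATSDR chronic oral MRL for 1,2-DCP, mg/kg/day (from this assessment)

ratio = dose_to_toxicity_ratio(0.0216, MRL_12DCP)  # illustrative dose estimate
print(f"ratio = {ratio:.2f}")  # 0.24 -> below 1, so no non-cancer concern flagged
```

Because both the dose estimate and the toxicity value are built from conservative assumptions, a ratio somewhat above 1 signals a need for closer evaluation rather than a prediction of observable health effects.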
Toxicity values used to evaluate non-carcinogenic adverse health effects include the ATSDR Minimal Risk Level (MRL) and the U.S. EPA Reference Dose (RfD). Both of these values are estimates of daily exposure to the human population (including sensitive subgroups) below which non-cancer adverse health effects are unlikely to occur (26). The MRL and the RfD consider only non-cancer effects. Because they are based only on the information currently available, some uncertainty is always associated with the MRL and RfD, and uncertainty factors are applied to account for gaps in our knowledge of a chemical's toxicity.

When there is adequate information from animal or human studies, MRLs are developed for different routes of exposure, such as ingestion and inhalation. Separate non-cancer toxicity values are also developed for different durations of exposure. ATSDR develops MRLs for acute exposures (less than 14 days), intermediate exposures (from 15 to 364 days), and chronic exposures (greater than one year). U.S. EPA develops RfDs for developmental exposures (less than 14 days), subchronic exposures (from two weeks to seven years), and chronic exposures (greater than seven years). For example, the chronic MRL or RfD for ingestion is set at a level to be protective against chronic (i.e., lifetime) exposures via ingestion. Both the MRL and the RfD for ingestion are expressed in units of milligrams of contaminant per kilogram body weight per day (mg/kg/day).

The potential for carcinogenic health effects (i.e., cancer) to occur in an individual or population is evaluated by estimating the probability of an individual developing cancer over a lifetime as the result of the exposure. This approach is based on the assumption that there are no absolutely "safe" exposure levels to carcinogens (i.e., there is no safety threshold, as there is for non-carcinogenic substances). U.S. EPA has developed cancer slope factors for many carcinogens.
A slope factor is an estimate of a chemical's carcinogenic potency, or potential for causing cancer. If adequate information about the level, frequency, and length of exposure to a particular carcinogen is available, an estimate of the excess cancer risk associated with the exposure can be calculated using the slope factor for that carcinogen. Specifically, to obtain a risk estimate, the estimate of long-term exposure (expressed in milligrams of contaminant per kilogram body weight per day, or mg/kg/day) is multiplied by the slope factor for that carcinogen (expressed as the risk per mg/kg/day) (26).

Cancer risk is the likelihood, or chance, of getting cancer. We say "excess lifetime cancer risk" because we have a "background risk" of about one in four (1/4) of getting cancer from all other causes during our lifetime. If we say there is a "one-in-a-hundred-thousand" (1/100,000) excess cancer risk from a given exposure to a contaminant, we mean that each individual exposed to that contaminant at that level over his or her lifetime would be expected to have, at most, a one-in-a-hundred-thousand chance (above the background chance) of getting cancer from that particular exposure. In order to take into account the uncertainties in the science, the risk numbers used are plausible upper limits of the actual risk. In actuality, the risk is probably somewhat lower, and may in fact be zero.

TOXICOLOGICAL IMPLICATIONS OF EXPOSURE TO 1,2-DCP FROM THE LABOR CAMP WELL

Information concerning the toxicity of 1,2-DCP to humans was gathered from occupational exposure reports and from cases of accidental or intentional over-exposure; to date, there are no systematic human exposure studies. In occupational settings, painters and metalworkers who handled solvents containing 10 to 40% 1,2-DCP developed adverse dermal changes, such as dermatitis, redness, blisters, fluid accumulation, and other signs of skin toxicity.
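The excess-cancer-risk arithmetic described earlier in this section (long-term dose multiplied by slope factor, expressed as a "1 in N" chance) can be sketched as follows. The 0.068 (mg/kg/day)^-1 oral slope factor for 1,2-DCP is the value cited in Table 9; the lifetime-averaged dose shown is an illustrative value consistent with the roughly 1-in-400,000 risk reported for 1,2-DCP in this assessment.

```python
def excess_cancer_risk(lifetime_dose_mg_kg_day, slope_factor_per_mg_kg_day):
    """Upper-bound excess lifetime cancer risk = dose x slope factor."""
    return lifetime_dose_mg_kg_day * slope_factor_per_mg_kg_day

def one_in_n(risk):
    """Express a risk such as 2.6e-6 as 'about 1 in 386,997'."""
    return f"about 1 in {round(1 / risk):,}"

SF_12DCP = 0.068  # oral slope factor for 1,2-DCP, (mg/kg/day)^-1 (Table 9)

risk = excess_cancer_risk(3.8e-5, SF_12DCP)  # illustrative lifetime-averaged dose
print(one_in_n(risk))  # -> about 1 in 386,997
```

Because both the dose estimate and the slope factor are plausible upper bounds, the resulting risk number is itself an upper limit; as the text notes, the true risk is probably lower and may be zero.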
There have been several reported cases of poisoning due to accidental or intentional over-exposure to 1,2-DCP (27). 1,2-DCP mainly exerts its toxic effects upon the central nervous system, liver, and kidney. The adverse health effects discussed above were the result of exposure to high levels of 1,2-DCP. The level of exposure that occurred as a result of using contaminated groundwater from the Labor Camp well would be lower than the levels in these case studies; therefore, the workers may not have experienced similar health effects.

We estimated the dose of 1,2-DCP that workers might have received from drinking water from the Labor Camp well (see Tables 7 and 8 for the exposure assumptions used and the dose estimates). The dose estimate for 1,2-DCP was below the ATSDR chronic MRL. Therefore, non-cancer effects are not expected to occur from past ingestion of 1,2-DCP in the Labor Camp well water.

In carcinogenicity studies of 1,2-DCP, animals developed benign (non-harmful) liver tumors. A slight increase in mammary gland tumors was also noted. According to the International Agency for Research on Cancer (IARC), 1,2-DCP is not classified as a human carcinogen; this conclusion was based on the limited evidence for the carcinogenicity of 1,2-DCP in animal studies and the lack of human data (27). However, both the U.S. Environmental Protection Agency and the California Environmental Protection Agency do consider 1,2-DCP a potential human carcinogen. If we consider 1,2-DCP a potential carcinogen, and if we assume that there is no safe level of exposure to carcinogens, then workers who drank water from the Labor Camp well may have a small increased chance (i.e., about 1 in 386,997) of getting cancer from exposure to 1,2-DCP.

TOXICOLOGICAL IMPLICATIONS OF EXPOSURE TO EDB FROM THE LABOR CAMP WELL

Adverse health effects are unlikely to occur in humans exposed orally to low levels of EDB in contaminated food or water. Doses that cause acute death in humans and animals are relatively high (29).
If EDB is immediately washed off the skin after contact, low levels of EDB are neither irritating to the surface of the skin nor rapidly absorbed through the skin. In one human study, volunteers were exposed to 0.5 ml of EDB; no dermal changes were observed. However, a burning sensation, inflammation, and blisters occurred when a cloth dressing saturated with EDB was applied to the skin for 1 to 2 hours. In one animal study, only reddening of the skin occurred when EDB was applied to the skin; additional adverse signs, edema and necrosis, occurred when a cloth dressing saturated with EDB was applied.

Two fatal cases of occupational exposure to EDB have been reported in the literature (28). Two workers died after collapsing in a pesticide storage tank containing residues of EDB. The primary route of exposure was skin contact with EDB; however, inhalation of EDB may have also contributed to the deaths. After EDB is absorbed through the skin, the liver and kidney are the target organs for toxicity; the principal cause of the two deaths was liver failure. There have also been three reported deaths caused by the intentional ingestion of high doses of EDB. However, the levels of exposure that would have occurred as a result of using contaminated groundwater from the Labor Camp well would be lower than the levels in these case studies. In an animal study, lethal amounts of EDB applied to the skin were rapidly absorbed; when evaporation of EDB was prevented for 24 hours by a cloth dressing, death occurred within 4 days.

Two human studies have suggested that EDB may have adverse effects on fertility and sperm production (29). However, these studies had severe limitations and provide little or questionable evidence linking EDB to adverse effects on fertility or sperm production. Animal studies have also indicated that high doses of EDB may have adverse effects on the male reproductive system.
No Minimal Risk Level (MRL) was derived for EDB because of the lack of quantitative exposure data (29). Humans are susceptible to the short-term toxic effects of EDB from all three routes of exposure: inhalation, dermal contact, and ingestion. With the exception of adverse reproductive effects in men due to occupational exposure, long-term effects of EDB have not been documented in humans. However, based on animal studies, there is a potential for certain adverse health effects, such as liver and kidney damage, sperm abnormalities, and DNA damage, in humans exposed chronically (long-term) to low environmental levels of EDB from hazardous waste sites.

We estimated the dose of EDB that workers might have received from drinking water from the Labor Camp well (see Tables 7 and 8 for the exposure assumptions used and the dose estimates). The dose estimate for EDB exceeded the U.S. EPA Reference Dose (RfD) for EDB. Therefore, past exposure may have presented a non-cancer hazard to workers.

In studies of workers exposed to EDB, the results did not show increases in the number of deaths or cancers. However, these studies had limitations, and small increases in cancer may not have been detected (29). Studies in animals have shown that, via the inhalation route, EDB is a potent carcinogen, producing cancer in the upper respiratory system as well as in other organs and tissues throughout the body (29). Our estimate of an upper-bound extra lifetime cancer risk to workers from drinking EDB-contaminated water from the Labor Camp well results in a moderately increased cancer risk (about 1 in 286).

Table 7. Exposure dose equation and assumptions.

Dose Estimate (mg/kg/day) = (CW x IR x EF x ED) ÷ (BW x AT)

| Variable | Definition |
|---|---|
| CW | Chemical concentration in water (mg/liter) |
| IR | Intake rate (liters/day) |
| EF | Exposure frequency (days/year) |
| ED | Exposure duration (years) |
| BW | Body weight (kg) |
| AT | Averaging time (the period over which exposure is averaged, in days). For noncarcinogenic effects, it is the pathway-specific period of exposure (i.e., ED x 365 days/year). For carcinogenic effects, it is a 70-year lifetime (i.e., 70 years x 365 days/year). |

Variable values used:

| Variable | Value |
|---|---|
| CW | EDB: 0.014 mg/liter; 1,2-DCP: 0.013 mg/liter |
| BW | 70 kg for an adult (14) |
| AT | For noncarcinogenic effects: 10 years x 365 days/year = 3,650 days. For carcinogenic effects: 70 years x 365 days/year = 25,550 days. |

Table 8. Dose estimates.

| Contaminant | Non-cancer Effects (mg/kg/day)^a | Carcinogenic Effects (mg/kg/day)^b |
|---|---|---|

^a Dose for evaluating noncarcinogenic effects. Exposure is averaged over the period or duration of exposure (i.e., AT = 10 years x 365 days/year).
^b Dose for evaluating carcinogenic effects. Exposure is averaged over a 70-year lifetime (i.e., AT = 70 years x 365 days/year).

Table 9. Toxicity values and cancer risk estimates.

| Contaminant | Non-cancer Toxicity Value (mg/kg/day) | Non-cancer Toxicity Value Source | Ratio of Dose to Toxicity Value | EPA Cancer Slope Factor (mg/kg/day)^-1 | Chemical-Specific Cancer Risk |
|---|---|---|---|---|---|
| 1,2-DCP | 0.09 | Chronic MRL | 0.24 | 0.068 | 0.0000026 (i.e., about 1 in 400,000) |
| EDB | | | | | (i.e., about 1 in 300) |

B. HEALTH OUTCOME DATA EVALUATION

Existing health databases such as the cancer and birth defects registries are generally useful if substantial exposures are documented or suspected for neighborhoods in the vicinity of a site. In the case of Frontier Fertilizer, community-wide exposures have not occurred. A review of disease statistics in the vicinity of the site would not help to define any potential impact on individuals on the Property who may have been exposed in the past to contaminants in water from the Labor Camp well.

C. COMMUNITY HEALTH CONCERNS EVALUATION

In this section, we address specific health concerns raised by community members.

Has contaminated water gone down far enough to reach the municipal water supply? - No. Small amounts of contamination from the site have gone down to the A2 zone, based on data from the former water supply wells on Property. Some municipal wells get water from this zone. However, based on available data, groundwater contaminants from the site have not migrated very far off site.
According to a representative of DTSC, there are currently no municipal water wells at risk of being contaminated with any contaminants from the site.

Could there be a problem if children played in the fields adjacent to Frontier? - No. Based on the information and data available for review, there is no indication of, or potential for, significant levels of site-related contaminants having impacted surface soils in fields near the site. There may be small releases of carbon tetrachloride to the air above the carbon tetrachloride plume, and long-term exposure to this chemical inside structures built over the plume might be of concern. However, any carbon tetrachloride escaping to the ambient air from soil gas would be rapidly diluted and would not pose a health hazard to children or others playing in the area.

Could any of the chemicals at Frontier Fertilizer cause air pollution in the surrounding areas? - No. The contamination of concern at the site is in the subsurface soil and in the groundwater. Under current conditions, air releases, whether via fugitive dust or vapors, are not a problem for residents. There may be small releases of carbon tetrachloride to the air above the carbon tetrachloride plume; the potential for exposure to this chemical should be assessed prior to development of the land over the plume. No residential units will be constructed in the area.

Will site cleanup activities allow exposure to toxic chemicals to occur? - Given the distance from the site to the nearest residence, exposure to the community during remedial actions is not likely to occur. U.S. EPA will provide community members with information about possible remedial actions for the site. Prior to implementing a remedial action, there will be a formal public comment period; any concerns about exposures related to a particular remedial action would be addressed at that time.
Does the site currently pose any danger to the health of residents living closest to the site? - No. As indicated above and evaluated in this assessment, the site does not currently pose any danger to the health of residents living closest to the site.
It opened fifty years ago and changed Broadway forever

February/March 1993 | Volume 44, Issue 1

Musical comedy had its roots in vaudeville. The plots of these shows were slight, the characters pasteboard, and the jokes and songs often had little, if anything, to do with either. But musical comedies could also be very inventive, often on the cutting edge of popular music. Moreover, musical-comedy lyrics, at least for the major songs, were carefully written, poetically sophisticated, and often extremely witty. As with Gilbert and Sullivan, and unlike European musical theater, they were as important as the music itself.

From early on Hammerstein sought to expand the boundaries of the purely American musical-comedy form. He wanted to bring it closer to the operetta, a much more dramatically solid kind of musical theater that had roots in Berlin and especially Vienna, as well as a long Broadway tradition. With his next big hit, Rose Marie, in 1924, he began to do so. Then, in 1927, he and Jerome Kern wrote Show Boat.

Today Show Boat is the only musical of the 1920s that can hold the boards in its own right, not just as a historical curiosity with good songs. It is, in every sense of the word, a masterpiece. Hammerstein was always at his best adapting the work of others, and his dramatization of Edna Ferber's sprawling novel was a marvel of concision. The score was an integrated whole, arising out of the dramatic situation. Yet it produced no fewer than six songs that became standards. In one of these songs, "Can't Help Lovin' Dat Man," Hammerstein for the first time expressed what would become a constant theme in his later work: the idea that human love is an elemental force in human nature, quite beyond the control of those who experience it.
“Tell me he’s lazy, tell me he’s slow./Tell me I’m crazy (maybe I know)—/Can’t help lovin’ dat man of mine.” Doubtless this expressed a long-held belief. Doubtless also, it reflected his recent encounter “across a crowded room” with Dorothy Blanchard, who was to be his second wife and the love of his life. Twenty years later, when his lyrics were published in book form, he dedicated the volume, simply, “To Dorothy, the song is you.”

But as the twenties gave way to the thirties, and boom to depression, Hammerstein’s style of musical—romantic, concerned with character and the nature of love—went out of style. Instead, shows featuring the lives of the rich, set in penthouses and on ocean liners—the Broadway-musical version of Hollywood screwball comedies—came into vogue. Although Hammerstein and Kern’s Music in the Air was the big hit of the dismal 1932 season, it would be Hammerstein’s last success for eleven long years. His only hits thereafter were occasional individual songs such as “All the Things You Are” and “The Last Time I Saw Paris.”

This last song was most atypical of Hammerstein. For one thing, it was one of the very few he ever wrote that was not intended for a particular play or movie. (It was later interpolated into the movie Lady Be Good and won the Academy Award for best song in 1941.) He had written the lyric only because he was so saddened by the fall of Paris, a city he deeply loved, to the Nazis in the early summer of 1940. Jerome Kern then set it to music. Further, it showed a side of Hammerstein that was not often revealed in his work. For if he was not a particularly urban man, he was a thoroughly urbane and sophisticated one and was quite as much at home in Paris as at his beloved Pennsylvania farm. Even there, as his protégé Stephen Sondheim explained, if the cattle were often standing like statues, they did so right beyond the tennis court.
Despite the success of “The Last Time I Saw Paris,” when Rodgers called him in the summer of 1941, the wisdom on that hard-nosed thoroughfare they both knew so well had it that Hammerstein’s Broadway career was washed up.

Hammerstein’s response to Rodgers’s plea for advice was typical of the man. He told Rodgers that he should keep working with Hart for as long as possible; he thought that for Rodgers to walk away from his partner now would kill him. But he told Rodgers that if the time came when Hart was unable to finish a job, he should let him know, and he would finish it for him, with no one but the two of them the wiser.

After Rodgers and Hart completed By Jupiter (Rodgers got Hart to check into a hospital until the score was completed), Rodgers, as always, immediately looked for another project. The Theatre Guild, in 1931, had produced a play by Lynn Riggs called Green Grow the Lilacs. It had been a flop then, but Theresa Helburn and Lawrence Langner, who ran the Guild, thought it had possibilities as a musical. Rodgers immediately saw the potential; Hart was less enthusiastic. Hammerstein sought the rights to the failed play and learned that Rodgers and Hart had already been given them.
Howard Pyle's King Arthur and his Knights

Chapter Sixth. How King Arthur Was Wedded in Royal State and How the Round Table Was

And now was come the early fall of the year; that pleasant season when the meadow-land and the wold were still green with summer that had only just passed; when the sky likewise was as of summer-time - extraordinarily blue and full of large floating clouds; when a bird might sing here and another there, a short song in memory of spring-time, when all the air was tempered with warmth and yet the leaves were everywhere turning brown and red and gold, so that when the sun shone through them it was as though a cloth of gold, broidered with brown and crimson and green, hung above the head. At this season of the year it is exceedingly pleasant to be a-field among the nut-trees with hawk and hound, or to travel abroad in the yellow world, whether it be a-horse or a-foot.

Now this was the time of year in which had been set the marriage of King Arthur and the Lady Guinevere at Camelot, and at that place was extraordinary pomp and glory of circumstance. All the world was astir and in a great ferment of joy, for everybody was exceedingly glad that King Arthur was to have him a Queen. In preparation for that great occasion the town of Camelot was bedight very magnificently, for the stony street along which the Lady Guinevere must come to the royal castle of the King was strewn thick with fresh-cut rushes smoothly laid. Moreover it was in many places spread with carpets of excellent pattern such as might be fit to lay upon the floor of some goodly hall.
Likewise all the houses along the way were hung with fine hangings of woven texture interwoven with threads of azure and crimson, and everywhere were flags and banners afloat in the warm and gentle breeze against the blue sky, wherefore that all the world appeared to be alive with bright colors, so that when one looked adown that street, it was as though one beheld a crooked path of exceeding beauty and gayety stretched before him. Thus came the wedding-day of the King - bright and clear and exceedingly radiant. King Arthur sat in his hall surrounded by his Court awaiting news that the Lady Guinevere was coming thitherward. And it was about the middle of the morning when there came a messenger in haste riding upon a milk-white steed. And the raiment of that messenger and the trappings of his horse were all of cloth of gold embroidered with scarlet and white, and the tabard of the messenger was set with many jewels of various sorts so that he glistened from afar as he rode, with a singular splendor of appearance. So this herald-messenger came straight into the castle where the King abided waiting, and he said: "Arise, my lord King, for the Lady Guinevere and her Court draweth nigh unto this place." Upon this the King immediately arose with great joy, and straightway he went forth with his Court of Knights, riding in great state. And as he went down that marvellously adorned street, all the people shouted aloud as he passed by, wherefore he smiled and bent his head from side to side; for that day he was passing happy and loved his people with all his heart. Thus he rode forward unto the town gate, and out therefrom, and so came thence into the country beyond where the broad and well-beaten highway ran winding down beside the shining river betwixt the willows and the osiers. And, behold! King Arthur and those with him perceived the Court of the Princess where it appeared at a distance, wherefore they made great rejoicing and hastened forward with all speed.
And as they came nigh, the sun falling upon the apparels of silk and cloth of gold, and upon golden chains and the jewels that hung therefrom, all of that noble company that surrounded the Lady Guinevere her litter flashed and sparkled with surpassing splendor. For seventeen of the noblest knights of the King's Court, clad in complete armor, and sent by him as an escort unto the lady, rode in great splendor, surrounding the litter wherein the Princess lay. And the frame-work of that litter was of richly gilded wood, and its curtains and its cushions were of crimson silk embroidered with threads of gold. And behind the litter there rode in gay and joyous array, all shining with many colors, the Court of the Princess - her damsels in waiting, gentlemen, ladies, pages, and attendants. So those parties of the King and the Lady Guinevere drew nigh together until they met and mingled the one with the other. Then straightway King Arthur dismounted from his noble horse and, all clothed with royalty, he went afoot unto the Lady Guinevere's litter, whiles Sir Gawaine and Sir Ewaine held the bridle of his horse. Thereupon one of her pages drew aside the silken curtains of the Lady Guinevere's litter, and King Leodegrance gave her his hand and she straightway descended therefrom, all embalmed, as it were, in exceeding beauty. So King Leodegrance led her to King Arthur, and King Arthur came to her and placed one hand beneath her chin and the other upon her head and inclined his countenance and kissed her upon her smooth cheek - all warm and fragrant like velvet for softness, and without any blemish whatsoever. And when he had thus kissed her upon the cheek, all those who were there lifted up their voices in great acclaim, giving loud voice of joy that those two noble souls had thus met together.
Thus did King Arthur give welcome unto the Lady Guinevere and unto King Leodegrance her father upon the highway beneath the walls of the town of Camelot, at the distance of half a league from that place. And no one who was there ever forgot that meeting, for it was full of extraordinary grace and noble courtliness. Then King Arthur and his Court of Knights and nobles brought King Leodegrance and the Lady Guinevere with great ceremony unto Camelot and unto the royal castle, where apartments were assigned to all, so that the entire place was alive with joyousness. And when high noon had come, the entire Court went with great state and ceremony unto the cathedral, and there, surrounded with wonderful magnificence, those two noble souls were married by the Archbishop. And all the bells rang right joyfully, and all the people who stood without the cathedral shouted with loud acclaim, and lo! the King and the Queen came forth all shining, like unto the sun for splendor and like unto the moon for beauty. In the castle a great noontide feast was spread, and there sat thereat four hundred, eighty and six lordly and noble folk - kings, knights, and nobles - with queens and ladies in magnificent array. And near to the King and the Queen there sat King Leodegrance and Merlin, and Sir Ulfius, and Sir Ector the trustworthy, and Sir Gawaine, and Sir Ewaine, and Sir Kay, and King Ban, and King Pellinore and many other famous and exalted folk, so that no man had ever beheld such magnificent courtliness as he beheld at that famous wedding-feast of King Arthur and Queen Guinevere. And that day was likewise very famous in the history of chivalry, for in the afternoon the famous Round Table was established, and that Round Table was at once the very flower and the chiefest glory of King Arthur's reign.
For about mid of the afternoon the King and Queen, preceded by Merlin and followed by all that splendid Court of kings, lords, nobles and knights in full array, made progression to that place where Merlin, partly by magic and partly by skill, had caused to be builded a very wonderful pavilion above the Round Table where it stood. And when the King and the Queen and the Court had entered in thereat they were amazed at the beauty of that pavilion, for they perceived, an it were, a great space that appeared to be a marvellous land of Fay. For the walls were all richly gilded and were painted with very wonderful figures of saints and of angels, clad in ultramarine and crimson, and all those saints and angels were depicted playing upon various musical instruments that appeared to be made of gold. And overhead the roof of the pavilion was made to represent the sky, being all of cerulean blue sprinkled over with stars. And in the midst of that painted sky was an image, an it were, of the sun in his glory. And under foot was a pavement all of marble stone, set in squares of black and white, and blue and red, and sundry other colors. In the midst of the pavilion was a Round Table with seats thereat exactly sufficient for fifty persons, and at each of the fifty places was a chalice of gold filled with fragrant wine, and at each place was a paten of gold bearing a manchet of fair white bread. And when the King and his Court entered into the pavilion, lo! music began of a sudden for to play with a wonderful sweetness. Then Merlin came and took King Arthur by the hand and led him away from Queen Guinevere. And he said unto the King, "Lo! this is the Round Table." 
Then King Arthur said, "Merlin, that which I see is wonderful beyond the telling." After that Merlin discovered unto the King the various marvels of the Round Table, for first he pointed to a high seat, very wonderfully wrought in precious woods and gilded so that it was exceedingly beautiful, and he said, "Behold, lord King, yonder seat is hight the Seat Royal, and that seat is thine for to sit in." And as Merlin spake, lo! there suddenly appeared sundry letters of gold upon the back of that seat, and the letters of gold read the name of King Arthur. And Merlin said, "Lord, yonder seat may well be called the centre seat of the Round Table, for, in sooth, thou art indeed the very centre of all that is most worthy of true knightliness. Wherefore that seat shall be called the centre seat of all the other seats." Then Merlin pointed to the seat that stood opposite to the Seat Royal, and that seat also was of a very wonderful appearance as afore told in this history. And Merlin said unto the King: "My lord King, that seat is called the Seat Perilous, for no man but one in all this world shall sit therein, and that man is not yet born upon the earth. And if any other man shall dare to sit therein that man shall either suffer death or a sudden and terrible misfortune for his temerity. Wherefore that seat is called the Seat Perilous." "Merlin," quoth the King, "all that thou tellest me passeth the bound of understanding for marvellousness. Now I do beseech thee in all haste for to find forthwith a sufficient number of knights to fill this Round Table so that my glory shall be entirely achieved." Then Merlin smiled upon the King, though not with cheerfulness, and said, "Lord, why art thou in such haste? Know that when this Round Table shall be entirely filled in all its seats, then shall thy glory be entirely achieved and then forthwith shall thy day begin for to decline.
For when any man hath reached the crowning of his glory, then his work is done and God breaketh him as a man might break a chalice from which such perfect ichor hath been drunk that no baser wine may be allowed to defile it. So when thy work is done and ended shall God shatter the chalice of thy life." Then did the King look very steadfastly into Merlin's face, and said, "Old man, that which thou sayest is ever of great wonder, for thou speakest words of wisdom. Ne'theless, seeing that I am in God His hands, I do wish for my glory and for His good will to be accomplished even though He shall then entirely break me when I have served His will." "Lord," said Merlin, "thou speakest like a worthy king and with a very large and noble heart. Ne'theless, I may not fill the Round Table for thee at this time. For, though thou hast gathered about thee the very noblest Court of Chivalry in all of Christendom, yet are there but two and thirty knights here present who may be considered worthy to sit at the Round Table." "Then, Merlin," quoth King Arthur, "I do desire of thee that thou shalt straightway choose me those two and thirty." "So will I do, lord King," said Merlin. Then Merlin cast his eyes around and lo! he saw where King Pellinore stood at a little distance. Unto him went Merlin and took him by the hand. "Behold, my lord King," quoth he. "Here is the knight in all the world next to thyself who at this time is most worthy for to sit at this Round Table. For he is both exceedingly gentle of demeanor unto the poor and needy and at the same time is so terribly strong and skilful that I know not whether thou or he is the more to be feared in an encounter of knight against knight." Then Merlin led King Pellinore forward and behold! upon the high seat that stood upon the left hand of the Royal Seat there appeared of a sudden the name of King Pellinore. And the name was emblazoned in letters of gold that shone with extraordinary lustre.
And when King Pellinore took his seat, great and loud acclaim long continued was given him by all those who stood round about. Then after that Merlin had thus chosen King Arthur and King Pellinore he chose out of the Court of King Arthur the following knights, two and thirty in all, and these were the knights of great renown in chivalry who did first establish the Round Table. Wherefore they were surnamed "The Ancient and Honorable Companions of the Round Table." To begin, there was Sir Gawaine and Sir Ewaine, who were nephews unto the King, and they sat nigh to him upon the right hand; there was Sir Ulfius (who held his seat but four years and eight months unto the time of his death, after which Sir Geheris - who was esquire unto his brother, Sir Gawaine - held that seat); and there was Sir Kay the Seneschal, who was foster brother unto the King; and there was Sir Baudwain of Britain (who held his seat but three years and two months until his death, after the which Sir Agravaine held that seat); and there was Sir Pellias and Sir Geraint and Sir Constantine, son of Sir Caderes the Seneschal of Cornwall (which same was king after King Arthur); and there was Sir Caradoc and Sir Sagramore, surnamed the Desirous, and Sir Dinadan and Sir Dodinas, surnamed the Savage, and Sir Bruin, surnamed the Black, and Sir Meliot of Logres, and Sir Aglaval and Sir Durnure, and Sir Lamorac (which three young knights were sons of King Pellinore), and there was Sir Griflet and Sir Ladinas and Sir Brandiles and Sir Persavant of Ironside, and Sir Dinas of Cornwall, and Sir Brian of Listinoise, and Sir Palomides and Sir Degraine and Sir Epinogres, the son of the King of North Umberland and brother unto the enchantress Vivien, and Sir Lamiel of Cardiff, and Sir Lucan the Bottler and Sir Bedevere his brother (which same bare King Arthur unto the ship of Fairies when he lay so sorely wounded nigh unto death after the last battle which he fought).
These two and thirty knights were the Ancient Companions of the Round Table, and unto them were added others until there were nine and forty in all, and then was added Sir Galahad, and with him the Round Table was made entirely complete. Now as each of these knights was chosen by Merlin, lo! as he took that knight by the hand, the name of that knight suddenly appeared in golden letters, very bright and shining, upon the seat that appertained to him. But when all had been chosen, behold! King Arthur saw that the seat upon the right hand of the Seat Royal had not been filled, and that it bare no name upon it. And he said unto Merlin: "Merlin, how is this, that the seat upon my right hand hath not been filled, and beareth no name?" And Merlin said: "Lord, there shall be a name thereon in a very little while, and he who shall sit therein shall be the greatest knight in all the world until that the knight cometh who shall occupy the Seat Perilous. For he who cometh shall exceed all other men in beauty and in strength and in knightly grace." And King Arthur said: "I would that he were with us now." And Merlin said: "He cometh anon." Thus was the Round Table established with great pomp and great ceremony of estate. For first the Archbishop of Canterbury blessed each and every seat, progressing from place to place surrounded by his Holy Court, the choir whereof singing most musically in accord, whiles others swung censers from which there ascended an exceedingly fragrant vapor of frankincense, filling that entire pavilion with an odor of Heavenly blessedness. And when the Archbishop had thus blessed every one of those seats, the chosen knight took each his stall at the Round Table, and his esquire came and stood behind him, holding the banneret with his coat-of-arms upon the spear-point above the knight's head. 
And all those who stood about that place, both knights and ladies, lifted up their voices in loud acclaim. Then all the knights arose, and each knight held up before him the cross of the hilt of his sword, and each knight spake word for word as King Arthur spake. And this was the covenant of their Knighthood of the Round Table: That they would be gentle unto the weak; that they would be courageous unto the strong; that they would be terrible unto the wicked and the evil-doer; that they would defend the helpless who should call upon them for aid; that all women should be held unto them sacred; that they would stand unto the defence of one another whensoever such defence should be required; that they would be merciful unto all men; that they would be gentle of deed, true in friendship, and faithful in love. This was their covenant, and unto it each knight sware upon the cross of his sword, and in witness thereof did kiss the hilt thereof. Thereupon all who stood thereabouts once more gave loud acclaim. Then all the knights of the Round Table seated themselves, and each knight brake bread from the golden paten, and quaffed wine from the golden chalice that stood before him, giving thanks unto God for that which he ate and drank. Thus was King Arthur wedded unto Queen Guinevere, and thus was the Round Table established.
Known as the "weeping fig" due to the tendency of its branches to bend downward, the ficus is a popular indoor houseplant. Somewhat of a challenge to care for, the ficus sheds a lot of leaves when first brought into a new environment as it adjusts to its new habitat. With proper care, though, the ficus can have a long and healthy life inside. Proper pruning helps maintain the ficus and keep it in optimal health.

Prune to control the size of the ficus. Without pruning, the ficus may grow to several feet, which is often too large for an indoor plant. Use shears to trim back the length of the branches one at a time.

Remove dead or dry wood and limbs that have shown weak growth to promote fresh growth. Trim the limbs right at the branch collar, the ring at the base of each branch that appears slightly larger than the width of the branch. When left in place, the collar will automatically shut down sap flow, preventing the spread of disease through the plant.

Pick off yellow leaves that haven't fallen off on their own. These leaves are already dead, and removing them will prompt the growth of new foliage.

Cut long branches that veer off toward the light all the way back to the central branch. This will maintain one central branch and help the tree keep its vertical shape.

Cut off a third of the ficus canopy, distributing the cuts evenly all the way around the plant to keep it even and allow light and air in on all sides. Ficus plants thrive with a lot of light and, if the leaves are too dense, the inner branches will die off.
The Nobel Assembly at the Karolinska Institute announced that HHMI investigators Randy W. Schekman and Thomas C. Südhof, and Yale's James E. Rothman are the recipients of the 2013 Nobel Prize in Physiology or Medicine for their discoveries of machinery regulating vesicle traffic, a major transport system in our cells. - The 2013 Nobel Prize in Physiology or Medicine is awarded for discoveries of machinery regulating vesicle traffic, a major transport system in our cells. - Through their discoveries, Rothman, Schekman and Südhof have revealed the exquisitely precise control system for the transport and delivery of cellular cargo. - Disturbances in this transport system have deleterious effects and contribute to conditions such as neurological diseases, diabetes, and immunological disorders. The Nobel Assembly at the Karolinska Institute announced today that Randy W. Schekman, a Howard Hughes Medical Institute (HHMI) investigator at the University of California, Berkeley, Thomas C. Südhof, an HHMI investigator at Stanford University, and James E. Rothman of Yale University are the recipients of the 2013 Nobel Prize in Physiology or Medicine for their discoveries of machinery regulating vesicle traffic, a major transport system in our cells. According to the Nobel Assembly, this year's Nobel Prize in Physiology or Medicine honors three scientists who have solved the mystery of how the cell organizes its transport system. Each cell is a factory that produces and exports molecules. For instance, insulin is manufactured and released into the blood and chemical signals called neurotransmitters are sent from one nerve cell to another. These molecules are transported around the cell in small packages called vesicles. The three Nobel Laureates have discovered the molecular principles that govern how this cargo is delivered to the right place at the right time in the cell. Schekman discovered a set of genes that were required for vesicle traffic. 
Rothman unraveled protein machinery that allows vesicles to fuse with their targets to permit transfer of cargo. Südhof revealed how signals instruct vesicles to release their cargo with precision. Through their discoveries, Rothman, Schekman and Südhof have revealed the exquisitely precise control system for the transport and delivery of cellular cargo. Disturbances in this system have deleterious effects and contribute to conditions such as neurological diseases, diabetes, and immunological disorders. Randy W. Schekman Traffic inside a cell is as complicated as rush hour near any metropolitan area. But drivers know how to follow the signs and roadways to reach their destinations. How do different cellular proteins "read" molecular signposts to find their way inside or outside of a cell? For the past three decades, Randy Schekman has been characterizing the traffic drivers that shuttle cellular proteins as they move in membrane-bound sacs, or vesicles, within a cell. His detailed elucidation of cellular travel patterns has provided fundamental knowledge about cells and has enhanced understanding of diseases that arise when bottlenecks impede some of the protein flow. Schekman has been an HHMI investigator since 1991. He also serves as editor-in-chief of the open access research journal eLife. His work earned him one of the most prestigious prizes in science, the Albert Lasker Award for Basic Medical Research, which he shared with James Rothman in 2002. Schekman's path to award-winning researcher began with a youthful enthusiasm for science and math, which he attributes to his father, an engineer who helped develop the first online program for real-time stock quotes. High school science fairs—and winning them—further whetted his appetite for competitive science. Biology's power hit him more personally, though, when his teenage sister died of leukemia. He considered pursuing medical school as an undergraduate at the University of California, Los Angeles. 
But after spending his junior year in a laboratory at the University of Edinburgh, his path to graduate school became set. He obtained a Ph.D. in biochemistry at Stanford in the laboratory of Arthur Kornberg, who won the Nobel Prize in 1959 for identifying a key enzyme in DNA synthesis. Schekman first became interested in how proteins move within cells during a postdoctoral fellowship between 1974 and 1976 with John Singer, who was studying the outer membranes of mammalian cells. At the time, though, scientists couldn't easily study the steps of vesicle movement in mammalian cells growing in culture. So Schekman, who moved in 1976 to the University of California, Berkeley, as an independent investigator, decided to use yeast, a one-celled microorganism, to determine how vesicles containing proteins move inside and outside the cell. Scientists can easily genetically manipulate yeast, which have membrane-bound organelles similar to those of higher organisms. Organelles, such as mitochondria or the Golgi apparatus, are structures within cells that perform specified functions. When Schekman began his yeast studies, scientists only had a general sense of the cellular traffic patterns that proteins follow: Ribosomes manufacture proteins, which enter the endoplasmic reticulum, a membranous network inside the cell. Vesicles carrying proteins pinch off from the endoplasmic reticulum and travel to the Golgi apparatus, which further processes the proteins for internal or external use. What Schekman, using genetic methods, and Rothman, with biochemical approaches, working independently did, was dissect in meticulous detail the molecular underpinnings behind vesicle formation, selection of cargo, and movement to the correct organelle or path outside the cell. Ultimately, he identified 50 genes involved in vesicle movement and determined the order and role each of the different genes' protein products play, step by step, as they shuttle cargo-laden vesicles in the cell. 
One of the most important genes he found, Schekman says, is the SEC61 gene, which encodes a channel through which secretory proteins under construction pass into the endoplasmic reticulum lumen. When this gene is mutant, proteins fail to enter the secretion assembly line. Another significant set of genes he discovered encode different coat proteins that allow vesicle movement from the endoplasmic reticulum and from the Golgi. Although Schekman's research was done in yeast, follow-up studies confirmed that higher organisms, such as humans, share the majority of the genes in the yeast secretory pathway. Such knowledge provided a foundation for understanding normal human cell biology and disease states. In fact, as the study of the genetics of mammalian cells has become easier, Schekman has been characterizing human diseases that arise from secretory pathway problems. He has identified the structural basis of a rare craniofacial disease that disrupts the construction of a coat protein complex essential for transport vesicle formation. He also is studying whether the accumulation in the brains of Alzheimer's disease patients of the protein amyloid is due to a secretion pathway roadblock. While many steps in vesicular trafficking are now known, some have evaded discovery. Schekman continues to look for receptors in the endoplasmic reticulum membrane that find appropriate protein cargo for transport to the Golgi. He is also trying to identify molecules that help protein-laden vesicles move from the Golgi out of the cell. Schekman, with as much passion for science today as he has had throughout his career, is confident he can persuade Nature to reveal undiscovered routes in her traffic patterns. Thomas C. Südhof For people to have ideas, to experience happiness, or to remember the lyrics of a song, the neurons in their brains must communicate. This communication occurs in a manner similar to a relay racer passing a baton from one runner to the next. 
When stimulated, a presynaptic neuron releases a "baton" in the form of a chemical messenger—called a neurotransmitter—across a synapse, a small gap between the cells in the brain. Then, a postsynaptic neuron absorbs the message and conveys it to subsequent neurons. For decades, the majority of neuroscientists focused their research on postsynaptic neurons and their role in learning and memory. But throughout his career, Thomas Südhof has studied the presynaptic neuron. His collective findings have contributed to much of our current understanding of how a presynaptic neuron releases neurotransmitters and, more recently, how synapses form. His work also has revealed the role of presynaptic neurons in psychiatric illnesses, such as autism. Südhof has been an HHMI investigator since 1986. Born in Germany, Südhof obtained a medical degree from the University of Göttingen in 1982. He got a taste for neuroscience when he performed research for his doctoral degree at the Max-Planck-Institute for Biophysical Sciences under Victor P. Whittaker, a pioneer in neurochemistry. To expand his knowledge of biochemistry and molecular biology, Südhof then started to work in 1983 as a postdoctoral fellow at the laboratories of Michael Brown and Joseph Goldstein at the University of Texas Southwestern Medical Center at Dallas. There, Südhof cloned the gene for the receptor of LDL (the low-density lipoprotein), a particle in the blood that transports cholesterol. Moreover, his work identified the sequence that mediates the regulation of the LDL receptor gene expression by cholesterol. While Südhof was in their laboratories, Brown and Goldstein won the Nobel Prize in Physiology or Medicine in 1985 for their discoveries related to the regulation of cholesterol metabolism. Soon after, in 1986, UT Southwestern offered Südhof the opportunity to start his own laboratory. He began his inquiry into the presynaptic neuron. 
At the time, what scientists mainly knew about the presynaptic neuron is that calcium ions stimulate the release of neurotransmitters from membrane-bound sacs called vesicles into the synapse, in a process that takes less than a millisecond. This release involved fusion of the vesicles with the plasma membrane, but how such fusion occurs, and how it is triggered by calcium, was unknown. Südhof decided to try to answer these questions. His work revealed that fusion of the synaptic vesicles, the small sacs filled with neurotransmitters, involves an obligatory catalytic protein called Munc18-1 that acts in conjunction with a protein machine made up of so-called SNARE proteins that were described by others and provide the muscle to the brawn of Munc18-1. Strikingly, the function of Munc18-like and SNARE-type proteins generally applies to most fusion reactions in biology, not only to synapses. More importantly, Südhof's work shows how calcium controls fusion at the synapse: He showed that calcium binds to synaptotagmin proteins, thereby stimulating synaptotagmins to trigger rapid neurotransmitter release. Again, Südhof's work revealed that synaptotagmins also act as universal calcium sensors in non-neuronal cells, for example in hormone release. Furthermore, his work described how a complex of organizing proteins, containing RIM and Munc13 proteins as central components, embeds the fusion machinery into the presynaptic nerve terminal. The RIM/Munc13 complex recruits and prepares vesicles for fusion, and tethers calcium channels in the plasma membrane next to the release sites to allow rapid coupling of neuronal excitation to neurotransmitter release. In more recent studies that intensified after Südhof moved to Stanford in 2008, Südhof's work examined how pre- and postsynaptic proteins form physical connections during synapse formation.
Specifically, he identified proteins on presynaptic neurons called neurexins, and proteins on the postsynaptic neuron called neuroligins and LRRTMs, that come together and bind to each other across the synaptic cleft. There are many types of neurexins and neuroligins, and the pairing of any two helps create the properties of a synapse and the wide variability in the types of connections in the brain, Südhof says. The coming together of neurexin and neuroligin at the synapse is very important for normal brain function. Alterations in these proteins impair the brain's chemistry, as uncovered in recent human genetics studies showing that mutations in neurexin or neuroligin genes can cause schizophrenia or autism. Südhof has shown that these mutations, when introduced into mice, change the properties of synapses and impair neurotransmission. His current studies aim to clarify how neurexins, neuroligins, and other proteins control synapse formation and synapse function, and how they mediate synapse remodeling during learning or other adaptive changes of the brain. Progress in such studies will help our understanding of how the brain is wired normally, and how such wiring becomes impaired in neuropsychiatric diseases.
What is the 'Shapiro-Keyser' cyclone model? During cases of rapid cyclogenesis (see the Glossary), the long accepted 'Norwegian' frontal/cyclone development model is not appropriate. M.A. Shapiro and D. Keyser, in a paper published in 1990, proposed an alternative which has gained widespread acceptance. I am grateful to Dr. David Schultz (NSSL) for permission to quote the following from an article written (with co-author H. Wernli) for the 'Mariners Weather Log', which to my mind is an excellent summary of the differences between the 'classical' frontal depression model and that proposed for rapid cyclogenesis events ... "The Norwegian cyclone model, so named to honor the Norwegian meteorologists (e.g. Bjerknes, Bergeron and Solberg) who first conceptualised the typical life-cycle of midlatitude cyclones in the 1910's and 1920's, presents the evolution of a cyclone from an incipient frontal wave with cold and warm fronts, to a deepening cyclone with a narrowing warm sector as the cold front rotates around the cyclone faster than the warm front, and finally to a mature cyclone with an occluded front. Typically, a Norwegian cyclone is oblong, orientated roughly north-south with the cold front more intense and longer than the weak and "stubby" warm front. The Shapiro-Keyser cyclone model is named after the authors of the study that first presented this conceptual model of the frontal structure in some marine cyclones. As with the Norwegian cyclone model, an incipient cyclone develops cold and warm fronts, but in this case, the cold front moves roughly perpendicular to the warm front such that the fronts never meet, the so-called 'T-bone'. Also, a weakness appears along the poleward portion of the cold front near the low center, the so-called 'frontal fracture' and a back-bent front forms behind the low center. (In the final stage), colder air encircles warmer air near the low center, forming a warm seclusion. 
Typically, the Shapiro-Keyser cyclone is oblong, elongated east-west along the strong warm front". Schultz & Wernli then go on to state (I paraphrase) ... an important factor in determining which evolution will be preferred ... is the nature of the large-scale (i.e. mid/upper tropospheric) flow. NWP experiments have indicated significant sensitivity to the profile of the wind speed across the jet flow and other studies have indicated that the along-jet variations of wind speed can be important. Cyclones embedded within diffluent flow (e.g. jet-exit regions) tend to evolve like the Norwegian cyclone model, whereas cyclones embedded within confluent flow (e.g. jet-entrance regions) tend to evolve like the Shapiro-Keyser cyclone model.
<urn:uuid:f9809667-bd26-4ea1-bd72-cf2cbf15b999>
CC-MAIN-2016-26
http://weatherfaqs.org.uk/node/98
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396027.60/warc/CC-MAIN-20160624154956-00021-ip-10-164-35-72.ec2.internal.warc.gz
en
0.913324
607
2.890625
3
These pictures show the heart from the front. The right side of the heart is on the left side of the heart pictures. The left side of the heart is on the right side of the pictures. Your heart has four separate chambers that pump blood. The chambers are called the right atrium, right ventricle, left atrium, and left ventricle. The right and left sides of the heart are separated by a muscular wall that prevents blood without oxygen from mixing with blood that has oxygen. The heart also has valves that separate the chambers and connect to major blood vessels. Your heart is divided into two separate pumping systems, the right side and the left side. The right side of your heart receives oxygen-poor blood from your veins and pumps it to your lungs, where the blood picks up oxygen and gets rid of carbon dioxide. The left side of your heart receives oxygen-rich blood from your lungs and pumps it through your arteries to the rest of your body. Blood travels through your heart and lungs in four steps:
- The right atrium receives oxygen-poor blood from the body and pumps it through the tricuspid valve to the right ventricle.
- The right ventricle pumps the oxygen-poor blood through the pulmonary valve to the lungs.
- The left atrium receives oxygen-rich blood from the lungs and pumps it through the mitral valve to the left ventricle.
- The left ventricle pumps the oxygen-rich blood through the aortic valve out to the rest of the body.
By Healthwise Staff. Primary Medical Reviewer: Rakesh K. Pai, MD, FACC - Cardiology, Electrophysiology. Specialist Medical Reviewer: Stephen Fort, MD, MRCP, FRCPC - Interventional Cardiology. The Health Encyclopedia contains general health information. Not all treatments or services described are covered benefits for Kaiser Permanente members or offered as services by Kaiser Permanente. 
For a list of covered benefits, please refer to your Evidence of Coverage or Summary Plan Description. For recommended treatments, please consult with your health care provider.
<urn:uuid:06318de5-ea09-4ecf-ac05-4dd72e0ce41d>
CC-MAIN-2016-26
https://healthy.kaiserpermanente.org/static/health-encyclopedia/en-us/kb/aa54/865/aa54865.shtml
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397744.64/warc/CC-MAIN-20160624154957-00042-ip-10-164-35-72.ec2.internal.warc.gz
en
0.867145
476
2.890625
3
Google.org Hopes to Solve Energy, Health, and Environmental Issues Google.org is Google's philanthropic arm. They've focused mainly on environmental issues, but today they announced how they are expanding their vision. Google announced that the organization will give out more than $26 million in new grants to organizations and businesses. The grants go toward solving the problems of poverty, health, and the environment. Here are the ways Google is investing in education and development to improve social and economic conditions in the world: - RE<C – Funding research and development of cleaner, cheaper energy sources like solar and wind energy. - RechargeIT – Investing $500,000 to $2 million in funding to promote the development of affordable plug-in and hybrid vehicles for the mass market. - Funding research that improves health and stops the spread of diseases and addresses malaria, polio, blindness, etc. - Improving education, health, water, and sanitation in poor countries (like working with Pratham to measure basic reading and math skills of children in rural India). - Investing in small and medium-sized businesses, through microlending and other initiatives, like supporting entrepreneurs in Ghana and Tanzania. Background on Google.org As of January 2008, Google.org and the Google Foundation have committed more than $75 million for grants and investments. Google.org was formed when Google went public and about 1 percent of Google shares were set aside to fund philanthropic goals. In 2006, Google converted 300,000 shares into about $90 million and set up Google.org, which is a nonprofit organization. Even though critics say Google should spend more, or question their motives, they are on par with what many companies give.
<urn:uuid:f6f70733-413e-4395-b007-8d6578eb776d>
CC-MAIN-2016-26
http://www.marketingpilgrim.com/2008/01/googleorg-hopes-to-solve-energy-health-and-environmental-issues.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398209.20/warc/CC-MAIN-20160624154958-00053-ip-10-164-35-72.ec2.internal.warc.gz
en
0.940551
357
2.59375
3
The Civil War was the deadliest and costliest war ever fought on American soil, with losses of life reaching well over 600,000 out of some 2.4 million soldiers. Millions more soldiers and non-soldiers were injured. The South was devastated when it was all said and done. The people, their property, and their territories would never be the same again. For decades before the Civil War began, tensions had been high between the Southern states and the federal government over states' rights versus federal authority, slavery, and westward expansion. The final blow came when Abraham Lincoln, a man who vowed to end slavery in the South, was elected president in 1860. Immediately, seven states seceded from the Union, and they soon formed the Confederate States of America. The mid-nineteenth century saw remarkable growth in the United States. The northern states were growing in industry with little farming, while the South grew in farming with little industry. However, the southern states needed slave labor to grow certain crops like cotton and tobacco. As westward expansion continued in the United States, sentiment against slavery spread into the new western territories, and the southern states felt that slavery was going to be abolished, which would destroy the mainstay of their economy. In 1857, in the famous Dred Scott case, the Supreme Court ruled that slavery was indeed legal in the United States, but in 1859, the famed abolitionist John Brown and his followers attacked Harpers Ferry, Virginia, and attempted to seize an arsenal there. After this attack, the southern states began to think that the northern states were growing more and more opposed to slavery, and the final blow was when Abraham Lincoln won the presidency of the federal government. After the first seven states seceded, within months of Lincoln's presidential victory, the federal government tried what it could to get the states back into the Union. 
However, these attempts were for naught, as neither side could arrive at an agreeable conclusion. Fort Sumter was a federally held fort that guarded the entrance to Charleston Harbor. South Carolina, one of the first seven states to secede, ordered the federal government out of the fort, but when the federal government attempted to send supplies to the fort, the rebel forces fired on it. These were the first shots fired and essentially started the Civil War. There were only two casualties during this 34-hour battle, and those came when a gun exploded during the surrender ceremonies, killing two Union soldiers. Confederate Army General Robert E. Lee finally surrendered to Union Army General Ulysses S. Grant at Appomattox, Virginia, in 1865, thus ending the Civil War.
<urn:uuid:bef8d7b3-97b6-4155-b4ff-2c9a52df52c6>
CC-MAIN-2016-26
http://www.bellaonline.com/ArticlesP/art181867.asp
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00053-ip-10-164-35-72.ec2.internal.warc.gz
en
0.983064
543
3.984375
4
French Verbs: Watching Your Mood Verbs are divided into various moods. Linguistically, the mood of a verb is a way of expressing oneself, or a way of speaking. A mood shows the speaker's attitude toward an event. The French language has seven such moods that are divided into two categories: personal moods and impersonal moods. Making it personal The verbs in the personal moods are conjugated in order to correspond to the subject pronouns. They are divided into four groups: - The indicative mood (which is the mood that's used most often) indicates that the speaker is talking about a fact, or something that's happening, will happen, or has happened. - The subjunctive mood (which you use more often in French than in English) is the mood of doubt, uncertainty, emotion, will, and command. - The imperative mood expresses an order, a request, or a directive. The imperative mood uses the present tense of most verbs and the conjugations of the following three subject pronouns: tu, nous, and vous. However, you never use the subject pronouns in an imperative construction. - The conditional mood appears in a hypothetical sentence where you place the conditional form of the verb in the result clause. For example, you may say Si j'avais de l'argent, je voyagerais. (If I had money, I would travel.) You may also use the conditional to make polite requests or suggestions. Don't take it so personally: The impersonal mood Unlike the personal moods, the impersonal mood verbs aren't conjugated because they don't correspond to any particular subject pronoun. These impersonal mood verbs include the infinitive, the gerund, and the participle: - The infinitive mood is often used as a noun. An example is in the French saying Vouloir, c'est pouvoir, which translates to Where there's a will, there's a way. Literally, it means To want to is to be able to. 
- The gerund can be used as an adverb, like it is in the sentence On réussit à la vie en travaillant dur, meaning One succeeds in life by working hard. - The participle can be used as an adjective, as in the example Les devoirs finis, ils ont joué au basket, which means Once the homework was finished, they played basketball.
<urn:uuid:e351dc09-0aea-4ed3-8c29-878aeda61b0a>
CC-MAIN-2016-26
http://www.dummies.com/how-to/content/french-verbs-watching-your-mood.navId-323302.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00119-ip-10-164-35-72.ec2.internal.warc.gz
en
0.940541
515
3.484375
3
By Fadila Memisevic I believe the way one person talks to another is important, because if someone is yelling at me, then it ruins my whole day. Since I spend a lot of my time at school, I am influenced by the adults there and how they talk to one another. If a teacher has a problem with another teacher and a student sees how polite they are to each other, it will make an impression on the student; the student will learn by example. At a store, I saw a woman accidentally get charged for two packages of gum when she only bought one. The customer said, "I think you charged me twice even though I only bought one." She was calm and polite. In turn, the cashier responded respectfully and apologized for her error. Meanwhile, other people witnessed how calmly and respectfully both adults handled the situation. It was a lesson in civility. — Fadila Memisevic is a senior at Nottingham High School
<urn:uuid:61f7c650-5165-461a-bbd3-4b2213436718>
CC-MAIN-2016-26
http://blog.syracuse.com/voices/2011/03/calm_polite_confrontation_give.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.978589
201
2.609375
3
Analysis: Form and Meter Loose Villanelle Followed by Free Verse So, there are two parts to this poem and they don't really look anything alike, as far as form is concerned. The first part is a loose villanelle form, the second part is in free verse. Part 1: Villanelle The first bit of the poem – titled "The Cane Fields" – is, in fact, an unrhymed villanelle. (Villanelles are a kind of poem that, traditionally, have a pretty strict ABA / ABA / ABA rhyme scheme (think "cat/ball/bat flat/hall/rat"). So what we're saying here is that Dove is breaking some of the rules. But, hey, there are no poem police here.) For a good explanation of the traditional villanelle form, and how it's been used by more contemporary poets (the form kicked off in the 1800s, so it's pretty old), see our guide to "One Art," by Elizabeth Bishop. OK, now that you've read that, there's not a whole lot more to say except for the fact that Dove has two refrains – "there is a parrot imitating spring" and "out of the swamp the cane appears" – and that she doesn't bother to rhyme her stanzas. The non-rhyming fact is a curious one. One of the effects that it has on the poem is that the form sort of sneaks up on you. It's subtler. It's like you're reading, and then you suddenly think, "Hey! I've seen this line before," and then you go back looking for the form. It's a quieter gesture than a strictly rhyming villanelle. Part 2: Free Verse Now about the second part, titled "The Palace." Dove has said that the second bit was originally supposed to be a sestina, which is a really wacky form of poetry involving six-line stanzas and lots of rules (source). But since the second part isn't actually a sestina, we won't go into it here. Instead, Dove writes in free verse (no form or rhyme scheme) but repeats some of the lines from the first part, to keep the poem tied together and feeling kind of driven and insistent. Her lines are relatively tightly controlled. 
They're all about the same length and form a neat rectangle on the page, and they range from about six to thirteen syllables. They don't have a particularly set rhythm – no iambic pentameter here. So this is half-form, half-not poem, and overall it's pretty loose. We think the important part here about the form is the repetition that we've mentioned. It's a haunting, of sorts – one that Dove is playing around with form to emphasize without being too overbearing about it.
<urn:uuid:b1c8015b-dad3-440f-8579-9093ba2f1480>
CC-MAIN-2016-26
http://www.shmoop.com/parsley-dove/form-and-meter.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402699.36/warc/CC-MAIN-20160624155002-00071-ip-10-164-35-72.ec2.internal.warc.gz
en
0.976022
605
2.578125
3
Moll Flanders Theme of Authorship Moll Flanders is focused both on telling a good story and on making that good story seem true. We're supposed to take Moll at her word and believe wholeheartedly in all of her adventures, no matter how outlandish. Plus, the author of the Preface takes pains to remind us how true the story is (despite a few tweaks here and there). But hold your horses, folks. This is a novel. It's completely made up. Moll is not real, and nor are her adventures. Even the author of the preface isn't real, for how could he have met Moll and written her story? All these issues of truth and authorship beg the question: why does this novel go so out of its way to convince us of its truth? Why not just trust us readers to enjoy the ride? Questions About Authorship - Did you find this story of a woman's life, as written by a man, credible? We know it's fiction, but does it seem plausible at all? - Why do you think Moll makes so many asides about wishing she'd given up crime earlier, or slips in lots of phrases about how she should have repented? Does this relate to the Preface somehow? - Is Moll a good choice for the author of her own story? Why or why not? Why do you think the novel is written in the first person, and not written from the point of view of the author of the Preface? - Why do you think the author of the Preface makes such an effort to convince us that the story is true? How does this change how you read the story, if it does at all? Chew on This The whole reason the story is told in the first person is so we don't worry too much about Moll's life being in danger. We know she'll live in the end, because she's alive to tell her tales. The fact that this book is really written by a man gives the voice of Moll Flanders an authority that female authors lacked at the time. If Defoe felt this tale was worth telling, well then surely it's worth reading.
<urn:uuid:940da563-8a71-43b9-b1cf-b3468e316125>
CC-MAIN-2016-26
http://www.shmoop.com/moll-flanders/authorship-theme.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396959.83/warc/CC-MAIN-20160624154956-00135-ip-10-164-35-72.ec2.internal.warc.gz
en
0.980267
452
2.703125
3