More example sentences:
- They then managed to make their way to Europe and its modern ancestor has stemmed such varieties as the zucchini or courgette.
- For the main course, tuck into some of Greece's favourite vegetables, tomatoes, spinach and zucchini or courgettes.
- Cut turkey into even, bite-sized pieces; slice courgettes and corn into thick slices, and remove calyx from tomatoes.

Origin: 1930s: from French, diminutive of courge 'gourd', from Latin cucurbita.
Matsushima (松島町 Matsushima-machi) is a town located in Miyagi District, Miyagi Prefecture, Japan. As of June 2014, the town had an estimated population of 14,642 and a population density of 271 persons per km². The total area was 54.04 km². The mayor is Tateo Ohashi, in office since April 2007. Matsushima is most famous as the location of Matsushima Bay, one of the Three Views of Japan, and is also the site of the Zuigan-ji, one of the most famous Zen temples in Tōhoku, as well as Entsū-in and Kanrantei. Matsushima is located in east-central Miyagi Prefecture, with Matsushima Bay to the east. The town's highest point is Mount Danyama, with a height of 178.0 meters. The area of present-day Matsushima was part of ancient Mutsu Province, and has been settled since at least the Jomon period. The Daikigakoi Shell Mound is one of the largest shell middens to have been discovered in Japan. With the establishment of Tagajō in the Nara period, Matsushima was part of the Yamato colonization area in the region. During the Sengoku period, the area was contested by various samurai clans before it came under the control of the Date clan of Sendai Domain during the Edo period, under the Tokugawa shogunate. The modern village of Matsushima was established on June 1, 1889 with the establishment of the municipalities system. It was raised to town status on December 16, 1963.
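As a quick sanity check, the quoted density figure follows directly from the population and area figures; a throwaway sketch (not part of the original article):

```python
# Recompute population density from the figures quoted above.
population = 14_642   # estimated population, June 2014
area_km2 = 54.04      # total area in square kilometres

density = population / area_km2
print(round(density))  # 271 persons per km², matching the article
```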
- East Japan Railway Company (JR East) – Tōhoku Main Line
- East Japan Railway Company (JR East) – Senseki Line
- Port of Matsushima
- Matsushima Kankō Yūran Line

Japanese sister cities:
- August 1, 1987: Kisakata, Akita
- October 16, 1988: Kamiamakusa, Kumamoto (formerly known as Matsushima)

Noted people from Matsushima:
- Jun Senoue – musician
Volume 12, Number 1

Mental Health Problems of Addicted Mothers Linked to Infant Care and Development

By Robert Mathias, NIDA NOTES Staff Writer

Women who abuse cocaine while they are pregnant often share many characteristics, such as addiction, poverty, and low literacy levels. However, they differ in the quality of care they give their children, a NIDA-funded study says. In fact, how well cocaine-addicted mothers care for their infants appears to be strongly influenced by the type and severity of psychological problems these women suffer from, according to the study. The quality of care these mothers provide is important because it affects the cognitive development of their cocaine-exposed infants, the study notes. "Self-reported symptoms of psychological problems among addicted women really did affect maternal caregiving," says Dr. Judy Howard of the University of California at Los Angeles (UCLA), who directed the study. This finding indicates that drug abuse treatment programs should work on other issues, such as mental health problems, in addition to helping addicted mothers become abstinent, Dr. Howard says. The UCLA study of cocaine-addicted women and their infants was one of NIDA's Perinatal-20 treatment research demonstration projects. The 5-year program, which ended last year, evaluated the effectiveness of providing comprehensive therapeutic programs that include drug abuse treatment plus a range of additional social and health services for drug-abusing women of child-bearing age and their children. The projects in the program have yielded new information about the characteristics and treatment needs of pregnant and parenting women who abuse drugs. (See "NIDA's Perinatal-20 Projects," NIDA NOTES, November/December 1994, p. 6.) The cocaine-abusing women in the study, which was conducted by Dr. Howard and Dr. Leila Beckwith, also of UCLA, were similar demographically to the women in many of the other Perinatal-20 projects.
On average, they were about 29 years old, had less than a high school education, were single, had a history of being physically or sexually abused, and belonged to minority groups. The women had a long history of cocaine and other drug abuse. Despite their similarities, including heavy drug use, "these women are not a homogeneous group," stresses Dr. Howard. The women in this study exhibited a wide range of psychological symptoms and maternal caregiving abilities that affected the development of their infants, she says. Specifically, mothers who reported more symptoms of a narcissistic, paranoid, histrionic, or borderline personality disorder were the least sensitive caregivers. In turn, many of these mothers' babies showed signs of delayed cognitive development at 6 months of age, Dr. Howard notes. Recently, Dr. Howard and her colleagues conducted a further analysis of the data collected about the women's drug use and parenting behaviors 6 months after they gave birth. That analysis indicates that although the women who exhibited the most severe psychological symptoms reduced their drug use, they were still the least sensitive caregivers. "These findings suggest a clinically significant relationship between a mother's psychopathology and her ability to care for her newborn, which, in turn, might negatively affect her child's development," says Dr. Elizabeth Rahdert, a research psychologist with NIDA's Division of Clinical and Services Research, who has been involved with the Perinatal-20 program since its inception. In addition, the finding that many of these women have severe mental health problems suggests that treatment programs should include a psychiatric component to assess and address women's mental health problems on an individual basis, Dr. Rahdert says.
Social service programs that do not have mental health professionals on their staff can make sure women receive the therapy they need by establishing strong links to the mental health care system within their communities, says Dr. Rahdert. Mental health professionals should play a key role in drug treatment for drug-abusing mothers, agrees Dr. Howard, but they need to be trained in addiction-related problems, she says. In the final analysis, the study's findings argue for comprehensive treatment programs and coordination of addiction treatment, mental health, and pediatric services to adequately meet the needs of these women and their children, concludes Dr. Howard.

References:
Howard, J.; Beckwith, L.; Espinosa, M.; and Tyler, R. Development of infants born to cocaine-abusing women: Biologic/Maternal influences. Neurotoxicology and Teratology 17(4):403-411, 1995.
Howard, J.; Espinosa, M.; and Beckwith, L. Psychological status and parenting behaviors in cocaine-using mothers. Abstract presented at the 58th Annual Scientific Meeting of the College on Problems of Drug Dependence, San Juan, Puerto Rico, 1996.

From NIDA NOTES, January/February 1997
The Rout of San Romano

The mazzocchio, a wooden or wicker headdress, was a common article of male attire in Florence in the second and third quarters of the fifteenth century. Painted representations of the mazzocchio are seen in many of Uccello's paintings. By virtue of its form, which can be clearly determined in these paintings, its representation presented a constructional problem of special complexity, with which later perspective theorists also concerned themselves. The elaborate system of projection can be reconstructed from incised lines on the original drawings, and shows the relevance of the methods and principles of the costruzione legittima.
LEARNING STORIES -- this is an interesting idea.
- Te Whariki: the New Zealand early childhood curriculum
- New Zealand Maori 1-10 numbers poster in Maori
- Some ideas to write learning stories about
- A progress story with a Te Whāriki lens
- Learning story prompts
- Progress story focusing on the strands of Te Whariki (change to EYLF/QKLG outcomes)
- Graphic from Montessori Beginnings preschool linking the NZ Curriculum, Te Whariki and Montessori
- Children's Grants: a collection of resources on foundation and government support of children
- Bringing natural materials into the classroom: weave in sticks, inspired by Te Whariki's weaving approach
"As women, we must speak out, speak up, say no to our inheritance of loss and yes to a future of women-led dialogue about women's rights and value." - Zainab Salbi As a young adult from Pakistan, I grew up with women in my family facing gender inequality. Women in my family often do not voice their opinions, and when they reach the age of about 18, daughters are pressured to discontinue their education so that they can get married. As a senior at William Cullen Bryant High School, I participated in Global Kids' Summer Institute at the Council on Foreign Relations where I learned more about how gender inequality is a global issue, and this knowledge has inspired me to address this problem head on. Hillary Clinton once stated, "Women's rights are human rights," but women continue to face a persistent gap in access to opportunities and decision-making power compared to men. Globally, women have fewer opportunities for economic participation than men, less access to basic and higher education, face greater health and safety risks, and are often politically under-represented. Gender inequality is an issue that is important to discuss because as children, we learn to adapt to specific gender roles, and as we grow older, they become more evident and important to our role in society. Women make up half of the world's population and without half of the population, men are not able to do much. People have to realize that women are strong, smart and involved with the world much more than they were a half a century ago. I believe that one reason why there is gender inequality is that some people still do not believe women belong in high-powered positions. Some men believe women are not "strong" enough physically, mentally, and emotionally to handle the stress that comes with some positions. 
They think that we are not physically strong enough to lift heavy items or emotionally and mentally ready for the stress that comes along with certain professions, and that we need to devote our time to family and household responsibilities. This patriarchal system exists in many parts of the world, including here in New York City where I was raised. When I say that I want to become an engineer or a tech professional, I realize the barriers I'm up against because those fields are heavily dominated by men in the United States. Also in South Asia where my family is from, society thinks that men should work and the women should stay home. I am changing that. I am the first female in my family who will forge a different path than those who followed traditional gender roles, such as giving up their education and getting married at a young age. I will be attending college and moving forward in my academic journey. I want to be able to work in a field that I am interested in and be able to live and support myself, instead of relying on my husband. I know that if I stand up and set an example to my family and society, other women and future generations will be able to follow in my footsteps and become successful. In the words of Malala Yousafzai, "I believe in equality. And I believe there is no difference between a man and woman. I even believe that a woman is more powerful than a man." -- Tehmeena Khan, William Cullen Bryant High School Senior and Global Kids Leader
Tips On Healthy Eating For You And Your Household!

Nourishing your body seems like it should be simple enough. However, in this current day and age, there is so much noise and confusion surrounding nutrition that it's hard to make the right choices. With the aid of these tips, you'll be better equipped to trim the fat (pun intended) and pick the healthiest foods.

The One Thing You Need To Strengthen Your Yoga Practice - mindbodygreen

Beyond those day-to-day beauty benefits, collagen is particularly helpful in post-yoga recovery. In addition to giving your joints a little extra love, collagen inhibits the body from breaking down muscle after your workout. Essentially, collagen acts as food for the muscles, joints, and ligaments—making it the perfect follow-up to a sweaty yoga flow.

Calcium is a beneficial mineral that should be a part of a healthy diet. Calcium is involved in teeth and bone structure. It also helps in blood clotting, nerve function, muscle contraction, and blood vessel contraction. Calcium helps prevent many diseases such as osteoporosis, hypertension, diabetes, colon cancer, high cholesterol, and obesity.

Eat more fish for your health and for your brain. Fish are high in DHA, which has been shown to improve your memory, vocabulary and prowess in nonverbal tasks. DHA may also reduce the risk of Alzheimer's. Fish is also a great source of protein, and the Omega-3 fatty acids may be beneficial to your heart health.

To stay away from sodas and other sugary drinks, you need to find an alternative. It is natural to have cravings for something sweet: why not try fruit juice? Or better yet, mix fruit juice and water. Buy fresh oranges and squeeze them yourself. You can do the same with a lot of fruits, and combine different kinds of juices for flavor.
One great way to live healthier is to eat nuts. Nuts have been proven to be very good for the heart. Nuts have monounsaturated and polyunsaturated fats that are great for the heart. Nuts also have other nutrients such as fiber and vitamin E that can lower your risk of heart disease. When trying to lose weight, it is vital that you keep your metabolism high. Drinking green tea can help. Green tea has components that have been proven to raise your metabolism. In addition, spicy foods have been proven to raise your metabolism. Consider adding chili peppers to your food to achieve this. If you are trying to cut down on the amount of soda that you consume and think water is too plain, try flavored water. This tasty alternative comes with the same amount of water that you require, and does not have the high sugar and calorie content of soda. As important as nutrition is for young people, it becomes even more important for women as they age past fifty. For example, women over 50 should make the effort to keep their weight under control. They need to make everything they eat count, because their metabolism is slower and cannot process food in the same way it once did. To stay healthy while dieting, choose low calorie but nutrient rich foods. Grapefruit, asparagus, and cantaloupe contain very few calories, but provide your body with many essential vitamins. You should also look for low calorie foods that are high in protein, such as salmon and kidney beans. These will give you the energy you need to get through the day. A good nutritional tip is to start drinking green tea. Green tea is rich with antioxidants, and studies have shown that green tea can actually delay fatigue during harsh exercise. Drinking green tea also provides us with more energy and causes more fat to be burned during exercise. Nutrition is just as important before you get pregnant as it is during pregnancy.
So start now by replacing soft drinks with water. There is no nutrition in soda to help your body get ready for the stresses of growing a baby. Water helps clear the body of toxins to make sure you are in top shape before you conceive. Getting fit does not mean that you have to give up the foods that you love. Just make a few changes in the choices that you make. Try to choose diet soda instead of regular soda and use a napkin to soak up the extra grease that is floating on the top of your pizza and hamburgers. An important aspect of nutrition is to make sure you drink enough water. Not only is water essential for the body, but thirst is sometimes confused with hunger, so not drinking enough water can lead to eating extra calories. If you don't like water plain, try making herbal tea that tastes good but adds no calories. Encourage your child to try new foods but don't force them to eat something if they don't like it. Try and have them taste a food on more than one occasion to see if they like it and if they don't, don't keep forcing them to eat it. You don't want them to come to dread meal time. Remember, whether you're trying to lose some weight or gain some muscle or anything in between, proper nutrition is essential. We are what we eat. What you've just read here are some great nutrition-based tips. Don't forget to use these tips in your day-to-day life for optimum results.
Common Causes of Power Surges

Power surges can happen anywhere, especially at home and in the office. These surges happen when wiring experiences short jolts of high electric voltage. It's essential to contact your master electrician when dealing with power surges to prevent further damage. Even electronics and appliances that are turned off are vulnerable to surges; they may flicker or make buzzing sounds during such events. Here are the most common causes of power surges.

Lightning Strikes

One lightning strike near your house can send millions of volts through your electrical system. This is why your computer, lamp or cable box are often damaged during a power surge caused by a lightning strike. However, a surge protector can help shield your equipment from damage during a power surge.

Exposed or Damaged Wiring

Another cause of power surges is damaged or exposed wiring, as electricity will not flow normally when your wiring is unprotected. Once a wire is damaged, you may smell smoke as the wires can burn and melt due to the abnormal flow of electric current. You can fix the wiring yourself or contact a professional electrician to fix it for you.

Overloaded Circuits or Outlets

This is a common cause of power surges, as people forget the basic rules of electricity. It's not advisable to plug too many appliances into the same socket, as doing so can cause surges or, worse, electrical fires.

High-Energy Electrical Devices

Refrigerators, air conditioners and elevators are some of the high-powered machines and appliances that can cause energy surges and spikes. When these draw a large amount of electricity, they can overpower other electronics in their path, causing an abnormal flow of electric current.

When you experience power surges, it's important to contact your master electrician before attempting anything on your own. They can safely investigate and fix the damage caused by power surges.
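To see why overloaded outlets cause trouble, it helps to compare the total current drawn by appliances sharing a circuit against the breaker's rating. A minimal sketch; the wattages, 230 V supply and 10 A breaker rating are illustrative assumptions, not electrical advice:

```python
# Illustrative only: rough check of whether appliances sharing one circuit
# exceed an assumed breaker rating. Real surge behaviour depends on much
# more than steady-state load -- hence the advice to call an electrician.

def circuit_load_amps(appliance_watts, voltage=230):
    """Total current (amps) drawn by appliances on one circuit: I = P / V."""
    return sum(appliance_watts) / voltage

appliances = [2000, 800, 150]   # e.g. kettle, microwave, lamp (watts, assumed)
load = circuit_load_amps(appliances)
breaker_rating = 10             # amps, an assumed typical rating

print(f"Load: {load:.1f} A -> {'overloaded' if load > breaker_rating else 'ok'}")
```

With these assumed values the combined load works out to roughly 12.8 A, above the 10 A rating, which is exactly the "too many appliances in the same socket" situation the article warns about.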
Asbestos is a fibrous mineral that comes in different colors and takes different structural forms. Six types of asbestos became commonly used in the 20th century, though at least a hundred other asbestiform minerals are found in nature. It is resistant to heat, electricity and chemical reaction. It fireproofs and strengthens, prevents chemical combustion and generally makes products more reliable. Asbestos makes products safer, but it is carcinogenic. By the 1970s, it became widely known that the mineral used for safety was a silent killer. Asbestos has been linked with the development of several diseases including asbestosis, lung cancer and mesothelioma. The prognosis for mesothelioma, in particular, is usually not favorable. The U.S. Environmental Protection Agency (EPA) began regulating asbestos in 1970 and attempted to enact a ban in the late ’80s, but the ban was overturned in 1991. Asbestos use remains legal in the U.S. and regulated by the EPA. The mineral is not easy to visually identify once integrated into products. Unless the product contains an asbestos label, it’s nearly impossible to confirm the presence of asbestos through visual inspection. To avoid exposure, you must learn which products are likely to contain asbestos. Most asbestos-containing materials (ACM) were used in construction of buildings and homes. Other products were used commercially and industrially for fireproofing such as asbestos countertops and gaskets. Fewer products were used as household items, including hairdryers, ovens and other appliances that utilize heat, pot holders and certain toys. Of the numerous construction products that have contained asbestos, insulation is by far the most common. W.R. Grace manufactured Zonolite, a type of loose-fill insulation used in attics and possibly one of the most popular forms of the mineral. Zonolite was installed in approximately 80 million homes. 
Other types of asbestos insulation are found in or around the following materials:
- Spray-applied insulation
- Valve jackets

Because asbestos was so widely used in insulation, it's smart to treat all older insulating materials as if they could contain asbestos.

Other Construction Materials

Asbestos was one of the most popular building materials of the 20th century. Any product that needed to withstand heat likely contained asbestos, such as hot water pipes, water tanks, boilers, electrical panels and roofing materials. Other construction products that have contained asbestos include:
- Flooring materials
- Ceiling tiles
- Exterior siding

Most of these materials are still made with asbestos, though typically in less concentration compared to historical use. If you encounter these products in damaged or deteriorating condition, proceed with caution. Reach out to an asbestos abatement company for professional guidance.

Types of Asbestos

Knowing what individual types of asbestos look like doesn't help people recognize the mineral in certain products. Once raw asbestos is processed and integrated into products, its original color and structure often becomes unrecognizable. However, broken, damaged or deteriorating asbestos products may appear to contain a fibrous component. Testing is necessary to confirm whether a fibrous material is asbestos, but visual confirmation of fibers is enough to suspect asbestos and subsequently handle the product with extreme caution. The six types of commercial asbestos include chrysotile, tremolite, crocidolite, amosite, anthophyllite and actinolite. They are classified into two categories: serpentine and amphibole. Both of these cause cancer. Amphiboles are considered slightly more carcinogenic. Chrysotile is the only form of serpentine and the other five are amphiboles. Chrysotile is curly in structure and appears white. Amphiboles aren't curly but sharp, jagged and needle-like in form.
Amosite appears brown; crocidolite is blue; tremolite is grey; actinolite is gray-green and anthophyllite is gray-brown. The appearance of asbestos is useful for geologists but not for the average person who hopes to identify asbestos in products. The fact remains that suspicious products must be tested for asbestos — visual identification isn't reliable. You can avoid asbestos exposure by safely handling products that might contain the mineral. If you encounter a damaged asbestos product, wet it with water to limit spread of asbestos fibers until you can call a licensed asbestos professional for assistance. Do-it-yourself asbestos abatement isn't safe. Make sure to hire professionals to protect yourself and your family from asbestos exposure.
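The six commercial types and their two mineral families, as described above, can be collected into a small lookup table. A sketch for organising survey notes only (names, families and colours are taken from the text; remember that only laboratory testing can actually confirm asbestos):

```python
# Lookup of the six commercial asbestos types described in the article,
# keyed by name, with mineral family and typical raw colour.
# For note-keeping only -- visual identification of asbestos isn't reliable.

ASBESTOS_TYPES = {
    "chrysotile":    {"family": "serpentine", "colour": "white"},
    "amosite":       {"family": "amphibole",  "colour": "brown"},
    "crocidolite":   {"family": "amphibole",  "colour": "blue"},
    "tremolite":     {"family": "amphibole",  "colour": "grey"},
    "actinolite":    {"family": "amphibole",  "colour": "gray-green"},
    "anthophyllite": {"family": "amphibole",  "colour": "gray-brown"},
}

def amphiboles():
    """Names of the types in the (slightly more carcinogenic) amphibole family."""
    return sorted(n for n, t in ASBESTOS_TYPES.items() if t["family"] == "amphibole")

print(amphiboles())  # five of the six types; chrysotile is the only serpentine
```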
All Saints' Church

When was All Saints' Church Built?

This is a question that I have been asking myself since I moved to Swanton Morley some three years ago and started trying to understand the architecture of the church. I am still not sure that I have the right answer, but I thought that you might like to share my thoughts on this issue. If you stroll round the church and look carefully at the stone ‘tracery’ in the windows then you will notice that it varies quite a bit. The windows can, in fact, be divided into two main ‘families’. The oldest is the nave family, which includes the side and east windows in the side aisles, and the high east window of the nave. The youngest is the chancel family, which covers all the windows in the chancel. In between, in terms of both age and design, are the windows at the west end of the church and in some of the windows of the tower. It now seems to be generally accepted that the nave family can be attributed to a master mason called Robert Wodehirst, so he probably did the initial designs for the nave. Now we do know something about him. He was still training at the Palace of Westminster at the time of the Black Death in 1349. By 1358 he was still working in London, but by then he was on the highest specialist’s pay carving vault bosses for the cloister at Westminster Abbey. Shortly afterwards he qualified as a master mason and it was probably in about 1360 that he did the designs for the new nave at Swanton Morley. At that time Swanton Morley appears to have been a very wealthy village and Wodehirst’s designs were very ambitious. Not only was it a very large church with a huge space underneath the tower, it was also being built to a very high standard. The flintwork at the base of the west façade is quite exceptional and the mouldings and decoration around both the south and north doors are excellent.

Work suddenly stops in 1362

Then, just as the building got to the level of the bottom of the windows, work stopped.
This is revealed by a distinct break in the masonry of both the north and south aisle walls, but it can be seen more clearly on the north wall. If you look at the sill-level stringcourse (a long narrow line of stonework that juts out just below the windows), you will find just above it four courses of square knapped flints, topped by a narrow course of red tiles. The fact that this discontinuity runs the full length of the nave at the same height is very significant for it shows that work was not proceeding from east to west, as was usual, but was being done in layers. Now, this layer approach was only feasible when quick progress was anticipated, so it suggests that funds were available for a speedy completion, and that the break was unexpected. The most likely explanation is the collapse of the spire at Norwich Cathedral in 1362. Wodehirst was awarded the contract to rebuild the clerestory and this would immediately have become his priority. The skilled labour was probably diverted to Norwich and the walls at Swanton Morley capped and thatched for protection. It is probable that most of the windows for the aisles had already been delivered. In order to find out when work probably re-started at Swanton Morley it is necessary to look at what was going on in Norwich. Prior to the collapse of the spire, the major project at the cathedral had been work aimed at completing the cloister. Work on this also stopped in 1362 when the spire collapsed and it did not recommence until 1381, after the completion of the new clerestory. If work could not re-start any earlier than this on the cloister then it would seem unlikely that skilled men could have been spared for Swanton Morley. This allows us to look in a new light at the will that Sir William de Morley wrote in April 1379. 
He actually made gifts to several churches; the relevant section of his will is quoted in full at the end of this article. He specifically refers to the repair of Holy Trinity, so the sum of 20s was presumably intended to help pay for work on the clerestory that was still under way. So far as Swanton Morley is concerned, however, the words "already begun" do not necessarily mean that work was in progress. In fact, it seems more likely that he was referring to the fact that there was a partially built church in existence.

Now, these were troubled times, and Norfolk did not escape the upheaval caused by the Peasants' Revolt. It was, in fact, on 17 June 1381 that Norwich opened its gates to a band of rebels led by Geoffrey Litster, a dyer from Framlingham. The Swanton Morley area must also have been affected, because Sir Thomas Morley, who had succeeded the above Sir William, was among several noblemen captured by Litster. It was, in fact, the warlike Bishop Despenser of Norwich who put an end to the local revolt: there was a battle at North Walsham later in June at which Litster was captured and executed. These events probably further delayed work at both Norwich and Swanton Morley. Taking all the above into account, it is not unreasonable to suggest that work did not re-start at Swanton Morley until the end of 1381.

Work restarts in about 1381

After this long delay, swift progress was now necessary. Thus, although existing supplies of knapped flints may have been used where possible, plastered and limewashed walls of rough flint rubble were used elsewhere as necessary. The side and east windows of the aisles would have been the first items to be installed, and we may perhaps find here an explanation of why just two of these windows - the adjacent east and side windows in the south aisle - do not have embattled transoms.
If all the other windows had been essentially completed prior to the break and just two remained to be made, then it is quite probable that these two were made by other masons using the same templates. It is not difficult to see how, after a twenty-year break, it could have been forgotten that the transoms were meant to be embattled.

Work also re-started on the west front, but the windows had not been made and by now styles had changed. This can be seen by looking at the west windows of the two aisles and the west window of the nave (in the face of the tower). If Wodehirst did not design these windows then they were certainly designed by a mason who had been influenced by him. By 1390 the main body of the nave was probably complete and at least the east end was roofed. However, only the lower part of the tower was complete. It is likely that, as the 14th century drew to a close, the nave was complete and the tower half built - certainly above the top of the west nave window, and probably to the top of the sound holes.

We do not know when work started on the chancel; we only know that the church was not consecrated until 1440. This suggests that it was not started much earlier than the turn of the century. What we do know is that, when work started on the chancel, there was a major change in the style of the windows, and this style makes it unlikely that they were designed before 1400. Perhaps the most interesting thing is that the huge windows in the bell chamber of the tower are very similar to the big east window of the chancel, making it likely that they were built at the same time. This is interesting because the nave was the people's part of the church and would have been paid for by the congregation, and by people such as Sir William de Morley, whilst the chancel belonged to the rector and he would normally have been expected to pay for its rebuilding. Perhaps the same mason was employed, but he was paid separately for the tower.
A final piece of information is that in 1441 a certain John Fox left a bequest to provide a lead roof for the nave. Now, the original roof was presumably thatched, and this would have required a steeper pitch. Is it possible that, when the shallower-pitched roof was introduced, the opportunity was taken to put in the little clerestory?

The relevant section of Sir William de Morley's will (April 1379) reads: "I give to work on the fabric of St Paul's London 20s. Also I bequeath for the repair of Holy Trinity Norwich (i.e. Norwich Cathedral) 20s. Also I give my gilt chalice to the parish church of Swanton, which they may reclaim … Also I give to work on the fabric of the same church already begun 10 marks (£6 13s 4d)"

Page Updated: 22/10/10
If you found yourself stranded on a deserted island and were allowed to have only three things, what would they be? Well, if you want to continue living on that island, one of them had better be a source of clean water—and not just to splash on your Slip-n-Slide® or use to fill up your Super Soaker®. Fresh water is extremely important to humans, plants, and animals. Without it, nothing would live very long. In fact, we know that humans can survive without food for thirty to forty days—about 5 weeks—but without water, life would end in about three to five days!

Water is so critically important because it is involved in nearly every bodily process. In fact, the human body is composed of about 60% water—or roughly eight gallons. (The brain is approximately 70% water, while the blood running through your veins and your lungs is made up of more than 80% water!) Medical researchers report that there is virtually no function or reaction in the body that can take place without the presence of water. Water helps transport nutrients throughout the body, and it is necessary for all building functions in the body. It also helps maintain normal body temperature, and is essential for carrying waste material out of the body. Therefore, replacing the water that is continually being lost through sweating and other processes is extremely important.

While approximately 75% of the Earth's surface is covered in water, not all of that water is drinkable. Actually, researchers believe that only a very small percentage of the water on the Earth is good for humans to drink. The rest is either salty (as in the oceans) or frozen (like the glaciers), and therefore is not fit for drinking. While water is colorless, odorless, and tasteless, all life depends on it. Think about what your houseplants would look like if they went a week or two without any water! Scientists realize that in order to have life, water is required, which is one reason they keep trying to find water on Mars.
And yet, God, knowing that living things need water, placed it on the Earth the very first day of Creation (see Genesis 1:2). How refreshing.
When looking at the cultural norms of the ancient world, we often see that what one culture believed to be normal, they often deemed to be natural. In other words, what is natural is culturally conditioned and changes from culture to culture. This conception of nature has been explored by John J. Winkler in The Constraints of Desire, where he remarks that when we see “nature” in ancient Greek texts, we should actually read “culture.” “Indeed what ‘natural’ means in many such contexts is precisely ‘conventional and proper.’ The word ‘unnatural’ in contexts of human behavior quite regularly means ‘seriously unconventional’ and is used like a Thin Ice sign to mark off territory where it is dangerous to venture.” In a similar vein, Philo also offers comment on ways in which customs become a part of what is considered natural over the course of time. “They are under the sway of a very ancient custom, which through long familiarity has won its way to the standing of nature (γενόμενον εἰς φύσιν ἐκνενίκηκεν)” (Special Laws 21.109). Exploring the topic of foreskin and circumcision in the ancient world reveals the same understanding of nature. In various medical texts and handbooks we often see culturally situated penile aesthetics being described as natural. In his monumental work, Greek Homosexuality, K. J. Dover comments on the near obsession ancient Greeks had with foreskin when it came to artistic depictions of the penis. Penises were depicted as being petit and having long tapered foreskins. While the medical texts may not be as descriptive as the artwork, they do attest to Greek (and Roman) desire to have a specific look to the foreskin, which they deemed to be “natural.” In his Gynecology, Soranus describes the process through which a midwife can mold an infant into its “natural (κατὰ φύσιν) shape” (Gynecology 2.9.14). 
Concerning the foreskin he writes, "If the infant is male and it looks as though it has no foreskin, she should gently draw the tip of the foreskin forward or even hold it together with a strand of wool to fasten it. For if gradually stretched and continuously drawn forward it easily stretches and assumes its normal length, covers the glans and becomes accustomed to keep the natural good shape (τὴν φυσικἠν εὐμορφίαν)" (Gynecology 2.16.34). Similar procedures are outlined by Celsus and Galen to elongate or reconstruct a deficient foreskin. On the insufficient foreskin, Galen writes about "a departure in terms of magnitude from what accords with nature (κατὰ φύσιν)" (Method of Medicine 14.16). From this cursory overview, it seems evident that in the ancient Greco-Roman mind having a foreskin is natural and desirable.

When we take this knowledge in hand and begin to approach the Apostle Paul's thoughts on circumcision, what do we find? In an often poorly translated section of Romans 2, Paul refers to those, presumably gentiles, who are "from nature foreskinned (ἐκ φύσεως ἀκροβυστία)" (Rom 2:27). While those of us in the modern world would understand all male infants to be foreskinned by nature, from this text it seems that in Paul's mind only gentiles were naturally foreskinned. The corollary of this would be that Jews are by nature circumcised. In Galatians 2:15, Paul comments that "We are by nature Jews (ἡμεῖς φύσει Ἰουδαῖοι) and not gentile sinners." Given that the key issue of Galatians pertains to gentile Judaizing (specifically undergoing circumcision), it is probable that this text corroborates the above reading of Romans 2:27, that is, Paul, like his Greco-Roman contemporaries, had a culturally situated view of what was natural. Paul understood that Jews were by nature circumcised and gentiles were by nature foreskinned. How does this impact the way we read Paul on circumcision?
I would argue that for Paul, gentiles should remain in their natural state as foreskinned and Jews should also remain in their natural state as circumcised (à la 1 Cor 7:18). These were ethnic identities that he acknowledges and upholds. These categorical differences would naturally remain in place, even after the Messiah came. As we see in Romans 11:24, naturally wild branches (φύσιν ἀγριελαίου) are grafted, contrary to nature (παρὰ φύσιν), into a cultivated tree. While this is only one aspect of Paul's thought regarding circumcision, I find it to be quite illuminating in regard to how we read him when it comes to Gentiles and Jews and the issue of circumcision. The above findings from Greco-Roman texts can also inform us when we seek to understand Paul's gentile audiences and their opinions on circumcision, but that's a subject for another essay. So to respond to the question posed in the title, for Paul, I think his answer would be, "It depends."

Notes:
- John J. Winkler, The Constraints of Desire: The Anthropology of Sex and Gender in Ancient Greece (London: Routledge, 1990), 17.
- K.J. Dover, Greek Homosexuality (Cambridge, MA: Harvard University Press, 1989), 125-31.
- Most modern translations read, "physically uncircumcised." In my understanding, ἀκροβυστία is best translated as "foreskin."
- For another culturally situated instance of what is natural in Paul, see 1 Cor 11:14.
- This entry is out of date, and will not be updated, August 2018

See also Critical appraisal | Critical theory in librarianship | Paulo Freire | Journal clubs | Occam's razor | Research Portal for Academic Librarians | Teaching library users

"...critical thinking is an intellectually disciplined process of actively conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action. In its exemplary form, it is based on universal intellectual values that transcend subject matter divisions: clarity, accuracy, precision, consistency, relevance, sound evidence, good reasons, depth, breadth, and fairness." — Foundation for Critical Thinking

Critical thinking refers to the ability to evaluate ideas (especially in academia) critically through questioning, reasoning and reflective techniques. It gives consideration to evidence, context and other salient considerations. The process of critical thinking should promote the adoption of rational views based on balanced or reasoned conclusions. While critical thinking provides room for adjusting one's position, disagreements occur between critical thinkers; critical thinking asks its proponents to employ logic and intellectual criteria such as clarity, accuracy, relevance, depth, breadth, significance and fairness in argumentation. Fisher & Scriven define critical thinking as "...the skilled, active, interpretation and evaluation of observations, communications, information and argumentation." Moore defines it as the careful determination of whether one will accept, reject or suspend judgment about a claim or argument.
In contemporary use, "critical" has the connotation of expressing disapproval, but this is not necessarily the case with critical thinking. The critical evaluation of a paper, for example, might conclude that its ideas and conclusions are expressed reasonably well despite elements which do not hold up to close scrutiny.

According to Papp et al (2014), "Critical thinking is essential to a health professional's competence to assess, diagnose, and care for patients. Defined as the ability to apply higher-order cognitive skills (conceptualization, analysis, evaluation) and the disposition to be deliberate about thinking (being open-minded or intellectually honest) that lead to action that is logical and appropriate, critical thinking represents a 'meta-competency' that transcends other knowledge, skills, abilities, and behaviors required in health care professions."

Some academic libraries take it upon themselves to teach aspects of critical thinking. For example, see University College Dublin Library's Critical thinking tutorial.

Critical thinking defined

What is critical thinking?
- Disciplined, self-directed thinking which exemplifies the perfections of thinking appropriate to a particular mode or domain of thinking.
- Thinking that displays mastery of intellectual skills and abilities.
- The art of thinking about your thinking while you are thinking, in order to make your thinking better: more clear, more accurate, or more defensible.

Critical thinking can be distinguished into two forms: "selfish" or "sophistic", on the one hand, and "fairminded", on the other. In thinking critically, we use our command of the elements of thinking to adjust our thinking successfully to the logical demands of a type or mode of thinking. See critical person, critical society, critical reading, critical listening, critical writing, perfections of thought, elements of thought, domains of thought, intellectual virtues.
Developing critical thinking processes

Open-ended questions to trigger thinking

Reading and reflecting on ideas are two critical activities in learning how to think. However, research indicates that teachers do not spend enough time in their classrooms posing questions, and when they do, most teachers ask students to recall information rather than get them to think critically about it. Questions calling for a recall of facts are the least likely to promote student involvement. In fact, some studies show that open-ended questions that require divergent thinking (i.e., questions that allow for a range of possible answers and encourage students to think at a deeper level than rote memory) are more effective in eliciting student responses than "closed" questions (i.e., questions that require students to select one correct answer). The results indicate that students are more likely to respond to questions that require deeper-level thought (critical thinking) than those that require rote memorization.

Open-ended questions are useful cognitive triggers and can be used liberally in classes or small groups. Sometimes, students can think for a moment about their own responses to questions. This strategy benefits students by allowing them time to gather their thoughts prior to verbalizing them. International students and those who may be fearful about public speaking may find it gives them time to build their confidence before communicating their ideas. Experimental research indicates that students who are asked higher-level questions are more likely to display higher-level thinking on course exams. Class-based research indicates that students learn to generate their own higher-level thinking questions. Using a technique called guided peer questioning, students are provided with generic questions that serve as cognitive prompts to stimulate different forms of thinking:
- What are the implications of ___?
- Why is ___ important?
- What is another way to look at ___?
Questions that promote reflection

After students communicate their ideas orally via groups or in writing, ask them to reflect on what type of critical thinking they are engaged in and whether they think they have demonstrated critical thinking in their responses. Ask them to record their reflections either individually or in pairs. If they select the latter, their job is to listen and record the reflections shared by their partner. Research shows that one distinguishing characteristic of high-achieving students is that they reflect on their thought processes and recognize the importance of cognitive strategies. Additional research shows that students can learn to engage in "meta-cognition" (thinking about thinking) if regularly asked to do so. When students learn to routinely ask themselves these questions, the depth and quality of their thinking are enhanced.

What makes a critical thinker?

Nickerson (1987) characterizes a good critical thinker in terms of knowledge, attitudes and abilities.
Here are some of the characteristics of such a thinker:
- uses information and evidence skillfully and impartially
- organizes thoughts and articulates them concisely and coherently
- distinguishes between logically valid and invalid inferences
- suspends judgment in the absence of sufficient evidence to support a decision
- understands the difference between reasoning and rationalizing
- attempts to anticipate the probable consequences of alternative actions
- understands the idea of degrees of belief
- sees similarities and analogies that are not superficially apparent
- can learn independently and has an abiding interest in doing so
- applies problem-solving techniques in domains other than those in which they were learned
- can structure informally represented problems in such a way that formal techniques, such as mathematics, can be used to solve them
- can strip a verbal argument of irrelevancies and phrase it in its essential terms
- habitually questions his or her own views and attempts to understand both the assumptions that are critical to those views and the implications of those views
- is sensitive to the difference between the validity of a belief and the intensity with which it is held
- is aware that one's understanding is always limited, often much more so than would be apparent to someone with a non-inquiring attitude
- recognizes the fallibility of his or her own opinions, the probability of bias in those opinions, and the danger of weighting evidence according to personal preferences

Barriers to critical thinking
- Lack of relevant background information
- Poor reading skills
- Poor listening skills
- Peer pressure
- Mindless conformism (the tendency to follow the crowd)
- Mindless non-conformism
- Distrust of reason
- Unwarranted assumptions and stereotypes
- Relativistic thinking
- Wishful thinking
- Short-term thinking
- Selective perception / attention
- Selective memory
- Overpowering emotions
- Fear of change

Linkage to information literacy

See Information literacy and Transliteracy for librarians
- See Lai E. Critical thinking: a literature review. Research Report, 2011.
- Albitz RS. The what and who of information literacy and critical thinking in higher education. Portal: Libraries and the Academy. 2007;7(1):97−109.
- Alfino M. Advancing critical thinking and information literacy skills in first year college students. College & Undergraduate Libraries. 2008;15:81–98.
- Allen M. Promoting critical thinking skills in online information literacy instruction using a constructivist approach. College & Undergraduate Libraries. 2008;15:21–38.
- Bertacchini de Oliveira L, Johanna Rueda Díaz L, Alves de Araújo Püschel V, DeAlmeida Lopes Monteiro da Cruz D. The effectiveness of teaching strategies for the development of critical thinking in nursing undergraduate students: a systematic review protocol. JBI Database System Rev Implement Rep. 2015 Mar 12;13(2):26-36.
- Danermark B, Ekstrom M, Jakobsen L. Explaining society: an introduction to critical realism in the social sciences. Routledge; 2001 Nov 22.
- Dauer FW. Critical thinking: an introduction to reasoning, 1989.
- De Bono E. De Bono's thinking course. BBC Active, 2006.
- Ennis RH. Critical thinking: reflection and perspective. Inquiry: Critical Thinking across the Disciplines. 2011;26(1):4–18.
- Fisher A, Scriven M. Critical thinking: its definition and assessment. Point Reyes, CA: Edgepress; 1997.
- Gold HE. Engaging the adult learner: Creating effective library instruction. Portal: Libraries and the Academy. 2005;5:467−481.
- Hamby BW. The philosophy of anything: critical thinking in context. Kendall Hunt Publishing Company, Dubuque Iowa, 2007.
- Jeanfreau SG, Jack L Jr. Appraising qualitative research in health education: guidelines for public health educators. Health Promot Pract. 2010 Sep;11(5):612-7.
- King PM. Developing reflective judgment: understanding and promoting intellectual growth and critical thinking in adolescents and adults. San Francisco: Jossey-Bass; 1994.
- McMillan J. Enhancing college students' critical thinking: a review of studies. Research in Higher Education. 1987;26:3-29.
- Moore B, Parker R. Critical thinking: evaluating claims and arguments in everyday life. Mountain View, CA: Mayfield; 1998.
- Nickerson RS. Why teach thinking? Teaching thinking skills: theory and practice. In: Series of books in psychology. New York: Freeman/Henry Holt; 1987.
- Papp KK, Huang GC, Lauzon Clabo LM, Delva D, Fischer M, Konopasek L, Schwartzstein RM, Gusic M. Milestones of critical thinking: a developmental model for medicine and nursing. Acad Med. 2014 May;89(5):715-20.
- Pavlidis P. Critical thinking as dialectics: a hegelian-marxist approach. J Critical Education Policy Studies. 2010;8(2):74-102.
- Washburn P. The vocabulary of critical thinking. Oxford University Press, 2010.
- Weiler A. Information-seeking behavior in generation Y students: motivation, critical thinking, and learning theory. J Acad Librarianship. 2004;31:46-53.
- Xin C, Feenberg A. Pedagogy in cyberspace: the dynamics of online discourse. J Dist Ed. 2006;21(2):1-25.
Video presented by Ana Bracilovic, MD

This video accompanies the article: Hip Pain and Arthritis

A radiograph, also known as an X-ray, is a picture of the bones. It does not give a good evaluation of the soft tissue. The diagnosis of hip osteoarthritis is made based upon radiographic findings (what we see on an X-ray). An X-ray can tell us not only whether arthritis is present but also the degree of that arthritis. When we're looking at any kind of muscle, ligament, tendon, or other soft tissue injury, we will usually order an MRI. If there's any suspicion of another source for the hip pain, for example the back, then we will order an MRI of the lower back, more specifically the lumbosacral spine. If we are suspicious of any type of tear in the cartilage of the hip joint, also known as a hip labral tear, a specific test for that would be an MRI with an arthrogram. We can put a little bit of lidocaine, which is an anesthetic, into the hip joint. That is both diagnostic and therapeutic: we see whether the lidocaine made any difference in the patient's symptoms.
This question comes from a friend: "I know someone who is moving to the sticks for an extended period, and will probably go into town once a week for groceries. However, refrigeration is very, very limited. How can he make sure he gets his leafy greens and such with no refrigerator to keep things cool?"

I'd advise your friend to eat his leafy greens on the days he goes to town, when they are as fresh as possible. Fresh produce loses valuable nutrients during days of storage, even with refrigeration. In fact, a comprehensive study on this topic by UC Davis found that — by the time they are consumed — fresh, frozen or canned vegetables and fruit may contain similar amounts of nutrients.

Canned fruits and vegetables are packed at their peak of freshness and retain most of their original nutrients, since the canning process shields the food from oxygen. The heating process of canning primarily lowers the vitamin C content of canned food, say UC Davis researchers. Some nutrients may be more concentrated in canned foods. One-half cup of canned tomatoes, for example, contains almost 12 milligrams of lycopene — an antioxidant that may help reduce the risk for heart disease and some cancers such as prostate cancer. A medium fresh (uncooked) tomato contains less than 4 milligrams of lycopene. Canned pumpkin, according to data from the United States Department of Agriculture (USDA), contains more than three times as much vitamin A as fresh cooked pumpkin. Lutein — an antioxidant in corn known to protect the eyes from cataracts and macular degeneration — was found by researchers at Cornell University to be more bioavailable in canned corn than fresh.

Heat treatment also kills dangerous bacteria. And in the case of canned carrots, tomatoes and spinach, it also enhances the body's absorption of carotene, an antioxidant that converts to vitamin A in the body. And Oregon State University scientists found that canned blueberries have some enhanced antioxidant benefits over fresh blueberries.
Have your friend stock up on citrus fruits like oranges and grapefruit that don't require refrigeration and are rich sources of the vitamin C he needs. Keep his leafy greens as cool as possible and eat them within 1 or 2 days. Tell him to ditch the salt shaker, since canned foods are notoriously high in sodium. Or look for lower sodium versions. Remember, too, that canned food is cooked food. A canned pear is a poached pear. Canned tuna has been filleted and steamed. Canned beans have been soaked and simmered. This might save him some energy costs. Experts say the canning process helps a food maintain its quality and nutrient content for about two years. And it remains safe to eat as long as the container is not bulging or leaking, according to the Canned Food Alliance. Bottom line: Fresh, frozen or canned, your friend in the sticks needs a variety of foods from each nutrient group: fruit, vegetables, grains, protein and calcium sources. A study at the University of Massachusetts concluded that "it's the ingredients you choose, not the form of the ingredients, that really determine a recipe's nutrient content." Barbara Quinn is a registered dietitian and certified diabetes educator at the Community Hospital of the Monterey Peninsula. Email her at [email protected].
Autobiographical essay rubric Autobiography is a personal story where the writer shares his or her personal experience of life autobiography template allows individuals to create autobiographies and autobiography related documents like an autobiography rubric that helps to rate an personal essay example - 7. Essay rubric for argumentative pdf ideas in essayclear structure which rubric for the assessment of the argumentative essay find cheap and affordable essay writing services by high professionals an autobiography 2 vols online library of liberty. Autobiographical or personal narrative in an autobiographical or personal narrative, you will describe a personal personal narrative scoring rubric use the rubric below to evaluate your personal narrative circle the score that. Use explicit modeling to help high school students explore their identities as they write autobiographical essays who am i: writing an autobiographical essay print copies of the reflection worksheet: who am i printable, who am i example essay printable, expository essay rubric. Early career autobiography rubric resources directions in essays and in speeches for example, i try to include an intro and conclusion in every lecture i give early career autobiography filename format: lastname firstname autobiography. Write your own autobiography [6th grade] jeanine capitani trinity university -students will fill out the final rubric on their own work to see if they need to make any changes -students will draw pictures to go with the new sections. Narrative/literacy autobiography prompt so far in this class we have looked 3 remember, this essay is not about what you learned from your experience, but what your experience can teach your audience [evaluation] see attached rubric for grading criteria title: assignment 2: literacy. 
Write an essay in which you tell about a significant incident from your past. This should be an incident that taught you something about yourself, changed you in some way, or is important to you for some reason. Autobiography project, 8th grade Pre-AP Language Arts, summer/first six weeks assignment (Mrs. Mueller, Mrs. Kirkpatrick). EDUC 301 math autobiography grading rubric (Liberty University). Critical essay ideas in management include writing instructions and a marking rubric; this assessment task is your experiential learning essay. Autobiography rubric (Scholastic): after completing a visual life map, students should use this rubric as a guide for writing their autobiographies as part of the Writing an Autobiography unit plan. iRubric J9369B: write an autobiographical essay in which you introduce yourself to me; provide a brief life history, including information about your family, where you live (and have lived), your pets, and your favorite things. When I finish reading your autobiography, I should feel as though I have known you forever. An autobiographical poem template lets you simply fill in the text boxes to create an autobiographical poem. Other resources include a rubric for autobiography (3rd grade), a story sequence of events rubric, and a Common Core essay rubric (4th grade persuasive writing).
iRubric W6636: this paper should be about you or someone you know, and it should be written from your perspective. You can choose one of two topics: narrative (write an essay about someone you know who has struggled with an addiction) or autobiography (write a story about a time when someone influenced you for good or bad). Autobiographical incident, drafting and elaboration: the paragraph below is from the first draft of one student's autobiographical incident essay. The organization, elements of narration, grammar, usage, mechanics, and spelling of a written piece are scored in this rubric.

Autobiographical essay rubric. This product was designed specifically to help middle and high school students understand what elements go into writing an autobiographical essay. It includes a pre-writing information worksheet, a brainstorming sheet where students collect and write down information about themselves to include in their essays. Among types of rubrics, an analytic rubric resembles a grid with the criteria for a student product listed in the leftmost column and levels of performance listed across the top.
- Concept/topic to teach: write and publish an autobiography. The teacher will assess the autobiography using the rubric provided; it is the same rubric students used earlier to assess each other's work, with adaptations as needed.
- Writing a biographical essay rubric (pt6_biorubric.doc).
- This rubric delineates specific expectations about an essay assignment to students and provides a means of assessing completed student essays.
12-2 personal narrative/college essay rubric (Ellington Public Schools).
Autobiographical narrative rubric, category: storytelling/creativity. In an "A" paper, the story contains many creative details and/or descriptions that contribute to the reader's enjoyment. Additional collections include rubrics for essay questions, logs, journal writing, and lab write-ups; an invention report, book talk, persuasive essay, and autobiographical event essay; and math and science rubrics (University of Wisconsin).
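The analytic rubric described above, with criteria in the leftmost column and performance levels across the top, can be modeled as a simple data structure. The sketch below is purely illustrative: the criterion names and point values are hypothetical examples, not taken from any of the rubrics cited.

```python
# Illustrative analytic rubric: criteria as dict keys (the "leftmost
# column"), with a maximum point level per criterion (the "top row").
# Criterion names and point values here are made-up examples.

RUBRIC = {
    "organization": 4,
    "narration": 4,
    "grammar_usage": 4,
    "mechanics_spelling": 4,
}

def score_essay(scores: dict) -> tuple:
    """Sum per-criterion scores and report them against the maximum."""
    total = 0
    maximum = 0
    for criterion, max_points in RUBRIC.items():
        earned = scores.get(criterion, 0)
        if not 0 <= earned <= max_points:
            raise ValueError(f"{criterion}: score {earned} out of range")
        total += earned
        maximum += max_points
    return total, maximum

earned, possible = score_essay(
    {"organization": 3, "narration": 4,
     "grammar_usage": 2, "mechanics_spelling": 3}
)
print(f"{earned}/{possible}")  # 12/16
```

Laying the rubric out as data rather than prose makes it easy to total scores consistently across a stack of essays.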
<urn:uuid:c552fc85-afae-46b2-ac64-cd8a86d16065>
{ "date": "2018-07-21T05:55:56", "dump": "CC-MAIN-2018-30", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592387.80/warc/CC-MAIN-20180721051500-20180721071500-00536.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8834538459777832, "score": 2.765625, "token_count": 1247, "url": "http://fvassignmentjxxq.ultimatestructuredwater.info/autobiographical-essay-rubric.html" }
How to: Water Transfer Printing A type of graphic printing used by large manufacturers for products like interior car trim and hunting equipment, commercial water transfer printing is a lengthy, involved process that can be simplified and done at home on a smaller scale. DIY versions produce excellent results in altering three-dimensional items with new decorative patterns. While patterns are primarily used for vehicles and sporting goods, the options are nearly endless, as custom print decals can be made for anything you'd like to create. Things You'll Need - Water-slide decal image - Rubber spatula or sponge - Urethane clear coat paint or fixative resin - Dipping tank (optional) - Ink activator (optional) - Epoxy primer (optional) Use a water slide decal for small and simple projects. Choose the decal from an assortment of printed patterns, including camouflage, metal, wood grain or carbon fiber. Order them from several water transfer printing supply companies, such as Liquid Print, since most of them are made of a special polyvinyl alcohol film that can only be bought from these suppliers. Wet the decal and place it onto the desired spot. Work with wet fingers and use a sponge or rubber spatula to carefully spread the decal onto the surface. Leave the decal there a few minutes until it becomes dry. Peel off the backing and spray the image with a fixative resin, or clear coat urethane, to make sure it stays intact. Make or purchase a dipping tank when working with larger objects and more complex projects, so you can submerge the entire surface (it goes unsaid that objects harmed by water cannot be printed with the water transfer technique). This method is closer to the one used by professional services, and will cost a bit more money. Buy the tank from Liquid Print or other supply sites, which sell tanks of varying sizes and prices, and includes timers and heaters for the water. 
Use a drum similar to ones used in the newspaper-printing process if making your own tank. Heat the water separately for a homemade tank, unless you build a heater into the drum. Thoroughly clean the surface before entering a large-scale printing process, removing dirt and grease so the adhesion works properly. Sand down the surface if it's uneven, or if there's an extremely glossy finish on it. Use an electric sander or sandpaper, depending on the size of the object and the material you're working with. Paint or spray-paint a coat of epoxy primer onto the surface to prep the object for printing and ensure excellent adhesion. Heat the water to 87 or 88 degrees Fahrenheit, and lay the film decal on the surface of the water. Once the film begins to dissolve into the water, spray it with an ink activator to soften it, so it sticks to the surface of the dipped object. Submerge the object into the tank entirely -- or, whichever part you are printing, since any submerged part will pick up the loosened decal ink. Dip it slowly into the water, then pull it out to dry and spray it with a urethane clear coat. - If purchasing a tank, consult supplier companies for the best kind of tank for your projects. - Don't dip anything until you've checked with the manufacturer that it won't be damaged by water. - Wear a mask if working in an enclosed space with any of the sprayed or painted-on chemicals. - Grüne Wasser / Water image by Nazar Chabara from Fotolia.com
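The article gives the dip-tank target temperature in Fahrenheit (87 to 88 degrees). For anyone building a homemade tank with a Celsius thermometer, a quick conversion using the standard formula C = (F - 32) x 5/9 gives the equivalent range; the helper functions below are just an illustration, not part of any vendor's instructions.

```python
def f_to_c(f: float) -> float:
    """Convert Fahrenheit to Celsius: C = (F - 32) * 5/9."""
    return (f - 32) * 5 / 9

def water_ready(temp_f: float, low: float = 87.0, high: float = 88.0) -> bool:
    """True when the tank water is inside the target dipping range."""
    return low <= temp_f <= high

print(round(f_to_c(87), 1))  # 30.6
print(round(f_to_c(88), 1))  # 31.1
print(water_ready(87.5))     # True
print(water_ready(92))       # False
```

So the target range works out to roughly 30.6 to 31.1 degrees Celsius.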
<urn:uuid:23ed72a2-b316-43ad-8bb0-be8ef2ba0fe8>
{ "date": "2014-04-24T00:39:45", "dump": "CC-MAIN-2014-15", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00163-ip-10-147-4-33.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9269782900810242, "score": 2.71875, "token_count": 736, "url": "http://crafts.creativebug.com/to-water-transfer-printing-1673.html" }
A cave on the southern coast of South Africa contains a bowl’s worth of edible shellfish dating back to about 165,000 years ago. Besides pushing back the earliest known seafood meal by 40,000 years, the discovery also marks the earliest time when people might have engaged in symbolic thought. Anatomically modern humans probably emerged between 150,000 and 200,000 years ago in eastern Africa. When those humans first developed the potential for symbolic thought, including language, has remained a puzzle. Looking for early human remains, Curtis W. Marean of the Institute of Human Origins at Arizona State University and his colleagues homed in on the caves at South Africa’s Pinnacle Point. In addition to discovering the first known seafood dinner—mostly brown mussels—they found small stone blades and reddish rocks tossed in with the shells. They identified a dozen or so pieces of iron-rich hematite rock with flattened sides bearing parallel grooves, indicating that the shellfish eaters scraped the rocks to make powder. Mixing that powder with sap or another binder yields a reddish or pinkish paint, possibly to adorn the body or the face. That people were working with pigments back then “is a pretty good indicator of symbolic thought,” says Marean, who published the findings in the October 18 Nature. A population living on shellfish would have stayed in one place and grown in number, he notes, increasing the need for negotiations between individuals or social groups, which might have led to a system of decorative markings.
<urn:uuid:a40883d1-df65-416d-b96b-4caa0b0c38cb>
{ "date": "2019-01-18T03:42:10", "dump": "CC-MAIN-2019-04", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659677.17/warc/CC-MAIN-20190118025529-20190118051529-00056.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9604308009147644, "score": 4.125, "token_count": 318, "url": "https://www.scientificamerican.com/article/food-for-symbolic-thought/?error=cookies_not_supported&code=e395f9a7-6347-4de4-8c96-5ceb9fb306e9" }
Dissecting cadavers is a rite of passage for a majority of medical students. Third-year Emergency Medicine resident Rajnil Shah recalls his first encounter with a human body as a moment he'll never forget. "It was scary, but I was excited to see what I could learn from this precious gift." For centuries, anatomic specimens have been an integral part of the learning environment for medical students and residents. They remain essential for the many aspiring doctors who train at Jump. "There's a great appreciation because those who donate their bodies to science help us understand the human anatomy, become more proficient with medical procedures, and feel more comfortable when we start working with live people," said Shah. More than 25 medical students and residents utilize anatomic specimens every month in our anatomical skills lab. They are used to study the human body and practice surgical procedures. Jump and OSF HealthCare strictly adhere to the Ethical and Religious Directives for Catholic Health Care Services. We believe it's important to teach our learners to show respect not only for the living but also for those who've donated their bodies for the greater good. Jump staff also makes an effort to educate visitors interested in the anatomical lab about the importance of cadavers in training. Medical schools seeking to procure specimens today are able to find many accredited vendors that handle bodies donated to science. This hasn't always been the case. Mary Roach, in her 2003 book, Stiff: The Curious Lives of Human Cadavers, sums it up perfectly: "Few sciences are as rooted in shame, infamy, and bad PR as Human Anatomy. To understand the cautious respect for the dead that pervades the modern anatomy lab, it helps to understand the extreme lack of it that pervades the field's history."

Dissection as Sentencing

Religious beliefs prevented the donation of bodies to science in the 16th century.
People believed that dissection would spoil the chances of holy resurrection. It was considered a punishment worse than death. Executed murderers were the only legal source of cadavers for anatomical study in Britain and the United States. The U.S. even adopted a law in 1790 permitting judges to add dissection to capital punishment for murder. This standard for obtaining cadavers worked well into the 18th and 19th centuries until the number of medical schools began increasing. Anatomists and medical schools faced a shortage of material. Some enacted extreme measures, such as famed English physician and anatomist William Harvey. Harvey was so dedicated to his calling, he dissected his own deceased father and sister. Others turned to more underhanded ways of acquiring cadavers. The most common method was body snatching—the act of stealing freshly dead humans from graves. Anatomists referred to body snatchers as resurrectionists. Resurrectionists found it was an easy way to make money. The pay worked out to about $1,000 a year, with summers off. Anatomy courses were often held between October and May, to avoid the stench of summertime decomposition. Some anatomy instructors even encouraged their students to raid graveyards at night to provide bodies for class. Certain Scottish schools in the 1700s had a more formal arrangement: tuition could be paid in corpses rather than cash. The practice of body snatching became so rampant, people hired guards to watch over bodies. Some purchased so-called “anti-resurrectionist” products and services. These included iron cages, morthouses, and coffins outfitted with cast-iron corpse straps. Then there are those who resorted to murder. The infamous William Hare and William Burke devised a scheme to kill people and sell their bodies to Edinburgh anatomist, Robert Knox. The pair invited passersby into their lodge for the night and murdered 16 people over 10 months before they were caught. 
Ironically, Burke was later brought to justice, hanged, and publicly dissected. Hare was granted immunity for testifying against his partner. Burke's bones were shipped to the Royal College of Surgeons of Edinburgh to be made into a skeleton. His remains reside there to this day.

Laws to Stop Body Snatching of Cadavers

Finally, Parliament intervened and passed the Anatomy Act of 1832 in the United Kingdom. This act made body snatching a criminal offense. It allowed unclaimed bodies and those donated by relatives to be used for the study of anatomy. Massachusetts was the first state in the U.S. to enact laws, in 1830 and 1833, allowing unclaimed bodies to be used for dissection. Over the course of the next decades, many other states followed suit. These measures allowed medical schools to collect unclaimed bodies of people who died in hospitals, asylums, and prisons. Congress approved the Uniform Anatomical Gift Act in 1968. It officially made body donation a right, morally based on free choice and volunteerism. A second act was signed in 1987 and served to clarify the donation process further. Together these two acts standardized laws among states. Furthermore, the act established the human body as property, a new privilege that allowed for a donor's wishes to be honored in court even if his or her next of kin objected to donation after death.

The Need for Donors

Dissection of the human body is a fundamental part of medical training. It helps up-and-coming medical professionals understand the human anatomy and how the body functions. While technology has found a way to simulate the anatomy of a human being, there's no substitute for having the ability to maneuver through the intricacies of a human body. Jump is thankful to have an anatomical lab. We are able to provide exceptional learning opportunities for medical students and residents. Our facility is helping them sharpen their skills and, at the same time, learn that every life is sacred.
Stiff: The Curious Lives of Human Cadavers by Mary Roach
<urn:uuid:346c2a38-52c0-4db3-aa84-c43a8e2ee1e0>
{ "date": "2018-02-19T11:41:36", "dump": "CC-MAIN-2018-09", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812584.40/warc/CC-MAIN-20180219111908-20180219131908-00656.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9631653428077698, "score": 3.390625, "token_count": 1242, "url": "http://jumpsimulation.org/research-innovation/our-blog/2016/february/what-you-didnt-know-about-cadavers" }
Nearly two weeks after 15 miners became trapped in an illegal coal mine in northeastern India, rescue efforts to save them have been hampered by more flooding. A shaft of the illegally constructed mine was flooded in Meghalaya state on December 13, trapping the men inside. The mine’s shaft, which flooded after a nearby river overflowed, is estimated to be about 320 feet deep. The men are believed to be trapped at the bottom. More than 80 rescue personnel, including deep-water divers from the National Disaster Response Force (NDRF) are currently working to pump out the excess water in the shaft. Authorities have installed two pumps and state officials have reached out to experts for technical support to increase the pumps’ efficiency. But on December 20 heavy rains caused the river to overflow again and re-flooded the mine, raising fears that the men trapped inside might not have survived. “The efforts are on and nobody expected this to happen,” Conrad Sangma, Meghalaya’s chief minister, told reporters Wednesday. “The government’s duty is to continue trying. There is hope. We are not going to simply give up like that.” The rescue effort also has been challenged by the mine’s illegal construction. Normally, mine operators are required to produce maps that highlight tunnel passages and safety areas in case of an emergency, NDRF commander S.K. Shastri told CNN. But this mine does not have a map, said Shastri, who is in charge of the rescue operation. The mine, constructed in a jungle in the state’s East Jaintia Hills district, was employing a method of extracting coal known as rat-hole mining. The practice was banned in the Meghalaya in 2014 by the National Green Tribunal (NGT) due to health and environmental risks but is still used in secluded pockets of the state. Meghalaya is home to some of the largest coal deposits in the country and the resource has been illegally mined for decades. 
According to the state government, the region has more than 576 million metric tons of coal deposits.
<urn:uuid:fb8bd1b4-2e57-455e-a610-30d71aaeccec>
{ "date": "2020-01-25T01:27:43", "dump": "CC-MAIN-2020-05", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250628549.43/warc/CC-MAIN-20200125011232-20200125040232-00336.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9725196361541748, "score": 2.546875, "token_count": 449, "url": "https://www.newzimbabwe.com/15-trapped-miners-feared-dead-in-india/" }
We've all heard stories about the Rapture — when all the righteous people will be bodily lifted into Heaven, leaving everybody else to endure years of tribulation. It's a popular idea, that appears in loads of books as well as movies. But where did this bizarre idea come from? It turns out the notion of the Rapture is pretty new — dating back less than 200 years. So who developed this doctrine, and how did it become so popular, almost overnight? Where did all the people go? The best known treatment of the Rapture is probably Jerry Jenkins and Tim LaHaye's Left Behind book and movie series. The Left Behind tie-in movies feature a wide-eyed Kirk Cameron leading people through a world that looks like a PG-rated issue of Garth Ennis' Crossed. Planes crash into the ground, and cars that are suddenly missing their drivers careen into each other, as a chosen group of people are "raptured" and disappear from the Earth, leaving the rest of the world to fend for themselves. The fever dream of a young girl Depending on which theologian you speak to, only one or two passages from Judeo-Christian religious texts make reference to an event akin to what is portrayed as the Rapture, leaving the idea with very little Biblical support. Instead, most of the lore surrounding the Rapture originates with two people in the early 19th Century: a teenage girl living in Scotland, and a London-born preacher. Margaret McDonald, a fifteen-year-old girl living in Scotland, experienced a "vision" of the end of the world in 1820. In McDonald's vision, the chosen few are saved from a "purifying" fire. This is not exactly the disappearance in the middle of the day that popular culture views as the Rapture, but an early prototype. Not everyone leapt to follow her view — and in fact, several contemporary religious leaders deemed her visions demonic. 
Meanwhile, London-born evangelist John Darby and members of his flock, the Irish-born Plymouth Brethren, popularized and molded the idea of Judeo-Christians being removed from the Earth, prior to an unknown period of strife. But McDonald had no influence on Darby's views, since Darby apparently espoused this idea as early as 1827. But McDonald's visions, and their later publication, no doubt further popularized the idea of the Rapture in Europe. Popping up in publication Darby traveled to North America on several occasions during the mid-19th Century, teaching his theory of the Rapture. On one of these trips, Darby met with James Brookes, a prominent preacher and writer in Missouri — and, most importantly, the mentor of Cyrus Ingerson Scofield. Scofield, influenced by Darby's teachings via his mentor, published the Scofield Reference Bible in 1909. The Scofield Reference Bible went on to become one of the best selling religious texts of the early 20th Century, one that continues to sell extremely well in the United Kingdom. Scofield's text displays his personal notes and explanations right next to the King James translation of the Judeo-Christian Bible. The proximity of Scofield's notes to the religious text no doubt lent credence to his words, especially in a world lacking widespread communication systems. As individuals emigrated to the United States in the early 20th Century, this helped spread the belief that Darby had already put in place, during his visits to North America. Amongst those who do believe in the Rapture, meanwhile, the exact details of the event remain quite a mystery. But some leaders do go into specifics — even setting exact dates when the Rapture will happen. Three different highly publicized dates came and went in the 1990s, but the most recent failed predictions happened just last year. Harold Camping made his second and third attempts to fix a date on the rapture after his humbling announcement of the "confirmed" date of September 6, 1994. 
Camping announced May 21, 2011 as the date for the disappearance of the worthy — but after the date passed, he quickly came back and announced the date of October 21, 2011 as the "true" date. Camping predicted a series of earthquakes beginning in New Zealand, to accompany these dates. The 89-year-old Camping and his followers spent $100 million publicizing these two dates in a media campaign. After neither date turned out to be accurate, many of Camping's followers felt cheated, especially people who'd put their lives on hold for years. This article, checking in on Camping's followers a year later, is compelling but depressing reading. One engineer spent most of his retirement savings on publicizing Camping's predictions, only to see them fail to materialize. Another former believer in Camping told the reporter, "I think I was part of a cult."
<urn:uuid:65c3c9bb-12c2-469b-81f9-3109299d7acc>
{ "date": "2015-11-30T12:59:32", "dump": "CC-MAIN-2015-48", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398461529.84/warc/CC-MAIN-20151124205421-00281-ip-10-71-132-137.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9699031114578247, "score": 3.03125, "token_count": 1004, "url": "http://io9.com/5928190/the-very-short-history-of-the-rapture?tag=rapture" }
Atrial fibrillation (AFib, AF) is a common type of heart rhythm abnormality, with symptoms such as dizziness, fainting, weakness, fatigue, and shortness of breath. This guide will give you tips for how you can manage your atrial fibrillation, feel better, and keep from having a stroke. If you notice your heart suddenly races, or if you have uneven heartbeats that last several minutes, you may have a condition known as atrial fibrillation. This issue provides a clinical overview of atrial fibrillation, focusing on diagnosis, treatment, and practice improvement. Atrial fibrillation (also called AFib or AF) is a quivering or irregular heartbeat (arrhythmia) that can lead to blood clots, stroke, heart failure, and other heart-related complications. The mini-maze procedure is used to treat atrial fibrillation (Dr. David Affleck, MD, cardiovascular surgeon at MountainStar Cardiovascular Surgery in Utah). Atrial fibrillation is a cardiac arrhythmia in which the upper chambers of the heart, the atria, beat irregularly and, often, rapidly. The Cleveland Clinic Heart Center is a leader in the treatment of AFib, including the maze procedure and pulmonary vein isolation ablation. When the heart beats regularly, its two upper chambers (atria) beat in sync, or in rhythm, with the heart's two lower chambers (ventricles); atrial fibrillation disrupts this coordination. Find in-depth information on atrial fibrillation, including symptoms ranging from lack of energy to heart palpitations and dizziness. Atrial fibrillation (AF) is the most common cardiac arrhythmia; its electrocardiographic characteristics include irregularly irregular RR intervals and the absence of distinct P waves. The only proven way to prevent a stroke caused by atrial fibrillation is to use a blood thinner such as warfarin (Coumadin).
We asked people to describe what having atrial fibrillation (AF) feels like; palpitations (a noticeably rapid, strong, or irregular heartbeat) and a fast pulse rate are common. Introduction to atrial fibrillation, including causes and treatments. Atrial fibrillation (AF) educational online resources for patients and carers help you find out about all aspects of your condition. There are multiple theories about the etiology of atrial fibrillation; an important one is that, in atrial fibrillation, the regular impulses produced by the sinus node are overwhelmed by disorganized electrical activity in the atria. Atrial fibrillation (AFib) is an abnormal heart rhythm: the four chambers of the heart usually beat in a steady, rhythmic pattern, and AFib means that the atria (the upper chambers) fall out of that pattern. A disruption to this pattern causes an irregular rhythm, also known as an arrhythmia; atrial fibrillation, or AFib, is the most common type of arrhythmia. Atrial fibrillation controversies include rate control vs. rhythm control, the aggressive Ottawa protocol, stroke risk stratification, and the CHADS score. Our heart rhythm specialists provide the full range of treatment options for atrial fibrillation, including radiofrequency catheter ablation. Dr. Srini Iyengar, BCH's newest cardiologist, discusses symptoms and treatment of irregular heartbeat with ABC Channel 7 News. Atrial fibrillation clinical research trial listings appear in cardiology/vascular diseases on CenterWatch. Atrial fibrillation is the most common irregular heart rhythm in the United States; according to the American Heart Association (AHA), about two million Americans are affected. Atrial fibrillation (AF) is a fast, irregular heart rhythm in which the two upper chambers of the heart (the atria) quiver rapidly (fibrillate) instead of beating effectively. Atrial fibrillation (AF) is the most common clinical arrhythmia encountered, and a wealth of evidence has improved our ability to diagnose and treat it.
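One of the controversies listed above is stroke risk stratification with the CHADS score. The sketch below implements the classic CHADS2 point assignments (one point each for congestive heart failure, hypertension, age 75 or older, and diabetes; two points for a prior stroke or TIA). It is an illustration of the published scoring rule only, not clinical software and not medical advice.

```python
def chads2(chf: bool, hypertension: bool, age: int,
           diabetes: bool, prior_stroke_tia: bool) -> int:
    """Classic CHADS2 score: 1 point each for CHF, hypertension,
    age >= 75, and diabetes; 2 points for prior stroke/TIA. Max is 6."""
    score = sum([chf, hypertension, age >= 75, diabetes])
    score += 2 if prior_stroke_tia else 0
    return score

# Example: a 78-year-old with hypertension and a prior TIA.
print(chads2(chf=False, hypertension=True, age=78,
             diabetes=False, prior_stroke_tia=True))  # 4
```

Higher totals correspond to higher annual stroke risk, which is why the score figures in decisions about anticoagulation.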
<urn:uuid:8fde93f6-9394-441c-b197-cc70e7fb1a26>
{ "date": "2018-11-16T11:55:05", "dump": "CC-MAIN-2018-47", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743011.30/warc/CC-MAIN-20181116111645-20181116133645-00216.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8941879272460938, "score": 3.265625, "token_count": 855, "url": "http://trassignmentalok.bethanyspringretreats.us/atrial-fibrillation.html" }
and makes this point: For example, as I was re-reading an interesting little maths primer this week by Timothy Gowers (a Fields Medal winner and maths professor at Cambridge), I found him briefly discussing axiomatics and the role of Russell and Whitehead's "Principia Mathematica". The importance of this text, for Gowers, is that it establishes the axiomatic hermeneutic (my phrase, not his, btw) which "means that any dispute about the validity of a mathematical proof can always be resolved". That is correct. Furthermore, as Gowers goes on to explain:

Nevertheless, the fact that disputes can in principle be resolved does make mathematics unique. There is no mathematical equivalent of astronomers who still believe in the steady-state theory of the universe, or of biologists who hold, with great conviction, very different views about how much is explained by natural selection, or of philosophers who disagree fundamentally about the relationship between consciousness and the physical world, or of economists who follow opposing schools of thought such as monetarism and neo-Keynesianism. [p.49]

That is correct, and it is a consequence of logicism. Math is the only subject that resolves all of its disputes. You might think that the hard sciences would be able to resolve disputes, but they cannot. Physics has a dispute over the merits of string theory, and there is no hope of any resolution. Update: I see that the Wikipedia article on Mathematics has some nonsense about philosophers and mathematicians deciding that math must be like a science because Gödel proved that it was not reducible to logic. As explained below, they are wrong. You can find a more accurate description of the logical nature of math in the article on Axiomatic set theory. I'd like to see those philosophers and mathematicians named.
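Gowers's point, that any dispute about the validity of a mathematical proof can in principle be resolved by reduction to axioms, is nicely illustrated by modern proof assistants, which mechanize exactly the axiomatic program that Principia attempted. The toy Lean snippet below is my own illustration, not from Gowers or the post: the checker either accepts each derivation or it does not, leaving nothing to argue about.

```lean
-- A toy machine-checked proof: commutativity of addition on the
-- naturals, derived from the definitions of Nat rather than asserted.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n

-- Even a concrete arithmetic fact is settled by computation from axioms:
example : 2 + 2 = 4 := rfl
```

If two mathematicians disagreed about either statement, running the checker would end the dispute, which is precisely the uniqueness Gowers describes.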
<urn:uuid:20a34060-c90a-4a8c-b01f-ff89147ea719>
{ "date": "2019-03-24T13:39:35", "dump": "CC-MAIN-2019-13", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203448.17/warc/CC-MAIN-20190324124545-20190324150545-00416.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9641951322555542, "score": 2.65625, "token_count": 384, "url": "http://blog.singularvalues.com/2011/01/why-math-is-unique.html" }
As parents, we’ve all had one of those never-ending days full of stresses, messes and fussy kids. Even those seemingly perfect parents face temper tantrums, spills and sleepless nights. Parenting can be even more challenging when you throw in the extra challenges of frequent relocations and deployments. There are a few simple things you can do to help alleviate that stress and make sure your family is happy, healthy and safe. Protective factors are conditions in families and communities that, when present, increase the health and well-being of your family. They can help you create a great family environment that promotes healthy child development and reduces the headaches of parenthood. These six protective factors build on your family's strengths and can easily be incorporated into your everyday routine: - Nurture and attachment – We all know that kissing a scratch or a big bear hug can go a long way to turning a frown into a smile on our children’s face. Research has shown that simple acts of affection, such as hugs or loving words, have a significant impact on the positive growth and development of your children. With our older children, this can also mean taking more time to listen to their worries or be involved with after-school activities. Nurturing children of any age encourages healthy physical and emotional development. - Knowledge of parenting and child development – While you are an expert in understanding your children's personalities and unique behaviors, you might not always know what to expect of them in terms of typical developmental milestones. Knowing what your children are capable of and setting realistic expectations for their behavior can take a lot of stress and frustration out of parenting on a day-to-day basis. That is why keeping up with the latest information about parenting techniques and child development is so important. - Parental resilience – Life comes with ups and downs. 
When the downs seem like they outnumber the ups, it's important to know how to deal with the stress in a way that doesn't affect your parenting. As a parent, you have inner strengths and support systems you can tap into, such as your faith, sense of humor or relationships with friends and family. The ability to identify stress and deal with it in a healthy way not only increases your well-being, but also shows your kids a model for positive ways to cope.
- Social connections – There is truth in the adage, "It takes a village to raise a child." Having a solid group of friends and family to offer assistance and give you advice can take the edge off a rough day, and allow you to enjoy your family even more.
- Concrete supports for parents – There are lots of things that can greatly affect the stability of your family, such as financial insecurity, lack of adequate housing or employment issues. That is why it's important to have support measures in place, both in the community and at home, to help you overcome these challenges. Reach out to your military and family support center to learn about programs and services on your installation or in the local community that might be able to help your family. You can also speak with a Military OneSource consultant by calling 800-342-9647.
- Social and emotional competence of children – Your children's ability to connect and interact with the world around them has a positive impact on their relationships. Whether it's playing with the neighborhood kids or expressing emotions, your kids are learning different ways to connect. However, they will always make mistakes and their behavior can be challenging at times. Understanding this, and being able to identify developmental delays, can make challenging behaviors easier to deal with. Early work with children to keep their development on track helps keep them safe and fosters healthy development.
For more information about protective factors and how to strengthen them in your family, visit the U.S. Administration for Children and Families website on protective factors.
Bill Clinton on Immigration
President of the U.S., 1993-2001; Former Democratic Governor (AR)
2000: Required agencies to communicate in foreign languages
Congress traditionally refused to admit new states if they lacked an English-speaking majority. In recent decades, the incentive to learn English has eroded. For example, the Voting Rights Act of 1965 required provision of bilingual voting ballots. California and some other states also allow voting by mail in state elections using non-English language ballots. But bilingual ballots should not be needed, because immigrants since the Nationality Act of 1906 (later reaffirmed in the Nationality Act of 1940) have had to demonstrate literacy in English in order to gain US citizenship. Since only citizens can vote, why should anyone need a foreign language ballot? Furthermore, just before he left office President Clinton signed Executive Order 13166, which required federal agencies to ensure people could receive communications and services from the government in foreign languages. Although well-intentioned, the policy further reduces motivation for immigrants to learn English.
Source: Leadership and Crisis, by Bobby Jindal, p.139, Nov 15, 2010
Assure Mexico: no mass deportations
Immigration was a big issue [in discussions between myself and the president of Mexico]. Many Central Americans and people from the Caribbean nations were working in the US and sending money back home to their families, providing a major source of income in the smaller nations. The leaders were worried about the anti-immigration stance Republicans had taken and wanted my assurances that there would be no mass deportations. I gave it to them, but also said we had to enforce our immigration laws.
Source: My Life, by Bill Clinton, p.756, Jun 21, 2004
In 50 years, US will have no majority race, like NYC now
In 1998, Mr. Clinton rhapsodized to a cheering student audience about a day when Americans of European descent will be a minority.
"Today, largely because of immigration, there is no majority race in Hawaii or Houston or NYC. Within 5 years there will be no majority race in our largest state, California. In a little more than 50 years there will be no majority race in the US. No other nation in history has gone through demographic change of this magnitude in so short a time." Correction: no nation in history has gone through a demographic change of this magnitude in so short a time, and remained the same nation. Mr. Clinton assured us that it will be a better America when we are all minorities and realize true "diversity."
Source: The Death of the West, by Pat Buchanan, p. 3, Oct 15, 2002
Reduce immigration backlog while maintaining quality
Since 1993, the US has welcomed nearly 4.4 million new American citizens. Faced with this unprecedented number of applications, the Administration undertook an initiative that has significantly reduced the backlog of citizenship applications and is restoring timely processing while at the same time maintaining the integrity of the process. The INS is on track to meet its goal of reducing this backlog by completing 1.3 million applications this year while maintaining the highest levels of quality.
Source: WhiteHouse.gov web site, Jul 2, 2000
Opposed Official English; strengthen bilingual education
Source: WhiteHouse.gov web site, Jul 2, 2000
- Opposed English-Only Legislation. The Clinton-Gore Administration strongly opposed legislation to make English the official language of the US, which would have jeopardized services and programs for non-English speakers and jeopardized assistance to the tens of thousands of new immigrants and others seeking to learn English as adults.
- Strengthening Bilingual and Immigrant Education. The President is committed to ensuring that students with limited English skills get the extra help they need in order to learn English and meet the same high standards expected for all students.
- Providing Quality Bilingual Education. Bilingual education funding also provides teachers with the training they need to teach students with Limited English Proficiency.
- Increased Assistance for Migrant Children and Families.
- Opposed Efforts to Keep Immigrant Children Out of Public Schools.
700 new Border Patrol agents; increased penalties on aliens
The Clinton Administration sent a legislative proposal to Congress in 1995 to strengthen the country's strategy for combating illegal immigration. This proposed legislation provides for:
- No fewer than 700 new Border Patrol agents.
- An Employment Verification Pilot Program to determine the most effective means of removing a significant incentive to illegal immigration: employment in the US.
- Increased penalties for alien smuggling, illegal reentry, failure to depart, employer violations, and immigration document fraud.
- Streamlined deportation procedures so that criminal aliens can be more expeditiously removed from the US.
Source: State of the Union, by T.Blood & B.Henderson, p. 44, Aug 1, 1996
Strict enforcement against illegal immigration
We must not tolerate illegal immigration. Since 1992, we have increased our Border Patrol by over 35%; deployed underground sensors, infrared night scopes and encrypted radios; built miles of new fences; and installed massive amounts of new lighting. We have moved forcefully to protect American jobs by calling on Congress to enact increased civil and criminal sanctions against employers who hire illegal workers. Since 1993, we have removed 30,000 illegal workers from jobs across the country.
Source: Between Hope and History, by Bill Clinton, p.134, Jan 1, 1996
We are richer for the energy & ideas of immigrants
We must realize that all Americans, whatever their racial and ethnic origin, share the same old-fashioned values, work hard, care for their families, pay their taxes, and obey the law.
This same commitment to tolerance and equal opportunity should govern our approach to immigration. It's important for us all to remember that we are both a nation of immigrants and a nation of laws. Legal immigration has made America what it is today - a vibrant and diverse nation, all the richer for the energy, ideas, and plain hard work immigrants have contributed to our society. Immigrants who enter our country legally and begin the process of attaining citizenship today are little different from the strivers who were our own ancestors. We need to remember that, and repudiate those who argue against immigration as a thinly veiled pretext for discrimination.
Source: Between Hope and History, by Bill Clinton, p.133-134, Jan 1, 1996
Page last updated: Jan 21, 2018
EconSouth (First Quarter 2006)
Disasters, Income, and Wealth
Tom Cunningham is vice president and associate director of research at the Federal Reserve Bank of Atlanta.
Disasters come in many forms, either natural or man-made, and they often cause calamitous loss of life and destruction of physical property and wealth. Although we try to minimize or mitigate the potential losses associated with disasters, some level of risk is unavoidable. To mitigate the risk of the loss of physical wealth, we often insure our possessions. Then, if a disaster destroys these possessions, we can replace them at the expense of the insurance company. The insurance company, in turn, is willing to bear this loss because it charges its customers premiums that over time compensate it for the risk. In the case of idiosyncratic human disasters, such as fires, thefts, and auto accidents, the risk premium can be calculated on the basis of experience and individual characteristics, and these losses are randomly distributed across the entire economy.
This loss of wealth has two components. First, and most devastating, is the loss of life. People cannot be replaced, and the brunt of this loss is borne by the victims' families. A region's economy also suffers with the loss of human capital. Individuals who live in an area and work in a job over time develop a knowledge base specific to that area and task. When those individuals are gone, their purely technical skills may be replaced, but their unique skills must be relearned by others who have to make the investment in a learning curve.
If the destroyed property was fully insured, then the insurance companies bear the net loss of wealth. Presumably the insurance companies have managed their risk portfolios so that they are well positioned to bear these particular losses and pay off any claims. That net loss is spread over a wide portion of the economy. Without insurance, the assets' owners bear the entire loss.
In effect, the asset side of their balance sheet is diminished by the extent of the damage and nothing happens to their external liabilities, so the disaster is a deadweight loss of net wealth to the owners. Consider a business that is quite profitable but is wiped out by a storm. Assume the business has no insurance or is self-insured. The owner of the business suffers a loss to her assets at the time of the disaster. (In accounting terms, this loss of assets is matched by a loss of the owner’s equity.) But if the business is sufficiently profitable, it pays to rebuild it, even if the owner has to borrow to make the restoration. Taking out a loan should not be a problem for an otherwise thriving business. The income generated locally by the restoration is essentially the same as if the business were insured, but now the restoration of the lost asset has turned into an increase on the liability side of the owner’s balance sheet—the debt to finance the restoration. The net reduction in wealth is the same, but the consequence for individual balance sheets is quite different. Topsy-turvy income and wealth Wealth is usually the result of saving from a stream of income; more income means potentially more wealth. More wealth manifests itself in more physical capital, which makes workers more productive and thus generates still more income. The key to this virtuous circle is the savings from the initial stream of income. The conversion of lost wealth into income in the disaster-stricken area is short-lived, however, and the insured victims eventually return to something like the status quo. No matter how well prepared insurance companies may be to pay off disaster losses, their net assets decrease by the amount of the payout. Uninsured victims pay an even heavier price and may never fully recover their previous level of wealth. As the victims of Hurricane Katrina can sadly attest, when disaster strikes, someone, somewhere, must take the loss.
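The insured-versus-uninsured balance-sheet arithmetic described above can be made concrete with a toy calculation. The dollar figures below are invented for illustration and are not from the article.

```python
# Toy balance-sheet arithmetic for a disaster loss, illustrating the
# article's point: net wealth falls by the damage either way, but the
# owner's balance sheet looks very different with and without insurance.
# All figures are hypothetical.

def owner_equity(assets, liabilities):
    """Owner's equity = assets minus external liabilities."""
    return assets - liabilities

damage = 100_000            # value of property destroyed
assets_before = 250_000
liabilities_before = 50_000

# Insured case: the insurer pays to restore the asset, so the owner's
# balance sheet returns to the status quo; the insurer bears the loss.
insured_assets = assets_before - damage + damage   # destroyed, then rebuilt
equity_insured = owner_equity(insured_assets, liabilities_before)

# Uninsured case: the owner borrows to rebuild. The asset is restored,
# but the restoration reappears as debt on the liability side.
uninsured_assets = assets_before - damage + damage
uninsured_liabilities = liabilities_before + damage
equity_uninsured = owner_equity(uninsured_assets, uninsured_liabilities)

print(equity_insured)    # 200000 - equity restored; insurer absorbed the loss
print(equity_uninsured)  # 100000 - equity down by the full damage
```

Either way the restored business generates the same local income, which is the "topsy-turvy" point the column makes: the net loss of wealth is identical, only its location on someone's balance sheet differs.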
Finally we have it – proof that hydraulic fracturing carried out in oil and gas operations is directly responsible for small-to-moderate earthquakes. For a long time the scientific community has believed that there is a link between fracking and earthquakes, the authors of the study published in the journal Science note. However, there has been no conclusive evidence that could answer the questions of how and why seismicity is induced by hydraulic fracturing. That changes now with the University of Calgary study, which establishes the link between fracking and earthquakes. For their study, scientists at the university collected and analyzed seismic data going back to the winter of 2015, when the first seismic event exceeding magnitude 4 on the Richter scale occurred in the Fox Creek area. To get the data they required, scientists drew on information from private and public seismograph stations in the area as well as a comprehensive database of hydraulic fracturing data from each well in the area. Once the required data was at their disposal, scientists used advanced techniques to create a database of 905 distinct events. While the number of events was large, most of these events were too small to be part of existing seismicity catalogues. But scientists were able to link seismic events to specific operations at individual wells, thanks to Repsol Oil and Gas Canada Inc. and Canadian Discovery Ltd., which provided data for the study. This university-industry partnership is crucial for researchers to gain access to data and to understand the fundamental processes. Scientists were able to show a pre-existing but previously undetected fault system running parallel to two horizontally drilled wells.
In one strand of the fault, hydraulic fracturing in both wells triggered small earthquakes by imposing mechanical stresses on the rock formations beneath the hydrocarbons-bearing zone — causing the fault to slip. In this case, movement on the fault effectively terminated when hydraulic fracturing operations ceased, consistent with existing regulatory protocols to halt operations under certain conditions. However, in another strand of the fault — and more than two weeks after hydraulic fracturing injections had stopped — the magnitude 3.9 earthquake occurred at a calculated depth of just over four kilometres. This places the event within the upper levels of Precambrian basement rocks. Subsequent smaller seismic events persisted for a few months afterward, as the seismic activity migrated slowly from the basement back up toward the injection zone. The researchers’ findings indicate that this persistent activity appears to be associated with infiltration of fracturing fluids into one strand of the fault.
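The event-to-well association described above can be sketched as a simple time-window matching step. This is an illustrative toy, not the study's actual method: the injection windows, event times, and the 21-day lag allowance are all invented for the example (the real analysis also used event locations, magnitudes, and far more data).

```python
from datetime import datetime, timedelta

# Toy sketch: tag each seismic event with the hydraulic fracturing
# operation it plausibly relates to, by time window alone. All dates
# are hypothetical.

wells = {
    "well_A": (datetime(2015, 1, 5), datetime(2015, 1, 12)),   # injection window
    "well_B": (datetime(2015, 1, 14), datetime(2015, 1, 20)),
}

events = [
    datetime(2015, 1, 7, 3, 30),    # during well_A injection
    datetime(2015, 1, 16, 22, 10),  # during well_B injection
    datetime(2015, 2, 4, 11, 0),    # weeks after injections stopped
]

def associate(events, wells, lag=timedelta(days=21)):
    """Match each event to a well actively injecting at the time, or
    failing that, to a well whose injection ended within `lag` before
    the event (delayed triggering, as in the magnitude 3.9 case)."""
    matched = []
    for t in events:
        active = [name for name, (start, end) in wells.items() if start <= t <= end]
        lagged = [name for name, (start, end) in wells.items() if end < t <= end + lag]
        matched.append((t, active[0] if active else (lagged[0] if lagged else None)))
    return matched

for t, well in associate(events, wells):
    print(t.isoformat(), "->", well)
```

The lag branch captures the pattern the researchers report: seismicity that persists, or even peaks, well after injections have ceased.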
Tuesday, 13 November 2012
The knowledge management world is large and complex, with many different understandings of what the term means, and what it encompasses. Here is a first-pass map of the Knowledge Management Landscape, and some of the nooks, crannies, islands and archipelagos that make up the landscape. Or if you prefer, the 50 shades within the KM rainbow.
Let's start down the data end, where the knowledge management landscape meets the border with data management. KM's interest in data comes from combining data through linked data, and looking for the patterns within data, through data mining, so that new insights can be gained. Where this is applied to customer data or business data, then we get into the analogous disciplines of CRM and Business Intelligence.
Next to data comes Information, where knowledge management is involved in several ways. For example the structuring of information, through taxonomies, ontologies, folksonomies, or information tagging. Or else the retrieval of information, where knowledge management encompasses enterprise search, and/or semantic search. Or the presentation of information, through intranets, or portals, supported by content management. The presentation of information, as well as the creation of explicit "knowledge objects", is an important component of call centre knowledge management, closely allied to the creation of customer knowledge bases, and knowledge based engineering is a discipline where engineering design is done based on knowledge models.
The creation of explicit knowledge is a significant part of the KM world, containing many shades of its own. Knowledge retention deals with capture of knowledge from retiring staff (aka Knowledge Harvesting), while lessons capture deals with learning from projects, as do learning histories and learning interviews based on multiple interviews.
Another part of the landscape is the organisational learning corner.
This abuts the border with learning and development, but is concerned with learning of the organisation, rather than learning of the individual. In this part of the KM world we find action learning, business-driven action learning, and lesson-learning, plus analogous disciplines such as e-learning, coaching, and mentoring.
Organisational learning abuts the area of knowledge transfer, where we look at dialogue-based processes such as peer assist, knowledge handover, knowledge cafe, baton-passing, after action review, appreciative enquiry, and so on - processes that are focused on knowledge, but are closely allied to other meeting disciplines. Knowledge transfer between people - the tacit area, or experience management - takes us into the area of networking. Here we find the communities of practice, the centres of excellence, the communities of interest, and the social networks. The latter, of course, is closely allied to social media - social media being the technology which supports social networks. Then we have storytelling, as a means of knowledge transfer, crowdsourcing, as a means of accessing knowledge from a wide source, and collaboration as a sort of catch-all term (supported by collaborative technology).
There is a whole innovation area to KM as well - open innovation, creativity, deep-dives, etc.
Then finally we have the more psychological end of knowledge management, where we have disciplines such as epistemology, sense-making, complexity theory, and decision-making theory. Plus of course the part of knowledge management that deals with the lone worker - personal knowledge management.
So there are our 50+ shades of knowledge management - if I have missed any, please let me know through the comments option!
There are few if any companies that work across the entire KM landscape.
Here in the UK we have created the KAAS consortium, which seeks to address the full spectrum from data to information to tacit knowledge to experience management, but even there we may have left one or two little areas outside the coverage.
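To make the "structuring and retrieval of information" corner of the map concrete: a folksonomy is, at its simplest, user-applied tags over content, and retrieval is a lookup over those tags. The documents and tags below are invented purely for illustration.

```python
from collections import defaultdict

# Minimal folksonomy sketch: users tag documents freely, and an inverted
# index over the tags supports retrieval. Documents and tags are invented.

tags = {
    "doc1": {"lessons-learned", "peer-assist"},
    "doc2": {"taxonomy", "search"},
    "doc3": {"peer-assist", "communities-of-practice"},
}

# Build the inverted index: tag -> set of documents carrying it.
index = defaultdict(set)
for doc, doc_tags in tags.items():
    for tag in doc_tags:
        index[tag].add(doc)

def retrieve(tag):
    """Return the documents users have tagged with `tag`."""
    return sorted(index.get(tag, set()))

print(retrieve("peer-assist"))  # ['doc1', 'doc3']
```

A taxonomy differs only in that the vocabulary is controlled and hierarchical rather than emerging from users, but the retrieval mechanics are the same lookup.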
Results 1 - 20 of 109 matches Is Greenland Melting? part of Earth Exploration Toolbook:GreenlandMelt DATA: Thickness, velocity, and annual melt extents of the Greenland ice sheet TOOL: My World GIS SUMMARY: Explore map layers to examine annual melting and long-term changes of Greenland's ice sheet. Analyzing the Antarctic Ozone Hole part of Earth Exploration Toolbook:Analyzing the Antarctic Ozone Hole DATA: Total Ozone Mapping Spectrometer (TOMS) Images. TOOLS: ImageJ, Spreadsheet. SUMMARY: Animate and explore 10 years of Southern Hemisphere ozone images. Then measure and graph the area of the ozone hole over time. Relative Geologic Time and the Geologic Time Scale part of Cutting Edge:Courses:Paleontology:Activities Group simulation of the development of the geologic time scale illustrating concepts of correlation and relative time. Extremely effective for teaching the significance of the geologic time scale. Igneous Rock Compositions and Plate Tectonics part of Cutting Edge:Courses:Petrology:Teaching Examples In this exercise, students use whole-rock major- and trace-element compositions of igneous rocks from a variety of tectonic settings and locations to explore the importance of plate setting in determining magma ... Environmental Health Risk Inventory part of Cutting Edge:Topics:Public Policy:Activities In this activity, students perform an environmental health risk inventory of a selected locale. Students will address anthropogenic and natural health risks of an area using data collected from various online ... Poleward Heat Transport Jigsaw part of Cutting Edge:Topics:Hurricanes-Climate Change Connection:Activities Based on great plate tectonic exercise by Sawyer et al. (2005 JGE), this small-group exercise with maps of data about earth's energy balance helps students visualize poleward heat transport. 
Exploring the Link between Hurricanes and Climate using GCM Results part of Cutting Edge:Topics:Hurricanes-Climate Change Connection:Activities This activity requires students to examine global climate model output available online and consider the potential impact of global warming on tropical cyclone initiation and evolution. As a follow-up, students ... Mapping Local Data in a GIS part of Earth Exploration Toolbook:MyMap DATA: Student-collected GPS data and site characterizations TOOLS: MyWorld GIS, spreadsheet program SUMMARY: Follow a study of Urban Heat Islands as an example of map-based science research projects. Earthquake Case Study part of Cutting Edge:Courses:Introductory Courses:Activities This activity is a multiple case study analysis of different earthquakes that leads to student interpretation of claims, evidence and prediction/recommendations. Red Beans and Rice: Slope failure experimental modeling part of Cutting Edge:Courses:Geomorphology:Activities Students replicate a slope failure experiment published in Science (Densmore et al., 1997) using a simple, acrylic slope failure box in an effort to forge a link between autocyclic processes, long-term landscape ... Testing the Fixed-hotspot-moving-plate model part of Cutting Edge:Courses:Introductory Courses:Activities Students examine hot spot tracks, magnetic inclination data, and coral data from the Hawaii-Emperor Seamount Chain to test the hypothesis that hotspots are fixed. Most students have learned somewhere that hotspots ... Using GLOBE Data to Study the Earth System part of Earth Exploration Toolbook:Using GLOBE Data to Study Earths System DATA: Student-collected environmental data TOOL: GLOBE Online Graphing Tool SUMMARY: explore, graph, and compare data from the GLOBE (Global Learning and Observations to Benefit the Environment) Program. How Fast Do Materials Weather? 
part of Starting Point-Teaching Entry Level Geoscience:Interactive Lectures:Examples A think-pair-share activity in which students calculate weathering rates from tombstone weathering data. - Carbon Dioxide Exercise part of Starting Point-Teaching Entry Level Geoscience:Interactive Lectures:Examples Students work in groups, plotting carbon dioxide concentrations over time on overheads and estimating the rate of change over five years. - Harker Diagrams part of Integrating Research and Education:EarthChem:Compositional Diversity in Volcanic Suites Kent Ratajeski Department of Earth Sciences, Montana State University, Bozeman, MT Published Oct. 26, 2004. Description In this exercise, students use whole-rock major- and trace-element compositions of volcanic ... Directed Discovery of Crystal Structures part of Integrating Research and Education:Crystallography:Directed Discovery of Crystal Structures David Mogk and Kent Ratajeski Department of Earth Sciences, Montana State University, Bozeman, MT Published Jan. 25, 2005 Description This contribution is modified from a published exercise "Directed ... Two streams, two stories... How Humans Alter Floods and Streams part of Quantitative Skills:Activity Collection An activity/lab where students determine the changes in 100-year flood determinations for 2 streams over time. The Heat is On: Understanding Local Climate Change part of Cutting Edge:Enhance Your Teaching:Visualization:Examples Students draw conclusions about the extent to which multiple decades of temperature data about Phoenix suggest that a shift in local climate is taking place as opposed to exhibiting nothing more than natural ... Gulf Stream Heat Budget and Europe part of Cutting Edge:Enhance Your Teaching:Visualization:Examples Student groups are presented a problem scenario to research and make recommendations, predictions, and a resolution of the problem based on data and visualizations. 
Detecting El Niño in Sea Surface Temperature Data part of Earth Exploration Toolbook:PMEL DATA: Sea Surface Temperature (SST). TOOL: My World GIS. SUMMARY: Examine 15 years of SST data from the Pacific Marine Environmental Laboratory. Create and analyze average SST maps to identify El Nino and La Nina events.
To make his fortune, man risked everything to cross the country during the Gold Rush days. Anna Ryan Greenwell of Lakenan, Missouri, has a packet of 13 letters over a century old written by her grandfather, Alden Rice Grout, as he made his way across the plains to California during the Gold Rush days and labored there hoping to make his fortune. The letters were written on thin blue and white paper. When the writer had covered the pages, he turned to the margins and filled them with last-minute thoughts. One space in the center was left for the address, and the page was sealed with a dab of sealing wax. Here are excerpts from the letters: "Richmond, Mo., April 22, 1849 – If practicable, I wish to go on to California as I have been foolish enough to start, and try to be paid for some of the trouble and anxiety I have already felt by leaving a pleasant home for the miserable life I am now enduring." There was a disagreement among the leaders of the 32-wagon train. At Ft. Laramie, Wyoming, Mr. Grout and seven other men broke away to form their own company. He wrote: "The captain, as good and clever a man as ever lived, wished to please all; the consequence was nobody was pleased." Estimating that they had passed 1,000 dead oxen in 150 miles of alkali plains along with deserted wagons, tools and clothing, he says: "I can think of nothing I ever read to compare with it except Bonaparte's excursion to Russia." The company arrived at the diggings near Sacramento on Sept. 16, 1849. Mr. Grout's letters soon were filled with admonitions to his friends in Missouri to remain there. There were days when he dug $200 worth of gold from the hills, but these days were preceded by weeks of labor that netted nothing. To a friend asking his advice about making the trip, he wrote: "I doubt not but you almost weekly hear of this one and that of having taken out his pounds and pounds of gold. 
But do you hear of the one thousand and one that have died, or are lying sick, unable to labor for their bread and not any means to buy it with?" The fantastic prices took their share of his hard-earned gold. At one time he wrote of flour being $2 a pound; onions, $1.50; potatoes, $1. Hay was $10 for 100 pounds, and cornmeal $25 a barrel. In all his letters, Mr. Grout begged for word from home. His wife wrote, but it was May 29, 1850, more than a year after he left home, before he received his first letter. It had been sent east to New York and by steamer around South America. The last letter in the packet informs his wife he will leave for home in December 1850. He had found gold, but not enough to keep him in the treacherous golden land. There was much, he discovered, that gold could not buy – the love of wife and children, peace of mind, a little comfort, such a small thing as a table on which to write. "I have been reading this letter over," he laments, "and confess it goes a good deal like riding in a lumber wagon over frozen roads." Mrs. Robert G. Lanham Monroe City, Missouri Back in 1955 a call went out from the editors of the then CAPPER's Weekly asking for readers to send in articles on true pioneers. Hundreds of letters came pouring in from early settlers and their children, many now in their 80s and 90s, and from grandchildren of settlers, all with tales to tell. So many articles were received that a decision was made to create a book, and in 1956, the first My Folks title – My Folks Came in a Covered Wagon – hit the shelves. Nine other books have since been published in the My Folks series, all filled to the brim with true tales from CAPPER's readers, and we are proud to make those stories available to our growing online community.
The Egyptian pharaoh Khufu was the second ruler of ancient Egypt's fourth dynasty. He tasked his builders with constructing the largest pyramid ever built, which is known today as the Great Pyramid of Giza. Khufu was mummified and placed in the pyramid following his death.

Khufu was known as Cheops by the Greeks; his full name was Khnum-Khufu, which translates to the phrase "the god Khnum protects me." Egyptians worshiped Khufu as a living god during his reign; he ruled as both the religious and political leader of the empire. He was known as a cruel, absolute leader, unlike his father and grandfather before him. Khufu began his reign in 2589 B.C.; it lasted for approximately 23 years until his death in 2566 B.C. He inherited the throne from his father Seneferu and had two wives, Queen Meritites and Queen Henutsen. Khufu had nine sons and 15 daughters. According to the Greek historian Herodotus, Khufu sent one of his daughters to work in a brothel to help pay for his pyramid's construction. Considered one of the seven wonders of the ancient world, the pyramid originally sat at 481 feet high and 755 feet wide. His sarcophagus sat directly in the middle.
Mixed Genre - D and T

This is the default Chart. It is a good place to start if you seek to sample how Technacy Genres become more demanding in design and innovation capabilities and in project expectations. The Mixed Genre chart offers a broad guide for Product Design & Technology subjects. More genre charts can be added by request through the CONTACTS tab.

Emergent Consolidation Early

The EARLY EMERGENT SKILLS phase is the typical preparatory phase of skill and knowledge development for learners who begin to identify basic ideas for combining resources towards the production of objects or systems. The Learner is expected to show principles of success based on understanding and identifying key context parameters to a project purpose and brief.

Emergent Consolidation Upper

The UPPER EMERGENT SKILLS phase can only be accessed via demonstrable and consistent achievements in the UPPER EMERGENT PLAY phase of their development, where the demonstrated achievements presented evidence of relevant capabilities for the project task and context brief. Distinguishing trait: starts to see and examine links between traditionally dissociated ideas. Begins to identify associations between technology, ecology and humanity as an interdependent system (aka the first signs of holism in technacy). The Learner is expected to show principles of success based on understanding and identifying key context parameters to a project purpose and brief.

Emergent Pioneer Early

Identified by overtly expressing ideas through basic juxtapositions of concepts. Distinguishing trait: ideas are highly unlikely to be realised within the given parameters of the project or the learner's personal skills.

Emergent Pioneer Upper

The EMERGENT PIONEER is an 'enthusiastic beginner'. Identified by the lack of competence to apply in real time any aspect of their ideas in a way that 'works' in an applied context for the purpose given. Nevertheless, they offer a significant dimension to a project team by challenging norms and assumptions.
Often their 'failures' provide significant information for future successes otherwise not considered. The EMERGENT PIONEER phase-domain can only be accessed via the EMERGENT CONSOLIDATION phase-domain of technacy development. The EMERGENT PIONEER cannot progress to the COMPETENT PIONEER phase-domain without soliciting access via COMPETENT CONSOLIDATION, either personally or via a successful collaboration with someone who demonstrates technacy capability in the Competent-Consolidation phase-domain relevant to the purpose and applied context of the project-task.

Emergent Play Early

EARLY EMERGENT PLAY includes despondency or low interest in exploring the qualities, functionality and properties of materials and tools, including instruments and devices.

EXAMPLE-1: Learners are expected to play, with no expectation or consistent display of being purpose directed other than self-discovery play: set activities with materials and devices/tools of a tangible nature to explore their qualities, discover what tools work to transform the shape or joining of given materials, and to appreciate the tactile role of their own agency in seeing what works. They begin to use their body (eg hands) as first tools, to shape materials, or to measure or hold fast objects, even on digital displays.

EXAMPLE-2: Equally, a highly accomplished Architect or Furniture designer experienced in the applied context of East Coast Urban Australia would be expected to redress their capability via the Early Emergent Phase when the context or the technology is relatively new. Typically, they also show in this phase a degree of 'giving up' and frustration, rather than exploring the characteristics to learn how the technology or material/digital object responds to actions.
Emergent Play Upper

The EMERGENT PLAY phase is the typical starting phase for any learner whose background is as a novice or as a 'sceptic/despondent beginner'. EMERGENT PLAY demonstrates engagement in activity for enjoyment and recreation rather than for serious or practical application. Mimicry of solution norms usually precedes inventive play and original ideas. Surreal/unfeasible combinations of clichés are typically proposed. A "use and feature everything" solution is commonly presented at the upper levels of EMERGENT PLAY.

EXAMPLE-1: Children are expected to play with materials and devices/tools of a tangible nature to explore their qualities, discover what tools work to transform the shape or joining of given materials, and to appreciate the role of their own agency in seeing what works.

EXAMPLE-2: Equally, a highly accomplished Architect or Furniture designer experienced in the applied context of East Coast Urban Australia would be expected to redress their capability via the Emergent Phase Domains if the task before them expects a functional solution in the applied context of Desert Indigenous Australia Communities.

Competent Consolidation Early

Distinguishing feature: apprentice. Formal vocational level education, or equivalent cultural peer review of accepted codified knowledge and techniques.

Competent Consolidation Upper

The COMPETENT SKILLS phase of development can only be accessed via demonstrable and consistent achievements in the EMERGENT SKILLS and/or the COMPETENT PLAY phases of their development, where the demonstrated achievements presented evidence of relevant capabilities for the project task and context brief. This is the normalised location for most learners and is typically identified by achievement of codified institutional or socially peer endorsed credentialing, for a specified range of contextual applications. Eg. institutional (formal), or cultural (peer reviewed) credentialing to a relevant standards framework.
Competent Pioneer Early

Not consistently achieving upper Competent Pioneer outcomes, though demonstrating many of those outcomes through the course of the project.

Competent Pioneer Upper

The COMPETENT PIONEER phase of technacy development can only be accessed via 'at least' sufficient COMPETENT SKILLS development that is contextually relevant to the project task. The COMPETENT PIONEER phase expects the learner to succeed in achieving project tasks where most of the knowledge and skills required draw upon known and codified methods relevant to the project context and purpose. The task presents novel challenges that the learner has hitherto not had success or experience with achieving. Effective collaboration, communication and social skills are essential determinants of project success.

Competent Play Early

The EARLY COMPETENT PLAY phase can only be progressed to via demonstrable and consistent achievements in the EMERGENT PLAY phases of their development. This phase typically demonstrates codified, or specific knowledge driven, testing using codified, methodical, recorded, and reported experimentation relevant to the project brief to better understand the task and context factors. The reports are to have been peer reviewed/benchmarked as appropriate for the project context and brief.

Competent Play Upper

The UPPER COMPETENT PLAY phase can only be progressed to via demonstrable and consistent achievements in the EARLY COMPETENT PLAY phase of development. This phase typically demonstrates operational modelling and virtual representation with occasional reference to codified, or specific, knowledge. It is a purpose/ideas driven output task defined for the person by a project brief and by provided/pre-defined context criteria.

Sophisticated Consolidation Early

Not consistently achieving upper Sophisticated Skill Consolidation outcomes, though demonstrating many of those outcomes through the course of the project.
Sophisticated Consolidation Upper

The SOPHISTICATED SKILLS phase of technacy development can only be accessed via demonstrable and consistent achievement in the COMPETENT SKILLS phase of their development. Solutions to complex problems are based on known and consolidated competence or well codified methods. The learner demonstrates modelling and simulation that is of such plausible standard that the information produced would be regarded by peers as a feasible reality, and where the learner is well skilled to realise their simulations as an individual, having consistently demonstrated COMPETENT SKILLS in a relevant area. The simulations evoke a collegial view that the idea is very likely to 'work' in the applied context. A learner in Sophisticated-Consolidation is expected to progress to the Sophisticated-Pioneer phase-domain, where both the contextual drivers and the exact nature of the solutions are unknown and where the solution carries high impact risk if failure occurs.

Sophisticated Pioneer Early

Not consistently achieving upper Sophisticated Pioneer outcomes, though demonstrating many of those outcomes through the course of the project.

Sophisticated Pioneer Upper

The SOPHISTICATED PIONEER phase of technacy development can only be accessed via demonstrable and consistent achievements in the UPPER COMPETENT PIONEER and/or the UPPER SOPHISTICATED SKILL CONSOLIDATION phases of a learner's development.
Human Innovation Attributes: the student is able to communicate well, think originally and critically, adapt to change, work cooperatively, remain motivated when faced with difficult circumstances, connects well with both people and ideas, is capable of finding solutions to problems as they occur, and is highly guided by contextual operational parameters; in short, the individual presents an array of skills constituting a well-developed capacity for innovation.

Project Purpose and Operational Context Parameters: the project context requires a designed response that has to account for high complexity, risk or chaos. Significant detail of the end user operational context must be addressed in final project outcomes, eg the design of mega structures, space technologies, nano structures or solutions for socially complex end user environments.

This phase-domain in technacy development is very difficult to sustain, as individuals contend with a high demand for CROSS-DISCIPLINE PROJECT collaborations and are tasked with generating new solutions for largely unknown or changing parameters; when collaborative success is achieved, the project or technology skills pioneered are reclassified down the technacy-innovation chart to the Competent-Skill Consolidation phase-domain as they become normalised and domesticated into codified methods and systems.

This is a phase of project activity with very high impact and personal risk, and so is difficult for any individual to sustain. It demands high levels of cognitive abstraction, drawing mostly on 'fluid intelligence' and basic principles (rather than on prior knowledge, standards or experience) to work a project brief to actual success. Validation is based on collective social exchange, extensive externally reviewed testing processes, and complex modelling and simulation.
Affective attributes are highly demanded of the learner, who often has to contend with personal beliefs, values and ideas in order to progress a project successfully for the greater good. Eg high risk, high return projects at this level include: designing and mass producing the Airbus, or the iPhone, or designing and constructing the Brooklyn Bridge, the Sydney Opera House or the International Space Station, or the development of nano-technology and trans-human systems.

Sophisticated Play Early

Expected to show independently researched and produced detail of designs that have numerous contextual variables identified and also accommodated, with an evidence base to justify all aspects of the design. While complex, little of the complexity is fundamentally new, but the combination of many elements in the design clearly demonstrates coherence and synthesis in total knowledge and skill. Expected skill: includes accurate technical specification drawings of complex designs, and models and maquettes to scale built with precise techniques.

Sophisticated Play Upper

The UPPER SOPHISTICATED PLAY phase can only be achieved via demonstrable and consistent outcomes in the EARLY SOPHISTICATED PLAY phase of development. This phase typically demonstrates superior detail and expertise in being able to abstract models for complex design ideas deemed viable for commercial production. Typically a university degree graduate in Architecture, engineering or product design, or a cultural equivalent. Highly detailed, codified or specific knowledge is expressed in highly abstract and well-credentialed simulations. Modelling and simulation is of such plausible standard that peers would regard the information produced as a feasible reality, describing a solution or aspect of a project with sufficient detail as to permit realisation.
The SOPHISTICATED PLAY learner cannot progress to the SOPHISTICATED SKILLS phase without soliciting access via COMPETENT SKILLS either personally or via a successful collaboration.
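The progression rules scattered through the chart above ("can only be accessed via…", "cannot progress to … without soliciting access via…") amount to a small directed graph of phase-domains. As a purely illustrative sketch, they can be encoded and queried in a few lines; the phase names come from the chart, but the exact edge set is one reading of the text, not an official specification of the framework:

```python
# Hypothetical encoding of the phase-domain progression rules described
# in the chart above. The edge set is an illustrative reading of the text.
ALLOWED = {
    "emergent_play": {"emergent_consolidation", "competent_play"},
    "emergent_consolidation": {"emergent_pioneer", "competent_consolidation"},
    # The Emergent Pioneer must route via Competent Consolidation:
    "emergent_pioneer": {"competent_consolidation"},
    "competent_play": {"competent_consolidation"},
    "competent_consolidation": {"competent_pioneer", "sophisticated_consolidation"},
    "competent_pioneer": {"sophisticated_pioneer"},
    # Sophisticated Play cannot jump straight to Sophisticated Skills:
    "sophisticated_play": {"competent_consolidation"},
    "sophisticated_consolidation": {"sophisticated_pioneer"},
    "sophisticated_pioneer": set(),
}

def can_progress(current: str, target: str) -> bool:
    """True if this reading of the chart permits moving directly from
    `current` to `target`."""
    return target in ALLOWED.get(current, set())

print(can_progress("emergent_pioneer", "competent_pioneer"))        # False
print(can_progress("emergent_pioneer", "competent_consolidation"))  # True
```

A check like this makes the chart's central constraint explicit: pioneer phases are never entered directly, only through the corresponding consolidation phase.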
The Signs of Inflation

Since Barack Obama has decided to continue down the path George Bush started down, the path of Robert Mugabe and Friedrich Ebert, the United States economy will soon be facing all the problems associated with inflation. Unfortunately, the effects of inflation are poorly popularized, meaning that most people have a very limited understanding of inflation. As a result, when confronted with symptoms of inflation, people all too often pin the blame on other "causes" such as Jews or greedy white landowners. These false accusations are heavily promoted by the ruling classes; after all, it's better that the mob chase some Jews or hedge fund managers with pitchforks than come after the rulers who inflated the currency.

What Inflation Isn't

The primary misconception of inflation is that it is merely the rise in prices. The news media encourages people to believe this, reporting the inflation rate in the same way it reports on changes in the weather, as if it were some natural phenomenon over which people have no control. Price increases are a result of inflation, yes. However, inflation has the same relationship with price increases that war has with the number of burials per month at Arlington Cemetery.

The Mechanics of Inflation

Inflation is the phenomenon where additional money is created. In the case of the United States, the Federal Reserve purchases some asset, such as a United States Treasury Bond, paying by check. The seller deposits the check in a bank, which then records the additional money as part of its demand deposits. This is the moment where the inflation occurs. The bank then loans out 80% of those deposits to borrowers. Those borrowers spend the loaned money. This spent money is deposited back into the banking system. The banks loan out another 80% of this latest round of deposits.
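The repeated 80% re-lending described above is a geometric series, so its limit can be checked in a few lines. This is a sketch of the arithmetic only, using the article's assumed 20% reserve ratio; it is not a claim about actual Federal Reserve mechanics:

```python
# Fractional-reserve re-lending as a geometric series.
# Assumption from the text: banks keep 20% of each deposit and re-lend 80%.
reserve_ratio = 0.20
new_base_money = 1.00  # the dollar created when the Fed's check is deposited

total_deposits = 0.0
total_loans = 0.0
deposit = new_base_money
while deposit > 1e-12:          # iterate until the remainder is negligible
    total_deposits += deposit   # the money lands in a bank as a deposit
    loan = deposit * (1 - reserve_ratio)
    total_loans += loan         # 80% is lent out...
    deposit = loan              # ...spent, and redeposited elsewhere

print(round(total_deposits, 2))  # 5.0 dollars of deposits per dollar of base money
print(round(total_loans, 2))     # 4.0 dollars of outstanding loans
```

The series converges to deposits of 1/r and loans of (1-r)/r per dollar of base money, which is where the article's figure of $4.00 in loans per newly created dollar comes from.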
This cycle of loaning out money, which is then spent and then deposited back into the banking system, continues until eventually an equilibrium is established where banks list $4.00 in outstanding loans for every dollar created when the Federal Reserve Bank wrote that first check.

The people spending this newly created money bid up prices; people selling stuff tend to sell to the highest bidder, and the people holding the newly printed money are competing with other buyers to purchase the stuff they want. The result is that this new money slowly disperses out through the economy, leaving higher prices in its wake. The people who spend it first get the benefit of being able to buy stuff at pre-inflation prices; the people who spend it last do not. The people who have access to the newly printed money are left better off; the people who don't are left poorer. Inflation invisibly and without much fuss transfers purchasing power from those who are not closely connected economically with the central bank to those who are.

Follow the Money

The price increases generally show themselves in the sectors of the economy where the new money is spent. For example, let's say that President Obama directs the Secretary of the Treasury to sell a bond to generate the money needed to build yet another bridge over some stream in West Virginia named after Robert Byrd. The bond is purchased by the Federal Reserve Bank with newly printed money. The money is spent by the Federal Government on the tools and materials needed to build the bridge. The prices of cranes, road-crew labor, concrete and steel increase.

The Rise of the Ersatz Product

The other users of these raw materials find themselves struggling to buy the supplies they need. For example, a manufacturer of prefabricated steel sheds might find that he has to pay more to get the metal he needs.
He is caught in a squeeze; the price his customers are willing to pay has remained unchanged while his production costs have increased. The manufacturer can't raise his prices, so naturally he decides to cut costs by attempting to reduce the amount of steel he uses by substitution or adulteration. The end result: the steel shed might cost the same, but the quality of the steel, the strength of the steel, or its toughness will be inferior, resulting in a shoddier product. Candy bars are packaged to look like their sizes are constant, while the volume of candy is reduced. Houses are built less sturdily. Automobiles are manufactured with lower quality steel, with engines that wear out more rapidly, etc.

The Rise in Prices

As the money percolates out through the economy, the people coming into possession of it bid up prices. In fits and starts, at different rates in different sectors of the economy, the price levels go up. People on a fixed income find themselves becoming poorer and poorer. The unpredictability of the price levels causes accounting to become more uncertain. Investments and projects that otherwise would be attempted are foregone. A majority of the population is left poorer as a result of the inflation.

Increased Political Repression

These people naturally lobby for relief. However, the political classes that benefit from the inflation don't want to stop, and the people are too ignorant of economics to recognize the fact that it was the printing presses which caused their problems. Instead the people attack profiteers or greedy manufacturers or the greedy bankers. They call for price and wage controls, which, if instituted, further wreck the economy. People also try to abandon the unreliable monetary system for alternatives. They try to do business in foreign currencies or even use commodities such as cigarettes or gold chains as money.
Governments typically react savagely, criminalizing attempts to do business using alternatives to the rapidly devaluing currency.

Stop Me Before I Inflate Again!

Typically, when a government or central bank engages in inflation, they find it hard to stop – the political incentives are too great to resist. They cannot fund their operations if they stop the printing presses. The more the economy falters, the more the tax-base is disrupted, the lower the productivity of the industrial base, the more dependent the elites are on the printing press to fund their lifestyles and the operations of the state. In private, the central bankers will admit that the currency debasement is wrecking the economy. But the central bankers' fear of the negative personal consequences should they stop inflating the currency overrides any impulse to do the right thing.

America's Peculiar Institution

Some economists argue that since the Federal Reserve is not part of the U.S. government, such a doomsday scenario cannot play out here. They claim that the Federal Reserve, being independent, is immune to political pressure. Yet throughout its existence it has acted to support the U.S. government; periodically, the Congress threatens to update or revise the Federal Reserve Act to mandate greater congressional oversight of the Fed's operations, and the 'independent' organization suddenly becomes quite accommodating. Historically, the Federal Reserve has purchased only a small fraction of the bonds issued by the U.S. Treasury; it didn't have to – there were always sufficient buyers willing to buy Treasury Bonds to keep the U.S. government operating. That is changing. The U.S. government will have to borrow more than a trillion dollars a year to fund its operations. At the same time the U.S. Government is borrowing at such unprecedented levels, the willingness of voluntary investors to purchase the bonds is collapsing. If the Federal Reserve were passive, within a year or so we would see U.S.
Treasury Bonds routinely going unsold at auctions. The Federal Reserve, inevitably, will begin purchasing those bonds to keep the U.S. government solvent.

In an inflationary economic regime, the formal economy is generally a sucker's game. To avoid being taken to the cleaners, consider doing the following:

- Start or purchase a business making something in relatively widespread demand that is not likely to have price controls slapped on it.
- Go into a profession where your skills will be in wide demand.
- Learn how to fix broken things.
- Cultivate a circle of associates with whom you can engage in gray-market or black-market business deals.
- Befriend a policeman or a judge! They can get you out of hot water should you get tagged for committing an economic 'crime'.
- Get to know local gold-dealers, particularly ones who buy and sell gold chains.
- Learn how to defend yourself; police take time to respond, and are prone to prosecuting economic crimes when confronted by evidence of contraband.
- Buy an old "Whip Inflation Now" button. It will be about as helpful as it was in the 1970s.

You don't have to be helpless as the Democratic and Republican Party Apparatchiks loot the economy. Prudent steps taken now can pay off in the coming dark years.
EDISTO BEACH, S.C. (WCIV) - People walking along the sands of Edisto Beach early Sunday morning were treated to a unique display of nature at work. More than 100 sea turtle hatchlings emerged from the sand and slowly scampered their way toward the waves. The rare daylight event was captured on video by Edisto Beach Turtle Patrol volunteer member Courtney Beeks. She believes this was the first nest in South Carolina to hatch this year. "It was nest No. 1 at Edisto, at least," Beeks added. The hatchlings began to emerge around 7 a.m. Beeks says her volunteer group typically sees a hatching around 9 p.m. "The nests typically do not hatch until around at least dusk, so, usually, it is very hard to get a good picture or video of the boil because it is dark outside," she added. Hatchlings typically follow light reflecting from the ocean and the slope of the beach to guide them toward the water, according to NOAA and the South Carolina Department of Natural Resources. So, why is this hatching referred to as a boil? Picture a pot on the stove. "When the tiny turtles are ready to hatch out, they do so virtually in unison, creating a scene in the sandy nest that is reminiscent of a pot of boiling water," according to the NOAA's National Ocean Service. A lucky crowd of early morning beach-goers gathered on each side of the tiny turtles Sunday to watch the rare daylight event. "The crowd absolutely loved it," Beeks said. Video shows the turtles reaching the water. The hatchlings were loggerheads. The SCDNR says loggerheads are the most common species found in South Carolina. "Once they reach the ocean, they swim continuously for about 36 hours to escape predators that may prey on them in coastal waters," according to SCDNR. "They swim offshore in search of large clumps of Sargassum seaweed where they are camouflaged while feeding on a variety of small invertebrates."
Remarkable large format map of Normandie, in old color, colored by regions. Ornate cartouche with coats of arms and a smaller second cartouche. Alexis-Hubert Jaillot (ca. 1632-1712) was one of the most important French cartographers of the seventeenth century. Jaillot traveled to Paris with his brother, Simon, in 1657, hoping to take advantage of Louis XIV's call to the artists and scientists of France to settle and work in Paris. Originally a sculptor, he married the daughter of Nicholas Berey, Jeanne Berey, in 1664, and went into partnership with Nicholas Sanson's sons. Beginning in 1669, he re-engraved and often enlarged many of Sanson's maps, filling the gap left by the destruction of the Blaeu printing establishment in 1672.
Here is your link to your student's Clever account, which gives them access to iReady, Benchmark ELA materials, Google Apps, and more: https://clever.com/in/wpusd. To log in to the Clever portal, students will need to know their Gmail account address and student ID number. Please check with your teacher to be sure your student has already taken the iReady diagnostic placement test in class. They will not be able to access iReady online lessons from home until the diagnostic is completed at school.

Whether you are looking for actual math support for your student at home or just to view the site, this link will take you there. Use [email protected] as the login and tbestigers1 as the password to sign in. Once on the site, click the icon for Programs and on the subsequent screen select the grade level you are interested in viewing. If you are just looking for some math games, type "games" into the search window once you have logged in. We have found that the Envisions site sometimes runs more smoothly on the Mozilla Firefox browser than Google Chrome.

The FOSS site supports our district science curriculum, grades 1-5.
Fishing for a living fossil Fonte Boa, Brazil By Bruno Kelly This was the second year I’ve had the chance to document the fishing of the world’s largest freshwater fish with scales, the arapaima, or pirarucu, as it’s known in the Brazilian Amazon. Last year I photographed a community that fished only at night for a few days to fill their quota, but this year it would be done in the day and the fishing would last a week. I traveled to the Mamiraua nature reserve, some 600 km (373 miles) west of Manaus along the Solimoes river, one of the two main tributaries of the Amazon. This reserve was created in 1996 with the aim of promoting sustainable use of natural resources for the development of the river communities. The trip began with a two-hour flight from Manaus to Tefe, and from there on a fast launch to the town of Fonte Boa, affectionately known as the Land of Pirarucu by its residents. The Mamiraua reserve is divided into nine sectors with some 200 communities. To put the reserve’s dimensions into perspective, it would take more than 24 hours to travel from one extreme of the reserve to another in one of the fast launches that are commonly used here. One of those launches became our home during this trip. Arapaima fishing season is open between the months of July and November, the dry season in the region when the rivers are at their lowest. This is important because it’s when the fish are trapped in lakes, making catching them much easier. To begin the season the first thing they do is count the number of arapaimas in each lake. This is done by the villagers themselves. After that, they receive authorization to catch approximately 30% of the total of adult fish. The arapaimas must be at least 1.5 meters (4.9 feet) long, with the smaller ones preserved for the future. The festive atmosphere of the communities is very apparent. 
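The management rule described above (count the fish in each lake, then authorize roughly 30% of the adults, where an adult is at least 1.5 meters long) is simple enough to sketch. This is purely illustrative; the census numbers below are invented for the example, and the real quota process is administered by the reserve, not by a script:

```python
# Illustrative quota calculation for managed arapaima fishing.
# Rule from the article: ~30% of adult fish (>= 1.5 m) may be caught.
MIN_ADULT_LENGTH_M = 1.5
QUOTA_FRACTION = 0.30

def allowed_catch(census_lengths_m):
    """Number of fish that may be taken from one lake's census,
    rounding the 30% quota down to stay conservative."""
    adults = [length for length in census_lengths_m
              if length >= MIN_ADULT_LENGTH_M]
    return int(len(adults) * QUOTA_FRACTION)

# A made-up census for one lake (lengths in meters):
lake_census = [0.9, 1.2, 1.5, 1.6, 1.8, 2.0, 2.1, 1.4, 2.3, 1.7, 1.9, 2.5]
print(allowed_catch(lake_census))  # 9 adults counted -> quota of 2 fish
```

Rounding down means the smaller and borderline fish are always left in the lake, which is the conservation intent of the rule.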
The arapaima is one of the main sources of income for these families, and virtually all of them, old and young, men and women, are involved in some way in the fishing season. Most of the lakes are many kilometers from the communities, so the operation needs to be well organized. Long and narrow canoes are used for fishing, which is done by surrounding the fish with nets to corral them into a contained area of the lake, so they can be harpooned. Due to their size and strength, the arapaima must be knocked unconscious before being pulled into the canoe. One movement of their powerful tails could easily flip the canoe and end the day’s fishing. For me to get close to the action I often had to climb into a canoe that was already overloaded with two fishermen, and do a balancing act so we didn’t end up in the water. It was all made even more difficult by the army of spiders and insects that attacked us. My biggest concern was with my equipment, as the nearly 100% humidity made it necessary to constantly wipe the lenses. And since the dry season was practically ending, the rain also kept me busy protecting my gear. The fishing operation lasts many days, so a complete logistical base is set up on the lake’s edge. The main food to be eaten is arapaima stew made with the fish they catch and Brazil’s staple cassava flour called farinha. That was my food too during the expedition. After catching the fish, the villagers carry out one of the most tiring jobs of all – lugging the arapaimas to their community to clean and freeze them. The path they take is along a floodplain that in a few weeks would be completely submerged. This is the moment when the physical strength of these villagers comes into play. Along one stretch of nearly two kilometers surrounded by the heat of the rainforest, the villagers carried, one by one, the hundreds of arapaimas they caught within the legal quotas. 
Each fish weighed an average of 60 kg (132 lbs), but they can reach as much as 140 kg (308 lbs).

As the fish arrived at the village the women took over and cleaned them. With their sharpened knives, they pulled out the guts and left them ready to be frozen. Each arapaima taken from the reserve receives a seal with the information on where it was caught and its length and weight.

After being frozen, part of the catch remains in the community and the rest is shipped to the state capital, and then distributed around the country and exported. The arapaimas are sold in two different forms – fresh frozen meat, and salted and dried. They call the salted arapaima the “cod of the Amazon.”

The job of managing arapaima is not one of just controlled fishing. Throughout the year the villagers organize themselves to watch over the lakes, because poaching is still a reality. This control is very important because it guarantees for future generations the opportunity to live in the forest from its sustainable resources. In the riverside schools, conservation and sustainability are taught to children from the very beginning, because the future of the Amazon will depend on us as well as them.
<urn:uuid:04739c84-3a60-4e3d-acc7-70da9f7cb3e1>
{ "date": "2015-03-03T09:12:33", "dump": "CC-MAIN-2015-11", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463165.18/warc/CC-MAIN-20150226074103-00000-ip-10-28-5-156.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9753560423851013, "score": 2.75, "token_count": 1107, "url": "http://blogs.reuters.com/photographers-blog/2013/12/13/fishing-for-a-living-fossil/" }
Sunspot cycle - on the downward slope (September 5, 2002)

The number of sunspots has begun slowly to recede again, as we move away from the solar maximum period of peak activity for this 11-year solar cycle. Scientists track solar cycles by counting sunspots, and this solar cycle reached its peak level in July 2000. Since then, the number of sunspots and general solar activity has generally declined, though it did attain a second peak around January 2002. Inevitably, the number of sunspots will follow a declining path until the numbers bottom out, sometime around 2006. The sprinkling of sunspots seen here is evidence that the action is not over yet!

The upper and lower dotted lines represent the higher and lower estimates for sunspot numbers for this solar cycle; the central white line between them represents the average of estimates, and the jagged, heavier white line represents the monthly sunspot numbers observed.

SOHO began its Weekly Pick some time after sending a weekly image or video clip to the American Museum of Natural History (Rose Center) in New York City. There, the SOHO Weekly Pick is displayed with some annotations on a large plasma display. If your institution would also like to receive the same Weekly Pick from us for display (usually in Photoshop or QuickTime format), please send your inquiry to [email protected].
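When a cycle's peak is dated, the raw monthly sunspot counts mentioned above are conventionally smoothed first. Below is a minimal sketch of the classical 13-month smoothed sunspot number (the standard smoothing used by sunspot-index services; this is an illustration, not code from the SOHO project):

```python
def smoothed_ssn(monthly, i):
    """Classical 13-month smoothed sunspot number centered on month i.

    The two end months of the window get half weight, so the weights
    sum to 12 (0.5 + 11 + 0.5).
    """
    window = monthly[i - 6:i + 7]  # 13 consecutive monthly means
    return (0.5 * window[0] + sum(window[1:12]) + 0.5 * window[12]) / 12.0

# A flat series smooths to itself:
flat = [100.0] * 13
# smoothed_ssn(flat, 6) -> 100.0
```

Because each smoothed value needs six months of data on either side, the "official" peak month of a cycle can only be declared about half a year after the fact.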
<urn:uuid:cc9c8311-3e15-4b0a-9f23-2c9bb8024a34>
{ "date": "2016-06-25T05:20:09", "dump": "CC-MAIN-2016-26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392099.27/warc/CC-MAIN-20160624154952-00046-ip-10-164-35-72.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9239575862884521, "score": 3.390625, "token_count": 293, "url": "http://sohowww.nascom.nasa.gov/pickoftheweek/old/05sep2002/" }
Atheism, in a broad sense, is the rejection of belief in the existence of deities. In a narrower sense, atheism is specifically the position that there are no deities. Most inclusively, atheism is simply the absence of belief that any deities exist. Atheism is contrasted with theism, which in its most general form is the belief that at least one deity exists.

The term atheism originated from the Greek ἄθεος (atheos), meaning "without gods", which was applied with a negative connotation to those thought to reject the gods worshipped by the larger society. With the spread of freethought, skeptical inquiry, and subsequent increase in criticism of religion, application of the term narrowed in scope. The first individuals to identify themselves as "atheist" appeared in the 18th century.

Today, about 2.3% of the world's population describes itself as atheist, while a further 11.9% is described as nonreligious. Between 64% and 65% of Japanese describe themselves as atheists, agnostics, or non-believers, and 48% in Russia. The percentage of such persons in European Union member states ranges as low as single digits in Italy and some other countries, and up to 85% in Sweden.

Atheists tend to lean towards skepticism regarding supernatural claims, citing a lack of empirical evidence. Common rationales for not believing in any deity include the problem of evil, the argument from inconsistent revelations, and the argument from nonbelief. Other arguments for atheism range from the philosophical to the social to the historical. Although some atheists tend toward secular philosophies such as humanism, rationalism, and naturalism, there is no one ideology or set of behaviors to which all atheists adhere.

In Western culture, atheists are frequently assumed to be exclusively irreligious or unspiritual. However, atheism also figures in certain religious and spiritual belief systems, such as some forms of Buddhism, that do not advocate belief in gods.
Religion (from Latin religio, "reverence for the gods", "piety", possibly related to religare, "to bind") is the belief in and worship of a god or gods, or more generally a set of beliefs explaining the existence of and giving meaning to the universe, usually involving devotional and ritual observances, and often containing a moral code governing the conduct of human affairs.

Aspects of religion include narrative, symbolism, beliefs, and practices that are supposed to give meaning to the practitioner's experiences of life. Whether the meaning centers on a deity or deities, or an ultimate truth, religion is commonly identified by the practitioner's prayer, ritual, meditation, music and art, among other things, and is often interwoven with society and politics. It may focus on specific supernatural, metaphysical, and moral claims about reality (the cosmos and human nature) which may yield a set of religious laws and ethics and a particular lifestyle. Religion also encompasses ancestral or cultural traditions, writings, history, and mythology, as well as personal faith and religious experience.

The development of religion has taken many forms in various cultures, with continental differences. The term "religion" refers both to the personal practices related to communal faith and to group rituals and communication stemming from shared conviction. "Religion" is sometimes used interchangeably with "faith" or "belief system", but it is more socially defined than personal convictions, and it entails specific behaviors.

Religion is often described as a communal system for the coherence of belief focusing on a system of thought, unseen being, person, or object, that is considered to be supernatural, sacred, divine, or of the highest truth. Moral codes, practices, values, institutions, tradition, rituals, and scriptures are often traditionally associated with the core belief, and these may have some overlap with concepts in secular philosophy.
Religion is also often described as a "way of life" or a life stance.
<urn:uuid:5cf6b1a1-b62e-4b8f-ad91-498601d3eed1>
{ "date": "2014-10-25T16:44:19", "dump": "CC-MAIN-2014-42", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119648706.40/warc/CC-MAIN-20141024030048-00227-ip-10-16-133-185.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9415234923362732, "score": 3.4375, "token_count": 869, "url": "http://www.atheistalliance.org/resources/12-knowledge-base" }
Welcome to FlowJo University! Before we begin the discussion on how to use FlowJo to analyze single cell data, let's first discuss where that data comes from.

Single cell data can be collected using flow cytometry: the measurement of cells in a flowing medium. The tool used to make these measurements is called a cytometer. Cytometers, with a few examples shown here, come in all shapes and sizes, but they share some underlying principles. Cytometers are generally comprised of fluidics, optics, and electronics systems.

The fluidics system accepts the cell sample (colored red dots in this illustration) and transports it to the interrogation point, while forcing the cells into a single-file stream through either hydrodynamic or acoustic focusing.

The optics system is then engaged. At the interrogation point, where the blue laser hits the red cells in the graphic, the cells pass through a laser beam or beams that excite fluorescent probes bound to selected cellular targets, emitting light of a different color for each unique probe. That light is filtered into individual colors and the quantity of emitted photons is measured using photomultiplier tubes, or PMTs. The quantitative aspect of this is important; cytometry measures relative amounts, allowing for measurement of the intensity of expression instead of simply yes/no or on/off.

Finally, the electronics system is used to convert excited photons to voltage measurements to a singular number representing intensity of expression. Let's take a closer look at how the numbers we work with are produced...

One topic glossed over so far is how multiple phenotypes are identified. Monoclonal antibodies tagged with the aforementioned fluorescent probes are used to bind specifically to the selected antigen that will reveal the phenotype under study. Cytometry allows for the measure of up to approximately 50 different cellular aspects depending on the system.
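How a digitized PMT voltage pulse becomes "a singular number" is usually a matter of summarizing the pulse's shape. The sketch below illustrates the general idea (it is not FlowJo's or any vendor's actual firmware) by computing the three classic pulse parameters: height, area, and width.

```python
def pulse_features(samples, dt=1.0):
    """Summarize a digitized PMT voltage pulse.

    samples: voltage readings taken every dt time units
    Returns (height, area, width): peak voltage, voltage integrated
    over time, and time spent above half the peak.
    """
    height = max(samples)
    area = sum(samples) * dt  # rectangle-rule integral of the pulse
    width = sum(1 for s in samples if s > height / 2) * dt
    return height, area, width

# One idealized pulse from a cell crossing the laser:
pulse = [0.0, 0.2, 0.8, 1.0, 0.7, 0.3, 0.0]
h, a, w = pulse_features(pulse)  # h = 1.0, a ≈ 3.0, w = 3.0
```

Real instruments record one or more of these parameters (often labeled -H, -A, and -W in the data) per detector per cell.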
The measurements of these multiple aspects on many cells are then stored as a flow cytometry standard file, abbreviated as FCS. FlowJo will accept FCS files from any cytometer.

That concludes the introduction to cytometry.
<urn:uuid:094914e0-4f4a-4083-aa5e-b7cc1ed5b140>
{ "date": "2019-11-13T20:25:02", "dump": "CC-MAIN-2019-47", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667333.2/warc/CC-MAIN-20191113191653-20191113215653-00416.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9041700959205627, "score": 3.28125, "token_count": 418, "url": "https://www.flowjo.com/learn/flowjo-university/flowjo/getting-started-with-flowjo/2" }
If you have read other posts on my blog, then you probably know what a bladder is and how it stores your urine until you take a pee. A big bladder means you can hold it in longer, and a small bladder means you had better be near a bathroom. In fact, many young children with bedwetting problems find themselves running to the bathroom at the last minute. This is usually a sign that the kid has a very small bladder, or maybe they're just too busy watching TV or playing games.

Have you ever wondered how big your bladder is? You might have thought that it's impossible to find out, but it isn't. The best way to measure your bladder is to keep a 2-day record of how much you pee each time you go to the bathroom. You may want to do it over the weekend so that you can take measurements at home for more privacy.

What you Need
- A 16 ounce plastic measuring cup or container
- An index card
- A pen or pencil

Measuring Your Bladder
- Keep all of your supplies in the bathroom that you usually use at home.
- When you feel the urge to pee, go to the bathroom and pee in the cup or container. This is easy for boys, but girls might want to try sitting on the toilet backwards.
- Record how much urine is in the cup (in ounces) on the index card.
- Don't forget to empty the container in the toilet and wash it out.
- To get the most accurate picture of your bladder size, you should have 8-10 measurements. If you don't have that many, you should extend the experiment for another day.

Kids sometimes forget to pee in the container, so you might want to leave the toilet seat down and place a reminder note on the seat.

How to Tell if your Bladder is Small

A child's normal bladder size (in ounces) equals the child's age + 2. Therefore, the average 6-year-old has a bladder size of about 8 ounces.
If the child is 10, his bladder should be able to hold 12 ounces of urine. Kids who are 12 years and older have a bladder that's about the same size as an adult's – around 12-16 ounces.

Age               | Normal Bladder Size | Small Bladder Size
6 years old       | 6-10 ounces         | 5 or less ounces
7 years old       | 7-11 ounces         | 6 or less ounces
8 years old       | 8-12 ounces         | 7 or less ounces
9 years old       | 9-13 ounces         | 8 or less ounces
10 years old      | 10-14 ounces        | 9 or less ounces
11 years old      | 11-15 ounces        | 10 or less ounces
12 years or older | 12-16 ounces        | 11 or less ounces

A few reasons why it's worth knowing the size of your child's bladder:
- You will want to know if your young one's bladder is small, because there are things that you can do to help. Kids with a small bladder should drink more water during the day.
- You can make sure your child always has quick access to a bathroom.
- Children with a small bladder are less likely to respond to the medication Desmopressin.
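The age rule and the table above can be collapsed into a tiny helper function (a sketch of this article's rule of thumb only, not a clinical tool; the function name is mine):

```python
def bladder_guide(age_years):
    """Return ((normal_low, normal_high), small_if_at_most) in ounces.

    Follows the table: capacity grows roughly one ounce per year of age
    and plateaus at the adult range (12-16 ounces) from age 12 on.
    """
    a = min(age_years, 12)
    normal_range = (a, a + 4)       # e.g. a 6-year-old: 6-10 ounces
    small_if_at_most = a - 1        # e.g. a 6-year-old: 5 ounces or less
    return normal_range, small_if_at_most

# bladder_guide(6)  -> ((6, 10), 5)
# bladder_guide(14) -> ((12, 16), 11)
```

Comparing the largest single measurement from the two-day record against these numbers tells you which column of the table your child falls in.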
<urn:uuid:41937726-8833-4258-b482-4d6fda60d6ce>
{ "date": "2015-03-27T17:12:48", "dump": "CC-MAIN-2015-14", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131296587.89/warc/CC-MAIN-20150323172136-00250-ip-10-168-14-71.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9238471388816833, "score": 2.6875, "token_count": 702, "url": "http://mybedwettingsolutions.com/" }
Lightly trailing a feather or a finger over the instep of someone's foot usually causes several predictable reactions: The person laughs, giggles or becomes irritated, instinctively draws his foot out of reach and does his utmost to avoid being tickled a second time. Dr. Michael Nirenberg of the America's Podiatrist website says having ticklish feet is a good thing for a variety of reasons.

Researchers who have conducted experiments on ticklishness still don't have all the answers as to why humans and other animals, such as cats, rats and monkeys, are ticklish or what exactly goes on between nerve endings and the brain during tickling. Because nerves on the foot's sole have both touch and pain receptors that carry information about either sensation along neural pathways to the brain, it's difficult to separate the two when talking about ticklishness, says the American Scientist website. The pain and pleasure of having your feet tickled is linked to these pain and touch nerve tracts. Add to that the anticipation of the tickle, and the whole business invokes a "tonic top-down regulation of neural activity," as reported in MIT's "Journal of Cognitive Neuroscience"—which means the brain is primed and ready to react before the feather touches skin. It also explains why a person can't tickle himself: The surprise factor is absent because the tickler is in control of tickling of his own body part and knows it.

Knismesis vs. Gargalesis

The term knismesis refers to a feather-light touch on the skin's surface that provokes irritation rather than pleasure or laughter. Gargalesis, on the other hand, describes the more enjoyable experience of tickling a foot or other body part in a playful, non-threatening manner that results in genuine laughter, according to a 2004 article in the "Journal of the American Academy of Dermatology." Social behavior might have a lot to do with gargalesis.
People who try to tickle themselves might manage to produce knismesis, but not gargalesis.

Indicator of Health

Ticklish feet are usually a good indication of health, says Nirenberg. Non-ticklish feet can indicate problems with nerve receptors caused by illnesses such as diabetes mellitus, arthritis, certain vitamin deficiencies or thyroid problems. According to Nirenberg, patients who either have ticklish feet or feel foot pain should feel fortunate—losing ticklish sensation on the feet can indicate neuropathy, a disease in which the nerves deteriorate.

Ticklish feet also might be a primal reaction in both humans and animals to rid themselves of insects or reptiles crawling on feet or other vulnerable body parts, states "The Boston Globe." The feel of a blade of grass or a creeping centipede elicits the same swift response on ticklish feet.

Darwin's and Other Theories

Scientists and psychologists at one time thought ticklishness was a reflex, but now view it as social-bonding behavior that can be learned at an early age between a parent and child. Nirenberg of the America's Podiatrist website states that "tickling helps establish trust between a child and mother." The mother-child tickling scenario was part of Darwin's theory about tickling. He posited that a child expecting to be tickled laughed, but a child who wasn't expecting to be tickled initially showed displeasure. Tickling a complete stranger's foot on a subway train would most likely not draw the same reaction as tickling the feet of a child, lover, friend or sibling.
<urn:uuid:45b6c745-9158-4b99-ae49-3a41fbffeb33>
{ "date": "2018-08-21T23:24:52", "dump": "CC-MAIN-2018-34", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219197.89/warc/CC-MAIN-20180821230258-20180822010258-00136.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9476365447044373, "score": 2.734375, "token_count": 728, "url": "https://www.livestrong.com/article/198315-why-feet-are-ticklish/" }
Nov 23, 2015

Soil is the most important resource. Wheat, rice and millets, pulses, oilseeds, beverages, vegetables and fruits are all obtained from soil. Other food items such as poultry, meat and milk are animal products. Besides food, timber, fibers, rubber, herbs and medicinal plants are also obtained from the soil.
<urn:uuid:ba0ac3eb-9bde-42e8-b6a0-1b85cee3afb3>
{ "date": "2018-01-24T09:23:18", "dump": "CC-MAIN-2018-05", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084893629.85/warc/CC-MAIN-20180124090112-20180124110112-00656.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9687603712081909, "score": 2.828125, "token_count": 72, "url": "https://m.jagranjosh.com/general-knowledge-climate-soil-vegetation-1446548155-1" }
Flip Flop Spanish was written by a teacher based upon her work teaching group classes including children of varied ages—the ordinary homeschool situation. The idea was to create a program usable for children as young as three that could be taught by a parent or teacher with no knowledge of Spanish.

Flip Flop Spanish has a number of resources that you might use, and you probably won’t use all of them. Instead, select those most suitable for your children’s learning styles. The workbooks with CDs and the See It and Say It all incorporate activities to suit the three learning modalities—listening activities, visual activities, and hands-on activities, but you can spend more or less time on the different types of activities. Descriptions of the various options and how they might work together should help you decide.

Workbooks with audio CDs

There are three workbooks in the series at this point. Two are labeled as appropriate for ages 3 to 5 and one is labeled for ages 6 to 9. While the Level 1 and Level 2 books for ages 3 to 5 both direct children to write Spanish words, these lessons are actually meant to be done as narration/dictation with the parent doing the writing rather than the child. There are some exercises with numbers and colors that might be too difficult for some young children. If I had not noticed a suggested age range, I would have suggested a target audience of about ages 6 to 10. Older children should be able to do their own writing in the books for ages 3 to 5. Younger children certainly can be taught from these books as long as parents do the writing and select activities appropriate for each child's age and ability. The book labeled as appropriate for ages 6 to 9—again titled Level 1, but for that age group—should be suitable for about ages 7 through 11 in my opinion.
The spiral-bound workbooks have an unusual format with lessons printed only on the right hand pages; after students reach the end of the book, they flip it over and continue with the other half of the lessons printed only on the right hand pages in reverse—thus the name “Flip Flop.” This means children are always working on only a single page at a time.

Students read and write in the text if they are able. They often have opportunity to color in illustrations and, sometimes, to label them. There are matching, fill-in-the-blanks, circling, a crossword puzzle (in the older level book), and drawing activities. The books are printed in black-and-white although the paper is actually cream colored. The paper is a little thin which might pose problems when students color or draw on one side then need to write on the reverse side after they flip their book. (Consider the digital download option from CurrClick to avoid this problem.)

The workbooks also direct interactions such as conversations, guessing games, body movements, games, and songs. Each book has a glossary and a relatively small number of flash cards (to be cut out) at the back. (There are only twelve flash cards in the first book. These are printed on the same paper as the rest of the book, so they are not very useful. See my description of the more useful sets of flashcards below.) Each book comes with an audio CD with listening and speaking practice for the new words, phrases, and sentences. The CDs also direct students to perform some interactive conversations and to use gestures and expressions as they speak.

The lower level books and CDs introduce vocabulary and usage but very little grammar. Older students are taught helpful grammatical concepts as needed, but this is essentially a conversational rather than a grammar-based approach.

Books are relatively brief. Each lesson should take a week to complete, and there are three or four activities within each lesson.
A daily lesson in the younger level books should take from five to fifteen minutes while those in the upper level book might take ten to twenty minutes each. The lower level books have fourteen lessons each while the upper level has sixteen.

See It and Say It

Flip Flop Spanish See It and Say It is a full curriculum that can be used with all ages. It has a teacher manual with detailed plans for a two-year course with lessons three times a week. However, older students should be able to complete two lessons in a sitting and can finish the course in one year. The teacher manual comes as loose-leaf pages for you to put into your own binder. Following instructions in the manual you use four audio CDs and three sets of flash cards as the primary teaching tools. (Flashcards included in the See It and Say It set are also available separately for those who want to use them along with the workbook/CD sets.)

Flashcards have full-color images on one side. On the reverse are the word in Spanish (including the appropriate article for nouns, e.g., el tenedor), the phonetic pronunciation, and the English meaning.

Parents and children should work together through the lessons, listening to the CD, identifying the appropriate cards, saying the words, and then saying sentences as they put cards together. After listening through lessons, the parent presents the same lesson from the teacher manual without the CD. Younger children will work only with the picture sides of the cards at first. Older students might work with both picture and word sides of the cards. You will need to pause the CD frequently as students puzzle out how to construct sentences and practice saying them. On the other hand, “speed rounds” on the CDs challenge students to quickly identify cards.

The instructor, Señora Gose, gradually introduces extra Spanish vocabulary such as ¿Donde está? and the words aquí and cuidado as instructional words and phrases even though these are not on the flashcards.
Also, you will need to obtain your own cardstock to create some additional flashcards without illustrations for words such as y, pero, es, me, los, and algo; cards with hand drawn pictures of things such as an equal sign, a boy, and a girl; and color cards. There are quite a few of these cards to create, so I recommend cutting cards from cardstock to the same size as the pre-printed cards so that they align easily with each other for use and for storage. While I would prefer to have these cards provided with the set rather than having to create them, if students help create them, this can reinforce learning.

After the first few weeks, students will begin to identify their cards as nouns, verbs, and adjectives by drawing borders on their cards with different colored permanent markers. Students frequently construct their own sentences and responses, which challenges them to apply what they are learning. Students begin combining two cards in the first lesson and five cards in the second to form Spanish sentences. They start with a limited number of words, but they use those words over and over in various combinations. Sentences can expand to become quite lengthy as students learn more and more vocabulary words.

Older students (ages eight and up) should write to reinforce their learning. They will write words and definitions as well as sentences, but they do this in their own notebook (called a “personal learning dictionary”). It is up to parents to determine how much writing they need to do. Crossword puzzles also reinforce vocabulary and writing for older students.

Included in the See It and Say It set are a wipe-off paddle (that looks like a ping pong paddle) and a wipe-off marker that are used for reinforcement games. Some other learning activities are presented as games, generally using the flashcards. Some activities work better if you have two or more students at similar skill levels competing with each other, but most can be done with a parent and child.
Grammatical elements are gradually introduced, so while this is a conversational-style program, some grammar is taught within the lessons and it includes grammar notes for parents that they might present to students if appropriate. The concept of verb conjugations is introduced toward the end of the course, teaching the conjugation of ir (to go): voy, vas, va, vamos, vais, and van. But the instructions tell parents to emphasize only voy and vamos (I’m going and we’re going) rather than expect children to master all of the forms. While the course is great as an introduction to Spanish for the elementary grades and even for older students, it is not comprehensive enough for a high school credit course.

As I mentioned previously, the flashcards are sold separately as well as within the See It and Say It set. I highly recommend that you get the flashcards to use along with the workbooks and CDs.

The Spanish Fun Activity Calendar has space for writing the current year’s dates for each month. Each day on the calendar features a word or phrase, its pronunciation, and its translation; this is a great way to practice each day even through the summer and holidays.

Bella: A Bilingual Reader presents the story of a hermit crab named Bella in English. Flip the book over and upside down to read the same story in Spanish. The level of the Spanish vocabulary is rather challenging so you might find the Spanish version difficult to read with the limited teaching in these courses.

Flip Flop Spanish Vocabulary Builder: Movie Magic (which I have not reviewed) sounds like a great way to practice Spanish with movies. Señora Gose shows you how to run popular children’s movies such as “Dumbo” in Spanish in a way that helps children listen for vocabulary that they know. Familiar movies make it much easier for children to follow along.

The way the courses are designed makes them very practical for group or family learning, but they will also work for a parent teaching just one child.
The mix of activities offered by Flip Flop Spanish makes it a great choice for children up through the elementary grades. You can select the resources that best suit the needs of your learners. If you want more reading and writing, you might choose the workbooks with CDs. Otherwise, See It and Say It might be better. Even then, you can incorporate writing activity for older students.

Many of the products are also available as digital downloads through CurrClick, so check out the options by clicking here if this interests you.

Note: Catherine Levison, author of A Charlotte Mason Education, recommends Flip Flop Spanish for those seeking a Charlotte Mason approach to foreign language.
<urn:uuid:2a10149d-4e40-4c4d-a25e-d96c5ff0b504>
{ "date": "2016-09-26T22:27:06", "dump": "CC-MAIN-2016-40", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00037-ip-10-143-35-109.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9638243913650513, "score": 4.03125, "token_count": 2173, "url": "http://cathyduffyreviews.com/homeschool-reviews-core-curricula/foreign-language/spanish/flip-flop-spanish" }
Researchers poke holes in super duper SSL

Spoofing the unspoofable

Websites that use an enhanced form of digital authentication remain just as vulnerable to a common form of spoofing attack as those that use less costly certificates, two researchers have found.

Previously, so-called extended validation secure sockets layer certificates (or EV SSL) were believed to be immune to man-in-the-middle attacks, in which an interloper on a hotel network or Wi-Fi hotspot sits between an end user and the site she is visiting. When researchers demonstrated one such attack in December, SSL issuers proudly proclaimed that the more expensive EV certs were impervious to the technique.

Independent researchers Alexander Sotirov and Mike Zusman have now proven that assumption wrong. Because of design flaws in most browsers, it's possible to perform a MITM attack and still cause the browser's address bar to display a green bar indicating the site is protected by EV SSL. The researchers presented their findings last week at the CanSecWest security conference in Vancouver.

The attack method still requires that an attacker obtain a fraudulent SSL certificate, but that's certainly not outside the realm of possibility. Zusman and others have already shown it's possible to obtain a no-questions-asked SSL certificate for Mozilla.com. Zusman was also successful in obtaining an SSL certificate for Microsoft's login.live.com domain.

Sotirov said he still believes EV certs are valuable because they require additional vetting of the applicant's identity. That's a considerable improvement over the current SSL infrastructure, in which some 135 certificate authorities exercise varying procedures for deciding whether to issue someone an SSL credential. But until browser makers figure out how to distinguish between the two types of certificates, there can be no guarantees that the little green button in your address bar really means no one has tampered with the site you're visiting.
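The reason a browser can tell EV from ordinary SSL at all is a certificate-policies extension inside the certificate: EV certs carry an EV policy OID (the CA/Browser Forum later standardized 2.23.140.1.1, though at the time each CA used its own). The sketch below, using Python's third-party cryptography library (a modern illustration, not the researchers' tooling), builds a self-signed certificate carrying a policy OID and reads it back, which is the same field a browser inspects before painting the bar green:

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

EV_POLICY = x509.ObjectIdentifier("2.23.140.1.1")  # CA/B Forum EV OID

def make_cert_with_policy(policy_oid):
    """Build a self-signed cert carrying a certificatePolicies extension."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"example.test")])
    now = datetime.datetime.now(datetime.timezone.utc)
    return (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=1))
        .add_extension(
            x509.CertificatePolicies(
                [x509.PolicyInformation(policy_oid, policy_qualifiers=None)]
            ),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )

cert = make_cert_with_policy(EV_POLICY)
ext = cert.extensions.get_extension_for_class(x509.CertificatePolicies)
oids = [p.policy_identifier.dotted_string for p in ext.value]
# oids == ["2.23.140.1.1"]
```

The attack described in the article works precisely because a page can mix resources served under EV and non-EV certificates: checking the OID on the top-level certificate says nothing about the rest of the page's content.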
And making that happen without breaking website compatibility won't be easy. "For a lot of sites, it's hard not to have mixed content," Sotirov said. "Google Analytics and other third party content providers might not have extended validation. Suddenly, all those sites break." ®
<urn:uuid:904254c5-50bc-4919-8960-5673234196d6>
{ "date": "2017-04-23T13:57:19", "dump": "CC-MAIN-2017-17", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118707.23/warc/CC-MAIN-20170423031158-00468-ip-10-145-167-34.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.936339259147644, "score": 2.5625, "token_count": 447, "url": "https://www.theregister.co.uk/2009/03/28/ev_ssl_spoofing/" }
Transdisciplinary Knowledge for Holistic Innovation
By: David Elliott and Rob Barker
Posted: June 1, 2012, from the June 2012 issue of GCI Magazine.

First, perhaps it is useful to elaborate a little further on the way the brain seems to work. It has been established that the left side of the brain is the logical side, working to solve problems in a straightforward, rational way. Jonah Lehrer argues that the right side flashes into life only if you are stumped, acting like an organic search engine to reassemble previously unconnected thoughts, memories and events.5 Logic flies out of the window, and essentially the solution mysteriously appears to hand. Although Lehrer does not specifically make this point, it is likely that those who accept that this can happen are more likely to be comfortable with ideas that are generated in this way. There may be no logical reason for the idea, but it could be right; however, scientists may find this difficult.6

In Imagine, Lehrer goes on to map conditions that can create a favorable environment for companies and societies to be innovative. According to Lehrer, employee freedom and mingling appear to be key—as evidenced by the environments at 3M and Pixar—with all divisions, departments and disciplines adding ideas. This supports the hypothesis that ideas appear to come out of the blue, and that when they occur, these epiphany moments arrive at a conclusion that is crystal clear.

Research on compound remote associate problems from scientists such as Mark Jung-Beeman of Northwestern University has shown that parts of the brain's cortex enable people to make sense of metaphors.5 Basically, the left brain sees the trees while the right brain sees the wood. To promote this in companies, it is important to create relaxed environments that foster creativity. Blue rooms seem to foster innovation—relaxed associations.
People need to have control over their own focus, and companies must trust their employees to pursue worthwhile opportunities and projects. In fact, Lehrer hypothesizes that productive moments may come only after you stop thinking about them.5 Thus more is not necessarily better. Too much stress reduces creativity, while too little may block it. Companies also must be aware of the dangers of burnout, particularly in non-routine tasks. Narrow input will produce narrow output, but exposure to unfamiliar perspectives may favor creativity.7 Creativity seems to be stimulated by mood swings and "getting out of your head." Thus, reducing inhibitions appears to prompt creativity.
<urn:uuid:694e5631-bcf9-4b93-9f0d-5efa6a2e3b1b>
{ "date": "2014-10-25T14:34:57", "dump": "CC-MAIN-2014-42", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119648297.22/warc/CC-MAIN-20141024030048-00196-ip-10-16-133-185.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9526656270027161, "score": 2.921875, "token_count": 519, "url": "http://www.gcimagazine.com/business/management/innovation/156315385.html?page=3" }
A diverse collection of images kindly released for teaching and learning purposes by the compiler of the image sets, Carlo Giovanella, an instructor in UBC's Department of Earth and Ocean Sciences. This virtual field trip will guide you through the palaeogeography, palaeoecology, and sedimentary facies of the exhumed Permian reef complex, Guadalupe Mountains National Park, west Texas, U.S.A. The CD-ROM comprises 100 high-resolution photographs of structural features ranging from microscopic to aerial-photograph scale. This web site provides a preview of the set (at a significantly lower resolution). It is intended for teaching use. This is one of the educational resources made available by the Lithoprobe project. More information for educators (posters, brochures, background, links, references) is available at their Educational Resources website, or see the Lithoprobe homepage. Three short slide shows illustrating some aspects of geophysical surveying for engineering/environmental applications, petroleum exploration, and mineral exploration. 1. Oil/gas well drilling and logging. 2. Resistivity/IP field surveying. 3. Land and marine magnetics surveying. Images and descriptions of rocks typical of clastic depositional environments. Images and captions illustrating some features of landslides.
<urn:uuid:d931ec37-71cf-4227-846d-f0af9a8bec64>
{ "date": "2014-04-24T01:44:47", "dump": "CC-MAIN-2014-15", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00163-ip-10-147-4-33.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8762953281402588, "score": 2.53125, "token_count": 271, "url": "http://www.eos.ubc.ca/resources/slidesets/" }
Albert Einstein’s revolutionary general theory of relativity describes gravity as a curvature in the fabric of spacetime. Mathematicians at the University of California, Davis have come up with a new way to crinkle that fabric while pondering shockwaves. “We show that spacetime cannot be locally flat at a point where two shockwaves collide,” says Blake Temple, professor of mathematics at UC Davis. “This is a new kind of singularity in general relativity.” Temple and his collaborators study the mathematics of how shockwaves in a perfect fluid affect the curvature of spacetime. Their new models prove that singularities appear at the points where shock waves collide. Vogler’s mathematical models simulated two shockwaves colliding. Reintjes followed up with an analysis of the equations that describe what happens when the shockwaves cross. He dubbed the singularity created a “regularity singularity.” “What is surprising,” Temple told Universe Today, “is that something as mundane as the interaction of waves could cause something as extreme as a spacetime singularity — albeit a very mild new kind of singularity. Also surprising is that they form in the most fundamental equations of Einstein’s theory of general relativity, the equations for a perfect fluid.” The results are reported in two papers by Temple with graduate students Moritz Reintjes and Zeke Vogler in the journal Proceedings of the Royal Society A. Einstein revolutionized modern physics with his general theory of relativity published in 1916. The theory, in short, describes space as a four-dimensional fabric that can be warped by energy and the flow of energy. Gravity shows itself as a curvature of this fabric. “The theory begins with the assumption that spacetime (a 4-dimensional surface, not 2-dimensional like a sphere) is also ‘locally flat,’” Temple explains.
“Reintjes’ theorem proves that at the point of shockwave interaction, it [spacetime] is too “crinkled” to be locally flat.” We commonly think of a black hole as being a singularity, which it is. But this is only part of the explanation. Inside a black hole, the curvature of spacetime becomes so steep and extreme that no energy, not even light, can escape. Temple says that a singularity can be more subtle, where just a patch of spacetime cannot be made to look locally flat in any coordinate system. “Locally flat” refers to space that appears to be flat from a certain perspective. Our view of the Earth from the surface is a good example. Earth looks flat to a sailor in the middle of the ocean. It’s only when we move far from the surface that the curvature of the Earth becomes apparent. Einstein’s theory of general relativity begins with the assumption that spacetime is also locally flat. Shockwaves create an abrupt change, or discontinuity, in the pressure and density of a fluid. This creates a jump in the curvature of spacetime but not enough to create the “crinkling” seen in the team’s models, Temple says. The coolest part of the finding for Temple is that everything, his earlier work on shockwaves during the Big Bang and the combination of Vogler’s and Reintjes’ work, fits together. “There is so much serendipity,” says Temple. “This is really the coolest part to me. I like that it is so subtle. And I like that the mathematical field of shockwave theory, created to address problems that had nothing to do with General Relativity, has led us to the discovery of a new kind of spacetime singularity. I think this is a very rare thing, and I’d call it a once in a generation discovery.” While the model looks good on paper, Temple and his team wonder how the steep gradients in spacetime at a “regularity singularity” could cause larger than expected effects in the real world.
General relativity predicts gravity waves might be produced by the collision of massive objects, such as black holes. “We wonder whether an exploding stellar shock wave hitting an imploding shock at the leading edge of a collapse might stimulate stronger than expected gravity waves,” Temple says. “This cannot happen in spherical symmetry, which our theorem assumes, but in principle it could happen if the symmetry were slightly broken.” Image caption: Artist rendition of the unfurling of spacetime at the beginning of the Big Bang. John Williams/TerraZoom
<urn:uuid:00e46a58-90a3-41ae-831a-1c2a4577a5ea>
{ "date": "2019-09-21T00:25:16", "dump": "CC-MAIN-2019-39", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574159.19/warc/CC-MAIN-20190921001810-20190921023810-00536.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9430509805679321, "score": 3.671875, "token_count": 966, "url": "https://www.universetoday.com/tag/zeke-vogler/" }
The port and docks of the city of Minas Tirith, built on a bend in the River Anduin, and used by river traffic from the southern regions of Gondor. It lay some four miles to the southeast of the city, protected by the defences of the great wall known as the Rammas Echor.1 The docks were so placed that there was a clear view of the southern river for many leagues, allowing defenders to see approaching ships long before they arrived at port.

During the War of the Ring, it was at the Harlond that Aragorn landed with his fleet of captured Corsair vessels, reinforcing the beleaguered defenders of Minas Tirith. After the victory in the Battle of the Pelennor, the Harlond continued to be busy, as more ships from the southern lands carried new troops to Minas Tirith.

1 Exactly how the docks related to the wall is not entirely clear. The text of The Lord of the Rings (The Return of the King V 1) tells us that the wall ran along the edge of the river at this point, and that the Harlond was 'beneath' it. In his unpublished index, Tolkien also states that the Harlond was 'within' the wall. Of course the docks and quays on the river must themselves have been outside the wall, so presumably some kind of gateway led from the landings into the port itself, though the exact arrangement is nowhere described.

For acknowledgements and references, see the Disclaimer & Bibliography page. Website services kindly sponsored by Axiom Software Ltd. Original content © copyright Mark Fisher 1997-2000, 2010. All rights reserved. For conditions of reuse, see the Site FAQ.
<urn:uuid:b232e2cb-7bf1-42d2-878f-72ed5b1bbe99>
{ "date": "2014-04-23T21:01:59", "dump": "CC-MAIN-2014-15", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00491-ip-10-147-4-33.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9650787711143494, "score": 2.671875, "token_count": 353, "url": "http://www.glyphweb.com/arda/h/harlond.html" }
Mallorca Arts and Crafts: Llatra, the Art of Palms

When Easter comes, Spain fills with stalls selling handmade palms in beautiful shapes, later to be blessed on Palm Sunday. Here in Mallorca this craft covers not only the products typical of this time of year but extends to the production of basketry in all its variants. Although the tradition is more deeply rooted in Elche, the craft has also long been practised in Mallorca: in 1955 the island had approximately 8,500 palm artisans (here the craft is called llatra). Today only one artisan remains on the whole island. He is called Guillem Caselles. Our last craftsman of the palm can be found at popular fairs and markets all over the island.

It is known that this craft was practised in Mallorca as early as the Neolithic period, and it was later an activity carried out by women, children and the elderly to bring extra income into the family economy. It blossomed most in the north of Mallorca, specifically in Artà, Sant Llorenç and Capdepera, where classes are still taught today to learn this craftwork. In fact, in 1899 a cooperative was founded in Capdepera to commercialize these products, alongside another cooperative founded in Artà.

The process begins with the collection of the leaves of the palmito, a variety of small palm tree that can be seen in many corners of the island. The harvest is concentrated in the first fifteen days of July, when the palm leaves are at their best. In the late nineteenth and early twentieth centuries, men bandaged their hands to pick the leaves, until the first tools specific to this activity arrived. Despite all this, it is still today a practice done entirely by hand. Once harvested, the leaves are dried in the sun and then treated with sulfur by a controlled emanation of gases or vapors.
That is, the sulfur is burned to emit smoke, and the palm leaves (in Mallorca also usually called garballó) are exposed to this sulfur smoke. The process increases the whiteness of the palm leaves, as well as their flexibility and softness. Afterwards, for distribution, the leaves are classified according to their various uses, for example for baskets, bags, Easter palms, and so on. Some small leaves can be tinted in different colors, to be used later to make ornaments.

As a curiosity, the sulfur treatment was also valued for medical reasons. Osteoarthritis was, and is, a very common disease on our island, obviously because of our geographical location and climate: the island's high humidity causes rheumatic diseases. Majorcan women knew that touching, or keeping the skin in contact with, products which had been treated with sulfur helped prevent future arthritic or rheumatic problems. It was therefore common for Majorcan women to own several bags and baskets made of palm leaves: not only a small purse for going to church or a larger one for everyday use, but also specific bags and baskets for buying potatoes or doing the daily shopping, and even a small, narrow, flat bag for carrying the doctor's prescriptions.

Discover with Mallorca Private Tour Guides one of the ancient crafts made on the island, the llatra or palm-leaf craft. Let us guide you through the ancient traditions and crafts of the island; in the hands of our licensed tour guides you will discover unknown traditions and many ancient crafts that have been made in Mallorca for many years and are part of our cultural heritage. Visit Mallorca with Mallorca Private Tour Guides and be surprised by the many crafts and traditional trades still remaining on the island.

Mallorca Private Tour Guides Team
<urn:uuid:b2ad66fe-9f62-40fe-9bb1-accdf3f29f66>
{ "date": "2017-10-23T02:12:11", "dump": "CC-MAIN-2017-43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825510.59/warc/CC-MAIN-20171023020721-20171023040721-00056.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9655275940895081, "score": 2.609375, "token_count": 816, "url": "https://mallorcaprivatetourguides.wordpress.com/2013/04/02/mallorca_art-and-crafts-llatra-art-of-palms/" }
Cougars, bobcats making a comeback... But what about bats?

By B. Adam Burke
North Liberty Leader

NORTH LIBERTY – White nose syndrome is the scourge of bat colonies worldwide. The disease was first tracked out of a cave in Europe, where larger bat species have been able to resist its effects and survive winters. In Iowa and the U.S., micro-bats swarm the evening air, vacuuming up and devouring their weight in insects each night. But with a lower body weight, micro-bats have a harder time fighting off the fungal infections from white nose syndrome.

The first known case of white nose syndrome in Iowa was spotted in Maquoketa Caves in June, according to Pella Wildlife Company's Ron DeArmond, a conservator who presented a live brown bat and a six-week-old bobcat during his talk at the Sugar Bottom Campground Amphitheatre on June 30. Iowa is home to eight bat species, including little and big brown bats, eastern and Indiana bats. DeArmond brought out one little brown bat that stayed glued to his hand as he walked through the crowd of about 50 people. Micro-bats in Iowa are mouse-sized and can live 15 to 20 years.

DeArmond encouraged the audience to build bat houses – small wooden structures placed 15 feet off the ground – and to take inventory of bats around them and report the numbers to Pella Wildlife Company or the Iowa Department of Natural Resources. Bats save an estimated $50 billion in pest damage across the U.S., and Iowa bats save up to $2 billion in insect damage to crops. But white nose syndrome could eventually wipe out North American bats, DeArmond said. In a worst-case scenario, he said, many species of micro-bats might be extinct in 10 to 15 years. In the best case, bats might take 100 years to recover to previous numbers.

He spent the majority of his time discussing the effects of white nose syndrome on Iowa bats, but he also showed off the star attraction of the night: a six-week-old bobcat.
DeArmond started Pella Wildlife Company (PWC) three years ago and added staff member Dr. Kristy Burns, a wildlife anthropologist, two years ago. Burns handled the bob-kitten as DeArmond talked about the return of bigger cats like the bobcat and cougar to Iowa and the Midwest. Bobcats mostly hunt rodents and rabbits, but will also find larger prey like stray pets. Burns said she thought about a dozen cougars live in Iowa currently. No human has ever been attacked by a bobcat, DeArmond noted.

At the end of the talk, Burns put the tiny bat away and brought out the bobcat again to the delight of the crowd. Attendees were allowed to pet the bobcat's back as Burns held it. DeArmond will visit northeast Iowa this summer as well, where a black bear was recently spotted near the Minnesota-Iowa border.

PWC operates a wildlife education center in Merle Hay Mall in Des Moines. The group moved there from Pella to reach a wider audience. PWC offers schools, clubs, Scout troops and other groups a first-hand look at their wildlife ambassadors, including cougars, bobcats, lynx, wolves, foxes, skunks and porcupines. More about PWC can be found at its website: www.pellawildlifecompany.org.

The interpretative campground program presented by the PWC was part of an ongoing education series offered this summer by the U.S. Army Corps of Engineers at Coralville Lake, with programs free and open to the public, whether camping or just visiting. Most sessions were presented at the Sugar Bottom Campground Amphitheatre, located north of North Liberty on Mehaffey Bridge Road on the north side of the Coralville Lake bridge. The summer series has concluded for this year, but tours of the Coralville Dam can be arranged on Saturdays through Labor Day. On Saturdays, Sundays and holidays, the visitor center can be reached at 319-338-3543, ext. 6300, from 10 a.m. until 5 p.m.
<urn:uuid:d9033ddb-ee57-4ed7-a723-7e3acd5c2df7>
{ "date": "2014-09-02T01:58:20", "dump": "CC-MAIN-2014-35", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535921318.10/warc/CC-MAIN-20140909055359-00495-ip-10-180-136-8.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9569442272186279, "score": 3.015625, "token_count": 884, "url": "http://www.northlibertyleader.com/content/cougars-bobcats-making-comeback-what-about-bats-0" }
A stored procedure is a subroutine available to applications that access a relational database system. A stored procedure (sometimes called a proc, sproc, StoPro, StoredProc, sp or SP) is actually stored in the database data dictionary. Typical uses for stored procedures include data validation (integrated into the database) or access control mechanisms. Furthermore, stored procedures can consolidate and centralize logic that was originally implemented in applications. Extensive or complex processing that requires execution of several SQL statements is moved into stored procedures, and all applications call the procedures. One can use nested stored procedures by executing one stored procedure from within another. Stored procedures are similar to user-defined functions (UDFs). The major difference is that UDFs can be used like any other expression within SQL statements, whereas stored procedures must be invoked using the CALL statement.

Stored procedures may return result sets, i.e. the results of a SELECT statement. Such result sets can be processed using cursors, by other stored procedures, by associating a result set locator, or by applications. Stored procedures may also contain declared variables for processing data and cursors that allow them to loop through multiple rows in a table. Stored procedure flow control statements typically include IF, WHILE, LOOP, REPEAT, and CASE statements. Stored procedures can receive variables, return results or modify variables and return them, depending on how and where the variable is declared.

The exact and correct implementation of stored procedures varies from one database system to another. Most major database vendors support them in some form. Depending on the database system, stored procedures can be implemented in a variety of programming languages, for example SQL, Java, C, or C++. Stored procedures written in non-SQL programming languages may or may not execute SQL statements themselves.
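The features described above (parameters, flow control, and invocation via CALL rather than inside an expression) can be sketched in MySQL's dialect. This is an illustrative example, not taken from the article; the `orders` table, the procedure name, and the parameter names are invented for the sketch:

```sql
-- Minimal stored procedure sketch in MySQL's dialect.
-- The table `orders`, the procedure name and the parameters are
-- hypothetical, chosen only for illustration.
DELIMITER //

CREATE PROCEDURE count_orders_for_customer (
    IN  p_customer_id INT,   -- input parameter supplied by the caller
    OUT p_order_count INT    -- output parameter handed back to the caller
)
BEGIN
    SELECT COUNT(*)
      INTO p_order_count
      FROM orders
     WHERE customer_id = p_customer_id;

    -- Simple flow control inside the procedure body
    IF p_order_count IS NULL THEN
        SET p_order_count = 0;
    END IF;
END //

DELIMITER ;

-- Unlike a user-defined function, which can appear inside an expression,
-- the procedure must be invoked explicitly:
CALL count_orders_for_customer(42, @n);
SELECT @n;
```

The OUT parameter shows one way a procedure can hand a value back without a RETURN; the same procedure could instead end with a bare SELECT to return a result set to the caller.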
The increasing adoption of stored procedures led to the introduction of procedural elements to the SQL language in the SQL:1999 and SQL:2003 standards in the part SQL/PSM. That made SQL an imperative programming language. Most database systems offer proprietary and vendor-specific extensions, exceeding SQL/PSM. A standard specification for Java stored procedures exists as well (SQL/JRT).
|Database system||Implementation language|
|DB2||SQL PL (close to the SQL/PSM standard) or Java|
|Firebird||PSQL (Fyracle also supports portions of Oracle's PL/SQL)|
|Informix||SPL or Java|
|Microsoft SQL Server||Transact-SQL and various .NET Framework languages|
|MySQL||own stored procedures, closely adhering to SQL/PSM standard.|
|Oracle||PL/SQL or Java|
|PostgreSQL||PL/pgSQL, can also use own function languages such as pl/perl or pl/php|
Other uses
In some systems, stored procedures can be used to control transaction management; in others, stored procedures run inside a transaction such that transactions are effectively transparent to them. Stored procedures can also be invoked from a database trigger or a condition handler. For example, a stored procedure may be triggered by an insert on a specific table, or update of a specific field in a table, and the code inside the stored procedure would be executed. Writing stored procedures as condition handlers also allows database administrators to track errors in the system with greater detail by using stored procedures to catch the errors and record some audit information in the database or an external resource like a file.
Comparison with dynamic SQL
Overhead: Because stored procedure statements are stored directly in the database, they may remove all or part of the compilation overhead that is typically required in situations where software applications send inline (dynamic) SQL queries to a database.
(However, most database systems implement "statement caches" and other mechanisms to avoid repetitive compilation of dynamic SQL statements.) In addition, while they avoid some overhead, pre-compiled SQL statements add to the complexity of creating an optimal execution plan because not all arguments of the SQL statement are supplied at compile time. Depending on the specific database implementation and configuration, mixed performance results will be seen from stored procedures versus generic queries or user defined functions. Avoidance of network traffic: A major advantage with stored procedures is that they can run directly within the database engine. In a production system, this typically means that the procedures run entirely on a specialized database server, which has direct access to the data being accessed. The benefit here is that network communication costs can be avoided completely. This becomes particularly important for complex series of SQL statements. Encapsulation of business logic: Stored procedures allow programmers to embed business logic as an API in the database, which can simplify data management and reduce the need to encode the logic elsewhere in client programs. This can result in a lesser likelihood of data corruption by faulty client programs. The database system can ensure data integrity and consistency with the help of stored procedures. Delegation of access-rights: In many systems, stored procedures can be granted access rights to the database that users who execute those procedures do not directly have. Some protection from SQL injection attacks: Stored procedures can be used to protect against injection attacks. Stored procedure parameters will be treated as data even if an attacker inserts SQL commands. Also, some DBMSs will check the parameter's type. A stored procedure that in turn generates dynamic SQL using the input is however still vulnerable to SQL injections unless proper precautions are taken. 
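The difference between a parameterized procedure and one that builds dynamic SQL from its input can be illustrated as follows. Again this is a hedged sketch in MySQL's dialect; the `users` table and both procedure names are hypothetical:

```sql
-- Safe: p_name is bound as a parameter and is only ever treated as data,
-- even if it contains quote characters or SQL keywords.
DELIMITER //
CREATE PROCEDURE find_user_safe (IN p_name VARCHAR(100))
BEGIN
    SELECT id, name FROM users WHERE name = p_name;
END //

-- Still vulnerable: the parameter is spliced into a dynamic SQL string,
-- so input such as  x' OR '1'='1  changes the meaning of the query.
CREATE PROCEDURE find_user_unsafe (IN p_name VARCHAR(100))
BEGIN
    SET @sql = CONCAT('SELECT id, name FROM users WHERE name = ''',
                      p_name, '''');
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
END //
DELIMITER ;
```

In the second procedure the "proper precautions" mentioned above would be to bind the value through a placeholder (PREPARE with ? and EXECUTE ... USING) or escape it, rather than concatenating it into the statement text.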
Comparison with functions
- A function is a subprogram written to perform certain computations.
- A scalar function returns only a single value (or NULL), whereas a table function returns a (relational) table comprising zero or more rows, each row with one or more columns.
- Functions must return a value (using the RETURN keyword), but for stored procedures this is not compulsory.
- Stored procedures can use the RETURN keyword, but without any value being passed.
- Functions can be used in SELECT statements, provided they don’t do any data manipulation. However, procedures cannot be included in SELECT statements.
- A stored procedure can return multiple values using the OUT parameter, or return no value at all.
- A stored procedure can save query compilation time.
Comparison with prepared statements
Prepared statements take an ordinary statement or query and parameterize it so that different literal values can be used at a later time. Like stored procedures, they are stored on the server for efficiency and provide some protection from SQL injection attacks. Although simpler and more declarative, prepared statements are not ordinarily written to use procedural logic and cannot operate on variables. Because of their simple interface and client-side implementations, prepared statements are more widely reusable between DBMSs.
Disadvantages
- Stored procedure languages are quite often vendor-specific. Switching to another vendor's database most likely requires rewriting any existing stored procedures.
- Stored procedure languages from different vendors have different levels of sophistication.
- For example, Oracle's PL/SQL has more language features and built-in features (via packages such as DBMS_ and UTL_ and others) than Microsoft's T-SQL.
- Tool support for writing and debugging stored procedures is often not as good as for other programming languages, but this differs between vendors and languages.
- For example, both PL/SQL and T-SQL have dedicated IDEs and debuggers.
PL/pgSQL can be debugged from various IDEs.
External links
- Stored Procedures in MySQL FAQ
- An overview of PostgreSQL Procedural Language support
- Using a stored procedure in Sybase ASE
- PL/SQL Procedures
- Oracle Database PL/SQL Language Reference
<urn:uuid:9b63bdf0-49f6-4271-8748-cfc3c561c18e>
{ "date": "2013-05-22T00:08:52", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.8626874685287476, "score": 3.890625, "token_count": 1575, "url": "http://en.wikipedia.org/wiki/Stored_procedure" }
Fruits & Veggies - More Matters® More Matters Because… Fruits and veggies are nutrition powerhouses. Not only are they low in fat and calories, they are also good sources of a variety of nutrients, such as vitamin C and folic acid, that promote good health. As part of a healthy diet, eating fruits and vegetables can: - Help maintain a healthy weight - Help prevent certain cancers - Help maintain a healthy blood pressure - Reduce heart disease risk - Reduce diabetes risk With over 200 choices and a variety of packaging options to make fruits and vegetables easy to store and serve, there’s bound to be something to please everyone, even the pickiest of eaters. Fresh, frozen, canned, and dried—they all count. How are you going to get more? Fruits & Veggies — More Matters® is a public health campaign to support all Americans in eating more fruits and veggies. The campaign was updated from the previous 5 A Day message to mirror the most current US Dietary Guidelines, which recommend filling half your plate with fruits and vegetables. Fruits & Veggies — More Matters is jointly sponsored by the Centers for Disease Control and Prevention and the Produce for Better Health Foundation. The Missouri Department of Health and Senior Services is the lead agency for coordinating Fruits & Veggies—More Matters activities in our state. The program encourages Missourians to eat more fruits and vegetables and to increase the availability of fruits and vegetables at home, school, work, and other places where food is served. For more information about Missouri’s Fruit and Vegetable Program or to join the network of Missouri fruit and vegetable colleagues please contact [email protected].
<urn:uuid:bb0c6fd9-75cd-4923-a1f8-7ce52fa7e328>
{ "date": "2013-12-10T03:48:03", "dump": "CC-MAIN-2013-48", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164006951/warc/CC-MAIN-20131204133326-00002-ip-10-33-133-15.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.922615110874176, "score": 2.90625, "token_count": 356, "url": "http://www.health.mo.gov/living/wellness/nutrition/foodprograms/fruitsandveggies/index.php?style=mobile2" }
- An example of viable is a fetus.
- An example of viable is a plan to save a small portion of money each month in hopes of eventually purchasing a car.
- able to live; specif.,
- having developed sufficiently within the uterus to be able to live and continue normal development outside the uterus: a premature but viable infant
- able to take root and grow: viable seeds
- workable and likely to survive or to have real meaning, pertinence, etc.: a viable economy, viable ideas
Origin of viable: French, likely to live; from vie, life; from Classical Latin vita: see vital
- Capable of success or continuing effectiveness; practicable: a viable plan; a viable national economy. See Synonyms at possible.
- a. Capable of living, developing, or germinating under favorable conditions: viable spores. b. Capable of living outside the uterus. Used of a fetus or newborn.
Origin of viable: French, from vie, life, from Old French, from Latin vīta; see gwei- in Indo-European roots.
(comparative more viable, superlative most viable)
- Able to live on its own (as for a newborn).
- Able to be done, possible.
- In biology, able to live and develop.
<urn:uuid:5d2c0d79-d4b4-4e90-b59d-38bf7e796782>
{ "date": "2016-06-25T14:06:01", "dump": "CC-MAIN-2016-26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393332.57/warc/CC-MAIN-20160624154953-00036-ip-10-164-35-72.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8594949245452881, "score": 3.078125, "token_count": 281, "url": "http://www.yourdictionary.com/viable" }
The following is a list of important crops cultivated in the Peninsula: nellu or rice; varaku or kada millet; mondi or Indian millet; thinaichchamy or German/Italian millet; panichchamy or common millet; karurhachchamy (a variety of millet); putchamy or panicum colonum; kurakkan or eleusine coracana; payaru or green gram; uzhunthu or black gram; kollu or red horse gram. Peasants of Jaffna, who have been known for qualities of thrift, prudence, patience and hard work, have wisely and carefully used the available land by the system of crop rotation and soil fertilization. One may mention three factors that prompted special forms of crop rotation:
- to make the best use of the land, consisting of poor soil,
- to feed the big population of the Peninsula by cultivating food crops like paddy, dry grains and yams, and
- to enable the people of the Peninsula to survive financially by the cultivation of cash crops like tobacco and chillies.
This is how a peasant of Jaffna makes use of every available inch of land: “Often on the borders of vegetable lots, or gardens of chillies, bean creepers are grown, providing a second crop. Where garden crops are fenced in, the fences are used as a trellis for snake gourds and other vegetables. Sometimes at the four corners of a small plot, a yard square containing a brinjal plant, one discovers four Indian corn plants and to these are trained bean creepers.”
Finally, a special feature of irrigation in Jaffna, which has today almost vanished with the introduction of modern machines, may be mentioned. Though the Peninsula is an arid region, the abundance of underground water, which has been “the most important condition for human settlement in the Peninsula,” is fully utilized by digging wells. The presence of such wells is marked usually by tulip trees (surya) and a few coconut trees. Water is drawn from the well by means of a device known as a thula or well-sweep. It is a balanced lever.
“A palmyrah trunk is supported horizontally on supports, with the thinner end of the trunk just over the well. To this end is fixed a pole or a rope that can be dropped into the well, and at the end of the pole is a bucket. Two men walk up and down the palmyrah trunk, and as they walk towards the thinner end, their weight dips the pole and its bucket to the mouth of the well. As they walk back towards the thicker end of the beam, they bring up the thinner end, and with it comes the bucket filled with water. This is emptied by a third man into the field. Thus for hours two men run up and down the palmyrah beam, working it in a see-saw manner, while a third man lifts the bucket and empties it into the field, where the little channels carry the water all over the garden.” So much for agriculture.
In-depth oil analysis is essential to the health and reliability of production machinery. Oil analysis may be one of the last frontiers of industrial maintenance, where large amounts of money can be saved for a relatively small investment. By reducing failure-related costs, it is not beyond the realm of possibility to expect a return on investment in excess of 500 percent. Savings of more than $1 million are possible, depending on the size of the plant.

Not all lube oil analysis programs are equal. Just because a plant engages in some form of oil analysis does not mean that the machinery is well protected or that the program is effective. It is important to note that oil analysis is only one part of a comprehensive tribology program that includes vibration monitoring and analysis, ultrasonics, and thermography. Oil analysis supplements vibration analysis by revealing two key root causes of machinery failure: changes in oil chemistry and oil contamination.

Evaluate the current situation

Here are three questions to ask:
- How many samples are being collected and tested?
- What tests are being done regularly?
- What cost savings have been documented in the past 12 months?

Sampling. Collecting as few as 10 oil samples per quarter is considered adequate in some plants. This is probably not enough, however. On the other hand, some intensive programs involve gathering more than 1000 samples per month. In general, collecting fewer than fifty samples per month is an indication of an incomplete oil analysis program in most plants. Samples should be collected and tested often enough to detect contamination and chemistry problems and establish trends. If a seal failure could allow contamination leading to damage in three months, then monthly samples will be necessary to identify a problem early enough so that steps can be taken to repair the seal.
Because every plant is unique, there is no single answer to questions regarding the number of oil samples to be gathered or collection frequency. On average, most industrial mills or plants can expect excellent cost savings based on information gained by collecting and analyzing between 50 and 200 samples each month. Here is one rule of thumb: if there are 3000 vibration points in the oil-lubricated pumps, motors, compressors, turbines, gearboxes, air handlers, and hydraulic systems in a plant, at least 100 oil points should be sampled monthly.

Testing. The going price for industrial oil analysis is about $32 per sample, but may be as low as $8 each. Is a single sample worth $32? What will it cost to repair a damaged machine? A greater investment is probably economical protection for millions of dollars worth of production equipment. Industrial machinery is subject to contamination and chemistry-related faults leading to abnormal wear mechanisms typically involving abrasion, fatigue, adhesion, and corrosion. A thorough oil analysis for industrial applications seeks to identify lubricant components that support wear due to abrasion, adhesion, and corrosion. Typical tests include spectrometric oil analysis, total acid number, water by Karl Fischer, particle count with size distribution, and automatic and analytical wear debris analysis (WDA). Too many industrial oil samples are subjected to low-cost analysis when they really need the particle counting, particle size distribution, and wear debris analysis that come only with a more expensive program. Some maintenance departments choose low-cost tests because they do not understand the value of looking for particles larger than 10 microns. Purchasing agents may insist on lower cost analysis. Or, the oil supplier may give away oil analysis as a value-added service.

Savings.
If substantial cost savings cannot be attributed to oil analysis, serious changes to the current program should be considered because it is not producing results that are attainable. Successful oil analysis programs do pay off. A saving of $250,000 in the first year is not unusual, as an effective oil analysis program will identify potential problems that can be corrected by appropriate maintenance actions. If no follow-through is called for, the program is a waste of time and money. It would be better to do no oil analysis than to have a program with no maintenance follow-up. At least management will not be lulled into a false sense of security by thinking that plant assets are being fully protected.

Establish a strong program

The operational life of most industrial equipment is directly related to the contamination and chemistry of the lubricants, which are root causes of abrasion, adhesion, and corrosion. When a program is established to recognize the presence of contaminants and to identify the types and sizes of particles present in a lubrication system, a giant step has been taken toward predicting if and when a machine will fail in order to initiate corrective measures. Is it better to do testing and analysis on site or to rely on a well-equipped off-site laboratory to test samples collected in the plant? There are a number of good arguments for doing the work on site, including better control, immediate results and immediate retest if needed, analysis by technicians that are familiar with the equipment, and the ability to test more lubricants more often. In general, on-site oil analysis makes sense for large industrial plants with more than 100 oil systems. An effective on-site program monitors machine wear, system contamination, and oil chemistry. Emphasis must be placed on the identification of the primary root causes of abrasive wear, fatigue wear, adhesive wear, and corrosive wear.
Considering the wide range of equipment in an industrial plant and the number of faults to be monitored, an on-site program must have a range of capabilities, including both quantitative and qualitative wear debris analysis, particle counting, water contamination monitoring and oil chemistry testing. The key to the success of an on-site program is a well-trained, in-house champion with a vision for improvement. No matter who does the testing and analysis, successful oil analysis programs generally encompass:
- Automatic wear debris analysis providing a quantitative measure of ferrous and nonferrous metal particles in an oil sample
- Analytical wear debris analysis (e.g., the viewing and classifying of wear debris under a microscope)
- Particle counting with size distribution
- Water contamination
- Oil chemistry and viscosity
- Expert interpretation
- Electronic reporting
Wear debris analysis (WDA). WDA measures the nature and severity of wear mechanisms quantitatively and qualitatively. An automatic wear debris analyzer or ferrous density monitor not only measures particle size, it screens out the relatively few samples requiring in-depth visual analysis. Qualitative analysis is performed by a trained technician who uses a microscope to view both ferrous and nonferrous wear debris on a glass slide or filter patch. In many cases, this step produces the most useful information of all, including the concentration, shape, size, texture, color, and optical properties of the particles. A trained technician can determine types and causes of wear and contamination (abrasion, adhesion, fatigue or corrosion) quite accurately using this technique. Abrasive wear particles normally are an indication of excessive dirt or other hard particles that are cutting away at load-bearing surfaces. Adhesive wear particles will reveal problems with lubricant starvation that results from either low or high load, high temperature, slow speed, or inadequate lubricant delivery.
Fatigue wear particles may be associated with mechanical problems, such as improper fit, misalignment, imbalance or some other condition. Corrosive wear particles indicate the presence of corrosive fluids, such as water or process materials contacting metal surfaces. This knowledge, which reveals the condition of a piece of equipment when the sample was taken, is useful in predicting when corrective action will be needed and what must be done. Particle counting with size distribution. Water and dust, the most common contaminants in oil, are primary causes of abrasion, corrosion and fatigue wear. Effective oil analysis programs quantify both water and dust. Particle counting is the accepted method for measuring total concentration of particulate debris, as well as size distribution. Both are important for monitoring the condition of the lubricant and effectiveness of the filtration system. Particle counters for on-site oil analysis should actually measure multiple size ranges leading to a determination of size distribution. Both bench-top and portable units are available, but bench-top use of a portable particle counter can be cumbersome. A new ppm distribution method combines particle counting and WDA for maximum impact. Fig. 2 shows parts per million (ppm v/v) of solid particles vs size distribution for those particles. Each peak in the ppm distribution plot represents a different source of contamination or wear debris in the oil. If there are multiple peaks in the distribution, there should be a separate group of debris on a filter patch or glass slide corresponding to each peak in the plot. Each particle group can be attributed to a root cause event associated with contamination or wear events. Water contamination. There are many ways to measure water in oil. Visual appearance, crackle test, and time-resolved dielectric are three common methods of identifying water contamination problems.
The exact measure of water concentration is best left to a laboratory using Karl Fischer titration. Corrective actions depend on whether the water is in solution, emulsion, or free state. In general, emulsified and free water are most damaging. Oil chemistry. Chemical instability in lubricants is often caused either by ingress of process materials into the fluid or by breakdown of the fluid. Breakdown occurs due to high temperature exposure and/or aeration, possibly due to foaming. Another serious form of chemical instability is the result of water or coolant contamination. These corrosive fluids not only attack metal surfaces, they also consume vital additives that are needed for anti-oxidation, anti-wear and other functions in the fluid. Chemistry monitoring normally involves comparison of a used oil sample with new oil. Visual examination can reveal color changes from amber to reddish-brown, indicating chemical deterioration. Quantitative on-site methods for measuring oil chemistry include dielectric, voltammetric, and TAN test kits. Dielectric increases of 0.1 to 0.02 and TAN increases of 1.0 to 2.0 each represent significant chemical deterioration of lubricating oil.

Staffing and training

A basic understanding of the importance of oil analysis is imperative for all maintenance personnel. The single most important ingredient in a successful oil analysis program, however, is the champion behind it. One individual must be assigned to take the lead, and that person must be passionate about the opportunity he or she has been given to save the company money. Skill training and certification are essential. Many equipment vendors offer training and certification for their instruments and methods. In addition, general tribology training is available from various sources. The Society of Tribologists and Lubrication Engineers (STLE) provides training standards such as Certified Lubrication Specialist and Oil Monitoring Analyst.
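As an illustration, the dielectric and TAN screening idea described above can be sketched as a simple pass/fail check. The threshold values below are rough, illustrative assumptions drawn from the figures in the text, not laboratory limits:

```python
def oil_chemistry_flag(delta_dielectric, delta_tan):
    """Screen a used-oil sample against its new-oil baseline.

    delta_dielectric: increase in dielectric reading vs. new oil
    delta_tan: increase in Total Acid Number (mg KOH/g) vs. new oil
    Thresholds are illustrative assumptions, not lab-certified limits.
    """
    if delta_dielectric >= 0.02 or delta_tan >= 1.0:
        return "significant chemical deterioration - investigate"
    return "within normal limits"

# A fresh-looking sample passes; either metric alone can trigger a flag.
print(oil_chemistry_flag(0.005, 0.2))
print(oil_chemistry_flag(0.03, 0.1))
```

A screen like this is only a trigger for follow-up; as the article notes, the findings must drive actual maintenance actions to be worth anything.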
The standards are high, and the exams are not easy. Formal training is crucial. Which department should perform on-site oil analysis? The reliability team in the maintenance department is the first choice. A good alternative is the technical services department, where laboratory analysis for environment and process monitoring already takes place. A third possibility is an outside contractor to collect the samples and perform oil analysis on site. In any of these scenarios, the findings must impact equipment maintenance. Oil analysis without corresponding corrective actions will not be effective. Periodic auditing is suggested for best practice. Each audit should compare this plant with plants identified as industry benchmarks (e.g., those setting the highest standards for oil analysis practices). The audit report should include an assessment of performance and cite ways to improve. After the initial audit, a continuous improvement plan should be drafted. The plan should set objectives for the next 24 months. Then, quarterly reviews should measure progress. Each quarterly review should be summarized in a status report to the maintenance and plant managers. All reports must include available financial evidence of savings.

Benefits outweigh costs

Many industrial plants are faced with downsizing and outsourcing for maintenance activities. Some predictive maintenance teams that formerly comprised four to six people have been cut in half. How can these plants possibly increase the number of oil samples collected from 10 per month to more than 100 per month? Moreover, how can they begin doing the oil analysis? The plant collecting 10 or 20 samples per month is missing problems costing far more in labor and other expenses than the cost of collecting more samples. It takes only about one week per month to collect and test 100 samples. The payoff in both labor and cost savings is far greater than the time spent doing this work.
Obviously, new programs must be justified, but history provides dozens of documented case histories with anecdotal evidence backed by the knowledge of tribology experts. Ray Garvey is tribology solutions manager for Emerson Process Management CSI. Telephone (865) 675-2110; e-mail [email protected]; Internet www.mhm.assetweb.com. His certifications include PE, CLS and OMA1.
FreeBSD man pages : readlink (2)
READLINK(2) FreeBSD System Calls Manual READLINK(2)
readlink - read value of a symbolic link
Standard C Library (libc, -lc)
readlink(const char *path, char *buf, int bufsiz);
Readlink() places the contents of the symbolic link path in the buffer buf, which has size bufsiz. The readlink() function does not append a NUL character to buf.
The call returns the count of characters placed in the buffer if it succeeds, or a -1 if an error occurs, placing the error code in the global variable errno.
Readlink() will fail if:
[ENOTDIR] A component of the path prefix is not a directory.
[ENAMETOOLONG] A component of a pathname exceeded 255 characters, or an entire path name exceeded 1023 characters.
[ENOENT] The named file does not exist.
[EACCES] Search permission is denied for a component of the path prefix.
[ELOOP] Too many symbolic links were encountered in translating the pathname.
[EINVAL] The named file is not a symbolic link.
[EIO] An I/O error occurred while reading from the file system.
[EFAULT] Buf extends outside the process's allocated address space.
lstat(2), stat(2), symlink(2), symlink(7)
The readlink() function call appeared in 4.2BSD.
FreeBSD 4.8 June 4, 1993 FreeBSD 4.8
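The behavior documented above can be exercised from a higher-level language; for example, Python's os.readlink wraps this system call (and, unlike the raw call, returns a complete string object rather than an un-terminated buffer). A minimal sketch:

```python
import errno
import os
import tempfile

# Create a symlink and read its target back, mirroring readlink(2).
d = tempfile.mkdtemp()
target = os.path.join(d, "target.txt")
link = os.path.join(d, "link")
open(target, "w").close()
os.symlink(target, link)

# readlink returns the stored link text; it does not resolve the path.
assert os.readlink(link) == target

# Calling it on a regular file fails with EINVAL, as documented above.
try:
    os.readlink(target)
except OSError as e:
    assert e.errno == errno.EINVAL
```

Note that C callers must remember the no-NUL-termination caveat: terminate the buffer yourself using the returned character count before treating it as a string.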
Here, There, and Everywhere? Which came first--galaxies or the supermassive black holes that lurk in their centers? New evidence hints that the galaxies we see grew from the inside out.
Predicting the Past - With giant new telescopes at their disposal, astronomers can now see back to the time when stars and galaxies first flooded the universe with light.
Taking the High Road - A trip to the mountaintop brings one amateur astronomer closer to nature, and more importantly, to the deep-sky objects he loves.
Celestial Portraits: Pictor, Dorado, & Mensa - These southern constellations might not merit a second look if not for the brilliant star clusters and splashy nebulae of the biggest and brightest galaxy visible from Earth.
Astronomy, 2001, February
If you’re into making your own PCBs at home, you know the trials of etching copper clad boards. It’s slow, even if you’re gently rocking your etch tank or even using an aquarium pump to agitate your etching solution. [cunning_fellow] over on Instructables has the solution to your etching problems, and can even produce printmaking plates, jewelry, photochemically machined small parts, and small brass logos of your second favorite website.
The Etchinator is a spray etcher, so instead of submerging a copper clad board into a vat of ferric or cupric chloride, etching solution is sprayed onto the board. We’ve seen this technique before, but previous builds use pumps to spray the etching solution and cost a bundle. [cunning_fellow]’s Etchinator doesn’t use pumps; it’s driven by two cordless drill motors sucking up etching solution through a hollow tube.
The basic idea behind the build is sticking a vertical PVC pipe in a box with etching solution. Mount an impeller in the bottom of the tube, drill many small holes in the side of the tube, and spin it with a motor up top. The solution is sucked up the tube, sprayed out the sides, and falls back down into the reservoir. Put a masked off copper board in the tank and Bob’s your uncle.
Not only did [cunning_fellow] come up with an awesome PCB etching solution, but the same machine can be used for etching brass plate for printmaking, and even photoetching brass sheets for model planes, trains, and automobiles. The quality is really amazing; the Instructables robot above was etched out of 0.7 mm thick brass, with an etch depth of 0.35 mm and only 0.05 mm of undercut. A very awesome build that is already on our ‘to build’ project list.
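Those quality figures imply an impressive etch factor (vertical etch depth divided by lateral undercut), which is a quick calculation:

```python
def etch_factor(depth_mm, undercut_mm):
    # Etch factor: how much deeper the etch cuts than it spreads sideways.
    # Higher is better: sharper sidewalls and finer recoverable detail.
    return depth_mm / undercut_mm

# Using the numbers quoted above: 0.35 mm deep, 0.05 mm undercut.
print(round(etch_factor(0.35, 0.05), 2))
```

An etch factor around 7 is far better than the roughly 1:1 you'd expect from still-tank etching, which is exactly why spray etching preserves fine features.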
What does it mean for people with disabilities? On February 8, 1996, the first major overhaul of American telecommunications policy in nearly 62 years, the Telecommunications Act of 1996, was signed into law. One of the goals of this historic legislation is to promote the availability of telecommunications services and equipment to people traditionally underserved in telecommunications, including people with disabilities. Read on for an informal overview of the provisions in the new law that most concern disability access, what the FCC has done to implement these provisions, and how you can get involved. DISABILITY ACCESS PROVISIONS: Two provisions of the Telecommunications Act focus entirely on access by persons with disabilities: Sections 255 and 713. I. Access by Persons with Disabilities: Section 255 Section 255 of the Act requires all manufacturers of telecommunications equipment and providers of telecommunications services to ensure that such equipment and services are designed and developed to be accessible to and usable by individuals with disabilities, if readily achievable. The FCC has conducted a rulemaking proceeding to implement this provision and the final Report & Order, released September 29, 1999 is on the website at www.fcc.gov/Bureaus/Common_Carrier/Orders/1999/fcc99181.txt and is available in other formats on the Section 255 main page at www.fcc.gov/encyclopedia/section-255-disability-rights-office. The final Report & Order includes the Regulations in Appendix B. II. Video Programming Accessibility: Section 713 Section 713 aims to ensure that video services are accessible to individuals with hearing and speech disabilities. It requires the FCC to study the level at which video programming is closed captioned, and then to establish a timetable for closed captioning requirements. (The FCC is authorized to exempt programming for which the provision of closed captioning would be economically burdensome.) 
A Fact Sheet, Frequently Asked Questions and the Final Rule are on the website www.fcc.gov/encyclopedia/closed-captioning-video-programming-television. Section 713 also directs the FCC to study the use of video description in order to assure the accessibility of this service to persons with visual impairments. The final rule to implement this provision is found on the website at www.fcc.gov/encyclopedia/video-description-information. Other provisions of the Act aim to promote access to telecommunications by all Americans, including those with disabilities. III. Advanced Telecommunications Incentives: Section 706 Section 706 requires the FCC to encourage the deployment of advanced telecommunications to all Americans, and to elementary and secondary schools and classrooms in particular. It requires the FCC to assess the level at which advanced telecommunications are available, and then to take steps, if necessary, to accelerate deployment of such services by removing barriers to infrastructure investment. This provision could significantly benefit children with disabilities as well as children without disabilities and adults. For updates and reports on Section 706 visit FCC website www.fcc.gov/broadband/ IV. Universal Service: Section 254 Section 254 concerns universal service, and directs the FCC and a Federal-State Joint Board to define what services should be made universally available and to take other actions as needed to further the Act's universal service principles. Section 254 also revises the definition of universal service to include schools, libraries, and health care facilities. It says that telecommunications companies must provide services to these public institutions at affordable rates, upon request. The FCC and the States must decide what constitutes affordable rates, what telecommunications services should be covered, and how discounts should be made available to public institutions. V.
Coordination for Interconnectivity: Section 256 Section 256(b)(2)(B) directs the FCC to establish procedures for oversight of telecommunications network planning and states that the FCC may participate with the industry in developing standards for "interconnectivity" (the ability of telecommunications carriers to connect to each other's networks). Such standards would promote access to telecommunications networks by people with disabilities. VI. Interconnection: Section 251 Section 251(a)(2) states that telecommunications carriers may not install network features, functions, or capabilities which do not comply with the guidelines and standards established under Sections 255 and 256. HOW THE FCC DEVELOPS REGULATIONS: The FCC develops regulations through the "notice and comment" process, which allows the public to participate in rulemaking. The FCC usually begins a proceeding to implement a section of a law by issuing one of two types of public documents -- a Notice of Inquiry or a Notice of Proposed Rulemaking. The public is encouraged to read these documents and address the issues they raise. These documents list deadlines for filing comments and/or reply comments (reply comments give parties an opportunity to respond to the comments filed by others). Comments and reply comments form the "record" of the proceeding, and are considered by the Commissioners and their staffs when they make final decisions. HOW YOU CAN PARTICIPATE: You may want to begin by reading those sections of the Telecommunications Act that pertain to disability access, as cited above. The text of the Act is at www.fcc.gov/telecom.html. When a proceeding that you would like to participate in is underway, it is suggested you read the relevant Notice of Inquiry or Notice of Proposed Rulemaking. These documents will be available on the FCC web site, or you may phone 1-888-225-5322 (voice) or 1-888-835-5322 (TTY) for more information on how to obtain them. 
The Notice of Inquiry or Notice of Proposed Rulemaking will have instructions on how to submit a comment or reply comment, and how many copies to submit. (You can submit a reply comment even if you did not submit a comment.) You may wish to first familiarize yourself with our hints on filing comments. HOW TO STAY INFORMED ON THESE ISSUES: To get the most recent information on the FCC's implementation of the disability access provisions of the Telecommunications Act, use one or more of these methods: (1) Regularly check the Disabilities Issues Home Page. Also, be sure to check the rest of the FCC Home Page regularly, for developments in all Bureaus and Offices of the Commission. (2) To receive free Consumer and/or Disability-related information e-mailed directly to you, register today!
Currently, most electron sources are thermionic, where heating of a metallic filament results in electrons being "sprayed off" and extracted through a biasing grid. These thermal electron sources have limitations due to the required high operating temperature, power consumption, and lack of compactness. Further, as the electrons are boiled off in all directions, the emittance, or spatial kinetic energy distribution, of the source can be quite high and require complex electron focusing optics. In contrast, field emitters extract electrons through a large electric field without using high temperature. The emission process is quantum-mechanical tunneling through vacuum, and is directional. The resulting electron source requires no heating and has reduced emittance. While the most critical parameters for a thermionic source are the temperature and work function of the emitter, for a field emitter source, there are other experimental variables, most notably the geometric shape of the emitter, which impacts the tunneling probability. These properties are shown in the tunneling diagrams below.
Fig. 1. Tunneling diagrams showing quantum mechanical electron tunneling in thermionic and field emitter sources. For thermionic emission, the temperature and work function are most important. For field emission, the emitter shape is most important.
The primary focus of this effort is to develop metrology for improving the characterization of the emission and emittance properties of nanostructured field emitters. Using nanofabrication techniques, these emitters are being fabricated from carbon nanotubes and compound semiconductors that have been patterned into one-dimensional structures as shown below.
Fig. 2.
The emission areas of these nanofabricated structures are quite small, but they are capable of producing significant emission at low electric field gradients due to their high aspect ratio.
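The geometry dependence noted above can be made concrete with the simplified Fowler-Nordheim expression for field-emission current density, J = (A F^2 / phi) exp(-B phi^(3/2) / F), where the local field F is the applied field multiplied by a geometric field-enhancement factor beta. The constants and beta values below are generic textbook approximations for illustration, not measured values from this project:

```python
import math

# Simplified Fowler-Nordheim current density (A/m^2), textbook constants:
A = 1.54e-6   # A * eV / V^2
B = 6.83e9    # V / (m * eV^1.5)

def fn_current_density(applied_field_V_per_m, work_function_eV, beta):
    # beta is the geometric field-enhancement factor: high-aspect-ratio
    # emitters (nanotubes, patterned semiconductor tips) have large beta,
    # so the local tip field far exceeds the applied field.
    F = beta * applied_field_V_per_m
    phi = work_function_eV
    return (A * F**2 / phi) * math.exp(-B * phi**1.5 / F)

# Same applied field and work function; the sharper tip dominates.
blunt = fn_current_density(5e6, 4.5, beta=10)
sharp = fn_current_density(5e6, 4.5, beta=1000)
print(sharp > blunt)  # True: geometry controls the tunneling current
```

Because the field enters the exponent, modest changes in emitter shape swing the current by many orders of magnitude, which is why emitter geometry, rather than temperature, is the critical parameter for these sources.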
We are developing measurement techniques using a custom-built apparatus designed to characterize these nanostructured emitters. These include design and fabrication of arrays of micro-patterned anode collectors that can simultaneously determine the global emission from the array structure, and the local emission from individual emission sites. These collectors will also be individually addressed electronically to ascertain emitter-emitter interactions and their impact on device emittance. The goal is to develop these measurement techniques into appropriate metrology tools for this enabling nanotechnology.
By Laura James

Grandparents everywhere have superpowers! Unfortunately this power is often dormant because many grandparents don’t even know they have grandparenting superpowers. No, it’s not flying or shooting spider webs from wrists, but it does involve spiders of another sort: “Itsy Bitsy Spiders.” As handy as flying and web spinning would be for wrangling little ones, grandparent superpowers are far better and have a greater impact on the world.

The Power of Play

The superpower that all grandparents have is play; it is something that we all share, but like Peter Pan, grandparents sometimes forget how to play. If only there were an app for that. Well, now there is: Together Time with Song and Rhyme is a new app that helps grandparents and parents bond with their preschoolers through fun, tickles, songs, and rhymes that support early childhood development. “Together Time makes grandparents even more FUN for kids!” says Laura James, the app’s creator and founder of 7Potato.com.

“Childhood is a once-upon-a-time opportunity; it only lasts a few short years. It’s easy for grandparents to spend too many of those years focused on trying to get kids to behave in our adult world, when we could be using our superpower to make us child-like again. Grandparents are better served when they practice living in their children’s world and play.”

“It’s more of a Jane Goodall approach,” says James, “where you observe and behave like the little creatures, to try and understand kids and their world, instead of trying to make the little creatures fit into your world. This immersive approach helps grandparents gain trust and paves the way for a more fun and seamless childhood.”

Play not only helps kids acquire critical early development skills, it makes everyone’s life more fun. Play has been proven to strengthen the parent-child connection, which inevitably helps reduce the natural friction that can often occur with new experiences and patterns of behavior.
Play: Helping Kids Prepare For Future Activities

Creative play, or even a playful attitude, can help break the cycle of noncompliance that many children develop toward seemingly mundane activities, and encourage children to think about these activities from another, more engaging perspective. Does your grandchild refuse to get in the car? Start singing “Windshield Wiper” before you even get out the door. This helps set expectations for where you are going, rather than the seat they have to be strapped into, while giving your child a sense of fun. Trouble getting the kids to go to bed? Help settle little creative minds for dreamland with this 5-second tickle before reading their bedtime stories: “The moon is round, as round as can be. Two eyes, a nose, a mouth, like ME!”

“These types of grandparent/child bonding activities give children the ability to experience their world with more of their senses: sight, sound, and touch. The sing-songy repetition helps them expand on their learning, intuitively and exponentially, with some sense of understanding and even control,” says James. “It also encourages positive behavior and supports early literacy in language, math, and science, which in turn helps them become active learners and lead more engaged lives. Put simply: play encourages possibility in the world.

“Play is the work of children; it supports their ongoing development physically, cognitively, and emotionally. When grandparents and parents decide to take a little time to find their inner child, play can have the power to transform their lives, giggle-by-giggle, into a magical wonderland. One of the best parts of having kids is that it gives you an excuse to relive childhood. It goes by fast so have fun, play often and connect!
If fun is the focus, learning will be the outcome, every time.”

To download Together Time, visit: https://itunes.apple.com/us/app/together-time-song-rhyme-for/id500577597?mt=8

Laura James, former children’s TV Producer and creator of Together Time: the app for parents, grandparents and educators, filled with fun activities that promote learning. Laura is also the Founder of 7Potato: the new online activity library for parents, grandparents, kids, and educators
The infamous Billy the Kid was shot and killed by Sheriff Pat Garrett On This Day in 1881 at the Maxwell Ranch in New Mexico. Billy the Kid, infamous gunslinger of the Old Wild West, escaped from prison three months prior to his fatal altercation with Sheriff Garrett. He was one of the most wanted criminals in the West. By the time he was 18, he had committed at least 17 murders, including that of Sheriff William Brady. History.com provides his real name as Henry McCarty. An article in Lancaster Intelligencer, dated 12 August 1881, lists his birth name as Billy M’Carthy and states he was born just 22 years prior. The Cheyenne Transporter states his real name was Wm. Bonney and he was not quite 21. He was born in New York. As a young child, his family moved to Grant County, New Mexico. “Billy the Kid is shot to death”. History.com On This Day is a prompt to further explore historical events. © Jeanne Ruczhak-Eckman, 2015
The Federal Communications Commission defines "broadband Internet" as service that delivers data at speeds of at least four megabits per second. However, according to a recent FCC report, 60% of the 94% of U.S. households that subscribe to broadband Internet only see speeds of 768 kilobits per second. That's plenty fast when all you're doing is browsing blogs, checking Twitter, and updating Facebook, but it becomes pretty problematic when you try to stream high quality video from services like Netflix. Netflix recommends its users have an Internet connection faster than 500 kilobits per second. So, even with 768 kbps, users will be able to stream movies and TV shows, albeit slowly. For HD and DVD quality, users are recommended to have between three and five Mbps. The only problem? Only 38% and 26% of households that subscribe to Netflix can reach those speeds, respectively. Interestingly enough, according to the FCC, some communities with wireless broadband connections see speeds that far exceed those with wired connections.
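The arithmetic behind these figures is simple to check. Below is a small sketch using the thresholds quoted in this article (roughly 0.5 Mbps minimum, 3 Mbps for DVD quality, 5 Mbps for HD); Netflix's actual requirements may differ, so treat the tier names and cutoffs as assumptions for illustration.

```python
# Which playback tiers can a given connection support?
# Thresholds (in Mbps) are taken from the figures quoted above.
TIERS = [
    ("basic streaming", 0.5),
    ("DVD quality", 3.0),
    ("HD quality", 5.0),
]

def supported_tiers(speed_mbps):
    """Return the names of the tiers a connection of speed_mbps can stream."""
    return [name for name, required in TIERS if speed_mbps >= required]

# 768 kbps -- the speed most U.S. households actually see -- clears
# only the minimum threshold, while the FCC's 4 Mbps "broadband"
# floor adds DVD quality but still falls short of HD.
print(supported_tiers(0.768))
print(supported_tiers(4.0))
```

Run against the article's numbers, the sketch makes the gap concrete: the typical real-world speed supports only the lowest tier, and even connections at the FCC's definition of broadband cannot reach the 5 Mbps recommended for HD.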
Essay Topic 1

Answer the following essay question, being sure to use at least two examples while fully explaining why the textual evidence is being used:
- What is your favorite dream from "Einstein's Dreams" and why?

Essay Topic 2

Answer the following essay question, being sure to use at least two examples and fully explaining the reasoning behind your answer:
- What is your least favorite dream from "Einstein's Dreams" and why?

Essay Topic 3

Take into consideration the many different representations of time described in the novel. Find the most realistic representation of time, described in a dream, and build a case as to why it is the most reasonable representation of what time could be. Be sure to use text-based examples to prove why this representation could exist in reality.

Essay Topic 4

Choose a representation of time from the novel and examine why it is not a...
At the start of the breeding season, male northern fur seals begin arriving at their traditional breeding sites about a month ahead of the females, and start competing for breeding territories (4). These fights can be extremely violent, with each mature bull aiming to slash an opponent’s neck with his sharp canine teeth. Only the largest and heaviest bulls can hope to claim the title of ‘beachmaster’; the smaller, younger males, who have no chance of competing with the fully-grown animals, occupy the fringes of the breeding territories (3).

The female northern fur seals arrive at the breeding grounds in mid-June and give birth to the pups, conceived the year before, some two days after their arrival. Within a week of the birth, the females will mate again (4). Males compete to secure as many females as they can within a harem, although it is thought that females are influenced by the presence of other females and the characteristics of the territory rather than the mere size and power of the male (2). The beachmasters will continue to squabble and fight over females right through the breeding season, usually because these colonies are crowded, and wandering females sometimes stray into another male’s territory (3). Occasionally, younger males will attempt to steal a mating with a female and, if spotted by one of the beachmasters, they will be chased off (2). In order to ensure that their females are not claimed by another male, northern fur seal bulls do not feed throughout the breeding period and may eventually lose 20 percent of their body weight (4).

Female northern fur seals suckle their pups for up to ten days before returning to feed at sea, usually during the night. The female will stay at sea feeding for four to ten days, returning to feed the pup for one or two days. The female will do this for four months before leaving her youngster and migrating south, usually in late October (4).
Fur seals feed on a variety of prey, including squid and pollock (4), and have also been recorded taking seabirds (2). The fertilised egg within the female fur seal undergoes a four-month period of delayed implantation. This ensures that the developing pup will be born at the right time the following year, when the animals return to their breeding grounds. The pups will spend as long as 22 months at sea before returning to the beach where they were born (4). Fur seals mature between the ages of three and six, but males will probably not begin to breed for an additional three years (2). The principal natural predators of fur seals are orcas (Orcinus orca), great white sharks (Carcharodon carcharias) and the much larger Steller’s sea lion (Eumetopias jubatus). On land, northern fur seal pups can fall prey to foxes (4).
The Mobile-Tensaw Delta is a fascinating place where five rivers come together into Mobile Bay. When the Mobile, Spanish, Tensaw, Apalachee, and Blakeley rivers flow into Mobile Bay, it’s a collision of fresh and salt water. This mix of waters creates a rare environment for all manner of unique plants and animals. The delta includes some 250,000 acres of scenic waterways, woods, and wetlands. This habitat supports over 500 types of plants, 126 species of fish, and more than 300 species of birds. 30 species of amphibians, 69 species of reptiles, and 40 species of mammals also call the delta home. 5 Rivers Delta Resource Center is located in Spanish Fort, AL. The center encompasses over 81 acres of the Mobile-Tensaw Delta for you to explore in a variety of ways. There are trails to hike, places to picnic, and endless photo opportunities.
OLYMPIA – It's rare to see a sea otter in Puget Sound these days and rarer still to spot one five miles inland. But then, "McAllister" is clearly no ordinary sea otter. Responding to a call from the City of Olympia, biologists from the Washington Department of Fish and Wildlife (WDFW) and an area research firm captured the 53-pound marine mammal Monday night five miles up McAllister Creek in an outlet to McAllister Springs. "When we first got the call, I was sure we were talking about a river otter," said Steve Jeffries, a WDFW research scientist who netted McAllister with the help of John Calambokidis from Cascadia Research. "I've never heard of a sea otter roaming that far upstream." Named after his adoptive creek, McAllister was transported Monday night to the Point Defiance Zoo in Tacoma, where he dined on six rock crabs, 12 squid and 2 pounds of prawns. Today (Wednesday), having received the results of McAllister's medical exam, Jeffries plans to tag the wayward sea otter and release him into Puget Sound. "Aside from a few scrapes on his back flippers, he appears to be in good health, said Jeffries, who puts McAllister at 2 or 3 years old. "He's very active." That's good news to Harriet Allen, endangered species manager for WDFW, which lists sea otters as a state endangered species. They are also protected under the federal Marine Mammals Protection Act. Having once thrived off the Washington coast, sea otters were wiped out by the fur trade in the 1800s, Allen explained. They were then reintroduced to the state in 1969-70 with 59 animals brought south from Amchitka Island, Alaska. Today, approximately 600 sea otters live in state waters, mostly along the Pacific coast from Cape Flattery to Destruction Island, although a few are spotted every year around Puget Sound and the San Juan Islands. "We don't see that many sea otters in southern Puget Sound, and to find one that far upstream is very unusual," Allen said. 
"This guy must have thought life would be better as a river otter."

There are actually some significant differences between sea otters and river otters, starting with the fact that river otters are much more common and are not listed by the state as a "species of concern." In addition:
- Sea otters are generally larger, weighing in at 20 to 90 pounds, compared to 12 to 25 pounds for river otters.
- The back feet of a sea otter look like flippers; a river otter's do not.
- Sea otters typically feed while floating on their backs; river otters usually feed on shore.

One field guide also states that river otters live in both freshwater and ocean habitats while sea otters "live exclusively in the ocean." "In this case, anyway, McAllister clearly departed from that rule," Allen said.

Allen asks that anyone who spots a sea otter in Puget Sound call Steve Jeffries at WDFW (253-589-7235) or John Calambokidis at Cascadia Research (360) 943-7325. "We're trying to monitor these animals so we can keep track of how they're faring," Allen said. "McAllister should be easy to spot: he'll be the one with red and white tags on his back flippers."

WDFW and the U.S. Fish and Wildlife Service (USFWS) are also seeking information on any sea otters found stranded or dead on the Washington coast. Last year, there were 22 documented cases of sea otters found dead on the beach, said Allen, noting that the causes of these deaths are unknown. Anyone who finds a stranded or dead sea otter is asked to call USFWS at (360) 753-6048, the U.S. Geological Service at (541) 754-4388 or Jeffries at the number above.
- Ethanol overdose in children may result in hypoglycemia.
- Methanol ingestion is associated with visual disturbance, metabolic acidosis, and possibly multiorgan system failure.
- Ethylene glycol poisoning is associated with metabolic acidosis, renal failure, and possibly death.
- Isopropanol may cause CNS depression but does not usually cause metabolic acidosis.
- All of the toxic alcohols can produce an osmolal gap.
- Fomepizole is the only FDA-approved antidote for ethylene glycol and methanol toxicity.
- Hemodialysis is indicated in severe toxic alcohol ingestions not responsive to conventional medical therapy, or with evidence of end-organ damage or severe acidosis.

According to a recent 2-year prospective study in Norway, of those pediatric poisonings reported in children aged 8 to 15 years, 46% involved ethanol.1 In addition to alcohol-containing beverages such as beer, wine, and hard liquors, children have access to more than 700 ethanol-containing medicinal preparations, colognes and perfumes, as well as mouthwashes that can contain up to 75% ethanol. There has been increasing legislation in the United States regulating child-resistant packaging and product-warning labels on mouthwash products containing ethanol. Since these interventions were instituted in 1995, improved outcomes have been documented with regard to these ingestions in children.2,3

Pharmacokinetics and Pathophysiology

Ethanol undergoes hepatic metabolism via two metabolic pathways: alcohol dehydrogenase and the microsomal ethanol-oxidizing system (MEOS). The alcohol dehydrogenase pathway is the major metabolic pathway and the rate-limiting step in converting ethanol to acetaldehyde. In general, nontolerant individuals metabolize ethanol at 10 to 25 mg/dL/h and alcohol-tolerant individuals at up to 30 mg/dL/h. Children may ingest large amounts of ethanol in relation to their body weight, resulting in rapid development of high blood alcohol concentrations.
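These elimination rates lend themselves to a back-of-the-envelope calculation. The sketch below assumes a hypothetical starting concentration of 150 mg/dL and treats elimination as a constant rate, which is only an approximation; real patients are managed by clinical status, not arithmetic.

```python
# Approximately zero-order ethanol elimination: the blood concentration
# falls by a roughly constant amount per hour, independent of the level.
# Rates are the ranges quoted above, in mg/dL per hour.
def hours_to_clear(concentration_mg_dl, rate_mg_dl_per_h):
    """Hours for blood ethanol to fall from the given level to zero,
    assuming a constant elimination rate."""
    return concentration_mg_dl / rate_mg_dl_per_h

# Hypothetical example: a level of 150 mg/dL
print(hours_to_clear(150, 10))  # slow end of the nontolerant range: 15.0 h
print(hours_to_clear(150, 25))  # fast end of the nontolerant range: 6.0 h
```

Even this crude estimate shows why the range matters clinically: the same blood level may take more than twice as long to clear at the slow end of the quoted rates as at the fast end.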
In children younger than 5 years, the ability to metabolize ethanol is diminished because of immature hepatic dehydrogenase activity. Ethanol is a selective CNS depressant at low concentrations, and a generalized depressant at high concentrations. Initially, ethanol produces exhilaration and loss of inhibition, which progresses to lack of coordination, ataxia, slurred speech, gait disturbances, drowsiness, and, ultimately, stupor, and coma. The intoxicated child may demonstrate a flushed face, dilated pupils, excessive sweating, gastrointestinal distress, hypoventilation, hypothermia, and hypotension. Death from respiratory depression may occur at serum ethanol concentrations >500 mg/dL. Convulsions and death have been reported in children with acute ethanol intoxication owing to alcohol-induced hypoglycemia. Hypoglycemia results from inhibition of hepatic gluconeogenesis and is most common in children younger than 5 years. It does not appear to be directly related to the quantity of alcohol ingested.4 In symptomatic, pediatric patients who have suspected ethanol intoxication, the most critical laboratory tests are the serum ethanol and glucose concentrations.5 Although blood ethanol concentrations roughly correlate with clinical signs, the physician must treat patients based on their clinical status, not the absolute level. If the ethanol level does not correlate with the clinical picture, ...
All Means All (Second Week of Advent, 2013)

“We three kings of Orient are.” So begins a favorite carol of the Advent season about the “Wise Men” who visit the newborn Jesus. And so begins a tale that takes inaccuracy and historical revisionism to a whole new level. Reverend John Henry Hopkins, Jr., who wrote the carol a century and a half ago, should have known better.

First, we don’t know exactly how many kings there were. There could have been as few as two and up to almost any number. Tradition says that there were three (though some traditions mention twelve), and over time they were even given names: Caspar, Melchior, and Balthasar. But these are apocryphal stories.

Second, they were not “kings” from the Orient. They were Wise Men, or put more accurately, Magi. The Magi were astronomers – primitive by today’s standards – who were on the cutting edge of scientific and philosophical knowledge in their day. So it may be best to view these Magi as the uncanny combination of scientists, philosophers, and theologians – but not kings. And such men called Persia home (modern day Iran), not the Far East.

Third, these men did not find the Christ child while “following yonder star.” They saw the star “in the East” or “at the rising of the sun,” but then proceeded west to Palestine. The star did not reappear until they were already in Bethlehem.

And finally, the Magi, technically, do not belong in the Nativity scene at all. They were latecomers to the Christmas party, maybe as late as Jesus’ second birthday. The quaking shepherds, singing angels, and lowing cattle had returned to life as normal long ago.

On and on I could go ripping the veracity of this Christmas Carol apart, but that is not my intent. “We Three Kings” remains one of my favorite Holiday hymns to bellow out this time of year. My critique of it is to simply point out that apart from the accumulations of questionable tradition, we know little about these mysterious men from the East.
And these traditions prevent us from embracing what we can learn from them – for the journey of the Magi is a fascinating exercise in unexpected faith. They came seeking the child who had been born king of the Jews, based almost entirely on the appearance of an enigmatic star. While history is rampant with explanations for this phenomenon, one conclusion is certain: The Magi interpreted this unusual sign in the heavens as a clear communication that something extraordinary had taken place in the world. And even more extraordinary, these Persian sages applied their interpretation to the emergence of Jesus, the Jewish Messiah.

Why so astonishing? Not many people would launch out on a dangerous journey through the Middle East based solely on a spiritual hunch. Not many people would put their life on hold to prove their mystical intuitions to be true. And the most shocking of all, not many Persians (today’s Iranians) would worship at the feet (or manger) of a Jew. And not many Jews could abide by such a thing, either!

Yet, in God’s way, these all belonged together. Divisions of race, religion, nationality or ethnicity did not factor into the equation. This is a foreshadowing of what the Apostle Paul would say later. “In Christ,” he said, “there is no difference between Jew and Greek, slave and free person, male and female. You are all the same in Christ Jesus” (See Galatians 3). And “all” does mean “all.” All are welcome into the presence of the One who will “reconcile everything – all things in heaven and on earth to himself.”

So here is where the Magi teach us the wisest of their lessons: There are many barriers to overcome and great distances to cover in our journey of faith – “field and fountain, moor and mountain” to quote Reverend Hopkins – but when we get to where we are going, we will be welcomed in with open arms.
There we will find the “King forever, ceasing never, over us all to reign.” And “all” surely means “all.”

Ronnie McBrayer is a syndicated columnist, pastor, and author. His newest book is “The Gospel According to Waffle House.” You can read more at www.ronniemcbrayer.me.
"It is not hard to see why the government of a region becomes less and less manageable with size. In a population of N persons, there are of the order of N² person-to-person links needed to keep channels of communication open. Naturally, when N goes beyond a certain limit, the channels of communication needed for democracy and justice and information are simply too clogged, and too complex; bureaucracy overwhelms human process."

Let's also remind ourselves of the empirical evidence that enormous online communities cannot satisfy every need. America Online has not subsumed all the smaller communities on the Internet. People unsubscribe from mailing lists when the traffic level becomes too high. Early adopters of USENET discussion groups (called "Netnews" or "Newsgroups" back in the 1970s and "Google Groups" to most people in 2005) stopped participating because they found the utility of the groups diminished when the community size grew beyond a certain point.

"We believe the limits are reached when the population of a region reaches some 2 to 10 million. Beyond this size, people become remote from the large-scale processes of government. Our estimate may seem extraordinary in the light of modern history: the nation-states have grown mightily and their governments hold power over tens of millions, sometimes hundreds of millions, of people. But these huge powers cannot claim to have a natural size. They cannot claim to have struck the balance between the needs of towns and communities, and the needs of the world community as a whole. Indeed, their tendency has been to override local needs and repress local culture, and at the same time aggrandize themselves to the point where they are out of reach, their power barely conceivable to the average citizen."

So the good news is that, no matter how large one's competitors, there will always be room for a new online community. The bad news is that growth results in significant engineering challenges.

Some of the challenges boil down to simple performance engineering: How can one divide the load of supporting an Internet application among multiple CPUs and disk drives? These can typically be solved with money, even in the absence of any cleverness. The deeper challenges cannot be solved with money and hardware. Consider, for example, the following questions:

It isn't challenging to throw hardware at a performance problem. What is challenging is setting up that hardware so that the service is working if any of the components are operational rather than only if all of the components are operational.

We'll examine each layer individually.
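The gulf between "any component" and "all components" is dramatic arithmetic. A quick sketch, assuming (hypothetically, for illustration) that each component is independently available 99 percent of the time:

```python
# Availability of a service built from n components, each independently
# up with probability p (the p = 0.99 figure below is a hypothetical).
def all_must_work(p, n):
    """Service is up only if every one of n components is up."""
    return p ** n

def any_may_work(p, n):
    """Service is up if at least one of n components is up."""
    return 1 - (1 - p) ** n

p, n = 0.99, 3
print(f"all of {n} needed: {all_must_work(p, n):.4%} uptime")  # roughly 97%
print(f"any of {n} needed: {any_may_work(p, n):.4%} uptime")   # roughly 99.9999%
```

Chaining three such components serially makes the service less reliable than any single one of them, while arranging the same three redundantly drives expected downtime from days per year toward seconds per year. That asymmetry is why the setup, not the hardware purchase, is the hard part.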
Some of the challenges boil down to simple performance engineering: How can one divide the load of supporting an Internet application among multiple CPUs and disk drives? These can typically be solved with money, even in the absence of any cleverness. The deeper challenges cannot be solved with money and hardware. Consider, for example, the following questions: It isn't challenging to throw hardware at a performance problem. What is challenging is setting up that hardware so that the service is working if any of the components are operational rather than only if all of the components are operational. We'll examine each layer individually. Suppose that we have a popular application and need 16 CPUs to support all the database queries. And let's further suppose that we've decided that the RDBMS will run all by itself on one or more physical computers. Should we buy 16 small computers, each with one CPU, or one big computer with 16 CPUs inside? The local computer shop sells 1-CPU PCs for about $500, implying a total cost of $8000 for 16 CPUs. If we visit the Web site for Sun Microsystems (www.sun.com) we find that the price of a 16-CPU Sunfire 6800 is too high even to list, but if the past is any guide we won't get away for less than $200,000. We will pay 25 times as much to get 16 CPUs of the same power, but all inside one physical computer. Why would anyone do this? Let's consider the peculiarities of the RDBMS application. The RDBMS server talks to multiple clients simultaneously. If Client A updates a record in the database and, a split-second later, Client B requests that record, the RDBMS is required to deliver the updated information to Client B. If we were to spread the RDBMS server program across multiple physical computers, it is possible that Client A would be served from Computer I and Client B would be served from Computer II. A database transaction cannot be committed unless it has been written out to the hard disk drive. 
Thus all that these computers need do is check the disk for updates before returning any results to Client B. Disk drives are 100,000 times slower than RAM. A single computer running an RDBMS keeps an up-to-date version of the commonly used portions of the database in RAM. So our multi-computer RDBMS server that ensures database coherency across processors via reference to the hard disk will start out 100,000 times slower than a single-computer RDBMS server.

Typical commercial RDBMS products, such as Oracle Parallel Server, work via each computer keeping copies of the database in RAM and informing each other of updates via high-speed communications networks. The machine-to-machine communication can be as simple as a high-speed Ethernet link or as complex as specialized circuit boards and cables that achieve memory bus speeds.

Don't we have the same problem of inter-CPU synchronization with a multi-CPU single-box server? Absolutely. CPU I is serving Client A. CPU II is serving Client B. The two CPUs need to apprise each other of database updates. They do this by writing into the multiprocessor machine's shared RAM. It turns out that the CPU-CPU bandwidth available on typical high-end servers circa 2002 is 100 Gbits/second, which is 100 times faster than the fastest available Gigabit Ethernet, FireWire, and other inexpensive machine-to-machine interconnection technologies.

Bottom line: if you need more than one CPU to run the RDBMS, it usually makes most sense to buy all the CPUs in one physical box.

The abstraction layer is sometimes referred to as "business logic". Something that is complex and fundamental to the business ought to be separated out so that it can be used in multiple places consistently and updated in one place if necessary. Below is an example from an e-commerce system that Eve Andersson wrote. This system offered substantially all of the features of amazon.com circa 1999.
Eve expected that a lot of ham-fisted programmers who adopted her open-source creation would be updating the page scripts in order to give their site a unique look and feel. Eve expected that laws and accounting procedures regarding sales tax would change. So she encapsulated the looking up of sales tax by state, the figuring out if that state charges tax on shipping, and the multiplication of tax rate by price into an Oracle PL/SQL function:

    create or replace function ec_tax
      (v_price IN number, v_shipping IN number, v_order_id IN integer)
    return number
    IS
        taxes           ec_sales_tax_by_state%ROWTYPE;
        tax_exempt_p    ec_orders.tax_exempt_p%TYPE;
    BEGIN
        SELECT tax_exempt_p INTO tax_exempt_p
        FROM ec_orders
        WHERE order_id = v_order_id;

        IF tax_exempt_p = 't' THEN
            return 0;
        END IF;

        SELECT t.* into taxes
        FROM ec_orders o, ec_addresses a, ec_sales_tax_by_state t
        WHERE o.shipping_address = a.address_id
        AND a.usps_abbrev = t.usps_abbrev(+)
        AND o.order_id = v_order_id;

        IF nvl(taxes.shipping_p,'f') = 'f' THEN
            return nvl(taxes.tax_rate,0) * v_price;
        ELSE
            return nvl(taxes.tax_rate,0) * (v_price + v_shipping);
        END IF;
    END;

The Web script or other PL/SQL procedure that calls this function need only know the proposed cost of an item, the proposed shipping cost, and the order ID to which this item might be added (these are the three arguments to ec_tax). That sales taxes for each state are stored in the ec_sales_tax_by_state table, for example, is hidden from the rest of the application. If an organization that adopted this software decided to switch to using third-party software for calculating tax, that organization would need to change only this one function rather than wading through hundreds of Web scripts looking for tax-related code. Should the abstraction layer run on its own physical computer? For most applications, the answer is "no".
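For readers more comfortable in a procedural language than PL/SQL, the same encapsulation idea can be sketched in Python. The rate table below is invented for illustration, standing in for the ec_sales_tax_by_state table; the point is that every page script calls one function and never touches the rates directly.

```python
# A rough Python analogue of ec_tax. The rate table and order record are
# illustrative stand-ins, not the book's actual schema: a real
# implementation would query ec_orders and ec_sales_tax_by_state as the
# PL/SQL version does.

# state -> (tax_rate, is_shipping_taxed); contents are made up
SALES_TAX_BY_STATE = {
    "MA": (0.05, False),
    "TX": (0.0625, True),
}

def ec_tax(price, shipping, order):
    """Return the sales tax owed on one proposed item of an order.

    `order` is a dict with 'tax_exempt_p' and 'state' keys, standing in
    for the ec_orders/ec_addresses join in the PL/SQL original.
    """
    if order.get("tax_exempt_p"):
        return 0.0
    rate, shipping_taxed = SALES_TAX_BY_STATE.get(order.get("state"),
                                                  (0.0, False))
    if shipping_taxed:
        return rate * (price + shipping)
    return rate * price
```

As with the PL/SQL version, switching to a third-party tax service would mean rewriting this one function, not every page script.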
These procedures are not sufficiently CPU-intensive to make splitting them off onto a separate computer worthwhile in terms of system administration effort and increased vulnerability to hardware failure. What's more, these procedures often do not even warrant a new execution environment. Most procedures in the abstraction layer of an Internet service require intimate access to relational database tables. That access is fastest when the procedures are running inside the RDBMS itself. All modern RDBMSes provide for the execution of standard procedural languages within the database server. This trend was pioneered by Oracle with PL/SQL and then Java. With the latest Microsoft SQL Server one can supposedly run any .NET-supported computer language inside the database. When should you consider a separate environment ("application server" process) for the abstraction layer? Suppose that a big bank, the result of several mergers, has an IBM mainframe to manage checking accounts, an Oracle RDBMS for managing credit accounts, and a SQL Server-based customer support system. If Jane Customer phones up the bank and asks to pay her credit card bill from her checking account, a computer program needs to perform a transaction on the mainframe (debit checking), a transaction on the Oracle system (credit Visa card), and a transaction on the SQL Server database (payment handled during a phone call with Agent #451). It is technically possible for, say, a Java program running inside the Oracle RDBMS to connect to these other database management systems but traditionally this kind of problem has been attacked by a stand-alone "application server", usually a custom-authored C program. The term "application server" has subsequently become used to describe the physical computers on which such a program might run and, in the late 1990s, execution environments for Java or C programs that served some function on a Web site other than page presentation or persistence. 
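The bank example can be sketched as a coordinator program, which is essentially the "application server" role: one piece of code driving three separate back ends. The connection objects and method names below are invented; real code would use each system's own client library, and a production system would want a true two-phase commit protocol rather than the naive compensation shown here.

```python
# Sketch of an application server coordinating one customer action across
# three systems (mainframe checking, Oracle credit cards, SQL Server
# support log). All method names on the connection objects are
# hypothetical.

def pay_credit_card_from_checking(mainframe, oracle, sqlserver,
                                  customer, amount, agent):
    """Attempt all three writes; on any failure, undo what succeeded.

    Returns True if everything committed, False if rolled back.
    """
    done = []  # undo thunks for the steps that have succeeded so far
    try:
        mainframe.debit_checking(customer, amount)
        done.append(lambda: mainframe.credit_checking(customer, amount))

        oracle.credit_card_payment(customer, amount)
        done.append(lambda: oracle.reverse_card_payment(customer, amount))

        sqlserver.log_support_event(customer, agent, "phone payment", amount)
        return True
    except Exception:
        # Compensate in reverse order for whatever already succeeded.
        for undo in reversed(done):
            undo()
        return False
```

The fragility of this compensation scheme (what if an undo itself fails?) is precisely why distributed-transaction middleware exists.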
Another example of where a separate physical application server might be desirable is where substantial computation must be performed. On most photo sharing sites, every time a photo is uploaded the server must create scaled versions in standard sizes. The performance challenge at the orbitz.com travel site is even more serious. Every user request results in the execution of a Lisp program written by MIT Artificial Intelligence Lab alumni at itasoftware.com. This Lisp program searches through a database of two billion flights and fares. The database machines that are performing transactions such as ticket bookings would collapse if they had to support these searches as well. If separate physical CPUs are to be employed in the abstraction layer, should they all come in the same box or will it work just as well to rack and stack cheap 1-CPU machines? That rather depends on where state is kept. Remember that HTTP is a stateless protocol. Somewhere the server needs to remember things such as "Registered User 137 wants to see pages in the French language", "Unregistered user who started Session 6781205 has placed the hardcover edition of The Cichlid Fishes in his or her shopping cart." In a multi-process multi-computer server farm, it is impossible to guarantee that a particular user will always be returned to the same running computer program, if for no other reason than you want the user experience to be robust to failure of an individual physical computer. If session state is being kept anywhere other than in a cookie or the persistence layer (RDBMS), your application server programs will need to communicate with each other constantly to make sure that their ad hoc database is coherent. In that case, it might make sense to get an expensive multi-CPU machine to support the application server. However, if all the layers are stateless except for the persistence layer, the application server layer can be handled by multiple cheap one-CPU machines. 
At orbitz.com, for example, racks of cheap computers are loaded with identical local copies of the fare and schedule database. Each time a user clicks to see the options for traveling from New York to London, one of those application server machines is randomly selected for action. The most common place for script execution is within the operating system process occupied by the Web server. In other words, the script language interpreter is built into the Web server. Examples of this architecture are Microsoft Internet Information Server (IIS) and Active Server Pages, AOLserver and its built-in Tcl interpreter, Apache and the mod_perl add-in. If you've chosen to use one of these popular styles of Web development, you've chosen to merge the presentation layer with the HTTP service layer, and spreading the load among multiple CPUs for one layer will automatically spread it for the other. The multi-CPU box versus multiple-separate-box decision here should again be based on whether or not the presentation layer holds state. If no session state is held by the running presentation scripts, it is more economical to add CPUs inside separate physical computers. The main reason that people run out of capacity on a single front-end Web server is that HTTP server programs are usually packaged with software to support computationally more expensive layers. For example, the Oracle RDBMS server, capable of supporting the persistence layer and the abstraction layer, also includes the necessary software for interpreting Java Server Pages and performing HTTP service. If you were running a popular service directly from Oracle you'd probably need more than one CPU. More common examples are Web servers such as IIS and AOLserver that are capable of handling the presentation and HTTP service layers from the same operating system process. If your scripts involve a lot of template parsing, it is easy to overload a single CPU with the demands of the Web server/script interpreter. 
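One way to keep every layer except persistence stateless, as discussed above, is to carry session state such as "Registered User 137 wants pages in French" in a cookie. Since users can edit cookies, the value should be signed with a server-side secret so tampering is detectable. A minimal sketch; the key and field layout are illustrative, and note that the payload is merely signed, not encrypted:

```python
import hashlib
import hmac

# Illustrative secret; a real deployment would keep this out of source code.
SECRET = b"server-side-secret-key"

def make_session_cookie(user_id, language):
    """Pack session state plus an HMAC signature into one cookie value."""
    payload = "%s:%s" % (user_id, language)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + ":" + sig

def read_session_cookie(cookie):
    """Return (user_id, language) if the signature verifies, else None."""
    try:
        payload, sig = cookie.rsplit(":", 1)
    except ValueError:
        return None
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    user_id, language = payload.split(":", 1)
    return user_id, language
```

Because any front-end machine holding the secret can verify the cookie, requests can be routed to any box without machine-to-machine session chatter.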
If no state is being stored in the HTTP Service layer it is cheapest to add CPUs in separate physical boxes. HTTP is stateless and user interaction is entirely mediated by the RDBMS. Therefore there is no reason for a CPU serving a page to User A to want to communicate with a CPU serving a page to User B. There are two ways to protect your users' privacy from packet sniffers. The first is by using a newer version of Internet Protocol, IPv6, which provides native data security as well as authentication. In the glorious IPv6 world, we can be sure of the origin of a packet, whether it is from a legitimate user or a denial-of-service attacker. In the glorious IPv6 world, we can be sure that it will be impractical to sniff credit card numbers or other user-sensitive data from Web traffic. As of spring 2005, however, it isn't possible to sign up for a home IPv6 connection. Thus we are forced to fall back on the 1990s-style approach of adding a layer between HTTP and TCP. This was pioneered by Netscape Communications as Secure Sockets Layer (SSL) and is now being standardized as TLS 1.0 (see http://www.ietf.org/html.charters/tls-charter.html). However it is performed, encryption is processor-intensive. On the client side, that's not a big deal. The client machine probably has a 2 GHz processor that is 98 percent idle. However on the server end performing encryption can tie up a whole CPU per user for the duration of a request. If you've run out of processing power the only thing to do is ... add processing power. The question is what kind and where. Adding general-purpose processors to a multi-CPU computer is very expensive as mentioned earlier. Adding additional single-CPU front-end servers to a two-tier server farm might not be a bad strategy especially because, if you're already running a two-tier server farm, it requires no new thinking or system administration skills. 
It is possible, however, that special-purpose hardware will be more cost-effective or easier to administer. In particular it is possible to do encryption in the router for IPv6. SSL encryption for HTTP connections can be done with plug-in boards, an example of which is the Compaq AXL300 PCI card, available in 2005 for $1400 with a claimed performance of handling 330 SSL connections per second. Finally it is possible to interpose a hardware encryption machine between the Web server, which communicates via ordinary HTTP, and the client, which makes requests via HTTPS. This feature is, for example, an option on load-balancing routers from F5 Networks (www.f5.com).

You might ask: on what CPU speed is this figure of 10 hits per second per CPU based? The number is independent of CPU speed! In the mid-1990s, we had 200 MHz CPUs. Web scripts queried the database and merged the results with strings embedded in the script. Everything ran on one physical computer so there was no overhead from copying data around. Only the final credit card processing pages were encrypted. We struggled to handle 10 hits per second. In the late 1990s we had 400 MHz CPUs. Web scripts queried the database and merged the results with templates that had to be parsed. Data were networked from the RDBMS server to the Web server before heading to the user. We secured more pages in response to privacy concerns. We struggled to handle 10 hits per second. In 2000 we had 1 GHz CPUs. Web scripts queried the referer header to find out if the request came from a customer of one of our co-brand partners. The script then selected the appropriate template. We'd freighted down the server with Java Server Pages and Enterprise Java Beans. We struggled to handle 10 hits per second. In 2002 we had 2 GHz CPUs. The programmers had decided to follow the XML/XSLT fashion. We struggled to handle 10 hits per second....
It seems reasonable to expect that hardware engineers will continue to deliver substantial performance improvements and that fashions in software development and business complexity will continue to rob users of any enjoyment of those improvements. So stick to 10 requests per second per CPU until you've got your own application-specific benchmarks that demonstrate otherwise.

We will start by positing a two-tier server farm with a single multi-CPU machine running the RDBMS and multiple single-CPU front-end machines, each of which runs the Web server program, interprets page scripts, performs SSL encryption, and generally does any computation not being performed within the RDBMS.

**** insert drawing of our example server farm ****

How was the CNN system experienced by users? When a student at MIT requested http://www.cnn.com/TECH/, his or her desktop machine would ask the local name server for a translation of the hostname www.cnn.com into a 32-bit IP address. (Remember that all Internet communication is machine-to-machine and requires numeric IP addresses; alphanumeric hostnames such as "www.amazon.com" or "web.mit.edu" are used only for user interface.) The MIT name server would contact the InterNIC registry to learn the IP addresses of the name servers for the cnn.com domain. The MIT name server would then contact CNN's name servers and learn that "www.cnn.com" was available at the IP address 188.8.131.52. Subsequent users within the same subnetwork at MIT would, for a period of time designated by CNN, get the same answer of 188.8.131.52 without the MIT name server going back to the CNN name servers.

Where is the load balancing in this system? Suppose that a Biology major at Harvard University requested http://www.cnn.com/HEALTH/. Harvard's name server would also contact CNN's name servers to learn the translation of "www.cnn.com".
This time, however, the CNN server would provide a different answer: 220.127.116.11, leading that user, and subsequent users within Harvard's network, to a different front-end server than the machine providing pages to users at MIT.

Round-robin DNS is not a very popular load balancing method today. For one thing, it is not very balanced. Suppose that the CNN name server tells America Online's name server that www.cnn.com is reachable at 18.104.22.168. AOL is perfectly free to provide that translation to all of its more than 20 million customers.

Another problem with round-robin DNS is the impact on users when a front-end machine dies. If the box at 18.104.22.168 were to fail, none of AOL's customers would be able to reach www.cnn.com until the expiration time on the translation had elapsed—the site would be up and running and providing pages to hundreds of thousands of users worldwide, but not to those users who'd received an unlucky DNS translation to the dead machine. For a typical domain, this period of time might be anywhere from 6 hours to 1 week. CNN, aware of this problem, could shorten the expiration and "minimum time-to-live" on cnn.com but if these were cut down to, say, 30 seconds, the load on CNN's name servers might start approaching the intensity of the load on its Web servers. Nearly every user page request would be preceded by a request for a DNS translation. (In fact, CNN set their minimum time-to-live to 15 minutes.)

A final problem with round-robin DNS is that it does not provide abstraction. Suppose that CNN, whose primary servers were all Unix machines, wished to run some discussion forum software that was only available for Windows. The IP addresses of all of its servers are publicly exposed. The only way to direct users to a different machine for a particular part of the service would be to link them to a different hostname, which could therefore be translated into a distinct IP address.
For example, CNN would link users to "http://forums.cnn.com". Users who enjoyed these forums would bookmark the URL, and other sites on the Internet would insert hyperlinks to this URL. After a year, suppose that the Windows servers were dying and the people who knew how to maintain them had moved on to other jobs. Meanwhile, the discussion forum software has become available for Unix as well. CNN would like to pull the discussion service back onto its main server farm, at a URL of http://www.cnn.com/discuss/. Why should users be aware of this reshuffling of hardware? **** insert drawing of server farm (cloud), load balancer, public Internet (cloud) **** The modern approach to load balancing is the load balancing router. This machine, typically built out of standard PC hardware running a free Unix operating system and a thin layer of custom software, is the only machine that is visible from the public Internet. All of the server hardware is behind the load balancer and has IP addresses that aren't routable from the rest of the Internet. If a user requests www.photo.net, for example, this is translated to 126.96.36.199, which is the IP address of photo.net's load balancer. The load balancer accepts the TCP connection on port 80 and waits for the Web client to provide a request line, e.g., "GET / HTTP/1.0". Only after that request has been received does the load balancer attempt to contact a Web server on the private network behind it. Notice first that this sort of router provides some inherent security. The Web servers and RDBMS server cannot be directly contacted by crackers on the public Internet. The only ways in are via a successful attack on the load balancer, an attack on the Web server program (Microsoft Internet Information Server suffered from many buffer overrun vulnerabilities), or an attack on publisher-authored page scripts. The router also provides some protection against denial-of-service attacks. 
If a Web server is configured to spawn a maximum of 100 simultaneous threads, a malicious user can effectively shut down the site simply by opening 100 TCP connections to the server and then never sending a request line. The load balancers are smart about reaping such idle connections and in any case have very long queues. The load balancer can execute arbitrarily complex algorithms in deciding how to route a user request. It can forward the request to a set of front-end servers in a round-robin fashion, taking a server out of the rotation if it fails to respond. The load balancer can periodically pull load and health information from the front-end servers and send each incoming request to the least busy server. The load balancer can inspect the URI requested and route to a particular server, for example, sending any request that starts with "/discuss/" to the Windows machine that is running the discussion forum software. The load balancer can keep a table of where previous requests were routed and try to route successive requests from a particular user to the same front-end machine (useful in cases where state is built up in a layer other than the RDBMS). Whatever algorithm the load balancer is using, a hardware failure in one of the front-end machines will generally result in the failure of only a handful of user requests, i.e., those in-process on the machine that actually fails. How are load balancers actually built? It seems that we need a computer program that waits for a Web request, takes some action, then returns a result to the user. Isn't this what Web server programs do? So why not add some code to a standard Web server program, run the combination on its own computer, and call that our load balancer? That's precisely the approach taken by the Zeus Load Balancer (http://www.zeus.com/products/zlb/) and mod_backhand (http://www.backhand.org/mod_backhand/), a load balancer module for the Apache Web server. 
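The routing policies just described—URI-based dispatch to a special-purpose machine, skipping dead backends, and least-busy selection—can be sketched as a single selection function. Hostnames, field names, and the health/load representation below are all invented for illustration:

```python
def choose_backend(uri, backends, routes):
    """Pick a backend for one request.

    backends: list of dicts like {"host": ..., "alive": bool, "load": float}
    routes:   list of (uri_prefix, host) pairs for special-case dispatch,
              e.g. sending "/discuss/" to the Windows forum machine.
    Returns the chosen backend dict, or None if nothing is alive.
    """
    # 1. URI-based dispatch takes precedence.
    for prefix, host in routes:
        if uri.startswith(prefix):
            for b in backends:
                if b["host"] == host and b["alive"]:
                    return b
    # 2. Otherwise pick the least-busy live backend.
    live = [b for b in backends if b["alive"]]
    if not live:
        return None
    return min(live, key=lambda b: b["load"])
```

A real load balancer would refresh the alive/load fields by periodically polling the front ends, as described above.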
An alternative is exemplified by F5 Networks, a company that sells out-of-the-box load balancers built on PC hardware, the NetBSD Unix operating system, and unspecified magic software. It seems as though the load-balancing router out front and load-balancing operating system on the RDBMS server in back have allowed us to achieve goals 1 and 3. And if the hardware failure occurs in a front-end single-CPU machine, we've achieved goal 2 as well. But what if the multi-CPU RDBMS server fails? Or what if the load balancer itself fails? Failover from a broken load balancer to a working one is essentially a network configuration challenge, beyond the scope of this textbook. Basically what is required are two identical load balancers and cooperation with the next routing link in the chain that connects your server farm to the public Internet. Those upstream routers must know how to route requests for the same IP address to one or the other load balancer depending upon which is up and running. What keeps this from becoming an endless spiral of load balancing is that the upstream routers aren't actually looking into the TCP packets to find the GET request. They're doing the much simpler job of IP routing. Ensuring failover from a broken RDBMS server is a more difficult challenge and one where a large variety of ideas has been tried and found wanting. The first idea is to make sure that the RDBMS server never fails. The machine will have three power supplies, only two of which are required. Each disk drive will be mirrored. If a CPU board fails, the operating system will gracefully fail back to running on the remaining CPUs. There will be several network cards. There will be two paths to each disk drive. Considering the number of moving parts inside, the big complex servers are remarkably reliable, but they aren't 100 percent reliable. 
Given that a single big server isn't reliable enough, we can buy a whole bunch of them and plug them all into the same disk subsystem, then run something like Oracle Parallel Server. Database clients connect to whichever physical server machine is available. If they can't get a response from a particular server, the client retries after a few seconds to another physical server. Thus an RDBMS server machine that fails causes the return of errors to any in-process user requests being handled by that machine and perhaps a few seconds of interrupted or slow service for users who've been directed to the clients of that down machine, but it causes no longer term site unavailability. As discussed in the "Persistence Layer" section of this chapter, this approach entails a lot of wasted CPU time and bandwidth as the physical machines keep each other apprised of database updates. A compromise approach introduced by Oracle in 2000 was to configure a two-node parallel server. The first machine would process online transactions. The second machine would be allowed to lag as much as, say, ten minutes behind the first in terms of updates. If you wanted a CPU-intensive report querying last month's user activity, you'd talk to the backup machine. If Machine #1 failed, however, Machine #2 would notice almost immediately and start rolling its own state forward from the transaction log on the hard disk. Once Machine #2 was up to date with the last committed transaction, it would begin offering service as the primary database server. Oracle proudly stated that, for customers willing to spend twice as much for RDBMS server hardware, the two-node failover configuration was "only a little bit slower" than a single machine. Be explicit about the number of computers employed, the number of CPUs within each computer, and the connections among the computers. Your answer to this exercise should be no longer than half a page of text. 
Be explicit about the number of computers employed, the number of CPUs within each computer, and the connections among the computers. If you're curious about the real numbers, remember that eBay is a public corporation and publishes annual reports, which are available at http://investor.ebay.com/. Your answer to this exercise should be no longer than one page. Note: http://philip.greenspun.com/ancient-history/webmail/ describes an Oracle-backed Web mail system built by Jin S. Choi. Perhaps we can take some ideas from the traditional face-to-face world. Let's look at some of the things that make for good offline communities and how we can translate them to the online world. How do we translate the features of identifiability, authentication, and accountability into the online world? In private communities, such as corporate knowledge management systems or university coordination services, it is easy. We don't let anyone use the system unless they are an employee or a registered student and, in the online environment, we identify users by their full names. Such heavyweight authentication is at odds with the practicalities of running a public online community. For example, would it be practical to schedule face-to-face meetings with each potential registrant of photo.net, where the new user would show an ID? On the other hand, as discussed in the "User Registration and Management" chapter, we can take a stab at authentication in a public online community by requiring email verification and by requiring alternative authentication for people with Hotmail-style email accounts. In both public and private communities, we can enhance accountability simply by making each user's name a hyperlink to the complete record of their contributions to the site. In the face-to-face world, a speaker gets a chance to gauge audience reaction as he or she is speaking. Suppose that you're a politician speaking to a women's organization, the WAGC ("Women Against Gun Control", www.wagc.com). 
Your schedule is so heavy that you can't recall what your aides told you about this organization, so you plan to trot out your standard speech about how you've always worked to ensure higher taxes, more government intervention in individuals' lives, and, above all, to make it more difficult for Americans to own guns. Long before you took credit for your contribution to the assault rifle ban, you'd probably have noticed that the audience did not seem very receptive to your brand of paternalism and modified your planned speech. Typical computer-mediated communication systems make it easy to broadcast your ideas to everyone else in the service, but without an opportunity to get useful feedback on how your message is being received. You can send the long email to the big mailing list. You'll get your first inkling as to whether people liked it or not after the first 500 have it in their inbox. You can post your reply to an emotionally charged issue in a discussion forum, but you won't get any help from other community members, at least not through the same software, before you finalize that reply. Perhaps you can craft your software so that a user can expose a response to a test audience of 1 percent of the ultimate audience, get a reaction back from those sample recipients, and refine the message before authorizing it for delivery to the whole group.

One experiment along these lines at photo.net was a link to an AOL Instant Messenger chat room, implemented as

<a href="aim:gochat?RoomName=photonet">photo.net chatroom</a>

This causes a properly configured browser to launch the AIM client (try it). Although the AIM-based chat offered superior interactivity, it was not as successful due to (1) some users not having the AIM software on their computers, (2) some users being behind firewalls that prevented them from using AIM, but mostly because (3) photo.net users knew each other by real names and could not recognize their friends by their AIM screen names. It seems that providing a breakout and reassemble chat room is useful, but that it needs to be tightly integrated with the rest of the online community and that, in particular, user identity must be preserved across all services within a community.

People like computers and the Internet because they are fast. If you want an answer to a question, you turn to the search engine that responds quickest and with the most relevant results. In the offline world, people generally desire speed. A Big Mac delivered in thirty seconds is better than a Big Mac delivered in ten minutes. However, when emotions and stakes are high, we as a society often choose delay. We could elect a president in two weeks, but instead we choose presidential campaigns that last nearly two years. We could have tried and sentenced Thomas Junta immediately after July 5, 2000, when he beat Michael Costin, father of another ten-year-old hockey player, to death in a Boston-area ice rink. After all, the crime was witnessed by dozens of people and there was little doubt as to Junta's guilt. But it was not until January 2002 that Junta was brought to trial, convicted, and sentenced to six to ten years in prison. Instant messaging, chat rooms, and Web-based discussion forums don't always lend themselves to thoughtful discourse, particularly when the topic is emotional.

"As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1." — (Mike) Godwin's Law

How difficult is it in the offline world to find people interested in the issues that are important to us? If you believe that charity begins at home and all politics is local, finding people who share your concerns is as simple as walking around your neighborhood. One way to translate that to the online world would be to build separate communities for each geographical region.
If you wanted to find out about the environment in your state, you'd go to massachusetts.environmentaldefense.org. But what if your interests were a bit broader? If you were interested in the environment throughout New England, should you have to visit five or six separate servers in order to find the hot topics? Or suppose that your interests were narrower. Should you have to wade through a lot of threads regarding the heavily populated eastern portion of Massachusetts if you live right up against the New York State border and are worried about a particular chemical plant? The geospatialized discussion forum, developed by Bill Pease and Jin S. Choi for the scorecard.org service, is an interesting solution to this problem. Try out the following pages:

Another way to apply geospatialization is to the users themselves. Consider, for example, an online learning community centered around the breeding of African Cichlids. Most of the articles and discussion would be of interest to all users worldwide. However it would be nice to help members who were geographically proximate find each other. Geographical clumps of members can share information about the best aquarium shops and can arrange to get together on weekends to swap young fish. To facilitate geospatialization of users, your software should solicit country of residence and postal code from each new user during registration. It is almost always possible to find a database of latitude and longitude centroids for each postal code in a country. In the United States, for example, you should look for the "Gazetteer files" on www.census.gov, in particular those for ZIP Code Tabulation Areas (ZCTAs).

Despite applying the preceding tricks, it is always possible for growth in a community to outstrip an old user's ability to cope with all the new users and their contributions.
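Given postal-code centroids such as those in the Census Gazetteer files, "geographically proximate" reduces to a great-circle distance computation. A sketch; the member records, centroid table, and coordinates are invented for illustration:

```python
from math import asin, cos, radians, sin, sqrt

def great_circle_miles(lat1, lon1, lat2, lon2):
    """Haversine distance between two lat/long points, in statute miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3959 * asin(sqrt(a))  # 3959 = Earth's mean radius in miles

def nearby_members(me, members, centroids, radius_miles=50):
    """Members whose postal-code centroid lies within radius_miles of mine.

    `centroids` maps postal code -> (latitude, longitude), standing in
    for a table built from the Gazetteer files.
    """
    my_lat, my_lon = centroids[me["zip"]]
    result = []
    for m in members:
        lat, lon = centroids[m["zip"]]
        if m is not me and great_circle_miles(my_lat, my_lon,
                                              lat, lon) <= radius_miles:
            result.append(m)
    return result
```

In a real service this query would run in the database, with a bounding-box filter before the trigonometry to keep it cheap.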
Every Internet collaboration system going back to the early 1970s has drawn complaints of the form "I used to like this [mailing list|newsgroup|MUD|Web community] when it was smaller, but now it is big and full of flaming losers; the interesting thoughtful material is buried under a heavy layer of dross." The earliest technological fix for this complaint was the bozo filter. If you didn't like what someone had to say, you added them to your bozo list and the software would hide their contributions from your view of the community.

In mid-2001 we added an "inverse bozo filter" facility to the photo.net community. If you find a work of great creativity in the photo sharing system or a thoughtful response in a discussion forum you can mark the author as "interesting". On subsequent logins you will find a "Your Friends" section in your personal workspace on the site. The people that you've marked as interesting are listed in order of their most recent contribution to the site. Six months after the feature was added 5,000 users had established 25,000 "I think that other user is interesting" relationships.

What is it about a newspaper that makes it particularly tough for that organization to act as the publisher of an online community?

Create a file at /doc/planning/YYYYMMDD-scaling on your server and start writing a scaling plan for your community. This plan should list those features that you expect to modify or add as the site grows. The features should be grouped by phases. Add a link to your new plan from /doc/ or a planning page.

Let's look at some concrete scenarios. Let's assume that we have a public community in which user-contributed content goes live immediately, without having to be approved by a moderator. The problem of spam is greatly reduced in any community where content must be pre-approved before appearing to other members, but such communities require a larger staff of moderators if discussion is to flow freely.
Scenario 1: Sarah Moneylover has registered as User #7812 and posted 50 article comments and discussion forum messages with links to her "natural Viagra" sales site. Sarah clicked around by hand and pasted in a text string from a word processor open on her desktop, investing about 20 minutes in her spamming activity. The appropriate tool for dealing with Sarah is a set of efficient administration pages. Here's how the clickstream would proceed: the administrator pulls up a page summarizing User #7812 and her 50 contributions, removes them all with a single confirmation click, and then decides whether the row in the users table associated with User #7812 ought to be deleted as well.

Scenario 2: Ira Angrywicz, User #3571, has developed a grudge against Herschel Mellowman, User #4189. In every discussion forum thread where Herschel has posted, Ira has posted a personal attack on Herschel right underneath. The procedure followed to deal with Sarah Moneylover is not appropriate here because Ira, prior to getting angry with Herschel, posted 600 useful discussion forum replies that we would be loath to delete. The right tool to deal with this problem is an administration page showing all content contributed by User #3571, sorted by date. Underneath each content item's headline are the first 200 words of the body so that the administrator can evaluate, without clicking down, whether or not the message is anti-Herschel spam. Adjacent to each content item is a checkbox, and at the bottom of all the content is a button marked "Disapprove all checked items." For every angry reply that Ira had to type, the administrator had to click the mouse only once on a checkbox, perhaps a 100:1 ratio between spammer effort and admin effort.

Scenario 3: A professional programmer hired to boost a company's search engine rank writes scripts to insert content all around the Internet with hyperlinks to his client's Web site. The programs are sophisticated enough to work through the new user registration pages in your community, registering 100 new accounts each with a unique name and email address.
The programmer has also set up robots to respond to email address verification messages sent by your software. Now you've got 100 new (fake) users, each of whom has posted two messages. If the programmer has been a bit sloppy, it is conceivable that all of the user registrations and content were posted from the same IP address, in which case you could defend against this kind of attack by adding an originating_ip_address column to your content management tables and building an admin page letting you view and potentially delete all content from a particular IP address.

Discovering this problem after the fact, you might deal with it by writing an admin page that would summarize the new user registrations and contributions with a checkbox bulk-nuke capability to remove those users and all of their content. After cleaning out the spam you'd probably add a "verify that you're a human" step in the user registration process in which, for example, a hard-to-read word was obscured inside a patterned bitmap image and the would-be registrant had to recognize the word amidst the noise and type it in. This would prevent a robot from establishing 100 fake accounts.

No matter how carefully and intelligently programmed a public online community is to begin with, it will eventually fall prey to a new clever form of spam. Planning otherwise is like being an American circa 1950 when antibiotics, vaccines, and DDT were eliminating one dreaded disease after another. The optimistic new suburbanites never imagined that viruses would turn out to be smarter than human beings. Budget at least a few programmer days every six months to write new admin pages or other protections against new ideas in the world of spam.
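The admin pages in Scenarios 2 and 3 reduce to a few queries over a content table. Here is a minimal sketch in Python with SQLite; the schema, sample rows, and a 200-character (rather than 200-word) excerpt are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE content (
  content_id             INTEGER PRIMARY KEY,
  user_id                INTEGER,
  originating_ip_address TEXT,    -- captured at posting time
  posted_at              TEXT,
  body                   TEXT,
  status                 TEXT DEFAULT 'approved'
)""")
conn.executemany(
    "INSERT INTO content (content_id, user_id, originating_ip_address, posted_at, body)"
    " VALUES (?, ?, ?, ?, ?)",
    [(1, 3571, "192.0.2.9",   "2001-01-05", "A genuinely useful reply about query tuning."),
     (2, 3571, "192.0.2.9",   "2001-02-11", "Herschel is a fool, and here is why..."),
     (3, 3571, "192.0.2.9",   "2001-02-12", "Yet another attack on Herschel..."),
     (4, 9001, "203.0.113.7", "2001-03-01", "Buy cheap links here!")])

# Scenario 2 admin page: everything by User #3571, newest first, excerpted
# so the administrator can judge each item without clicking through.
review = conn.execute(
    "SELECT content_id, posted_at, substr(body, 1, 200) FROM content"
    " WHERE user_id = 3571 ORDER BY posted_at DESC").fetchall()

# "Disapprove all checked items": one UPDATE per ticked checkbox.
checked = [2, 3]
conn.executemany("UPDATE content SET status = 'disapproved' WHERE content_id = ?",
                 [(cid,) for cid in checked])

# Scenario 3 admin page: who is posting from where, then bulk-nuke one address.
by_ip = conn.execute(
    "SELECT originating_ip_address, COUNT(*) FROM content"
    " GROUP BY originating_ip_address ORDER BY COUNT(*) DESC").fetchall()
conn.execute("DELETE FROM content WHERE originating_ip_address = ?", ("203.0.113.7",))
```

A production system would mark rather than physically delete rows in most cases, but the shape of the queries is the point here.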
The Risk Number is an objective, mathematical approach to removing subjectivity by quantifying the risk of investors and portfolios. The Risk Number is calculated based on downside risk: on a scale from 1 to 99, the greater the potential loss, the greater the Risk Number.

We believe investing is broken. Ambiguous terms such as “conservative” or “moderately aggressive” cause confusion in the investment arena. This uncertainty benefits no one in the advisor-client relationship. Generalizing client risk tolerance doesn’t work. Investors view risk through their own unique lens to gauge risk and return tradeoffs. The speed limit metaphor helps instill an understanding of that risk. Sometimes it’s prudent to slow down depending on weather conditions—the same is true of risk.

The Risk Number is a proprietary scaled index developed by Riskalyze to reflect a risk score for both an investor’s unique fingerprint and a particular portfolio of investments. Shaped like a speed limit sign, a higher Risk Number means a higher level of risk and potential return. The Risk Number is a single-dimension variable designed to approximate the relative risk between people or portfolios. Thus, a “45” portfolio generally has more risk than a “44,” but two “45” portfolios may be quite different from each other.

One of the most important drivers of the Risk Number is the measurement of downside risk: either the downside risk in the investor’s comfort zone (the range of risk to reward that they approve via risk questionnaire), or the downside risk in a portfolio as measured by the 95% probability range. Here are a few examples of the relationship between downside risk and Risk Number:

- Downside of -2%: Low 20s
- Downside of -5%: Low 30s
- Downside of -7%: Low 40s
- Downside of -12%: Low 60s
- Downside of -18%: Low 80s

A Six Month Probability Range is calculated in every portfolio. Each portfolio has a 95% mathematical probability of ending up within that range six months from today.
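As an illustration only: the actual Risk Number calculation is proprietary, and the anchor values below (21, 31, 41, 61, 81) are invented stand-ins for "low 20s," "low 30s," and so on. With those caveats, a piecewise-linear interpolation over the published example points could look like this:

```python
# Invented anchors approximating the published downside → Risk Number pairs.
# Not Riskalyze's formula; the real scale runs to 99 for deeper losses.
POINTS = [(-0.02, 21), (-0.05, 31), (-0.07, 41), (-0.12, 61), (-0.18, 81)]

def approx_risk_number(downside):
    """Map a six-month downside (e.g. -0.07 for -7%) to an approximate
    Risk Number by interpolating between the example anchor points."""
    pts = sorted(POINTS)               # most negative downside first
    if downside <= pts[0][0]:          # deeper loss than any anchor
        return pts[0][1]
    if downside >= pts[-1][0]:         # shallower loss than any anchor
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= downside <= x1:
            return round(y0 + (y1 - y0) * (downside - x0) / (x1 - x0))
```

For example, a -6% downside would land between the -7% and -5% anchors, about midway between the low 40s and low 30s.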
Riskalyze cannot predict where a portfolio might end up inside of the range, and there is a 5% probability it will end up outside of the range. The underlying market assumptions used to calculate the Six Month Probability Range are an important factor in this equation. Riskalyze’s technology is model-agnostic, but the company needed to develop a solid and stable data model that advisors could reliably use with clients. The results speak for themselves — for the $2 billion in portfolios built on this data model in 2012, less than 1.6% of portfolios broke below projected risk. This was despite the serious volatility in May 2012 and the high number of AAPL-heavy portfolios that dropped in value in the fall of 2012.

All investments are modeled using the actual past performance of their returns, standard deviation, and correlations. Riskalyze uses volatility and correlation statistics to calculate the width of the Six Month Probability Range and the corresponding Risk Number. The standard deviation and correlation matrix give Riskalyze the statistics needed to calculate the distance between the downside and upside returns in the six-month range.

Specific to standard deviation, when securities newer than 1/1/2008 are presented, Riskalyze assists advisors in making apples-to-apples comparisons between younger and older securities by use of extrapolation. Extrapolation thus mitigates the risk of a younger fund appearing safer than it really is when stacked next to a seasoned, veteran investment.
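Riskalyze's data model is proprietary, but the general idea of a 95% six-month range can be sketched generically. The example below assumes normally distributed returns and invented inputs; it is not Riskalyze's method:

```python
from math import sqrt

def six_month_range(annual_return, annual_vol):
    """Generic 95% six-month range for a portfolio, assuming normally
    distributed returns. 1.96 is the two-sided 95% normal quantile;
    volatility scales with the square root of time."""
    mu = annual_return * 0.5           # expected return over six months
    sigma = annual_vol * sqrt(0.5)     # six-month volatility
    return (mu - 1.96 * sigma, mu + 1.96 * sigma)

# Invented inputs: 8% expected annual return, 15% annual volatility.
lo, hi = six_month_range(0.08, 0.15)
```

With these inputs the sketch yields roughly a -17% to +25% six-month range; the downside end of such a range is what the Risk Number examples above are keyed to.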
BANGALORE: A system of digitising spatial data, where numerical information can be layered on satellite imagery, is gaining popularity with government agencies and private companies for various studies. The technology, called Geographical Information System (GIS), is revolutionizing the way we study and analyse areas of habitation, experts say.

Last year, the Karnataka government tied up with SECON, a Bangalore-based engineering company, to develop a digitized, real-time property database of all the residents in Mysore city. The map deploying GIS techniques will provide information not only on the number of households and their occupation levels, but whether the families have paid their water bills or not. "The company will identify the number of houses, income distribution of people, collect information on the nature of habitation and vegetation around the area and layer it on a GIS map," says Dhyan Appachu, Director of International Operations at SECON.

Where traditionally numerical data and maps were used as two different components, GIS puts all information on one surface with the help of satellite images and detailed geographical information. Interested parties can add, delete and compare data. "GIS is primarily about layering different kinds of data on a single map image," says MC Kiran, co-ordinator at the Eco-Informatics lab at Ashoka Trust for Research in Ecology and the Environment (ATREE) in Bangalore.

For instance, companies collect information on income distribution, age groups and occupation patterns of people in a particular locality when they set up factories or large stores. The information collected from government records and survey results is then integrated with satellite images that create a visual database. This makes it easy to figure out the consumer profile around a store and what products would sell.

"Many companies from the wind energy sector have started using GIS to identify the right areas.
In fact, telecom companies like Reliance Communications use this technology to map customers," he said.

While most of the GIS-related work in India happens within the academic community, in recent years an increasing number of government agencies and private firms have identified it as a key tool. For instance, India Biodiversity portal provides map-based information on all aspects of biodiversity in India and acts as a discussion forum. Recently, the rural development ministry in Kerala used GIS to layer information on public roads and soil patterns when deciding how much funding to allocate for the MNREGS. SECON's Appachu says insurance companies use GIS to locate areas of high car theft. At research institutions like the National Centre for Biological Research, GIS is used as a tool for studies in bio-informatics and wildlife sciences, which includes tracking tiger movements with the help of satellite imagery.

Two years ago, when American geographer Paul Robbins and his team from Arizona University visited the School of Desert Sciences in Jodhpur, they wanted to study the land cover change, human uses, and governance issues at the Kumbhalgarh Wildlife Sanctuary in South Pali district. But within days, Robbins and his team discovered something that changed the course of their research: contrary to the general view, many areas showed significant regrowth although human use of the forest had destroyed some habitat. "So what had caused the forest recovery? And what role did human settlements play in this?" Robbins recalled them discussing. The team realised that they had to do something more than merely collecting information on forest land. In a matter of days, they layered all possible information they had collected on a GIS-enabled electronic satellite image.
"The surprise here was that, with the help of GIS, we found out areas with loss of forest cover were indeed near human settlements, but so were areas of forest recovery - meaning, local settlements were not uniformly destructive of forests and people were fully capable of allowing areas to regrow."

With so much information about people now explicitly spatialised, does GIS trigger privacy concerns? "There is certainly a concern. It is possible for observers to know where you are at all times, where you are going, and what you are doing," says Robbins. But it isn't just the government: private companies like insurance firms or other interested parties can use GIS to integrate spatial data about people's health status or behaviour in ways that are dangerous and undemocratic.

While the biggest challenge lies in collecting accurate information, experts say entering this information into GIS software and then presenting it in real time is the biggest task. "Earlier GIS was viewed as an expensive technology, but things are slowly changing," says ATREE's Kiran. "In sum, GIS is much more than making maps - it is a tool for conservation science, economic development, and community planning."
Sleep apnea can lead to a number of complications, ranging from daytime sleepiness to possible increased risk of death. Sleep apnea has a strong association with several diseases, particularly those related to the heart and circulation.

Daytime sleepiness is the most noticeable, and one of the most serious, complications of sleep apnea. It interferes with mental alertness and quality of life. Daytime sleepiness can also increase the risk for accident-related injuries. Several studies have suggested that people with sleep apnea have two to three times as many car accidents, and five to seven times the risk for multiple accidents. Undertreated sleep apnea is a major risk factor for injury at factory and construction work sites.

Effects of Sleep Apnea on Heart and Circulation

A number of cardiovascular diseases -- including high blood pressure, heart failure, stroke, and heart arrhythmias -- have an association with obstructive sleep apnea. This link may be because both cardiovascular illnesses and sleep apnea are associated with obesity and its consequences. However, large studies have increasingly suggested that OSA itself may lead to or worsen cardiovascular disease. At this time, however, evidence of a clear causal relationship between obstructive sleep apnea and cardiovascular events is lacking. Likewise, whether treating obstructive sleep apnea improves cardiovascular outcomes has not been demonstrated.

Review Date: 06/11/2010

Reviewed By: Harvey Simon, MD, Editor-in-Chief, Associate Professor of Medicine, Harvard Medical School; Physician, Massachusetts General Hospital. Also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M., Inc.
This guidebook was compiled with the express purpose of describing the general geology of Moloka'i and those locations with significance to the U.S. Geological Survey's study of Moloka'i's coral reef, a part of the U.S. Department of the Interior's 'Protecting the Nation's Reefs' program. The first portion of the guidebook describes the island and gives the historical background. Fieldtrip stop locations are listed in a logical driving order, essentially from west to east. This order may be changed, or stops deleted, depending on time and scheduling of an individual fieldtrip.

Additional publication details: USGS Numbered Series. Moloka'i Fieldtrip Guidebook: Selected Aspects of the Geology, Geography, and Coral Reefs of Moloka'i
“I could not speak. I became unconscious. I could not open my mouth because then I smelled something terrible … I heard my daughter snoring in a terrible way, very abnormal…. When crossing to my daughter’s bed … I collapsed and fell … I wanted to speak, my breath would not come out…. My daughter was already dead.”

These are the words of Joseph Nkwain, who on August 21, 1986, survived one of the strangest natural disasters in history. Known locally as “the Bad Lake,” Lake Nyos, located in the Northwest Region of Cameroon, Africa, carried a folklore of danger, and tales were spoken of an evil spirit which emerged from the lake to kill all those who lived near it. This legend contained the memory of a very real threat.

Lake Nyos was formed in a volcanic crater created as recently as 400 years ago. Crater lakes commonly have high levels of CO2, as they are formed by the volcanic activity happening miles beneath them. Under normal circumstances this gas is released over time as the lake water turns over. But Lake Nyos was different: an unusually still lake, with little in the way of environmental agitation. Rather than releasing the gas, the lake was acting as a high-pressure storage unit. Its deep waters were becoming ever more loaded with gas until more than five gallons of CO2 were dissolved in every gallon of water. Pressurized to the physical limit, Lake Nyos was a time bomb.

On August 21, 1986, something in the lake went off. It is unknown what the trigger was – landslide, small volcanic eruption, or even something as small as cold rain falling on an edge of the lake. Whatever the cause, the result was catastrophic. In what is known as a Limnic Eruption, the lake literally exploded, sending a fountain of water over 300 feet into the air and creating a small tsunami. But far more deadly than the water was the gas.
History of the Shiliskin Empire

The shiliskin first began to record their history around the year 2000, although at the time they were spread throughout Darkhollow in a network of warring nation-states. One prophet, a withered shiliskin named Jarzarrad, appeared early in shiliskin recorded history and was thought to have been granted immortality by the Korlach, a mighty leviathan beneath Darkhollow’s great lake, so he might serve as the creature’s speaker.

Near 4000, Jarzarrad, in his thousandth year of life, came to serve as the personal advisor for a war chief known as Vogan Sillgar. Jarzarrad prophesied that Vogan’s primary general, Jayan, would betray the war chief by spawning a child, a young warrior who would eventually kill Vogan and take his place. Although Jayan vowed to never betray Vogan in such a way, the war chief remained impassive. In an effort to prevent Jarzarrad’s prophecy from coming to pass, Vogan condemned Jayan to death by sacrificing him to the Korlach.

Unfortunately for Vogan, Jayan was swallowed whole by the Korlach only to be belched out on a deserted beach to the west. While in the Korlach’s hollow stomach, Jayan spawned an offspring, a young shiliskin named Illsalin. Knowing they could never return to their old nation-state, Jayan fled with Illsalin to a neighboring shiliskin kingdom. There they were taken in as slaves and sold to a gladiator broker.

Illsalin grew up in the arenas, miraculously surviving battle after battle until he became a young adult and managed to organize a revolt and attempt a daring escape. Through his strategies and success, slaves rallied around Illsalin and he soon became their leader. Indeed, the mere presence of his army would often cripple any opposing force that stood against him, as most slave conscripts quickly fled to his side of the battle line. After years of struggle, Illsalin did fulfill Jarzarrad’s prophecy and defeated Vogan and his army.
Illsalin then succeeded in uniting the shiliskin nation-states, proclaiming the outpost south of the Corathus Creep to be their new home. The outpost grew into a capital city and soon the city itself became synonymous with its ruler and was simply named “Illsalin.”

The Reign of Illsalin, the Gladiator King

Illsalin prolonged his life with dark magic and an unnatural thirst for conquest, enabling him to lead the newly formed Shiliskin Empire for the first few hundred years of its existence. During this time, the shiliskin displaced many of the other races. The werewolf clans were driven back into their old ancestral territories west of Lake Korlach. Jarzarrad, the prophet who foretold Illsalin’s rise to power, was exiled for his loyalty to Vogan. Normally Jarzarrad would have been executed outright, but he was spared due to the truth of his prophecies and the fact that he had ironically made Illsalin’s birth possible. After his exile, Jarzarrad traveled to the east of Lake Korlach and has remained there in relative seclusion ever since.

As Illsalin grew and prospered, the shiliskin deathshed priests learned to use the local underwater life, called nargilor coral, to fuel their incantations and augment their rituals. With this newfound power, they were able to venture into the Korlach’s lair and lull the beast into submission. The Korlach, previously thought to be an uncontrollable force of nature, became the personal guardian of the Shiliskin Empire. Even as the Shiliskin Empire grew more technologically advanced, the shamanistic deathshed priests retained their place in society as the keepers of the Korlach.

With the Korlach now under control, the shiliskin were free to colonize the lake’s edge without fear of retribution. Lake districts such as Malgrinnor and Xill appeared and prospered during this time, fueling the spread of the Shiliskin Empire. Illsalin died in 4812 and three emperors followed before Draygun ascended to rule Illsalin.
The Fall of Xill

Many years into Emperor Draygun’s rule, the shiliskin began to grow suspicious of the lights appearing in the great spire above the lake. They could faintly see a building carved into the stone at the cavern’s height, and it appeared to be near completion. Draygun organized a battalion to crush whoever had arrived to take residence in Darkhollow. The battalion never returned.

More angered than afraid, Draygun amassed an army to rush the unknown interlopers once and for all. While the army gathered outside the gates of Illsalin, a horde of drachnids burrowed into the nearby and undefended lake city of Xill. A bloodbath ensued. Every shiliskin in Xill was slain, drained, or dragged back to the drachnid hive to be cocooned for “later.”

The War of Four Crests

The Fall of Xill sparked the War of Four Crests, so named because it eventually involved four armies. With the shiliskin armies assembled and fully aware of the drachnid menace, the shiliskin generals took a much more cautious approach toward the new forces that threatened their home. For the next hundred years a long series of skirmishes unfolded between the Agents of Dreadspire and the Shiliskin Empire. The werewolves, always eager for war, joined the struggle, with the Shadowmane Clan aligning with drachnids under the command of Master Vule the Silent Tear and the Ragepaw Clan moving behind the shiliskin ranks. Even after years of struggle, the conflict yielded no decisive victor.

About twenty-five years into the War of the Four Crests, a charismatic advisor rose to power in Illsalin. This advisor, a crippled sage named Bodrak, spread the belief that the key to defeating the drachnid hordes was to master their own necromantic magic and use it against them. Draygun, the current shiliskin emperor, followed this advice and began a fervent study of necromancy. Shortly after Draygun founded a school dedicated to drachnid necromancy, Bodrak disappeared from Illsalin.
Although Bodrak was never seen again, his skin was found draped in a crumpled pile on the shores of Lake Korlach. Around this time, the Korlach leviathan turned on Illsalin, smashing through the city’s walls and carving a wake of destruction through the city itself. Although it remains unclear why exactly the Korlach leviathan turned on its former masters, many believe that it became angered by the shiliskin priests’ slow gravitation towards the drachnid school of necromancy. Others believe that the creature in the spire may have promised the beast freedom if it turned on its shiliskin captors. And still others believe that the Korlach is simply a force of nature that was never meant to be controlled.

The Fall of Illsalin

As the drachnids spilled over the walls of Illsalin, Draygun turned to the city’s last resort, a powerful artifact known as Shadowspine. Shadowspine was an ancient spell book recovered from a raid on the drachnid hive. The book contained powerful spells, and Draygun believed it held the key to turning back the drachnid invasion. Unbeknownst to Draygun, however, the book was a twisted entity capable of pulling those who opened it into its pages. When Draygun opened the book, its power spread throughout the city and cursed Illsalin’s defenders and the drachnids to undeath.

Draygun was strong enough to achieve rudimentary control of the book, and he used it to raise himself as a lich and command the other undead throughout the city. Despite this control, the book is now slowly bending Draygun to its will. With each spell that Draygun casts from Shadowspine, he slides closer and closer to insanity and servitude. For now, however, Draygun retains his free will and continues to defend Illsalin against invaders. He lords over the undead city with Shadowspine close by his side.

The surviving shiliskin forces fell back to Malgrinnor, the empire’s last standing fortress in the east of Lake Korlach.
Although the shiliskin are far from extinct, their armies are scattered and demoralized to the point that they no longer pose an obstacle to the evil master in the great Dreadspire Keep above the lake.

Werewolves and Norrath

The werewolf has existed in Norrath as long as most other races have, but was found only in Darkhollow for some time. These first feral werewolves, called wurines, credit the Great Wuria with their creation — the mother of all werewolves. She is considered a spirit of the dark wilds and less of a god. She is the provider of the beasts they hunt and feed on and the source of their strengths. The werewolves learned to grow and survive in the dark and dangerous underground world around them. They have the gift of intelligence, are motivated and social, but not all equal. They are feral creatures with finely honed instincts and survival skills.

For several hundred years, no one on the surface of Norrath had ever seen a werewolf. It wasn’t until an expedition of Qeynosian miners breached the barrier of Darkhollow that the first werewolf was seen and the first human bitten. The werewolves attacked the foreign expedition party and killed all but three who suffered near-lethal bites — Patrim Gallowtrow, Brendin Fardon, and Wendal Meen were their names. They managed to survive the bites and flee to the surface to later become the kinsfolk of the werewolves of Darkhollow, the half-human, half-werewolf breed that transformed under the light of the moon. It was only a matter of days before they all went through the first transformation.

This new breed of werewolf became known as the Clan of the White Fangs. They considered themselves closest in blood to a true werewolf through the father’s bloodlines. It was Patrim Gallowtrow who bit Sentry Alchin, the friend of Sentry Joanna in Rathe Mountains, who became one of the White Fangs. The White Fangs were arrogant, aristocratic, and were somewhat consumed with their power and gifts.
Later, the generations of White Fangs in Norrath became muddied with mixed blood of the various races of Norrath as they mated in human form. This third breed or tribe of werewolf became known as the Dusk Leapers — the mutts of the werewolves. They lived on the fringes of civilization and often plotted against it to rule over it and their cousins, the White Fangs.

Meanwhile, among the wurines down below, social conflicts that have lasted thousands of years continue unresolved. As with any intelligent creature with a measure of individuality and the capacity for ideas, the werewolves do not always agree or follow the same path. There are two tribes in Darkhollow:

Created by Matriarch Shyra, the Shadowmanes prefer a more matriarchal social structure. They believe the females have the closest spiritual ties to the Great Wuria and seek the matriarch’s guidance and approval. The Shadowmanes can be characterized as a more spiritual and intellectual clan. While they have a matriarch, there are internal politics that dictate what each member of the clan must accomplish in their commune. They struggle against their innate primal instincts, as they have some desire for peace and tranquility even though they live in such a volatile region. They want to find a balance that allows them some sophistication and spirituality. They abhor the purely uncivilized animalistic ways of the Ragepaws, finding them base and disgusting.

The Ragepaws believe in the predatory nature of being a wurine and organize themselves by the strength of the alpha male of the group. They shun and hold contempt for any political or high-level social musings that their counterparts have. Their lives are fairly simple — to survive and not allow the Shadowmanes to overcome their ideals or get in the way of their chief philosophy: kill or be killed. They have been led for hundreds of years by the brute strength and will of Bloodeye.
The Norrathian Perspective on Werewolves

When the first werewolves that were created by wurine-bitten humans began to walk the lands of Norrath, new conflicts arose. By day, the werewolf could comfortably walk among humans in human form. At night, they could transform into a fearsome creature, half-human, half-wolf, with an unnatural destructive rage and strength. They prowled the wilds and hunted, killing anything in their path. These werewolves roamed the wilds of the Karanas, the Faydarks, or wherever they felt free.

A few Norrathians, touched by the terror of werewolves, chose to band together to thwart the danger and protect their families. They call themselves the Fangbreakers, a relatively quiet society that spans several generations and reaches far across the lands of Norrath. The Fangbreakers recognize each other quietly and trust very few. For centuries they have protected their organization from being infiltrated by werewolves posing as concerned citizens, and they prefer to keep it that way.

Nul Aleswiller has been the leader of the Fangbreakers for 500 years. They were originally employed by the people of the Plains of the Karanas to protect the farms and lands from the threat of werewolf attacks. Bunu Stoutheart, Fixxin Followig, and Cory Bumbleye can be considered the co-leaders of the Fangbreakers, having also lived in the Karanas for hundreds of years, keeping the werewolves at bay.

Wurines’ Conflicts and Civil War

Until Matriarch Shyra claimed to have spoken to the Great Wuria, their great mother, through divination, each pack of wurines lived separately. But this matriarch preached a new way of life — one of spiritual fulfillment, order, and worship of their mother. Many joined together to follow Matriarch Shyra, becoming members of the Shadowmane clan. The feral wurines then banded together and formed the Ragepaw Clan. Soon after, Matriarch Shyra created the Lodge of the Fangs, a rudimentary court for all wurines.
The departure from the old ways incensed the feral wurines, the Ragepaws, and a civil war ensued — the War of the West Tunnels — over philosophy and territory that lasted 20 years, until the wurines accepted that they would never agree, and instead would learn to coexist to survive Darkhollow. Ragepaw clan elders were added to the Lodge of the Fangs, which made larger rulings and decisions for both clans when necessary.

About 200 years ago, a dark master who threatened the lives of all the wurines offered a grim proposal to Matriarch Shyra: align with him in his great castle above the lake, or die. In exchange for their loyalty and service, the master would spare the Ragepaws their annihilation as well. There would be benefits to their service — material wealth, comforts, and protection from the shiliskin, sporali, and the other elements of Darkhollow. Shyra took the proposal to the Lodge of the Fangs and they discussed the matter. It wasn’t long before the notion of safe haven and access to surface-world comforts won them over. They agreed.

The Lodge of the Fangs summoned the alphas of the Ragepaws, and they were told the news of the decision to preserve the wurine race and serve the lord of the keep. The Ragepaws were chagrined and refused to exist in servitude in any way to the master, whom they believed intended to deceive and exploit the pride and strength of the wurines. This master who never showed his face represented everything that was dark in their world, and they would not succumb. But neither would they fight against it, out of fear. They remain in the Snarlstone Dens in the West Lake Korlach region.

Today, the Shadowmanes still serve the master, building and guarding his fortress above the waters of Lake Korlach. They accept this duty to preserve and advance their place in the world of Darkhollow. The Ragepaws remain in the darkness and continue to do what they have done for much of their lives — survive and preserve the true feral ways of the wurine.
Genesis of the Sporali

Around 4900, a sentient fungus spore settled into the groundwater through a pool in the Clan Runnyeye goblin lair before it eventually found its way through some cracks into Darkhollow. The spore was greatly affected by the tainted waters of Darkhollow and evolved in strange and fantastic ways to become a sporali. It grew and spread over 200 years until the first sporali colonies were formed. The colonies began to harvest corathus, a strange resource secreted by the corathus worms, which they learned had caused the sporali to grow and evolve at an accelerated rate.

The shiliskin also harvested corathus and viewed the sporali as a threat to their supplies. Before the shiliskin could drive the sporali into extinction, the sporali shamans pooled their corathus stocks and fed it to a single spore king. Thus, Antraygus was born. The corathus made Antraygus near invulnerable, and any sporelings he created were also unnaturally resilient. Antraygus and his offspring led a fierce resistance against the shiliskin raiders, who eventually forwent corathus altogether and turned their attention to gathering nargilor, the coral with magical properties that grows below Illsalin. During the Shiliskin-Sporali wars, the sporali bred many plants to use against the shiliskin, including mindspore and retch weed. These plants still exist today.

Ak’Anon Expedition 328

About a century and a half ago, King Ak’Anon sponsored a drill expedition to seek out mithril deep below the Steamfont mines. Unfortunately, Mithril Expedition 328 was fraught with disaster. They were the first to use the great new invention, the Burrownizer, a powerful drill that could dig deep into the earth, carrying gnomes and clockworks within it. During the expedition, far below the surface, the Burrownizer’s rubble-sweeping mechanism jammed, leaving the craft unable to maintain a usable tunnel in its wake.
The gnomish engineers soon realized that the only way to go was down, so that’s where the drill expedition went. The gnomes traveled for two and a half years at a fifteen-degree downward angle before eventually crashing into Corathus Creep in Darkhollow. The gnomes calculated that they were somewhere under Antonica, likely beneath the Nektulos forest. Their drill was hopelessly smashed and they had no way to contact the surface.

Soon the gnomish scientists began their lives as castaways. One by one, they fell victim to the various hazards of Darkhollow. Those that survived were forced to augment their failing bodies with salvaged clockwork parts until the gnomes were almost completely mechanical. Through the magic of tinkering, most of them managed to retain some of their personality and memory in Fibblebrap gems, named after the gnome that invented them. These gems, placed into the heart of the clockwork, served to keep the gnomes’ souls alive as they waited for word, existing as what they call gnomeworks. But as can happen when toying with tinkering, it wasn’t perfect.

The miners of the Expedition began to show strange behaviors after some time. These miners, called the Creep Reapers, have all but forgotten their gnomish heritage, and have instead focused on mining corathus. Perhaps it was the influence of the corathus mineral, or perhaps it was their willingness to surrender their biological parts so quickly, but the Creep Reapers have adopted a somewhat relentless and remorseless approach to mining. They attack anything that enters their mines and often work themselves to malfunction. The Creep Reapers detest the other survivors of Expedition 328 who have chosen not to help toil in the mines.
We used 11 different satellite missions to track Antarctica’s contribution to rising sea levels.

The crew of scientists prepare to put the drill stem into the Greenland ice sheet to probe water flows about half a mile below. A glaciologist develops a lightweight method for probing the depths of Greenland's ice sheet to answer a crucial question: How fast is it melting?

Knowing where the ice comes from can help work out what it will do to sea levels. Polar ice isn't all the same - it can be divided roughly into "land ice" and "sea ice". What matters most for sea levels is how much ice slides off the land and melts in the sea.

Adélie penguins struggle to reach their nesting sites if there’s too much ice in the way. Despite their image as cold-loving creatures, Adélie penguins could be winners from climate change.

Antarctica is vital to the planet’s climate system. Why should we care if the polar ice sheets melt hundreds of years in the future? Because they are vital for maintaining our current climate.

How much staying power? A calving front of the Antarctic ice sheet. If we burned all fossil fuels, the loss of ice in Antarctica would raise sea levels 160 to 200 feet, but even our current trajectory could lead to dramatic sea level rise.

Study raises new questions over the rate of ice melting, and thus sea level rise. NASA's former climate chief, James Hansen, is lead author on a paper that predicts rapidly rising seas this century, but not all climate scientists believe the study's models are convincing.

Antarctica’s Brunt Ice Shelf photographed in October 2011 from NASA’s DC-8 research aircraft during an Operation IceBridge flight. Researchers find that ice around Antarctica shrank quickly last decade, raising concerns over this buttress against melting land-based ice and future sea-level rise.

The Thwaites Glacier is among several in West Antarctica that are already retreating.
Antarctic climate science is having a moment – a worrying moment. Three new studies have all concluded that the West Antarctic Ice Sheet has begun to collapse. This collapse will impact humanity for generations…

Climate change is causing the North Pole to shift, owing to subtle changes in Earth’s rotation that result from the melting…

Antarctic ice shelves are losing ice by melting from their undersides, as well as by calving icebergs. A study compiled by…

3D visualisation of the mega-canyon (Jonathan Bamber, University of Bristol). A previously unknown canyon has been discovered in Greenland, hidden beneath the ice. It is at least 750 kilometres long…

Findings from a large-scale ice drilling study on the Greenland ice sheet may revise the models used to predict how ice sheets…

The cause of dramatic losses in glaciers and ice sheets in the West Antarctic is still unclear. A recent research team has…
Watching the Earth Breathe -- Measuring Atmospheric Carbon Dioxide with NASA's Orbiting Carbon Observatory-2 (OCO-2)

Fossil fuel combustion, deforestation, and other human activities are now adding almost 40 billion tons of carbon dioxide (CO2) to the atmosphere each year, enough to increase the atmospheric concentration of this gas by one percent per year. Interestingly, less than half of this CO2 stays airborne. The rest is apparently being absorbed by natural processes at the surface, whose identity and location are poorly understood. Ground-based CO2 measurements accurately record the global atmospheric CO2 budget and its trends but do not have the resolution or coverage needed to identify the "sources" emitting CO2 into the atmosphere or the natural "sinks" absorbing this gas. One way to improve the resolution and coverage of these measurements is to collect precise observations of CO2 from an orbiting satellite. The Orbiting Carbon Observatory-2 (OCO-2) is NASA's first satellite designed to address this need. OCO-2 was successfully launched on July 2, 2014. By early September of 2014, it was recording almost a million measurements over Earth's sunlit hemisphere each day. Over the next two years, these measurements are expected to revolutionize our understanding of the processes controlling the atmospheric CO2 buildup. This talk will describe the OCO-2 mission, summarize its measurement approach, and present results from its first 15 months in operation.

Dr. David Crisp is an atmospheric physicist at the Jet Propulsion Laboratory, California Institute of Technology. His research focuses primarily on the development of instruments and models for analyzing light reflected, emitted, and scattered by atmospheres and surfaces of Earth and other planets. He served on science teams for the Soviet/French/US Venus VEGA Balloon mission, NASA's Hubble Space Telescope WFPC-2, and Mars Pathfinder Lander, and ESA's Venus Express missions.
He also worked as a planetary astronomer and as the Chief Scientist of the New Millennium Program, NASA's space flight technology program, from 1997 to 2001. More recently, Dr. Crisp was the Principal Investigator of the Orbiting Carbon Observatory (OCO), NASA's first mission designed specifically to measure atmospheric carbon dioxide. He is currently the Orbiting Carbon Observatory-2 (OCO-2) Science Team Leader.

This month's "What's Up?" segment will be presented by Steve Condrey.
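The talk abstract's figure of almost 40 billion tons of CO2 per year driving roughly a one percent annual rise can be sanity-checked with a back-of-envelope calculation. This is a rough sketch: the atmospheric mass, molar masses, and the ~400 ppm baseline below are standard reference values, not taken from the talk itself.

```python
# Rough check: does ~40 Gt of CO2 per year correspond to about a
# 1% annual increase in atmospheric CO2 concentration?
M_AIR = 0.02896          # kg/mol, mean molar mass of dry air
M_CO2 = 0.04401          # kg/mol, molar mass of CO2
ATMOSPHERE_KG = 5.15e18  # total mass of Earth's atmosphere, kg
BASELINE_PPM = 400.0     # approximate CO2 mole fraction around 2014

emitted_mol = 40e12 / M_CO2             # 40 billion metric tons of CO2
air_mol = ATMOSPHERE_KG / M_AIR
added_ppm = emitted_mol / air_mol * 1e6  # rise if none were absorbed
pct_increase = added_ppm / BASELINE_PPM * 100
```

If all 40 Gt stayed airborne this works out to roughly 5 ppm, a bit over one percent of the ~400 ppm baseline; since, as the abstract notes, less than half of the emitted CO2 remains airborne, the observed rise is closer to 2-3 ppm per year.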
Children - Example and Precept

(Originally Published 1916)

THE separation of precept from the actual practices of the people who surround the child is seen in many ways. When Agnes loses her patience with her puppy or with her sewing, we rebuke her gently and tell her that it is not ladylike to scold or fret in such a manner. Agnes is duly impressed. But when we are trying to add up a column of figures on the grocer's bill and the child's senseless and interminable babble irritates us to—a certain point—we blurt out something that is neither polite in manner nor parliamentary in substance. And again Agnes is duly impressed. In time she will no doubt learn to control herself; but we must not wonder that our repeated admonitions fail to bring quicker results. The admonitions produce their effects; but our outbursts produce their counter effects.

At a gathering of mothers, the ever recurring problem of children's lies was under discussion. One of those present told of a troublesome case. The more experienced suggested various remedies, such as discovering the type of lie, to see whether it was over-activity of the imagination, or fear, or slovenly thinking, or whatever else it might be that led the child along the path of untruthfulness. "Oh, his mother has tried everything; she has punished him and promised him rewards, but he keeps on lying," said the woman who had introduced the horrible example. As the family of the child in question was not known to those present, the suggestions were soon exhausted; but one of the mothers made the casual observation, "Of course, a great deal depends upon the home environment of the child. If the people at home lie—even about little things—the threats and promises will not break the child. One bad example will offset ten good precepts." This was very illuminating. The first speaker said: "Now that I come to think of it, there is something in that.
I was taking David's younger brother with me to the museum last week, and just before we got to the car he said, 'You do not have to pay my fare because mother never does; I can say I am only five.' Now I do not suppose the child invented that himself."

This incident illustrates one class of cases in which we expect the child to do as we say, in spite of all that we do. The hundreds of "white lies" that grown-ups tell day after day, without even being aware that they are using inflated or figurative language, are accepted by the children as literally true, or as models of diplomacy. When we consider how difficult it is for the young child to attain to habits and ideals of truthfulness, we must see the importance of giving him all the help possible through sympathetic understanding and through the removal of all unnecessary temptations—especially the temptation to imitate his elders in untruthfulness.

We wish our children to be friendly in their manner to all with whom they come in contact. We never tire of preaching friendliness to them. Nay, we go even farther; we set them excellent examples by our conduct in the presence of guests, or when on a visit. But have the children ever heard us make derogatory remarks about these very people to whom we have taken so much pains to appear friendly? Have they heard us decry Mrs. Brown's extravagance or ridicule the Briggs' taste in house-furnishing? No adult is expected to be so saintly or so lacking in standards as to refrain from criticising others. But criticisms of persons should not be made in the presence of young children, and they should never be made in a flippant or sarcastic spirit. Children will hear a great deal of casual comment or table talk without giving any outward sign of having noticed. But when they throw back a phrase we had heedlessly dropped, we are greatly shocked at their saying such things!
Arthur's mother complained that it was impossible for her to keep a servant for any length of time, because Arthur was so ugly and impudent in his manner toward the help. The listening friend sympathized with her, for she knew how difficult it was to adjust the harmonies of a complex household without the added burden of unfriendly children. But when she visited Arthur's mother shortly afterward, she was entertained (in the presence of Arthur himself) with a long and vigorous tirade against servants in general and her own Mary in particular. She could not help but feel that here at least was one of the factors in the mother's problem. It is hardly to be expected that a boy of ten will conduct himself courteously or even humanely towards people of whom his parents constantly speak contemptuously.

In making the resolve to be suitable models for the conduct of our children, it is not necessary to go to the extreme of adopting only such speech and manner as is fit for children. Even young children can learn that there are some things which it is proper enough for their elders to do, but not permissible in themselves. Thus, parents may stay up into the mysterious hours of the night, but children sometimes "go to bed by day"; some food is suitable for grown-up folks, but taboo for children, and so on. Nevertheless, the development of many good habits and the establishment of high ideals will depend directly upon the examples furnished by the parents. In this fact lies the greatest educational advantage to adults in having children about, for if they realize this it will hold them up to their own best standards.
Name: _________________________ Period: ___________________

This quiz consists of 5 multiple choice and 5 short answer questions through Chapter 36, Fortuna North.

Multiple Choice Questions

1. What do Oskar and Herbert Truczinski do while Herbert is unemployed?
(a) Explored the city.
(b) Herbert taught Oskar to read.
(c) Committed two robberies.
(d) Played instruments.

2. What happens on the train that causes the passengers to lose their belongings?
(a) They are confiscated.
(b) The passengers throw them off the side.
(c) They are lost in a train wreck.
(d) They are robbed.

3. What do Maria and Kurt sell on the black market?
(b) Honey and flints.

4. What is Oskar's mother possessed with?
(d) An undefined demon.

5. What does Oskar develop because of the bouncing of the train?
(b) New drum beats.
(d) A hunchback.

Short Answer Questions

1. While on the road with the midget troupe, who does Oskar sleep with?

2. What type of children does Oskar say are "destructive out of mischief"?

3. When is Oskar's mother born?

4. Lobasck, the director of training, has what distinguishing characteristic?

5. In Chapter 34, how many people are on the freight car with Maria, Kurt, and Oskar?
Regulators Issue Guidelines to Reduce Driver Distraction from Electronic Devices

Posted on behalf of Simien & Simien on Dec 09, 2016 in Automotive

Distracted driving has become one of the biggest threats to driver safety on our nation's roadways. Each day, eight people die in crashes involving a distracted driver while approximately 1,161 people are injured. One of the most common reasons drivers get distracted is because of electronic devices like smartphones.

That is why the National Highway Traffic Safety Administration (NHTSA) recently proposed a set of voluntary guidelines to help reduce the potential for drivers to get distracted by aftermarket electronic devices. The guidelines encourage manufacturers of these devices to add features that preserve functionality while not requiring drivers to take their eyes off the road. Features could include the ability to link the device to vehicle infotainment systems or a simplified user interface, or "driver mode." Features like these allow drivers to continue using their devices in safer, more convenient ways. The NHTSA is committed to working with manufacturers to develop devices and features to keep drivers focused on driving, says NHTSA administrator Dr. Mark Rosekind.

Examples of Distracted Driving

Distracted driving occurs when your eyes, hands or mind are engaged in another activity besides driving. Talking on the phone, eating and texting are common examples of activities that take drivers' eyes off the road. Texting and driving is particularly dangerous because it involves your eyes, your hands and your mind. Also, if you read a text while traveling at 55 miles per hour, you will have traveled the length of a football field in the few seconds your eyes were not on the road.

If you have suffered injuries or lost a loved one in a crash involving a distracted driver, you may have legal options.
Contact our car crash lawyers to find out if you are entitled to compensation for medical bills, lost wages, pain and suffering, and other damages. Contact us today by calling (800) 374-8422.
The Complete Fairy Tales Translated from the French by Christopher Betts Oxford University Press: 256 pp., $29.95 There's no reliable way to score the game, but it's at least arguable that the European fairy tale has been the source for more popular entertainments than any other narrative tradition -- more than Greek tragedy, the Bible, the Arabian Nights, even the plays of Shakespeare. The enchanted world has inspired works as various as Rossini's Cinderella opera ("La Cenerentola") and Tchaikovsky's "Sleeping Beauty," Jerry Lewis' "Cinderfella" and Stephen Sondheim's "Into the Woods." The Disney animated features, which soften the terror and boost the romance, have had a profound influence in shaping (some have said warping) the minds of children worldwide. And where the storytellers go, critics are right behind them, Rumpelstiltskins spinning ingenious interpretations out of every theoretical straw that floats by: Freudian, Jungian, Marxist, Christian, feminist, postmodernist -- you name it. Most of the classic tales were familiar to European mothers and grandmothers centuries before the earliest written versions in Italian, by the fabulists Gianfrancesco Straparola (published 1550) and Giambattista Basile (1636). Yet the collection that would have the widest and most lasting impact was that by a French courtier named Charles Perrault, published in 1697, that established the brief, moralistic genre later exploited by the Brothers Grimm and Hans Christian Andersen. Perrault was an iconoclast, a rebel against the tyranny of classical education in the 17th century, who set out to prove that myths based on European folk tales could have as enduring and profound an appeal as the stories of the Greeks and Romans. A new translation of his little book, by Christopher Betts, proves him triumphantly right about that, if any proof were needed. Parents who read fairy tales to their children know how terrifying they are. 
Most fairy tales have a happy ending, but it usually comes only after one or more characters have died a horrible death or spent a long time in durance vile. People who haven't read them since they were children themselves will scarcely believe that such shocking, gruesome stories are permitted in the hands of the young: Fathers who want to marry their daughters, ogres that dine on little children and a serial killer who murders his brides and locks up their corpses in a secret chamber are standard fare. And that's exactly why the appeal of fairy tales has never diminished. Children are thrill addicts who relish imaginary gore -- and have no interest in theories about why they do so. Only a child can hear the story of "Little Red Riding-Hood" and see it as a straightforward, "what happens next" narrative. We didn't need Freud to tell us that there were powerful sexual currents in a story about a little girl who ends up in bed with a cross-dressing wolf, who amazes her with the prodigious size of various parts of his anatomy. The most influential of modern fairy-tale theorists was Bruno Bettelheim, who propounded the thesis in "The Uses of Enchantment" (1976) that fairy tales are fantastic psychodramas that enact the real fears of children and end with the reassurance that all will be well in the end, when they grow up. Most American children slay their demons and satisfy their appetite for righteous mayhem with cartoons and video games, but in Perrault's day, European children had direct experience with the horrors portrayed in fairy tales. In the reign of Louis XIV, France was ravaged by famines; thus the story "Hop o' My Thumb," in which the parents abandon their little sons to die in the forest because there isn't enough food for all, wasn't a bizarre, monstrous fantasy but a plausible reality. Unlike the other great narrative traditions, fairy tales generally have been treated as anthropological phenomena more than as literature. 
There's no such thing as a definitive text of the major stories; the books read by most children are modern retellings, frequently bowdlerized to protect the sensibilities of the parents. Betts' integral, authoritative translation of Perrault's "Histoires ou Contes," which renders the prose tales in lucid, polished style and (for the first time, Betts writes in his introduction) the poetry in galloping rhyming verse, captures the full measure of the tales' suspenseful power. No matter how many times we've heard the stories, we still long for Cinderella's wicked stepsisters' comeuppance and delight in Puss in Boots' ingenuity in tricking the ogre into transforming himself into a mouse so he can gobble him up. Yet much of the charm of Perrault's versions lies in their depiction of life during the reign of the Sun King, delightfully captured in 26 engravings by Gustave Doré and reprinted in this edition. The grandmother's rustic hovel in "Little Red Riding-Hood" could be a set for a pastoral ballet at Versailles; Perrault's ogres are landed gentry in grand châteaux, who serve baked children at their tea parties. This gloss of Gallic charm domesticates the savagery of the enchanted world in a way that Disney would later emulate with a sentimental, distinctly American optimism. James is the author of several books, including "The Music of the Spheres" and "The Snake Charmer: A Life and Death in Pursuit of Knowledge."
Frequency synchronization is very important for OFDM systems. Two frequency synchronization algorithms, one in the frequency domain and the other in the time domain, which are used in different digital television terrestrial broadcasting systems, are compared. Computer simulation of these two systems in an equivalent base-band multi-path channel is also given.

Published in: IEEE 2002 International Conference on Communications, Circuits and Systems and West Sino Expositions (Volume 1)

Date of Conference: 29 June-1 July 2002
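The abstract does not spell out the two algorithms themselves. As a generic illustration of what time-domain frequency synchronization in OFDM typically looks like, here is a cyclic-prefix correlation estimator sketch (a standard approach in the style of van de Beek et al., not necessarily the method compared in this paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, cp = 64, 16    # FFT size and cyclic-prefix length
true_eps = 0.12   # carrier frequency offset, in units of the subcarrier spacing

# Build one OFDM symbol: random QPSK subcarriers -> IFFT -> prepend cyclic prefix
syms = (rng.choice([1.0, -1.0], N) + 1j * rng.choice([1.0, -1.0], N)) / np.sqrt(2)
x = np.fft.ifft(syms)
tx = np.concatenate([x[-cp:], x])

# Channel model here is just a carrier frequency offset (no noise, no multipath)
n = np.arange(N + cp)
rx = tx * np.exp(2j * np.pi * true_eps * n / N)

# Time-domain estimate: the cyclic prefix repeats N samples later, so the
# phase of the correlation between the two copies reveals the offset
corr = np.sum(rx[:cp] * np.conj(rx[N:N + cp]))
est_eps = -np.angle(corr) / (2 * np.pi)
```

In this noiseless sketch the estimate recovers the offset essentially exactly; with noise and multipath the correlation is averaged over many symbols, and the phase-based estimate is unambiguous only for offsets within half a subcarrier spacing.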
Large Millimeter Telescope (LMT, Gran Telescopio Milimétrico, GTM) (large microwave telescope in Mexico)

The Large Millimeter Telescope (LMT, aka Gran Telescopio Milimétrico, GTM) is a 50-meter diameter telescope at Volcan Sierra Negra, Mexico, aimed at wavelengths on the order of 1 mm, i.e., short microwave and long infrared wavelengths. It is situated at 4,640 metres (15,200 feet) and detects signals from 0.85 to 4 mm (75-350 GHz). At its location, only part of the year provides ideal observing conditions. It does not approach the angular resolution of the Atacama Large Millimeter Array but features a much larger field of view. It saw first light in 2011 and went into operation in 2013 using the inner 32 m of the reflector, with the full 50 m becoming operational in 2017. Its angular resolution at 1.1 mm was 8.5 arcsec with the 32 m reflector and 5.5 arcsec with 50 m. The 50 m size gives it the ability to detect a star-formation rate of 10 solar masses per year in galaxies at high redshift.

Current instruments include:
- SEQUOIA - 32-pixel, 3 mm camera.
- RSR - Redshift Search Receiver - 4-pixel camera covering the 90 GHz atmospheric window to detect the redshift of carbon monoxide spectral features.
- AzTEC - 144-pixel 1.1 mm camera.

A future instrument is TolTEC, a camera imaging simultaneously in three bands, which promises rapid surveying.

|wavelength|frequency|photon energy|band edge|telescope|
|.85mm|353GHz|1.5meV|begin|Large Millimeter Telescope|
|4mm|75GHz|310ueV|end|Large Millimeter Telescope|

See also: IRAM 30m Telescope
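The quoted angular resolutions are consistent with the standard diffraction limit, theta ≈ 1.22 λ/D. A quick check (this formula is a textbook approximation, not something stated in the article):

```python
import math

def resolution_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Diffraction-limited angular resolution, theta ~ 1.22 * lambda / D."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return theta_rad * (180.0 / math.pi) * 3600.0  # radians -> arcseconds

# At 1.1 mm: roughly 8.7 arcsec with the 32 m surface, 5.5 arcsec with 50 m,
# matching the article's quoted 8.5 and 5.5 arcsec figures
r32 = resolution_arcsec(1.1e-3, 32.0)
r50 = resolution_arcsec(1.1e-3, 50.0)
```

The same formula explains why a 50 m single dish cannot approach ALMA, whose interferometric baselines span kilometers, while still offering a much wider instantaneous field of view.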
The death toll from the Ebola outbreak in West Africa has passed 1,000 and is still rising, according to the World Health Organization. Fear of the virus and concerns about its spread beyond Liberia, Guinea, Sierra Leone and Nigeria are also soaring. Hospitals in the United States, including Harvard affiliates in Boston, are reminding their staffs of standard infection-control procedures in case someone infected with Ebola comes through their emergency department doors. Sean Whelan, HMS professor of microbiology and immunobiology and an expert in virology who studies Ebola and other pathogens, talked to Harvard Medicine News about the small chance of infection in North America, the very real humanitarian crisis in West Africa and progress being made toward therapies against the deadly disease. Here are his answers to seven questions about Ebola. HMN: What is Ebola? SW: The Ebola virus was discovered in 1976. It is an RNA virus with what we call a negative-sense genome, and that virus, when it infects a cell, makes more virus particles. An infection of humans by this virus causes hemorrhagic fever and massive damage to the internal organs. Basically the body goes into shock. HMN: What can be done to prevent or treat it? SW: There is no current vaccine or antiviral drug that is approved to treat Ebola virus infection. Ebola certainly has been well studied by the research community, but developing a therapeutic is not something that is a priority for most pharmaceutical companies, for example. Until the current outbreak, the total number of deaths from Ebola virus that we knew of since 1976 was about 2,000. Whilst there's active research to study Ebola virus infection, there are a number of other infectious agents that are responsible for many more deaths per year on a global scale than Ebola. 
Also, because it's a biosafety level four virus, you can work with the complete virus only in very specialized containment facilities, including the one that's about to finally open at Boston University. The U.S. government, through the National Institutes of Health and through the Centers for Disease Control and Prevention, has funded lots of research on Ebola.

HMN: Should people in the U.S. be concerned?

SW: I don't see Ebola virus becoming a significant public health problem in the U.S. Ebola is a horrible disease but you're obviously much more likely to be exposed to Ebola virus in Africa than you are in North America. I think the challenges of being infected with a virus like Ebola are compounded because of the living conditions in West Africa versus here. I think it's right for people in the U.S. to be concerned about Ebola virus infection, but I think we should be concerned from a humanitarian perspective, to help combat the outbreak in West Africa. I don't see that Ebola is going to become a public health problem in North America. There was a story in the news about a patient at Mt. Sinai Hospital in New York who presented with vomiting and diarrhea and had just returned from West Africa and was being checked to see if they had Ebola virus. Well, it's much more likely that they just have food poisoning of some description. It's an important disease and we should be vigilant and continue our efforts to try and develop therapies to combat this disease.

HMN: What might be in the pipeline?

SW: There's a candidate vaccine that has been generated by Heinz Feldmann [chief of the laboratory of virology at the National Institute of Allergy and Infectious Disease Rocky Mountain Laboratories] that's based on vesicular stomatitis virus (VSV). He replaced the envelope protein of VSV with that of Ebola virus and has demonstrated that that virus will protect monkeys against a challenge with infectious Ebola.
If given 48 hours post-infection along with a lethal dose of Ebola, it will protect those monkeys against disease so they recover. There are also a number of interesting candidate antiviral therapeutics in various stages of development that treat the infection. Jim Cunningham [HMS associate professor of medicine (Microbiology and Molecular Genetics) at Brigham and Women's Hospital] has been working on one in cell culture that remains to be proven in the context of an infectious scenario in large animal models of disease. There is an inhibitor against the polymerase of Ebola virus that was published earlier this year by Sina Bavari's group at USAMRIID [U.S. Army Medical Research Institute of Infectious Diseases]. That polymerase inhibitor was able to treat monkeys that were experimentally infected with Ebola. They recovered from that infection. But the toxicity of that compound isn't fully clear. So there are things that are in stages of development, but there's nothing that is currently approved as a drug and has made it through a set of trials.

HMN: What about ZMapp, the experimental serum?

SW: It's an anti-serum that is basically an antibody against Ebola virus. We've known for years that passive immunotherapy can protect against many diseases, so long as you get it early enough in the process of infection. This experimental antibody is apparently what the people brought back to the United States had been given. But again, this antibody hasn't yet been approved as a licensed therapeutic. This is one of the challenges with these types of diseases. How do you get approval for doing a human clinical trial for an infectious agent like this? Under these conditions where you have an infectious agent whose lethality varies, depending on the outbreak, from 50 percent up to 90 percent, then if your chance of surviving an infection is one in two, you're probably going to be willing to take whatever you can.

HMN: Why do Ebola outbreaks flare and subside?
SW: It's very difficult to absolutely pin down why an outbreak starts. One source of transmission to people is eating or butchering contaminated monkeys. But as to how the virus is really transmitted in nature, what's the real reservoir for the virus? Some people argue that it's bats. And then the reason that the outbreaks subside is often because of the isolation of the people who are infected. People who are infected are very sick and it's only very close contacts of these people who usually get infected by the virus. So it sort of naturally dies out.

HMN: What's next?

SW: I'm optimistic based on the currently available data that one day there will be an effective treatment. Then the question becomes, how do you make that available to the people most in need of this treatment? You know, the ZMapp antibody, for example, if it's going to be an effective therapy, there has to be a way to get it to people and keep it cold and then there has to be a way to inject those people with it. And antibody-based therapies are very expensive. From a humanitarian perspective, I think there is the will to do this. The U.S. has invested a lot of money in trying to develop therapies and vaccines to treat this disease. There are a lot of people working on this problem and a lot has been learned in the past decade or so in particular. I think the fact that there are certain experimental therapies and a candidate vaccine already in progress is a testament to that work.
The digital age has dramatically changed social science research. Today we can do more research with more data in less time and at a lower cost. Computing power allied to digital storage and transmission allows us to combine existing data with new forms of digitally generated administrative and transactional data. This creates new opportunities for social science research, for example by allowing the reuse of data that were originally collected to answer a single research question, expanding their research potential.

Meanwhile, such opportunities for research also bring to light the challenge of correctly managing the data used. Researchers need to be aware of the importance of getting it right when it comes to data collection, organisation, contextualisation, storage, and dissemination. While research data management has always been integral to good research practice, it becomes more and more important in the context of the digital society in which we now live.

Research data management is about looking after your data. It concerns the development and implementation of practices, procedures, and policies to protect, validate, and describe data. Doing this ensures its quality, thereby facilitating potential reuse. Practicing good research data management will keep your data alive for generations, creating an impact long after your original research.

Regardless of whether you intend to share your data or not, it is fundamental to think early on in your research project about how you will collect, use, and store your data. Why? Because a little time spent on research data management at the start of a project means a lot more time for writing and publishing at the end.

The advantage for researchers in addressing research data management early on in their project is that it provides them with a strategy for confronting issues such as:

- consent, data ownership and licensing: If you are reusing data, what can you do with that data and what should you not do? If you are creating data, are there any restrictions on future reuse that you need to justify?
- research integrity and replication: Good research is replicable research, meaning that context is critical. Have you described the process of data creation and analysis so that others can understand, evaluate, and reuse the data or methodology without having to ask you for further information?
- data security and the risk of data loss: Think about how you are going to share data within a research team. Do you know what happens to your data when you press “Save”? Is it being backed up, where is it stored, and who can access the data?
- safe and secure disposal of data: Copies of data, or data not suitable for long-term preservation, need to be disposed of without compromising guarantees of confidentiality given to participants and funders.

In short, designing and implementing a research data management strategy increases and extends the value of your research, which will mean that you save time and resources. Moreover, funding bodies increasingly view how a proposal addresses research data management as an essential component of any funding request.

The CESSDA User Guide on research data management can be downloaded below.
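Several of the issues above — data security, backup, and replication — come down to being able to verify that a data file has not silently changed. As a minimal, hypothetical sketch (not part of the CESSDA guide itself), the following Python script records a SHA-256 checksum for every file in a data directory, so later copies or backups can be checked against the manifest; the file and directory names are illustrative.

```python
import hashlib
from pathlib import Path

def checksum(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(data_dir, manifest="MANIFEST.sha256"):
    """Record one 'digest  path' line per file under data_dir."""
    lines = [f"{checksum(p)}  {p}"
             for p in sorted(Path(data_dir).rglob("*")) if p.is_file()]
    Path(manifest).write_text("\n".join(lines) + "\n")
    return lines
```

Re-running `checksum` on a restored copy and comparing against the manifest is a simple way to detect corruption before it propagates into an analysis.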
To understand how our knowledge of the sky has been enhanced by telescopes.

By grade six, students have a good base of general knowledge to work with regarding stars, the moon, and the sun. They should know, for instance, that the patterns of stars in the sky stay the same, that the moon has phases, and that the sun is a star. They will likely have done naked-eye observations before and should have the skills to do so again. Students will also know about telescopes and that they help humans better observe stars and moons. This lesson will build on students’ prior knowledge of these things, with an emphasis on accurate descriptions of the moon, stars, and planets as seen from earth and on the motion of planets relative to the stars. (Benchmarks for Science Literacy, p. 240.) These observations will be discussed in the context of Galileo’s use of the telescope and his discovery of Jupiter’s moons. At this level, it is important to introduce characters and stories in the history of science so that students have a base for more complicated science stories and concepts that will be learned in later grades. (Benchmarks for Science Literacy, p. 238.)

In this lesson, students will make their own night-sky observations, diagram and describe what they see, and will then look at pictures taken by telescopes. They will compare the two ways of observing to confirm what telescopes can reveal about the night sky. They will also learn about Galileo and contemplate how the telescope helped him make his discoveries 400 years ago.

Students will do an at-home night-sky observation assignment. Part of the observation involves using binoculars, so if you can gather up extra pairs of binoculars for your students, it may be helpful.

Since this lesson will incorporate Galileo and his discoveries, students should begin by using their Sky Watching student esheet to visit the From Galileo to the Hubble activity.
This activity is a brief timeline beginning with Galileo and taking the reader up through the Hubble telescope. It gives a good overview of how observing the night sky keeps improving. When students have finished, discuss these questions with the class.

- Do you think that when Galileo made observations it was the same as it is today? (This question will gauge whether or not students understand that early scientists had cruder equipment. They should understand that when Galileo was observing, he had a simple telescope, and today some scientists use the Hubble to see more.)
- Do you think Galileo did naked-eye observations? How do you think that compared to using a telescope? (Student answers will vary.)

Tell students that in this lesson they will have the opportunity to do some sky observing just as Galileo did and just as many scientists do today. Students will consider the history of night-sky observing by doing their own observations, first naked-eye, then with binoculars, and then looking at the results of telescope observations.

Have students look at the Night-Sky Observations student sheet. Review the instructions with the class and clarify anything students may not understand. Even though the student sheet directs them to use binoculars in the final part of the assignment, if binoculars are not available, instruct your students to make only naked-eye observations.

After students have completed their night-sky observations on their own, conduct a class discussion in which students can compare their diagrams and discuss their observations. Then, have students use their esheet once again to explore Stars. They should spend five to ten minutes exploring pictures taken by the Hubble telescope. Afterwards, use these questions to allow students to compare the Hubble photographs to what they observed with their own eyes. Ask students:

- What did you learn from your night-sky observations? (This is a fairly open-ended question. Students should have noted that the stars moved in the sky, that some stars are brighter than others, etc.)
- If you had the opportunity to use binoculars, what did the binoculars allow you to see? (Binoculars, for this assignment, are an intermediary technology piece to demonstrate that students can see more details.)
- Compare what you saw in your own observations to what can be seen through the Hubble telescope.

Students should reflect on the differences between their own observations as compared to those made by the Hubble. Assign them to write a paper titled “Naked-Eye Sky Observation Compared to One Using a Telescope.” Students should discuss the process that they went through and reflect on what they were able to observe with the naked eye, the changes that they saw, and how our knowledge of the planets and stars can be enhanced by the telescope. Students can also continue to explore the Galileo Project and write about how the telescope impacted the life of Galileo. Would he have made his discoveries without the telescope?

This lesson may be supplemented by the related Science NetLinks lesson, Looking into Space, in which students explore the make-up and history of the telescope.

NASA's PlanetQuest site exists to keep you updated on the latest events and information in the unfolding story of scientists' search for other planets like Earth in the universe.

To see images taken through telescopes by amateur photographers, go to David Hanon's Astronomical CCD Imaging Page. Students can see images taken through an amateur telescope and a picture and description of the equipment used.
The Kentucky Department for Public Health, Office of Health Equity (OHE) was established in September 2008 to address health disparities among racial and ethnic minorities, and rural Appalachian populations. Grant support has been received from the U.S. Department of Health and Human Services, Office of Minority Health (OMH) since 2010. OHE supports goals and evidence-based strategies from the National Partnership for Action to End Health Disparities (NPA) to mobilize a statewide, comprehensive, community-driven, and sustained approach to combating health disparities and to move Kentucky toward achieving health equity. OHE also supports a wide variety of activities and services through partnerships with health departments, universities, nonprofit organizations and private health systems.

What is Health Equity?

“Attainment of the highest level of health for all people. Achieving health equity requires valuing everyone equally with focused and ongoing societal efforts to address avoidable inequalities, historical and contemporary injustices, and the elimination of health and healthcare disparities.” - National Partnership for Action to End Health Disparities, 2011.

The Office of Health Equity provides input using a health equity lens for numerous initiatives and committees. Current involvement includes:
Fred M. Vinson

Fred Vinson was the son of a rural Kentucky county jailer and his wife. He worked his way through college and law school and entered the practice of law in Kentucky at the age of 21. Vinson was a congressman for 8 terms and served on the influential Ways and Means Committee during much of the New Deal. He resigned his House seat to accept an appointment by Roosevelt to the U.S. Court of Appeals for the District of Columbia. After five years on the bench, Vinson resigned to accept an appointment in the Roosevelt administration as head of the Office of Economic Stabilization. Vinson later succeeded former justice James Byrnes as head of the Office of War Mobilization. Vinson became a trusted advisor to President Harry Truman, who appointed him Secretary of the Treasury. Truman later nominated Vinson to the position of Chief Justice.

Vinson avoided the announcement of sweeping constitutional principles. He resisted overturning prior decisions. Though he helped chip away at the "separate but equal" doctrine of racial separation, he resisted a head-on confrontation of the issue in Brown v. Board of Education. Vinson's sudden death from a heart attack in 1953 paved the way for the unanimous opinion crafted by Vinson's successor, Earl Warren.

|Clerk||Law School||Terms Clerked|
|Carl S. Hawkins||Northwestern||1952|
|William W. Oliver||Northwestern (1949)||1952|
|Newton N. Minow||Northwestern (1950)||1951|
|James C.N. Paul||Penn (1951)||1951|
|Howard J. Trienens||Northwestern (1949)||1950|
|Lawrence F. Ebb||Harvard (1946)||1947|
|Arthur R. Seder, Jr.||Northwestern (1947)||1947|
|Francis A. Allen||Northwestern (1946)||1946, 1947|
|Byron R. White||Yale (1946)||1946|
|Earl E. Pollock||Northwestern (1953)|
Living Things Show Cleverness in Many Ways

Wherever biologists and microbiologists look, they find organisms solving problems in remarkably clever ways.

Cells respond to surface curvature in clever ways (Science Daily). Did you know microbes act like A-students in Calculus-III class? Calc III tends to focus on problems in 3-dimensional curved surfaces. This article, based on work at the University of Pennsylvania, shows how cells respond to changing curvatures like math champs. Who taught them?

Last year, researchers from the University of Pennsylvania revealed surprising insights into how cells respond to surface curvature. Specifically, they investigated how cells respond to cylindrical surfaces, which are common in biology. They found that cells change the static configurations of their shapes and internal structures. “We think of it as the cells doing calculus; the cells sense and respond to the underlying curvature,” says Kathleen Stebe of Penn’s School of Engineering and Applied Science. Now, the researchers, led by Stebe and recent engineering graduate Nathan Bade in collaboration with Randall Kamien of the School of Arts and Sciences and Richard Assoian of the Perelman School of Medicine, have published a follow up study that Stebe likens to “calc III” for cells, investigating how cells respond to more complex geometries.

Sea Turtle Magnets

Evidence that Magnetic Navigation and Geomagnetic Imprinting Shape Spatial Genetic Variation in Sea Turtles (Current Biology). This open-access paper (a bit rare for this journal) continues amplifying knowledge that the Illustra film Living Waters showed: how sea turtles find their way through the trackless seas over thousands of miles. There's enough variation in magnetic field intensity at each beach, the authors say, for the hatchlings to “imprint” on the magnetic coordinates of their native beach and find their way back years later.
What they found may also help explain other animals that use geomagnetic navigation, such as the salmon shown in the Illustra film.

Here, we present evidence for an additional, novel process that we call isolation by navigation, in which the navigational mechanism used by a long-distance migrant influences population structure independently of isolation by either distance or environment. Specifically, we investigated the population structure of loggerhead sea turtles (Caretta caretta), which return to nest on their natal beaches by seeking out unique magnetic signatures along the coast—a behavior known as geomagnetic imprinting. Results reveal that spatial variation in Earth's magnetic field strongly predicts genetic differentiation between nesting beaches, even when environmental similarities and geographic proximity are taken into account. The findings provide genetic corroboration of geomagnetic imprinting. Moreover, they provide strong evidence that geomagnetic imprinting and magnetic navigation help shape the population structure of sea turtles and perhaps numerous other long-distance migrants that return to their natal areas to reproduce.

How does plant DNA avoid the ravages of UV radiation? (Science Daily). “If the ultraviolet radiation from the sun damages human DNA to cause health problems, does UV radiation also damage plant DNA? The answer is yes, but because plants can't come in from the sun or slather on sunblock, they have a super robust DNA repair kit.” A genetic repair toolkit called nucleotide excision repair is especially robust in plants. Not only that, it responds to the day-night cycle (the diurnal circadian clock), becoming more active when needed in sunlight.

New type of opal formed by common seaweed discovered (University of Bristol). In this example of a clever trick, the researchers at U Bristol are not exactly sure why the organism does it, but the common brown alga seaweed manufactures biological opals.
Opals are known for the iridescent colors produced at different angles. The seaweed not only imitates the gemstone, it can switch it on and off. The scientists like the trick so much they want to imitate it.

Such structures arise from nanosized spheres packed tightly in a regular way and are known to optics experts to reflect different colours from incoming white light into different directions. These types of structures are also seen naturally in gem stone opals, which comprise a nanostructure of tiny spheres of glass formed within hard stone deep below the earth's surface that naturally pack together in such a way that they diffract light into different directions giving the opal its well-known opalescence…. In a process unknown to present nanotechnology, the seaweed's chloroplast-containing cells (which aid photosynthesis) self-assemble the oil droplets into a regular packing. Surprisingly, these seaweeds can switch this self-assembly on and off, creating changing opals which react to the changing sunlight in tidal rockpools. Even more remarkable, how the seaweed performs the dynamic self-assembly, over a timescale of just hours, is a true mystery to the research team.

Human Gaze Gaiting

Gaze and the Control of Foot Placement When Walking in Natural Terrain (Current Biology). We don't want to leave out people as cleverly-equipped organisms, too. This open-access paper explores how the brains, eyes, legs and feet of us upright walkers solve the problem of keeping ahead of the terrain. A hiker negotiating a rocky trail has to maintain a balance between watching her feet and looking ahead. Most of us learn to do this trick without much thought, but it is very complex. The scientists outfitted hikers with headsets that allowed measurements of where they were gazing as they walked on level ground and proceeded onto rocky terrain.
Human locomotion through natural environments requires precise coordination between the biomechanics of the bipedal gait cycle and the eye movements that gather the information needed to guide foot placement. However, little is known about how the visual and locomotor systems work together to support movement through the world. We developed a system to simultaneously record gaze and full-body kinematics during locomotion over different outdoor terrains. We found that not only do walkers tune their gaze behavior to the specific information needed to traverse paths of varying complexity but that they do so while maintaining a constant temporal look-ahead window across all terrains. This strategy allows walkers to use gaze to tailor their energetically optimal preferred gait cycle to the upcoming path in order to balance between the drive to move efficiently and the need to place the feet in stable locations. Eye movements and locomotion are intimately linked in a way that reflects the integration of energetic costs, environmental uncertainty, and momentary informational demands of the locomotor task. Thus, the relationship between gaze and gait reveals the structure of the sensorimotor decisions that support successful performance in the face of the varying demands of the natural world.

The research highlighted four major findings:

- Gaze and full-body kinematics were recorded during real-world locomotion
- Walkers show distinct gaze strategies appropriate for the demands of each terrain
- Nevertheless, walkers also adopted a constant look-ahead time across all terrains
- Walkers tune gaze behavior to sustain consistent locomotor strategy in all terrains

So there you have it. You may not have even known about the optimizing strategy your mind and body utilize in the simple act of walking down a path. You are a wonder walking through a world of wonders. Give thanks to our Creator, and study His marvelous designs in creation.
The recently discovered KHOBE attack technique is perhaps most notable not because of the damage it caused, but because of the controversy surrounding its significance. KHOBE, which stands for kernel hook bypassing engine, was discovered by security research group Matousec on May 5th, 2010. The discovery led to a firestorm of media attention, with various experts either proclaiming how serious and dire the threat was, or denouncing it entirely. Both sides mention legitimate issues, but the context is critical to understanding how the research affects enterprises. The reality is that while similar attack techniques have been reported before, the KHOBE attack technique is an improvement on the prior attacks (mentioned by ESET in a recent blog post). In this tip, we'll explain what you as an enterprise security pro should learn from KHOBE, and how to make sure attackers can't use the KHOBE technique to exploit your organization.

KHOBE Attack explained

The KHOBE attack is basically a classic time-of-check vs. time-of-use race condition: the security software running on the host computer scans the code or data and finds it safe, but before the data is used or the code is executed, malicious code is swapped in and unknowingly executed.
In the KHOBE attack, the malware already running on a system will pass innocuous code to be scanned or checked by the security software running on the host computer, and then, after that check, the initial malware will execute the KHOBE attack and actually run malicious code in place of the innocuous code to perform malicious activities, like installing any variety of malware or rootkit to take over the system. The KHOBE technique enables this code swap using a kernel hook to directly manipulate kernel data (or user data) used for execution of software. A kernel hook is a way (unsupported by Microsoft since the introduction of Patchguard in Windows Vista) to get control over the execution of code on a Windows operating system, and is used by the software Matousec lists as vulnerable. A kernel hook bypass inserts itself into the code-execution process to change the control over the code execution. The issue with the race condition is that the security software assumes that once it checks the potentially malicious code to see if it is indeed malicious, the code it checked will be the code that runs, and it won't be changed before it is executed; with the KHOBE technique that's not the case.

Matousec reports Windows XP and Windows 7 are considered vulnerable, and a long list of security software is affected, primarily host intrusion prevention systems (HIPS) and some antimalware software. However, it's important to note that malware needs to already be running on a target system in order for the KHOBE technique to execute an effective kernel bypass. The KHOBE attack method also requires a sizable piece of code to work and most likely could not be included in attack shell code from an exploit, but could be included with other malware. As we'll cover in a moment, this is why KHOBE's detractors say its relevancy is limited; it depends on other exploits, and by itself is of little use to an attacker.
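The time-of-check vs. time-of-use race at the heart of KHOBE is easiest to see in miniature. The Python sketch below is purely illustrative — it simulates the check-then-use gap with threads and a shared object rather than touching any kernel interface — but the failure mode is the same: a "scanner" approves a payload, and a second thread swaps the payload inside the window between the check and the use.

```python
import threading
import time

class SharedBuffer:
    """Stands in for memory that both the scanner and the OS read."""
    def __init__(self, payload):
        self.payload = payload

def scan(buf):
    # Time of check: the "security software" inspects the payload.
    return buf.payload == "benign"

def execute(buf):
    # Time of use: whatever is in the buffer *now* is what runs.
    return buf.payload

def attacker(buf, window_open):
    window_open.wait()         # wait until the scan has passed
    buf.payload = "malicious"  # swap the payload inside the race window

buf = SharedBuffer("benign")
window_open = threading.Event()
t = threading.Thread(target=attacker, args=(buf, window_open))
t.start()

approved = scan(buf)  # the check passes: the payload looks benign
window_open.set()     # the gap between check and use opens here
time.sleep(0.1)       # the gap only needs to exist briefly
executed = execute(buf)
t.join()

# approved is True, yet executed == "malicious":
# the code that was checked is not the code that ran.
```

A real KHOBE attack plays the same trick against kernel hooks rather than Python objects, which is why a verdict that applies to data that can still change before use is not a reliable verdict.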
Security threats from the KHOBE attack technique

While some may argue otherwise, the KHOBE attack technique poses a significant security risk, but the key issue is that malicious code must already be running on a system before the kernel hook bypass can be used. One of the biggest threats is that a KHOBE-style attack could be paired with a zero-day attack that bypasses host antimalware or other security software, and then KHOBE could be used to do further damage. This general type of attack is fairly common, with an initial exploit loading additional malware to fully take over the system. The KHOBE technique is not the only way to do this type of attack though; KHOBE just happens to be the newest way.

The real-world implications of the KHOBE attack technique are minimal at this time, but it is certainly conceivable that an attacker could pair KHOBE with a zero-day attack to exploit any vulnerable software applications. Since the KHOBE attack is now much better known, it could even be included in more general attack toolkits or scripted attacks, which would elevate the priority of deploying patches to mitigate a potential attack.

KHOBE attack technique: Enterprise defense strategy

The enterprise defense strategy for the KHOBE attack technique is fairly straightforward at this point, since KHOBE is still a proof-of-concept and hasn't been observed in the wild yet. You should follow your standard antimalware protection strategy and update your antimalware software as soon as an update is released. For other security software, you may want to investigate in more depth the vulnerability of your software, especially if you don't have other antimalware software protecting your system. For example, if you use a HIPS that is vulnerable and you use other antimalware software, the HIPS software could be disabled, leaving your system protected by the antimalware software.
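As an aside on patch prioritization, the bookkeeping can be as simple as comparing installed versions against the first fixed version from each vendor's advisory. The sketch below is hypothetical — the product names, version numbers, and host inventory are invented for illustration, not taken from any real advisory.

```python
# product -> first non-vulnerable version (invented numbers)
PATCHED_VERSIONS = {
    "ExampleHIPS": (4, 2, 1),
    "ExampleAV": (10, 0, 5),
}

def parse_version(s):
    """Turn '4.1.9' into the comparable tuple (4, 1, 9)."""
    return tuple(int(part) for part in s.split("."))

def needs_update(product, installed):
    """True if the installed version predates the patched version."""
    fixed = PATCHED_VERSIONS.get(product)
    if fixed is None:
        return False  # product not on the advisory list
    return parse_version(installed) < fixed

# (host, product, installed version) -- an invented inventory
inventory = [
    ("host-01", "ExampleHIPS", "4.1.9"),
    ("host-02", "ExampleAV", "10.0.5"),
    ("host-03", "ExampleHIPS", "4.2.1"),
]

to_patch = [row for row in inventory if needs_update(row[1], row[2])]
```

Running this kind of check across an endpoint inventory gives a short, ordered list of the hosts where a vulnerable security product most urgently needs its vendor update.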
So, given that the antimalware software is still protecting your system, you may not need to deploy the update for your HIPS software immediately. Antimalware companies are also releasing updated signatures to detect attacks using this technique, so that KHOBE attack code cannot run on a host system even if an initial exploit is successful, and some software vendors are updating their products to stop using kernel hooks, which would make them no longer vulnerable. Despite what its detractors have said, the KHOBE attack technique is an important one that Windows-centric enterprises should be aware of, and one that vulnerable security software vendors should quickly patch in their products. Enterprises and vendors alike should ensure their security software is not vulnerable and provides protection, but there is minimal threat of widespread attacks from the KHOBE technique in and of itself.

About the author: Nick Lewis (CISSP, GCWN) is an information security analyst for a large public Midwestern university, responsible for the risk management program; he also supports its technical PCI compliance program. Nick received his Master of Science in Information Assurance from Norwich University in 2005 and Telecommunications from Michigan State University in 2002. Prior to joining his current organization in 2009, Nick worked at Children's Hospital Boston, the primary pediatric teaching hospital of Harvard Medical School, as well as for Internet2 and Michigan State University. He also answers your information security threat questions.
<urn:uuid:96291a71-cca0-477c-b23a-ce390b32b3d9>
{ "date": "2015-08-05T02:26:57", "dump": "CC-MAIN-2015-32", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438043058631.99/warc/CC-MAIN-20150728002418-00157-ip-10-236-191-2.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9509216547012329, "score": 2.640625, "token_count": 1414, "url": "http://searchsecurity.techtarget.com/tip/KHOBE-attack-technique-Kernel-bypass-risk-or-much-ado-about-nothing" }
Sonar (originally an acronym for SOund Navigation And Ranging) is a technique that uses sound propagation (usually underwater, as in submarine navigation) to navigate, communicate with, or detect objects on or under the surface of the water, such as other vessels. Two types of technology share the name “sonar”: passive sonar is essentially listening for the sound made by vessels; active sonar is emitting pulses of sound and listening for echoes. Sonar may be used as a means of acoustic location and of measurement of the echo characteristics of “targets” in the water. Acoustic location in air was used before the introduction of radar. Sonar may also be used in air for robot navigation, and SODAR (an upward-looking in-air sonar) is used for atmospheric investigations. The term sonar is also used for the equipment used to generate and receive the sound. The acoustic frequencies used in sonar systems vary from very low (infrasonic) to extremely high (ultrasonic). The study of underwater sound is known as underwater acoustics or hydroacoustics.
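Active sonar ranging comes down to simple arithmetic: multiply the echo's round-trip time by the speed of sound in water, then halve it, because the pulse travels out and back. A small illustrative sketch, assuming a typical sound speed of about 1,500 m/s in seawater (the real value varies with temperature, salinity, and depth):

```python
SPEED_OF_SOUND_SEAWATER = 1500.0  # m/s; a typical value, not a constant

def echo_range_m(round_trip_s: float, sound_speed: float = SPEED_OF_SOUND_SEAWATER) -> float:
    """Distance to a target from an active-sonar echo delay.

    The pulse travels to the target and back, so the one-way
    distance is half the total path length."""
    return sound_speed * round_trip_s / 2.0

print(echo_range_m(2.0))  # -> 1500.0  (a 2 s echo puts the target 1.5 km away)
```

The same formula works for in-air ranging (as in SODAR or robot navigation) by substituting the speed of sound in air, roughly 340 m/s.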
<urn:uuid:16d92334-fa27-41cf-85c8-dbae1b62b562>
{ "date": "2018-09-18T20:16:31", "dump": "CC-MAIN-2018-39", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155676.21/warc/CC-MAIN-20180918185612-20180918205612-00056.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9479995965957642, "score": 3.828125, "token_count": 229, "url": "https://adventuresinrediscovery.com/2012/08/06/sonar-marine/" }
For Seniors: Set Up Speech Recognition on a Laptop

If you have dexterity challenges from a condition such as arthritis, you might prefer to speak commands using a technology called speech recognition rather than type them. If your laptop doesn't have a built-in microphone (most do), plug a headset with a microphone into your laptop's headset ports (see your owner's manual if you're not sure where they're located).

From the Control Panel, choose Ease of Access→Start Speech Recognition. The Welcome to Speech Recognition message appears. Click Next to continue. (If you've used Speech Recognition before, this message does not appear.)

In the resulting Set Up Speech Recognition dialog box, select the type of microphone that you're using and then click Next. The next screen tells you how to place and use the microphone for optimum results. Read the message and click Next.

In the dialog box that appears, read the sample sentence aloud to help train Speech Recognition to your voice. When you're done, click Next. A dialog box appears, telling you that your microphone is now set up. Click Next.

During the Speech Recognition setup procedure, you are given the option of printing out commonly used commands. It's a good idea to do this, as speech commands aren't always second nature!

In the resulting dialog box, choose whether to enable or disable document review, which allows Windows to review your documents and e-mail to help it recognize the way you typically phrase things. Click Next.

In the next dialog box, choose either Manual Activation mode, where you can use a mouse, pen, or keyboard to turn the feature on, or Voice Activation mode. Voice Activation mode is useful if you have difficulty manipulating devices because of conditions such as arthritis or a hand injury. Click Next. 
In the resulting screen, if you wish to view and/or print a list of speech recognition commands, click the View Reference Sheet button, read about or print the reference information, and then click the Close button to close that window. Click Next to proceed.

In the resulting dialog box, either leave the default setting of running speech recognition at startup selected, or click the Run Speech Recognition at Startup check box to disable this feature.

The final dialog box informs you that you can now control the laptop by voice, and it offers you a Start Tutorial button to help you practice voice commands. Click that button, or click Skip Tutorial to skip the tutorial and leave the Speech Recognition setup.

The Speech Recognition control panel appears. Say Start listening to activate the feature if you chose voice activation during setup, or click the Start Speech Recognition button (it looks like a microphone) if you chose manual activation. You can now begin using spoken commands to work with your laptop.

To stop Speech Recognition, click the Close button on the Speech Recognition control panel. To start the Speech Recognition feature again, from the Control Panel choose Ease of Access and then click the Start Speech Recognition link. To learn more about Speech Recognition commands, click Speech Recognition from the Ease of Access panel and then click the Take Speech Tutorial link in the Speech Recognition Options window.
<urn:uuid:28158379-f5d8-48e2-aabe-df676fa2c476>
{ "date": "2014-07-22T19:33:24", "dump": "CC-MAIN-2014-23", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997862553.92/warc/CC-MAIN-20140722025742-00200-ip-10-33-131-23.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8509078025817871, "score": 2.890625, "token_count": 688, "url": "http://www.dummies.com/how-to/content/for-seniors-set-up-speech-recognition-on-a-laptop.html" }
Welcome to running! It is fantastic that your son's involvement with cross-country led you to run also. You have made good progress in the past two years with improving your fitness and losing weight. Running injuries are every runner's bane, and IT band injuries are among the most common. In general, most running injuries are classified as "over-use" injuries, which means we inflict them upon ourselves by doing too much, too soon. The IT band is made of connective tissue and this matrix takes longer to "toughen" up to running than other soft tissue, like muscles. Connective tissue is not as vascular as muscle tissue and therefore adaptation to new demands, like running, takes more time. Your injury may have been a result of this difference in adaptation rate between muscles and connective tissue. Another contributing factor to injuries, especially IT band injuries, can be the running surface. Whether road, trail, or sidewalk, how level is it? If there is a significant camber, this slant can contribute to injury by creating an uneven load on the joint and the surrounding soft tissue. Try to run on the most level surfaces you can find and mix up your running routes frequently to help avoid this problem. And, if you have not already done so, consider having an evaluation done of your running biomechanics and gait. This can provide important information, like determining the correct running shoes for you, and prevent future injuries as well. Keep track of your mileage and replace your running shoes when needed; this will minimize injury risk too. It will be important for you to pay attention to your IT band and continue with PT exercises, stretching, and ice for some time to come as a proactive measure. Keeping a training log is an excellent way for you to track your training and assess it objectively. When laying base miles, it is very important for you to keep a steady pace, especially when returning to running after an injury. 
Resist the urge to jump back into your running where you left off. Base mileage provides the foundation for everything you want to accomplish, and, much like the foundation of your house, you want it done right. Pay attention to your pace and rein it in until you have laid a solid foundation. Base mileage pace means conversational intensity or approximately 60 to 75% of your maximum heart rate. When you find yourself speeding up, slow it down! You may even need to take a walk break to re-set your speedometer. "Zoning out" is a great feeling, but for now, remain acutely aware of your pace. Give yourself a full 12 weeks to lay down a base. After 8 weeks of consistent running at base pace, you can add ONE 20 minute tempo run during the week. Tempo run pace during this phase means 30 seconds per mile faster than your base pace, so not a lot faster, but this will help transition you to speed work when you complete your base mileage phase. After 12 weeks of base, begin adding in some speed work once a week. After a good warm up, pick up the pace for short periods of time and ease back down. Begin with one minute at tempo pace; and then back to base pace for three minutes, and repeat. Start with a 30-minute run and gradually increase this time in four-minute increments each week barring any aches or pains. After 8 to 12 weeks of this speed work, try the track and see how it feels. Always do a thorough warm up before track work, plan on running easy for one to two miles. When you complete a speed workout, do a cool down consisting of running easy for one-half to one mile. Here are two good track workouts for improving your speed: 1) Two sets of 6 x 400. Jog a 200 after each interval for a short recovery and then move right into the next 400 interval until you complete the first set of six. Take four minutes for a more complete recovery after the first set, then begin the second set. Keep the times for these intervals consistent. 
Aim for no more than a five second variance in all the intervals. This will help train you to run faster and hold this faster pace. 2) 5 x 1000. Jog a 200 after each interval for a short recovery. Again, keep these interval times consistent, with no more than a 5 second variance between them. For pace on the track, start with your 5k time to base these intervals on and see how you feel after that. If all is well, crank it down a bit below your 5k pace IF you can maintain that pace. This will be evident if you are able to keep all the intervals close to the same time. Running three or four times a week is an excellent plan for many runners because it allows for recovery time between workouts. Know the intention and the pace for each of your training runs when you head out the door. In general, include one speed work out, one strength work out (you can alternate tempo and hill runs), one endurance run or long run, ("long" being relative to the distance you are training for) and one recovery run each week. Cross-training can be very helpful too. Consider adding two days a week of some form of aerobic exercise other than running. Swimming is an excellent choice for runners because it is non-weight bearing, recruits different muscles than running, and emphasizes the upper body. Spinning or rowing are also good cross-training choices. In addition, adding some strength and flexibility training to your routine is also recommended, even if it's only 1 day a week. This can be in the form of weight training or an exercise class like Yoga, Pilates, or an abs class. Susan Paul, MS Susan Paul has coached more than 2,000 runners and is an exercise physiologist and program director for the Orlando Track Shack Foundation. For more information, visit www.trackshack.com. Have a question for our beginners experts? E-mail it to [email protected]. NOTE: Due to the volume of mail, we regret that we cannot answer every e-mail.
<urn:uuid:95dde57b-9372-46ae-b7f0-b77c54db89ec>
{ "date": "2015-10-13T09:04:42", "dump": "CC-MAIN-2015-40", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738004493.88/warc/CC-MAIN-20151001222004-00004-ip-10-137-6-227.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9487848877906799, "score": 2.703125, "token_count": 1254, "url": "http://www.runnersworld.com/for-beginners-only/how-do-i-improve-without-getting-reinjured" }
The root senses of the words

As you have noticed, English has changed a lot since the first translations were made. Now, the original meaning of 'blessed', according to the etymology, is 'consecrated', but it started to pick up a second meaning—over time it started to sound more like the word 'bliss', and so, following on the idea of being consecrated, the idea of blissfulness or happiness was added to it. (This is explained at dictionary.com. For another example of how unrelated words can influence each other's senses, compare how the word 'niggardly' came to be politically incorrect.)

It may also be instructive to look at the meaning of 'happy' in the past. The word 'happy' comes from the word 'hap', meaning what happens by chance. The words 'mishap' and 'happen' itself are related; the former is a bad chance occurrence, and the latter is what just happens to occur. The root meaning of 'happy' thus is what we might describe as 'lucky' or 'fortunate'—as the familiar saying goes, "happiness is based on happenstance". While we do not strongly associate the idea of good luck with 'happy' today, the association was stronger in the past, so the translators of the time may have preferred not to use it here, though people who wrote dictionaries would use it to specify when they said 'blessed' that they meant this sense.

Which sense is actually meant?

We can find out by looking at the original language. While English uses 'bless' for both the idea of consecration and the idea of happiness, Greek has two different words: εὐλογέω (eulogeō) is to bless as in to consecrate, and μακάριος (macarios) is happy. In the original Greek of the beatitudes, 'μακάριος' is used. So indeed the sense of happiness is intended here; Jesus is talking about the future happiness of people who are not traditionally considered to be happy, not the future consecration of people who are not traditionally considered to be consecrated. 
(In Latin likewise there are two different words -- benedico is to bless as in to consecrate, and beatus is happy. It is from the latter that we get the name 'beatitudes'.)

But 'happy' is different from 'blessed' somehow

Now, as you say, you "don't believe that happy and blessed do coalesce, at least not on earth". You have reason to say this, but this is less about the meaning of the words and more about their associations. The idea of 'blessed' happiness is strongly tied to the religious idea whose name it shares and so we tend to use the word only in religious contexts or with religious feeling -- and because we make it a religious idea, we tend to think about it more and realize that real happiness is not of or in this world. Because 'happy' has no such religious association, we tend to use it more lightly and don't think of it in such a way. But the idea is still in the word; one may see it when we are talking about happiness in a philosophical but non-religious way, as when we speak of the saying of Solon, "call no man happy till he is dead"—the original Greek for the 'happy' here is also of the same root as μακάριος.

So why change 'blessed' to 'happy' in the translation?

It's true that 'blessed' already has more of the connotations that one would want in this text. But, especially for those not raised to religious terminology, the word is kind of obscure, and without use in a variety of contexts, it's hard for people to learn what a word means; the meaning tends to get muddled. By updating 'blessed' to 'happy'—which we can do now that 'happy' no longer means 'lucky'—we at once make the original idea more accessible, and hopefully stimulate in the word 'happy' the kind of thinking about happiness we have already done when using the word 'blessed'.
<urn:uuid:a133c6e6-7606-4271-b295-045c543866bc>
{ "date": "2014-09-30T18:03:09", "dump": "CC-MAIN-2014-41", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663060.18/warc/CC-MAIN-20140930004103-00278-ip-10-234-18-248.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9637864828109741, "score": 3.125, "token_count": 904, "url": "http://christianity.stackexchange.com/questions/4801/what-is-the-difference-between-being-blessed-and-being-happy/17367" }
Means of Creation

Coding improves problem-solving skills and gives you the confidence to build anything you want with just programming languages. You can build web or mobile apps. You can apply coding skills to build innovative products for agriculture or the smart home, and to address some of the biggest issues humankind is facing, like climate change, healthcare, education and many more.

You Become a Tool Maker

When you become a programmer, you become a tool maker, and you love your craft because it fulfills you. You create tools all day long to help people or yourself, solve the unsolvable, create better entertainment, make life more efficient, and bring together conscience and logic in a plethora of ways.
<urn:uuid:1fb86eaf-25fe-4875-9749-74ad879e5ac6>
{ "date": "2019-11-22T07:01:11", "dump": "CC-MAIN-2019-47", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671245.92/warc/CC-MAIN-20191122065327-20191122093327-00536.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8972787857055664, "score": 2.59375, "token_count": 142, "url": "https://www.codebuild.co/why-learn-coding/" }
Seven plutons in the Composite Arc Belt and Frontenac terrane of the Grenville Province are herein determined to be part of the Kensington-Skootamatta intrusive suite. The syenite-monzonite and granite-monzogranite plutons have crystallization ages of ca. 1086-1072 Ma and ca. 1077-1067 Ma, respectively. In general, the plutons are shoshonitic, metaluminous to weakly peraluminous, and alkalic to calc-alkalic. The plutons have conflicting trace element signatures that suggest a within-plate tectonic setting with a weak remnant suprasubduction zone signature and a depleted mantle isotopic signature of εNd 2-5. This geochemistry points to derivation of these melts through fractionation of an alkaline basalt or partial melting of quartzofeldspathic crust. Melt generation was perhaps initiated by crustal delamination, and further heated through insulation by overthickened crust and the Midcontinent Rift magmatic system. Two other plutons that were studied have ca. 1180-1160 Ma crystallization ages and geochemical characteristics similar to the Frontenac intrusive suite.
<urn:uuid:54ac3dc0-9380-4a1a-b870-0c7e62eb04d6>
{ "date": "2019-03-26T03:39:29", "dump": "CC-MAIN-2019-13", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204768.52/warc/CC-MAIN-20190326014605-20190326040605-00336.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8792046308517456, "score": 2.609375, "token_count": 261, "url": "https://curve.carleton.ca/fe1cea81-0716-4481-bc9c-8b62a17e32fc" }
Lyveden was commissioned by Sir Thomas Tresham, a cultivated Elizabethan landowner who was frequently imprisoned and fined for his Catholic faith. The pavilion is riddled with symbols relating to Catholicism, some of which are so cryptic that they have never been deciphered. It remained unfinished at Tresham’s death in 1605. The plot thickened recently when National Trust curator Chris Gallagher (perhaps I should call him ‘renowned curator’, in true Dan Brown style) discovered an aerial photograph of Lyveden taken during World War II by the German airforce, the Luftwaffe. The photograph provides vital clues to the design of the garden, but until recently it had lain unexamined in the United States National Archives in Baltimore, Maryland. Tresham was a keen gardener, and the ten concentric circles seen in the Luftwaffe photo, measuring about 120 meters in diameter, reveal more about the design of the garden. The circles are set within what Sir Thomas described in a letter as his ‘moated orchard’. Elsewhere there are references to 400 raspberries and roses that were to be planted within the ‘circular borders’. The 1944 photo proves that parts of these garden features remain, thinly covered by grass. This discovery of the physical evidence of the Elizabethan garden has prompted English Heritage to upgrade Lyveden to Grade I on their Register of Historic Parks and Gardens. As a first stab at recreating the lost garden, National Trust staff have mowed a labyrinth in the sward, which is one possible interpretation of what the circles could have been part of. It is hoped that further research will allow an informed replanting of the area to be carried out. And it may even inspire Dan Brown’s next book…
<urn:uuid:4a9e80f3-e584-4b83-970b-68b98233bec0>
{ "date": "2014-10-02T10:25:39", "dump": "CC-MAIN-2014-41", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663743.38/warc/CC-MAIN-20140930004103-00258-ip-10-234-18-248.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9659802913665771, "score": 3.265625, "token_count": 378, "url": "http://nttreasurehunt.wordpress.com/2010/11/17/the-tresham-code/" }
Contains selected information resources covering all subject areas. Search by keyword or browse by subject.

Intute: Arts and Humanities
A free online service providing access to websites selected and evaluated by subject specialists. Contains links to several modern languages.

ROUTES
ROUTES is a database providing access to selected quality-assessed freely available internet resources, selected by course teams and the Open University Library's Learning and Teaching Librarians.

Reading Experience Database 1450 - 1945
An Open University database. RED's mission is to accumulate as much data as possible about the reading experiences of British subjects from 1450 to 1945. Searchable. You may also contribute to the database.

A freely available collaborative encyclopedia.

The Free Dictionary
Online dictionary (Houghton Mifflin), thesaurus (Collins) and encyclopaedia (Columbia), with English and American audio pronunciation, translation to other languages and other features such as word games.

Ethnologue: Languages of the World
An encyclopaedic reference work cataloguing all of the world's 6,912 known living languages.

BBC Learning Languages page

The Linguist List
Information on languages and language families, and links to websites devoted to natural and constructed languages, to writing systems, and to language resources, e.g., dictionaries.

A comprehensive catalog of reviewed language-related Internet resources. Provides links to online language lessons, translating dictionaries, native literature, translation services, software, language schools, and general information.

A multilingual, on-line resource centre for foreign language learning. It provides information about, and links to good on-line resources from around the world relating to the learning and teaching of any modern foreign language.

BBC Learning English
Resources to help those new to English with quizzes, help with grammar and business English. Resources to help people with word and number key skills. 
This site is a co-production between the British Broadcasting Corporation (BBC) and the British Council. Both organisations receive funding from the UK government for their work. The materials on this site are designed for non-native speaker teachers of English working predominantly in secondary education in state schools around the world. As the content increases we hope it will be of interest to teachers of a wider range of age groups and working in other kinds of teaching institutions.

Lots of activities for people learning English, covering both grammar and vocabulary.

World Wide Words
Compiled by lexicographer Michael Quinion, World Wide Words is an online guide to contemporary English and its usage. The site includes a collection of articles about the use and evolution of the English language; definitions of new and unusual words; a guide to phrases; answers to queries submitted by site visitors; and notes about words currently featured in the press.

Freely available collection of classic fiction, poetry, essays and non-fiction. Includes all Shakespeare plays, and biographies of authors.

Directory of Open Access Journals
DOAJ provides free access to selected scholarly journals. It can be browsed by title or subject, as well as searched for keywords.

Welsh Journals Online
Free online, searchable, access to a selection of 19th-, 20th- and 21st-century Welsh and Wales-related journals held at The National Library of Wales and partner institutions. These materials cover a very wide range of subject areas, including humanities, social sciences, science and technology.

References to over 1 billion items in 10 000 libraries worldwide.

Online dictionary offering definitions of English words and three bilingual dictionaries from English to Spanish, French and Italian. You can also access bilingual dictionaries between each of these languages. 
Scholar’s Lab / Electronic text center
10,000 publicly available texts including history, literature, philosophy, religion, and history of science. Languages include Latin, Apache, Japanese, and Chinese. 164,000 related images, including rare books, manuscripts, and book illustrations.

An American site giving access to thousands of free electronic books, poems, articles, short stories and plays. It also includes study guides, dictionaries, biographies, religious texts and popular non-fiction.

UC Press e-books Collection 1982 - 2004
More than 300 free electronic editions published by eScholarship Editions, mainly in the humanities, religion, history, literature, arts and social sciences.

Arts and Humanities Data Service
The Arts and Humanities Data Service is a national service set up to collect, describe, and preserve the electronic resources which result from research and teaching in the humanities. It encourages scholarly use of its collections through an online catalogue.

British Library Images Online
Images from two millennia of world history. The collection includes images of people, natural history, religion, conflict, travel and exploration, and social history. There are free samples which can be downloaded for personal use.
<urn:uuid:8ce55674-5013-4951-a302-eabe71c9a58b>
{ "date": "2014-03-09T01:58:45", "dump": "CC-MAIN-2014-10", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999670363/warc/CC-MAIN-20140305060750-00006-ip-10-183-142-35.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.900968611240387, "score": 3.078125, "token_count": 995, "url": "http://www.open.ac.uk/libraryservices/pages/oair/?id=16" }