Dataset columns:
- text: stringlengths (198 to 621k)
- id: stringlengths (47 to 47)
- dump: stringclasses (95 values)
- url: stringlengths (15 to 1.73k)
- file_path: stringlengths (110 to 155)
- language: stringclasses (1 value)
- language_score: float64 (0.65 to 1)
- token_count: int64 (49 to 160k)
- score: float64 (2.52 to 5.03)
- int_score: int64 (3 to 5)
My name is Eoghan Evesson and I work as a Second-level English teacher in Ireland. I first discovered Voki, as with most things that improve my teaching, on Twitter. I was researching ways to give my pupils’ work a voice beyond what was written on the page. That’s exactly what Voki is: a safe, convenient and engaging way to bring written work alive. As I started to use Voki over the subsequent weeks and months, I discovered that Voki’s unique talking-avatar style could be applied by teachers in many ways. In this blog post I’d like to outline three ways that teachers, primarily but not exclusively English teachers, can use Voki in their teaching and learning.

1. Deliberate Mistakes
Can teaching grammar to First Year pupils (13 years old) be fun? Absolutely. There are lots of innovative and engaging ways to help pupils improve their writing ability, and Voki is one more. One exercise I used with pupils was on how to use apostrophes when contracting words. This is an example of a Voki with deliberate grammatical mistakes. Can you spot them? I would start by creating five Vokis with misused contractions, five times each, in the ‘type your text’ section. Pupils then work in groups and listen to the Voki avatar talking. Groups are awarded one point for noting a mistake and another point for explaining on their page how the word should be used correctly. It’s fantastic to hear a table explain to each other the difference between it’s and its! This task encourages active listening, understanding context and identifying key concepts. It could easily be adapted to other subjects by deliberately misspelling or misusing key terms or concepts in the avatar’s ‘type your text’ box.

2. End of Project
Like most English teachers, I like to teach that good writing requires a process: you go through analyzing the task, a first draft, a second draft, and then editing the piece. I’ve found that Voki is a great way for pupils to present their final piece of work. This term I asked one of my Junior English classes to write a short news report from the perspective of a person on the island in The Lord of the Flies. The pupils planned their work and made a first attempt in their copies. We then discussed the first attempts and realized that we were writing fun news reports, but we were not really capturing the mood, details or atmosphere of the novel. Having written and discussed our pieces a second time, we decided to create Voki characters reading out their work. This Voki was created by a pupil speaking from the perspective of a character on the island in Golding’s ‘Lord of the Flies’. Voki acted as a way for the pupils’ hard work to be given more validation than it would have received by leaving it in their copies. You might consider Voki at the end of your next class project.

3. Blog Updates and Feedback
One of the most common ways I use Voki is to create an avatar on our English blog, http://newenglishirl.blogspot.ie/. Every month I create an avatar that discusses recent blog posts. I feel it gives the blog page an interactive and engaging element. This is a Voki that was used as a monthly update on our English blog. Voki can also be a great virtual assistant in your classroom for the day you are out of school; it can be placed as a QR link in your pupils’ copies, or the Voki character can be the bearer of bad news: homework! Your Voki avatar can also assist with the correcting of homework. If you are using a VLE with your pupils, you can write the feedback for the task into a Voki avatar.
Using the Voki avatar gives great immediacy and presence to the feedback, and it has the added benefit of never getting lost. If you have a class for an entire year, or perhaps two years, your pupils will begin to gather a collection of feedback on their work. This is obviously invaluable for pupils’ development, as they can hear common mistakes in their work and improve. I hope you found these suggestions helpful. The ease of access and the engaging, intuitive nature of Voki make it a fantastic tool in many different types of classroom. For an English teacher, it encourages a better understanding of voice and audience. Often, though not always, when we write, we write to be heard. Voki is a safe place for young writers to start to hear their words come to life.

My name is Eoghan Evesson and I’m a teacher of English to Second-Level pupils in Ireland. I work in the fantastic Newbridge College, Co. Kildare, teaching pupils aged 13 to 19. I’m passionate about using ICT to help pupils engage more, enjoy more and, most importantly, learn more in my classes. You can find me on Twitter @JCenglishNet
<urn:uuid:cd4ac7dd-cbbd-4402-adbb-8168b33aa0d3>
CC-MAIN-2023-14
https://blog.voki.com/tag/writing-2/
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943704.21/warc/CC-MAIN-20230321162614-20230321192614-00204.warc.gz
en
0.944895
1,127
2.890625
3
The Kinetic Skyscraper, which earned honorable mention in eVolo’s 2011 Skyscraper Competition, will surprise gardeners with its similarity to a blooming foxglove. In the skyscraper’s case, however, the blossoms are housing units attached to a main shaft, or exoskeleton, which provides elevator service and doubles as a conduit for electricity cables, water supplies and other associated utilities. With the Kinetic Skyscraper, the petals are actually insulating, acoustical panels, also known as SIPs (structural insulating panels), which open and close on demand rather than in response to sunlight. Additionally, the SIPs in the Kinetic Skyscraper are embedded with carbon fibers, making them even stronger than traditional structural insulated panels, which rely on a foam core sandwiched between oriented strand board, or OSB. Despite this added reinforcement, the SIPs used in this skyscraper have a very small environmental footprint combined with very high insulative values, and are both safe to use and highly versatile. The Kinetic Skyscraper concept, developed by Victor Kopieikin and Pavlo Zabotin of Ukraine, targets Mexico City, whose population is already around 20 million and expected to continue growing, leaving the region’s soil and water badly polluted and its air potentially unbreathable without a mask. To address these problems, the Kinetic Skyscraper offers many units of affordable housing in a small area, thanks to building up rather than out, and provides additional, enclosed green spaces for recreation or gardening using the cooled water from an onsite geothermal plant like that used in the Clock Shadow Building in Milwaukee. Additional power will be provided via solar panels all over the building façade, and the issue of polluted air will be addressed by using cyanobacteria to create an isolated (and thus protected) self-cleaning atmosphere. This process also provides a form of passive thermal energy that can be converted to electricity via a waste heat recovery process, thus addressing all the issues the eVolo competition sets as obstacles to developing vertical density in an overcrowded future world.
<urn:uuid:75955b44-daa9-4a0d-b208-0b7d2174449e>
CC-MAIN-2017-22
http://earthtechling.com/2012/04/kinetic-skyscraper-blossoms-with-living-space/
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607620.78/warc/CC-MAIN-20170523103136-20170523123136-00577.warc.gz
en
0.948434
438
2.734375
3
As author Peter Daisyme accurately puts it, “While many people debate whether little computers in wrist devices, clothing, refrigerators and vehicles will truly change our lives, actionable change is already occurring on the business application side of the IoT.” Regardless of which side of the argument you’re on, the impact the IoT is having in our societies at a macro level is undeniable. While you’re reading this, devices that measure and predict air pollution, vehicle traffic levels, machine failure rates, fire warnings, and crime rates are being employed to improve the lives of hundreds of thousands of people. And as cities and businesses become more interconnected by the hour, we’ll find more and more useful ways in which the IoT can help us better utilize our resources and lift the burden we put on our environment. In this article, you’ll gain deeper insights into how IoT devices are being used by companies and governments alike to obtain accurate data across many potentially life-saving use cases. You’ll also learn about what the future holds in store for us as cities powered by artificial intelligence become near-sentient beings capable of reacting in real time to the habits and living conditions of their inhabitants.
<urn:uuid:ca5b7cdf-62f8-43c5-bb41-835eddcaa5fa>
CC-MAIN-2020-50
https://shop.optanesystems.com/incredible-real-world-applications-prove-the-iot-is-here-to-stay/
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141685797.79/warc/CC-MAIN-20201201231155-20201202021155-00109.warc.gz
en
0.958324
248
2.734375
3
Diabetes is a growing worldwide epidemic. Approximately 29.1 million people in the U.S. are living with diabetes, according to a 2014 Centers for Disease Control and Prevention report. Increased blood sugar levels that occur with diabetes can damage vital organs and nerves. The recommended mainstays for diabetes treatment are blood-sugar-lowering medications, a healthy diet and regular exercise. Many people are also interested in natural remedies such as tamarind, a small pod-like fruit. The fruit pulp and seeds are the main medicinal parts, but the leaves and the bark of the tree are also used as folk medicines. Some preliminary evidence from animal and laboratory studies indicate that tamarind might have beneficial effects for diabetes -- but tamarind has not yet been studied in humans, so whether it might be useful for people with diabetes remains uncertain. Blood Sugar Effects A February 2014 article in the "Pakistan Journal of Biological Sciences" reported on the effects of an extract of tamarind tree bark on blood sugar levels in rats. In one experiment, rats were pretreated to artificially elevate their blood sugar levels. Tamarind bark extract was then administered and was found to substantially reduce blood sugar levels in the test rats. In a second experiment, tamarind bark extract was administered to rats, followed by a large amount of sugar. The bark extract blunted increases in blood sugar in the test rats. While these preliminary animal experiments are promising, it's important to note that tamarind bark extract has not been tested in people. To date, little to no documented research has been performed to determine whether tamarind fruit, seeds or bark extract might benefit people with diabetes. Oxidative stress refers to a chemical imbalance in the body caused by excess accumulation of substances called free radicals. Oxidative stress can damage body tissues, such as the insulin-producing cells of the pancreas. It's also a contributing factor to the development and progression of diabetes. Antioxidants neutralize free radicals, reducing oxidative stress and protecting tissues from related damage. A study published in the April-June 2014 issue of "Pharmacognosy Research" showed that an extract of ground tamarind seeds had antioxidant effects in the laboratory. A March-April "British Journal of Diabetes and Vascular Disease" analysis of relevant medical research suggests that supplementation with strong antioxidants like vitamins C and E might help control blood sugar levels by combating oxidative stress. However, no studies have been conducted to determine whether any tamarind products might have similar effects in people with diabetes. Diabetes can damage the kidneys over time. One of the first signs of diabetes-related kidney damage is leakage of blood proteins into the urine. A study published in the November-December 2013 issue of "Acta Poloniae Pharmaceutica" tested the effects of tamarind bark extract on diabetic rats. In addition to inducing lower blood sugar levels, rats treated with the extract were found to have blood protein levels that suggested less protein leakage from the kidneys. The authors concluded this may have been from a protective effect of tamarind bark extract on the kidneys. Again, these findings are potentially encouraging. But it remains unknown whether any tamarind product might have beneficial effects on kidney function in people with diabetes, because this has never been studied. 
Warnings and Precautions Although tamarind bark and seed extracts have been shown to have beneficial effects in a few laboratory and animal studies, human research is still lacking. This makes it impossible to know whether tamarind fruit, seeds or bark extract might be helpful for people with diabetes. Tamarind is not a replacement for diabetes medications, and it's important to not stop or change your diabetes medication dosages without your doctor's approval. Talk with your healthcare provider before using any natural remedy for your diabetes to be sure it's safe, and to avoid any interactions with medicines you may be taking or other harmful effects. Side effects of tamarind consumption are uncommon but may include indigestion and possible erosion of the teeth from long-term use because of its high acid content. In addition, blood levels of ibuprofen (Advil, Motrin) and aspirin can increase when taken with tamarind, possibly causing overdose symptoms and increased bruising.
<urn:uuid:59e328cc-488e-44a8-b9eb-5b876b118bc5>
CC-MAIN-2017-17
http://www.livestrong.com/article/450657-tamarind-for-diabetes/
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917124478.77/warc/CC-MAIN-20170423031204-00639-ip-10-145-167-34.ec2.internal.warc.gz
en
0.942491
870
3.4375
3
Something big is happening this week on the Central Coast. Our Nations have partnered with Oceana Canada, Fisheries and Oceans Canada and Ocean Networks Canada to execute a deep-sea expedition in Kitasoo/Xai’Xais, Heiltsuk, and Wuikinuxv territories. Using a remotely operated vehicle (ROV), the expedition team will probe deep into the waters of our territories, exploring areas of high ecological, cultural and economic significance, including Kynoch Inlet, Seaforth Channel and Fitz Hugh Sound. Diving between 200 and 500 m deep, the video and photos the expedition team will produce will show us a part of our territory we’ve never seen before; almost no scientific exploration has occurred at these depths on the Central Coast. For our Nations this is an opportunity to fill in data gaps that will enhance marine conservation planning, including the Marine Protected Areas network process that is now underway. In particular, this project will provide the first view of deep-water rockfish populations, sponges and corals in our territories. In addition to gathering scientific data, the project will engage the youth of Klemtu and Bella Bella with scheduled visits to local schools, student visits to the research vessel, and through existing programs like the Supporting Emerging Aboriginal Stewards (SEAS) Community Initiative. But the best part is that you can join the expedition too! Oceana will be broadcasting live footage from the expedition. So if you’d ever wondered what it looks like down there, now is your chance to find out. Stay tuned to our blog and the CCIRA Facebook page for more updates from this exciting expedition!
<urn:uuid:8ce1d569-b14b-4905-be5a-1b5563d11080>
CC-MAIN-2018-51
https://www.ccira.ca/2018/03/deep-sea-expedition/
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824822.41/warc/CC-MAIN-20181213123823-20181213145323-00389.warc.gz
en
0.909262
390
2.5625
3
Genetics may play a role in your risk of developing age-related hearing loss, according to several studies. According to a study called Genetic Variation Linked to Age-Related Hearing Loss, published on the website of the US National Institute on Aging, there is a link between a specific gene and hearing loss. This gene, glutamate metabotropic receptor 7 (GRM7), has been found in older people with hearing loss. In fact, this gene has been strongly associated with both speech reception thresholds (SRTs) and pure-tone thresholds (PTs). SRTs measure the softest level at which you can begin to understand 50% of spoken words, while PTs measure the softest level at which you can detect simple sounds at particular frequencies. This finding indicates there is a genetic association, so if you have hearing loss in your family you may want to get yourself tested, particularly if you or a loved one have noticed any signs of hearing loss. What are the signs of hearing loss? You may feel others are mumbling, particularly if there is any background noise. Others might complain that you have the TV or radio turned up too loud. You may often require people to repeat themselves so you can understand what they are saying. Sometimes you may even misunderstand what others are saying to you and respond inappropriately. Some people also have a ringing in their ears. This is known as tinnitus and can also take the form of hissing, roaring or buzzing noises. Often loved ones notice the signs of hearing loss first, but if you suspect you have this issue, it is recommended you have your hearing tested; call 1800 340 631 to book a check-up with your local Audika hearing clinic.
<urn:uuid:0c89d0d4-9e36-40db-bf61-bef088b3f891>
CC-MAIN-2020-05
https://www.audika.com.au/hearing-news-blog/my-hearing/when-should-you-get-a-hearing-test-if-you-have-a-family-history-of-hearing-loss
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250607118.51/warc/CC-MAIN-20200122131612-20200122160612-00083.warc.gz
en
0.962574
386
3.15625
3
Facebook’s new feature provides additional information about articles. Facebook launched a new feature that combats fake news. Rolling out to US users, this new feature affects all articles posted to news feeds. When users post an article to their feeds, the post will contain more information about the article. The information includes the following: - Publisher’s Wikipedia page (if any) - Related articles - Amount of times people shared the article on Facebook - Location of shares - Option to follow the publisher’s page - More stories by the publisher - Friends who shared the article Currently, Facebook is also testing a feature that provides more information about an article’s author including Wikipedia entry, option to follow author’s page or profile, and previous articles published. These changes come as a result of the Russian troll accounts that disseminated politically divisive ads and fake news stories to interfere with the 2016 US presidential election. Since the discovery of Russian interference on its platform, Facebook launched research involving the Facebook community, academics, and industry partners. Together, they found that additional information about a news story helps users evaluate whether or not a source is trustworthy. Facebook’s new feature is the result of those findings.
<urn:uuid:18044fce-706a-4e8b-bf05-d34e7cc35dad>
CC-MAIN-2022-27
https://www.advertisemint.com/new-facebook-feature-gives-users-information-articles/
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683683.99/warc/CC-MAIN-20220707033101-20220707063101-00778.warc.gz
en
0.882339
255
2.953125
3
The clone method in the Object class was made protected so that other classes are not able to call it on an instance of an arbitrary class. You first need to be clear about how protected access works. Given a superclass A with a protected field val, both subclasses B and C can access val, but only on instances of their own class; code in B that tries to reach val through a reference of type C will not compile when the classes are in different packages. (The code snippets the original post showed here did not survive; a hedged sketch follows below.) Coming back to the clone method: suppose you create an Employee class that holds a reference to an Address object. Because clone is protected, only the Employee class can clone its instances. If clone had been declared public in the Object class, then other classes would have been able to clone instances of the Employee class. But there would be no deep cloning, i.e. if you cloned an instance of the Employee class, the original object and the cloned Employee object would share the same Address object. That's why it is made protected. If you want other classes to be able to clone instances of your class, you override the clone method, make it public and take care of deep cloning.
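The following Java sketch is a hedged reconstruction of the examples the post refers to ("if you try this", "a class like this"); the class and member names (A, B, C, val, Employee, Address) are taken from the prose, and the exact bodies are assumptions rather than the original code.

```java
// Protected access: both subclasses see the field, but only through their own type.
class A {
    protected int val;
}

class B extends A {
    void touch(B other, C sibling) {
        this.val = 1;        // fine: accessing the protected field on this B
        other.val = 2;       // fine: accessing it on another B
        // sibling.val = 3;  // rejected by the compiler when A, B and C sit in different
                             // packages: B may only reach val through a B (or subclass of B) reference
    }
}

class C extends A {
}

// Overriding clone: widen it to public and deep-copy the mutable Address field.
class Address implements Cloneable {
    String city;
    Address(String city) { this.city = city; }

    @Override
    public Address clone() throws CloneNotSupportedException {
        return (Address) super.clone();
    }
}

class Employee implements Cloneable {
    private Address address;
    Employee(Address address) { this.address = address; }

    @Override
    public Employee clone() throws CloneNotSupportedException {
        Employee copy = (Employee) super.clone(); // shallow copy of every field
        copy.address = this.address.clone();      // deep copy, so the Address is not shared
        return copy;
    }
}
```

With an override like this in place, any class can call clone() on an Employee, and the copy carries its own Address rather than sharing the original's.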
<urn:uuid:8e9909c4-c3d7-49f4-be91-91405990cef4>
CC-MAIN-2022-40
https://coderanch.com/t/463238/java/Object-Cloning
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00096.warc.gz
en
0.950767
190
3.078125
3
Research Scholar, Department of Sociology, Aligarh Muslim University, Aligarh. Social exclusion broadly refers to a lack of participation in social life. It is a powerful form of discriminatory practice. In the course of human development, exclusion has taken the form of segregating a group of people from the social, political, economic, cultural and educational domains of societal life. Giddens defines social exclusion as "not about gradations of inequality, but about mechanisms that act to detach groups of people from the social mainstream". Muslims in India remain far below the national average in almost all aspects of life. The Sachar Committee estimates that the situation of Muslims in India is that of a deprived community, above that of SCs and STs but below that of Hindu general, Hindu OBCs and other socio-religious categories, in almost all indicators of development. The due representation of Muslims in parliament, state legislatures and Panchayati Raj institutions is crucial for the country, because this is the only way in which this excluded community can keep pace with other communities in development. Other scholars estimate that the situation of Muslims in India is that of an excluded community in economic, educational and political terms. This paper examines the literature on Muslim exclusion in India and suggests some measures for formulating an inclusion policy for them. Keywords: Exclusion, Development, Muslim Empowerment.
<urn:uuid:f8c6f0d1-793b-4cd2-aae6-273ff1ef814a>
CC-MAIN-2018-47
http://bcjms.bhattercollege.ac.in/v7n2sc02/
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743011.30/warc/CC-MAIN-20181116111645-20181116133645-00299.warc.gz
en
0.93503
295
2.59375
3
Q: At what age can my child start flute or clarinet lessons?
A: That depends very much on the individual's physical and mental development. Some children are big enough and strong enough to hold a flute or clarinet comfortably at the age of seven, while others would struggle. Some eight-year-olds can cope with learning about all the different elements of playing, while others find it very difficult to remember everything. Eight or nine is generally a good age to start, but there is no hard and fast rule. Remember: if a child starts too early, the struggle will probably put him or her off. If a young child is desperate to play, it is better to start on something like the recorder or ocarina until they are ready for the flute or clarinet.

Q: My child is set on learning the saxophone, but she is really too small to manage it. What can I do?
A: The alto saxophone, which most beginners start on, is bigger and heavier than the other instruments, and therefore it is usually better not to start lessons before the age of ten. If a child is impatient to learn the saxophone, it's a good idea to start on the clarinet, as it is very easy to transfer the skills learned directly to the saxophone.

Q: Is the flute or clarinet easier to learn?
A: That's a question I'm often asked, and it's impossible to answer! Some people find the clarinet easier and others find the flute easier. The level of skill and musical ability required for both instruments is the same, but the embouchure (mouth position) for the two instruments is very different, so most people find one much easier to get a sound out of than the other. If you are not sure which instrument to choose, come along and have a go on both to see which suits you better.

Q: Will I have to take exams?
A: Not unless you want to. A lot of students like to take exams, as it gives them a goal to aim for, a chance to be assessed by someone other than their teacher, and a good way of monitoring their own progress. Other students feel that exams take away from the pleasure of playing and don't want to do them - and that's fine. You will be taught just as carefully whether or not you want to take exams!

Q: What exams do you enter people for?
A: Generally I use the Associated Board or the Trinity Guildhall graded exams (see www.abrsm.org and www.trinitycollege.co.uk). It is not necessary to take all the grades as you progress: for example, a student might start at grade 2, then take 3, 5 and 6, using the grades to help at particular stages in his or her development. After grade 8 there are various options depending on what the student wishes to do, and these will be discussed thoroughly with him or her at that stage.
<urn:uuid:32df4eb3-e100-4e99-af2a-180e6a104821>
CC-MAIN-2015-18
http://www.daphnegaddmusic.com/index.php?p=1_4_FAQs
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246655589.82/warc/CC-MAIN-20150417045735-00267-ip-10-235-10-82.ec2.internal.warc.gz
en
0.972926
609
2.8125
3
Growing your own vegetables gives you the freshest, most nutritious food and can also save money. It costs little to get started, but the investment will pay off from the first year. If you are on a budget, start with a small garden and a few crops that are likely to succeed. Lettuce (Lactuca sativa), spinach (Spinacia oleracea) and chard (Beta vulgaris) are easy to grow and will provide fresh greens for several weeks. A few pea (Pisum sativum) and bean (Phaseolus vulgaris) plants will produce at least a few pounds of vegetables. There may be a community garden in your area that will provide you with a garden plot, and you may find friends with whom to share tools, seeds and work. Before you buy seeds or start planting, consider which vegetables you and your family eat, the quantities you will be able to preserve and how you will preserve them. Lettuce and other greens are best eaten fresh and can be replanted every week or two. Some vegetables store best when blanched and frozen, while others can be canned. If you do not have the time, space, equipment or inclination to preserve more than you eat fresh, only plant in quantities you will eat. If you want to can tomatoes (Lycopersicon esculentum) or tomato sauce, for example, the canning equipment and jars will last for decades. Soil is key to ensuring a successful vegetable crop. Testing the soil pH level and adding nutrients will make the time and money you invest in buying and planting the seeds worthwhile. Compost is one of the best additives, and you can make it yourself by layering vegetable scraps, coffee grounds and grass clippings in a bin, bucket or compost pile. You can also make compost tea by steeping compost in water and using the liquid to water your plants. It is less expensive to add the nutrients your soil needs than to buy bags of soil to build the garden. Most seed packets contain more seeds than you will need for one year. Depending on the vegetable, seeds are viable for at least a few years if stored in a cool, dark place. Consider sharing seed purchases with a friend or neighbor, so you can both enjoy more variety without spending more money. Some vegetables produce seeds that are easy to harvest and save for the following year, which will save money or free up your budget to buy new varieties. You will likely be able to seed most vegetables directly into the garden. If you need to start some seeds indoors, or if you are growing in containers, you may find starter trays and pots at garage sales and thrift stores. Wash all the pots and sterilize them in a 10 percent bleach solution, then rinse well before using them. Putting your own system together is less expensive than buying ready-made kits that come complete with soil, pots and seeds. You can clean and reuse the starter trays and pots for several years.
<urn:uuid:0ace72ad-a39b-4beb-99bf-4b44b3752d15>
CC-MAIN-2017-51
http://homeguides.sfgate.com/vegetable-gardening-ideas-budget-73587.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948520218.49/warc/CC-MAIN-20171213011024-20171213031024-00437.warc.gz
en
0.94359
624
3.3125
3
Florida Crocodile dies while trying to be captured

Florida is well known for its reptiles, including the famous Florida alligator and crocodile. There is virtually no fresh waterway in Florida that does not have the potential to have gators in it. As Florida natives, we take these large reptiles for granted, but that does not mean they do not cause any harm. In a story this week from Coral Gables, two swimmers were bitten by a crocodile. The crocodile, nicknamed Pancho, had been living in the canals in the area for some time. Reports state the FWC had tried to relocate him, but he always returned to his familiar canals. Unfortunately, the 12-foot, 300-pound crocodile died on shore after being captured. This is a sad tale for both humans and Floridian nature. Nature lovers in the state make every effort to co-exist with Florida nature, but sometimes living in such close proximity to wildlife ends in tragedy like this. For more information about Florida crocodiles, visit Floridian Nature's page on crocodiles, and don't forget about our alligator page too!
<urn:uuid:293e11dd-456d-4043-b1d5-d8e3413309c5>
CC-MAIN-2017-30
http://floridiannature.blogspot.com/2014/08/florida-crocodile-dies-while-trying-to.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424889.43/warc/CC-MAIN-20170724162257-20170724182257-00224.warc.gz
en
0.975423
226
2.546875
3
Sizes of Infinity

Everyone knows about infinity, but most people don’t know that there are actually different sizes of infinity: the transfinite numbers. This may sound odd at first; after all, how do you define the size of something that is infinite? For example: there are an infinite number of even numbers. But, to be more precise, there is a “countable infinity” of even numbers; in other words, you can enumerate them. Say we start at 0 and call it the first even number; then we say that 2 is the second even number, -2 is the third, 4 is the fourth, -4 is the fifth, et cetera. In this way we can assign an integer to each even number. (Note that we can choose any numbering scheme we like; what’s important isn’t how we enumerate them, but rather that we can.) Therefore, the size of the even numbers is as big as the size of the integers, and that size is “countable infinity”, often represented by the symbol aleph zero. A similar argument can be given for odd numbers, positive numbers, negative numbers, primes, and so forth. On the other hand, when one tries to count the real numbers, a problem arises. For any two real numbers there will always exist another between them, and it is not possible to create a one-to-one correspondence between the real numbers and the integers; in other words, they are impossible to count. For a rigorous proof of this fact see Cantor’s diagonal argument. Notice that between any two rational numbers (numbers of the form a/b) there is also always another number; nevertheless, we can construct a one-to-one correspondence between the rationals and the integers. So, in fact, there are just as many rationals as there are integers! One such correspondence comes from the Calkin-Wilf tree.
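To make the countability of the positive rationals concrete, here is a short illustrative sketch (in Java; it is not part of the original post) that enumerates them by walking the Calkin-Wilf tree breadth-first. The root is 1/1 and each node a/b has children a/(a+b) and (a+b)/b; every positive rational in lowest terms appears exactly once in the tree, so reading it level by level assigns each rational a position in a single list.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class CalkinWilf {
    public static void main(String[] args) {
        Queue<long[]> frontier = new ArrayDeque<>();
        frontier.add(new long[] {1, 1}); // root of the Calkin-Wilf tree: 1/1

        // Print the first 15 positive rationals in breadth-first (level) order.
        for (int position = 1; position <= 15; position++) {
            long[] q = frontier.remove();
            System.out.printf("#%d -> %d/%d%n", position, q[0], q[1]);
            frontier.add(new long[] {q[0], q[0] + q[1]}); // left child  a/(a+b)
            frontier.add(new long[] {q[0] + q[1], q[1]}); // right child (a+b)/b
        }
    }
}
```

The first terms printed are 1/1, 1/2, 2/1, 1/3, 3/2, 2/3, 3/1, ..., which is the Calkin-Wilf sequence: an explicit numbering of the positive rationals.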
<urn:uuid:7e92c174-059f-4288-8605-4d317c2622af>
CC-MAIN-2014-35
http://sayitwithscience.tumblr.com/post/7514514920/sizes-of-infinity
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535922763.3/warc/CC-MAIN-20140909054622-00456-ip-10-180-136-8.ec2.internal.warc.gz
en
0.912955
699
3.34375
3
04 Apr 6 tips for fussy eaters Jessica Hoskins, a nutritionist, shares her tips for parents to create delicious and healthy kids meals at home. What are your tips for fussy eaters? - Make mealtimes a positive experience by focusing on what your child tries and eats rather than what they’re not. Trying a new food should be treated as a happy event rather than a stressful one. - Get your child in the kitchen. Here they can watch the process of food preparation, touch the food, taste the food and experience the smells and sounds. Participation in the preparation of the family meal is empowering for a child and will often result in it being gobbled down. - Eat together. Many parents get into the habit of serving children separately. Whilst this seems more convenient, eating together takes the focus off the child eating and becomes a more relaxed and social event in which everyone is participating. Studies have shown that people who eat in a social setting are more likely to have a healthier diet. Of course, we all want to enjoy some adult time, so perhaps save those occasions for Friday and Saturday nights and practice family mealtimes during the week. - Check your own eating behaviour. Remember that your child is always observing you and is likely to mirror your eating behaviours. If you are not interested in eating healthy nutritious food, then why should they be? - Compartmental lunchboxes and trays are a useful tool, particularly if your child’s fussy eating is due to a struggle over choice. You can regularly offer a variety of different foods to choose from, but always include one nutritious option that you know they’ll eat. - Mindful eating. Screen time is not a good idea during any food consumption. It creates mindless eating that can lead to overeating, poor food choices and an unadventurous palate. Screen time also removes the social aspect of meal times. Rather than switching on the screen, try allowing your child to experience their meal by chatting about the different smells, textures and tastes they are experiencing. Jessica is a Byron Bay-based clinical nutritionist, herbalist and natural health educator. She is the founder and consultant at Sage and Folk, a business which she started through her passion for womens’ and pediatric health.
<urn:uuid:7c4637bb-f5ab-49f9-b7c3-f50c9767b439>
CC-MAIN-2020-45
https://www.childmags.com.au/6-tips-for-fussy-eaters/
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107894759.37/warc/CC-MAIN-20201027195832-20201027225832-00705.warc.gz
en
0.970473
472
2.53125
3
Maintaining oral health can seem trivial. You might think that poor oral health will only result in tooth loss and the like. But make no mistake: oral health problems can also contribute to many other diseases, as reported by Health Me Up (17/12) below.

1. Periodontal Disease
Periodontal disease is a disease of the gums and the structures that support the teeth. It usually begins with gingivitis, which causes swollen, red, bleeding gums. If not promptly treated, the disease can worsen and damage the gum tissue and the bone supporting the teeth.

2. Endocarditis
Endocarditis is an inflammation of the inner lining of the heart. The inflammation occurs when bacteria from the mouth are carried by the bloodstream through diseased or bleeding gums.

3. Heart disease
Heart disease is also associated with dental hygiene. If bacteria from the mouth are carried by the bloodstream to the heart, they can contribute to problems such as clogged arteries and stroke.

4. Decreased memory
Poor oral and dental health affects not only the heart but also the brain. Poor dental health can narrow and block the arteries leading to the brain, and arteries affected by these bacteria can contribute to dementia or memory loss. (Read: Brushing And Flossing Can Reduce Dementia Risk)

5. Diabetes
The disease most strongly associated with dental health is diabetes. Irritation that begins in the mouth can weaken the body's ability to process blood sugar, and people who have diabetes already tend to have problems due to a lack of insulin.

6. Trouble conceiving
Recent research presented at the European Society of Human Reproduction and Embryology shows that women with poor oral health take longer to become pregnant.

7. Problems of pregnancy
Women with gum disease have a risk of pregnancy complications twice as great as women who do not have mouth problems. Complications can include premature birth, among others, because the chemicals that help trigger labor can be raised by oral bacteria, so increased bacterial gum disease can cause complications of pregnancy.

8. Cancer
Recent studies have shown a link between oral health and some types of cancer. Related cancers include those of the head, neck, esophagus and lungs, associated with poor tooth-brushing habits, periodontal disease and tooth decay.

9. Lung problems
Periodontal disease can cause respiratory illnesses like pneumonia to worsen, most likely because of the number of bacteria carried to the lungs.

10. Obesity
Research has also shown a link between obesity and gum problems. It seems that periodontitis can develop faster and more severely when excess body fat is present.

The ten diseases above can arise simply from oral health problems, so there is no reason to underestimate the cleanliness and health of your mouth.
<urn:uuid:b23b3c5c-8cf4-4eea-a7d3-3f9506bfe87d>
CC-MAIN-2014-23
http://medicmagic.net/10-diseases-caused-by-poor-oral-health.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997895170.20/warc/CC-MAIN-20140722025815-00117-ip-10-33-131-23.ec2.internal.warc.gz
en
0.940033
587
2.984375
3
The night sky in October is full of comings … and goings. Venus remains in the west at dusk. It outshines everything but the Sun and Moon, so you can begin observing it during deep twilight. Saturn leaves the evening sky this month. For the first few days of October, you can look for it in evening twilight to the lower right of Venus. After mid-month, though, it’s hard to see. Saturn is behind the Sun (at conjunction) on Nov. 6. Jupiter is higher in the morning sky this month. Look for it high in the south at dawn. Mars, much dimmer than Jupiter, now pulls away from it in the morning sky. It remains in the east at dawn. On the morning of Oct. 15, look for it near Regulus in Leo. In October, the Big Dipper is to the lower left of the North Star at dusk, and soon sets. As a result, it may be hard to see if you have trees or buildings north of you. As the Big Dipper sets, though, Cassiopeia rises. This is a pattern of five stars in a distinct W shape, which lies directly across the North Star from the Big Dipper. Look for Cassiopeia high in the north on fall and winter evenings. Autumn represents sort of an “intermission” in the sky, with bright summer stars setting at dusk, while bright winter stars have not yet risen. The “teapot” of Sagittarius sets in the southwest at dusk. The Summer Triangle is high in the west. Meanwhile, the Great Square of Pegasus is in the east, indicating the start of autumn. The stars rising in the east are much dimmer than those overhead and in the southwest, because when you face east at dusk in October, you face out of the Milky Way plane. The center of our galaxy lies between Scorpius and Sagittarius, while the Summer Triangle is also in the galactic plane. Pegasus, on the other hand, is outside the plane of our galaxy and is a good place to look for other galaxies. Moon Phases in October 2013: New October 4, 7:33 p.m. 1st Quarter October 11, 6:03 p.m. Full October 18, 6:36 p.m. Last Quarter October 26, 6:41 p.m. The full moon of Oct. 18 enters the penumbra, a region in which Earth partially blocks the Sun. Unlike the full shadow (umbra), however, the penumbra only imperceptibly darkens the Moon. Sat., Oct. 12, is our annual Astronomy Day at the George Observatory, which lasts from 3 to 10 p.m. at our observatory in Brazos Bend State Park. See here for a full list of activities. On most clear Saturday nights at the George Observatory, you can hear me do live star tours on the observation deck with a green laser pointer. If you’re there, listen for my announcement.
<urn:uuid:352c2ec0-7c10-48df-b75f-02f30b7aa69d>
CC-MAIN-2017-22
http://blog.hmns.org/2013/10/october-2013-stargazing/
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609061.61/warc/CC-MAIN-20170527210417-20170527230417-00400.warc.gz
en
0.939563
638
2.65625
3
Lockable Knee Brace Speeds Rehabilitation The space shuttle zips along at 17,000 mph. The Ares rocket will go from zero to 1,000 mph in 57 seconds. NASA knows a thing or two about making things go faster. Here on Earth, a NASA-enhanced knee brace is helping patients speed up their recovery times. Once upon a time at a meeting at the Marshall Center, orthotist Gary Horton met NASA engineer Neil Meyers. During a chat about knee braces, Meyer revealed the basics for a new type of brace with a lockable joint and hinge brake. Intrigued by the possibilities, Horton licensed the NASA technology and worked with the University of Arkansas to perfect the design. Seven years of development and testing created the Stance Control Orthotic Knee Joint (SCOKJ), which became available in 2002. Horton's brace helps patients with weak or missing thigh muscles -- the quadriceps -- and various degrees of knee instability. This might include people fighting polio, spinal cord injuries, and conditions like unilateral leg paralysis. It also speeds recovery when injuries won’t allow the knees to carry a person's full weight. The brace is effective because it mimics natural motion in the human knee. Stand up for a minute. Now look down at your knee. When you're standing, your knee is in a "locked" mode for balance and stability. Now take a few steps. When you walk, your knee has two modes: free motion and automatic stance control. If you needed to use a more traditional brace for an injury, it would support your knee but also limit its movement. Here's where space technology kicks in. When a patient wears the lockable knee brace, the knee has support AND freedom of movement. While walking, the brace lets the knee swing naturally. Each time the heel strikes the ground, the brace locks to provide stability, just like a healthy knee. When the heel lifts to take another step, the brace automatically unlocks and lets the knee return to free motion. Depending on a patient's needs, the brace can be adjusted to be triggered by weight bearing or joint motion. This smooth, automatic action allows a normal walking gait and stability when standing. Not only does this speed up a person's ability to get around, but it also helps speed up healing. It's just one more way that NASA technology is helping knee patients get a leg up on their recovery. You can read the entire article on the locking knee brace on page 54 in Spinoff 2008 Stance Control Orthotic Knee is a trademark, and SCOKJ® is a registered trademark of Horton Technology Inc.
<urn:uuid:bdda6777-6f91-486e-a2f5-b63eb82657b8>
CC-MAIN-2016-50
https://www.nasa.gov/topics/nasalife/knee_brace.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542665.72/warc/CC-MAIN-20161202170902-00468-ip-10-31-129-80.ec2.internal.warc.gz
en
0.910605
546
2.953125
3
Species are disappearing at an alarming rate, but together we can help. The National Geographic Photo Ark is using the power of photography to inspire people to help save species at risk before it’s too late. Photo Ark founder Joel Sartore has photographed more than 9,000 species around the world as part of a multiyear effort to document every species living in zoos and wildlife sanctuaries, inspire action through education, and help save wildlife by supporting on-the-ground conservation efforts. Learn how you can get involved by clicking here. In his quest to document our world’s astonishing diversity, Joel has taken portraits of 9,000 species — and counting! He’s over half way to his goal of documenting all of the approximately 12,000 species living in the world’s zoos and wildlife sanctuaries. Thousands of species are at risk and time is running out. Join National Geographic photographer Joel Sartore as he leads the Photo Ark project to document our planet's biodiversity and support on-the-ground conservation efforts to protect species at risk. PHOTOS BY JOEL SARTORE/NATIONAL GEOGRAPHIC PHOTO ARK
<urn:uuid:ef3cb096-5d42-4285-9a9d-a18c283a24e2>
CC-MAIN-2019-13
https://www.nationalgeographic.org/projects/photo-ark/
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202188.9/warc/CC-MAIN-20190320004046-20190320030046-00076.warc.gz
en
0.931264
242
2.609375
3
How Risky Are Whole-Body Airport Scanners? Experts Analyze the Potential Cancer Risk From Low Levels of Radiation March 16, 2011 -- Full-body scanners have become the norm at airports around the country. Their use is aimed at keeping passengers safe. However, some experts worry that the most commonly used type of scanner heightens the risk of cancer because it emits low levels of ionized radiation. Two articles in the April issue of Radiology assess the risks. The type of scanner in question scans travelers with what are called backscatter X-rays to detect objects hidden under clothing, such as nonmetallic explosives and weapons. Each time a passenger passes through one of these scanners, he or she is exposed to a tiny amount of radiation. An individual’s risk of dying from cancer from such an exposure is estimated to be vanishingly small -- about one in 10 million for a trip involving two screening scans, writes David J. Brenner, PhD, DSc, director of the Center for Radiological Research at Columbia University Medical Center. According to the FDA, which regulates X-ray devices, “There is no need to limit the number of individuals screened or, in most cases, the number of screenings an individual can have in a year.” But Brenner believes the picture changes when you look at it from a larger, public health perspective, in which a billion travelers are scanned in the U.S. each year. “In the present context, if a billion X-ray backscatter scans were performed each year,” writes Brenner, “one might anticipate 100 cancers each year resulting from this activity.” Brenner also points to a heightened risk of cancer among children, which he says is five to 10 times higher than the risk to middle-age adults. Flight personnel, who pass through scanners hundreds of times each year, could also be at greater risk than the average traveler. "Super frequent fliers or airline personnel, who might go through the machine several hundred times each year, might wish to opt for pat-downs,” Brenner says in a news release. “The more scans you have, the more your risks may go up -- but the individual risks are always going to be very, very small." Appropriate Use of Scanners David A. Schauer, ScD, CHP, author of the second article, acknowledges the risks of using backscatter X-ray scanners and focuses his paper on ways to ensure that such scanners are used appropriately. “People should only be exposed to ionizing radiation for security screening purposes when a threat exists that can be detected and for which appropriate actions can be taken,” writes Schauer, executive director of the National Council on Radiation Protection and Measurements. “Any decision that alters the radiation exposure situation should do more good than harm.” Schauer advocates for strong government regulation of the use of backscatter X-ray scanners to be certain that passengers are not exposed unnecessarily or to unsafe levels of radiation. “When a government concludes that security screening of people with backscatter X-rays is justified,” he writes, “then regulatory control should be implemented.” Like Schauer, Brenner believes that backscatter X-ray scanners are, on the whole, safe. "As someone who travels just occasionally, I would have no hesitation in going through the X-ray backscatter scanner," Brenner says in a news release. However, he argues against their use in favor of an alternative -- and equally effective -- type, known as a millimeter wave scanner, which does not involve ionizing radiation. 
“Whatever the actual radiation risks associated with X-ray backscatter machines,” Brenner concludes, “a comparable technology that does not involve X-rays is a preferable alternative.”
<urn:uuid:60057f5f-a247-4c18-9189-674774420eea>
CC-MAIN-2013-48
http://www.webmd.com/cancer/news/20110316/how-risky-are-whole-body-airport-scanners
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163047055/warc/CC-MAIN-20131204131727-00000-ip-10-33-133-15.ec2.internal.warc.gz
en
0.952157
806
2.84375
3
Roughly 20 percent of epileptic women who take the antiseizure drug valproate during pregnancy will have a fetus with a serious adverse outcome, almost twice the rate associated with the next most problematic antiepileptic drug, new research shows. Several years ago, the American Academy of Neurology and other groups issued guidelines for treating epileptic women during pregnancy. Since antiepileptic drugs, in general, have been linked to adverse fetal outcomes, the strategy was to optimize treatment before conception, using a single antiepileptic drug if possible, at the lowest effective dose. These guidelines did not, however, differentiate between the various drugs for their potential to cause birth defects. At the time the guidelines came out, “there were no data comparing the adverse fetal effects of the different antiepileptic drugs,” lead author Dr. Kimford J. Meador, from the University of Florida in Gainesville, told Reuters Health. “In the last two years, however, seven studies have come out, all showing an increased risk with valproate.” The initial focus of the current study wasn’t to look at the rate of birth defects and fetal deaths associated with antiepileptic drugs, Meador noted. “We wanted to look at the impact of these drugs on neurodevelopmental later in life.” They were only monitoring the children until they became old enough to complete the tests. “Then the early effects came to light.” The study, which is reported in the journal of Neurology, involved 333 pregnant women who were drawn from 25 epilepsy centers across the US and UK between October 1999 and February 2004. All of the women were receiving antiepileptic therapy with a single drug, including carbamazepine in 110, lamotrigine in 98, phenytoin in 56 and valproate in 69. The rate of serious adverse outcomes, which included congential malformation and fetal death, was 20.3 percent for valproate, 10.7 percent for phenytoin, 8.2 percent for carbamazepine, and 1.0 percent for lamotrigine. Valproate use was associated with two fetal deaths and 12 congenital malformations, including skull deformities, heart structural abnormalities, kidney swelling due to backup of urine; and cleft palate, a birth defect in which the mouth or lip tissues don’t properly form during development; and several others. My personal view is that valproate should not be used as the first therapy in pregnant women, Meador said. However, for women who fail to response to other antiepileptic drugs and must use valproate, “their doctor should emphasize that despite the increased risk of adverse outcomes, the majority of pregnant women who take the drug have normal, healthy babies.” Meador said the neurodevelopmental results of the study are now coming in and he hopes to present them at an upcoming meeting. SOURCE: Neurology, August 8, 2006. Revision date: July 7, 2011 Last revised: by Andrew G. Epstein, M.D.
<urn:uuid:3a50a413-70c0-4175-81c5-bf1eb3adcb7a>
CC-MAIN-2018-39
http://www.health.am/pregnancy/more/valproate-linked-to-high-birth-defect/
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267161501.96/warc/CC-MAIN-20180925103454-20180925123854-00505.warc.gz
en
0.967322
662
2.828125
3
A study published in the August 2018 issue of the Journal of the American Academy of Child and Adolescent Psychiatry(JAACAP) reports on a group of boys diagnosed with ADHD in childhood (when they were, on average, 8 years old) and followed into adulthood (when they were in their early 40s). The goal was to examine whether boys’ characteristics in childhood and adolescence predicted their subsequent school performance, their work, and social adjustment. A major challenge has been to identify childhood features that are associated with a favorable vs. unfavorable long-term outcome. “Research shows that children with ADHD achieve lower levels of education, have poorer social functioning, and less success at work than peers without ADHD. Being able to identify indicators of future success early in life is critical to help inform preventive and therapeutic practices,” said lead author María Ramos-Olazagasti, a senior research scientist at Child Trends and assistant professor at Columbia University. The study conducted at the Hassenfeld Children’s Hospital at NYU Langone Medical Center focused on a cohort of 207 white, middle- and lower-class boys between the ages of 6 to 12 years, who were referred to a psychiatric clinic by their school due to behavior problems. The children in the study, who had to have IQ’s of at least 85, exhibited symptoms consistent with the DSM-5 definition of ADHD. The boys participated in three follow-up interviews, in adolescence at mean age 18, in early adulthood at age 25, and in mid adulthood at age 41. At each period, the study evaluated the participants’ social and occupational functioning, their overall adjustment, and their educational attainment. Most of the early characteristics failed to distinguish the poor versus good outcomes. There were two potentially important exceptions. For one, higher IQ levels were related to better function in several domains. Also, the study found that conduct problems in childhood were negatively related to overall adult functioning, educational attainment, and occupational functioning. This finding is remarkable given that none of the children had a conduct disorder when they entered the study. Thus, the finding indicates that even mild conduct problems may predict relatively low educational, occupational, and overall achievement later in life. Interestingly, the authors found that boys who had concrete educational goals for their future in adolescence had better overall functioning in adulthood. Clinicians still face many difficulties in identifying early predictors of functional outcomes among children with ADHD. However, the results provide some clinical guidance: “These results suggest that we should not overlook even relatively mild problems of conduct among children with ADHD, and that early interventions might be considered for children with a normal, but low, IQ,” said Dr. Ramos-Olazagasti. “These findings also show promise in highlighting the importance of goal setting and providing a rationale for examining young people’s attitudes toward their future.”
<urn:uuid:26157402-88a6-4316-8a8f-68b9a7c0df6a>
CC-MAIN-2021-31
https://healthylegacy.net/2018/08/01/can-we-predict-the-long-term-outcome-of-boys-with-adhd/
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153966.60/warc/CC-MAIN-20210730122926-20210730152926-00303.warc.gz
en
0.971521
586
3.078125
3
An excerpt from www.HouseOfNames.com archives copyright © 2000 - 2014 Where did the German Zuber family come from? What is the German Zuber family crest and coat of arms? When did the Zuber family first arrive in the United States? Where did the various branches of the family go? What is the Zuber family history? Spelling variations of this family name include: Zuber, Zueber, Zuhber, Züber, Zouber, Zoober and others. First found in Switzerland, where the name came from humble beginnings but gained a significant reputation for its contribution to the emerging mediaeval society. It later became more prominent as many branches of the same house acquired distant estates and branches, some in foreign countries, always elevating their social status by their great contributions to society. This web page shows only a small excerpt of our Zuber research. Another 236 words (17 lines of text) covering the years 1819 and 1876 are included under the topic Early Zuber History in all our PDF Extended History products. More information is included under the topic Early Zuber Notables in all our PDF Extended History products. Some of the first settlers of this family name were: Zuber Settlers in the United States in the 18th Century - Ulrig Zuber, who landed in New York in 1709 - Michel Zuber, who arrived in America in 1752 - David Zuber, who settled in Philadelphia in 1775 Zuber Settlers in the United States in the 19th Century - Ferdinand Zuber, who settled in Ohio in 1809-1852 - Franz Zuber, who came to New York in 1850 - Franz Zuber, who landed in New York, NY in 1850 - Joseph Von Zuber came to America in 1854 - August Zuber arrived in Chile in 1885 with his wife Anna Breithaupt Contemporary Notables of the name Zuber: - William Henry "Bill" Zuber (1913-1982), American Major League Baseball pitcher in the 30's and 40's - Catherine Zuber, American five-time Tony Award winning costume designer for the Broadway theater and opera - Thomas Francis "Tom" Zuber (b. 1972), American attorney, entrepreneur, and inventor, creator and CEO of LawLoop.com - Edward Fenwick Zuber (b. 1932), Canadian artist, known for his work as a War Artist - Maria T Zuber, E. A. Griswold Professor of Geophysics at MIT and participant in NASA planet mapping missions - Andreas Zuber (b. 1983), Austrian race car driver including in the GP2 Series - Steven Zuber (b. 1991), Swiss football forward - Marc Zuber (1944-2003), India-born, British actor
Until 1991 Yugoslavia was a Socialist Federal Republic comprising the Republics of Serbia, Croatia, Macedonia, Montenegro, Slovenia and Bosnia-Herzegovina, as well as the autonomous Serbian provinces of Kosovo and Vojvodina. In mid-1991, Slovenia and Croatia disassociated themselves from the Federation. In April 1992 the Federal Republic of Yugoslavia, comprising Serbia and Montenegro, was established but in June 2006 Serbia and Montenegro became independent states and the state union of Serbia and Montenegro ceased to exist. In respect of Serbia, the 1982 Double Taxation Convention between the United Kingdom and Yugoslavia is regarded as having remained in force. The text of the Tax Treaty can be found via https://www.gov.uk/government/publications/serbia-tax-treaties
Relative risk versus absolute risk: one cannot be interpreted without the other (Clinical Epidemiology in Nephrology)

For the presentation of risk, both relative and absolute measures can be used. The relative risk is used most often, especially in studies showing the effects of a treatment. Relative risks have the appealing feature of summarizing two numbers (the risk in one group and the risk in the other) into one. However, this feature is also their major weakness: the underlying absolute risks are concealed, and readers tend to overestimate the effect when it is presented in relative terms. In many situations the absolute risk gives a better representation of the actual situation, and from the patient's point of view absolute risks often give more relevant information. In this article, we explain the concepts of both relative and absolute risk measures. Using examples from the nephrology literature, we illustrate that unless ratio measures are reported with the underlying absolute risks, readers cannot judge the clinical relevance of the effect. We therefore recommend reporting both the relative risk and the absolute risk with their 95% confidence intervals, as together they provide a complete picture of the effect and its implications.
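To make the distinction concrete, the short sketch below (a hypothetical illustration, not taken from the article) computes the relative risk and the absolute risk difference for the same treatment effect at two different baseline risks, showing how an identical relative risk can correspond to very different absolute benefits.

```python
# Hypothetical illustration: the same relative risk at two baseline risks.
def risk_measures(risk_control: float, risk_treated: float) -> dict:
    """Return relative and absolute risk measures for two group risks."""
    relative_risk = risk_treated / risk_control             # ratio measure
    risk_difference = risk_control - risk_treated           # absolute measure
    nnt = 1 / risk_difference if risk_difference else None  # number needed to treat
    return {"RR": relative_risk, "ARD": risk_difference, "NNT": nnt}

# The same 50% relative reduction, very different clinical relevance:
high_baseline = risk_measures(risk_control=0.20, risk_treated=0.10)    # ARD = 0.10, NNT = 10
low_baseline = risk_measures(risk_control=0.002, risk_treated=0.001)   # ARD = 0.001, NNT = 1000

print(high_baseline)
print(low_baseline)
```

In both scenarios the relative risk is 0.5, but the number needed to treat differs by two orders of magnitude, which is exactly why reporting both measures is recommended.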
In library and archival science, digital preservation is a formal endeavor to ensure that digital information of continuing value remains accessible and usable. It involves planning, resource allocation, and application of preservation methods and technologies, and it combines policies, strategies and actions to ensure access to reformatted and "born-digital" content, regardless of the challenges of media failure and technological change. The goal of digital preservation is the accurate rendering of authenticated content over time. According to Harrod's Librarians' Glossary, digital preservation is the method of keeping digital material alive so that it remains usable as technological advances render the original hardware and software specifications obsolete.

Archival appraisal (or, alternatively, selection) refers to the process of identifying records and other materials to be preserved by determining their permanent value. Several factors are usually considered when making this decision. It is a difficult and critical process because the records that are selected will shape researchers' understanding of that body of records, or fonds. Appraisal is identified as A4.2 within the Chain of Preservation (COP) model created by the InterPARES 2 project. Archival appraisal is not the same as monetary appraisal, which determines fair market value. Archival appraisal may be performed once or at various stages of acquisition and processing. Macro appraisal, a functional analysis of records at a high level, may be performed even before the records have been acquired in order to determine which records to acquire; more detailed, iterative appraisal may be performed while the records are being processed.

Appraisal is performed on all archival materials, not just digital ones. It has been proposed that, in the digital context, it might be desirable to retain more records than have traditionally been retained after appraisal of analog records, primarily because of the declining cost of storage and the availability of sophisticated discovery tools that allow researchers to find value in records of low information density. In the analog context, such records may have been discarded or only a representative sample kept. However, the selection, appraisal, and prioritization of materials must be carefully considered in relation to the ability of an organization to responsibly manage the totality of these materials.

Often libraries, and to a lesser extent archives, are offered the same materials in several different digital or analog formats. They prefer to select the format that they feel has the greatest potential for long-term preservation of the content. The Library of Congress has created a set of recommended formats for long-term preservation, which would be used, for example, if the Library were offered items for copyright deposit directly from a publisher.

Identification (identifiers and descriptive metadata)

In digital preservation and collection management, discovery and identification of objects is aided by the use of assigned identifiers and accurate descriptive metadata. An identifier is a unique label used to reference an object or record, usually manifested as a number or a string of numbers and letters. As a crucial element of metadata to be included in a database record or inventory, it is used in tandem with other descriptive metadata to differentiate objects and their various instantiations.
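As a minimal sketch of this identification practice (the field names and record structure are illustrative assumptions, not a published standard), a collection manager might assign each object a persistent identifier and pair it with a small set of descriptive metadata:

```python
import uuid

def make_record(title: str, creator: str, date: str, subject: str) -> dict:
    """Create a minimal catalogue record: a unique identifier plus descriptive metadata.

    The field names here are illustrative; a real repository would follow a
    descriptive metadata schema such as Dublin Core and its own identifier policy."""
    return {
        "identifier": f"urn:uuid:{uuid.uuid4()}",  # unique, location-independent label
        "title": title,
        "creator": creator,
        "date": date,
        "subject": subject,
    }

record = make_record(
    title="Field notebook, survey season 1998",  # hypothetical object
    creator="Example Archive",                   # hypothetical creator
    date="1998",
    subject="survey data",
)
print(record["identifier"], "->", record["title"])
```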
Descriptive metadata refers to information about an object's content such as title, creator, subject, date etc... Determination of the elements used to describe an object are facilitated by the use of a metadata schema. Another common type of file identification is the filename. Implementing a file naming protocol is essential to maintaining consistency and efficient discovery and retrieval of objects in a collection, and is especially applicable during digitization of analog media. Using a file naming convention, such as the 8.3 filename, will ensure compatibility with other systems and facilitate migration of data, and deciding between descriptive (containing descriptive words and numbers) and non-descriptive (often randomly generated numbers) file names is generally determined by the size and scope of a given collection. However, filenames are not good for semantic identification, because they are non-permanent labels for a specific location on a system and can be modified without affecting the bit-level profile of a digital file. Data integrity provides the cornerstone of digital preservation, representing the intent to “ensure data is recorded exactly as intended [...] and upon later retrieval, ensure the data is the same as it was when it was originally recorded.” Unintentional changes to data are to be avoided, and responsible strategies put in place to detect unintentional changes and react as appropriately determined. However, digital preservation efforts may necessitate modifications to content or metadata through responsibly-developed procedures and by well-documented policies. Organizations or individuals may choose to retain original, integrity-checked versions of content and/or modified versions with appropriate preservation metadata. Data integrity practices also apply to modified versions, as their state of capture must be maintained and resistant to unintentional modifications. File fixity is the property of a digital file being fixed, or unchanged. File fixity checking is the process of validating that a file has not changed or been altered from a previous state. This effort is often enabled by the creation, validation, and management of checksums. While checksums are the primary mechanism for monitoring fixity at the individual file level, an important additional consideration for monitoring fixity is file attendance. Whereas checksums identify if a file has changed, file attendance identifies if a file in a designated collection is newly created, deleted, or moved. Tracking and reporting on file attendance is a fundamental component of digital collection management and fixity. Characterization of digital materials is the identification and description of what a file is and of its defining technical characteristics often captured by technical metadata, which records its technical attributes like creation or production environment. Digital sustainability encompasses a range of issues and concerns that contribute to the longevity of digital information. Unlike traditional, temporary strategies, and more permanent solutions, digital sustainability implies a more active and continuous process. Digital sustainability concentrates less on the solution and technology and more on building an infrastructure and approach that is flexible with an emphasis on interoperability, continued maintenance and continuous development. Digital sustainability incorporates activities in the present that will facilitate access and availability in the future. 
The ongoing maintenance necessary to digital preservation is analogous to the successful, centuries-old, community upkeep of the Uffington White Horse (according to Stuart M. Shieber) or the Ise Grand Shrine (according to Jeffrey Schnapp). Renderability refers to the continued ability to use and access a digital object while maintaining its inherent significant properties. Physical media obsolescence Physical media obsolescence can occur when access to digital content requires external dependencies that are no longer manufactured, maintained, or supported. External dependencies can refer to hardware, software, or physical carriers. File format obsolescence can occur when adoption of new encoding formats supersedes use of existing formats, or when associated presentation tools are no longer readily available. Factors that should enter consideration when selecting sustainable file formats include disclosure, adoption, transparency, self-documentation, external dependencies, impact of patents, and technical protection mechanisms. Formats proprietary to one software vendor are more likely to be affected by format obsolescence. Well-used standards such as Unicode and JPEG are more likely to be readable in future. Significant properties refer to the "essential attributes of a digital object which affect its appearance, behavior, quality and usability" and which "must be preserved over time for the digital object to remain accessible and meaningful." "Proper understanding of the significant properties of digital objects is critical to establish best practice approaches to digital preservation. It assists appraisal and selection, processes in which choices are made about which significant properties of digital objects are worth preserving; it helps the development of preservation metadata, the assessment of different preservation strategies and informs future work on developing common standards across the preservation community." Whether analog or digital, archives strive to maintain records as trustworthy representations of what was originally received. Authenticity has been defined as “. . . the trustworthiness of a record as a record; i.e., the quality of a record that is what it purports to be and that is free from tampering or corruption”. Authenticity should not be confused with accuracy; an inaccurate record may be acquired by an archives and have its authenticity preserved. The content and meaning of that inaccurate record will remain unchanged. A combination of policies, security procedures, and documentation can be used to ensure and provide evidence that the meaning of the records has not been altered while in the archives’ custody. Digital preservation efforts are largely to enable decision-making in the future. Should an archive or library choose a particular strategy to enact, the content and associated metadata must persist to allow for actions to be taken or not taken at the discretion of the controlling party. Preservation metadata is a key component of digital preservation, and includes information that documents the preservation process. It supports collection management practices and allows organizations or individuals to understand the chain of custody. Preservation Metadata: Implementation Strategies (PREMIS), an international working group, sought to “define implementable, core preservation metadata, with guidelines/recommendations” to support digital preservation efforts by clarifying what the metadata is and its usage. 
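Tying the earlier fixity discussion to the preservation metadata described here: a periodic audit can recompute each file's checksum, compare it against the value stored at ingest, and record the outcome as an event in the object's preservation history. The sketch below is an illustrative combination of those two ideas; the event fields are merely PREMIS-inspired assumptions, not the actual PREMIS schema.

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def fixity_check(path: str, expected_digest: str, agent: str = "audit-script 0.1") -> dict:
    """Recompute a file's checksum and return a PREMIS-inspired event record.

    Field names are illustrative, not official PREMIS semantic units."""
    actual = sha256(Path(path))
    return {
        "event_type": "fixity check",
        "event_datetime": datetime.now(timezone.utc).isoformat(),
        "object": path,
        "agent": agent,  # hypothetical tool name
        "outcome": "pass" if actual == expected_digest else "fail",
        "detail": {"algorithm": "SHA-256", "expected": expected_digest, "actual": actual},
    }

# Hypothetical usage against a manifest captured at ingest:
# event = fixity_check("data/report-1998.pdf", manifest["data/report-1998.pdf"])
# A "fail" outcome would typically trigger repair from a replicated copy.
```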
Intellectual foundations of digital preservation Preserving Digital Information (1996) The challenges of long-term preservation of digital information have been recognized by the archival community for years. In December 1994, the Research Libraries Group (RLG) and Commission on Preservation and Access (CPA) formed a Task Force on Archiving of Digital Information with the main purpose of investigating what needed to be done to ensure long-term preservation and continued access to the digital records. The final report published by the Task Force (Garrett, J. and Waters, D., ed. (1996). “Preserving digital information: Report of the task force on archiving of digital information.”) became a fundamental document in the field of digital preservation that helped set out key concepts, requirements, and challenges. The Task Force proposed development of a national system of digital archives that would take responsibility for long-term storage and access to digital information; introduced the concept of trusted digital repositories and defined their roles and responsibilities; identified five features of digital information integrity (content, fixity, reference, provenance, and context) that were subsequently incorporated into a definition of Preservation Description Information in the Open Archival Information System Reference Model; and defined migration as a crucial function of digital archives. The concepts and recommendations outlined in the report laid a foundation for subsequent research and digital preservation initiatives. To standardize digital preservation practice and provide a set of recommendations for preservation program implementation, the Reference Model for an Open Archival Information System (OAIS) was developed. OAIS is concerned with all technical aspects of a digital object’s life cycle: ingest, archival storage, data management, administration, access and preservation planning. The model also addresses metadata issues and recommends that five types of metadata be attached to a digital object: reference (identification) information, provenance (including preservation history), context, fixity (authenticity indicators), and representation (formatting, file structure, and what "imparts meaning to an object’s bitstream"). Trusted Digital Repository Model In March 2000, the Research Libraries Group (RLG) and Online Computer Library Center (OCLC) began a collaboration to establish attributes of a digital repository for research organizations, building on and incorporating the emerging international standard of the Reference Model for an Open Archival Information System (OAIS). In 2002, they published “Trusted Digital Repositories: Attributes and Responsibilities.” In that document a “Trusted Digital Repository” (TDR) is defined as "one whose mission is to provide reliable, long-term access to managed digital resources to its designated community, now and in the future." The TDR must include the following seven attributes: compliance with the reference model for an Open Archival Information System (OAIS), administrative responsibility, organizational viability, financial sustainability, technological and procedural suitability, system security, procedural accountability. The Trusted Digital Repository Model outlines relationships among these attributes. 
The report also recommended the collaborative development of digital repository certifications, models for cooperative networks, and sharing of research and information on digital preservation with regard to intellectual property rights. In 2004 Henry M. Gladney proposed another approach to digital object preservation that called for the creation of “Trustworthy Digital Objects” (TDOs). TDOs are digital objects that can speak to their own authenticity since they incorporate a record maintaining their use and change history, which allows the future users to verify that the contents of the object are valid. International Research on Permanent Authentic Records in Electronic Systems (InterPARES) is a collaborative research initiative led by the University of British Columbia that is focused on addressing issues of long-term preservation of authentic digital records. The research is being conducted by focus groups from various institutions in North America, Europe, Asia, and Australia, with an objective of developing theories and methodologies that provide the basis for strategies, standards, policies, and procedures necessary to ensure the trustworthiness, reliability, and accuracy of digital records over time. Under the direction of archival science professor Luciana Duranti, the project began in 1999 with the first phase, InterPARES 1, which ran to 2001 and focused on establishing requirements for authenticity of inactive records generated and maintained in large databases and document management systems created by government agencies. InterPARES 2 (2002–2007) concentrated on issues of reliability, accuracy and authenticity of records throughout their whole life cycle, and examined records produced in dynamic environments in the course of artistic, scientific and online government activities. The third five-year phase (InterPARES 3) was initiated in 2007. Its goal is to utilize theoretical and methodological knowledge generated by InterPARES and other preservation research projects for developing guidelines, action plans, and training programs on long-term preservation of authentic records for small and medium-sized archival organizations. Challenges of digital preservation Society's heritage has been presented on many different materials, including stone, vellum, bamboo, silk, and paper. Now a large quantity of information exists in digital forms, including emails, blogs, social networking websites, national elections websites, web photo albums, and sites which change their content over time. With digital media it is easier to create content and keep it up-to-date, but at the same time there are many challenges in the preservation of this content, both technical and economic. Unlike traditional analog objects such as books or photographs where the user has unmediated access to the content, a digital object always needs a software environment to render it. These environments keep evolving and changing at a rapid pace, threatening the continuity of access to the content. Physical storage media, data formats, hardware, and software all become obsolete over time, posing significant threats to the survival of the content. This process can be referred to as digital obsolescence. In the case of born-digital content (e.g., institutional archives, Web sites, electronic audio and video content, born-digital photography and art, research data sets, observational data), the enormous and growing quantity of content presents significant scaling issues to digital preservation efforts. 
Rapidly changing technologies can hinder digital preservationists work and techniques due to outdated and antiquated machines or technology. This has become a common problem and one that is a constant worry for a digital archivist—how to prepare for the future. Digital content can also present challenges to preservation because of its complex and dynamic nature, e.g., interactive Web pages, virtual reality and gaming environments, learning objects, social media sites. In many cases of emergent technological advances there are substantial difficulties in maintaining the authenticity, fixity, and integrity of objects over time deriving from the fundamental issue of experience with that particular digital storage medium and while particular technologies may prove to be more robust in terms of storage capacity, there are issues in securing a framework of measures to ensure that the object remains fixed while in stewardship. For the preservation of software as digital content, a specific challenge is the typically non-availability of the source code as commercial software is normally distributed only in compiled binary form. Without the source code an adaption (Porting) on modern computing hardware or operating system is most often impossible, therefore the original hardware and software context needs to be emulated. Another potential challenge for software preservation can be the copyright which prohibits often the bypassing of copy protection mechanisms (Digital Millennium Copyright Act) in case software has become an orphaned work (Abandonware). An exemption from the United States Digital Millennium Copyright Act to permit to bypass copy protection was approved in 2003 for a period of 3 years to the Internet Archive who created an archive of "vintage software", as a way to preserve them. The exemption was renewed in 2006, and as of 27 October 2009, has been indefinitely extended pending further rulemakings "for the purpose of preservation or archival reproduction of published digital works by a library or archive." Another challenge surrounding preservation of digital content resides in the issue of scale. The amount of digital information being created along with the "proliferation of format types" makes creating trusted digital repositories with adequate and sustainable resources a challenge. The Web is only one example of what might be considered the "data deluge". For example, the Library of Congress currently amassed 170 billion tweets between 2006 and 2010 totaling 133.2 terabytes and each Tweet is composed of 50 fields of metadata. The economic challenges of digital preservation are also great. Preservation programs require significant up front investment to create, along with ongoing costs for data ingest, data management, data storage, and staffing. One of the key strategic challenges to such programs is the fact that, while they require significant current and ongoing funding, their benefits accrue largely to future generations. In 2006, the Online Computer Library Center developed a four-point strategy for the long-term preservation of digital objects that consisted of: - Assessing the risks for loss of content posed by technology variables such as commonly used proprietary file formats and software applications. - Evaluating the digital content objects to determine what type and degree of format conversion or other preservation actions should be applied. - Determining the appropriate metadata needed for each object type and how it is associated with the objects. 
- Providing access to the content.

There are several additional strategies that individuals and organizations may use to actively combat the loss of digital information.

Refreshing is the transfer of data between two instances of the same type of storage medium so that there is no bit rot or other alteration of the data, for example transferring census data from an old preservation CD to a new one. This strategy may need to be combined with migration when the software or hardware required to read the data is no longer available or is unable to understand the format of the data. Refreshing will likely always be necessary because physical media deteriorate.

Migration is the transfer of data to newer system environments (Garrett et al., 1996). This may include converting resources from one file format to another (e.g., Microsoft Word to PDF or OpenDocument) or from one operating system to another (e.g., Windows to GNU/Linux) so that the resource remains fully accessible and functional. Two significant problems face migration as a long-term digital preservation method. Because digital objects are subject to near-continuous change, migration may cause problems of authenticity, and it has proven time-consuming and expensive for "large collections of heterogeneous objects, which would need constant monitoring and intervention". Migration can nonetheless be a very useful strategy for preserving data stored on external storage media (e.g., CDs, USB flash drives, and 3.5" floppy disks); these devices are generally not recommended for long-term use, and the data can become inaccessible due to media and hardware obsolescence or degradation.

Creating duplicate copies of data on one or more systems is called replication. Data that exists as a single copy in only one location is highly vulnerable to software or hardware failure, intentional or accidental alteration, and environmental catastrophes such as fire or flooding; digital data is more likely to survive if it is replicated in several locations. Replicated data may, however, introduce difficulties in refreshing, migration, versioning, and access control, since the data is located in multiple places. Understanding digital preservation means comprehending how digital information is produced and reproduced: because digital information (e.g., a file) can be replicated exactly down to the bit level, it is possible to create identical copies of data, and exact duplicates allow archives and libraries to manage, store, and provide access to identical copies of data across multiple systems and environments.

Emulation is the replication of the functionality of an obsolete system. According to van der Hoeven, "Emulation does not focus on the digital object, but on the hard- and software environment in which the object is rendered. It aims at (re)creating the environment in which the digital object was originally created." Examples include emulating an Atari 2600 on a Windows system or WordPerfect 1.0 on a Macintosh. Emulators may be built for applications, operating systems, or hardware platforms, and emulation has been a popular strategy for retaining the functionality of old video game systems, as with the MAME project. The feasibility of emulation as a catch-all solution has been debated in the academic community (Granger, 2000). Raymond A.
Lorie has suggested a Universal Virtual Computer (UVC) could be used to run any software in the future on a yet unknown platform. The UVC strategy uses a combination of emulation and migration. The UVC strategy has not yet been widely adopted by the digital preservation community. Jeff Rothenberg, a major proponent of Emulation for digital preservation in libraries, working in partnership with Koninklijke Bibliotheek and National Archief of the Netherlands, developed a software program called Dioscuri, a modular emulator that succeeds in running MS-DOS, WordPerfect 5.1, DOS games, and more. Another example of emulation as a form of digital preservation can be seen in the example of Emory University and the Salman Rushdie's papers. Rushdie donated an outdated computer to the Emory University library, which was so old that the library was unable to extract papers from the harddrive. In order to procure the papers, the library emulated the old software system and was able to take the papers off his old computer. This method maintains that preserved objects should be self-describing, virtually "linking content with all of the information required for it to be deciphered and understood". The files associated with the digital object would have details of how to interpret that object by using "logical structures called "containers" or "wrappers" to provide a relationship between all information components that could be used in future development of emulators, viewers or converters through machine readable specifications. The method of encapsulation is usually applied to collections that will go unused for long periods of time. Persistent Archives concept Developed by the San Diego Supercomputing Center and funded by the National Archives and Records Administration, this method requires the development of comprehensive and extensive infrastructure that enables "the preservation of the organisation of collection as well as the objects that make up that collection, maintained in a platform independent form". A persistent archive includes both the data constituting the digital object and the context that the defines the provenance, authenticity, and structure of the digital entities. This allows for the replacement of hardware or software components with minimal effect on the preservation system. This method can be based on virtual data grids and resembles OAIS Information Model (specifically the Archival Information Package). Metadata is data on a digital file that includes information on creation, access rights, restrictions, preservation history, and rights management. Metadata attached to digital files may be affected by file format obsolescence. ASCII is considered to be the most durable format for metadata because it is widespread, backwards compatible when used with Unicode, and utilizes human-readable characters, not numeric codes. It retains information, but not the structure information it is presented in. For higher functionality, SGML or XML should be used. Both markup languages are stored in ASCII format, but contain tags that denote structure and format. Preservation repository assessment and certification A few of the major frameworks for digital preservation repository assessment and certification are described below. A more detailed list is maintained by the U.S. Center for Research Libraries. 
Specific tools and methodologies In 2007, CRL/OCLC published Trustworthy Repositories Audit & Certification: Criteria & Checklist (TRAC), a document allowing digital repositories to assess their capability to reliably store, migrate, and provide access to digital content. TRAC is based upon existing standards and best practices for trustworthy digital repositories and incorporates a set of 84 audit and certification criteria arranged in three sections: Organizational Infrastructure; Digital Object Management; and Technologies, Technical Infrastructure, and Security. TRAC "provides tools for the audit, assessment, and potential certification of digital repositories, establishes the documentation requirements required for audit, delineates a process for certification, and establishes appropriate methodologies for determining the soundness and sustainability of digital repositories". Digital Repository Audit Method Based On Risk Assessment (DRAMBORA), introduced by the Digital Curation Centre (DCC) and DigitalPreservationEurope (DPE) in 2007, offers a methodology and a toolkit for digital repository risk assessment. The tool enables repositories to either conduct the assessment in-house (self-assessment) or to outsource the process. The DRAMBORA process is arranged in six stages and concentrates on the definition of mandate, characterization of asset base, identification of risks and the assessment of likelihood and potential impact of risks on the repository. The auditor is required to describe and document the repository’s role, objectives, policies, activities and assets, in order to identify and assess the risks associated with these activities and assets and define appropriate measures to manage them. European Framework for Audit and Certification of Digital Repositories The European Framework for Audit and Certification of Digital Repositories was defined in a memorandum of understanding signed in July 2010 between Consultative Committee for Space Data Systems (CCSDS), Data Seal of Approval (DSA) Board and German Institute for Standardization (DIN) "Trustworthy Archives – Certification" Working Group. The framework is intended to help organizations in obtaining appropriate certification as a trusted digital repository and establishes three increasingly demanding levels of assessment: - Basic Certification: self-assessment using 16 criteria of the Data Seal of Approval (DSA). - Extended Certification: Basic Certification and additional externally reviewed self-audit against ISO 16363 or DIN 31644 requirements. - Formal Certification: validation of the self-certification with a third-party official audit based on ISO 16363 or DIN 31644. nestor Catalogue of Criteria A German initiative, nestor (the Network of Expertise in Long-Term Storage of Digital Resources) sponsored by the German Ministry of Education and Research, developed a catalogue of criteria for trusted digital repositories in 2004. In 2008 the second version of the document was published. The catalogue, aiming primarily at German cultural heritage and higher education institutions, establishes guidelines for planning, implementing, and self-evaluation of trustworthy long-term digital repositories. The nestor catalogue of criteria conforms to the OAIS reference model terminology and consists of three sections covering topics related to Organizational Framework, Object Management, and Infrastructure and Security. 
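Risk-based assessment of the kind DRAMBORA formalizes ultimately comes down to enumerating risks and ranking them by likelihood and potential impact. The sketch below is a deliberately simplified illustration of that scoring step (the scales and example risks are assumptions, not the DRAMBORA toolkit itself):

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a simplified repository risk register.

    Likelihood and impact use an illustrative 1-5 scale."""
    description: str
    likelihood: int  # 1 = rare, 5 = almost certain
    impact: int      # 1 = negligible, 5 = catastrophic

    @property
    def severity(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("Storage media degradation before refresh cycle", 3, 4),
    Risk("Key file format becomes obsolete without migration path", 2, 5),
    Risk("Loss of funding for ongoing curation staff", 2, 4),
]

# Rank risks so the most severe receive treatment measures first.
for risk in sorted(register, key=lambda r: r.severity, reverse=True):
    print(f"{risk.severity:>2}  {risk.description}")
```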
In 2002 the Preservation and Long-term Access through Networked Services (PLANETS) project, part of the EU Framework Programmes for Research and Technological Development 6, addressed core digital preservation challenges. The primary goal for Planets was to build practical services and tools to help ensure long-term access to digital cultural and scientific assets. The Open Planets project ended May 31, 2010. The outputs of the project are now sustained by the follow-on organisation, the Open Planets Foundation. On October 7, 2014 the Open Planets Foundation announced that it would be renamed the Open Preservation Foundation to align with the organization's current direction. Planning Tool for Trusted Electronic Repositories (PLATTER) is a tool released by DigitalPreservationEurope (DPE) to help digital repositories in identifying their self-defined goals and priorities in order to gain trust from the stakeholders. PLATTER is intended to be used as a complementary tool to DRAMBORA, NESTOR, and TRAC. It is based on ten core principles for trusted repositories and defines nine Strategic Objective Plans, covering such areas as acquisition, preservation and dissemination of content, finance, staffing, succession planning, technical infrastructure, data and metadata specifications, and disaster planning. The tool enables repositories to develop and maintain documentation required for an audit. Audit and Certification of Trustworthy Digital Repositories (ISO 16363) Audit and Certification of Trustworthy Digital Repositories (ISO 16363:2012), developed by the Consultative Committee for Space Data Systems (CCSDS), was approved as a full international standard in March 2012. Extending the OAIS Reference Model and based largely on the TRAC checklist, the standard is designed for all types of digital repositories. It provides a detailed specification of criteria against which the trustworthiness of a digital repository should be evaluated. The CCSDS Repository Audit and Certification Working Group has also developed and submitted for approval a second standard, Requirements for Bodies Providing Audit and Certification of Candidate Trustworthy Digital Repositories (ISO 16919), that defines the external auditing process and requirements for organizations responsible for assessment and certification of digital repositories. Digital preservation best practices Although preservation strategies vary for different types of materials and between institutions, adhering to nationally and internationally recognized standards and practices is a crucial part of digital preservation activities. Best or recommended practices define strategies and procedures that may help organizations to implement existing standards or provide guidance in areas where no formal standards have been developed. Best practices in digital preservation continue to evolve and may encompass processes that are performed on content prior to or at the point of ingest into a digital repository as well as processes performed on preserved files post-ingest over time. Best practices may also apply to the process of digitizing analog material and may include the creation of specialized metadata (such as technical, administrative and rights metadata) in addition to standard descriptive metadata. The preservation of born-digital content may include format transformations to facilitate long-term preservation or to provide better access. 
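As one concrete illustration of a pre-ingest format transformation of the kind mentioned above (a generic sketch, not a procedure mandated by any particular guideline), a repository might normalize incoming text files from a legacy encoding to UTF-8 while keeping the original bitstream untouched:

```python
from pathlib import Path

def normalize_to_utf8(source: str, target_dir: str, source_encoding: str = "cp1252") -> Path:
    """Write a UTF-8 normalized copy of a legacy-encoded text file.

    The original file is left untouched so that the unmodified bitstream can be
    preserved alongside the normalized copy."""
    src = Path(source)
    out_dir = Path(target_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    text = src.read_text(encoding=source_encoding)
    target = out_dir / (src.stem + ".utf8.txt")
    target.write_text(text, encoding="utf-8")
    return target

# Hypothetical usage during ingest of a Windows-1252 encoded document:
# normalize_to_utf8("incoming/minutes_1997.txt", "normalized/", "cp1252")
```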
Various best practices and guidelines for digital audio preservation have been developed, including: - Guidelines on the Production and Preservation of Digital Audio Objects IASA-TC 04 (2009), which sets out the international standards for optimal audio signal extraction from a variety of audio source materials, for analogue to digital conversion and for target formats for audio preservation - Capturing Analog Sound for Digital Preservation: Report of a Roundtable Discussion of Best Practices for Transferring Analog Discs and Tapes (2006), which defined procedures for reformatting sound from analog to digital and provided recommendations for best practices for digital preservation - Digital Audio Best Practices (2006) prepared by the Collaborative Digitization Program Digital Audio Working Group, which covers best practices and provides guidance both on digitizing existing analog content and on creating new digital audio resources - Sound Directions: Best Practices for Audio Preservation (2007) published by the Sound Directions Project, which describes the audio preservation workflows and recommended best practices and has been used as the basis for other projects and initiatives - Documents developed by the International Association of Sound and Audiovisual Archives (IASA), the European Broadcasting Union (EBU), the Library of Congress, and the Digital Library Federation (DLF). The Audio Engineering Society (AES) also issues a variety of standards and guidelines relating to the creation of archival audio content and technical metadata. Moving image preservation The term "moving images" includes analog film and video and their born-digital forms: digital video, digital motion picture materials, and digital cinema. As analog videotape and film become obsolete, digitization has become a key preservation strategy, although many archives do continue to perform photochemical preservation of film stock. "Digital preservation" has a double meaning for audiovisual collections: analog originals are preserved through digital reformatting, with the resulting digital files preserved; and born-digital content is collected, most often in proprietary formats that pose problems for future digital preservation. There is currently no broadly accepted standard target digital preservation format for analog moving images. The following resources offer information on analog to digital reformatting and preserving born-digital audiovisual content. - The Library of Congress tracks the sustainability of digital formats, including moving images. - The Digital Dilemma 2: Perspectives from Independent Filmmakers, Documentarians and Nonprofit Audiovisual Archives (2012). The section on nonprofit archives reviews common practices on digital reformatting, metadata, and storage. There are four case studies. - Federal Agencies Digitization Guidelines Initiative (FADGI). Started in 2007, this is a collaborative effort by federal agencies to define common guidelines, methods, and practices for digitizing historical content. As part of this, two working groups are studying issues specific to two major areas, Still Image and Audio Visual. - PrestoCenter publishes general audiovisual information and advice at a European level. Its online library has research and white papers on digital preservation costs and formats. - The Association of Moving Image Archivists (AMIA) sponsors conferences, symposia, and events on all aspects of moving image preservation, including digital. 
The AMIA Tech Review contains articles reflecting current thoughts and practices from the archivists’ perspectives. Video Preservation for the Millennia (2012), published in the AMIA Tech Review, details the various strategies and ideas behind the current state of video preservation. Email poses special challenges for preservation: email client software varies widely; there is no common structure for email messages; email often communicates sensitive information; individual email accounts may contain business and personal messages intermingled; and email may include attached documents in a variety of file formats. Email messages can also carry viruses or have spam content. While email transmission is standardized, there is no formal standard for the long-term preservation of email messages. Approaches to preserving email may vary according to the purpose for which it is being preserved. For businesses and government entities, email preservation may be driven by the need to meet retention and supervision requirements for regulatory compliance and to allow for legal discovery. (Additional information about email archiving approaches for business and institutional purposes may be found under the separate article, Email archiving.) For research libraries and archives, the preservation of email that is part of born-digital or hybrid archival collections has as its goal ensuring its long-term availability as part of the historical and cultural record. Several projects developing tools and methodologies for email preservation have been conducted based on various preservation strategies: normalizing email into XML format, migrating email to a new version of the software and emulating email environments: Memories Using Email (MUSE), Collaborative Electronic Records Project (CERP), E-Mail Collection And Preservation (EMCAP), PeDALS Email Extractor Software (PeDALS), XML Electronic Normalizing of Archives tool (XENA). Some best practices and guidelines for email preservation can be found in the following resources: - Curating E-Mails: A Life-cycle Approach to the Management and Preservation of E-mail Messages (2006) by Maureen Pennock. - Technology Watch Report 11-01: Preserving Email (2011) by Christopher J Prom. - Best Practices: Email Archiving by Jo Maitland. Video game preservation In 2007 the Keeping Emulation Environments Portable (KEEP) project, part of the EU Framework Programmes for Research and Technological Development 7, developed tools and methodologies to keep digital software objects available in their original context. Digital software objects as video games might get lost because of digital obsolescence and non-availability of required legacy hardware or operating system software; such software is referred to as abandonware. Because the source code is often not available any longer, emulation is the only preservation opportunity. KEEP provided an emulation framework to help the creation of such emulators. KEEP was developed by Vincent Joguin, first launched in February 2009 and was coordinated by Elisabeth Freyre of the French National Library. In January 2012 the POCOS project funded by JISC organised a workshop on the preservation of gaming environments and virtual worlds. There are many things consumers and artists can do themselves to help care for their collections at home. 
- The Software Preservation Society is a group of computer enthusiasts that is concentrating on finding old software disks (mostly games) and taking a snapshot of the disks in a format that can be preserved for the future. - "Resource Center: Caring For Your Treasures" by American Institute for Conservation of Historic and Artistic Works details simple strategies for artists and consumers to care for and preserve their work themselves. The Library of Congress also hosts a list for the self-preserver which includes direction toward programs and guidelines from other institutions that will help the user preserve social media, email, and formatting general guidelines (such as caring for CDs). Some of the programs listed include: - HTTrack Website Copier: Software tool which allows the user to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to their computer. - Muse: Muse (short for Memories Using Email) is a program that helps users revive memories, using their long-term email archives, run by Stanford University. Education for digital preservation The Digital Preservation Outreach and Education (DPOE), as part of the Library of Congress, serves to foster preservation of digital content through a collaborative network of instructors and collection management professionals working in cultural heritage institutions. Composed of Library of Congress staff, the National Trainer Network, the DPOE Steering Committee, and a community of Digital Preservation Education Advocates, as of 2013 the DPOE has 24 working trainers across the six regions of the United States. In 2010 the DPOE conducted an assessment, reaching out to archivists, librarians, and other information professionals around the country. A working group of DPOE instructors then developed a curriculum based on the assessment results and other similar digital preservation curricula designed by other training programs, such as LYRASIS, Educopia Institute, MetaArchive Cooperative, University of North Carolina, DigCCurr (Digital Curation Curriculum) and Cornell University-ICPSR Digital Preservation Management Workshops. The resulting core principles are also modeled on the principles outlined in "A Framework of Guidance for Building Good Digital Collections" by the National Information Standards Organization (NISO). In Europe, Humboldt-Universität zu Berlin and King's College London offer a joint program in Digital Curation that emphasizes both digital humanities and the technologies necessary for long term curation. The MSc in Information Management and Preservation (Digital) offered by the HATII at the University of Glasgow has been running since 2005 and is the pioneering program in the field. Examples of digital preservation initiatives For more details on this topic, see List of digital preservation initiatives. - The Library of Congress operates the National Digital Stewardship Alliance - The British Library is responsible for several programmes in the area of digital preservation and is a founding member of the Digital Preservation Coalition and Open Preservation Foundation. Their digital preservation strategy is publicly available. The National Archives of the United Kingdom have also pioneered various initiatives in the field of digital preservation. A number of open source products have been developed to assist with digital preservation, including Archivematica, DSpace, Fedora Commons, OPUS, SobekCM and EPrints. 
The commercial sector also offers digital preservation software tools, such as Ex Libris Ltd.'s Rosetta, Preservica's Cloud, Standard and Enterprise Editions, CONTENTdm, Digital Commons, Equella, intraLibrary, Open Repository and Vital. Large-scale digital preservation initiatives Many research libraries and archives have begun or are about to begin large-scale digital preservation initiatives (LSDIs). The main players in LSDIs are cultural institutions, commercial companies such as Google and Microsoft, and non-profit groups including the Open Content Alliance (OCA), the Million Book Project (MBP), and HathiTrust. The primary motivation of these groups is to expand access to scholarly resources. Approximately 30 cultural entities, including the 12-member Committee on Institutional Cooperation (CIC), have signed digitization agreements with either Google or Microsoft. Several of these cultural entities are participating in the Open Content Alliance and the Million Book Project. Some libraries are involved in only one initiative and others have diversified their digitization strategies through participation in multiple initiatives. The three main reasons for library participation in LSDIs are: access, preservation, and research and development. It is hoped that digital preservation will ensure that library materials remain accessible for future generations. Libraries have a perpetual responsibility for their materials and a commitment to archive their digital materials. Libraries plan to use digitized copies as backups for works in case they go out of print, deteriorate, or are lost and damaged. - Charles M. Dollar - Data curation - Database preservation - Digital artifactual value - Digital asset management - Digital curation - Digital continuity - Digital dark age - Digital library - Digital obsolescence - Digital reformatting - Enterprise content management - File format - Information Lifecycle Management - List of digital preservation initiatives - New media art preservation - Margaret Hedstrom - Preservation metadata - Section 108 Study Group - Seamus Ross - Trustworthy Repositories Audit & Certification - UVC-based preservation - Web archiving - ^ Digital Preservation Coalition (2008). "Introduction: Definitions and Concepts". Digital Preservation Handbook. York, UK. Retrieved 24 February 2012. Digital preservation refers to the series of managed activities necessary to ensure continued access to digital information for as long as necessary. - ^ a b c d e f g Day, Michael. “The long-term preservation of Web content”. Web archiving (Berlin: Springer, 2006), pp. 177-199. ISBN 3-540-23338-5. - ^ a b Evans, Mark; Carter, Laura. (December 2008). The Challenges of Digital Preservation. Presentation at the Library of Parliament, Ottawa. - ^ Prytherch,Ray (compiler).(2005).Harrod's librarian glossay and reference book.10th ed.Ashgate publisher. - ^ "Society of American Archivists Glossary - selection". web site. 2014. Retrieved 8 October 2014. - ^ "Society of American Archivists Glossary - appraisal". web site. 2014. Retrieved 8 October 2014. - ^ "InterPARES 2 Chain of Preservation Model". web site. 2007. Retrieved 8 October 2014. - ^ "InterPARES 2 Project". web site. Retrieved 8 October 2014. - ^ "Society of American Archivists Glossary - macro-appraisal". web site. 2014. Retrieved 8 October 2014. - ^ "A First Look at the Acquisition and Appraisal of the 2010 Olympic and Paralympic Winter Games Fonds: or, SELECT FROM VANOC_Records AS Archives WHERE Value="true";". 
<urn:uuid:77b1046b-d235-413a-807a-aa3dc46eb7fd>
CC-MAIN-2018-13
http://centromedicinacomunitaria.eu/image-metadata-properties-extractor-software-7-0.html
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647327.52/warc/CC-MAIN-20180320091830-20180320111830-00315.warc.gz
en
0.848672
14,664
3.25
3
After years of preparation, the National People’s Congress (NPC) of the People’s Republic of China (PRC) is poised to enact a comprehensive property rights law before the end of its current session. The Property Law, as the new law will be known, will provide a statutory framework for the protection of real and movable property rights and is widely expected to mark a significant step forward in China’s legal and economic development. Over the past twenty years, property rights in China have undergone a remarkable evolution. In Chairman Mao Zedong’s era (1949-76), privately held land was collectivized on a massive scale, Soviet-style central planning controlled the economy, and the ideological environment rendered the very concept of private ownership problematic. Individual property rights were gradually re-introduced into the legal framework in a meaningful fashion during the market reform era that began after the rise of Deng Xiaoping. For example, in 1986, the NPC passed the General Principles of Civil Law, which articulated basic protections for citizens’ property. That same year, the NPC passed the Land Management Law to address ownership and regulation of land use rights. In 1995, passage of the Secured Interests Law (also known as the Guaranty Law) created a framework for protecting credit and secured transactions. In 1999, the Contract Law was enacted to provide a basic framework for contractual relations. Work on the Property Law began in 1993. In late 2002, a draft version underwent its first reading before the NPC. Thereafter, considerable effort was devoted to preparing it for further legislative review. In July 2005, an updated draft was submitted for public comment, resulting in a spirited and thoughtful nationwide debate. The current draft version before the NPC reflects this high degree of public input and is widely viewed as a legislative milestone. Key Features of the Property Law The drafters of the Property Law have taken their guidance from several overarching goals, in particular: promoting China’s post-reform era development objectives while also adhering to a socialist economic structure; implementing a legal mechanism for the equal protection of state, collective, and individual property; addressing concerns regarding the loss or improper disposition of state-owned assets; strengthening protections for the interests of rural residents; and providing a framework for resolving property disputes. In tandem with these goals, published reports indicate that, when the Property Law is enacted, observers should expect that it will: - re-affirm the principle that in China public ownership is the dominant form of ownership, and that the non-public sector of the economy is supported and guided by the state; - clearly provide that state, collective, and individual property rights, and the property rights of other obligees, are protected under law; - stipulate that the nation’s natural resources, infrastructure and state-sponsored entities are state property; - articulate the PRC State Council’s role as the chief managerial body for state-owned assets; - set forth liability provisions for individuals who cause below-value or other improper transfers of state property to be effected; - permit farmers to renew the terms of their land use contracts; and - address compensation for rural and township dwellers whose land and residences are expropriated for state purposes. 
The Property Law is also expected to: - implement a uniform registration system for real property; - regulate common-use facilities in condominiums and among adjoining properties; - regulate land use in construction; - provide procedures for the establishment of easements; and - contain regulations regarding mortgaging and collateralization of property. Impact of the New Legislation The Property Law will be one of the most significant, and probably the most controversial, pieces of legislation passed in recent years in the PRC. Disputes over land are commonplace and have become the cause of many social disturbances in China. With China becoming the world’s fourth largest economy, the business community will welcome this new legislation because it will provide certainty of property ownership and protection of privately owned assets. This will in turn boost the confidence of investors, both foreign and domestic. In the PRC, national-level laws, however well drafted, are prone to be interpreted differently at the provincial level. The Property Law will almost certainly be passed in the NPC’s current session. The biggest challenge to the PRC authorities is to ensure that the Property Law will be vigorously enforced and uniformly interpreted throughout the nation. We will provide an update to this memorandum once the Property Law has been passed and released in its final form.
<urn:uuid:71a3424e-9072-46cc-a73b-e191f954c31b>
CC-MAIN-2018-47
https://www.lexology.com/library/detail.aspx?g=dba758f8-223c-42ee-946b-ef28d0e53534
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746205.96/warc/CC-MAIN-20181122101520-20181122123520-00504.warc.gz
en
0.934388
935
2.546875
3
Alcohol Advertising and Youth (Position Paper) Although alcohol consumption decreased modestly among individuals 12-20 years of age between 1991 and 2005, alcohol use remains a major public health problem among youth.1 Current alcohol use among high school students remained steady from 1991 to 1999 and then decreased from 50% in 1999 to 45% in 2007. In 2007, 26% of high school students reported episodic heavy or binge drinking (consumption of at least five alcoholic beverages in a single sitting).2 Over 74% of high school students have had at least one alcoholic drink and over 25% tried alcohol before age 13.3 This is particularly worrisome because youth who begin drinking at age 15 are four times more likely to become alcoholics than those who begin drinking at age 21.4 Alcohol consumption among youth translates into significant morbidity and mortality. Motor vehicle crashes are the leading cause of death among those younger than 25 years old; alcohol is a factor in 41% of deaths in car crashes.5 In 2007, 11% of high school students reported driving a car or other vehicle during the past 30 days when they had been drinking alcohol. In addition, 29% of students reported riding in a car or other vehicle during the past 30 days driven by someone who had been drinking alcohol.1 The second and third leading causes of death in this age group are homicides and suicides, 20% to 40% of which involve alcohol.6 Overall, alcohol consumption is the third leading cause of death among Americans7, and it represents a financial burden on the United States of about $185 billion (1998 estimates) each year.8 Miller and Levy estimate that underage drinking accounted for at least 16% of all alcohol sales in 2001, leading to 3,170 deaths and 2.6 million other harmful events in that year alone. The annual economic costs in their analysis include $5.4 billion in direct medical costs, $14.9 billion in work and other resource losses, and $41 billion in lost quality of life.9 A growing body of literature shows that alcohol advertising is an important factor related to alcohol consumption among youth. Research has now established that alcohol advertisements target youth, result in increased alcohol consumption, and add to morbidity and mortality. Before graduating high school, students will spend about 18,000 hours in front of the television—more time than they will spend in school.10 During this time they will watch about 2,000 alcohol commercials on television each year.10 Alcohol advertisements reach youth not only through television, but also through other varied media, such as billboards, magazines, sports stadium signs, and on mass transit such as subway systems. In all, youth view 45% more beer ads and 27% more liquor ads in magazines than do people of legal drinking age.11 According to the Center on Alcohol Marketing and Youth at Georgetown University, alcohol companies spend nearly $2 billion every year on advertising in the United States. Between 2001 and 2007, there were more than 2 million television ads and 20,000 magazine ads for alcoholic products. This heavy advertising effort leads to significant youth exposure. The Center analyzed over 2 million alcohol advertisement placements on television between 2000 and 2007 and over 19,000 alcohol ads placed in national magazines between 2001 and 2006. 
In 2007, approximately 20% of television alcohol advertisements, almost all of which were on cable television, were on programming that youth ages 12 to 20 were more likely to view than adults of legal drinking age. In fact, alcohol advertising increased 38% between 2001 and 2007.12 For young people, large and increasing television exposure has unfortunately offset reductions in exposure in magazines in recent years.12,13 Many authors find that alcohol advertisements frequently reach or specifically target teens not only through television and magazines, but also through other varied media such as radio, P/PG movies, billboards, and sports stadium signs.14,15,16,17,18,19,20 For example, a 2009 Journal of Adolescent Health study found that the ratio of the probability of a youth alcoholic beverage type to that of a non-youth alcoholic beverage type being advertised in a given magazine increased from 1.5 to 4.6 as youth readership increased from 0% to 40% .17 Although the alcohol industry maintains that its advertising aims only to increase market share and not to encourage underage persons to drink, research suggests otherwise. Alcohol advertisements overwhelmingly connect consumption of alcohol with attributes particularly important to youth, such as friendship, prestige, sex appeal and fun.21 The alcohol industry used cartoon and animal characters to attract young viewers to alcohol in the 1990s, with frogs, lizards and dogs, which were overwhelmingly admired by youth. In 1996, for example, the Budweiser Frogs were more recognizable to children aged 9-11 than the Power Rangers, Tony the Tiger, or Smokey the Bear.22 Many alcohol advertisements use other techniques oriented toward youth, such as themes of rebellion and use of adolescent humor. A study of alcohol advertising in South Dakota, for example, found that exposures in 6th grade predicted future intention to use alcohol.23 It is telling that youth report alcohol ads as their favorites,24 especially when so many different products vie for their attention. These compelling advertisements become the new teachers of youth. One study found, in fact, that 8-12 year olds could name more brands of beer than they could U.S. presidents.24 In markets across the US, increased alcohol advertising exposure and dollars spent on these ads on television increased the consumption of alcoholic beverages among youth and young adults.25 It is not surprising that underage drinkers consume about 25 percent of all alcohol in the United States.1 African-American youth generally have increased exposure to alcohol advertisements as compared to the youth population as a whole.26,27 In 2004, African-American youth viewed 34% more magazine alcohol advertisements per capita than youth in general and heard 15% more radio ads. 
Further, they were also heavily exposed to alcohol ads on the top 15 television shows most highly viewed by African-American audiences.26 It appears that this increased exposure, at least through television, may be due in part to the viewing patterns of African-American youth rather than necessarily to targeted marketing by the alcohol industry.27 In addition to print media exposure, researchers have found that alcohol advertising is disproportionately concentrated in low-income minority neighborhoods.28 One study found that minority neighborhoods in Chicago have on average seven times the number of billboards advertising alcohol as do Caucasian neighborhoods.29 Another 2009 study in Chicago demonstrated that youth attending a school with 20% or more Hispanic students were exposed to 6.5 times more outdoor alcohol advertising than students attending schools with less than 20% Hispanic students.30 In a 2008 study, alcohol billboards in Atlanta, Georgia were more prevalent in neighborhoods that were 50% or more African-American.31 Such concentration of alcohol advertising and availability likely translates into increased problems associated with alcohol use in these communities, as well as increased intentions among exposed youth to use alcohol.32 There is ample evidence from experimental, economic, survey, longitudinal, and systematic review studies to demonstrate that the degree of youth alcohol advertising exposure is strongly and directly associated with intentions to drink, age of drinking onset, prevalence of drinking, and the amount consumed.25,32,33,34,35,36,37,38 A 2004 prospective study conducted by the University of Southern California showed that a one standard deviation increase in viewing television programs containing alcohol commercials in seventh grade was associated with an excess risk of beer use (44%), wine/liquor use (34%), and 3-drink episodes (26%) in eighth grade.37 Another large longitudinal study, published in 2006, of individuals 15 to 26 years of age found a direct correlation between the amount of exposure to alcohol advertising on billboards, radio, television, and newspapers and higher levels of drinking, as well as a steeper increase in drinking over time.39 Studies also find that adolescent exposure to alcohol-branded promotional items is associated with current drinking or predicts future drinking.40,41,42 In one study, students who owned such items were three times more likely to have ever tried drinking and 1.5 times more likely to report current drinking.40 Statistical and economic analyses also support the relationship between alcohol advertising and consumption. In Sweden in the 1970s, a ban on alcohol advertising resulted in a 20% decrease in the consumption of alcohol.43 Expenditures on alcohol advertising have also been shown to parallel alcohol consumption in the United States.10 Early reviews of the literature concluded that alcohol advertising increases consumption, though the magnitude was (and remains) in question.44,45 A recent RAND Corporation review affirms those conclusions, noting that early exposure to beer ads had subsequent effects on mid-adolescent consumption. 
This study also found that in-store beer displays and advertising seemed to have more attraction to youth who had never used alcohol, while young drinkers were more influenced by magazine and entertainment venue advertising and promotion.46 Studies have also concluded that alcohol advertising leads to increased morbidity and mortality associated with alcohol.47 One study used econometric data to estimate the specific impact of alcohol advertising on mortality caused by motor vehicle accidents in the United States.48 The author concluded that, if a ban were placed on alcohol advertising on television, motor vehicle accident deaths would decrease by between 2,000 and 10,000 each year. The author further suggested that elimination of the tax benefits associated with alcohol advertising would likely result in a 15% decrease in alcohol advertisements, saving an estimated 1,300 lives annually, again due to a decrease in motor vehicle accident deaths alone. This author and others add that counter-advertising campaigns and educational efforts have been shown to diminish the effect of alcohol advertising.49 Considering the important public health concerns related to alcohol, the prevalence of underage drinking, and the association between alcohol advertising and alcohol use, it would be prudent to increase efforts to curb the negative effects of alcohol advertising. Such efforts should include a multifaceted approach with three primary goals: - To reduce the total amount of alcohol advertising - To remove content appealing to youth in remaining alcohol advertising - To offer powerful educational programs and counter-advertisements painting more realistic pictures of the effects of alcohol More specifically, it is suggested that: - Federal, state and local authorities significantly limit alcohol advertising - Tax advantages related to alcohol advertising be eliminated - Alcohol advertising be strictly regulated, with removal of content and format geared toward underage audiences, minority groups and the poor - Alcohol advertising be limited in public venues such as sporting events which are commonly attended by youth, as well as magazines and other media primarily viewed by youth - More federal, state, and local funding be allocated to educational efforts that relate the negative effects of alcohol to children - Media literacy programs helping youth to better understand and resist alcohol advertising - Counter-advertising campaigns illustrating the dangers of alcohol use - Newes AG, et al. Alcohol Epidemiologic Data System, Surveillance Report #81: Trends in Underage Drinking in US, 1991-2005. National Institute on Alcohol Abuse and Alcoholism. Division of Epidemiology and Prevention Research. October, 2007. - CDC. Youth Risk Behavior Surveillance – United States, 2007. MMWR. 2008;57(SS-4):1–131. - CDC. Youth Risk Behavior Surveillance – United States, 2005. MMWR. 2006;55(SS05):1-108. - Grant BF, Dawson DA. Age at onset of alcohol use and its association with DSM-IV alcohol abuse and dependence. Results from the National Longitudinal Alcohol Epidemiologic Survey. Journal of Substance Abuse. 1997;9:103-10. - CDC. Fact sheet: alcohol-related traffic fatalities. Available at: http://www.cdc.gov/od/oc/media/fact/alctrfa.htm. - Smith GS, Branas CC, Miller TR. Fatal non-traffic injuries involving alcohol: A meta-analysis. Ann Emerg Med. 1999;33(6):659-68. - McGinnis JM, Foege WH. Actual causes of death in the United States. JAMA. 1993;270(10):2207-12. - National Institute on Alcohol Abuse and Alcoholism. 
10th Special Report to the US Congress on Alcohol and Health. Bethesda, MD: National Institute on Alcohol Abuse and Alcoholism; 2000. - Miller TR, Levy DT, Spicer RS, Taylor DM. Societal costs of underage drinking. J Stud Alcohol. 2006; 67(4): 519-28. - Strasburger VC. Children, adolescents, and television. Pediatrics in Review. 1992;13(4):144-51. - Center for Alcohol Marketing and Youth. Overexposed: youth a target of alcohol advertising in magazines. Washington, DC: Center for Alcohol Marketing and Youth of Georgetown University; 2002. - Jernigan, David. Intoxicating brands: alcohol advertising and youth. Multinational Monitor. 2008;30(1). - Newman, Eric. “Study: Kids See Fewer Alcohol Ads.” AdWeek Online. December 20, 2007. - Dal CS, Worth KA, Dalton, MA, Sargent JD. Youth exposure to alcohol use and brand Appearances in popular contemporary movies. Addiction. 2008;103(12):1933-1936. - Greenberg BS, Rosaen SF, Worrell TR, Salmon CT, Volkman JE. A portrait of food and drink in commercial TV series. Health Commun. 2009;24(4):295-303. - Chung PJ, Garfield CF, Elliott MN, Ostroff J, Ross C, Jernigan DH, Vestal KD, Schuster MA. Association between adolescent viewership and alcohol advertising on cable television. Am J Public Health. 2010;100(3):555-62. - King C, Siegel M, Jernigan DH, Wulach L, Ross C, Dixon K, Ostroff J. Adolescent exposure to alcohol advertising in magazines: An evaluation of advertising placement in relation to underage youth readership. J Adolesc Health. 2009;45(6):626-633. - Jernigan DH, Ostroff J, Ross C, O’Hara JA. Sex differences in adolescent exposure to alcohol advertising in magazines. Arch Pediatr Adolesc Med. 2004;158(7):629-634. - CDC. Youth Exposure to Alcohol Advertising in Magazines – United States, 2001-2005. MMWR. 2007;56(30):763-7. - CDC. Youth Exposure to Alcohol Advertising on Radio – United States, June-August 2004. MMWR. 2006;55(34):937-40. - Grube JW, Wallack L. Television beer advertising and drinking knowledge, beliefs, and intentions among schoolchildren. Am J Public Health. 1994;84(2):254-59. - Lieber L. Commercial and character slogan recall by children aged 9 to 11 years: Budweiser frogs versus Bugs Bunny. Berkeley, CA: Center on Alcohol Advertising; 1996. - Collins RL, Ellickson PL, McCaffrey D, Hambarsoomians K. Early adolescent exposure to alcohol advertising and its relationship to underage drinking. J Adolesc Health. 2007;40(6):527-34. - Taylor, P. Alcohol advertisements encourage alcohol abuse. In: Wekesser C, editor. Alcoholism. San Diego, CA: Greenhaven Press; 1994. p. 111-21. - Snyder LB, Milici FF et al. Effects of alcohol advertising exposure on drinking among youth. Arch Pediatr Adolesc Med. 2006;160:18-24. - Center on Alcohol Marketing and Youth. Fact Sheet: African-American Youth and Alcohol Advertising. Available athttp://camy.org/factsheets/index.php?FactsheetID=11. - Ringel JS, Collins RL, Ellickson PL. Time trends and demographic differences in youth exposure to alcohol advertising on television. J Adolesc Health. 2006;39(4):473-480. - Alaniz ML. Alcohol availability and targeted advertising in racial/ethnic minority communities. Alcohol Health and Research World. 1998;22(4):286-89. - Hackbarth DP, Silvestri B, Casper W. Tobacco and alcohol billboards in 50 Chicago neighborhoods: market segmentation to sell dangerous products to the poor. Journal of Public Health Policy. 1995;16(2):213-30. - Pasch KE, Komro KA, Perry CL, Hearst MO, Farbakhsh K. 
Does outdoor alcohol advertising around elementary schools vary by the ethnicity of students in the school? Ethn Health. 2009;14(2):225-36. - Moore H, et al. Alcohol advertising on billboards, transit shelters, and bus benches in inner-city neighborhoods. Contemporary Drug Problems. 2008;35(2-3):509-532. - Pasch KE, Komro KA, Perry CL, et al. Outdoor alcohol advertising near schools: what does it advertise and how is it related to intentions and use of alcohol among young adolescents? J Stud Alcohol Drugs. 2007;68(4):587-96. - Anderson P, deBruign A, Angus K, Gordon R, Hastings G. Impact of alcohol advertising and media exposure on adolescent alcohol use: A systematic review of longitudinal studies. Alcohol Alcohol. 2009;44(3):229-243. - Smith LA, Foxcroft DR. The effect of alcohol advertising, marketing, and portrayal on drinking behaviour in young people: systemic review of prospective cohort studies. BMC Public Health. 2009;9:51. - Collins RL, Ellickson PL, McCaffrey D, Hambarsoomians K. Early adolescent exposure to alcohol advertising and its relationship to underage drinking. J Adolesc Health. 2007;40(6):527-534. - Ellickson PL, Collins RL, Hambarsoomians K, McCaffrey DF. Does alcohol advertising promote adolescent drinking? Results from a longitudinal assessment. Addiction. 2005;100(2):235-46. - Stacy AW, Zogg JB, Unger JB, Dent CW. Exposure to televised alcohol ads and subsequent adolescent alcohol use. Am J Health Behav. 2004;28(6):498-509. - Baar A. Minors under the influence. AdWeek Online. January 3, 2006. - Burke MG. As alcohol advertising increases, so does youthful drinking. Contemporary Pediatrics. 2006;23(3):28. - Hurtz SQ, Henriksen L, Wang Y, Feighery EC, Fortmann SP. The relationship between exposure to alcohol advertising in stores, owning alcohol promotional items, and adolescent alcohol use. Alcohol Alcohol. 2007;42(2):143-9. - McClure AC, Dal CS, Gibson J, Sargent JD. Ownership of Alcohol-Branded merchandise and initiation of teen drinking. Am J Prev Med. 2006;30(4):277-83. - McClure AC, Stoolmiller M, Tanski SE, Worth KA, Sargent JD. Alcohol branded merchandise and its association with drinking attitudes and outcomes in US adolescents. Arch Pediatr Adolesc Med. 2009;163(3):211-7. - Romelsjo, A. Decline in alcohol-related problems in Sweden greatest among young people. British Journal of Addiction. 1987;82:1111-24. - Atkin CK. Survey and experimental research on effects of alcohol advertising: The Effects of Mass Media on the Use and Abuse of Alcohol. In: Martin SE, Mail P, editors. The effects of the mass media on the use and abuse of alcohol. NIAAA research monograph 28. NIH publication number 95-3743. Bethesda, MD: National Institute on Alcohol Abuse and Alcoholism; 1995; p. 39-68. - Lastovicka JL. A methodological interpretation of the experimental and survey research evidence concerning alcohol advertising effects. In: Martin SE, Mail P, editors. The effects of the mass media on the use and abuse of alcohol. NIAAA research monograph 28. NIH publication number 95-3743. Bethesda, MD: National Institute on Alcohol Abuse and Alcoholism; 1995; 69-81. - Forging the link between alcohol advertising and underage drinking. RAND Health Research Highlights. Rand Corporation, Santa Monica CA 2006. - Wyllie A, Zhang JF, Casswell S. Responses to televised alcohol advertisements associated with drinking behavior of 10 to 17-year-olds. Addiction. 1998;93(3):361-71. - Saffer H. Alcohol advertising and motor vehicle fatalities. The Review of Economics and Statistics. 
1997;79(3):431-442. - American Academy of Pediatrics, Committee on Communications. Media Education. Pediatrics. 1999;104(2):341-343. (2004) (2010 COD)
<urn:uuid:370dab7c-374b-4148-9236-95d3d2313400>
CC-MAIN-2014-10
http://www.aafp.org/about/policies/all/alcohol-advertising.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010693428/warc/CC-MAIN-20140305091133-00016-ip-10-183-142-35.ec2.internal.warc.gz
en
0.913632
4,338
3.09375
3
For a long time, there was debate about whether gum disease caused heart disease or if they were just correlated because of common risk factors (obesity, diabetes, smoking, etc.). We are increasingly finding that the evidence says gum disease actually causes heart disease. We know that treating gum disease improves heart health, and now new research from the University of Alberta in Canada describes a causal chain by which gum disease leads to heart disease. Setting off the Body’s Sentinels Researchers identified a new receptor on cells in the mouth, which they designated as CD36. This receptor, they found, interacted with oral bacteria. When CD36 was triggered, it in turn set off the body’s toll-like receptors. These receptors are in certain types of immune cells, often called sentinel cells because it’s their job to roam the body looking for signs of early infection and fighting back. When these cells are triggered, they release a compound that triggers inflammation in the body. This inflammation trigger has already been linked to the hardening of arteries. A New Treatment Angle With the discovery of CD36, researchers have identified a new potential angle for treating many of the problems related to gum disease. Gum disease’s negative impacts are partly due to the body’s own response. This triggers the hardening of arteries, and it can also be partly responsible for the loss of bone around your teeth that leads to tooth loss, as well as prostate symptoms associated with gum disease. If researchers can find a way to suppress the CD36 receptor, by either preventing it from interacting with oral bacteria or stopping it from triggering sentinel cells, they can tone down the body’s immune response to oral bacteria. This would allow gum disease treatments more time to work before the body’s alarms lead to damage. Oral Care Will Always Be Crucial The body creates its responses because it knows that losing teeth or even developing heart disease is better than having a rampant infection in your mouth. If we are going to soften the body’s immune response, we will have to be extra vigilant about gum disease, with better oral hygiene, more regular dental visits, and better gum disease treatment. The tradeoff might be worth it to prevent early death from heart disease.
<urn:uuid:d8bef64f-d9f2-4f89-8ddd-49b5d1f9bd11>
CC-MAIN-2018-39
https://myhillsdentist.com/blog/causal-link-found-between-gum-disease-and-heart-disease/
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158045.57/warc/CC-MAIN-20180922044853-20180922065253-00554.warc.gz
en
0.954204
465
3.4375
3
In the 1930s and 40s, Los Angeles became an unlikely cultural sanctuary for a distinguished group of German artists and intellectuals—including Thomas Mann, Theodor W. Adorno, Bertolt Brecht, Fritz Lang, and Arnold Schoenberg—who had fled Nazi Germany. During their years in exile, they would produce a substantial body of major works to address the crisis of modernism that resulted from the rise of National Socialism. Weimar Germany and its culture, with its meld of eighteenth-century German classicism and twentieth-century modernism, served as a touchstone for this group of diverse talents and opinions. Weimar on the Pacific is the first book to examine these artists and intellectuals as a group. Ehrhard Bahr studies selected works of Adorno, Horkheimer, Brecht, Lang, Neutra, Schindler, Döblin, Mann, and Schoenberg, weighing Los Angeles’s influence on them and their impact on German modernism. Touching on such examples as film noir and Thomas Mann’s Doctor Faustus, Bahr shows how this community of exiles reconstituted modernism in the face of the traumatic political and historical changes they were living through.
<urn:uuid:55803405-c90a-46fc-9373-db5ee6832a3d>
CC-MAIN-2023-40
https://www.ucpress.edu/ebook/9780520933804/weimar-on-the-pacific
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506480.7/warc/CC-MAIN-20230923094750-20230923124750-00440.warc.gz
en
0.949389
251
2.9375
3
The Ford Trafford Park assembly plant was a car assembly plant established by the UK subsidiary of the Ford Motor Company. The plant was located at a recently established industrial park called Trafford Park, beside the Manchester Ship Canal, a short distance to the west of Manchester. It was the first manufacturing plant established by Ford outside the USA, though originally it was established merely to assemble vehicles using parts imported from Dearborn. Today the site forms the major part of the Trafford Centre, a modern out-of-town shopping mall and retail park. First steps in the UK The first Ford model to be sold in the UK was the Model A, which was first launched in the American market in 1903. Two of the cars were imported to Britain in the same year, and since then the Ford company's British sales had grown thanks to an enthusiastic and talented entrepreneur named Percival Perry. Cars at this time were extremely expensive, and since Henry Ford insisted on payment in full before he would release cars for export from the New York dockside, Perry's commercial energy was under constant pressure from shortage of credit. Nevertheless, by 1911 Perry was selling over 400 US-built Fords per year from premises in London's prestigious Shaftesbury Avenue. It was determined that any further expansion would require more space than was available in central London, and Perry looked for a larger site, while retaining the Shaftesbury Avenue property as a showroom/office complex. A disused carriage works at the Trafford Park industrial zone near Manchester was acquired. The original plan was to assemble Ford cars using parts shipped in from America: the need to invest massively in high-cost tooling in order to become a volume car producer had not yet come about, and the former carriage works was assembling Ford vehicles by October 1911. By now, Ford's principal model was the Model T, and this was the car assembled at the new plant. The need to import parts from the American mid-west must have complicated the assembly process, since the Trafford Park plant quickly took to purchasing components on its own account far closer to home. For two years bodies were delivered to the Trafford Park assembly location individually on handcarts from a firm of body builders called Scott Brothers, located down the road. Ford purchased Scott Brothers in 1912. By now, however, Ford in Michigan were beginning to bring together various manufacturing techniques, initially at their Piquette Plant and, after 1910, at their Highland Park factory. By 1912 Ford had in effect invented assembly-line auto-production and work went ahead to apply the new techniques at Trafford Park. The new techniques were introduced progressively, but between 1912 and 1913 output doubled from 3,000 to 6,000 cars. In 1912 the British-built Model Ts were offered for £175 on the domestic market at a time when Austin, a powerful UK-based competitor, were offering their smaller, slower 10 hp model for £240: finding customers for the Manchester-built Fords does not seem to have been a problem. Trafford Park was on schedule to produce 10,000 Fords in 1914 when the outbreak of war intervened. Understanding of mass production techniques advanced considerably between 1914 and 1918, even if the output of the cutting-edge technologies was now represented by munitions. 
Henry Ford took a pacifist line but it appears that the Trafford Park plant remained employed for the production of vehicles, possibly with the emphasis on assembly of Fordson agricultural tractors from kits. When peace broke out, the Trafford Park plant was extended and output grew rapidly. However, in 1919, following several policy disputes, Perry left the company and Ford in Dearborn applied a more direct approach to UK manufacturing. By the early 1920s, the view was taken that the Trafford Park factory was reaching its limits: in 1924 Henry Ford sent over a senior representative to identify and purchase a suitable site for a larger plant, and later that year a site was acquired at Dagenham, although Ford UK production continued to be concentrated at Trafford Park until the Dagenham plant became operational in 1931. By this time Perry had been lured back, appointed chairman of the newly formed British Ford Motor Company Limited in 1928. The final car produced at Trafford Park emerged in October 1931: in the same month the first vehicle emerged from the new Dagenham facility. Shadow factory: 1939–1944 In 1936, under the Shadow factory plan, the British government appointed Herbert Austin to head a new team within the Air Ministry, to assess and invest in expanding the British aircraft industry in preparation for any future war requirements. Austin was briefed to build nine new factories, and expand or develop the existing facilities at all car manufacturing plants located in Britain, to enable them to quickly switch to aircraft production. The still derelict Ford Trafford Park site proved highly enticing for producing the Rolls-Royce Merlin engine. It was located close to major transport links and gave easy access for the finished product to be supplied both to Metropolitan-Vickers, also located in Trafford Park (for use in the Avro Manchester), and to the Avro factory at Chadderton (for use in the Avro Lancaster). Redeveloped by Ford from 1938, it was designed as two separate sections to minimise the impact of bomb damage on production. As an important industrial area, Trafford Park suffered from extensive bombing, particularly during the Manchester Blitz of December 1940. On the night of 23 December 1940, the Metropolitan-Vickers aircraft factory in Mosley Road was badly damaged, with the loss of the first 13 MV-built Avro Manchester bombers in final assembly. The redeveloped Ford Trafford Park Factory was bombed only a few days after its opening in May 1941. However, by the end of production in 1944, with the use of the most modern production methods, the factory employed 17,316 workers, who were capable of producing 900 engines a month. As Sir Stanley Hooker stated in his autobiography: "once the great Ford factory at Manchester started production, Merlins came out like shelling peas at the rate of 400 per week. And very good engines they were too, yet never have I seen mention of this massive contribution which the British Ford company made to the build-up of our air forces." In total, the factory manufactured well over 34,000 engines during the war period, closing at the end of March 1946. Other tractor makers also had facilities at Trafford Park in Manchester to assemble imported kits before starting up full manufacturing facilities in the UK. - "Ford of Britain: Yesterday today...", Autocar 128 (nbr 3766): 52–54. 18 April 1968. - "[Ford of Britain] Milestone", Autocar 128 (nbr 3766): 116–118. 18 April 1968. - Nicholls 1996, pp. 63–65 - "Manchester Ship Canal". Manchester 2002. 
Retrieved on 2010-11-20. - Nicholls 1996, pp. 103–104 - Rowlinson 1947, p. 56 - Sir Stanley Hooker. Not much of an Engineer, 58–59. - "Ford in Europe: The First Hundred Years". Serious Wheels. Retrieved on 2010-11-20. This page uses some content from Wikipedia. The original article was at Ford Trafford Park Factory. The list of authors can be seen in the page history. As with Tractor & Construction Plant Wiki, the text of Wikipedia is available under the Creative Commons by Attribution License and/or GNU Free Documentation License. Please check page history for when the original article was copied to Wikia.
<urn:uuid:7afbce59-a407-4c0e-b76c-6b99679dbcc2>
CC-MAIN-2021-25
https://tractors.fandom.com/wiki/Ford_Trafford_Park_Factory
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487641593.43/warc/CC-MAIN-20210618200114-20210618230114-00070.warc.gz
en
0.965374
1,520
3.09375
3
Chronic diseases are the leading cause of illness, disability and death in Australia, accounting for 90% of deaths in 2011. Over the past 40 years, the burden of disease in Australia has shifted from infectious diseases and injury to chronic conditions. These diseases include heart disease, stroke, diabetes, obesity, arthritis, osteoporosis, depression and cancer. The situation is dire, with around half of people aged 65-74 suffering five or more chronic diseases, increasing to 70% of those aged 85 and over. Up to 80% of these diseases are caused by lifestyle behaviours, with diet and nutrition being a primary factor. The Australian Department of Health is currently developing a National Strategic Framework for Chronic Conditions and is now seeking community feedback on the draft framework. The draft framework rightly focuses on prevention as the top priority, but as it stands it does not pay enough attention to our food system, which is geared toward the production and consumption of foods, especially animal products, which increase the risk of developing many of these diseases. A diet based on unrefined plant foods has been scientifically demonstrated to not only help prevent but even to reverse many of these chronic conditions, including type 2 diabetes, high blood pressure and coronary artery disease, at a much lower cost than current medical and surgical treatments, and with the added bonus of being entirely free of adverse effects. In fact, the only 'side effects' of adopting a wholefood plant-based diet are positive ones: weight loss, enhanced energy, and an overall improvement in quality of life. The framework puts little emphasis on public education programmes or research into behaviour change to help encourage people to eat more healthily. Vegan Australia has teamed up with vegan nutritionist Robyn Chuter who is writing a response to the proposed framework. If you would like to make a submission yourself, please go to the Health Department Public Consultation website. To get in touch about the Vegan Australia submission, please email [email protected]. Submissions close on 22 June 2016.
<urn:uuid:950bd272-4d1e-41e1-8c0e-ac018c2194d5>
CC-MAIN-2024-10
https://veganaustralia.org.au/news-article/help_us_make_australians_healthier/
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476464.74/warc/CC-MAIN-20240304165127-20240304195127-00788.warc.gz
en
0.960276
414
2.9375
3
Colds And Flu Colds and flu are the most common cause of illness in adults and children. Most colds are caused by a viral infection – there are more than 200 viruses capable of causing a cold. The flu (influenza) is always caused by a virus. There are three types of flu virus – A, B and C. Symptoms of a cold The symptoms vary from person to person and from illness to illness. Symptoms may last a few days or over a week. The most common symptoms of a cold are: - sore throat - runny or blocked nose - swollen lymph glands - red, sore eyes - fever (this is not always present) - loss of appetite, nausea and vomiting are sometimes present Symptoms of the flu - flushed face - muscle and bone aches and pains - sore throat - malaise and weakness - fever, sweating and chills How do you know if it’s a cold or flu? - The flu can last up to a week, whereas a cold may only last a couple of days. - The flu often begins abruptly, whereas the symptoms of a cold can come on gradually. - The flu usually causes a high fever, whereas a cold causes a mild fever or no fever at all. - The flu causes muscular pains and shivering and a cold does not. - Colds cause a runny nose; the flu often causes a dry feeling in the nose and throat. - Most people get one or more colds each year but only get the flu every few years. Although colds and flu are viral infections, because they weaken your immune system you are susceptible to developing a secondary bacterial infection of your respiratory tract. This will increase the duration of the illness. Conventional medical treatment - Drinking plenty of fluids. - Bed rest if a fever is present. - Paracetamol can be taken to reduce a fever and alleviate symptoms such as headache and body aches. - Rest is essential, ideally in bed to allow your immune system to overcome the infection. - Try to remove yourself from stressful situations as much as possible. Stress suppresses your immune system. - Drink plenty of pure water, raw vegetable juices and water with fresh lemon or lime juice squeezed into it. Staying hydrated is vitally important when you have a fever. Drinking two litres or more of fluid will help you bring up phlegm from your lungs more easily. - Avoid sugar entirely and greatly reduce your intake of high-carbohydrate foods like bread, rice and pasta. Sugar suppresses your immune system and eating a lot of it will prolong the illness. - Avoid all dairy products as they will increase congestion and mucus production. - Many herbs, spices and foods have antiviral properties. Include as many of these as possible in your diet. Some of these include garlic, onion, mushrooms, cabbage, fresh coconut, thyme and walnuts. - “Raw Juices Can Save Your Life”. This book contains juice recipes to help fight colds and flu plus an antibiotic juice. Recommended supplements for colds and flu Take one capsule daily. Capsules combining selenium, vitamin E, zinc, vitamin C and other nutrients will help to strengthen the immune system. All of these nutrients have antiviral effects and can inhibit the replication of viruses in your body. - Cold Eze Herbs: Echinacea, garlic, elderberry, cayenne and ginger root combined with vitamin C, antioxidants and zinc. The Cold Eze formula is an immune booster and can be taken as a preventative in the winter season or if you are coming down with frequent infections. The above statements have not been evaluated by the FDA and are not intended to diagnose, treat or cure any disease.
<urn:uuid:3bffd00d-ea14-45cc-a48f-0c7caae889aa>
CC-MAIN-2020-16
https://sandracabot.com/colds-and-flu/
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506870.41/warc/CC-MAIN-20200402080824-20200402110824-00426.warc.gz
en
0.924188
790
3.59375
4
Email Encryption FAQ: What It Is, How It Works & More
Nathalia Velez Ryan
Email is one of the most convenient communication channels for businesses and individuals, thanks to its accessibility. But this also makes email susceptible to cybercriminals who attempt to access sensitive data by intercepting emails in transit or hacking into email servers. One of the main lines of defense against these attacks is email encryption, a widely used way to secure the information sent over email.
Now, you probably have a flurry of questions on this topic, like:
- What is email encryption?
- How do encrypted emails work?
- Why should you encrypt emails?
Read on to find the answers to these and other common questions about encryption and learn how Twilio SendGrid secures emails in transit.
What is email encryption?
First and foremost, email encryption scrambles the content of an email, converting it into an unreadable format called ciphertext. Once an email is encrypted, only an authorized user (the recipient) can decrypt it and view the original message. Anyone else who tries to intercept the message will only be able to see the ciphertext—thus protecting the contents of the email.
How does email encryption work?
Email encryption uses cryptographic keys: strings of characters that transform the original data so that it appears random. And unlike the simple keys that people can create themselves, email encryption services generate keys using complex algorithms that scramble the data beyond human recognition.
So how do senders and recipients use these keys to encrypt and decrypt messages? There are two ways:
- Symmetric cryptography: the sender and the recipient use a single, private key to encrypt and decrypt the message. This means the sender needs to share the key with the recipient so they can decrypt the message.
- Asymmetric cryptography: the sender uses a public key to encrypt the message, then the recipient uses a private key (that only they know) to decrypt it. This is also known as public-key cryptography, and unlike symmetric encryption, the sender and recipient don’t need to share a secret key.
However, the most widely used types of email encryption today rely on a combination of symmetric and asymmetric cryptography, as we’ll discuss later.
Why should senders encrypt emails?
Before we answer that, let’s start with a reminder that you should never send sensitive information, like passwords or Social Security numbers, over email.
That said, emails often contain personal information about the recipient, like their address, or business information not intended for the public. And without encryption, bad actors could intercept that information and use it to commit identity theft, fraud, and other crimes against individuals or businesses.
This is why encryption is so important: it ensures that no individual or entity can read the content of the email as it travels or, in some cases, as it sits on email servers.
Additionally, under laws like the General Data Protection Regulation (GDPR), regulators can fine businesses if customers’ personal data is compromised. Encryption can help avoid this.
Lastly, encryption is a crucial element of email security that can ultimately impact the sender’s reputation and deliverability.
What are the different types of email encryption?
There are 3 common types of email encryption used today. Let’s look at how these work, plus a secure alternative for sending highly sensitive information.
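Before looking at those types, it can help to see the symmetric/asymmetric distinction in code. The following minimal Python sketch is not from the original article; it uses the third-party `cryptography` package, and the message text, key size, and padding choices are illustrative assumptions rather than recommendations. Real protocols such as TLS, PGP, and S/MIME layer these same primitives with certificates, signatures, and key management.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

message = b"Quarterly invoice attached."

# --- Symmetric cryptography: one shared secret key encrypts and decrypts ---
shared_key = Fernet.generate_key()                 # must somehow be shared with the recipient
ciphertext = Fernet(shared_key).encrypt(message)
assert Fernet(shared_key).decrypt(ciphertext) == message

# --- Asymmetric cryptography: public key encrypts, private key decrypts ---
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()              # safe to publish
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
assert private_key.decrypt(public_key.encrypt(message, oaep), oaep) == message

# --- Hybrid (the pattern PGP/S/MIME-style schemes use): encrypt the bulk
# message with a fresh symmetric session key, then encrypt that small session
# key with the recipient's public key ---
session_key = Fernet.generate_key()
encrypted_body = Fernet(session_key).encrypt(message)
encrypted_session_key = public_key.encrypt(session_key, oaep)
recovered_key = private_key.decrypt(encrypted_session_key, oaep)
assert Fernet(recovered_key).decrypt(encrypted_body) == message
```

The hybrid step at the end is why the "combination" mentioned above scales: the large message body is handled by fast symmetric encryption, while the asymmetric step only protects a short, per-message session key.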
Transport Layer Security (TLS) is a protocol that encrypts email data as it travels from the sender’s email server to the recipient’s—though it doesn’t encrypt it at its destination. TLS uses both asymmetric and symmetric cryptography: the servers generate and exchange a session key through asymmetric cryptography, then use that session key to encrypt and decrypt the message in transit.
Many email providers, including SendGrid, use opportunistic TLS encryption by default (more on this below). This prevents cybercriminals from reading the contents of an email while it’s in transit, an attack known as a man-in-the-middle attack. Most web servers also use TLS for secure browsing—you might recognize it as the lock symbol in a web browser when you’re on a secure website. This type of encryption replaces its predecessor, Secure Sockets Layer (SSL).
Next, we’ll look at types of encryption that secure data on the server.
Pretty Good Privacy (PGP) was the first successful implementation of a public-key encryption solution for email. In the simplest terms, PGP encryption works by generating a random session key that the sender encrypts using the recipient’s public key. The sender then shares the encrypted session key with the recipient, who can decrypt it with their private key. Finally, the recipient uses this session key to decrypt the message.
PGP encryption protects data as it travels and on the server. This is known as data-at-rest encryption or end-to-end email encryption, and it means only the intended recipient can decrypt the message. Thus, it protects sensitive information from cybercriminals who might target your email server.
To use PGP encryption, users typically need to download an add-on—providers like Outlook, Apple, and Thunderbird have PGP add-ons available. However, the sender and recipient must both install the add-ons and enable PGP encryption to send secure messages.
Secure/Multipurpose Internet Mail Extensions (S/MIME) is a widely used protocol for sending encrypted messages with a digital signature. Like PGP, S/MIME provides end-to-end encryption, securing messages in transit and on the email server. This protocol also enables the sender to digitally sign the message, authenticating their identity and the integrity of the data they send. So it gives the recipient peace of mind that the message comes from a legitimate sender and that no one intercepted or altered the content along the way.
S/MIME also uses asymmetric encryption, requiring certificates issued by a certificate authority. The sender uses the recipient’s public key to encrypt the message, then the recipient decrypts it with their private key. Additionally, the sender uses their own private key to digitally sign the message.
Most major email providers—including Microsoft (Exchange and Outlook), Google, and Apple—support S/MIME encryption through plugins. However, both the sender and the recipient need to enable S/MIME encryption to send secure messages. Additionally, administrators can set up S/MIME encryption for all the email users in their organization.
Web portals are a secure alternative to use when you need to share highly sensitive data, such as protected health information or financial information, as it’s best not to send that information over email. Not only is it not worth the risk of compromising the recipient’s data, but regulations like the Health Insurance Portability and Accountability Act (HIPAA) often prohibit it.
With this method, the sender notifies the recipient via email that they have a new encrypted message. The recipient must then log into the secure portal to retrieve the message. This way, you can still enjoy the convenience of communicating over email while protecting the recipient’s data and complying with regulations like HIPAA.
Secure emails in transit with Twilio SendGrid
Want to know how Twilio SendGrid secures your messages? By default, SendGrid uses opportunistic TLS encryption for outbound emails. This means we attempt to deliver email over a TLS-encrypted connection as long as the recipient’s server accepts an inbound TLSv1.1 or higher connection. However, if the recipient’s server doesn’t support TLS, we deliver the message unencrypted.
You can also opt for the enforced TLS setting, which allows you to specify that the recipient must support TLS. However, if you choose to enforce TLS and the recipient’s inbox provider doesn’t accept TLS encryption, we won’t deliver the message. You would see this as a block event with the description “TLS required but not supported.”
Now that you have a better understanding of email encryption, learn more about the other email security factors that impact sender reputation and deliverability in our 2022 Email Deliverability Guide. Or if you’re ready to start sending secure emails with SendGrid, sign up for free today.
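As a footnote to the opportunistic-TLS behaviour described above, here is a short, generic Python sketch (not from the original article, and not SendGrid's own code) that checks whether a recipient domain's mail server even advertises STARTTLS. The domain is a placeholder, it requires the third-party dnspython package, and outbound port 25 is often blocked on residential networks, so treat it as an illustration of the handshake step rather than a diagnostic tool.

```python
import smtplib
import dns.resolver  # third-party package: dnspython

def supports_starttls(domain: str) -> bool:
    """Return True if the domain's highest-priority MX host offers STARTTLS."""
    # Look up the mail exchanger (MX) records for the recipient's domain.
    mx_records = sorted(dns.resolver.resolve(domain, "MX"),
                        key=lambda r: r.preference)
    mx_host = str(mx_records[0].exchange).rstrip(".")

    # Connect on port 25 and read the server's advertised ESMTP extensions.
    with smtplib.SMTP(mx_host, 25, timeout=10) as smtp:
        smtp.ehlo()
        # An opportunistic-TLS sender would call smtp.starttls() here
        # if (and only if) the server advertises the extension;
        # otherwise it would fall back to an unencrypted delivery.
        return smtp.has_extn("starttls")

if __name__ == "__main__":
    print(supports_starttls("example.com"))  # placeholder domain
```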
<urn:uuid:d39cb507-1a65-4233-b990-7d5e80a928f9>
CC-MAIN-2023-14
https://sendgrid.com/blog/what-is-email-encryption/
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943589.10/warc/CC-MAIN-20230321002050-20230321032050-00018.warc.gz
en
0.906312
1,746
3.140625
3
Chemical Plant Explosions and Safety
Chemical plants are inherently dangerous places to work. Explosions are not uncommon, and when they happen, serious injuries and death are frequent occurrences. The most devastating chemical plant accident in history occurred in Bhopal, India, causing thousands of deaths. Since many chemical plant explosions are the result of misconduct committed by an employee, an officer of the company or a product manufacturer, lawsuits are often filed to resolve disputes over competing liability claims.
Causes of Explosions
The causes of chemical plant explosions are diverse – natural disasters (as in the 2011 Fukushima earthquake/tsunami disaster), improper or infrequent maintenance, improper storage or transportation of hazardous chemicals, insufficient training of personnel, and human error in the operation of equipment are all common causes. Another major cause of chemical plant explosions is the malfunctioning of a defective product such as a safety valve or a warning system.
Types of Injuries
Chemical plant explosions produce several different types of injuries.
- The blast wave itself can cause concussions, inner ear damage and punctured retinas;
- Flying debris can cause shrapnel-type puncture wounds anywhere on the body;
- Toxic gases released by an explosion can cause poisoning and lung damage;
- Impact injuries can occur when the force of the explosion slams you against a wall, the ground or any hard object; and
- The heat of the blast can cause skin burns as well as lung damage.
The safety of chemical plant personnel is the responsibility of management, and it is the responsibility of employees and contractors to abide by safety measures. Most such measures are mandated by the Occupational Safety and Health Administration (OSHA), although some rules may be specific to a particular plant. Your employer or supervisor generally has the responsibility to issue you all necessary safety equipment, such as protective goggles.
Liability for a chemical plant explosion can be divided into two main classifications -- negligence and defective products. Because the chemicals industry is heavily regulated by OSHA, violations of regulations can normally support a claim of negligence as long as it is shown that the violation actually caused the explosion.
A product liability claim may arise when a critical product such as a heat exchanger fails and causes an explosion. To win a lawsuit (or obtain a settlement offer based on the likelihood of success in litigation) you must normally prove that the product contained a design or manufacturing defect that rendered the product unreasonably dangerous. Since nearly every product can be dangerous under certain circumstances, the level of danger must be shown to be unreasonable – by showing that the product could have been manufactured more safely in an economically feasible fashion, for example. While winning a product liability lawsuit can be difficult, once you have established liability, you can usually collect damages from anyone from the manufacturer all the way downstream to the retailer, without even proving that the defendant was specifically at fault.
Call Zinda Law Group Now
If you have been injured in a chemical explosion, you may be entitled to compensation if you can prove that the accident was someone’s fault.
Zinda Law Group is a top-tier personal injury law firm that has been approved by the Better Business Bureau® because of its stellar client satisfaction record. Our attorneys are passionate about their jobs, and they never quit until they have put you in the best possible position to receive a full personal injury settlement or favorable court judgment. Call us right now at 800-863-5312 to schedule a free initial consultation.
Where Zinda Law Group Practices: As we’ve grown, we have expanded out west to El Paso, Texas, and subsequently into Arizona, adding Tucson Personal Injury Lawyers to the firm, as well as adding offices in Colorado, including Denver Personal Injury Lawyers, and serving surrounding cities such as Colorado Springs.
Types of Cases We Handle: We also have a team of attorneys with experience in serious injury cases such as car accidents, wrongful death and truck accidents. We also have experience handling more complex cases, including gas explosions and drug injury cases such as the Taxotere® Lawsuits.
<urn:uuid:d8a1a92d-6eeb-4a23-ba30-ab6c8e0ebc53>
CC-MAIN-2020-10
https://www.zdfirm.com/blog/gas-explosions/chemical-plant-explosions-and-safety/
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146562.94/warc/CC-MAIN-20200226211749-20200227001749-00376.warc.gz
en
0.9532
833
3.15625
3
THE latest economic data from Sweden — one of the few countries to remain open during the coronavirus pandemic as a matter of policy choice — is grim. Its economy is heading into its worst recession since World War II, and is expected to contract seven per cent this year. Forty per cent of businesses in the country’s services sector are on the verge of bankruptcy. Why does economic data from Sweden matter? Because it clearly reveals what some of us have been writing about and pointing to in the past few months — opening the economy in the middle of a virus pandemic is not going to lead to the outcome the prime minister, his government team, or the business community have ignorantly been hoping for. (Unfortunately, even the Supreme Court has waded into this.) Evidence, not wishful thinking, should be the basis for policy. The evidence indicates that a strong public health response ensures a strong economic recovery. This is as true for Sweden today as it was true for the US a hundred years ago during the Spanish flu epidemic. (I have covered this aspect previously; according to published research, economic activity in US cities that imposed tighter lockdowns bounced back quicker and stronger than those cities that had looser restrictions in place.) While the government has framed the question of the economic impact purely in terms of lockdown versus no lockdown (completely ignoring the public health aspect), the fact of the matter is that economies around the world are being buffeted in three related ways: The economic impact is via multiple channels. — The ‘exogenous’ economic disruption caused by the pandemic (upended supply chains, halted global transportation and logistics links, interrupted flow of goods, people, investment and capital); — The ‘endogenous’ economic disruption caused in response to the pandemic via lockdowns and other suppression measures (closed factories, businesses and markets); — The public health effect (productivity loss, absenteeism, fearful consumers, fiscal and other costs associated with Covid-19). While the three channels of impact are overlapping, and reinforce each other, it is important to recognise they are also separate strands. Hence, while an economy may choose to avoid lockdowns, it will not be able to escape the effects of the public health crisis or the effects of the global economic recession. This is where Sweden finds itself. The saving grace of sorts in their case is that the primary motivation for Sweden to avoid closing down the economy was not economic — it was epidemiological. By keeping society open, the Swedes have wanted to develop herd immunity within the population, as the only viable option in their calculus to deal with the novel coronavirus in the long run (in the absence of a vaccine). Here, the government appears to be treating the virus outbreak as over — or as fait accompli. Either of these responses would be a grave mistake. The outbreak is far from over; in fact, with the premature easing of the partial lockdown imposed in the country, we should be prepared for a resurgence in the Covid-19 caseload. And with that will come a direct fallout on the economy. Tailpiece: While the federal government’s public health response has been rightly criticised for being slow, tentative and episodic (as well as misplaced), the economic response has been criticised by some commentators for supposedly pandering to elite interests. 
The evidence advanced to support this argument is the fact that the government has provided a support package to the export sector and announced a ‘construction package’ to stimulate the economy. This criticism appears to be both misdirected as well as unfair for a number of reasons. Firstly, the export sector has been ground zero since March in terms of the impact of the coronavirus (along with the aviation, tourism and hospitality sectors). This is reflected in the 54pc drop in the country’s export earnings in April. The export sector is critical to the country’s economy not just as a generator of foreign exchange, but as a direct and indirect employer of millions of workers. Its linkages to the rest of the economy, and its importance for stability of the external account, cannot be overstated. Secondly, the construction sector is another ‘natural’ target for fiscal incentives and government support. It is highly labour-intensive – possibly the only sector with an employment elasticity greater than one — with strong backward and forward linkages with over 40 allied sectors of the economy. (However, the accompanying ‘amnesty’ to undeclared money is open to question and a matter of controversy.) Thirdly, the government’s support has not been limited to these two sectors. The central bank has rolled out a slew of measures to support small as well as mid-sized businesses across the country impacted by the shutdown. So far, under its Rozgar refinance facility, financing of over Rs103 billion for providing wages and salaries to around one million employees is under process. Collectively, these are precisely the areas governments are supporting globally. Around the world, governments have scrambled to cobble together large emergency support and fiscal stimulus packages to save both large as well as small businesses. So far, well over $8 trillion has been announced, with Germany and Italy unveiling fiscal support equal to nearly 35pc of respective GDP, followed by Japan’s support package of around 21pc. India’s recently announced support measures amount to almost 10pc of its GDP. If anything, Pakistan’s fiscal package is conservative and not large enough. The diverse nature of the US response to the 2008 financial crisis underscores the mission-critical nature of avoiding shutting down large parts of the economy, or the systemically important bits. Some of the largest financial institutions, as well as the Big Three automakers, were given generous handouts to avoid what economists refer to as ‘hysteresis’, and is now being referred to as ‘economic scarring’. While bigger businesses have been large beneficiaries of bailouts in all countries, they are an integral part of value chains that include SMEs — and employ millions of workers. Providing government support in times of distress makes eminent sense. The writer is a former member of the prime minister’s economic advisory council, and heads a macroeconomic consultancy based in Islamabad. Published in Dawn, May 22nd, 2020
<urn:uuid:1331e95d-cb7d-4b8b-8b5b-0d31ff28a379>
CC-MAIN-2021-43
https://www.dawn.com/news/1558896/economic-costs-of-covid-19
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587659.72/warc/CC-MAIN-20211025092203-20211025122203-00067.warc.gz
en
0.96747
1,299
2.515625
3
Q--Every spring I buy pots of tuberous begonias coming into bloom. They are gorgeous until summer heat decimates them.
A--Today's seed-grown hybrids such as the Nonstops and Klips strains are considered more heat-tolerant than the larger-flowered sorts started from tubers. These plants need warm days and cool nights. Situate the begonias in the coolest part of the garden. Do not fertilize in the hottest weather and beware of extremely wet or dry soil. Water well before the leaves wilt, but avoid sogginess.
Q--Why are my hibiscus leaves turning yellow?
A--If yellowing happens, an acid fertilizer is needed. Organic gardeners scratch a tablespoon or more of cottonseed meal around each potted plant. Old-timers used vinegar water: 1 teaspoon vinegar to 1 quart of water.
Q--As an experiment I planted macadamia nuts from Hawaii in a shallow tray. One germinated and grew over winter in a sunny window garden, developing leathery, dark green leaves. Now it is 10 inches tall and the leaves have turned brown, but remain on the plant. What's wrong?
A--The macadamia nuts we eat come mostly from trees that grow outdoors in Hawaii, although home gardeners in southern coastal areas of California and in Florida also grow them. The trees are sometimes seen purely as interesting ornamentals in northern conservatories, grown in large containers that can be moved outdoors to bask in the summer sun but kept inside in cold weather, where night temperatures of 50-55 degrees can be maintained. For your macadamia seedling, leaf dieback could be caused by letting the soil get too dry or, conversely, by letting the pot stand in a saucer of water. It is also possible that more baking sunlight is needed. Sometimes older leaves die prematurely on a plant for lack of nitrogen in the soil. This can be alleviated with regular applications of fertilizer such as fish emulsion.
<urn:uuid:dac3efd3-3de7-43fd-8199-f0db0c9a38dd>
CC-MAIN-2018-05
http://articles.chicagotribune.com/1986-07-13/news/8602200309_1_begonias-macadamia-potted-plant
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889681.68/warc/CC-MAIN-20180120182041-20180120202041-00164.warc.gz
en
0.937088
420
2.765625
3
New research critical to address future food challenge FUNDING for new research which will boost farming practices is vital if the UK is to meet the challenges of feeding an expanding population over the next 20 years, a leading scientist told the Institute of Agricultural Management conference in London. Prof Chris Pollock, who was speaking about the findings of his study - Feeding the Future: Innovation Priorities for Primary Food Production in the UK to 2030 - outlined seven crucial research priorities for the future of food production in the UK. He said British farmers needed a ‘united approach’ from Government, researchers and industry to develop new knowledge and technologies. It comes after the 2011 Foresight Report highlighted a lack of research and development in agriculture and the urgent need to increase food production sustainably. Prof Pollock said it was ‘critical’ the sector maintained scientific research, identified missing links in that research and took steps to replace them. “The skills we have in higher education are not necessarily the skills we will need in 15 to 20 years’ time,” said Prof Pollock. “Food producers have tended in recent years to deal with today’s problems. If we want to shift the research agenda to deliver for 2030, we need to make sure primary producers work together and with the funders of more basic research.” Prof Pollock, who led the study on behalf of NFU, the Royal Agricultural Society of England, AHDB and AIC, added longer-term funding programmes were needed to link different sectors of the industry. He said modern technologies to improve precision and efficiency, including genetic and breeding programmes, were also ‘highly important’, as well as Governments working together on issues which affect land use. Prof Charles Godfray from Oxford University agreed action was needed ‘on all fronts’, adding agriculture needed to rise to the challenge of sustainable intensification. NFU vice president Adam Quinney added: “This report marks the transition from talking to action. Its strength is in presenting a united message from all sectors of agriculture and horticulture by outlining the industry’s views on priorities for research and technology needed to meet the key food production challenges. “Crucially, it will require buy-in from across the industry to ensure it gains momentum and achieves real change.”
<urn:uuid:b662b626-25fb-43a4-95b3-27d5263f58e3>
CC-MAIN-2015-11
http://www.farmersguardian.com/home/rural-life/new-research-critical-to-address-future-food-challenge/51445.article
s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462762.87/warc/CC-MAIN-20150226074102-00245-ip-10-28-5-156.ec2.internal.warc.gz
en
0.936164
485
2.640625
3
|The Dalai Lama in the Chumbi Valley in 1951|
For the occasion, I wrote this article for Power Politics. As the Dalai Lama, the revered Tibetan leader, turns 82, it is worth taking a look at his momentous life and achievements, but also at where he did not succeed. But let us start with the Nobel Laureate’s first steps in the political arena.
The Tragedy of Tibet
On October 7, 1950, Chinese troops crossed the Upper Yangtze and began their ‘liberation’ of Kham. Ten days later, after sporadic battles, Chamdo, the capital of the province, fell and Ngabo Ngawang Jigme, the province’s Tibetan Governor, immediately surrendered to the Chinese. It would take more than two weeks for the information to filter out. Till October 25, the Tibetan government in Lhasa knew nothing, the Indian government had heard nothing, and the Chinese were keeping quiet; further, Robert Ford, the radio operator working for Lhasa, had been taken prisoner. Other governments, depending on India for news, were not ‘informed’ either. Finally, the Chinese themselves announced that Tibet was ‘liberated.’ A brief communiqué of the New China News Agency (Xinhua) stated: “People’s army units have been ordered to advance into Tibet to free three million Tibetans …the conquest of Tibet was a ‘glorious task’ which would put the final seal on the unification of communist China.”
Hardly three weeks later in Lhasa, in the midst of preparations for a proposed debate on the Tibetan issue in the UN, the Gods spoke through the Nechung State Oracle: “Make Him King”. Thus, Tenzin Gyatso was enthroned as the Fourteenth Dalai Lama of Tibet at the young age of fifteen. The mysterious ‘God King’, as the foreign press called him, had become the temporal and religious leader of Tibet. During the following eight years, the young monk, surrounded by the traditional regalia, tried his best to be a go-between for his people and the Chinese Communist authorities. It was an impossible task, and on March 17, 1959, the Dalai Lama decided to leave his native Land of Snows and head for India.
The Dalai Lama’s Three Commitments
Since then, the Tibetan leader has been wandering across the planet, spreading his message of compassion and universal responsibility. He often says that he has three commitments in life. The first one is the promotion of human values such as compassion, forgiveness, tolerance, contentment and self-discipline. He also speaks of ‘secular ethics’. For the past three decades, wherever he travels, he shares these human values. One remarkable fact about the Dalai Lama is that he is able to place ‘humanity’ before his own self, before his own community and even his own nation. In this he has been extremely successful.
His second commitment is not Tibet, but the promotion of religious harmony and understanding among the world’s major religious traditions. Are there many religious leaders in today’s world who are ready to admit that ‘several truths, several religions’ are necessary? This message too is acknowledged by millions.
His country, Tibet, is only his third commitment (he always insists on this order); he says: “as a Tibetan [who] carries the name of the ‘Dalai Lama’, Tibetans place their trust in me. Therefore, [my] third commitment is to the Tibetan issue.” Unfortunately, during the past 50 years, the Tibetan issue, though a cause célèbre, has practically not advanced and has, in several domains, even regressed. China is ‘bigger’ today than it was two or three decades ago, and Beijing is belligerent and not ready for any type of compromise.
Apart from these three commitments, the 14th Dalai Lama will go down in history for some bold choices he has made for Tibet. On March 30, 1959, the Dalai Lama crossed the Indian border at Khenzimane, north of Tawang. During the following months, some 80,000 Tibetans joined him and settled in India, Nepal and Bhutan. On April 29, 1959, from the hill station of Mussoorie, the Dalai Lama formed a Tibetan Government-in-Exile, also known as the Central Tibetan Administration (CTA); a year later the CTA moved to Dharamsala, where it is still located.
The process of democratization then started. What he could not do during the nine preceding years in Tibet, due to Chinese objections, the Dalai Lama could now set up: he brought modern democratic practices into the old theocracy, as he did not want to have the last word on each and every political decision. As a first step, on September 2, 1960, the Tibetan parliament-in-exile, then called the ‘Commission of Tibetan People’s Deputies’, came into being. On 10 March 1961, the Dalai Lama formulated a draft Constitution of Tibet, incorporating traditional Tibetan values and modern democratic norms. Two years later, it was promulgated as the Tibetan Constitution-in-Exile. The process continued during the following years; in 1990, the Tibetan Parliament was empowered to elect the Kashag, or Council of Ministers, which was made answerable to the Parliament. A Supreme Justice Commission was also instituted. The Parliament soon drafted a first Constitution, known as the “Charter of the Tibetans in Exile”. Today, the CTA functions like any democratic government; this deeply irritates China, which is still governed by a one-Party system.
The Tibetan Charter adheres to the Universal Declaration of Human Rights and provides equal rights for all, without discrimination on the basis of sex, religion, race, language and social origin. It also defines the role of the three organs of the government – judiciary, legislature and executive – as well as other statutory bodies, namely the Election Commission, the Public Service Commission and the Office of the Auditor General.
In March 2011, the Dalai Lama took the final jump, perhaps changing Tibetan political history forever; he renounced temporal power and handed it over to an elected leader (currently Dr Lobsang Sangay). What is remarkable is that he had to fight to ‘impose’ these democratic institutions on the Tibetan ‘masses’, who often thought “the Dalai Lama is wiser, why do we need human governance when we have a divine one?” But in his wisdom, the Tibetan leader knows that in the long run, democracy is a more stable system than theocracy or autocracy like in China.
Stopping divisive sectarian practices
The Dalai Lama’s second gift to the Tibetan nation is that he succeeded in uniting the three historical provinces of Tibet, which have too often been divided in the course of the Land of Snows’ checkered history. In his Address to the U.S. Congressional Human Rights Caucus in Washington DC on September 21, 1987 (known as the ‘Five-Point Peace Plan’), he stated: “It is my sincere desire, as well as that of the Tibetan people, to restore to Tibet her invaluable role, by converting the entire country - comprising the three provinces of U-Tsang, Kham and Amdo - once more into a place of stability, peace and harmony.” The fact that all three provinces have been represented since the first days of the Parliament in exile is a telling example.
The Apostle of Peace
An interesting book, titled “Destined for War – Can America and China Escape Thucydides’s Trap?”, was recently released in the US. Graham Allison, the author, studied 16 cases in the last 500 years where an aggressive rising nation threatened a dominant power; in 12 cases it ended with a war. Studying the case of the US and China, the author asks, “Can a collision course be avoided?” For Allison, the rise of China offers a classic Thucydides trap. In 1980, China’s economy was only a tenth the size of the US economy. By 2040, Allison reckons, it could be three times larger; as a result, the two nations, he argues, are “currently on a collision course for war”, which can be averted only if both demonstrate skill and take difficult and painful actions.
The Dalai Lama, who has ceaselessly worked for World Peace, has helped to change the perception of millions on this planet about war and peace. This contribution to humanity will certainly be an important factor in avoiding a conflict if the planet is confronted with such a ‘war trap’.
|Will Macron, the President, meet the Dalai Lama?|
For his own country, the Dalai Lama has advocated a Middle-Way Approach in his dealings with China to find a permanent solution to the Tibetan tragedy. He wrote: “The Tibetan people do not accept the present status of Tibet under the People's Republic of China. At the same time, they do not seek independence for Tibet, which is a historical fact. Treading a middle path in between these two lies the policy …This is called the Middle-Way Approach, a non-partisan and moderate position that safeguards the vital interests of all concerned parties - for Tibetans: the protection and preservation of their culture, religion and national identity.” Though this has not brought the expected results, one can hope that one day a solution will be found based on this principle, without disregarding the fact that Tibet was an independent country before 1950. But the end of the tunnel is still far away.
Rule by incarnation: a difficulty
It is unfortunate that the ‘rule by incarnation’ practiced in Tibet has often been unsatisfactory; there are several reasons for this. First, it is difficult to be sure that the choice of a new reincarnated lama is the right one. During some troubled periods of Tibetan history, the Mongols or the Manchu dynasty could use their influence to steer the choice, through the Golden Urn system or other ways. The selection of the correct candidate has always been a major problem in Old Tibet. This was true not only for the Dalai Lamas and the Panchen Lamas at the top of the hierarchy, but also for ‘local’ hierarchs who presided over a county, a province, a school of Buddhism, a monastery or even over a particular lineage.
Another reason that made this system unworkable was the gap of 20-odd years between the death of a Lama and the time when his reincarnation became eligible to take over. Of course there are exceptions, as in the case of the present Dalai Lama, but such examples are rare, and often the Lamas have to depend on estate managers or regents who have more knowledge of mundane matters. This is a serious issue which has not been tackled so far.
Atheist China’s expertise in religious matters
In 2007, the Chinese State Administration for Religious Affairs in Beijing issued State Order No.5, stating the “Management Measures for the Reincarnation of Living Buddhas in Tibetan Buddhism”. The Party decided to play the ‘religion’ card to solve the Tibet issue.
Soon after, Beijing started to promote ‘Living Buddhas’ working under the Communist Party. The objective of the new policy was clearly to control the future reincarnation of the Dalai Lama. The same year, Beijing appointed the largest-ever number of clerics to Tibet’s regional advisory body and started promoting its own ‘Living Buddhas’, such as Gyaltsen Norbu, China’s own Panchen Lama.
In September 2011, the Tibetan leader decided to counter Beijing by speaking about his own reincarnation. He explained the general phenomenon of reincarnation, which could take place either by the voluntary choice of the concerned person or at least based on the strength of his or her karma, merit and prayers. The Dalai Lama clearly stated that the person who reincarnates has the sole legitimate authority over where and how he or she takes rebirth and how that reincarnation is to be recognised. According to him, no one else can force the person concerned, or manipulate him or her. He believes that the Chinese interference in the spiritual process is brazen meddling which contradicts their own political ideology and reveals their double standards.
The fact remains that politically, the situation of the Tibetan refugees in India and elsewhere in the world is not rosy. For them, the ‘Rise of China’ is rather worrying, and it has resulted in a number of unfortunate self-immolations in Tibet. But hope remains, as no dynasty has lasted forever; in the meantime, the Dalai Lama likes to quote this beautiful prayer of the Indian sage Shantideva: “As long as space endures, as long as sentient beings remain, until then, may I too remain and dispel the miseries of the world.”
<urn:uuid:5807b220-b9e6-4bee-bd3d-0865b92f4a64>
CC-MAIN-2018-39
https://claudearpi.blogspot.com/2017/07/the-dalai-lamas-commitments.html
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160923.61/warc/CC-MAIN-20180925024239-20180925044639-00095.warc.gz
en
0.963728
2,731
3.390625
3
In a report that came out early last month, two sociologists, Michael DM Bader and Siri Warkentien, found that while LA had a relatively significant portion of diverse, racially mixed neighborhoods, around 40 percent of its racially diverse neighborhoods were on track to become more segregated. How exactly is this happening? In an op-ed in the LA Times today, Bader goes into detail on how that plays out in the LA area, and what drives these mixed neighborhoods to become more homogenous.
Bader says that “vast portions of south and east Los Angeles are slipping from mixed populations toward single race populations,” and uses Compton as an example. In 1980, Compton’s population was almost 75 percent black, but by 1990, that had decreased to about 52 percent black and 43 percent Latino. In 2014, Compton was about 66 percent Latino. “Such slow but steadily increasing Latino growth can be found in 46% of the neighborhoods we studied in the Los Angeles metropolitan region,” Bader says.
Immigration plays a big part in the trend because recent immigrants, Bader argues, tend to move into neighborhoods where people from their country or similar backgrounds are already established. Bader has seen this play out not only with Latino immigrants but with immigrants from Asian countries, and cites Cerritos, where Asian immigrants rose from 44 percent of the populace in 1990 to 62 percent in 2014.
In their own way, white people are kind of doing the same thing. When white people move, they tend to “[choose] new neighborhoods with same-race neighbors.” That finding on its own looks bad (the study termed this phenomenon “white avoidance”), but Bader says that this happens because whites simply aren’t familiar with more mixed areas, and therefore aren’t really aware of them as options. The exception here would be in gentrifying neighborhoods, but there are way more neighborhoods segregating than there are gentrifying, Bader notes.
It’s worth noting that while Bader and his co-author’s report found that LA is moving toward becoming less racially mixed, it also found that LA is still the most integrated of the four major cities the report looked at (the other three were New York, Chicago, and Houston). Their findings also differ from a 2015 study that found that LA’s neighborhoods were actually getting less segregated.
<urn:uuid:0d414b81-e56b-42be-a356-023b4dbea5f2>
CC-MAIN-2018-43
http://danmayrealestate.com/2016/03/heres-how-los-angeles-is-re-segregating/
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509690.35/warc/CC-MAIN-20181015184452-20181015205952-00293.warc.gz
en
0.976719
495
2.546875
3
December 29, 2021, Update on Pregnancy and the COVID-19 Vaccine For pregnant women, getting the COVID-19 vaccine as soon as possible is the safest choice to protect them against the virus. The Centers for Disease Control and Prevention (CDC), American College of Obstetricians and Gynecologists (ACOG), Society for Maternal-Fetal Medicine (SMFM) and other professional organizations recommend the vaccine during pregnancy. These organizations also point out that pregnant women have a higher risk of hospitalization and death from COVID-19. Getting COVID-19 while pregnant increases the risk of preterm birth, preeclampsia, low birth weight and stillbirth. Our advice for pregnant women, or those considering pregnancy is to get the COVID-19 vaccine when it is available. On December 11, 2020, the U.S. Food and Drug Administration (FDA) issued an Emergency Use Authorization (EUA) for the Pfizer-BioNtech mRNA vaccine for use against the virus that is causing the illness known as COVID-19. The vaccine has been 95% effective in large-scale clinical trials and has already been approved for use in other countries. A second mRNA vaccine for the prevention of COVID-19, from manufacturer Moderna (called mRNA-1273), received EUA approval on December 18, 2020. Its efficacy rate is reported to be 94%. Johnson & Johnson was the third vaccine to receive EUA for the prevention of COVID-19 on February 27, 2021, from the FDA. New guidelines from ACOG recommend that pregnant and recently pregnant people up to 6 weeks postpartum, including pregnant and recently pregnant healthcare workers, receive a booster dose of COVID-19 vaccine following the completion of their initial COVID-19 vaccine or vaccine series. As United States citizens now have access to a COVID-19 vaccine, it is important for infertility patients and pregnant women to make a plan to get vaccinated. Is the COVID-19 vaccine safe for pregnant women and parents-to-be? After carefully studying all information and evidence, the doctors at CU Medicine OB-GYN East Denver (Rocky Mountain) recommend that pregnant women and women trying to get pregnant, receive a COVID-19 vaccination. The CDC experts say that, based on how mRNA vaccines work, “they are unlikely to pose a risk for people who are pregnant.” This is because mRNA vaccines do not contain the live virus that causes COVID-19, so the shot cannot give a pregnant woman the disease. But the CDC says that the potential risks of the COVID-19 vaccine to a pregnant woman and her fetus are not yet fully known. Unlike the Pfizer and Moderna vaccines, Johnson & Johnson’s isn’t an mRNA vaccine. Johnson & Johnson’s is a viral vector vaccine. An initial CDC study found that pregnant people are at an increased risk of severe COVID-19 illness, but this segment of the population was not included in the Operation Warp Speed vaccine trials. The American College of Obstetricians and Gynecologists’ (ACOG’s) COVID-19 working group reports that while pregnant women are typically excluded from most clinical trials, they have been vaccinated for decades with few complications. One ACOG working group doctor, Denise J. Jamieson, MD, MPH, said that because the COVID-19 vaccine uses messenger RNA technology, not a live virus, she anticipates the vaccine should be very safe in pregnancy. Should I get the COVID-19 vaccine if I am pregnant or thinking about it? ACOG recommends that everyone who is eligible, including pregnant and lactating individuals, receive a COVID-19 vaccine or vaccine series. 
If you have an underlying health condition or are one of the estimated 330,000 healthcare workers who are pregnant or breastfeeding during the initial months of the vaccine’s release, then it is imperative that you make a plan to get vaccinated. If you have concerns about your health condition and a COVID-19 vaccine, make sure you talk to your doctor about inoculation. If you are pregnant or thinking about getting pregnant, make sure you get your COVID-19 vaccine. For otherwise healthy people in general, the COVID Task Force of the American Society for Reproductive Medicine (ASRM) does not recommend withholding the vaccine from patients who are planning to conceive, who are currently pregnant, or who are breastfeeding. It does encourage patients undergoing fertility treatment to receive the vaccination based on current eligibility criteria. Because the vaccine is not a live virus, the Task Force says there is no reason to delay pregnancy attempts due to vaccine administration or to defer infertility treatment until receiving the second dose. At the same time, the ASRM Task Force notes that recent studies suggest that pregnancy poses a higher risk for severe COVID-19 disease along with factors such as obesity, hypertension and diabetes. Therefore, each person should weigh individual risks and benefits with her own doctor before receiving the vaccination. What are potential risks for pregnant women getting the vaccine? Because there has been no specific research on the COVID-19 vaccine on pregnant women, no known risks have been documented. However, vaccines pose risks to the population at large in general. One risk is that inoculations can trigger a fever in pregnant women (a dose of acetaminophen is usually the recommended treatment). Another possible side effect is that, sometimes, ingredients in the vaccine can cause allergic reactions similar to those caused by allergies to bee stings and peanut butter. Reactions can include anaphylaxis, a severe condition that can cause shock, a rapid pulse rate, difficulty breathing, nausea and vomiting. Whether pregnant or not, anyone receiving the COVID-19 vaccine should be monitored for 15 to 30 minutes after receiving the shot to identify any complications. Will the COVID-19 vaccine make me infertile or miscarry? Currently, there is no data connecting the vaccine and infertility, and social media posts saying otherwise are “inaccurate,” according to Yale University vaccine expert Saad Omer in a Dec. 11, 2020 The New York Times wire story. The key ingredient in Pfizer’s and Moderna’s vaccine is genetic material that “teaches” human cells how to produce a protein, called “spike protein.” The human immune system recognizes this “spike protein” as foreign, so it produces antibodies against the protein. These antibodies then linger in the person’s immune system. If that person is subsequently exposed to the coronavirus, then the antibodies help prevent symptoms/infection from developing. These protective antibodies DO cross the placenta and are in the breastmilk, providing protection against COVID-19 to the baby. No placental proteins or genetic material in the vaccine teaches the body how to make placental proteins. 
Duke University immunologist and expert in neonatal immunity Stephanie Langel explains in the same New York Times article that the coronavirus spike and placental proteins have nothing in common, “making the vaccine highly unlikely to trigger a reaction to these delicate tissues.” Regarding miscarriage, Mary Jane Minkin, MD, of Yale School of Medicine, tells USA Today that there has been no evidence among the 91,466 COVID-19 cases in pregnant women up to May 17, 2021 [updated], that spike protein antibodies attacked any cells in the placenta, which would cause pregnancy complications or miscarriage.
What if I can’t get access to the vaccine?
Regardless of whether you get the COVID-19 vaccine, it is imperative to continue wearing a mask and social distancing in appropriate circumstances. If you become pregnant during the pandemic, notify your doctor immediately. According to the CDC, of the 91,466 pregnant women in this country who tested positive for COVID-19, more than 15,500 had to be hospitalized, and of those, more than 400 were admitted to intensive care and 101 died.
Conclusion: precautions are still the best medicine
As of now in Colorado, all adult residents are eligible to receive a COVID-19 vaccine, along with boosters when they are due. We strongly encourage you to get the vaccine. If you have questions, please reach out to your provider. For more information online, see the COVID-19 and vaccine resources from the American College of Obstetricians and Gynecologists.
The content provided here is for general information only and should not be relied upon or used as a substitute for professional medical advice, diagnosis, or treatment. Any reliance on this information is at the risk of each website visitor. Always seek the advice of a physician or other qualified health provider with any questions regarding a medical condition.
<urn:uuid:13a076c8-0b21-41f9-8388-cd574f888453>
CC-MAIN-2022-21
https://eastdenver.coloradowomenshealth.com/blog/covid-19-vaccine-pregnancy
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662636717.74/warc/CC-MAIN-20220527050925-20220527080925-00142.warc.gz
en
0.941382
1,820
3
3
To the Northern Breed Dogs Index Ancient People Honoured Working Dogs This isn't Northern, but it's an important piece anyway, on 1,000-year-old graves found in Britain in 1999. The Canine Diversity Project This project is an attempt to acquaint breeders of domesticated Canidae (dogs) with the dangers of inbreeding and the overuse of pre-eminent males. There is an extensive resource list here. Cultural Sensitivity & Arctic Dogs In February 1999, abandoned dogs were being "rescued" from an Inuit village by the Montreal SPCA - should they have been?? Dog Sled Tours Around the World With a little help from our four-footed friends, people in many parts of the world can experience one of the most exciting aspects of our history. Dogs of the Vikings This extensive report, with almost 50 photos, describes 11 specific breeds of dogs kept by the Vikings, as well as cats, cattle, horses, bees, falcons and many other animals. Mushing Around the World From Murrays' photo album, mushing, pulking and other Northern working dog scenes from Alaska to Austria. Russian Dogs Lost in Space A page commemorating some of the pioneers of the space race. Another series of small pages is Sled Dog Rescue A shelter for Huskies & Malamutes - their Web site has lots of information for those thinking about sharing their lives with a dog such as Miss Bear, whose photo is to the left. Snowmobiles & Sled Dogs The invention of personal snowmachines spelled the end of the Golden Age of mushing in Alaska and the Yukon. This beautifully-illustrated article describes the spread of this general type of dog which originated in the Arctic regions. Part of Kim Nan Young's Top 100 Dog Breeds in Finland An interesting survey of 1997 registrations - 3 of the top 4 are Nordic breeds. Who Invented Sled-Dog Teams? A brief illustrated article by Oscar J. Noel, tracing the harnessing of dogs in the Arctic back 1,000 years. World's Largest Dog Team A photo album of 210 huskies in harness, pulling a loaded semi-trailer in Whitehorse, Yukon, in October 1998. Dog Breed Selector Answer 16 questions, and this interesting site will show you which breeds are appropriate to your needs.
<urn:uuid:b3ee8e7f-f813-4050-bbaa-b0e86048b927>
CC-MAIN-2017-30
http://everythinghusky.com/dogsgen.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549427749.61/warc/CC-MAIN-20170727062229-20170727082229-00140.warc.gz
en
0.917467
503
2.734375
3
The New Forest Pony, the Architects of the Forest
One of the most iconic and distinctive features of the New Forest is her ponies. New Forest ponies still roam freely and ‘wild’ through much of the New Forest. They are not completely wild though; they belong to the Commoners of the New Forest. These Common rights date back to the time of William the Conqueror. The Common right of pasture is a right afforded to New Forest Commoners, who can keep ponies, cattle, donkeys or mules and pigs on the Forest. These animals are known as the Commoners’ ‘stock’. In 2011 there were around 4,500 ponies ‘depastured’ onto the Forest.
A hardy breed
The New Forest Pony is not a descendant of ponies that made it to English soil when the Spanish Armada was wrecked, as some have recorded, but an ancient breed that grazed and foraged in the Forest long before recorded history. These are hardy, robust animals, able to withstand the harsh winters, and they are free to roam where they wish. Learn more about the New Forest Pony History. The upper height limit is 148 cm, and the ponies can be any colour except piebald, skewbald, spotted or blue-eyed cream. They are bred for temperament and are generally easy to train.
Once a year the New Forest ponies are rounded up in a “Drift”. They are herded into stockades, checked by vets and wormed, and the foals are branded by their owners before all are once more set free to roam where they wish. Several times a year some of these ponies will be traded at the New Forest Pony Sale at the Beaulieu Road Sales Yard.
The Common Right of Pasture
The most important right at the moment, is the right of pasture. Which is the right to turn out cattle and ponies. And they are the architects of the Forest and without them, it would all grow far too coarse and then you’d lose, first the insects and then the birds that eat the insects, and the nice little flowers which only grow where it’s close grazed. So the Forest would be completely different. Enormously impoverished. ~ Dionis Macnair MBE, Podcast 2
Never feed the ponies!
Never feed the ponies, as this encourages them towards car parks and roads, increasing their risk of injury or death – plus it is illegal to do so! It is also a very dangerous thing to do. New Forest ponies are bred for their temperament, but all the same these are not pets. They can bite and have a tremendous kick when they feel threatened.
And of course people feed them. Which is disastrous because normally a group of ponies on the Forest will be a mare, her daughters and her grandchildren. If one of those has to be removed because it has kicked a visitor and is put in another part of the Forest, it will be among strangers, taken away from the family group it belongs to. And it will be the interloper and the others will be beastly to it. The new herd will be beastly to it. Sometimes they will manage to get assimilated, sometimes perhaps they will go off and meet up with another lot of miseries that have been turned out. But it’s not a happy situation. It’s actually unkind. They will normally stay in those family groups. The fillies. So it’s not kind.
Acorns, the killer fruit
The fruit of the mighty oak tree can cause a problem for ponies and cattle. Particularly in ‘mast’ years, the large numbers of acorns can poison the ponies. In 2013, a ‘mast’ year, some 90 ponies and cattle were killed by acorn poisoning. Typically about 5 animals die in a normal year. Traditionally pigs are let out onto the Forest to try and sweep up the acorns.
This right of ‘pannage’ or ‘Common of mast’ is another traditional commoners’ right.
Man, the motor car & the pony
In 1903 the first pony was killed by a motor vehicle in the New Forest. Since then, literally thousands of ponies and other commoning stock have been killed or injured by cars. The introduction of speed limits helped reduce these accidents. Please slow down in the Forest, especially at dusk or twilight, when the ponies are difficult to spot.
<urn:uuid:39410edf-7266-449f-a498-f6b9f55840ea>
CC-MAIN-2017-22
http://inewforest.co.uk/new-forest-pony/
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608953.88/warc/CC-MAIN-20170527113807-20170527133807-00386.warc.gz
en
0.958272
910
3.09375
3
1 Diagnostic Considerations in Breast Cancer
2 Liver Scintigraphy
3 Brain Scintigraphy
4 Breast Scintigraphy
5 Skeletal Scintigraphy
6 67Ga Scintigraphy
7 Radiologic Evaluation: Roentgenographic and Other Procedures
Breast carcinoma is a dreaded disease. The incidence of breast cancer, which appears to be increasing, is 1 in 1500 women, with an annual death rate of 4,000 from this disease in the United States (1). It is a cancer which threatens its victims with mutilation as well as early death. Although response to therapy has not been good, improved methods for earlier and more complete diagnosis are providing hope for better results.
When a woman presents herself for routine breast examination, what diagnostic procedures are indicated? If a breast mass is present, what diagnostic and therapeutic methods are employed? When the mass proves to be malignant, what then? Should biopsy and mastectomy be a combined procedure? Should a positive biopsy be followed by a complete diagnostic work-up before definitive therapy is undertaken? While some answers may seem obvious and others less obvious, common medical practices vary considerably in response to all of these situations. No easy formula exists. Each patient must be given individual consideration and her treatment carefully planned to incorporate all the diagnostic findings.
Experience to date indicates that some diagnostic and therapeutic procedures have established efficacy, while others are not very helpful and still others need more evaluation before their usefulness can be assessed fully.
Traditionally, treatment of breast cancer has been surgical. Through the years, poor results from surgery, along with acquisition of knowledge of the lymphatic spread of this malignancy, prompted more and more extensive surgical procedures.
Springer Book Archives
<urn:uuid:40a64410-52af-4fd1-990f-057eaaf5e85f>
CC-MAIN-2016-50
https://www.moluna.de/buch/4203152-breast+cancer+diagnosis/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541773.2/warc/CC-MAIN-20161202170901-00445-ip-10-31-129-80.ec2.internal.warc.gz
en
0.91558
353
2.59375
3
A direct relationship can be defined as a relationship where both factors increase or decrease in parallel with one another. For instance, an example of a direct relationship would be the relationship between the guest count for a wedding and the amount of food served at the reception. In terms of online dating, a direct relationship exists between one dating site user and another online dating customer. The first person dates the other person, usually through an initial Internet connection. The second person views the profile of the first person on the website and matches with that individual based solely on that particular profile. Using a spreadsheet to create a direct relationship, or linear relationship, between any two variables X and Y can be carried out. By inserting the values for each of the x's and y's into the spreadsheet cells, it is possible to get a basic graphical representation of the data. Graphs are typically drawn using a straight line or a U shape. This helps to represent the change in value over time. You can use a mathematical expression to obtain the direct and inverse relationship. In this case, the term 'x' represents the first variable, while 'y' is the second variable. Using the formula, we can plug in the values for the x's and y's in the cells representing the first variable, and verify that the direct relationship exists. However, the inverse relationship exists when we reverse the order. The graphs can also represent the trend of one variable going up when the other goes down. It can be easier to get a trendline by using the spreadsheet instead of a chart because all the changes are in line, and it is easier to see that the relationship exists. There are other methods for determining trendlines, but the spreadsheet is much easier to use for this purpose. In some situations where there is more than one indicator for a given measure, such as markers on the x-axis, you can plot the results of the distinct indicators on one graph, or on two (or more) graphs. Usually a trendline is just a series of points (x, y) along with a break in the line at some time. You can also make use of a histogram to make a trendline. A histogram reveals the range of one variable against another. You can also plot a direct relationship or an indirect relationship using a quadratic equation. This will compute the value of the function y(I) over time. The formula used to calculate this value is: y = exp(I / ln(k*pi*pi)). In the above example, we can calculate the rate of growth of sales from the rate of growth of the economy. This will provide us with a range, from zero to infinity. We can plot the results on the graph and look at the distinct ranges for the various factors.
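To make the idea concrete, here is a minimal sketch of a direct and an inverse relationship written in Python rather than in a spreadsheet; the data values, the factor of 3, and the use of a least-squares fit for the trendline are illustrative assumptions, not part of the original discussion.

```python
# Minimal sketch: a direct and an inverse relationship between two variables.
# All numbers here are made up purely for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

direct = 3.0 * x        # direct relationship: y rises in parallel with x
inverse = 12.0 / x      # inverse relationship: y falls as x rises

# A least-squares trendline for the direct data recovers a slope of about 3
# and an intercept of about 0, confirming the linear relationship.
slope, intercept = np.polyfit(x, direct, deg=1)
print(f"trendline: y = {slope:.2f}x + {intercept:.2f}")
```

The same values could be plotted in a spreadsheet chart; the point is simply that a direct relationship shows up as a straight upward trendline, while the inverse series bends downward as x grows.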
<urn:uuid:9bb74a4a-5fc8-4394-afae-dab5096455bc>
CC-MAIN-2022-49
https://homeofthehyzer.com/direct-relationship-or-indirect-romance/
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710711.7/warc/CC-MAIN-20221129200438-20221129230438-00407.warc.gz
en
0.930178
635
2.546875
3
Starch is viewed as a major nutritional material that provides energy for humans, and as a major functional ingredient in food recipes that provides the characteristic viscosity, texture, mouth-feel and consistency of many food products. However, starch has also found uses in various non-food applications such as the paper, textile, cosmetic, adhesive, bakery, leather, pharmaceutical and other industries. We can supply starch in any quantity as per your requirement.
Types of Starch that we supply:
a) Tapioca Starch – Native, Modified
b) Corn Starch – Native, Modified
Modified Starch is further classified into –
1. Oxidized Starch
2. Cationic Starch
Packing: 25kgs; 50kgs; 850kgs PP bag
a) As an additive for food processing, food starches are typically used as thickeners and stabilizers in foods such as puddings, custards, soups, sauces, gravies, pie fillings, and salad dressings, and to make noodles and pastas.
b) Papermaking is the largest non-food application for starches globally, consuming millions of metric tons annually. In a typical sheet of copy paper, for instance, the starch content may be as high as 8%. Both chemically modified and unmodified starches are used in papermaking.
c) Corrugated board adhesives are the next largest application of non-food starches globally.
d) It is used in the construction industry in the gypsum wall board manufacturing process.
e) Starch is used in the manufacture of various adhesives or glues for book-binding, wallpaper adhesives, paper sack production, tube winding, gummed paper, envelope adhesives, school glues and bottle labeling.
f) Clothing starch or laundry starch is a liquid that is prepared by mixing a vegetable starch in water (earlier preparations also had to be boiled), and is used in the laundering of clothes.
g) Starch is also used to make some packing peanuts and some drop ceiling tiles.
h) Textile chemicals from starch are used to reduce breaking of yarns during weaving; the warp yarns are sized, especially for cotton. Starch is also used as a textile printing thickener.
i) In the printing industry, food grade starch is used in the manufacture of anti-set-off spray powder used to separate printed sheets of paper to avoid wet ink being set off.
j) Starch is used to produce various bioplastics, synthetic polymers that are biodegradable.
k) For body powder, powdered corn starch is used as a substitute for talcum powder, and similarly in other health and beauty products.
l) In oil exploration, starch is used to adjust the viscosity of drilling fluid, which is used to lubricate the drill head and suspend the grinding residue in petroleum extraction.
m) Glucose from starch can be further fermented to biofuel ethanol.
n) Hydrogen production can use starch as the raw material, using enzymes.
<urn:uuid:cbe85f4c-993f-420f-a3cf-b675d611e3bb>
CC-MAIN-2021-43
https://www.adroitmart.com/starch
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588244.55/warc/CC-MAIN-20211027212831-20211028002831-00379.warc.gz
en
0.932839
636
2.921875
3
Twitter® is a social networking website on which people create profiles, "follow" or "unfollow" others, and communicate in posts of 140 characters or less, referred to as tweets. Besides communicating with friends through universal posts, @ posts that are directed to a specific person, and private posts (called direct posts), Twitter® users have also used posts to follow comments on particular topics and to hold multi-person, international conversations. Because the posts are so brief and because there are so many tweets, the use of the hashtag, a name for the symbol #, evolved within the Twitter® community. The hashtag is used in several different ways. The use of hashtags reportedly developed in 2007, when a user named Nate Ritter identified his updates about the forest fires in San Diego with the hashtag #sandiegofire at the beginning of his posts. As in this example, it is standard to have no spaces in a hashtag and to run multiple words together. More than one hashtag can be used, but they are separated by a space. Although Ritter put hashtags at the beginning of his posts, it is now a semi-convention to place any hashtags at the end of a tweet. A variety of facilities, both on the Twitter® site and in applications created to enhance the posting and viewing experience, allow for searches, and users can search for hashtags. This has led to the development of some hashtag uses. For example, the hashtag is an efficient way to categorize a post that may not have the category word in it. For example, a post about a new device or gadget might be marked with a tag such as #technology. This would allow people who are generally interested in technology, but not yet aware of the new device by name, to find the post. Hashtags are also used to set off posts that are made by people participating in a conversation. Some conversations take place at a particular time each week, and some are ongoing. The hashtag #musedchat or #MusEdChat is used by music educators who meet to chat on Mondays, but also post in between. Hashtags.org™ is a website that evolved to follow Twitter® trends and encourage recognition of them by promoting the use of hashtags to categorize posts. At one time, it also allowed users to sign up with a profile and apply three hashtags to their Twitter® ID names, in order to help like-minded people find each other. In 2010 it was essentially offline for a number of months and undergoing reorganization, but planning a comeback with new features.
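As a small illustration of the conventions described above (a leading '#', no internal spaces, and multiple tags separated by spaces), here is a minimal Python sketch that extracts hashtags from a post; the example post text is invented.

```python
# Minimal sketch of pulling hashtags out of a post. A '#' followed by letters,
# digits or underscores is treated as one tag; a space ends a tag, so multiple
# tags must be separated by spaces. The post text below is invented.
import re

post = "Crews are making progress on the blaze tonight #sandiegofire #update"

hashtags = re.findall(r"#\w+", post)
print(hashtags)  # ['#sandiegofire', '#update']
```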
<urn:uuid:7a14bc50-8adc-470b-870a-3876f045720d>
CC-MAIN-2016-50
http://www.wisegeek.com/what-is-a-hashtag.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541864.44/warc/CC-MAIN-20161202170901-00237-ip-10-31-129-80.ec2.internal.warc.gz
en
0.957693
585
2.71875
3
Accessibility in User-Centered Design: Evaluating for Accessibility
A key aspect of successful User-Centered Design (UCD) is evaluating early and throughout the UCD process. The Background: Accessibility & User-Centered Design (UCD) chapter introduces the User-Centered Design process. This section provides information on incorporating accessibility into the following evaluation methods:
- Importance of Comprehensive Accessibility Evaluation
- Standards Review
- Heuristic Evaluation
- Design Walkthroughs
- Screening Techniques
- Usability Testing
"Accessibility Evaluation Tools and Techniques" in the Appendix: Resources lists additional resources on evaluating accessibility, including methodologies focusing on specific product accessibility.
Accessibility evaluation is often limited to assessing conformance to accessibility standards. Conformance to accessibility standards is important: in some cases it's a legal requirement and in others it's just a good way to help check that you've adequately covered the range of accessibility issues. However, when the focus is only on the technical aspects of accessibility, the human interaction aspect can be lost. Usability evaluation methods can assess usable accessibility to ensure that your accessibility solutions are usable by people with disabilities.
Some designers needing to meet U.S. Section 508 standards chose to provide alternative "modes of operation and information retrieval". However, in some cases where the standard was technically met by providing an alternative, the products were awkward to use or were totally unusable by some people with disabilities. These cases illustrate the importance of going beyond merely meeting a minimum accessibility standard, and of evaluating sufficiently.
Effective accessibility evaluation includes both evaluation expertise and the experience of people with disabilities. If you have people with disabilities easily available to help with evaluation, such as employees in the same building, you probably want to do lots of informal evaluation with them on early design prototypes. In the more common case where it takes more effort to get people with disabilities for evaluation, you probably want to employ the other evaluation methods first. If you have a limited budget you might need to do the evaluations yourself, or you might be able to afford an accessibility specialist. An accessibility expert with first-hand experience of how people with different disabilities interact with products can:
- evaluate accessibility issues for a broad range of users, which might not be found by a few individual users in usability testing;
- help fix any known accessibility barriers before bringing in users; and
- focus usability testing or informal evaluation with users on potential areas of concern.
While each evaluation plan will be different based on resources and other factors, ensure that you employ comprehensive evaluation that includes at least some of the methods described next: standards review, heuristic evaluation, design walkthroughs, screening techniques, and usability testing.
A standards review in the User-Centered Design process assesses whether a product conforms to a specified interface design standard. Sometimes the standards are internal style guide recommendations, and other times they are external standards. Accessibility standards and guidelines are available from international standards organizations; national, state and local governments; industry groups; and individual organizations.
The "Standards and Guidelines" section in the Appendix: Resources lists accessibility standards, guidelines, and related articles. Accessibility standards reviews are often more rigorous than typical user interface reviews, especially when conformance to a standard is a legal requirement. Furthermore, user interface issues often overlap with technical issues in accessibility standards reviews. Specific guidance on accessibility standards conformance is beyond the scope of this book.
Software tools are available to help evaluate web pages and some elements of software. While the tools provide some automated review, human evaluation is still necessary. Most web accessibility evaluation tools assess how web pages conform to the W3C WAI Web Content Accessibility Guidelines (WCAG), and sometimes national standards such as Section 508 Part 1194.22. Most of the tools are commercially available, a few are free, and several have limited functionality available free online. The following resources from WAI cover web accessibility evaluation tools:
- Selecting Web Accessibility Evaluation Tools provides guidance on choosing tools to use to help evaluate Web accessibility. It describes different types, uses, and features of tools.
- Web Accessibility Evaluation Tools is a comprehensive database of over 100 tools in 20 languages.
Although evaluation tools can identify some accessibility issues, evaluation tools alone cannot determine if a product meets standards and is accessible. A good example of what tools can and cannot do is evaluating equivalent alternative (alt) text for images on a web page. Tools can identify images that are missing alt text. However, tools cannot determine if existing alt text is equivalent (that is, whether it provides the same information in text as the image provides visually). Judging if the alt text is equivalent requires human evaluation. (A small illustrative sketch of this kind of automated check appears later in this section.) Web accessibility evaluation tools can increase the efficiency of evaluation by saving time and effort; however, they cannot replace knowledgeable human evaluators. Rather than thinking of tools as a substitute for human evaluation, think of tools as an aid to human evaluation.
In a heuristic evaluation, specialists judge whether each design element conforms to established usability principles. To conduct a heuristic evaluation for accessibility, accessibility specialists judge whether design elements conform to accessibility principles. Several resources provide information that can serve as guidance on heuristic evaluation for accessibility:
- Section 255 of the Telecommunications Act, Subpart C: Requirements for Accessibility and Usability is listed in the "Understand the Range of Functional Limitations" section of Design Phase.
- Section 508 of the Rehabilitation Act, Subpart C -- Functional Performance Criteria, is listed below:
§ 1194.31 Functional performance criteria.
(a) At least one mode of operation and information retrieval that does not require user vision shall be provided, or support for assistive technology used by people who are blind or visually impaired shall be provided.
(b) At least one mode of operation and information retrieval that does not require visual acuity greater than 20/70 shall be provided in audio and enlarged print output working together or independently, or support for assistive technology used by people who are visually impaired shall be provided.
(c) At least one mode of operation and information retrieval that does not require user hearing shall be provided, or support for assistive technology used by people who are deaf or hard of hearing shall be provided.
(d) Where audio information is important for the use of a product, at least one mode of operation and information retrieval shall be provided in an enhanced auditory fashion, or support for assistive hearing devices shall be provided.
(e) At least one mode of operation and information retrieval that does not require user speech shall be provided, or support for assistive technology used by people with disabilities shall be provided.
(f) At least one mode of operation and information retrieval that does not require fine motor control or simultaneous actions and that is operable with limited reach and strength shall be provided.
The purpose of a design walkthrough is to find potential usability problems by envisioning the user's route through an early concept or prototype. Typically, a person acts as a representative user while a design team member guides her through actual tasks with early prototypes. Sometimes another team member plays the computer or device, changing paper mockups of windows, drop-down menus, pop-up dialog boxes, and other interface elements. Ways to incorporate accessibility into design walkthroughs include:
- Focus on specific accessibility issues during regular walkthroughs
- Conduct walkthroughs specifically for accessibility
An example of focusing on specific accessibility issues during regular software walkthroughs is device-independent interaction. The design team listens for the acting user to say, "I would click on this," indicating an action that is completed with a mouse. The team then checks that all actions triggered with a mouse are also available through the keyboard for people who don't use pointing devices. Another example of a specific accessibility issue to evaluate during design walkthroughs is use of sound. When walking through use of a consumer product, the design team listens for the team member playing the device to indicate feedback or interaction provided via sound.
To conduct walkthroughs specifically for accessibility, use personas with disabilities and scenarios that include adaptive strategies to complete the task, as discussed previously in the "Accessibility in Personas" and "Accessibility in Scenarios" sections. For example, the acting user would be blind and another design team member would play the role of the screen reader. For design walkthroughs of high-fidelity prototypes you can also use Screening Techniques, which are introduced next.
Screening Techniques are simple, inexpensive activities to help identify potential accessibility barriers in product designs. Design teams use screening techniques to learn about accessibility issues and to evaluate prototypes or existing products. Screening techniques save time and money by finding barriers early, when it is less expensive to make changes to the product, and by focusing later usability testing with people with disabilities. The Screening Techniques section covers screening techniques in more detail.
Usability testing provides quantitative and qualitative data from real users performing real tasks with a product. Usability professionals can evaluate some aspects of accessibility by using standard usability testing protocols, with a few modifications for including participants with disabilities.
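Tying back to the earlier point about what automated evaluation tools can and cannot do: the sketch below is a minimal, hypothetical Python check that flags img elements with missing or empty alt attributes. The HTML snippet and file names are invented for illustration, and judging whether any existing alt text is actually equivalent to the image would still require a human evaluator; a real evaluation tool would do considerably more.

```python
# Illustrative sketch of one automated accessibility check: flag <img> elements
# whose alt attribute is missing or empty. A tool can do this reliably, but it
# cannot tell whether non-empty alt text is truly equivalent to the image.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):  # missing or empty alt text
                self.flagged.append(attr_map.get("src", "unknown source"))

html = '<img src="logo.png" alt="Company logo"><img src="chart.png">'
checker = MissingAltChecker()
checker.feed(html)
print("Images missing alt text:", checker.flagged)  # ['chart.png']
```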
While usability testing is useful for learning how people use your products and assessing the usability of accessibility solutions, it does not evaluate conformance to accessibility standards. The following sections discuss usability testing with participants with disabilities:
- Usability Testing
References:
- Henry, S.L. Another-ability: Accessibility Primer for Usability Specialists. Proceedings of UPA 2002 (Usability Professionals' Association annual conference), 2002.
- Henry, S.L. Web Accessibility Evaluation Tools Need People, 2003.
- Nielsen, J. and Mack, R., eds. Usability Inspection Methods. New York: John Wiley & Sons, 1994.
- Architectural and Transportation Barriers Compliance Board. Electronic and Information Technology Accessibility Standards. U.S. Federal Register, 2000.
- Rubin, J. Handbook of Usability Testing. New York: John Wiley & Sons, 1994.
<urn:uuid:62899d68-6563-4bb8-b8e6-f203b853d832>
CC-MAIN-2014-10
http://www.uiaccess.com/accessucd/evaluate.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021949508/warc/CC-MAIN-20140305121909-00095-ip-10-183-142-35.ec2.internal.warc.gz
en
0.910194
2,056
2.921875
3
How Do Batteries Work?
Eddy Giang, Scott Segawa
What is a Battery?
Battery: In science and technology, a battery refers to an apparatus that stores chemical energy and releases it in an electric form. Batteries consist of an anode, a cathode, and an electrolyte. There are two classifications of batteries: primary and secondary. Primary batteries irreversibly transform chemicals into electricity, while secondary batteries can reverse this process, restoring their original state. The worldwide battery industry generated about $48 billion in 2005.
Alkaline Battery
Chemical reaction: Alkaline batteries use zinc and manganese dioxide. The reaction is electrochemical, since it's a battery.
Alkaline Continued
The reaction at either electrode won't proceed without the two being connected, so these batteries have a longer shelf life than carbon-zinc ones. Alkaline batteries have higher energy density than carbon-zinc batteries. They have lower energy density and shelf life than silver oxide batteries. Recycling these batteries is best, since they contain some chemicals that are harmful to the environment and to us.
Daniell Cell
Daniell cells consist of a zinc anode in a porous pot filled with zinc sulfate, which sits inside a solution of copper sulfate with a copper cathode in it. The porous pot prevents the copper ions from the copper sulfate from reaching the zinc anode, where they would react directly without producing a current.
Daniell Cell Continued
Chemical reaction: Anode: Zn(s) → Zn2+(aq) + 2e-; Cathode: Cu2+(aq) + 2e- → Cu(s). The anions collect at the anode (the zinc) and the cations at the cathode (the copper). The cell can't operate without both half-reactions, so they must be connected somehow, usually through salt bridges or porous pots. These allow the separation of the solutions but still let the ions flow freely.
Rechargeable Batteries
They function just like regular batteries, except that when you charge them, you are reversing the reaction by driving current through the cell in the opposite direction.
Rechargeable Batteries Continued
They come in different chemical varieties, such as alkaline, lithium, and zinc. Note of caution: do not charge non-rechargeable batteries. They may explode due to the buildup of hydrogen gas from the reversed electrochemical reaction and the buildup of pressure in the battery.
Taking Care of Batteries
Read the instructions on the device before installing the batteries. Make sure batteries are inserted properly. Keep battery contact surfaces clean by rubbing them with a pencil eraser or a cloth. Install only the correctly sized batteries suggested by the manufacturer. Remove batteries from appliances that won't be used for an extended period of time, or that are being powered by a household current. Store batteries in a cool, dry place. Do not dispose of them in fire. Don't mix new and old batteries, which can lead to rupture or leakage.
Lithium Battery
Lithium battery is actually a term for a wide range of batteries with various cathodes and electrolytes. There are many types, but the most common is lithium manganese dioxide, where the manganese dioxide is the cathode and the lithium is the anode. The electrolyte is lithium perchlorate in propylene carbonate and dimethoxyethane. About 80% of all lithium batteries sold are lithium manganese dioxide.
In rechargeable lithium batteries, the cathode is a metal oxide, the anode is carbon, and the electrolyte is a lithium salt. These are used mostly in laptops, cell phones, and portable rechargeable devices.
Proper Disposal
Batteries contain many chemicals that are harmful to the environment, like mercury and lead, so they should be disposed of properly. Batteries pollute streams when they are burned, and the metals are vaporized into the air. Heavy metals may slowly leach into the soil in landfills, reaching groundwater or surface water. Even if you recycle, some stores that take back used batteries claim that they still end up in the trash, so prevention is the best bet. Prevention can start with not buying excess batteries, buying appliances that function without batteries, looking for batteries with lower levels of hazardous metals, and considering rechargeable batteries.
Conclusion
Batteries are used almost every day in our lives; cars, music devices, remote controls, hearing aids, calculators, and much more contain batteries. Some batteries use quite a few substances that are harmful to both us and the environment. To prevent this, we need to find ways to reduce waste, like rechargeable batteries and recycling. Our experiment included a lot of materials and was odd to set up. Since we only generated a minuscule amount of electricity, it makes us think very highly of those who invent and create such high-powered electric generators! Technology is advancing at a fast rate, and the demand for batteries is high. Research is definitely going to help advance energy capacity and lower waste production. Batteries are definitely in our future and will be able to help with our "energy crisis".
<urn:uuid:22230e8f-e508-48bb-9819-7bcf4dea2da2>
CC-MAIN-2015-35
http://www.authorstream.com/Presentation/aSGuest1836-99910-batteries-work-alkaline-battery-30533413-others-misc-ppt-powerpoint/
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644066266.26/warc/CC-MAIN-20150827025426-00244-ip-10-171-96-226.ec2.internal.warc.gz
en
0.919268
1,133
3.546875
4
Outline Processor Markup Language (OPML) Definition - What does Outline Processor Markup Language (OPML) mean? Outline Processor Markup Language (OPML) is an open-source XML format for creating text outlines. OPML is platform-independent, can handle many types of data and may be customized for each application created. It is particularly suited to creating applications where relationships and data must be updated continually. The format is human-readable, self-documenting and extensible. Some OPML files contain data specifying the size, position and expansion capabilities of the windows in which the text outlines are displayed. OPML can be quickly understood and applied, much like HTML. Because it is based on XML, OPML can be adapted to business, scientific or academic projects. Techopedia explains Outline Processor Markup Language (OPML) Outline Processor Markup Language has evolved into a format used for exchanging subscription lists between RSS aggregators and RSS feed readers. Users can track their own RSS feeds as well as observe who is subscribing, where they are from and the feeds they have chosen. OPML also has some shortcomings: - The date format only allows two-digit years and the format does not conform to RFC 3339. - The expansion state of some windows cannot be stored. - When a window is altered or deleted, the windows below it must be recalculated. - The arbitrary nature of the type attribute, and the use of arbitrary attributes on outline elements causes the interoperability of the documents produced to be almost completely dependent on the conventions of the content producers, which may be neither standard nor documented. - There are problems with identifying created documents as XML format.
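To make the outline structure concrete, here is a minimal sketch that reads a small OPML subscription list with Python's standard library; the OPML snippet, titles and feed URLs are invented for illustration.

```python
# Minimal sketch of reading an OPML subscription list. Each outline element
# carries its data as attributes such as text, type and xmlUrl.
import xml.etree.ElementTree as ET

opml = """<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head><title>Example subscriptions</title></head>
  <body>
    <outline text="Example News" type="rss" xmlUrl="http://example.com/feed.xml"/>
    <outline text="Example Blog" type="rss" xmlUrl="http://example.org/rss"/>
  </body>
</opml>"""

root = ET.fromstring(opml)
for outline in root.iter("outline"):
    print(outline.get("text"), "->", outline.get("xmlUrl"))
```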
<urn:uuid:b87bb42b-6340-4065-9bae-00f5f1580f67>
CC-MAIN-2016-40
https://www.techopedia.com/definition/2449/outline-processor-markup-language-opml
s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660992.15/warc/CC-MAIN-20160924173740-00217-ip-10-143-35-109.ec2.internal.warc.gz
en
0.878958
400
3.015625
3
The Southern California Environmental Health Sciences Center received a five-year, $8 million grant from the National Institutes of Environmental Health Sciences as it celebrates 25 years of research dedicated to reducing diseases and disabilities caused by environmental exposures. The center, among the longest-running NIH-funded institutions at USC, is led by Rob McConnell, MD, an environmental epidemiologist and professor of population and public health sciences at the Keck School of Medicine of USC and of spatial sciences at the USC Spatial Sciences Institute. “Environmental health research is a priority area for the Keck School and USC,” said Steven D. Shapiro, MD, senior vice president for health affairs at USC and interim dean of the Keck School of Medicine of USC. “This grant will help advance research that makes a tremendous impact on population and public health locally and globally.” Founded in 1996, the center evolved to support multidisciplinary research partnerships; promote population, clinical and bench research with pilot funding; develop the next generation of environmental health science leaders; and engage and support communities seeking environmental justice, policymakers and other stakeholders with the best science available. “This is a great achievement and the continuation of a tremendous asset for the department,” said Howard Hu, MD, MPH, ScD, professor and Flora L. Thornton Chair of the Department of Population and Public Health Sciences at the Keck School of Medicine. “It is a true testament to Rob McConnell’s skills as a scientist, administrator and strategist.” The center has more than 70 members across Southern California conducting research on a wide range of environmental exposures on autism, brain development and brain aging, obesity, diabetes, liver and heart disease, and cancer. “I’m proud to build on the stellar record of directors that have helped make USC one of the leading environmental health science institutions in the U.S.,” McConnell said. “I’m really excited about new research opportunities to understand the health effects of a broad array of exposures ranging from toxic environmental chemicals to climate change, and to develop a scientific foundation leading to prevention of these health effects.” Here’s a look at a few of the center’s highlights: The Children’s Health Study, launched in 1993, has grown to become one of the largest and most detailed studies of the long-term effects of air pollution on the respiratory health of children. More than 12,000 school children have been involved. Its findings have led to changes in state and federal guidelines to improve air quality standards and urban planning decisions. The MADRES Center for Environmental Health Disparities formed in 2015 to examine the effects of chemical pollutants on pregnant women and their infants. In addition to conducting research, the center works to improve environmental literacy among parents and youth as well as share research findings with surrounding communities that experience health disparities. Pilot Projects Program: The center provides junior faculty and postdoctoral students with starter funds for small-scale research projects to generate preliminary data in aid of securing additional funding. Between 2011 and 2020, the center awarded $1.4 million to 47 proposals; 72% of awardees were junior investigators. The return in new grant awards was almost $30 for every $1 invested in pilot projects. 
Community Engagement: The center has established long-term working partnerships with community-based organizations across Southern California to build capacity of residents to leverage research and translate research into action. For example, investigators work with Pacoima Beautiful, a grassroots environmental justice organization, to monitor neighborhood air quality and increase understanding of the health effects of air pollution and other toxic exposures. “Community engagement for our investigators is a two-way street,” McConnell said. “We provide scientific results that communities can use to develop policies to reduce exposure to near-roadway air pollution, lead, urban oil drilling and other toxic exposures. Communities alert us to problems that lead to new research.” # # # CONTACT: Leigh Hopper, [email protected]
<urn:uuid:51a64cb5-7b6a-4a27-b56d-e23e5125f5c7>
CC-MAIN-2021-43
https://www.eurekalert.org/news-releases/930680
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587877.85/warc/CC-MAIN-20211026103840-20211026133840-00719.warc.gz
en
0.933614
829
2.53125
3
Healthy Forests for our Future guides landowners and foresters to choose climate-smart forest management practices Details management practices that increase carbon stocks, and how to pay for them A new guide released by The Nature Conservancy and the Northern Institute of Applied Climate Science describes 10 forest management practices that can increase carbon stocks within 20 years (and usually sooner) in hardwood forests in New England and New York. “Healthy Forests for Our Future: A Management Guide to Increase Carbon Storage in Northeast Forests” was developed to aid landowners and foresters in making decisions and introduce them to programs that can help cover the costs of "climate smart" forest management. “There is broad recognition of the power of forests to help us prevent and prepare for climate change. Forest management is one of a suite of natural climate solutions: actions to protect, restore, and better manage our forests, farms, wetlands, and grasslands to reduce and remove carbon emissions,” said Jim Shallow, director of strategic conservation initiatives for The Nature Conservancy in Vermont. In 2018 a study on the potential of Natural Climate Solutions in the United States estimated that these actions could remove 20 percent of U.S. greenhouse gas emissions, greater than the combined carbon emissions from all cars and trucks on the road in the United States. For landowners and the professionals who manage forests, it is not always clear what management choices balance the need to increase carbon stocks (store more carbon pollution pulled from the air in trees and soils) with the other forest values we depend on, including their significance in protecting clean drinking water, providing habitat for wildlife, and producing wood products. “Our forests play an outsized role in capturing and storing carbon and buffering against broad impacts of climate change. Adaptive approaches to how forests are managed can help the forest itself be more resilient to the impacts of climate change. This new guide is a great resource for landowners and forest managers who are looking for practices they can implement to both safeguard their forests and secure the carbon benefits they provide,” said Michael Snyder, commissioner of Vermont Forests, Parks, and Recreation. The guide was developed using the best available science and extensive input from stakeholders—including foresters, landowners, loggers, scientists, state agencies, and conservation organizations—to narrow a broad set of practices to this short list of “climate-smart” forest management choices. The guide groups these practices into four categories: protect forests, grow new trees and forests, reduce stressors, and manage forests. For each practice, the guide provides a practice description, information about practice considerations, expected benefits, and—though the practices were chosen independently of whether each practice was economically viable, or whether a funding source was available—potential funding opportunities. Family forest landowners in Vermont, Massachusetts, and New York who adopt three of the guide's climate-smart forest practices will soon be eligible for support under the Family Forest Carbon Program, a program being brought to New England by the American Forest Foundation and The Nature Conservancy in 2022. A grant program has enabled a few landowners to pilot these practices on the ground in Massachusetts and Vermont. “…this is about more than simplistic solutions like "just leaving forests alone". 
There are many ways that we humans can improve our interactions with forests while still being able to utilize the forest products that we need. In my project, TNC's program has allowed the landowner to invest in the future of what will grow in her forest by protecting tree regeneration from the stresses of deer over-browsing and invasive plant infestations,” said Lincoln Fish, one of the private consulting foresters participating in the pilot program. “A forest that is well taken care of will give back many times over in the long run.” With the release of the “Healthy Forests for our Future Guide,” forest landowners and managers have a new tool as they make decisions that will determine the future of our forests. The Nature Conservancy is a global conservation organization dedicated to conserving the lands and waters on which all life depends. Guided by science, we create innovative, on-the-ground solutions to our world’s toughest challenges so that nature and people can thrive together. We are tackling climate change, conserving lands, waters and oceans at an unprecedented scale, providing food and water sustainably and helping make cities more sustainable. Working in 76 countries and territories—37 by direct conservation impact and 39 through partners—we use a collaborative approach that engages local communities, governments, the private sector, and other partners. To learn more, visit www.nature.org or follow @nature_press on Twitter.
<urn:uuid:9580fa50-6c3e-4671-ae63-6356999923b9>
CC-MAIN-2022-40
https://www.nature.org/en-us/newsroom/healthy-forest-guides-future-foresters-landowners-vt/
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337906.7/warc/CC-MAIN-20221007014029-20221007044029-00158.warc.gz
en
0.931349
956
2.875
3
At-Vessel Fishing Mortality for Six Species of Sharks Caught in the Northwest Atlantic and Gulf of Mexico From 1994 to 2005, the Commercial Shark Fishery Observer Program (CSFOP) placed fishery observers aboard US bottom longline vessels engaged in directed fishing for sharks in the region from New Jersey to Louisiana, USA. Observers routinely recorded species-specific at-vessel mortality related to enduring the stress of longline capture. Data for 6 species of sharks (sandbar Carcharhinus plumbeus, blacktip Carcharhinus limbatus, dusky Carcharhinus obscurus, tiger Galeocerdo cuvier, scalloped hammerhead Sphyrna lewini, and great hammerhead Sphyrna mokarran) were analyzed in this study. Multiple stepwise linear regressions indicate that age group, soak time and bottom water temperature can be used as predictors of at-vessel mortality and that size restrictions, size-selective gear, restricting the soak time and time/area closures may be beneficial to fisheries targeting large coastal sharks. Morgan, A. and G. H. Burgess. At-Vessel Fishing Mortality for Six Species of Sharks Caught in the Northwest Atlantic and Gulf of Mexico. Gulf and Caribbean Research. Retrieved from https://aquila.usm.edu/gcr/vol19/iss2/15
<urn:uuid:c0cf206d-93c0-4d20-9043-00bcc9fd20e3>
CC-MAIN-2019-30
https://aquila.usm.edu/gcr/vol19/iss2/15/
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525863.49/warc/CC-MAIN-20190718231656-20190719013656-00543.warc.gz
en
0.801572
290
2.875
3
The terrorist attacks of September 11th, 2001 had a profound impact on this nation's government, security and people. The incensed citizens of this country demanded action, and as a result several different laws were passed. The centerpiece of these laws was the USA PATRIOT Act. The Patriot Act infringes on the Constitution by lifting restraints on governmental interference in its citizens' privacy and should never have been signed into law. 9/11 led to the passing of laws that violate the civil liberties of the people in this country. When discussing the Patriot Act, it is best to explain how it was signed into law. The Patriot Act was enacted as a response to the attacks of 9/11. al-Qaeda was the terrorist group responsible for the attacks, which were led by an exceedingly wealthy man named Osama bin Laden. By the time he was an adult, the fortune inherited from his father's construction company (The Saudi Bin Laden Group) was nearly $250 million (Frank 110). al-Qaeda is distinguishable from most other terrorist groups because its members believe their mission is God's will. The main target on 9/11 was the World Trade Center's Twin Towers. Each tower consisted of 110 floors, including basements and underground parking garages. The North Tower was 1,722 feet tall, while the South Tower was 1,368 feet tall. The towers were built using 200,000 tons of steel, 425,000 cubic yards of concrete, and had a total of 43,600 windows (Frank 8). The terrorists hijacked four planes from northeastern US airports, two of which were crashed into the towers (one plane per tower). They used small knives and box cutters as weapons to take over the planes by wounding, or even killing, the pilots and passengers (Frank 5-6). The severe impact of the crashes caused uncontrollable fires to burn inside the buildings. At the impact zone, the fires reached a temperature of 2,000 degrees Celsius, which resulted in the melting of the steel columns that supported the buildings. The fires caused the floors in the towers to sag, pulling in on the main support beams. When the beams could bend no further, structural failure occurred in the buildings, which directly led to their collapse. The two remaining planes hijacked that day were both diverted towards Washington DC. One plane crashed into the Pentagon, causing a portion of the building to collapse. The final plane crashed into a field near Shanksville, Pennsylvania after the passengers banded together and attempted to retake control of the plane. United Airlines flight 93 (the one that crashed in Pennsylvania) was intended to crash into the Capitol Building (Frank 8). There were no survivors from any of the four planes. A total of 2,996 people (including the 19 hijackers) died in the attacks (Frank 12). The primary reason behind the attacks was US support of Israel in Middle Eastern disputes and conflicts. This incited hatred of Americans in parts of the Arab world, especially within militant factions containing members of the former Mujahideen rebel group (Frank 73). An attack of such a massive scale had far-reaching effects. The stock market never opened that day and remained closed until September 17th, 2001. When the market did reopen, it dropped over 680 points. By the close of its first week back, the market had lost over 1,360 points (Zanders).
The destruction of physical assets was estimated in the national accounts to amount to $14 billion for private businesses, $1.5 billion for state and local government enterprises and $10.7 billion for the federal government. Rescue, cleanup and related costs have been estimated to amount to at least $11 billion. Lower Manhattan lost approximately 30% of its office space, scores of businesses disappeared, and close to 200,000 jobs were destroyed or relocated out of New York City (Jackson). Airlines have lost a total of $55 billion since 2001, losses in revenue was made up…
KEYWORDS automation, human service administration, infrastructure, interoperability Interoperability is a term that is not commonly understood by most human service professionals, although most understand the trends… unidentified flying objects flew from behind Arnold and passed him. The aircrafts did not have tails and did not leave any sort of trail behind them. Arnold compared the movement of the objects to “saucers skipping on water”, and here is where the term “flying saucer” came to be. Arnold was truly astounded by the movement of the saucers as he witnessed them fly through valleys and close to the edges of the mountain. He suspected them to be new military aircrafts being tested, but was not convinced… Carlo Olkeriil, Graduated School of Education Department, Portland State University This Research Paper was written in part of the Grading and Course Requirements of the COUN 507: Addiction Pharmacology taught by Terry Alan Forrest, LPC, LMFT DOES A TWO-DRUG COMBINATION HAVE THE POTENTIAL TO FIGHT COCAINE ADDICTION? This paper explores a published article and a book that detail the results from a recent study showing that a fine-tuned… measured by examination. Evaluation and Grading: A weighted course average will be calculated using the following weights for assignments: Highest score on Exams 1-3 25% Second highest score on Exams 1-3 25% Term Paper 15% Final Exam 25% Grades will be earned according to the following scheme: Below 60 F Exams: The exams will allow the student to… Phil 263: The Idea of God December 5, 2014 Spiritual Dreams & Christianity "Hear now my words: If there be a prophet among you, I the Lord will make myself known unto him in a vision, and will speak unto him in a dream" Religion was the first field of dream analysis. According to Wendy Doniger and Kelly Bulkley, “the earliest writings we have on dreams are primarily texts on their religious and spiritual significance.” Since the very beginning…
<urn:uuid:b9fcee84-8893-478b-987e-b712b8dedc54>
CC-MAIN-2020-16
https://www.majortests.com/essay/Term-Paper-585636.html
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370507738.45/warc/CC-MAIN-20200402173940-20200402203940-00367.warc.gz
en
0.960257
1,844
3.359375
3
Some industry observers have predicted that solid-state lighting technology (SSL) will satisfy most lighting applications by the end of the decade. One particular SSL technology on the cusp of commercialization has the potential to be transformational: the organic light-emitting diode (OLED). Imagine a light source that is manufactured on rolls, can be cut into flexible flat sheets in the factory or the field, and can be installed in almost any shape on almost any room surface. It is easy to control and install, is lightweight, and contains no hazardous substances. It is lighting that is completely integrated into furniture, window coverings and building materials—for instance, picture glass walls that become luminous at the flick of a switch; curved sheets that emit different colors from each side; wallpaper, room dividers, curtains and even clothing that doubles as illumination; ceilings that glow with color; or windows that are transparent during the day and luminous at night. OLEDs are already used in cell phones and small video display color applications. For general illumination, the technology is still in its infancy. GE, Osram Sylvania, Philips and others—in addition to the U.S. Department of Energy (DOE)—are all investing in OLEDs. Several prototype and commercial products already exist, giving us a glimpse of their potential. Eastman Kodak’s Dr. Ching Tang invented OLED technology in the late 1970s. He found that by depositing materials into thin films and passing electrical current through the resulting construct, carbon-containing (organic) compounds produce light. The basic construction consists of a stack of organic thin films sandwiched between two current-delivering electrodes, which is typically enclosed between two layers of plastic or glass. Because OLEDs produce light by changing the electrical state of a chemical solid, they are solid-state sources, like LEDs. The stack has a diameter many times less than that of a human hair, yet its area can be very large, making OLEDs diffuse area light sources. Typical OLED illumination devices may be configured as pixels, panels or complete fixtures. Panels are OLEDs with an area of at least 80 square centimeters, and they produce light output rated in lumens per square meter. These panels may be larger or joined into assemblies to create a larger luminous area. The assemblies, in turn, are connected to the OLED driver (which converts the line voltage to the voltage and current needed to start and operate the device) and to any electronic controls that enable dimming and other effects. Along with any housings and optics, the complete system, ready to be connected to the electrical supply and enter service, is called an OLED luminaire or lighting fixture. One of the big differences between the OLED and its LED cousin is how light is emitted. LEDs are point sources, ideal for efficiently delivering a focused beam of light. The OLED, in contrast, is a perfectly diffuse area light source. This particular defining characteristic of the OLED is potentially highly transformational to lighting design. Typically, light sources are so bright that they cannot be viewed directly for long periods of time without producing a sensation of glare. Lighting fixtures are designed to house distinct light sources and auxiliary components, such as ballasts, and distribute the light in a controlled pattern without glare. With the OLED, the low-brightness light source can be viewed directly for a prolonged period without glare. 
Optics may be external or possibly even built directly into the light source to direct the light emission in a desired pattern. According to Peter Ngai, vice president, research and development for Acuity Brands Lighting, today’s best OLED devices can produce about 6,000 lumens per square meter—about 120 lumens of output for a 200-square-centimeter OLED panel—with an efficacy of about 30 lumens per watt. Significant progress is already being made. Just two years ago, in 2009, the DOE assessed the earliest introductions and estimated performance at 23 lumens per watt and a 5,000-hour service life. The DOE further estimated the cost of an OLED panel at about $25,000 per kilolumen in 2009 compared to $4 for a typical fluorescent T8 system and $128 for a typical LED lamp, making it relatively expensive at the time. The earliest prototypes have been mostly decorative, intended to demonstrate the technology’s unique characteristics. Ngai pointed out that a reasonable threshold that signals an effective degree of competitiveness with conventional sources is light output of 6,000 lumens per square meter, efficacy of 60 lumens per watt and rated life, based on useful light output, of 15,000 hours. All at a competitive cost, of course. To increase performance and reduce cost, several technical hurdles must be crossed. The biggest opportunity is to increase the amount of light emitted by the device for the given surface area. One simple method is to increase the driver current, but there is a tradeoff in shorter device life; for example, increasing the light emission from 3,000 lumens per square meter to 10,000 lumens per square meter solely by raising the drive current would reduce life by 80 percent, according to the DOE. It is better to improve the device’s efficiency in extracting light. A significant amount of light produced by OLED panels actually remains trapped inside the substrate. If research and development can unlock this light output using a method that is reproducible at a reasonable cost to manufacture, not only would light output and associated efficacy increase, but the cost per kilolumen would significantly decrease. That decrease, coupled with economies achieved through high-volume manufacturing, could put OLED lighting within reach of mainstream use. Other technical hurdles, the DOE points out, include finding good materials from which to realize white light and methods for rendering sensitive OLED materials that are resistant to oxygen, moisture and pollutants in the operating environment, which reduce service life. The long-term goal for service life, according to the DOE, is an L70 rating (point in time at which the light source is producing 70 percent of its initial lighting output, representing a lumen depreciation of 30 percent) of 50,000 hours for general lighting. The DOE and the manufacturing community are now investing in research and development to solve these problems. In terms of development, the OLED is about where the LED was several years ago, Ngai believes, but is progressing more rapidly and will benefit from standards and research developed to accommodate LEDs. The DOE predicts that, by 2015, the OLED will achieve light emission of 10,000 lumens per square meter, or 200 lumens for a 200-square-centimeter panel. The DOE further predicts the OLED will achieve an efficacy greater than 100 lumens per watt and a cost of about $8–9 per kilolumen by that time. The quality of light and service life will likely continue to improve as well. 
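As a rough back-of-the-envelope check of the figures quoted above, the short sketch below converts Ngai's numbers (6,000 lumens per square meter, a 200-square-centimeter panel, 30 lumens per watt) into panel light output and power draw; the calculation is only illustrative and ignores driver and optical losses.

```python
# Back-of-the-envelope sketch using the figures quoted in the article.
panel_area_cm2 = 200          # panel area
emission_lm_per_m2 = 6000     # light emission per square meter
efficacy_lm_per_w = 30        # luminous efficacy

area_m2 = panel_area_cm2 / 10_000            # convert cm^2 to m^2
light_output_lm = emission_lm_per_m2 * area_m2
power_w = light_output_lm / efficacy_lm_per_w

print(f"Light output: {light_output_lm:.0f} lm")  # about 120 lm
print(f"Power draw:   {power_w:.1f} W")           # about 4 W
```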
Therefore, within five years, we may see a number of OLED products that are commercially competitive for mainstream applications. OLED products are unlikely to prove competitive with basic forms such as troffers and downlights in the near future. Naomi Miller, senior lighting engineer for the Pacific Northwest National Laboratory, a DOE contractor who is engaged in the DOE’s SSL program, said current applications are focused more on demonstration than practical use. The first white light applications will likely be specialty and high-end applications where the OLED’s unique aesthetics and capabilities are most desirable. Early applications include spaces that require small amounts of light from a thin luminous surface, such as step lights and marker lighting, undercabinet lighting and decorative panels and components of conventional fixtures. Note that these OLED devices may require external optics and integration within a fixture housing, which may present light emission losses and also the same current droop and thermal sensitivity problems that affect today’s LED fixtures. Regarding control, OLED devices require drivers just like LED devices. The driver functions similarly to a conventional ballast, converting line voltage to the proper voltage to start the device (usually low voltage) and then regulating current flowing through the device during operation. As with LEDs, OLEDs are instant-on, and service life is not negatively affected by frequent switching, making them well suited to automatic shutoff devices, such as occupancy sensors. The driver may be dimmable, enabling the light emission to be controlled automatically or manually by users. And as a solid-state light source, OLED drivers can be digitally controlled, enabling precise control and generation of information through integral sensors that can be fed back to a central operating station. Ngai pointed out that it is unlikely that OLED products will become competitive with other sources in applications requiring intense, focused illumination, such as accent lighting, high-bay lighting and roadway and outdoor area lighting. As a result of the intense point source characteristics of the LED and the diffuse area source characteristics of the OLED, it is likely that these sources will be specified to work side-by-side in many applications, Miller said. In a high-end retail stores, for example, OLED lighting may provide whimsical decorative elements and luminous display shelving and surfaces, while aimable LED fixtures may be used to punch key merchandise with strong focused illumination, and either or both may provide general lighting. The form factor of the OLED supports physical installation similar to typical commercial recessed, surface-mounted and suspended fixtures. The technology does allow for more lightweight designs, however, which may result in changes in these fixtures’ support and mounting structures. As most OLED panels are low voltage, there may be opportunities to reduce line-voltage wiring and conduit at the room level with plug-and-play wiring structures that deliver power and communications. Overall, installation is likely to be simplified with OLED lighting compared to today’s conventional equipment. Similarly, as with LED devices, maintenance requirements are likely to change as the source does not “fail to off” on a predictable mortality curve like fluorescent, but rather will simply decline in lighting output until it is no longer useful to continue operating. 
While this will simplify lamp replacement—by largely eliminating the need for ongoing spot replacement with a long mean time between failures—the owner will need to know when the device’s light output falls below the L70 rating. This may require building maintenance personnel to periodically test light levels, or install some form of automatic feedback mechanism that signals maintenance personnel to replace the fixtures. By 2020, Ngai said, OLEDs are expected to achieve efficacies approaching 200 lumens per watt at a cost per lumen of several cents. This is an ambitious goal for the next decade but not overly so when one considers that LED light output has increased by 20 times each decade for the past 40 years while the cost per lumen has decreased by 10 times during the same period. By that point, Ngai said, advanced OLED products will have become widely available, including transparent OLEDs that can be applied to windows—enabling the window to convert from a daylight aperture to an electric light source at night—and flat and flexible luminous sheets that can be custom-cut into a wide variety of configurations and then simply attached to a surface. “The unique characteristics of OLEDs are unlike any other current light source,” Ngai said. “OLEDs offer a new platform upon which new approaches to lighting design will be conceived—from integration of lighting and architecture to new lighting application design philosophies to visionary luminaire designs that are functional, aesthetically pleasing and emotionally compelling.” He added that the future of emerging technologies often arrives sooner than we anticipate. Stay tuned as OLEDs continue to commercialize into products and solutions that may have a transformational impact on how lighting is designed, installed and used. DILOUIE, L.C., a lighting industry journalist, analyst and marketing consultant, is principal of ZING Communications. He can be reached at www.zinginc.com.
Types of Energy

Thermal energy is energy that comes from heat. This heat is generated by the movement of tiny particles within an object. For example, the sun's thermal energy heats our atmosphere.

Electromagnetic energy is a term used to describe all the different kinds of energy released into space by stars such as the Sun. These include some kinds you will recognize and some that will sound strange: for example, radio waves, microwaves, X-rays and infrared radiation.

Electric potential energy, or electrostatic potential energy, is a potential energy measured in joules that results from conservative Coulomb forces and is associated with the configuration of a particular set of point charges within a defined system. For example, two charged particles held near each other store electric potential energy in their arrangement. (An airplane in flight, by contrast, has a large amount of kinetic energy, the energy of motion, due to its large mass and high velocity; that is a different type of energy.)
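For reference, the standard textbook formulas behind these two examples can be written as follows; the symbols are the conventional ones and are not taken from the original text.

```latex
% Electric (electrostatic) potential energy of two point charges q_1 and q_2
% separated by a distance r, where k is Coulomb's constant
% (approximately 8.99 x 10^9 N m^2 / C^2):
U_E = k \,\frac{q_1 q_2}{r}

% For comparison, the kinetic energy of a moving object of mass m and speed v
% (the airplane example above):
E_k = \tfrac{1}{2} m v^2
```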
There are two types of fissures, namely horizontal (discussed in this article) and vertical. Both types cause severe lameness and have their own risk factors and causes. In parts 1 and 2 of this series I explained the factors that influence overgrown feet and how to identify the origin of the problem. I also touched on genetic disorders and how it re-manifests after a trim. This brings us to a frequently asked question: Is it ethical to trim feet? In part 1 of this series, I explained why we often see cattle with long claws and I touched on the principles that must be kept in mind whenever the subject of feet comes up. There have been many myths and different opinions in the past regarding the subject of the bovine hoof. Part of the reason why it is such a frequently discussed topic, is that it is so difficult to work on the feet of cattle. Unlike horses, cattle cannot be trained to lift one foot while standing on the other three legs. In fact, it is extremely difficult for cattle to stand on three legs, which is why specialised equipment is needed.
All About Epinephrine: A Life-Saver for Celiac School Kids Patients, caregivers and schools can turn to various resources to increase confidence in using auto-injectors and managing anaphylaxis. Goldenberg worked with an allergist to create an online training course, available at epipentraining.com, using the World Allergy Organization guidelines for managing anaphylaxis. Goldenberg says the course helps people feel comfortable following the protocols for recognizing and treating anaphylaxis, regardless of the brand of auto-injector. Often caregivers wait too long, monitoring symptoms instead of giving epinephrine, she says. “When an emergency happens, panic sets in and you’re not sure what to do. It’s so easy to make mistakes,” Goldenberg says. “I want people to feel extremely well-rehearsed, that they know exactly what to do in an allergic emergency.” The course provides guidance on the stages of allergic rescue, including how to recognize anaphylaxis, how to treat it and what to do after the injection—like having the patient lie down with his or her feet elevated above heart level until emergency help arrives. Pistiner explains that this position helps the blood flow where it should (even if epinephrine is not available). Anaphylaxis makes blood vessels floppy and leaky. If the patient remains upright, the fluid can pool and not go where it’s needed. Keep in mind that trouble breathing or vomiting may necessitate finding an alternative position, such as lying on the side, Pistiner says. Another essential step is to call 911 when giving epinephrine. Even if epinephrine is administered promptly, symptoms can return later and further treatment and care may be necessary. Observation in the emergency department for at least 4 to 6 hours is recommended. It is essential for patients to talk to their healthcare providers to develop a good allergy action plan and receive proper training on how and when to use their auto-injectors, Pistiner says. “Every allergic child should have an emergency care plan created by their healthcare provider specifically for them.” Epinephrine is the drug of choice for anaphylaxis. Antihistamines, such as Benadryl, are not effective tools to stop anaphylaxis, Pistiner cautions. Antihistamines, which treat mild skin symptoms, target only histamine receptors. In contrast, epinephrine stabilizes cell walls of mast cells and basophils (allergy cells) and works on the lungs, heart, blood vessels and gut —the organs affected by system-wide anaphylaxis. To help people understand why epinephrine is so important for the treatment of anaphylaxis, Pistiner compares anaphylaxis to water damage. The longer it goes on, the harder it is to clean up. For example, if your sink faucet is wide open and the water is pouring all over your floor, using antihistamines is like trying to clean up with a mop instead of fixing the faucet. Epinephrine acts like a wrench to turn off the faucet and then it acts like the sump pump and the mop, attacking the entire problem, Pistiner says. “Studies show that delays in treatment with epinephrine are associated with an increase in mortality,” he emphasizes, adding that 10 to 20 percent of anaphylactic reactions do not include the skin. “This is important because if people are waiting to see hives or some other skin rash, they may delay administering epinephrine,” he says. “Families should become very familiar with their emergency care plans so they can promptly recognize the symptoms that require treatment.”
A massive dose of measles vaccine saved the life of a Minnesota woman by wiping out her cancer. Stacy Erholtz from Pequot Lakes, a rural community northwest of Minneapolis, was suffering from myeloma, a cancer of the bone marrow that had spread throughout her body. After almost a decade of treatment she was running out of options. Last June, she enrolled in an experimental procedure at the state’s famed Mayo Clinic. Doctors gave her 100 billion infectious units of the vaccine — or enough to inoculate 10 million people — the clinic reports in a study released Wednesday. The cancer went into complete remission and appears to have been eliminated, Dr. Stephen Russell, who helped develop the procedure and the study’s leader, writes the journal Mayo Clinic Proceedings. “It’s a landmark,” he said in an interview with the Minneapolis Star Journal. “We’ve known for a long time that we can give a virus intravenously and destroy metastatic cancer in mice. Nobody’s shown that you can do that in people before.” The 50-year-old mother was picked for the trial because she had had limited previous exposure to measles and her immune system was very weak, meaning it would not be able to combat the massive onslaught of viral material contained in the therapy. A second patient did not respond so well. The researcher suspects this was because she had a different cancer, with tumours located in her leg muscles. The technique of oncolytic virology — using re-engineered viruses to fight cancer — is not new. It has a history dating back to the 1950s and is used for example as a first line of treatment in some forms of bladder cancer. The viruses work by binding to tumours and using them as hosts to replicate their own genetic material. The cancer cells eventually explode and release the virus. ‘We’ve known for a long time that we can give a virus intravenously and destroy metastatic cancer in mice. Nobody’s shown that you can do that in people before’ The snag is the therapy can only be used once on a patient. Once the vaccine has been delivered, the body’s natural defences — the immune system — will recognize it and attack it before it can destroy the tumours. The clinic is planning a bigger trial, which it hopes to have up and running by September. Other researchers are hoping the technique can be adapted to combat different cancers. It is likely that this method will one day become the standard for treatment of cancers such as myeloma or pancreatic cancer within the next three to four years, predicts Dr. Tanios Bekaii-Saab, a researcher at the James Cancer Hospital and Solove Research Institute in Ohio. As for Ms. Erholtz, she is optimistic her cancer has been beaten. “We don’t let the cancer cloud hang over our house,” she said. “Let’s put it that way or we would have lived in the dark the last 10 years.” National Post news services
Taxonomic re-evaluation of Leptographium lundbergii based on DNA sequence comparisons and morphology The genus Leptographium was described in 1927 and currently includes 48 species, with L. lundbergii as the type species. In recent years, the taxonomic status of L. lundbergii has not been uniformly agreed upon and it has been the topic of considerable debate. The problem was compounded by the absence of a type specimen, and the species was epitypified at a later stage. Unfortunately, the whereabouts of the epitype is now unknown. In 1983, Wingfield & Marasas described L. truncatum, which is morphologically similar to L. lundbergii. Based on DNA comparisons and similarities in their morphology, this fungus was reduced to synonymy with L. lundbergii. The loss of the type specimen as well as variation in the morphology of strains identified as L. lundbergii prompted us to re-examine the taxonomic status of this species. A number of strains from various geographic areas were studied. These include a strain of L. lundbergii deposited at CBS by Melin in 1929 (CBS 352.29) as well as the ex-type strain of L. truncatum. The strains were compared based on morphology and comparison of multiple gene sequences. Three genes or genic regions, ITS2 and part of the 28S gene, partial β-tubulin and partial elongation factor 1-α were compared. Strains currently identified as L. lundbergii, represented a complex of species. Strains initially described as L. truncatum clustered separately from other L. lundbergii strains, could be distinguished morphologically and should be treated as a distinct taxon. L. lundbergii is provided with a new and expanded description based on a neotype designated for it. A third group was also identified as separate from the main L. lundbergii clade and had a distinct Hyalorhinocladiella-type anamorph, described here as H. pinicola sp. nov. © The British Mycological Society.
Constantine XI

Constantine XI (Constantine Palaeologus), d. 1453, last Byzantine emperor (1449–53), brother and successor of John VIII (John Palaeologus, 1390–1448, Byzantine emperor 1425–48, son and successor of Manuel II; when he acceded, the Byzantine Empire had been reduced by the Turks to the city of Constantinople). To secure Western aid against the Turkish assault on what remained of the empire, he proclaimed (1452) the union of the Western and Eastern Churches. No help came, however, and in 1453 Constantine, with some 8,000 Greeks, Venetians, and Genoese, faced 150,000 Turkish besiegers under Sultan Muhammad II (Mehmet II, "Muhammad the Conqueror," 1429–81, Ottoman sultan 1451–81, son and successor of Murad II, considered the true founder of the Ottoman Empire). After almost two months of heroic defense, directed by the emperor, the city and the empire fell. Constantine died fighting with the last of his men.

1404–53, last Byzantine emperor (1448–53): killed when Constantinople was captured by the Turks
Sebastian Wren, Ph.D.

ADDENDUM: September 25, 2005

If you are reading this, chances are you are a habitual reader, meaning you read on average an hour or two a day. As such, I can say with some authority that most of the words you know, you learned through the act of reading. Research has shown that past the 4th grade, the number of words a person knows depends primarily on how much time they spend reading (Hayes & Ahrens, 1988; Nagy & Anderson, 1984; Nagy & Herman, 1987; Stanovich, 1986). In fact, by the time they reach adulthood, people who make a habit of reading have a vocabulary that is about four times the size of those who rarely or never read. This disparity starts early and grows throughout life (see M is for Matthew Effect for more on the widening disparity).

According to Beck and McKeown (1991), 5 to 6 year olds have a working vocabulary of 2,500 to 5,000 words. Whether a child is near the bottom or the top of that range depends upon their literacy skills coming into the first grade (Graves, 1986; White, Graves & Slater, 1990). In other words, by the first grade, the vocabulary of the disadvantaged student is half that of the advantaged student, and over time, that gap widens. The average student learns about 3,000 words per year in the early school years -- that's 8 words per day (Baumann & Kameenui, 1991; Beck & McKeown, 1991; Graves, 1986), but vocabulary growth is considerably worse for disadvantaged students than it is for advantaged students (White, Graves & Slater, 1990).

How important is vocabulary size? Imagine how much harder your life would be if you didn't understand 75% of the words you currently know. How hard would it be to read a passage of text if you didn't know many of the words in the passage? Imagine if reading the front page of the newspaper was like reading this passage of text:

"While hortenting efrades the populace of the vaderbee class, most experts concur that a scrivant rarely endeavors to decry the ambitions and shifferings of the moulant class. Deciding whether to oxant the blatantly maligned Secting party, most moulants will tolerate the subjugation of staits, savats, or tempets only so long as the scrivant pays tribute to the derivan, either through preem or exaltation."

Would you read the newspaper if it was all like that? Would you read anything you didn't have to? Most non-readers have difficulty decoding the individual words, but in addition, even if they can decode them, most non-readers do not understand many of the words in formal text.

Vocabulary development is a lifelong endeavor, but because of the Matthew Effect, over time, some people develop far richer vocabularies than other people. There have been various attempts to measure how many words adults know, and the estimates vary widely. Part of the reason is that it is not clear what it means to "know" a word. Speaking personally, there are some words I am much more familiar with than others.

Consider these words: WHITE, DOG, and HOME
And compare them to these words: CALLIOPE, FOP, and BRACHIAL

I don't know about you, but while I am certain that I "know" the first group of words, I would only say that I recognize and have some limited knowledge of the second group of words. Dale and O'Rourke (1986) described four levels of word knowledge, which they characterized with four statements: 1. I never saw the word before 2. I've heard of it, but I don't know what it means 3. I recognize it in context, and I can tell you what it is related to 4.
I know the word well It is hard to say how many words I know well, much less how many words I'm somewhat or vaguely familiar with. Also, estimating the number of words a person knows depends on what counts as a word. If DRIVE, DRIVER, DRIVES, DRIVEN, and DRIVING all count as separate words, then the estimate would be considerably larger. Carroll, Davies, and Richman (1971) created a database of English words that appear in print by counting the number of occurrences of every string of letters that was separated by a space on each side (they sampled some 5,000,000 words from a variety of published texts). They came up with 86,741 unique "words," but, because a computer did the counting, every unique letter string was counted as a separate word -- DRIVE, DRIVER, DRIVES, DRIVEN, and DRIVING were all counted as separate words. Also, because a computer did the counting, misspelled words were counted (this was 1971 -- before spell-checking), and things that we would not recognize as words were also counted (e.g. "G787" and "FI--"). Toss out the misspelled and nonsense words, and you are closer to 50,000 unique "words." It is reasonable to say, then, that a literate adult knows somewhere around 50,000 words, if you count DRIVE, DRIVER, and DRIVES as separate words. But as I said, literacy and volume of reading is highly correlated with vocabulary size (e.g. Nagy and Anderson, 1984), so an adult that does not read habitually would have a much smaller vocabulary than an adult that reads voluminously. Nagy and Anderson (1984) estimated that an average high school senior knows 45,000 words, but other researchers have estimated that the number is much closer to 17,000 words (D'Anna, Zechmeister, & Hall, 1991) or 5,000 words (Hirsh & Nation, 1992). Surely these dramatically different estimates depend upon the three questions described above, namely, what does it mean to "know" a word, what counts as a "word," and who counts as "average?" What is not at all in doubt, however, is this. People who habitually read from a wide variety of texts have much, much richer vocabularies than people who do not read much. And people who have richer vocabularies find it easier to read challenging texts than people who do not have rich vocabularies. That is the conundrum -- you need a rich vocabulary to read widely, and the best way to develop a rich vocabulary is to read widely. Thus, vocabulary size is both a cause of and a consequence of reading success. So where do you start? Ideally, you start at a young age. Children who are given lifelong support for literacy skills tend to build on their successes and flourish. Children who are still struggling in the second or third grade typically continue to struggle throughout their lives unless dramatic intervention is taken. Ideally, some day, all students will learn to read successfully at a young age, but for now, there are many older struggling readers who do not have very large vocabularies. If you have a student who is not reading well, and who thus does not read habitually, how can you enhance their vocabulary (thus making it easier for them to read habitually)? Is it possible to teach vocabulary directly? Research studies suggest that it is possible to teach vocabulary out of context, but it is not very efficient. 
It is possible to teach children between 300 and 500 words a year (8 to 10 words a week) through explicit context-free instruction, but that pales when compared to the 3,000 words a year that literate children learn throughout their school years (Nagy & Anderson, 1984). And Stahl & Fairbanks (1986) found that directly teaching children dictionary definitions for words did not enhance their comprehension of a passage of text containing those vocabulary words. The definition of the word only provides a superficial understanding of the word, and the level of word knowledge necessary to enhance comprehension is deeper than mere definitions. Unfortunately, research has shown that the best approach to teaching vocabulary is to teach children some strategies for learning the meaning of words in context, and then encourage them to read voluminously and from a wide variety of texts and genres (Kuhn & Stahl, 1998). Teachers can also help students to develop a deeper understanding of words through some direct instruction that involves talking about the definitional and contextual meanings of words, focusing on synonyms and antonyms, providing examples and non-examples, and discussing the subtle nuances and differences that make synonyms somewhat different (e.g. the difference between kill and murder has to do with intent and crime. You can't murder a pig or a deer, and you can't accidentally murder a person.) Children also need to encounter words frequently in a variety of contexts in order to internalize them. McKeown, Beck, Omanson, and Pople (1985) found that children did not really know and understand words they had only encountered four times, but they did know and understand words they encountered twelve times. Teachers can be strategic about introducing new vocabulary to students repeatedly, and providing a rich discussion and analysis of the words to enhance understanding. And finally, I would personally argue that it is important that students play a very active role in vocabulary development. There are many words that I have passively read in context repeatedly, but I became much more familiar with those words when I actively used them in my writing. Every day, I receive an e-mail from the "word a day" service that features a very rare and unusual English word. Every day, I dutifully open the e-mail, and read the word and its definition. But the only words that stick with me are the ones that I actually use in my writing. A few days ago, I happened to get the word "idiopathy" at a time when I happened to be writing an essay about different forms of dyslexia. The word "idiopathy" was perfect for describing certain forms of acquired dyslexia that have no known origin or source. Now I have internalized that word, but there are probably 300 words that have been e-mailed to me by this service in the past year that I have never used again, nor would I understand if I encountered them in context. Typically, to make a word part of my personal lexicon, I need to actively use the word in my writing and speech, and I need to use it on more than one occasion. I would argue that vocabulary instruction should be designed with that in mind. For more on the subject of vocabulary instruction, I strongly encourage you to read Steven Stahl's excellent book, "Vocabulary Development." This is a very short but highly informative book that describes research findings and has suggestions for classroom instruction. A few weeks ago, somebody wrote me and asked me how many words a typical adult knows. 
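As a side note, the kind of raw computer count that Carroll, Davies, and Richman ran (described earlier) is easy to sketch in a few lines of Python today, and doing so makes it clear why such counts overstate vocabulary size relative to word families. The snippet below is a minimal, hypothetical illustration, not a reconstruction of their actual program; the sample sentence and function name are invented for the example.

```python
import re
from collections import Counter

def naive_word_count(text):
    """Count every distinct run of letters, roughly the way the 1971
    computer count treated every space-delimited letter string as a
    separate 'word' (DRIVE, DRIVES, and DRIVING all count separately)."""
    tokens = re.findall(r"[A-Za-z]+(?:'[A-Za-z]+)?", text.lower())
    return Counter(tokens)

sample = "The driver drives. Driving drivers drive carefully."
counts = naive_word_count(sample)
print(len(counts))           # 7 distinct letter strings ...
print(counts.most_common())  # ... though arguably only 3 word families (THE, DRIVE, CAREFULLY)
```

The gap between those two numbers, letter strings versus word families, is exactly the distinction that drives the very different vocabulary-size estimates discussed below.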
Alas, I am not a very organized person, and apparently I misplaced that person's e-mail, but the question is a good opening to discuss vocabulary knowledge. Before we can say how many words a typical adult knows, we have to define what a word is, and what it means to "know" a word. Oh yeah... and we have to define "typical." Defining what a word is is a bit of a challenge. There are words like "dance" "dancing" "dancer" "dances" and "danced" that can be counted as five different words, but that seems vaguely inappropriate. If I gave you a new word that you have never heard before -- "prieve" -- and told you that it was a verb, you would probably be able to generate other forms of that verb without any difficulty -- "prieved" "priving" "prieves" etc. And you could probably guess that somebody who "prieves" is a "priever." One could argue that at the heart of it, you only learned one new word, and you already knew the rules for using that word in different forms. If you follow that line of thought, though, you have to deal with irregular verbs -- when you learn the word "go" you don't automatically learn the word "went." There are also words with common roots that give linguists pause -- is "know" the same as "knowledge" or "acknowledge"? What about compound words? If you already know "side" and "walk," should "sidewalk" get counted separately? What about proper nouns? Should "Sebastian" be counted as a word that I know? All of this just gives me a headache. Rather than talk about specific words, linguists often talk about "word families," but there is not 100% consensus as to what a word family is. It is very clear that "dance" and "dancing" belong to the same word family, but it is less clear if "know" and "acknowledge" belong to the same word family. Still, those are fairly unusual cases, and most linguists agree that there are somewhat more than 50,000 but probably less than 60,000 word families in the English language. Most of those word families are completely unfamiliar to most people. There are thousands of words like "dramaturg" and "odeum" and "iracund" that almost no adult is familiar with. They are indeed real English words, and you will probably find them in your unabridged dictionary, but chances are you have never encountered them in your life before today. There are also thousands of words that almost every speaker of English is very, very familiar with -- words like "green" and "today" and "dinner." There is no question that you know those words -- everybody who speaks English knows those words. However, there are also a few thousand words that you are only vaguely familiar with. These words are different for different people, so I will just guess and hope I don't get victimized by my example. You may have encountered words like "adroit" and "egregious" and "lucent," and you might even have a vague notion about what they mean, but I'm betting you are not as confident about your knowledge of "lucent" as you are about your knowledge of "shiny." In English (as in any language), there are some words that are extremely common, and everybody knows them -- "green." There are other words that are extremely rare, and almost nobody knows them -- "guttle." But then there are these middle words -- "egregious." They are fairly rare, and somewhat nuanced, but some people know them very well, and other people don't know them well at all. 
Every individual person has a private collection of rare words that they know well -- I, for example, love the word "defenestrate," and try to use it in conversation whenever I can (as I just did). Your mechanic is probably quite familiar with words like "camber" and "bushing." Your plumber uses words like "ferrous" and "petcock," and he or she knows at least two definitions for the word "dope." This is why it is so hard to define how many words a "typical" adult knows. There are about 5,000 to 7,000 common word families that almost everybody knows. And there are probably 20,000 word families that almost nobody knows. But there are between 10,000 and 20,000 word families that some people know and other people don't. How many of these semi-rare words a particular person knows depends on several things. How much does that person read every day? What level of education did that person achieve? What does that person do for a living? What kind of family background does that person have? Somebody who did not get much of an education and does not make a habit of reading may only be really familiar with 5,000 to 10,000 word families. Somebody who has a college education and reads a fair amount may have a working vocabulary of closer to 20,000 word families. Somebody who reads voraciously and has more of an academic career may be familiar with 25,000 or 30,000 word families. True story: I was listening to reading and vocabulary expert Anne Cunningham give a talk a few years ago -- she was telling the audience that everybody should read more because the vocabulary used in literature is far, far richer than the vocabulary used in conversation or dialog. The vocabulary used on television or in conversation tends to be very limited. I was dutifully taking notes for the first half of her talk, but I realized about half way through her talk that her presentation was peppered with a very rich vocabulary. (This was a bit ironic given the point she was trying to make.) I found myself writing down the rare words she was using. I did not catch all of them, but in the last 15 minutes of her talk, she quite comfortably used these words: provoke, maneuver, equate, invariably, exposure, dominance, participation, multiple, subgroups, relatively, differentiated, significant, separately, increased, hypothesis, explore, contribution, control, observe, effect, examine, variable, interest, intervene, exposure, consequence, aspects, potent, mismatch, correlation, discrepant, contemporaneous, acquisition, analysis, implemented, comprehension, summary, variety, cumulative, phenomenon, divergence, hypothesized, efficacious, cognitive, caveat, displace, prerequisite, encouraging, despair, malleable, partially, ilk, and travesty. I bet Scrabble night at the Cunningham house is a hoot. Is Anne Cunningham typical? Clearly not. But it is hard to say exactly what is typical. Most people in this country don't read very much, so they probably have vocabularies closer to the 10,000 end of the scale, maybe even closer to 5,000. I have seen estimates that a "typical" college graduate is probably familiar with 20,000 word families, but again, even among that population, there is probably a great deal of variability. People who read 3 to 4 hours a day are probably familiar with more than 25,000 word families, but very, very few people actually read 3 to 4 hours a day. Somebody who dropped out of high school and does not read may only know 5,000 or 6,000 word families. 
Somebody who finished high school and is able to read, but doesn't really make a habit of it may know closer to 10,000 word families. Somebody who went to college and can read well, and makes a habit of reading popular books and magazines may know 15,000 word families. A college graduate with a more "white collar" job may have a vocabulary of 20,000 word families -- almost 4 times as large as the unfortunate soul who dropped out of high school. And of course, somebody with an advanced degree and an academic job could be familiar with 25,000 word families or more. I will leave it to you to decide what is typical.

References

Baumann, J.F. and Kameenui, E.J. (1991). Research on vocabulary instruction: Ode to Voltaire. In J. Flood, J.J. Lapp, and J.R. Squire (Eds.), Handbook of research on teaching the English language arts (pp. 604-632). New York: MacMillan.

Beck, I.L. and McKeown, M.G. (1991). Social studies texts are hard to understand: Mediating some of the difficulties. Language Arts, 68, 482-490.

Carroll, J.B., Davies, P., and Richman, B. (1971). The American Heritage Word Frequency Book. Boston: Houghton Mifflin.

Dale, E. & O'Rourke, J. (1986). Vocabulary building. Columbus, OH: Zaner-Bloser.

D'Anna, C.A., Zechmeister, E.B., & Hall, J.W. (1991). Toward a meaningful definition of vocabulary size. Journal of Reading Behavior, 23, 109-122.

Graves, M.F. (1986). Vocabulary learning and instruction. In E.Z. Rothkopf (Ed.), Review of Research in Education, 13, 49-89.

Hayes, D.P. and Ahrens, M.G. (1988). Vocabulary simplification for children: A special case of "motherese"? Journal of Child Language, 15(2), 395-410.

Hirsh, D. & Nation, P. (1992). What vocabulary size is needed to read unsimplified texts for pleasure? Reading in a Foreign Language, 8, 689-696.

Kuhn, M.R. & Stahl, S.A. (1998). Teaching children to learn word meanings from context: A synthesis and some questions. Journal of Literacy Research.

McKeown, Beck, Omanson, and Pople (1985).

Nagy, W.E. and Anderson, R.C. (1984). How many words are there in printed English? Reading Research Quarterly, 19, 304-330.

Nagy, W.E. and Herman, P.A. (1987). Breadth and depth of vocabulary knowledge: Implications for acquisition and instruction. In M. McKeown and M. Curtis (Eds.), The Nature of Vocabulary Acquisition (pp. 19-35). Hillsdale, NJ: Erlbaum Associates.

Nagy, W.E., Herman, P., and Anderson, R. (1985). Learning words from context. Reading Research Quarterly, 19, 304-330.

Stahl, S.A. & Fairbanks, M.M. (1986). The effects of vocabulary instruction: A model-based meta-analysis. Review of Educational Research, 56(1), 72-110.

Stanovich, K.E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 21, 360-407.

White, T.G., Graves, M.F., and Slater, W.H. (1990). Growth of reading vocabulary in diverse elementary schools: Decoding and word meaning. Journal of Educational Psychology, 82(2), 281-290.
SMART goal setting helps you achieve your goals by improving the goals themselves. By ensuring all goals follow the five SMART criteria, you can create better goals and have a better chance of achieving them. The SMART criteria are: Specific, Measurable, Achievable, Relevant, and Timely.

Specific. Having specific goals helps make sure they're measurable and meaningful. If goals are too vague, it can be hard to know where to get started and how to track your progress; a vague goal like "Get healthy" becomes far more workable once it is rewritten as something concrete you can act on. At the same time, if your goal description adds too many unneeded words or descriptors, it might become less appealing to work towards. A goal like "Keep my 2-story farmhouse neat and tidy by spending at least 30 minutes every single day cleaning and organizing in order to prevent allergen buildup and to keep my precious things orderly and organized so I can be happy" is far too specific and could do with fewer descriptors. A better way to phrase that goal would be "Spend at least 30 minutes a day cleaning, dusting, and organizing my house." Whenever you are trying to make a goal specific, remember to ask yourself these types of questions:
- Can someone else understand what I am trying to achieve?
- Could I make this goal more concise and still capture the original goal?

Measurable. It's hard to track progress on your goals if they aren't measurable. Having goals that can be measured helps you see your progress and can be very motivating! You don't necessarily need to use numerical or quantitative measures, either; qualitative measures are still measures so long as they have meaning to you. Make sure the measures you use are appropriate. If your goal is "Get healthy" (which, by the way, is a little too vague), don't measure purely by weight. Try alternative measures to ensure you're getting the whole picture and not focusing on the wrong thing.

Achievable. The goal should not just be realistic, it must be realistic for you. Different people have different capabilities, and just because a goal is suitable for someone else doesn't mean it isn't too difficult or too easy for you. Don't be afraid to aim high, but it's often best to start small so you don't lose motivation in the face of a huge goal. This can be done by breaking big goals down into smaller ones, and it applies not only to the goal itself but also to the way you measure it.

Relevant. Make sure the goal you're working toward is something that is tangible and important to you! "Make my bed every day" might be a good goal for many, but if you really, truly find you feel no better coming home to a made bed, skip that one. Find goals that matter to you.

Timely. Most goals should have some kind of reasonable time limit to keep yourself motivated and prevent stagnation; a goal with a clear deadline might be more motivating and achievable than the same goal left open-ended. Another optional way of making goals timely is to add regularly scheduled status or progress checks into the goal using Dailies. For example, you can add a weekly check-in Daily and make it appear every week by setting it to repeat every 7 days.
VLC stands for Visible Light Communications. Basically, it uses light in the visible spectrum to carry a digital signal. Intuitively, it is like switching the light on and off and associating the value 1 with each "on" and the value 0 with each "off" (or the reverse). If you do this switching fast enough, our eyes won't be able to perceive that there has been a "switch off" at all, because of the persistence of the image on the retina (this is what happens when we watch television: we perceive fluid video whilst, in effect, a sequence of still images is being displayed, 25 or more per second depending on the standard used). To make this feasible you need a light source that can be modulated (switched on and off), such as an LED. An incandescent bulb would not work, since there is a thermal latency when you switch the light off and the filament keeps glowing for a little while; you would not be able to modulate a signal at any reasonable frequency. (People in the Middle Ages modulated light signals with handheld mirrors, but the communication speed was very low.)

VLC has attracted the interest of several parties, including municipalities. With the incentive to replace street illumination based on incandescent (or gas) bulbs with much more energy-efficient LEDs, it becomes possible to use those same light poles as beacons to broadcast data. We are not there yet, but the technology is progressing. The problem with modulating data onto a visible light signal used for illumination is that the modulation should not affect the quality of the illumination as perceived by our eyes. This results in the need to keep the light "on" for a sufficient fraction of the time to trick our eyes into seeing continuous illumination rather than flickering. To do this, and still transmit quite a bit of data, you need to increase the modulation frequency, but present LEDs are not good at high-frequency modulation (the ones we have in our televisions, cell phones and computer screens only need to support 50-100 Hz on/off cycles).

Here is where the results obtained at King Abdullah University of Science and Technology (KAUST) come in. Researchers at KAUST have managed to create perovskite nanocrystals combined with a normal red phosphor. They can produce white light at a color temperature of 3236 K, which is in the normal range for illumination. The nanocrystals support a very high modulation frequency, 40 times higher than a normal phosphor, making it possible to transmit data at 2 Gbps. That is a pretty good bandwidth if you imagine creating one 2 Gbps access point from every light bulb around a city. Notice that this supports only downstream data communications, but it can be good enough for many services where the illumination acts as a data beacon. Furthermore, in a 5G architecture one can have the downstream data arriving from the city illumination at the spot where you are, whilst the upstream data goes through a radio network access point. This will be the beauty of 5G: managing several communications media and channels within a single communications session. One more reason for municipalities to become players in the 5G arena.
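To make the on/off idea concrete, here is a minimal Python sketch of one classic way to modulate data onto a light source without changing its average brightness: Manchester-coded on-off keying. This is a generic illustration, not the scheme used in the KAUST work, and in a real link the symbols would be clocked out at megahertz rates or faster by driver hardware rather than by Python.

```python
# Manchester-coded on-off keying: each bit becomes an on->off or off->on pair,
# so the LED is on exactly half the time regardless of the data. The average
# brightness (what the eye integrates) therefore stays constant.
def manchester_encode(bits):
    symbols = []
    for b in bits:
        symbols += [1, 0] if b == 1 else [0, 1]
    return symbols

def manchester_decode(symbols):
    bits = []
    for first, second in zip(symbols[0::2], symbols[1::2]):
        bits.append(1 if (first, second) == (1, 0) else 0)
    return bits

data = [1, 0, 1, 1, 0, 0, 1]
tx = manchester_encode(data)           # drive the LED with these on/off levels
assert sum(tx) == len(tx) // 2         # LED is on 50% of the time for any data
assert manchester_decode(tx) == data   # the receiver recovers the original bits
```

The price of this trick is that two light-level symbols are needed per data bit, which is one reason the modulation frequency of the light source matters so much.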
- We read through “The Beggar and the Old Dog” - We answered questions 1-16 on the Part 2 Odyssey worksheet - We wrote a paragraph: It’s often said that the scene with Argos the dog represents the state of Odysseus’ kingdom. Prove this idea in a paragraph with a quote from the text. - Homework: read page 936 - We took a test - Everyone got a copy of the 3D Vocabulary (due next Thursday) - We took some notes - Then we did this worksheet
Khorana, Har Gobind (1922?-) is an Indian-born American chemist who has spent his life studying the chemistry of the genetic code, the “blueprint” of life. He shared the 1968 Nobel Prize in physiology or medicine with Robert William Holley and Marshall Warren Nirenberg for his work in interpreting the genetic code and determining the function of genes in protein synthesis. Khorana was born in the village of Raipur, India, in an area that is now in Pakistan. The exact date of his birth is unknown. After receiving his B.S. and M.S. degrees in chemistry from Punjab University, and his Ph.D. degree in 1948 from Liverpool University in England, Khorana studied with Vladimir Prelog in Zurich, Switzerland, and Alexander Todd in Cambridge, England, both Nobel laureate chemists. Khorana first came to international attention while working in the chemistry department of the University of British Columbia in Vancouver, Canada. There, in 1959, Khorana discovered an inexpensive way to synthesize acetyl coenzyme A, a molecule essential to the body's biochemical processing of proteins, carbohydrates, and fats. In 1960, Khorana moved his research team to the University of Wisconsin, Madison, where he focused on genetics and did groundbreaking work in the field. Most important, he detailed the functioning of nucleotides, the chemical compounds that form the “steps” in the double helix of DNA; mapped out the nucleotides' exact order; and demonstrated that they exist together in patterns of three, or “triplets,” each of which specifies a particular amino acid. He also was able to pinpoint within this structure where protein synthesis began and ended. These discoveries led to his receiving the Nobel Prize in physiology or medicine, which he shared with Robert William Holley and Marshall Warren Nirenberg, in 1968. In 1970, Khorana became the first to synthesize an artificial gene in a living cell. His work became the foundation for much of the later research in biotechnology and gene therapy.
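As a rough illustration of the "triplet" idea described above, the sketch below translates a short RNA sequence using a tiny, incomplete codon table. It is purely didactic: the full genetic code has 64 codons, and this is of course not how Khorana's laboratory work was carried out.

```python
# A minimal illustration of the codon idea: each group of three RNA
# nucleotides specifies one amino acid. Only a handful of codons are
# listed here for brevity.
CODON_TABLE = {
    "AUG": "Met",  # also the usual start signal
    "UUU": "Phe", "AAA": "Lys", "GGG": "Gly", "CCC": "Pro",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(rna):
    protein = []
    for i in range(0, len(rna) - 2, 3):          # read the sequence in steps of three
        amino_acid = CODON_TABLE.get(rna[i:i + 3], "?")
        if amino_acid == "STOP":                 # a stop codon ends protein synthesis
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUAAAGGGUAA"))  # ['Met', 'Phe', 'Lys', 'Gly']
```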
Rice, the major ingredient of sake, differs from region to region, but because breweries have tended to use both local rice and rice from other regions, the differences between the rice varieties of the various regions have not been given much thought. Recently, however, the sake industry has been increasingly favoring local rice, and is therefore creating regional differences in sake. This calls for an increased interest in the way that growing conditions affect the rice and, therefore, the regional expression of each sake. In the wine industry, cultivation methods and the selection of grape varieties have long been chosen based on local climate and growing conditions, known as terroir. The terroir of an area is derived from four components.

#1. Temperature
Grapes mature earlier when the temperature is higher. Temperature differs from region to region, so the Huglin Index (HI, a calculation taking into account 1 April - 30 September average temperatures, maximum daily temperatures and length of day according to latitude; see the short calculation sketch after these four components) is useful when considering the cultivation of grapevines. For instance, in Champagne the Huglin Index is 1550, in Bordeaux it is 2100 and in Fresno (California) it is 3170. Each grape variety grows best in areas with a certain HI, so traditional wine regions such as France have taken a long time to determine the best grape variety for each growing area.

#2. Water
Water significantly affects plant vigor, and when vines feel water stress (insufficient water), they stop growing shoots and leaves and instead send nutrition to the grapes. However, if heavy water stress is experienced during the early growing period, shoot growth stops and the vines have insufficient leaves for photosynthesis. Water stress, in terms of terroir, is a factor of precipitation and of water retention in the soil. Clay and chalky soils have higher levels of water retention than gravel and sandy soils. For good growth, controlling water stress according to the climate and soil is essential.

#3. Sun exposure
It is important to have a balance between leaf surface area and fruit volume. The best ratio is generally thought to be 1.5 m²/kg. To achieve this ratio, growers consider where vines are planted and think about the orientation of vineyards to the sun. In addition, they control leaf and shoot growth and crop yield.

#4. Nitrogen source
Nitrogen is a known factor influencing the flavor of grapes and wine. The typical aroma of Sauvignon Blanc is derived from thiols, which are produced in greater amounts if the soil is richer in nitrogen. The typical aroma of Cabernet Sauvignon is derived from IBMP, which also increases when the soil has more nitrogen. When the amount of available nitrogen is controlled, these aroma components are also controlled.
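As a rough illustration of how the Huglin Index mentioned under temperature is computed, here is a minimal Python sketch. The formula shown, summing a heat term over 1 April to 30 September and scaling by a day-length coefficient K, follows the standard published definition as best understood here; the temperature series and the K value in the example are hypothetical placeholders, not data from this article.

```python
def huglin_index(daily_mean_temps, daily_max_temps, k=1.03):
    """Huglin heliothermal index over 1 April - 30 September.

    daily_mean_temps, daily_max_temps: one value per day for the period (deg C).
    k: day-length coefficient, roughly 1.02-1.06 depending on latitude.
    """
    assert len(daily_mean_temps) == len(daily_max_temps)
    total = 0.0
    for t_mean, t_max in zip(daily_mean_temps, daily_max_temps):
        heat = ((t_mean - 10.0) + (t_max - 10.0)) / 2.0
        total += max(0.0, heat)  # days below the 10 deg C base contribute nothing
    return k * total

# Hypothetical example: a flat 183-day season with an 18 C mean and 25 C maximum
hi = huglin_index([18.0] * 183, [25.0] * 183, k=1.03)
print(round(hi))  # about 2168, i.e. in the same range as the Bordeaux figure cited above
```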
52 Weeks of Historical How-To’s, Week 3: Talbot’s Photogenic Drawing in The Pencil of Nature From well before the time when William Henry Fox Talbot ventured into the realm of photographic experiments, the effects of silver nitrate, salts and light were known to several men of science. We credit Talbot with the invention of the photographic negative because it was his experiments which spurred a long line of successive developments in photography. In 1844 The Pencil of Nature became the first widely published book, although delivered in six individual fascicles, to be illustrated photographically. Talbot had a close friendship with Sir David Brewster, who was principal of the University of St Andrews from 1838 to 1859. Brewster was made aware of Talbot’s work through their ongoing correspondence, and he put in a library acquisition request for the Pencil of Nature in early 1844. In this week’s Historical How-To we decided to recreate Talbot’s photogenic drawing experiments using The Pencil of Nature, now held in our collection, as our inspiration and guide. It is important for we modern-day-readers to think back to a time when the likeness of a building, object, landscape scene or even your dear old aunt Bessie could only be captured by an artist, and the quality of said likeness would vary greatly depending on the aptitude of your chosen artist. Although surrounded by a family of women who were gifted artists, Talbot was afflicted with the inability to draw, even with the aid of a camera lucida. His experiments were fuelled by his desire to find a way to create an image without the use of an ‘artist’s pencil’. This was a strange and foreign concept at the time, so much so that he inserted a ‘Notice to the Reader’ on the first page of each fascicle explaining this unique feature of his work: The Pencil of Nature consists of 24 photographic plates, most of which are salted paper prints from calotype paper negatives, however two plates, namely VII and XX are photogenic drawings, one of a plant leaf and another of a piece of lace. A photogenic drawing is, in essence a highly detailed shadow that is captured and fixed through a series of chemical processes. Because the specific chemical requirements are not explained in this publication, the enthusiastic Victorian reader would have had to delve deeper into the scientific literature of the time. Talbot does however explain the mechanics by which one would ‘capture a shadow’. A piece of writing paper was sensitised by the application of two coatings; one of sodium chloride (salt) and the other silver nitrate. Talbot used normal table salt, however as regular salt today is often iodized, we used sea salt flakes diluted in water. Once that layer was dry we added the silver nitrate, which had been dissolved in distilled water. At this point the paper becomes sensitive to light so the silver application must be carried out by ‘candle light’ or in our case a dark-ish loading bay. The process is actually quite slow, so a little bit of light from the adjoining room had no real effect. In the 1830s the papers would have been hung to dry – we cheated a bit with the use of a hair dryer to speed things along as nothing really dries quickly here in Scotland. Talbot’s instructions in Plate VII in the Pencil of Nature describe the use of a printing frame. A leaf of a plant, or a similar object which is thin and delicate, is laid flat upon a sheet of prepared paper which is moderately sensitive. 
It is then covered with a glass, which is pressed down tight upon it by means of screws. This done, it is placed in the sunshine for a few minutes, until the exposed parts of the paper have turned dark brown or nearly black. It is then removed into a shady place, and when the leaf is taken up, it is found to have left its impression or picture on the paper. – William Henry Fox Talbot, Plate VII, in The Pencil of Nature Victorian printing frames can be a bit hard to come by, but these were put together using thick glass, four bull-dog clips, a felt lined board and a bit of duct tape. It may not look as pretty, but does the job just as Talbot instructed. We found that ‘a few minutes’ of English sunshine in August equated to about 30 minutes of Scottish sunshine in November, but the result is roughly the same. We brought the printing frame back to our shady loading bay, where we could safely remove the paper, wash out all the remaining light sensitive silver and ‘fix’ the image with a strong salt bath, or sodium thiosulphate, commonly called ‘hypo’. Talbot’s original experiments used various kinds of salts, which gave slightly different colours, however the use of hypo, recommended to him by his friend John Herschel, leaves a more permanent image which would have been used for all images in the Pencil of Nature. The keen observer will notice a distinct difference between Plate VII and our photogenic drawings; Plate VII appears in positive whereas ours is in negative. Talbot goes on to explain this slightly confusing relationship in Plate XX where he shows a ‘negative’ image. But to briefly clarify, one would create a negative image such as ours first, then following the exact same process use the first photogenic drawing/negative image to create a print such as is shown above thus appearing in positive. This negative-positive relationship was at the core of photography for over 150 years ending with what most of us remember as 35mm roll film. However, before modern film, and predominantly throughout the second half of the 19th century, there was a wave of innovations and advancements in ‘photography’ exploiting the innate qualities of glass, collodion and gelatine, each giving new and distinct properties to the medium and the artist, but it all started with Talbot’s chemistry experiments on a simple piece of paper. As this is the first example of a negative image that has been introduced into this work, it may be necessary to explain, in a few words, what is meant by that expression, and wherein the difference consists. The ordinary effect of light upon white sensitive paper is to blacken it. If therefore any object, as a leaf for instance, be laid upon the paper, this, by intercepting the action of the light, preserves the whiteness of the paper beneath it, and accordingly when it is removed there appears the form or shadow of the leaf marked out in white upon the blackened paper; and since shadows are usually dark, and this is the reverse, it is called in the language of photography a negative image. – William Henry Fox Talbot, Plate XX, in The Pencil of Nature – Rachel Nordstrom Photographic Research & Preservation Officer
"As the dew is dried up by the morning Sun, So are the sins of Mankind by the sight of The Mighty Ganga." The Eternal Gift Of Lord Shiva’s and Bhagirath's Penance Hindus consider Gangotri the source of the Ganges, although the actual source is the glacier at Gaumukh, another 18 miles upstream. Gangotri is an ideal location, here is the mystical aura that India is so famous for. The town has two important ritual centers. The first is the Ganges itself, which is considered the Goddess Ganga in material form. Pilgrims bathe in it, and perform rituals beside it. The other important center is the temple of Goddess Ganga. Gangotri also has a strong historical past and bears the eternal feel of antiquity, but it is an equally favorite destination of an intrepid traveler and for the one who is looking for solace in the Himalayas. The river itself begins at Gangotri which literally means Ganga Uttari or Ganga descending. She came to be called Bhagirathi at her legendary source. Mythological story of the Ganga :- According to Hindu history, Goddess Ganga took the form of a river to absolve the sins of King Bhagiratha's predecessors, following his severe penance of several centuries. According to this legend, King Sagara, after slaying the demons on earth decided to stage an Ashwamedha Yajna as a proclamation of his supremacy. The horse which was to be taken on an uninterrupted journey around the earth was to be accompanied by the King's 60,000 sons born to Queen Sumati and one son Asamanja born of the second queen Kesani. Indra, supreme ruler of the gods feared that he might be deprived of his celestial throne if the "Yajna" (worship with fire) succeeded and then took away the horse and tied it to the ashram of Sage Kapil, who was then in deep meditation. The sons of the King Sagara searched for the horse and finally found it tied near the meditating sage. Sixty thousand angry sons of King Sagara stormed the ashram of sage Kapil. When he opened his eyes, the 60,000 sons had all perished, by the curse of sage Kapil. Bhagiratha, the grandson of King Sagar, is believed to have meditated to please the Goddess Ganga enough to cleanse the ashes of his ancestors, and liberate their souls, granting them salvation. GANGA JAL-THE HOLY WATER :- Water collected from Gangotri is in a pure state and even after being kept for a number of years, retains its original flavor and state. The medicinal properties of Ganga water are ascribed to the several herbs that mix with the waters which already have a high mineral content. ARCHITECTURE STYLE :- The temple of the goddess Ganga, first built about in the 18th century almost 300 years ago by the Gorkha General Amar Singh Thapa, and restored in the late 19th century by the royal house of Jaipur. Gangotri is at an altitude of 3141 meters above sea level, is located right at the river Ganga. There is also the Shila, which is present at the exact spot of Ganga's descent into earth. A huge and extensive stone slab roofs the entire outer chamber of the temple. Visiting Time :- The Shrine of Gangotri opens during the last week of April or the first week of May, on the auspicious day od Akshaya Tritiya. The temple opening is preceeded by a special Puja of Ganga both inside the temple as well as on the river banks. The temple closes on the day of Diwali followed by a formal closing ceremony amidst a row of oil lamps. It is believed that the Goddess retreats to Mukhwa, her winter abode (12 km downstream). 
Puja and Rituals :- In the temple, Mother Ganga is worshipped both as a Goddess and as the holy river. Before performing the Puja rituals, a holy dip in the Ganga flowing near the temple is a must. The Pujaris (priests) belong to the Brahmin community of Mukhwa village. Ten of them are selected by rotation every year to perform all the functions of the temple, and they also perform the duties of pandas. You can also pray for your ancestors on the bank of the Divine Ganga.

SIGHT SEEING & EXCURSION >>

Submerged Shivling: Submerged in the river, this natural rock Shivling is the place where, according to mythology, Lord Shiva sat when he received the Ganga in his matted locks. It is visible in the winter months when the water level decreases.

Kedar Ganga Sangam: Around 100 yards from the Ganga Temple flows the river Kedar Ganga. Starting from the Kedar Valley, this river meets the Bhagirathi on its left bank.

Kedar Tal: This spectacular and enchanting lake is situated about 18 km from Gangotri, negotiable through a rough and tough mountain trail. The trek is very tiring and can be testing even for a regular trekker; a local guide is a must. The lake is crystal clear, with the mighty Thalaysagar (Sphatik Ling) peak forming a splendid backdrop. The place is about 4,000 meters above sea level and is the base camp for treks to Thalaysagar, Jogin, Bhrigupanth and other peaks.

Dayara Bugyal: Bugyal in the local language means "high-altitude meadow." The road to Dayara Bugyal branches off near Bhatwari, a place on the Uttarkashi-Gangotri road about 28 km from Uttarkashi. Vehicles can go up to the village of Barsu, from where one has to trek about 8 km to reach Dayara; the other route is via the village of Raithal, 10 km from Bhatwari, from where one has to trek about 7 km to Dayara Bugyal. Situated at an elevation of about 3,048 meters, this vast meadow is second to none in natural beauty. During winter it provides excellent ski slopes over an area of 28 sq. km. The panoramic view of the Himalayas from here is breathtaking. There is a small lake in the area, and to camp by its side is a memorable experience. From this spot one can trek down to Dodi-Tal, which is about 22 km away, through dense forests.

Sat-Tal, meaning seven lakes, is situated just above Dharali, 2 km beyond Harsil. The trek of about 5 km is rewarding, as this group of lakes is set amidst beautiful natural surroundings. It also provides lovely camp sites.

The Gaumukh glacier is the source of the Bhagirathi (Ganga) and is held in high esteem by devotees, who do not miss the opportunity to take a holy dip in the bone-chilling icy water. It is an 18 km trek from Gangotri. The trek is easy, and at times people come back to Gangotri the same day.

General Information :
Altitude : 3,048 meters.
Climate : Summers - cool during the day and cold at night (Max: 20°C & Min: 6°C); Winters - snowbound, touching sub-zero.
Season : April to October.
Clothing : Heavy woolens throughout the season.
Languages : Hindi, Garhwali and English.
Air : Nearest airport is Jolly Grant (262 km).
Rail : Nearest railhead is at Rishikesh, 226 km away.
Road : Gangotri is connected by road to Uttarkashi, Tehri Garhwal and Rishikesh, and from there to other parts of the country. Bus services of the Samyukt Rotation Yatayat Vyawastha Samiti connect Gangotri with many centres in the region, such as Haridwar, Rishikesh, Tehri and Uttarkashi.
Previously in this series we’ve seen the definition of a category and a bunch of examples, basic properties of morphisms, and a first look at how to represent categories as types in ML. In this post we’ll expand these ideas and introduce the notion of a universal property. We’ll see examples from mathematics and write some programs which simultaneously prove certain objects have universal properties and construct the morphisms involved. A Grand Simple Thing One might go so far as to call universal properties the most important concept in category theory. This should initially strike the reader as odd, because at first glance universal properties are so succinctly described that they don’t seem to be very interesting. In fact, there are only two universal properties and they are that of being initial and final. Definition: An object in a category is called initial if for every object there is a unique morphism . An object is called final if for every object there is a unique morphism . If an object satisfies either of these properties, it is called universal. If an object satisfies both, it is called a zero object. In both cases, the existence of a unique morphism is the same as saying the relevant Hom set is a singleton (i.e., for initial objects , the Hom set consists of a single element). There is one and only one morphism between the two objects. In the particular case of when is initial (or final), the definition of a category says there must be at least one morphism, the identity, and the universal property says there is no other. There’s only one way such a simple definition could find fruitful applications, and that is by cleverly picking categories. Before we get to constructing interesting categories with useful universal objects, let’s recognize some universal objects in categories we already know. In the single element set is final, but not initial; there is only one set-function to a single-element set. It is important to note that the single-element set is far from unique. There are infinitely many (uncountably many!) singleton sets, but as we have already seen all one-element sets are isomorphic in (they all have the same cardinality). On the other hand, the empty set is initial, since the “empty function” is the only set-mapping from the empty set to any set. Here the initial object truly is unique, and not just up to isomorphism. It turns out universal objects are always unique up to isomorphism, when they exist. Here is the official statement. Proposition: If are both initial in , then are isomorphic. If are both final, then . Proof. Recall that a mophism is an isomorphism if it has a two sided inverse, a so that and are the identities. Now if are two initial objects there are unique morphisms and . Moreover, these compose to be morphisms . But since is initial, the only morphism is the identity. The situation for is analogous, and so these morphisms are actually inverses of each other, and are isomorphic. The proof for final objects is identical. Let’s continue with examples. In the category of groups, the trivial group is both initial and final, because group homomorphisms must preserve the identity element. Hence the trivial group is a zero object. Again, “the” trivial group is not unique, but unique up to isomorphism. In the category of types with computable (halting) functions as morphisms, the null type is final. 
To be honest, this depends on how we determine whether two computable functions are “equal.” In this case, we only care about the set of inputs and outputs, and for the null type all computable functions have the same output: null. Partial order categories are examples of categories which need not have universal objects. If the partial order is constructed from subsets of a set , then the initial object is the empty set (by virtue of being a subset of every set), and as a subset of itself is obviously final. But there are other partial orders, such as inequality of integers, which have no “smallest” or “largest” objects. Partial order categories which have particularly nice properties (such as initial and final objects, but not quite exactly) are closely related to the concept of a domain in denotational semantics, and the language of universal properties is relevant to that discussion as well. The place where universal properties really shine is in defining new constructions. For instance, the direct product of sets is defined by the fact that it satisfies a universal property. Such constructions abound in category theory, and they work via the ‘diagram categories’ we defined in our introductory post. Let’s investigate them now. Let’s recall the classical definition from set theory of a quotient. We described special versions of quotients in the categories of groups and topological spaces, and we’ll see them all unified via the universal property of a quotient in a moment. Definition: An equivalence relation on a set is a subset of the set product which is reflexive, symmetric, and transitive. The quotient set is the set of equivalence classes on . The canonical projection is the map sending to its equivalence class under . The quotient set can also be described in terms of a special property: it is the “largest” set which agrees with the equivalence relation . On one hand, it is the case that whenever in then . Moreover, for any set and any map which equates equivalent things ( for all ), then there is a unique map such that . This word salad is best translated into a diagram. Here we use a dashed line to assert the existence of a morphism (once we’ve proven such a morphism exists, we use a solid line instead), and the symbol signifies existence () and uniqueness (!). We can prove this explicitly in the category . Indeed, if is any map such that for all equivalent , then we can define as follows: for any whose equivalence class is denoted by in , and define . This map is well defined because if , then . It is unique because if for some other , then ; this is the only possible definition. Now the “official” way to state this universal property is as follows: The quotient set is universal with respect to the property of mapping to a set so that equivalent elements have the same image. But as we said earlier, there are only two kinds of universal properties: initial and final. Now this looks suspiciously like an initial object ( is going from , after all), but what exactly is the category we’re considering? The trick to dissecting this sentence is to notice that this is not a statement about just , but of the morphism . That is, we’re considering a diagram category. In more detail: fix an object in and an equivalence relation on . We define a category as follows. The objects in the category are morphisms such that in implies in . 
The morphisms in the category are commutative diagrams Here need to be such that they send equivalent things to equal things (or they wouldn’t be objects in the category!), and by the commutativity of the diagram . Indeed, the statement about quotients is that is an initial object in this category. In fact, we have already proved it! But note the abuse of language in our offset statement above: it’s not really that is the universal object, but . Moreover, the statement itself doesn’t tell us what category to inspect, nor whether we care about initial or final objects in that category. Unfortunately this abuse of language is widespread in the mathematical world, and for arguably good reason. Once one gets acquainted with these kinds of constructions, reading between the limes becomes much easier and it would be a waste of time to spell it out. After all, once we understand there is no “obvious” choice for a map except for the projection . This is how got its name, the canonical projection. Two last bits of terminology: if is any category whose objects are sets (and hence, where equivalence relations make sense), we say that has quotients if for every object there is a morphism satisfying the universal property of a quotient. Another way to state the universal property is to say that all maps respecting the equivalence structure factor through the quotient, in the sense that we get a diagram like the one above. What is the benefit of viewing by its universal property? For one, the set is unique up to isomorphism. That is, if any other pair satisfies the same property, we automatically get an isomorphism . For instance, if is defined via a function (that is, if ), then the pair satisfies the universal property of a quotient. This means that we can “decompose” any function into three pieces: The first map is the canonical projection, the second is the isomorphism given by the universal property of the quotient, and the last is the inclusion map into . In a sense, all three of these maps are “canonical.” This isn’t so magical for set-maps, but the same statement (and essentially the same proof) holds for groups and topological spaces, and are revered as theorems. For groups, this is called The First Isomorphism Theorem, but it’s essentially the claim that the category of groups has quotients. This is getting a bit abstract, so let’s see how the idea manifests itself as a program. In fact, it’s embarrassingly simple. Using our “simpler” ML definition of a category from last time, the constructive proof that quotient sets satisfy the universal property is simply a concrete version of the definition of we gave above. In code, fun inducedMapFromQuotient(setMap(x, pi, q), setMap(x, g, y)) = setMap(q, (fn a => g(representative(a))), y) That is, once we have and defined for our given equivalence relation, this function accepts as input any morphism and produces the uniquely defined in the diagram above. Here the “representative” function just returns an arbitrary element of the given set, which we added to the abstract datatype for sets. If the set is empty, then all functions involved will raise an “empty” exception upon being called, which is perfectly fine. We leave the functions which explicitly construct the quotient set given as an exercise to the reader. Products and Coproducts Just as the concept of a quotient set or quotient group can be generalized to a universal property, so can the notion of a product. Again we take our intuition from . 
There the product of two sets is the set of ordered pairs But as with quotients, there’s much more going on and the key is in the morphisms. Specifically, there are two obvious choices for morphisms and . These are the two projections onto the components, namely and . These projections are also called “canonical projections,” and they satisfy their own universal property. The product of sets is universal with respect to the property of having two morphisms to its factors. Indeed, this idea is so general that it can be formulated in any category, not just categories whose objects are sets. Let be two fixed objects in a category . Should it exist, the product is defined to be a final object in the following diagram category. This category has as objects pairs of morphisms and as morphisms it has commutative diagrams In words, to say products are final is to say that for any object in this category, there is a unique map that factors through the product, so that and . In a diagram, it is to claim the following commutes: If the product exists for any pair of objects, we declare that the category has products. Indeed, many familiar product constructions exist in pure mathematics: sets, groups, topological spaces, vector spaces, and rings all have products. In fact, so does the category of ML types. Given two types ‘a and ‘b, we can form the (aptly named) product type ‘a * ‘b. The canonical projections exist because ML supports parametric polymorphism. They are fun leftProjection(x,y) = x fun rightProjection(x,y) = y And to construct the unique morphism to the product, fun inducedMapToProduct(f,g) = fn a => (f(a), g(a)) We leave the uniqueness proof to the reader as a brief exercise. There is a similar notion called a coproduct, denoted , in which everything is reversed: the arrows in the diagram category go to and the object is initial in the diagram category. Explicitly, for a fixed the objects in the diagram category are again pairs of morphisms, but this time with arrows going to the central object The morphisms are again commutative diagrams, but with the connecting morphism going away from the central object And a coproduct is defined to be an initial object in this category. That is, a pair of morphisms such that there is a unique connecting morphism in the following diagram. Coproducts are far less intuitive than products in their concrete realizations, but the universal property is no more complicated. For the category of sets, the coproduct is a disjoint union (in which shared elements of two sets are forcibly considered different), and the canonical morphisms are “inclusion” maps (hence the switch from to in the diagram above). Specifically, if we define the coproduct as the set of “tagged” elements (the right entry in a tuple signifies which piece of the coproduct the left entry came from), then the inclusion maps and are the canonical morphisms. There are similar notions of disjoint unions for topological spaces and graphs, which are their categories’ respective coproducts. However, in most categories the coproducts are dramatically different from “unions.” In group theory, it is a somewhat complicated object known as the free product. We mentioned free products in our hasty discussion of the fundamental group, but understanding why and where free groups naturally occur is quite technical. 
Similarly, coproducts of vector spaces (or -modules) are more like a product, with the stipulation that at most finitely many of the entries of a tuple are nonzero (e.g., formal linear combinations of things from the pieces). Even without understanding these examples, the reader should begin to believe that relatively simple universal properties can yield very useful objects with potentially difficult constructions in specific categories. The ubiquity of the concepts across drastically different fields of mathematics is part of why universal properties are called “universal.” Luckily, the category of ML types has a nice coproduct which feels like a union, but it is not supported as a “native” language feature like products types are. Specifically, given two types ‘a, ‘b we can define the “coproduct type” datatype ('a, 'b)Coproduct = left of 'a | right of 'b Let’s prove this is actually a coproduct: fix two types ‘a and ‘b, and let be the functions fun leftInclusion(x) = left(x) fun rightInclusion(y) = right(y) Then given any other pair of functions which accept as input types ‘a and ‘b, respectively, there is a unique function operating on the coproduct type. We construct it as follows. fun inducedCoproductMap(f,g) = let theMap(left(a)) = f(a) theMap(right(b)) = g(b) in theMap end The uniqueness of this construction is self-evident. This author finds it fascinating that these definitions are so deep and profound (indeed, category theory is heralded as the queen of abstraction), but their realizations are trivially obvious to the working programmer. Perhaps this is a statement about how well-behaved the category of ML types is. So far we have seen three relatively simple examples of universal properties, and explored how they are realized in some categories. We should note before closing two things. The first is that not every category has objects with these universal properties. Unfortunately poset categories don’t serve as a good counterexample for this (they have both products and coproducts; what are they?), but it may be the case that in some categories only some pairs of objects have products or coproducts, while others do not. Second, there are many more universal properties that we haven’t covered here. For instance, the notion of a limit is a universal property, as is the notion of a “free” object. There are kernels, pull-backs, equalizers, and many other ad-hoc universal properties without names. And for every universal property there is a corresponding “dual” property that results from reversing the arrows in every diagram, as we did with coproducts. We will visit the relevant ones as they come up in our explorations. In the next few posts we’ll encounter functors and the concept of functoriality, and start asking some poignant questions about familiar programmatic constructions.
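The post's examples are written in Standard ML. As a rough, untyped analogue for readers more at home in Python (an illustration added here, not part of the original post, with function names of my own choosing), the induced maps for the product and the coproduct can be sketched as follows.

    # Product: from f : A -> B and g : A -> C, the induced map A -> B x C.
    def induced_map_to_product(f, g):
        return lambda a: (f(a), g(a))

    # Coproduct as a tagged (disjoint) union, mirroring the ML datatype with
    # constructors named left and right.
    def left(x):
        return ("left", x)

    def right(y):
        return ("right", y)

    # The induced map out of the coproduct: apply f to left-tagged values and
    # g to right-tagged values.
    def induced_coproduct_map(f, g):
        def the_map(tagged):
            tag, value = tagged
            return f(value) if tag == "left" else g(value)
        return the_map

    # Both triangles in the universal-property diagrams commute on these samples.
    h = induced_map_to_product(len, str.upper)
    assert h("abc") == (3, "ABC")

    k = induced_coproduct_map(lambda n: n + 1, len)
    assert k(left(4)) == 5 and k(right("abc")) == 3

As in the ML versions, uniqueness is immediate: any map making the relevant diagram commute must agree with these definitions on every input.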
Chemical analysis of materials used to create the paint found on the College of Wooster's Egyptian coffin

Paint fragments from the Egyptian coffin containing the remains of a female named Ta-irty-bai ("The Two Eyes of My Soul") were analyzed to determine their composition. This artifact from Akhmim, Egypt was carbon-14 dated to between 320 and 240 B.C.E., placing its creation in the Ptolemaic period (305-30 B.C.E.). Chemical analysis was performed using a set of analytical techniques, including Fourier transform infrared spectroscopy with attenuated total reflectance (FT-IR-ATR) and atomic absorption spectroscopy (AAS), to determine the composition of the organic and inorganic materials, respectively. An understanding of the different materials used by the ancient Egyptians in their decorative paints provided comparable data against which to interpret the results. These analyses indicate the use of a wide range of materials, including gum Arabic, beeswax and calcium carbonate (CaCO3). Overall, FT-IR-ATR confirmed that gum Arabic was the primary binding adhesive within the paint. The coffin was subjected to a fire that occurred on the campus of the College of Wooster in 1901, which caused soot and water damage to the interior and exterior. The material composition helps in understanding Egyptian technology and the culture's emphasis on art. During a conservation attempt to clean the coffin in 2005, a majority of the pigments were observed to swell when water was applied; organic plant gums are highly water soluble, making them difficult to clean. These results will thus provide insight into other options for further cleaning of the Wooster coffin in the future.

© Copyright 2010 Kimberly Anne Krall
Student information - any information or collection of information where the topic is a student of Edmonton Public Schools. Examples of collections include: Student Record - is the official, permanent cumulative collection of information affecting and documenting decisions made about the education of a student in the district (also known as the cumulative record). SIS databases - district, school and teacher versions (SIS is Student Information System). Other student files/data - includes files whose topic is a particular student maintained to make daily interactions efficient. Program specific collections e.g., Special Needs Funding, International Students, Inclusive Learning, etc. Student Record Regulation means Alberta Regulation 225/2006. (please see Sections 23, 10(1), and 23 of the School Act)
As we all know, cars and trucks are increasingly being packed with all sorts of electronic systems designed to vastly improve their safety profile: electronic stability control, blind spot detection systems, heck even automatic braking technology to prevent collisions.

Now, however, Ford Motor Co. is trying to take things a step further – crafting a technological package to determine "workload stress" upon the driver, then re-adjusting the amount and levels of information being sent to the driver in order to create a more "calming" operating environment.

Whoa! Sounds like a vehicle acting like a psychiatrist to me!

The heart of this system is based on biometric feedback obtained via sensors in the steering wheel, seat and seat belt to provide a more complete model of driver stress levels, according to Jeff Greenberg, senior technical leader at Ford's research and innovation division. "Vehicle control inputs, sensors, road conditions and biometric information such as a driver's pulse and breathing can all be used to create a driver workload estimation that can then help manage certain functions in demanding situations," he explained.

Greenberg pointed out that data from the sensing systems within a host of driver-assist technologies can be used to determine the amount of external demand and workload upon a driver at any given time, including traffic and road conditions. In addition, there's a "health and wellness" side to this via ongoing research into a biometric seat, seat belt and steering wheel that can monitor the condition of the driver to provide even more specific data concerning a driver's "state of being."

To accurately determine when "driver stress" exceeds safety norms, Greenberg said his team uses what's called a "driver workload estimator" algorithm, drawing on real-time data from existing sensors such as radar and cameras combined with input from the driver's use of the throttle, brakes and steering wheel. The result is an intelligent system enabling management of in-vehicle communications based on the assessed workload of the driving situation, he explained. [At this rate, cars will be able to provide yoga lessons and discourse upon the 'Wheel of Dharma.']

For example, Greenberg said the side-looking radar sensors used within Ford's Blind Spot Information System (BLIS) and the forward-looking camera for the Lane-Keeping System are on watch even when there is no active warning provided to the driver. These signals could indicate there is a significant amount of traffic in the lane a driver is merging into as he or she enters a highway. Combine that knowledge with the fact that the driver has increased throttle pedal pressure to speed up, and the workload estimate could be high enough to determine it isn't a very good time for an incoming phone call to ring inside the cabin, he pointed out. Thus, the car could intelligently apply the "Do Not Disturb" feature that is already available as part of the MyFord Touch package, helping the driver stay focused on the road during the high-demand situation.

"In addition to using existing vehicle data to estimate demand on the driver, we're researching ways to get an even better understanding of the stress level of the driver," added Gary Strumolo, Ford's manager of vehicle design and infotronics. "Biometric or health information of the driver can help us better tailor the experience when behind the wheel." Turning new biometric sensors toward the driver will help to create a more complete picture of the driver workload, he explained.
Ford's research team has built a biometric seating buck to test a number of different sensors and gather data on how drivers respond to a variety of inputs for a driver behavior model, he said. The experimental system adds several sensors to the steering wheel rim and spokes to get more detailed driver information.

"Anyone who has used modern exercise equipment like treadmills and stair climbers will be familiar with the metal pads on the rim that can be used to measure the driver's heart rate," Strumolo noted. Infrared sensors on the steering wheel monitor the palms of a driver's hands as well as his or her face, looking for changes in temperature, while a downward-looking infrared sensor under the steering column measures the cabin temperature to provide a baseline for comparing changes in the driver's temperature. The final sensor is embedded in the seat belt to assess the driver's breathing rate, he said.

"With a more complete picture of the driver's health and wellness blended with knowledge of what is happening outside the vehicle, the car will have the intelligence to dynamically adjust the alerts provided to the driver and filter interruptions," Strumolo said. "With the driver occupied in heavy traffic, the vehicle control system could increase the warning times for forward collision alerts and automatically filter out phone calls and messages, allowing the driver more time to respond. On the other hand, an alert driver on an open highway could receive incoming calls."

Now, while all of those features are still in the research phase, he pointed out that they show significant opportunity to leverage data already being captured by the vehicle and then apply it in an "intelligent decision-making system" to simplify the driving experience. Sounds more than a bit funky, but if such technology could indeed modulate the interior environment of a motor vehicle in tough situations – blocking calls, turning down the radio, and minimizing all but the most critical information flow – it just might help make driving a less stressful occupation. That's a lot of "ifs" of course, but it'll be neat to see if Ford and others can make it work.
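Ford has not published the internals of the workload estimator described above, but the basic idea (blend several normalized sensor readings into a single score, then hold non-critical alerts when that score crosses a threshold) can be sketched in a few lines. The Python below is purely illustrative; the signal names, weights and threshold are assumptions made for this example, not Ford's actual algorithm.

    def workload_score(signals, weights):
        """Combine normalized sensor readings (0.0 to 1.0) into one workload estimate."""
        total_weight = sum(weights[name] for name in signals)
        return sum(signals[name] * weights[name] for name in signals) / total_weight

    # Hypothetical inputs, each a normalized reading from a sensor the article mentions.
    signals = {
        "traffic_density": 0.8,    # side radar / forward camera
        "throttle_input": 0.7,     # driver accelerating to merge
        "steering_activity": 0.4,  # steering wheel inputs
        "heart_rate": 0.6,         # biometric wheel, seat and seat belt sensors
    }
    weights = {"traffic_density": 3, "throttle_input": 2, "steering_activity": 1, "heart_rate": 2}

    DO_NOT_DISTURB_THRESHOLD = 0.6  # assumed cutoff for deferring interruptions

    if workload_score(signals, weights) >= DO_NOT_DISTURB_THRESHOLD:
        print("High workload: hold incoming calls and non-critical alerts")
    else:
        print("Normal workload: allow calls and messages")

With these made-up numbers the score works out to about 0.68, so the sketch would hold the incoming call, which is the behavior the merging-onto-a-highway example describes.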
There’s a lot of rhetoric from all sides of the gun control debate. A recent study out of Johns Hopkins aimed to cut through some of that talk and get some real data about the effect of stricter gun laws. The result? In short, making it harder to get guns reduces gun suicides significantly. The researchers looked at data out of Connecticut and Missouri. In the former, a 1995 state law required gun buyers to undergo a background check and obtain a license; the latter repealed a gun licensing law in 2007. They found that the stricter gun law was linked to a 15.4 percent drop in gun suicides. In contrast, when Missouri made it easier for residents to get guns, suicides by firearm went up 16.1 percent. In 2013, more than 21,000 people in the United States killed themselves using a firearm (compared to 13,000 gun homicides). “Contrary to popular belief, suicidal thoughts are often transient, which is why delaying access to a firearm during a period of crisis could prevent suicide,” said study author Daniel Webster, director of the Johns Hopkins Center for Gun Policy and Research. Latest posts by Rachel Monroe (see all) - The Effect of a Dilapidated Home on a Baltimore Block - September 19, 2017 - The Ku Klux Klan Is Apparently Still Alive and Well in Maryland - August 24, 2017 - Baltimore May Be Getting a Professional Soccer Team - September 16, 2016
Your cart is empty! - Free shipping on orders over $25 - Free 1-year warranty The benefits of eating a diet complete with raw vegetables of all varieties and colors are huge, but many people find themselves getting gas, bloating, or loose stool when trying to digest produce. Part of the problem is that people dive head first into many of these grain-restrictive diets, into overflowing piles of endless salads and roasted broccoli, cauliflower and Brussels sprouts. While vegetables are good for you and have many crucial vitamins and minerals, if you have been on a low vegetable-based diet most of your life or suffer from pre-existing digestive symptoms, like IBS, it's easy for you to create gastrointestinal issues that you might never think would happen from eating vegetables. There are two major reasons why vegetables can be hard on our stomachs: soluble fiber and cellulose, or insoluble fiber. Fiber is healthy, but for some, it can cause issues. Your gut flora easily ferments soluble fiber. This fermentation does produce some gas but usually not enough to cause any significant symptoms. There are some people, however, who are more sensitive to the fermentation process. The most well-known causes for this sensitivity are short chain carbohydrates called FODMAPs, which is short for fermentable oligosaccharides, disaccharides, monosaccharides, and polyols. These are all types of carbohydrates found in a wide variety of fruits, vegetables, and grains. FODMAPs can be the reasons many vegetables don’t always sit well with certain people. Insoluble fiber is not fermentable. Our bodies cannot use it, so essentially, the fiber just goes in our mouths and out the other end. It is primarily used to increase stool bulk and is also found in certain foods to help fill up your stomach for fewer calories. This type of fiber isn’t quite as well known for causing digestive issues as soluble fibers are, but it is known for having laxative-like effects on people. It does that by irritating the gut lining, which makes the gut wall produce mucus as a lubricant and increases peristalsis, moving feces through the digestive tract. It doesn’t sound lovely, but that’s what happens. The most difficult vegetables to digest are the cruciferous ones, like broccoli, cauliflower, and Brussels sprouts. The reason is that these vegetables contain a compound called raffinose. Humans do not have the enzyme to break down this compound, so it passes through the stomach and small intestine undigested and enters the large intestine, where all the unpleasant gas and bloating symptoms can occur. Therefore, we want to stay away from eating these raw if we know they cause us those sorts of issues. Onion and garlic are also notoriously known for being high in FODMAPs. Some of the better vegetables to consume if you have problems digesting include the following: When you cook vegetables, you help to break down some of those harder to digest fibers, which makes them easier on the digestive system. You don’t have to boil your vegetables down to mush, but steaming and sautéing the veggies will make them more well-done at the end and most likely easier on your body. Also, mashing your vegetables – think mashed carrots, cauliflower or sweet potatoes – can also make them easier to manage because mashing is meant to somewhat mimic chewing and make your body do less work to digest the vegetables. You could also try eating vegetables cooked in soups or blended in juices and smoothies in your NutriBullet! 
Carrie Gabriel studied nutrition at California State University-Los Angeles and got her RDN through the Coordinated Dietetics Program there. She works as a dietitian for different corporate wellness companies, creating recipes for cooking demos and content for nutrition seminars. She created her company, Steps2Nutrition, in 2012 in hopes of working more one on one with busy professionals and their specific nutritional needs. She has an advanced certification in weight management and specializes in gluten and dairy free meal preparation for various clients. You can find some of Carrie’s healthy and delicious recipes and meal tips on her website, www.steps2nutrition.com, and on her Instagram and Twitter account, @steps2nutrition.
The United States has long lagged behind other countries in education. Pew Research reported in 2017 that the U.S. came in at 38th out of 71 countries when it comes to math and placed only 27th in science. Washington Post reports that the U.S. came in at 24th in reading literacy back in 2015. Given this poor academic performance on the world stage, many people have debated whether or not middle school students should be attending physical education classes on a daily basis when they should be improving their academic performance. Today, the argument continues on this controversial subject. Although some people feel that physical education classes are unimportant, as many students already exercise during their breaks and extracurricular activities, exercise is just one of the many benefits of P.E. P.E. is essential to student health, well being and also educational growth, which is why students should continue taking P.E. classes in middle school. First of all, when teachers and parents say that students should be in another academic class instead of spending time in P.E., unfortunately, they are overlooking the fact that this extra academic class would result in more homework, tests, projects and studying. Although this may seem like a benefit to student learning and education, the reality is that this would cause more stress for young students, which is unhealthy for growing minds and eat up more of their time. According to a new study by the Better Sleep Council, as reported in People, homework stress is the biggest source of frustration and stress for teens, with 74% of those surveyed ranking it the highest, above self-esteem (51%) parental expectations (45%) and bullying (15%). When students are overwhelmed with schoolwork, they cannot concentrate or focus. Replacing P.E. with another academic class would just add to this burden of stress on students, which is actually detrimental to their learning. Further, P.E. has other benefits. P.E. class promotes team-building skills and sportsmanship. For instance, students play lots of different sports such as volleyball, basketball and soccer. While playing these sports, students must cooperate with their team, even if they don’t want to, to get a good grade. This has been true in my personal experience, where teamwork counted as part of our P.E. class grade at school. Not only that, but P.E. classes build a foundation for participation in activities later in life. Knowing how to work well in group settings is an important life skill for the future, as many jobs and careers call for negotiation and cooperation. P.E. class is a wonderful opportunity for students to learn about forming stronger relationships and to get along with a variety of people. Some critics assume that students who participate in extracurricular sports already gain these skills. However, P.E. grants opportunities to every student to exercise and cooperate with each other, not just kids who participate in extracurriculars. This means that kids who do not participate in sports outside of school, or whose parents cannot afford extracurricular activities, have a chance to be active and learn the team building and cooperation skills needed for their futures. In addition, P.E. class allows the brain to rest and relax during a rigorous school day. Students can socialize with their friends and meet new people during P.E. where they don’t have to sit at a desk and work or listen quietly to the teacher. 
Not only that, but students also get to be outside, breathing fresh air instead of being confined in a classroom. In a study published on ScienceDirect, researchers found that physical activity breaks have shown to improve students’ behavior, increasing the effort they put into their activities, as well as their ability to stay on task. A class focused specifically on physical education helps students burn up any extra energy they might have accumulated during academic classes where they can’t move around as much. P.E. class provides a perfect escape and a bit of freedom during a demanding academic school day, which in turn helps students do better when they are in their academic classes. Finally, P.E. keeps students healthy and fit. As everyone knows, these classes provide opportunities for increasing core body strength and improving balance and coordination, as well as flexibility. All of this helps young bodies to develop. In a nation where obesity among youth is increasing, P.E. can address physical as well as mental health. The Center for Disease Control reports that in “children and adolescents aged 2-19 years in 2019… [the] prevalence of obesity was 18.5% and affected about 13.7 million children and adolescents.” Obesity is generally caused by little to no exercise. By keeping fit, students can have better health, reducing the obesity problem in our country. This is one of the many reasons why we must maintain P.E. classes. These activities help students increase stamina and fight obesity. Schools should keep P.E. classes in middle school as a required subject. It is an indispensable part of the school day for students. If we keep physical education classes, not only will we be contributing to the fight against obesity, but also helping students relieve stress, and thus help them improve academically. Physical education is a crucial class that should not be eliminated from the everyday schedule for students because it positively impacts their life.
Time Lapse Photography

What is Time Lapse Photography?

Time lapse photography is a technique in which film frames are captured at a much slower rate than normal, so that when they are played back at a normal frame rate, the images move quickly through time and the film appears to be traveling through time. As an example, if you take one photo every second and play the photos back at a speed of 30 frames per second, time in your video speeds up to roughly 30 times faster than actual. That means a half hour of time could be shown in one minute of video. Of course, you can move much more quickly through time depending on the interval between photos or the frame rate at which you play the video. The best way to show what time lapse photography is, is to simply show you. This is an example of time lapse video done on Mount Everest:

How to Take Time Lapse Photos

Can My Camera do Time Lapse Photography?

Any camera has the ability to take and create a time lapse photo set. However, some make it easier than others.

Time Lapse Photography Tutorials

Time Lapse Photography Ideas

Time lapse photography ideas is a list of ideas for you to go and experiment with. The amount of time your video covers is up to you, so we have ideas for day shots as well as weekly, monthly, and yearly projects.

Time Lapse Photography Software

Time Lapse Photography Calculators

Interviews with Time Lapse Photographers

Other Helpful Articles

History of Time Lapse:
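The arithmetic behind a time lapse (shooting interval, playback frame rate, clip length) is simple enough to put into a small calculator, which pairs naturally with the calculators mentioned above. The Python sketch below is illustrative only; the function and parameter names are my own rather than those of any particular time lapse tool.

    def timelapse_plan(real_duration_s, shooting_interval_s, playback_fps=30):
        """Work out the basic numbers for a time lapse shoot."""
        # Photos you need to take over the real-time duration of the event.
        frames = int(real_duration_s / shooting_interval_s)
        # Length of the finished clip once those frames play at playback_fps.
        clip_length_s = frames / playback_fps
        # How much faster time appears to move in the finished video.
        speedup = shooting_interval_s * playback_fps
        return frames, clip_length_s, speedup

    # The example above: one photo per second for half an hour, played at 30 fps.
    frames, clip_s, speedup = timelapse_plan(30 * 60, 1, 30)
    print(f"{frames} photos -> {clip_s:.0f} second clip, {speedup:.0f}x faster than real time")

Run with those numbers it reports 1,800 photos, a 60-second clip and a 30x speed-up, matching the half-hour-to-one-minute example.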
VALSARTAN AND IRBESARTAN RECALLS

Certain blood pressure medications that contain either valsartan or irbesartan have been recalled. If you take either of these drugs, talk to your doctor about what you should do. Do not stop taking your blood pressure medication without talking to your doctor first. Learn more about the recalls here and here.

Coronary artery disease (CAD) occurs when the blood vessels can't carry enough blood and oxygen to the heart. Typically, this is because the vessels are damaged, diseased, or blocked by a fatty substance called plaque. A buildup of plaque causes a condition called atherosclerosis, which can lead to CAD.

The goals of CAD treatment are to control symptoms and to stop or slow the progression of the disease. Your doctor's first treatment suggestion for CAD might be lifestyle changes such as improved diet and exercise habits. If these changes alone aren't enough, your doctor may prescribe medications.

Drugs can play an important role in treating the complications of CAD. According to the Cleveland Clinic, medication may be the first line of treatment if artery blockage is less than 70 percent and doesn't severely limit blood flow. Read on to learn how drugs can help treat CAD and prevent related problems.

A common symptom of CAD is angina, or chest pain. If you have angina, your doctor may prescribe short- or long-acting drugs called nitrates to reduce this pain. Nitroglycerin, a type of nitrate, dilates blood vessels and allows the heart to pump blood with less effort. These actions help relieve chest pain. Beta-blockers are also often prescribed to treat angina. Beta-blockers can slow your heart rate and lower your blood pressure. These actions decrease the amount of oxygen your heart needs to work, which can help relieve angina.

Blood clots are formed by a buildup of platelets, also called thrombocytes, that circulate in the blood. These clotting cells bind together into a clot to help your body stop bleeding after an injury. Certain drugs suppress the activity of platelets, making it harder for blood clots to form within your arteries. This effect reduces your risk of heart attack. Examples of medications that help keep platelets from forming clots include:

High levels of cholesterol in your blood play a key role in causing atherosclerosis. If you have high cholesterol and can't lower it through a healthy diet and increased physical activity, your doctor may prescribe daily medications. Examples of drugs that can help reduce your cholesterol levels include:

Bile acid sequestrants
These drugs help the body get rid of low-density lipoprotein (LDL), or "bad" cholesterol. They're also known as bile acid-binding resins. Examples include:
- cholestyramine (Questran)
- colesevelam hydrochloride (Welchol)
- colestipol hydrochloride (Colestid)

Fibrates
Fibrates lower triglycerides and raise high-density lipoprotein (HDL), or "good" cholesterol. Examples include:

Statins
Statins work by decreasing overall cholesterol production. Examples include:
- atorvastatin (Lipitor)
- fluvastatin (Lescol)
- lovastatin (Mevacor)
- pravastatin (Pravachol)
- rosuvastatin (Crestor)
- simvastatin (Zocor)

Several types of drugs can help lower your blood pressure. These drugs can also help your heart function better in other ways. They include:

Beta-blockers
High blood pressure can contribute to CAD because it can damage your blood vessels. Beta-blockers help by slowing your heart rate and lowering your blood pressure. These actions also reduce your risk of heart attack, a complication of CAD. Examples of beta-blockers include:
- atenolol (Tenormin)
- carvedilol (Coreg)
- metoprolol (Toprol)
- nadolol (Corgard)
- propranolol (Inderide)
- timolol (Blocadren)

Calcium channel blockers
Calcium channel blockers help increase the amount of oxygen sent to the heart. They relax the vessels of the heart, allowing oxygen-rich blood to flow to it more easily. Calcium channel blockers also lower blood pressure and relax other blood vessels in the body. These effects can decrease the amount of oxygen the heart needs. Examples of calcium channel blockers include:
- amlodipine (Norvasc)
- diltiazem (Cardizem)
- felodipine (Plendil)
- isradipine (DynaCirc)
- nicardipine (Cardene)
- nifedipine (Adalat, Procardia)

ACE inhibitors and ARBs
Angiotensin II is a hormone in your body that tightens your blood vessels. Tightening the blood vessels raises your blood pressure and increases the amount of oxygen your heart needs. Angiotensin-converting enzyme (ACE) inhibitors and angiotensin II receptor blockers (ARBs) reduce the effects of angiotensin II. They work to prevent increases in blood pressure. These types of medications can lower your risk of stroke or heart attack. Examples of ACE inhibitors include:
- benazepril (Lotensin)
- captopril (Capoten)
- enalapril (Vasotec)
- lisinopril (Prinivil, Zestril)
- quinapril (Accupril)
- ramipril (Altace)
- trandolapril (Mavik)

Examples of ARBs include:

Medications used to treat CAD can:
- lower your cholesterol levels
- lower your blood pressure
- reduce your heart's workload
- prevent blood clots
- increase the amount of oxygen sent to your heart

All of these actions can help reduce your CAD symptoms and prevent serious complications, such as heart attack or stroke. Your doctor can tell you more about drugs that can help your CAD. Questions you might ask them include:
- What drugs are best suited for my symptoms and medical history?
- Am I taking any other medications that might interact with a CAD drug?
- Are there nondrug ways I can reduce my CAD symptoms?

What can I do to help treat my CAD besides taking drugs?

Lifestyle changes that can help prevent CAD can also help reduce its effects. Two changes that can really help are improving your diet and getting more exercise. For instance, eating fewer cholesterol-heavy foods such as fatty cuts of meat and whole milk can help reduce the amount of cholesterol in your blood. And exercise can help in many ways, including lowering your cholesterol levels and reducing your blood pressure. To find out more, read about CAD prevention.

Healthline Medical Team
Answers represent the opinions of our medical experts. All content is strictly informational and should not be considered medical advice.
The effect of the 1789 grain disputes in the colony of Saint-Domingue and its colonial Metropole serves as the primary focus of the first installment of our reader. This crucial, and yet little-known episode, in the history of not only Colonial Saint-Domingue, brings up issues of commerce during the Ancien Régime, and is one of the first major issues that was brought forth to the newly formed National Assembly in 1789. This supplement to the Réplique, or “counter argument”, made by the Deputies representing the French merchants and manufacturers, argues that the rhetoric of the Deputies of Saint-Domingue is not only petulant, but that it also serves as an attempt to deceive the Metropole. This response from the colonial deputies of Saint-Domingue is a reaction to a report published by French merchants that establishes the amount of grain, among other foodstuffs, necessary to nourish the entire French Empire. This response from the Deputies of Production and Commerce of France provides a comprehensive review of the legislation surrounding the grain dispute of 1789 in order for the Commercial Deputies to defend themselves from the Colonial Deputies’ accusations that they have perpetuated famines. This “précis,” or summary, sent to the Commissioners appointed by the National Assembly to examine the needs of the colony, outlines the efforts made by the deputies and the governor of Saint-Domingue to acquire much-needed provisions to sustain life on their plantations. In his official motion to the National Assembly, M. de Cocherel proclaims that he can no longer sit idly while the Assembly ignores the famine that has besieged the colony of Saint-Domingue. The time has come for the colonial deputies of Saint-Domingue, led by Cocherel, to act on their own behalf, disregard the chain of command, and make a direct appeal to the Assembly. This decision from the State Council of the King struck down le Marquis du Chilleau’s May 27th Ordinance allowing the importation of foreign grain and provisions to Saint-Domingue in exchange for colonial goods, although not sugar cane or coffee. While the Island of Saint-Domingue was long considered part of the French Empire, the ten Colonial Deputies of Saint-Domingue felt in 1789 that they had become separated from the colonial Metropole. On June 8th 1789, this request was presented in Paris before Louis XVI’s committee of the Estates General. Le Marquis Marie-Charles du Chilleau, Governor of Saint-Domingue, proposed this Ordinance to the French legislature one year after his appointment to allow foreign grain to be legally imported into Saint-Domingue. This is the second ordinance issued by the governor in response to the grain shortages in Saint-Domingue, which threatened the planters with famine and malnutrition. These are the first two pieces of a chain of correspondence between the governor of Saint-Domingue, M. le Marquis du Chilleau, and M. de Marbois, which were forwarded to M. le Comte de la Luzerne in support of the introduction of Foreign grain into Saint-Domingue. In a letter written to the French Naval Minister, César-Henri de la Luzerne, M. du Chilleau, Governor-General of St. Domingue, lays out his plans for an official ordinance that would allow Saint-Domingue to officially and legally trade with the United States. This speech, presented to the representatives of France, is a vehement rejection of the beliefs of the Société des Amis des Noirs [Society of the Friends of Blacks], which it ultimately seeks to reduce to the level of ideologues. 
This report nominates and appoints M. le Marquis du Chilleau as Governor of the island of Saint-Domingue. Much like the Arrêt du roi [Judgment from the State Council of the King ], this document presents one of the few moments where King Louis XVI directly intervenes in events surrounding the grain disputes of 1789.
The Federal Phoenix John Tenniel (1820-1914) 3 December 1864 Full page cartoon Punch, Vol. 47, facing p. 228 Abraham Lincoln had just been re-elected after the "fire" of the Civil War, which had involved the violation of so many rights (see the logs stoking the fiery nest). In this spare and powerful image he seems to have "risen from the ashes" just as the phoenix was said to have done in Greek and Egyptian mythology. Commentary continues below. Image capture and text by Jacqueline Banerjee. You may use this image without prior permission for any scholarly or educational purpose as long as you (1) credit the Internet Archive and the Kahle/Austin Foundation, and (2) link your document to this URL in a web document or cite the Victorian Web in a print one.
“The first week of August is the best week of the year in the garden.” — Len Cullen

The best time in the garden is here, as my dad always said. Many of the summer flowering perennials are at their floriferous peak, annual flowers are coming into their own (the summer heat having put some gas in their tank) and trees and shrubs have not been eaten or blighted with fungus. If ever there is a time when “all is right in the gardening world,” this is it.

Where bugs and diseases are concerned, I am a proponent of “live and let live” for the most part. The vast majority of pests are either beneficial or benign. Ladybugs eat aphids, birds munch on beetles and the larvae of moths and butterflies, and even the neighbourhood raccoon — nuisance that it may be — voraciously consumes large quantities of white and grey grubs, perhaps rolling back your lawn, roots and all, in the process. They provide this service for a price. Your job is to roll it back again and overseed with fresh grass seed (the good news is that this is the best time of year for seeding a new lawn; thickening an established one is now through September).

As optimistic as I am, I have to admit that controlling bugs and diseases has its place in the sustainable garden. The halfway point in the gardening calendar is the best time to give some of the most troublesome bugs and diseases a shot of control. Here is my short list of pests and diseases that could require attention this weekend:

Tomato Blight

Public enemy No. 1 where tomatoes are concerned. We have seen blight spread across the continent over the last 20 years or so. While it was always out there (this current blight is related to the one that affected potatoes, a related crop, in Ireland in the mid 1800s), it seems more prevalent now than ever. The spores of this disease move through the air on hot, damp days and there is nothing that you can do about that. The result is a gradual collapsing of the leaves from the ground up, first showing black or brown spots and then a complete “torching” appearance to the whole plant. The disease, if untreated, can move very quickly, killing an entire plant in seven to 10 days.

Prevent the effects of early (August) and late (September) blight on your tomatoes by applying Bordo mixture every 10 days to two weeks. Copper is the active ingredient and nothing seems to work quite like it. When you apply Green Earth Bordo you control the spores of blight, thereby preventing the effects of this persistent disease. Bordo mixture is a very handy product to have around during the summer as it also helps to prevent or minimize the effects of rust on hollyhocks, scab and black spot on roses, apples and pears, and downy mildew on all squash plants, summer and winter varieties alike.

Aphids

Under a handheld magnifying glass, an aphid looks like a lightbulb with the head at the screw end and the bulbous shape at the rear. They are about the size of the head of a pin. Aphids like to munch on the new, tender growth of fast-growing plants. My rudbeckia get a serious infestation every season about this time of year. Control of aphids can be as easy as blasting them with a sharp spray of water from the end of your hose using the pistol grip nozzle, the same one that you use for washing the car. They come back within a couple of days, so don’t get your hopes up that you have solved the problem by acting as a human water cannon. The novelty wears off after a few applications. Insecticidal soap, Trounce or End All can work; they are the most effective controls on the retail shelf.
Keep in mind that aphids gestate over a 10-day period, so no matter how effective your control today, they will be back in a week and a half. FYI: small birds, including hummingbirds, enjoy aphids for a meal and can provide effective control. Encourage them.

Potato Beetles, Earwigs, Ants, and Sowbugs
The crawling insects that make their way into our lives are often found in unwanted places. The ants on a march across the patio into the house, the earwigs in my dahlias, and the persistent potato bugs on my potatoes (even though there are no other potatoes within a mile) can all be controlled using a miraculous powder that is not a poison at all. Silicon dioxide or diatomaceous earth is amazing stuff. It is made of the finely ground, fossilized remains of diatoms, tiny aquatic organisms that lived a few million years ago. Looked at under a microscope the powder looks like fine shards of glass. As beetles, ants and the like move over this substance, the waxy protective coating on their underbelly is removed by the abrasive action of the powder. In a day or two they dehydrate and are “toes up” as a result. Sold under the trade names of Dio, Ant Killer and Crawling Insect Killer, the contents in the squeeze bottle are always the same. Be persistent with this stuff and reapply it every time it rains or you water the garden.

Powdery Mildew, Black Spot and Maple Blotch
While these problems are generally not lethal where your garden is concerned, there is an environmentally responsible control: garden sulphur. This powder is harmless to children, pets and the environment — as are all of the above mentioned products. Applied at 10-day to two-week intervals, it can help to minimize the effects of many common diseases on roses, apples, and Norway maples. I hasten to add that the maple blotch that is so common on big-leafed Norway maples is cosmetic and not harmful to the tree over the long haul. But it is ugly. Garden sulphur is a great product to keep in the garage for plants that love an acidic soil. Rhododendrons, azaleas and blue hydrangeas with yellowing leaves enjoy a shot of this stuff right on the soil at the root zone two or three times throughout the gardening season.

Asian Lily Beetles
How do I control the beetles on my lilies? The answer is surprisingly simple: apply leaf shine, which contains the active ingredient Neem oil, to the underside of the leaves of each plant for control. Neem is not registered for use as an insecticide with the department of agriculture, but anecdotal evidence has shown that this is the one and only control for this persistent bug. That and hand picking, of course. By the way, daylilies are not susceptible to the lily beetle as they are not members of the lily family. Common names for plants can cause confusion.

Don’t underestimate the value of spending some time in your garden picking off the big bugs that cause damage to your valuable plants. Tomato hornworms, potato beetles, slugs and even tent caterpillars can be controlled with the swoop of a hand. If you can’t stomach the idea of crushing a living organism in your bare hands, I recommend that you wear a quality pair of gloves. Works every time.

Question of the Week
Q: My friend won a pruning set in your contest last week. Can you tell me how to enter?
A: “Like” my Facebook page for a chance to win. I will choose a winner every Friday in August. Best of luck.

Mark Cullen is an expert gardener, author and broadcaster. You can sign up for his free monthly newsletter at markcullen.com, and watch him on CTV Canada AM every Wednesday at 8:45 a.m.
You can reach Mark through the “contact” button on his website and follow him on Twitter @MarkCullen4 and Facebook. Mark’s latest book, Canadian Lawn & Garden Secrets, is available at Home Hardware and all major bookstores.
I had a patient come to me a few months ago who was a young college student. After a quick exam of his mouth, it was clear that he wasn’t flossing his teeth often, if at all. I asked him about the situation, and he admitted that flossing was never his strong suit. My guess is that you or someone you know has a similar problem to this young patient, who has never had a history of major dental work or even that many cavities. Without lecturing my patient, I explained that over time he could develop some problems as a result of skipping out on flossing his teeth every day. Brushing your teeth only cleans about 60 percent of the tooth. About 40 percent of the tooth is located beneath the gums, an area that must be cleaned with floss.

In my last blog, I told you a story about a young man dying from a toothache. The toothache was an infection that actually spread to the brain. It’s an unfortunate true story that shows just the gamble you take when you do not receive the oral care you need. Today I want to tell you about the risk you take by not developing a strong daily oral hygiene plan. At Eugene Family Dental, we encourage patients to brush twice a day, floss daily, and also maintain their six-month dental checkups. Even though brushing and flossing is the best way to prevent dental problems, you still need to visit an oral health professional for checkups and exams. We treat dental issues with state-of-the-art restorative dentistry and use soft-tissue lasers to treat gum disease.

One thing I love to educate my patients about is floss. Flossing is a great way to remove plaque from between teeth and underneath the gums. Plaque develops from bacteria and can develop into tartar if left untreated. Plaque is actually a soft substance and can be removed by flossing and brushing. Tartar is simply hardened plaque that can only be removed by an oral health professional. We ask that patients brush twice a day because plaque can form inside the mouth in just eight hours. Patients who do not brush properly or floss put themselves at a higher risk of cavities and gum disease.

Gum disease is a serious problem in this country. It’s the leading cause of tooth loss and has been linked to serious problems like heart disease and diabetes. The scary thing about gum disease is that it can strike with little notice, and it’s estimated that 75 percent of Americans will have gum disease at some point in their lives. That’s why I was shocked to read a new study conducted by Delta Dental Plans Association that found that only 70 percent of Americans brush twice a day. That same study found that only 40 percent of Americans regularly floss their teeth every day, with 20 percent of that population never flossing their teeth. That means that more than half of our country is at a higher risk of gum disease.

By not flossing, you put yourself at a higher risk of cavities, gum disease, bad breath and problems like heart disease and diabetes. Studies have found a two-way street exists between heart disease and gum disease and between diabetes and gum disease, which means one can trigger the other. That’s because the same inflammation that creates gum disease also is present in heart disease and diabetes. Patients with gum disease often have symptoms like bleeding or swollen gums, but it’s possible that you can have gum disease and have no physical signs. My office can treat all aspects of gum disease with soft-tissue lasers. The lasers allow us to clean out the periodontal pocket without the use of sutures or scalpels.
The lasers shorten the recovery period and allow the gum and tooth to naturally re-attach after the treatment.

I have written a lot about the importance of caring for your teeth and regular dental care. If you have not been to the dentist in a while or you have not practiced the best oral hygiene, there is no need to be ashamed. My office works with patients of all dental needs, and we will never lecture you about the state of your teeth or make you feel guilty. We are here to help you keep your smile healthy and beautiful. If you’re ready to start your dental journey with our office, call us today or schedule an appointment using our online form.
Extinction on the local scene
Dave Foley (Originally published in Cadillac News)

Our long winter may now finally be ending. The woods will soon be tinged with green, and the outdoors will come alive again. Birds, animals, insects, and all manner of aquatic organisms, like so many old friends, will fill our neighborhood. But some never return. Crayfish, leeches, mayflies, bullfrogs, June bugs, fireflies - creatures that inhabited the wetlands, woods, and Lake Mitchell by our house a quarter century ago have disappeared completely or are rarely seen.

Reading Elizabeth Kolbert's book The Sixth Extinction helps put this in perspective. While cataclysmic events such as glaciation, volcanic eruptions, asteroid impacts and changes in ocean chemistry have caused mass extinctions in the past, Kolbert notes that, “Right now we are in the midst of the Sixth Extinction, this time caused solely by humanity's transformation of the ecological landscape.” She goes on to recount incidents around the world where man has inadvertently or deliberately caused the decimation of living species at an alarming rate. As I read, I began to think about instances of this phenomenon going on literally in my own backyard and the lake beyond our doorstep.

The annual hatch of mayflies used to be an event that was both loved and loathed. Loved because the flies were such a great food source for fish, and loathed because when swarms of flies rose off the lake surface around the first week of June, homeowners had to deal with thousands of flies stuck in window screens and lying dead on their decks. Walleye fishing tanked. Our hooked offerings of night crawlers, minnows, and lures went untouched as gamefish gorged on the hatching insects. My journal shows the last big hatch was in 1988. Four years later my notes say, “No mayflies anywhere.”

There was a time when I would see crayfish darting among the rocks along our lakeshore. On occasion I appropriated a few to use as enticers for smallmouth bass. They're gone now. I haven't seen a crustacean scooting through the shallows in years.

The leech population suffered as well. It used to be that whenever a group of kids were in the lake, invariably one would emerge, often screaming, with a black rubbery leech attached to a leg. We'd grab the box of Morton Salt that we kept on the deck, sprinkle the white grains on the tiny creature and watch as it curled up and let go. But then the leeches disappeared. I hadn't needed the salt box in decades until last summer. Hearing a squeal, I looked up to see a toddler in tears rushing to his mom, his shin adorned with a small writhing leech. I felt sorry to see the child in distress but also happy. Maybe this is the first of a next generation of leeches. I sure hope so.

What happened to the mayflies, crayfish, and leeches? I am afraid we can blame their disappearance on copper sulfate that was sprayed on the lake to treat swimmer's itch. Each year in June a crop duster would fly low over the water dropping blue crystals into the water. At the same time, a boat moved along the shoreline carrying an individual shooting a stream of inky blue liquid into the rocks that lined our shore. These chemical treatments were supposed to kill the snails that carried the parasite that caused swimmer's itch. The treatments were stopped in the late 1980s, but the copper that accumulated over the years on the lake bottom likely reached a level that proved toxic to mayflies, leeches, and crayfish.
While there seems to be an explanation for the eradication of organisms that once resided in the lake, I can only speculate about what happened to the June bugs, that used to circle our porch light and smack into our living room windows. You didn't have to see them to know they were around. When airborne, their wings created a hum. If we stepped outside we'd find bronze back beetles scattered about on our deck. Not any more. Nowadays sightings are rare in our yard. “Google” with keywords “June bugs and declining numbers” and you'll see mention of pesticides. A chemically treated lawn probably does in the white grubs that become beetles. Fireflies, that's another insect that is pretty much “MIA.” Seeing the flickering tiny dots of yellow light dancing across our yard and through the woods, to me is the essence of a summer evening. As a kid I remember collecting them in a jar. We called them lightning bugs. Then my brother and I would take them to our bedroom to watch the little light show as we fell asleep. Once we were asleep, Mom would take them out and release them in the yard. Years later our kids captured fireflies and once they were asleep, I would become the liberator of the lightning bugs. Last summer I saw only a few. I hope there will be some here when my grandchildren come to visit this year. In my research, studies I found pointed to the ever expanding number of yard lights as a possible cause of the decline of fireflies. Apparently these insects need darkness so that mates can find each other. I'm sure that the creatures I've described aren't the only ones to disappear from our neighborhood. Although I've paid close attention to birds in recent years, I'll bet some species that were here a couple decades ago no longer inhabit the area. The same could be said for insects, trees, and plants. Kolbert indicates that at the current rate of extinction another 15% of the world's species will be gone by 2050. It's a sobering thought to realize what is happening. I would hope we can become better stewards of our environment.
When you go for a check-up with your doctor or are admitted to hospital, the medical professionals treating you will do a range of tests and make observations on your general health in a variety of manners, including things like taking your pulse rate, monitoring your breathing rate, looking into your eyes, and examining the inside of your mouth. The same is true with dogs, and there are a variety of ways in which you can ascertain the basic general health and condition of your dog with a couple of observations, such as how fast your dog is breathing, how hot they are, and what the colour of their mucous membranes is.

The mucous membranes are found in areas such as the insides of the eyelids and the gums, and in good health, these should be a warm pink colour. Ill health or an underlying problem can be ascertained by a change of colour, such as a bright deep red, or paleness, in the same way as can be found on people when their skin takes on a different hue. Pale, white gums in dogs are one of the clearest and easiest to spot indications that something may be amiss with your dog, and white gums can be caused by a variety of different problems. Read on to find out about some of the most common causes of whiteness of the gums in dogs.

External parasites include pests such as ticks or a high flea count, which all lead to anaemia as they feed on your pet’s blood, essentially functioning as tiny vampires that drain your pet’s blood! Using an effective, veterinary recommended product to repel fleas and ticks, such as high quality spot-on anti-parasitics, can help to prevent this, as can checking your dog over for ticks when you return from your walks, particularly during the summer months.

White gums in the dog can also be caused by invisible internal parasites, which latch onto the intestinal walls of your dog and again, feed on the nutrients in your pet’s blood. This in turn can lead to anaemia, which occurs when the haemoglobin and red blood cells are compromised. Intestinal parasites are spread very easily between dogs, and by contact with affected ground, and parasites such as roundworm, tapeworms, hookworms and whipworms can soon reproduce prolifically and lead to a high worm count within the body. Internal parasite infestations to the point that they cause white gums usually come accompanied with other symptoms as well, such as increased appetite accompanied by weight loss, and diarrhoea. Ensuring that your dog is wormed regularly with a high quality wormer that has a broad enough spectrum to treat all varieties of internal parasites is essential, so if you haven’t kept up with your dog’s worming protocols recently, now is the time to sort things out.

Hypoglycaemia or low blood sugar is a condition that can be brought on in a variety of ways, and is most likely to occur in small breeds of dogs that do not have huge appetites, and so maintain a very delicate blood sugar balance. Simply being fed late, missing a meal or not being fed enough can cause a drop in blood sugar, as can things like being too cold, being stressed, or being fed an incorrect diet. As well as white gums, your dog may also become shaky and disoriented, and be slow to respond to you, or even appear aggressive. You can deal with an immediate blood sugar problem by feeding your dog, or warming them if they are cold, or calming them if they are stressed. If your dog regularly suffers from low blood sugar, you may wish to speak to your vet about longer term solutions to the issue.
There are various other conditions too that can lead to white gums, and white gums often accompany generalised malaise and illness in the same way that people who are ill are often pale in colour. Some more serious problems can lead to regular, recurrent or permanently white gums, such as haemolytic anaemia, which is a condition caused by the body attacking and destroying its own red blood cells. A condition called liver shunt, usually diagnosed in puppies, can cause permanently white gums too, as this leads to the circulatory system bypassing the liver, and so toxins that should be naturally eliminated from the blood remain within circulation in the body. Various types of autoimmune conditions can also lead to pale or white gums, and these are all worthy of investigation by your vet, particularly if you own a puppy that has white gums, as they may have a long term health problem that will require addressing. Other issues such as toxins and poisons can cause white gums among other symptoms, so take care to find out if your dog or puppy has eaten something that they shouldn’t, such as a toxic plant, household cleaner, or a poison.
Aerosol Transmission of Coronavirus: Classrooms Need More Ventilation

1,000 students wearing face masks in a densely packed lecture theater – unimaginable in times of the coronavirus pandemic. But the risk of aerosol transmission in large, modern lecture theaters is lower than in many classrooms, according to calculations by RWTH researchers. The data also allow conclusions to be drawn about family celebrations.

In a comparative analysis, researchers from RWTH and their partners have assessed the risk of transmission of coronavirus through aerosols in classrooms. They found that compared to other types of room, classrooms pose a higher risk if they are not mechanically ventilated and if there are no guidelines in place to ensure sufficient ventilation. This is due to the relatively high occupancy rate and the fact that teachers and pupils tend to spend long periods of time in classrooms, as Professor Dirk Müller from the Institute for Energy Efficient Buildings and Indoor Climate explains.

The study compared the risk of infection in various types of room. According to the study, classrooms and sports halls pose a greater risk than fully occupied lecture theaters with a capacity of 1,000, for example. Classrooms, lecture halls, open-plan offices, and sports halls were evaluated in comparison to a reference situation, in which 25 individuals spend one hour in an average-size, automatically ventilated classroom with an air exchange rate of 4.4 times per hour. In the researchers’ simulation model, this reference situation defines a relative risk of one. Measured against the reference, the researchers consider the risk of infection in lecture halls and open-plan offices to be relatively low. The only situation that is even more critical in terms of infection risk than the classroom situation is the use of sports halls – under physical strain, individuals emit larger amounts of small airborne particles.

Larger Gatherings at Home Pose a Risk

"Data has now confirmed that entertaining larger gatherings at home can be much riskier than events in a public setting. In the private sphere, with window ventilation alone, the air exchange rate is often so low that the transmission of the virus via aerosols works well," says Müller. The risk of infection would be much lower in many public buildings equipped with ventilation systems. Well-ventilated rooms such as modern lecture theaters would also be much less of a problem, even with high occupancy rates.

Classrooms without mechanical ventilation might pose a higher risk of infection, especially in winter, when the windows are not kept open long enough to secure sufficient ventilation: often it is too loud outside to keep the windows open, or school students are sitting in freezing classrooms. As a result, there is insufficient ventilation, despite the regulations in place. According to Müller, recent studies have shown that window ventilation in classrooms often results in insufficient air exchange rates, as indicated by high carbon dioxide concentrations. In a classroom holding 35 persons, the risk of infection may be almost twelve times higher than in the reference classroom. Even with a reduced occupancy of 18 persons, the air in the room would have to be exchanged 3.3 times per hour. This corresponds to an outdoor air volume flow of 660 cubic meters per hour.

Ventilation Technology Helps Protect Against Infection

How often do the windows have to be opened to achieve sufficient ventilation?
The Aachen researchers and their partners are involved in formulating ventilation recommendations on behalf of the German Environment Agency. These also include guidelines on how to find out whether ventilation is sufficient: according to Müller, the carbon dioxide concentration in the room can be determined with the help of a simple measuring device. An important factor is what the pupils are actually doing in a room: is it mainly the teacher who is talking, are the pupils engaged in group work activities, or are they engaged in physical activities? Sports activities indoors should only take place with a reduced number of participants; outdoor sports are to be preferred.

According to the calculations of the researchers, the risk of infection in large lecture halls is quite low; nevertheless, they recommend wearing face masks. Although the density of people in the lecture hall is comparable to that in a classroom, there is a much larger vertical column of air for each individual present as well as mechanical ventilation. An air exchange rate of two to three times per hour, which is typical for these buildings, is sufficient not to increase the relative risk of infection. The situation in open-plan offices equipped with ventilation systems is also considered unproblematic. In accordance with occupational health and safety regulations, the available office space and its movement areas are so generously dimensioned that even when fully occupied, the risk of infection through aerosols is relatively low.

“From my point of view, we should get a more rounded view of the situation before deciding on measures,” says Müller. This also applies to the obligation to wear face masks in some pedestrian areas: “From a ventilation technology perspective, it can be assumed that virus transmission is not possible as long as the physical distancing rules are observed there.”
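The relationship between air changes per hour and outdoor airflow quoted above (3.3 exchanges per hour corresponding to 660 cubic meters per hour) follows from a simple rule: required airflow equals room volume multiplied by the exchange rate. The short sketch below only illustrates that arithmetic; the 200 cubic meter room volume is an assumption inferred from the article's figures, not a value stated by the study.

```java
// Minimal sketch of the ventilation arithmetic discussed above (not code from the RWTH study).
// Required outdoor airflow (m^3/h) = room volume (m^3) * air changes per hour (ACH).
public class VentilationEstimate {

    static double requiredAirflow(double roomVolumeM3, double airChangesPerHour) {
        return roomVolumeM3 * airChangesPerHour;
    }

    public static void main(String[] args) {
        // Assumed classroom volume: 3.3 ACH * 200 m^3 reproduces the 660 m^3/h figure quoted above.
        double classroomVolume = 200.0; // m^3 (assumption, not stated in the article)
        double targetAch = 3.3;         // air changes per hour for the 18-person example

        System.out.printf("Required outdoor airflow: %.0f m^3/h%n",
                requiredAirflow(classroomVolume, targetAch));
    }
}
```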
Stop That (Seemingly) Senseless Behavior!

Once you have determined the purpose of a child’s or student’s seemingly senseless behaviour by doing a functional behaviour assessment (FBA), the next step is to work on changing or modifying the behaviour. This book follows up on Dr Glasberg’s previous book, “Functional Behavior Assessment for People with Autism”, with a guide to developing an effective behaviour intervention plan to stop undesirable behaviours such as hitting, screaming, or repetitive questioning. This book outlines an educational approach for parents, teachers, adult service providers, and aides that not only quickly reduces the problem behaviour but also teaches the individual with autism new skills to get his needs met. Full of case studies and ‘Keep it Simple’ tips, plus forms, figures, and graphs, this book offers families and professionals proven strategies to change a person’s challenging behaviour, helping him to have a more productive and inclusive future.
“The minute you say, ‘What do you think?’ they have something to say. They can contribute” (Boyd K. Packer, “Principles of Teaching and Learning,” Liahona, June 2007, 87). Good questions help sisters think deeply and participate in discussions. Elder Richard G. Scott on participation: - Participation allows learners to use their agency. That authorizes the Holy Ghost to instruct them. - Participation helps confirm truths in their souls. Three good questions that encourage learning: - What do you think about this principle? - What do you feel about this principle? - What experiences have you had with this principle? How to ask good questions: - Good questions take some thought. - It is often helpful to plan questions ahead of time and write them in your lesson outline. - The lesson manual contains good ideas for questions. - Invite sisters to ask questions. - Keep the discussion focused on the lesson’s objective. - Avoid questions that have obvious answers. - Avoid questions that can be answered simply “yes” or “no.” - Teaching, No Greater Call: “Teaching with Questions” - Page of questions to use. - "Questions, the Heart of Learning and Teaching," Ensign, Jan. 2008.
Are punishment and correction the same thing? If not, what’s the difference? Let’s start by defining the terms:

Punishment is the fitting retribution of an offense. In child training, it serves a moral purpose; it communicates to children a value of good and evil by the weight of punishment ascribed to each wrongful act. The administration of punishment is dependent upon and inseparably linked to the proper administration of authority. That means the right of punishment belongs only to those clothed with authority and who exercise such in submission to the wisdom of Scripture. Pain, loss, or restraint willfully inflicted on another person outside of the rightful administration of authority is aggression or revenge, but it is not punishment. Punishment is one element of correction, but not all correction is tied to punishment.

Correction is the act of bringing back from error or unacceptable deviation from the standard. The reason we correct our children is basic–it helps them learn. But in order to maximize the learning side of correction, we need to understand two governing principles.

The first one is this: The type of correction depends on the presence or absence of malicious intent. Parents should ask, “Was my child’s wrong action accidental or intentional? Did he know what he was doing was wrong?” The answers to those questions will help determine which type of correction will best serve the offense. This is the dividing line: separating the unintentional from the intentional.

The second rule of correction is this: The punishment/consequences must fit the crime. Punishment sets a value on behavior. That is why over-punishing or under-punishing is dangerous; both send the wrong message. When any society establishes a baseline for punishment, it is placing a value on the seriousness of a wrongful act. Punishment places a value on the action. In parenting, a child’s sense of justice is established through punishment, not rewards. For example, if a child hits and bruises his sister with a plastic bat, and then is punished by receiving five minutes in the timeout chair, the parent establishes in the mind of the child that hurting other people is not a serious infraction. Over-punishing goes to the other extreme. When a parent says, “You left your light on after leaving your room and for that, you can’t have any friends over for a month,” that can easily be considered over-punishing. This fosters exasperation and more conflict.

Stay mindful of these two principles. Before an offense can be dealt with most effectively, the parent needs to ask two questions: first, “Was what my child did the result of an accident or was it malicious? Was it childishness or foolishness?” Second, “What punishment would fit the wrong and convey the right value message?”

Excerpt taken from Growing Kids God’s Way by Gary and Anne Marie Ezzo.
Cryptocurrency and Blockchain: A Financial Revolution Cryptocurrency and blockchain technology have revolutionized traditional financial systems, reshaping our understanding and utilization of money. Bitcoin, Ethereum, and various other digital assets have emerged as key players, challenging established norms and sparking a financial revolution. This comprehensive guide delves into the realms of cryptocurrency and blockchain, elucidating their fundamental principles, potential, and the transformative impact on the financial landscape. The Genesis of Cryptocurrency The concept of cryptocurrency surfaced after the 2008 financial crisis when an entity known as Satoshi Nakamoto introduced Bitcoin. Published in October 2008, Bitcoin’s whitepaper envisioned a decentralized digital currency facilitating secure, trustless transactions without intermediary involvement. Cryptocurrency, a digital or virtual form of currency, utilizes cryptography for security. In contrast to traditional fiat currencies, cryptocurrencies operate on decentralized blockchain networks. Blockchain, a distributed ledger, ensures transaction transparency and security. Key Features of Cryptocurrencies: - Decentralization: Operates without central control, fostering resistance to censorship. - Security: Utilizes cryptography to secure transactions with private keys for user control. - Transparency: Records all transactions on a public ledger (blockchain), enhancing accountability. - Digital Ownership: Represents ownership through digital keys, facilitating borderless transactions. - Limited Supply: Many cryptocurrencies have a finite supply, potentially driving up their value. While numerous cryptocurrencies exist, a few have gained prominence: - Bitcoin (BTC): Recognized as the first and most valuable cryptocurrency, serving as a digital store of value and medium of exchange. - Ethereum (ETH): Known for smart contract capabilities, enabling decentralized application (DApp) development. - Binance Coin (BNB): Native to the Binance exchange, one of the world’s largest cryptocurrency exchanges. - Cardano (ADA): Focuses on scalability, sustainability, and interoperability. - Solana (SOL): High-performance blockchain with fast transaction speeds and low fees. - Ripple (XRP): Aiming to facilitate fast, low-cost cross-border payments for financial institutions. - Polkadot (DOT): Designed to connect different blockchains, allowing information and asset sharing. - Dogecoin (DOGE): Initially a joke, gained popularity as a “meme coin.” Cryptocurrency Use Cases Cryptocurrency finds applications across various sectors: - Digital Payments: Offers secure and efficient digital payment solutions. - Remittances: Reduces costs and time associated with cross-border money transfers. - Decentralized Finance (DeFi): Enables lending, borrowing, trading, and interest earning without traditional intermediaries. - NFTs (Non-Fungible Tokens): Represents ownership of digital assets like art, collectibles, and virtual real estate. - Smart Contracts: Automates processes, reducing the need for intermediaries in legal, financial, and other industries. - Identity Verification: Creates secure and verifiable digital identities, reducing identity theft and fraud. Impact of Cryptocurrency The rise of cryptocurrency has significantly impacted the financial landscape: - Financial Inclusion: Provides financial services to the unbanked and underbanked globally, enabling participation in the global economy.
- Reduced Transaction Costs: Cryptocurrency transactions often have lower fees, benefiting individuals and businesses, particularly in cross-border payments. - Transparency: Blockchain transparency enhances trust and accountability in financial transactions. - Decentralization: Appeals to those seeking financial autonomy by operating outside government control. - Emerging Investment Opportunities: Cryptocurrencies offer investment opportunities, diversifying portfolios for both retail and institutional investors. - Innovation in Finance: Blockchain and cryptocurrencies stimulate innovation in traditional finance, leading to DeFi, NFTs, and more. Risks and Challenges Despite its potential, cryptocurrency poses risks and challenges: - Price Volatility: Cryptocurrency prices are highly volatile, resulting in rapid gains or losses for investors. - Regulatory Uncertainty: Evolving regulations can impact the use and value of cryptocurrencies. - Security Concerns: Poor security practices can lead to theft or loss of assets, emphasizing the importance of protecting private keys. - Scams and Frauds: The cryptocurrency space has witnessed scams, Ponzi schemes, and fraudulent projects, necessitating caution. - Environmental Impact: Concerns arise due to the energy consumption of some cryptocurrency mining operations. - Lack of Consumer Protections: Unlike traditional systems, cryptocurrencies offer limited recourse in case of errors or disputes. Future Trends in Cryptocurrency The cryptocurrency space continues to evolve, with key trends and developments: 1.Central Bank Digital Currencies (CBDCs): - Exploration and Development: Central banks globally are actively exploring the concept of CBDCs, considering them as a potential digital counterpart to traditional fiat currencies. - Monetary Policy Control: CBDCs could provide governments with more direct control over monetary policy, allowing for greater flexibility in economic management. - Cross-Blockchain Communication: Projects like Polkadot and Cosmos are working on creating interoperable blockchains. This development aims to facilitate communication and data sharing between different blockchains, potentially reducing isolated ecosystems. - Enhanced Connectivity: Interoperability can lead to increased connectivity between diverse blockchain networks, fostering collaboration and the seamless transfer of assets and information. Decentralized Finance (DeFi) Evolution: - Maturation of DeFi Platforms: The decentralized finance sector is expected to mature, with more robust and user-friendly platforms emerging. This evolution may attract a broader user base. - Expansion of DeFi Services: DeFi platforms are likely to expand their offerings beyond lending and borrowing to include more sophisticated financial services, resembling traditional banking functions. Enhanced Privacy and Security Measures: - Privacy Coins: Growing emphasis on privacy may lead to the rise of privacy-focused cryptocurrencies, aiming to provide users with enhanced anonymity in their transactions. - Advanced Security Protocols: Continued advancements in security protocols to address concerns related to hacking, scams, and asset theft. The industry may witness the adoption of more secure technologies. Integration of Artificial Intelligence (AI): - AI in Trading and Analysis: Integration of artificial intelligence into cryptocurrency trading platforms for more sophisticated analysis and predictive modeling. 
- Smart Contract Automation: AI may play a role in automating and optimizing smart contract execution, enhancing efficiency and reducing the potential for errors. Sustainable and Green Initiatives: - Shift to Eco-Friendly Mining: Growing concerns about the environmental impact of cryptocurrency mining may drive the industry toward more sustainable and energy-efficient mining practices. - Carbon-Neutral Cryptocurrencies: The development and adoption of cryptocurrencies with carbon-neutral or environmentally friendly mining processes. Tokenization of Assets: - Real-world Asset Tokenization: More traditional assets, such as real estate and commodities, could be tokenized, allowing for fractional ownership and increased liquidity. - Expansion Beyond Financial Instruments: Tokenization may extend beyond financial instruments to include a wide range of real-world assets, creating new investment opportunities. - Global Regulatory Frameworks: Increased efforts by governments and regulatory bodies to establish clearer and more consistent regulatory frameworks for cryptocurrencies, providing a more stable environment for industry participants. - Compliance Standards: Development of industry-wide compliance standards and best practices to enhance transparency and legitimacy. Mass Adoption Initiatives: - User-Friendly Interfaces: Continued efforts to improve user interfaces and experiences, making cryptocurrency more accessible to individuals with limited technical expertise. - Integration into Mainstream Finance: Collaborations between cryptocurrency projects and traditional financial institutions to integrate digital assets into mainstream financial systems. Cross-Border Payments Revolution: - Blockchain for Cross-Border Transactions: Further exploration and adoption of blockchain technology for facilitating faster, more secure, and cost-effective cross-border payments. - Integration with Traditional Payment Systems: Collaboration between cryptocurrency platforms and traditional financial institutions to create seamless interoperability in cross-border transactions. Cryptocurrency and blockchain technology have ushered in a new era in finance, offering innovative solutions and challenging traditional norms. While the landscape presents numerous opportunities, it is essential to navigate the risks judiciously. As the industry evolves, the integration of digital currencies into mainstream financial systems and ongoing technological advancements will shape the future of finance. Embracing these changes with a balanced approach will be crucial for individuals, businesses, and governments alike.
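To make the chained-ledger idea described earlier more concrete, here is a minimal illustrative sketch. It is a teaching toy, not how Bitcoin, Ethereum, or any real network is implemented: each block simply stores the SHA-256 hash of the previous block, so altering an earlier record invalidates every block after it.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

// Toy illustration of a hash-linked ledger: each block records the previous block's hash,
// so tampering with any earlier entry breaks verification of every later block.
public class ToyBlockchain {

    record Block(int index, String data, String previousHash, String hash) {}

    static String sha256(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest(input.getBytes(StandardCharsets.UTF_8))) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (Exception e) {
            throw new IllegalStateException("SHA-256 not available", e);
        }
    }

    public static void main(String[] args) {
        List<Block> chain = new ArrayList<>();
        String previousHash = "0"; // placeholder for the genesis block
        String[] records = {"Alice pays Bob 1 coin", "Bob pays Carol 2 coins"};

        // Build the chain: every block's hash covers its data plus the previous hash.
        for (int i = 0; i < records.length; i++) {
            String hash = sha256(i + records[i] + previousHash);
            chain.add(new Block(i, records[i], previousHash, hash));
            previousHash = hash;
        }

        // Verify the chain by recomputing each hash.
        for (Block b : chain) {
            boolean valid = b.hash().equals(sha256(b.index() + b.data() + b.previousHash()));
            System.out.println("Block " + b.index() + " valid: " + valid);
        }
    }
}
```

Real networks layer consensus rules, digital signatures, and economic incentives on top of this basic structure; that is where the security, transparency, and decentralization discussed above come from.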
Welfare Reform in Cleveland Implementation, Effects, and Experiences of Poor Families and Neighborhoods The 1996 Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) ushered in profound changes in welfare policy, including a five-year time limit on federally funded cash assistance (known as Temporary Assistance for Needy Families, or TANF), stricter work requirements, and greater flexibility for states in designing and managing programs. The law's supporters hoped that it would spark innovation and reduce welfare use; critics feared that it would lead to cuts in benefits and widespread suffering. Whether PRWORA's reforms succeed or fail depends largely on what happens in big cities, where poverty and welfare receipt are most concentrated. This report - one of a series from MDRC's Project on Devolution and Urban Change - examines how welfare reform unfolded in Ohio's largest city and county: Cleveland, in Cuyahoga County. Ohio's TANF program features one of the country's shortest time limits (36 months) and has a strong emphasis on moving welfare recipients into employment. This study uses field research, surveys and interviews of current and former welfare recipients, state and county welfare and employment records, and indicators of social and economic trends to assess TANF's implementation and effects. Because of the strong economy and ample funding for services in the late 1990s, it captures welfare reform in the best of times, while also focusing on the poorest families and neighborhoods. - Cuyahoga County remade its welfare system in response to TANF. It shifted to a neighborhood-based delivery system and dramatically increased the percentage of recipients who participated in work activities. It also launched a major initiative to divert families from going on welfare. The county firmly enforced time limits starting in October 2000, but it ensured that families were aware of their cutoff date, and it offered short-term extensions and transitional jobs to recipients who had employment barriers or no other income. - Between 1992 and 2000, welfare receipt declined in the county, and employment among welfare recipients increased. The economy and other factors appear to have driven these trends, as they did not change substantially after the 1996 law went into effect. However, TANF seems to have encouraged long-term welfare recipients to leave the rolls faster and to have discouraged food stamp recipients from coming onto cash assistance. - A longitudinal survey of former and ongoing welfare mothers in Cleveland's poorest neighborhoods showed substantial increases in the percentage who were working and had “good” jobs between 1998 and 2001. These changes are not necessarily due to welfare reform; they may reflect the economy and the maturation of women and their children. Despite the improvements, half the women surveyed in 2001 had incomes below poverty level. Those who had exhausted 36 months of cash assistance or had less than one year of benefits remaining tended to face the most employment barriers and to have the worst jobs. Nevertheless, most who were cut off TANF because of time limits were working, and nearly all were receiving food stamps and Medicaid. - Between 1992 and 2000, the number of neighborhoods with high concentrations of welfare recipients (20 percent or more) fell sharply - a result of caseload decline. 
Though social conditions in these neighborhoods were much worse than in other parts of the county, they generally improved or remained stable over time. For instance, birth rates among teens and violent crime decreased, while prenatal care and median housing values increased. Unmarried births, property crimes, and child abuse and neglect did not change. The study's findings counter the notion that welfare reform would lead to service retrenchment and a worsening of conditions for families and neighborhoods. To the contrary, there were many improvements in Cleveland - though the favorable economy played a major role, and time limits had just been implemented when the study ended. Further study is needed to determine the long-term effects of time limits and how welfare reform will fare under less auspicious conditions.
Computer Animation Not Just for Video Games December 2, 2009 Project will use technology to recreate a five-thousand-year-old Chalcolithic roundhouse in Cyprus Isometric drawing of a Late Chalcolithic roundhouse The Kissonerga archaeological sites in Cyprus will benefit from a Site Preservation Grant from the Archaeological Institute of America. The grant will finance the use of 3-D and computer animation technologies to complement the reconstruction of a five-thousand-year-old Chalcolithic roundhouse. AIA funds will support the filming of the physical reconstruction of the building and the creation of color 3-D computer imagery of the artifacts found in the house to give visitors a vibrant sense of the use of the building. The reconstruction is part of a larger project led by Dr. Lindy Crewe of the University of Manchester to illuminate human experience in a single locality through time by investigating the nearly continuous occupation of Kissonerga Village from 10,000 years ago to the present. AIA President, Dr. C. Brian Rose, says, “This project is very exciting because it not only preserves the actual ruins, but allows visitors to immerse themselves more fully in prehistoric Cyprus using virtual reality so they can experience what life would have been like at that time.” Prehistoric archaeological sites in Cyprus are frequently left out of the island’s narrative, and their low visibility makes them difficult to preserve given the rapid development of the island as a tourist destination. By using the latest 3-D and animation technologies to create tangible educational tools at Kissonerga, the project hopes to instill a greater appreciation for all periods of Cyprus’s occupation in both the local population and visitors. Dr. Paul Rissman, AIA Site Preservation Committee Chair notes, “Technology is playing a much more important role—not only preserving our heritage—but in providing new ways for people to experience archaeological sites and learn about the past first-hand.” About AIA Site Preservation Program and Grants The AIA Site Preservation Program emphasizes education, outreach, and best practices in archaeology, and is currently supporting preservation projects in Assos, Turkey, and Easter Island. In addition to grants, the AIA Site Preservation Program includes advocacy to help stop the unnecessary destruction of archeological sites, U.S. Troop education, outreach activities for children, online resources for the public and professionals, workshops, and awards for best practices. All aspects of the program, including the awarding of grants, are made possible through donations to the AIA. To learn more, please visit www.archaeological.org/sitepreservation. About Archaeological Institute of America (AIA) Founded in 1879, The Archaeological Institute of America (AIA) is North America's oldest and largest archaeological organization. Today, the AIA has some 200,000 Members belonging to 107 Local Societies in the United States, Canada and overseas. The organization promotes public interest in the cultures and civilizations of the past, supports archaeological research, fosters the sound professional practice of archaeology, and advocates for the preservation of the world's archaeological heritage. The organization hosts archaeological fairs, lectures and other events throughout North America; publishes Archaeology magazine and the American Journal of Archaeology; awards fellowships and honors; and leads global archaeological travel excursions. For more information please contact:
Undoubtedly, clean air serves as a fundamental component of optimal health and overall wellness. The focus here is not simply on the absence of unpleasant scents, but rather the quality of the surrounding atmosphere. With the prevalence of dust, allergens, pollutants, and bacteria in metropolitan areas, it can be challenging to avoid these airborne agents. Installing an air purifier provides an effective solution to this predicament.

An air purifier is a device that, truth be told, would make a fine addition to any household (and ideally, one’s workplace as well). Even the most pristine and refined cities cannot boast of ideal ecology. Inevitably, the air is contaminated by noxious fumes, harmful substances, and ordinary dust. The counsel of physicians to air out one’s quarters multiple times per day crumbles under the weight of harsh reality: the air that we let in is anything but fresh or pure. Air purifiers represent a relatively straightforward and easy-to-maintain solution to this issue, capable of remedying the problem permanently. It’s worth noting that they are useful not solely in urban apartments; even if one resides in bucolic surroundings, they remain susceptible to various allergens and dust. These devices prove equally beneficial in sizable offices, where they can mitigate the risk of collective colds in the autumn and winter months and sustain a healthy microclimate. In sum, air purifiers possess the power to avert allergies, reduce the likelihood of colds, alleviate asthma, and ensure the perpetual high quality of indoor air.

Types of filters

The functionality of purifiers relies on no mysterious forces. Instead, they operate by way of filters featuring varying degrees of finesse. Distinct filters employ diverse technologies, each type geared towards fulfilling its own specific purpose. Certain models tackle solely dust, while others purify gases. A select few are even capable of rendering air surgically sterile.

All purifiers include a preliminary filter, serving as the initial, coarse dust filter. This component safeguards both the internal filters and walls of the device from contamination while preparing the air for subsequent cleansing.

Carbon filters fall into the category of refined filters that effectively eliminate gases and vapors from the air. They are not only used in air purifiers, but also in recirculation hoods in kitchens. However, some gases such as formaldehyde, nitrogen dioxide, and low molecular weight gases cannot be absorbed by charcoal. Simply put, charcoal filters are effective in protecting against harmful impurities in an urban environment, but they are not sufficient to completely purify the air. These filters need to be replaced periodically, on average, once every six months; otherwise, they themselves become a source of toxins.

On the other hand, electrostatic filters function based on the principle of an ionizer. They saturate the incoming air with positive ions, which attach to all solid particles and are attracted to negatively charged plates. This way, contaminants in the air are deposited on the filter plates. The larger the plates, the more effective the purification process. Periodic manual cleaning of electrostatic precipitators, by simply rinsing them with water, is recommended at least once a week. The ion filter effectively removes dust, soot, and allergens but is not effective against toxins and volatile substances.
HEPA filters, which may be familiar to you from vacuum cleaners and ventilation systems, owe their name to an English acronym that stands for High Efficiency Particulate Arrestance, or High Efficiency Particulate Retention. The corrugated fiber structure of HEPA filters is highly efficient at trapping dust, with the number of curves and folds determining the extent of air purification – up to 99% of particulate matter greater than 0.3 micron can be removed, including plant pollen, fungal spores, animal and human dander, and other allergens. HEPA filters are designed to be replaceable, as they tend to clog with dust and lose their shape over time, necessitating complete replacement. The recommended frequency of replacement is usually indicated on the cleaner model itself, and is crucial to ensure that the filter not only continues to effectively purify the air, but also allows air to pass through at all.

Photocatalytic filters represent the most advanced type of filter available today. These ingenious filters decompose toxic impurities through the influence of ultraviolet rays upon the surface of the photocatalyst. Their performance is outstanding, as they destroy toxins, viruses, bacteria, and any odors present. Domestic cleaners generally employ relatively weak photocatalytic filters, as otherwise, one could achieve not only cleanliness but sterility, akin to that found in an operating room. However, this is also a disadvantage, for the photocatalyst can destroy not only toxins but also all useful inclusions, rendering the surrounding air lifeless. For home use, photocatalytic filters constitute an excellent prophylactic measure against allergies and colds. The filter itself typically does not require replacement, but the ultraviolet lamp is susceptible to wear and tear.

It should be noted that the aforementioned filters are not always present in the same cleaner model. Depending on their number and combination, different degrees of air purification are achieved. Thus, one filter constitutes one-stage air purification, two filters constitute two-stage purification, and so forth. A HEPA filter, for example, can solely eliminate dust but cannot tackle odors or gases. For comprehensive air cleansing, one requires, at minimum, one fine filter and one coarse filter. Purifiers with a five-stage air purification system are deemed the most effective; although they are, of course, much more expensive than their counterparts, their performance is highly impressive.

Performance and capacity

To make a prudent choice when acquiring a purifier, it is of utmost importance to ensure it can handle the volume of air in your room. To achieve this, two closely related parameters must be taken into account: the area to be serviced and the airflow rate. The area to be serviced is the simplest criterion to consider when selecting a device. One only needs to know the approximate square footage of their rooms and choose a suitable purifier from the available options. It is advisable to opt for a device that is slightly larger than the measured area to ensure quicker and more efficient air cleaning. Purifiers with a capacity ranging from modest 2-8 square meters to large, productive units that can service up to 200-260 square meters are widely available on the market. Air exchange is a more precise parameter to consider. It denotes the volume of air that passes through the unit per hour.
To understand this, it is necessary to first calculate the volume of the air in the room by multiplying the room’s area with the height of its walls (e.g., S 40 m² by H 2.5 m = V 100 m³). The resulting value – again with a margin of plus – should be sought among the available purifiers. Choosing a more powerful purifier will ensure that the air is treated more quickly. Conversely, a purifier with a smaller air exchange rate may not be able to keep up with its task, particularly if ventilation is frequent. The electrical power consumption of the purifier is a secondary indicator, which is directly dependent on its performance. Large, efficient purifiers for 200 m² areas cannot be low-powered, while smaller room purifiers do not require high power. However, purifiers consume very little electricity in general, so even the most powerful unit usually consumes no more than 180 watts – just a little more than a standard light bulb. The parameter of noise level warrants your attention if you are considering a purifier with a fan. Otherwise, the device will operate almost silently. Typically, room air purifiers produce up to 50 dB of noise, comparable in level to daytime noise or a hushed conversation. Although sleeping with 40-50 dB of noise may be discomforting, during the day, a purifier is unlikely to cause disturbance. Models with noise levels exceeding 50 dB are usually high-performance, oversized units featuring robust fans designed for larger rooms. Select the type of control that appeals to your personal preferences. It may be mechanical, comprising buttons, relays, or switches, or electronic with a touch screen display. The timer function is useful for programming the device to operate during specific hours, such as when you are away from home. Additionally, the remote control adds convenience for maximum ease of use. Various indicators are crucial to consider. The filter clogging indicator will alert you to the status of your filters, prompting you to replace or wash them, depending on their type. The air condition indicator provides you with an opportunity to scan your surroundings, assess the cleaner’s efficiency, and adjust its performance accordingly. With this indicator, you will know when the purifier can rest or when it requires activation. The size of the cleaner depends on its performance. Compact, low-power air purifiers are typically tabletop, while stronger devices imply floor placement. Serious units that service areas up to 250 square meters with large filters require wall mounting because of their sizable dimensions. Are you or a member of your household experiencing respiratory issues due to excessive airborne dust? If so, an electrostatic cleaner is an essential device to alleviate the situation. Concerned about the harmful effects of polluted urban air? Consider incorporating a charcoal or photocatalytic filter to mitigate the issue. In addition, for added convenience, select a model equipped with air purity control to function automatically and keep you informed of air quality levels.
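The sizing rule described above (room volume equals floor area times ceiling height, and the purifier's hourly air exchange should comfortably exceed that volume) is easy to turn into a quick check. The sketch below is only an illustration: the purifier airflow figure and the target of two air changes per hour are assumed example values, not manufacturer specifications.

```java
// Sketch of the purifier sizing arithmetic described above.
// Room volume (m^3) = floor area (m^2) * ceiling height (m); the purifier's air exchange
// (m^3/h) should exceed that volume, ideally several times over.
public class PurifierSizing {

    static double roomVolume(double areaM2, double ceilingHeightM) {
        return areaM2 * ceilingHeightM;
    }

    static boolean purifierKeepsUp(double purifierFlowM3PerHour, double roomVolumeM3,
                                   double desiredChangesPerHour) {
        return purifierFlowM3PerHour >= roomVolumeM3 * desiredChangesPerHour;
    }

    public static void main(String[] args) {
        // The article's example: a 40 m^2 room with 2.5 m ceilings holds 100 m^3 of air.
        double volume = roomVolume(40.0, 2.5);

        double purifierFlow = 250.0;  // m^3/h, hypothetical value from a spec sheet (assumption)
        double desiredChanges = 2.0;  // assumed target: treat the room's air twice per hour

        System.out.printf("Room volume: %.0f m^3%n", volume);
        System.out.println("Purifier keeps up: "
                + purifierKeepsUp(purifierFlow, volume, desiredChanges));
    }
}
```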
In JSP, Java code can be written inside the JSP page using scripting elements. JSP scripting elements are written inside <% %> tags, and the code inside these tags is processed by the JSP engine during translation of the JSP page. JSP scripting elements are classified into two types:
- Language-Based Scripting Elements
- Advanced Scripting Elements (Expression Language)
Language-Based Scripting Elements
These are used to define script in a JSP page and are the traditional approach to scripting in JSP. Language-based scripting elements are classified into four types:
- Comment Tag
- Declaration Tag
- Expression Tag
- Scriptlet Tag
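As a rough illustration of how these four tags look together, here is a minimal sketch of a JSP page; the file name and the visit counter are made up for this example.

<%-- index.jsp : this is a Comment Tag, ignored by the JSP engine --%>
<%! int visitCount = 0; %>  <%-- Declaration Tag: declares a field of the generated servlet --%>
<html>
  <body>
    <% visitCount++; %>  <%-- Scriptlet Tag: ordinary Java statements run on each request --%>
    <p>You are visitor number <%= visitCount %>.</p>  <%-- Expression Tag: writes the value into the response --%>
  </body>
</html>

A Declaration Tag defines fields or methods on the servlet the page is translated into, a Scriptlet Tag holds statements that run on every request, and an Expression Tag prints the result of a single Java expression.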
Alexander Fleming: The Discovery of Penicillin

For the scientific journal I investigated Alexander Fleming. I explained the different problems, factors, meanings, and results involved in Alexander Fleming's discovery of penicillin. Alexander Fleming is best known for his discovery of penicillin in 1928. He called it a "dramatic discovery," meaning he did not intend to discover penicillin. Furthermore, this journal addresses how penicillin has affected death rates and even modern life. Alexander Fleming's "dramatic discovery" of penicillin helped millions of lives in the past, helps millions today, and will continue to do so in the future. (Britannica Online School Edition)

Before penicillin was discovered, death rates were very high. Many people died from bacterial growth that caused blood poisoning and other fatal diseases. Deep wounds became infected and could not be sanitized or cleaned, and even a simple everyday cut could lead to infection and death. Bacterial infections caused millions of deaths; penicillin was the first antibiotic, and it saved millions of people from dying. (Britannica Online School Edition)

On a September morning in 1928, Alexander Fleming sorted through a pile of petri dishes he had left on a bench before going on vacation with his family. He was checking which dishes he could keep, but many were contaminated, and he placed those in an ever-growing pile in a tray of Lysol. Fleming was chiefly searching for a "wonder drug" that would kill bacteria without harming the human body. While sorting the dishes he noticed something strange about one of them: a mold had grown on it while he was away, and the mold appeared to have killed the bacteria surrounding it. After observing the dish, Fleming realized the mold was capable of killing bacteria. He spent weeks growing more of the mold and working out which substance in it killed the bacteria. He also wanted to know where the mold had come from; it turned out to have drifted up from La Touche's room below Fleming's. Fleming continued to conduct numerous experiments to verify the effect of the mold and whether it would kill other harmful bacteria. Remarkably, the mold killed a large proportion of them. (About, Britannica Online School Edition)

While studying deadly bacteria in 1928, Fleming found this mold forming on one of his bacterial cultures and noticed that the bacteria surrounding it had disappeared. He kept a strain of the mold alive and began testing it on laboratory animals. In 1929 he published his first medical paper about a powerful microbe killer that did not injure human tissue. Later, in 1938, a team of Oxford University scientists, Howard Florey and Ernst B. Chain, remembered the research paper; both were credited with refining Fleming's discovery of penicillin. (Britannica Online School Edition, Zephyrus)

Fleming's discovery of antibiotics also has environmental factors. The environmental factor relates to the discovery of antibiotics because of their use on animals: since the beginning of antibiotics, animals have been affected.
Antibiotics would not be in use today if they had not been tested on laboratory animals such as guinea pigs and rats. When penicillin was first developed it was meant to be tested on humans, but Sir Howard Florey began using mice because he had only a small amount of penicillin left, which kept him from testing it on people. Florey kept testing penicillin on animals because the results in mice were so convincing. Antibiotics have not only affected animals in the past; they are still affecting them now. In places like the United States, livestock producers feed antibiotics to animals to make them grow fatter and faster. Many people complain that antibiotics should not be fed to animals to promote growth; they should be given to animals that actually need them. Antibiotics are drugs used to treat sick human beings and animals, not growth promoters. Feeding drugs to healthy animals does no good for the animals, for the environment, or for our future. (Chicago Tribune)

Works Cited

Eng, Monica. "FDA Says Stop Feeding Antibiotics to Healthy Animals for Growth Promotion." Chicago Tribune. Web. 4 Nov. 2012. <http://articles.chicagotribune.com/2012-04-11/features/chi-food-policy-fda-issue-new-guidelines-on-antibiotic-in-animals-20120411_1_growth-promotion-animal-health-institute-food-animal-production>.

"Fleming, Alexander." Britannica Online School Edition. Web. 1 Nov. 2012. <http://www.school.ebonline.com/comptons/article-9274340?query=alexander%20fleming&ct=null>.

Rosenberg, Jennifer. "Alexander Fleming Discovers Penicillin." About.com. Web. 4 Nov. 2012. <http://history1900s.about.com/od/medicaladvancesissues/a/penicillin_2.htm>.

Zephyrus. Web. 4 Nov. 2012. <http://www.zephyrus.co.uk/alexanderfleming.html>.
Saved by Language (2014) + Jews of the Spanish Homeland (1929)

Saved by Language recounts the personal story of Moris Albahari, a Sephardic Jew from Sarajevo who spoke Ladino (Judeo-Spanish), his native tongue, to survive the Holocaust. Moris used Ladino to communicate with an Italian colonel who helped him escape to a partisan refuge after he ran away from a train transporting Yugoslavian Jews to Nazi death camps. In 1944, he managed to communicate with a Spanish-speaking US pilot and led American and British soldiers to a safe partisan airport.

The 1929 documentary, Jews of the Spanish Homeland, contains close-ups of the leading Balkan Sephardi rabbis of the time and rare footage of Jewish schools, residential quarters, synagogues, and cemeteries, as well as a sampling of Sephardi religious customs. The film was discovered by Sharon Pucker Rivo, director of the National Center for Jewish Film, during a visit to Barcelona to commemorate the 500th anniversary of the 1492 expulsion of the Jews from Spain. It provides a rare glimpse of Sephardic communities in Salonika, Constantinople, Yugoslavia, and Romania, as well as former centers of Jewish life in Spain.

Presented by The Magnes Collection of Jewish Art and Life and the Townsend Center for the Humanities as part of the Depth of Field 2015-2016 Seminar Series: Sephardic Identities on Screen.
Although cats and humans have lived together for over 9,000 years, scientists have always wondered why cats would bother living with humans in the first place. After all, dogs have lived with people for over 30,000 years, and look how domesticated they are compared to cats.

"Humans most likely welcomed cats because they controlled rodents that consumed their grain harvests," says Wes Warren, associate professor of genetics at The Genome Institute at Washington University in St. Louis. "We hypothesized that humans would offer cats food as a reward to stick around."

In other words, cats need bribes to stick around. The moment those bribes disappear, you can expect your cat to disappear just as quickly, which goes to show that cats are really loyal to their food supply rather than to the person buying their food.
Cheshire County was one of only two counties in New Hampshire mapped in the 1880s, by C.H. Rockwood. The "Atlas of Cheshire County" is unusual in that it is not oriented to the north like the other maps; instead, the western boundary is at the top. It shows many new names and, for the first time, designates two of the villages as post office sites. The population of Chesterfield was 1,173 in 1880.
The concept of hearing comes from auditio, a Latin word. The notion refers to the act and the faculty of hearing: picking up sounds through the ear. For example: "Workers in the construction industry often suffer from hearing problems due to noise," "The girl managed to recover her hearing thanks to a complex surgical intervention," "Could you repeat what you said? I have hearing problems."

It can be said that hearing is both a physiological and a psychological process. Sound waves are variations in the pressure of the air; when they reach the ear they are converted into mechanical vibrations, which the brain ultimately perceives as sound. In the process of hearing, the waves strike the eardrum and make this membrane vibrate. The vibrations, in turn, set in motion the small bones called the hammer, the anvil, and the stirrup. When the vibrations reach the organ of Corti, it stimulates the auditory nerve, turning them into nerve impulses that travel to the brain, the organ in charge of interpreting sound.

Hearing problems have repercussions in everyday life that go far beyond the difficulty of perceiving sounds, whether in a case of hearing loss (a partial loss of hearing ability) or of total deafness (cofosis, or anacusis) in one or both ears. Since these disorders affect a minority of people, those around them often do not understand what they involve, and this makes interpersonal relationships difficult. As if that were not enough, most cities are not designed so that people with hearing problems can move around and reach everyday services comfortably and effectively, something that blind people and, in general, all individuals with some kind of physical or mental disability must also endure.

Those who enjoy both senses make contact with the outside world through a complex combination of the two, to the point that image and sound become almost inseparable. When we learn a word whose meaning refers to a material object, we link the graphic and the sound information, so that either path leads us to the same point (for example, when we see an umbrella we think of the term, just as hearing the word evokes the image, and something similar happens with writing and reading).

Various movements fight for the integration of all people into the different areas of society. However, we are still very far from a reality in which we accept one another without prejudice, with an open and compassionate attitude.

An audition, on the other hand, is the test an artist performs to demonstrate his or her talent. Also known as a casting call, the audition is a usual method in the television, film, and theater industries for selecting who will star in a work. At an audition to find the performers for a musical, those in charge may ask applicants to sing songs of different genres and to act out certain scenes. The artists who stand out the most at the audition are the ones who get hired.

The life of an artist is not usually easy, especially for those who must make their way without their family's help. Passing auditions becomes one of the main goals, since an audition can open the door to good job opportunities, although most of the time it does not. Beyond the technical training needed to perform well in front of the judging panel, it is necessary to learn to control stage fright so that all that effort is not wasted.
Come in to the Center for tortoise-related activities, presentations, crafts, and games. Kids can wear a “shell” to learn to move like a tortoise. Remember, tortoise shells tend to be taller and “domed,” while turtle shells tend to be flatter (more elongated). Turtles tend to have good feet for swimming, while tortoises tend to have good feet for digging! Learn the rules about the endangered/protected gopher tortoise: what should you do if you encounter a gopher tortoise in the wild? Why are they so endangered and what can you do to help them? If you find a gopher tortoise burrow or hole, what should you do/whom should you contact? Free with admission — see you at the Center!
June 4th, 2014 12:01 PM ET

"Eat breakfast!" nutrition experts have been telling us for decades. It revs your metabolism! It keeps you from overindulging at lunch! It helps you lose weight! But a new study suggests the "most important meal of the day" may not be so important - at least for adults trying to lose weight.

Published Wednesday in the American Journal of Clinical Nutrition, the study found dieters who skipped breakfast lost just as much weight as dieters who ate breakfast regularly. The researchers concluded that while breakfast may have several health benefits, weight loss isn't one of them.

So where did breakfast get its cred? So far, research has generally shown a link between skipping breakfast and the likelihood of being overweight, but it hasn't proven that skipping breakfast causes weight gain. "Previous studies have mostly demonstrated correlation, but not necessarily causation," lead study author Emily Dhurandhar said in a statement from the University of Alabama at Birmingham.

There is good observational evidence to support breakfast's place on the menu, says Michelle Cardel, a co-author of the study from the University of Colorado Denver. Nearly 80% of people on the National Weight Control Registry, a group of more than 4,000 people who have lost at least 30 pounds and kept it off, eat breakfast every day. Ninety percent of them eat breakfast at least 5 days a week.

For the new study, researchers split 309 adults who were interested in losing weight into three groups. One, the control group, received a USDA pamphlet titled "Let's Eat for the Health of It" that described good nutrition habits but did not mention breakfast. The second group received the same pamphlet and was instructed to eat breakfast before 10 a.m. every day. The third group received the pamphlet as well and was told to avoid consuming anything but water until 11 a.m. Researchers followed the groups for 16 weeks and recorded their weight to show changes over the study period.

Of the 309 participants, 283 completed the study. All three groups lost the same amount of weight on average, showing researchers that eating breakfast (or not) had no significant effect. "This should be a wake-up call for all of us to always ask for evidence about the recommendations we hear so widely offered," David Allison, director of the UAB Nutrition Obesity Research Center, said in a statement.

There were several limitations to the study that should be taken into account when viewing the results, Cardel says. "The participants were able to choose what they ate every day," she said. "So at this point we cannot conclude anything about how much food you should eat at breakfast or what kinds of food you should eat." The study authors did not measure participants' appetite, body fat or metabolism, which previous research has shown may be affected by breakfast eating. And the small study was only 16 weeks long, which may have been too short to see a significant effect.

Keith Kantor, a nutrition expert and author of "The Green Box League of Nutritious Justice," says eating breakfast is still a good idea. Doing so creates a routine, he says, and humans thrive on routine. "Skipping meals... and eating at random times throughout the day requires more of a thought process," he said. "This allows more room for negative behaviors like skipping exercise or grabbing fast food due to lack of planning." A healthy breakfast, Kantor says, consists of high-quality protein, heart-healthy fats and produce.
"More research needs to be conducted so that we can understand what kinds of foods should be eaten at breakfast... how quickly after waking should people eat breakfast, and how much should people be eating at breakfast," Cardel said. About this blog Get a behind-the-scenes look at the latest stories from CNN Chief Medical Correspondent, Dr. Sanjay Gupta, Senior Medical Correspondent Elizabeth Cohen and the CNN Medical Unit producers. They'll share news and views on health and medical trends - info that will help you take better care of yourself and the people you love.
From JAMA Internal Medicine, "Added Sugar Intake and Cardiovascular Diseases Mortality Among US Adults":

Importance: Epidemiologic studies have suggested that higher intake of added sugar is associated with cardiovascular disease (CVD) risk factors. Few prospective studies have examined the association of added sugar intake with CVD mortality.

Objective: To examine time trends of added sugar consumption as percentage of daily calories in the United States and investigate the association of this consumption with CVD mortality.

Design, Setting, and Participants: National Health and Nutrition Examination Survey (NHANES, 1988-1994 [III], 1999-2004, and 2005-2010 [n = 31 147]) for the time trend analysis and NHANES III Linked Mortality cohort (1988-2006 [n = 11 733]), a prospective cohort of a nationally representative sample of US adults, for the association study.

Main Outcomes and Measures: Cardiovascular disease mortality.

NHANES is an enormous longitudinal cohort. In other words, it allows us to follow a lot of people over time to see how risk factors affect their health. In this case, researchers wanted to see how sugar intake affected the chance of dying from cardiovascular causes. They followed people for a median of 14.6 years (163,039 person-years), and over that time 831 people died of CVD. After controlling for the usual factors, including socio-demographic ones, the hazard ratios for CVD mortality went from 1.00 (the bottom 20% of sugar eaters, the reference group) to 1.07 in the second quintile, 1.18 in the third, 1.38 in the fourth, and 2.03 in the highest quintile. An accompanying editorial made the point pretty well:

Yang et al inform this debate by showing that the risk of CVD mortality becomes elevated once added sugar intake surpasses 15% of daily calories—equivalent to drinking one 20-ounce Mountain Dew soda in a 2000-calorie daily diet. From there, the risk rises exponentially as a function of increased sugar intake, peaking with a 4-fold increased risk of CVD death for individuals who consume one-third or more of daily calories in added sugar. These findings provide physicians and consumers with actionable guidance. Until federal guidelines are forthcoming, physicians may want to caution patients that, to support cardiovascular health, it is safest to consume less than 15% of their daily calories as added sugar.