Characterize the following physical operation: A gardener who has valuable plants with long delicate stems protects them against the wind by staking them; that is, by plunging a stake into the ground near them and attaching the plants to the stake with string.

What would happen:
- If the stake is only placed upright on the ground, not stuck into the ground?
- If the string were attached only to the plant, not to the stake? To the stake, but not to the plant?
- If the plant is growing out of rock? Or in water?
- If, instead of string, you use a rubber band? Or a wire twist-tie? Or a light chain? Or a metal ring? Or a cobweb?
- If instead of tying the ends of the string, you twist them together? Or glue them? Or place them side by side?
- If you use a large rock rather than a stake?
- If the string is very much longer, or very much shorter, than the distance from the stake to the plant?
- If the distance from the stake to the plant is large as compared to the height of the plant?
- If the stake is also made out of string?
- Trees are sometimes blown over in heavy storms; can they be staked against this?

Contributed by Ernie Davis ([email protected]), New York University, U.S.A. (18th September 1997)
Preparations for root canal therapy. Dental instruments for local anesthesia

If your tooth is vital (alive), root canal therapy requires local anesthesia. Root canal therapy should not be a painful procedure, although many patients still fear it. If your dentist tells you that you need a root canal procedure, it means there is an infection; your dentist therefore decides how much anesthetic is needed to effectively numb the area.
- Topical anesthetic spray (benzocaine) is used for local anesthesia prior to the injection to prevent the discomfort associated with the injection.
- A needle and syringe are used to inject the anesthetic (lidocaine or similar) for the nerve block. The cartridge and needle are disposable. Anesthesia works faster on soft tissues (such as your lip); it takes longer to get the pulp chamber numb, which is why your dentist may still want to wait after your lip becomes numb.
- A dental cartridge syringe with aspiration is often used by your dentist during the injection to avoid a hematoma, in case the needle hits a larger blood vessel.

Dental instruments used in a pulpotomy procedure (removal of a portion of the dental pulp) or pulpectomy (complete removal of the dental pulp)
- A high-speed handpiece is used to produce an opening in the tooth.
- The endodontic explorer is a hand instrument that helps locate canal openings.
- A spoon excavator may be used to remove damaged tissue such as enamel residue and decayed dentine.
- Root canal files are flexible broaches used to remove the dental pulp from the root canals. These broaches vary in diameter. Your dentist usually starts with the smaller size and gradually increases the file diameter in order to keep the procedure as non-invasive as possible. If your dentist has a different technique, that is OK! There is more than one good solution to this problem. After removing the infection and getting rid of the bacteria, your dentist will use the files to clean and shape the root canal.
- Root canal reamers: see Root canal files.
- A syringe with a blunt needle will be used to flush the root canals with sodium hypochlorite as soon as your dentist has finished reaming.
- Paper points. Your dentist needs to remove the sodium hypochlorite and let your root canals dry. Paper points serve this purpose.

Dental instruments for dental sealing after cleaning the root canals

At the end of a standard root canal procedure, your dentist will seal the area. At this point, sealing refers to two distinct procedures: closing the canals and performing the dental filling (tooth restoration).
- Dental instruments to close the root canals
- Gutta-percha points are dental supplies used by your dentist to fill the dry root canals and replace the removed tissue. The cone base is colored (you may have noticed the red points).
- Endodontic spreaders are hand instruments with a sharp end used to push the gutta-percha points into the canals.
- Pluggers have a blunt tip, but basically do what endodontic spreaders do: compress and pack the substance (gutta-percha) into the root canal.
- Dental instruments for tooth restoration

The dental instruments used by your dentist to seal the cavity formed during the root canal procedure are similar to those used for regular dental fillings. For more information on the subject, see Dental instruments for dental fillings II. More often than not, a temporary filling is used in the first therapy session. During your next appointment, your dentist will replace this with a permanent filling. This is standard procedure.
By the time your temporary filling is replaced with a permanent filling, your tooth will no longer be vital (alive), so the procedure cannot be painful.
Medicaid, the health care insurance program that is jointly funded by federal and state governments, has been enlarged under the Affordable Care Act (ACA), also known as Obamacare, and now covers more low-income adults, including those in the prison population. A new nationwide survey of state prison administrators found that, with the expansion of Medicaid eligibility, prison systems have begun to support prisoners' enrollment in Medicaid as a way to help lower prison system costs while also improving prisoners' access to health care after release. "Enrollment improves access to basic health services, including substance use and mental health services, and can in turn benefit the health of the communities and families to which prisoners return," Dr. Josiah D. Rich, director of the Center for Prisoner Health and Human Rights at the Miriam Hospital, said in a press release. "There is a possibility that there will be decreased recidivism as people get treatment for their mental illness and addiction." The study appears in the American Journal of Public Health.

Medicaid Before and After the ACA

Prior to the Affordable Care Act, Medicaid routinely provided coverage for adults if they were disabled, 65 or older, or, in the case of non-elderly adults, if they were low-income parents, other caretaker relatives, or pregnant women. In about half of the states, some people who did not fit those requirements also received coverage, often in some but not all cases, through state-funded programs. Under the previous laws, then, a health care insurance coverage gap existed for low-income adults. To fill in gaps in coverage for the poorest Americans, provisions under the Affordable Care Act created a minimum Medicaid income eligibility level across the country. Beginning in 2014, individuals under age 65 with incomes below 133 percent of the federal poverty level (calculated as $11,490 for an individual in 2013) became eligible for Medicaid in every state, with eligibility extended to non-disabled adults under the age of 65 without dependent children. Those eligible for Medicaid will receive a benchmark benefit or benchmark-equivalent package that includes the minimum essential benefits provided by the insurance exchanges. Along with a benchmark plan, benefits also include prescription drugs, preventive and obesity-related services, tobacco cessation programs, and health homes for those enrollees suffering from chronic conditions. Medicaid under the ACA also promotes prevention, wellness, and public health and helps people receive long-term care services and support in their home or the community. Medicaid, then, has expanded not only eligibility but also services, and the impact on various populations remains to be seen; for this reason one particular team of researchers chose to explore potential effects and benefits for the prison population. Under the Constitution, prisoners have a right to adequate medical attention, which comprises a significant expense of prison financing; a Pew Charitable Trust survey based on Bureau of Justice statistics revealed that out of $36.8 billion in overall institutional correctional expenditures, nearly $6.5 billion went to prison health care in 2008. Although a small percentage of prisons provide some health care services for select prisoners under Medicaid, most do not, and in 2000 — prior to enactment of the ACA — nearly all states had policies terminating Medicaid enrollment upon incarceration.
To better understand policies and practices employed in state prison systems (SPS), Rich and his co-researchers surveyed prison administrators from December 2011 through August 2012. Survey questions covered Medicaid termination or suspension upon incarceration, assistance reenrolling in Medicaid, challenges reenrolling in Medicaid, and screening previously nonenrolled prisoners for potential Medicaid eligibility. Of the 42 state prison systems that responded to the survey, the policies of two-thirds dictated termination of Medicaid coverage, and 21 percent suspension of coverage, when a prisoner was first incarcerated. Of these systems requiring termination or suspension, more than two-thirds provided assistance to help prisoners reenroll in Medicaid once they were released. Generally, the researchers found, suspension promoted timely reactivation of Medicaid benefits upon release. "The difficult reality is that terminating Medicaid during incarceration, which is what is occurring in the majority of prison systems today, can be harmful to this population, as well as costly to the general public," Rich said. "Instead, we should be moving toward using this period of incarceration as an opportunity to reduce expensive post-incarceration emergency room and inpatient hospital care." The survey also showed that most state prison systems had policies in place that identified prisoners who were potentially eligible for Medicaid and provided assistance with the Medicaid applications. In 15 state prison systems, Medicaid applications were submitted so that benefits could be used during incarceration to pay for inpatient care received in the community. With several states planning to expand Medicaid eligibility in 2014, the number of released prisoners with access to routine care could increase dramatically, the researchers noted. They also suggested future investigation of successful prison systems and of the financial implications of enrollment for prisons and the Medicaid program. Certainly one inference drawn from this study would be that individual states might investigate where and when Medicaid and prison health care systems — both paid for by taxpayers — overlap. Might one or the other be scaled back in order to save state dollars?

Source: Rosen DL, Dumont DM, Cislo AM, Rich JD, Brockmann BW, Traver A. Medicaid Policies and Practices in US State Prison Systems. American Journal of Public Health. 2014.
While the citrus industry seeks better solutions to huanglongbing—also called citrus greening—growers can use nutritional programs to help keep greening-infected trees productive as long as possible. Tearing out infected trees as soon as the disease is detected isn't always the most practical solution, says Fritz Roka, associate professor of agricultural economics at the University of Florida's Southwest Florida Research and Education Center in Immokalee. "To make roguing work, you have to be very vigilant on scouting," he says. That means starting early and never letting up. Replanting after tearing out infected trees leaves growers waiting two to three years for young trees to come into production, Roka says. That was the calculation Maury Boyd, president of McKinnon Corp. in Oakland, made when he first realized his Orange Hammock grove was severely infected with greening. In 2006 the decline that he'd initially attributed to hurricane damage was diagnosed as HLB; 70 percent of the grove's young trees and 43 percent of his mature trees were infected. "I was told then the trees would probably be gone in two to three years," Boyd says. "I didn't buy into the prediction of utter doom." The tipping point had come and gone before he'd even been aware. "We couldn't remove the infected trees," he says. "That would've been half the trees in our groves." Nor would it be a solid long-term solution, with the rest of his trees probably infected but not showing symptoms. That, combined with infected groves on neighboring properties, meant any new trees he planted were certain to pick up the disease in a short time.

A 'Mediterranean diet' for trees

Growers in Western states, now on the alert after watching Florida deal with greening, may be able to avoid reaching that tipping point, Boyd says. He focused on tree nutrition, supplementing a soil fertility program with foliar applications of what's come to be known as the Boyd cocktail, full of minerals and other nutrients. Controlling Asian citrus psyllid, which vectors the disease, also remains essential. His trees now produce the same fruit yields as before infection, he says. "HLB made the trees look like they were starving to death," he says, likening his nutritional program to feeding his trees a Mediterranean diet rather than a McDonald's diet. "If a tree is satisfied with micronutrients, it doesn't express [greening] symptoms" or expresses them to a lesser degree, says Bob Rouse, citrus horticulturist also at the university's Immokalee center.

Keeping trees well fed

Fighting off HLB's effects on leaves and keeping them green helps the tree continue producing enough food to beat back other symptoms, Rouse says. One recent study suggests addressing phosphorus deficiency can help reduce greening symptoms. Over the years Boyd has tweaked and tinkered with the cocktail's formula, which includes phosphite, magnesium sulfate, manganese sulfate and zinc sulfate. His latest change adds trace amounts of nickel. Most growers have followed Boyd's lead and adopted some form of a nutritional program as a greening defense, Rouse says. Several companies offer their own premixed formulations. "Some growers are more successful than others" with the nutritional programs, he says. "The Boyd program is the hardest and most cumbersome for most people" because it requires gathering and mixing all the components, says Joe Davis Jr., president of Davis Citrus Management in Avon Park. "But he's made them work beautifully." Davis' company uses a Boyd variation in some of its groves and two commercial versions in others, depending on specific problems with the location and citrus variety. A formula that offers added help against canker and greasy spot, for example, is the obvious choice for a grove also battling those diseases.

Foliar micronutrients remain crucial

Rouse has been studying Boyd's original formula since 2008, testing which mix of components works best and which might be dropped with little or no impact. So far a mix that omits only the systemic acquired resistance, or SAR, products shows the best performance, Rouse says. The micronutrients in the foliar applications appear to be the crucial element, in combination with a liquid nitrogen-potassium-phosphorus fertilizer to cover all nutrient bases. Three to four foliar sprays over the growing season supplement a soil fertility program, he says. Greening inhibits nutrient uptake through the roots.

Refocusing on root health

While Davis plans to continue attention to foliar nutrition, this coming year will see greater efforts to improve root health. "Nutritional enhancement is good, but don't neglect root health and therapy," he says. That includes not only liquid fertilizer applications but also treatments to control beetles and phytophthora, as well as ensuring proper levels of irrigation. "Any stress that tree is under, greening makes the problem exponentially worse," he says. The advent of greening underscores a basic truth, Roka says. "Good horticultural practices will probably be the salvation of getting through this." Like any other business owners, growers must maintain their assets—and citrus trees are among a grove's most crucial assets, he says. "They don't respond well to jumping in and jumping out" of basic horticultural maintenance. But when prices drop, growers tend to reduce costs. Micronutrients whose correlation to tree health and yields isn't clear-cut may seem obvious candidates for cuts, Rouse says. Roka has calculated cost comparisons of six foliar nutrition programs, including Boyd's original formula. Total costs top out at $433 per acre for the Boyd cocktail and range downward to $190 per acre for a foliar nutrition program from Chemical Dynamics Inc. Rouse and Roka are trying to determine which parts of the nutritional programs are most effective and whether they return growers' investment. "It's hard to tease these apart," Roka says. Zinc by itself, for example, may do less than its contribution in concert with other micronutrients. Cumulative effects may be more important than a single year's applications. "We're still spending a lot more money on foliar programs than we were before greening," Davis says.
Appointments to UN Agencies - World Bank

The World Bank comprises a group of development institutions that provide loans and grants to developing countries with the stated goal of alleviating poverty by creating the conditions for sustained development. The Bank is currently the largest source of development finance in the world. The World Bank is led by a Board of Governors, made up of a representative for each of the 185 shareholder countries of the Bank. The Board of Governors serves as the policy-making body for the Bank. Because it only meets once a year, the Board of Governors elects a 24-person Board of Directors (also called Executive Directors) which meets bi-weekly. The President of the World Bank is responsible for the overall management of the Bank, and s/he serves as the Chair of the Bank's Board of Directors. In this capacity s/he also serves as President of the International Development Association (IDA) and Chairman of the Board of Directors of the International Finance Corporation (IFC), the Multilateral Investment Guarantee Agency (MIGA), and the Administrative Council of the International Centre for Settlement of Investment Disputes (ICSID), the institutions that make up the Bank. The president holds a five-year, renewable term.

Relationship between World Bank and UN

The World Bank has been in partnership with the United Nations since the founding of the two organizations in 1944 and 1945, respectively. The formal relationship between the two organizations was defined in a 1947 agreement that "recognizes the bank as an independent specialized agency of the UN as well as a member and observer in many UN bodies." The World Bank describes the relationship between the two organizations as "focusing on economic and social areas of mutual concern.... In addition to a shared agenda, the Bank and the UN has almost the same membership."

Selection Process of World Bank President

There is a long-standing, informal agreement dating from the establishment of the World Bank in 1944 that its president will be a United States national, while the managing director of the International Monetary Fund is a European national. Therefore, the past ten Bank presidents have been American. There has never been a female Bank president. For the selection of the President, the Board of Directors must approve the nominee by a supermajority of 85%. The voting power of each member is based on its share of contributions to the Bank, which is expressed as a percentage of the total of votes held by all of the shareholders. The US is the largest shareholder in the Bank, with 16.41% of the votes. This voting weight gives the US the ability to block a supermajority decision: the remaining shareholders together hold at most 83.59% of the votes, short of the 85% threshold. On May 17, 2007 the World Bank announced that then-President Paul Wolfowitz would resign, effective June 30. In the midst of high-profile charges that Wolfowitz abused the powers of his office and pressure for his resignation, broader doubts about the Bank were voiced widely - both about the overall effectiveness of the Bank and the standing tradition by which the U.S. president chooses the Bank's head. Several member governments (Australia, Brazil, Norway and South Africa) called instead for an open, merit-based, competitive process.
In the U.S., legislative leaders sent a letter to President Bush asking him to consider non-American candidates in order to send an "unambiguous signal of the commitment of the United States to the Bank's core anti-poverty mission" and to multilateral agencies. The Bank's Executive Directors - some of whom represent countries which oppose the current selection procedure - recently announced the Bank's written policy that any country can propose a candidate. The U.S. nominated Robert Zoellick, former US Trade Representative, on May 30. The Bank Board then stated that it would continue to accept nominations until June 15, 2007. The Executive Directors also issued a profile of fundamental qualities for nominees. On May 29, they said that essential qualifications for the next World Bank President would include:
- A proven track record of leadership;
- Experience managing large, international organizations, a familiarity with the public sector and a willingness to tackle governance reform;
- A firm commitment to development;
- A commitment to and appreciation for multilateral cooperation; and
- Political objectivity and independence.

The U.S. administration promised to consult more widely as it selected a presidential nominee and hinted that it might accept a non-American candidate. It is unclear to what extent any wider consultations took place, but historical practices of non-transparency (neither revealing criteria for the decision nor publicizing a shortlist with candidate qualifications) were upheld. However, most shareholder governments reportedly did not desire another confrontation with the U.S. administration so soon after the Wolfowitz resignation, and they acquiesced to tradition at least once more. Additionally, Zoellick is considered very well-qualified for the position, which lessened the urgency of reforming the U.S.-driven selection process and may have contributed to countries' decisions not to put forward other nominees. On June 15 Robert Zoellick was announced as the sole nominee for the Bank's Presidency. The 24-member Board of Directors, representing the shareholder governments of the Bank, then questioned him in a four-hour informal meeting. It was reported by media sources that the Board questioned him thoroughly as part of an effort by some member countries to set a precedent for a merit-based process, possibly laying groundwork for a future competitive selection - one in which the U.S. is not the only nominator. The Directors of the Bank unanimously approved Robert Zoellick as the next Bank President on 25 June in a meeting many regarded as a formality. He took office on July 1, 2007, allowing a one-week overlap with outgoing president Paul Wolfowitz. Zoellick's term is five years, which allows for reform of the selection process to be discussed during his term. Bank observers expect that the selection process for the next president will not be a U.S. prerogative. A Brookings Institution expert said, "Change is afoot and it is very clear that it will be hard to do this again when Zoellick's term is over." Norway said, "We will work with other governments to make such a procedure for future presidential appointments of the World Bank." European control of the IMF appointment should end at the same time, in Norway's view. In the view of a Bank Board official, "the challenge [for ending the US monopoly] will be to keep the issue in people's consciousness and approach some presentable open but well-governed process in good time before the end of this current presidency."
“Life is a hideous thing…”

Facts Concerning The Late Arthur Jermyn and His Family (1921) is not one of Lovecraft's better stories. Virtually all of the characters in the tale are dead before it even begins, so there is little need for dialogue or characterization. For the same reason there is little movement, conflict or suspense—the worst has already happened. And there is scant attention to setting, which is one of Lovecraft's strengths as a writer. In many of his stories—think of The Shadow Over Innsmouth, The Dunwich Horror or The Colour Out of Space—Lovecraft showed real talent in creating entire landscapes that are dark, ominous and filled with cosmic doom. But anything Lovecraft wrote can be interesting in the light of what is known about his family and the peculiar emotional and psychological turmoil he endured. His writing tends to be a fictional and symbolic representation of the psychic pain he experienced at various points in his life. This of course is common among many writers, but in Lovecraft, the themes in his fiction and poetry are barely transmuted from the source material—a reason he is an important, but not a great, writer. His voluminous correspondence, combined with his fiction and poetry, constitutes the case documentation of an individual who struggled greatly with misfortune, social isolation, and mental health problems. Some of the elements in Arthur Jermyn are better understood if one considers aspects of Lovecraft's own family history. Both of his parents succumbed to severe mental illness as he grew up—his father's symptoms were the result of the neurological degeneration of syphilis. Though not clearly established, some have suspected that his mother also contracted the disease. Both died in the same asylum. Lovecraft himself endured extreme depression and several nervous breakdowns, a few times considering suicide by drowning. S.T. Joshi and others have documented the unfortunate influence of his mother on Lovecraft's self-esteem and self-perception—she inculcated in him the sense that he was ugly and physically repellent. There is a strong element of this in Facts Concerning The Late Arthur Jermyn and His Family, where the physical strangeness and bestial details in his characters' appearance are emphasized. The domination and overprotectiveness of first his mother and later his maternal aunts (who interfered with his marriage to Sonia Greene) is also represented in the story: a forgotten race of white apes worships an all-powerful female deity. In Facts Concerning The Late Arthur Jermyn and His Family, the story, such as it is, is fairly straightforward. The author depicts the ancestry of one Arthur Jermyn through several generations of patriarchs. The Jermyn clan is an old, well established and well regarded family. It does well until Sir Wade Jermyn, Arthur's great-great-great-grandfather, begins his explorations of the Congo region of Africa. He is reputed to have discovered an ancient prehistoric civilization of white ape-like creatures. Worse, it seems that he may have mingled with at least one of the natives. There is reference in local legends to "a great white god who had come out of the west" and taken the ape-princess as his consort. Subsequent generations of Jermyns are afflicted with madness and physiological abnormalities—as well as a fascination with the family's historic past and connection to Africa. Arthur Jermyn is different from his predecessors.
He is the last of the line and described as a "poet and dreamer." The family's financial assets are only a shadow of their original grandeur. But Arthur Jermyn becomes smitten with an enthusiasm for family genealogy, which in a Lovecraft story is nearly always life-changing (The Shadow Over Innsmouth), if not fatal (The Case of Charles Dexter Ward). Think of all the Lovecraft characters that have shared this motivation and interest! Arthur Jermyn is ultimately too successful in his investigations. He obtains an ancient mummified relic of a stuffed goddess from the Congo, and discovers a confirmation of what the reader already knows about his troubled ancestry. He is unable to accept this truth, and so takes his life. It was the consequences of sexual activity outside the bounds of white Anglo-Saxon Protestant matrimony which culminated in Arthur Jermyn's horrible self-discovery. Surely this is an echo of Lovecraft's difficulties in understanding his own father's behavior and his terrible end. This is in the context of Lovecraft's Puritan upbringing, and his squeamishness about sex and relationships with women. It is striking that Lovecraft places the ancestor's sexual transgressions several generations in the past, instead of only one generation, which was the case in his family. This seems a kind of distancing from the pain of that revelation. The horror of miscegenation—marriage and procreation across racial and ethnic lines—is no longer such a horror in the 21st century, but it was certainly one at the beginning of the 20th. It is a theme in several Lovecraft stories, among them The Lurking Fear, Pickman's Model, and The Shadow Over Innsmouth. In the 1920s, American society was concerned with immigration, race relations, and sedition, among other anxieties, as urban areas became more cosmopolitan and diverse in composition. These are recurrent issues in America of course, but they were particularly intense in Lovecraft's time. (As an example, the Ku Klux Klan was revived in the early 1920s, when it introduced its notorious cross burnings. The movement often emphasized the superiority of Anglo-Saxon genetics as well as its presumed descent from 18th century British colonists—a familiar enthusiasm of Lovecraft's as well.) Facts Concerning The Late Arthur Jermyn and His Family (1921) is not one of Lovecraft's best, but it is an interesting snapshot of his psychological difficulties understanding and accepting the 'facts' of his own family.
Protozoa are single-celled organisms that feed by scavenging for particles and other microorganisms, such as bacteria, or by absorbing nutrients from their environment. Many types of protozoa live in moist places, such as soil, water, or sewage, and some of these may infect humans and other animals, causing disease. Other types of disease-causing protozoa depend on bloodsucking insects, such as mosquitoes, to spread them among human hosts. Most cases of protozoal infection occur in the tropical regions of the world.

From the 2010 revision of the Complete Home Medical Guide © Dorling Kindersley Limited.
Toolkit for Educators

Strengthening Families is an approach to working with families to prevent child abuse and neglect that builds upon family strengths, rather than focusing on deficits. It is not a curriculum or a program, but instead offers a framework of five research-based Protective Factors that give parents what they need to parent effectively, even under stress.

Family Friendly Organizations

Family friendly organizations promote active and sustained collaboration between parents and staff, and among family, school and community partners. How can you determine whether your school or organization is family friendly? Parents strongly influence their children's readiness for and success in school. When parents and the organizations that educate children work together, the results for children can be powerful. Family friendly schools and organizations affirm families' contributions to student success at home, in school and in the community and promote family engagement with their children's education. They build strong connections with families through welcoming environments, effective two-way communication, collaborating with families on decisions regarding children's education, and speaking up on behalf of every child.

Family Friendly Schools and Organizations:
- Welcome all family members
- Encourage effective two-way communication
- Support child development and student learning
- Promote speaking up for every child
- Share power
- Collaborate with constituents and other partners

PA PIRC Brochure

The PA PIRC brochure describes the services provided to parents, schools and communities. To order copies, please contact the PA PIRC office at (717) 763-1661 or [email protected].

The Elementary and Secondary Education Act (ESEA) and Parent Involvement

The evidence is beyond dispute. When parents are actively involved in the education of their children, children do better in school and student achievement increases. A January 2003 report from the National Center for Family and Community Connections with Schools at the Southwest Educational Development Laboratory, A New Wave of Evidence: The Impact of School, Family and Community Connections on Student Achievement, reveals that families make critical contributions to student achievement from pre-school through high school; when parents are involved at home, children do better and stay in school longer, and when a critical mass of parents is involved the whole school improves. For the first time in the history of federal education legislation, parent involvement is defined as the "participation of parents in regular, two-way, and meaningful communication involving student academic learning and other school activities, including ensuring that parents (Title IX General Provisions, Part A Sec 9101):
- Play an integral role in assisting their child's learning;
- Are encouraged to be actively involved in their child's education; and
- Are full partners in their child's education."

Reauthorization of the Elementary and Secondary Education Act (ESEA)

The Elementary and Secondary Education Act (formerly known as No Child Left Behind) is beginning the reauthorization process.

To let others know your school or organization is family friendly, order free Family Friendly posters at (717) 763-1661 or [email protected]. Sign up to receive timely health and safety tips by text message.
When you sign up, you can expect 3 free text messages per week throughout your pregnancy and until your baby is one year old. To learn more about how a baby's brain develops and what you can do to enrich a very young child's development, check out the Brain Map.
Many states are experiencing impacts from drought or dry weather, including impacts on agriculture, water and energy supplies, fires and other environmental conditions. While humans cannot prevent droughts from occurring, we can do our part to avoid intensifying their effects through our water usage.

Viewer Tip: Conserving water at home doesn't have to be a chore. Just a few simple changes to your daily routine can add up to big water savings.
- Save 5 gallons: Shorten your shower by just two minutes.
- Save 5 gallons: Turn water off between rinsing dishes, rather than running water continuously.
- Save at least 20 gallons: Water your lawn and garden in the early morning or evening hours, when the weather is cooler and water is less likely to evaporate.

These easy tips will save at least 30 gallons of water in one day. Want to save even more? Check out The 40 Gallon Challenge at www.40gallonchallenge.org for more simple ways to save water at home. Track your savings and see what others in your community are doing. Remember to visit Earth Gauge for more tips!
A well-known car manufacturer in Korea was testing an automatic car door window intended for one of its new car models. The window is driven up and down by an electric motor positioned in the middle of the car door. When the window was driven up, a short, high-pitched squeak noise could be heard, as seen in the level-versus-time plot below. The squeak noise was obviously connected to the window and the car door, but localising the source of the problem proved difficult.

The Nor848A-0.4 40 cm camera with 128 microphones was used for the recordings. The camera was positioned at a distance of 2.0 m from the car door, with the front end of the camera pointed straight at the door. The recording consisted of an event of the car window going up whilst being driven by the electric motor. In addition to the high-pitched squeak noise, sound from the electric motor could also be heard, at around 670 Hz, as seen below. Since this sound was present during the entire recording, it would be easy to misinterpret the results and pinpoint the source of the high-pitched noise to the wrong location without doing proper analysis. As seen in the image below, the location of the electric motor was in the middle of the door.
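The analysis step described here, separating the steady motor tone from the brief squeak, can be illustrated with a short script. This is a minimal sketch, not the Nor848A array-processing software: it assumes a mono WAV export of a single microphone channel, and the file name and band limits are invented for illustration.

```python
# Minimal sketch: a time-frequency view separates a steady motor tone
# from a short, high-pitched squeak. File name and bands are hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("door_window_up.wav")  # hypothetical export
samples = samples.astype(np.float64)

# Time-frequency decomposition (~23 ms windows at 44.1 kHz).
freqs, times, power = spectrogram(samples, fs=rate, nperseg=1024, noverlap=512)

# The motor tone sits near 670 Hz for the whole recording...
motor_band = (freqs > 600) & (freqs < 750)
# ...while the squeak is a brief burst of energy well above it.
squeak_band = freqs > 3000  # assumed threshold for "high pitch"

motor_level = power[motor_band].sum(axis=0)
squeak_level = power[squeak_band].sum(axis=0)

# The squeak shows up as a short-lived peak in the high band only.
t_squeak = times[np.argmax(squeak_level)]
print(f"Squeak burst centred near t = {t_squeak:.2f} s")
```

Inspecting the high band on its own is what prevents the ever-present 670 Hz motor tone from dominating the localisation result.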
A mixed reality system is a system in which augmented reality and virtual reality are combined to work together. According to B. J. Susanna Nilsson, there are three criteria to be satisfied to classify a system as a Mixed Reality system: there must be a combination of real and virtual elements, the elements must be interactive in real time, and they must be in three dimensions. The virtual continuum can be used to describe the relationship between augmented reality and virtual reality.

Figure: The Virtual Continuum

The field of Mixed Reality has existed for almost two decades, and a diversity of applications has been developed. Examples are in medicine, military applications, entertainment and infotainment, technical support and industry applications, distance operation and geographic applications.

Application of Mixed Reality

Research has been carried out in this new area of study in recent years, and several research articles have been published on the topic of Mixed Reality. One of the research articles published was MagicBook by Mark Billinghurst, Hirokazu Kato and Ivan Poupyrev. The aim of MagicBook is to enable users to interact with the book as a physical object, an augmented object or a virtual object. The MagicBook requires one or more handheld displays (HHDs) and the physical book to work. In the paper, a Sony Glasstron display, an InterSense InterTrax inertial tracker and a small camera coupled with a switch and pressure pad make up the HHD. The input from the camera is fed into the computer to calculate the position and orientation relative to the picture, and virtual images are generated to be displayed through the HHD.

Figure: The MagicBook Handheld Display

One of the features is that if a user is viewing the book in the immersive VR mode, another user in the AR mode can see a miniature avatar of the immersive virtual reality user in the scene. Also, the MagicBook is able to provide VR users with the experience of having multiple users simultaneously in the virtual environment.

Figure: Avatar in the augmented virtual world

Another research article, by Raphaël Grasset, Andreas Dünser and Mark Billinghurst from HIT Lab NZ, aims to bring Mixed Reality into education. This article focuses on bringing edutainment to children through an already published book, creating a new type of 'mixed reality book'. The authors aim to investigate how to design a better symbiosis between new technology and a traditional medium, and therefore a less disruptive reading experience. Compared to the MagicBook, the Mixed Reality Book has some features that are not found in the MagicBook, such as sound effects: background sounds, animation-associated sounds, or 3D sound. The Mixed Reality Book also has immersive effects that fill the surroundings of the pages with virtual scenery, and cinematic effects to guide the user's focus to specific parts of the pages.

Figure: Different types of tangible interfaces. From left, gaze interaction, finger interaction and tangible interaction

Three types of interaction are implemented in this project: gaze interaction, finger interaction and tangible interaction.
Gaze interaction is responsible for positioning an element by direct interaction; finger interaction is for pointing or moving virtual objects along a specific path; tangible interaction is responsible for control, where an element's placement or movement is used to control an interface value.

Beyond book applications of Mixed Reality, Mulloni, Seichter and Schmalstieg designed an indoor navigation system for mobile devices using Mixed Reality. Initially they started with AR World-in-Miniature (WIM) views at info points combined with turn-by-turn navigation. This kind of assistance was well appreciated by users, who felt that such support was missing elsewhere. After the first design, Mulloni and others improved it by providing a Virtual Reality view to show waypoints for the indoor navigation. This design was based on their previous findings, in which the role of the WIM was significant. The new design transitions between VR and AR depending on the situation. The project uses a low-key localization infrastructure, with only a few info points in the building. This allows users to easily reroute to the nearest info point when they deviate from the path. The MR view shows the instructions using Mixed Reality WIM views that consist of landmarks of the building and the info points. When info points are not available, the VR view shows the current instruction, highlighting the upcoming path to navigate. A short animation moves the user's avatar along the path. There is also text information to help users understand the instructions.

Figure: World-in-Miniature Virtual Reality view of the indoor navigation application

As users approach an info point, they can target it using the phone's camera; the application then changes to the AR view, which provides an overall view of the path, pinpointing the user's current location and showing both the next path to be taken and the path already taken. In the AR view, all the info points in the building are shown as well. The application was tested on an iPhone 4, combining gyroscope, magnetometer and accelerometer data. Kalman filters are used to estimate the orientation of the device, and GLSL ES shaders are used for rendering.

Figure: Augmented Reality of the indoor navigation application
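The paper names Kalman filtering for fusing the gyroscope, magnetometer and accelerometer readings. As a simpler stand-in that illustrates the same fusion idea, here is a minimal complementary filter for a single axis; the constants, function names and fake sensor feed are all assumptions for illustration, not taken from the paper.

```python
import math

ALPHA = 0.98  # trust placed in the integrated gyro signal (assumed value)
DT = 0.01     # assumed 100 Hz sample interval, in seconds

def fuse_pitch(pitch_prev, gyro_rate, accel_y, accel_z):
    """Blend a drifting gyro integral with a noisy but absolute accel angle."""
    # Gyroscope: smooth and responsive, but integration drifts over time.
    pitch_gyro = pitch_prev + gyro_rate * DT
    # Accelerometer: noisy, but anchored to gravity, so it does not drift.
    pitch_accel = math.atan2(accel_y, accel_z)
    # High-pass the gyro path, low-pass the accel path.
    return ALPHA * pitch_gyro + (1.0 - ALPHA) * pitch_accel

def sensor_stream():
    """Stand-in for a real IMU feed: a few fabricated samples."""
    yield from [(0.05, 0.00, 9.81), (0.04, 0.30, 9.80), (0.06, 0.50, 9.78)]

pitch = 0.0
for gyro_rate, ay, az in sensor_stream():
    pitch = fuse_pitch(pitch, gyro_rate, ay, az)
print(f"estimated pitch: {pitch:.3f} rad")
```

A full Kalman filter additionally tracks an uncertainty estimate and adapts the blending weight over time, but the structural idea of combining a fast, drifting source with a slow, absolute one is the same.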
The information device is a Dell Axim x51v running on Windows Mobile 2005. Bluetooth connection is used to allow communication between the AR system and information device. The AR system is either a Notebook with onboard graphics running on Windows XP Professional or an UMPC with the same configurations. Player will wear a monocular SVGA head-worn optical see-through display which is tracked by a GPS receiver for position tracking. Interaction can be done using either a standard Bluetooth mouse or a gyroscopic mouse. The TimeWarp AR application is built using AR/VR framework MORGON and uses AR/VR viewer Marvin for 3D visualization. Figure : Dell Axim x51v information device. Overall information page (left) and interaction map (right)
This list has been sent to me. I edited and added to it. I think it's a great list to introduce non-Muslims to Islam and as a reminder for Muslims.
- "Islam" literally means "peace through the submission to God".
- Islam's fundamental belief is that there is no god worthy of worship except Allah (the Arabic word for God) and that Mohammed (peace be upon him) is the messenger of Allah.
- Muslims believe that Mohammed (peace be upon him) is the last of thousands of prophets sent to mankind.
- Islam holds that Jesus and Moses (peace be upon them) are among the holiest prophets to walk the earth.
- The Quran (the holy book of Islam) was authored by God, revealed to Mohammed, and written into physical form by his companions. The original Arabic scriptures have never been changed or tampered with.
- Not all Muslims are Arab. Islam is a universal religion and way of life with followers from all races of people. There are Muslims in and from virtually every country in the world. Arabs constitute only about 20% of Muslims worldwide.
- One out of every four people in the world is a Muslim.
- Islam is the fastest-spreading religion on the face of the earth; it has increased by 235% in the last 50 years, at an annual rate of 6.4%.
Triage-Based Application of OFAR on the Number of Radiographs Ordered

Foot and ankle injuries account for nearly two million visits to Emergency Departments (EDs) in the United States and Canada each year. Of these injured patients, only 15% are diagnosed with actual fractures of the ankle. Because that percentage is so small, the "Ottawa Ankle and Foot Rules" (OFAR) were developed: a set of clinical decision-making guidelines that have been shown to be effective in diagnosing ankle and foot fractures. These rules are internationally accepted by the medical community, but are inconsistently applied. At Lehigh Valley Health Network (LVHN), the ED triage nurses are routinely trained in how to use the Ottawa Ankle and Foot Rules, but the rules are not always applied, which may result in unnecessary X-rays. These guidelines are the current network "standard of care" (usual, established care) that allow nurses to decide treatment for foot and ankle injury patients; in other words, whether or not to send these patients for an X-ray. The research staff is conducting this study in order to find out if using these nurse-directed guidelines--on a regular and consistent basis--can decrease the number of X-rays ordered, decrease patient waiting times/length of stay (LOS) and increase patient satisfaction with their care in the ED. The two main goals of this study are to find out if use of the Ottawa Ankle and Foot Rules by triage nurses can decrease the number of X-rays ordered in the ED, as well as LOS. Secondary study goals are to: 1) see how many X-rays are ordered by physicians and physicians' assistants after patients are evaluated by the Ottawa Ankle and Foot Rules as not having had a fracture; and 2) evaluate patient and provider satisfaction with the care provided both when the Ottawa Foot and Ankle Rules are used and when they are not.

Study Design: Observational Model: Cohort; Time Perspective: Prospective
Official Title: The Effect of Triage-Based Application of the Ottawa Ankle and Foot Rules (OAR/OFR) on the Number of Radiographs Ordered: A Pilot Study

- Amount of radiographs obtained in the ED [Time Frame: Participants will be followed for the duration of their ED stay, expected not to be greater than an average of 4 hours] [Designated as safety issue: Yes] This study aims to determine if triage application of the OAR and OFR can decrease the amount of radiographs obtained in the ED.
- Waiting times/length of stay (LOS) [Time Frame: Participants will be followed for the duration of their ED stay, expected not to be greater than an average of 4 hours] [Designated as safety issue: Yes] The implementation of the OAR and OFR at triage by the nursing staff is expected to decrease the amount of radiographs obtained in the ED and decrease LOS.
- Patient satisfaction with their care in the ED [Time Frame: Participants will be assessed at the end of their ED stay, expected not to be greater than an average of 4 hours] [Designated as safety issue: No] It is anticipated that both patient and provider satisfaction will increase as a result of OAR/OFR implementation.
Study Start Date: January 2013
Study Completion Date: June 2014
Primary Completion Date: September 2013 (Final data collection date for primary outcome measure)

Cohort: Patients with foot and ankle injury. After baseline data is obtained, a cohort of patients with acute foot and ankle injuries will have the OFAR applied.

This study's independent variable (or "intervention/predictor" variable) is the triage application of the OAR and OFR. Dependent variables (or "outcome" variables) include the number of X-rays ordered and LOS (co-primary outcomes), and measured patient and provider satisfaction (secondary outcomes). The Principal Investigator will create and validate his own study tool/survey to measure patient and provider satisfaction. Confounding variables include, but are not limited to: patient age; patient gender; census volume on the days the study is conducted; assistance from ED staff from either site; difficulty in achieving enrollment goals in a timely fashion; season of the year (i.e., skateboarding versus skiing injuries); patient expectations about their ED care; and/or patient insurance status. This study is designed to be a pilot, prospective, 2-stage study examining the application of the OAR/OFR at the 17th and Chew and Cedar Crest site EDs, and it aims to determine if triage nursing application of these clinical decision rules can decrease the amount of radiographs ordered, as well as decrease patient LOS.

Please refer to this study by its ClinicalTrials.gov identifier: NCT01779804

Location: Lehigh Valley Hospital and Health Network, Allentown, Pennsylvania, United States, 18103
Principal Investigator: Marna R Greenberg, DO, MPH (Lehigh Valley Hospital)
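Because the Ottawa rules described above are simple branching criteria, they are easy to express as code. The sketch below is illustrative only: the argument names are invented, it omits the published age and assessment caveats, and it is not a clinical tool.

```python
# Illustrative sketch of the Ottawa ankle/foot decision logic.
# Argument names are invented; this is not a clinical tool.

def ankle_xray_indicated(pain_in_malleolar_zone: bool,
                         tender_posterior_lateral_malleolus: bool,
                         tender_posterior_medial_malleolus: bool,
                         can_bear_weight_four_steps: bool) -> bool:
    """Ankle rule: image only if malleolar-zone pain is accompanied by
    bone tenderness or inability to bear weight."""
    if not pain_in_malleolar_zone:
        return False
    return (tender_posterior_lateral_malleolus
            or tender_posterior_medial_malleolus
            or not can_bear_weight_four_steps)

def foot_xray_indicated(pain_in_midfoot_zone: bool,
                        tender_base_fifth_metatarsal: bool,
                        tender_navicular: bool,
                        can_bear_weight_four_steps: bool) -> bool:
    """Foot rule: image only if midfoot-zone pain is accompanied by
    bone tenderness or inability to bear weight."""
    if not pain_in_midfoot_zone:
        return False
    return (tender_base_fifth_metatarsal
            or tender_navicular
            or not can_bear_weight_four_steps)

# Example: midfoot pain, no bony tenderness, able to walk -> no X-ray.
print(foot_xray_indicated(True, False, False, True))  # False
```

Encoding the rules this way also makes the study's premise concrete: a consistent triage pathway either sends the patient to radiology or it does not, with no discretionary middle ground.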
A Gardener's Guide to Frost
by Eliot Tozer

It's late fall. The sky is blue, and the sun is bright. Then your local weather forecaster ruins everything with these chilling words: "Possible frost tonight." Once the initial panic subsides, reason sets in. Frost is a local event, and it's possible to predict with considerable certainty whether it will hit the plants in your garden. So relax, walk outside, and pay attention to these six signs to predict the likelihood of frost. Then, if necessary, spring into action.

1. Look Skyward
Clear, calm skies and falling afternoon temperatures are usually the perfect conditions for frost. Frost (also called white or hoarfrost) occurs when air temperatures dip below 32°F and ice crystals form on the plant leaves, injuring and sometimes killing tender plants. However, if temperatures are falling fast under clear, windy skies -- especially when the wind is out of the northwest -- it may indicate the approach of a mass of polar air and a hard freeze. A hard, or killing, frost is based on movements of large air masses. The result is below-freezing temperatures that generally kill all but the most cold-tolerant plants. But if you see clouds in the sky -- especially if they are lowering and thickening -- you're in luck. Here's why. During the day, the sun's radiant heat warms the earth. After sunset, the heat radiates upward, lowering temperatures near the ground. However, if the night is overcast, the clouds act like a blanket, trapping heat and keeping air temperatures warm enough to prevent frost.

2. Feel the Breeze
Wind also influences the likelihood of frost. In the absence of wind, the coldest air settles to the ground. The temperature at plant level may be freezing, even though at eye level it is above freezing. A gentle breeze, however, will prevent this settling, keep temperatures higher, and save your plants. Of course, if the wind is below freezing, you'll probably have fried green tomatoes for tomorrow's supper.

3. Check the Moisture
Just as clouds and gentle winds are your friends, so are humidity and moisture. When moisture condenses out of humid air, it releases heat. Not much heat, true, but perhaps enough to save the cleomes. If the air is dry, though, the moisture in the soil will evaporate. Evaporation requires heat, so this process removes warmth that could save your peppers.

4. Check Your Garden's Location
This can have a tremendous influence on the likelihood that early frost could wipe out your garden while leaving your next-door neighbor's untouched. For example, as a general rule, temperature drops 3°F to 5°F with every 1,000-foot increase in altitude. The higher your garden, the colder the average air temperature and the more likely your plants will be hit by an early freeze. So gardening on a hilltop isn't a great idea, but neither is gardening at the lowest spot on your property. Since cold air is heavier than warm air, it tends to sink to the lowest area, causing frost damage. The best location for an annual garden is on a gentle south-facing slope that's well heated by late-afternoon sun but protected from blustery north winds. A garden surrounded by buildings or trees or one near a body of water is also less likely to be frosted.
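For readers who like to tinker, the checklist above can be condensed into a small script. This is a toy sketch only: the thresholds and weighting are assumptions loosely based on the article's rules of thumb, not on meteorology.

```python
# Toy frost-risk estimate encoding the article's rules of thumb.
# All thresholds are illustrative assumptions, not forecast science.

def frost_risk(forecast_low_f: float, overcast: bool, breezy: bool,
               humid: bool, elevation_gain_ft: float = 0.0) -> str:
    # Sign 4: roughly 3-5°F cooler per 1,000 ft of altitude gained.
    local_low = forecast_low_f - 4.0 * (elevation_gain_ft / 1000.0)
    if local_low > 36:
        return "low"
    # Signs 1-3: clouds, a gentle breeze and humid air all hold warmth
    # near the ground on a cold night.
    protective = sum([overcast, breezy, humid])
    if local_low <= 32 and protective == 0:
        return "high"
    return "moderate"

# Clear, calm, dry night forecast at 31°F: cover the tomatoes.
print(frost_risk(31, overcast=False, breezy=False, humid=False))  # "high"
```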
Despite a general trend towards more equality in society and the labour market, progress remains slow and significant gender gaps persist. Economic independence is a prerequisite for enabling both women and men to exercise control over their lives.

Gender mainstreaming can make a real difference in closing gender gaps by integrating the gender perspective into all policy areas and identifying, addressing and monitoring impacts on inequalities. The goal of gender mainstreaming is to achieve equality between women and men. It is a matter of social justice. But gender equality in all aspects of society:
- Is crucial for lasting growth and democracy;
- Is a key element in meeting the challenge of an ageing population and shrinking workforce;
- Contributes to the financial sustainability of social welfare systems;
- Symbolizes a society's level of political maturity and is key to future developments of societies.

In the EU, substantial progress has been made towards achieving this goal over the last 50 years. Eight percent more women are in the labour market today than there were in 1998. Young women aged 20 to 24 represent 59 percent of university graduates in the EU. But the goal of equality between women and men in their diversity is far from being achieved. In the economy, women are still not reaching decision-making positions. Female business owners make up only 33.2 percent of the self-employed, and women are still over-represented in lower-paid sectors across the EU. In addition, the pay gap between women and men shows no sign of closing. On average and across the whole economy, women in the EU earn 17.6 percent less per hour than men.

Both vertical and horizontal gender segregation are prominent features of the labour market all across Europe. Women's relation to the labour market remains largely mediated by men, whether as family members, employers or even suppliers of credit. The labour market still favours men over women and reflects and reinforces men's and women's perceived roles in the home, thereby polarizing existing divisions, despite clear evidence that the lifestyles of the majority of women, as well as many men, no longer fit into these tight compartments.

Parenthood affects women's employment chances more than men's. Women continue to work more unpaid hours than men in the home. Rigid gender roles continue to influence crucial personal decisions on such things as education, career paths, working arrangements, family and fertility. These decisions in turn have an impact on the economy and society. We set out here some of the main gender gaps in the labour market.

Gender differences in employment rates
Women's employment rates across the EU range from some 40% to 75%, but the EU average is 75.8% for men and 62.5% for women (2009). Employment rates of older women vary considerably between the Member States. In 2005, they were highest in Northern European countries, with more than 60%, and lowest in Southern European countries, all of which were below 35%.

Very young mothers with small children experience particular social discrimination as regards their entry into the labour market; their activity rates are much lower than those of mothers older than 25. They also belong to a group at particular risk of poverty. Specific measures are needed to assess the employment situation and working conditions of these very young mothers and to address some of their specific needs in Occupational Safety and Health (OSH) policy and prevention, and related policies.
Their vulnerability and the difficulty they face in accessing the labour market may make them more prone to accepting poor working conditions. To reach the Europe 2020 target of a 75% employment rate for both women and men, particular attention needs to be given to the labour market participation of older women, single parents, women with a disability, migrant women and women from ethnic minorities.

Gender differences in relation to part-time work
Women work part-time more than men (accounting for over 75% of part-time employees), in less valued jobs and sectors. It seems that female part-time workers invest their free time in unpaid domestic work. When taking into account the composite working hour indicators – i.e. the sum of the hours worked in the main job and in secondary jobs, plus the time spent on commuting and on household work – the research finds that women in employment systematically work longer hours than men. This clearly illustrates the "double role" increasingly played by women in the labour market and in the household. Interestingly, in terms of composite working hours, women in part-time jobs work more hours on average than men in full-time jobs. There is a need for greater recognition of how the links between women's paid and unpaid work, and their effect on women's health, including combined risk exposures and less freedom to dispose of their time, are influenced by gender stereotypes.

Part-time work may also conceal multiple employment. A 2005 study in France showed that over a million workers, almost 5% of the working population, were in multiple employment. For women, these jobs mostly involved child and elderly care and domestic work, where women's OSH is difficult to monitor and protection difficult to implement. A German study demonstrated that 640,000 fewer women worked full-time in 2009 than ten years previously, with those previous positions having been replaced by over a million temporary engagements and 900,000 mini-jobs. This was highlighted as an issue of concern by the OSH authorities (OSHA, 2010).

Gender differences in relation to education and vocational training
Nearly 60% of EU university graduates are women, but they account for less than 33% of scientists and engineers across Europe and represent nearly 80% of the total workforce in the health, education and welfare sectors. Gender gaps remain significant both in performance and in choice of subjects. For instance, girls outperform boys in reading, and boys account for most early school leavers. Men outnumber women among graduates in maths, science and technology subjects, as was found in a recent report by the European Commission.

Gender differences in relation to family responsibilities
The impact of parenthood on labour market participation is still very different for women and men, with only 65.6% of women with children under 12 in work, as opposed to 90.3% of men. This reflects the unequal sharing of family responsibilities, but also often signals a lack of childcare and work-life balance opportunities. Research has shown that women and men exhibit very different behaviours when they become parents, with men generally more able to choose their level of engagement than women. Family policies are often built on the conception of women as the main caregiver. This has led to the expression "women become parents and men fathers".
Gender differences in professions and positions
The gender-segregated labour market, the difficulty of balancing work and family life, and the undervaluation of female skills and work are some of the complex causes of the persistent gender pay gap. There are very distinct patterns of employment according to the different age groups; younger women work more in retail and HORECA (hotels, restaurants and catering), while older women work more in education and health care.

Gender differences in relation to working conditions
Women in the EU earn on average 17.1% less than men for each hour worked. The occupational safety and health (OSH) of women working in the European Union (EU) is central to an understanding of the working environment. Previous research has shown that women's OSH must be improved. Furthermore, "women's jobs", such as those in the health and social services, retail and hospitality sectors, show rising accident rates, including fatal accidents, and women are more likely to be bullied and harassed, including sexual harassment.

Gender differences in relation to possibilities for economic independence
In 2011, the EU launched the first European Semester and adopted its first Annual Growth Survey, anchored in the Europe 2020 Strategy. It highlighted the worryingly low labour market participation rate of women. Indeed, in many Member States, financial disincentives such as tax and benefit systems combined with excessive childcare costs make it more attractive for the spouses with relatively lower earnings (who tend in general to be women) to choose between either inactivity or limited activity. The labour supply of spouses is interconnected, and married women's decision to enter the labour market is often influenced by the total income of the household. As a result, women may enter or leave the workforce depending on family income needs. They are consequently more sensitive to policies affecting their participation in the labour market than to policies addressing hours of work.

When pension systems were initially developed, men spent a lifetime in the labour market and women mostly stayed at home. The resulting income inequality in pensions was addressed by allowing wives to draw on their husbands' contributions. In recent decades, women have entered the labour market in great numbers. However, inequalities remain, and these have an impact on the adequacy of their pensions. Women are more likely than men to be outside the labour market at any age, or to work part-time or under atypical contracts. Career breaks often lead to a reduction in lifetime earnings, and on average women earn less than men. For all these reasons, women pensioners typically have lower pension benefits than male pensioners.

Demographic changes in Europe, such as an ageing population and a shrinking working population, together with the financial and economic crises, have created a major challenge for the future of pension systems. An important trend in recent pension reforms in Member States is to try to improve the financial sustainability of pension systems by tightening the link between contributions and benefits in earnings-related pension schemes. This is done mainly by lengthening the contribution periods required to qualify for a full pension and by changing the reference for the calculation of benefits from "best years" to lifetime earnings. As a consequence, pension benefits will increasingly depend upon the worker's entire career.
In parallel, the gender pay gap leads to negative consequences for the reference salary generally used when the statutory pension is calculated.
[Image: The Ark Tablet in the hands of its decipherer, Dr. Irving Finkel (Benjamin McMahon)]

The story of a Flood predating that of the Bible has a much longer history, however, having first been brought to light in 1872 by George Smith, who worked in the British Museum studying fragments of cuneiform tablets from Nineveh in Mesopotamia. Among these fragments he found a story about the great deluge containing a Mesopotamian version of the Flood, with the dove, the ark and their equivalent of Noah. Funded by the Daily Telegraph, Smith went to Nineveh, where he found more tablets. The story, fleshed out from these Babylonian, Assyrian and Sumerian tablets, became known as the Epic of Gilgamesh. Since that time the story has come and gone, and the article below tells the story of another such discovery, which was reported in The Witness newspaper of 24th July 1914.

[Image: Flood Tablet - Penn Museum]

"At that time Ziugiddu was King, a pashish-priest of Enki; daily and constantly he was in the service of his god." In order to requite him for his piety, Enki informs him that, at the request of Enlil, it has been resolved "in the council of the gods, to destroy the seed of mankind," whereupon Ziugiddu -- this part of the story, however, is broken away -- builds a big boat and loads it with all kinds of animals. For seven days and seven nights a rain-storm rages through the land, and the flood of waters carries the boat away, but then the sun appears again, and when its light shines into the boat Ziugiddu sacrifices an ox and a sheep. Lastly we find Ziugiddu worshipping before Enlil, whose anger against men now has abated, for he says -- "Life like that of a god I gave to him," and "an eternal soul like that of a god I create for him," which means that Ziugiddu, the hero of the Deluge story, shall become a god.

A Babylonian story of the Deluge, continues Dr. Poebel, has been known for a long time from a poem that is imbedded in the famous Gilgamesh epic. There exist, also, several fragments of other versions of the story, and the museum possesses a small fragment of thirteen partially preserved lines, which was published by Professor Hilprecht some years ago. Our new text, however, is an entirely different account, as will be seen from the fact that the hero bears a name different from that found in the other Deluge stories.

[Image: A Flood tablet in the British Museum]

As will be seen from some of the quotations, the text is a kind of poetical composition, and as such was originally not intended to be merely a historical record, but served some practical, ritualistic, or other purpose. For various reasons, it seems to me that our tablet was written about the time of King Hammurabi (2117-2075), thus being the oldest Babylonian record we have at the present time of the Creation as well as the Deluge. The text, however, may go back to an even earlier time.

A LIST OF KINGS.

Judging by the colour of the clay, the shape of the tablet, and the script, our text belongs with another tablet that contains a list of Kings. It even seems to me that there were three tablets of about equal size, measuring about 54 by 7 inches, on which a historically interested scribe wrote the world's history, or at least its outlines. The first of these tablets, I believe, contained the Babylonian theogony, and then related the famous fight between the younger generation of the gods and the deity of the primeval chaos, which ultimately resulted in the creation of heaven and earth out of the two parts of chaos.
Here the tablet I have just described comes in and gives the history of the world as far as the Deluge. Then a third tablet gave a complete list of the Kings of Babylonia from the time of the Deluge to the King under whom the tablets were written. A portion of this third tablet, or to be more accurate, the reverse of this portion, which contains about an eighth of the whole text, was published six years ago by Professor Hilprecht. It contained two of the last dynasties of this list of Kings. I succeeded in copying also the much effaced obverse, which contains the names of Kings of the period immediately after the Deluge, and in addition to this I also found larger and smaller fragments of three other and older lists of Kings. I need hardly emphasise the great historical and chronological value of these new lists, since they gave us not only the names of the Kings, but the length of their respective reigns; and in some few instances even add some short historical references relating to these Kings.
This example shows how to create a chart with y-axes on the left and right sides using the yyaxis function. It also shows how to label each axis, combine multiple plots, and clear the plots associated with one or both of the sides.

Create axes with a y-axis on the left and right sides. The yyaxis left command creates the axes and activates the left side. Subsequent graphics functions, such as plot, target the active side. Plot data against the left y-axis.

x = linspace(0,25);
y = sin(x/2);
yyaxis left          % activate (and create) the left y-axis
plot(x,y);

Activate the right side using yyaxis right. Then plot a set of data against the right y-axis.

r = x.^2/2;
yyaxis right         % activate the right y-axis
plot(x,r);

Control which side of the axes is active using the yyaxis left and yyaxis right commands. Then, add a title and axis labels.

yyaxis left
title('Plots with Different y-Scales')
xlabel('Values from 0 to 25')
ylabel('Left Side')

yyaxis right
ylabel('Right Side')

Add two more lines to the left side using the hold on command. Add an errorbar to the right side. The new plots use the same color as the corresponding y-axis and cycle through the line style order. The hold on command affects both the left and right sides.

hold on
yyaxis left
y2 = sin(x/3);
plot(x,y2);
y3 = sin(x/4);
plot(x,y3);

yyaxis right
load count.dat;        % sample data set shipped with MATLAB
m = mean(count,2);
e = std(count,1,2);
errorbar(m,e)
hold off

Clear the data from the right side of the axes by first making it active, and then using the cla command.

yyaxis right
cla

Clear the entire axes and remove the right y-axis using cla reset.

cla reset

Now when you create a plot, it only has one y-axis. For example, plot three lines against the single y-axis.

xx = linspace(0,25);
yy1 = sin(xx/4);
yy2 = sin(xx/5);
yy3 = sin(xx/6);
plot(xx,yy1,xx,yy2,xx,yy3)

Add a second y-axis to an existing chart using yyaxis. The existing plots and the left y-axis do not change colors. The right y-axis uses the next color in the axes color order. New plots added to the axes use the same color as the corresponding y-axis.

yyaxis right
rr1 = exp(xx/6);
rr2 = exp(xx/8);
plot(xx,rr1,xx,rr2)
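The active-side state set by yyaxis applies to most axes commands, not just plot, so each side can be tuned independently. A minimal sketch of per-side limits and labels — assuming MATLAB R2016a or later (when yyaxis was introduced) and hypothetical data:

x = linspace(0,25);
yyaxis left
plot(x,sin(x/2))
ylim([-1.5 1.5])     % limits set here apply only to the left ruler
ylabel('sin(x/2)')
yyaxis right
plot(x,exp(x/8))
ylim([0 30])         % the right ruler keeps its own, independent limits
ylabel('exp(x/8)')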
The Bipolar Disorder Classification as Defined in the Diagnostic and Statistical Manual of Mental Disorders (DSM)
- Mania is the cardinal symptom of bipolar disorder. Without the mania, it would be considered Depressive Disorder.
- There are several types of Bipolar Disorder based upon the specific duration and pattern of manic and depressive episodes.
- People who experience clinically significant episodes of mania and depression but who do not meet criteria for Bipolar Disorder are diagnosed as Bipolar Disorder: Not Otherwise Specified (BP-NOS).

Meeting criteria for any diagnosis in the DSM is based upon the presence of certain symptoms over a specified period of time. In its description of bipolar disorder, the DSM first explains what is required for the different behavioral mood episodes: Major Depressive Episode, Manic Episode, Mixed Episode and Hypomanic Episode. It then differentiates the diagnosis according to the presence, sequence and history of those episodes.

Of note: Since 2001, there has been much discussion as to whether a sub-threshold presentation of bipolar disorder may be a developmental version of the condition or a different disorder entirely. The DSM5 committees responsible for this update have concluded that a large percentage of the sub-threshold cases are a different disorder. The new classification is called Disruptive Mood Dysregulation Disorder (DMDD). The research which defined DMDD purports that those children whose manic symptoms appear only as severe and chronic (rather than episodic) irritability are not bipolar. The inclusion of DMDD in DSM5 is seen as a corrective classification for children who might otherwise have received a BP-NOS diagnosis. Other children, whose manic behavior is not uniquely irritable (it would include grandiose/elated behavior) but nevertheless does not meet the criteria for full episodes, or those who are uniquely irritable but on an episodic basis, would continue to receive a diagnosis of BP-NOS. No further specificity of the BP-NOS classification has been proposed for DSM 5.

What follows below is from the DSM, 4th Edition, Text-Revision. It is the current criteria for Bipolar Disorder I, II and NOS. The 5th edition has just been released. This page will be updated appropriately as soon as possible.

Bipolar I and II; The Behavioral Episodes

A Major Depressive Episode includes at least 5 of the following symptoms occurring over the same 2-week period and must include either #1 or #2:
1. Depressed mood most of the day, nearly every day, as reported by self (i.e. I feel sad or empty) or others (i.e. he appears tearful). Note: in children and adolescents, can be irritable mood.
2. Markedly diminished interest or pleasure in all, or almost all, activities most of the day, nearly every day.
3. Significant weight loss or gain, or decrease or increase in appetite nearly every day. Note: in children, consider failure to make expected weight gains.
4. Insomnia or hypersomnia nearly every day (difficulty or delay in falling asleep or excessive sleep).
5. Psychomotor agitation (such as pacing, inability to sit still, pulling on skin or clothing) or retardation (such as slowed thinking, speech or body movement) nearly every day that can be observed by others.
6. Fatigue or loss of energy nearly every day.
7. Feelings of worthlessness or excessive, inappropriate, or delusional guilt nearly every day.
8. Diminished ability to think or concentrate, or indecisiveness, nearly every day.
9. Recurrent thoughts of death (not just fear of dying), recurrent suicidal ideation without a specific plan, or a suicide attempt or a specific plan for committing suicide.

A Manic Episode includes a period of at least one week during which the person is in an abnormally and persistently elevated or irritable mood. While an indiscriminately euphoric mood is the classical expectation, the person may instead be predominately irritable. He or she may also alternate back and forth between the two. This period of mania must be marked by three of the following symptoms to a significant degree. If the person is only irritable, they must experience four of the following symptoms.
1. Inflated self-esteem or grandiosity (ranges from uncritical self-confidence to a delusional sense of expertise).
2. Decreased need for sleep.
3. Intensified speech (possible characteristics: loud, rapid and difficult to interrupt, a focus on sounds, theatrics and self-amusement, non-stop talking regardless of the other person's participation/interest, angry tirades).
4. Rapid jumping around of ideas or the feeling that one's thoughts are racing.
5. Distractibility (attention easily pulled away by irrelevant/unimportant things).
6. Increase in goal-directed activity (i.e. excessively plans and/or pursues a goal, either social, work/school or sexual) or psychomotor agitation (such as pacing, inability to sit still, pulling on skin or clothing).
7. Excessive involvement in pleasurable activities that have a high-risk consequence.

A Hypomanic Episode is very similar to a manic one, but less intense. It is only required to persist for 4 days, and it should be observable by others that the person is noticeably different from his or her regular, non-depressed mood and that the change has an impact on his or her functioning.

A Mixed Episode would fulfill the symptom requirements for both a Major Depressive Episode and a Manic Episode nearly every day, but the mixed symptoms only need to last for a 1-week period.

For all four of these episodes, the symptoms must have an impact on the person's ability to function and can't derive from some other circumstance or illness that would logically, or better, account for their expression.

Bipolar I and II; The Difference

The main difference between BP I and BP II is full mania (7 days) v. hypomania (4 days). Once a person experiences a full manic episode, they will receive a BP I diagnosis.

Bipolar I Disorder
The Bipolar I diagnosis (with a Manic Episode) is broken down into six different sub-diagnoses, which are not important to detail here. Broadly, they are defined by which type of episode the patient is currently in or has most recently experienced, and which types of episodes (if any) they have experienced in the past. Two of the six diagnoses do not require the experience of any Major Depressive Episodes.

Bipolar II Disorder
For a Bipolar II diagnosis (no Manic Episode), the person must have experienced at least one Major Depressive Episode and at least one Hypomanic Episode.

Bipolar NOS: A Classification for Sub-threshold Symptoms

The Bipolar Disorder Not Otherwise Specified category includes disorders with bipolar features that do not meet criteria for any specific Bipolar Disorder.
Examples include:
- Very rapid alternation (over days) between manic symptoms and depressive symptoms that meet symptom threshold criteria but not minimal duration criteria for Manic, Hypomanic or Major Depressive Episodes
- Recurrent Hypomanic Episodes without intercurrent depressive symptoms
- A Manic or Mixed Episode superimposed on Delusional Disorder, residual Schizophrenia, or Psychotic Disorder Not Otherwise Specified
- Hypomanic Episodes, along with chronic depressive symptoms, that are too infrequent to qualify for a diagnosis of Cyclothymic Disorder
- Situations in which the clinician has concluded that a Bipolar Disorder is present but is unable to determine whether it is primary, due to a general medical condition or substance induced
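Because the episode definitions above are essentially counting rules (a minimum number of symptoms, a required core symptom, a minimum duration), they can be expressed compactly. The sketch below is illustrative only — a reading aid for the Major Depressive Episode rule, not a diagnostic tool; the symptom vector and duration are hypothetical:

% Illustrative reading of the Major Depressive Episode rule above.
% symptoms(k) is true if symptom k (numbered 1-9 as in the list) is present.
symptoms     = logical([1 0 1 1 0 1 1 0 1]);   % hypothetical patient
durationDays = 16;
hasCore      = symptoms(1) || symptoms(2);     % depressed mood or loss of interest
meetsRule    = hasCore && sum(symptoms) >= 5 && durationDays >= 14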
Quality Assessment: Chemical: Ortho- and Total Phosphate

Phosphorus occurs in several forms, both inorganic and organic. Inorganic orthophosphate (PO43-) is the only form available to living organisms. However, the other forms of phosphate can be transformed into orthophosphate. For this reason, total phosphate is generally measured in addition to orthophosphate. The measure of total phosphate provides an estimate of the amount of phosphorus potentially available to plants and animals.

Since the phosphorus requirements of algae are minimal, rapid algal growth occurs when excess phosphorus is present in streams. Common sources of excess phosphorus include agricultural runoff from feed lots and fertilized fields, and sewage that contains organic phosphorus as well as inorganic phosphorus in products such as detergents. The resulting algal bloom can lead to eutrophication and the subsequent degradation of stream water quality.
Scientific Name: Glaucostegus thouin
Species Authority: (Anonymous [Lacepède], 1798)
Synonym: Rhinobatos thouin (Anonymous [Lacepède], 1798)
Red List Category & Criteria: Vulnerable A2abd+3bd+4abd ver 3.1
Assessor(s): White, W.T. & Marshall, A.D.
Reviewer(s): Kyne, P.M., Heupel, M.R., Simpfendorfer, C.A. & Cavanagh, R.D. (Shark Red List Authority)

Rhinobatos thouin has a widespread distribution in the Indo-West Pacific. It was once moderately abundant but is now irregularly caught as bycatch in local fisheries throughout its range, especially in the Western Central Pacific. It is a large species (>300 cm TL), vulnerable to gillnets, inshore trawl fisheries and even line fishing. Rhinobatids are taken by multiple artisanal and commercial fisheries throughout their range as a target species and as bycatch, and population declines in many guitarfish species have been observed in areas of the Indo-Pacific. Local population depletion can be inferred from Indonesia, where the target gillnet fishery fleet declined from a maximum of 500 boats in 1987 to 100 in 1996, due to declining catch rates (Chen 1996). Flesh is sold for human consumption in Asia and the fins from large animals fetch particularly high prices, creating a significant incentive for bycatch to be retained (the value of rhinobatid and rhynchobatid fins far exceeds that of other sharks and rays). Demand for dried fins for the international fin trade could be a factor in the switch from subsistence fisheries to more directed fisheries, although the flesh is also highly sought after. Very little is known about the biology or population status of R. thouin. Its existence along coastal inshore areas of the continental shelf makes it an easy target for fisheries, and it is likely that habitat degradation in these areas may also be affecting nursery areas. Population declines are inferred from observed declines in bycatch numbers in local fisheries, and given its susceptibility to capture by multiple fishing gear types and its high-value fins, it is probable that numbers have been locally reduced by fishing throughout its range. This species meets the criteria of A2abd+3bd+4abd for Vulnerable due to the population decline outlined above and the remaining very high level of unmanaged exploitation in Southeast Asia.

Range Description: Widespread Indo-West Pacific distribution. Possibly Suriname and the Mediterranean (Compagno and Last 1999).

Native: Bangladesh; Djibouti; Egypt; Eritrea; Ethiopia; India; Indonesia (Jawa, Kalimantan, Sumatera); Iran, Islamic Republic of; Iraq; Japan; Kuwait; Malaysia; Myanmar; Oman; Pakistan; Papua New Guinea; Qatar; Saudi Arabia; Singapore; Somalia; Sri Lanka; Sudan; Thailand; United Arab Emirates; Viet Nam; Yemen

FAO Marine Fishing Areas: Present - origin uncertain: Atlantic – western central; Indian Ocean – western; Indian Ocean – eastern; Mediterranean and Black Sea; Pacific – northwest; Pacific – western central

Population: No information available.
Current Population Trend: Unknown

Habitat and Ecology: Benthic ray found in inshore waters, typically less than 60 m depth over soft sandy substrate. Aplacental viviparous, attaining at least 300 cm TL, but nothing known of its biology.

Life history parameters:
Age at maturity (years): Unknown.
Size at maturity (total length cm): Unknown.
Longevity (years): Unknown.
Maximum size (total length): >300 cm TL.
Size at birth (cm): Unknown.
Average reproductive age (years): Unknown.
Gestation time (months): Unknown.
Reproductive periodicity: Unknown.
Average annual fecundity or litter size: Unknown.
Annual rate of population increase: Unknown.
Natural mortality: Unknown.

Taken by multiple artisanal and commercial fisheries throughout its range as a target species and as bycatch. Fished using nets, line and hook, and trawls throughout its range. A large species, extremely powerful but still vulnerable to gillnets and, to a lesser extent, inshore trawl fisheries and lines. The fins from Rhinobatos spp. are widely considered to be among the most valuable of elasmobranchs (i.e., white-fin), and there is a significant incentive for fishers to remove the fins from large individuals when they are taken as either target catch or bycatch. R. thouin is commonly landed as bycatch in fisheries in Indonesia (Chen 1996, White unpublished data). Fisheries targeting the rhynchobatids in eastern Indonesia, e.g., Aru Islands and Merauke (Papua), often catch this species but generally in low numbers. Also recorded as trawl bycatch in Sabah and Sarawak (R. Cavanagh, pers. comm.). Since juveniles of this species inhabit shallow sand flats and mangrove estuaries (White, unpubl. data), intensive fishing pressures, e.g., gill, trap and seine nets, in such inshore areas throughout Indonesia (e.g., Merauke, Papua) are most likely having a high level of impact on this species.

Not known to have specific habitat requirements, but young may require specific inshore nursery areas that have been affected by human activities resulting in habitat degradation, as destructive fishing practices and pollution are significant factors affecting marine resources in parts of this species' range.

Local population depletion can be inferred from Indonesia, where the target gillnet fishery fleet declined from a maximum of 500 boats in 1987 to 100 in 1996 due to declining catch rates (Chen 1996).

Further research into the population structure, biology and ecology of Rhinobatos thouin is required to assess the extent to which fishing pressure, particularly in relation to finning, and habitat destruction are influencing this species within its range. Improved species composition data from all fisheries that take shovelnose rays and guitarfish are necessary. The development and implementation of management plans (national and/or regional, e.g., under the FAO International Plan of Action for the Conservation and Management of Sharks: IPOA-Sharks) are required to facilitate the conservation and sustainable management of all chondrichthyan species in the region. See Anon. (2004) for an update of progress made by nations in the range of R. thouin. Future management may involve difficult decisions affecting communities adjacent to these areas.

References:
Anonymous. 2004. Report on the implementation of the UN FAO International Plan of Action for Sharks (IPOA–Sharks). AC20 Inf. 5. Twentieth meeting of the CITES Animals Committee, Johannesburg (South Africa), 29 March–2 April 2004.
Chen, H.K. (ed.) 1996. Shark Fisheries and the Trade in Sharks and Shark Products in Southeast Asia. TRAFFIC Southeast Asia Report, Petaling Jaya, Selangor, Malaysia.
Compagno, L.J.V. and Last, P.R. 1999. Rhinobatidae. In: K.E. Carpenter and V.H. Niem (eds) FAO species identification guide for fishery purposes. The living marine resources of the Western Central Pacific. Volume 3. Batoid fishes, chimaeras and bony fishes part 1 (Elopidae to Linophyrnidae). FAO, Rome, pp. 1423-1430.
IUCN. 2006.
2006 IUCN Red List of Threatened Species. www.iucnredlist.org. Downloaded on 04 May 2006.
IUCN SSC Shark Specialist Group. Specialist Group website. Available at: http://www.iucnssg.org/.

Citation: White, W.T. & Marshall, A.D. 2006. Glaucostegus thouin. The IUCN Red List of Threatened Species 2006: e.T60175A12316741. Downloaded on 02 December 2015.
STUDY: Ailment affects 1 in 133
JOURNAL: Archives of Internal Medicine
AUTHORS: Alessio Fasano
ABSTRACT: New research is revealing that celiac disease may be one of the most common genetic diseases, affecting perhaps as many as 2 million Americans. A national survey published today, for example, estimates that 1 in 133 Americans has it.
COMMENTARY: Most doctors miss the diagnosis of celiac disease. It's now clear that the textbook description of this once-obscure ailment is woefully incomplete and describes only a minority of cases. Below the tip of the so-called celiac iceberg is a diverse world of illness that may include thousands of people suffering from various, seemingly unrelated conditions, such as anemia, osteoporosis, infertility, irritable bowel syndrome and chronic fatigue.

"We were taught in another way. We were looking in the wrong direction. We were not putting our face under the water to see the iceberg," said Alessio Fasano, a gastroenterologist at the University of Maryland School of Medicine in Baltimore.

It is Fasano and his colleagues who are publishing the survey that estimates 1 in 133 Americans has celiac disease. About 40 percent of the afflicted report no symptoms, although the disease may be having inapparent effects, such as the loss of bone mass, subtle changes in mood and infertility. In close relatives of people with celiac disease, the ailment was especially common, with a prevalence of 1 in 22, according to the paper, which is appearing in the Archives of Internal Medicine.

Celiac disease is characterized by a chronic inflammation of the upper portion of the small intestine. This occurs in response to gluten and similar proteins found in wheat, rye and barley. In classical cases, this leads to vomiting and diarrhea in young children soon after cereals are introduced in the diet. What's now clear is that people can develop celiac disease throughout life and that they often have few, if any, intestinal symptoms. The symptoms they do have often arise from deficiencies of nutrients absorbed in the affected part of the intestine, such as iron, calcium and fat-soluble vitamins. Iron-deficiency anemia is the most common "clinical presentation" of adults with celiac disease.

In Fasano's survey, 30 percent of people in whom the disease was newly diagnosed had joint pain. One quarter had fatigue. Six percent had osteoporosis.

Celiac disease is diagnosed by testing for three antibodies — anti-gliadin, anti-endomysial and anti-tissue transglutaminase — that are present when an affected person is exposed to gluten but disappear when the offending grains are no longer consumed.
Love looks not with the eyes but with the mind
Helena: A Midsummer Night's Dream (I, i, 234)

"Love looks not with the eyes but with the mind."

In this soliloquy, Helena ponders the transforming power of love, noting that Cupid is blind. The lovesick Helena has been abandoned by her beloved Demetrius, because he loves the more attractive Hermia. Helena, while tall and fair, is not as lovely as Hermia. Helena finds it unfair that Demetrius dotes on Hermia's beauty, and she wishes appearances were contagious the way a sickness is, so that she might look just like Hermia and win back Demetrius. The connection of love to eyesight and vision is a matter of vital importance in this play about love and the confusion it sometimes brings.
The role of data and analytics in business continues to grow. To make sense of their plethora of data, businesses are looking to data scientists for help. The job site indeed.com shows continued growth in "data scientist" positions. To better understand the field of data science, we studied hundreds of data professionals. In that study, we found that data scientists are not created equal. That is, data professionals differ with respect to the skills they possess. For example, some professionals are proficient in statistical and mathematical skills while others are proficient in computer science skills. Still others have strong business acumen. In the current analysis, I want to determine the breadth of talent that data professionals possess, to better understand the possibility of finding a single data scientist who is skilled in all areas.

First, let's review the study sample and the method of how we measured talent. We surveyed hundreds of data professionals about their skills in five areas: Business, Technology, Math & Modeling, Programming and Statistics. Each skill area included five specific skills, totaling 25 different data skills in all. For example, in the Business Skills area, data professionals were asked to rate their proficiency in such specific skills as "Business development" and "Governance & Compliance (e.g., security)." In the Technology Skills area, they were asked to rate their proficiency in such skills as "Big and Distributed Data (e.g., Hadoop, Map/Reduce, Spark)" and "Managing unstructured data (e.g., noSQL)." In the Statistics Skills area, they were asked to rate their proficiency in such skills as "Statistics and statistical modeling (e.g., general linear model, ANOVA, MANOVA, Spatio-temporal, Geographical Information System (GIS))" and "Science/Scientific Method (e.g., experimental design, research design)."

For each of the 25 skills, respondents were asked to tell us their level of proficiency using the following scale:

This rating scale is based on a proficiency rating scale used by NIH. Definitions for each proficiency level were fully defined in the instructions to the data professionals. The different levels of proficiency are defined around the data scientist's ability to give or need to receive help. In the instructions to the data professionals, the "Intermediate" level of proficiency was defined as the ability "to successfully complete tasks as requested." We used that proficiency level (i.e., Intermediate) as the minimum acceptable level of proficiency for each data skill. The proficiency levels below the Intermediate level (i.e., Novice, Fundamental Awareness, Don't Know) were defined by an increasing need for help on the part of the data professional. Proficiency levels above the Intermediate level (i.e., Advanced, Expert) were defined by the data professional's increasing ability to give help or be known by others as "a person to ask."

We looked at the level of proficiency for the 25 different data skills across four different job roles. As is seen in Figure 1, data professionals tend to be skilled in areas that are appropriate for their job role (see green-shaded areas in Figure 1). Specifically, Business Management data professionals show the most proficiency in Business Skills. Researchers, on the other hand, show the lowest level of proficiency in Business Skills and the highest in Statistics Skills.
For many of the data skills, the typical data professional does not have the minimum level of proficiency needed to be successful at work, no matter their role (see yellow- and red-shaded areas in Figure 1).
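The cut-off logic described above is straightforward to compute. A minimal sketch follows — the ratings matrix and the 0–5 numeric coding (with 3 standing for "Intermediate") are assumptions for illustration, not the study's actual data or coding:

% Hypothetical ratings: one row per respondent, one column per skill (25 skills),
% coded 0 = "Don't Know" up through 5 = "Expert", so 3 stands for "Intermediate".
ratings = randi([0 5], 300, 25);            % stand-in for survey responses
pctProficient = 100 * mean(ratings >= 3);   % percent at or above Intermediate, per skill
bar(pctProficient)
ylabel('% at or above Intermediate')
xlabel('Skill (1-25)')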
Flag of Romania

The Constitution of Romania provides that "The flag of Romania is tricolour; the colors are arranged vertically in the following order from the flagpole: blue, yellow, red". The proportions, shades of color, as well as the flag protocol, were established by law in 1994 and extended in 2001.

The flag is coincidentally very similar to the civil Flag of Andorra and the state Flag of Chad. The similarity with Chad's flag, which differs only in having a darker shade of blue (indigo rather than cobalt), has caused international discussion. In 2004 Chad asked the United Nations to examine the issue, but then-president of Romania Ion Iliescu announced that no change would occur to the flag. The Flag of Moldova is related to the Romanian tricolour, except that it has a 1:2 ratio, a lighter shade of blue and the Moldavian coat of arms in the middle.

Colors

The law mentioned above specifies that the stripes of the national flag are cobalt blue, chrome yellow and vermilion red. The publication Album des pavillons nationaux et des marques distinctives (2000) suggests the following equivalents in the Pantone scale:

History and significance of the colors

- Main article: History of the flags of Romania

Red, yellow and blue were found on late 16th-century royal grants of Michael the Brave, as well as on shields and banners. During the Wallachian uprising of 1821, they were present on the canvas of the revolutionaries' flag and its fringes; for the first time a meaning was attributed to them: "Liberty (sky-blue), Justice (field yellow), Fraternity (blood red)".

The tricolour was first adopted in Wallachia in 1834, when the reforming domnitor Alexandru II Ghica submitted naval and military colors designs for the approval of Sultan Mahmud II. The latter was a "flag with a red, blue and yellow face, also having stars and a bird's head in the middle". Soon, the order of the colors was changed, with yellow appearing in the center. In 1848, the flag adopted for Wallachia by the revolutionaries that year was a blue-yellow-red tricolour (with blue above, in line with the meaning "Liberty, Justice, Fraternity"). Already on 26 April, according to Gazeta de Transilvania, Romanian students in Paris were hailing the new government with a blue, gold and red national flag, "as a symbol of union between Moldavians and Muntenians". Decree no. 1 of 14/26 June 1848 of the provisional government mentioned that "the National Flag will bear three colors: blue, yellow, red", emblazoned with the words "DPEПTATE ФPЪЦIE" (Dreptate, Frăţie or "Justice, Fraternity"). It differed from earlier tricolors in that the blue stripe was on top, the princely monogram was eliminated from the corners, as was the crown atop the eagle at the end of the flagpole, while a motto was now present. Nevertheless, decree no. 252 of 13/25 July 1848, issued because "it has not been understood [yet] how the national flags should be designed", defined the flag as three vertical stripes, possibly influenced by the French model. The shades were "dark blue, light yellow and carmine red"; as for the order, "near the wood comes blue, then yellow and then red fluttering". Petre Vasiliu-Năsturel observes that from a heraldic point of view, on the French as well as the revolutionary Wallachian flag, the middle stripe represents a heraldic metal (argent and or, respectively); thus, the two flags could be related. Other historians believe that the tricolour was not an imitation of the French flag, instead embodying an old Romanian tradition.
This theory is supported by a note from the revolutionary minister of foreign affairs to Emin Pasha: "the colors of the band that we, the leaders, wear, as well as all our followers, are not of modern origin. We have had our flags since an earlier time. When we received the tricolour insignia and bands we did not follow the spirit of imitation or fashion". The same minister assured the extraordinary envoy of the Porte, Suleiman Pasha, that the flag's three colors had existed "for a long time; our ancestors bore them on their standard and their flags. So they are not a borrowing or an imitation from the present or a threat for the future". After the revolution was quelled, the old flags were restored and the revolutionaries were punished for having worn the tricolour.

From 1859 until 1866, the United Principalities of Wallachia and Moldavia had a red-yellow-blue Romanian tricolour, with horizontal stripes, as the national flag. The flag was described properly in Almanahul român din 1866: "a tricolour flag, divided in three stripes, red, yellow and blue, laid out horizontally: red above, blue below and yellow in the middle". Although the Ottoman Empire did not allow the United Principalities to have their own symbols, the new flag gained a degree of international recognition. Relating prince Cuza's May–June 1864 journey to Constantinople, doctor Carol Davila observed: "The Romanian flag was raised on the great mast, the Sultan's kayaks awaited us, the guard was armed, the Grand Vizier at the door… The Prince, quiet, dignified, concise in his speech, spent 20 minutes with the Sultan, who then came to review us… Once again, the Grand Vizier led the Prince to the main gate and we returned to the Europe Palace, the Romanian flag still fluttering on the mast…".

Article 124 of the 1866 Constitution of Romania provided that "the colors of the United Principalities will be Blue, Yellow and Red". The order and placement of the colors were decided by the Assembly of Deputies in its session of 26 March 1867. Thus, following a proposal by Nicolae Golescu, they were placed just as in 1848: vertically and in the following order: blue at the hoist, yellow in the middle and red at the fly. The country's coat of arms was placed only on army and princely flags, in the center; civilian flags remained without a coat of arms. The same distinction was made between flags of the Navy and those of civil and merchant ships. The rapporteur Mihail Kogălniceanu, who also conveyed the opinion of Cezar Bolliac, Dimitrie Brătianu, Constantin Grigorescu, Ion Leca, Nicolae Golescu and Gheorghe Grigore Cantacuzino, said: "The tricolour flag as it is today is not (as the minister claims) the flag of the United Principalities. It is much more: it is itself the flag of the Romanian nation in all lands inhabited by Romanians". The "Law for modifying the country's arms" of 11/23 March 1872 did not change these provisions, only the design of the coat of arms. This design of the national flag lasted until 1948.

On 30 December 1947, Romania was proclaimed a people's republic and all the former kingdom's symbols were outlawed, including the coat of arms and the tricolour flags that showed it. During the communist era in Romania, the state flag had the emblem of the country in the middle of the yellow stripe, and for the first time the 2:3 proportion was regulated by law. Until 1989, no fewer than four coats of arms were changed.
Starting on 17 December 1989, during the revolution at Timişoara, the coat of arms of the Romanian Socialist Republic began to be ripped off the flags, being perceived as a symbol of Nicolae Ceauşescu's dictatorial regime. These flags were called "the flag with the hole". Decree-Law no. 2 of 27 December 1989, regarding the membership, organization and functioning of the Council of the National Salvation Front and of the territorial councils of the National Salvation Front, provided at article 1, among other matters, that "the national flag is the traditional tricolour of Romania, with the colors laid out vertically, in the following order, starting from the flagpole: blue, yellow, red".

References

- At article 12, clause 1.
- Law no. 75 of 16 July 1994, published in Monitorul Oficial no. 237 of 26 August 1994.
- Governmental Decision no. 1157/2001, published in Monitorul Oficial no. 776 of 5 December 2001.
- "'Identical flag' causes flap in Romania".
- Pălănceanu (1974), p. 138.
- Iscru, Gheorghe D., "Steagul Revoluţiei din 1821", in Revista Arhivelor no. 2/1981, p. 211.
- Buletinul - Gazetă Oficială a Ţării Româneşti, no. 34 of 14 October 1834, p. 144.
- Gazeta de Transilvania, year XI, no. 34 of 26 April 1848, p. 140.
- Dogaru (1978), p. 862.
- Căzănişteanu (1967), p. 36.
- Dogaru (1978), p. 861.
- Năsturel (1900/1901), p. 255.
- Anul 1848 în Principatele Române, II, Bucharest, 1902, p. 477.
- Căzănişteanu (1967), p. 36.
- Dogaru (1978), p. 868.
- Năsturel (1900/1901), p. 253.
- Pălănceanu (1974), p. 145.
- Mihalache (1967), pp. 180-1.
- Constituţia României, 1866, title VI, art. 124.
- Năsturel (1900/1901), p. 257.
- Velcu (1938), p. 81.
- Năsturel (1900/1901), p. 257.
- Decree-Law published in Monitorul Oficial no. 4 of 27 December 1989.

Wikimedia Commons has media related to Flags of Romania.
The hands are gone. But if they were still attached, they would point to some time around 3 pm on 11 March 2011. That's when a massive tsunami smothered much of the east coast of Japan and killed almost 20,000 people. It came 15 minutes after the huge, magnitude-9 earthquake that caused a meltdown at the Fukushima Daiichi nuclear plant.

The shattered, mud-covered clock belonged to Abe Masahara, whose house near Sendai City was destroyed by the tsunami, but who was safely evacuated with his family before disaster struck. It is part of an exhibition on earthquakes and volcanoes now open at London's Natural History Museum. As well as an earthquake simulator and a live data feed of earthquake hotspots, the gallery includes survivors' tales and possessions. Alongside this clock, you can see a calendar that hung on a wall in Abe's house through earthquake and tsunami: it bears a waterline stain that shows how high the floodwater reached.
The hip is one of the largest weight-bearing joints in the body. When it's working properly, it lets you walk, sit, bend, and turn without pain. Unlike the shoulder, the hip sacrifices degree of movement for additional stability. To keep it moving smoothly, a complex network of bones, cartilage, muscles, ligaments, and tendons must all work in harmony.

The hip is a ball-and-socket joint where the head of the femur articulates with the cuplike acetabulum of the pelvic bone. The acetabulum fits tightly around the head of the femur. The ball is normally held in the socket by very powerful ligaments that form a complete sleeve around the joint (the joint capsule). The capsule has a delicate lining (the synovium). The head of the femur is covered with a layer of smooth cartilage, which is a fairly soft, white substance. The socket is also lined with cartilage. This cartilage cushions the joint and allows the bones to move on each other with very little friction. An x-ray of the hip joint usually shows a "space" between the ball and the socket because the cartilage does not show up on x-rays. In the normal hip this "joint space" is approximately 1/4 inch wide and fairly even in outline.

The knee joint, which appears to be a simple hinge joint, is one of the most complex joints in the body. The knee joint is made up of the femur (thigh bone), tibia (lower leg bone) and patella (the kneecap). All these bones are lined with articular cartilage (surface cartilage). This articular cartilage acts like a shock absorber and provides a smooth, low-friction surface for the knee to move on. Between the tibia and femur lie two floating cartilages called menisci. The medial (inner) meniscus and the lateral (outer) meniscus rest on the tibial surface cartilage and are mobile. The menisci also act as shock absorbers and stabilizers.

The knee is stabilized by ligaments that are both inside and outside the joint. The medial and lateral collateral ligaments support the knee against excessive side-to-side movement. The (internal) anterior and posterior cruciate ligaments keep the knee from buckling and giving way. The knee joint is surrounded by a capsule (envelope) that produces a small amount of synovial (lubricating) fluid to help with smooth motion. Thigh muscles are important secondary knee stabilizers.

We tend to ignore our knees until something happens to them that causes pain. If we take good care of our knees now, before there is a problem, we can really help ourselves. In addition, if problems with the knees do develop, an exercise program can be extremely beneficial.
Our physicians have expertise in conservative, non-surgical care such as physical therapy, interventional treatments, and injections. If surgery is necessary, our board-certified surgeons have advanced training in hip, knee and leg procedures. Click on the topics below to find out more from the Orthopaedic Connection website of the American Academy of Orthopaedic Surgeons.
Groundhog Day is celebrated on February 2 of each year. The folklore holds that the emergence of the groundhog from its burrow on this day will predict the arrival of spring. If the day is cloudy, there may be an early spring that will bloom before the vernal equinox. If the weather is sunny when the groundhog emerges, it will see its shadow and return to its den. This signals that winter will persist for six more weeks. The celebration of Groundhog Day is particularly festive in Pennsylvania, where Groundhog Lodges celebrate the day with social events. They serve food and host themed activities and entertainment. The largest celebration is held in Punxsutawney, Pennsylvania, with the groundhog Punxsutawney Phil making his prediction about the duration of winter. Whether the predictions are true or not, they turn people's attention to preparing for the eventualities of different weather patterns. If the prediction is for a shorter winter, people can anticipate less moisture, which means that there will be dry conditions and potential brush fires. If the prediction is for a longer winter, people may prepare for storms and water-related disasters such as floods. Groundhog Day is a fun tradition, but the bottom line is that we should all be prepared for whatever Mother Nature may bring. It's important to keep extra water on hand for drought conditions. For winter weather, you should keep rock salt, snow shovels, and automobile ice scrapers on hand. You should also store flashlights (with batteries), candles, and hand-crank radios. It's not a bad idea to invest in a generator to keep critical appliances running in case you lose power for extended time periods. No one can prevent natural disasters, but we can all take measures to mitigate their harmful effects.
An Antrim potato farmer has re-cultivated a variety of potato at the root of the Great Famine, making it available in Ireland for the first time in almost 170 years. The nutritious "Irish Lumper" grew immensely popular among impoverished Irish farmers in the early 19th century because it flourished in poor soil. However, the dependence on a single variety of spud proved disastrous. When the blight took hold in the 1840s, the Lumper was wiped out. The potato variety had all but disappeared until Michael McKillop of Glens of Antrim Potatoes decided to grow the spud five years ago. "I had read in all the history books about the awful flavours and soapy texture of the Lumper, but I wanted to see for myself what this potato with a black history was like," McKillop told the Irish Times. "I grew a few and was amazed at how good they tasted." The Lumper was a hit at the Delicious Ireland consumer show at the Selfridges Foodhall in London last summer, but McKillop's yield was not enough to bring it to a wider market. This year's yield is slightly larger. Next week, the results of McKillop's endeavors will appear on the shelves of Marks & Spencer, where it will sell for just three weeks. The potato blight was caused by a fungus probably imported in fruit from America or Mexico on a trading ship. As soon as the fungus took hold, the potatoes turned into a mass of rottenness. Over one million starved and one million fled.
Selenium is an automated testing suite for web applications with the advantage of working across multiple platforms and browsers. It is an open-source tool and is mainly used for functional testing and regression testing. Since it is open-source, there is no licensing cost involved, which is a major advantage over other testing tools. It consists of:
- Selenium Integrated Development Environment (IDE)
- Selenium Remote Control (RC)
- Selenium WebDriver
- Selenium Grid
Advantages of Using Selenium Testing:
- It operates across different browsers and operating systems.
- Selenium IDE - a Firefox plugin that lets testers record their actions as they follow the workflow they need to test.
- Selenium RC - a flagship testing framework that allows more than simple browser actions and linear execution. It makes use of the full power of programming languages such as Java, C#, PHP, Python, Ruby and Perl to create more complex tests.
- Selenium WebDriver - the successor to Selenium RC, which sends commands directly to the browser and retrieves results.
- Selenium Grid - a tool used to run parallel tests across different machines and different browsers simultaneously, which results in minimized execution time.
Selenium Test Automation Services:
- Test automation assessment & ROI analysis
- Framework development, customization, integration, review and analysis
- Automated test suite development and maintenance
- Automated Selenium migration services (Migrate2Selenium platform)
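As a quick illustration of the WebDriver style described above, here is a minimal Python sketch. The URL, element locators and expected page title are placeholders for illustration only, and the exact setup may vary with your Selenium version and browser driver.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Launch a browser session (assumes a compatible driver is installed).
driver = webdriver.Firefox()

try:
    # Navigate to the page under test (placeholder URL).
    driver.get("https://example.com/login")

    # Locate form fields and simulate user input.
    driver.find_element(By.NAME, "username").send_keys("test_user")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()

    # Assert on the result, as a functional test would.
    assert "Dashboard" in driver.title
finally:
    # Always release the browser session.
    driver.quit()
```

The same script can be pointed at a Selenium Grid hub via `webdriver.Remote` to run in parallel across machines and browsers, which is where the execution-time savings mentioned above come from.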
As the second state struck by white nose syndrome in bats, good news for Vermont's bats is good news for all hibernating bats in North America. An Associated Press story reports that scientists are interpreting results of a winter-long study of bat movements in New England's largest bat hibernation site as showing a sharp reduction in the number of bats felled by white nose syndrome. The scientists tagged over 400 bats, and found that only eight left their hibernation cave early. Only 192 bats left the cave at their normal time, but the scientists say they think those other 200 or so bats hibernated in another cave, as opposed to dying somewhere deep in the cave out of reach of their tracking antenna. Read the whole Associated Press story here. Scroll down for some background on the study and other interesting white nose syndrome info, here. Photo: Little brown bat with white nose syndrome. Courtesy of Missouri Dept. of Conservation National Moth Week is July 19 – 27. While most state wildlife departments struggle to include invertebrates of any kind in their programs, if you are looking for educational opportunities, this one is as worthy as any. The week was founded and promoted by Friends of the East Brunswick Environmental Commission (and, yes, that's in New Jersey). Find the National Moth Week website here. And, of course, there's a Facebook page. The Nature Conservancy is celebrating National Moth Week. Its list of moth-related activities is here. And a blog post with more background is here. I'm in the middle of moving this website to a new server. The new server will get rid of the ads, which were never a part of this blog, but something added by WordPress. If you have this site bookmarked as "wildliferesearchnews.com" there should be no change. There will also be no change if you get a weekly email through MailChimp. The move is taking longer than I expected, but it is not taking a month and a half. The big gap in posts is due to other things, and I took advantage of the hiatus to make the server change. I will start posting again as soon as I'm functional on the new server. Also as soon as the site is functional on the new server, I'll work on getting the people who have email subscriptions through WordPress moved. There are just a very few of you. Thanks for your patience. Looking forward to seeing you on the new server.
Controlling RGB LEDs was never difficult: just send a PWM signal to the red, green and blue LED and you have a color. Controlling multiple LEDs isn't that hard either: just throw a few MOSFETs in. Controlling a lot of RGB LEDs and having each one display a separate color is harder: you need to either put them in a matrix or have a chip next to each one. The second solution has become easier over time because of chips specifically designed to do the job. At first, you had Chinese LED strips with a chip every three LEDs, with an SPI-like bus running along them to set each group of three LEDs to a 15-bit color. Later, the chips got better and could display 24-bit color. Nowadays, we have chips that actually integrate the driver chip in the LED package. An example is the WS2812: a nice and bright LED in a PLCC form factor, with just four pins for power and data. The integration of the chip in the LED even seems to make it the cheapest choice (at the time of writing) if you want something that's per-LED addressable. These chips also have a disadvantage. While other chips run on a kind of SPI-like protocol whose signals are easily generated either with a microcontroller's SPI port or by bit-banging GPIOs in software, the WS2811 (the little controller die inside a WS2812 LED) uses a single-wire protocol, encoding the ones and zeroes in the duration of the high pulse. 24 of these pulses set the intensity of the red, green and blue elements in the LED, and a stream of these 24-bit signals provides each LED with its individual color. This also means the pulses are quite timing-sensitive and high-speed. This makes controlling the things in a non-realtime OS like Linux pretty hard. A program running under such an OS would need to spit out a perfectly timed stream of those weirdly encoded ones and zeroes. A context switch or an interrupt, however, could easily introduce a delay orders of magnitude bigger than the timing requirements allow. This would make the LEDs de-synchronize or reset, which introduces flicker or other randomness. This can be alleviated by handing control of the LEDs to a secondary microcontroller, running without an OS to interfere with the timings. Of course, this adds cost, and due to the limited bandwidth between the Linux host and the microcontroller, the frame rate and/or the number of controllable pixels can be limited. Sometimes, however, you can work around the need to use CPU power to precisely time the signal stream. The OctoWS2811 library does that for Teensy boards, using some smart DMA magic to construct the stream almost without any CPU interference. The downside is that the Teensy is just an embedded board, and usually you need to connect it to something larger, e.g. a Raspberry Pi, to connect it to Ethernet or do other higher-level stuff. The idea to (ab)use the onboard hardware of a controller for this is a good one, however, and I wondered if there was a way to use something similar on a Linux-running device. My first thought was to use a Raspberry Pi with the DMA trick I used in my LED board controller build. Unfortunately, that didn't meet the tight timing requirements the WS2811 needs: the CPU sometimes gets priority over the DMA transfer, introducing a small but disastrous delay. So, the question was: is there something else that can transfer bits from memory to IO pins and does have strict timing capabilities? Turns out there's at least one thing: a video interface.
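To make the timing constraint concrete, here's a sketch of the usual trick for turning pixel data into a WS2811/WS2812-compatible pulse stream with ordinary serial hardware: run a shifter at roughly three times the 800 kHz data rate, so each data bit becomes three carrier bits — a short high pulse ('100') for a zero and a long one ('110') for a one. The exact ratios here are an approximation of the datasheet timings, not a drop-in driver.

```python
def encode_ws2812(pixels):
    """Expand (r, g, b) tuples into a ~2.4 Mbit/s carrier bitstream.

    Each WS2812 data bit lasts 1.25 us; at three carrier bits per data
    bit, '100' encodes 0 and '110' encodes 1. Colors are sent in
    green-red-blue order, most significant bit first.
    """
    bits = []
    for r, g, b in pixels:
        for byte in (g, r, b):               # WS2812 expects GRB order
            for i in range(7, -1, -1):
                bits.extend((1, 1, 0) if (byte >> i) & 1 else (1, 0, 0))
    # Pack the carrier bits into bytes for an SPI-style transmitter.
    out = bytearray()
    for i in range(0, len(bits), 8):
        chunk = bits[i:i + 8] + [0] * (8 - len(bits[i:i + 8]))
        out.append(int("".join(map(str, chunk)), 2))
    return bytes(out)

# One pixel -> 24 data bits -> 72 carrier bits -> 9 bytes.
print(len(encode_ws2812([(16, 0, 0)])))  # prints 9
```

A stream like this has to be clocked out without any gaps at all, which is exactly the kind of rigid, continuously timed output a video interface is built to produce.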
Unfortunately, the one on the Raspberry Pi isn't documented, but there are other Linux-based boards available. For example, Olimex produces the OLinuXino series of boards, which are open-hardware Linux boards based, among others, on the very well documented i.MX233 series. I managed to snag an OLinuXino Nano at an eBay auction for EUR 18 and decided to use that.
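Whichever peripheral ends up generating the stream, the protocol's fixed bit time caps how many LEDs one data line can refresh. A rough back-of-the-envelope calculation (ignoring the ~50 µs reset gap between frames) looks like this:

```python
BIT_TIME_US = 1.25          # one WS2811 data bit
BITS_PER_LED = 24           # 8 bits each for G, R, B

def max_leds(fps):
    """Upper bound on chained LEDs for a given refresh rate."""
    frame_time_us = 1_000_000 / fps
    return int(frame_time_us / (BIT_TIME_US * BITS_PER_LED))

for fps in (25, 30, 60):
    print(fps, "fps ->", max_leds(fps), "LEDs per data line")
# 25 fps -> 1333, 30 fps -> 1111, 60 fps -> 555
```

So a single data line tops out at roughly a thousand pixels at video-like frame rates, which is why multi-channel approaches like OctoWS2811 drive several strips in parallel.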
Every year, a Japanese village in northern Aomori Prefecture creates field-sized, living, 3D paintings made of coloured rice shoots. BBC, by Selena Hoy, 1 March 2017. From ground level, it didn't look like much. In fact, the scene looked like a lot of rural Japan: rice paddies with tender shoots in shades of green, rippling in the wind and stretching off into the horizon. Bucolic, surely, but nothing unusual. But as I ascended the viewing platform and took the longer view, something started to emerge. What had been patches of pale green and reddish brown started to take shape into a detailed tableau of Godzilla mid-attack, in a scene so sprawling it wouldn't fit into my camera's normal frame. It was so impressive it almost looked as though the monster would rise up out of the field and start crushing the little houses in the distance under his feet, perhaps crunching a few humans in his well-defined teeth. The Godzilla design appears to show the monster rising out of the field (Credit: Selena Hoy) This astonishing art form is something that hundreds of thousands of people are flocking to the village of Inakadate, in Japan's northern Aomori Prefecture, to see. Tanbo Art, which translates to 'Rice Paddy Art', consists of thousands of strategically planted rice shoots grown in concert to produce field-sized, living, 3D paintings. The imaginative undertaking started back in 1992, when the then-mayor instructed his staff to think of an event that would draw crowds to the village. Takatoshi Asari of the village's tourism and planning division explained that "one employee had seen an elementary school rice paddy that was planted with yellow, purple and green rice plants in a striped pattern, and thought, 'What if we planted a field with three colours of rice plants to make a drawing with text?' There was no concept of art at the time." When I asked my guide Hiroki Fukushi, "Why rice?", he replied: "This is a village with nothing but rice fields. So we thought, let's do with what we have." Rice agriculture is a cornerstone of life in Inakadate (Credit: Inakadate Village) In this remote village of just more than 8,000 people, rice agriculture is a cornerstone of life. Rice has been cultivated in the area for around 2,000 years; Inakadate's official flower is inenohana, or rice flower; and the village song also features the rice flower. The first year, about 100 villagers helped to plant the rice there. The result was a simple geometric representation of nearby Mt Iwaki, with the words 'Rice Culture Village Inakadate' in Japanese. Very few spectators turned up. Realising that they needed to create something more impressive, "every year, we increased the colours of rice plants that were used, and the technology for creating the art improved," Asari said. The first design was a simple geometric representation of nearby Mt Iwaki (Credit: Inakadate Village)
Their ambitious representation of Mona Lisa was met with mixed success (Credit: Inakadate Village) "At the beginning when I started rice paddy art, there were some failures, but after some trial and error, I gained experience, and now, the rice paddy art comes out the way it's envisioned," he said. The project has certainly come a long way since Mt Iwaki and Mona Lisa. The annual Tanbo Art now has a planning process, which starts in autumn after rice harvest and is wrapped up in April, a month or so before planting commences, explained the village mayor, Koyu Suzuki. "The theme is decided at the Village Revitalization Promotion Council. We try to choose a design that will be enjoyable to many different people," Suzuki said. Next, Yamamoto makes the drawing, and a survey company in the village produces a computer-aided design (CAD) blueprint, which helps ensure that the perspective of the artwork will look right when viewed from the observation points. Planting occurs in late spring and the images emerge in the summer (Credit: Inakadate Village) The planting occurs in late spring, and the paddies, in two locations in the village, grow from May to October. "There are 12 varieties of rice plants used, and seven colours. Right after planting the seeds, you can't tell the difference between the colours, but once the rice plants start growing, you can tell the difference quite distinctly," said Inakadate tourism section chief Masaru Fukushi, who is also in charge of paddy maintenance. The images begin to emerge from the mud sometime in June, but the living paintings reach peak splendour in July and August, when the viewing areas are crowded with spectators. From the humble numbers of the 1990s, a total of 340,000 visitors came to the two viewing sites in 2016. The town has even built a train station, Tanbo Art Station, and a special viewing tower to accommodate all the people. The themes over the years have included scenes from Star Wars and Gone with the Wind, but increasingly, the villagers are tending toward Japanese motifs. In 2016, the year I visited, the main pictures were of Godzilla and actors from the TV drama Sanada Maru, a historical samurai show that was popular last year. Gone with the Wind (Credit: Inakadate Village) Suzuki was coy when I asked for a hint of this year's artwork, but did say that "it will be a Japanese style design. Like something out of the Kojiki [Japan's oldest extant text, dating from the 8th Century]." The Kojiki and rice: what could be more quintessentially Japanese? Warriors (Credit: くろふね) Seven Gods of Fortune (Credit: 掬茶)
Paracord Belt Design Paracord can be an awesome tool in your preparedness arsenal. This durable nylon rope can be tied into tons of different designs including bracelets, strengthened cords, pouches and more. If you're in an emergency, you simply unwind the strong cord and use it to bind, haul or anything else that you might need. So, whether you're a beginner or an expert paracord lover, we have a design for you. Check out these paracord designs below. If you don't want to spend time weaving your own bracelet, you can always let us do it for you. What is Paracord? Paracord, also known as parachute cord, is a soft, lightweight nylon rope that was originally used for parachuting. Typically, 550 paracord (which is the paracord used for our bracelets) is made of a 32-strand nylon sheath on the outside and seven strands of 2-ply nylon yarn on the inside (the "guts"). The 550 paracord is the same cord made for the government and has a minimum breaking strength of 550 lbs. While paracord started out as a parachuter's tool, people quickly recognized its usefulness in other areas. Since the cord is quick-drying and rot- and mildew-resistant, it's great for many purposes. Military units use it for securing packs and hanging covers and tents. Many military personnel even use the guts as fishing line. Paracord Belt Design • 100 feet of 550 paracord (depending on measurements) • Small needle-nose pliers • Measuring device • Cutters (e.g. scissors, knife) • Candle or lighter • Nice belt buckle The first thing you'll need to figure out is how long your belt should be. This will vary depending on your waist size and how tightly you weave the belt. The belt design consists of five strands of paracord – two core strands, two working strands and an extra strand for the belt loop. As a general rule, plan the length of each strand as follows (a worked sketch of this arithmetic follows the instructions below): Two core strands: Measurement of your waist × 2 + 24 inches = length of one core strand Two working strands: Desired length of belt × 12 + 24 inches = length of one working strand Belt loop: Plan for 36 inches of paracord So, as a quick example: I have a 32-inch waist and I want a 36-inch long belt. That would mean I need just over 93 feet of paracord. Two core strands: 32 × 2 + 24 = 88 inches. I need two 88-inch core strands. Two working strands: 36 × 12 + 24 = 456 inches. I need two 456-inch working strands. Belt loop: 36 inches. So, in total, I need 1,124 inches of paracord – which is a little more than 93 feet. (88 × 2) + (456 × 2) + 36 = 1,124 inches Creating the Belt 1. Belt buckles have a front and back. With a large rodeo-style buckle, it's easy to tell which way it should be facing. However, with a smaller model, look at the direction the buckle will be opening. Once you've figured out which way it will be facing, fold one of the core strands in half. Pass the loop of the core strand through the buckle to one side of the tongue. Repeat the process on the other side of the tongue. 2. The design of the belt is relatively simple. It's just a series of square knots passing across the core strands. So, the core strands will always be running parallel while the working strands do all the crossing. Place the buckle at the top of the table with the back facing you. 3. Place the buckle on a table with the back facing up. 4. Pass the inner right working strand (blue) over the inner left working strand (orange). Push the outer left working strand, the left core strands, and the inner right strand that is now the inner left working strand to the side a bit. 5.
Pull the outer right working strand outward a bit, then pass the end in front of the right core strands, perpendicular to their length. 6. Bring the now inner right working strand straight down across the outer right core strand, then behind the right core strands, and up out through the loop of the outer right core strand. 7. Pull the working strands tight. 8. Repeat the process in reverse with the same strands. This completes the first right square knot. 9. Push all of the right strands to the side so that you can work with the left strands. 10. Do the same thing on the left that you did on the right, but reverse the sides, so that you start by passing the outer left strand across the core strands. 11. Once you complete the first left square knot, pass the inner right working strand over the inner left working strand as you did before. 12. Repeat making square knots until you reach three inches less than the desired length of the belt. 13. Cut the inner two core strands (one from each side), and melt the ends. 14. Continue with the square knots and the exchange of the inner working strands in the same way as before until you have added three inches. 15. Tie the two outermost strands in a square knot around all of the other knots. 16. Cut all of the strands so that about an inch and a half remains on each. 17. Using the pliers, weave all of the strands backward through the stitches on the back of the belt. This will give a tapered look and finish off the belt tip. The Belt Loop 18. Decide where you want your catch loop, and pull the 36-inch strand through the v-shaped stitch on the back of the belt at that point. 19. Placing the center of the strand under the v-shaped stitch, pull the ends through the side loops on either side of the v-shaped stitch. 20. Pull the strands directly across the front of the belt and through the side loops on the side opposite your starting point. 21. The loops across the front of the belt created this way need to be loose enough for the belt to pass through, with a little extra room since they will thicken in a moment. 22. Treating the loops across the front of the belt as the core strands and the loose ends of the strand as the working strands, create a series of square knots that lead back to the place where you first inserted the strand into the belt. The method here is identical to the one used to make the belt, except that you are only doing a single cobra-knot row instead of the two parallel rows. 23. After tying your last knot, trim the working ends, melt them, and weave them into the back of the belt. And there you have it! You have a beautiful paracord belt made of about 100 feet of emergency rope! Images courtesy of Rodneybones
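The strand-length arithmetic from the measuring section is easy to get wrong by hand, so here is a small sketch that applies the same rules of thumb. The formulas are the guide's own; only the function and variable names are made up for illustration.

```python
def paracord_needed(waist_in, belt_length_in):
    """Total 550 paracord needed for the belt, using the rules above."""
    core = waist_in * 2 + 24            # one core strand (two needed)
    working = belt_length_in * 12 + 24  # one working strand (two needed)
    belt_loop = 36                      # single strand for the catch loop
    total_in = 2 * core + 2 * working + belt_loop
    return core, working, total_in

core, working, total = paracord_needed(waist_in=32, belt_length_in=36)
print(core, working, total, round(total / 12, 1))
# 88 456 1124 93.7  -> two 88" cores, two 456" workers, ~94 ft total
```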
"What we propose is to free redistricting from party political biases," she said. Their algorithm was created using open-source code that's available for anyone to reproduce the results, which Guest believes is crucial to the transparency of the model. "The use of open-source software and transparent, easy-to-understand code would help keep the process unbiased, and allow people to verify and trust the results." The model they tested was based on the simplest possible conditions: voters within the same district should be geographically close. They believe politicians shouldn't be involved in district-mapping at all, relying instead on computers to do the work. "Districting should be no different than multiplying two large numbers together using a calculator," Guest remarked. "One knows the numbers, but relies on the computer to do the calculation properly." The researchers were inspired to devise a solution to partisan gerrymandering because they see it as a threat to democracy. "Gerrymandering is corrosive to basic democratic values," Bradley Love, another experimental psychology researcher at University College London and a co-creator of the algorithm, told Seeker. "In extreme cases, it disenfranchises citizens by creating districts in which results are virtually preordained," he said. "Government can become less representative and responsive to the will of the people, and instead [become] captured by special and powerful interests." In developing the formula, the research team aimed to test the difference in population concentration between computed districts and districts that currently exist. They theorized that the difference would be largest for big states, which would be challenging for any human to fairly and accurately separate into districts. To evaluate this, they created a clustering algorithm to redraw lines for all 435 congressional districts in the US, adhering to a federal law that requires all districts to have roughly the same population size. "The model starts with one cluster for each district at random locations within the state," Guest explained. "At the start of each round, the model assigns people to the nearest cluster. At the end of the round, each cluster updates its location to be in the center of its people. Then, using these updated positions, a new round begins with everyone reassigned to the nearest cluster once again. This goes on and on until the clusters stabilize, defining the voting districts. The one nuance is that clusters with more people are penalized so that each cluster ends up with roughly the same number of people." One aspect of redistricting that many states require is the preservation of communities of interest: people who are demographically similar based on race, class, and/or culture. Because these groups are likely to have similar political concerns, they benefit from having unified representation in the legislature. But grouping voters based on geography alone could split these communities apart, which is why geography alone is not a sufficient criterion for redistricting.
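A minimal sketch of the penalized clustering loop Guest describes might look like the following. The penalty term, data format and function names here are illustrative choices of my own, not the researchers' published code.

```python
import numpy as np

def redistrict(points, k, rounds=100, penalty=0.1):
    """Toy equal-population districting via penalized k-means.

    points: (n, 2) array of voter coordinates; k: number of districts.
    Over-full clusters are penalized so assignments even out over rounds.
    """
    rng = np.random.default_rng(0)
    centers = points[rng.choice(len(points), k, replace=False)]
    sizes = np.full(k, len(points) / k)
    for _ in range(rounds):
        # Distance to each center, inflated for over-full clusters.
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        d += penalty * (sizes - len(points) / k)[None, :]
        labels = d.argmin(axis=1)
        sizes = np.bincount(labels, minlength=k).astype(float)
        # Move each center to the mean position of its assigned voters.
        for j in range(k):
            if sizes[j]:
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

points = np.random.default_rng(1).uniform(size=(1000, 2))
labels, _ = redistrict(points, k=5)
print(np.bincount(labels))  # district populations, roughly equal
```

The open-source framing matters here: because a loop like this is deterministic given its inputs, anyone can rerun it and verify the resulting district map.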
Metamerism is the condition in which the general segmentation of bilateral animals involves longitudinal division of the body into a linear series of similar sections. Metamerism is also known as metameric segmentation. Each section is called a metamere, somite or segment. Each of these segments contains repeats of some or all organ units. The term metamerism is used only when the organs of mesodermal origin are so arranged. Metamerism is a Greek term (meta = after, mere = part). The primary segmental divisions are the body wall musculature and the coelom; these in turn impose a corresponding metamerism on the associated systems. Longitudinal structures like the gut, main blood vessels and nerves extend through the entire length of the body, while structures like gonads are repeated in all or only a few segments. Metamerism in the animal kingdom:
- Metamerism is encountered for the first time in annelids.
- Apart from this, it is also found in Arthropoda and Vertebrata. One group of Mollusca (Monoplacophora) also exhibits metamerism.
- Tapeworms show pseudometamerism, or strobilization, which is not true metameric segmentation.
Types of Metamerism
External and internal metamerism: In most annelids, metamerism is conspicuously visible both externally and internally. For example, Pheretima posthuma has numerous body segments, with structures repeated segmentally throughout the body. Moreover, even the coelom is segmentally divided into compartments by intersegmental transverse mesenteries called septa. Only the digestive tract escapes this metamerism, and it extends through every segment. In arthropods, metamerism is chiefly external. Humans and other vertebrates show internal metamerism of nerves, blood vessels, etc.
Complete and incomplete metamerism: The complete type of metamerism affects practically all body systems. In this type the metameres are homonomous, and each metamere has segmental blood vessels, nerves, coelomoducts and nephridia. Thus this condition is also called homonomous metamerism. Metamerism in arthropods and other higher animals is incomplete because of division of labour. Consequently, metameres of different regions of the body vary considerably. Such a condition is called heteronomous metamerism. The larval and embryonic stages of arthropods and other vertebrates show complete metamerism with uniform metameres, but these metameres become less distinct in adults following specialization.
True and pseudometamerism: True segments in annelids develop during the embryonic stages, whereas the pseudosegments of tapeworms are superficial and are formed as a result of strobilization. The proglottids of tapeworms are not true segments; rather, they are complete reproductive individuals.

| True metamerism (e.g., annelids) | Pseudometamerism (e.g., tapeworms) |
| --- | --- |
| Number of segments is constant for each species. No new segments are added except in asexual reproduction. | Number of segments is not constant, as new segments are constantly added throughout life. |
| Growth results from simple elongation of the preexisting segments. | Growth results from the addition of new segments from a proliferation region. |
| All segments are of the same age and at the same stage of development. | Proglottids vary from one another in age and degree of development. |
| All segments are functionally integrated and interdependent. They work in coordination and preserve the individuality of the body, which helps in locomotion. | The proglottids are independent, self-contained units, as each of them has a full set of sex organs and its own excretory and nervous systems. Each proglottid is a reproductive unit developed for detachment. |

Theories of origin and evolution of metamerism
Pseudometamerism theory: This theory was proposed by Hyman and Goodrich in 1951. It explains the pseudometamerism that occurs in cestodes such as tapeworms. According to this theory, metamerism initially developed secondarily as a result of the repetition of body parts like blood vessels, coelom and nerves. Later, a segmented condition arose through the formation of cross-partitions between them, so that each segment receives a part of each system. This process of forming cross-partitions after basic segmentation is also seen in modern annelids during the development of somites in larval and adult stages. According to this theory, pseudosegmentation is believed to be an adaptation for undulatory movement.
Fission theory: This theory was proposed by Perrier in 1882. It postulates that pseudometamerism evolved in flatworms by strobilation of the body. Strobilation mainly serves to increase the rate of reproduction: the proglottids of helminths are serially arranged segments, in reverse order of age, and they multiply reproductive capacity many times. The theory proposes that metameric segmentation resulted when some non-segmented ancestors divided repeatedly by transverse fission or asexual budding to produce a chain of sub-individuals. Such a process occurs even in modern annelids and flatworms. Later, these sub-individuals integrated morphologically and physiologically into one complex individual. Thus, according to this theory, a segmented animal is a chain of completely coordinated sub-individuals.
Cyclomerism theory: This theory was proposed by Sedgwick in 1884. According to this theory, metamerism in chordates evolved for a better arrangement of organs in the coelom. It assumes that the coelom originated in some ancestral radiate actinozoan coelenterates through the separation of four gastric pouches from the central digestive cavity. Initial division of two pouches resulted in three pairs of coelomic cavities, namely the protocoel, mesocoel and metacoel, in ancestral coelomates. Later, loss of the protocoel and mesocoel led to unsegmented coelomates like molluscs. Then the subdivision of the metacoel produced primary segments, leading to the development of segmented annelids. This provided septa and compartments in the coelom in which organs could be better arranged. The phylogenetic assumption of this theory is that all bilateral metazoans were originally segmented and coelomate, and that acoelomate unsegmented groups like flatworms lost these characters later.
Locomotory theory: This theory was proposed by Clark in 1964. It postulates that metamerism evolved as an adaptation for different kinds of locomotion. It evolved independently in chordates for locomotion, which was previously carried out by lateral undulation of the body in primitive aquatic vertebrates. Annelid metamerism probably evolved for burrowing. Metamerism allowed myotomes (muscle bundles) and nerves to be arranged segmentally for better coordination of the undulatory movement of the body.
Significance of metamerism
- It provides an effective locomotory mechanism, as coordinated contraction along the body generates efficient undulating movement.
- Fluid-filled coelomic compartments provide hydrostatic skeletons for burrowing. Accurate movements can take place through differential turgor pressures produced by the flow of coelomic fluid from one part of the body to the other.
- Different segments can be specialized for different functions, leading to the development of a high grade of organization. This is not clearly marked in annelids, but is well developed in arthropods. For example, the spermatheca and clitellum are involved with reproduction; this gives regional specialization of the body with a proper division of labour.
- Enumerate the types of metamerism.
- Bring out the difference between true and pseudometamerism.
- Study and differentiate between the various theories of the origin and evolution of metamerism in animals.
- Write about the significance of metamerism.
But at a certain age (over 65), daily hydration becomes a challenge as well as a matter of life and death. Why? Because dehydration is common in seniors due to decreased feelings of thirst, medications and diseases that increase fluid needs, and a decrease in overall food and beverage intake. Dehydration can cause confusion, fatigue, hot or cold sensations, muscle cramping, headache, dry mouth, eyes and skin, constipation, dangerous changes to blood pressure, and abnormal blood chemistry (e.g., blood sugar, electrolytes). Dehydration left untreated requires medical attention and can be deadly. It can send you to the hospital in a hurry and into a coma. How much fluid is this, exactly? If you are 65 or older, your mission is to get in at least 8 glasses (1 glass = 8 oz) of fluid every day. If you have kidney or heart problems, please ask your doctor for specific amounts. Remember that all liquid counts (milk, soup, coffee and tea) and some fruits and vegetables do too. Caregivers should make sure the older person has water by his or her side at all times. Encourage frequent drinking in moderate amounts. How to reach this goal? Drink 1 glass with each meal and one in between meals to make sure you get enough. Keep fluid within arm's reach throughout the day and stash one in the car or your bag when you leave the house. - Older people who get enough water tend to suffer less constipation, use fewer laxatives, have fewer falls and, for men, may have a lower risk of bladder cancer. Less constipation may reduce the risk of colorectal cancer. - Drinking at least five 8-ounce glasses of water daily reduces the risk of fatal coronary heart disease among older adults. The Science of Aging Scientists warn that the ability to be aware of and respond to thirst is slowly blunted as we age. As a result, older people do not feel thirst as readily as younger people do. This increases the chances of their consuming less water and consequently suffering dehydration. Less body fluid, lower kidney function. The body loses water as we age because of the loss of muscle mass and a corresponding increase in fat cells. In addition, the kidneys' ability to remove toxins from the blood progressively declines with age. This means the kidneys are not as efficient at concentrating urine in less water, so older people lose more water than younger ones. Drink lots of water! The chances of your getting too much water are slim to none, so drink up! Contact LifeCall Medical Alert Systems, one of the leading providers of BOSCH in-home health care monitoring solutions for seniors and at-risk persons seeking to retain their independence and remain in their own homes. www.lifecall.com
Such recurrences contribute to memory consolidation – i.e. to the stabilization of memory contents. Scientists of the German Center for Neurodegenerative Diseases (DZNE) and the University of Bonn are reporting these findings in the current issue of "The Journal of Neuroscience". The researchers, headed by Nikolai Axmacher, performed a memory test on a group of participants while monitoring their brain activity by functional magnetic resonance imaging (fMRI). The experimental setup comprised several resting phases, including a nap inside a neuroimaging scanner. The study indicates that resting periods can generally promote memory performance. Depending on one's mood and activity, different regions are active in the human brain. Perceptions and thoughts also influence this condition, and this results in a pattern of neuronal activity which is linked to the experienced situation. When it is recalled, similar patterns, which are slumbering in the brain, are reactivated. How this happens is still largely unknown. The prevalent theory of memory formation assumes that memories are stored in a gradual manner. At first, the brain stores new information only temporarily. For memories to remain in the long term, a further step is required. "We call it consolidation," explains Dr. Nikolai Axmacher, who is a researcher at the Department of Epileptology of the University of Bonn and at the Bonn site of the DZNE. "We do not know exactly how this happens. However, studies suggest that a process we call reactivation is of importance. When this occurs, the brain replays activity patterns associated with a particular memory. In principle, this is a familiar concept. It is a fact that things that are actively repeated and practiced are better memorized. However, we assume that a reactivation of memory contents may also happen spontaneously, without there being an external trigger." A memory test inside the scanner Axmacher and his team tested this hypothesis in an experiment that involved ten healthy participants with an average age of 24 years. They were shown a series of pictures, which displayed – among other things – frogs, trees, airplanes and people. Each of these pictures was associated with a white square as a label at a different location. The subjects were asked to memorize the position of the square. At the end of the experiment all images were shown again, but this time without the label. The study participants were then asked to indicate with a mouse cursor where the missing mark was originally located. Memory performance was measured as the distance between the correct and the indicated position. "This is an associative task. Visual and spatial perceptions have to be linked together," the researcher explains. "Such tasks involve several brain regions. These include the visual cortex and the hippocampus, which takes part in many memory processes." Brain activity was recorded by fMRI during the entire experiment, which lasted several hours and included resting periods and a nap inside the neuroimaging scanner. Recurrent brain patterns increased the accuracy For data processing, a pattern recognition algorithm was trained to look for similarities between neuronal patterns observed during initial encoding and patterns appearing on later occasions. "This method is complex, but quite effective," Axmacher says.
"Analysis showed that neuronal activity associated with images that were shown initially did reappear during subsequent resting periods and in the sleeping phase." Memory performance correlated with the replay of neuronal activity patterns. "The more frequently a pattern had reappeared, the more accurately test participants could label the corresponding image," Axmacher summarizes the findings. "These results support our assumption that neural patterns can spontaneously reappear and that they promote the formation of long-lasting memory contents. There was already evidence for this from animal studies. Our experiment shows that this phenomenon also happens in humans." Memory performance benefits from resting periods The study indicates that resting periods can generally foster memory performance. "However, our data did not show whether sleeping had a particular effect. This may be due to the experimental setup, which only allowed for a comparatively short nap," Axmacher reckons. "By contrast, night sleep is considered to be beneficial for the consolidation of memory contents. But it usually takes many hours and includes multiple transitions between different stages of sleep. However, other studies suggest that even short naps may positively affect memory consolidation." An objective look at memory contents It remains a matter of speculation whether the recurring brain patterns triggered conscious memories or whether they remained below the threshold of perception. "I think it is reasonable to assume that during resting periods the test participants let their minds wander and that they recalled images they had just seen before. But this is a matter of the subjective perception of the test participants. That's something we did not look at, because it is not essential for our investigation," Axmacher says. "The strength of our approach lies rather in the fact that we look at memory contents from the outside, in an objective manner, and that we can evaluate them by pattern recognition. This opens the way to many questions of research. For example, brain patterns that reoccur spontaneously are also of interest in the context of experimental dream research."
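The pattern-recognition step described in this article can be pictured with a small sketch: templates learned during encoding are compared against rest-period activity. This is only a generic illustration of the idea using correlation as the similarity measure, not the classifier the DZNE team actually used, and all data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)
n_voxels = 500

# Template activity patterns recorded while each image was viewed.
encoding_templates = {name: rng.normal(size=n_voxels)
                      for name in ("frog", "tree", "airplane")}

def best_match(rest_pattern, templates, threshold=0.1):
    """Return the template most similar to a rest-period pattern."""
    scores = {name: np.corrcoef(rest_pattern, t)[0, 1]
              for name, t in templates.items()}
    name = max(scores, key=scores.get)
    return (name, scores[name]) if scores[name] > threshold else (None, 0.0)

# A noisy reoccurrence of the 'frog' pattern during rest.
rest = encoding_templates["frog"] + rng.normal(scale=1.0, size=n_voxels)
print(best_match(rest, encoding_templates))  # ('frog', ~0.7)
```

Counting how often each template wins above threshold across rest scans gives the kind of reactivation frequency that the study relates to memory accuracy.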
Off to a great start with Christ-Centered A-B-C’s! - AK3 (Older 3's) - K4 (Young 4's) Note: Older 4's will normally do better starting with Phonics Kit B (Part 002). Just teach in sequence at a pace that best fits your child's own needs. Skills Covered at Phonics Level A: - Aa-Zz Letters/Sounds - Short Vowels - Blends (single consonant and a vowel blended together to form one speech sound) - Reading Short Vowel Words - Spelling Blends and Short Vowel Words - Brief introduction to Long Vowels - Manuscript Printing If you purchase Phonics Kit A (Part 001), when ready for Level B, you will only need to order Phonics Add-On Kit B (Part 002AO). Phonics Kit A includes the following: Christ-Centered Phonics Foundation Set: Click any part number below to link to more information about that product. - Christ-Centered Phonics Teacher's Guide to Reading plus the Christ-Centered Phonics Charts and Visual Aids 3-ring binder used at Levels A, B, and C (Part 200-SET) - Christ-Centered Phonics Flashcards 1-118: Master Set (Part 201-RV). - Christ-Centered Phonics Flashcards Drill CD (Part 201.5) Lessons/Workbook/Answer Key: Click any part number below to link to more information about that product. - Christ-Centered Phonics Lessons for Aa-Zz: Level A (Part 202 ) - Christ-Centered Phonics Workbook: Level A (Part 203) - Christ-Centered Phonics Workbook Instructions/Answer Key: Level A (Part 203AK) - Alphabet & Numbers Tracing Masters Sheets (Part 216) Note: Although these tracing sheets may be reproduced, for repeated use we suggest laminating each page (or inserting in a plastic report cover). For writing practices, we recommend using a dry erase marker or washable overhead transparency pen because either wipes off well. Phonics Kit A (Part 001) contains 31 full lessons, divided into three days each (93 days total). Each lesson could be spread over several weeks, or completed in two or three days per week. A daily lesson plus workbook assignment generally takes about 20 minutes.
The Wonderful World of Color In many 3D applications, when a model is created, by default that model will be a single color - usually gray. A real-world analog to this would be sculpting in clay. When a clay sculpture is created, the color of the sculpture is the color of the clay. It is only after the artist paints or glazes the sculpture that it takes on a unique color. The same is true when modeling in 3D; the 3D "clay" is a single color determined by the modeling application. Instead of painting directly onto the model, in DAZ Studio we apply "Textures" or "Image Maps" to give the model color. Note: Most models purchased from DAZ 3D will load with a texture applied to them. That doesn't mean you won't be able to customize the surface. Chapter 3 is devoted entirely to the surface of your model. We will go over the tools available in DAZ Studio which allow you to change your object from its default gray state to something more interesting. The possibilities are endless; you can focus on hyper-realism and try to get your surfaces to mimic those in the real world as closely as possible, you can stylize your surfaces to give your render a cartoon look and feel, or you might choose something somewhere in between. With DAZ Studio the power is in your hands. Before we jump into the process of applying textures to your figure, there are a few concepts we will cover as a foundation. We'll cover these in the next few sections. The first concept to understand is what a "Surface" is. It isn't a terribly difficult concept to master, and you don't need a vast knowledge of the subject to be successful in DAZ Studio. A "Surface" is a specific subsection of the 3D model - a group of polygons that share common properties describing what that part of the model looks like. When an artist models an object, he or she will assign specific sections of the model to a named surface. When they are finished, every polygon of the model will be assigned to one surface or another. A simple model may have only one surface, while a complex model can have multiple surfaces. The Genesis figure, for example, has 26 surfaces. Note: You can view a complete list of a model's surfaces in the Surfaces pane. More on this in Section 3.5. The concept may be easier to visualize with an example. Imagine you have modeled a jacket. Most jackets will have buttons, zippers, buckles, rivets, etc., as well as the cloth material for the jacket. Each of these parts of the jacket is different, and looks different. A 3D modeler would be wise to assign the buttons to their own surface, the zipper to its own, etc. This allows the user to customize the look of each part individually. In this scenario a user would be able to give the cloth a matte look, while making the buttons and buckle shiny. In DAZ Studio the user can customize each surface individually rather than having the settings apply to the surface of the entire model. Now that you know what a surface is, the next logical concept to introduce is image maps. An "Image Map" is a 2-dimensional image that "wraps" around a surface. The majority of surfacing is done with image maps. They provide the easiest way to get results that don't look uniform across a surface (e.g. skin with freckles). Essentially, they add detail to the surface of the model. Note: Image maps are occasionally referred to as "Texture Maps", especially when designed to be used in the 'Diffuse Color' property. See Section 3.5.1 below.
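Before moving on to how maps wrap around a model, one way to picture the surface concept just described is as a simple mapping from named surfaces to polygons and shared properties. This sketch is purely conceptual - it does not reflect DAZ Studio's internal data structures, and every name in it is made up.

```python
# Conceptual only: a model as named surfaces over polygon indices.
jacket = {
    "Cloth":   {"polygons": range(0, 5000),
                "properties": {"diffuse_color": (0.2, 0.3, 0.6),
                               "specular_strength": 0.05}},   # matte
    "Buttons": {"polygons": range(5000, 5200),
                "properties": {"diffuse_color": (0.8, 0.7, 0.2),
                               "specular_strength": 0.9}},    # shiny
    "Zipper":  {"polygons": range(5200, 5400),
                "properties": {"diffuse_color": (0.7, 0.7, 0.7),
                               "specular_strength": 0.8}},
}

# Editing one surface leaves the others untouched.
jacket["Buttons"]["properties"]["diffuse_color"] = (0.1, 0.1, 0.1)
```

The key point is that every polygon belongs to exactly one surface, so changing a surface's properties affects all of its polygons and nothing else.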
The way the image map wraps around the model is determined by a "UV Set" - a set of 2-dimensional coordinates that correspond to 3-dimensional points on the model. The intricacies of UV mapping won't be covered in this guide. However, DAZ Studio allows multiple UV sets for a figure. We will cover changing the UV set of a figure in Section 3.5.8. The example image maps for the 'Face' and 'Torso' of Genesis, shown alongside those image maps applied to the model, should help you visualize how 2-dimensional image maps work in 3-dimensional space. The last foundational concept we need to cover before getting into the meat of this chapter is that of a "Surface Shader." The concept of a surface shader is a little more abstract than that of a surface or an image map, the reason being that you can't really see a surface shader in your scene - you can only see the results of one. The simplest way to describe a surface shader is to say that it is a program, run by the "Render Engine" for every visible/sampled point on a surface, that determines what the final color and opacity of that surface should be. It calculates how a surface reacts to light - whether and how it reflects or refracts, and so on. A surface shader ultimately determines the RGB value for every pixel in the scene. If we take this a step further, we can say that a surface shader is a "Shader" that is specific to a surface or multiple surfaces. In DAZ Studio there are 5 different types of shaders: Surface, Light, Volume, Imager and Displacement - with [custom] surface shaders being the most common. For the scope of this User Guide, we will only cover surface shaders. Other shaders are covered in the online documentation. Fortunately, as complex a topic as shaders are, DAZ Studio comes with several ready-made shaders so that you don't need to worry about creating or writing your own - we'll leave that to those with advanced degrees in computer science and physics. All you need to know is that the surface shader that is applied to a surface determines what properties are available for that particular surface in the Surfaces pane. We'll cover how to find out what surface shader you are using later on in this section. The most common type of custom shader is a surface shader - custom meaning it is not a DAZ Studio default shader. You may encounter instances where people refer to a surface shader simply as a shader. As discussed previously, there are different types of shaders. While this isn't incorrect (a surface shader is one type of shader), it is good practice to include the type of shader when referring to it. You may also encounter instances where people refer to a "Shader Preset" as a shader - this is incorrect. Note: The 3Delight render of a surface with a custom surface shader applied may be dramatically different from the "Viewport" preview. So far in this chapter we've covered surfaces, image maps and surface shaders - all to prepare you to use the Surfaces pane. The Surfaces pane is where you will customize the surfaces of your objects in DAZ Studio. In the Hollywood Blvd layout, the Surfaces pane is located on the left hand side of the interface, in the 'Actors, Wardrobe & Props' activity. The Surfaces pane is divided into three "Pages." You can access each of these pages at the top of the pane. They are the Presets page, the Editor page and the Shader Baker page. We are going to focus on the Editor page in this section.
If the Editor page isn't selected, go ahead and click on the 'Editor' label at the top of the pane to bring the Editor page forward.

The Editor page of the Surfaces pane is organized similarly to the Parameters pane. On the left hand side you will see your current scene selection, as well as any items associated with the current scene selection such as clothing, hair or props. You can expand any of the objects in this list to reveal their surfaces.

Note: The current scene selection must have geometry in order for it to show up in the Editor page of the Surfaces pane. Objects without geometry, such as “Lights” and “Cameras”, won't show up in the Surfaces pane.

If you still have the Genesis 2 Female figure loaded in the scene, you should see her listed on the left hand side of the Surfaces pane. If Genesis 2 Female isn't in the scene, go ahead and load her into the scene now. If you still don't see her in the Surfaces pane, check the Scene pane to make sure that Genesis 2 Female is your current scene selection.

Note: For instructions on how to load content into the scene see Section 1.5.1.

Now that Genesis 2 Female is in your scene and you have her selected, you should see her in the Surfaces pane on the left. Click the arrow next to Genesis 2 Female to reveal her surface selection sets and her surfaces. A “Surface Selection Set” is just a predetermined group of surfaces. Selection sets allow you to edit surfaces that are commonly edited together, such as the face, head and lips, without having to select the individual surfaces yourself. Genesis 2 Female has several surface selection sets. You can browse through them by clicking the arrows next to 'Default Templates' or 'Legacy Surfaces.'

Clicking the arrow next to 'Surfaces' will reveal all of the surfaces for Genesis 2 Female. This is where you can select individual surfaces to edit. To select a surface, simply left click on it. You can select multiple surfaces at the same time by holding the Ctrl key while left clicking on the PC, or holding the Cmd key while left clicking on the Mac.

The left column of the Surfaces pane also gives you the option to display all properties in the right hand column. To do this, left click on the 'All' “Filter.” You can also choose to display only properties that are currently in use. To do this, left click on the 'Currently Used' filter.

When you select one or more surfaces, one or more surface selection sets, or an entire object, you will see the properties associated with those surfaces on the right hand side of the pane. Remember, from our discussion about surface shaders (Section 3.4), that it is the shader that determines which properties are available for the selected surfaces. The shader that is applied to the current selection will be listed at the top of the Surfaces pane. The 'DAZ Studio Default' surface shader is the most common, as it is the default surface shader for DAZ Studio, but you will also see the 'omUberSurface', the 'AoA_SubSurface' and other custom surface shaders on occasion.

Regardless of the surface shader that is applied to the surface, there are a few properties that are fairly common among a majority of surface shaders. They are: Diffuse Color, Diffuse Strength, Specular Color, Specular Strength, Glossiness, Ambient Color, Ambient Strength, Opacity Strength, Bump Strength, Displacement Strength and UV Set. The following sections will briefly describe each of these properties and what they do.

In the real world, the surface of an object absorbs certain wavelengths of light and reflects others. The color we see is determined by the wavelength of light that is reflected by the surface of the object.
A diffuse reflection is scattered, meaning a beam of light hitting the surface is reflected simultaneously in multiple directions. The “Diffuse Color” of an object represents this scattered, diffused reflection of light. The simplest explanation for diffuse color is that it is what we perceive as the [matte] color of the surface.

There are a couple of ways you can define the diffuse color of a surface in DAZ Studio. The simplest way is to change the RGB color value using the 'Diffuse Color' property. This will affect the entire surface uniformly. To change the RGB value you can left click and drag any of the numbers. You can also left click directly on the color, between the numbers, to open the 'Select Color' dialog. This dialog allows you to pick a color from a color palette.

The second way to edit the diffuse color of a surface is to load an image map - sometimes referred to as a “Texture Map.” If you have an image map that matches the current UV set for the surface, you can load it by clicking the “Image Menu Button” on the 'Diffuse Color' property. The image menu button is on the left side of the property and is decorated with a downward pointing arrow. Clicking the image menu button will open a drop down menu with a list of recently used textures as well as a few other actions. Click 'Browse…' to open a Windows Explorer window or an OS X Finder window that will allow you to browse your hard drive for the desired image map. Image maps allow for a more realistic look because they allow you to have more than just a single color applied across the entire surface.

“Diffuse Strength” determines the amount to which the diffuse color contributes to the overall appearance of the surface. You can think of it as the percentage of light that is reflected by the surface. When the 'Diffuse Strength' property is set to a value of 0%, all light hitting the surface will be absorbed and the surface will appear black. When the 'Diffuse Strength' property is set to a value of 100%, all light with a wavelength matching the diffuse color will be reflected, giving the color full strength.

The 'Diffuse Strength' property can be controlled in two ways. The first is through the slider. This will affect the entire surface uniformly. You can adjust the slider to a value anywhere between 0% and 100%. As with the 'Diffuse Color' property, you can also add an image map to the 'Diffuse Strength' property. The difference is that a diffuse strength image map will be a grayscale image. Pixels in the image that are white correspond to a 100% value. Pixels in the image that are black correspond to a 0% value. Gray values fall somewhere in between; the darker the gray, the lower the value. Using a grayscale image map allows you to vary the value across a surface. The image map can be loaded using the 'Diffuse Strength' property's image menu button. When an image map is applied, the value of the 'Diffuse Strength' slider acts as a multiplier for the value in the map. (The arithmetic behind this map-times-slider behavior is sketched at the end of this section.)

When a beam of light hits a surface and is reflected in a single direction, that reflection is referred to as a specular reflection. In DAZ Studio, “Specular Color” refers to the highlights caused by this direct reflection of light. This property isn't used to create a mirror-like effect. It merely represents the color of the highlight on the surface. You can change the 'Specular Color' property in the same way that you can change the 'Diffuse Color' property - with either an image map, or with the RGB value for the surface.
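The map-times-slider rule mentioned above applies to all of the strength-style properties in this chapter: Diffuse Strength here, and Specular Strength, Glossiness, Ambient Strength, Opacity Strength and Bump Strength below. Here is a short sketch of the arithmetic - our own illustration in Python, not actual DAZ Studio code:

```python
def effective_strength(map_pixel, slider_percent):
    """Effective strength at one point of a surface.

    Grayscale map pixels run from 0 (black = 0%) to 255 (white = 100%);
    the slider value then acts as a multiplier on the map value.
    """
    return (map_pixel / 255.0) * (slider_percent / 100.0)

print(effective_strength(255, 100))  # white pixel, slider at 100% -> 1.0 (full)
print(effective_strength(128, 100))  # mid gray, slider at 100%    -> ~0.5
print(effective_strength(255, 50))   # white pixel, slider at 50%  -> 0.5
```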
“Specular Strength” is similar to diffuse strength in that it represents the percentage of light that is reflected from the surface. In this instance, however, it only applies to specular reflections. At a value of 0%, there is no specular reflection, and thus no highlights. At a value of 100%, the specular value is at full strength and all light that matches the wavelength of the specular color is reflected directly from the surface. The 'Specular Strength' property can be adjusted in the same manner as the 'Diffuse Strength' property. When an image map is applied, the value of the 'Specular Strength' slider acts as a multiplier for the value in the map.

“Glossiness” determines the size of the specular highlight on a surface. The shinier, or more glossy, a surface is, the smaller and sharper the specular highlight will be. A surface with a low glossiness value will have its specular highlight diffused across a larger surface area. Glossiness does not affect how strong the highlight is (that is handled by the 'Specular Strength' property), just the size of the specular highlight. However, larger specular highlights are perceived as being less intense since they are diffused across a larger surface area. You can see examples of how the 'Glossiness' property affects the size of the specular highlight in the images below. (A small numeric sketch of this falloff appears at the end of this section.)

The 'Glossiness' property can be manipulated just like the 'Specular Strength' property, or other strength properties. You can adjust the slider to change the glossiness of the entire surface - the higher the glossiness value, the more concentrated the highlight. You can also apply a grayscale image map to the 'Glossiness' property. When an image map is applied, the value of the 'Glossiness' slider acts as a multiplier for the value in the map.

In the real world, rays of light are constantly bouncing around. Ambient light is the term used to describe the uniform effect that this bounced light has on a scene, as opposed to direct light that comes from a defined source. DAZ Studio mimics this effect, but instead of providing a single point of control in the form of a light that affects all surfaces in the same way, DAZ Studio provides a more flexible means whereby each surface has its own controls that can be set independently to produce various effects. It is the ambient light that affects the color and value of core shadows on a surface.

The 'Ambient Color' property determines the color of the core shadows created on a model's surface as a result of the light in the scene. By default the 'Ambient Color' property is set to an RGB value of 0, 0, 0 - or black. This mimics the way ambient light behaves in most real-world settings. However, changing the 'Ambient Color' property can create some really cool effects; the most common of these is getting a surface to “glow” in a low light area. The surface isn't actually glowing (it doesn't emit light), but in a low light area it can appear to glow if the value of the 'Ambient Color' property is set to something lighter than the rest of the scene.

“Ambient Strength” determines the amount of simulated ambient light that the surface will receive. Remember that the ambient light effect is not propagated to the rest of the scene. The value of the 'Ambient Strength' property will only affect the surface(s) you have selected. You can change ambient strength the same way you change diffuse or specular strength. When an image map is applied to the property, the value of the slider acts as a multiplier for the value in the map.
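Here is the small numeric sketch of glossiness promised above. Classic shading models such as Phong control highlight size with an exponent: the higher the exponent, the faster the highlight intensity falls off as you move away from the mirror direction, so the smaller and sharper the highlight appears. The mapping from the Glossiness slider to an exponent below is invented for illustration - DAZ Studio's exact shader math is not documented here:

```python
def highlight_intensity(cos_angle, glossiness_percent):
    """Relative highlight brightness, Phong-style: cos(angle) ** exponent.

    cos_angle is the cosine of the angle away from the mirror direction.
    The slider-to-exponent mapping (1..100) is a made-up example.
    """
    exponent = 1.0 + 99.0 * (glossiness_percent / 100.0)
    return max(0.0, cos_angle) ** exponent

cos10 = 0.9848  # about 10 degrees off the mirror direction
print(highlight_intensity(cos10, 10))   # low glossiness: broad highlight, ~0.85 here
print(highlight_intensity(cos10, 100))  # high glossiness: tight highlight, ~0.21 here
```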
“Opacity” refers to the transparency, or rather the lack of transparency, of the object. If you remember way back to primary school - transparent means completely see through, translucent is partially see through, and opaque isn't see through at all. When opacity is at 100%, the surface is 100% opaque. When opacity is at 0%, the surface is 100% transparent, or 0% opaque. Values between 0% and 100% make the surface translucent.

'Opacity Strength' can be adjusted in a manner similar to the other strength values we've discussed. You can use the slider of the 'Opacity Strength' property to affect the opacity of the entire surface. In many cases, however, you will only want part of a surface to be transparent. This is done using an opacity map. An “Opacity Map” is a grayscale image map. Black in the image corresponds to an opacity value of 0%, and thus a fully transparent surface. White corresponds to an opacity value of 100%, and thus a fully opaque surface. An opacity map allows you to clip out sections of your surface. You can load an opacity map the same way you would load other image maps - with the image menu button for the 'Opacity Strength' property. When an image map is applied, the value of the 'Opacity Strength' slider acts as a multiplier for the value in the map.

Note: Opacity maps are commonly referred to as transparency maps. The term “Transparency Map” is a misnomer, as image maps are typically named according to the meaning of their full value. Technically speaking, a transparency map would be the inverse of an opacity map. However, the two terms are used interchangeably.

When someone creates a 3D model, the surface of the model is usually smooth. In the real world, however, human skin, walls, and other surfaces are rarely perfectly smooth. Human skin has pores and other imperfections; most walls have spackle or other texture to them. “Bump” allows you to simulate these imperfections without actually changing the mesh of the object.

DAZ Studio simulates these imperfections through a specific type of image map called a bump map. A “Bump Map” is a grayscale image that indicates the strength of the bumps to be simulated. By default, an RGB value of 128, 128, 128 corresponds to a neutral bump. Anything lighter indicates bump simulated in a positive direction; anything darker simulates bump in the negative direction. Once an image map is loaded for the 'Bump Strength' property, a slider to adjust overall “Bump Strength” will become available. You can load a bump map using the image menu button for the 'Bump Strength' property. When an image map is applied, the value of the 'Bump Strength' slider acts as a multiplier for the value in the map.

Most surface shaders will offer two additional bump related properties labeled 'Bump Minimum' and 'Bump Maximum.' These values determine the simulated bump minima and maxima. 'Bump Minimum' and 'Bump Maximum' can also shift or scale the values from a bump map.

Note: Bump will not be seen until the image is rendered.

“Displacement” is similar to bump in that it allows you to add details to the surface of the model without having to actually model the details in. The difference is that bump is a simulated effect, while displacement actually changes the shape of the mesh. To explain the difference, let's use an example. Think of a brick wall. One might use bump to simulate the roughness on the surface of each brick. To simulate the gaps caused by the mortar, one would use displacement. (Both are sketched in code below.)
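To underline the brick wall comparison, here is a sketch of both effects. The numbers and function names are our own illustration rather than DAZ Studio internals: the bump function only computes a shading offset (the mesh never moves), while the displacement function actually moves a vertex along its normal. Recall that 128 is the neutral gray for both map types, and that 1 DAZ Studio unit equals 1 centimeter:

```python
def bump_offset(pixel, bump_min=-0.1, bump_max=0.1, strength_percent=100):
    """Simulated height offset used only for shading; the mesh is unchanged.

    128 is neutral; lighter pixels lean toward bump_max (outward),
    darker pixels toward bump_min (inward). The slider scales the result.
    """
    signed = (pixel - 128) / 127.0  # roughly -1.0 .. +1.0
    limit = bump_max if signed >= 0 else -bump_min
    return signed * limit * (strength_percent / 100.0)

def displace_vertex(position, normal, pixel, disp_min=-0.5, disp_max=0.5):
    """Actually move a vertex along its normal, in centimeters.

    The map value (0..255) is mapped onto disp_min..disp_max, with
    neutral gray landing near zero when the range is symmetric. Real
    displacement happens at render time, not on the stored mesh.
    """
    amount = disp_min + (pixel / 255.0) * (disp_max - disp_min)
    return tuple(p + n * amount for p, n in zip(position, normal))

print(bump_offset(128))                            # 0.0: neutral gray, no bump
print(bump_offset(255))                            # 0.1: full positive bump
print(displace_vertex((0, 0, 0), (0, 0, 1), 255))  # (0, 0, 0.5): 0.5 cm outward
```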
Just as with 'Bump Strength', you must load an image map to use the 'Displacement Strength' property. An image map used for displacement is called a “Displacement Map.” A displacement map is also a grayscale image and can be loaded using the 'Displacement Strength' property's image menu button. By default, an RGB value of 128, 128, 128 indicates no displacement. Anything lighter than this is considered positive displacement (i.e. the mesh will be displaced outwards), while anything darker is negative displacement (i.e. the mesh will be displaced inwards).

Some surface shaders will allow you to set the minimum and maximum values for displacement. These determine how far the displacement of the mesh will go when the extreme values in the map are reached. The 'Minimum Displacement' value corresponds to negative displacement, while 'Maximum Displacement' corresponds to positive displacement. The 'Minimum Displacement' and 'Maximum Displacement' properties can be used to shift or scale the values of a displacement map.

Note: 1 unit in DAZ Studio equals 1 centimeter. Keep this in mind when setting minimum and maximum displacement values.

Note: You will not see the effects of displacement until the image is rendered.

As explained in Section 3.3, a “UV Set” is a set of 2-dimensional coordinates that correspond to a 3-dimensional point on a model. The UV set determines how a 2-dimensional image will “wrap” around the 3-dimensional model. A good UV set will minimize stretching and compression while placing seams in logical or hidden locations of the model. Because the Genesis and Genesis 2 figures have incredible morphing capabilities, DAZ Studio allows for multiple UV sets. If an artist creates an extreme morph for Genesis or one of the Genesis 2 figures, they can include an additional UV set that will account for any distortions caused by the changes in the morph. Support for multiple UV sets also increases texture compatibility across figures.

The 'UV Set' property on a surface determines which UV set is used for that surface. It is important that the UV set and image maps for a particular surface match. If they don't, you are likely to get distortion and seams. You can change the UV set for a single surface, for multiple surfaces, or (more commonly) for an entire figure. To switch the UV set for your current selection in the Surfaces pane, click the UV set selection list and choose a UV set from those that are listed.

Getting all of the settings right for each surface - loading image maps, setting values, etc. - can be tedious. Most products you purchase from the DAZ 3D store will come with presets that set values and load image maps onto the properties that, together, describe the surface(s) of a figure or object - collectively referred to as a “Material.” These presets are called “Material(s) Presets” and are by far the easiest way to set the properties for the surface(s) of your model. Material(s) Presets can be loaded through the Presets page of the Surfaces pane as well as through the Smart Content pane and the Content Library pane.

To access the Presets page, first make sure the Surfaces pane is open. At the top of the pane you will see all of its pages (Presets, Editor, and Shader Baker). Click the 'Presets' label to bring the Presets page forward. The Presets page of the Surfaces pane is organized and functions very similarly to the Smart Content pane. On the left hand side you have a list of categories, or the “Category View”, which can be expanded or collapsed.
On the right hand side, in the “Asset View”, you will find icons for each file in the selected category. Remember, since the Presets page works like the Smart Content pane, you must have a figure selected before you will see any of the presets. If you still have Genesis 2 Female in your scene, make sure she is your current scene selection. If she is not, select her in the Scene pane.

Once she is selected, all of the Material(s) Presets available for her can be accessed. By default she comes with several eye and makeup options, as well as one texture for the whole body named 'Bree All.' If you double click any of the icons, it will load that Material(s) Preset onto the figure. Feel free to try out some of the eye or makeup options. If you use the 'Bree All' preset, it will restore the materials for Genesis 2 Female back to their original state.

You can also load Material(s) Presets from the Smart Content pane and the Content Library pane. Any Material(s) Preset available in the Smart Content pane will be available in the Presets page of the Surfaces pane. Keep in mind that this type of preset must apply to an object in your scene - meaning Material(s) Presets won't load unless you have an object selected. Make sure that you select your target object before loading a Material(s) Preset for it.

Material(s) Presets are great. They can save you a lot of tedious work, and can cut time out of your workflow. However, many artists view Material(s) Presets as a starting point. Don't feel limited by the presets available to you. Once you have loaded a Material(s) Preset, feel free to play around with any of the properties on the Editor page of the Surfaces pane. This will help you learn how each property affects the surface of your object. Remember, you can always purchase additional textures and Material(s) Presets in the DAZ 3D store. In fact, one of the best ways to learn about surface properties and what they do is to dissect Material(s) Presets purchased from the DAZ 3D store.

In addition to selecting a surface within the Surfaces pane, you can also select a surface directly in the “Viewport” using the Surface Selection Tool. This offers a few advantages. The first is that it allows you to see exactly what areas of the model are part of each surface. The second is that it gives you the ability to select a surface even if you don't know what the name of the surface is. To use the Surface Selection Tool, first activate it in the toolbar by left clicking on the tool icon. Once the tool is activated, you can hover your cursor over the figure in the viewport - the surface you are currently hovering over will be highlighted and its name will be displayed next to your cursor. If you left click while a surface is highlighted, that surface will become selected in the Surfaces pane. Multiple selections can be made by holding the Ctrl/Cmd key.

So now you've done the work to set up your materials. It doesn't matter if you've only tweaked a few surface properties from a Material(s) Preset, or set up all of the materials yourself - you should be proud of your work, and it shouldn't go to waste. DAZ Studio allows you to save Material(s) Presets that preserve all the hard work you've put into the materials of your model. To save a Material(s) Preset, you must first make sure the object you want to save the preset for is your current scene selection. If it is not, select that object in the Scene pane. Once the desired object is selected, navigate to the File > Save As > Material(s) Preset… action and click it.
This will open the 'Filtered Save' dialog, where you can choose a save location and name your preset. Once you are happy with the name and location, click 'Save.' Take note of the location you've saved to so that you can find the preset later.

You should now see the 'Material(s) Preset Save Options' dialog. This dialog allows you to choose which materials of the object to include in the preset - you may only want to include a few materials, for example if you are saving a preset that only affects the eyes of a figure. Each surface to be included will have a checkmark. If you don't want a surface included, uncheck the box next to that surface. You can also choose which properties are included for each surface. Click the arrow next to a surface and you will see each property used to define the material. You can check or uncheck properties as desired. Once you are satisfied, click 'Accept' to save the preset.

You will be able to find your newly saved preset in the Presets page of the Surfaces pane, in the Smart Content pane, or in the Content Library pane. If you have the object you saved the preset for selected, you will find your preset under the 'Unassigned' category. Just double click the preset, or drag and drop it onto your object, to load it.

The preview in the viewport often isn't sufficient to see exactly how the materials you've set up in the Surfaces pane will look. Many of the surface's properties don't take effect until after you've rendered the scene. Unfortunately, rendering is a very resource intensive process, and it can take a long time to render an entire scene. DAZ Studio provides a Spot Render Tool that allows you to render only part of a scene. You can use the Spot Render Tool to quickly check what your materials look like when rendered. To use the Spot Render Tool, simply click on the Spot Render Tool icon in the toolbar. Once the tool is activated, you need only left click and drag in the viewport. When you do this, a rectangular marquee will be drawn and DAZ Studio will render everything within the marquee using your current render settings. The render will appear directly in the viewport.

That's it for surfaces and materials. We hope you're not overwhelmed and instead see the opportunities they provide. Creating realistic looking materials takes practice and experience. The best way to get good at setting up materials is to practice and experiment. Things get a bit more fun and a lot less technical in the next chapter, where we talk about shaping your figure.
Silicosis is a respiratory disease caused by breathing in (inhaling) silica dust. It is an occupational lung disease that develops over time when dust that contains silica is inhaled into the lungs. Other examples of occupational lung disease include coalworker's pneumoconiosis and asbestosis.

The name silicosis (from the Latin silex, or flint) was originally used in 1870 by Achille Visconti (1836-1911), prosector in the Ospedale Maggiore of Milan. The recognition of respiratory problems from breathing in dust dates back to the ancient Greeks and Romans. Agricola, in the mid-16th century, wrote about lung problems from dust inhalation in miners. In 1713, Bernardino Ramazzini noted asthmatic symptoms and sand-like substances in the lungs of stone cutters. With industrialization, as opposed to hand tools, came increased production of dust. The pneumatic hammer drill was introduced in 1897 and sandblasting in about 1904, both significantly contributing to the increased prevalence of silicosis.

Classification of silicosis is made according to the disease's severity (including radiographic pattern), onset, and rapidity of progression. The types include:

Chronic simple silicosis: Usually results from long-term exposure (10 years or more) to relatively low concentrations of silica dust, and usually appears 10-30 years after first exposure. This is the most common type of silicosis. Patients with this type of silicosis, especially early on, may not have obvious signs or symptoms of disease, but abnormalities may be detected by x-ray. Chronic cough and exertional dyspnea are common findings. Radiographically, chronic simple silicosis reveals a profusion of small (<10 mm in diameter) opacities, typically rounded, and predominating in the upper lung zones.

Accelerated silicosis: Silicosis that develops 5-10 years after first exposure to higher concentrations of silica dust. Symptoms and x-ray findings are similar to chronic simple silicosis, but occur earlier and tend to progress more rapidly. Patients with accelerated silicosis are at greater risk for complicated disease, including progressive massive fibrosis (PMF).

Complicated silicosis: Silicosis can become “complicated” by the development of severe scarring (progressive massive fibrosis, also known as conglomerate silicosis), where the small nodules gradually become confluent, reaching a size of 1 cm or greater. PMF is associated with more severe symptoms and respiratory impairment than simple disease. Silicosis can also be complicated by other lung disease, such as tuberculosis, non-tuberculous mycobacterial infection, and fungal infection, certain autoimmune diseases, and lung cancer. Complicated silicosis is more common with accelerated silicosis than with the chronic variety.

Acute silicosis: Silicosis that develops a few weeks to 5 years after exposure to high concentrations of respirable silica dust. This is also known as silicoproteinosis. Symptoms of acute silicosis include a more rapid onset of severe disabling shortness of breath, cough, weakness, and weight loss, often leading to death. The x-ray usually reveals diffuse alveolar filling with air bronchograms, described as a ground-glass appearance, similar to pneumonia, pulmonary edema, alveolar hemorrhage, and alveolar cell lung cancer.
Because chronic silicosis is slow to develop, signs and symptoms may not appear until years after exposure. Signs and symptoms include:

*Dyspnea (shortness of breath) exacerbated by exertion
*Cough, often persistent and sometimes severe
*Tachypnea (rapid breathing), which is often labored
*Loss of appetite and weight loss
*Gradual dark shallow rifts in nails, eventually leading to cracks as protein fibers within nail beds are destroyed

In advanced cases, the following may also occur:

*Cyanosis (blue skin)
*Cor pulmonale (right ventricle heart disease)

Patients with silicosis are particularly susceptible to tuberculosis (TB) infection—known as silicotuberculosis. The reason for the increased risk—a 3-fold increase in incidence—is not well understood. It is thought that silica damages pulmonary macrophages, inhibiting their ability to kill mycobacteria. Even workers with prolonged silica exposure, but without silicosis, are at a similarly increased risk for TB.

Pulmonary complications of silicosis also include chronic bronchitis and airflow limitation (indistinguishable from that caused by smoking), non-tuberculous Mycobacterium infection, fungal lung infection, compensatory emphysema, and pneumothorax. There are some data revealing an association between silicosis and certain autoimmune diseases, including nephritis, scleroderma, and systemic lupus erythematosus, especially in acute or accelerated silicosis.

In 1996, the International Agency for Research on Cancer (IARC) reviewed the medical data and classified crystalline silica as “carcinogenic to humans.” The risk was best seen in cases with underlying silicosis, with relative risks for lung cancer of 2-4. Numerous subsequent studies have been published confirming this risk. In 2006, Pelucchi et al. concluded, “The silicosis-cancer association is now established, in agreement with other studies and meta-analysis.”

Silica in crystalline form is toxic to the lining of the lungs. When the two come into contact, a strong inflammatory reaction occurs. Over time this inflammation causes the lung tissue to become irreversibly thickened and scarred - a condition known as fibrosis.

Common sources of crystalline silica dust include:

•Pure silica sand

People who work with these materials, as well as foundry workers, potters and sandblasters, are most at risk. Other forms of silica, such as glass, are less of a health risk as they aren't as toxic to the lungs.

Men tend to be affected more often than women, as they are more likely to have been exposed to silica. Silicosis is most commonly diagnosed in people over 40, as it usually takes years of exposure before the gradually progressive lung damage becomes apparent. There are now fewer than 100 new cases of silicosis diagnosed each year in the UK. This is mostly the result of better working practices, such as wet drilling, appropriate ventilation, dust-control facilities, showers and the use of face masks. Many foundries are also replacing silica sand with synthetic materials. With these measures and an increased awareness of the risks of silica exposure, the number of cases should fall even further in the future.

When silicosis is suspected, a chest x-ray will look for any damaged areas of the lungs to confirm the diagnosis. Lung function tests are often performed to assess the amount of damage the lungs have suffered and to guide treatment.
Complications can include:

•Connective tissue disease, including rheumatoid arthritis, scleroderma (also called progressive systemic sclerosis), and systemic lupus erythematosus
•Progressive massive fibrosis

There are three key elements to the diagnosis of silicosis. First, the patient history should reveal exposure to sufficient silica dust to cause this illness. Second, chest imaging (usually chest x-ray) should reveal findings consistent with silicosis. Third, there should be no underlying illnesses that are more likely to be causing the abnormalities. Physical examination is usually unremarkable unless there is complicated disease. Also, the examination findings are not specific for silicosis. Pulmonary function testing may reveal airflow limitation, restrictive defects, reduced diffusion capacity, mixed defects, or may be normal (especially without complicated disease).

Most cases of silicosis do not require tissue biopsy for diagnosis, but this may be necessary in some cases, primarily to exclude other conditions.

For uncomplicated silicosis, chest x-ray will confirm the presence of small (<10 mm) nodules in the lungs, especially in the upper lung zones. Using the ILO classification system, these are of profusion 1/0 or greater and shape/size “p”, “q”, or “r”. Lung zone involvement and profusion increase with disease progression. In advanced cases of silicosis, large opacities (>1 cm) occur from coalescence of small opacities, particularly in the upper lung zones. With retraction of the lung tissue, there is compensatory emphysema. Enlargement of the hilum is common with chronic and accelerated silicosis. In about 5-10% of cases, the nodes will calcify circumferentially, producing so-called “eggshell” calcification. This finding is not pathognomonic (diagnostic) of silicosis. In some cases, the pulmonary nodules may also become calcified.

A computed tomography (CT) scan can also provide a more detailed analysis of the lungs, and can reveal cavitation due to concomitant mycobacterial infection.

Silicosis is an irreversible condition with no cure. Treatment options currently focus on alleviating the symptoms and preventing complications. These include:

*Stopping further exposure to silica and other lung irritants, including tobacco smoking.
*Antibiotics for bacterial lung infection.
*TB prophylaxis for those with a positive tuberculin skin test or IGRA blood test.
*Prolonged anti-tuberculosis treatment (a multi-drug regimen) for those with active TB.
*Chest physiotherapy to help the bronchial drainage of mucus.
*Oxygen administration to treat hypoxemia, if present.
*Bronchodilators to facilitate breathing.
*Lung transplantation to replace the damaged lung tissue. This is the most effective treatment, but is associated with severe risks of its own.
*For acute silicosis, whole-lung lavage (see bronchoalveolar lavage) may alleviate symptoms, but does not decrease overall mortality.

Experimental treatments include:

*Inhalation of powdered aluminium, d-penicillamine and polyvinyl pyridine-N-oxide.
*The herbal extract tetrandrine, which may slow the progression of silicosis.

Joining a support group where you can meet other people with silicosis or related diseases can help you understand your disease and adapt to its treatments. The outcome varies depending on the amount of damage to the lungs.

The best way to prevent silicosis is to identify workplace activities that produce respirable crystalline silica dust and then to eliminate or control the dust (“primary prevention”). Water spray is often used where dust emanates.
Dust can also be controlled through dry air filtering. Following observations on industry workers in Lucknow (India), experiments on rats found that jaggery (a traditional sugar) had a preventive action against silicosis.

Disclaimer: This information is not meant to be a substitute for professional medical advice or help. It is always best to consult with a physician about serious health concerns. This information is in no way intended to diagnose or prescribe remedies; it is purely for educational purposes.
“If we want America to lead in the 21st century, nothing is more important than giving everyone the best education possible - from the day they start preschool to the day they start their career.” — President Barack Obama

The majority of Americans view education as the key to a lucrative career and a comfortable lifestyle, but the constant indebtedness of student borrowers has become a looming financial crisis, one that Washington fears would put a strain on the already shaky economy. In response to such fear, the current Administration is attempting to provide affordable education for all. But with the rising cost of tuition, and with the student loan interest rate expected to double from 3.4 percent to 6.8 percent on July 1, higher education still remains out of reach for millions of Americans.

Instead of making higher education more affordable, it is becoming more expensive for students. The result: adding more to the existing $1 trillion in student loan debt, a number that overshadows all other household debt except mortgages. The most disheartening part of this ordeal is that the federal government is making an enormous amount of profit off millions of struggling students.

According to the Congressional Budget Office's February 2013 Baseline Projections for the Student Loan Program, on every dollar loaned, the government will yield more than 36 cents in profit for 2013. In 2014, it is projected that the government will yield a profit of 12.5 cents per dollar loaned through the federal subsidized Stafford student loan program; 33.3 cents through the federal unsubsidized Stafford student loan program; 54.8 cents through loans to graduate students; and 49 cents on parent loans.

The government is expected to make billions of dollars in profit from student loans. According to Sen. Elizabeth Warren (D-Mass.), “The government is expected to profit $51 billion off student loans this year, which is more than the annual profit of any Fortune 500 company and about five times the profit of Google.”

It is outrageous that the government is making such an enormous amount of profit off the very service that should be affordable. It is just plain wrong, and as Sen. Sherrod Brown (D-Ohio) said, “Wall Street, student loan servicers and now the government are reaping profits at the expense of students… when everyone is benefiting from student loan policy except students and graduates, we have a problem.”

The fact still remains that there is already $1 trillion in outstanding student loan debt, and it is constantly growing. Furthermore, more than 85 percent of those loans are from the federal government, and graduates are currently paying record-high relative interest rates in an economy where borrowing costs have fallen for every kind of consumer except student debtors. It is clear that the student loan crisis needs to be brought under control.

The current crisis is not governed by traditional market forces; instead it is inflated by third-party intervention, including by the Department of Education, which generates enormous profit by cornering the student loan market. Therefore, it is high time our policy makers put education funding to better use by allocating the money to low- and moderate-income families in the form of grants rather than loans. Such a move would demonstrate that our policy makers are willing to make a commitment to moving student loan and student aid policies away from a billion-dollar profit-making business, and towards a system that enables every American to have access to an affordable higher education.
Name That Place

Summary: Janice and Randy are making seven stops on their way to the South Pole. Can you identify them based only on their latitude and longitude? Compare your results to the route mapped out in Follow Our Progress.

I. JUST THE FACTS

Crisscrossing lines called latitude and longitude lines form an imaginary grid that precisely defines every location on the globe.

Latitude lines are circles around the globe which run parallel to each other. They are often called parallels of latitude for this reason. Latitude comes from the Latin word meaning "width". On flat maps, parallels are straight lines that run the width of the page, thus their name of latitude. Latitude is a measurement of how far north or south of the equator a position on a globe or map is. The equator is the natural north/south dividing point of the earth. Thus, latitude has a physical starting point: the equator, which is 0°. The points farthest north and south of the equator are called the North Pole and South Pole. The latitude of the North Pole is 90° N and the latitude of the South Pole is 90° S.

Latitude is measured in degrees, minutes, and seconds. Just like time (where there are 60 minutes in an hour and 60 seconds in a minute), there are 60 minutes in 1 degree of latitude and 60 seconds in 1 minute of latitude. Latitude measurements are written in the form degrees:minutes:seconds.

Longitude is an angular measurement of how far east or west one is of the Prime Meridian (i.e. zero degrees longitude). Unlike the equator, the zero of latitude, the Prime Meridian has no physical basis, and its location was based on politics. It is a much harder problem to determine one's longitude than one's latitude. Latitude can easily be obtained by measuring the angle of the sun at midday. In fact, the conquest of the longitude problem is the subject of a recent book: Longitude: The True Story of a Lone Genius Who Solved the Greatest Scientific Problem of His Time by Dava Sobel. Longitude is also divided into degrees, minutes, and seconds and extends from the Prime Meridian to 180° W and to 180° E. Note: look on a globe - 180° W and 180° E are the same place!

II. DON'T GET HURT / WATCH OUT!

Be careful not to get disoriented or to send our intrepid travelers to a dangerous part of the world.

Learn about geography, latitude, and longitude. Find our destinations or any place on the globe by using latitude and longitude.

IV. WHAT DO YOU THINK?

How much detail do you need to know where someone is -- degrees, minutes, seconds, or more?

|Latitude|Longitude|City / Town Name|Country|Stop Number|
|77:53 S|166:40 E| | | |

VIII. DO YOU SEE WHAT I SEE?

Compare the route you have mapped out to that described in Follow Our Progress.

Longitude: The True Story of a Lone Genius Who Solved the Greatest Scientific Problem of His Time by Dava Sobel, ISBN 0-14-025879-5

How Far Is It? web site
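As an extension to the activity (this helper is our own illustration, not part of the original materials), a few lines of Python can convert degrees:minutes:seconds into the decimal degrees that many mapping tools expect:

```python
def dms_to_decimal(degrees, minutes=0, seconds=0, hemisphere="N"):
    """Convert degrees:minutes:seconds to decimal degrees.

    There are 60 minutes in a degree and 60 seconds in a minute, so
    minutes contribute 1/60 of a degree and seconds 1/3600. By common
    convention, South latitudes and West longitudes are negative.
    """
    value = degrees + minutes / 60 + seconds / 3600
    return -value if hemisphere in ("S", "W") else value

# The destination listed in the table above, 77:53 S 166:40 E:
print(dms_to_decimal(77, 53, hemisphere="S"))   # -77.8833...
print(dms_to_decimal(166, 40, hemisphere="E"))  #  166.6666...
```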
Q: I just heard the phrase “a tad bit” on the radio. I know I’ve heard it before, but it struck me now as a tad bit odd, since “tad” means a very small amount, and a “bit” is also a very small amount. The result is a redundancy that’s parallel to “the most teeny-weenie-est.” What do you think?

A: Aren’t you being just a tad bit picky here? Yes, it’s true that a “tad” means a “bit,” but why not regard “tad bit” as reinforcement rather than a repetition? The phrase is not formal English, after all, but a folksy and semi-humorous usage. People use “little bit” too, and nobody seems to mind.

As we’ve said before, there’s a fine line between an emphatic use and a redundancy. And we think “tad bit” is the kind of expression in which that extra emphasis can be defended. Besides, there’s a kind of expression (the writer Ben Yagoda calls it “the salutarily emphatic redundancy”) that is memorable chiefly because of its apparent repetition. A good example is “Raid kills bugs dead.”

We’ve written about this subject several times before on our blog, in discussions of phrases like “first time ever,” “fourteen different countries,” “meet up with,” “face up to,” “try out,” “divide up,” “hurry up,” “lose out on,” and many more that some people find redundant. Here’s a link to one post.

But let’s look more closely at “tad,” an interesting noun. It’s originally and chiefly North American, according to the Oxford English Dictionary, and may be derived from “tadpole.”

By the way, “tadpole” is interesting too. It combines the Middle English word tade or tadde (toad) and, apparently, the noun “poll” (head or roundhead). It was first recorded in the 1400s, the OED says, as “taddepol.”

“Tad” first cropped up in 1845 with a different, unrelated meaning: someone who can’t or won’t pay. But the modern sense of something small was first recorded in the 1870s, when a “tad” or a “little tad” meant “a young or small child, esp. a boy,” the OED says.

It wasn’t until the 20th century that “a tad” came to mean a small amount or, used as a modifier, a little, slightly, or somewhat. The OED’s first citation is from a 1940 issue of the journal American Speech, in an article about Tennessee expressions. The article said “tad” meant “a very small amount,” as in the sentence “I want to borrow a tad of salt.”

But the expression was obviously around for some time before it caught the attention of language scholars. At any rate, “tad” soon entered the mainstream. Here’s a 1977 example from the Toronto Globe and Mail: “Things are a tad hectic.” And here’s a 1980 usage from the New York Times: “The Mayor’s pitch is a tad exaggerated both on the law’s certainty and on the roominess of New York’s prisons.”

While “tad” does appear in some slang dictionaries, The American Heritage Dictionary of the English Language (5th ed.) labels it “informal” and Merriam-Webster’s Collegiate Dictionary (11th ed.) treats it as standard.

As we said, “tad” is a noun, but it’s used attributively—that is, as a modifier—in the noun phrase “tad bit.” The noun phrase itself is often used adverbially, as in “Aren’t you being a tad bit picky?”

Check out our books about the English language.
Giving High Schoolers a Competitive Edge

North Idaho’s high-tech industry is swelling, with startup companies and Silicon Valley offshoots opening in the region in droves. It’s an exciting transformation for the state’s panhandle, which has traditionally relied on the natural resources industry and tourism to fuel its economy. But for Idaho’s tech industry to thrive, it needs a skilled workforce.

This fall, the University of Idaho is leading an effort to engage those potential workers earlier than ever, with a new computer programming course for high school students statewide. As an added benefit, those students will receive college credit for the course without forking over a penny in tuition.

CS 112: Computational Thinking and Problem Solving was designed in 2014 for UI computer science majors in the College of Engineering, while also providing basic programming knowledge to interested non-majors. Computer science faculty decided the entry-level course was also a good fit for high school students, and in 2015, faculty members led a training at UI Coeur d’Alene on how to teach the dual-credit course. Among the participants in the class was Nanette Brothers, a Sandpoint High School math teacher. After going through the UI Computer Science Department’s certification process, she enticed 11 Sandpoint students to enroll in the class.

This summer, UI expanded its CS 112 workshops, offering weeklong training sessions statewide. Twenty-six educators participated in the training, taught by UI computer science Associate Professor Robert Heckendorn and Professor Terry Soule. The instructors received stipends from the Idaho STEM Action Center, along with professional development credit paid for by the Linda and Greg Gollberg Dual-Credit Scholarship Fund.

Brothers will facilitate the course online through the Idaho Digital Learning Academy. The goal is to better serve underrepresented populations, giving high school students in rural and urban pockets alike access to quality computer science education.

"This is a very big aim of the program," Soule said. "We try to include high school teachers from all parts of the state."

The online availability of the course also means that cash-strapped high schools won’t have to stretch resources to hire more teachers.

UI Dual-Credit Program

Dual credit became a state mandate in 1997 as a way to increase college go-on rates by making the transition to higher education less intimidating and more accessible.

“The dual-credit opportunities are ones where we can really show high school students that college work is something that they can do, and actually it can be pretty fun and exciting,” said Dean Kahler, vice provost for Strategic Enrollment Management at UI. “We’re trying to enhance the amount of dual-credit opportunities that are in high schools; that gives students the opportunity to get a taste of what higher education is all about. Dual credit really opens up their eyes to, ‘Hey, college is not such a scary thing after all.’”

Idaho’s Department of Education Fast Forward program is progressive in its funding of dual credit, which is part of a larger initiative called Advanced Opportunities, said Charles Buck, associate vice president and executive officer of UI Coeur d’Alene. Currently, the program sets aside $4,125 for every public school student in grades seven to 12 interested in dual-credit courses, college entrance exams or online overload courses.
“When the state’s paying for dual-credit courses, maybe a student can get through 15 or 30 credits during those high school years,” Buck said. “And that reduces their cost of attending a university down the road.”

Plus, Idaho high school students who take dual-credit courses have higher college GPAs. According to the State Board of Education, the average cumulative GPA for dual-credit students is 2.99, compared to 2.63 for non-dual-credit students. Dual-enrollment students also have higher college retention rates. In 2016, nearly 80 percent returned to college their second year, while the retention rate for non-dual-credit students was 63 percent, according to the board.

The Internet of Things

The opportunities provided by concurrent enrollment are immense, and the availability of the computer science course is a bonus. "CS 112 is an important step in helping students enter Idaho’s high-tech industry," Soule said. "The programming skills taught are fundamental to a wide range of high-tech fields and the course is designed to encourage the kind of creativity that motivates entrepreneurship."

Last year, Idaho had 1,767 technology-related job openings, according to the 2017 CompTIA Cyberstates report, which gives an annual analysis of the U.S. tech industry and workforce. This created a sizable gap in supply and demand, as the state’s higher education institutions reported 436 computer science graduates during the 2016-17 academic year.

Stakeholders hope that the CS 112 course will pique student interest in the field and help the Idaho economy prosper. It doesn’t hurt that Idaho’s average tech industry worker earns $83,400, a whopping 114 percent more than the average worker’s salary of $39,100, according to CompTIA.

“We hear from scores of companies that need talent in software engineering,” Buck said. “So we’re hoping that exposing high school students to a rigorous course like this will stimulate a percentage of them to major in computer science and engineering to create the workforce that companies need.”

According to Brothers, the dual-enrollment program is a good way to achieve this goal.

“There are so many available jobs out there, and we need to be filling them with people coming out of our universities,” Brothers said. “Otherwise, people don’t have jobs because they haven’t been appropriately trained.”

Article by Kate Keenan, College of Engineering
Richard Branson is well on his way to space. Now he plans to explore the deepest parts of the ocean as well.

Branson announced his undersea exploration venture, Virgin Oceanic, on Tuesday. Unlike his suborbital-space-flight company, Virgin Galactic, the new venture is not accepting paying passengers. Instead, it will comprise only five deep-sea dives, each one carrying just one person, to the deepest points in each of the five oceans.

To make the dives, Virgin has built a custom submarine and a flashy promotional video. The sub's cockpit has a bubble-like dome made of quartz, which can withstand 6 million kilos of pressure across its surface. Overall, the sub looks a bit like an aeroplane, the better to "fly" to its underwater destinations. It weighs 3,600 kilos, is made of carbon fibre and titanium, and is rated to withstand pressure up to 37,000 feet below the surface. It's not fast, though, with a maximum speed of just 3 knots and the ability to dive at 1.8 metres per second, so its life-support systems are meant to last up to 24 hours.

In addition to its one human, the sub will have a water sampling system that can filter microbes and viruses from the water for later study. It will also be able to deploy unmanned probes. So far, the sub has only gone for a dip in San Francisco Bay.

Virgin Oceanic notes that the sub was originally the brainchild of aviator and adventurer Steve Fossett, a friend of Branson's, who crashed during a solo plane flight in 2007 but whose remains weren't found until 2008.

To support and transport the sub, Virgin has retrofitted a carbon-fibre racing catamaran with a crane, generators and lots of electronics.

The first dive will be to the bottom of the Mariana Trench, a 36,201-foot canyon deep in the western Pacific. Humans have made it to the bottom of the Mariana Trench just once before, in 1960. The commander of that expedition, Jacques Piccard, later recounted the experience of that dive.

The second dive will be to the bottom of the Puerto Rico Trench, which at 8,600 metres below the surface is the deepest trench in the Atlantic Ocean. Branson himself plans to pilot the sub for this journey. Branson will be the backup pilot for the Mariana dive too, if his designated pilot is unable to do it.

Three other dives are planned, each to the deepest point of one of the other three oceans.

There's a serious scientific purpose to Virgin Oceanic's missions, Virgin says, with actual scientists lined up to make the most of these dives for their research into bottom-dwelling microbes, bioluminescence, and more.

But mostly, we suspect, it will be an excellent adventure for the man Wired has called a "happy-go-lucky tycoon." More power to you, Sir Richard. We'll be watching for the IMAX movie.
There is an old saying that a photograph is worth 1000 words. I suggest that photographs are probably worth much more than 1000 words, but that does not mean that the information is accurate or correct. I have noticed that most genealogists accept photographs on their face, totally and uncritically. A photograph is a source, just like any other source, and is subject to the same requirements of evaluation concerning reliability as any other source.

Let me give a hypothetical situation. Suppose that family members who have been bitterly fighting with each other throughout their lives all come to the same funeral. A photograph taken at the funeral may show a happily smiling family group but would be utterly misleading as to the relationships between the parties.

Further, the backgrounds and style of the photographs may be entirely misleading. Some of the very old photographs required long exposures, and those photographs seldom, if ever, showed the individuals smiling. In one series of photos I found in an old collection, the women in different photos shared the same dress. Without seeing the related photographs, it would be impossible to know this fact.

You may consider some of these examples to be trivial, but we all have a tendency to identify our ancestors with their photographs. I am not arguing that photographs are not useful in contributing historic information concerning ancestors. What I am saying is that photographs can be just as inaccurate or misleading as any other documentary source.

The first important fact to realize is that the photographer chooses the subject matter, place and timing of the photograph. It is entirely possible for the photographer to frame the photo so as to cut out what are, to the photographer, undesirable background objects or other people. To the extent that the photograph is controlled by the photographer, it is a personal statement taken from the photographer's point of view rather than an objective historical document. For this reason, most photographs are notable for what they do not show rather than what they do show.

I recently discussed the ethics of altering existing historical photographs when making copies. There is a more serious issue about the process the photographer goes through in selecting and creating a photograph. One of the most famous photographers of our time is Ansel Adams. You might be surprised to know that he spent a great deal of time in the darkroom altering his original photographs. Today, we would call this photoshopping the image. If you have the opportunity to stand next to a gifted photographer and watch that person take photographs, and then later have the opportunity to view the same printed photographs, you may wonder whether or not you were actually present when the photographs were taken.

Sometimes photographs suggest relationships and activities that are entirely inconsistent with the traditional family story about an individual ancestor. For example, there may be photographs showing the ancestor enjoying the companionship of someone who was obviously not their spouse, or in other situations showing activities that the ancestor would not normally be associated with. In some instances, these types of photographs were destroyed either by the ancestor or by close relatives who did not wish to show the ancestor in a negative light. Occasionally, some of these photos survive to create interesting historical issues.

At one point in the past, a photograph could be used as conclusive evidence in a court case.
However, that day has long since passed. Today, even with substantial supporting testimony, photographs are always subject to a measure of skepticism. In our modern age, I would guess that there is not one photograph that you can presently see in an advertisement that has not been altered in some way from the original. We are so used to seeing altered photographs that we do not even realize that some of them are so altered as to be totally imaginary.

As genealogists, we may make the mistake of believing that this process of altering photos originated with computers. In fact, since the first photo was taken, the photographer has always been in control of the photograph, both at the time it was taken and during the developing and printing process.

It is always a good idea to take the time to critically evaluate each and every historical photograph. Not only may there be interesting information you have missed with a superficial examination; there may also be inaccurate or incorrect traditional assumptions about the subjects of the photos that are called into question by the examination.
Amyotrophic lateral sclerosis (ALS), often referred to as Lou Gehrig's disease, is a devastating neurological disease of the motor nervous system. Within a few short years, its victims fall from good health—often in the prime of life—and ultimately perish due to progressive motor neuron deterioration. ALS is surprisingly common: people have a lifetime risk of about 1 in 400.

Prior investigation led by Brian Wainger, MD, PhD (Wainger et al., Cell Stem Cell, 2014) has identified abnormalities in the electrical activity of motor neurons derived from ALS patients using stem cell technology. The research culminated in the discovery of the FDA-approved drug retigabine (ezogabine) as a candidate therapeutic, and we are now investigating this drug in a clinical trial of ALS subjects.

CHRONIC PAIN RESEARCH

Chronic pain does not have the same lethal impact as ALS; however, any sufferer of chronic pain can testify to the profound impairments in quality of life, mood and functioning that plague pain patients. Chronic pain affects over one quarter of adult Americans and is one of the most common reasons for physician visits, lost productivity and disability.

Ongoing work by Dr. Wainger has yielded a technique for deriving pain-sensing neurons from patient skin samples (Wainger et al., Nature Neuroscience, 2015). This novel method may offer a way to investigate causes of pain in human patients, thus potentially overcoming the limits of animal models, which have resulted in only limited success in identifying effective treatments for human pain. The goal is to use the human pain-sensing neurons to identify and evaluate novel treatments for pain in patients.

The combination of specialized clinical and research training places the group in a prime position to investigate disease-related research questions and find practical and promising ways to directly advance the application of basic science research to clinical medicine.
We are becoming accustomed to hearing about genealogical discoveries made through the miracles of DNA analysis. But a recent genealogical breakthrough was made by a much older means: noticing a family dental trait.

In 1832, fifty-seven recent Irish immigrants died while working on a stretch of railroad track, known as Duffy's Cut, outside of Philadelphia. An excavation of the bodies is ongoing. Recent discoveries have revealed that some of the men did not die of cholera, but of blows to the head. Others may have been shot. An excellent article by Lori Lander Murphy, describing the history and discoveries at Duffy's Cut, can be found via the link below.

One skull was found to have a missing front molar (from birth). Members of the Ruddy family in Co. Donegal, hearing about the research being done in Pennsylvania, alerted the researchers that many members of their family have a genetic quirk: a missing front molar! So, the body of young John Ruddy was the first to be identified and matched with his Irish family.
Awa Dance Festival

The Awa Dance Festival (阿波踊り Awa Odori) is held from 12 to 15 August as part of the Obon festival in Tokushima Prefecture on Shikoku in Japan. Awa Odori is the largest dance festival in Japan, attracting over 1.3 million tourists every year.

Groups of choreographed dancers and musicians known as ren (連) dance through the streets, typically accompanied by the shamisen lute, taiko drums, shinobue flute and the kane bell. Performers wear traditional obon dance costumes, and chant and sing as they parade through the streets.

The earliest origins of the dance style are found in the Japanese Buddhist priestly dances of Nembutsu-odori and hiji-odori of the Kamakura Period (1185-1333), and also in kumi-odori, a lively harvest dance that was known to last for several days. The Awa Odori festival grew out of the tradition of the Bon odori, which is danced as part of the Obon "Festival of the Dead", a Japanese Buddhist celebration in which the spirits of deceased ancestors are said to visit their living relatives for a few days of the year. The term "Awa Odori" was not used until the 20th century, but Obon festivities in Tokushima have been famous for their size, exuberance and anarchy since the 16th century.

Awa Odori's independent existence as a huge, city-wide dance party is popularly believed to have begun in 1586, when Lord Hachisuka Iemasa, the daimyo of Awa Province, hosted a drunken celebration of the opening of Tokushima Castle. The locals, having consumed a great amount of sake, began to drunkenly weave and stumble back and forth. Others picked up commonly available musical instruments and began to play a simple, rhythmic song, to which the revelers invented lyrics. The lyrics are given in the 'Song' section of this article. This version of events is supported by the lyrics of the first verse of "Awa Yoshikono Bushi", a local version of a popular folk song, which praises Hachisuka Iemasa for giving the people Awa Odori and which is quoted in the majority of tourist brochures and websites. However, according to local historian Miyoshi Shoichiro, this story first appeared in a Mainichi Shimbun newspaper article in 1908 and is unsupported by any concrete evidence. It is unclear whether the song lyrics were written before or after this article appeared.

Surviving 17th-century regulations give a sense of how large the festivities had already become:

1. The bon-odori may be danced for only three days.
2. Samurai are forbidden to attend the public celebration. They may dance on their own premises but must keep the gates shut. No quarrels, arguments or other misbehaviour are allowed.
3. The dancing of bon-odori is prohibited in all temple grounds.

This suggests that by the 17th century, Awa's bon-odori was well established as a major event, lasting well over three days — long enough to be a major disruption to the normal functioning of the city. It implies that samurai joined the festival alongside peasants and merchants, disgracing themselves with brawling and unseemly behaviour. In 1674, it was "forbidden for dancers or spectators to carry swords (wooden or otherwise), daggers or poles". In 1685, revelers were prohibited from dancing after midnight and dancers were not allowed to wear any head or face coverings, suggesting that there were some serious public order concerns.

In the Meiji Period (1868-1912) the festival died down as Tokushima's indigo trade, which had financed the festival, collapsed due to imports of cheaper chemical dyes.
The festival was revitalised at the start of the Showa Period (1926), when Tokushima Prefectural authorities first coined the name 'Awa Odori' and promoted it as the region's leading tourist attraction.

The song associated with Awa Odori is called Awa Yoshikono and is a localised version of the Edo period popular song Yoshikono Bushi. Parts of it are sung, and others are chanted. The origins of the melodic part have been traced to Kumamoto, Kyūshū, but the Awa version came from Ibaraki Prefecture, from where it spread back down to Nagoya and Kansai. The lyrics of the first verse are:

Awa no dono sama hachisuka-kou ga ima ni nokoseshi awa odori
(What Awa's Lord Hachisuka left us to the present day is Awa Odori)

The song is usually sung at a point in the parade where the dancers can stop and perform a stationary dance — for example, a street intersection or in front of the ticketed, amplified stands which are set up at points around the city. Not every group has a singer, but dancers and musicians will frequently break out into the Awa Yoshikono chant as they parade through the streets:

踊る阿呆に | Odoru ahou ni | The dancers are fools
見る阿呆 | Miru ahou | The watchers are fools
同じ阿呆なら | Onaji ahou nara | Both are fools alike so
踊らな損、損 | Odorana son, son | Why not dance?

The dancers also chant hayashi kotoba call-and-response patterns such as "Yattosa, yattosa", "Hayaccha yaccha", "Erai yaccha, erai yaccha", and "Yoi, yoi, yoi, yoi". These calls have no semantic meaning but help to encourage the dancers.

During the daytime a restrained dance called Nagashi is performed, but at night the dancers switch to a frenzied dance called Zomeki. As suggested by the lyrics of the chant, spectators are often encouraged to join the dance.

Men and women dance in different styles. For the men's dance: right foot and right arm forward, touch the ground with the toes, then step with the right foot crossing over the left leg. This is then repeated with the left leg and arm. Whilst doing this, the hands draw triangles in the air with a flick of the wrists, starting at different points. Men dance in a low crouch with knees pointing outwards and arms held above the shoulders. The women's dance uses the same basic steps, although the posture is quite different: the restrictive kimono allows only the smallest of steps forward but a crisp kick behind, and the hand gestures are more restrained and graceful, reaching up towards the sky. Women usually dance in tight formation, poised on the ends of their geta sandals.

Children and adolescents of both sexes usually dance the men's dance. In recent years, it has become more common to see adult women, especially those in their 20s, dancing the men's style of dance.

Some of the larger ren (dance groups) also have a yakko odori, or kite dance. This usually involves one brightly dressed, acrobatic dancer darting backwards and forwards, turning cartwheels and somersaults, with freestyle choreography. In some versions, other male dancers crouch down, forming a sinuous line representing the string, and a man at the other end mimes controlling the kite.

Awa Dance Festivals elsewhere

Kōenji, an area of Tokyo, also has an Awa Dance Festival, modelled on Tokushima's, which was started in 1956 by urban migrants from Tokushima Prefecture. It is the second largest Awa Dance Festival in Japan, with an average of 188 groups composed of 12,000 dancers, attracting 1.2 million visitors.
In May 2015, the Japanese production company Tokyo Story will produce a substantially larger version of Awa Odori in Paris by bringing hundreds of dancers there from Japan. "Awa Odori Paris 2015", as the event is called, would reproduce the "fever" of Awa Odori. This event will be a first step in promoting Awa Odori and the Japanese "matsuri" culture abroad. The production will be financed by French and Japanese companies and institutions.

Notes

- Miyoshi Shoichiro (2001) Tokushima Hanshi Tokuhon
- e.g. http://www.jnto.go.jp/eng/indepth/history/traditionalevents/a46_fes_awa.html
- Miyoshi Shoichiro (2001:35) Tokushima Hanshi Tokuhon
- Miyoshi 2001: 37
- Wisniewski, Mark (2003:2) 'The Awa Odori Trilogy' in Awa Life
- Wisniewski, Mark (2003) 'The Awa Odori Trilogy' in Awa Life
- Wisniewski, Mark (2003:3) 'The Awa Odori Trilogy' in Awa Life
- Awa Odori video available from Tokushima Prefecture International Exchange Association (TOPIA)
- Official Koenji Awa Odori Website

References

- Miyoshi, Shoichiro (2001) Tokushima Hanshi Tokuhon
- Wisniewski, Mark (2003) 'The Awa Odori Trilogy' in Awa Life, published by TOPIA (Tokushima Prefecture International Association)
- de Moraes, Wenceslau (1916) Tokushima no bon odori
- House, Ginevra (2004) 'Dancing for the Dead', Kyoto Journal Issue 58

External links

Official Japanese sites
- Awa Odori by the Japan National Tourist Organization
- Japan Atlas - Festivals by the Japanese Ministry of Foreign Affairs (click "19" for Awa Odori)
- Koenji Awa Odori Official Site
- Awa Dance homepage by www.awaodori.net (English translation by Google)
- Awa Odori by web-japan.org
- Dance of Fools: Awa Odori Festival, Japan by www.pilotguides.com
- Japanese Line Dance? by www.country-dance.com (many pictures)
- Dyeing to Dance: an English Translation by Mark Wisniewski
- Official homepage of Tokyo Ebisuren, a Tokyo-based classical style Awa Odori team (English site, contains pictures and video)
- Awa Odori Paris 2015: home page of Awa Odori Paris 2015 (English & Japanese site)
- Awa Odori dance video (Japanese)
Astronauts Having Space Station Fun With Newton's Laws [VIDEO]

It isn't all work all the time on the International Space Station. The orbit of the space station is always decaying a teeny bit due to atmospheric drag. There isn't much air up there, but just enough to slow things down. Over time the station's orbit decays, and action must be taken to ensure the station doesn't burn up in the atmosphere.

NASA astronauts on board the station filmed this video to demonstrate Newton's First Law: an object in motion tends to stay in motion unless acted upon by an outside force. Thanks to the free-fall weightlessness, the space station moves around the astronauts while they stay stationary.
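As a rough back-of-the-envelope sketch of why that whisper of air matters (the numbers below are generic textbook estimates, not official NASA figures), the drag force on an orbiting body is

$$F_D = \tfrac{1}{2}\,\rho\,v^{2}\,C_D\,A$$

At the station's altitude the air density ρ is only of order 10^-12 kg/m³, but the orbital speed v is about 7.7 km/s. Assuming a drag coefficient C_D near 2 and a cross-sectional area A of order 1,000 m², that works out to a continuous force of very roughly 0.1 newtons. It sounds like nothing, yet it never lets up, which is why the orbit slowly sinks and periodic engine reboosts are needed.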
You don’t need to panic, pull all-nighters, and pop energy drinks to get through final exams. Here are simple tips to keep your stress down and your grades up. - Only have time for one tip? Get the sleep you need. Science shows that the loss of sleep from an all-nighter can completely kill your test scores. Lack of sleep makes you tired, reduces your memory, and makes the test harder. What can you do instead of staying up late? Go to bed on time and study in the morning, right before your test. If you arrive at your final exam in need of a nap, studying will be useless. - Build study groups. Partnering up with a group of classmates can help you review material more quickly and efficiently. Have everyone prepare a different review section, or use your combined brainpower to tackle the hardest questions. - Take breaks. Breaks are essential to keep your focus and prevent burnout. MIT suggests studying for 50 minutes and then recharging for ten minutes by chatting, checking your email, going for a walk, stretching, or drinking some water. Physical movement is the best refresher, even if it’s just standing up. - Talk to your professor. All our teachers keep office hours, and they make themselves available by email and phone, too. Faculty won’t tell you the answers to the test, but they are very happy to help you understand the concepts and ideas you need to succeed. - Talk to our free tutors. NLC has free learning centers for Math, Science, Writing, and Language, with access to books, software, and lab equipment. Plus, free tutoring is available from volunteers who have already taken and passed your classes. - Divide your workload into smaller parts. When you’re thinking about the challenges ahead, ignore the big picture and focus on each task in turn. Use breaks as a motivator, or cross each task off a checklist. - There’s an app for that. Smartphones and tablets offer flashcard apps, language learning games, and other study tools. One of them could help you out. Need more help with your stress level? Talk to professional counselors in our Counseling office, A311, or call them at 972-273-3333. You can also visit our Health Services clinic in C200 for a safe space to rest or manage anxiety.
(December 2000) Most of the research on parenting in the United States has surveyed mothers, but not fathers. The recent surge of interest in the father's role has prompted surveys of both parents, which have, incidentally, documented substantial discrepancies between men's and women's reports about their relative involvement in raising their children.

A 1999 University of Maryland study explored these discrepancies by asking a sample of mothers and fathers about five domains of parenting: discipline, play, emotional support, monitoring of activities and playmates, and basic care. Parents were asked: "Ideally, who should discipline children, mainly the mother, mainly the father, or both equally?" Similarly, respondents were also asked: "In parenting your children, who disciplines the children, mainly you, mainly the child's father/mother, or both parents equally?" Questions were repeated for each domain of childrearing and were asked both of parents who currently had children in the home and of parents who had adult children.

There is overwhelming consensus between men and women that parenting should be shared equally across most domains, as shown in the figure. For four of the areas — disciplining children, playing with children, providing emotional support, and monitoring activities and friends — at least 90 percent of men and women say these parenting domains should be shared equally. More than two-thirds of men and women say that caring for children's needs should be shared equally by mothers and fathers.

Parents' reports of actual involvement, however, do not agree. Mothers are far more likely than fathers to report that the mother is the main disciplinarian of children (47 percent, compared with 17 percent), and that it is mainly the mother who plays with children (37 percent, compared with 14 percent). Similarly, mothers are far more likely than fathers to report that the mother provides most of the emotional support of children (45 percent, compared with 24 percent) and that the mother is the one who mainly monitors their children's activities (51 percent, compared with 27 percent). More mothers than fathers believe that mothers are the main caretakers of children (70 percent vs. 58 percent). Overall, fathers are much more likely to hold the view that domains are shared equally with their partners, while mothers are much more likely to report that they are primarily the ones involved in rearing their children.

Melissa Milkie, Suzanne M. Bianchi, Marybeth Mattingly, and John Robinson, "Fathers' Involvement in Childrearing: Ideals, Realities, and Their Relationship to Parental Well-Being." (Revised version of a paper presented at the annual meeting of the American Association for Public Opinion Research, Portland, OR, May 18-21, 2000.)

This article is excerpted from the Population Bulletin "American Families" (Vol. 55, No. 4, December 2000), published by the Population Reference Bureau. Suzanne M. Bianchi is professor of sociology and faculty associate in the Center on Population, Gender and Social Inequality at the University of Maryland, College Park. Lynne M. Casper is health scientist administrator and demographer at the Demographic and Behavioral Sciences Branch, National Institute of Child Health and Human Development.
The Tychonic system (or Tychonian system) was a model of the Solar System published by Tycho Brahe in the late 16th century, which combined what he saw as the mathematical benefits of the Copernican system with the philosophical and "physical" benefits of the Ptolemaic system. The model may have been inspired by Valentin Naboth and Paul Wittich, a Silesian mathematician and astronomer. A similar geoheliocentric model was also earlier proposed by Nilakantha Somayaji of the Kerala school of astronomy and mathematics.

It is essentially a geocentric model; the Earth is at the center of the universe. The Sun, the Moon and the stars revolve around the Earth, and the other five planets revolve around the Sun. It can be shown that the motions of the planets and the Sun relative to the Earth in Brahe's system are mathematically equivalent to the motions in Copernicus' heliocentric system, but the Tychonic system fit the available data better than Copernicus' system did.

Motivation for the Tychonic system

Tycho admired aspects of Copernicus's heliocentric model of the solar system, but felt that it had problems as concerned physics, astronomical observations of stars, and religion. Regarding the Copernican system, Tycho wrote:

"This innovation expertly and completely circumvents all that is superfluous or discordant in the system of Ptolemy. On no point does it offend the principle of mathematics. Yet it ascribes to the Earth, that hulking, lazy body, unfit for motion, a motion as quick as that of the aethereal torches, and a triple motion at that."

In regard to physics, Tycho held that the Earth was just too sluggish and heavy to be continuously in motion. According to the accepted Aristotelian physics of the time, the heavens (whose motions and cycles were continuous and unending) were made of "Aether" or "Quintessence"; this substance, not found on Earth, was light, strong, and unchanging, and its natural state was circular motion. By contrast, the Earth (where objects seem to have motion only when moved) and things on it were composed of substances that were heavy and whose natural state was rest — thus the Earth was a "lazy" body that was not readily moved. Thus while Tycho acknowledged that the daily rising and setting of the sun and stars could be explained by the Earth's rotation, as Copernicus had said, still

"such a fast motion could not belong to the earth, a body very heavy and dense and opaque, but rather belongs to the sky itself whose form and subtle and constant matter are better suited to a perpetual motion, however fast."

In regard to the stars, Tycho also believed that if the Earth orbited the Sun annually, there should be an observable stellar parallax over any period of six months, during which the angular orientation of a given star would change thanks to Earth's changing position. (This parallax does exist, but it is so small that it was not detected until 1838, when Friedrich Bessel measured a parallax of 0.314 arcseconds for the star 61 Cygni.) The Copernican explanation for this lack of parallax was that the stars were such a great distance from Earth that Earth's orbit was almost insignificant by comparison. However, Tycho noted that this explanation introduced another problem: stars as seen by the naked eye appear small, but of some size, with more prominent stars such as Vega appearing larger than lesser stars such as Polaris, which in turn appear larger than many others.
Tycho had determined that a typical star measured approximately a minute of arc in size, with more prominent ones being two or three times as large. In writing to Christoph Rothmann, a Copernican astronomer, Tycho used basic geometry to show that, assuming a small parallax that just escaped detection, the distance to the stars in the Copernican system would have to be 700 times greater than the distance from the sun to Saturn. Moreover, the only way the stars could be so distant and still appear the sizes they do in the sky would be if even average stars were gigantic — at least as big as the orbit of the Earth, and of course vastly larger than the sun. And, Tycho said, the more prominent stars would have to be even larger still. And what if the parallax was even smaller than anyone thought, so the stars were yet more distant? Then they would all have to be even larger still. Tycho said:

"Deduce these things geometrically if you like, and you will see how many absurdities (not to mention others) accompany this assumption [of the motion of the earth] by inference."

Copernicans offered a religious response to Tycho's geometry: titanic, distant stars might seem unreasonable, but they were not, for the Creator could make his creations that large if He wanted. In fact, Rothmann responded to this argument of Tycho's by saying:

"[W]hat is so absurd about [an average star] having size equal to the whole [orbit of the Earth]? What of this is contrary to divine will, or is impossible by divine Nature, or is inadmissible by infinite Nature? These things must be entirely demonstrated by you, if you will wish to infer from here anything of the absurd. These things that vulgar sorts see as absurd at first glance are not easily charged with absurdity, for in fact divine Sapience and Majesty is far greater than they understand. Grant the vastness of the Universe and the sizes of the stars to be as great as you like — these will still bear no proportion to the infinite Creator. It reckons that the greater the king, so much greater and larger the palace befitting his majesty. So how great a palace do you reckon is fitting to GOD?"

Religion played a role in Tycho's geocentrism also: he cited the authority of scripture in portraying the Earth as being at rest. He rarely used Biblical arguments alone (to him they were a secondary objection to the idea of Earth's motion), and over time he came to focus on scientific arguments, but he did take Biblical arguments seriously.

Tycho advocated as an alternative to the Ptolemaic geocentric system a "geo-heliocentric" system (now known as the Tychonic system), which he developed in the late 1570s. In such a system, the sun, moon, and stars circle a central Earth, while the five planets orbit the Sun. The essential difference between the heavens (including the planets) and the Earth remained: motion stayed in the aethereal heavens; immobility stayed with the heavy sluggish Earth. It was a system that Tycho said violated neither the laws of physics nor sacred scripture — with stars located just beyond Saturn and of reasonable size.
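The small-angle geometry behind Tycho's star-size objection can be sketched in modern notation (a rough illustration using the round figures quoted above, not Tycho's own working). An object of physical diameter D at distance d subtends an angle of roughly

$$\theta \approx \frac{D}{d}, \qquad \text{so} \qquad D \approx \theta\,d \quad (\theta \text{ in radians})$$

An apparent diameter of one arcminute is θ ≈ (1/60)(π/180) ≈ 2.9 × 10^-4 rad. Taking the Sun-Saturn distance as roughly ten times the Earth-Sun distance r, a star at d ≈ 700 × 10 r = 7,000 r would need a diameter D ≈ 2.9 × 10^-4 × 7,000 r ≈ 2 r, comparable to the whole orbit of the Earth. That is the "absurdity" Tycho pressed on Rothmann; the hidden flaw, noted in the references below, was that the arcminute disks were artifacts of optics and the eye rather than true stellar sizes.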
History and development of the Tychonic system

Tycho's system was foreshadowed, in part, by that of Martianus Capella, who described a system in which Mercury and Venus are placed on epicycles around the Sun, which circles the Earth. Copernicus, who cited Capella's theory, even mentioned the possibility of an extension in which the other three of the six known planets would also circle the Sun. This was foreshadowed by the Irish Carolingian scholar Johannes Scotus Eriugena in the 9th century, who went a step further than Capella by suggesting that both Mars and Jupiter orbited the sun as well. In the 15th century, Tycho's system was anticipated by Nilakantha Somayaji, an Indian astronomer of the Kerala school of astronomy and mathematics, who first presented a geoheliocentric system in which all five planets (Mercury, Venus, Mars, Jupiter and Saturn) orbit the Sun, which in turn orbits the Earth.

The Tychonic system became a major competitor with the Copernican system as an alternative to the Ptolemaic. After Galileo's observation of the phases of Venus in 1610, most cosmological controversy settled on variations of the Tychonic and Copernican systems. In a number of ways, the Tychonic system proved philosophically more intuitive than the Copernican system, as it reinforced commonsense notions of how the Sun and the planets are mobile while the Earth is not. Additionally, a Copernican system would suggest the ability to observe stellar parallax, which could not be observed until the 19th century. On the other hand, because of the intersecting deferents of Mars and the Sun, it went against the Ptolemaic and Aristotelian notion that the planets were placed within nested spheres. Tycho and his followers revived the ancient Stoic philosophy instead, since it used fluid heavens which could accommodate intersecting circles.

Legacy of the Tychonic system

After Tycho's death, Johannes Kepler used Tycho's own observations to demonstrate that the orbits of the planets are ellipses and not circles, creating the modified Copernican system that ultimately displaced both the Tychonic and Ptolemaic systems. However, the Tychonic system was very influential in the late 16th and 17th centuries. In 1616, during the Galileo affair, the papal Congregation of the Index banned all books advocating the Copernican system, including works by Copernicus, Galileo, Kepler and other authors, until 1758. The Tychonic system was an acceptable alternative, as it explained the observed phases of Venus with a static Earth. Jesuit astronomers in China used it extensively, as did a number of European scholars. Jesuits (such as Clavius, Christoph Grienberger, Christoph Scheiner, and Odo van Maelcote) were the most efficient agents for the diffusion of the Tychonic system. It was chiefly through the influence of the Jesuit scientists that the Roman Catholic Church adopted the Tychonic system, over a period of nine years (from 1611 to 1620), in a process directly prompted by the Galilean telescopic discoveries.

The discovery of stellar aberration in the early 18th century by James Bradley proved that the Earth did in fact move around the Sun, and Tycho's system fell out of use among scientists. In the modern era, some geocentrists use a modified Tychonic system with elliptical orbits, while rejecting the concept of relativity.

Notes and references

- Westman, Robert S. (1975). The Copernican achievement. University of California Press. p. 322. ISBN 978-0-520-02877-7. OCLC 164221945.
- Owen Gingerich, The Book Nobody Read: Chasing the Revolutions of Nicolaus Copernicus, Penguin, ISBN 0-14-303476-6
- Ramasubramanian, K. (1994). "Modification of the earlier Indian planetary theory by the Kerala astronomers (c. 1500 AD) and the implied heliocentric picture of planetary motion". Current Science 66: 784-90.
- Joseph, George G. (2000), The Crest of the Peacock: Non-European Roots of Mathematics, p. 408, Princeton University Press, ISBN 978-0-691-00659-8
- "The Tychonic system is, in fact, precisely equivalent mathematically to Copernicus' system." (p. 202) and "[T]he Tychonic system is transformed to the Copernican system simply by holding the sun fixed instead of the earth. The relative motions of the planets are the same in both systems ..." (p. 204), Kuhn, Thomas S., The Copernican Revolution (Harvard University Press, 1957).
- "This new geoheliocentric cosmology had two major advantages going for it: it squared with deep intuitions about how the world appeared to behave, and it fit the available data better than Copernicus's system did." The Case Against Copernicus (Scientific American, Dec 17, 2013, by Dennis Danielson and Christopher M. Graney).
- Owen Gingerich, The eye of heaven: Ptolemy, Copernicus, Kepler, New York: American Institute of Physics, 1993, p. 181, ISBN 0-88318-863-5
- Blair, Ann, "Tycho Brahe's critique of Copernicus and the Copernican system", Journal of the History of Ideas, 51, 1990: 355-377, doi:10.2307/2709620, pages 361-362. Moesgaard, Kristian Peder, "Copernican Influence on Tycho Brahe", The Reception of Copernicus' Heliocentric Theory (Jerzy Dobrzycki, ed.) Dordrecht & Boston: D. Reidel Pub. Co. 1972. ISBN 90-277-0311-6, page 40. Gingerich, Owen, "Copernicus and Tycho", Scientific American 173, 1973: 86-101, page 87.
- Blair, 1990, 361.
- J J O'Connor and E F Robertson. Bessel biography. University of St Andrews. Retrieved 2008-09-28.
- The sizes Tycho measured turned out to be illusory, an effect of optics, the atmosphere, and the limitations of the eye (see Airy disk or Astronomical seeing for details). By 1617, Galileo estimated with the use of his telescope that the largest component of Mizar measured 3 seconds of arc, but even that turned out to be illusory, again an effect of optics, the atmosphere, and the limitations of the eye [see L. Ondra (July 2004). "A New View of Mizar". Sky & Telescope: 72-75.]. Estimates of the apparent sizes of stars continued to be revised downwards, and, today, the star with the largest apparent size is believed to be R Doradus, no larger than 0.057 ± 0.005 seconds of arc.
- Blair, 1990, 364. Moesgaard, 1972, 51.
- Blair, 1990, 364.
- Moesgaard, 1972, 52. Vermij R., "Putting the Earth in Heaven: Philips Lansbergen, the early Dutch Copernicans and the Mechanization of the World Picture", Mechanics and Cosmology in the Medieval and Early Modern Period (M. Bucciantini, M. Camerota, S. Roux, eds.) Firenze: Olski 2007: 121-141, pages 124-125.
- Graney, C. M., "Science Rather Than God: Riccioli's Review of the Case for and Against the Copernican Hypothesis", Journal for the History of Astronomy 43, 2012: 215-225, page 217.
- Blair, 1990, 362-364.
- Gingerich, 1973. Moesgaard, 1972, 40-43.
- Moesgaard, 40, 44.
- Graney, C. M. (March 6, 2012). The Prof says: Tycho was a scientist, not a blunderer and a darn good one too! The Renaissance Mathematicus. http://thonyc.wordpress.com/2012/03/06/the-prof-says-tycho-was-a-scientist-not-a-blunderer-and-a-darn-good-one-too/
- Stanford Encyclopedia of Philosophy. "John Scottus Eriugena." First published Thu Aug 28, 2003; substantive revision Sun Oct 17, 2004. Accessed April 30, 2014.
- Ramasubramanian, K., "Model of planetary motion in the works of Kerala astronomers", Bulletin of the Astronomical Society of India 26: 11-31 [23-4], retrieved 2010-03-05
- Finochiario, Maurice (2007). Retrying Galileo. University of California Press.
- Heilbron (2010), p. 218-9
"New Philosophy and Old Prejudices: Aspects of the Reception of Copernicanism in a Divided Europe". Stud. Hist. Phil. Sci. 30 (237–262): 247. - Seligman, Courtney. Bradley's Discovery of Stellar Aberration. (2013). http://cseligman.com/text/history/bradley.htm - Plait, Phil. (Sept. 14, 2010). Geocentrism Seriously? Discover Magazine. http://blogs.discovermagazine.com/badastronomy/2010/09/14/geocentrism-seriously/#.UVEn7leiBpd - Musgrave, Iam. (Nov. 14, 2010). Geo-xcentricities part 2; the view from Mars. Astroblog. http://astroblogger.blogspot.com/2010/11/geo-xcentricities-part-2-view-from-mars.html
Lighthouses are a fascinating part of Michigan history and popular tourist spots throughout the state. But if you want to buy one, restore it and open it to tourists, you can forget it. The federal government won't sell you one, even though many of them are in decay.

During the 1800s and early 1900s, the federal government built about 2,000 lighthouses, over 100 of them in Michigan, to shine their lights and protect ships from rocky shores. The government also supplied the lighthouses with modern lenses and equipment and built caretakers' houses nearby. Radar, modern communications and other innovations have since rendered many lighthouses obsolete. Since 1939, the Coast Guard has been in charge of maintaining the nation's lighthouses, but keeping them in shape has often been a losing battle. "Many of them have been abandoned by the authorities and are falling victim to vandalism and the elements," reports Tim Harrison, editor of the monthly publication Lighthouse Digest.

Selling lighthouses could be a win-win situation for all concerned: private investors who have the incentive to improve the value of their property, the government that would collect revenue on lighthouses converted from a drain on the treasury to taxpaying private enterprises, and all those people who don't want to see historical treasures disintegrate through public neglect.

Since the 1960s, however, federal law has made it almost impossible for the government to sell any lighthouse, regardless of its structural condition, to a private owner. The government sometimes leases lighthouses to be used as historical museums, but it reserves the right to take them back. Understandably, museum operators hesitate to make needed improvements because the property can be taken away when a lease expires.

To help preserve lighthouses and to persuade the authorities to give them to museums or other local groups, the Great Lakes Lighthouse Keepers Association was formed in 1982. So far, Washington isn't doing much listening. Giving obsolete lighthouses to museums or preservation organizations would certainly be better than letting them decay further. But private, for-profit entrepreneurs might in many cases make the best of all caretakers. "Private owners, the government thinks, may not maintain the lighthouses properly," observes Harrison. We are left with the peculiar argument that even though the nation's lighthouses are steadily deteriorating, they can't be sold to private owners because those owners might let them deteriorate further.

Experience with private ownership of lighthouses suggests a promising potential for privatizing at least some of those still owned by the government. Before the 1960s, a number of them were sold to private individuals. Most of those owners, including some in Michigan, have taken excellent care of their property. The Mendota Lighthouse, for example, is a well-kept home on the Keweenaw Peninsula. Two others, the Sand Hills Lighthouse (also on the Keweenaw Peninsula) and the Big Bay Point Lighthouse north of Marquette, are popular bed-and-breakfast establishments.

William Frabotta bought the Sand Hills Lighthouse over thirty-five years ago. He refurbished the eight bedrooms, all with private baths, and now rents out the rooms in both summer and winter. Frabotta brags about the cross-country skiing in the winter and the view of the scenic Northern Lights from the tower during the summer. "People love to come here to see a part of history," Frabotta says. There's a larger lesson to be learned from all this.
America's dwindling supply of lighthouses presents us with both a case study in the shortcomings of public ownership and a heartening prospect of what private enterprise might do if given the chance. To Washington, concerned citizens should send a clear signal: Sell the Lighthouses!
The portrait of Elihu Root in the Harvard Law School Library depicts him as he looked in 1903, when he was 58 and secretary of war under President Theodore Roosevelt. Root wears a thick tie and full vest and morning coat. He is standing, with rimless glasses in his right hand and his left hand in a pants pocket. With graying brown hair parted in the middle, somber brown eyes, and a thick moustache, he is the embodiment of the lawyer-statesman—the idealized lawyer who is a skilled legal technician but, more to the point, a person of practical wisdom and exemplary character.

The portrait hangs at the library's south end, outside a room named for him. Even a frequent visitor to the Root Room should be forgiven for thinking he was another distinguished graduate of the law school recognized for his accomplishments. The library assumes his eminence, without explaining who he was.

He was from an old American family. "My maternal grandfather, with whom I passed much time as a child," Root wrote in a letter to a historian, "was the son of the man who commanded the Americans in the fight at Concord bridge on the nineteenth of April in 1775."

He was certainly accomplished. In 1913, when he was a United States senator from New York, he was awarded the Nobel Peace Prize, for advocating that major conflicts between countries be settled by arbitration instead of war. After his chapter at the War Department and before being elected to the Senate, he was Roosevelt's secretary of state. In that job, he negotiated bilateral treaties with 24 countries, which each committed to using arbitration to resolve disputes. That led to the creation of a world court, officially the Permanent Court of International Justice, which existed until 1946. Root was a kind of godfather to the group of men responsible for the Kellogg-Briand Pact (1928), which failed to end all wars, as it was supposed to, but, by changing the rules of war, all but ended wars of conquest—the lion's share of wars until then.

The Root quotation on the wall of the room begins, "He is a poor-spirited fellow who conceives that he has no duty but to his clients and sets before himself no object but personal success." Lawyers more often quote another piece of wisdom from Root found in his authorized biography: "About half the practice of a decent lawyer consists in telling would-be clients that they are damned fools and should stop."

Still, Root didn't attend Harvard College or Harvard Law School. He graduated from New York University School of Law, in 1867. The opening of the room in 1939 was the result of a gift to Harvard from Henry L. Stimson, who attended Harvard Law School for two years before becoming Root's protégé and then partner at his law firm in New York City. (In those days, membership in the New York Bar did not require a law degree, and six out of every 10 candidates who took the bar exam had never been to college, let alone law school.)

Stimson, like Root, was prominent in government as well as law practice. He was secretary of state for President Herbert Hoover and secretary of war for President William H. Taft and then, a generation later, for President Franklin D. Roosevelt. The year he turned 50, he enlisted during the First World War. He served as an artillery officer in France and left the Army as a colonel in the 31st Field Artillery.
Stimson had intended that the gift help endow a professorship in his mentor's name, but, a dozen years later, after Harvard was unable to raise sufficient additional money, he said he "must leave the use of the fund to the authorities of the university." Harvard used it for the "equipment and decoration" of an informal reading room in Langdell.

The Root Room was meant to have the feel of a living room—but a stately one, decorated in a neoclassical style. Large and open, with a high ceiling, it was painted an airy blue and white, with ornamental columns emphasizing its height. Paintings and sculptures of distinguished figures from American and British law defined the walls and corners. On the wall opposite the entrance, there was a fireplace with a reproduction of the mantel that, until 1857, stood behind the speaker's desk in the House of Representatives, in Washington, D.C. Looking back from the fireplace, you could see a gilded clock above the entrance with Roman numerals marking the hours.

While the room offered newspapers and magazines, primarily it provided students with books—some novels but mostly nonfiction—about the intersection of law and life rather than the law. The furniture included easy chairs where students sprawled, slept, and otherwise made themselves at home, sometimes getting lost in their reading, which transported them across time and space.

"The Trial of Dr. Adams," Sybille Bedford's true-crime account, had that power. It's about an epic 1957 murder trial at London's Old Bailey of a 58-year-old English country doctor named John Bodkin Adams. Astonishingly, he was acquitted of poisoning an elderly female patient with large quantities of heroin and morphine, though he had irrefutably prescribed them. He was later stripped of his medical license, but he lived into his mid-80s and died a wealthy man, having been the beneficiary of the wills of 132 patients, out of 163 who died in suspicious circumstances. Bedford wrote this about the trial's opening, when the clerk of the court addressed the defendant and the defendant addressed the judge:

"Do you plead Guilty or Not Guilty?" There is the kind of pause that comes before a clock strikes, a nearly audible gathering of momentum, then, looking at the Judge who has not moved his eyes: "I am not guilty, my Lord." It did not come out loudly but it was heard, and it came out with a certain firmness and a certain dignity, and also possibly with a certain stubbornness, and it was said in a private, faintly non-conformist voice. It was also said in the greatest number of words anyone could manage to put into a plea of Not Guilty.

Much of the Root collection was biography, with many books about icons of American law, like John Marshall, Daniel Webster, and Abraham Lincoln, and others about obscure figures, like Charles Henry Fernald, a county judge in Santa Barbara, California, and George Shiras, a Supreme Court justice from 1892 to 1903. Some of it was macabre (e.g., "The Reluctant Hangman: The Story of James Berry, Executioner – 1884-1892" by Justin Atholl and "Wills of the U.S. Presidents" by Herbert R. Collins and David B. Weaver). Some of it was esoteric ("Forbrydertyper hos Shakespeare" by August Goll—"Criminal types in Shakespeare," an authorized translation from Danish).

HLS librarians regarded the books as entertainment and, compared with legal textbooks, they were. But many in the collection seriously addressed subjects that the law school's curriculum barely considered.
The books were Harvard Law School's version of what Harvard Business School gathered in its Power and Morality Collection. The law school's goal was to teach students to think like lawyers, not how to practice law or even what lawyers did in practice. The Root Room provided books about both for a form of independent study in a counter-curriculum.

Brown v. Board of Education is often called the most important Supreme Court ruling of the 20th century. It was not yet two decades old when I arrived as a 1L in 1973 and became a Root Room regular. Many people know that Thurgood Marshall was the lawyer who argued in favor of what the Court unanimously decided in 1954—that segregated public schools violated the equal protection clause of the 14th Amendment to the United States Constitution. Because of what Marshall stood for as a champion of equal rights, President Lyndon B. Johnson picked him to be solicitor general—the first African-American to serve in that role—and, then, to sit for 24 years on the Supreme Court.

Yet even in the legal world, relatively few know that the lawyer who argued in favor of the status quo in that case, and, therefore, for letting states maintain separate public schools for black students, was John W. Davis. In 1953, Davis' daughter was seated next to Marshall's wife in the visitors' gallery of the Court when it heard oral argument in the Brown case. The daughter congratulated the wife after Marshall finished his argument. Mrs. Marshall replied about Davis, "My husband admires him so much." Marshall himself later said, "He was a great advocate, the greatest."

How could the lawyer who led the country's most important legal campaign for racial justice venerate his adversary bent on thwarting it? In 1973, the Root Room acquired a book that answered the question. "Lawyer's Lawyer: The Life of John W. Davis" by William H. Harbaugh, who was a professor of history at the University of Virginia, was, in the words of The New York Times Book Review, "a monument of scholarship and readability." It was my introduction to the counter-curriculum. That biography of Davis, part of the Root Room collection, raised one of the most difficult ideas in American justice: "the principle of non-accountability."

Davis argued more cases (140) before the Supreme Court in the 20th century than any other lawyer, until he was surpassed by a deputy solicitor general who worked for almost 35 years in the SG's office. In the 1930s, Marshall said, he often skipped classes at Howard University School of Law, in Washington, D.C., to hear Davis argue before the Court. Davis was the ultimate craftsman, the book explained, a genius at arguing in appellate courts, especially America's highest. Other famous lawyers were in awe of what Harbaugh called "his capacity for total absorption," his ability in a case "to master the record within a few hours." (A lawyer who worked with him said, "He could recite you a page of Dickens without even thinking about it.") Then, in "euphonious language" and an "authoritative baritone voice," he had the facility "to simplify complex matters with a few pithy Anglo-Saxon phrases devoid of adjective and drained of all emotion." That helped make Davis "the greatest Solicitor General in history," Harbaugh wrote, when he held the job for five years under President Woodrow Wilson, though other contenders for the title came after him.
He was a successful American ambassador to Great Britain after he resigned as solicitor general, and he ran for president as a conservative Democrat, losing to the Republican Calvin Coolidge in 1924, but it was as a lawyer that he defined himself for history. Among elite lawyers, he was one of the most admired in the profession, with his name put at the front of the name of the Manhattan law firm he joined in 1921, when he was 48. Two generations after his death, the firm of Davis, Polk & Wardwell remains among the most respected corporate law firms in the world.

Richard Kluger wrote the Times review that lauded Harbaugh's book, but he judged Davis as Harbaugh did not. Kluger asserted: "That he is scarcely remembered outside of his profession (though still idolized within it as the model of the appellate lawyer) is not merely comment on our short memory of public figures who decline to turn cartwheels in quest of our acclaim. It is, as well, testament that the values for which John Davis stood unbending throughout his 81 years—the sanctity of property, the immutability of laws, the obligation of the individual to sink or swim on his own—have been challenged by other principles in our ongoing national ferment over the definition of a just society."

In 1975, in "Simple Justice," his landmark history of the Brown case, Kluger repeated that thought and much of that language in his account of Davis' background as a Southerner. But he added to that paragraph about the meaning of a just society: "Part of that ultimate definition, it became clear in the aftermath of the Second World War, would hinge on settling the status of black Americans. John Davis's role in that settlement was determined by one of the few shortcomings in his otherwise sterling character: all his life he was a gentleman racist." As Harbaugh put it, "his heart was really with the white social order."

Was it fair of Kluger to condemn Davis as a racist, and not let his appraisal of the man rest on the quality of his advocacy? As Harbaugh explained, and Kluger quoted, Davis adhered "absolutely to the principle that the lawyer's duty was to represent his client's interest to the limit of the law, not to moralize on the social and economic implications of the client's lawful actions." In admiring Davis, Thurgood Marshall accepted that tenet, which the legal scholar Murray Schwartz called the "principle of non-accountability" at the heart of the American adversary system.

The system depends on lawyers vigorously representing each of the opposing parties in a dispute. They can do that, the theory goes, only if they are not held responsible for what society in general, and a judge and jury in particular, find repugnant in the actions of clients the lawyers are representing. That is among the most difficult ideas in American justice. It was debated in the 1970s when the American Civil Liberties Union, then headed by Harvard Law School graduate Norman Dorsen, defended the right of the National Socialist (Nazi) Party of America to march in uniforms with swastikas on armbands through Skokie, Illinois, then a village of 70,000 people with 5,000 Holocaust survivors. It is being debated again today, within the ACLU, too. Some staff members have questioned whether the principle of non-accountability still applies when what's at stake is hate speech in this era of polarized politics and extremism swollen around the globe by social media.
They have protested the organization's defense of Milo Yiannopoulos, the "alt-right" provocateur who, the ACLU recognizes, "has fostered both anti-Muslim bias and disdain for women in one breath, characterizing abortion as 'so clearly bad for women's health that it falls second only to Islam.'"

A frolic and detour in the Root Room led to that fundamental issue. Similar excursions led to others as important.

In 1990, after a half century, the library moved most of the books out of the Root Room and remade it into a center of scholarship where researchers can work with materials brought in from the school's Historical & Special Collections—as the library describes, "nearly three thousand linear feet of manuscripts, over three hundred thousand rare books, and more than seventy thousand visual images." The décor is the same, but the furniture has been thinned out. The room feels even airier and looks beautiful. It's still the Root Room, but it's really RR 2.0. "The Library remains the largest academic law library in the world, and continues to reinvent itself to meet the needs of the law school," the school's website says. The library had contemplated the change for several years, but, as a librarian told me, it was "the asbestos contamination in the Treasure Room, in the spring of 1990, that ultimately accelerated the relocation plan."

The Caspersen Room, as it's now called, is at the north end of the library and serves as a space for exhibitions, such as the 2005 "Retrospective Honoring Charles Hamilton Houston on the Grand Opening of the Charles Hamilton Houston Institute for Race and Justice" at the law school. An HLS graduate, Houston was dean of Howard Law School and litigation director of the NAACP, the great lawyer who was the main architect of the strategy that Thurgood Marshall and others carried out.

Around the time of the change, the legal profession emerged as a subject of first-rate scholarship among law professors, including HLS's David Wilkins '80, who is now director of the school's Center on the Legal Profession and vice dean for global initiatives on the legal profession. What the law school of earlier decades left to the old Root Room to teach about the life of the law is now part of the school's curriculum. That's so in courses like Challenges of a General Counsel, which Wilkins co-teaches with Ben Heineman, the former senior vice president and general counsel of GE, and Cross Border M&A: Drafting, Negotiation & the Auction Process, which Mitchell Presser, the head of the U.S. M&A practice of the global law firm of Freshfields Bruckhaus Deringer, teaches, and in the many legal clinics, student practice organizations, and externships that have transformed what students learn and how.

Law professors got interested in the legal profession because, as major law firms began to morph into economic powerhouses, many leading American lawyers were concerned that the elite segment of the profession was abandoning "principle for profit, professionalism for commercialism," as a report of the American Bar Association put it. Davis, Root and Stimson had all been accused of doing the same thing.
1887 described the problem around the time Root was in the Roosevelt administration, elite lawyers had let themselves “become adjuncts of great corporations” and had “neglected the obligation to use their powers for the protection of the people.” A generation ago, Wilkins and others began a challenging quest that continues today: to persuade elite lawyers to give the ethical dimensions of lawyering much more attention. I graduated from HLS without really understanding what the word “professional” meant in defining the legal profession and how that affected the behavior of lawyers in the most influential corporate law firms. For that matter, I didn’t know what the solicitor general actually did in the U.S. Justice Department or understand many other aspects of law and the legal culture that struck me as significant because of what I read in the Root Room. “Simple Justice” was one of the last books I read in the room before graduating in 1976—especially memorable, it later dawned on me, because the book’s revelations about what happened at a profound intersection of life and law in the United States fortified my interest in a career as a journalist and an author of books about legal affairs. In “Skadden: Power, Money, and the Rise of a Legal Empire,” “The Tenth Justice: The Solicitor General and the Rule of Law,” and other books, I have written about answers I have found and stories that helped explain the answers. I focused on the Skadden firm, rather than Davis Polk or another of the old New York-based firms, because it symbolized the transformation of the large law firm after World War II. It opened on April Fools’ Day in 1948, a tiny operation with no clients; three partners who had been passed over for partnership at well-established firms; and one associate, Joe Flom, from HLS’s two-year, postwar Class of 1948. Thanks to its willingness to serve as special counsel for special purposes—matters too dicey for some other firms to take on—it grew to 75 lawyers in 1975. By 1990, on the basis of its lucrative practice in counseling companies involved in fighting off or doing corporate takeovers, it was a mega-firm with 1,000 lawyers. Flom exhorted his colleagues, “We’ve got to show the bastards that you don’t have to be born into it.” At a celebration of Skadden’s 40th anniversary in 1988, when it was making more money than any law firm ever had and was the colossus of the legal profession, Flom issued a warning. He instructed: “We must remember that the history of major institutions is that they are not permanent. The only permanence comes from what you make of it, or what the institution makes of itself. If it becomes a dinosaur, it will disappear.” It was his version of the lesson that the law school demonstrated it understood when it overhauled its curriculum, and that the library showed it grasped when it remade the Root Room.
<urn:uuid:7c21d6db-518e-476a-8405-864fac40b412>
{ "date": "2019-10-14T04:13:04", "dump": "CC-MAIN-2019-43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986649035.4/warc/CC-MAIN-20191014025508-20191014052508-00376.warc.gz", "int_score": 3, "language": "en", "language_score": 0.975497305393219, "score": 3, "token_count": 4475, "url": "https://today.law.harvard.edu/feature/the-root-room/" }
Overridable and Overrides

Although Overridable and Overrides sound quite technical, they aren't really, because the meaning is just as you would use the words in normal English. That is, the one version will override or replace the settings of the original or earlier version. If something is Overridable, it can be replaced.

You are allowed to override a method in the parent class if that method has been declared Overridable. Also, it is important to remember that the signature of the overriding method must be the same as the signature of the parent method. ("Signature" was covered in an earlier article.) So if the parent class contains an Overridable method, then the child class may override that method. The developer of the child class has the choice as to whether to provide an overridden method or not. It is not mandatory.

System.Object.ToString is designated as Overridable, as we saw in the Object Browser.

You can create any String you like as the returned result of the ToString method. In the case of the Person Class, the Forename and Surname are possibly the two most useful pieces of core information. So I will use these as the returned value of the ToString function - that is, what the client code gets if it calls this method. Here is that ToString method for the Person Class:

Public Overrides Function ToString() As String
    Return m_forename & " " & m_surname
End Function

It finds the values of m_forename and m_surname for the current instance and returns those values as a concatenated string, with a space between the two parts. Concatenated is simply developer-speak for "joined together".

Testing out this method, place the following code in the Button click event of Form1 in the ClassBasics project:

' Create a new Person instance
Dim RealPerson As New Person("Ged", "Mead")
' Display using the overridden ToString method
Label1.Text = RealPerson.ToString

This time, you will get the result you want: the label displays "Ged Mead".

In this article I covered one method, ToString, in some depth. You saw that all classes inherit from System.Object and that we can override the ToString method in the System.Object class. We can do this because the declaration of the System.Object ToString method includes the Overridable modifier. If you choose to override a method there are two key requirements:

- You must include the Overrides modifier in the child method declaration
- The signature must be the same as that of the parent method.

You can get a lot of very useful information about classes and hierarchies by using the Object Browser tool in Visual Studio. Of course, you can include many other methods in a class - and probably would. We will look at some other methods in a later article. In the next article in this series, I will look at Properties in more detail, how to validate values and send messages back to client code.
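To recap, the fragments above fit together roughly as follows. This is a sketch rather than the series' exact listing - the field access levels and the console test harness are assumptions - but it shows the two keywords doing exactly what the article describes:

Public Class Person
    Private m_forename As String
    Private m_surname As String

    Public Sub New(ByVal forename As String, ByVal surname As String)
        m_forename = forename
        m_surname = surname
    End Sub

    ' System.Object declares ToString as Overridable, so this class may
    ' replace it. Overrides is required, and the signature must match
    ' the parent method exactly.
    Public Overrides Function ToString() As String
        Return m_forename & " " & m_surname
    End Function
End Class

Module Demo
    Sub Main()
        Dim realPerson As New Person("Ged", "Mead")
        ' Prints "Ged Mead" - the overridden method is called, even though
        ' ToString is declared on System.Object.
        Console.WriteLine(realPerson.ToString())
    End Sub
End Module

Note that omitting the Overrides keyword does not override the parent method; the compiler warns and treats the new method as shadowing the parent's instead.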
<urn:uuid:509a1ca0-13df-48f6-b28b-b527d3f90ca4>
{ "date": "2018-03-20T05:38:40", "dump": "CC-MAIN-2018-13", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647299.37/warc/CC-MAIN-20180320052712-20180320072712-00456.warc.gz", "int_score": 4, "language": "en", "language_score": 0.8919786810874939, "score": 3.546875, "token_count": 640, "url": "http://devcity.net/Articles/379/4/article.aspx" }
Urbanization: an increase in the share of a population living in cities and towns versus rural areas. Urbanization began during the industrial revolution, when workers moved towards manufacturing hubs in cities to obtain jobs in factories as agricultural jobs became less common.
<urn:uuid:91727cb8-cfc8-4e1a-810c-bb3ac291caac>
{ "date": "2014-09-01T20:54:23", "dump": "CC-MAIN-2014-35", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535920694.0/warc/CC-MAIN-20140909055349-00486-ip-10-180-136-8.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9366850256919861, "score": 2.90625, "token_count": 69, "url": "http://www.businessdictionary.com/" }
Copenhagen’s CO2 neutral university building will be used to encourage ‘green’ thinking ahead of climate conference

Christensen & Co Arkitekter A/S won an invited competition to design facilities for the Faculty of Science at the University of Copenhagen with their proposal - the ‘Green Lighthouse’. Ahead of the 2009 United Nations Climate Change Conference in the city, the building, which broke ground yesterday, is being commended as a ‘beacon’ of sustainability by the Lord Mayor of Copenhagen.

Dubbed the ‘sundial’ due to its cylindrical shape and adjustable façade louvres, which let light sweep around the building as they follow the sun, the design uses the structure itself to reduce CO2 emissions.

Copenhagen X, an organisation created to encourage architectural awareness in Denmark, explain how the design is championing the way for sustainability in Copenhagen: “The Green Lighthouse is in a class of its own when it comes to commercial buildings which can call themselves CO2 neutral. The proportion between windows and facade has been carefully calculated to assure that the building will not consume more energy for heating than strictly necessary.

"The varying intensity of the sun is incorporated into the building's energy system; in summertime excess solar energy is collected in an underground store to use later when the power of the sun is at its weakest. Fresh air is drawn in through motorised windows and ventilated through the skylights to create a pleasant indoor climate, while adjustable louvers in the window sections automatically move up and down with the passage of the sun around the facade.”

Providing 950 sq m of space on three levels, the Green Lighthouse (green both physically and figuratively) will house student advisory services, university administration and a faculty club.

Copenhagen X say that “To put it bluntly: Copenhagen is not really dotted with prominent examples of sustainable architecture”, but it is hoped that the structure, which will emit less CO2 than the limits buildings are forecast to face by 2020, will help to bolster the city’s sustainability credentials ahead of the conference, which is being held from 30 November next year.

Niki May Young
<urn:uuid:847c6c45-c443-4279-91b3-f29a736dc626>
{ "date": "2018-08-17T01:51:47", "dump": "CC-MAIN-2018-34", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211403.34/warc/CC-MAIN-20180817010303-20180817030303-00056.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9512624740600586, "score": 2.515625, "token_count": 452, "url": "http://www.worldarchitecturenews.com/project/2008/10576/christensen-co-architects/green-lighthouse-in-copenhagen.html" }
George Orwell, Down and Out in Paris and London George Orwell’s first book was a tale of living in poverty in two of Europe’s great cities. Published in 1933, it was summarily banned in April of that year by T. W. White, Australia’s repressive Minister for Customs. In 1936, the Literature Censorship Board followed up by banning Orwell’s Keep the Aspidistra Flying, which the chair of the Board, Sir Robert Garran, described as ‘indecent’ and ‘of no literary merit’. Just a year later, Orwell established his reputation with The Road to Wigan Pier and his early works became valuable publishing property. Penguin released Down and Out in a huge paperback edition in 1940 and the book was widely imported into Australia until 1953, when a Customs officer noticed that it was still an illegal import. At this point, the book was sent to the Literature Censorship Board, apparently without explanation. The Board was bemused. As one member put it, ‘I find no ground, whatever, for considering this an indecent book and, indeed, am surprised at the Customs Department referring it to the Board.’ The book was officially allowed into Australia 20 years after it was published, and three years after its author had died.
<urn:uuid:cf3cb470-b28e-4c51-9f09-638b71d3b121>
{ "date": "2015-04-21T08:18:08", "dump": "CC-MAIN-2015-18", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246641054.14/warc/CC-MAIN-20150417045721-00286-ip-10-235-10-82.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9796806573867798, "score": 2.515625, "token_count": 274, "url": "http://www.lib.unimelb.edu.au/collections/special/exhibitions/bannedbooks/exhibition/orwell.html" }
July 23, 2014 -- Sydney Kendall lost her right arm below the elbow in a boating accident when she was 6 years old. Now 13, Sydney has used several prosthetic arms. But none is as practical -- nor as cool, she'd argue -- as her pink, plastic, 3-D-printed robotic arm.

The arm was custom-designed for her this spring, in pink at her request, by engineering students at Washington University in St. Louis through a partnership with Shriners Hospital. They printed it while Sydney and her parents watched. "It took about 7 minutes to do each finger," says Sydney's mother, Beth Kendall. "We were all blown away."

When Sydney wore her new arm to her school outside St. Louis, her classmates were blown away, too. "They were like, 'Sydney, you're so cool! You're going to be famous!'" Sydney recalls.

The robotic arm, with its opposable thumb, helps Sydney grip a baseball, maneuver a mouse, and pick up a paper coffee cup. The cost? About $200. Traditional robotic limbs can run $50,000 to $70,000, and they need to be replaced as children grow. "Kids don't usually get to have robotic arms because they are so expensive," Beth Kendall says.

Robotic arms like Sydney's are just one example of how 3-D printing is ushering in a new era in personalized medicine. From prosthetics to teeth to heart valves, it's bringing made-to-order, custom solutions into operating rooms and doctors' offices. Experts say dozens of hospitals are experimenting with 3-D printers now, while researchers work on more futuristic applications of the technology: printing human tissue and organs.

To foster even more research, the National Institutes of Health in June launched a 3-D Print Exchange that allows users to share and download files. "3-D printing is a potential game-changer for medical research," said NIH Director Francis Collins, MD, PhD, in announcing the exchange. "At NIH, we have seen an incredible return on investment; pennies' worth of plastic have helped investigators address important scientific questions while saving time and money."

As one of the leading researchers in the field, Anthony Atala, MD, director of the Wake Forest Institute of Regenerative Medicine, understands its promise firsthand. The institute has already created miniature livers that live in petri dishes as a step toward creating organs. "3-D printing has the potential to revolutionize medicine," he says.

What Is 3-D Printing?

Imagine an ink jet printer that, rather than spraying out ink in the shape of letters, sprays out a plastic or metal gel or powder in the shape of a tooth, finger, or a hip joint. A typical printer receives a document to print, while 3-D printers take their commands from an MRI or a CT scan of a body part. Also known as "additive manufacturing," 3-D printing produces an object, layer by layer, from the ground up.

Although 3-D printers have been around since the 1980s, medical uses have skyrocketed in the past few years, experts say. They can produce more complex shapes than traditional manufacturing. This allows the products to be highly personalized: a tooth that looks just like the one you lost, or an exact replica of a hip joint. The process can save time and practically bring production of medical devices to the patient's bedside. Although no one has exact numbers, University of Michigan biomedical engineering professor Scott Hollister believes several dozen medical centers in the country now use 3-D printers in some form. 
Teeth, Limbs, and Hearing Aids

3-D printing is already widely used for body parts -- usually made of plastic or metal -- that come in contact with the body but don't enter the bloodstream. These include teeth, hearing aid shells, and prosthetic limbs.

"In the past, a dental crown had to be fabricated in a lab, which takes a few days if not a few weeks and two to three trips to the dentist by the patient," says Chuck Zhang, PhD, a professor of industrial and systems engineering at Georgia Institute of Technology. Now a dentist can take a 3-D scan of a tooth and print the crown on the spot.

The technique gives amputees like Sydney an alternative to ugly and ill-fitting prosthetics. 3-D printing studios often collaborate with clients to design stylized, artistic limbs the user wants to show off -- not hide.

Zhang and his colleagues at Georgia Tech are working with military veteran amputees to correct their prosthetics' notoriously poor fit. His team is using 3-D-printed materials to create a prosthetic socket that adapts to the body's changing fluid levels. It will tighten or loosen as needed so the limb doesn't fall off or become painfully uncomfortable.

3-D-printed plastics and metals have also made their way inside the body. Doctors at University of Michigan's Mott Children's Hospital have saved the lives of two babies since 2012 by implanting 3-D-printed plastic splints into their windpipes. The babies had a rare birth defect called tracheobronchomalacia. Without treatment, their weak airways would collapse, suffocating them. The only treatment is to insert a tracheostomy tube and put the baby on a ventilator for up to several years until, hopefully, the airways become strong enough to stay open on their own.

But 17-month-old Garrett Peterson's airways weren't showing any signs of getting stronger while on the ventilator. Doctors in Utah, where the Petersons live, said they had done all they could. "Everything had to be perfect in the world. Garrett couldn't cry, or he'd turn blue. He couldn't poop, or he'd turn blue," says his father, Jake Peterson. "We just had to hold him and keep him perfectly happy, so it wasn't realistic to keep him on the ventilator."

The Petersons had read an article about a similar baby helped at the university in 2012 with a 3-D-printed tracheal splint, and they sought the help of Mott surgeon Glenn Green, MD. "We decided this was Garrett's only chance. The hospital here in Utah said to enjoy him for the rest of the time we had him. And we weren't ready to do that," says Natalie Peterson, Garrett's mother.

Based on CT scans of Garrett's airways, Green and biomedical engineering professor Hollister designed and printed custom-fit splints to hold Garrett's airways open. His body will eventually absorb the device, and the airways will stay open on their own. Mott Children's Hospital says it was the first facility in the world to perform this procedure. "I think it was the first example of using a 3-D-printed device in a life-or-death situation," says Hollister, referring to the baby helped in 2012.

Costs for a tracheostomy and extended time on a ventilator exceed $1 million per patient. The splint totaled $200,000 to $300,000, says Hollister.

Surgeons have implanted other 3-D-printed devices into patients. Cranial plugs fill holes made in the skull for brain surgery. Cranial plates can replace large sections of the skull lost to head trauma or cancer. Mayo Clinic and some other hospitals offer 3-D-printed hip and knee replacements to eligible patients. 
The custom joints minimize surgery and recovery time, as surgeons do not have to chisel away at bone to put them in. The FDA has two labs that are investigating how the technology may affect medical devices. In addition to metals and plastics, doctors and scientists around the country are loading 3-D printers with human cells and printing living tissue, called bioprinting. The Holy Grail is to print a living organ for transplant using a patient’s own cells. Some experts predict this could be just a couple of decades away and potentially revolutionize organ transplants. Patients wouldn’t die waiting for organs, and their immune systems wouldn’t reject the organs. Atala of the Wake Forest Institute says researchers will use the miniature livers they created to test drug toxicity. They expect the method to be far more accurate than traditional animal and cell testing, he says. Biomedical engineers use several methods to print an organ. The printer creates a plastic mold of the organ that can be covered with the human cells. Or the printer can jet the cells out inside a collagen-based gel that will hold it all together. The cells must grow on the plastic or collagen scaffold for several weeks before the organ could potentially work. After putting it into the body, the scaffold disintegrates, leaving only human tissue behind. For children, this would mean the tissues could grow with them, eliminating the need for surgeries as they grow. Already, bioengineers at Cornell University have printed ears, and the University of Michigan is also testing the concept. Many labs already print tissue for research and drug testing, and patching damaged organs with strips of human tissue may happen in the near future, says Stuart Williams, PhD, of the Cardiovascular Innovation Institute at the University of Louisville. The first printed windpipe may not be too far off either, says Faiz Bhora, MD, co-director of Mount Sinai Hospital’s Airway Center. Bhora and his colleagues are building windpipes both with plastic and gel bases in hopes of helping patients born with defects or tumors that block their airways. As centers like Bhora’s work on future applications, Hollister predicts the immediate benefits of 3-D printing will lead to having one in every hospital. Williams offers a prediction, too: “3-D printing will change the delivery of health care.”
<urn:uuid:0fa461cb-7572-4f8a-9566-b6d8aef5b18f>
{ "date": "2014-09-19T13:53:08", "dump": "CC-MAIN-2014-41", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657131376.7/warc/CC-MAIN-20140914011211-00319-ip-10-196-40-205.us-west-1.compute.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9549453258514404, "score": 2.984375, "token_count": 2140, "url": "http://www.fox16.com/story/d/story/will-3-d-printing-revolutionize-medicine/24463/U6wl_PUt6k6thkBNNxS5Ag" }
CS1102 offers an accelerated and advanced introduction to program design. After a quick overview of basic program design, it covers how to design and implement domain-specific languages for custom software applications. By the end of the course, students are expected to develop skills in identifying, modeling and implementing simple languages, in understanding certain concepts that distinguish programming languages, and in functional programming. On a more general level, students are expected to strengthen their skills at approaching open-ended programming and software-development problems. The course is primarily targeted at students with prior programming experience (including functions, recursion, and lists or trees, as would be covered in AP). However, it is sufficiently self-contained to be a first course for a novice programmer with strong mathematical skills and a desire for a challenge. No particular prior language is assumed; the course covers functional programming (with Scheme), which is new to most students in the course. Students can start in CS1102 and switch over to the novice course CS1101 up through the midpoint of the term, should they find the pace too fast. Comic from xkcd; Lisp is a cousin of Scheme
<urn:uuid:5055301e-bdb7-495e-9924-6aed10b949fb>
{ "date": "2017-03-23T22:06:38", "dump": "CC-MAIN-2017-13", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187225.79/warc/CC-MAIN-20170322212947-00076-ip-10-233-31-227.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9459301829338074, "score": 3, "token_count": 232, "url": "http://web.cs.wpi.edu/~cs1102/a08/" }
- HMS Neptune (1683) was a 90-gun second rate launched in 1683. She was rebuilt in 1710 and 1730 before being renamed HMS Torbay in her new incarnation as a third rate in 1750. She was sold in 1784.
- HMS Neptune (1757) was a 90-gun second rate launched in 1757. She was hulked in 1784 and broken up in 1816.
- HMS Neptune (1797) was a 98-gun second rate launched in 1797. She fought at the battle of Trafalgar and was broken up in 1818.
- HMS Neptune was to have been a 120-gun first rate. She was renamed HMS Royal George (1827) in 1822, before being launched in 1827. Royal George was sold in 1875.
- HMS Neptune (1832) was a 120-gun first rate launched in 1832. She was rebuilt as a 72-gun third rate with screw propulsion in 1859 and was sold in 1875.
- HMS Neptune (1863) was a coastguard cutter built in 1863 and sold in 1905.
- HMS Neptune (1878) was previously Independencia, an ironclad battleship launched in 1874 for the Brazilian Navy. Acquired by the Royal Navy in 1878, she was sold in 1903.
- HMS Neptune (1909) was an early dreadnought launched in 1909 and scrapped in 1922.
- HMS Neptune (20) was a Leander-class light cruiser launched in 1933 and sunk in a minefield off Tripoli in 1941.
- HMS Neptune was a projected Neptune-class cruiser in the 1945 Naval Estimates, but the plans were cancelled in March 1946 and she was never ordered.
- HMS Neptune is the name given to the shore establishment at HMNB Clyde.
- HSwMS Neptun, two submarines of the Swedish Navy

This article includes a list of ships with the same or similar names. If an internal link for a specific ship led you here, you may wish to change the link to point directly to the intended ship article, if one exists.
<urn:uuid:a16512ce-fe36-48d2-be9b-1669624fd374>
{ "date": "2018-10-19T03:30:30", "dump": "CC-MAIN-2018-43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512268.20/warc/CC-MAIN-20181019020142-20181019041642-00536.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9602890014648438, "score": 2.765625, "token_count": 446, "url": "https://en.wikipedia.org/wiki/HMS_Neptune" }
Surroundings of Hel

The earliest indication of the military significance of the Hel Peninsula was the construction of the military railway line in 1920-1921. In the following years there was merely a military observation post in Hel, and only in April 1929 was the ten-kilometre final stretch of the peninsula declared a military zone.

In the interwar period, around 1 km west of the fishing harbor in Hel, a naval base was built. The chosen site was on the verge of deep water and allowed traffic of warships of over ten-metre draught. In 1931-1934 a dock 320 m wide and 440 m long was built, provided with a dockside equipped with a railway track to serve ships.

The turn of the 14th and 15th centuries was for Hel, as for the whole of Poland, a period of wars and assaults, coming mainly from the Swedes. Hel's location as the outermost Polish sea base in the Gdańsk Bay made its history rich in battles and attacks by enemy fleets.

In 1939 the ground forces gave the Polish Navy four heavy and barely movable 105 mm cannons, purchased for armaments testing. These were two pairs of French Schneider guns, in versions made for the Danish army (barrel length L48) and the Greek army (L31).

In 1928 four Schneider 75 mm anti-aircraft guns mounted on warship support pieces were bought in France. The cannons were used to set up four defense half-batteries defending Gdynia, and three half-batteries defending the Fortified Area „Hel".

The defense of the sea base in Hel required medium-caliber cannons, able to engage the enemy's cruisers as well. Thanks to the efforts of Lieutenant Commander Heliodor Laskowski, the idea of buying old guns in France was abandoned and in 1933 an order was placed for Swedish Bofors cannons.

In 1946-1956 the Polish Navy built eleven coastal Stationary Artillery Batteries (Pol. abbr. BAS), equipped with naval guns cal. 152, 130 and 100 mm. It was part of a plan to defend the Polish coast with moored mines, protected against trawling by coastal artillery fire.

In 1955 another „cape" battery was built, and it was equipped with B-34U naval guns cal. 100 mm. These were universal, quick-firing guns able to engage aircraft, light warships and torpedo cutters. The guns were positioned on the shore of the Gdańsk Bay, near the fishing port.

Shortly before the war the Polish Navy purchased from Bofors in Sweden another four cannons cal. 152.4 mm. They were intended for the second battery of medium artillery, planned around 6 km from the „cape battery". The guns remained in Sweden, and during the fighting in 1939 the 34th battery was built in their place.

On September 3rd 1939 the minelayer ORP „Gryf" was wrecked in the naval port in Gdynia. The ship rested on the bottom of the dock, leaning to starboard. After the fire had gone out, a decision was made to dismantle and carry ashore the anti-aircraft weapons that were still in good working condition.

The conquest of numerous European countries and German preparations for the war with the Soviet Union led to a careful defense of the newly conquered coasts of the Atlantic and the Baltic. In the most crucial places the construction of the heaviest batteries, cal. 15 and 16 inches, was started.

The naval base in Hel needed the railway as its main means of transport for heavy loads. Therefore a network of narrow-gauge railway tracks, easy to camouflage and serving to carry torpedoes, naval mines and artillery ammunition, was built. 
The great importance of Gdynia, captured in 1939, as the new base of the Kriegsmarine on the Baltic, led to the immediate deployment of strong artillery in the Bay of Gdańsk. A year later, during the preparations for the war with the Soviet Union, the plan to strengthen the defense, also with the heaviest artillery, was implemented.

In the mid-1950s a dozen or so defense centres to protect the Polish coast against enemy armies were built, arranged in the sites most favourable for a coastal landing. These were Battalion and Company Fortified Areas – the type of sub-unit indicated the size of the defense site.

In the 1960s the Warsaw Pact countries built a uniform system of anti-aircraft defense. In Poland a defense line subordinate to the Navy was created along the Baltic's shores from Braniewo to Świnoujście. In January 1963 the 22nd Surface-Air Missile Squadron was formed on the Hel Peninsula.

In 1973 another surface-air missile squadron was built on the Hel Peninsula, located east of Jurata. The site was chosen to face the sea, in line with the present-day summer residence of the President of Poland.

The square at the crossroads of Wiejska and Przybyszewskiego [Commander Przybyszewski] Streets is the site of a Memorial to the Defenders of Hel. Originally it was made up of two swords set on a shared base.

Thirty-one Polish soldiers who died in 1939, including four unknown ones, were buried in the cemetery in Hel in Dworcowa Street. In 1959 the first concrete tombstone was made, distinguished by a beautiful figure of a Piast eagle rising to fly.

In May 1939 the reserve company and the heavy machine gun company of the Border Protection Corps (Pol. abbr. KOP) battalion „Sienkiewicze", as well as the reserve company of the KOP regiment „Sarny", were moved to Hel. Together the units formed a battalion denoted IV/7pp, or the 4th battalion of the Border Protection Corps „Hel", complemented with Border Guard sub-units in September 1939.

In 2010 a boulder with a memorial plaque to the 1939 military police was placed in Komandorska Street in Hel, near the crossroads with Przybyszewskiego [Commander Przybyszewski] Street and a supermarket.
<urn:uuid:8586af5f-4c2d-42e1-9f6d-a13eedf7aec6>
{ "date": "2017-01-21T13:18:56", "dump": "CC-MAIN-2017-04", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00532-ip-10-171-10-70.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9727852940559387, "score": 3.125, "token_count": 1315, "url": "http://www.kaszubypolnocne.pl/EN/okolice_helu.html" }
Education That is Multicultural and Achievement (ETMA)

The Maryland State Department of Education implements a State Regulation (COMAR 13A.04.05), expanded in 1995 and revised in 2005, that requires all local school systems to infuse Education That Is Multicultural into instruction, curriculum, staff development, instructional resources, and school climate. It also requires the Maryland State Department of Education to incorporate multicultural education into its programs, publications, and assessments.

Education That Is Multicultural is defined as "a continuous, integrated, multidisciplinary process for educating all students about diversity and commonality. Diversity factors include, but are not limited to, race, ethnicity, region, religion, gender, language, socioeconomic status, age, and individuals with disabilities. Education That Is Multicultural prepares students to live, interact, and work creatively in an interdependent global society by focusing on mutual appreciation and respect. It is a process which is complemented by community and parent involvement in support of multicultural initiatives."
<urn:uuid:3779120a-c792-4ae5-a359-16c078cd345b>
{ "date": "2013-05-21T10:08:12", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9293453693389893, "score": 3.140625, "token_count": 199, "url": "http://www.marylandpublicschools.org/MSDE/programs/etma/?WBCMODE=PresentationUnpublished%25%3E%25%3E%25%25%25%3E%25%3E%25%3E%25%3E%25%3E%25%3E%25%3E" }
Capell observes that Schubert’s flexibility in matching grammatical and musical style was ‘before him unknown in music’. It is very unusual, even in the German language with its notoriously difficult word-order, for a song lyric to begin with a clause which starts in mid-air – ‘That the east wind breathes fragrance…’. (We may observe, however, that if a clause describes the permeation of fragrance, mid-air is a very logical place for it to start.) Richard Wigmore’s translation avoids opening with a clumsy ‘That’, but the fact is that here a subordinate clause precedes the information at the heart of the meaning – ‘you have been here’. Something indefinite comes before concrete information; in the same way, an indefinable female fragrance carried by the east wind precedes the realisation that ‘she’ has recently been there in person. To put this idea to music, Schubert starts in harmonic mid-air with diminished sevenths decorated with accented passing-notes. In these, the emotions of sexual longing are squeezed and pressed like perfume atomizers. The chords are phrased away in the same delicate, sighing-swooning way that we have encountered in Geheimes (there the crotchet+quaver figure rises toward the quaver rest; here it falls). It is clear that Schubert has imagined the wind as coming from the East, as much as simply from the east: this is no merry sea breeze but something heavy with the fragrance of ‘östliche Rosen’. The opening vocal phrase (‘Dass der Ostwind’) contains an exotic diminished interval – C sharp-F; this falls to E and, after a gap of a quaver rest, to D and C sharp on ‘Düfte’. We thus have a fragmented melody, a tentative tune spiced with a flavour of the orient. The phrase ‘hauchet in die Lüfte’ (where the first syllable is elongated by the exhalation of the singer’s breath) is doubled in both hands of the accompaniment. The music feels its way as if depicting the blind, the awe-struck, those who are emotionally isolated or in the deepest thought. The straining eye or ear are relatively common in lieder, but here, uniquely in song, we have the quivering nostril. At ‘dadurch tut er kund’ the haze of harmony turning around on itself and stopping the music in its tracks seems confused, the lack of harmonic orientation a metaphor for something in the air, something not yet identified. The halting gait of the word-setting depicts the effort involved in identifying the intruder. And suddenly, oh sweet delight, the scents make sense. It is miraculous that Schubert has found a means to find a musical analogue for something as nebulous, yet emotionally engaging, as the fragrance of the beloved. Yes, of course, it is her, for she smells like no other. Now that the mystery is solved, the world of chromaticism is temporarily abandoned in favour of the diatonic lyricism of C major; the arrival on the long-awaited tonic chord at ‘gewesen’ is prepared by a bar of G7 harmony. The phrase ‘Dass du hier gewesen’, a dreamy descent followed by a rapturous rise, is repeated like a magic incantation. After that, the vocal line is complemented by the accompaniment which imitates it at a distance of two bars. The lover and the object of his love are still separated; the piano interlude stops in its tracks, interrupted by a whole bar’s rest, as if the singer has been struck by another thought; his delight in his perfumed discovery has made him forget his disturbed state of mind. The second verse is an exact musical repeat of the first. 
Now the returning diminished harmonies represent anguish; he realises that his tears can have no fragrance and that the beloved will never know that he has been there after her. There is no reciprocity in this one-way olfactory experience, and that poignant realisation is the subject of this verse. We also realise that it is possible that this lover has been abandoned, and all that remains to him of his inamorata are memories reinforced by his straining senses. If the music does not seem quite as tailor-made for the words as in the extraordinary opening strophe, it nevertheless does good expressive service. And then Schubert modifies and extends the imitative piano interlude. This does not fade away as before; instead it grows and climbs, a sequence of quavers in octaves beginning first on C, then E, then G – an eloquent crescendo in the pianist’s right hand. The beginning of the poem’s third verse is set as part of an interlude rather than as a new musical verse. Now it is the turn of the voice to imitate the piano in those eloquent descending scales, each one beginning higher than the one before as emotion piles on emotion. The entry of the voice (‘Schönheit oder Liebe’) creates between voice and piano the cosseting and caressing thirds and sixths which approximate a lover’s touch, a four-bar phrase which ends with a questioning cadence on ‘bliebe?’. In this bridge passage heartfelt longing seems to leap out of the breast, but it is held in check by the repetitive chords based on G7 which root the singer to the spot. This unchanging harmony (the expressive melodic decoration, for all its ardency, goes nowhere) perfectly paints the idea of feelings and fragrances trapped and contained in one place. But no, in answer to the singer’s tearful question, these emotions cannot be hidden for long; fragrance seeps under doors and floats out into the world, and tears too have their expressive resonance. There is a sense of release and new openness in the new seventh chords, gentler and no longer in diminished harmony, which begin the next section (beginning ‘Düfte tun es und Tränen’), a final musical verse fashioned out of the poem’s two final lines. From G7 we have slipped into C7 with its inevitable pull to F major. The extraordinarily elongated setting of ‘Düfte’ (five-and-a-half beats) sends the fragrances wafting out into the world, and at ‘Tränen’ (tears) the sighing motif of crotchet+quaver in the accompaniment is darkened into the minor – a real moment of Lachen und Weinen this – made more eloquent by an expressive vocal mordent on ‘Tränen’, the only one in the piece. In Du bist die Ruh Schubert saves up his biggest harmonic surprise for the last verse, and here too, at almost the last moment, he lifts the song into new heights by harmonic sleight of hand. When the words ‘Dass sie hier gewesen’ occur there is no comfortable return to C major. F minor leads to intimations of B flat minor, and from there E flat7 harmony now underpins the outburst of these words where, instead of beatific calm, we hear a lover’s passionate desperation. The direction of the melody is now in reverse: ‘dass sie hier’ is upwardly inflected, and ‘gewesen’ droops. For the first time we hear that past participle ‘has been’ as indicative of a love affair that was, and is no longer. But Schubert, the lover-in-song, is an eternal optimist and memories of love nourish him almost as much as the real thing, a noticeable trait of some of the songs of Winterreise which look back to past happiness. 
A tiny interlude, two bars of those sighing chords, a flat turning into a natural, and suddenly we find ourselves, as if by magic, back where we began: the same fragmented tune supported by diminished chords, and the same C major ‘Dass sie hier gewesen’ with the piano imitating the voice at a distance of two bars. This time, the echo of the melody is completed with a feminine cadence of the utmost delicacy. The use of ‘sie’ (she) rather than the more immediate ‘du’ (you) of the first strophe emphasises the valedictory nature of the poem’s ending. Schubert has somehow spun a song out of air. Not even in Winterreise do we encounter such deep expression conjured by such slender and economical means, and yet the song remains neglected. Singers who sniff at it should be encouraged to inhale deeply. Mention should also be made of another setting of this poem which is not as important as Schubert’s, but nevertheless enchanting: Meyerbeer’s Sie und Ich – also sung in French as Elle et moi.

from notes by Graham Johnson © 2000
<urn:uuid:71d152a2-0e53-4e2a-be2a-582c33b1db64>
{ "date": "2016-05-05T03:51:34", "dump": "CC-MAIN-2016-18", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860125857.44/warc/CC-MAIN-20160428161525-00072-ip-10-239-7-51.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9366214871406555, "score": 2.59375, "token_count": 2091, "url": "http://www.hyperion-records.co.uk/tw.asp?w=W1742&t=GBAJY0003505&al=CDJ33035&vw=al" }
[Haskell-beginners] randomize the order of a list
felipe.lessa at gmail.com
Sat Aug 28 10:38:49 EDT 2010

On Sat, Aug 28, 2010 at 10:49 AM, A Smith <asmith9983 at gmail.com> wrote:
> I've been recommended on good authority (my son, a Cambridge Pure maths
> graduate, and Perl/Haskell expert), and backed by a Google search, that the
> Fisher-Yates shuffle is the one to use, as it produces totally unbiased
> results with every combination equally possible.
> As with most things with computers, don't reinvent the wheel; it's almost
> certainly been done before by someone brighter than you: Fisher and Yates.

That shuffle requires O(n) time and O(1) space for arrays. Here we're
dealing with lists, so either you copy everything between lists and
arrays or use another algorithm.

More information about the Beginners mailing list
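For concreteness, here is one standard way to do what Felipe suggests: copy the list into a mutable array, run the O(n) Fisher-Yates swap loop there, and read the result back out. This sketch is an illustration (not code from the thread); it assumes the standard array and random packages, and the function names are arbitrary:

import Control.Monad (forM_)
import Data.Array.IO (IOArray, getElems, newListArray, readArray, writeArray)
import System.Random (randomRIO)

-- Shuffle a list by way of a mutable array: O(n) swaps, with O(1)
-- extra space beyond the array copy itself.
shuffleList :: [a] -> IO [a]
shuffleList xs = do
    arr <- mkArray n xs
    forM_ [n, n - 1 .. 2] $ \i -> do
        j  <- randomRIO (1, i)   -- uniform over positions 1..i
        vi <- readArray arr i
        vj <- readArray arr j
        writeArray arr i vj      -- swap: every permutation equally likely
        writeArray arr j vi
    getElems arr
  where
    n = length xs
    mkArray :: Int -> [b] -> IO (IOArray Int b)
    mkArray m = newListArray (1, m)

Running shuffleList [1..10] in GHCi returns a fresh permutation each time. A pure shuffle that avoids arrays entirely needs a different algorithm, which is exactly Felipe's point.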
<urn:uuid:eb4cfa8c-f86a-45d0-9e47-a930dd4df306>
{ "date": "2016-12-09T18:54:01", "dump": "CC-MAIN-2016-50", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542714.38/warc/CC-MAIN-20161202170902-00144-ip-10-31-129-80.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8561995625495911, "score": 2.71875, "token_count": 217, "url": "https://mail.haskell.org/pipermail/beginners/2010-August/005147.html" }
Acid reflux german

Gastroesophageal reflux (GER) happens when your stomach contents come back up into your esophagus. Gastroesophageal refers to the stomach and esophagus, and reflux means to flow back. Acid reflux is thus a condition in which acid backs up from the stomach into the esophagus and even up to the throat, irritating the tissue; symptoms occur when acid and chyme move backward from the stomach into the esophagus.

Herbal teas are a good choice for acid reflux because they can improve digestion and soothe many stomach problems, such as gas and nausea. German chamomile, angelica root, caraway, milk thistle, lemon balm, and celandine are among the herbs traditionally used for this purpose.
<urn:uuid:b0f42afa-e581-4130-b825-3470b0523896>
{ "date": "2018-11-17T21:48:00", "dump": "CC-MAIN-2018-47", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743854.48/warc/CC-MAIN-20181117205946-20181117231946-00256.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8749088644981384, "score": 2.703125, "token_count": 625, "url": "http://dadupoker.gq/qivu/acid-reflux-german-heb.php" }
1. Stay mentally active
Just like going to the gym to keep your body in shape, your brain needs activity to keep it in tip-top condition! Try crossword puzzles or sudoku. Read a section of the newspaper that you normally skip. Take alternate routes when driving. Learn to play a musical instrument or to speak a new language!

2. Socialize regularly
Social interaction helps ward off depression and stress, both of which can contribute to memory loss. Get together with loved ones, friends and others — especially if you live alone.

3. Get organized
You're more likely to forget things if your home is cluttered! Start using a special notebook, calendar or electronic planner. It might also help to connect what you're trying to remember to a favorite song or another familiar concept.

4. Sleep well
Sleep is when your body recharges all of your batteries, including your brain. Be sure to sleep an appropriate amount of time (try to grab 7 hours a night at least) to make sure your brain is getting the full benefits.

5. Eat a healthy diet
A healthy diet might be as good for your brain as it is for the rest of your body. Eat fruits, vegetables and whole grains. Choose low-fat protein sources, such as fish, lean meat and skinless poultry. What you drink counts, too. Not enough water or too much alcohol can lead to memory loss.

6. Include physical activity in your daily routine
Along with that brain activity, get the blood pumping with some cardio! Aim for at least 30 minutes a day of exercise that gets your heart pumping!
<urn:uuid:4a842ce0-3639-4f31-ad82-50be9bdc5126>
{ "date": "2017-06-25T10:02:17", "dump": "CC-MAIN-2017-26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320476.39/warc/CC-MAIN-20170625083108-20170625103108-00137.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9449462294578552, "score": 2.9375, "token_count": 331, "url": "https://www.vingle.net/posts/764945-Simple-Surprising-Ways-to-Improve-Your-Memory" }
The Impact of Chaos on Science and Society (UNU, 1997, 415 pages)

4. The impact of chaos on mathematics

The mathematical studies of Smale have shown that the orbit of a dynamical system is in some cases asymptotic to a complicated set called an Axiom A attractor. The behaviour of the system is then chaotic and, since Axiom A attractors can be analysed in great detail, they provide very important examples of chaos. Another example of chaos was obtained in an early computer study by Lorenz when he analysed a (rather brutally) simplified model of convection described by the following equations:

dx/dt = s(y - x)
dy/dt = rx - y - xz
dz/dt = xy - bz

with s = 10, b = 8/3, r = 28. The Lorenz attractor is chaotic, but different from the Axiom A attractors of Smale in that it contains an (unstable) fixed point for the time evolution. Interestingly we have, at this time, no mathematical proof that the solutions of the Lorenz system behave (chaotically) as we think they do. We have however a model inspired by the equations, and called the geometric Lorenz attractor, for which a detailed mathematical study has been given by Guckenheimer and Williams. It is known in particular that the geometric Lorenz attractor has some properties of persistence when the equations are slightly changed (technically, what is proved is co-dimension 2 structural stability).
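These equations are easy to explore numerically, in the spirit of Lorenz's original computer experiment. The sketch below is an illustration (not taken from the book): it integrates the system with a crude forward-Euler step and prints two trajectories started from nearly identical initial conditions, whose rapid separation is the simplest way to see the sensitive dependence on initial conditions that makes the system chaotic.

-- Forward-Euler step for the Lorenz system, with s, b, r as in the text.
lorenzStep :: Double -> (Double, Double, Double) -> (Double, Double, Double)
lorenzStep dt (x, y, z) =
    ( x + dt * s * (y - x)
    , y + dt * (r * x - y - x * z)
    , z + dt * (x * y - b * z) )
  where
    s = 10
    b = 8 / 3
    r = 28

-- An infinite trajectory from a given initial condition.
trajectory :: (Double, Double, Double) -> [(Double, Double, Double)]
trajectory = iterate (lorenzStep 0.01)

-- Pair up the states of two trajectories whose starting points differ
-- by one part in a million; the pairs soon stop resembling each other.
main :: IO ()
main = mapM_ print (take 2000 (zip (trajectory (1, 1, 1))
                                   (trajectory (1, 1, 1 + 1e-6))))

Forward Euler is too crude for serious work (a Runge-Kutta scheme would be the usual choice), but it is enough to exhibit the attractor's qualitative behaviour.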
<urn:uuid:890b0b92-3957-4643-b715-e48c7d7bd546>
{ "date": "2015-07-29T22:03:58", "dump": "CC-MAIN-2015-32", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986646.29/warc/CC-MAIN-20150728002306-00196-ip-10-236-191-2.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9502877593040466, "score": 3.28125, "token_count": 289, "url": "http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0envl--00-0----0-10-0---0---0direct-10---4-------0-1l--11-en-50---20-about---00-0-1-00-0--4----0-0-11-10-0utfZz-8-10&a=d&c=envl&cl=CL2.1.1&d=HASHe117ec92329f5eaea7ac0f.6.4" }
This large-scale carbon dioxide (CO2) storage project, located in Michigan and nearby states in the northern United States, will, over its 4-year duration, inject a total of one million tonnes of CO2 into different types of oil and gas fields in various lifecycle stages. The project will include collection of fluid chemistry data to better understand geochemical interactions, development of conceptual geologic models for this type of CO2 storage, and a detailed accounting of the CO2 injected and recycled. Project objectives are to assess storage capacities of these oil and gas fields, validate static and numerical models, identify cost-effective monitoring techniques, and develop system-wide information for further understanding of similar geologic formations. Results obtained during this project are expected to provide a foundation for validating that carbon capture and storage technologies can be commercially deployed in the northern United States. Recognized by the Carbon Sequestration Leadership Forum (CSLF) at its Washington meeting in November 2013. Links to more information:
<urn:uuid:93e814da-b62d-4187-86e1-716c0a0ed937>
{ "date": "2019-11-19T02:38:03", "dump": "CC-MAIN-2019-47", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669967.80/warc/CC-MAIN-20191119015704-20191119043704-00336.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9361873865127563, "score": 2.765625, "token_count": 203, "url": "https://www.cslforum.org/cslf/Projects/MichiganBasin" }
What is negotiated rulemaking?
How are the issues to be negotiated determined?
How are negotiators selected?
Is negotiated rulemaking open to the public?
How is the negotiated rulemaking process structured and what is the time commitment for a negotiator?
How is consensus defined for purposes of negotiated rulemaking?
What happens after the negotiations have concluded?

Typically, the Department of Education (the Department) develops its proposed regulations without public input and then publishes them in the Federal Register for comment by the public. The published document is known as a Notice of Proposed Rulemaking, or NPRM. Under negotiated rulemaking, the Department works to develop an NPRM in collaboration with representatives of the parties who will be affected significantly by the regulations. This is done through a series of meetings during which these representatives, referred to as negotiators, work with the Department to come to consensus on the Department’s proposed regulations. These meetings are facilitated by a neutral third party. The Department is specifically required by law to use negotiated rulemaking to develop NPRMs for programs authorized under Title IV of the Higher Education Act of 1965, as amended (Title IV programs), unless the Secretary determines that doing so is impracticable, unnecessary, or contrary to the public interest. The Department generally follows these same procedures when it uses negotiated rulemaking to develop NPRMs for programs other than the Title IV programs.

The issues to be negotiated come from three sources: newly enacted laws, the Department, and the public. Because negotiated rulemaking is required for development of NPRMs for the Title IV programs, newly enacted statutory provisions for which Title IV regulations are needed are automatically included on an agenda for negotiated rulemaking (unless the Secretary determines that negotiated rulemaking is impracticable, unnecessary, or contrary to the public interest; see Q1&A1). Other issues for negotiated rulemaking are identified by the Department when it makes a determination that existing regulations need to be amended. Once the Department determines that rulemaking is necessary, it publishes a Notice in the Federal Register announcing its intent to conduct negotiated rulemaking and identifying the areas in which it intends to develop or amend regulations. This Notice announces a public meeting (or meetings) to obtain advice and recommendations on the issues to be negotiated from the public. The Department may also solicit written submissions of advice and recommendations. After consideration of this public input, the Department develops a list of the issues that a negotiating committee (or committees) is likely to address, and publishes the list in another Notice in the Federal Register. When the negotiating committee first meets, members may suggest additional issues that may be added to the agenda, subject to the full committee’s approval.

Negotiators are nominated by the public and selected by the Department. In the same Federal Register Notice that announces the Department’s intent to conduct negotiated rulemaking, or a subsequent Notice, the Department solicits nominations for negotiators to represent the constituencies who will be significantly affected by the regulations. The Department identifies in the Notice the constituencies it believes will be significantly affected. 
This may include, but is not limited to, students, legal assistance organizations that represent students, institutions of higher education, state student grant agencies, guaranty agencies, lenders, secondary markets, loan servicers, guaranty agency servicers, collection agencies, state agencies, and accrediting agencies. The Department welcomes nominations for representatives of other constituencies who are thought to be significantly affected. Before nominating an individual to participate as a negotiator, the nominator should confirm that the potential nominee can and will make the necessary time commitment to the process (see Q5&A5). The Department selects negotiators for a committee from the list of nominees with the goal of providing adequate representation for the affected parties while keeping the size of the committee manageable. By law, a federal agency must limit membership on a negotiated rulemaking committee to 25 members unless the agency head determines that a greater number of members is necessary for the functioning of the committee, or to achieve balanced membership. Typically, the Department convenes committees of 12 to 15 negotiators, as well as an alternate for each negotiator to ease attendance concerns for negotiations consisting of multiple sessions. Each committee includes at least one Department representative. Once members of a committee have been confirmed, the Department publishes another Notice in the Federal Register announcing the committee and its membership. The committee may also add members at the committee meetings, subject to the full committee’s approval. Individuals who are not selected as negotiators but who will be affected by the regulations may still participate in the proceedings in several ways, such as having access to the individuals representing their constituency to express their views, and participating in informal working groups on issues between meetings. For more on the public’s role in the negotiated rulemaking process, see Q4&A4. Of course, individuals who are not selected as negotiators--like any other member of the public--can always submit comments in response to the published NPRM. Members of the public may observe meetings of the negotiating committee, but cannot speak unless recognized by the committee. Typically, at the end of each day’s meeting, the committee provides an opportunity for the public to comment. Caucuses (i.e., meetings of smaller groups of negotiators) are open to the public at the discretion of the negotiating committee. Printed materials used by the negotiators are available to the public on the Department’s negotiated rulemaking Web site. The address for this Web site is announced in the Federal Register Notice announcing the members of the committee. The committee protocols usually address how the negotiators may interact with the media. A negotiating committee usually meets for three sessions at roughly monthly intervals. Each session usually lasts three days. The number of sessions, meetings in a session, length of the session meetings, and the time between sessions may vary depending on the issues being negotiated. The first order of business for a negotiating committee is to finalize the agenda and protocols, which are agreed upon by consensus of the committee. Once the agenda and protocols are finalized and agreed upon, the committee begins its negotiations of the issues on the agenda. 
During the time between sessions, the Department drafts and amends the proposed regulatory language based on committee discussions and on any tentative agreements reached on the issues. The Department provides this draft regulatory language to the negotiators prior to the subsequent session. Subcommittees formed by the negotiators may meet during this time to work on specific issues; the subcommittees bring the results of their discussions to the full committee when it reconvenes. Again, a nominator should confirm that a potential nominee can and will make the necessary time commitment to the process before nominating an individual to participate.

Q6. How is consensus defined for purposes of negotiated rulemaking?

A6. Consensus means that there is no dissent by any member of the negotiating committee; thus, no member can be outvoted. The absence or silence of a member at the time the final consensus vote is taken is equivalent to not dissenting. All agreements reached during the negotiations are treated as tentative until the members of the committee have considered all of the issues included on the agenda and voted on the entire proposed regulatory language at the end of the final session of the negotiated rulemaking. If final consensus is achieved, committee members may not withdraw their consensus, and the Department will use this consensus-based regulatory language in its NPRM. Only under very limited circumstances may the Department depart from this language.

Q7. What happens after the negotiations have concluded?

A7. If consensus is achieved, the Department uses that regulatory language in its NPRM. If consensus is not achieved, the Department determines whether to proceed with regulations. If the Department decides to proceed, it may use regulatory language developed during the negotiations as the basis for its NPRM, or develop new regulatory language for all or a portion of its NPRM.

Once the proposed regulatory language for the NPRM is finalized, the Department drafts the preamble language (the portion of the NPRM that explains the proposed regulatory text). If consensus was reached, the Department usually shares the preamble language with the negotiators, who may review it for accuracy. Although the preamble language is not negotiated, the Department may agree during the negotiations to include in the preamble explanations of certain issues. If the committee did not reach consensus, the preamble language is not shared with the negotiators.

When the NPRM is published in the Federal Register, it contains a request for public comments and a deadline for submitting those comments. If consensus was reached, negotiators and those persons and entities whom they represent may not comment negatively on the consensus-based regulatory language. The Department considers the comments received by the close of the comment period in developing final regulations. The final regulations published in the Federal Register contain the regulations with which affected parties must comply and the date by which they must do so. The preamble of the final regulations includes a summary of the comments received, the Department's response to the comments, and an explanation of any changes made to the regulations that differ from the proposed regulations.
WEST LAFAYETTE, Ind. - Even large amounts of manufactured nanoparticles, also known as Buckyballs, don't faze microscopic organisms that are charged with cleaning up the environment, according to Purdue University researchers.

In the first published study to examine Buckyball toxicity on microbes that break down organic substances in wastewater, the scientists used an amount of the nanoparticles on the microbes that was equivalent to pouring 10 pounds of talcum powder on a person. Because high amounts of even normally safe compounds, such as talcum powder, can be toxic, the microbes' resiliency to high Buckyball levels was an important finding, the Purdue investigators said.

The experiment on Buckyballs, which are carbon molecules C60, also led the scientists to develop a better method to determine the impact of nanoparticles on the microbial community.

"It's important to look at the entire microbial community when nanomaterials are introduced because the microbes are all interdependent for survival and growth," said Leila Nyberg, a doctoral student in the School of Civil Engineering and the study's lead author. "If we see a minor change in these microorganisms it could negatively impact ecosystems."

The microbes used in the study live without oxygen and also exist in subsurface soil and the stomachs of ruminant animals, such as cows and goats, where they aid digestion.

"We found no effect by any amount of C60 on the structure or the function of the microbial community over a short time," Nyberg said. "Based on what we know about the properties of C60, this is a realistic model of what would happen if high concentrations of nanoparticles were released into the environment."

The third naturally occurring pure carbon molecule known, Buckyballs are nano-sized, multiple-sided structures that look like soccer balls. Nyberg worked with her colleagues Ron Turco and Larry Nies, professors of agronomy and civil engineering.

Contact: Susan A. Steeves
|The Bible and Race| |Kenneth McKilliam has a lesson or two for today’s clergy| The clergy of the churches in Britain have no knowledge of science and little knowledge of their Bibles. There are differences between races. Professor Wesley C. George, Emeritus Professor of Histology and Embryology, formerly head of the Department of Anatomy, North Carolina Medical School, wrote:– Professor R. Ruggles Gates wrote in his Human Ancestry, published by Harvard University Press in 1948: “The primary so-called races of living men have arisen independently from different ancestral groups and species in different continents at different times.” Race is a matter of genes and not environment. It is known that there were races of man-like creatures in existence before the creation of Adam and Eve and the Rev. P. E. K. Victor Pearce, an anthropologist, in his book Who was Adam? has shown that these pre-Adamites were the Old Stone Age Men, the hunters and gatherers who are still in the world today. These findings are in accord with ancient scriptures for in Genesis 1.24-25 we read:– “And God said, let the earth bring forth the living creature (Chay Eretz, Hebrew, ‘living creature of the earth’) after his kind, the cattle and creeping things and the beasts of the earth after their kind.” Note the masculine gender in the passage above. The word translated “man” is Awdawm meaning one who shows a rosy blush in the face. There is only one race that can show a rosy blush in the face and that is the white race. This Awdawm was told to be fruitful and multiply and was made God’s representative figure (Tselem) to have control over God’s creation as God’s deputy (Psalm 8 and Genesis 1.26-28). He was made a living soul and all animals were brought before him to see what he would name them – “But for Adam there was not found a help mate for him” – so God made the woman Eve from the body of Adam (the word translated rib is tsalah, literally ‘curved of the body’). “And Adam said, this is now bone of my bones and flesh of my flesh... therefore shall a man leave his father and his mother and shall cleave unto his wife: and they shall be one flesh.” The tree in biblical language has a racial context. The Olive Tree stands for Israel (Romans 11), the Fig Tree stands for the Jews (Matthew 21.19, 45), the Assyrians are Cedars (Ezekiel 31). The Lord Jesus Christ refers to trees when giving us an indication of his second coming in Luke 21.29, applicable to the United Nations:– “And He spake unto them a parable; Behold the Fig Tree and all the trees; when they now shoot forth. ye see and know of your own selves that summer is nigh at hand. So likewise ye, when ye see these things come to pass, know ye that the Kingdom of God is nigh at hand.” Adam and Eve were given the Law; they were forbidden to eat of the Tree of Knowledge of Good and Evil. In Genesis 3.1 we read:– “Now the serpent was more subtil than any beast of the field (Chay Eretz).” The word translated “serpent” is Nacrash meaning an enchanter, a wizard, this is the personal name of this living creature. This creature could reason, dispute, speak and walk upright. He persuaded the woman to eat of the fruit of the forbidden tree. Cain, the son of Adam and Eve, formed a relationship with a Chay Eretz, a pre-Adamite female (Genesis 4.7): “If thou doest not well, sin lieth at the door. And unto thee shall be his desire, and thou shalt rule over him.” The word Chay is a masculine word (Genesis 1.25); the Chay were made after his kind. 
In Genesis 2.16 we read:– “Thy desire shall be to thy husband and he shall rule over thee.” Therefore the desire of Cain’s consort would be to Cain and he would rule over her. The Apostle Jude refers to this in verses 7 and 11:– “Woe unto them for they have gone in the way of Cain... giving themselves over to fornication and going after strange flesh.” Cain was afraid that he would be killed: who was going to kill him? The Chay Eretz. Cain married his consort and built a city; built a city for whom? And with whose help? The Chay Eretz, the pre-Adamite people. God made the different races for His own purpose and glory, He did not intend them to interbreed. On the plains of Kenya there are the Grant’s Gazelle and the Thompson’s Gazelle; they are almost the same size and look alike except for differences in colouring and marking; they never interbreed. The Adamites corrupted their flesh by interbreeding and brought upon themselves the Great Flood. “The Sons of God (see Luke 3.38) saw the daughters of Men (descendants of Cain) that they were fair and took to themselves all they chose... there were giants in the world in those days... the Sons of God came in unto the daughters of men and they bare children to them, the same became mighty men.” The word “giant” is Nephil which can mean a tyrant and a bully, the word translated “mighty” is Gibbowr meaning a warrior and a tyrant. The Adamites saw the hybrid daughters of Cain that they were fair and copulated with them and their children were tyrants and bullies. And “The Earth was corrupt before God and the earth was filled with violence. And God looked upon the earth and behold it was corrupt: for all flesh had corrupted his ways on the earth.” The flesh of the Adamites had become mingled with the flesh of the Chay Eretz: the mixture of Adamite and Chay flesh in the offspring was corrupt flesh. This is referred to by the Lord Jesus Christ in Matthew 24.37-39:– “But as in the days of Noe, so also shall the coming of the Son of Man be, for as in the days before the flood they were eating and drinking, marrying and giving in marriage until the day that Noe entered into the ark, and knew not until the flood came and took them all away, so also shall the coming of the Son of Man be.” But “Noah was a just man and perfect in his generations” (Genesis 6.9). He was of pure Adamite stock as were the other members of his family with him. They were saved in the ark together with full blooded members of the Chay Eretz. In Genesis 9.20-29 we see that Noah cursed Canaan the younger son of Ham for a sin that Ham committed, making Canaan and his people after him servants to their brethren. Why did he not curse Ham who was the culprit? Because Ham had copulated with a Chay Eretz female saved in the ark and Canaan and his breed were hybrids. All through the Old Testament the Israelites were warned not to interbreed with the people about them; the Canaanites were a mongrel breed of degenerates whom the Israelites were instructed to destroy. Sodom and Gomorrah were degenerates and were destroyed. Archaeologists have found evidence of syphilis in the bones found at the site of Jericho and Joshua destroyed Jericho. Phinehas was praised because he drove his javelin through the bodies of the Israelite prince Zimri and the Midianite woman Cozbi found copulating (Numbers 25.1 and 10-13). Balaam was slain for advocating the racial integration of Israel with the Canaanites and this is referred to in 2 Peter 2.15 and Jude 11. 
We read in Deuteronomy 23.2:– “A mamzer (hybrid) shall not enter into the congregation of the Lord; even unto his tenth generation shall he not enter into the congregation of the Lord.” In Jeremiah 16.17-18 we read: “And first I will recompense their iniquity and their sin double; because they have defiled my land; they have filled my inheritance (Israel: Jeremiah 10.16; 51.19, Isaiah 19.25) with the carcasses of their detestable and abominable things (hybrids).” In Ezra 9.2 we read:– “For they have taken their daughters for themselves and for their sons: so that the Holy Seed (of Adam) have mingled themselves with the people of those lands.” To show the lack of biblical knowledge of even Bishops and Archbishops of the established church, they quote Leviticus 19.33-34 in support of multi-racialism in Britain:– “If a stranger lives among you in your land do not molest him. You must count him as one of your own countrymen and love him as yourselves for you were strangers in Egypt. I am the Lord your God.” The word translated “stranger” in this passage is Ger. These people were of the same racial stock as the Israelites. They were Hebrews. To become citizens they had to circumcise their males, keep the Feast of the Passover and obey all the laws of Israel. Moreover this text refers to a stranger in the singular, and cannot be interpreted to justify the invasion of one’s native land by way of mass immigration by aliens. There are three other words translated “stranger” in the Old Testament:– The Towshab, foreign immigrant settlers whose status was inferior to the Israelites and the Ger; the Zuwr, mixed breeds who were completely outside the religious life of Israel and could not marry an Israelite; the Nokriy who had no blood relationship with the Israelites and who were treated as aliens and foreigners. “A Nokriy shall not come into the congregation of the Lord for ever” (Nehemiah 13.1 and Deuteronomy 23.2-3). No stranger was ever to be set above an Israelite (Deuteronomy 25.5). The following is from a letter from a West Indian lady of mixed blood:– International Financiers, political Zionists, International Communists and other aliens together with the fools they have indoctrinated are attempting to force the white race to integrate with the negroes and asiatics who have been brought in for the purpose so that the Master Race shall have control over a society of mixed breeds. Kenneth McKilliam was a prominent member of the British Israel movement.
There is no perfect energy source. Each and every one has its own advantages and compromises. This series will explore the pros and cons of various energy sources. Learn about other forms of energy generation here.

Let's face it, coal is nasty stuff. It contaminates everything it comes in contact with and creates problems at every step of its life cycle: from unhealthy and unsafe underground mines, to the environmental catastrophe of mountaintop removal, to the problems associated with handling the enormous piles of ash that are produced every day. But by far, the biggest problem is the enormous amount of carbon dioxide emitted. According to the EPA, coal contributes 31 percent of all CO2, the largest of any source.

The people who still support coal basically have one argument: that it's a necessary evil, being the only source of energy within reach that is sufficiently abundant to keep up with our enormous and ever-growing appetite for energy. We have so much coal, they reason, and we need so much energy, how could we not take advantage of this resource?

They could be right, as much as those of us who care about the environment hate to admit it. As much as we would like to believe that conservation, efficiency and renewables will meet our growing, but maybe-not-growing-quite-so-quickly demand, there is certainly no guarantee that they will. Considering that coal accounts for 40 percent of all electric generation (down from 45 percent) and 21 percent of all energy in the US, that's a lot of energy to replace. Of course, with falling natural gas prices, that is clearly picking up a lot of the slack. Meanwhile, renewables accounted for just over 10 percent of electric power in 2010, and most of that was from existing hydropower.

If that's not bad enough, coal powers 70 percent of China's electric grid, which is growing far faster than ours and shows no sign of slowing down. In fact, the only thing keeping them from increasing coal generation even faster is their limited ability to physically move the stuff. Together, the US and China are responsible for 33 percent of global greenhouse gas emissions.

The other thing about coal is, of course, that it's cheap, usually cheaper by far than other energy sources, largely because so many of its true costs are still being externalized. It is worth noting that wind, at 5-6 cents per kWh, is closing the gap.

Given the reality of climate change, any talk of coal must be clean coal, an approach which enables the utilization of our most abundant domestic energy resource so that at least the impact on the climate is minimized. (To put this in perspective, note that the total amount of energy we received from coal in 2010 is equal to the amount of sunshine, over the same period, hitting just 460 square miles. If we adjust for the low efficiency of solar PV (17 percent at the low end), then that number goes up to 2,706 square miles, well below 0.1 percent of the land area of the US, though we are nowhere close to capturing all of that any time soon.)

Clean coal has a number of variations, but all of them involve stripping the CO2 out of the coal, either before or after it is burned, and then capturing it. It is then either utilized for industrial purposes or for enhanced oil recovery, or else it is pressurized into a liquid form that can be injected underground, where it supposedly will stay indefinitely, in a process called carbon sequestration. The overall process is called carbon capture and storage (CCS).
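The land-area figures in the parenthetical above are easy to sanity-check. Here is the arithmetic in Python; the US land-area constant is an assumption added here for the percentage check (roughly 3.5 million square miles), not a number from the article.

```python
# Sanity check of the solar land-area comparison above (illustrative only).

sunshine_area_sq_mi = 460   # area whose sunshine equals 2010 US coal energy
pv_efficiency = 0.17        # low-end solar PV efficiency cited above

# Area needed once PV conversion losses are applied.
pv_area_sq_mi = sunshine_area_sq_mi / pv_efficiency
print(f"PV area needed: {pv_area_sq_mi:,.0f} sq mi")   # ~2,706 sq mi

US_LAND_AREA_SQ_MI = 3.5e6  # assumed approximate figure, not from the article
share = pv_area_sq_mi / US_LAND_AREA_SQ_MI
print(f"Share of US land area: {share:.2%}")           # ~0.08%, below 0.1%
```

On those numbers, 460 / 0.17 does come out to about 2,706 square miles, which is indeed under a tenth of a percent of US land area.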
No sequestration project, existing or proposed, removes all the CO2 from the exhaust, because of the high energy penalty for doing so (30 percent or more). Most of them bring the CO2 level down to that of natural gas. Canada has already banned the development of any new coal generation project that does not include CCS.

No doubt the least destructive form of clean coal is underground coal gasification (UCG). This is where the coal is left in the ground and converted to gas by chemical means, then sucked up to the surface where it is burned. Most of these projects include capturing the CO2 and then sequestering it as described above. Pilot plants have been run in China, and the Swan Hills plant is supposed to come online this year in Alberta, Canada. In the US, the Texas Clean Energy Project outside Odessa, which received $450 million in DOE funding, will apply UCG, capturing 90 percent of the CO2 and then using that CO2 for enhanced oil recovery in the nearby Permian Oil Basin. This approach eliminates most problems associated with coal mining, transportation and burning, leaving only the problems associated with sequestration and gas extraction to be grappled with.

With that background, here are the pros and cons of clean coal.

Pros:
- Abundant supply, concentrated in industrialized countries (US, Russia, China, India).
- Relatively inexpensive.
- Continuous power. Good utilization. High load factor.
- Substantial existing infrastructure. Mature industry.
- Can be made low carbon and clean with CCS and various scrubbers.
- Can be converted to a liquid or a gas, which burn cleaner.
- Clean coal technology is currently being used in China.
- Relatively low capital investment (compared to gas or nuclear).

Cons:
- Coal is nonrenewable. There is a finite supply.
- Coal contains the most CO2 per BTU, the largest contributor to global warming.
- Severe environmental, social and health and safety impacts of coal mining.
- Devastation of the environment around coal mines.
- High cost of transporting coal to centralized power plants.
- Coal ash is a hazard and a disposal problem.
- Coal mining is the second highest emitter of methane, a potent greenhouse gas.
- High levels of radiation. Coal plants release more radiation than nuclear plants.
- Coal burning releases SOx and NOx, which both cause acid rain.
- Burning coal emits mercury and other heavy metals that pose major health risks.
- Coal emissions are linked to increased rates of asthma and lung cancer.
- Sequestration is new, expensive, and its ability to hold CO2 for long periods of time is unproven. Risk of accidental releases of large quantities of CO2.
- Clean coal is not carbon free.
- Significant energy penalties are incurred for sequestration.
- CO2 is toxic at concentrations above 5 percent. The condition is called hypercapnia.

The true costs of coal are not included in what is paid today; coal would not be competitive if environmental costs were included. When the costs of mitigating these impacts through CCS and UCG are factored in, it will not be competitive against renewables. However, we might still need to use it in some localities to meet our ever-growing demand. But with natural gas coming in just as cheap, and with the same level of GHG as clean coal, it's not at all clear that these investments are justified. But there's no reason I can think of that the same capture and storage technologies that were developed for coal couldn't be used in natural gas plants to bring them down to zero carbon.

What about other energy sources?
- Pros and Cons of Wind Power
- Pros and Cons of Fusion Power
- Pros and Cons of Tar Sands Oil
- Pros and Cons of Solar Heating and Cooling
- Pros and Cons of Concentrating Solar Power
- Pros and Cons of Solar Photovoltaics
- Pros and Cons of Natural Gas
- Pros and Cons of Fuel Cell Energy
- Pros and Cons of Biomass Energy
- Pros and Cons of Combined Heat and Power
- Pros and Cons of Clean Coal
- Pros and Cons of Algae-Based Biofuel
- Pros and Cons of Liquid Fluoride Thorium Power
- Pros and Cons of Tidal Power
- Pros and Cons of Nuclear Energy

[Image credit: Marc Wathieu: Flickr Creative Commons]

RP Siegel, PE, is the President of Rain Mountain LLC. He is also the co-author of the eco-thriller Vapor Trails, the first in a series covering the human side of various sustainability issues of energy (including clean coal), food, and water. Now available on Kindle. Follow RP Siegel on Twitter.
Palmerston, Henry John Temple, 3d Viscount, 1784–1865, British statesman. His viscountcy, to which he succeeded in 1802, was in the Irish peerage and therefore did not prevent him from entering the House of Commons in 1807. Initially a Tory, he served (1809–28) as secretary of war, but he differed with his party over his advocacy of parliamentary reform and joined (1830) the Whig government of the 2d Earl Grey as foreign minister.

A firm believer in liberal constitutionalism, Palmerston was instrumental in securing the independence of Belgium (1830–31), and in 1834 he formed a quadruple alliance with France, Spain, and Portugal to help the Iberian countries put down rebellions aimed at restoring absolutist rule. He also organized the joint intervention with Russia, Austria, Prussia, and a reluctant France to prevent the disintegration of the Ottoman Empire as a result of the revolt of Muhammad Ali of Egypt (1839–41).

He was in opposition during Sir Robert Peel's administration (1841–46) but returned to the foreign office under Lord John Russell. Palmerston was an impulsive man who often acted without consultation; during his second period as foreign secretary he succeeded in offending not only foreign powers but also his colleagues and Queen Victoria. He quarreled with France in the affair of the Spanish Marriages (1846; see Isabella II), gave encouragement to the European revolutionaries of 1848, and in 1850 caused widespread outrage by blockading Greece in order to secure compensation for Don Pacifico, a Portuguese merchant claiming British citizenship, whose house in Athens had been destroyed in a riot. Finally his unofficial and unauthorized approval of the coup in France by Napoleon III led to his dismissal in 1851.

Nevertheless he became home secretary in 1852 and in 1855 succeeded the 4th earl of Aberdeen as prime minister. His vigorous prosecution of the Crimean War increased his already great popularity, as did the effective suppression of the Indian Mutiny, and although he lost office in 1858, he returned to power in 1859 and remained prime minister until his death. His attitude greatly facilitated the progress of the Italian Risorgimento and the proclamation (1861) of the kingdom of Italy, but his attempt (1864) to help the Danes in the Schleswig-Holstein question was unsuccessful. He maintained British neutrality in the American Civil War, despite his sympathy for the South and despite the irritating Trent Affair. Palmerston was not much interested in internal affairs, but he did firmly oppose further parliamentary reform. His diplomacy, reckless and domineering though it frequently was, usually served to advance British prestige.

See biographies by H. Lytton Bulwer and E. Ashley (5 vol., 1870–76), D. Southgate (1966), J. G. Ridley (1970), K. Bourne (Vol. 1, 1982); study by C. K. Webster (2 vol., 1951; repr. 1969).

The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
In its first 25 days of operations, the newly reactivated NEOWISE mission has detected 857 minor bodies in our solar system, including 22 near-Earth objects (NEOs) and four comets. Three of the NEOs are new discoveries; all three are hundreds of meters in diameter and as dark as coal.

The mission has just passed its post-restart survey readiness review, and the project has verified that its ability to measure asteroid positions and brightness is as good as it was before the spacecraft entered hibernation in early 2011.

At the present rate, NEOWISE is observing and characterizing approximately one NEO per day, giving astronomers a much better idea of the objects' sizes and compositions. Out of the more than 10,500 NEOs that have been discovered to date, only about 10 percent have had any physical measurements made of them; the reactivated NEOWISE will more than double that number.

JPL manages the NEOWISE mission for NASA.

More information on NEOWISE is online at: http://www.jpl.nasa.gov/wise/
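The rate and the "more than double" claims above can be sanity-checked with simple arithmetic. The sketch below is purely illustrative; the assumption that the detection rate stays constant for the rest of the survey is mine, not the mission's.

```python
# Back-of-envelope check of the NEOWISE detection-rate claims (illustrative).

neos_detected = 22
days_elapsed = 25
rate_per_day = neos_detected / days_elapsed
print(f"NEO rate: {rate_per_day:.2f} per day")   # ~0.88, roughly one per day

known_neos = 10_500
already_characterized = 0.10 * known_neos        # the ~10 percent cited above

# Days needed to characterize as many NEOs again, assuming the rate holds.
days_to_double = already_characterized / rate_per_day
print(f"Days to double the characterized count: "
      f"{days_to_double:.0f} (~{days_to_double / 365:.1f} years)")
```

On those assumptions, doubling the number of physically measured NEOs would take on the order of three years of continuous surveying.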
The following post is from Iris Yuan, an Education Consultant at Tutorspree.com, a marketplace for high-quality tutors across the country. Tutors at Tutorspree.com are highly-educated, experienced people who love what they're doing. For more information, follow @Tutorspree on Twitter or e-mail [email protected].

Helping children have fun does not mean they can't be engaged, participating, and learning about the world around them. Below, we share tips and quotes from experienced tutors who've worked with children over the summer.

Juliette, a Spanish tutor in New York, says cooking is a great way to both learn and have fun. "Stash your children in the kitchen. Make up some at-home cooking projects. There are many cookbooks out there that have recipes appropriate for children to help with and suited to their tastes as well. Not only does cooking teach a life-long skill, it teaches children how to follow directions, be patient, organized, and clean up after themselves. It also makes children feel great to see that they can create something delicious! Furthermore, if children ever express being dissatisfied with the meals you prepare them, you can remind them about all that goes into creating a meal for a family. In order to make this type of project into a full day's activity, first let your children make a list of necessary ingredients for the chosen recipe, then go to the market together with the children, and have them help you collect the groceries. This may even be a good opportunity to teach about prices and how to select what's best."

Another tip for getting young children interested in learning is to take library and museum trips together. Many museums have kid-friendly areas with interactive activities. Your child may naturally be drawn to a certain area or subject, which you can build on later in the summer. Meanwhile, most libraries hold story times that are age-appropriate. When you're at the library, be sure to show interest in the books yourself. Find a corner for quiet reading time and read to them, but also read to yourself, so that your child can learn by example.

Suzie, an experienced English tutor on the East Coast, tells us that "reading is easy. It's portable. And maybe best of all, it's subtle, sneaky learning. You learn while you aren't even aware of it. Not only can it be a diversion on the beach, an alternative to 'Boring! Not that again!?' TV, or a mental vacation on a hot afternoon, but reading also exposes new vocabulary, offers a variety of sentence structures, and painlessly proffers a proliferation of punctuation. All this without tests, worksheets, or quizzes."

Finally, if learning school-related material is what you're looking for, try in-home tutoring and teach some material yourself (but keep it fun!). Aaron, a past Teach for America corps member, has been teaching for over ten years. He suggests that a great way to help children learn better is by using "positive sandwiches" when giving criticism. This means giving praise first before mentioning areas of improvement, and following up with another positive comment. "When feedback is 'sandwiched' between positive comments, problematic reactions are less likely," says Aaron. "Learning doesn't mean you can't have fun. I also use funky colored pens or paper, stickers, jokes, and laughter in my lessons."

Summertime alternatives to TV and video games are vast and many.
Taking children out on trips, such as those mentioned above, and livening up the household with cooking and reading are just some of the ways to keep the summer brain drain at bay.
Posted on April 23, 2001 in Washington Watch

When American Presidents leave office, the battle for their legacy begins. Historians debate their contributions, reporters examine their records and the public weighs their memories. Partisans also enter the fray, with each side attempting to elevate their favorites, while working to discredit those whom they have politically opposed.

At stake, for the partisans, is more than the public memory or the standing of their favorite past-President in opinion polls. As political activists seek to enshrine the legacy of their heroes, they also seek to elevate their philosophy and its currency.

For decades, for example, liberals elevated the presidency of Franklin Delano Roosevelt (FDR) and his New Deal programs. He was America's longest serving President, having been elected four times. He led the United States out of the Great Depression and instituted social programs that provided support for the poor, the unemployed and the retired. And he led this nation through the Second World War, calming fears and inspiring greatness. The FDR memorial, recently constructed on Washington's Mall, has become one of the city's great tourist attractions. It commemorates not only the history of the man, it also imparts his political philosophy as well.

Conservatives have long regarded FDR's New Deal as the embodiment of all that was wrong with what they call "big government." They have struggled both to discredit his political philosophy and to replace him in the "pantheon of great presidents" with an icon of their own: Ronald Reagan.

There is currently an effort, led by conservative activists and supported by some Republican members of Congress, to build a Ronald Reagan memorial in Washington and to name more federal buildings and installations after him. There is already a new Washington government office complex named after Reagan, and Washington's National Airport has been renamed "Reagan National Airport." Not yet satisfied, this group is working to have at least one building in each of the 50 states named after the former Republican President. And some members of Congress have proposed putting his face on one denomination of U.S. currency and, possibly even, having his face carved on Mount Rushmore!

Reagan, though suffering from Alzheimer's and out of the public eye, is a living former President, and so some have questioned the appropriateness of these efforts. But what of our other living former Presidents?

Bill Clinton, who sought to enhance his legacy during his last year in office by attempting to negotiate a Middle East peace, was frustrated by the collapse of his effort. His quest for a legacy suffered further blows resulting from a number of controversies occurring during his final days in office. But Clinton is young and enormously talented and, despite the many controversies fed by his partisan opponents, the public view of his eight-year term is still quite good. This, coupled with what he does with the rest of his life, may restore his chance of a positive historical legacy.

His predecessor, George H.W. Bush, has had a largely quiet life since leaving office. Bush, whose own term in office was characterized by his struggle to emerge from under the shadow of Reagan, saw only momentary greatness as the victorious Commander in Chief of the Gulf War. This ended with an election defeat in 1992. Bush, though, may find that his legacy will depend, to a degree, on the success of his son's term as President.
Gerald Ford, the other living Republican former President, served only a short time in office and has also led a fairly private life in retirement.

That leaves Jimmy Carter, the last of our living former Presidents. Carter, a Democrat, served from 1977 to 1981. His term in office ended under the clouds of a failing economy and U.S. hostages held in Iran. But since leaving office, Carter has done more to establish his legacy than, I believe, any former President in history.

He first closely identified himself with a non-profit volunteer project, Habitat for Humanity. During its two decades of existence, Habitat has become a household word in many communities across the United States. It has built 100,000 low-cost homes for one-half million people in more than 60 countries around the world. Through its 1,900 affiliates, Habitat has worked to transform communities and make a real contribution to the quality of life for millions. Carter himself has oftentimes actively volunteered in Habitat projects, and when thinking of him today, more Americans call to mind Carter in denim working to build a house than Carter in a suit at the White House. Though not a young man, Carter, with his wife Rosalynn, still volunteers one week each year to Habitat. Last year they helped to build 100 homes in New York City; the year before, 300 in Philadelphia.

His Carter Center in Atlanta, Georgia, has further established his credentials as a great ex-President. The Center was established in 1982 and describes its role as "waging peace, fighting disease and building hope." Known worldwide, the Carter Center's programs have had a significant impact on the lives of those who have benefited from their projects. Many of the Center's projects are personally led by the former President himself. They have provided technical assistance and monitored elections in 20 fledgling democratic states; negotiated peaceful solutions to conflicts in Haiti, Korea, Nicaragua and several African countries; and continued to campaign for global human rights, a theme of his presidency. The Center's health projects have been equally significant. Through Carter Center efforts, a dreaded disease that plagued parts of Africa and Asia has virtually been eradicated. And programs initiated by the Center have assisted more than one million farmers in Africa to increase their yields and improve their lives.

I had the honor to serve under President Carter as an election monitor in Palestine in 1996. His very presence brought integrity to the process and inspired all who saw him stand up to the many challenges he faced in ensuring that that election would be "free and fair."

Recently I had the opportunity to interview Carter about a number of issues related to his work. We spoke of his having been awarded an international prize, named after UAE President Sheikh Zayed, in recognition of his work on behalf of the environment. I also asked him for his reflections on two major Middle East issues, the conflict in Palestine and the sanctions against Iraq. His answers were direct and powerful.

On the Palestinians:

I got in some trouble when I was the President because I called for a Palestinian homeland, publicly, just a few weeks after I became President. And my primary negotiating technique was to protect the basic rights of the Palestinians.
In the Camp David Accords, which Prime Minister Begin and the Israeli Knesset approved, as did President Sadat and the Egyptian Parliament, the Camp David Accords reemphasized the commitment to United Nations Resolution 242, the prohibition against the acquisition of territory by force, and called for the complete withdrawal from the West Bank and Gaza of Israel's political and military entities, except for a few outposts which would be mutually decided... However, when the United States has in effect looked the other way, which I'm afraid we have in the last few years as Israel continued to build additional Israeli settlements in the occupied territories, that's when I have been critical, publicly.

On Iraq:

I personally feel that the sanctions against Iraq have been counterproductive. The World Health Organization estimates that between 100,000 and 500,000 Iraqi children have probably perished because of inadequate medical care, partially brought about by the sanctions, and partially brought about by the ineptness or callousness of Saddam Hussein. I think there's a shared responsibility there, because more of the income from oil that is sold could have been devoted to the alleviation of suffering and the prevention of the death of these children.

But I think the sanctions are counterproductive for several reasons. One is what I've just described, the humanitarian cause. Secondly, the United States and Great Britain are now acting, to a substantial degree, without the authority or approval of the United Nations organization itself. And the third is because I think the sanctions are hurting the people of Iraq, and not Saddam Hussein, whom I consider to be a dictator, and I think an insensitive dictator, and he is able now to blame all of his maybe self-induced problems, economically and socially, on the United States because of our sanctions and because of our fairly infrequent aerial attacks. I think this gives him an excuse, and I think in a strange way it probably makes opposition to him within Iraq much more difficult than would be the case if the sanctions were not imposed, so I really don't agree that the sanctions are doing any good; I think they are counterproductive.

Carter's term in office may have ended under a cloud. He was for years severely criticized by Republicans and shunned by Democrats. But his work since leaving office and his continuing commitment to human rights and public service have ensured that his legacy will be recognized, not by partisans, but by people of good will everywhere.

For comments, contact [email protected]
Micro CHP systems are covered under the Green Deal as a measure under Microgeneration. Micro CHP is certified under the Microgeneration Certification Scheme. This means that, as an MCS Certified installer with the additional standard MCS 023, you are able to install MCS Certified Micro CHP systems under the Green Deal and offer consumers the Feed-in Tariffs.

Micro CHP stands for micro combined heat and power. This refers to a heating technology which generates heat and electricity simultaneously, from the same energy source, in individual homes or buildings. The main output of a micro-CHP system is heat, with some electricity generation, at a typical ratio of about 6:1 for domestic appliances. Any electricity generated and not used in the home can be exported back to the grid. A typical domestic system is expected to have the potential to generate up to 1kW of electrical output once warmed up. This would be enough to power the lighting and appliances in a typical home. The amount of electricity generated ultimately depends on how long the system is running.

Most domestic micro-CHP systems today use mains gas or LPG as a heating fuel, although they can also be powered by oil or bio fuels. While gas and oil are not renewable energy sources (they are fossil fuels), the technology is still considered to be a 'low carbon technology' because it is more efficient than just burning the fossil fuel for heat and getting electricity from the national grid.

Micro-CHP systems are comparable in size and shape to an ordinary, modern, domestic boiler and can be wall hung like most boilers, or floor standing. Servicing costs and maintenance are estimated to be similar to a standard boiler, although a specialist will be required. The only difference from a standard boiler is that they are able to generate electricity while they are heating water.

Micro-CHP has a number of benefits, including:

- Electricity generation as a by-product of heat: when the micro-CHP is generating heat, the internal engine or fuel cell will also generate electricity to be used in your home (or exported).
- Carbon savings: by generating electricity on-site you are saving significant amounts of carbon, as there are minimal losses compared with the grid.
- Financial income: micro-CHP is eligible for Feed-in Tariffs, and you will earn 10p for each kWh generated by your system. You will also receive 3p for each kWh you export (see the worked example below).
- Installation is easy: there is very little complexity to installing a micro-CHP unit. If you already have a conventional boiler then a micro-CHP unit should be able to replace it, as it's roughly the same size. Given the electricity generated, an electrician will also be involved with the installation, but this is something the installer will organise.

The estimated costs are often quickly covered by Feed-in Tariffs, which can be offered by Microgeneration Certification Scheme (MCS Certified) Installers who offer MCS Certified CHP equipment.
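To make the tariff figures concrete, here is a rough, illustrative estimate of annual Feed-in Tariff income using the 10p generation and 3p export rates quoted above. The running hours, electrical output and export fraction are assumptions chosen for the sketch, not figures from this page.

```python
# Illustrative annual Feed-in Tariff income for a domestic micro-CHP unit.
# Tariff rates are the ones quoted above; everything else is an assumption.

GENERATION_TARIFF_GBP = 0.10  # earned per kWh generated
EXPORT_TARIFF_GBP = 0.03      # earned per kWh exported

hours_running = 2500          # assumed annual running hours (heating season)
electrical_output_kw = 1.0    # up to ~1 kW once the unit is warmed up
export_fraction = 0.25        # assumed share of generation not used on site

generated_kwh = hours_running * electrical_output_kw
exported_kwh = generated_kwh * export_fraction

income = (generated_kwh * GENERATION_TARIFF_GBP
          + exported_kwh * EXPORT_TARIFF_GBP)

print(f"Generated: {generated_kwh:.0f} kWh, exported: {exported_kwh:.0f} kWh")
print(f"Estimated tariff income: GBP {income:.2f} per year")
```

On these assumptions the unit would earn roughly £270 a year; actual income scales directly with how many hours the system runs.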
EDUCATION MATTERS FOR SURE

COLUMN BY RON MARKS

Recently the Chignecto Central Regional School Board (CCRSB) approved a new Strategic Plan for 2013-2016 titled Strengthening Our Learning Community. Quoting from the Strategic Plan, "The Mission of CCRSB is to develop independent lifelong learners in a student-centred environment with high expectations for all."

The fundamental beliefs of CCRSB are simple: "At CCRSB we believe that student learning is our priority and that learning is a partnership among home, school, and community. We believe that all students have the ability to learn and that our students learn in different ways. We believe that we must teach the whole child because learning is a lifelong process. We believe that our schools must be safe, supportive and socially just places where everyone must be treated with dignity and respect."

Most likely, everyone could agree with the Mission and Fundamental Beliefs of the Board. They might ask, so what? Everyone can agree with this, but what does it do for my child or grandchild? That would be a fair question, and the answer is far from simple. The first thought is that it gives everyone direction, so we are all heading in the same direction with the same goals in mind.

Keeping it simple and achievable, the strategic plan has only two goals.

Goal 1. Increase Student Learning

"Effective instruction and assessment processes by classroom teachers, working in communities of practice, and supported by knowledgeable and responsive instructional leaders, will have a positive outcome for increased student learning."

The 2013-2014 Yearly Action Plan for Goal 1 is quite ambitious:

- "Professional learning will be provided focusing on the application of effective, research-based instruction and assessment practices; student management practices; and student intervention practices.
- The new Nova Scotia Mathematics curriculum will be implemented in grades Primary to 3 and grade 10.
- Professional learning in effective, research-based motivation, instruction and assessment practices for male and male-like learners will be provided. This will allow teachers and instructional leaders to better differentiate instruction and assessment processes for all students."

Goal 2. Provide Positive, Safe, Socially Just Learning Environments

"A positive learning atmosphere is dependent on a climate of complex social interaction within a diverse community of people. Students, staff and parents/guardians should expect an environment that is positive, safe, inclusive and welcoming."

This is just a snapshot of a very detailed document and plan that school board members will use to hold our superintendent accountable. It is also one aspect that you, the voter, should use to hold elected school board members accountable. If everyone does their job, our students will be better prepared for their future.

An education advocate, Ron Marks has been an outspoken member of local and regional school boards and a former Stellarton mayor. His column runs weekly.
'ala' plays at least five separate, but interrelated, roles in toki pona, all somehow called negation and, in true toki pona fashion, none clearly distinguished from the others in the vocabulary or the grammar (and often in the context even). This gives rise to several problems which require solutions, usually by fiat.

As a logician, I tend to focus on the logical uses: sentential negation and negative quantifiers. Sentential negation is just a function that converts a sentence into its negation, a sentence false when the first was true, true when the first was false: "It is not the case that...". In simple English sentences this is expressed by "not" right after the main verb; in more complicated cases, it gets (of course) more complicated, but logicians tend to solve that by pulling "It is not the case that" out in front of all the complications. toki pona does not have this device available (the suggestion to use 'ala la' for it is regularly rejected as being unpona or even lojbanish, which it is, of course). The result is that either we have to do a lot of internal adjustments to get the right results or we have to avoid saying things like that altogether. The latter seems easier and we will discuss those directly. For now, sticking to simple sentences, we can think (incorrectly, it turns out) of sentential negation as 'ala' immediately after the predicate head, the "verb".

The negative quantifier "no" is just 'ala' modifying whatever word specifies the sort of thing not involved. This creates an amusing conflict with a later use of 'ala', to specify the complement class, since 'jan ala li kama' can mean "A non-human came" as well as "Nobody came". To be sure, the "non-human" reading is usually implausible, but the trap is always there. In logic, "nobody" is just "not somebody", with a sentential "not" ranging over a particular quantifier, "some". Since toki pona does not have an explicit "some", this interpretation is not quite possible, since 'jan li kama ala' will (as we will see) get read as "Somebody did not come", even though we just said (albeit with disclaimers) that 'kama ala' ought to be a sentential negation. These sorts of problems arise not just with "some", but with all the explicit quantifiers: "ali" and numbers and even 'ala' itself.

To see how this plays out, we turn to the third sort of negation that 'ala' represents, indeed, the one that is sometimes listed as its only meaning: complementation. If a word, w, refers to a class of things or members of that class, then 'w ala' refers to the class of everything not in the first class, for whatever reason. So 'loje' refers to the class of all red things and 'loje ala' refers to everything else: green things, blue things, ideas, irrational numbers, running, spirituality, etc. So long as the answer to the question "Is it red?" is "No", it is in the class referred to by 'loje ala'. But while this works for individual things, it works less well for general expressions: if some particular thing is in the complement of 'loje', then that thing is not in 'loje'. But, of course, the mere fact that something or other is not in 'loje' doesn't mean that something or other else may not be in 'loje'. So 'ala' on the verb works fine as a sentential negation as long as no quantifiers are involved, but it is not always obvious when quantifiers are involved. Further, it is not decisively settled what to do when quantifiers are involved.
Cases of quantifier plus negation always present two possibilities, and it is not clear which is the normative one and, given an answer to that, how to give the other possibility. It may be illuminating to see what happens with Y/N questions under the two possible readings. Consider "Did everybody come?", naturally translated as 'jan ali li kama ala kama?'. Since this is covertly a choice question between 'jan ali li kama ala' and 'jan ali li kama', there are four possible answers: '(kama) ala', 'kama', 'ala tu' ("neither") and 'tu ali' ("both"), the last of which can't apply in this case (but see later). 'ala tu' is also assumed not to apply in this case, but it does on some interpretations.

So, suppose everybody comes. Then 'jan ali li kama' is true on either interpretation and is the right answer. Suppose that no one comes. Then 'jan ali li kama ala' is true on the complement interpretation, since every person is in the class of non-comers. It is also true on the sentential reading, since 'jan ali li kama' is disconfirmed in the extreme. The stronger claim, that nobody came, might be more informative, but the weaker is still true and may be adequate.

But what if some came and others did not? Both claims are now false on the complement interpretation: not everybody is in the class, nor is everybody in its complement; some are in each. So only the "neither" ('ala tu') response is correct (despite not being offered as a choice officially). On the sentential reading, however, only the unnegated form is false, for not all are in the class. To be sure, it might be useful to know that only some are in the class, but that is not necessary for the given question (filling this gap is one of many uses of the 'anu seme?' question).

A similar pattern (though reversed) arises with "Did anybody (somebody) come?", 'jan li kama ala kama?' (with 'jan' somehow despecified). Clearly, if everybody came, the 'kama' answer is true and the 'kama ala' false. Similarly, the reverse holds if nobody came, though in both cases additional information might be given. But when some came and some didn't, the complement interpretation should strictly answer 'tu ali': some things are in the base class and some things in the complement. But in the spirit (if not the letter) of Y/N questions, just the 'kama' is required. That is, of course, the correct answer for the sentential reading, since "it is not the case that somebody came" is false.

Hopefully these cases help in choosing what to say under a given interpretation. While it doesn't ultimately matter which one you choose, pu seems to hold to the complement interpretation, and that is assumed from now on.

The other two negations are simpler in outcome, though sometimes harder to explain. One of these is the notion of a polar opposite. This is a subclass of the complement class which contains the things most different from the regular class (usually, admittedly, within some broader class; that is, being a totally different sort of thing, an object rather than an action, for example, doesn't enter, just one object and another). The idea seems very subjective. If you have a color wheel in mind, then the polar opposite of red is green, but that makes no sense in a linear spectrum. But there are many cases where culture at least sets out obvious cases: good and bad, dark and light, living and dead, and so on through many adjectives.
Nouns and verbs are more problematic although, for some verbs at least, there is the notion of undoing from doing: regurgitation from ingestion, taking from giving, erasing from writing, and so on. The point is that, where the opposite is not separately given a word, 'ala' can be pressed into service for the notion. It is not clear how to tell when this happens without some considerable surrounding talk, but it is a legitimate move.

The final negation is of presuppositions. This is called into play when something is said that presupposes something else that is not established. The classic is the loaded question 'sina pini ala pini utala e meli sina?', say. Either ordinary answer is an admission that you have beaten your wife. Fortunately, in tp the answer 'ala tu' ("neither") is legitimate and gets out of that (though someone may try to contest the dodge). There are many contemporary examples of another classic type: "Why hasn't Hillary been imprisoned for her treasonous activities?", which officially only allows an answer of the form "The reason is…" and not of the form "There have been no adjudged treasonable activities to prosecute, let alone imprison for" and so on. One response is just to say that there are presuppositions here that need to be dealt with before the question as presented can be answered. Various forms of "super 'ala'" have been dreamed of for this purpose, but none has been established as a shortcut for the long version above. But one might come in handy if we ever get around to doing debates in tp.
Linux is an operating system created by Linus Torvalds for personal computers and workstations, first released on September 17, 1991. Linux is a modern, POSIX-compliant operating system based on the UNIX operating system. It is a multiuser network operating system that includes the X Window System, a networked graphical windowing system. Linux supports both open-system standards and Internet protocols, and it is compatible with Unix, DOS, and MS Windows systems.
Transcript of "Chapter 13: Language and Identity"

Language, Discourse & Identity

We surveyed 100 people. Learners' identities may impact learning processes and engagements with literacy, and may add resistance to an undesirable or uncomfortable educational setting.

Structuralists vs. post-structuralists:
- Structuralists (e.g., Ferdinand de Saussure, 1966) hold that a linguistic system guarantees the meaning of signs (the word and its meaning), and that each linguistic community has its own signifying practices that give value to the signs in a language. On this view, linguistic communities are homogeneous and consensual.
- Post-structuralists (theory: Bakhtin 1981, 1984; Bourdieu 1977, 1991; Hall 1997; Weedon 1997) hold that structuralism cannot account for struggles over the social meanings attributed to signs in a language. Signs and linguistic communities are heterogeneous: sites of struggle over conflicting claims to truth and power.

A recent example: Julia Gillard, Peter Slipper, Tony Abbott, and the "ditch the witch" and "bitch" placards. Gillard: "The leader of the opposition says that people who hold sexist views and who are misogynists are not appropriate for high office... I will not be lectured about sexism and misogyny by this man. And the government will not be lectured about sexism and misogyny by this man. Not now, not ever." She was criticized for misusing the word misogyny, taken to imply "entrenched prejudice against women" as well as, or instead of, pathological hatred of them.

Every time we speak, we are negotiating and renegotiating our sense of self in relation to the larger social world, and reorganizing that relationship across time and space. Our gender, race, class, ethnicity, sexual orientation, among other characteristics, are all implicated in this negotiation of identity. (Bourdieu 1977; Weedon 1997)

Identity and subject positions (sets of relationships):
- positions of power (empowered): leader, governor, parent, boss, CEO, head of household
- reduced positions of power: student, soldier, child, worker

"The real me is a fiction": identity is a social construct -- sexual orientation, gender, race, class, ethnicity, nationality, age -- that changes over time. Subjectivity is a site of struggle and can change as the result of transformational education.

What does this mean for teachers? Each identity position may enhance (+) or limit and constrain (-) opportunities for learners to speak, read or write. Gender, class, race, ethnicity and sexual orientation can mean gaining or being denied access to powerful social networks, to social interaction and to human agency.

Heterogeneous society example: Yom Ha'atzmaut (Independence Day) -- language as a site of struggle.

Roles of Language Learning: Ibrahim (1999), Cameron (2006), Pavlenko (2004), Sunderland (2004), Higgins (200?), Taylor (2004); "Becoming Black" as a result of hegemonic processes; Hue and Khatra; gender and language; oral, multimodal and written modes.

Language is more than just a linguistic system. It is a "social practice in which experiences are organized and identities negotiated." (Norton 2010)

Roles of language: identity, mind, languages (Khatib 2011). "I believe it is important to retain our Native languages."
Survey responses:
- "I feel more comfortable participating in class when teachers embrace my culture and identity."
- "I thoroughly enjoy learning English and attending English courses (in high school, ESL, etc.)."
- "I believe that most English learning courses are taught through an assimilation model (English only; not allowed to speak the native language or bring in other cultural aspects)."
- "I believe languages are closely related to identity and expression of who we are."

The Effect of Identity on Language Learners

Language and Gender Identity
What do scholars think about language and gender? Lakoff (1975); Tannen (1990); Judith Butler (1990): gender is a stylized performance.

Language learners and gender 1 (assimilation):
- L2 gendered subject positions can be more favorable (Pavlenko 2001)
- English as a "weapon for self-empowerment" and a new voice (McMahill 2001)

Language learners and gender 2 (resistance):
- Learning a second language can threaten L1 gender identity.
- Shifting from L1 to L2 gender identity: English learners do not simply abandon an L1 gender identity and take up an L2 one; some hold onto the L1 identity and reject the L2 one.
- Gender identity is fluid. Thus, learners may shift between gendered subjectivities and find themselves between worlds.

Topics for gender and language in the classroom:
1. Do you associate the English language with "freer" expression of gender identity? If yes, how is your gender identity expressed differently in the English language from your native (L1) language?
2. Do you think learning the English language gave you new perspectives on gender, sex, and sexuality? If yes, what perspectives specifically? (e.g., are you now more open to LGBTQ people? More open to exploring new possibilities for gender identities?)
3. What is the most powerful English medium that influences your concept of gender identity? (e.g., English TV? Literature available only in English about different gender identities? English blogs? Other media?)
4. How has your dual identity affected your English learning process?
5. Was it easy balancing your dual identities?

Pedagogical applications -- teachers can:
1. Allow greater expressive choices
2. Use gender discourse as a topic for exploration
3. Raise intercultural awareness
4. Address gender stereotypes
5. Bring a global perspective on gender

"My Heart is Where I Am" -- Somalian immigrants: "I am not denying myself, I am being myself." "My heart is here, my heart is in Somalia. Home is where the heart is; my heart is where I am." "At school we feel British; at home we have our own Somalian culture, food, and language. This makes me stand out as my own." "I believe it is important to retain our native languages."
Taylor (2004) "Becoming Black" Particular relations of race, gender, class, and sexual orientation may impact the language learning process a result of hegemonic processes Gender and Language Hue and Khatra sexual orientation gender race class ethnicity nationality age changes over time learners’ identities may impact learning processes engagements with literacy add resistance to undesirable or uncomfortable educational setting offer learners multiple identity positions provide learners with diverse opportunities to take ownership of meaning-making Make the desirable possible! patriarchy Hypothesis: Men and women speak differently because of their sex. female vs. male subcultures Gender stereotypes Media representation of gender Sexist language Ideas for Teaching 1: Formal Instruction -Rejects life style of L2 speakers (Skapoulli 2004) - Refuse to use Japanese honorifics (Siegal 1996) Ideas for Teaching 2: Analyze Media Representation Gender Representation in a Disney Film Learning second language enhances L2 gender identity. Ideas for Teaching 3: Discuss Global Cultural Perspectives on Gender How we talk and what we say is an important part of how we define and express the different sides of "us" Language learners are not passive, they have agency (Andersson and Andersson 1999) Language learners resist by creating “pedagogical safe houses” Good for teachers to re-conceptualize students' resistance more productively as a meaning making activity (Mckinney and van Pletzen 2004) The Effect of Identity on Language Learners Learners with positive attitudes towards their own ethnic identity and towards target culture will most likely develop a strong motivation of L2 acquisition, while also maintaining L1 (Khatib 2011) In China, learning English is considered a valuable non threatening skill; general motivation to learn the language (Khatib 2011) They use both their L1 and L2 to express their evolving identity; their gender, ethnicity, and self identification Language and identity cannot be separated from culture, it is a "social practice in which experiences are organized and identities negotiated." (Norton 2010) References Higgins, Christina. (2010). Gender Identities in Language Education. In Nancy H. Hornberger and Sandra Lee McKay, Sociolinguistics and Language Education (pp. 370-369). Bristol: Multilingual Matters. Norton, Bonny. (2010). Language and Identity. In Nancy H. Hornberger and Sandra Lee McKay, Sociolinguistics and Language Education (pp. 349-369). Bristol: Multilingual Matters. Pavlenko, Aneta. (2001). Multilingualism, Second Language Learning, And Gender. Retrieved from http://books.google.ca/books?hl=en&lr=&id=iLhtuqFOBk0C&oi=fnd&pg=PR5&dq=gender+identity+and+language+learning&ots=A0hnhDyKZ-&sig=iFerdaOyZVUxTYWfm8Cto7c1sDM#v=onepage&q=gender%20identity%20and%20language%20learning&f=true Reyes, Angela. (2010). Language and Ethnicity. In Nancy H. Hornberger and Sandra Lee McKay, Sociolinguistics and Language Education (pp. 398- 426). Bristol: Multilingual Matters. Khatib, Mohammad, & Ghamari, Mohammad Reza (2011). Theory and Practice in Language Studies. Mutual Relations of Identity and Foreign Language Learning: An overview of Linguistic and Sociolinguistic Approach to Identity. Theory and Practice in Language Studies (pp. 1701-1708). Finland: Academy Publisher. Language and Identity” by Mary Bucholtz and Kira Hall, Chapter 16 in Companion to Linguistic Anthropology. 
Ethnicity
Ethnicity is a social construction that indicates identification with a particular group which is often descended from common ancestors. Members of the group share common cultural traits (such as language, religion, and dress) and are an identifiable minority within the larger nation-state. Ethnicity: national origin, language, religion, food and other cultural markers. Race: skin color, hair texture, eye shape and so on.

African American English: a systematic variety with well-defined linguistic rules. Hip hop. (Hornberger & McKay)

Summary -- Phil Sze (2012)
Leftover Instruments Will Pave Way for New Propulsion Test
Space Science News home

NASA hardware to piggyback on satellite

"We want to prove this new technique, the Deflection Plate Analyzer, or DPA, which characterizes ions and electrons in a plasma, and tell us what direction they are coming from," explained Fred Berry, PEST Project Manager in the Space Sciences Laboratory at NASA's Marshall Space Flight Center.

Right: The detector head for the Deflection Plate Analyzer is shown next to a 6-inch ruler and a dime to indicate its small size.

"Our first tests of the new detector have been successful," Berry said on March 18. "We have it functioning in the plasma chamber and have detected our first plasma with it."

"This will really benefit ProSEDS, the propulsive tether project NASA/Marshall is developing for launch in 2000," said Dr. Nobie Stone, the principal investigator for the DPA project. "It will give us a chance to get an in-flight diagnostic test of one of the ProSEDS instruments."

ProSEDS, the Propulsive Small Expendable Deployer System, will use a wire tether to connect with the plasmas in low Earth orbit and form an "electric motor" that will slowly put the brakes on a spent Delta rocket stage. This will make it re-enter the Earth's atmosphere much faster than natural decay alone (less than two weeks rather than the normal 5 to 6 months).

Plasmas are generated by the ionizing effect of solar ultraviolet (UV) radiation on the upper regions of the Earth's atmosphere. This region of ionized gas, or plasma, is called the ionosphere and is heavily involved in space weather effects that are important to anyone who builds satellites. Some of the plasma effects can be caused by spacecraft themselves, so NASA/Marshall has placed plasma instruments aboard several Space Shuttle missions to measure the craft's own effects.

Right: The Deflection Plate Analyzer is readied for a recent test in a space simulation chamber.

The DPA uses a new analysis technique that can measure multiple ion streams and determine the intensity, flow direction, and energy and mass distributions for each stream. But it can't be fully tested on Earth because space simulation chambers can't fully mimic the complex conditions of space. The space test requires comparing the DPA with instruments that have been successfully used in space many times. So, PEST will combine the old and the new, giving the DPA its first flight test while instruments from previous space missions look over its shoulder.

Left: The Plasma Diagnostics Package is held over the Space Shuttle payload bay during the Spacelab 2 mission in 1985.

"That's our standard candle," Berry said of the RPA (retarding potential analyzer). Stone noted that the RPA design has been used for more than 30 years and will provide a good point of comparison. But where the RPA just measures along a narrow line of sight and under equilibrium, or "quiet," plasma conditions, the DPA will "see" ions arriving from different directions and should make reliable measurements even under highly turbulent conditions.
The other instrument making a return trip to space is the Soft Particle Energy Spectrometer (SPES), which flew aboard the Tethered Satellite System carried by the Shuttle (TSS-1 on STS-46 in 1992 and TSS-1R on STS-75 in 1996). Chasing down an oddity noted during the Tethered Satellite System missions is another objective of the PEST mission.

"We noticed something strange in the characteristics of the RM400 conducting thermal coating used on the tethered satellite," Stone explained. "The data suggested tremendous emissions of secondary electrons due to particle bombardment or solar ultraviolet or both. We had no reason to suppose it would do that before the TSS mission launched."

To find out what happened, PEST will include a test panel coated with stripes of gold - which has well-known characteristics - and stripes of RM400. Stone hopes that data from the DPA and SPES instruments will help him and other scientists sort through the effects caused by the RM400 coating on the tethered satellite experiment.

"We think that flying a new generation of instruments - at this altitude over the Earth's poles - will provide a good data set that can be used to improve our ability to model that part of the magnetosphere," Stone said. "It's an important area because the polar regions are active, including the aurora borealis, energetic particle streams, and outflowing plasma and gas escaping from the Earth."

Right: The Tethered Satellite System stands atop its deployment boom during the STS-46 Shuttle mission in 1992.

JAWSAT, the satellite that will carry PEST, will be launched by a surplus U.S. Air Force Minuteman ballistic missile with a Pegasus fourth stage added. The primary payload will be the U.S. Air Force Academy's Falconsat. Also piggybacking are Stanford University's OPAL satellite and the Arizona State University Satellite (ASUSat). Total launch cost is $3 million. NASA/Marshall's expenses will be $232,000: $40,000 to piggyback, and $192,000 to develop the DPA instrument suite, the RM400 test panel, and the power and data handling electronics. The entire process, from the offer of a ride, to delivery of the flight hardware in June, to launch in August, will have taken only 11 months.

Berry said that PEST will operate for at least two months, and that the data can be collected by amateur radio operators. "We are really interested in hams being able to use these data," Berry said. Details on the frequencies that PEST will use, and how the data are formatted, will be published this summer, he added.

"To launch an experiment of this complexity - an array of three instruments, a materials test panel, and the supporting power and data systems - for $230,000 in only 11 months is unprecedented," Stone said. "It holds great promise for future small satellites. It's very encouraging to know we can do this."

For more information: Dr. John M. Horack, Director of Science Communications.
Author: Dave Dooling. Curator: Linda Porter. NASA Official: Gregory S. Wilson.
Issue No. 01, January/February 2007 (vol. 22). Published by the IEEE Computer Society.
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MIS.2007.12

The first news story, "AI Fuels Better Video Search," looks at how researchers are using AI methods to improve the accuracy of video search results. The second news story, "The Challenge of Go," reports on current attempts to develop computer programs that play Go.

AI Fuels Better Video Search

Video clips represent a fast-growing part of the Web, but potential viewers face a daunting problem: a wide range of video clips are useful only if you have an effective way to search them. But video search proves a tougher technology problem to crack than text search, for a variety of reasons. Today, most video search utilizes text-based keywords or metadata that's supplied with user-generated clips. This approach isn't terribly accurate. However, some AI researchers and search experts are using AI methods to try to improve the accuracy of video search results.

Annotating whole video scenes

While at the University of Oxford, researcher Mark Everingham (now at the University of Leeds) collaborated with Josef Sivic and Andrew Zisserman to create a video search system using more than simple face or speech recognition. Using video from the television series Buffy the Vampire Slayer, the team pursued an ambitious goal—to move from an interface of "find me more of the person in this picture" to automatically annotating every video frame with the names of the characters present. "The key to this work is combining information from both video and text to learn a representation of a person's appearance such that they can be recognized visually and assigned their proper name rather than an anonymous tag," Everingham notes.

Why is it tough to index video clips using the people that appear in them? Effortless tasks for humans remain extremely difficult for computer vision, Everingham says—in particular, determining that a person is in the picture in the first place and determining whether two images are of the same person.

The Buffy video search project uses statistical machine learning—specifically, computer vision methods for face detection and facial-feature localization. These are learned from training data, rather than built by hand. Using machine learning for tasks such as face detection or object recognition has become common among the computer vision community, Everingham says. But typically, the work uses supervised learning methods, which require tight coupling between the input and desired output (for example, a class label or a face's identity). In contrast, this team's research involves "weakly supervised" methods (a growing research area), where the coupling between input and labels is imprecise or incomplete, says Everingham.

"Two aspects of our work might be considered particularly novel," he says. First, the approach uses two texts—subtitles extracted from a DVD and scripts found on fan Web sites. Neither source by itself provides enough data for the system to learn to recognize a character. But by borrowing a sequence alignment method that's been applied to applications such as gene sequence alignment and speech recognition, the team created a system that automatically aligns the texts.
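The article doesn't publish the team's code, but the alignment idea can be sketched in a few lines. The toy Python example below (all data and names are hypothetical, and Python's difflib stands in for the dynamic-programming aligner) matches timed subtitles against a script so that each subtitle line picks up a speaker name:

```python
# Illustrative sketch only -- not the authors' actual system. It aligns timed
# subtitles (no speaker names) with a fan script (speakers, no times).
from difflib import SequenceMatcher

# Hypothetical toy data: subtitles carry times, the script carries speakers.
subtitles = [(3.2, 5.0, "we have to stop the ascension"),
             (5.4, 7.1, "how do we do that")]
script = [("BUFFY", "We have to stop the ascension."),
          ("WILLOW", "How do we do that?")]

def words(text):
    return [w.strip(".,?!").lower() for w in text.split()]

# Flatten both sources into word streams, remembering each word's origin line.
sub_words, sub_src = [], []
for i, (_, _, text) in enumerate(subtitles):
    for w in words(text):
        sub_words.append(w)
        sub_src.append(i)

script_words, script_src = [], []
for j, (_, line) in enumerate(script):
    for w in words(line):
        script_words.append(w)
        script_src.append(j)

# Dynamic-programming alignment of the two word streams.
matcher = SequenceMatcher(a=sub_words, b=script_words, autojunk=False)

# Propagate speaker names from script lines to subtitle lines via matched words.
names = {}
for block in matcher.get_matching_blocks():
    for k in range(block.size):
        sub_line = sub_src[block.a + k]
        names.setdefault(sub_line, script[script_src[block.b + k]][0])

for i, (start, end, text) in enumerate(subtitles):
    print(f"{start:5.1f}-{end:5.1f}  {names.get(i, '?'):6}  {text}")
```

The output is a list of (time interval, speaker, line) triples—exactly the "who speaks when" information the quote below describes.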
Second, the technique incorporates data from the video. "The subtitles tell us when a line is spoken. By aligning with the script, we determine who says that line, and in combination we now know who speaks when. A computer vision method for visual speaker detection then separates the cases where someone is speaking, but off-screen, and when they are visible on-screen," Everingham says. "Finally, the cases for which we have a name and know they are on-screen give us training data with which to name the remaining people in the video."

According to Everingham, the most challenging problem remaining is deciding whether two images are of the same person. Even in settings where lighting and facial expression are controlled, that's tough. But in television and movies, it's even harder, because changes in appearance due to factors such as varying lighting or expression are often far larger than the differences in appearance between individuals, he says. Machine learning methods will likely provide the solution to this problem, he believes, because the factors influencing appearance are so complex to model physically.

"Our ultimate goal is to provide automatic annotation of video with information about all the content of the video—not just who is in the frame, but where they are, what other objects are present, what they are doing, and how they might be feeling," says Everingham. Annotation like this for movies, news clips, or home video opens up many possibilities for easy searching, efficient use of archives, and automated narration for people with visual impairments, he says.

Add context, boost accuracy

Blinkx makes a video search engine that it licenses to the likes of Lycos and FoxNews.com. "We use AI in a couple of interesting ways," says Suranga Chandratillake, Blinkx founder and CTO. His company's search technology tries to objectively analyze video content by using speech recognition and matching the spoken words to context gleaned from a massive database (built on some 1.5 billion Web pages). The speech recognition technology is important, Chandratillake says, because relying on metadata tags is risky—they can be inaccurate. Most speech recognition research, though, has previously focused on applications such as dictation software that involve one or just a few speakers, or on applications such as automated customer service that involve a limited set of words. "Blinkx can't do that," Chandratillake says. "We have to listen to everything on the Net."

"As well as indexing the voice content, we index textual content from the Web," he says—such as news stories, blogs, product descriptions, and online encyclopedia material. "We use that to build probabilistic modeling of ideas in that world," Chandratillake says. "We analyze the phonetic transcript in the context around it. The probabilistic analysis helps us better guess what the phonetics are." For example, on a purely sound level, "recognize speech" sounds a lot like "wreck a nice beach," Chandratillake says. Blinkx uses the modeling to decide whether the context is speech recognition or a tsunami.
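A toy illustration of that disambiguation step (this is not Blinkx's actual system; the corpus and numbers are invented) is to score each candidate transcription with a language model built from Web text and keep the more probable one:

```python
# Toy bigram language model: pick between acoustically similar transcriptions
# by their probability under word-pair statistics from a text corpus.
from collections import Counter
from math import log

# Hypothetical mini-corpus standing in for a multi-billion-word Web index.
corpus = ("researchers improve speech recognition software to recognize speech "
          "a storm can wreck a nice beach house near the coast").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab = len(unigrams)

def score(sentence):
    """Average add-one-smoothed bigram log-probability of a word sequence."""
    ws = sentence.split()
    pairs = list(zip(ws, ws[1:]))
    total = sum(log((bigrams[p] + 1) / (unigrams[p[0]] + vocab)) for p in pairs)
    return total / max(1, len(pairs))   # normalize so lengths are comparable

# Two phonetically similar hypotheses; the surrounding context decides.
for hyp in ("recognize speech", "wreck a nice beach"):
    print(f"{hyp!r}: {score(hyp):.2f}")
```

In a document about software, "recognize speech" scores higher; in a document about storms and coastlines, the bigram counts would favor the other reading—which is the effect Chandratillake describes.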
Blinkx's technology has also begun to use some visual analysis—for example, reading characters on the screen, such as a name on a sports jersey or a ticker on a news program, he says. The company has also begun to amass a database of famous faces. Probabilistic modeling, using the same large database used for speech recognition, makes these two visual techniques more effective, says Chandratillake.

Among the group's AI challenges, Chandratillake cites adapting the modeling to applications such as speech and visual recognition. The speech data set is so large that you must make trade-offs to make the modeling practical, he says. But the trade-offs can cause problems when you apply the modeling to a new type of recognition. So, his team is focusing on making smarter tactical trade-offs and on modeling as good an assumption set from a smaller set of data, using weighted probabilistic modeling techniques.

Looking ahead, Chandratillake would like to add ever more context into video searching to make it more accurate. "We'd love to do object analysis, so you could say, pick out the Golden Gate Bridge," in the context of a news story.

A related challenge: Spotting doctored video

In a world that already distrusts photographs owing to the sophistication of image-editing tools, similar concerns have arisen regarding video. At Dartmouth College, Hany Farid is working on technology to show whether images or video clips have been doctored. Although it's not a specific goal of Farid's, his research results could possibly inform work on video search as well: given a plethora of video search results, users will need help to know whether a video clip was actually filmed by Fox News or concocted from imagination in a teenager's basement.

One approach Farid's team uses is the expectation maximization algorithm to detect tampering in de-interlaced video. Video software will often remove interlacing artifacts. (One frame of interlaced video actually travels in two parts, called fields, with horizontal lines split between them. Horizontal visual effects and blips sometimes remain when mismatches between fields exist.) This process gives rise to specific statistical patterns that you can estimate using the EM algorithm, Farid says. (For more on this work, see www.cs.dartmouth.edu/~farid/research/tampering.html and www.cs.dartmouth.edu/~farid/research/cgorphoto.html.)

Farid's research also employs support vector machines—a popular concept in the machine learning community, he notes. An SVM is a classifier for categorizing inputs into two or more categories. Farid's team first trains an SVM on a set of inputs and then applies it to novel data. His research, in progress for about five years, has trained SVMs on wavelet statistics to differentiate computer-generated images from photographic images. (Wavelets, a statistical model, can be used in conjunction with many types of data analysis—in this case, analysis of an image's properties such as scale and orientation.)

The biggest difficulty with this research is the massive amount of data that must be processed, according to Farid. "On the other hand, this also makes it harder for the forger to create a convincing forgery, so that should help us as well. In all of our work, we think about how a forger might specifically tamper with a video or image, formulate what type of statistical pattern this manipulation would produce, and then devise techniques for detecting these patterns."

His team's ultimate goal: develop a suite of image, video, and audio tools for detecting tampering. These tools won't stop the tampering but will make it more difficult, Farid adds. "We currently have two tools completed and several more that we are actively working on," he says. "I expect to continue work on video, image, and audio forensics for several years to come. Our primary audience is law enforcement and media outlets, but I am learning that there are many more potential applications of this work."
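The classifier side of the wavelet-statistics approach can be sketched briefly. The example below is an illustration under stated assumptions, not Farid's pipeline: it presumes each image has already been reduced to a feature vector of wavelet statistics, and it uses randomly generated stand-in features plus scikit-learn's SVM implementation:

```python
# Minimal sketch of training a two-class SVM on wavelet-statistic features
# (illustration only; features here are random stand-ins, not real statistics).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical features: rows = images, columns = wavelet statistics
# (e.g., per-subband means, variances, skews, kurtoses).
photo_feats = rng.normal(loc=0.0, scale=1.0, size=(200, 72))
cg_feats = rng.normal(loc=0.5, scale=1.2, size=(200, 72))

X = np.vstack([photo_feats, cg_feats])
y = np.array([0] * 200 + [1] * 200)   # 0 = photographic, 1 = computer-generated

clf = SVC(kernel="rbf").fit(X, y)     # train the classifier

# Classify a new image's wavelet-statistic vector.
print(clf.predict(rng.normal(size=(1, 72))))
```

Once trained, the same predict call is applied to novel data, which matches the train-then-apply workflow described above.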
The Challenge of Go

The historic victory of IBM's Deep Blue over chess grand master Garry Kasparov in 1997 had the unintended effect of boosting interest in Go. This ancient Asian board game has become a challenge for AI researchers around the world. Go is resistant to Deep Blue's brute-force search of the game tree; the number of possible moves is too large. This inspires researchers to develop hybrid approaches combining different methods and algorithms.

"After chess it is a logical next step—a game with simple rules, and a completely controlled environment," says leading computer Go researcher Martin Mueller about Go's growing popularity in the AI community. "Any real-life problem that consists of many loosely connected smaller components can profit. That covers just about any interesting real-life problem."

A deceptively simple game

Go, which originated in China, is frequently described as "deceptively simple." Two players take turns putting black and white stones on a board with a 19 × 19 grid. The aim is to capture and hold territory. A stone or group of stones surrounded by the opponent's stones is captured and removed from the board. The game ends when both players agree that neither side can improve its position.

More than 100 computer Go programs are on the market; examples include SmartGo, Go++, and Many Faces of Go (see the sidebar for related links). The programs employ heuristics, selective search, pattern matching, and hand-crafted rules but are no match for even amateur players. Part of the reason is that the Go board's large size leads to a nearly infinite number of possible positions. A chess player can choose from about 25 moves, but a Go player has more than 200 options. Commercial chess programs can evaluate about 300,000 positions per second (Deep Blue did 200 million per second), but midway through a game of Go, computer programs can evaluate a few dozen positions at best.

Moreover, capturing Go board positions in algorithms is extremely difficult. "It's really hard to evaluate positions," says SmartGo developer Anders Kierulf. "You have to determine which stones are connected, which stones are alive or dead, and use that information to map the territorial balance as well as the influence." Experienced players are said to recognize patterns in strings of stones early in the game that might have strategic implications later in the game, but they rely on intuition and are often unable to explain why they made certain moves.

These complexities are what attract AI researchers to the game. "Go is particularly appealing for researchers, as it is well defined and constrained by a set of simple rules," says Thore Graepel of Microsoft Research in Cambridge, England. "We are fascinated by a problem that is simple to state but extremely difficult to solve." Graepel is coauthor of the 2006 paper "Bayesian Pattern Ranking for Move Prediction in the Game of Go" (www.icml2006.org/icml_documents/camera-ready/110_Bayesian_Pattern_Ran.pdf). "In this paper we focus on the problem of modeling the play of human players by using machine learning techniques to learn from records of historical games," Graepel explains. "The model was trained from 180,000 records of games between expert Go players and aims at mimicking their way of playing in new, as-yet-unseen situations arising in a game. The resulting system gives the best currently published results for expert Go move prediction with a success rate of 34 percent on average compared to previous results of around 25 percent."

Monte Carlo Go

Another group of researchers is using the Monte Carlo method to improve computer Go.
Developed in the early 1990s, this statistical-sampling approach is widely used in computational physics. Monte Carlo responds to a game situation by running through a game thousands of times and then selecting a move that has produced the best result on average. The Monte Carlo method has become popular in recent years because researchers now have low-cost computers that can run many simulations.

At the 2006 Computer Olympiad, French researcher Remi Coulom won a gold medal with Crazy Stone, a computer Go program that employs Monte Carlo. Crazy Stone won its medal playing on a 9 × 9 board, which is commonly used for both beginning players and computer Go programmers. Crazy Stone combines Monte Carlo with min-max tree pruning and upper-confidence bounds applied to trees (UCT). The alpha-beta pruning heuristic used in min-max search assigns values to game tree nodes to stop evaluations of moves that are worse than the previously examined move, thus reducing processing time without affecting the final result. UCT, an algorithm developed by Levente Kocsis and Csaba Szepesvari in 2006 (http://zaphod.aml.sztaki.hu/papers/ecml06.pdf), chooses the move with the highest upper-confidence bound, which is the sum of the move's average value and the size of its confidence interval.

Most experts believe a computer Go program that can beat the world's best players is decades away. Yet researchers will keep trying, largely because of Go's complexity. As Thore Graepel of Microsoft Research puts it, "Since Go is one of many tasks in which humans can rapidly learn to outperform computers, it does seem likely that the techniques which eventually produce a strong Go playing program will offer insights into machine intelligence in general. For example, one could speculate that methods which are successful at determining the value of Go positions might prove useful for image processing, as the analysis of Go positions is a very visual task."
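To make the UCT selection rule described above concrete, here is a minimal sketch (illustrative only—not Crazy Stone's code; the playout statistics are invented). It picks the move maximizing the average playout value plus an exploration bonus, which together form the upper-confidence bound:

```python
# Minimal sketch of the UCT move-selection rule: average value + exploration
# term from an upper confidence bound. Statistics below are made up.
import math

def uct_choose(children, total_visits, c=1.4):
    """children: list of (move, wins, visits); returns the move to try next."""
    def ucb(wins, visits):
        if visits == 0:
            return float("inf")              # always try unexplored moves once
        mean = wins / visits                 # average value from random playouts
        return mean + c * math.sqrt(math.log(total_visits) / visits)
    return max(children, key=lambda ch: ucb(ch[1], ch[2]))[0]

# Toy usage with hypothetical playout statistics for three candidate moves.
stats = [("D4", 18, 30), ("Q16", 25, 40), ("K10", 3, 5)]
print(uct_choose(stats, total_visits=75))
```

The exploration term shrinks as a move accumulates visits, so the search gradually concentrates playouts on the most promising branches while still occasionally sampling the others.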
Location: Soil Management Research

Title: Simulated impacts of crop residue removal and tillage on soil organic matter maintenance

Submitted to: Soil Science Society of America Journal
Publication Type: Peer Reviewed Journal
Publication Acceptance Date: April 13, 2013
Publication Date: July 1, 2013
Repository URL: http://handle.nal.usda.gov/10113/57059
Citation: Dalzell, B.J., Johnson, J.M., Tallaksen, J., Allan, D.L., Barbour, N.W. 2013. Simulated impacts of crop residue removal and tillage on soil organic matter maintenance. Soil Science Society of America Journal. 77(4):1349-1356.

Interpretive Summary: Stover, or residue, is the leaves, husks, cobs and stems remaining after corn grain harvest. This material can be sold as bioenergy feedstock. Harvesting residue exposes soil, making it more vulnerable to wind and water erosion. Soil organic matter imparts many characteristics that make soil productive; thus, it is important to maintain or increase its amount in soil. Every year a fraction of crop residue decomposes, and a portion of it becomes soil organic matter. Soil organic matter formation relies on the annual input of crop residue. Whether soil organic matter increases, decreases, or stays the same over time varies with tillage, the amount and type of residue returned, and soil and environmental factors. Measurable changes in soil organic matter happen slowly. Therefore, we used a model called CQESTR to predict changes in soil organic matter under different tillage and residue harvest scenarios in west-central Minnesota. The model predicted changes for both the top 12 inches and the top 24 inches. The study found that tilling with a moldboard plow or a chisel plow, even when returning all crop residues to the field, was not enough to maintain soil organic matter. Returning more residue is needed to maintain soil organic matter in the top 24 inches of soil. The model predicted a loss in soil organic matter in the top 24 inches even when it was maintained in the top 12 inches, unless the field was managed without tillage and with continuous corn. The model predicted that 3 tons per acre or more of dry residue must be returned annually or else soil organic matter in the top 24 inches would decline. The 2005-2011 average corn yield for Stevens County, which is in west-central Minnesota, was only 160 bushels per acre. In addition, almost all row crops are tilled. Thus, routine harvest of corn stover is not compatible with increasing soil organic matter. To achieve an increase in soil organic matter in the top 24 inches, we advise producing high-residue crops like corn coupled with no tillage. This information provides guidance to the bioenergy industry, producers, and the general public, including policy-makers, on the benefits and risks associated with plant-based energy. The study results suggest corn yields should routinely exceed 175 bushels per acre in a corn-soybean rotation with no-till before it is advisable to harvest corn stover.

Technical Abstract: Cellulosic biofuel production may generate new markets and additional revenue for farmers. However, residue removal may cause other environmental problems such as soil erosion and loss of soil organic matter (SOM). The objective of this study was to determine the amounts of residue necessary for SOM maintenance under different tillage and removal scenarios for corn-soybean (Z. mays - G. max, respectively) and continuous corn rotations for a site in west-central Minnesota.
We employed a process-based model (CQESTR) to evaluate current management practices and quantify SOM changes over time. Results showed that conventional tillage resulted in SOM loss, regardless of the amount of residue returned. Under no-till, the amount of residue was important in determining SOM accumulation or depletion. For the upper 30 cm of soil, average annual rates of 3,650 and 2,250 kg crop residue ha-1 yr-1 were sufficient to maintain SOM for corn-soybean and continuous corn rotations, respectively. Soil OM in soil layers below 30 cm was predicted to decrease in all scenarios as a result of low root inputs. When considered over the upper 60 cm (maximum soil depth sampled), only continuous corn with no-till was sufficient to maintain SOM. Results from this work are important because they show that, for the scenarios tested here, no-till management is necessary for SOM maintenance and that determining whether SOM is accumulating or depleting depends upon the soil depth considered. At current yields observed in this study area, only continuous corn with no-till may generate enough residue to maintain or increase SOM. [REAP publication].
Matthew Pizzolato, author

…of the West
by Matthew Pizzolato
Copyright © 2012 Matthew Pizzolato

Perhaps one of the biggest myths perpetrated by Hollywood is the role that women played during the days of the Old West. According to Western mythology there were basically two roles for women during the time period: the whore with the heart of gold and the schoolmarm. While both of those characters did exist, they have been so overdone in Western fiction that they have become cliché.

Women served in a multitude of roles that went against the social conventions of the day. Mary Fields, an ex-slave, drove a US mail coach. Martha Jane Canary, or Calamity Jane as she is more popularly known, was an Army scout. Charley Parkhurst dressed like a man and drove a stagecoach. Poker Alice Ivers was one of the most famous gamblers of the time. However, some women of the time resorted to full-scale outlawry.

Sally Skull

Although most of her life and death is shrouded in mystery, Sally became known as a ruthless killer who was a dead shot with the two pistols that she wore. She made her living as a horse trader and wasn't particular about how she acquired her livestock. It was said that she was so proficient with a bullwhip that she could snap the heads off of flowers.

Sally arrived in Texas with her family as one of the first settlers of Stephen F. Austin's colony. When the Civil War started, she became a Confederate blockade runner and hauled cotton to Mexico for shipment to Europe.

There are no known photographs of her, and the records of her life consist of marriage licenses and divorce decrees. Sally was married five times and is suspected of having killed at least one of her husbands. It is believed that Sally killed more than 30 men during her lifetime. There is no record of her death, but rumor has it that her last husband killed her and disposed of her body in Mexico.

Belle Starr (Myra Belle Shirley)

Belle Starr was born as Myra Belle Shirley into the life of a spoiled rich girl and received a classical education. Her life changed when the Missouri-Kansas border war broke out. After her brother was killed in 1864, her father moved the family to Scyene, Texas.

She married Jim Reed on November 1, 1866, and bore him two children. When Reed was killed in a gunfight with a member of his own gang in 1874, Belle left her children with her mother and rode the Outlaw Trail. She met a Cherokee named Sam Starr and settled on his place near Briartown, Oklahoma. From that base, the couple formed a gang and began rustling, stealing horses and bootlegging whiskey, with Belle calling the shots.

Belle became a target of the Hanging Judge, Isaac Parker, and was brought before his court several times, but she was usually released because of lack of evidence. Eventually, she was caught attempting to steal a horse and was sentenced to two consecutive six-month prison terms, but she returned to the outlaw life upon her release.

After Sam was killed, Belle married Jim July. The relationship was fraught with arguments. On February 3, 1889, Belle was shot and killed from ambush. The killer was never found. Suspects included her husband, a neighbor named Watson, as well as both her estranged daughter and son.

Pearl Hart

By the time she became the first known female stagecoach robber in Arizona history, Pearl Hart had already lived a hard life. At the age of seventeen, she married an abusive husband who gave her two children before leaving her. She left both of her children with her parents and went West.
She found survival difficult, suffered from depression, and attempted suicide several times. In 1899, Pearl met a miner named Joe Boot, with whom she decided to rob a stagecoach to raise money to visit her sick mother. On May 30, 1899, with Pearl dressed as a man, the couple stopped the coach between Florence and Globe, Arizona, taking about $450 and a revolver. Their escape attempt was unsuccessful and they got lost. After making camp for the night, the pair woke up to discover they were surrounded by a posse.

During her time in jail, she became known as the "Bandit Queen," often giving autographs. She escaped from the jail but was caught and returned, where she faced trial. She was sentenced to five years in Yuma Territorial Prison but was paroled after 18 months. Pearl tried to profit from her fame as a lady bandit but was unsuccessful. The circumstances of her death are unknown.

Flo Quick alias Tom King

Flora was born into a wealthy family and married Ora Mundis. The couple moved to Guthrie in Oklahoma Territory in 1892. She inherited the family fortune but quickly squandered it. When the money ran out, so did her husband.

Flora stole horses to make a living and began dressing like a man, using the name Tom King to confuse authorities. She met a fellow outlaw, Earnest "Killer" Lewis, and began robbing trains. Flora had no trouble supporting herself by horse stealing and often resorted to prostitution when necessary.

The circumstances of her death are not known, but her life is a constant source of speculation among historians to this day. Rumors persist that she may have been the sixth "man" of the Dalton raid on Coffeyville, that she may have been a sweetheart of Bob Dalton, even that her real name was Eugenia Moore, but none of this has been substantiated.

In conclusion, the historical record is full of women who broke social conventions and lived life how they saw fit, regardless of what Hollywood would like to portray. Although the times have changed since the days of the Old West, human nature remains the same.

by Matthew Pizzolato
Amazon ~ B&N ~ Goodreads
Excerpt plus an RTW interview with Matthew Pizzolato

Win a $20 Amazon Gift Certificate! What do you consider the best Western (either movie or novel) of all time? Why? Comment and you'll be entered to win the $20 Amazon Gift Certificate! Email address must be included to be eligible for the drawing (so we can contact you). Drawing will be held Saturday, June 30 at 9pm Pacific Time.
Combined-cycle natural gas generation is displacing King Coal. And although renewables promise a bright future, combined-cycle power plants (CCPP) are efficient, clean and inexpensive generating sources with the capacity to replace base-load generation at large scale.

The recent growth of U.S. shale gas production, along with pipeline network expansions, has lowered and stabilized natural gas prices to the point where they are competitive with coal generation. Combined-cycle plants are relatively inexpensive to build and can achieve thermodynamic efficiencies exceeding 60%. Additionally, their fast-start ramping capabilities enable hundreds of megawatts to hit the grid faster than other sources.
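The 60% figure comes from stacking the two cycles. As a rough illustration (round textbook numbers, not data for any particular plant), a gas turbine (GT) topping cycle plus a steam turbine (ST) bottoming cycle that recovers the GT exhaust heat gives:

    η_cc = η_GT + (1 − η_GT) × η_ST
         ≈ 0.40 + (0.60 × 0.35)
         ≈ 0.61, or about 61%

assuming a 40%-efficient gas turbine and a 35%-efficient steam cycle, and neglecting duct firing, auxiliary loads and other losses.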
The U.S. Energy Information Administration (EIA) projects that 2016's power generation from natural gas, for the first time ever, will surpass coal's share at 33% to 32%, respectively. Correspondingly, 2015 was the first year when domestic natural gas plant utilization exceeded that of coal, at a capacity factor of 56% versus 55%.

The initial wave of CCPP construction, which began in the 1990s, anticipated low generating costs through baseload operation. Instead, natural gas price volatility and electricity demand variation forced most of these plants to catch emerging power sale opportunities by cycling (meaning they were off at night and on weekends). In recent years, with gas prices stabilizing at around the $3 - $4/MMBtu range, many CCPPs are called on to follow load demand and even for baseload operation, as they originally were designed. At the same time, their associated heat recovery steam generators (HRSGs) have been used (and sometimes abused) to suit market needs.

The HRSG is the boiler placed after the gas turbine to absorb remaining hot exhaust gasses and produce steam to drive an additional turbine/generator set. It enables the added generation and efficiency made possible by combined cycles in the power plant.

Highly flexible operational practices - from periodic baseload operation to cycling the plant every day - take their toll on HRSG pressure parts. Most in current operation were not designed with the flexibility to withstand the stress levels caused by faster startups, low-load operations and repeated thermal cycling. And the stressors are intensified by today's larger, more efficient gas turbines. Located directly downstream of these turbines, HRSGs sustain greater thermal and mechanical stress from increased exhaust gas temperatures and pressure changes.

In addition to damages from failure mechanisms that have long plagued conventional boilers, HRSGs are also prone to design, construction, operation and water chemistry deficiencies. Pressure part failures of tubes, headers and connecting piping represent some of the greatest reliability threats. Of critical importance are regular internal inspections to proactively identify failure mechanisms and root causes so that forced outages can be mitigated. The following summarizes some of the most common types of HRSG system failure mechanisms and their causes.

HRSG tube failures (HTF) are the primary source of unavailability among combined cycles, causing an average of six forced outage events per unit-year as indicated by recent NERC GADS data. HTF repair events don't necessarily require lengthy outages, but they can have large effects on in-market availability and prove costly, especially for merchant generators.

- Thermal Fatigue is a common damage mechanism among superheaters and reheaters, caused by the thermal expansion and contraction of cyclic operation. Thermal fatigue occurs primarily at dissimilar metal welds and tube-to-header connections. Attemperator overspray and residual condensate among LP economizer sections cause steaming and quenching during startup and exacerbate thermal fatigue.
- Short-term Overheating results from exposures to highly elevated temperatures or when tubes are starved of flow as a result of some blockage. Improving temperature controls, preventing tube internal exfoliation or upgrading tube materials may mitigate overheating.
- Long-term Overheating (Creep) accumulates from temperature exposures in excess of design. Even minimal temperature exceedances accrue cumulatively and with time form microstructural creep voids that can shorten tube life. Improved temperature controls or upgraded tube materials are two strategies to mitigate creep.
- Creep Fatigue occurs due to the combined effect of overheating and cyclic stress. Often tough to diagnose, it typically initiates along the outside diameter of tubing in high-temperature locations. It can be mitigated by reducing cyclic loading and localized overheating.
- Bowing is a common failure mechanism caused by differential expansion, quenching and tube fabrication disparities. Reheater tubes in close proximity to attemperators or duct burners are especially susceptible to bowing. Some units experience tube bowing to the degree that crimping and local yielding result in premature tube failures.
- Acid Dewpoint Corrosion occurs when moisture in the gas turbine exhaust condenses as water vapor and sulfuric acid. The corrosive mix rests upon the tubes and wastes away metal over time. This mechanism may be avoided by upgrading tube materials or changing operation to increase tube temperatures.

Cycle chemistry (CC) influences approximately 70% of HRSG tube failures. Oxide growth and progressive deposition of water/steam impurities or oxide scale buildup contribute to a variety of damage mechanisms. Unfortunately, the system design or mode of operation increases susceptibility to tube internal deposition. The shutdowns and poor layup practices that come with cyclical operation introduce elevated temperatures, flow disruptions and contaminants. A suitable water treatment program helps to ensure feedwater quality keeps tube internal surfaces free of contamination and corrosion among all areas of the HRSG.

- Flow-Assisted Corrosion (FAC), a chemistry-related failure, causes 40% of all HRSG tube failures. It involves the single-phase (water only) and two-phase (water/steam) variations. FAC originates from the loss of protective metal oxides within the tubes, which enables wall loss. Proper boiler water chemistry is critical. In contrast to conventional boilers, FAC among HRSGs is found predominantly among tubes, headers and risers in low-pressure (LP) economizers and LP evaporators. External feedwater piping is also susceptible among HRSGs which take feed pump suction from the LP drum. The best approach to managing FAC metal loss is a combination of correct water chemistry control and regular assessment and trending of wall thicknesses among susceptible locations (most of which are internal to the HRSG box). Additionally, materials containing chromium are resistant to FAC; increasing numbers of utilities are simply upgrading to chromium-bearing materials to enact a permanent fix.
- Under-Deposit Corrosion (UDC) occurs exclusively among HP evaporator tubing. It encompasses several water chemistry-related failure mechanisms which commonly cause significant problems when not adequately controlled. A combination of deposited material and corrosion products adheres to the internal tube surface and wastes away tube material until eventual failure occurs. An understanding of these corrosion mechanisms is necessary to prohibit or reverse active corrosion. Appropriate cycle chemistry with negligible feedwater corrosion products and avoidance of localized elevated temperatures are the best defense against UDC.
- Under-Deposit Corrosion (UDC) occurs exclusively among HP evaporator tubing. It encompasses several water chemistry-related failure mechanisms which commonly cause significant problems when not adequately controlled. A combination of deposited material and corrosion products adhere to the internal tube surface and waste away tube material until eventual failure occurs. An understanding of these corrosion mechanisms is necessary to prohibit or reverse active corrosion. Appropriate cycle chemistry with negligible feedwater corrosion products and avoidance of localized elevated temperatures are the best defense against UDC. - Acid Phosphate Corrosion is defined by a combination of internal deposits and phosphate salts leading to UDC and eventual tube failure. Chemistry controls using mono- or di-sodium phosphate is problematic. - Caustic Gouging occurs when chemistry controls employ too highly concentrated caustic or caustic ingress occurs from the regeneration ion exchange process. The excess caustic dissolves the protective magnetite layer. The water in contact with iron attempts to restore this magnetite and traps the high caustic concentration. A continuous loss of metal ensues. - Hydrogen Damage refers to a combination of internal deposits and contaminant ingress or an acidic concentration. Chloride frequently enters the cycle through condenser leakage. - Pitting is characterized by localized corrosive metal loss illustrated by deep pits. The most common cause of pitting is poor drainage and layup between cycles. Oxygenated, stagnant water within numerous tube circuits is the usual culprit. It’s imperative during lengthy shutdowns that procedures are in place to drain and/or evacuate all water and protect the tube internals from any remaining moisture through dehumidification or nitrogen blanketing. - Corrosion Fatigue is a leading cause of failures among LP evaporators and economizers. It’s usually identified where expansion is restricted such as among tube to header welds. Groups of cracks appear on the internal surface in a position perpendicular to the major strain. Corrosion fatigue is an “on again, off again” mechanism that reemerges when oxide laden cracks are exposed to concurrent strain along with poor water chemistry. Grade 91 steel poses a particular problem. Containing 9% Chromium, it exhibits enhanced creep rupture, yield and ultimate tensile strengths in addition to toughness. Grade 91 enables elevated temperature operation, better lifetime performance and thinner materials in the design and manufacture of piping systems. During the boom years of new HRSG construction, Grade 91 materials were viewed as a panacea for cycling-related thermal fatigue. Unfortunately, many complications have emerged since then. Grade 91 is more sensitive to variations in metallurgy and heat treatment than traditional materials. In particular, its material integrity was frequently compromised from microstructural damages sustained during manufacture, erection or as a result of operational issues experienced among HRSGs. Premature cracking from creep degradation, especially among welds, is widespread. A majority of large HRSGs built from the 1990s have tubes, headers and high energy piping constructed from Grade 91. Failures among the larger components such as headers, major connecting piping, and steam piping frequently require lengthy outages with significant repair costs not to mention the loss of generation. 
The recommended course of action regarding P91 piping is two-fold: first, for repairs, pay close attention to any and all heat treatment activities so that the correct microstructures are developed and maintained; second, assess current risk through timely and aggressive piping and weld inspections to determine creep degradation and confirm material properties.

Problematic designs and operating constraints introduce additional causes of HRSG failures. As previously discussed, the HRSG is located downstream of the combustion turbine. The sporadic temperature and pressure swings of the exhaust gases, and the effect they have on tubes, are an obvious place to look. But where else?

- Water Hammer occurs in many HRSGs. Improper steam attemperator spray controls, premature valve actuation or inadequately designed condensate drain lines are often the culprits. Water hammer can be a destructive force, often taking a toll on piping supports and sometimes exacerbating piping failure mechanisms.
- Thermal Quenching-Induced Fracture ensues when significant "off-design" events occur, resulting in rapid thermal quenching and/or overloading failures, primarily at tube-to-header connections. Faulty control logic or damaging operational practices are often to blame. A root-cause failure analysis should be conducted following any such event to mitigate future occurrences.
- HRSG Economizers are vulnerable to a number of issues.
- Tube-to-Tube Stressors can arise from unbalanced economizer flow distribution. Upon initial startup, HP evaporators have yet to produce steam and do not require makeup water. Thus, with the feedwater control valve closed, no water enters the economizer. Without this needed heat transfer, economizer metal temperatures can reach flue gas temperatures of 500–600 °F.
- Cold surges in the economizer pose significant sources of stress. They create large temperature differences between contiguous tube sections: cooler tubes contract while the adjacent warm tubes do not. This phenomenon introduces stress risers, particularly in areas of tube geometrical change, such as at a weld. These events, although short-lived, shorten the economizer's remaining useful life with each occurrence.

HRSG reliability is maximized by understanding the conditions giving rise to potential damage mechanisms. HRSG design, along with its feedwater and attemperator control systems, water chemistry, and component materials, are variables critical to operational flexibility. By ensuring correct operational methods, such as controlled startups/shutdowns, the damaging thermal transients can be mitigated. Proactive, risk-based inspections of failure-vulnerable locations are critical to issue identification, necessary repair(s) and the prevention of unexpected outages.
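As a concrete illustration of the wall-thickness trending recommended above for FAC, the sketch below fits a linear wear rate to periodic ultrasonic thickness (UT) readings and projects the time remaining until a minimum allowable wall is reached. It is only a sketch: the survey values, the linear-loss assumption and the 0.120 in retirement limit are hypothetical, and a real assessment would use the plant's code-calculated minimum wall and an engineering review.

```python
from datetime import date

# Hypothetical UT survey history for one FAC-susceptible LP economizer location:
# (inspection date, measured wall thickness in inches)
readings = [
    (date(2015, 4, 1), 0.180),
    (date(2017, 4, 1), 0.171),
    (date(2019, 4, 1), 0.163),
    (date(2021, 4, 1), 0.154),
]

T_MIN = 0.120  # assumed minimum allowable wall; a code calculation would set this

def wear_rate(readings):
    """Least-squares linear wall-loss rate in inches per year."""
    xs = [(d - readings[0][0]).days / 365.25 for d, _ in readings]
    ys = [t for _, t in readings]
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) \
        / sum((x - x_bar) ** 2 for x in xs)
    return -slope  # thickness is decreasing, so report loss as a positive rate

def years_remaining(readings, t_min=T_MIN):
    """Years until the latest reading reaches t_min at the fitted rate."""
    rate = wear_rate(readings)
    current = readings[-1][1]
    return (current - t_min) / rate if rate > 0 else float("inf")

print(f"wear rate: {wear_rate(readings):.4f} in/yr")
print(f"estimated years to t_min: {years_remaining(readings):.1f}")
```

Locations with the shortest projections, or whose rates accelerate between surveys, would then be the natural priorities for re-inspection or for upgrade to chromium-bearing tubing.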
<urn:uuid:878f153b-009b-44a9-b293-ebaaf08dece1>
{ "date": "2017-05-23T14:37:01", "dump": "CC-MAIN-2017-22", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607647.16/warc/CC-MAIN-20170523143045-20170523163045-00388.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9209169745445251, "score": 3.140625, "token_count": 2553, "url": "http://insights.globalspec.com/article/2963/heat-recovery-steam-generators-vulnerable-to-failure" }
History of Shinto

Shinto is the indigenous spirituality and religion of the Japanese people. The earliest recorded instances of Shinto being practiced date back to ancient times, but the practice of Shinto as a formalized religion didn't really begin until the 5th or 6th century. The word "Shinto" is actually Chinese in origin, and it translates to "The Way of the Gods". Shinto started out amongst the Japanese as a set of ritualized practices and behaviors. The practice of Shinto – like many other religions – is highly focused on the physical aspects of its traditions. Shinto is also very animistic and includes the worship of nature and ancestors, along with a focus on personal purity.

"Kami" is the name given to the Japanese concept of supernatural spiritual essences. It is easy to conflate kami with the western ideas of gods and deities, but they are not the same. There is no direct translation into English for what the kami represent, but it is easiest to think of them as spiritual beings that reside inside every living thing. The existence of the kami can perhaps explain the animistic aspects of Shinto.

These ritual aspects become easier to grasp when we look at examples. Take sumo wrestling, arguably the most "Japanese" of all Japan's sports. Before the beginning of each match, salt is thrown in a wide arc in the ring, which symbolizes several things: salt acts as a purifier, and throwing the salt onto the earth acts as a gift to both the earth and the ancestors. Another Japanese custom is flower arranging. This hobby was often practiced by samurai as a way to counterbalance the violence required in their daily lives as warriors. The act of arranging flowers suggests a communal bond with nature, but it is also highly valued for its aesthetic qualities.

Like most religions, Shinto did not evolve in a vacuum; that is, Shinto was affected by other religions that came into being on the islands of Japan through cultural diffusion or military conquest. Buddhism, Confucianism and Taoism were all influential on Shinto. In a process called "syncretism," Shinto evolved to include some aspects of these other religions. For example, Buddhism – which originated on the Indian subcontinent – is very concerned with the afterlife. Buddhism has much to say on what happens to a person after they die, whereas original Shinto beliefs were relatively simple. When these two religions came into contact with each other, Buddhist ideas about the afterlife became intertwined with Shinto. In fact, Buddhism eventually overtook Shinto as the dominant religion in Japan, but the two religions influenced each other to such a degree that it is in many cases impossible to disentangle them. For example, in modern-day Japan, a birth is usually celebrated in a Shinto temple, while a funeral is most often conducted in the Buddhist tradition.

Aspects of Shinto

As mentioned above, Shinto is very concerned with purity. Unlike the Abrahamic religions – Islam, Christianity and Judaism – Shinto does not suggest that impurity is wrong, per se. Instead, impurity must be cleansed because it can be harmful or annoying. This is an important distinction, because impurity in Shinto does not always lead to a person feeling ashamed or being outcast. Purity in Shinto is referred to as "kiyome" and impurity as "kegare." Purity is achieved through the practice of certain rituals. These rituals can include things like using wands to purify people or hanging shimenawa (a rope made of rice straw) as a way to separate pure areas from impure ones.
Shinto practice takes place in shrines. The role of a shrine is two-fold: it acts as a quiet place of contemplation, and it serves as a sort of conduit or collector for the Japanese ancestor spirits. Since Shinto includes the worship of a person's ancestors, shrines are very important. It is here that pleas or requests are made, and where living family members come to give thanks to their ancestors.

Effect of Shinto on Japanese Culture

Shinto has had a huge effect on Japanese architecture. The Shinto aesthetic has created a very distinct style of Japanese building, which came about originally as a way to "entice" spirits. The Tea Room, for example, was originally designed to bring in a Tea Spirit. The visual aspect of this room can be found in many other Japanese buildings. The use of wood in Japanese buildings is tied to Shinto as well. Japan is well known for its vast forests and, throughout history, the use of wood as an almost sacred building material has given Japanese buildings a distinctive style.

The Japanese attitude towards food was also influenced by Shinto. Shinto maintains a high regard for life, suggesting that killing animals only be done when necessary. Therefore, food preparation is highly ritualized, with a focus on using lots of vegetables. This focus eventually spread to include other aspects of Japanese cuisine. Additionally, Japan is a country where, at least historically, livestock was difficult to come by. Japan has many areas that are very rocky, and so keeping animals "on the hoof" to eat them later was not practical. This is why so many Japanese dishes contain seafood.

Even though Buddhism is the "official" religion of Japan, remember that Buddhism, Confucianism and Taoism all melded – in various degrees – with the indigenous religion of Shinto. Shinto represents Japanese spirituality at its most distilled.
<urn:uuid:266fa296-ad02-49d0-a5c7-78085c5dc594>
{ "date": "2019-02-16T13:53:13", "dump": "CC-MAIN-2019-09", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247480472.38/warc/CC-MAIN-20190216125709-20190216151709-00296.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9717766642570496, "score": 4, "token_count": 1256, "url": "http://www.design-training.com/fashion-design/history-of-shinto.html" }
The www.dj5ar.de site is dedicated to a very special subject, where a very special vocabulary is used. Those who are not involved, but interested, will find some explanations on this page.

Amateur radio: As defined in the Radio Regulations of the ITU: "1.56 amateur service: A radiocommunication service for the purpose of selftraining, intercommunication and technical investigations carried out by amateurs, that is, by duly authorized persons interested in radio technique solely with a personal aim and without pecuniary interest." German law (Amateurfunkgesetz) defines it as follows: "§2.2 Amateurfunkdienst, ein Funkdienst, der von Funkamateuren untereinander, zu experimentellen und technisch-wissenschaftlichen Studien, zur eigenen Weiterbildung, zur Völkerverständigung und zur Unterstützung von Hilfsaktionen in Not- und Katastrophenfällen wahrgenommen wird; der Amateurfunkdienst schließt die Benutzung von Weltraumfunkstellen ein. …" [In English, roughly: a radio service carried out by radio amateurs among one another, for experimental and technical-scientific studies, for their own further education, for international understanding and in support of relief operations in emergencies and disasters; the amateur radio service includes the use of space stations.] In fact it is a playground for people interested in communication technologies, informatics or physics. It is not necessary to have a profession related to one of these subjects; some basic knowledge is enough to start.

Band: Definition of a frequency range. There are a lot of bands, spread all over the electromagnetic spectrum, for use by radio amateurs. Each band has its specific characteristics depending on the frequency range, i.e. the wavelength. At present I favor the 23 cm band. The frequency range is defined as 1240 to 1300 MHz. As the name says, the wavelength is only about 23 cm.

Beacon: Unmanned station, transmitting continuously on a fixed frequency, mostly from an exposed location. The observation of beacons is very useful for evaluating all kinds of radio wave propagation.

Bounce and scatter: In the way these verbs are used in amateur radio, the difference is not defined very sharply. Moon bounce is the synonym for contacts between stations performed by reflections off the moon, a hard body. Aircraft scatter could also be called aircraft bounce, because the signals are reflected by one or more aeroplanes. In rain scatter, tropo scatter or meteor scatter, more diffuse media like clouds or ionized meteor trails are used to evoke reflections.

Callsign: All radio amateurs worldwide need a license to run their station legally. Along with the license, a personal callsign is issued by the relevant authorities of the country's government. The first letters or numbers (prefix) of a callsign are related to the country. For example: DJ5AR is a German and EI8HH is an Irish callsign. These prefixes are widely used as synonyms for country names (EI instead of Ireland).

Codes: In historic times, when telegraphy was the only technology to transmit information in real time over long distances, time was a limiting factor. That might be unbelievable for "mobile phone kids", but it was a fact. Messages had to be kept short (glorious times) and every character cost money. So codes were invented, like the Q-groups, or just collections of abbreviations. These were still in use in marine communications (telegraphy!) until satellites became available to all ships in the last decades. Many people still use them when writing short messages (SMS) on the mobile phone or when performing amateur radio.

Contest: Whether contesting in amateur radio is a sport or not can be discussed. Sitting at a radio station and collecting contacts is at least as sporty as racing a car or playing chess, hi. (see glossary for "hi")
In fact participating in a contest can be a real skill. Hunting for special stations to collect points or multipliers, working as many as possible during the contest time, or gathering distances in kilometers can be hard work.

CQ: Code for a general call to other stations. Can be restricted by adding the prefix of a specific country or by "DX" for long-distance calls only.

CW: Telegraphy mode, abbreviation of "continuous wave". It is still very useful when dealing with very weak signals.

DX: Code for "long distance". ODX is the longest distance for a station worked or heard on a day, in a contest, on a band or at all.

EME or Earth-Moon-Earth: Radio contacts where the moon is used as a passive reflector.

GMT: Short for Greenwich Mean Time, which is the global time standard. When the sun crosses the meridian of 0° at the Royal Observatory in Greenwich, it is 12.00 o'clock GMT.

HI: Code in telegraphy for "I am amused!" In Morse code it sounds very funny indeed: "Didididit didit".

Inversion: Normally the temperature in the atmosphere decreases with altitude. Under special weather conditions, e.g. high pressure areas, warm and dry air can cover layers of cold and humid air close to the ground. Radio waves leaving the lower layer will be refracted.

ISS: International Space Station

Moonbounce: See EME.

NAC: Nordic Activity Contest.

ODX: See DX.

QSO: Code for a contact between two amateur radio stations. For a complete QSO, the callsigns of the involved stations, reception reports and final reception confirmations have to be exchanged. A synonym for this is to have "worked" some station.

QTH: Code for the location of a station. Can be given as the name of the place, town, etc., or in a locator grid system, developed by radio amateurs, based on geographical coordinates. The locator JN49CV defines my location in Mainz.

RIG: Code for equipment like transmitter, receiver, amplifiers or antennas.

Roger: Confirmation of reception. Transmission of either the word "roger" in voice modes or the code "R" in telegraphy or digital modes.

Scatter: See Bounce and Scatter.

SHF: Short for "super high frequency", the range from 3000 MHz (3 GHz) and up.

SSB: Voice mode, abbreviation of "single side band".

TNX: Code for "Thank you".

Tropo: Short for propagation of radio waves in the troposphere. The distance in this propagation mode is, depending on the wavelength, limited to some hundred kilometers. It can be enhanced to more than a thousand km by tropospheric ducting. This can happen under special weather conditions (see Inversion).

UHF: Short for "ultra high frequency", the range from 300 MHz to 3000 MHz.

UTC: Short for "Universal Time Coordinated" (see GMT).

VHF: Short for "very high frequency", the range from 30 MHz to 300 MHz.

To be continued…..
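As a footnote to the Band and QTH entries above: the Maidenhead locator encodes longitude and latitude in pairs of characters, and decoding it takes only a few lines. The sketch below follows the standard scheme (fields of 20° × 10°, squares of 2° × 1°, subsquares of 5' × 2.5') and returns the south-west corner of the subsquare; rounding to the subsquare center is a common variation. A one-line wavelength check for the 23 cm band is included as well.

```python
def locator_to_latlon(locator):
    """Decode a 6-character Maidenhead locator to the (lat, lon) of its
    south-west corner, in degrees."""
    loc = locator.strip().upper()
    # Field: 20 deg of longitude, 10 deg of latitude, starting at 180W / 90S
    lon = (ord(loc[0]) - ord('A')) * 20.0 - 180.0
    lat = (ord(loc[1]) - ord('A')) * 10.0 - 90.0
    # Square: 2 deg of longitude, 1 deg of latitude
    lon += int(loc[2]) * 2.0
    lat += int(loc[3]) * 1.0
    # Subsquare: 5 minutes of longitude, 2.5 minutes of latitude
    lon += (ord(loc[4]) - ord('A')) * 5.0 / 60.0
    lat += (ord(loc[5]) - ord('A')) * 2.5 / 60.0
    return lat, lon

# JN49CV, the Mainz locator given under QTH above:
print(locator_to_latlon("JN49CV"))  # roughly (49.875, 8.167)

# Wavelength check for the 23 cm band mentioned under Band:
C = 299_792_458.0      # speed of light, m/s
print(C / 1270e6)      # ~0.236 m at mid-band (1270 MHz)
```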
<urn:uuid:1fc7d44d-f849-4733-bdf1-efd090e23e4f>
{ "date": "2018-09-25T14:50:40", "dump": "CC-MAIN-2018-39", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267161661.80/warc/CC-MAIN-20180925143000-20180925163400-00016.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8736917972564697, "score": 2.75, "token_count": 1561, "url": "http://www.dj5ar.de/?page_id=746" }
Microchip Your Dog

Millions of dogs become lost or separated from their owners each year. Tragically, few are reunited with their owners. Many lost dogs end up in shelters where they are adopted out to new homes or even euthanized. It is important that your dog has identification at all times. Collars and tags are essential, but they can fall off or become damaged. Technology has made it possible to equip your pet with a microchip for permanent identification.

How it Works

A microchip is about the size of a grain of rice. It consists of a tiny computer chip housed in a type of glass made to be compatible with living tissue. The microchip is implanted between the dog's shoulder blades under the skin with a needle and special syringe. The process is similar to getting a shot. Little to no pain is experienced – most dogs do not seem to even feel it being implanted. Once in place, the microchip can be detected immediately with a handheld device that uses radio waves to read the chip. This device scans the microchip and then displays a unique alphanumeric code. Once the microchip is placed, the dog must be registered with the microchip company, usually for a one-time fee. Then, the dog can be traced back to the owner if found.

Microchips are not tracking systems and are only effective when someone uses a scanner to read the microchip information. The microchip number has to be phoned in to a central call center to locate the current owner of the dog. IF YOUR INFORMATION IS NOT CURRENT, you may never get a phone call or letter about your lost dog. It is imperative that your dog's microchip registration be kept correct, with the most current information to contact you should your dog be lost. Over 75% of the bulldogs that come into rescue already carrying microchips are not registered to any owner, and the shelters could not return those dogs home for that reason.

Things You Should Know

- Microchips are designed to last for the life of a dog. They do not need to be charged or replaced.
- Some microchips have been known to migrate from the area between the shoulder blades, but the instructions for scanning emphasize the need to scan the dog's entire body.
- A microchipped dog can be easily identified if found by a shelter or veterinary office in possession of a scanner. However, some shelters and veterinary offices do not have scanners.
- Depending on the brand of microchip and the year it was implanted, even so-called universal scanners may not be able to detect the microchip.
- Microchip manufacturers, veterinarians and animal shelters have been working on solutions to the imperfections, and technology continues to improve over time.

No single method of identification is perfect. The best thing you can do to protect your dog is to be a responsible owner. Keep current identification tags on your dog at all times, consider microchipping as reinforcement, and never allow your dog to roam free. If your dog does become lost, more identification can increase the odds of finding your beloved companion.
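The scan-then-lookup flow described under "How it Works" is easy to picture as data: the chip stores nothing but its code, and everything else lives in the registry record the call center searches. The sketch below models that flow; the chip code, registry entry and staleness check are all invented for illustration and do not correspond to any real registry's interface.

```python
from datetime import date

# Hypothetical registry: chip code -> owner contact record
registry = {
    "985112003456789": {
        "owner": "J. Smith",
        "phone": "555-0142",
        "last_verified": date(2012, 3, 1),  # stale contact info
    },
}

def look_up(chip_code, today=date(2013, 6, 1)):
    """Mimic a call-center lookup: find the record, flag stale contacts."""
    record = registry.get(chip_code)
    if record is None:
        return "Chip found but not registered - cannot contact an owner."
    age_years = (today - record["last_verified"]).days / 365.25
    if age_years > 1:
        return (f"Owner {record['owner']} found, but contact info is "
                f"{age_years:.1f} years old and may be unreachable.")
    return f"Call {record['owner']} at {record['phone']}."

print(look_up("985112003456789"))
print(look_up("000000000000000"))  # an unregistered chip
```

The two printed cases correspond to the two failure modes the article warns about: a chip that was never registered, and a registration left to go stale.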
<urn:uuid:4c04a464-0832-42e7-9e3d-a7b05a5ef068>
{ "date": "2013-12-06T07:07:58", "dump": "CC-MAIN-2013-48", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163049948/warc/CC-MAIN-20131204131729-00002-ip-10-33-133-15.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9461832642555237, "score": 2.734375, "token_count": 647, "url": "http://www.socalbulldogrescue.org/useful-resources/microchip-your-dog/" }
Last summer, history and government major Shannon Welch ’14 was an intern at the National Archives in Washington, D.C. She was paging documents at the Center for Legislative Archives when she stumbled across a little known and disturbing proposed constitutional amendment on the books in her home state of Maryland. “I came upon this 13th amendment that was making slavery institutionalized for the rest of time,” she said. “The federal government could never touch it. Then I found a document that Maryland had ratified it, and I was shocked. They let me keep researching, and I found out that Maryland had never rescinded this amendment, while other states had.” The amendment had been ratified by the state’s general assembly on Jan. 10, 1862, not long after the start of the Civil War when the union was in a state of disarray. When the final version of the 13th amendment abolishing slavery was enacted in 1865, many had forgotten or were unaware of the obsolete, so-called “shadow” version, which stated: No amendment shall be made to the Constitution which will authorize or give to Congress the power to abolish or interfere, within any State, with the domestic institutions thereof, including that of persons held to labor or service by the laws of said State. “You had two countries with two separate congresses pretending like they’re representing the whole country,” Welch said. “It was such a chaotic time period with so many constitutional loopholes that I think it just never got attention.” Even Abraham Lincoln himself had endorsed the document, a political concession he found it necessary to make at the time. (It wasn’t considered to be in support of slavery, per se, but an encouragement for states to govern themselves — an awkward attempt to keep a fragile nation unified.) “It was actually brought up in 1963 in the Georgia state legislature — to pass it,” Welch said. “In 1963… as part of a states’ rights push. While it seems like such a dusty old piece of paper, it definitely has relevance. Especially since being from Maryland you never really learn about how we were a slave state. We weren’t necessarily the most loyal member of the union. You kind of just learn about the shinier, prettier parts of history.” Welch decided to bring the oversight to the attention of the current Maryland State Legislature by emailing every delegate on its website. “Even though it’s symbolic, I think it’s very important to recognize this as part of state history,” she wrote. There was only one response — and it was from state Sen. Brian Frosh (D), a Wesleyan alumnus from the class of ’68. That was last July. A few weeks ago, she heard from Frosh again, and he told her it would be the first bill up for vote in the upcoming session. There was a public hearing with testimony about it on Thursday, Jan. 30. “I doubt it will receive any real pushback,” Welch said. “It should pass pretty soon.” Welch’s findings have garnered some attention from the press (notably, The Baltimore Sun and The Washington Post), but that doesn’t mean she’s changing her plans to attend law school next year. She considers her interest in the Civil War more of a hobby than a focus of her life’s work. Her senior thesis is on an unrelated topic too, an examination on Native American conversions by Jesuit priests and puritan missionaries in Maine in the late 1600s. “I like these moments of cloudy history,” she said.
<urn:uuid:db562a47-aad4-4ed8-9aa4-c5e78a0ea12f>
{ "date": "2015-03-06T12:33:42", "dump": "CC-MAIN-2015-11", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936468546.71/warc/CC-MAIN-20150226074108-00082-ip-10-28-5-156.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.978572428226471, "score": 3.40625, "token_count": 778, "url": "http://newsletter.blogs.wesleyan.edu/2014/02/12/welchamendment/" }
Gout is a disease that results from an overload of uric acid in the body. This overload of uric acid leads to the formation of tiny crystals of urate that deposit in tissues of the body, especially the joints. When crystals form in the joints, they cause recurring attacks of joint inflammation (arthritis). Gout is considered a chronic and progressive disease. Chronic gout can also lead to deposits of hard lumps of uric acid in the tissues, particularly in and around the joints, and may cause joint destruction, decreased kidney function, and kidney stones (nephrolithiasis).

A common form of arthritis called gout is caused by high levels of uric acid in the bloodstream. To prevent gout, those susceptible to these attacks need to know how to lower their uric acid levels. The easiest and healthiest way to lower uric acid is through proper eating habits and medication. This includes limiting alcohol and avoiding purine-rich foods, which are converted to uric acid. Read on to learn how to lower uric acid to prevent gout.

Gout has the unique distinction of being one of the most frequently recorded medical illnesses throughout history. It is often related to an inherited abnormality in the body's ability to process uric acid. Uric acid is a breakdown product of purines that are part of many foods we eat. An abnormality in handling uric acid can cause attacks of painful arthritis (gout attack), kidney stones, and blockage of the kidney-filtering tubules with uric acid crystals, leading to kidney failure. On the other hand, some people may only develop elevated blood uric acid levels (hyperuricemia) without having manifestations of gout, such as arthritis or kidney problems.

In the United States, the state of elevated uric acid levels in the blood without symptoms is referred to as asymptomatic hyperuricemia. Asymptomatic hyperuricemia is considered a precursor state to the development of gout. The term gout refers to the disease that is caused by an overload of uric acid in the body, resulting in painful arthritic attacks and deposits of lumps of uric acid crystals in body tissues.

Gouty arthritis is typically an extremely painful attack with a rapid onset of joint inflammation. The joint inflammation is precipitated by deposits of uric acid crystals in the joint fluid (synovial fluid) and joint lining (synovial lining). Intense joint inflammation occurs as the immune system reacts, causing white blood cells to engulf the uric acid crystals; chemical messengers of inflammation are released, leading to pain, heat, and redness of the joint tissues. As gout progresses, the attacks of gouty arthritis typically occur more frequently and often in additional joints.
<urn:uuid:bfccb396-d3fc-4f95-b541-587f1ae0ab89>
{ "date": "2017-08-20T17:06:24", "dump": "CC-MAIN-2017-34", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106865.74/warc/CC-MAIN-20170820170023-20170820190023-00616.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9511663317680359, "score": 3.46875, "token_count": 571, "url": "http://uric-acid-gout.blogspot.com/2011/12/gout-and-hyperuricemia-tratament.html" }
A unique fish ladder is helping species at risk reclaim their habitat

[Image: The lake sturgeon, American eel, and copper redhorse are three species present in the Richelieu River. © P. Leduc / Parks Canada]

The Copper Redhorse (Moxostoma hubbsi) is running up the Richelieu River again. This is a major victory for the unusual coppery coloured fish species, which is found only in southwestern Quebec. A dam built in 1967 hindered this endangered species' migration to its most important spawning area upstream from the Canal-de-Saint-Ours National Historic Site of Canada. Now a fish ladder of unique design is brightening the future for the Copper Redhorse.

[Image: Canal-de-Saint-Ours National Historic Site of Canada. © Parks Canada]

Like the Copper Redhorse, other denizens of the Richelieu had long been stranded below the dam. Up to 60 fish species use the river, including several species at risk. The Lake Sturgeon and American Shad, among other species, could no longer follow their migratory paths, nor could the American Eel, which had supported an important local commercial fishery before the dam was built.

Building a fish ladder, step by step

Solving the problem required diligent research and cooperation from several federal and provincial agencies and conservation groups. Parks Canada, as manager of the historic site, was responsible for species at risk on the site. Fisheries and Oceans Canada, the Ministère de l'Agriculture, des Pêcheries et de l'Alimentation and the Ministère des Ressources naturelles et de la Faune du Québec had both fisheries and species at risk responsibilities. Transport Canada also had to be involved, as it had built the dam.

"It was a financial challenge, a partnership challenge and an engineering challenge," says Quebec Service Centre Species at Risk Coordinator Sylvain Paradis.

[Image: The copper redhorse is found only in southwestern Quebec. © N. Vachon]

It all came together in 2001. Parks built a unique fishway with funding from many sources, including proceeds from Rescousse beer, a beer created to support the recovery of species at risk. They named it the Vianney-Legendre fishway, after the ichthyologist who had first officially described the Copper Redhorse in the 1950s.

Different fish – different needs

[Image: The Vianney-Legendre fish ladder has a unique design that accommodates multiple species. © Parks Canada]

The Richelieu fishway is no ordinary fish ladder. Experts agreed that the fishway should serve multiple species, particularly at-risk species. Fishways designed for a single species such as salmon are fairly common, but multi-species designs are rare and quite complex. The design had to consider the needs of species that vary widely in size and behaviour. Specific hydraulic conditions to help them find the fishway entrance and allow them to swim up the ladder were created. Different parameters such as water level fluctuations, water flow, and migration dates also had to be considered. The American Eel has such particular needs that the fishway designers made a whole separate structure tailored to eels at the side of the main fishway.

A versatile system

To save money, the fishway is designed as a compact S-shaped structure, but the complex structure still cost some $2.5 million to build. To suit a variety of water conditions and fish of different sizes, the fishway has two different entrances at or below the water surface. Also, dam operators can modify the direction of the river flow to improve the fishway's effectiveness in attracting fish.
Experts from as far away as France advised on the design, as did biologists and engineers, who then carefully monitored its operation. There were no guarantees it would work. Over an eight-year period, the efficiency of the fish ladder was studied using different testing methods. The results have been impressive. Of the 60 species historically known to use the Richelieu River, researchers have found 36 using the fishway so far. This includes four of the five at-risk species initially targeted, bringing hope for their populations' recovery.

[Image: Profits from the Rescousse beer supported this project. © Rescousse]

The Vianney-Legendre fishway shows that an historic site can be more than a site of heritage significance. It can protect biodiversity and help recover species at risk. The opportunity to observe the numerous fish species swimming up the fishway through the viewing window during the spring upstream migration, as well as the installation of information panels, should raise interest in species at risk in the Richelieu. Such a successful environmental engineering project could create a tourist attraction and be useful in educating the public about this biodiversity restoration project. In fact, the fish ladder has even gained international attention. Although it has been designed uniquely for this site, other jurisdictions are interested in it as a model. "Enquiries have been coming from all over the world," says Sylvain Paradis.
<urn:uuid:2d782177-e194-45e0-b110-e658c4e443fb>
{ "date": "2014-11-01T09:49:48", "dump": "CC-MAIN-2014-42", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637905189.48/warc/CC-MAIN-20141030025825-00083-ip-10-16-133-185.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9477971196174622, "score": 3.25, "token_count": 1105, "url": "http://www.pc.gc.ca/nature/eep-sar/itm11/itm11n.aspx" }
The Side of Dentist Fear and Anxiety

If you're one of the people who get anxious just sitting in a dentist's chair or seeing anything associated with a dentist, then you're not alone. Many people suffer from dental fear or phobia, which makes them skip treatment altogether. People suffering from this prefer not to get dental treatment despite the toothaches and other teeth woes. This is where sedation dentistry comes in, a procedure that helps individuals be more at ease. This isn't to say that it will make the anxiety go away. After all, people differ in how they deal with their fears. Some patients take a long time to be comfortable in a dentist's chair. Oral sedation can also be used for teeth cleaning and almost any other procedure.

How Dental Sedation Works

As the name states, dental sedation is the process of using medication to help patients relax during a dental procedure. It is often referred to as "sleep dentistry", but most of the time patients are actually half awake; only those who undergo general anesthesia are fully unconscious.

The Levels of Dental Sedation Procedure

The sedation process comprises different levels:

Minimal sedation: The patient is awake but relaxed.
Moderate sedation or conscious sedation: Patients have slurred speech and may not remember much of the procedure.
Deep sedation: Patients are on the edge of consciousness but can still be awakened.
General Anesthesia: The patient is completely unconscious.

Who are the Best Candidates for Dental Sedation Treatment?

This particular dental treatment is most apt for those who have real fear or anxiety that prevents them from having dental procedures done. It may also be apt for those who have a low pain threshold, those with sensitive teeth, people with a bad gag reflex, individuals who can't sit still, and those who require a large amount of dental treatment. There are some cases when children are sedated, such as those who refuse to cooperate during their dental visit.

How Safe is Sedation Dentistry?

Sedation dental treatment here in our facility is safe, as we ensure that we have dentists who are experienced in performing this type of treatment. To further ensure the safety of our patients, each patient who will undergo the treatment has their health properly and thoroughly assessed. For example, obese individuals or those who have obstructive sleep apnea should have clearance from their doctors before they undergo treatment. This is because people with such conditions are more likely to develop complications from this type of treatment.

Here at Advanced Dental Concepts we go over your medical history and ensure that you are a good candidate for dental sedation. We will also ask you about the current medications that you're taking. Our team of dentists will also explain the sedative dose appropriate for your age and health. We will also detail to you the risks of the procedure. During the treatment, you can rest assured that your vital signs will be closely monitored following the American Dental Association's guidelines. Here at our dental facility, we will help you feel at ease and give you the proper dental treatment for healthier oral health.

Getting Oral Sedation at Our Dental Center

Advanced Dental Concepts understands that receiving dental care can be scary for many people, especially for kids, which is why we provide dental conscious sedation. Dental conscious sedation is the perfect solution for those with dental anxieties who require dental care such as urgent tooth decay treatment for local residents.
- Here at Advanced Dental Concepts in Hammond, you are guaranteed safe sedation dentistry
- Our team of dentists will ensure that the process goes as smoothly as possible
- We will help you achieve better oral health and a better smile here at our dental facility

With dental conscious sedation, every visit to the dentist can be relaxing and comfortable, no matter what! We also offer Hammond wisdom teeth extraction, and our dentist is likely to recommend sedation for this particular procedure. You can visit or schedule a free consultation with a highly recommended Hammond general dentist.

There are a variety of types of sedation available, including:

- Oral dental conscious sedation
- Nitrous oxide (laughing gas)

Before deciding which type of sedation is right for you, please consult with Dr. Neil Oza or Dr. Cecilia Luong. They'll be able to explain the benefits of each type of sedation.

To learn more about sedation dentistry and other dental services such as cosmetic dental services and pediatric dental treatments for our patients in the city, please call us at 985-240-5445.

Driving Directions – Conveniently Located

Address: 1607 Martens Dr. Hammond, LA 70401

Monday & Wednesday – 8am-5pm
Tuesday & Thursday – 7am-3pm

Advanced Dental Concepts is located on Martens Drive in Hammond. It is 8 minutes away from Zemurray Park and 9 minutes away from Safari Quest Family Fun Center.

Directions from Zemurray Park to Advanced Dental Concepts — Take SW Railroad Ave and W Thomas St to Linden St. Take Blackburn Rd and Pecan St to Martens Drive in 5.

Directions from Safari Quest Family Fun Center — Head east on Hewitt Road toward SW Railroad Ave. Turn left onto SW Railroad Ave and then continue onto Carter Ln. Use any lane to turn right onto N Oak St and then turn left onto W University Ave. Finally, turn left onto Martens Drive.

About Hammond, LA

There are so many things to discover in Hammond. Whether you're visiting or not, you will be amazed by its rich culture. Take in the beautiful surroundings, stroll through the wide green spaces in the area, and delight yourself with delicious Louisiana cuisine. The city has plenty of dining places to offer. You can also check out the museums if you want to dive deeper into the culture. There are so many things to see in Hammond that are truly captivating.

Helpful Hammond Resources

Sedation Dentistry in Hammond:
<urn:uuid:78768932-18fe-4557-9227-65563b98607e>
{ "date": "2018-11-22T11:02:08", "dump": "CC-MAIN-2018-47", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746205.96/warc/CC-MAIN-20181122101520-20181122123520-00256.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9331875443458557, "score": 2.578125, "token_count": 1259, "url": "https://www.advanceddentalconcept.com/sedation-dentistry/" }
In the early 1940s when Duncan wrote "An African Elegy," a group of poets and critics, who came to be known as the New Critics, helped to determine what kind of poetry would be published and read in the coming decades. Writers associated with this trend in criticism include Allen Tate, R. P. Blackmur, Cleanth Brooks, William K. Wimsatt, and John Crowe Ransom, who edited The Kenyon Review and whose book The New Criticism (1941) gave the group its name. The members of the New Critics, who were mostly southerners and politically conservative, held formalist views of literature and argued that poems and stories be considered for their inherent value. This meant that literary works should be regarded as self-contained objects, separate from the traditions, histories, and authors that helped to produce them. Though they never established a doctrine as such, New Critics introduced critical principles and terms into the study of literature that remain today. It is ironic that Ransom rejected "An African Elegy" after reading Duncan's essay on homosexuals in society, for it shows that Ransom did not practice what he preached. By 1959, when Duncan finally published the poem, New Criticism had become entrenched in English departments throughout the United States and helped form the theoretical background against which millions of students would come to learn literature. At about the same time, in Asheville, North Carolina, a...

"An African Elegy" uses symbolic imagery to carry the emotional weight of the poem. Some of Duncan's primary symbols include the Congo, Africa and African nature, African Negroes, blood, and dogs. These images represent a complex of ideas including the unconscious elements of human desire, the ubiquity and reality of death, and the tenuousness of human identity and of life. In the West, Africa has often been used by writers as a symbol of human beings' baser instincts and desires. Joseph Conrad's novel Heart of Darkness, which presents the Congo as a place of violence, ignorance, and barbarity, is one such example. Many of Duncan's images, however, are obscure and sometimes inaccessible to beginning readers of poetry. He attempts to use them as pointers to a deeper, more complex reality than that which human beings experience. That reality can only be expressed in images. Although the poem is called an elegy, its tone shifts between celebration and lament, sometimes approaching a kind of self-destructive ecstasy. The first stanza prepares the reader for this vacillation as it begins with the statement, "No greater marvelous / know I than the mind's / natural jungle," and then shifts to a description of the ominous nature of Death's sounds. Duncan's archaic spelling, sometimes using t instead of -ed endings for the past tense (e.g., "stopt" instead of "stopped"), his...

Compare and Contrast

- 1958: Patrice Lumumba founds the Mouvement National Congolais (MNC), which becomes the most dominant political party of the Democratic Republic of Congo. 1960–1965: Political turmoil engulfs the Democratic Republic of Congo. Lumumba is assassinated by forces loyal to Colonel Mobutu Sese Seko, who eventually takes over the government in 1965. 1971: Seko renames the country the Republic of Zaire and asks Zairean citizens to change their names to African names.
1997: Seko is overthrown by Laurent Kabila and Rwandan-backed rebels, who "re-rename" the country the Democratic Republic of Congo. 2000: Political unrest continues in the Democratic Republic of Congo.
- 1956: Allen Ginsberg's poem "Howl" is published and embraced by the counterculture. In the poem, Ginsberg calls for America to wake up from its middle-class, sterile slumber that crushes the human soul and to end the "human war" on its own people. 1997: Ginsberg dies at 70. The Beat culture, for which Ginsberg was a central figure, is a historical curiosity and has been reduced to slogans and symbols used in advertising campaigns.

Topics for Further Study

After researching the basic beliefs of Theosophy, give a report to your class outlining them. Are there connections you can draw between any of these beliefs and Duncan's poem?

Keep a dream diary for one month, writing down as much and as many of your dreams as you can remember. Then catalog all of the images and stories. Do certain images or stories recur? What do these images and stories tell you about that month in your life?

Write a poem or story about the creation of the universe using symbols that are personally meaningful to you. Do not worry if these symbols will be accessible to others. Then write a short essay describing why you chose those particular symbols.

Research the use of magic by the Swahili. Do you see any similarities with the rituals Duncan describes in his poem?

Modern American Poetry sponsors a Robert Duncan web site at http://www.english.uiuc.edu/maps/poets/a_f/duncan/duncan.htm (last accessed April 2001).

Kent State University lists a bibliography of Duncan's work in its special collection at http://www.library.kent.edu/speccoll/literature/poetry/duncan.html (last accessed April 2001).

The Theosophical University Press has a glossary of theosophical terms available online at http://www.theosociety.org/pasadena/etgloss/mi-mo.htm (last accessed April 2001).

The Academy of American Poets offers a 1969 audiocassette of Duncan reading from The Opening of the Field, Roots and Branches, and Bending the Bow.

What Do I Read Next?

Robert Bertholf edited a collection of thirty-five letters between Duncan and the poet H. D. in 1991, titled A Great Admiration: H. D. / Robert Duncan Correspondence 1950–1961. Duncan and H. D. admired each other's poetry intensely.

Ekbert Faas' biography of Duncan, Young Robert Duncan: Portrait of the Poet As Homosexual in Society, provides a detailed biography of the poet through 1950.

Black Sparrow Press published Robert J. Bertholf's Robert Duncan: A Descriptive Bibliography in 1986. The book is difficult to obtain but contains an exhaustive and useful collection of secondary sources on Duncan.

Critics generally agree that Duncan's 1960 collection The Opening of the Field begins the poet's mature phase of work. This collection contains what is perhaps Duncan's best-known poem, "Often I Am Permitted to Return to a Meadow."

Ian Reid and Robert Bertholf edited a collection of essays and tributes to Duncan in 1979. Robert Duncan: Scales of the Marvelous contains essays by Denise Levertov, Michael Davidson, Thom Gunn, and Don Byrd.

Duncan was a fierce and outspoken opponent of the war in Vietnam. James Mersmann's 1974 Out of the Vietnam Vortex: A Study of Poets and Poetry examines Duncan's poetry and life in light of the poet's commitment to the idea of community.
Sherman Paul's The Lost America of Love: Rereading Robert...

Bibliography and Further Reading

Bertholf, Robert J., ed., A Great Admiration: H. D. / Robert Duncan Correspondence 1950–1961, Lapis Press, 1991.

Bertholf, Robert J., and Ian W. Reid, eds., Robert Duncan: Scales of the Marvelous, New Directions, 1979.

Cirlot, J. E., A Dictionary of Symbols, Philosophical Library, 1971.

Dickey, James, Babel to Byzantium: Poets and Poetry Now, Straus & Giroux, 1968, pp. 173–77.

Duncan, Robert, "Pages from a Notebook," in The New American Poetry, edited by Donald M. Allen, Grove Press, 1960, pp. 400–07.

———, Selected Poems, edited by Robert J. Bertholf, New Directions, 1993.

———, The Years as Catches: First Poems, 1939–1946, Oyez, 1966.

Ellingham, Lewis, Poet Be Like God: Jack Spicer and the San Francisco Renaissance, University Press of New England, 1988.

Faas, Ekbert, Towards a New American Poetics: Essays and Interviews: Charles Olson, Robert Duncan, Gary Snyder, Robert Creeley, Robert Bly, Allen Ginsberg, Black Sparrow Press, 1978.

———, Young Robert Duncan: Portrait of the Poet As Homosexual in Society, Black Sparrow Press, 1983.

Foster, Edward Halsey, Understanding the Black Mountain Poets, University of South Carolina Press, 1984.

Johnson, Mark Andrew, Robert Duncan, Twayne Publishers, 1988....
<urn:uuid:fb1cec53-f1e1-456b-a80d-50ab0a212cc9>
{ "date": "2013-12-11T09:38:07", "dump": "CC-MAIN-2013-48", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164034245/warc/CC-MAIN-20131204133354-00002-ip-10-33-133-15.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.905484676361084, "score": 2.890625, "token_count": 2007, "url": "http://www.enotes.com/topics/african-elegy/in-depth" }
Geography of Vatican City

The geography of Vatican City is unique due to the country's position as an urban, landlocked enclave of Rome, Italy. With an area of 0.17 sq mi (0.44 km2), it is the world's smallest independent state. Outside the Vatican City, thirteen buildings in Rome and Castel Gandolfo (the pope's summer residence) enjoy extraterritorial rights. (One building, the Paul VI Audience Hall, straddles the border, but its Italian portion has extraterritorial rights.) The country contains no major natural resources, and no known natural hazards other than those that affect Rome in general, such as earthquakes. The city state has the same climate as Rome: temperate, mild, rainy winters (September to mid-May) with hot, dry summers (May to September).

Vatican City sits on a low hill. The hill has been called the Vatican Hill (in Latin, Mons Vaticanus) since long before Christianity existed. An Etruscan settlement, possibly called Vatica or Vaticum, may have existed in the area generally known by the ancient Romans as "Vatican territory" (vaticanus ager), but if so no archaeological trace of it has been discovered.

Extreme points

- North: at the intersection of the Viale Vaticano and the Via Leone IV
- South: at the intersection of the Via della Stazione Vaticana and the Via di Porta Cavalleggeri
- West: at the intersection of the Viale Vaticano and the Via Aurelia
- East: easternmost edge of Saint Peter's Square

The lowest point in Vatican City is an unnamed location at 63 feet (19.2 m). The highest point is another unnamed location at 250 feet (76.2 m). The tallest building is St. Peter's Basilica, at 452 feet (138 m).

Land use

The nature of the estate is fundamentally urban and none of the land is reserved for significant agriculture or other exploitation of natural resources. The city state displays an impressive degree of land economy, born of necessity due to its extremely limited territory. Thus, the urban development (i.e., buildings) is optimized to occupy less than 50% of the total area, while the rest is reserved for open space, including the Vatican Gardens. The territory holds many diverse structures that help provide autonomy for the sovereign state, including a rail line and train station, heliport, post office, radio station (with extraterritorial antennas in Italy), military barracks, government palaces and offices, public plaza, part of an audience hall, old defensive wall marking the border, institutions of higher learning, cultural/art centers, and a few embassies.

In July 2007, the Vatican accepted an offer that would make it the only carbon neutral state for the year, due to the donation of the Vatican Climate Forest in Hungary. The forest was to be sized to offset the year's carbon dioxide emissions.

International agreements

- Party to: Ozone Layer Protection
- Signed, but not ratified: Air Pollution, Environmental Modification

See also

- This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). Encyclopædia Britannica (11th ed.). Cambridge University Press.
- This article incorporates public domain material from websites or documents of the CIA World Factbook.
<urn:uuid:df4a9f7b-2ba2-4968-b652-b7bc9402b0fb>
{ "date": "2013-05-23T04:50:40", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.904634416103363, "score": 3.5625, "token_count": 721, "url": "http://en.wikipedia.org/wiki/Geography_of_the_Vatican_City" }
[Image: An artist's impression of a black hole like the one weighed in this work, sitting in the core of a disk galaxy. The black hole in NGC 4526 weighs 450,000,000 times as much as our own Sun.]

NASA will reveal new findings about black holes during a news conference Wednesday (Feb. 27). The news conference, which starts at 1 p.m. EST (1800 GMT) Wednesday, will relay results based primarily on observations made by two X-ray space telescopes: NASA's Nuclear Spectroscopic Telescope Array (NuSTAR) and the European Space Agency's XMM-Newton observatory, NASA officials said.

For the rest of the story: http://www.livescience.com/27437-black-hole-discovery-nasa-nustar-telescope.html
<urn:uuid:04be9e51-b5b8-4f7c-9c18-5d8a05ed71b4>
{ "date": "2018-02-19T17:41:43", "dump": "CC-MAIN-2018-09", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812758.43/warc/CC-MAIN-20180219171550-20180219191550-00256.warc.gz", "int_score": 3, "language": "en", "language_score": 0.850347101688385, "score": 3.375, "token_count": 175, "url": "http://veritasradio.blogspot.com/2013/02/nasa-to-unveil-black-hole-discovery.html" }
Chemistry – structure of an atom

The protons and neutrons are found in the nucleus at the centre of the atom. The nucleus is very much smaller than the atom as a whole. The electrons are arranged in energy levels around the nucleus. The table shows the properties of these three sub-atomic particles:

Particle | Relative mass | Relative charge
Proton | 1 | +1
Neutron | 1 | 0
Electron | Almost zero | –1

The number of electrons in an atom is always the same as the number of protons, so atoms are electrically neutral overall. Atoms can lose or gain electrons. When they do, they form charged particles called ions:
- if an atom loses one or more electrons, it becomes a positively charged ion
- if an atom gains one or more electrons, it becomes a negatively charged ion

- Atoms contain three sub-atomic particles called protons, neutrons and electrons

The Nuclear Model – this is the structure of an atom

Chemistry – limestone cycle

- Limestone is mainly calcium carbonate, CaCO3, which when heated breaks down to form calcium oxide and carbon dioxide. Calcium oxide reacts with water to produce calcium hydroxide. Limestone and its products have many uses, including being used to make cement, mortar and concrete.

Calcium carbonate breaks down when heated strongly. This reaction is called thermal decomposition. Here are the equations for the thermal decomposition of calcium carbonate:

calcium carbonate → calcium oxide + carbon dioxide
CaCO3 → CaO + CO2

Other metal carbonates decompose in the same way, including:
- sodium carbonate
- magnesium carbonate
- copper carbonate

Immunity

- An antibody is a protein made by white blood cells that fits a specific pathogen's antigens (it is a vaccine that contains a dead or inactive pathogen)

Once inside the body, pathogens reproduce. Viruses reproduce inside cells and damage them, while escaping to infect more cells. Bacteria produce toxins - poisons. Cell damage and toxins cause the symptoms of infectious diseases. Once pathogens enter the body, the immune system destroys them. White blood cells are important components of the immune system.

White blood cells

White blood cells can:
- engulf pathogens and destroy them
- produce antibodies to destroy pathogens
- produce antitoxins that neutralise the toxins released by pathogens

Pathogens contain certain chemicals that are foreign to the body, called antigens. White blood cells - lymphocytes - carry antibodies - proteins that have a chemical 'fit' to a certain antigen. When a white blood cell with the appropriate antibody meets the antigen, it reproduces quickly and makes many copies of the antibody that neutralises the pathogen.

- Pathogens are microorganisms that cause disease. The body has several defence mechanisms to prevent pathogens from entering the body and reproducing there. The immune system can destroy pathogens that manage to enter the body. New medical treatments and drugs must be tested before their use.
- Pathogens are organisms that cause disease. They include microorganisms such as bacteria, viruses, fungi and protozoa. Bacteria are microscopic organisms that come in many shapes and sizes. But even the largest ones are only 10 micrometres long - 1 micrometre = 1 millionth of a metre. Bacteria cause diseases such as cholera. Viruses are many times smaller than bacteria. They consist of a fragment of genetic material inside a protective protein coat. Viruses cause diseases such as influenza - flu.

Once you have been infected with a particular pathogen and produced antibodies against it, some of the white blood cells remain.
If you become infected again with the same pathogen, these white blood cells reproduce very rapidly and the pathogen is destroyed. This is active immunity. Sometimes you may be treated for infection by an injection of certain antibodies from someone else. This is passive immunity.

Physics – Sankey diagram

- Sankey diagrams summarise all the energy transfers taking place in a process. The thicker the line or arrow, the greater the amount of energy involved. The energy transfer to light energy is the useful transfer. The rest is 'wasted' - it is eventually transferred to the surroundings, making them warmer. This 'wasted' energy eventually becomes so spread out that it becomes less useful.

Conduction, convection and radiation

- Heat can be transferred from place to place by conduction, convection and radiation. Dark matt surfaces are better at absorbing heat energy than light shiny surfaces. Heat energy can be lost from homes in many different ways and there are ways of reducing these heat losses.
- There are several different types of energy, and these can be transferred from one type to another. Energy transfer diagrams show the energy transfers in a process. More efficient devices transfer the energy supplied to them into a greater proportion of useful energy than less efficient devices do.
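The Sankey-diagram card above really comes down to one ratio: efficiency = useful energy out / total energy in. The sketch below computes that ratio for a filament lamp; the 100 J input with 10 J of useful light is an assumed, textbook-style example, not a figure taken from these revision cards.

```python
def efficiency(useful_out, total_in):
    """Fraction of the supplied energy that ends up useful."""
    return useful_out / total_in

# A filament lamp, with assumed example numbers:
total_in = 100.0          # joules supplied
light = 10.0              # useful output (the thick-to-thin split of a Sankey diagram)
heat = total_in - light   # 'wasted' energy that warms the surroundings

print(f"useful (light): {light} J, wasted (heat): {heat} J")
print(f"efficiency: {efficiency(light, total_in):.0%}")  # 10%
```

The widths in a Sankey diagram are just these numbers drawn to scale: a 100-unit-wide input arrow splitting into a 10-unit useful branch and a 90-unit wasted branch.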
Fifty years ago today, President Lyndon Johnson stood before Congress and declared an "unconditional war on poverty in America." His arsenal included new programs: Medicaid, Medicare, Head Start, food stamps, more spending on education, and tax cuts to help create jobs.

At the time, 1 in 5 Americans was poor. Today, things are better, but tens of millions of Americans are still living at or below the poverty level. That raises the question: Did the war on poverty fail? In the coming year, NPR will explore this question and others about the impact and extent of poverty in the U.S., and what can be done to reduce it.

People in the isolated hills of Martin County, Ky., rarely saw outsiders, let alone a president. So when President Lyndon Johnson visited in 1964 to generate support for his proposed war on poverty, it was a big deal.

Lee Mueller, a young newspaper reporter at the time, recalls the crowds in downtown Inez, Ky., the county seat, waiting for the presidential party to arrive at an abandoned miniature golf course. "It was just like a hayfield full of long grass. It looked like helicopters landing in Vietnam or something when they came over the ridge," he says.

Mueller says the locals didn't know their role in this new, domestic war. For the White House, though, coming to Martin County gave poverty a face — and a name.

"In this south-central mountain country, over a third of the population is faced with chronic unemployment," says a government film on Johnson's visit. "Typical of this group is Tom Fletcher, his wife and eight children. Fletcher, an unemployed sawmill operator, earned only $400 last year and has been able to find little employment in the last two years."

At the time, the poverty rate in this coal-mining area was more than 60 percent. Johnson visited the Fletchers on the porch of their home — a small wooden structure with fake brick siding. Photographers took what would become one of the iconic images of the war on poverty: the president crouched down, chatting with Tom Fletcher about the lack of jobs.

Fast-forward 50 years. The Fletcher cabin still stands along a winding road about 5 miles outside town. It now has wood siding and is painted orange. There's a metal fence with a "no trespassing" sign to keep out strangers. There are lots of small houses and trailers along this road, but also some new, bigger homes that could be found in any American suburb.

Today, the roads here are well-paved. People say the schools and hospitals are much better than they used to be. Still, Martin County remains one of the poorest counties in the country. Its poverty rate is 35 percent, more than twice the national average. Unemployment remains high. Only 9 percent of the adults have a college degree.

'I Would Be Homeless'

Much of the poverty today is tucked between the mountains in what are called "the hollers." That's where Norma Moore lives with her 8-year-old grandson, Brayden. She says his parents didn't want him. He was born with a rare blood disease and is severely disabled. "And they said he was dying, and then at 4 months I got him, and I've had him ever since," Moore says.

Brayden doesn't walk or talk. He's in constant motion, rolling on the floor of their double-wide trailer home, bumping into walls and doors. There's no question that Moore's life is incredibly stressful. She says she gets by on her faith.
But here's where the war on poverty has also made a big difference: Today, she gets food stamps and energy assistance to heat her home — programs with roots in Johnson's anti-poverty initiatives — as well as Supplemental Security Income (SSI) for her grandson.

Moore shakes her head thinking about life without the help. "I would be homeless. I would be the one living on the street if it wasn't for that," she says. She looks down at her grandson on the floor. "He would probably be in a home somewhere."

Today, many people here rely on government aid. In fact, it's the largest source of income in Martin County. People say it has helped to reduce hunger, improve health care and give young families a boost, especially at a time when coal-mining jobs are disappearing by the hundreds.

Head Start is one of the signature programs of the war on poverty — helping low- and moderate-income children get ready for school. Budget cuts are always a concern. Some of the county's children get their only hot meals of the day at school.

Delsie Fletcher helps Head Start parents in Martin County with services, such as getting their high school diplomas. And yes, Delsie is one of those Fletchers, married to one of the children who stood on the porch with President Johnson.

So has the war on poverty helped her husband's family? Turns out, along with the famous photo, it's a sore topic. "They don't like to talk about it, because they don't want to be known as the poorest family in Martin County," she says. And she says they probably weren't. Most of the Fletchers have done OK for themselves.

Still, it hasn't been easy. Her husband had some of his toes cut off when he worked in the sawmills, and now he's on disability. Work around here can be tough — and dangerous — which is why coal-mining jobs pay so well. But now they're scarce, and there's nothing to replace them. People are struggling to adjust.

'I Call It Abusing The System'

Thomas Vinson, a Martin County resident for 41 years, used to work in the coal fields, but he is currently unemployed. Vinson says he has a big house payment and three sons to raise. Times are tough, he says, but "we are making it."

One reason is that Vinson's wife got a job at a gear factory through a federally funded program to help unemployed miners. Vinson is grateful for the short-term help but worried about his future. In the big picture, he's disappointed in the war on poverty. He says he sees too many people around here just collecting checks.

"They call it poverty, but I call it abusing the system. Like, if you're going to file for SSI, you go in there and say the right things, you'll come out of there with a check," he says. His feelings are widespread around here: What good are all these government programs if they don't get you a job?

Mike Howell runs the Big Sandy Area Community Action Program where the Vinsons went for help. The program is a direct result of the war on poverty. Howell agrees that the war has yet to achieve its goals, but says the reason is a lack of support. The burst of enthusiasm after President Johnson's visit has waned, he says. Every year, his program has to fight for funds.

"We've kind of let poverty go to the side," says Howell. "It's still way too high. Somebody asked me one time about the war on poverty, and I said, 'Well, it really wasn't a war — it was more of a skirmish.' And we need to declare war on poverty again."