Your friend debuts a questionable haircut and asks what you think of it. Brutal honesty would definitely hurt his feelings, so what do you say? Most people in this situation would probably opt for a vague or evasive response, along the lines of "It's really unique!" or "It's so you!" Politeness helps us get through awkward social situations like these and makes it easier for us to maintain our relationships. But a new article published in the October issue of Current Directions in Psychological Science, a journal of the Association for Psychological Science, suggests that this kind of politeness can have disastrous consequences, especially in high-stakes situations.
According to authors Jean-François Bonnefon and Wim de Neys of CNRS and Université de Toulouse and Aidan Feeney of Queen's University, we resort to politeness strategies when we have to share information that might offend or embarrass someone or information that suggests someone has made a mistake or a bad choice. The more sensitive an issue is, the more likely we are to use these kinds of politeness strategies.
Politeness can become problematic, however, when it causes us to sacrifice clarity. Existing research suggests that politeness strategies can lead to confusion about the meaning of statements that, under other circumstances, would be clear. And this confusion is especially likely to occur in high-stakes situations, the very situations in which we are most likely to use politeness strategies.
Even worse, say the authors, it takes more of our cognitive resources to process these kinds of polite statements. Thus, "[w]e must think harder when we consider the possibility that people are being polite, and this harder thinking leaves us in a greater state of uncertainty about what is really meant."
This confusion and uncertainty can have particularly negative consequences when safety and security are on the line, such as for pilots trying to fly a plane in an emergency or for a doctor trying to help a patient decide on a treatment. Politeness can also have serious consequences within corporate culture: people don't want to embarrass their bosses or their co-workers, so they hesitate to point out when something looks amiss, even when potential fraud or misconduct might be involved.
So how can we make sure to get around the confusion of politeness? One option is to encourage people to be more assertive in high-stakes situations. Some companies, including airlines, have even instituted assertiveness training programs, but it's not yet clear whether these programs really work.
Another option is to try to make the interpretation of polite statements easier for people. "Say that there is a tone, a prosodic feature which typically signals that politeness is at work," says Bonnefon. If we can identify this tone, we could "train pilots or other professionals to react intuitively to that tone in order to treat it as a warning signal."
While politeness can be detrimental in certain situations, Bonnefon takes pains to point out that the goal of this research is not to encourage or license general impoliteness: "politeness is obviously a very positive behavior in most cases," he concludes.
"date": "2015-03-06T17:43:49",
"dump": "CC-MAIN-2015-11",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936469305.48/warc/CC-MAIN-20150226074109-00068-ip-10-28-5-156.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.956855833530426,
"score": 2.640625,
"token_count": 633,
"url": "http://medicalxpress.com/news/2011-10-perils-polite.html"
} |
Forcing people to join any organisation and to fund the promotion of views with which they disagree is wrong; freedom of association must include the right not to associate.
International human rights issues get a lot of media attention and politicians spend a lot of time and money trying to ensure rights are available for all. So why then are some human rights withheld from post-secondary students in Canada?
Freedom of association is a fundamental human right. It is guaranteed to everyone in the Canadian Charter of Rights and Freedoms and the United Nations Universal Declaration of Human Rights. However, this right is still denied to some students.
Students decide for themselves which university to attend, what degree to pursue, what papers to take, what extracurricular activity to participate in, and where to live while they study. But they have no choice in whether to join their university’s students’ association.
Students’ associations – also known as student unions, student governments or federations – are campus organisations that claim to ‘represent’ the views of their members. But students don’t choose to be represented by their association; they are forced to become members and pay a fee for the privilege. This is done automatically as part of the enrolment process, and the fee is often hidden among other university service fees. Many students don’t even realise that they are members of the association and have paid this fee.
Students’ associations claim to use these fees to fund campus services for students. But what if a student doesn’t use the services that the association provides? Perhaps a student must work part time to fund her education and doesn’t have time to utilise the services. Perhaps one is studying at a distance and isn’t often on campus, or simply prefers to use the local gym rather than the one on campus. Regrettably, the personal situation of the student is irrelevant when they’re forced to pay the fees and fund unused or irrelevant services.
In addition to providing services which students may or may not want, students’ associations use their compulsorily acquired fees to fund campaigns and advertising for political issues which they claim represent students’ views. Representation for students certainly sounds reasonable; however, most student unions speak and act as though students were politically homogeneous. Not all students have the same views on every issue; they are, after all, individuals with their own personal preferences and political views.
As a recent example, consider how two speakers invited to York University this year were treated differently: Daniel Pipes, a prestigious Middle East expert, and George Galloway, a former British MP. Both are polarising figures of the kind you would expect to be welcome at a place of intellectual ferment such as a university.
Worried about the threat of protests by opposing students, the university required the promoters of each event to pay for security costs. The student union paid the security costs for Galloway, and that talk went ahead; it did not pay for Pipes, and his talk was cancelled as a result.
Galloway suited the political inclinations of the union’s executive, Pipes did not. A student executive influenced the debate and elevated one side according to their political views while using money also acquired from the students who invited Pipes.
The power to tax and compel is usually limited to the government for very good reasons. Governments are accountable to the people; courts can challenge them and media can expose them. Pages and pages of legislation provide protection for citizens against disingenuous, corrupt or unscrupulous politicians through instruments like Freedom of Information Acts and Ombudsmen.
Yet, we allow students’ associations, legally not much more than private clubs, to take hundreds of millions of dollars from students every year. No doubt the local country club could provide fantastic services to the people in the area if everyone was forced to give them hundreds of dollars every year and if everyone agreed on what those services should be, but in a free society people decide for themselves with whom they freely associate.
Around the world, governments and students are realising this inconsistency and adopting laws to allow students to decide for themselves whether to join a students’ association. Australia passed a Voluntary Student Union bill in 2005, Sweden adopted similar legislation in July 2010, and the New Zealand Parliament is set to follow in 2011. It’s time Canada followed suit and recognised that students’ right of association needs defending.
"date": "2018-12-12T05:48:41",
"dump": "CC-MAIN-2018-51",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823738.9/warc/CC-MAIN-20181212044022-20181212065522-00576.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9688146710395813,
"score": 3,
"token_count": 895,
"url": "https://fcpp.org/2010/12/07/time-to-free-students-from-forced-association-freedom-of-association-is-a-human-right-and-universities-are-an-ironic-place-to-withhold-them/"
} |
U.S. Sec. of Education: My National Teacher Appreciation Week Wish List
By Pamela Moreland
In observance of National Teacher Appreciation Week, U.S. Secretary of Education Arne Duncan said he wanted to find a way to help teachers be the leaders in the nation’s transition to higher learning standards. He asked “policymakers, district leaders and principals” to:
- Find opportunities for teachers to lead this work. There is far too much talent and expertise in our teaching force that is hidden in isolated classrooms and not reaching as far as it can to bring the system forward. Teachers and leaders must work together to create opportunities for teacher leadership, including shared responsibility, and that means developing school-level structures for teachers to activate their talents. This may mean reducing teaching loads to create “hybrid” roles for teachers in which they both teach and lead.
- Find, make visible and celebrate examples of making this transition well. Teachers often tell me they’re looking for examples of how to do this right. Let’s spotlight teachers and schools that are leading the way.
- Use your bully pulpit — and share that spotlight with a teacher. Whether you are a principal, superintendent, elected leader, parent or play some other role, you have a voice. Learn about this transition, and use your voice to help make this transition a good experience for teachers, students, and families. Especially important is educating families about what to expect and why it matters. Invite a teacher to help you tell the story and answer questions.
- Be an active, bold part of improving pre-service training and professional development, and make sure that all stages of a teacher’s education reflect the new instructional world they will inhabit. Teachers deserve a continuum of professional growth; that means designing career lattices so that teaching offers a career’s worth of dynamic opportunities for impacting students.
- Read and take ideas from the RESPECT Blueprint, a plan released in April containing a vision for an elevated teaching profession. The blueprint reflects a vision shaped by more than a year’s worth of intimate discussions the department convened with some 6,000 teachers about transforming their profession. Teaching is the nation’s most important work, and it’s time for concrete steps that treat it that way — RESPECT offers a blueprint to do that.
“Don’t get me wrong — teachers deserve a week of celebration with plenty of baked goods. But I hear, often, that this is a time that teachers want some extra support,” Duncan concluded. “They deserve real, meaningful help — not just this week, but all year long.”
"date": "2014-07-23T20:06:57",
"dump": "CC-MAIN-2014-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997883466.67/warc/CC-MAIN-20140722025803-00152-ip-10-33-131-23.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9539057612419128,
"score": 2.515625,
"token_count": 569,
"url": "http://lessonplanspage.com/u-s-sec-of-education-my-national-teacher-appreciation-week-wish-list/"
} |
Shortly after Philip's murder, Alexander's longtime ally Antipater presented him to the Macedonian army, which immediately acclaimed Alexander king. The first matter Alexander attended to was the inevitable purging of enemies. This included potential claimants Amyntas and two sons of Aëropus (the third was spared because he was among the first to pay homage to Alexander, and also because he was the son-in-law of Antipater) who were known supporters of Amyntas. Many more murders would follow as necessity arose. Olympias showed her vengeful side in a gruesome murder of Caranus. However, while this killing was an order of Alexander, Olympias also murdered Caranus's sister and drove their mother, Cleopatra, to suicide. As Caranus's sister and mother posed no threat to the throne, Alexander was naturally furious at his mother, fearing the public scandal the murders might cause.
With the purging underway, Alexander still had to win the support of the Macedonian people and then attempt to maintain his hold over the foreign states. To assure his subjects, he publicly announced that he would run the state on the same principles as his father's administration; he even removed direct taxation on Macedonian citizens to win their appreciation.
The situation abroad, however, would be more complicated. Athens was thrilled to learn of Philip's death, seeing it as an opportunity to revolt. The famous Athenian orator Demosthenes immediately wrote to Attalus and Parmenion, one of Philip's most loyal lieutenants, to offer Athens's support and to urge them to declare war on Alexander. Although Attalus had to take this opportunity to save his own life, Alexander knew that Parmenion could be won over, and that success in doing so would greatly strengthen his power. While negotiations continued, Alexander took action against states that threatened to defect. Despite warnings against brashness, Alexander knew that he could not show any signs of weakness at this crucial moment. Therefore, he soon brought Thessaly and others into line, convincing them that cooperation would be the wisest decision.
Thebes presented a greater obstacle, as it was naturally averse to Macedonian rule. Alexander, however, offered such appealing terms as could not be refused–he simply asked to be recognized as Hegemon of the Hellenic League. Athens could not, at this point, hold out alone; soon its leaders were apologizing for the delay in acknowledging Alexander as king. Attalus himself gave in and tried to switch allegiances, but his efforts were futile, as Alexander's hatred was personal as well as political. When Attalus's life was the one point of dispute between Alexander and Parmenion–Attalus was Parmenion's son-in-law–Alexander remained firm, and he eventually got his way. With Parmenion's support, Alexander was able to reclaim–all without a battle, and in a short time–the status that his father had worked so hard to achieve.
His housecleaning and consolidation of power taken care of, Alexander soon turned his attention to reaffirming his rule over the barbarians. In these encounters, Alexander showed brilliant foresight and succeeded in near annihilations while losing very few men.
Meanwhile, trouble arose again in Thebes, as rebel leaders began stirring up anti-Macedonian feeling, particularly because of a rumor that Alexander had died. Though Alexander offered Thebes the chance to surrender when he arrived with 30,000 troops, the city, though shocked to see Alexander alive, was nevertheless determined to fight. For a while, the Thebans put up a valiant struggle outside the city walls. However, when Alexander found one gate left open and sent troops to rush in, the Thebans lost heart as their city was stormed.
What resulted was one of Alexander's most destructive massacres–6,000 Thebans killed, 30,000 taken prisoner; only 500 Macedonians lost. Furthermore, the victors did not hold back when the pillaging began. At the ensuing League meeting, the council voted to raze Thebes and sell the captured citizens as slaves. Though many representatives in the League had their own reason to hate Thebes, the destruction of the city still came as a shock to Greece, for Thebes had been one of the most historic and distinguished Greek city-states. Though Alexander successfully made an example of Thebes, he would never be forgiven for his lack of mercy on the city.
"date": "2014-11-23T20:16:54",
"dump": "CC-MAIN-2014-49",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400379916.51/warc/CC-MAIN-20141119123259-00224-ip-10-235-23-156.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9872868061065674,
"score": 3.203125,
"token_count": 911,
"url": "http://www.sparknotes.com/biography/alexander/section5.rhtml"
} |
Children with special health care needs are at increased risk for developing caries for the following reasons:
- Difficulties performing oral hygiene
- Gastroesophageal Reflux Disease and vomiting
- Gingival hyperplasia and crowding of the teeth
- Medications containing sugar
Xerostomia: abnormal dryness of the mouth due to insufficient saliva production.
In children with special health care needs, uncoordinated chewing may leave more food in the mouth.
A weak, uncoordinated tongue may not be able to adequately clean all oral surfaces.
Gagging on the toothbrush, paste, or saliva may inhibit complete brushing of all surfaces.
An inability to spit may result in the swallowing of toothpaste.
Hyperplasia: an abnormal or unusual increase in the elements composing a part (as cells composing a tissue).
"date": "2016-10-25T10:21:05",
"dump": "CC-MAIN-2016-44",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720026.81/warc/CC-MAIN-20161020183840-00438-ip-10-171-6-4.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8929638862609863,
"score": 3.25,
"token_count": 166,
"url": "http://www2.aap.org/oralhealth/pact/ch7_sect2.cfm"
} |
Early on the morning of June 6, 1944, 1st Lt. Stanley Fine climbed into the B-17 designated as the lead aircraft, sat down at his radar set, and led the 401st Bombardment Group from its base at Deenethorpe, England, over solid cloud cover to Normandy.
The weather over the English Channel was dreadful. Gen. Dwight Eisenhower, supreme allied commander in Europe, had agonized throughout the night about his decision to proceed with the invasion of Normandy.
Now, in murky skies above German fortifications and just ahead of Allied ground troops on the beaches, Lieutenant Fine studied the smudges on his screen and began his bombing run. From the copilot's seat, the air commander's voice crackled over the intercom, "Blind. Going in 'Mickey.' "
Sixty years later, as Americans commemorate D-Day, few know about Mickey - the nickname for the latest in a newly developed family of microwave radars that helped change the course of the war.
Small enough to be mounted in aircraft and accurate enough to "see" individual targets, microwave radar allowed the Allies to clear the sea lanes, get American troops and supplies to England, and then land those troops and weapons safely on a coastline occupied by the enemy. Without microwave radar, it's not clear that the Allies could have mounted the invasion of France in 1944.
"The thoughts of America were too much with the men going through the Normandy surf for a remarkable thing to get much notice," wrote Gen. Henry H. "Hap" Arnold, commander of the US Army Air Corps, afterward. "The final bombs that paved the way for them, dropping only a few yards ahead of the first men to hit the beaches, went down through a solid overcast of clouds, without ... so much as scratching the paint on a single rowboat in that packed armada below.... [O]ur scientists had taken from Hitler even the comfort of bad weather."
The technology - still used in some radars today - had its own heroes, including a handpicked group of British and American civilian scientists who crisscrossed the Atlantic past marauding U-boats; flew in and out of London during the Blitz to confer with fellow scientists; and flew test, training, and combat missions in military aircraft with military personnel.
As with many key inventions, the radar's beginnings were accidental. In the 1920s, scientists working on radio transmissions and communications noticed changes in their signal reception when planes flew by. Most of them recorded these observations as incidental nuisances in their notebooks, but some speculated about the cause. They soon realized that radio energy, like light, could be reflected by the surface of large objects. The audible echo - sound reflected back to its source from a large surface - works the same way.
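The echo analogy maps directly onto how a radar measures distance: the reflected pulse's round-trip time, multiplied by the speed of light and halved, gives the range to the target. A minimal sketch (the target distance here is illustrative, not from the article):

```python
# Range from a radar echo: the pulse travels out to the target and back,
# so the one-way distance is (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light in m/s

def echo_range_m(round_trip_s: float) -> float:
    """Distance to a reflecting target, from the pulse round-trip time."""
    return C * round_trip_s / 2.0

# A surfaced U-boat 15 km away returns an echo in about 100 microseconds.
rt = 2 * 15_000.0 / C
print(f"round trip: {rt * 1e6:.1f} us -> range {echo_range_m(rt) / 1000:.1f} km")
```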
In the decade leading up to the war, most of the advanced nations were developing radar. However, the equipment was too large and cumbersome to be mobile, and it was unable to show detail.
Scientists knew they would have to operate at higher microwave frequencies to improve the performance of their radar, but no device existed capable of transmitting sufficient power at those frequencies.
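The push to higher frequencies was really a push to shorter wavelengths: a radar cannot resolve features much smaller than the wavelength it transmits, and wavelength falls as frequency rises (λ = c/f). A rough comparison, using illustrative frequencies rather than figures from the article:

```python
C = 299_792_458.0  # speed of light in m/s

def wavelength_m(freq_hz: float) -> float:
    """Wavelength of a radio wave in metres: lambda = c / f."""
    return C / freq_hz

# Pre-war metre-wave radar vs. the ~10 cm microwave band the cavity
# magnetron made practical (both frequencies are illustrative).
for label, f in [("metre-wave, 30 MHz", 30e6), ("microwave, 3 GHz", 3e9)]:
    print(f"{label}: {wavelength_m(f) * 100:.1f} cm")
```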
Then one afternoon in November 1939 two physicists at the University of Birmingham in England, John Randall and Henry Boot, sketched out the resonant cavity magnetron.
It took three months of shop work to build a prototype. No one knew what to expect, and at first it proved impossible to test. A blue-violet electric arc sizzled from the output lead. Then the lead melted.
Lab assistants were sent on a succession of errands to the local garage for higher and higher wattage bulbs to connect to the output, but each was burned to a crisp. The resonant cavity magnetron was generating so much power it was burning up everything they connected to it. It was working beyond their wildest dreams. Randall and Boot had created the single component destined to unlock the enormous potential of radar.
Resembling a tin of tuna fish with protruding spines, the resonant cavity magnetron was small enough to fit in the palm of a man's hand. In the hands of American and British scientists it would equip the two nations with a secret and exclusive capability - microwave radar.
The British brought the magnetron to the United States in great secrecy. And the newly organized National Defense Research Council convened a group of civilian scientists who, early in 1941, converged on Cambridge, Mass., from all over the US to begin work on a crash program at the Massachusetts Institute of Technology.
Working in tandem with the British, MIT's new Radiation Laboratory aimed to refine microwave radar so that it could be mounted in aircraft and accurately distinguish individual targets. Under intense pressure, the scientists spent day after grinding day in cold makeshift laboratories, designing, building, testing, and redesigning equipment that had never before existed.
By 1942, they had furnished the Air Force with hand-built microwave radars based on British designs to search for U-boats off the Atlantic coast. In 1943, awaiting orders to join the Eighth Air Force in Europe, Fine suddenly received secret orders to be trained, along with nine other navigators, on the Radiation Lab's new blind-bombing radar, Mickey.
"At Langley, where I was sent for training, a civilian from MIT showed me what knobs to twist and how to interpret the scope images," he recalls. "Then they took me up, covered my windows, and told me to direct the pilot to the Chesapeake and drop a dummy bomb into the bay. All I could think was, 'Please, God, don't let me drop it on the White House.' "
The technology helped set the stage for D-Day. Before mounting the invasion, the Allies had to accomplish two things: First, transport American personnel, supplies, and weapons to England to fight the war. Second, land American and British personnel, supplies, and weapons on mainland Europe.
Obstructing the first goal - transport - were the German U-boats. In just six months, from January to June 1942, U-boats sank a total of 585 Allied ships. But with the fragile, handcrafted radar mounted in slow-flying aircraft, the Allies discovered they could locate German U-boats at night when they surfaced to recharge batteries.
The vulnerable U-boat would suddenly find itself bathed in powerful searchlights from directly above and attacked before it could submerge. The German captains wondered: How were they being located? The first U-boat located by a plane equipped with microwave radar built by MIT's Radiation Lab was sunk on April 1, 1942.
Slowly, steadily, the mayhem inflicted by US and British planes equipped with microwave radar increased. In May 1943, the mystified German High Command, still assured by their scientists that radar was incapable of detecting an object as small as a U-boat, was forced to withdraw all its submarines from the North Atlantic.
"Radar location by aircraft had ... robbed the U-boats of their power to fight on the surface," Admiral Karl Doenitz, commander in chief of the German Navy wrote later. "Wolf-pack operations against convoys in the North Atlantic were no longer possible.... We had lost the Battle of the Atlantic."
With the opening of the shipping lanes, troop transports carried hundreds of thousands of military personnel, and freighters transported millions of tons of equipment to the British Isles in preparation for the D-Day landings. The German Air Force, however, remained a formidable obstacle. Before attempting to land troops, the Allies had to control the skies.
In England and western Europe, dirty weather is the rule throughout the year. During the winter of 1942-43, the Eighth Air Force discovered that visual bombing missions were possible only 20 percent of the days. By the time weather conditions allowed the bombers back into the air, they discovered that most of the damage inflicted by previous missions had been repaired. Their concept of strategic bombing was not succeeding.
Not only were the Germans saving their aircraft to oppose the anticipated Allied landings, their factories were building even more planes. Any possible chance for an Allied invasion in 1943 had evaporated.
However, with the new radar-equipped Pathfinder bombers, operations during the winter of 1943-44 were vastly more successful. Able to navigate in bad weather and "see" strategic targets through cloud cover, American and British Pathfinders led wave upon wave of heavy bombers over enemy territory around the clock, regardless of the weather.
If the formation couldn't bomb visually using bombsights, the Pathfinder lead ship with its blind-bombing radar dropped its load through the clouds. The rest dropped their bombs on the Pathfinder's markers. Although the radar-guided results weren't nearly as accurate as visual bombing, the relentless pounding still forced the German Air Force to send its planes into the skies to challenge the bombers and their fighter escorts.
"We were the bait to get the German planes into the air so we could shoot them down," recalls Fine. By June 6, 1944, there was hardly a German plane left in the sky to oppose the landings.
Even so, the D-Day landings were brutal. In one day, the Allies lost some 2,500 troops of the 156,000 who stormed the beaches. But without the heroism of a few civilians with slide rules, their brave campaign might have proved far more costly.
• The author is the nephew of Captain Fine.
• Atomic bomb - The nuclear weapon was dropped by the US on the Japanese cities of Hiroshima and Nagasaki in August 1945. Japan surrendered shortly thereafter. [Editor's note: The original version misstated the year the bombs were dropped on Japan.]
• Resonant cavity magnetron - Used in the first portable microwave radar on planes, it enabled around-the-clock bombing missions and could locate enemy U-boats. It was invented in November 1939 by scientists at the University of Birmingham, England.
• Water-launched air attacks - In 1941, Japanese airplanes were launched from aircraft carriers in a surprise attack against Pearl Harbor.
• V-2 Missile - This longer-range rocket was first developed in 1942 for the Germans. Hundreds fell on England.
Sources: encyclopedia.com, The Oxford Dictionary of World War II
"date": "2016-02-07T09:06:01",
"dump": "CC-MAIN-2016-07",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701148758.73/warc/CC-MAIN-20160205193908-00170-ip-10-236-182-209.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9743434190750122,
"score": 3.453125,
"token_count": 2104,
"url": "http://www.csmonitor.com/2004/0603/p14s02-stct.html"
} |
Despite the widespread use of palladium-mediated catalytic reactions, handling and removal of palladium during reactions and work-up present challenges that remain a major problem. Reducing palladium content to the parts-per-million level, as is required for active pharmaceutical ingredients, is particularly difficult. Metal scavengers can address residual palladium, but effectively introducing stable, workhorse palladium catalysts to reactions can also be challenging.
The Suzuki reaction is one of the most widely practiced coupling protocols for the preparation of symmetrical and unsymmetrical biaryl compounds. Biotage® PS-PPh3-Pd was developed to perform in a manner similar to that of the well-established small molecule reagent Pd(Ph3P)4, with the additional convenience of a polymer supported reagent for handling and purification. PS-PPh3-Pd is stable to air and can be stored at room temperature for extended periods of time without degradation. A reaction protocol using PS-PPh3-Pd typically results in excellent yield and purity, with <100 ppm residual palladium.
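The <100 ppm specification is a simple mass ratio: micrograms of palladium per gram of product. A sketch of the check one might run on an elemental-analysis result (the assay numbers below are hypothetical, not Biotage data):

```python
def residual_pd_ppm(pd_mass_ug: float, sample_mass_g: float) -> float:
    """Residual palladium in ppm: 1 ppm (by mass) = 1 microgram of Pd per gram of sample."""
    return pd_mass_ug / sample_mass_g

# Hypothetical assay: 4.2 ug of Pd found in a 0.100 g product sample.
ppm = residual_pd_ppm(4.2, 0.100)
print(f"{ppm:.0f} ppm residual Pd -> {'pass' if ppm < 100 else 'fail'} (<100 ppm spec)")
```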
What is the purpose of a transformer?
Transformers are found everywhere Alternating Current (AC) electrical energy is used. A transformer is an electrical device that trades voltage for current in a circuit, while not affecting the total electrical power. This means it takes high-voltage electricity with a small current and changes it into low-voltage electricity with a large current, or vice versa. One thing to know about transformers is that they only work for Alternating Current (AC), such as you get from your wall plugs, not Direct Current (DC).
Transformers can be used either to increase the voltage also known as stepping up the voltage, or they can decrease the voltage also known as stepping down the voltage. Transformers use two coils of wire, each with hundreds or thousands of turns, wrapped around a metal core. One coil is for the incoming electricity and one is for the outgoing electricity. Alternating Current in the incoming coil sets up an alternating magnetic field in the core, which then generates Alternating Current in the outgoing coil.
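The coil description translates into the ideal-transformer equations: voltage scales with the turns ratio while current scales inversely, so power in equals power out. A minimal sketch using the 440 V to 120 V step-down mentioned later in this article (the turn counts themselves are made up for illustration):

```python
def ideal_transformer(v_in: float, i_in: float, n_primary: int, n_secondary: int):
    """Ideal transformer: V_out/V_in = Ns/Np and I_out/I_in = Np/Ns."""
    ratio = n_secondary / n_primary
    v_out = v_in * ratio   # voltage scales with the turns ratio
    i_out = i_in / ratio   # current scales inversely, so power is unchanged
    return v_out, i_out

# Step-down from 440 V line voltage to 120 V household voltage.
v_out, i_out = ideal_transformer(v_in=440.0, i_in=3.0, n_primary=1100, n_secondary=300)
print(f"{v_out:.0f} V at {i_out:.1f} A")        # 120 V at 11.0 A
assert abs(440.0 * 3.0 - v_out * i_out) < 1e-9  # power is conserved
```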
Energy is lost in the process of transmitting electricity long distances, such as during the journey from a power plant to your home. Less energy is lost if the voltage is very high, so electrical utilities use high voltage in long-distance transmission wires. However, this high voltage is too dangerous for home use, which is why utilities use transformers to change the voltage of electricity as it travels from the power plant to your home.
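The reason high voltage loses less energy is resistive heating: the loss in a line is I²R, and for a fixed power delivered, raising the voltage cuts the current and therefore the loss quadratically. A small illustration with a made-up line resistance:

```python
def line_loss_w(power_w: float, voltage_v: float, line_resistance_ohm: float) -> float:
    """Resistive loss in a transmission line: P_loss = I^2 * R, where I = P / V."""
    current = power_w / voltage_v
    return current ** 2 * line_resistance_ohm

# Delivering 100 kW through a line with 1 ohm of resistance
# (hypothetical numbers) at two different transmission voltages:
low = line_loss_w(100_000, 1_000, 1.0)     # 100 A flowing -> 10,000 W lost
high = line_loss_w(100_000, 100_000, 1.0)  # 1 A flowing   -> 1 W lost
print(f"at 1 kV: {low:.0f} W lost; at 100 kV: {high:.0f} W lost")
```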
First, using a transformer, the voltage of electricity coming from a power plant is “stepped up” to the right level for long-distance transmission. Dynamos at power plants generate large currents but not a lot of voltage, so this electricity is stepped up to high voltage for transmission over wires, as electricity travels more efficiently at high voltage. Step-up transformers have other uses as well: because high-voltage current can arc, step-up transformers called ignition coils are used to power spark plugs.
Later, the voltage is stepped down before it enters your home – once again using transformers. A “step-down” transformer changes the 440-volt electricity in power lines to the 120-volt electricity you use in your house. Then, the current is either used at that level for devices like light bulbs, or it is converted to DC using an AC/DC adapter for devices like laptop computers.
Since the emergence of the first constant-potential transformers in 1885, transformers have become essential for the transmission, distribution, and utilization of Alternating Current electrical energy in all applications of power. At Power Temp Systems, we specialize in making innovative equipment that efficiently and safely distributes and utilizes power for any project. | <urn:uuid:f32cb9ef-5c61-42bf-9081-07ef2bbcfcc3> | {
"date": "2019-06-19T13:57:23",
"dump": "CC-MAIN-2019-26",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998986.11/warc/CC-MAIN-20190619123854-20190619145854-00416.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9257795810699463,
"score": 3.828125,
"token_count": 546,
"url": "https://powertemp.com/what-is-the-purpose-of-a-transformer/"
} |
The problem of designing a digital low-pass filter is considered from different points of view, including two approaches which have not received sufficient attention. One approach is based on the prolate-spheroidal-function filters of Slepian, Pollak, and Landau. The other is the incompletely specified least mean-square error (LMSE) method which has been proposed by Rorabacher. We find that, compared with minimax and LMSE designs, the prolate-spheroidal digital filters have very low sidelobes, at the expense of more deviation from a flat passband. If a transition band or "don't care" region of the frequency response is assumed, then the results of the minimax and LMSE methods are nearly the same. | <urn:uuid:27a8b1ea-9e17-4064-ab00-464cbd3bb705> | {
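To illustrate the trade-off the abstract describes, here is a sketch comparing a rectangular-windowed (truncated) low-pass design with one using NumPy's Kaiser window, a standard approximation to the prolate spheroidal window. The filter length, cutoff, stopband edge, and beta are assumed values, not ones taken from the paper:

```python
import numpy as np

def windowed_lowpass(num_taps, cutoff, window):
    """FIR low-pass built by windowing the ideal sinc impulse response.
    cutoff is in cycles per sample (0 < cutoff < 0.5)."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    ideal = 2 * cutoff * np.sinc(2 * cutoff * n)
    taps = ideal * window
    return taps / taps.sum()  # normalize DC (passband) gain to 1

def peak_stopband_db(taps, stop_edge, nfft=8192):
    """Largest magnitude (dB) of the response at frequencies >= stop_edge."""
    response = np.abs(np.fft.rfft(taps, nfft))
    freqs = np.arange(response.size) / nfft
    return 20 * np.log10(response[freqs >= stop_edge].max())

num_taps, cutoff, stop_edge = 101, 0.2, 0.25
rect = windowed_lowpass(num_taps, cutoff, np.ones(num_taps))
# np.kaiser approximates the prolate spheroidal (DPSS) window;
# beta = 8.0 is an assumed design choice.
kaiser = windowed_lowpass(num_taps, cutoff, np.kaiser(num_taps, 8.0))

print(peak_stopband_db(rect, stop_edge))    # modest attenuation, ripply stopband
print(peak_stopband_db(kaiser, stop_edge))  # far deeper sidelobes, wider transition
```

The flip side, as in the paper's finding, is that the heavily tapered window buys its low sidelobes with a wider transition band and more passband droop.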
"date": "2016-07-28T09:20:27",
"dump": "CC-MAIN-2016-30",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828010.15/warc/CC-MAIN-20160723071028-00109-ip-10-185-27-174.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9491923451423645,
"score": 2.65625,
"token_count": 164,
"url": "http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=1162148&sortType%3Dasc_p_Sequence%26filter%3DAND(p_IS_Number%3A26074)"
} |
On January 13, 2010, the U.S. Fish and Wildlife Service (the "Service") issued a proposed rule to revise its 2005 designation of critical habitat for threatened bull trout (Salvelinus confluentus). The proposal represents a dramatic increase in river miles and lake and reservoir acres designated as critical habitat under Section 4 of the Endangered Species Act ("ESA"). The proposed critical habitat is located in Montana, Idaho, Oregon, Washington, and Nevada. A map showing the areas proposed for designation is available at the link above.
Section 4 of the ESA requires the Service to designate critical habitat for threatened and endangered species based on the best scientific information available, after carefully considering economic impacts, impacts to national security, and other impacts relevant to specifying any particular area as critical habitat. Although critical habitat designations do not establish wildlife refuges, Section 7 of the ESA requires that federal agencies ensure that federally authorized projects, such as timber sales or energy facilities, do not destroy or adversely modify critical habitat.
The Service listed the Klamath River and Columbia River bull trout distinct population segments ("DPSs") as threatened in 1998. Since then, DPSs for Coastal Puget Sound, Jarbidge River, and Saint Mary-Belly River bull trout were listed as threatened. Native to waters of the western United States, bull trout are members of the family Salmonidae. Bull trout are found throughout the Columbia River and Snake River basins, extending east to streams in Montana and Idaho, and into the Klamath River basin in Oregon. According to the Service, the decline in bull trout is primarily due to habitat degradation and fragmentation.
Previous Federal Actions
In 2005, the Service designated critical habitat for the five DPSs listed above. In 2006, environmental advocacy groups filed a complaint in federal district court alleging that the Service failed to rely on the best scientific and commercial data available, failed to consider the relevant facts that led to listing, and failed to properly assess the economic benefits and costs of critical habitat designation. On March 23, 2009, the Service notified the U.S. District Court for the District of Oregon that it would seek remand of the final critical habitat rule for bull trout based on the findings of a report prepared by the Department of the Interior. On July 1, 2009, the court granted the Service's request for a voluntary remand and directed the Service to submit a new proposed rule.
Changes from the 2005 Rule
In the 2005 final rule, the Service designated approximately 3,828 miles of streams and 143,218 acres of lakes in Idaho, Montana, Oregon, and Washington, and 985 miles of shoreline paralleling marine habitat in Washington. The 2005 rule represented a significant downsizing of the critical habitat designated in the combined 2002 and 2004 rules. In the new rule, the Service is proposing to designate as critical habitat 22,679 miles of streams (which includes 985.3 miles of marine shoreline in the Olympic Peninsula and Puget Sound) and 533,426 acres of lakes and reservoirs.
Of the area proposed, 929 miles of streams are outside the geographical area that was occupied by the species when it was listed. The Service has determined these streams to be essential for the conservation of the species.
An economic impact analysis of the proposed rule estimates the annualized incremental cost to be $5 million to $7 million.
The expanded proposed critical habitat designations mean federal actions in the designated areas must consider the effect of the action on bull trout critical habitat. This will have ramifications for timber operators, farmers, energy developers, and others seeking federal approval for projects in the designated areas. As noted above, the Service must ensure that any projects occurring in such areas do not destroy or adversely modify the habitat.
The Service will be accepting comments on the proposed critical habitat revision until March 15, 2010. If you have questions about the issues discussed in this alert or would like more information, please contact:
Cherise Oram at (206) 386-7622 or [email protected]
Greg Corbin at (503) 294-9632 or [email protected]
David Filippi at (503) 294-9529 or [email protected]
Sarah Stauffer Curtiss at (503) 294-9829 or [email protected] | <urn:uuid:86925bc1-25a5-4fae-85df-a5e9a9fe0334> | {
"date": "2014-03-08T01:46:39",
"dump": "CC-MAIN-2014-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999652570/warc/CC-MAIN-20140305060732-00006-ip-10-183-142-35.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9178385138511658,
"score": 3.5,
"token_count": 885,
"url": "http://www.stoel.com/showalert.aspx?Show=6373"
} |
WWF's work in Peru
In 1997, WWF established a comprehensive country conservation programme. Since then, the WWF Peru Programme Office (PPO) has gone on to achieve a number of conservation goals. Today, WWF concentrates its efforts primarily in 6 of Peru's 14 priority ecoregions:
* Southwestern Amazon Moist Forests
* Amazon River and Flooded Forests
* Humboldt Current
* Napo Moist Forests
* Central Andean Yungas
* Northern Andean Montane Forests.
The WWF Peru Programme Office's main goals are:
* To protect Peru's numerous endangered and/or endemic fauna and flora species, such as the marine turtle species, spectacled bear (Tremarctos ornatus), giant river otter (Pteronura brasiliensis), and mountain tapir (Tapirus pinchaque); and, among the flora species, the Aguaje palm (Mauritia flexuosa) and the big-leafed mahogany (Swietenia macrophylla).
* To support the creation and effective management of natural protected areas through political advocacy and the design of conservation master plans, capacity building efforts, and supervision and control.
* To strengthen the framework of Peru's National Protected Areas System (SINANPE). Specifically, with financial support from USAID-Peru, the Peru Programme Office recently helped to develop a national scorecard matrix to evaluate management capacity in natural protected areas, which has already been run twice in 28 natural protected areas. The PPO also prepared a draft biological monitoring tool and overall monitoring system for natural protected areas.
To achieve these goals, WWF Peru Programme Office works in the following areas: | <urn:uuid:57a355c7-5819-4a5b-b309-5076dd86f133> | {
"date": "2013-05-23T11:42:42",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.838086724281311,
"score": 3.0625,
"token_count": 354,
"url": "http://wwf.panda.org/who_we_are/wwf_offices/peru/our_work/"
} |
Does your child struggle with math? Do they dread going to class or doing math homework? Unfortunately, this is the case for a number of students. For parents, though, working to help their child learn to love math is important. It can improve their overall outlook on math and on education as a whole. Here are some ways to help teach a child to love math.
Make It Fun
Many children dislike math because it’s not fun for them. To help combat this, there should be ways for the parent to make it fun. Provide various incentives, use games, and remove the textbooks. The student will be more receptive to the lessons being taught and can retain more of the information. It can be surprising how well your child can solve problems once they have been prompted in a more enjoyable manner.
Mix In Subjects
Math by itself can feel boring, uninteresting, or very difficult. This is what causes most students to dislike it. However, if seen everywhere and in multiple situations, math can be something students enjoy. Try putting it in with other subjects or teaching the child to see math in other areas. This could keep them learning the initial subject, but finding ways to incorporate math.
Allow Mistakes
Another potential reason students dislike math is that they worry about making mistakes. Children should know that making mistakes is okay; it's how we learn and continue to grow. The important part is trying again, and allowing the child to do so until they get the correct answer.
At Tutor Doctor of Tulsa, we truly believe that a teen or child can love all subjects. Our Tulsa in-home tutors take the time to help students understand the intricacies of math and move forward with confidence. We are determined to make this process as simple as possible for your child.
If your child needs help with math or any other subject, contact us today. | <urn:uuid:a964cf34-d716-4e4e-808b-cff8fe2f3277> | {
"date": "2017-08-17T19:14:56",
"dump": "CC-MAIN-2017-34",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103910.54/warc/CC-MAIN-20170817185948-20170817205948-00256.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.952304482460022,
"score": 3.21875,
"token_count": 384,
"url": "http://www.tutordoctor.com/tulsa/about-us/blog/2016/november/teaching-your-student-to-love-math/"
} |
Heroin is reportedly one of the most addicting substances in the world.
That's according to a popular study from 2007 published in The Lancet. It examined dependence and other factors.
Heroin, an opiate, raises the level of dopamine in the brain.
Dopamine is one of four chemicals responsible for happiness.
Over half a million Americans tried heroin in 2013 — a 150 percent increase in about six years.
In 2013, more than 8,000 people in the U.S. died from heroin overdoses.
Cocaine, street methadone, barbiturates and alcohol rank high in causing dependence, right after heroin.
This video includes clips from the National Institute on Drug Abuse and images from Getty Images. | <urn:uuid:99fddc0a-75de-4780-babc-e8b1c544c042> | {
"date": "2017-02-20T17:57:11",
"dump": "CC-MAIN-2017-09",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00580-ip-10-171-10-108.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9221075773239136,
"score": 2.65625,
"token_count": 152,
"url": "http://www.newsy.com/videos/heroin-might-be-the-most-addictive-drug-and-it-s-a-growing-problem/"
} |
I need help with explaining the highlighted portions in the attached. I am trying to understand it as well as write an explanation before I do a research.
Address model assumptions. Begin by listing the assumptions and then presenting data or text that addresses how well the data meet the assumptions. For example, a t-test requires the data meet the following assumptions to make a valid inference:
• The dependent measure is an interval or ratio scale,
• The independent variable has two values,
• The dependent variable is normally distributed,
• The population variances for the two groups are equal, and
• The observations are independent of one another.
Also, address how well the data meet the assumptions. In some cases the text of the problem will help you, e.g., in inferring independence. In other cases you may need to examine the data, e.g., make a plot to examine normality. Does the data meet the assumptions? Sometimes statistical tests are little affected by violations of assumptions and other times the effects are severe, e.g., the p-value is way off.
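As a concrete illustration, the equal-variance two-sample t statistic can be computed by hand with Python's standard library; the group scores below are hypothetical, invented for the example:

```python
from statistics import mean, variance

def pooled_t(sample_a, sample_b):
    """Two-sample t statistic with pooled variance (equal-variance t-test).
    Assumes independent observations, roughly normal groups, and equal
    population variances -- exactly the assumptions the write-up addresses."""
    n1, n2 = len(sample_a), len(sample_b)
    sp2 = ((n1 - 1) * variance(sample_a) + (n2 - 1) * variance(sample_b)) / (n1 + n2 - 2)
    se = (sp2 * (1 / n1 + 1 / n2)) ** 0.5
    t = (mean(sample_a) - mean(sample_b)) / se
    df = n1 + n2 - 2
    return t, df

# Hypothetical interest-in-statistics scores for the two groups:
males = [1.0, 2.0, 3.0, 4.0, 5.0]
females = [2.0, 3.0, 4.0, 5.0, 6.0]
t, df = pooled_t(males, females)
print(t, df)  # t = -1.0 on 8 degrees of freedom
```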
State the hypotheses both in terms of what you are trying to accomplish from a real-world perspective and what you are doing statistically. For the t-test example, you might say the research question is about comparing males and females in terms of interest in statistics. Then use an equation editor to state the null and alternative hypotheses:
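For the male/female comparison, the null and alternative hypotheses would typically be written as:

```latex
H_0: \mu_{\text{male}} = \mu_{\text{female}} \qquad
H_1: \mu_{\text{male}} \neq \mu_{\text{female}}
```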
Present the statistical results of your analysis using APA style. Use text, tables, and figures to present an easily read summary of the analysis. If you use tables or figures, place them immediately after the paragraph of first mention. Remember to report the results of the hypothesis tests along with effect sizes. Also comment on power and sample size, which are intimately related.
Comment on what the results have to say about the real world. What are the implications of the analysis?
Please find the solution of your posting. I hope it will ...
This solution is comprised of a detailed explanation of Multiple Regression. The detailed solution is provided for testing of interceptor, slope in regression analysis. All the assumptions of multiple regressions are tested and discussed in detail, null and alternative hypothesis defined in Hypotheses sections, results table is prepared in APA format, all the results are discussed with APA format as per the guidelines. | <urn:uuid:d72b6ae4-6e20-4dfc-87d1-3c985cf0776b> | {
"date": "2018-03-24T17:56:18",
"dump": "CC-MAIN-2018-13",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257650764.71/warc/CC-MAIN-20180324171404-20180324191404-00656.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9248274564743042,
"score": 3.890625,
"token_count": 482,
"url": "https://brainmass.com/statistics/simple-linear-regression/multiple-regression-report-597462"
} |
A large-scale sequencing of sugarcane expressed sequence tags (ESTs) was carried out as a first step in depicting the genome of this important tropical crop. Twenty-six unidirectional cDNA libraries were constructed from a variety of tissues sampled from thirteen different sugarcane cultivars. A total of 291,689 cDNA clones were sequenced in their 5' and 3' end regions. After trimming low-quality sequences and removing vector and ribosomal RNA sequences, 237,954 ESTs potentially derived from protein-encoding messenger RNA (mRNA) remained. The average insert size in all libraries was estimated to be 1,250 bp, with the insert length varying from 500 to 5,000 bp. Clustering the 237,954 sugarcane ESTs resulted in 43,141 clusters, of which 38% had no matches with existing sequences in the public databases. Around 53% of the clusters were formed by ESTs expressed in at least two libraries, while 47% were formed by ESTs expressed in only one library. A global analysis of the ESTs indicated that around 33% contain cDNA clones with full-length inserts.
"date": "2018-09-22T13:26:37",
"dump": "CC-MAIN-2018-39",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158429.55/warc/CC-MAIN-20180922123228-20180922143628-00296.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9596737623214722,
"score": 2.703125,
"token_count": 259,
"url": "https://www.mendeley.com/research-papers/libraries-made-sucest/"
} |
Lesson Plan | Standard Lesson
Author: Shawna Rodnunsky
Do their minds go blank when they confront a blank piece of paper? Speedwriting can help students get started on writing and come up with topics to write about. They can then incorporate their key ideas and phrases into a narrative with the help of a graphic story organizer.
© ILA/NCTE 2015. All rights reserved.
"date": "2015-07-07T04:39:15",
"dump": "CC-MAIN-2015-27",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098990.43/warc/CC-MAIN-20150627031818-00086-ip-10-179-60-89.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.8131770491600037,
"score": 3.921875,
"token_count": 197,
"url": "http://www.readwritethink.org/util/email.html?url=/classroom-resources/lesson-plans/empowered-fiction-writers-generating-1025.html?tab=6&title=Empowered+Fiction+Writers%3A+Generating+and+Organizing+Ideas+for+Story+Writing&id=1025"
} |
Do you know what to do when bit by a venomous snake?
POSTED: Thursday, May 22, 2014 - 6:31pm
UPDATED: Friday, May 23, 2014 - 1:29pm
Tyler, TX (KETK) — You may want to watch where you step. These sneaky reptiles blend in with the ground, and can coil into small piles of doom that camouflage with rocks and brush.
Dr. Laura Cauthen from the Animal Medical Center of Tyler said, "During colder months we rarely see a snake bite, but this time of year when people are outside more and the snakes are active we see them much more commonly". So before you head out the door to enjoy a nice walk, make sure you know how to properly treat a snake bite to keep you, and your furry friends, safe.
KETK spoke with Dr. Benjamin Constante from Momentum Urgent Care, who warned that if you are bitten by a snake, "Remain calm and seek medical attention". Head to the emergency room immediately, and have a description of the snake. Constante said, "If they've killed the snake then certainly bringing in the dead snake is OK, preferentially not a live snake, although it has been done".
This is because those old home remedies cannot battle the bite from a venomous animal. Constante advised, "Do not go try to suck out the venom or put a tourniquet on the extremity that was struck, or anything like that. Those are ideas that have proven not beneficial". These actually do more harm than good.
However, not all snakes in Texas are venomous. If you are feeling faint and have immediate bruising, massive swelling, and excruciating pain, odds are it was a venomous bite. Our furry friends may experience similar symptoms, such as swelling, limping, and not allowing you to touch a certain area.
Copperhead and Water Moccasin snakes are the most common venomous snakes in East Texas, so keep an eye on curious pets. Veterinarian Cauthen said, "Most of the bites we see are usually in the face, because the dogs are snooping around kind of nosing where they shouldn't be".
Snakes bite more than 8,000 Americans each year, but if handled properly they are rarely fatal. | <urn:uuid:0982a9a2-539e-4af4-b985-e0d9877d02b9> | {
"date": "2015-03-30T11:35:24",
"dump": "CC-MAIN-2015-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299261.59/warc/CC-MAIN-20150323172139-00278-ip-10-168-14-71.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9552170038223267,
"score": 2.9375,
"token_count": 478,
"url": "http://www.ketknbc.com/news/do-you-know-what-to-do-when-bit-by-a-venomous-snak"
} |
The Computer Misuse Act was introduced in the UK in 1990, after the rapid explosion in the use of computers and IT equipment within the country. The act was created to make provision for securing computer material against unauthorised access or modification. Individuals can be taken to court by organisations, and organisations can themselves be taken to court, if the act is found to have been broken. This is an act of high importance in the modern day, with the use of IT ever on the increase. The guidelines for the act were created while considering the EU guidelines in place.
The act outlines three matters that have now become criminal offences if they are broken; these were supported by the government when the act was introduced in 1990. Unauthorised access to computer material became a punishable offence under the act: a six-month imprisonment sentence can be imposed, or a fine with a maximum limit of
"date": "2013-12-05T15:14:35",
"dump": "CC-MAIN-2013-48",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163046759/warc/CC-MAIN-20131204131726-00002-ip-10-33-133-15.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9802711606025696,
"score": 2.890625,
"token_count": 179,
"url": "http://www.wrexham.com/question-answers/what-is-the-computer-misuse-act-15167.html"
} |
SeaWiFS : Producing a "True Color" Image
SeaWiFS data are collected in eight (8) different bands in the visible and near infrared part of the electromagnetic spectrum. For each band, the detector measures the intensity of the light that reaches the sensor. When these data are displayed visually, the result is a series of black and white or "gray-scale" images, which look much the same as black and white photographs taken with a series of colored filters over the lens. (Notice how different features have different intensities in the various bands. For example, clouds and water appear bright in the blue and purple bands, while land is dark. In the red and infrared bands, it is the land that is bright, while the water is dark.)
To convert these gray-scale images to a "true color" image, several steps are needed.
Steps to Creating a "True Color" Image
- A "dark correction" is applied to the data.
- When the sensor is pointed at a completely black area, the sensor reading is not zero. This is because stray light from the environment and variations in the detector electronics create small positive readings. To correct for this effect, the sensor periodically takes a reading of a known dark area. This reading is then subtracted from the rest of the data, producing "dark-corrected" data.
- The data are "calibrated."
- What the sensor actually measures is the voltage that results when light photons hit the detector elements. To be useful to scientists, this voltage must be converted to a value that represents brightness, or "radiance." This is done using a complex formula, which is constantly checked and updated.
- A "Rayleigh correction" is applied to the data.
- As the photons of light pass through the atmosphere, they interact with molecules in the atmosphere. This interaction produces what is known as atmospheric scattering, where blue light is preferentially scattered in random directions, which is why the sky appears blue. This scattering component is calculated using the well-known Rayleigh scattering equation and is then subtracted from the data.
- The data are "geolocated."
- Data are collected as a series of points along scan lines, creating a rectangular array of "pixels." The curvature of the Earth's surface and the scanning geometry of the satellite, however, mean that data do not always represent a perfectly rectangular area. To correct for this effect, the exact Earth latitude and longitude of each pixel is calculated and then the data are projected onto a latitude/longitude grid. This makes the data look like a standard Earth map. The various bands in the image above are not rectangular because they have been geolocated and corrected.
- The data are "co-registered."
- For many satellites, data from different bands are taken at slightly different times (microseconds apart), and because the satellite is in motion, the different bands will not all correspond to exactly the same area on the Earth. This means that sharp edges, such as lake shores and cloud boundaries, appear fuzzy and unaligned in the final true color image. To correct for this effect, the bands must be co-registered, so that each pixel of any one band correlates exactly to the same pixel on all of the other bands. This is very easy to do when the data are geolocated; you simply align all the bands according to their latitude/longitude coordinates. This step is not neccessary with SeaWiFS data, because the sensor is designed to collect all eight bands at exactly the same time.
- The data are displayed as an "RGB" image.
- Finally, the three bands that most closely represent red, green, and blue (RGB) in the visible spectrum are combined. Each band is displayed in a monochromatic scale corresponding to its appropriate color. When these are mixed on a computer screen they produce the entire range of visible colors, creating an image that is fairly close to what the human eye and brain would perceive. This is very similar to the way a color TV produces a range of visible colors on the TV screen, using only red, green, and blue dots.
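The first two steps and the final RGB composite can be sketched with NumPy on a toy scene. The dark reading, gain, and raw counts below are invented for illustration, and the Rayleigh correction, geolocation, and co-registration steps are omitted:

```python
import numpy as np

# Toy 4x4 "scene": raw detector counts for three bands (blue, green, red).
rng = np.random.default_rng(0)
raw = rng.integers(200, 800, size=(3, 4, 4)).astype(float)

dark_reading = 100.0  # assumed count from a known dark area
gain = 0.05           # assumed counts-to-radiance calibration factor

# Step 1: dark correction; step 2: calibrate counts to radiance.
radiance = (raw - dark_reading) * gain

# Final step: stack red, green, blue and scale each band to 0-255.
rgb = np.stack([radiance[2], radiance[1], radiance[0]], axis=-1)
rgb -= rgb.min(axis=(0, 1))
rgb = (255 * rgb / rgb.max(axis=(0, 1))).astype(np.uint8)
print(rgb.shape, rgb.dtype)  # (4, 4, 3) uint8, ready to display
```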
SeaWiFS Project Home Page
Irene Antonenko ([email protected])
Image by: Norman Kuring ([email protected]) | <urn:uuid:8f13efbf-27c1-43b8-8d21-64c0fea5db5a> | {
"date": "2015-03-28T00:31:09",
"dump": "CC-MAIN-2015-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131297146.11/warc/CC-MAIN-20150323172137-00082-ip-10-168-14-71.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9191323518753052,
"score": 4.125,
"token_count": 894,
"url": "http://oceancolor.gsfc.nasa.gov/SeaWiFS/TEACHERS/TrueColor/"
} |
Reference Beam Attenuation is a measurement of beam intensity after propagating through a seawater sample, i.e. the amount of light that reached the receiving detector. By dividing the Signal Beam Attenuation by the Reference Beam Attenuation, the proportion of light absorbed or scattered as the light travelled to the receiving detector can be calculated. These values are part of the calculation for the Optical Beam Attenuation Coefficient.
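Under the Beer-Lambert model, the transmittance (signal divided by reference) converts to an attenuation coefficient per unit path length. A sketch with assumed counts and path length (the actual OPTAA processing is per-wavelength and involves further calibration steps):

```python
import math

def beam_attenuation_coefficient(signal_counts, reference_counts, path_length_m):
    """Fraction of light surviving the path is signal / reference;
    Beer's law T = exp(-c * x) gives c = -ln(T) / x, in units of 1/m."""
    transmittance = signal_counts / reference_counts
    return -math.log(transmittance) / path_length_m

# Example: 90% of the beam reaches the detector over a 0.25 m path (values assumed):
c = beam_attenuation_coefficient(signal_counts=900.0, reference_counts=1000.0,
                                 path_length_m=0.25)
print(round(c, 4))  # 0.4214 per meter
```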
The following instrument classes include this data product:
- Spectrophotometer (OPTAA)
Data Product Specification | <urn:uuid:cec58974-2f3b-406f-b0eb-b377e80996a0> | {
"date": "2019-11-15T00:11:50",
"dump": "CC-MAIN-2019-47",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668544.32/warc/CC-MAIN-20191114232502-20191115020502-00256.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8576188683509827,
"score": 2.8125,
"token_count": 110,
"url": "https://oceanobservatories.org/data-product/optcref/"
} |
The Fat Dog
Weight Problems in Dogs: Do you have a fat dog?
Did you know that obesity is one of the greatest health risks among dogs?
Obesity is caused by excessive eating and unhealthy binges, just as it is in humans. Often, this situation is triggered by environmental factors that may lead to more problems when not controlled.
Like humans, a fat dog develops certain diseases when they are overweight. These diseases, if not corrected, can lead to more serious complications including death.
Here is a list of some of the repercussions of having a fat dog:
1. Arthritis and joint problems
Obesity in dogs tends to put extra weight on their bones, which is how arthritis in dogs develops.
Excess weight in dogs may also initiate the development of other muscle and joint problems like spinal disc disease, hip dysplasia, and cracks on the joint ligaments.
2. Diabetes
Like humans, a fat dog can get diabetes too. We all know for a fact that being overweight can cause diabetes in humans, and it can cause diabetes in dogs as well. The reason is that the more fat stored in the dog’s body, the more insulin its system will generate in order to cope with its growing mass. Hence, dogs have the tendency to resist too much insulin in the body. The end result is diabetes, leading some pets to be put on diabetic dog food.
3. Skin problems
Obesity in dogs can cause some skin problems. This is because excess fat deposits are being stored in their skin; hence, their owners are having a hard time grooming them. When dogs lack proper grooming, there is a tendency to accumulate bacteria, dirt, or other elements that could cause harm to your dog’s skin. In the end, they develop rashes, skin ruptures, and infections.
4. Capacity to tolerate heat
With too much fat deposits accumulated in their skin, a fat dog is more inclined to problems concerning heat tolerance. This is because dogs find it hard to tolerate heat due to the build-up of fats in their skin. That is why most obese dogs are prone to heat stroke especially during summer time.
5. Respiratory problems and other heart diseases
Like humans, obese dogs also have the tendency to develop respiratory problems and heart diseases. This is because the chest cavity is already covered with thick fat deposits. Hence, whenever the dog breathes, the lungs are having a hard time expanding and so does the heart. The problem starts when the heart and the lungs can no longer produce the right amount of oxygen and circulate it within the dog’s body.
6. Gastrointestinal problems
Obesity in dogs causes some problems in their intestines (like constipation in dogs) and pancreas. This problem results in an inflamed pancreas which is very painful to the dog and can also cause death.
7. Liver problems
Fats are harder to strain and this can pose a problem to the dog’s liver. When the liver can no longer function well because of the fat deposits that accumulated in the area, liver problems may occur and may even cause the death of your dog.
These health problems are indeed life-threatening, and it is up to the owner to combat them in order to keep a dog healthy.
One of the best ways to solve weight problems is a consistent exercise schedule. Owners should create a healthy exercise routine for their dogs, for example by taking them for a walk every afternoon or letting them run through a field.
The best thing about such a program is that not only do the dogs get the chance to exercise, but their owners do as well. It's a double benefit.
Next is to create a healthy diet for your dogs. Diet meals are extremely important for dogs so that they can still obtain the necessary nutrients they need in order to stay healthy. This should include the right combination of fiber, meat, vegetables, vitamins, and minerals.
However, dog owners should keep in mind that when dogs eat more fiber, they should also be given more water to prevent constipation.
Finally, before making any of these changes for your dog, it's best to ask a vet.
For healthier and happier dogs, give them the best love and care you can plus a great dietary regimen. As they say, a healthy dog is a happy dog. | <urn:uuid:ca12e5aa-f75d-47ce-90cc-110446e2c6dc> | {
"date": "2017-09-22T20:39:39",
"dump": "CC-MAIN-2017-39",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689192.26/warc/CC-MAIN-20170922202048-20170922222048-00656.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9584852457046509,
"score": 2.78125,
"token_count": 899,
"url": "http://www.free-online-veterinarian-advice.com/fatdog.html"
} |
by Walter Malone (1866-1915)
THEY do me wrong who say I come no more
When once I knock and fail to find you in;
For every day I stand outside your door
And bid you wake, and rise to fight and win.
Wail not for precious chances passed away!
Weep not for golden ages on the wane!
Each night I burn the records of the day--
At sunrise every soul is born again!
Dost thou behold thy lost youth all aghast?
Dost reel from righteous Retribution's blow?
Then turn from blotted archives of the past
And find the future's pages white as snow.
Art thou a mourner? Rouse thee from thy spell;
Art thou a sinner? Sins may be forgiven;
Each morning gives thee wings to flee from hell,
Each night a star to guide thy feet to heaven.
Laugh like a boy at splendors that have sped,
To vanished joys be blind and deaf and dumb;
My judgments seal the dead past with its dead,
But never bind a moment yet to come.
Though deep in mire, wring not your hands and weep;
I lend my arm to all who say "I can!"
No shame-faced outcast ever sank so deep
But yet might rise and be again a man!
"date": "2014-07-22T07:23:54",
"dump": "CC-MAIN-2014-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997857710.17/warc/CC-MAIN-20140722025737-00216-ip-10-33-131-23.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8430405855178833,
"score": 2.53125,
"token_count": 365,
"url": "http://www.edhelper.com/poetry/Opportunity_by_Walter_Malone.htm"
} |
New research finally reveals where the immune response actually starts in the body.
Due to advances in clinical research technology that have led to the development of some very sophisticated devices, scientists can now learn more about the human body, and how it works at the micro level.
Much still remains unknown about our body's mechanisms, and surprising discoveries just keep piling up.
For instance, innovative techniques have allowed researchers to learn, earlier this year, that the interstitium — which had been defined as "support tissue" — actually functions as an organ, and it is more important to our health than we had believed.
Now, scientists from the Garvan Institute of Medical Research in Darlinghurst, Australia, have finally been able to ascertain where it is that our bodies "remember" previous exposure to pathogens — through infection or vaccination — and where they start to "strategize" and assemble an appropriate immune response.
In a paper now published in the journal Nature Communications, the researchers explain that they have discovered a kind of "micro-organ" that forms within lymph nodes and acts as the "headquarters" of the immune response.
A tiny dynamic 'organ' drives immunity
The scientists used sensitive 3-D microscopy — a state-of-the-art technique allowing them to follow changes taking place at the microscopic level — in mice.
When they did this, they noticed peculiar structures that form over the surface of lymph nodes when the system is exposed to an infection that it has already encountered before.
The scientists found these structures — which they named "subcapsular proliferative foci" (SPF) — not just in mice, but also in sections of lymph nodes collected from human patients.
When looking closer at the SPFs, the scientists saw that different types of immune cells clustered in these structures — most prominently memory B cells, which carry information regarding how to fight pathogens that the body has already encountered.
Also in the SPFs, memory B cells converted into plasma cells, whose role it is to defend the system against infection. Plasma cells generate antibodies, which recognize pathogens and aim to destroy them.
"It was exciting to see the memory B cells being activated and clustering in this new structure that had never been seen before," says study first author Dr. Imogen Moran.
"We could see them moving around, interacting with all these other immune cells and turning into plasma cells before our eyes," she explains enthusiastically.
A 'remarkably well engineered' structure
Importantly, the SPFs are strategically positioned so that they can mount a quick response against infection. This, the researchers explain, is key when it comes to the likelihood of success against pathogens.
"When you're fighting bacteria that can double in number every 20 to 30 minutes, every moment matters. To put it bluntly, if your immune system takes too long to assemble the tools to fight the infection, you die," says study co-author Tri Phan.
He adds that vaccines are key in teaching the immune system to respond efficiently. "Vaccination," he explains, "trains the immune system, so that it can make antibodies very rapidly when an infection reappears."
"Until now we didn't know how and where this happened. Now, we've shown that memory B cells rapidly turn into large numbers of plasma cells in the SPF."
"The SPF is located strategically where bacteria would re-enter the body and it has all the ingredients assembled in one place to make antibodies — so it's remarkably well engineered to fight reinfection fast."
The only reason scientists had been unable to uncover these key immune structures before is simply that they are so tiny and so dynamic.
It was only with the development of two-photon microscopy — the technique used in the recent study — that researchers finally had the possibility to dive deeper and learn more.
Dr. Moran says, "It was only when we did two-photon microscopy — which lets us look in three dimensions at immune cells moving in a living animal — that we were able to see these SPF structures forming."
"So," states Phan, "this is a structure that's been there all along, but no one's actually seen it yet, because they haven't had the right tools. It's a remarkable reminder that there are still mysteries hidden within the body." | <urn:uuid:e29a0c8b-15de-44e3-8743-25dad7bc4dff> | {
"date": "2019-06-26T12:50:36",
"dump": "CC-MAIN-2019-26",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000306.84/warc/CC-MAIN-20190626114215-20190626140215-00336.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9680483937263489,
"score": 3.6875,
"token_count": 894,
"url": "https://www.medicalnewstoday.com/articles/322883.php?iacp"
} |
Wind load requirements are covered under national code standards but may vary by jurisdiction depending on wind zones. Wood has inherent characteristics that make it ideal in areas prone to high wind. All buildings are at risk during high winds, and each structure, with its own unique set of characteristics such as stiffness and strength, reacts differently to wind loads. However, wood buildings can be designed to resist high winds.
Wood's elastic limit and ultimate strength are higher when loads are applied for a short time, which tends to be the case in high wind events. As with seismic performance, the fact that wood buildings tend to have numerous nail connections also means they have more load paths, so there's less chance the structure will collapse should some connections fail.
Further, when structural panels such as plywood or oriented strand board (OSB) are properly attached to lumber framing, they also form some of the most solid and stable roof, floor and wall systems available. When used to form diaphragms and shear walls, they are exceptional at resisting high winds. | <urn:uuid:ac8ade65-86d4-4a8a-8d6c-98c5c9221e63> | {
"date": "2016-05-31T21:45:31",
"dump": "CC-MAIN-2016-22",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464053209501.44/warc/CC-MAIN-20160524012649-00026-ip-10-185-217-139.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9663068652153015,
"score": 3.21875,
"token_count": 210,
"url": "http://www.rethinkwood.com/wood-meets-code/wind-resistance"
} |
He spent six weeks in a hospital where he received marginal medical treatment before being sent to another military camp. In a chest cast and being badly emaciated, he was expected not to last a week.
His condition improved slowly as time passed. But while he was ill with dysentery he was again subjected to interrogation and torture that included rope bindings and beatings every two hours, punishment so severe that he tried to kill himself to escape the brutal treatment. Eventually, he reached his breaking point, and cooperated with his captors.
A second story of actual treatment of an enemy involved the capture, interrogation and detainment in military custody that lasted several years. During this time the captive was subjected to sleep deprivation for a period of more than seven days, rectal hydration, forced standing for prolonged periods, and was water boarded five times. Eventually, the captive’s will also broke, and he cooperated with his captors.
While the treatment in the second example would certainly be unpleasant, it is less severe than the experience of the pilot in the first example, inasmuch as the captive’s life was never in danger. Some Americans, however, believe the two equally represent torture.
The pilot in the first example was now-Senator John McCain, R-Ariz., and he was shot down over Viet Nam, captured and tortured by the Viet Cong.
The person in the second example was Khalid Sheik Mohammad, the mastermind of the 9-11 attacks on the Twin Towers in New York, the Pentagon in Washington, DC, and a foiled attempt likely aimed at the U.S. Capitol building or the White House, claiming the lives of nearly 3,000 innocent people.
Torture is the action of inflicting severe pain on someone as a punishment or to force them to do or say something, and has been practiced through the ages, and has included the most brutal treatment imaginable.
In interrogation sessions, some techniques are clearly torture, and some techniques are clearly not torture. Somewhere in the middle of these extremes, strong interrogation crosses the thin and fuzzy line into torture. Where that point is seems to be a matter of personal preference.
Having released a controversial partisan report on the CIA’s enhanced interrogation techniques, the U.S. Senate Intelligence Committee charges that the CIA’s techniques constitute torture.
The CIA vigorously disputes the Democrat leadership’s report, saying the methods were thoroughly analyzed and approved by legal consultants prior to their implementation, and that Congressional leaders were briefed on them and accepted the program. Sen. Jay Rockefeller, D-W.Va., is said to have encouraged the program.
The United States does indeed profess and uphold high-minded ideals, and most Americans oppose torture. And through this $40 million report and comments by individual senators, we are told that torture is always and forever wrong.
But is there never a circumstance where torture is justified?
Sen. Dianne Feinstein, D-Calif., thinks not. “In the wake of 9/11, we were desperate to bring those responsible for the brutal attacks to justice. But even that urgency did not justify torture,” states the Chair of the Senate Intelligence Committee. “The United States must be held to a higher standard than our enemies, yet some of our actions did not clear that bar.”
We learn that al Qaeda has placed a suitcase nuke in a major city set to detonate in a few hours. We have captured a member of the group and Sen. Feinstein questions him. He refuses to tell where the bomb is. “Okay. Thank you. Have a nice day,” she says. “After all, we are a people of principle and high morals, and won’t stoop to forceful interrogation.”
Who and how many American lives have to be at risk before those like Sen. Feinstein, clinging to the high moral ground, resort to forceful interrogation methods to save lives? Her spouse? Her hometown? Her Capital office? Or would she sacrifice American lives just to maintain the idealistic moral high ground?
You do not have to support routine use of torture to believe that in extreme cases, torture is acceptable. Many Americans believe nothing is too awful to use on an enemy in order to save lives.
So the issue is not whether the United States can ever use techniques generally agreed to be torture against enemies, but rather to clarify under what circumstances the United States would use those techniques, and how those decisions would be made.
Routine or indiscriminate torture is wrong. Any method used against knowledgeable enemies to save lives must be encouraged. Foolishly clinging to the high moral ground will get Americans needlessly killed. | <urn:uuid:1d2b4888-3a0e-4bab-b2ad-03d84c42cec5> | {
"date": "2018-07-23T00:11:57",
"dump": "CC-MAIN-2018-30",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676594675.66/warc/CC-MAIN-20180722233159-20180723013159-00456.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9691638350486755,
"score": 2.5625,
"token_count": 961,
"url": "http://faultlineusa.blogspot.com/2014/12/americas-dilemma-terrorism-at-home.html"
} |
It’s a beautiful blue berry–
–but what is it?
Leave me a guess in the comments and I’ll check back later with your answer!
We’ve posted this plant before, but not shown its fall berry. Here’s a photo clue with the leaves.
Mile-a-Minute Weed, Persicaria perfoliata, is an invasive plant that grows like the name suggests–very quickly. It also is sometimes called tearthumb or Asiatic Tearthumb, which is a good name with those little thorns. A post we made a year ago in the summer contains links to learn more, but you should be wary if you see this pretty berry and its triangular leaf. And you should pull it before it looks like this:
Or this, covering your native plants like it has on our nearby golf course.
It’s sad, because under that mess were some nice blackberry bushes. | <urn:uuid:4c5c7ebe-a124-4a16-bd74-2d3108cc318c> | {
"date": "2018-02-18T05:00:21",
"dump": "CC-MAIN-2018-09",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891811655.65/warc/CC-MAIN-20180218042652-20180218062652-00056.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9603123068809509,
"score": 2.59375,
"token_count": 204,
"url": "https://thesquirrelnutwork.wordpress.com/2017/10/01/one-of-natures-mysteries-to-solve-212/"
} |
Americans with Disabilities Act (ADA)
What is Americans with Disabilities Act?
The Americans with Disabilities Act (ADA), enforced by the U.S. Equal Employment Opportunity Commission (EEOC), is a federal law designed to prohibit discrimination against any “qualified individual with a disability” in all aspects of employment, including hiring, promotions, discharge, training, and benefits of employment. Under the ADA, employers are only required to provide reasonable accommodations for functional limitations that are due to a disabling condition.
What does “qualified individuals with disability” mean?
According to the Americans with Disabilities Act, the term “qualified individual with a disability” means an individual with a disability who, with or without reasonable accommodation, can perform the essential functions of the employment position that such individual holds or desires.
Myths and Facts about the Americans with Disabilities Act
- ADA suits are flooding the courts.
- The ADA has resulted in a surprisingly small number of lawsuits — only about 650 nationwide in five years. That’s tiny compared to the 6 million businesses; 666,000 public and private employers; and 80,000 units of state and local government that must comply.
- The ADA is rigid and requires businesses to spend lots of money to make their existing facilities accessible.
- The ADA is based on common sense. It recognizes that altering existing structures is more costly than making new construction accessible. The law only requires that public accommodations (e.g. stores, banks, hotels, and restaurants) remove architectural barriers in existing facilities when it is “readily achievable”, i.e., it can be done “without much difficulty or expense.” Inexpensive, easy steps to take include ramping one step; installing a bathroom grab bar; lowering a paper towel dispenser; rearranging furniture; installing offset hinges to widen a doorway; or painting new lines to create an accessible parking space.
- The government thinks everything is readily achievable.
- Not true. Often it may not be readily achievable to remove a barrier — especially in older structures. Let’s say a small business is located above ground. Installing an elevator would not, most likely, be readily achievable — and there may not be enough room to build a ramp — or the business may not be profitable enough to build a ramp. In these circumstances, the ADA would allow a business to simply provide curbside service to persons with disabilities.
- The ADA requires businesses to remove barriers overnight.
- Businesses are only required to do what is readily achievable at that time. A small business may find that installing a ramp is not readily achievable this year, but if profits improve it will be readily achievable next year. Businesses are encouraged to evaluate their facilities and develop a long-term plan for barrier removal that is commensurate with their resources.
- Restaurants must provide menus in braille.
- Not true. Waiters can read the menu to blind customers.
- The ADA requires extensive renovation of all state and local government buildings to make them accessible.
- The ADA requires all government programs, not all government buildings, to be accessible. “Program accessibility” is a very flexible requirement and does not require a local government to do anything that would result in an undue financial or administrative burden. Local governments have been subject to this requirement for many years under the Rehabilitation Act of 1973. Not every building, nor each part of every building needs to be accessible. Structural modifications are required only when there is no alternative available for providing program access. Let’s say a town library has an inaccessible second floor. No elevator is needed if it provides “program accessibility” for persons using wheelchairs by having staff retrieve books.
- Sign language interpreters are required everywhere.
- The ADA only requires that effective communication not exclude people with disabilities — which in many situations means providing written materials or exchanging notes. The law does not require any measure that would cause an undue financial or administrative burden.
- The ADA forces business and government to spend lots of money hiring unqualified people.
- No unqualified job applicant or employee with a disability can claim employment discrimination under the ADA. Employees must meet all the requirements of the job and perform the essential functions of the job with or without reasonable accommodation. No accommodation must be provided if it would result in an undue hardship on the employer.
- Accommodating workers with disabilities costs too much.
- Reasonable accommodation is usually far less expensive than many people think. In most cases, an appropriate reasonable accommodation can be made without difficulty and at little or no cost. A recent study commissioned by Sears indicates that of the 436 reasonable accommodations provided by the company between 1978 and 1992, 69% cost nothing, 28% cost less than $1,000, and only 3% cost more than $1,000.
- The government is no help when it comes to paying for accessibility.
- Not so. Federal tax incentives are available to help meet the cost of ADA compliance.
- Businesses must pay large fines when they violate the ADA.
- Courts may levy civil penalties only in cases brought by the Justice Department, not private litigants. The Department only seeks such penalties when the violation is substantial and the business has shown bad faith in failing to comply. Bad faith can take many forms, including hostile acts against people with disabilities, a long-term failure even to inquire into what the ADA requires, or sustained resistance to voluntary compliance. The Department also considers a business’ size and resources in determining whether civil penalties are appropriate. Civil penalties may not be assessed in cases against state or local governments or employers.
- The Justice Department sues first and asks questions later.
- The primary goal of the Department’s enforcement program is to increase voluntary compliance through technical assistance and negotiation. Under existing rules, the Department may not file a lawsuit unless it has first tried to settle the dispute through negotiations — which is why most every complaint settles.
- The Justice Department never files suits.
- The Department has been party to 20 suits under the ADA. Although it tries extensively to promote voluntary compliance, the Department will take legal action when entities continue to resist complying with the law.
- Many ADA cases involve frivolous issues.
- The Justice Department’s enforcement of the ADA has been fair and rooted in common sense. The overwhelming majority of the complaints received by the Justice Department have merit. Our focus is on fundamental issues related to access to goods and services that are basic to people's lives. We have avoided pursuing fringe and frivolous issues and will continue to do so.
- Everyone claims to be covered under the ADA.
- The definition of “individual with a disability” is fraught with conditions and must be applied on a case-by-case basis.
- The ADA protects people who are overweight.
- Just being overweight is not enough. Modifications in policies only must be made if they are reasonable and do not fundamentally alter the nature of the program or service provided. The Department has received only a handful of complaints about obesity.
- The ADA is being misused by people with “bad backs” and “emotional problems.”
- Trivial complaints do not make it through the system. And many claims filed by individuals with such conditions are not trivial. There are people with severe depression or people with a history of alcoholism who are judged by their employers, not on the basis of their abilities, but rather upon stereotypes and fears that employers associate with their conditions. | <urn:uuid:680b9cc9-1374-43a7-b2a4-0481bd0b48cf> | {
"date": "2015-05-27T21:40:32",
"dump": "CC-MAIN-2015-22",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929171.55/warc/CC-MAIN-20150521113209-00319-ip-10-180-206-219.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9436705112457275,
"score": 3.671875,
"token_count": 1546,
"url": "http://www.hire-ability.org/ada.html"
} |
During the Civil War, air balloons were used for reconnaissance as well as for attacking enemy operations on the ground. Bricks and anything else they could think of were dropped on potential targets. As "civilization" progressed, airplanes replaced the air balloons. Bricks and cannonballs were dropped from the sky. At first, there were very few airplanes. Time passed. All sides had airplanes, and guns were used plane to plane. Fortunately or not, those who were shooting often shot off their own propellers before managing to shoot the enemy planes.
All of the above efforts at improving military prowess involved the force of gravity. Dropping bricks and cannonballs without extra force constitutes FREE FALL. Throwing bricks or shooting bullets, however, involves extra forces as well as free fall. Even with a simple objective like dropping water balloons from a rooftop, timing is of the utmost importance in hitting the target.
This data is from a free fall experiment. The units of measure are seconds for time, t, and feet for height, h. If a relationship exists between time and height, hitting the target is predictable rather than random. See what you can do.
Time t (seconds)    Height h (feet)
4                   256

(graph the data)
- Find an algebraic equation that relates t and h and works for all the data given. This is the mathematical model that will help you predict free fall.
- Describe a situation under which your model of free fall would not apply.
- Using your mathematical model, predict how long it would take for a water balloon to fall 20 feet (about 2 floors).
- A water balloon is held out a window at a height of 15 feet above the ground, and a pedestrian is approaching but is not yet underneath the location directly below the balloon. Draw a sketch. What information do you need to hit your target?
- When dropping water balloons using free fall, what would make the balloons drop faster?
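The single data point in the table (t = 4 s, h = 256 ft) is consistent with the standard free-fall model h = 16t², which comes from h = ½gt² with g ≈ 32 ft/s². A minimal sketch of that model follows; note that the coefficient 16 is inferred from the one surviving data point and standard gravity, not stated explicitly in the worksheet.

```python
import math

def height_fallen(t):
    """Feet fallen after t seconds of free fall (h = 16 * t**2)."""
    return 16 * t ** 2

def fall_time(h):
    """Seconds needed to free-fall h feet (inverse of the model)."""
    return math.sqrt(h / 16)

# Check against the data point from the table: t = 4 s -> h = 256 ft
print(height_fallen(4))        # 256
# Predict the drop time for a 20-foot (roughly two-story) fall
print(round(fall_time(20), 2)) # 1.12
```

For the 20-foot drop, the model gives t = √(20/16) ≈ 1.12 seconds — a reminder of how little margin for timing error there is. The model ignores air resistance, which is one situation where it would not apply.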
"date": "2014-08-01T03:13:40",
"dump": "CC-MAIN-2014-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273874.36/warc/CC-MAIN-20140728011753-00414-ip-10-146-231-18.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9641311168670654,
"score": 4.125,
"token_count": 397,
"url": "http://mathhelpforum.com/advanced-algebra/131-i-just-want-check-my-answers-can-you-help-me-out.html"
} |
Who Owns These Bones is written by Henri Cap, Raphael Martin and Renaud Vigourt and is a unique look at animal skeletons and anatomy for 7-11 year olds.
My son is fascinated by animals of all types and on a recent trip to Wild Discovery, he was especially interested in some pictures of bat skeletons. This got us all talking about bones and skeletons (and bat knees which bend the wrong way). Keen to capitalise on this new enthusiasm for skeletons, we’ve been reading Who Owns These Bones.
Learning about what’s inside our bodies and how it all works is an essential part of science. Learning about animal skeletons and how they work and how they have evolved is both very important and actually quite a lot of fun. In Who Owns These Bones, you will discover a whole host of skeletons and learn about what makes each one unique. You’ll learn how to tell the difference between a human skull and a gorilla’s skull and a whole lot more besides.
The book is informative and facts are presented in slightly more than a bite-size format. Each page has flaps which you can lift up which shows the animal skeleton and the animal itself. The book covers all kinds of skeleton facts; from horns and antlers, teeth; how legs, feet and hands are put together, to the skeletons of creatures which swim, like fish, sharks and squid. Importantly it includes the proper scientific names for things, such as exoskeleton and coelacanth.
Who Owns These Bones is a big book with a lot of information in it. It's not a reference book, but a fun way of exploring biology, evolution and zoology. It's perfectly pitched at the right age group: accessible and interesting enough for my 8 year old to really get it, but with the depth of information and detail that would still enthrall an 11 year old.
It’s a stylish hardback book; printed in mostly blues and oranges, with detailed illustrations of skeletons and creatures alike. There’s some humour and a lot of good solid information. Who Owns These Bones would be a great book for children who are interested in biology and science. It ticks all the right boxes for us; it’s fun, accessible, interesting and educational.
Who Owns These Bones costs £16.99. It’s published by Laurence King and it is available from a wide range of bookshops including Amazon.
For details of more children’s books published by Laurence King, visit their website. | <urn:uuid:418d63fe-0df4-45b6-b89a-45cdf525a2d1> | {
"date": "2019-08-24T01:29:05",
"dump": "CC-MAIN-2019-35",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319155.91/warc/CC-MAIN-20190823235136-20190824021136-00176.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9399248957633972,
"score": 2.625,
"token_count": 537,
"url": "https://hodgepodgedays.co.uk/books/who-owns-these-bones/"
} |
Cardiovascular syphilis refers to infection of the heart and related blood vessels by the syphilis bacteria. This complication usually begins as an inflammation of the arteries. Destruction caused by cardiovascular syphilis can be life-threatening.
Complications of cardiovascular syphilis include narrowing of the blood vessels that supply blood to the heart, which may lead to chest pain (angina), heart attack, and possibly death.
"date": "2015-08-31T15:22:09",
"dump": "CC-MAIN-2015-35",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644066266.26/warc/CC-MAIN-20150827025426-00230-ip-10-171-96-226.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8777053356170654,
"score": 3.109375,
"token_count": 106,
"url": "http://www.sansumclinic.org/body.cfm?id=129&action=detail&AEProductID=HW_Knowledgebase&AEArticleID=hw195446"
} |
In the Garden
HUI KU MAOLI OLA
The seeds and flowers of Wiliwili are made into lei. Its wood is buoyant and used for canoe outriggers.
Latin name: Erythrina sandwicensis
MANY NON-NATIVE species of Erythrina are being grown in Hawaii, and they too are called Wiliwili. This misleads people into thinking that all Wiliwili are native, but there is really only one native species, and Erythrina sandwicensis is it.
The light wood is used culturally for constructing canoe outriggers, floats and surfboards. The seeds and individual flowers were strung into lei.
Description: Deciduous trees up to 40 feet tall with light green leaves, reddish-tan bark, and creamy green to dark orange-colored flowers that develop into seed pods with bright orange seeds. Sharp thorns develop on the trunk and branches. The thorns begin to fade as the tree ages.
Distribution: This endemic plant is found in coastal-lowland dry forests on all the main islands.
Landscape uses and care: Wiliwili does well in any sunny, dry area. Initial watering is OK for establishment but the plant can be weaned completely, making it one of the easiest plants to care for.
Rick Barboza co-owns Hui Ku Maoli Ola, a Native Hawaiian plant nursery with Matt Schirman. "In the Garden" runs Fridays.
"date": "2014-07-25T09:00:49",
"dump": "CC-MAIN-2014-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894140.11/warc/CC-MAIN-20140722025814-00072-ip-10-33-131-23.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9451951384544373,
"score": 2.9375,
"token_count": 323,
"url": "http://archives.starbulletin.com/2003/01/17/features/garden.html"
} |
About Coriander – Coriandrum Sativum
At first sight this herb may remind you of the flat leaf parsley.
Coriander also goes by the name “Chinese Parsley” and “Cilantro”.
In many parts of the English-speaking world, both the leaves and the seeds are referred to as coriander: coriander is the plant, and coriander seeds are its seeds.
In the United States it is common to use the Spanish word “cilantro” when speaking about the coriander leaves.
To add to the confusion, many Americans refer to the seeds alone as “coriander” (in Europe, coriander means the plant). This is something to consider when using recipes from around the world.
Coriander has been used as a cooking and medicinal herb for centuries. The leaves, seeds and root are all edible in the coriander plant.
This is an annual herb, native to the Mediterranean countries and Asia.
About Coriander In Cooking
The coriander root is commonly used in Thai cooking.
The roots of the coriander have a more powerful taste than the leaves.
Coriander seeds are much more intense in flavor than the leaves. The seeds should be ground in a mortar before use.
Do not overuse ground coriander seeds, only small amounts are needed. Ground seeds are used in baking, sauces and sausages. It is also used in curries.
The coriander leaves (cilantro) are lovely to use in cooking. Use lots of leaves to bring out the flavor. Coriander leaves are often used in soups, salads and curries. Try this recipe for a delicious coriander soup!
The coriander leaves are not ideal for slow cooking. Add the leaves shortly before serving.
Coriander leaves need to be used fresh. They are not suitable for drying or freezing as the herb quickly loses its flavor.
Coriander seeds will keep their aroma for years if stored in an airtight container.
Coriander oil is used to flavor gin, vermouth, and liqueurs. It is also used in some perfumes.
About Coriander Health Benefits
Coriander leaves (cilantro) are believed to help fight heavy-metal poisoning, for example from mercury in dental amalgam or from lead.
Dr. Omura discovered how soup with coriander leaves had a positive effect on reducing the amount of heavy metals in the body.
This claim is not officially supported by established medical institutes.
Some tests have, however, shown that coriander has a positive effect on people suffering from diabetes: it helps lower the blood sugar level in the body.
Coriander tea is used in many parts of the Middle East to help relieve anxiety and insomnia.
Coriander seeds are used in Asia to fight fever. Coriander seeds have diuretic qualities.
Coriander increases the appetite and improves digestion. Fresh coriander leaves and coriander seeds are great for the digestive system.
Coriander tea may help dispel wind.
Coriander seeds are used as breath freshener.
It is often used for this purpose by smokers or people who have enjoyed a lovely meal containing garlic.
Coriander tea used as a gargle may relieve toothache.
Coriander oil is much used in aromatherapy. It can ease pains and stress.
Coriander has been used throughout the centuries as an aphrodisiac herb, believed to stimulate the passion to make love.
The civil rights movement consisted of numerous organizations and organized efforts to abolish racial discrimination against African Americans and to restore voting rights in the South between 1954 and 1968. In many situations it took the form of nonviolent civil resistance to discrimination against blacks. Organizations such as the NAACP (National Association for the Advancement of Colored People), the SNCC (Student Nonviolent Coordinating Committee), the COFO (Council of Federated Organizations), CORE (Congress of Racial Equality), and the Black Panthers were all major contributors to the effort to integrate blacks and whites. Forms of protest and civil disobedience included boycotts such as the successful Montgomery Bus Boycott of 1955–1956, sit-ins including the influential Greensboro sit-ins of 1960, marches such as the Selma to Montgomery marches of 1965, and a wide range of other nonviolent acts. All of these acts of protest and civil disobedience had leaders. The most prominent, who delivered speeches and led many of the marches, was Martin Luther King Jr. Other people who had a major influence on the civil rights movement were Jesse Jackson, W.E.B. Du Bois, Rosa Parks, Malcolm X, and the informally named Big Six: Martin Luther King Jr., A. Philip Randolph, Roy Wilkins, Whitney Young, James Farmer, and John Lewis. These leaders and nonviolent acts of resistance led to the abolishment of legalized racial discrimination.
In 1954, in Brown v. Board of Education, the U.S. Supreme Court declared school segregation unconstitutional. In 1955, Rosa Parks was arrested in Montgomery, Alabama, for refusing to give up her seat and move to the back of the bus. A bus boycott followed, and the bus segregation ordinance was declared unconstitutional. It was also the year of the Emmett Till murder case in Mississippi. In 1957, the Arkansas governor used the National Guard to block nine students from attending Little Rock Central High School; following a court order, President Eisenhower sent federal troops to ensure compliance. In 1960, four black college students began sit-ins at a lunch counter in Greensboro, North Carolina, at a restaurant where black customers were not served. Freedom Rides began in 1961, starting in Washington, D.C. and continuing into the Southern states. In 1962, President Kennedy sent federal troops to the University of Mississippi to quell riots so that James Meredith, the school’s first black student, could attend. The Supreme Court also declared segregation unconstitutional in all transportation facilities, and the Department of Defense ordered full integration of military reserve units, excluding the National Guard. In 1963, Medgar Evers, a leader of the NAACP, was assassinated, and Martin Luther King Jr. delivered his “I Have a Dream” speech to hundreds of thousands at the March on Washington. In 1964, Congress passed the Civil Rights Act of 1964, declaring discrimination based on race illegal, after a 75-day filibuster; Malcolm X also delivered his “The Ballot or the Bullet” speech that year. In 1965 came the march from Selma to Montgomery, Alabama, to demand protection for voting rights, and the Voting Rights Act was signed into law; Malcolm X was assassinated at a rally in New York. On April 4, 1968, Martin Luther King Jr. was assassinated in Memphis, Tennessee.
The Selma to Montgomery marches were three marches in 1965 for the voting rights of African Americans, and they represented a peak of the civil rights movement. The first march took place on March 7, 1965, in Selma, Alabama. It is known as “Bloody Sunday” because about 600 civil rights marchers headed east out of Selma on U.S. Route 80 and, on the Edmund Pettus Bridge about six blocks away, were met by local lawmen. The police tried to force them back into Selma by attacking them with billy clubs and tear gas. Two days later, King led another march, and the marchers were later protected by a court order so that they could march without being attacked by the police. Over several days they marched from Selma to Montgomery, Alabama.
Martin Luther King Jr:
“Yes, if you want to say that I was a drum major, say that I was a drum major for justice; say that I was a drum major for righteousness. And all of the other shallow things will not matter.”
“If man hasn’t discovered something that he will die for, he isn’t fit to live.”
“I want to be the white man’s brother, not his brother-in-law.”
“The best way to solve any problem is to remove its cause.”
“Today the choice is no longer between violence and nonviolence. It is either nonviolence or nonexistence.”
“We are out to defeat injustice and not white persons who may be unjust.”
“A riot is at bottom the language of the unheard.”
“Being a Negro in America means trying to smile when you want to cry. It means trying to hold on to physical life amid psychological death. It means the pain of watching your children grow up with clouds of inferiority in their mental skies. It means having their legs off, and then being condemned for being a cripple.”
“Nonviolence is a powerful and just weapon. It is a weapon unique in history, which cuts without wounding and ennobles the man who wields it. It is a sword that heals.”
Malcolm X:
“I see America through the eyes of a victim. I don’t see any American dream. I see an American nightmare.”
” We don’t go for segregation. We go for separation. Separation is when you have your own. You control your own economy; you control your own politics; you control your own society; you control your own everything. You have yours and you control yours; we have ours and we control ours.”
“It’s just like when you’ve got some coffee that’s too black, which means it’s too strong. What do you do? You integrate it with cream, you make it weak. But if you pour too much cream in it, you won’t know you ever had coffee… It used to wake you up, now it puts you to sleep.”
“If it’s necessary to form a Black Nationalist army, we’ll form a Black Nationalist army. It’ll be ballot or the bullet. It’ll be liberty or it’ll be death.”
“No sane black man really wants integration! No sane white man really wants integration!”
Jesse Jackson:
“I am – Somebody. I may be poor, but I am – Somebody! I may be on welfare, but I am – Somebody! I may be uneducated, but I am – Somebody! I must be, I’m God’s child. I must be respected and protected. I am black and I am beautiful! I am – Somebody! Soul Power!”
Find the Pairs
A pair is two things that have something in common.
3 and 7 are a pair of odd numbers.
A left shoe and a right shoe are a pair.
The two things don't have to be exactly the same to be a pair: you can have a pair of matching socks, and you can also have a pair of odd socks.
Click on a sock. Look for the matching one and click on it. Can you find all the pairs of socks in the puzzle? The last two left will be an odd pair!
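The matching procedure the puzzle describes can also be sketched in code. This is an illustrative sketch (the sock colors are invented), not the puzzle's actual implementation:

```python
from collections import Counter

def find_pairs(socks):
    """Group socks into matching pairs; anything left unmatched
    forms the 'odd pair' described in the puzzle."""
    counts = Counter(socks)
    pairs = []
    leftovers = []
    for color, n in counts.items():
        pairs.extend([(color, color)] * (n // 2))  # two of a kind make a pair
        if n % 2:                                  # an unmatched sock remains
            leftovers.append(color)
    return pairs, leftovers

# A drawer with two matching pairs and two odd socks:
pairs, odd = find_pairs(["red", "red", "blue", "blue", "green", "striped"])
# pairs -> [("red", "red"), ("blue", "blue")]; odd -> ["green", "striped"]
```

The two leftover socks make an "odd pair": what they have in common is being unmatched, which echoes the puzzle's point that a pair need not be identical.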
- Airport scanners may increase risk of cancer
- Radiation "dangerously underestimated"
- Skin around face, neck most at risk
US scientists are warning that radiation from controversial full-body airport scanners has been dangerously underestimated and could lead to an increased risk of skin cancer - particularly in children.
University of California biochemist David Agard said that unlike other scanners, the radiation from these devices is delivered at low energy beam levels, with most of the dose concentrated in the skin and underlying tissue.
“While the dose would be safe if it were distributed throughout the volume of the entire body, the dose to the skin may be dangerously high,” Dr Agard said.
"Ionizing radiation such as the X-rays used in these scanners have the potential to induce chromosome damage, and that can lead to cancer."
Of further concern is that a failure in the device – like a power or software glitch - could cause an intense radiation dose to a single spot on the skin.
The warnings come ahead of the planned rollout of the scanners in Australia next year as part of the Federal Government’s crackdown on airport security.
David Brenner, the head of Columbia University’s Centre for Radiological Research, says the concentration on the skin – one of the most radiation-sensitive organs of the body – means the radiation dose is actually 20 times higher than the official estimate.
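The factor-of-20 claim follows directly from how absorbed dose is defined: dose is energy deposited per unit mass, so depositing the same energy in a small fraction of the body's mass raises the local dose in proportion. A minimal sketch (the energy and mass figures here are invented for illustration, not measured scanner values):

```python
def absorbed_dose(energy_joules, mass_kg):
    """Absorbed dose in grays (Gy): energy deposited per unit mass."""
    return energy_joules / mass_kg

body_mass = 70.0             # kg, whole body (assumed)
skin_mass = body_mass / 20   # assume the beam energy stays in ~1/20 of body mass
energy = 7e-6                # J, hypothetical energy deposited by one scan

whole_body_dose = absorbed_dose(energy, body_mass)  # dose averaged over the body
skin_dose = absorbed_dose(energy, skin_mass)        # dose concentrated in the skin
# skin_dose is 20x whole_body_dose: same energy, one-twentieth the mass
```

This is why averaging a skin-deposited dose over the whole body, as the official estimates did, understates the dose the skin itself receives.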
Dr Brenner says the most likely risk from the airport scanners is a type of skin cancer called basal cell carcinoma, which mainly occurs on the head and neck and is usually curable.
The researcher was consulted to write guidelines for the security scanners in 2002 but said he would not have signed the report had he known the devices were going to be used so widely.
"There really is no other technology around where we're planning to X-ray such an enormous number of individuals," he said. “While individual risks will be extremely small, the population risk has the potential to be significant.”
The research also shows children are more vulnerable to radiation damage, because they have more cells dividing at any one time than when fully grown and a radiation-induced mutation can lead to cancer in adulthood.
Officials from the US Transportation Security Administration and the Food and Drug Administration have tried to allay concerns by saying that it would take thousands of trips through the scanners to equal the dose from one X-ray scan in a hospital.
The recent concerns raised by Dr Brenner at the US Congressional Biomedical Caucus have not been officially addressed.
Dr Agard and fellow doctors John Sedat, a molecular biologist and the group's leader; Marc Shuman, a cancer specialist; and Robert Stroud, a biochemist and biophysicist, addressed their concerns to Dr John Holdren, science adviser to US President Barack Obama.
The scientists are calling for more research to be undertaken before the use of the scanners becomes commonplace.
Dr Brenner believes millimetre-wave scanners that use radio waves instead of X-rays would be better to use because they have no known radiation risks.
Full-body scanners no stranger to controversy
The full-body scanners have already caused controversy, with privacy concerns including whether scanned images may breach child pornography laws in various countries.
They have also been criticised as ineffective, with warnings they would be unlikely to detect many of the explosive devices used by terrorism groups.
In other trouble earlier this month, a US airport security screener was suspended for assaulting a colleague who joked about him having small genitalia after he walked through a scanner.
And in March a UK airport security worker said she would sue her bosses after a colleague leered at her “naked” image in a scanner.
Date: April 9, 1998
Contacts: Molly Galvin, Media Relations Associate
David Schneier, Media Relations Assistant
(202) 334-2138; e-mail <[email protected]>

EMBARGOED: NOT FOR PUBLIC RELEASE BEFORE NOON EDT THURSDAY, APRIL 9

Learning About Evolution Critical for Understanding Science
WASHINGTON -- Many public school students receive little or no exposure to the theory of evolution, the most important concept in understanding biology, says a new guidebook from the National Academy of Sciences (NAS). Teachers are reluctant to teach evolution because of pressures from special-interest groups to downplay or eliminate it as part of the science curriculum. Moreover, some are advocating that creationism be taught in public schools -- even though the Supreme Court ruled in 1987 that creationism is a religious idea that cannot be mandated in public education.
In an effort to move beyond the debate and focus attention on effective instruction, the Academy has issued a new guidebook, Teaching About Evolution and the Nature of Science, to provide educators and policy-makers with tools to help integrate lessons about the scientific theory with basic biology for children in kindergarten through grade 12. The guidebook was written by a group of prominent scientists and educators who have been involved extensively in education and research on evolution.
"The widespread misunderstandings about evolution are of great concern to the scientific community and the Academy," said Bruce Alberts, NAS president and one of the book's authors. "Evolution is the central organizing principle that biologists use to understand the world. If we want our children to have a good grasp of science, we need to help teachers, parents, school administrators, and policy-makers understand both evolution and the nature of science. They also must recognize that many scientists are religious people, and that religion and science represent different approaches to understanding the human condition that are not incompatible with each other."
Teaching evolution is essential for explaining some of the most fundamental concepts of science, the guidebook says. Like all scientific theories, evolution explains natural phenomena by building logically on observations that can be tested and analyzed. The book:

- summarizes the massive amount of scientific evidence in support of evolution and suggests effective ways of teaching it;
- explains the nature of science and how it differs from other ways of knowing about the natural world;
- provides eight sample activities that teachers can use to develop students' understanding of evolution and scientific inquiry; and
- answers some of the most frequently asked questions about the scientific, legal, and educational issues surrounding the teaching of evolution.
"Biology simply cannot be taught well without covering evolution," said Donald Kennedy, a co-author of the guidebook and Bing Professor of Environmental Studies, Stanford University, Stanford, Calif. "Students who understand the process of evolutionary change are able to grasp its vital practical consequences, such as how bacteria develop resistance to antibiotics. A failure to teach effectively about evolution will rob students of a precious opportunity -- to understand how life on Earth has developed and to appreciate their own place in the world."
The guidebook does not attempt to refute the ideas of those who oppose the teaching of evolution. Rather, it points out that most religious denominations in the United States do not view evolution as being at odds with their understanding of human origins. The idea that the entire universe was created all at once about 10,000 years ago -- an idea inherent in "creation science" -- is not supported by scientific data. However, the concept of evolution is unquestioned by almost all scientists.
Many who oppose teaching evolution charge that it is "just a theory, not a fact," and should be taught as such in the classroom. But scientists do not use the word "theory" to describe an unsubstantiated idea. In science, theories are explanations based on a large body of established facts. The debate about evolution in the scientific community is focused on the details of how evolution occurs, not whether it occurs, the publication says. Nearly all scientists agree that biological evolution is the most sound theory to explain the diversity of life.
Science teachers can use the new publication in conjunction with the National Science Education Standards -- voluntary guidelines introduced three years ago by the National Research Council to ensure that all students achieve scientific literacy through improving what is taught, how it is taught, and how students are assessed. The science standards stress the importance of evolution because understanding the theory is essential to mastering basic biology and learning how science works. Teaching About Evolution and the Nature of Science also provides criteria for evaluating school science programs and the content and design of instructional materials.
The project was funded by the Howard Hughes Medical Institute, the Esther A. and Joseph Klingenstein Fund Inc., and the Council of the National Academy of Sciences. The Academy is a private, non-profit institution that provides science advice under a congressional charter.
Read the full text of Teaching About Evolution and the Nature of Science for free on the Web, as well as more than 1,800 other publications from the National Academies. Printed copies are available for purchase from the National Academy Press Web site or at the mailing address in the letterhead; tel. (202) 334-3313 or 1-800-624-6242. Reporters may obtain a pre-publication copy from the Office of News and Public Information at the letterhead address (contacts listed above).

NATIONAL ACADEMY OF SCIENCES
NATIONAL RESEARCH COUNCIL
Center for Science, Mathematics, and Engineering Education

Working Group on Teaching Evolution

Donald Kennedy(1,2) (chair), Bing Professor of Environmental Studies, Institute for International Studies, Stanford University, Stanford, Calif.
Bruce Alberts(1), President, National Academy of Sciences, Washington, D.C.
Danine Long Ezell, Science Resource Teacher, San Diego City Schools, San Diego
Timothy H. Goldsmith, Professor of Biology, Department of Biology, Yale University, New Haven, Conn.
Robert M. Hazen, Staff Scientist, Geophysical Laboratory, Carnegie Institution of Washington, Washington, D.C.
Norman Lederman, Professor, College of Science, Science and Mathematics Education, Oregon State University, Corvallis
Joseph D. McInerney, Director, Biological Sciences Curriculum Studies, Colorado Springs, Colo.
John A. Moore(1), Professor Emeritus of Biology, University of California, Riverside
Eugenie C. Scott, Executive Director, National Center for Science Education Inc., El Cerrito, Calif.
Maxine F. Singer(1,2), President, Carnegie Institution of Washington, Washington, D.C.
Mike U. Smith, Associate Professor of Medical Education, Department of Internal Medicine, Mercer University School of Medicine, Macon, Ga.
Marilyn J. Suiter, Director, Education and Human Resources, American Geological Institute, Alexandria, Va.
Rachael Wood, Co-Chair, Science Frameworks Commission, Delaware State Department of Public Instruction, Dover

Center for Science, Mathematics, and Engineering Education Staff
Rodger Bybee, Executive Director
Patrice Legro, Division Director

(1) Member, National Academy of Sciences
(2) Member, Institute of Medicine
We have had a few calls about problems with vinca groundcover this season. The plants develop blackened stems and die rather rapidly. Management of the problem depends on accurate disease identification. Two fungal diseases of this groundcover have been fairly common in the past few years in Illinois. Because of the similarity of symptoms, it is likely that many cases have been misdiagnosed. Phoma blight (Phomopsis blight) is probably the more common of the two. Rhizoctonia root rot can produce some very similar symptoms but requires different management.
Phoma blight is caused by the fungus Phoma exigua var. exigua. It is most common in rainy periods. Shoots turn brown or black, wilt, and die, usually to the soil surface. Black lesions can be found on the stems, girdling and killing all tissue beyond the infection. Within the black lesions, the fungus forms black fruiting bodies the size of a pinhead. The fungus remains on the plant stems under the plant canopy, making this disease very difficult to control. If you can’t see the fruiting bodies, try placing affected tissue in a plastic bag with a moist paper towel overnight; then look for the fruiting bodies the next day.
Rhizoctonia root rot causes brown, rotted areas on the roots. Poor root growth results in poor top growth, so dying shoots are prevalent with this disease as well. Black lesions may even appear on the stems. The diagnostic clincher is that fruiting bodies are not found in the lesions on plants infected with Rhizoctonia. In addition, this disease affects roots, so closely examine the roots to distinguish between these two diseases.
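The diagnostic distinction described in the two paragraphs above amounts to a simple decision rule. As a hedged sketch (the function and its inputs are invented for illustration; an actual diagnosis should rest on careful examination of the tissue):

```python
def diagnose_vinca(fruiting_bodies_in_lesions, roots_rotted):
    """Rough decision rule from the symptom descriptions above:
    pinhead-sized black fruiting bodies in the stem lesions point to
    Phoma blight; rotted roots without fruiting bodies point to
    Rhizoctonia root rot."""
    if fruiting_bodies_in_lesions:
        return "Phoma blight (Phomopsis blight)"
    if roots_rotted:
        return "Rhizoctonia root rot"
    return "inconclusive - incubate tissue in a moist bag overnight and re-check"

diagnose_vinca(True, False)   # -> "Phoma blight (Phomopsis blight)"
diagnose_vinca(False, True)   # -> "Rhizoctonia root rot"
```

Because management differs between the two diseases, the fruiting-body check is the decisive step before choosing a fungicide.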
Both diseases are very difficult to control. Try to avoid overhead watering or excessive watering of vinca beds. It may be helpful to improve air circulation in the area by pruning surrounding plant material and overhanging branches. Because the fungus can survive in the soil on dead plant material, remove fallen leaves and dead tissue. This task may seem to be impossible--to remove all the dead material and still have live plants. Work with plants when they are dry to avoid further spread of the disease. It has been suggested that new plantings be mulched with black plastic perforated every 4 to 6 inches and then covered with pea gravel or ground corncobs. In most cases, we would avoid the plastic mulch suggestion, but this may be the only way to establish a healthy bed of vinca.
Fungicides that may provide some protection against Phoma blight include iprodione (Chipco 26019), azoxystrobin (Heritage), copper hydroxide (Nu-Cop, Fertilome Blackspot, Champion, Kocide), thiophanate-methyl (Bonide Bonomyl, Dragon 3336, or Ferti-lome Halt), thiophanate-methyl and mancozeb (Zyban), potassium bicarbonate (Armicarb, Bonide Remedy), fludioxonil (Medallion), and mancozeb (Pentathalon or Protect T/O). Azoxystrobin and thiophanate-methyl are systemic products; iprodione is locally systemic. The other chemicals have a protective–contact mode of action and do not provide the same degree of control of the pathogen without multiple applications.
Fungicides that may slow progress of Rhizoctonia root rot include iprodione (Chipco 26019) and PCNB (Engage, Terraclor). Iprodione has a locally systemic mode of action, and PCNB is a protective–contact fungicide. These are not available for homeowner use.
These diseases are most prevalent in cool, wet conditions; but infection can occur anytime from June to August following periods of cool, wet weather. Try to get an accurate diagnosis now so you are ready to help manage this problem. Rhizoctonia can occur even in dry conditions. These diseases are very persistent in vinca plantings, and their presence is one of the main reasons that growers often seek an alternative groundcover. Stem blight of vinca minor is discussed in Report on Plant Disease, no. 640, available at local Illinois Extension offices or on the Internet at http://www.ag.uiuc.edu/~vista/horticul.htm.
The Adventure of Scouting
In the outdoors, youth have opportunities to acquire skills that make them more self-reliant. They can explore canoe paths and hiking trails and complete challenges they first thought were beyond their ability. Attributes of good character become part of a youth as he or she learns to cooperate to meet outdoor challenges that may include extreme weather, difficult trails and portages, and dealing with nature's unexpected circumstances.
Learning by doing is a hallmark of outdoor education. Unit meetings offer information and knowledge used on outdoor adventures. A leader may describe and demonstrate a Scouting skill at a meeting, but the way Scouts truly learn an outdoor skill is to do it themselves on a unit outing.
Scouting uses the patrol method to teach skills and values. Scouts elect their own patrol leader, and they learn quickly that by working together and sharing duties, the patrol can accomplish far more than any of its members could do alone. The patrol succeeds when every member of the patrol succeeds, and Scouts learn that good teamwork is the key to success.
Health and wellness is part of the outdoor experience. As Scouts hike, paddle, climb, bike, or ride, their muscles become toned and their aerobic capacity increases. When they work as a patrol to plan menus for their outings, they learn to purchase cost-effective ingredients to prepare flavorful and nutritious meals.
Service to others and good citizenship are learned through such outdoor activities as conservation projects, collecting food, building trails and shelters, and conducting community service projects that promote healthy living. Through helping other people, Scouts learn how they can share themselves and their blessings with those in need. By giving service to benefit others, Scouts gain a sense of personal satisfaction.
You may have noticed that when you browse the globe in Google Earth, the center section of the 3D viewer is sharp, while the areas close to the edge are not.
This sharper area is called the detail area. You can change the size of this area or make the entire view detailed. Click Tools > Options > 3D View (on the Mac, click Google Earth > Preferences > 3D View) and choose an appropriate Detail Area size.
However, choosing a larger detail area can slow down your computer if you have an older, less powerful video card on your machine. Setting an appropriate detail area is a trade-off between an overall better image on your screen and performance. Try different settings and see what works best for you.
One of the key challenges facing today's energy market is to provide highly efficient, low-cost, and environmentally benign alternative-energy devices in the near future, including solar and wind power, geothermal and hydroelectric power, and batteries and supercapacitors. The impending exhaustion of fossil fuels and the necessity to lower our dependence on them, the desire to develop a more sustainable transportation system, and the demand for a cleaner and more secure energy future are all pushing the unprecedented research effort and massive technological innovations that have been experienced in the last few years. Concurrently, global investment initiatives in renewable energy have seen a rapid boost in recent years, driven by concerns about climate change, the forecast of an increasing cost of fossil fuels, and national economic policies to create jobs. Looking forward, global investment in renewable-energy projects alone will rise from $195 billion in 2010 to $395 billion in 2020 and to $460 billion by 2030. Over the next 20 years this growth will require nearly $7 trillion of new capital. Reflecting the rising production and investment levels, the installed capacity of renewable power sources has been projected to climb, reaching 2.5 TW by 2030, a growth of over 800%. Although most of this market is currently occupied by solar and onshore wind-power units, lithium (Li)-ion batteries (LIBs), supercapacitors, and their hybrid devices have seen an upsurge in market share in recent years, mostly due to the resurgence of electric vehicles and a renewed push for reducing airborne pollution from vehicles. In addition to electric vehicles, LIBs and supercapacitors play important roles in our daily lives by powering numerous portable consumer electronics, including laptops, personal digital assistants, and cell phones.
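The projections above imply growth rates that are easy to check: compound annual growth rate (CAGR) links a start value, an end value, and a number of years. A quick sketch using the figures quoted above (the derived percentages are my own arithmetic, not from the text):

```python
def cagr(start, end, years):
    """Compound annual growth rate: (end/start) ** (1/years) - 1."""
    return (end / start) ** (1.0 / years) - 1.0

# Investment figures from the text, in billions of dollars:
rate_2010_2020 = cagr(195, 395, 10)   # about 7.3% per year
rate_2010_2030 = cagr(195, 460, 20)   # about 4.4% per year

# "Growth of over 800%" to 2.5 TW means the end value is 9x the base,
# implying a starting installed base of roughly 2.5 / 9, or about 0.28 TW.
```

Note that the implied annual growth rate for 2010-2030 is lower than for 2010-2020, consistent with the text's figures: most of the projected investment increase happens in the first decade.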
Making Strides Toward Preventing Inhibitors in Bleeding Disorders
CDC’s Division of Blood Disorders is committed to reducing the occurrence of inhibitors. Find out what they are doing to reach their goal.
Centers for Disease Control and Prevention’s (CDC’s) Division of Blood Disorders (DBD) is committed to reducing the occurrence of inhibitors, which are the most significant and costly complication affecting people with bleeding disorders like hemophilia and von Willebrand Disease (VWD).
In March 2012, DBD hosted the first Inhibitor Summit, a national meeting on inhibitors to discuss the urgent need for a system to monitor this serious complication of treatment. This discussion informed efforts by DBD to begin collecting and monitoring information on inhibitor development, treatment, and outcomes by building onto an existing healthcare monitoring system.
Nearly five years later, DBD held the Second Inhibitor Summit on January 30, 2017. This meeting of over 40 key stakeholders and subject matter experts evaluated the current state of patient enrollment in the system and how to improve accuracy and representativeness, discussed how inhibitors are currently being monitored, and generated ideas for how this monitoring system can be better used or improved to prevent inhibitors.
What are inhibitors?
People with hemophilia, and many with VWD type 3, use treatment products called clotting factor concentrates (“factor”). These treatment products improve blood clotting, and they are used to stop or prevent a bleeding episode. When a person develops an inhibitor, the body stops accepting the factor treatment product as a normal part of blood. The body thinks the factor is a foreign substance and tries to destroy it with an inhibitor. The inhibitor keeps the treatment from working, which makes it more difficult to stop a bleeding episode. A person who develops an inhibitor will require special treatment until his or her body stops making inhibitors. Inhibitors most often appear during the first 50 times a person is treated with clotting factor concentrates, but they can appear at any time. In addition, it is estimated that 1 out of every 5 people with hemophilia will develop an inhibitor in their lifetimes.
Why are inhibitors an important area of study for CDC?
Treatment for people with an inhibitor poses special challenges. The healthcare costs associated with inhibitors can be staggering because of the amount and type of treatment product required to stop bleeding. Also, people with hemophilia or VWD who develop an inhibitor are twice as likely to be hospitalized for a bleeding complication, and they are at increased risk of death. CDC is working to reduce the number of people who develop inhibitors, decrease healthcare costs for people with bleeding disorders, and ensure that all treatment products are safe and effective for people with bleeding disorders.
What is the system CDC uses to monitor inhibitors?
The system CDC uses to monitor inhibitors is called the Community Counts Registry for Bleeding Disorders Surveillance. This registry monitors the health of people who receive care at Hemophilia Treatment Centers (HTCs). It is funded by CDC through a cooperative agreement awarded to the American Thrombosis and Hemostasis Network in partnership with the U.S. Hemophilia Treatment Center Network. The program gathers and shares valuable information on a variety of health outcomes, including those related to inhibitors. For example, through the Registry, CDC can establish estimates of the number of participants with new and existing inhibitors, identify unexpected increases in how often inhibitors develop, and identify risks for inhibitor development.
What was the purpose of the Second Inhibitor Summit?
CDC held the meeting to discuss how inhibitors are currently being monitored and how this monitoring system can be better used or improved to prevent inhibitors.
The goals of the meeting were to
- Share information about the Community Counts Registry for Bleeding Disorders Surveillance and the current state of national inhibitor monitoring,
- Identify steps to collect the most accurate and representative national data possible on the occurrence of inhibitors,
- Determine strategies to make sure that inhibitor testing methods are accurate, and
- Explore the need for a national, coordinated inhibitor science agenda.
Who attended the Summit?
Meeting attendees included
- Federal partners: In addition to CDC, representatives from the National Institutes of Health (NIH), the Health Resources and Services Administration (HRSA) and the Food and Drug Administration (FDA) attended,
- Bleeding disorder community partners: National Hemophilia Foundation (NHF), Hemophilia Federation of America (HFA), World Federation of Hemophilia, American Thrombosis and Hemostasis Network,
- Representatives of scientific organizations: NHF’s Medical and Scientific Advisory Committee, Hemostasis and Thrombosis Research Society, International Society of Thrombosis and Haemostasis, and others, and
- Representatives of pharmaceutical companies with an interest: Bayer Healthcare Pharmaceuticals, Grifols, Novo Nordisk, Pfizer, and others.
What is CDC doing to prevent inhibitors?
- CDC is working with partners to develop educational materials for patients and care providers about the importance of regular testing for inhibitors.
- CDC is working with other Federal partners, including the NIH, HRSA, and FDA, to strengthen support for prevention efforts.
- CDC is working with 130 federally supported HTCs located throughout the United States to strengthen the monitoring system for people who receive care at HTCs and to learn more about what causes inhibitors.
What does CDC want people with hemophilia or VWD type 3 to know?
CDC encourages people with hemophilia or VWD type 3 to
- Get tested for inhibitors once a year,
- Take advantage of the free inhibitor testing at federally funded HTCs through the CDC Community Counts Registry for Bleeding Disorders Surveillance program,
- Seek support and learn from other individuals and families who have been affected by inhibitors, and
- Consider participating in research studies that can result in new prevention programs that help reduce medical complications and lead to improved quality of life.
Tumors are not made of cancer cells alone. Instead, they are a complex mix of cancerous cells and normal cells that form a rich network – known as the microenvironment – that supports and nurtures a growing tumor. Think of the microenvironment as the “soil” that nourishes a cancer “seed.” Scientists could target that “soil” with anti-cancer drugs, making it an inhospitable place for a tumor to grow and spread. Such an approach could greatly increase the effectiveness of traditional anti-cancer treatments, according to researchers at The Wistar Institute.
This is a new way of thinking for the field of cancer medicine, a break from its traditional goal of minimizing damage to healthy cells. Wistar researchers, led by Ellen Puré, Ph.D., have demonstrated that a molecule called fibroblast activation protein (FAP), which is found in normal cells, has a critical role in the tumor microenvironment.
Puré and her colleagues have demonstrated that targeting FAP – or deleting the gene that codes for it – can significantly reduce tumors in mice with lung and colon cancer by blocking some of the important biological processes required for tumor growth. Since FAP is expressed in nearly 90 percent of all human solid tumor cancers, drugs that target the protein may effectively boost the ability of specific drugs to attack both the tumors and their supporting cells.
Waste legislation and regulations
Guidance for businesses and organisations on how waste disposal is regulated and what they need to do to comply.
Relevant legislation and regulations
The EU Waste Framework Directive provides the legislative framework for the collection, transport, recovery and disposal of waste, and includes a common definition of waste (PDF, 81.3KB, 11 pages) . The directive requires all member states to take the necessary measures to ensure waste is recovered or disposed of without endangering human health or causing harm to the environment and includes permitting, registration and inspection requirements.
The directive also requires member states to take appropriate measures to encourage firstly, the prevention or reduction of waste production and its harmfulness and secondly the recovery of waste by means of recycling, re-use or reclamation or any other process with a view to extracting secondary raw materials, or the use of waste as a source of energy. The directive’s requirements are supplemented by other directives for specific waste streams.
The Waste (England and Wales) (Amendment) Regulations 2012 were laid before Parliament and the Welsh Assembly on 19 July 2012 and came into force on 1 October 2012. The amended regulations relate to the separate collection of waste. They amend the Waste (England and Wales) Regulations 2011 by replacing regulation 13. From 1 January 2015, waste collection authorities must collect waste paper, metal, plastic and glass separately. They also impose a duty on waste collection authorities, from that date, when making arrangements for the collection of such waste, to ensure that those arrangements are by way of separate collection.
These duties apply where separate collection is necessary to ensure that waste undergoes recovery operations in accordance with the directive and to facilitate or improve recovery, and where it is technically, environmentally and economically practicable. The duties apply to waste classified as waste from households and to waste classified as commercial or industrial waste. The amended regulations also replaced regulation 14(2) to reflect the changes to regulation 13 and to ensure a consistent approach. Consequential changes were also made to reflect changes in paragraph numbering in the new regulation 13.
Our combined Summary of consultation responses and government response to the consultation on amending the Waste Regulations 2011 on the separate collection of recycling has also been published.
Environmental permitting for waste
The recovery and disposal of waste requires a permit under EU legislation with the principal objective of preventing harm to human health and the environment. This legislation also allows member states to provide for exemptions from the need for a permit, providing general rules are laid down for each type of exempt activity, and the operation is registered with the relevant registration authority. We have given effect to the EU requirements through the Environmental Permitting (England and Wales) Regulations 2010 (the 2010 regulations). More information is available on the National Archive and on the Environment Agency website.
Hazardous waste regulations
Hazardous waste is essentially waste that contains hazardous properties which if mismanaged has the potential to cause greater harm to the environment and human health than non-hazardous. As a result, strict controls apply from the point of its production, to its movement, management, and recovery or disposal.
Waste shipment regulations
Waste shipment regulations are comprised of EU Regulations, a UK statutory instrument and a UK Plan. Between them, they control movements of waste between the UK and other countries and provide a framework for enforcement. Some movements are prohibited, others are subject to prior written notification and consent procedures and some are subject to basic administrative controls. The control depends on the nature of the waste, its destination and whether it is destined for recovery or disposal. You can find more information on the National Archive.
UK Ship recycling strategy
Ship recycling is a global issue. Defra considers the environmentally sound management of ships to be a high priority and in 2007 issued a Ship Recycling Strategy for UK ships.
Electrical and electronic equipment
The Waste Electrical and Electronic Equipment (WEEE) and Restriction of Hazardous Substances in electrical and electronic equipment (RoHS) directives aim to reduce the quantity of waste from electrical and electronic equipment and increase its re-use, recovery and recycling. The RoHS directive aims to limit the environmental impact of electrical and electronic equipment when it reaches the end of its life. It does this by restricting the use of certain hazardous substances in electrical and electronic equipment across the Community. More information is available on the Environment Agency website.
Packaging, packaging waste and packaging waste regulations
These regulations aim to harmonise national measures concerning the management of packaging and packaging waste to provide a high level of environmental protection and to ensure the functioning of the internal market. For more details read the government’s policy on reducing and managing waste.
Landfill Directive
This directive aims to prevent or reduce as far as possible negative effects on the environment from the landfilling of waste, by introducing stringent technical requirements for waste and landfills and setting targets for the reduction of biodegradable municipal waste going to landfill. For more information, read the government’s policy on reducing and managing waste.
End-of-life vehicles (ELVs) Regulation 2003
This regulation aims to prevent waste from end-of-life vehicles and promote the collection, re-use and recycling of their components to protect the environment. More information is available on the Environment Agency website.
Batteries Directive
This directive aims to improve the environmental performance of batteries and minimise the impact waste batteries have on the environment. It does this by:
- restricting the use of cadmium and mercury in the design and manufacture of new batteries
- setting collection and recycling targets for waste portable batteries
- banning the disposal of untreated automotive or industrial batteries in landfill or by incineration
- Waste incineration legislation
- Environmental Protection Act 1990
- Environment Act 1995
- information on the Waste and Emissions Trading Act 2003
- Producer responsibility obligations (packaging waste) regulations 1997
- battery disposal on the Environment Agency website
- The Finance Act and Landfill Tax Regulations 1996
- Waste Minimisation Act 1998
EU Waste Framework Directive
The revised EU Waste Framework Directive (revised WFD) (PDF, 146KB, 28 pages) was adopted when the Environment Council met on 20 October 2008, signed on behalf of the European Parliament and the Council on 19 November 2008 and published in the Official Journal of the European Union on 22 November (L312/3) as Directive 2008/98/EC. The revised WFD entered into force on 12 December 2008.
The revised WFD also provides that the European Commission may introduce a range of measures by means of comitology procedure (eg end-of-waste criteria for specified waste streams). These provisions will be available to the Commission from the date of the revised WFD’s entry into force.
Legal definition of waste guidance
This guidance is aimed at businesses and other organisations which take decisions on a day-to-day basis about whether something is or is not waste. In most cases, the decision is straightforward and whoever is taking the decision does not need guidance from the competent authorities to help them take it. However, in some cases, the decision is more difficult (eg where the substance or object has a value or a potential use or where the decision is about whether waste has been fully recovered or recycled and has therefore ceased to be waste). The aim of the guidance is to help ensure that the right decision is taken in these more difficult cases.
We consulted on the draft at the start of 2010 but pledged to publish the full document soon after the publication of the EC guidance on the WFD, to ensure that the definition of waste in the 2 documents was still fully aligned:
The legislation to transpose the revised WFD into national law has been made by Parliament and the devolved administrations. The Waste (England and Wales) Regulations 2011 came into force from 29 March 2011.
Other European Commission measures
The European Commission has introduced the following measures by means of the comitology procedure:
End-of-waste criteria developed under Article 6 of the revised WFD for: aluminium and ferrous scrap metal
- Council Regulation 2011/333 establishing when certain types of scrap metal (aluminium and iron & steel) cease to be waste (PDF, 754KB, 10 pages)
Information on national end-of-waste criteria (Quality Protocols) can be found on the Environment Agency’s website.
If you have any queries, please contact the Defra helpline
Introduction to the waste hierarchy
Many businesses are unaware of how significantly waste impacts on their bottom line. As the demand for materials grows worldwide, raising input costs, it makes sense for businesses to adopt the waste hierarchy.
Article 4 of the revised EU Waste Framework Directive sets out 5 steps for dealing with waste, ranked according to environmental impact - the ‘waste hierarchy’.
Prevention, which offers the best outcomes for the environment, is at the top of the priority order, followed by preparing for re-use, recycling, other recovery and disposal, in descending order of environmental preference.
- Prevention: using less material in design and manufacture, keeping products for longer, re-use, using less hazardous materials
- Preparing for re-use: checking, cleaning, repairing, refurbishing, whole items or spare parts
- Recycling: turning waste into a new substance or product, includes composting if it meets quality protocols
- Other recovery: includes anaerobic digestion, incineration with energy recovery, gasification and pyrolysis which produce energy (fuels, heat and power) and materials from waste, some backfilling
- Disposal: landfill and incineration without energy recovery
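As a rough illustration of how the priority order works in practice, the hierarchy can be encoded as an ordered list, with a helper that returns the most preferred option among those judged feasible for a given waste stream. This is a hypothetical sketch, not part of the official guidance; the option names and helper function are illustrative only.

```python
# Illustrative sketch only; the hierarchy itself comes from the directive,
# while the option names and helper below are hypothetical.
WASTE_HIERARCHY = [
    "prevention",
    "preparing for re-use",
    "recycling",
    "other recovery",
    "disposal",
]

def preferred_option(feasible_options):
    """Return the most environmentally preferred option among those
    judged technically, environmentally and economically practicable."""
    for option in WASTE_HIERARCHY:
        if option in feasible_options:
            return option
    # Last resort: landfill or incineration without energy recovery.
    return "disposal"
```

For example, if prevention and re-use are impracticable for a residual stream but recycling is feasible, the helper returns recycling, mirroring the descending order of environmental preference.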
The waste hierarchy has been transposed into UK law through the The Waste (England and Wales) Regulations 2011. The regulations came into force on 29 March 2011. The provisions relating to the hierarchy (set out at in Regulations 12, 15 and 35) came into force on 28 September 2011.
What you need to do
If your business or organisation (including local authorities on behalf of householders) produces or handles waste (this includes importing, producing, carrying, keeping or treating waste; dealers or brokers who have control of waste, and anyone responsible for the transfer of waste), you must take all such measures as are reasonable in the circumstances to:
- prevent waste
- apply the waste hierarchy when you transfer waste
Defra published in 2011 a package of guidance to assist businesses and other organisations in England to make better decisions on waste and resource management. This guidance considers the environmental impacts of various waste management options for a range of materials:
- a shorter summary guidance aimed particularly at SMEs
- an evidence paper which summarises current scientific research on the environmental impacts of various waste management options
- an online support tool which produces a tailored guide for businesses according to what waste material they handle - see http://wastehierarchy.wrap.org.uk
As well as through the transposing regulations, the revised waste hierarchy has been incorporated through:
- the planning system via an update to Planning Policy Statement 10: Planning for sustainable waste management
- the environmental permitting regime, if you are operating a site that requires a permit under the Environmental Permitting Regulations (England and Wales) Regulations 2010 - in addition to the duties described above, a condition in new or revised permits will place a duty on the permit holder to apply the hierarchy - if you are an existing permit holder, this new condition will apply when your permit comes up for review (see Environmental permitting guidance).
Defining the waste hierarchy stages
The definitions of each of the stages can be found in Article 3 of Directive 2008/98/EC. Non-exhaustive lists of disposal and recovery operations can be found in Annexes I and II of the directive:
Prevention - measures taken before a substance, material or product has become waste, that reduce:
- the quantity of waste, including through the re-use of products or the extension of the life span of products
- the adverse impacts of the generated waste on the environment and human health
- the content of harmful substances in materials and products
Re-use - any operation by which products or components that are not waste are used again for the same purpose for which they were conceived.
Preparing for re-use - checking, cleaning or repairing recovery operations, by which products or components of products that have become waste are prepared so that they can be re-used without any other pre-processing
Recycling - means any recovery operation by which waste materials are reprocessed into products, materials or substances whether for the original or other purposes. It includes the reprocessing of organic material but does not include energy recovery and the reprocessing into materials.
Recovery - means any operation the principal result of which is waste serving a useful purpose by replacing other materials which would otherwise have been used to fulfil a particular function, or waste being prepared to fulfil that function, in the plant or in the wider economy.
Disposal - means any operation which is not recovery even where the operation has as a secondary consequence the reclamation of substances or energy. Annex I sets out a non-exhaustive list of disposal operations
Deciding the priority order for each waste material
Our guidance is based on the best evidence currently available. As waste management technologies evolve, so their impact on the environment relative to other options may change. The current research shows that for food, anaerobic digestion is environmentally better than composting and other recovery options. The evidence also indicates that for garden waste and for mixtures of food waste and garden waste, dry anaerobic digestion followed by composting is environmentally better than composting alone.
Likewise, the scientific data for certain waste management technologies is currently limited, eg for pyrolysis and rendering. So we are unable to determine their environmental benefits relative to other options within the hierarchy.
Businesses and local authorities may consider other factors when they make decisions on waste, including social and economic impacts, and technical feasibility. These factors will vary with the size of an organisation, the range of materials it handles and its location. The relevance of these factors will have to be weighed on a case-by-case basis.
As new technologies emerge, we will review the evidence available annually and update our guidance on the hierarchy accordingly.
Anaerobic digestion - environmentally preferable to composting
The scientific evidence we currently have, based on life-cycle analysis, shows that for food, anaerobic digestion (AD) is environmentally better than composting and other recovery options. The evidence also indicates that for garden waste and for mixtures of food waste and garden waste, dry anaerobic digestion followed by composting is environmentally better than composting alone.
This is because anaerobic digestion produces both biogas, which can be used to generate vehicle fuel, heat, electricity, combined heat and power, and digestate, which can be used instead of fossil fuel-intensive fertilisers. The combination of both outputs means that anaerobic digestion is environmentally preferable to composting.
The directive does not mandate the use of one option over the others. Businesses and local authorities may consider other factors when they make decisions on waste, including social and economic impacts, and technical feasibility.
The evidence indicates that for garden waste, and for mixtures of food waste and garden waste that are not suitable for dry anaerobic digestion, composting is environmentally better. The relative merits of composting depend on the compost being used in place of fertiliser or peat. In terms of greenhouse gas emissions, composting and energy recovery are broadly similar.
Recycling and energy from waste
Recovery activities such as energy from waste are also a key part of the hierarchy. The evidence shows that for most materials recycling is better for the environment than energy from waste (EfW) and that EfW is better than landfill.
The government wants to reduce residual waste. However, there will be a need to deal with this type of waste for the foreseeable future and recycling alone cannot currently meet the ambition for diversion from landfill. There is no immediate risk of EfW facilities being deprived of feedstock.
Other sources of support
The Environment Agency has produced advice on the Waste (England and Wales) Regulations 2011.
Separate guidance for England and Wales
In England, the decision has been made to use a range of criteria to inform the waste hierarchy - climate change, air pollution, water pollution and resource depletion. In Wales, the hierarchy is informed by ecological footprinting. Because of this difference in methodology, the 2 guidance documents are not always the same, although they often reach similar conclusions. Separate guidance will be produced in Wales in due course.
Published: 9 April 2013
Updated: 9 May 2014
- Removed information about the Waste Hierarchy Guidance Review 2012. The review was put on hold and the information is now obsolete.
- First published.
The key to the Leap Motion system is better algorithms. This means it could be adapted to use other kinds of sensing than infrared, such as radar or light.
LIDAR uses ultraviolet, visible, or near infrared light to image objects and can be used with a wide range of targets, including non-metallic objects, rocks, rain, chemical compounds, aerosols, clouds and even single molecules.
Vastly improved 3D sensors at lower cost would accelerate the development, capabilities and adoption of robotics and robotic cars.
This trend would also be enhanced by improvements in voice recognition from systems like Apple's Siri.
Leap Motion has discussed making their system as small as a coin. This means the system should get cheaper and could be added to all smartphones and tablets and incorporated into other gadgets.
Perfected voice recognition and near-micron-accurate 3D sensing would transform smartphones. They could be combined with augmented reality glasses for gesture and voice input while interacting with virtual projected images.
Photo Credit: YouTube/ NASA Video
In a bid to make an organised effort to overcome the obstacles that lie before a human journey to Mars, NASA has identified hazards that astronauts could encounter on a continual basis on the Red Planet.
The space agency's Human Research Programme (HRP) used ground-based analogues, laboratories, and the International Space Station (ISS), to evaluate human performance and countermeasures required for the exploration of Mars, expected to be in 2030s.
The team divided the hazards into five classifications - radiation; isolation and confinement; distance from Earth; gravity (or lack thereof); and hostile or closed environments.
"Above Earth's natural protection, radiation exposure increases cancer risk, damages the central nervous system, can alter cognitive function, reduce motor function and prompt behavioural changes," NASA said in a statement on Monday.
To mitigate this, deep space vehicles will have significant protective shielding, dosimetry, and alerts.
Further, crews are to be carefully chosen, trained and supported to ensure they can work effectively as a team for months or years in space.
Sleep loss, circadian desynchronisation, and work overload compound the issue of isolation and confinement and may lead to performance decrements, adverse health outcomes, and compromised mission objectives.
Another hazard is the distance from Earth. Mars is, on average, 140 million miles from Earth, and the astronauts would be away for roughly three years.
For example, when astronauts aboard the ISS face a medical event or emergency, the crew can return home within hours. Additionally, cargo vehicles continually resupply the crews with fresh food, medical equipment, and other resources.
However, once you burn your engines for Mars, there is no turning back and no resupply.
"Facing a communication delay of up to 20 minutes one way and the possibility of equipment failures or a medical emergency, astronauts must be capable of confronting an array of situations without support from their fellow team on Earth," NASA said.
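The quoted delay figures can be sanity-checked with a rough back-of-the-envelope calculation, assuming a straight-line, light-speed signal path (a simplification; actual delays depend on orbital geometry). The function below is illustrative, not from NASA:

```python
# Illustrative back-of-the-envelope calculation; assumes a straight-line,
# light-speed signal path (real delays vary with orbital geometry).
MILES_TO_KM = 1.609344
SPEED_OF_LIGHT_KM_S = 299_792.458

def one_way_delay_minutes(distance_miles):
    """One-way signal delay in minutes for a given Earth-Mars distance."""
    distance_km = distance_miles * MILES_TO_KM
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

# Roughly 12.5 minutes at the 140-million-mile average distance quoted above;
# near maximum separation (about 250 million miles) it exceeds 22 minutes.
avg_delay = one_way_delay_minutes(140e6)
max_delay = one_way_delay_minutes(250e6)
```

This puts the article's "up to 20 minutes" figure in context: it corresponds to large, but not quite maximum, Earth-Mars separations.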
A human mission to Mars can also encounter the variance of gravity.
On Mars, astronauts would need to live and work in three-eighths of Earth's gravitational pull for up to two years. This can affect their bones, muscles, and cardiovascular system.
NASA is identifying how current and future US Food and Drug Administration-approved osteoporosis treatments could be employed to mitigate the risk of astronauts developing the premature bone loss condition.
The spacecraft bound for Mars will include important habitability factors such as temperature, pressure, lighting, noise, and quantity of space. It's essential that astronauts get the requisite food, sleep and exercise needed to stay healthy and happy.
"While these five hazards present significant challenges, they also offer opportunities for growth and innovation in technology, medicine and our understanding of the human body," the space agency stated. | <urn:uuid:137a5896-13ea-491e-a993-07b880fef61b> | {
"date": "2019-06-18T01:21:50",
"dump": "CC-MAIN-2019-26",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998600.48/warc/CC-MAIN-20190618003227-20190618025227-00496.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9243706464767456,
"score": 3.796875,
"token_count": 576,
"url": "https://gadgets.ndtv.com/science/news/nasa-explains-hazards-of-manned-mission-to-mars-1914946"
} |
I am not really sure I can understand the meaning of the phrase in concept of. I see it is used in a variety of situations. Could anybody help me understand what it means?
1 History shows that there have been major changes in concept of childhood in Eurocentric cultures before
2 Council Regulation (EEC) No 1191/69 of 26 June 1969 on action by Member States concerning the obligations inherent in concept of a public service in transport by rail, road and inland waterway
3 If the assignment is in concept of a payment pro solvendo or pro soluto in exchange of a sale of goods or a provision of services, the law applicable shall be....
4 The earliest hint of Confucianist Shinto can be seen in concept of Confucian-Shinto unity
5 On the other hand in concept of physical wellness, there are physical activities that sustain bodily atrophy which are part of the personal habits of man to live in a better world in the satisfaction of human wants
6 Contributions in concept of happiness
I see the phrase is also used in legal contexts (examples 2 and 3). Could you say if it has a different meaning in these cases? | <urn:uuid:fae16215-a2b1-4eca-9256-6798fff6f0a1> | {
"date": "2015-03-28T15:25:22",
"dump": "CC-MAIN-2015-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131297587.67/warc/CC-MAIN-20150323172137-00038-ip-10-168-14-71.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9499311447143555,
"score": 3.15625,
"token_count": 241,
"url": "http://www.usingenglish.com/forum/threads/133477-in-concept-of"
} |
Above: Shankar Banik, Ph.D., The Citadel
As seen in The Journal
For decades, security meant locking the doors and windows of your home, maybe turning on an alarm system, keeping your banking information and credit cards away from strangers, and placing your personal papers and passport in a safe.
Today, you have hundreds of doors and windows into your life, and your banking, credit card, passport and other personal information are housed in databases you don’t own or control. You can take steps to secure it all, but there are thousands of malicious actors from all over the world employing sophisticated tools to hijack your data and profit from it.
Or just ruin your life.
And there is only so much you can do about it.
That’s the sobering conclusion, the more you know about cybersecurity.
Dr. Shankar Banik, a professor in the Department of Cyber and Computer Sciences, NSA/DHS CAE-CDE program director and co-director of the Center for Cyber, Intelligence, and Security Studies at The Citadel, says cybersecurity is a problem that is managed, not solved. You can take precautions to minimize your vulnerability, but as recent high-profile attacks on the state of South Carolina, Target, Facebook, Twitter, Marriott Hotels and many, many other organizations demonstrate, no one’s information is totally safe.
Cybersecurity in The Lowcountry
Dr. Banik teaches a cybersecurity course as part of the MS in Computer and Information Sciences (jointly offered by The Citadel and College of Charleston). While initially offered at the Lowcountry Graduate Center, classes today for the joint program are taught on the main campuses of the two institutions in Charleston. Students in the course use hands-on techniques in a closed environment to learn how to detect and prevent cyberattacks.
There are some actions he says we can all take to reduce our vulnerability to cyberattacks.
- Use multiple passwords online.
- Use complex passwords that include upper and lower case letters, numbers and symbols.
- Only download apps and software from trusted providers.
- Be wary of all emails and scrutinize the email address before opening.
- Never open email attachments you aren’t sure about.
- Don’t share sensitive personal information on social media.
- Turn off the microphone on your smart speaker when you’re not talking to it. Otherwise it is constantly listening to everything said and done around it.
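The password guidance in the list above can be expressed as a simple check. The sketch below is illustrative only — production systems should rely on vetted password-strength libraries — and the specific rules encoded here (minimum length, the four character classes) are assumptions based on the article's list, not Dr. Banik's exact criteria.

```python
import string

def is_complex(password: str, min_length: int = 12) -> bool:
    """Check a password against the complexity rules listed above:
    upper- and lower-case letters, digits, and symbols."""
    if len(password) < min_length:
        return False
    checks = [
        any(c.isupper() for c in password),              # upper-case letter
        any(c.islower() for c in password),              # lower-case letter
        any(c.isdigit() for c in password),              # digit
        any(c in string.punctuation for c in password),  # symbol
    ]
    return all(checks)

print(is_complex("correct-Horse7battery"))  # True
print(is_complex("password"))               # False
```

A check like this enforces only structure; using a different password for every service, as the first bullet advises, matters at least as much.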
“The more online services you use in your daily life, the more vulnerable you are,” Dr. Banik warns.
People Are the Weak Link
Cybersecurity analysts detect vulnerabilities in systems via a variety of tests. These include penetration testing to find weak points, inventorying all the devices in a network, and constantly scanning systems.
Once vulnerabilities are identified, they establish controls to strengthen the weak points, build firewalls to manage what comes into the network and conduct “system hardening” – ensuring software updates are all installed.
The greatest weakness in any network is beyond the control of cyber security experts – it’s the users.
“Humans are the weakest link,” Dr. Banik says. “You can have all the most sophisticated software and hardware, you can have the best virus protection, but at the end of the day, all a hacker needs is one entry point into a network.”
Dr. Banik cautions against opening spearphishing emails that look like messages from friends and professional contacts. He says cybersecurity experts find themselves in the education business, teaching network users what he calls “cyber hygiene” — ways to avoid becoming some malicious hacker’s victim.
“We’re teaching it in middle school now because they’re already using the internet,” he says.
Dr. Banik teaches students the three pillars of network security — confidentiality, integrity and availability. That translates to keeping private information out of unauthorized hands, protecting the system from attack and keeping it running all the time. He says a big part of the course is showing the engineers he teaches how to write more secure code.
But even with all the protections, your information is going to get stolen from some organization with which you do business. The defenses keep getting more sophisticated, but so do the hackers. | <urn:uuid:68873f64-5d01-4ed8-bece-63aaaa542d57> | {
"date": "2019-08-23T04:46:52",
"dump": "CC-MAIN-2019-35",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317847.79/warc/CC-MAIN-20190823041746-20190823063746-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9209721088409424,
"score": 2.578125,
"token_count": 899,
"url": "https://today.citadel.edu/cybersecurity-expert-citadel-shankar-banik-security/"
} |
Makkah's Place in Arabia
Despite the fact that Yaman was the most advanced province in the Arabian Peninsula and the most civilized on account of its fertility and the sound administration of its water resources, its religious practices never commanded the respect of the inhabitants of the desert. Its temples never constituted a single center of pilgrimage. Makkah, on the other hand, and its Ka'bah, the house of Isma'il, was the object of pilgrimage ever since Arab history began. Every Arab sought to travel to it. In it the holy months were observed with far more ado than anywhere else. For this reason, as well as for its distinguished position in the trade of the peninsula as a whole, it was regarded as the capital. Further, it was to be the birthplace of Muhammad, the Arab prophet, and became the object of the yearning of the world throughout the centuries. Its ancient house was to remain holy forever. The tribe of Quraysh was to continue to enjoy a distinguished and sovereign position. All this was to remain so forever despite the fact that the Makkans and their city continued to lead a life closer to the hardness of bedouin existence which had been their custom for many tens of centuries.
"date": "2019-04-22T17:09:50",
"dump": "CC-MAIN-2019-18",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578558125.45/warc/CC-MAIN-20190422155337-20190422181337-00496.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.986088216304779,
"score": 3.03125,
"token_count": 259,
"url": "https://rasoulallah.net/en/articles/article/5556"
} |
Certainly, there is a need to occupy healthcare. Healthcare is essential, and the prevention and treatment that happens in clinics and hospitals, emergency rooms and community health centers, is integral to improving and saving lives.
Yet, while medical care is essential, it accounts for only an estimated 10-15% of preventable mortality in the U.S. The true causes of our country’s poor health outcomes and health inequities – and thereby the real solutions to improving health – are not rooted in the provision of healthcare.
They are rooted in communities: in sidewalks and parks, in access to healthy food and adequate housing, in clean air and safe neighborhoods.
What does this mean? It means that to alter health outcomes and inequities, we must go beyond occupying healthcare.
We must occupy the junk food and fast food industries, whose marketing power and lobbying power (leading to the maintenance of skewed agricultural subsidies) impact what we eat and what is available for us to eat.
We must occupy the criminal justice system. The U.S., with less than 5% of the world’s population, has almost 25% of its prisoners, the majority of whom are people of color, people with mental health issues and drug addiction, and people with low levels of educational attainment. This exacerbates poor health outcomes related to substance abuse and mental health; worsens health inequities by race, ethnicity, and socioeconomic status; and to boot, has done little if anything to make neighborhoods safer.
We must occupy zoning policies and construction and planning industries to improve inequities in access to healthy food, enhance safety and walkability, reduce unintentional injuries (which are the leading cause of morbidity and mortality among children in the U.S.), and reduce the excessive energy use and pollution that stems from our homes and buildings, as well as long commutes in personal motor vehicles (of which we have more in this country than licensed drivers).
We must occupy the welfare system, which focuses on services that – despite what are often good intentions – do not empower citizens, tap into their problem solving capacity, or enhance their ability to take collective action to better their communities, as John McKnight argues in an article entitled “Services are Bad for People”.
We must occupy the news and entertainment media. Whether it is news stories that inaccurately and dangerously link bullying directly to suicide in a way that can elevate suicide contagion risk by suggesting suicide is a natural response to bullying; fictional TV characters eating hordes of junk food day in and day out, without any consequences; or music videos that normalize gender-based violence, the media play an enormous role in our perceptions of what is “normal”, shaping our behaviors in a way that has significant impact on health outcomes. | <urn:uuid:953e46e4-fd33-41d0-9219-54d50e38a762> | {
"date": "2015-09-03T21:20:25",
"dump": "CC-MAIN-2015-35",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645328641.76/warc/CC-MAIN-20150827031528-00169-ip-10-171-96-226.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9578432440757751,
"score": 2.65625,
"token_count": 561,
"url": "http://occupyhealthcare.net/2011/10/beyond-healthcare-occupying-for-health/"
} |
Definition of geosynchronous in English:
(Of an earth satellite or its orbit) having a period of rotation synchronous with that of the earth’s rotation.
- There, the rockets will link up, creating an 80-ton spacecraft that will ascend to 22,000 miles and lock into geosynchronous orbit.
- Up to a dozen geosynchronous satellites go out of service every year, and there are now several hundred derelicts in the disposal orbit.
- The key to this concept was the placement of space stations in geosynchronous Earth orbit, a location 35,786 kilometers above Earth.
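The 35,786-kilometer altitude quoted in that last example follows directly from Kepler's third law: a satellite whose orbital period matches Earth's sidereal rotation must orbit at one particular radius. A minimal sketch of the calculation, using standard published values for Earth's gravitational parameter and sidereal day:

```python
import math

GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.0905   # Earth's rotation period, seconds
EARTH_RADIUS_KM = 6378.1    # equatorial radius, km

# Kepler's third law: T^2 = 4*pi^2 * r^3 / GM  =>  r = (GM * T^2 / (4*pi^2))^(1/3)
radius_m = (GM_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
altitude_km = radius_m / 1000.0 - EARTH_RADIUS_KM

print(f"Geosynchronous altitude: {altitude_km:,.0f} km")  # ~35,786 km
```

The "22,000 miles" in the first example is the same altitude expressed in miles.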
"date": "2016-02-11T20:34:33",
"dump": "CC-MAIN-2016-07",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701162648.4/warc/CC-MAIN-20160205193922-00069-ip-10-236-182-209.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.881266176700592,
"score": 2.8125,
"token_count": 170,
"url": "http://www.oxforddictionaries.com/definition/american_english/geosynchronous"
} |
Ready Child, Ready School
School readiness describes the status and ongoing progress a child makes within the domains of physical well-being and motor development, social and emotional development, language and comprehension development, and cognition and general knowledge. By monitoring each child’s progress across multiple domains, teachers, parents, schools, and caregivers can provide needed support to ensure each child’s success in school.
About the School Readiness Initiative
Senate Bill 08-212, Colorado’s Achievement Plan for Kids (CAP4K), passed in 2008 with the goal of aligning Colorado’s preschool through postsecondary education system. The act included provisions related to school readiness for both the State Board of Education and local education providers.
State Board of Education: The State Board of Education is required to define school readiness, which was accomplished in 2008. The State Board is also required to adopt one or more assessments aligned with the definition of school readiness.
Local Education Providers: Beginning in the fall of 2013, CAP4K requires local education providers to ensure all children in publicly-funded preschool or kindergarten receive an individual school readiness plan. Also, local education providers must administer the school readiness assessment to each student in kindergarten. To enable the state to identify more options for the school readiness assessment menu, CDE is advising districts to phase-in the school readiness provision of CAP4K by the 2015-16 school year.
School Readiness Assessment Menu
CAP4K requires that all students in a publicly funded kindergarten be assessed using as state approved school readiness assessment. The purpose of school readiness assessment is to inform the development of an individual school readiness plan in order to provide a responsive learning environment for each child. Information gathered from school readiness assessments is to be used for supportive and instructional purposes and cannot be used to deny a student admission or progression to kindergarten or first grade.
In December 2012, the State Board of Education voted to offer districts a menu of school readiness assessments. Beginning in 2010, CDE engaged a school readiness assessment committee with early childhood educators and experts from across Colorado to advise the department on implementation of the school readiness initiative. The committee assisted in the review of assessments following the criteria established in CAP4K. To review the rubric, please read the Request for Information RFI School Readiness Assessment 2014 2.
In 2012, the State Board approved Teaching Strategies GOLD as the first assessment tool for the menu. At their October 2015 meeting, the State Board voted to add three additional assessments to the menu: Riverside Early Assessments of Learning (REAL), Desired Results Developmental Profile (DRDP-K 2015), and Teaching Strategies GOLD Survey. To read a summary of the reviewed assessments, please click here.
In late October 2014, the department will be releasing a communication about a series of regional meetings across the state to provide educators with information about the different assessment tools, options for funding assessment subscriptions, and training opportunities.
CAP4K does not provide funding for school readiness assessments. Colorado’s Race to the Top Early Learning Challenge Fund grant will cover the initial cost of school readiness assessment subscriptions. Districts applied for funding for the 2015-16 school year in January, 2015.
School Readiness Plans
CAP4K requires that each child in a publicly funded preschool and kindergarten program have an Individual School Readiness plan (IRP). An IRP is an individual learning plan that is informed by ongoing assessment of a child’s progress in the developmental and academic domains. The department encourages educators to consider the IRP plan to be a living document where a child’s progress is recorded and a tool for informing instruction.
CDE created this document which include a sample school readiness plan for optional use or modification at the local level.
This webinar explains how the School Readiness plan, the READ Act and IEPs work together. | <urn:uuid:e4254c9b-3afc-4ad5-89f1-306dd1aedda5> | {
"date": "2014-10-31T10:06:46",
"dump": "CC-MAIN-2014-42",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637899531.38/warc/CC-MAIN-20141030025819-00201-ip-10-16-133-185.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9225243330001831,
"score": 3.046875,
"token_count": 782,
"url": "http://www.cde.state.co.us/schoolreadiness/assessment.asp"
} |
Prostate health begins with regular screening
Unlike obesity, diabetes and heart disease, prostate disease is a health concern that’s entirely unique to men, and primarily found in men older than 55. Prostate cancer — a disease in which malignant cells form in the tissue of the prostate — is one of the most common types of cancer diagnosed in men in the United States, second only to non-melanoma skin cancer, according to the Centers for Disease Control and Prevention.
More than 200,000 American men are diagnosed with prostate cancer each year and the disease claims the lives of more than 28,000 men annually, making it one of the leading causes of cancer death among men of all races. Becoming informed about prostate health, as well as having regular consults with a urologist or primary-care physician, can help older men stay healthy.
The American Urological Association released updated guidelines for prostate cancer screening at its annual meeting in San Diego in May, recommending that men ages 55 through 69 with an average risk of developing prostate cancer receive prostate-specific antigen, or PSA, blood tests every two years.
However, men with an increased risk of developing the disease, such as those who have had a close family member — father, uncles, brothers or cousins — diagnosed with prostate cancer and men of African-American ethnicity, are recommended to begin the screening process earlier, at age 45. Men in these two groups are also more likely to develop more aggressive forms of prostate cancer, according to Dr. J. Kellogg Parsons, a urologist and associate professor of surgery at the UC San Diego School of Medicine.
Parsons, who also sits on the National Comprehensive Cancer Network for the Early Detection of Prostate Cancer panel, is quick to stress that prostate cancer screening guidelines are just that, and that screening itself is not without risk.
“Every decision to screen, every decision to check a PSA blood test, needs to be made in the context of the conversations with your doctor,” Parsons said. “You have to discuss the benefits and potential risk of screenings with your doctor and come to a decision about whether or not screening is the best choice for you.”
Potential side effects to prostate cancer treatments include urination and erection problems. It’s up to patients to discuss both the risks and benefits with their doctor to determine when, and if, to begin screenings or treatments.
Even in the event that prostate cancer is detected, aggressive treatments may not always be necessary.
“There are a lot of prostate cancers that don’t necessarily need to be treated right away,” Parsons said. “There’s something called active surveillance, or watchful awareness, and a lot of men qualify for that. Those are men who do not have aggressive prostate cancers who may be safely monitored without specific treatments, and that is another alternative that we talk to patients about.”
Men should also be aware of another potential, but non-cancerous, prostate concern: Benign prostatic hyperplasia, or BPH, which affects up to 75 percent of men by the time they turn 75. Symptoms include going to the bathroom frequently, getting up at night to urinate, difficulty urinating and urinary infection.
“(BPH) is not something that you can necessarily screen for; it’s something that men should talk to their doctors about as they develop the symptoms because there are various ways to treat it,” Parsons said. | <urn:uuid:d92b84c7-a30f-4ca7-a9e2-034126da263f> | {
"date": "2015-09-05T11:22:25",
"dump": "CC-MAIN-2015-35",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440646242843.97/warc/CC-MAIN-20150827033042-00168-ip-10-171-96-226.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9652549624443054,
"score": 2.9375,
"token_count": 721,
"url": "http://www.sandiegouniontribune.com/news/2013/Jun/11/prostate-health-regular-screenings-men/all/"
} |
Below is one of the first videos I watched as I started to research modern slavery. It's a compilation of news reports about trafficking in Las Vegas. After watching the video, I learned that child slavery and sex trafficking exist in the United States. This led me to learn more about groups such as Shared Hope International, Gems Girls, Not For Sale, Truckers Against Trafficking, Generate Hope and many others.
Some facts about trafficking in the United States.
- Within 48 hours of running away, a child will be contacted by a human trafficker.
- In the United States, the average age for girls to enter into prostitution is 12-14 years old.
- It's estimated that 14,000–17,000+ people are trafficked into the United States each year. That's approximately 1,250 people per month, or 41 per day.
- There are over 300,000 slaves in the United States today. That's the equivalent of...
- The total stadium attendance for 10 Major League Baseball games.
- The number of people attending Disneyland over 9 days.
- The population of New Orleans, LA.
- The total number of children attending 500 elementary schools in the USA. | <urn:uuid:a0bb6dde-0025-41c9-8c96-2c6cf52bbedb> | {
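The per-month and per-day figures above are simple to verify. The check below assumes a round estimate of 15,000 people per year (near the low end of the 14,000–17,000+ range), which appears to be what the quoted rates were computed from:

```python
trafficked_per_year = 15_000  # assumed round estimate from the range above

per_month = trafficked_per_year / 12
per_day = trafficked_per_year / 365

print(f"~{per_month:,.0f} people/month")  # ~1,250
print(f"~{per_day:.0f} people/day")       # ~41
```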
"date": "2017-08-19T07:21:27",
"dump": "CC-MAIN-2017-34",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105326.6/warc/CC-MAIN-20170819070335-20170819090335-00576.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9171525239944458,
"score": 3.015625,
"token_count": 250,
"url": "http://abolitionistjb.blogspot.com/2010/05/sex-trafficking-in-las-vegas.html"
} |
Yesterday we discussed the ways that soil is essential to life on land. Almost every fiber of our being, with the exception of the water and the ideas in your head, is a product of soil. Yet, most of us have very little connection to the soil that feeds our bodies. We place a tremendous amount of trust in people that we have never met to manage our soil.
Go to the link and pick one of the articles listed. Provide your peers with an overview of the article. Be detailed in your description of the topic and provide your own analysis of the condition of the soil. | <urn:uuid:ce518336-0279-4e86-819c-01326a8c0a27> | {
"date": "2017-06-26T05:27:30",
"dump": "CC-MAIN-2017-26",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320679.64/warc/CC-MAIN-20170626050425-20170626070425-00497.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9469015002250671,
"score": 2.578125,
"token_count": 119,
"url": "http://dcscienceworld.blogspot.com/2008/04/soil-renewable-resource.html"
} |
A new report from the USDA found that the animal feed produced by U.S. ethanol plants (known as dried distillers’ grains or DDGs) is replacing even more corn and soybean meal in livestock and poultry feed rations than previously thought. The report’s findings have important implications for discussions regarding ethanol’s impact on feed grains availability, feed prices, land use effects and the greenhouse gas (GHG) impacts of producing corn ethanol.
According to the report by USDA’s Economic Research Service (ERS), “Findings demonstrate that, in aggregate (including major types of livestock/poultry), a metric ton of DDG can replace, on average, 1.22 metric tons of feed consisting of corn and soybean meal in the U.S.”
Every 56-lb. bu. of corn processed by a dry mill ethanol plant generates 2.8 gal. of ethanol and approximately 17.5 lbs. of animal feed. In essence, the new ERS report dispels the conventional assumption that every bushel of corn processed by an ethanol plant generates an amount of feed equivalent to just one-third of the original corn bushel. ERS underscored this point by stating, “Feed market impacts of increased corn use for ethanol are smaller than that indicated by the total amount of corn used for ethanol production because of DDGs.” In fact, ERS found the amount of feed (corn and soybean meal) replaced by the DDGs represents nearly 40% (on a weight basis) of the corn used in the associated ethanol production process for a given crop year.
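ERS's "nearly 40%" figure can be reproduced from the numbers given above — 17.5 lbs of feed per 56-lb bushel, and the 1.22:1 replacement ratio from the report:

```python
CORN_BUSHEL_LBS = 56.0       # weight of one bushel of corn
FEED_PER_BUSHEL_LBS = 17.5   # DDGs produced per bushel at a dry mill plant
REPLACEMENT_RATIO = 1.22     # tons of corn/soybean-meal feed replaced per ton of DDGs

feed_replaced_lbs = FEED_PER_BUSHEL_LBS * REPLACEMENT_RATIO
share_of_bushel = feed_replaced_lbs / CORN_BUSHEL_LBS

print(f"Feed replaced per bushel: {feed_replaced_lbs:.2f} lbs")   # 21.35 lbs
print(f"Share of original corn weight: {share_of_bushel:.1%}")    # ~38.1%, i.e. "nearly 40%"
```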
“The value of the animal feed produced by the ethanol industry has long been misunderstood, understated and misrepresented,” says Geoff Cooper, RFA vice president of research and analysis. “Distillers grains continue to be the industry’s best kept secret, despite the fact that we are producing tremendous volumes of this high-value feed product today. DDGs and other ethanol feed products significantly reduce the need for corn and soybean meal in animal feed rations. Over the past several years, distillers’ grains have been one of the most economically competitive sources of energy and protein available on the world feed market. While some critics of the ethanol industry attempt to downplay the role of DDGs, the facts simply can’t be ignored.”
One of the reasons that 1 ton of DDGs can replace more than 1 ton of conventional feed is that its energy and protein content are concentrated. Only the starch portion of the corn kernel is converted to ethanol, while the protein, fat, fiber and other components are concentrated and passed through the process to the distillers’ grains. Grain ethanol feed product volumes approached 39 million metric tons in the 2010-2011 marketing year, an amount of feed that would produce nearly 50 billion quarter-pound hamburger patties. Nearly 25% of U.S. ethanol feed output is exported to countries around the world to feed livestock and poultry.
More complicated, but no less important, is the impact of DDGs on land use change and the GHG emissions associated with corn ethanol production. Most existing biofuel regulations, including California’s Low Carbon Fuels Standard (LCFS), significantly undervalue the contribution of DDGs when assessing the net GHG impacts of corn ethanol. For instance, the California Air Resources Board (CARB) assumed for its LCFS analysis that 1 metric ton of DDGs replaces only 1 metric ton of corn, with no substitution of soybean meal. Using information from the new ERS report would significantly increase corn ethanol’s GHG emission benefits. The importance of distillers’ grains assumptions in carbon accounting and land use change calculations is described in more detail here (pdf).
“The RFA has long pointed out that the importance of DDGs is being undervalued by the regulatory agencies responsible for federal and state regulations that require a GHG assessment of ethanol,” says Cooper, highlighting two 2009 reports (pdf)sponsored by RFA that reached similar conclusions as the new ERS report. “USDA’s new analysis clearly shows the importance of accurate DDGs accounting. The Environmental Protection Agency and CARB should immediately adopt these new findings into their GHG modeling for the RFS2 and LCFS. The resulting decrease in ethanol’s lifecycle GHG emissions could be significant.” | <urn:uuid:f7af1e34-7e5b-4198-9e9c-cd8e930e467d> | {
"date": "2016-07-31T11:02:59",
"dump": "CC-MAIN-2016-30",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828314.45/warc/CC-MAIN-20160723071028-00147-ip-10-185-27-174.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9316614270210266,
"score": 3.046875,
"token_count": 911,
"url": "http://cornandsoybeandigest.com/energy/ddg-feed-more-valuable-traditional-corn-soybean-rations"
} |
OVERREACHING DNA POLICIES IN INDIA
By Elonnai Hickok
Over the years Indian law enforcement agencies have been permitted, through evolving legislation, to collect material containing DNA as a way of providing additional evidence for the conviction of criminals in India. Starting in the 1920s, the collection and use of biometrics for identification of criminals legally began for India with the approval of the Identification of Prisoners Bill. The object of the Bill is to “provide legal authority for the taking of measurements of finger impression, foot-prints, and photographs of persons convicted or arrested.” The Bill is still enforced in India, and in October 2010 was amended by the State Government of Tamil Nadu to include “blood samples” as a type of forensic evidence. Other Indian legislation pertaining to forensic evidence is the Code of Criminal Procedure (CrPc) and the Indian Evidence Act. In 2005, the CrPc was amended to authorize investigating officers to collect DNA samples with the help of a registered medical practitioner. Both the CrPc and the Indian Evidence Act fail to address the collection and testing of DNA effectively as they do not set procedures for how the DNA samples should be collected, stored, shared, accessed, secured, and destroyed.
Though India allows the collection of DNA samples by law enforcement agencies for identification purposes, it does not have a national law in force that regulates how governments collect, store, create, and use DNA profiles of accused persons. A DNA profile is created when DNA samples are taken from individuals and are analyzed in laboratories to produce a digitized representation of the sequence. Once created, a DNA profile is stored on a database with other identifying information from the individual and information from the crime scene. Creating DNA profiles and using them to solve crimes has been a growing global practice over the past two decades. Despite the lack of explicit safeguards and regulations, both governmental and non-governmental laboratories have been collecting, testing, and storing DNA samples/profiles for many years. These laboratories function off of internal policies and run DNA tests for both forensic purposes (identifying criminals, victims, etc., conducted by both private and public labs) and personal purposes (paternity and medical, conducted by private labs).
In the past few years, two pieces of legislation that serve to regulate the use of DNA for forensic purposes have been drafted or proposed in India. The most recent legislation, titled the Privacy Bill 2011, was leaked to the public in the spring of this year. If passed, the Bill would allow for the collection of DNA samples only with the consent of an individual, and would prohibit the public disclosure of such information to the extent that it would adversely affect an individual’s right to privacy in a way that would amount to a civil wrong. Though the Bill creates an important standard by mandating consent, it fails to comprehensively protect and regulate the use of DNA data. In 2007, a Bill known as the Draft DNA Profiling Bill was piloted by the Centre for DNA Fingerprinting and Diagnostics, an autonomous organization funded by the Department of Biotechnology in India’s Ministry of Science and Technology. The Bill is pending in Parliament, and aims to legalize the collection and analysis of DNA samples for forensic purposes in order to “enhance the protection of people and administration of justice through analysis of DNA found at the crime scene, and establish identity of victim and offenders.” In its current state, the Bill would permit DNA to be collected and stored in a way that raises many concerns related to privacy and civil liberties.
Most concerning, through a list that outlines the circumstances in which DNA can be collected, the Bill allows the DNA of innocent people who are not connected to a crime scene, are not victims, and are not criminals to be added to DNA databases. This list can be expanded by the DNA Board as it deems appropriate. Furthermore, the Bill does not specify exactly at what point DNA can be collected: for example, whether DNA can be collected on arrest or on charge, whether the DNA has to be directly relevant to the offence, whether the police decide this for themselves, and what oversight mechanisms govern these decisions. Permitting the collection and storage of innocent people’s DNA is dangerous for many reasons and extends the core rationale of collecting DNA far beyond “forensic purposes.” As noted by the American Constitution Society for Law and Policy, by indiscriminately adding individuals’ DNA data to these databanks, the governmental intent presumptively changes from criminal investigation to population surveillance. The debate over holding an innocent person’s DNA is key to understanding the core of what can and should be protected when formulating safeguards and regulations. Does the state ever have an interest in DNA aside from criminal identification? If so, should the government collect the DNA explicitly for that purpose?
Even in maintaining data for investigative purposes, there are questions as to which data should be kept. On the one hand, conviction is a bright line. On the other hand, if there was significant evidence but not enough to convict, is the Government justified in wanting to keep evidence in case a pattern of crime starts emerging? Is the answer the same for all countries? For all crimes? Is the answer derived from a fundamental understanding of state versus individual or is it a reflection of a specific national ethos? Who decides?
Another area of concern is that the Bill allows for the complete storage of DNA samples and DNA profiles from volunteers, suspects, victims, offenders, children (with parental consent), and convicted persons. Complete DNA samples taken from individuals contain a person’s entire genetic record (including health-related information) and are not needed once the profile is created. The primary purpose of retaining DNA profiles on a criminal database is to help identify the individual if they reoffend, not to exonerate innocent people or solve past crimes. Stored DNA profiles could in theory be used to track any individual on the database or to identify their relatives, so strict safeguards are needed to prevent misuse.
The comprehensive storage of DNA profiles is also alarming because the Bill allows the DNA Profiling Board to grant law enforcement agencies full and direct access to DNA profiles. The primary argument for the creation of DNA databanks for convicted felons is exact identification in order to help police solve crimes. Because forensic labs have developed extensive techniques that allow lab technicians to gather information from DNA samples far beyond what is needed to identify a person, and because the pool of DNA samples goes well beyond convicted felons, permitting unrestricted use of DNA databases is dangerous and can easily be abused by law enforcement and private entities. DNA facilities are becoming more widespread in India through the establishment of multipurpose forensic laboratories accessible to law enforcement and intelligence agencies. For instance, new forensic laboratories with DNA testing facilities have recently been set up in Assam, Mumbai, and Hyderabad. The growth of forensic labs in India has also come at a time when the Indian Government is pushing for stronger surveillance regimes, revamping policing systems, and passing legislation that permits intelligence agencies easy access to individually identifying material. For instance, in 2010 the Government established NATGRID, a program which aims to link information from different databases such as tax, travel, financial, and criminal records. The linking of these databases will allow intelligence agencies to create comprehensive profiles of residents in India. If law enforcement agencies are granted direct access to DNA profiles, it could be all too easy for NATGRID to add DNA information to its collection of databases. Additionally, the Union Home Ministry has recently launched the Crime and Criminal Tracking System in Assam, Kerala, and Uttar Pradesh.
The system aims to facilitate the collection, storage, retrieval, analysis, transfer, and sharing of data and information between police stations, their state headquarters, central police organizations, and other security agencies.
Another scheme that could be used by law enforcement to collect and compile information in India is the Unique Identification project. The project aims to provide all residents of India with an identification number based on their biometrics. It is envisioned that the number will eventually become ubiquitous throughout society, and that individuals will use it to access benefits, identify themselves to the police, apply for a passport, and open a bank account. These regimes raise again the question: What is the Indian State’s interest in having and connecting identifying, storable, and trackable information of innocent individuals within its borders?
Taken together, the Bill permits the creation of a database composed of DNA samples and profiles that are unrelated to solving a crime (including the “identification of victims of accidents, disasters or missing persons or for such other purposes”), which could be used for intelligence gathering and other forms of surveillance, not just for investigation of the specific crime for which the sample was taken.
In sum, although collecting DNA from victims and volunteers may be useful during the investigation of a crime, the DNA profiles obtained from persons who are not accused of and prosecuted for a crime are now being collected and stored in ways that open the data to use, and possible abuse, for other purposes and by other agencies. One solution is to mandate that all DNA samples taken from persons who are not prosecuted (i.e., victims, witnesses, and others) be destroyed. Another solution, as suggested above, is that databases be segregated by purpose (missing persons, health alerts, convicted felons, and so on) and that DNA not be permitted to be transmitted across databases.
Over the past century, the collection of citizen data has become an essential aspect of governance. Current systems have become intrinsically dependent on the collection and analysis of information and citizen informatics. Governments have rationalized the collection of massive amounts of citizens’ data for reasons such as effectively delivering public services, ensuring equity, and maintaining justice. These databases operate on the notion that “bigger is better” and seek to collate as much data as possible. Because the way in which a citizen’s information is stored, controlled, and used by the government defines the state-citizen relationship, the collection of genetic data raises important questions of privacy, civil liberties, and protection.
Elonnai Hickok is a Policy and Advocacy Associate for the Centre for Internet and Society. Find more of her writing at privacyindia.org.
1. The Identification of Prisoners Bill was most recently amended in 1981.
3. Adhikary, Jyotirmoy. DNA Technology in Administration of Justice. LexisNexis, 2007, p. 259.
4. Privacy Bill 2011, Chapter VI.
5. Schedule of offences 5) Miscarriage or therapeutic abortion, b. Unnatural offenses, 7) Other criminal offenses b. Prostitution 9) Mass disaster b) Civil (purpose of civil cases) c. Identification purpose 10) b) Civil:1) Paternity dispute 2) Marital dispute 3) Infidelity 4) Affiliation c) Personal Identification 1) Living 2) Dead 3) Tissue Remains d) 2 (xxvii) “offender” means a person who has been convicted of or is under trial charged with a specified offense; 2(1)(vii) “crime scene index” means an index of DNA profiles derived from forensic material found: (a) at any place (whether within or outside India) where a specified offense was, or is reasonably suspected of having been, committed; or (b) on or within the body of the victim, or a person reasonably suspected of being a victim, of an offense.
6. Section 13(xxii) allows this list to be expanded by the DNA board.
7. Simoncelli, Tania and Krimsky, Sheldon. A New Era of DNA Collections: At What Cost to Civil Liberties? American Constitution Society for Law and Policy, 2007, p. 8.
8. Section 35
9. Section 13(x), Section(2) The DNA Profiling Board may, by a general or special order in writing, also form committees of the members and delegate to them the powers and functions of the Board as may be specified by the regulations.
10. Simoncelli, Tania and Krimsky, Sheldon. A New Era of DNA Collections: At What Cost to Civil Liberties? American Constitution Society for Law and Policy, 2007, p. 15.
11. http://www.telegraphindia.com/1110707/jsp/northeast/story_14204831.jsp, http://dfs.gov.in/CFSLHyderabad/laboratorycfslhyderabad.htm, http://www.dnaindia.com/mumbai/report_mumbai-gets-it-s-very-first-state-of-the-art-forensic-lab_1066370; http://articles.timesofindia.indiatimes.com/2010-09-15/science/28230130_1_dna-database-dna-index-system-forensic-scientists
12. Unravelling NATGRID. Software Freedom Law Center. Retrieved from http://softwarefreedom.in/index.php?option=com_content&view=article&id=90%3Aunravelling-natgrid&catid=53%3Atalish&Item27
14. The Unique Identification Bill 2010, www.uidai.org
The academic discipline of Christian apologetics is concerned with offering a reasoned defense of historical, New Testament Christianity. The English word “apology” derives from the Greek apologia, which means to “defend” or “make a defense.” Various biblical writers acknowledged the legitimacy of such activity. The apostle Peter, for example, wrote:
But sanctify in your hearts Christ as Lord: being ready always to give answer [Greek, apologian] to every man that asketh you a reason concerning the hope that is in you, yet with meekness and fear (1 Peter 3:15).
Paul, in his epistle to the Philippians, stated that he was “set for the defense [Greek, apologian] of the Gospel” (Philippians 1:16). Paul’s writings, in fact, teem with sound arguments that provide a rational undergirding for his readers’ faith. Christianity is not some kind of vague, emotionally based belief system intended for unthinking simpletons. Rather, it is a logical system of thought that may be both defended and accepted by analytical minds.
In any defense of Christianity, a variety of evidence may be employed. Such evidence may be derived from science, philosophy, or history, to list just a few examples. It is not uncommon to hear someone mention studies from within the field of “Christian evidences.” Such terminology simply is a reference to an examination of the evidences establishing Christianity as the one true religion of the one true God. Regardless of the source or nature of the evidence, however, the ultimate goal is to substantiate the case for the existence of God, the inspiration of the Bible, the deity and Sonship of Christ, the validity of the creation account found in Genesis 1-2, etc.
Much of the evidence attending the truthfulness of Christianity can be examined within broad categories such as those listed above. But these do not tell the whole story, for within each major area of study there are important subcategories that offer additional insight. An illustration of this point would be a study of the inspiration of the Bible. It is possible to examine various arguments that establish the Bible as being God’s inspired Word. Generally speaking, however, such a study may not examine such things as alleged internal contradictions, supposed historical inconsistencies, and other such matters. In order to respond to such charges, one must “dig a little deeper” into the evidence at hand.
The same is true of the evidence that establishes the case for the existence of God. It is not a difficult task to assemble evidence that represents a compelling case for God’s existence. Yet that evidence often may not touch on other equally important matters that have to do with God’s personality and character (e.g., things like His eternality, His justice, His relationship to other members of the Godhead, etc.). Information on these topics must be derived from separate, independent studies.
One of the areas that Christian apologetics seeks to address in relation to the existence of God is His nature. It is not enough merely to acknowledge that God exists. Rather, it is necessary to know something about Him, what He expects from mankind, and how He interacts with His creation. By necessity, any investigation into the nature of God eventually will have to address the topics of His justice, His mercy, and His grace, because these are a part of His eternal nature. That is the purpose of the present study.
THE MERCY AND GRACE OF GOD
The mercy and grace of God are at the core of one of the most beautiful, yet one of the most heart-rending, accounts in all the Bible—the story of Peter’s denial of His Lord, and Jesus’ reaction to that denial. Christ had predicted that before His crucifixion Peter would deny Him three times (John 13:36-38). Peter did just that (John 18:25-27). First, he was asked by a maid who controlled the door to the court of the high priest if he was a disciple of Jesus. Peter denied that he was. Second, he was asked by servants of the high priest if he was indeed the Lord’s disciple. Again, he denied knowing Jesus. Third, he was asked if he was with the Lord when they arrested Him in the Garden of Gethsemane. One last time, Peter vehemently denied the Lord. The cock crowed, and the Lord looked across the courtyard. As their eyes met, the text says simply that Peter “went out and wept bitterly” (Luke 22:61-62).
When next we see Peter, he has given up. In fact, he said “I go a fishing” (John 21:3). Peter’s life as a follower of Christ was finished, so far as he was concerned. He had decided to go back to his livelihood of fishing. No doubt Peter felt that his sin against the Lord was so grievous that even though he now believed the Lord to be risen, there could be no further use for him in the kingdom. It was, then, to his original vocation that he would return.
It is a compliment to Peter’s innate leadership ability that the other disciples followed him even on this occasion. As Peter and his friends fished one morning, the Lord appeared on the shore and called to them. When they brought the boat near, they saw that Christ had prepared a meal of fish and bread over an open fire. They sat, ate, and talked. As they did, the Lord asked Peter, “Simon, lovest thou me more than these?” (John 21:15). Peter assured Christ that he did. But Christ appeared unsatisfied with Peter’s response. He inquired a second time, and a third. After the last query, the text indicates that Peter was “grieved because Christ said unto him a third time, ‘lovest thou me?’ ” (John 21:17).
Peter’s uneasiness was saying, in essence, “What are you trying to do to me, Lord?” Jesus was asking: “Peter, can you comprehend—in spite of your denying heart—that I have forgiven you? Do you understand that the mercy and grace of God have been extended to you? There is still work for you to do. Go, use your immense talents in the advancement of the kingdom.” Jesus loved Peter. And He wanted him back. Jesus simply was putting into action that which He had taught personally. Forgive, yes, even 70 times 7 times!
Perhaps during these events one of Christ’s parables came to Peter’s mind. He no doubt was familiar with the teaching of the Lord in Luke 7:36-50 (see the similar account found in Matthew 18:23-35). Jesus was eating with Simon, a Pharisee. Simon saw a worldly woman come into the Lord’s presence, and thought: “This man, if he were a prophet, would have perceived who and what manner of woman this is that toucheth him, that she is a sinner” (Luke 7:39). Simon’s point, of course, was that Christ should have driven away the sinful woman. But Jesus, knowing Simon’s thoughts, presented a parable for his consideration.
Two servants owed their lord; one owed an enormous debt, and the other only a small amount. Yet the master forgave both of the debts. Jesus asked Simon: “Which of them therefore will love him the most?” (Luke 7:42). Simon correctly answered: “He, I suppose, to whom he forgave the most” (Luke 7:43). Jesus, through this parable, was saying to Simon: “I came here today and you would not even extend to me the common courtesy of washing my feet. This woman entered, cried, washed my feet with her tears, and dried them with her hair. I have forgiven her. She, therefore, should love me the most.”
This woman had been a recipient of God’s mercy and grace. She gratefully expressed devotion for the forgiveness offered by the Son of God. Simon was too religious to beg, and too proud to accept it if offered. It is a sad fact that man will treat forgiveness lightly so long as he treats sin lightly. The worldly, fallen woman desperately desired the saving mercy and grace of God, and accepted it when it was extended. Christ’s point to Simon was that man can appreciate to what he has been elevated (God’s saving grace) only when he recognizes from what he has been saved (his own sinful state).
In this context, Christ’s point to Peter becomes clear. “Peter, you denied me, not just once, but three times. Have I forgiven you? Yes, I have.” Peter, too, had been the recipient of God’s mercy and grace. He had much of which to be forgiven. Yet, he had been forgiven! The problem that relates to mercy and grace is not to be found in heaven; rather, it is to be found here on the Earth. Man’s first problem often is accepting God’s mercy and grace. His second problem often is forgiving himself. We do not stand in need of an accuser; God’s law does that admirably, as the seventh chapter of Romans demonstrates. What we need is an Advocate (1 John 2:1-2)—someone to stand in our place, and to plead our case. We—laden with our burden of sin—have no right to stand before the majestic throne of God, even with the intent to beg for mercy. But Jesus the Righteous has that right. He made it clear to His disciples, and likewise has made it clear to us, that He is willing to be just such an Advocate on our behalf. The author of the book of Hebrews wrote:
Having then a great high priest, who hath passed through the heavens, Jesus the Son of God, let us hold fast our confession. For we have not a high priest that cannot be touched with the feeling of our infirmities; but one that hath been in all points tempted as we are, yet without sin (4:14-15).
The entire story of the Bible centers on man’s need for mercy and grace. That story began in Genesis 3, and has been unfolding ever since. Fortunately, “the Lord is full of pity, and merciful” (James 5:11). Even when Cain—a man who had murdered his own brother—begged for mercy, God heard his plea and placed a mark on him for his protection. God never has wanted to punish anyone. His words to this effect were recorded by Ezekiel: “Have I any pleasure in the death of the wicked? saith the Lord Jehovah; and not rather that he should return from his way, and live?... I have no pleasure in the death of him that dieth, saith the Lord Jehovah” (18:23,32). Similarly, in the times of Hosea sin was rampant. Life was barren. Worship to God had been polluted. The effects of Satan’s rule were felt everywhere on the Earth. The Lord, suggested Hosea, “hath a controversy with the inhabitants of the land, because there is no truth, nor goodness, nor knowledge of God in the land” (4:1). Evidence of God’s mercy and grace is seen, however, in the words spoken by Hosea on God’s behalf:
How shall I give thee up, O Ephraim! How shall I cast thee off, Israel!... my heart is turned within me, my compassions are kindled together. I will not execute the fierceness of mine anger, I will not return to destroy Ephraim; for I am God and not man; the Holy One in the midst of thee; and I will not come in wrath (11:8-9).
The wise king, Solomon, said that those who practice mercy and truth will find “favor and good understanding in the sight of God and man” (Proverbs 3:4). Many are those in the Bible who desperately sought the mercy and grace of God. Cain needed mercy and grace. Israel needed mercy and grace. Peter needed mercy and grace. And to all it was given, as God deemed appropriate. We must come to understand, however, several important facts about God’s mercy and grace.
God is Sovereign in His Delegation of Mercy and Grace
First, we must realize that God is sovereign in granting both His mercy and His grace. When we speak of God’s sovereign nature, it is a recognition on our part that whatever He wills is right. He alone determines the appropriate course of action; He acts and speaks at the whim of no outside force, including mankind.
When humans become the recipients of heaven’s grace, the unfathomable has happened. The apostle Paul wrote: “For all have sinned, and fall short of the glory of God.... For the wages of sin is death; but the free gift of God is eternal life in Christ Jesus our Lord” (Romans 3:23; 6:23). God—our Justifiable Accuser—has become our Vindicator. He has extended to us His wonderful love, as expressed by His mercy and His grace.
Mercy has been defined as feeling “sympathy with the misery of another, and especially sympathy manifested in act” (Vine, 1940, 3:61). Mercy is more than just sympathetic feelings. It is sympathy in concert with action. Grace often has been defined as the “unmerited favor of God.” If grace is unmerited, then none can claim it as an unalienable right. If grace is undeserved, then none is entitled to it. If grace is a gift, then none can demand it. Grace is the antithesis of justice. After God’s grace has been meted out, there remains only divine justice. Because salvation is through grace (Ephesians 2:8-9), the very chief of sinners is not beyond the reach of divine grace. Because salvation is by grace, boasting is excluded and God receives the glory.
When justice is meted out, we receive what we deserve. When mercy is extended, we do not receive what we deserve. When grace is bestowed, we receive what we do not deserve.
Perhaps no one could appreciate this better than Peter. It was he who said: “And if the righteous is scarcely saved, where shall the ungodly and sinner appear?” (1 Peter 4:18). Paul reminded the first-century Christians in Rome that “scarcely for a righteous man will one die: for peradventure for the good man some one would even dare to die. But God commendeth his own love toward us, in that, while we were yet sinners, Christ died for us” (Romans 5:7-8).
Yet because it is a free gift, and unearned, it remains within God’s sovereign right to bestow it as He sees fit. A beautiful expression of this fact can be seen in the prayers of two men who found themselves in similar circumstances—in that both were under the sentence of death. In Numbers 20, the story is told of God’s commanding Moses to speak to the rock in the wilderness, so that it would yield water for the Israelites. Rather than obey the command of God to speak to the rock, however, Moses struck it instead. The Lord said to him: “Because ye believed not in me, to sanctify me in the eyes of the children of Israel, therefore ye shall not bring this assembly into the land which I have given them” (Numbers 20:12). Years later, God called Moses to the top of Mount Nebo, and allowed him to look across into the promised land, but He vowed that Moses would not enter into Canaan with the Israelites. Moses begged God to permit him to go (Deuteronomy 3:26), but his plea was denied.
Yet king Hezekiah, likewise under a sentence of death, petitioned God to let him live, and God added 15 years to his life. Moses wrote: “The Lord would not hear me...,” and died. But to Hezekiah it was said: “I have heard thy prayer” (2 Kings 20:1-6), and his life was spared. What a beautiful illustration and amplification of Romans 9:15: “For he saith unto Moses, I will have mercy on whom I have mercy, and I will have compassion on whom I have compassion.” God is sovereign in His mercy and His grace.
God’s Grace Does Not Mean a Lack of Consequences to Sin
Second, we must recognize that God’s granting mercy and grace does not somehow negate the consequences of sin here and now. While mercy may ensue, so may sin’s consequences. Perhaps the most touching story in the Bible of this eternal truth is the story of king David. How could a man of David’s faith and righteousness commit the terrible sins attributed to him? David was about 50 years old at the time. Fame and fortune were his as Israel’s popular, beloved king. He had taken his vows before God (see Psalm 101). He had insisted on righteousness in his nation. The people had been taught to love, respect, and honor the God of heaven. David, their king, was also their example. He was a man after God’s own heart (1 Samuel 13:14).
But he committed the sin of adultery with Bathsheba (2 Samuel 11-12), and then had her husband, Uriah the Hittite, murdered. One cannot help but be reminded of the sin of Achan (Joshua 7), when he took booty from a war and hid it under the floor of his tent after the Israelites were commanded specifically not to take any such items. Achan said, “I saw..., I coveted..., I took..., I hid...” (Joshua 7:21). Is that not what king David did? But Achan and David also could state, “I paid.” Achan paid with his life; David paid with twenty years of strife, heartbreak, and the loss of a child that meant everything to him.
Nathan the prophet was sent by God to the great king. He told David the story of a rich man who had many sheep in his flock, and of a poor man who had but one small ewe that was practically part of the family. When a visitor appeared at the rich man’s door, the rich man took the single ewe owned by the poor man, and slaughtered it for the visitor’s meal. Upon hearing what had happened, David was incensed with anger and vowed, “As Jehovah liveth, the man that hath done this is worthy to die” (2 Samuel 12:5).
Nathan looked the powerful king in the eye and said, “Thou art the man” (2 Samuel 12:7). The enormity of David’s sin swept over him, and he said, “I have sinned” (2 Samuel 12:13). David, even through his sin, was a man who loved righteousness. Now that Nathan had shown him his sin, he felt a repulsion which demanded a cleansing that could come only from God. His description of the consequences of sin on the human heart is one of the most vivid in all of Scripture, and should move each of us deeply. His agonizing prayer is recorded in Psalm 51. David cried out: “Have mercy upon me, O God, according to thy lovingkindness.”
David needed a new heart; sin had defiled his old one. He likewise realized that he needed to undergo an inner renewal; pride and lust had destroyed his spirit. So, David prayed for a proper spirit. He could do nothing but cast himself on the mercy and grace of God. David laid on the altar his own sinful heart and begged God to cleanse, recreate, and restore his life. God did forgive. He did cleanse. He did recreate. He did restore.
But the consequences of David’s sin still remained. The child growing in Bathsheba’s womb died after birth. In addition, the prophet Nathan made it clear to David that “the sword shall never depart from thy house,” and that God would “raise up evil against thee out of thine own house” (2 Samuel 12:10-11). David’s life never would be the same again. His child was dead. His reputation was damaged. His influence, in large part, was destroyed.
David learned that the penalty for personal sin often is felt in the lives of others as well. He had prayed that those who loved and served the Lord would not have to bear his shame. But this was not to be. The shame of the one is the shame of the many; as God’s people, we are bound together. More often than not, what affects one of us affects all of us.
It is to David’s credit that once his sin was uncovered, he did not try to deny it. Solomon, his son, later would write: “He that covereth his transgressions shall not prosper; but whoso confesseth and forsaketh them shall obtain mercy” (Proverbs 28:13).
Mercy and Grace are Expensive
Third, we should realize that the mercy and grace God uses to cover mankind’s sins are not cheap. They cost heaven its finest jewel—the Son of God. The popular, old song says it well:
I owed a debt I could not pay
He paid a debt He did not owe
I needed someone to wash my sins away.
So now I sing a brand new song—amazing grace
Christ paid the debt I could never pay.
Jesus’ death represented His total commitment to us. As Isaiah prophesied:
Surely he hath borne our griefs, and carried our sorrows; yet we did esteem him stricken, smitten of God, and afflicted. But he was wounded for our transgressions, he was bruised for our iniquities; the chastisement of our peace was upon him; and with his stripes we are healed. All we like sheep have gone astray; we have turned everyone to his own way; and Jehovah hath laid on him the iniquity of us all.... He bare the sin of many, and made intercession for the transgressors (53:4-6,12).
Paul wrote that “Him who knew no sin he made to be sin on our behalf that we might become the righteousness of God in him” (2 Corinthians 5:21).
Grace does not eliminate human responsibility; rather, grace emphasizes human responsibility. Grace, because it cost God so much, delivers agonizing duties and obligations. It is seemingly a great paradox that Christianity is free, yet at the same time is so very costly. Jesus warned: “If any man will come after me, let him deny himself, and take up his cross, and follow me” (Matthew 16:24). Paul summarized it like this: “I have been crucified with Christ; and it is no longer I that live, but Christ liveth in me: and that life which I now live in the flesh I live in faith, the faith which is in the Son of God, who loved me, and gave himself up for me. I do not make void the grace of God” (Galatians 2:20-21).
Grace does not make one irresponsible; it makes one more responsible! Paul asked: “What shall we say then? Shall we continue in sin, that grace may abound? God forbid” (Romans 6:1-2). God’s grace is accessed through willful obedience to the “perfect law of liberty” (James 1:25). It is God’s law that informs us of the availability of grace, the manner in which we appropriate it, and the blessings of living within it. The testimony of Scripture is abundantly clear when it speaks of the importance of the “obedience of faith” (Romans 1:5). We are to be obedient to God by returning to Him from an alien, sinful state, and, once redeemed, through our continued faithfulness as evinced by our works. Grace and works of obedience are not mutually exclusive.
Neither are grace and law mutually exclusive. One who is “in Christ” does not live under the dominion of sin, since Christianity is a system of grace. The apostle to the Gentiles stated: “Ye are not under the law, but under grace” (Romans 6:14). He cannot mean that we are under no law at all, because in the following verses he spoke of early Christians being “obedient from the heart to that form of teaching” delivered to them (6:17). These Christians obeyed God’s law, and were living faithfully under that law. They understood that “faith worketh by love” (Galatians 5:6). The terms “law,” “works,” and “grace” are not at odds, but like all things within God’s plan, exist in perfect harmony.
We Are Saved Through Grace
Fourth, let us remember that our salvation is by atonement, not attainment. Because salvation is a free gift (Romans 6:23), man never can earn it. Unmerited favor cannot be merited! God did for us what we, on our own, could not do. Jesus paid the price we could not pay. From beginning to end, the scheme of redemption—including all that God has done, is doing, and will do—is one continuous act of grace. The Scriptures speak of God “reconciling the world unto himself, not reckoning unto them their trespasses, and having committed unto us the word of reconciliation” (2 Corinthians 5:19). Peter stated:
Knowing that ye were redeemed, not with corruptible things, with silver or gold, from your vain manner of life handed down from your fathers; but with precious blood, as of a lamb without blemish and without spot, even the blood of Christ (1 Peter 1:18-19).
God has promised mercy and grace to those who believe on His Son (John 3:16), repent of their sins (Luke 13:3), and have those sins remitted through baptism (Acts 2:38; 22:16). Subsequent to the Day of Pentecost, Peter called upon his audiences to: “Repent ye therefore, and turn again, that your sins may be blotted out” (Acts 3:19). The word for “blotted out” derives from the Greek word meaning to “wipe out, erase, or obliterate.” The New Testament uses the word to refer to “blotting out” the old law (Colossians 2:14), and to “blotting out” a person’s name from the Book of Life (Revelation 3:5). One of the great prophetical utterances of the Old Testament was that “their sin will I remember no more” (Jeremiah 31:34).
Our sins were borne by Jesus on the cross. He paid our debt so that we, like undeserving Barabbas, might be set free. In this way, God could be just, and at the same time Justifier of those who believe in and obey His Son. By refusing to extend mercy to Jesus on the cross, God was able to extend mercy to me—if I submit in obedience to His commands.
There was no happy solution to the justice/mercy dilemma. There was no way by which God could remain just (justice demands that the wages of sin be paid), and yet save His Son from death. Christ was abandoned to the cross so that mercy could be extended to sinners who stood condemned (Galatians 3:10). God could not save sinners by fiat—upon the ground of mere authority alone—without violating His own attribute of divine justice. Paul discussed God’s response to this problem in Romans 3:24-26:
Being justified freely by his grace through the redemption that is in Christ Jesus; whom God set forth to be a propitiation, through faith, in his blood...for the showing of his righteousness...that he might himself be just and the justifier of him that hath faith in Jesus.
Man’s salvation was no arbitrary arrangement. God did not decide merely to consider man a sinner, and then determine to save him upon a principle of mercy. Sin placed man in a state of antagonism toward God. Sinners are condemned because they have violated God’s law, and because God’s justice cannot permit Him to ignore sin. Sin could be forgiven only as a result of the vicarious death of God’s Son. Because sinners are redeemed by the sacrifice of Christ, and not their own righteousness, they are sanctified by the mercy and grace of God.
Our Response to Mercy and Grace
What, then, should be our response to mercy and grace?
(1) Let us remember that “blessed are the merciful, for they shall obtain mercy” (Matthew 5:7). It is a biblical principle that unless we extend mercy, we cannot obtain mercy. Jesus taught: “For if ye forgive men their trespasses, your heavenly Father will also forgive you; but if ye forgive not men their trespasses, neither will your Father forgive your trespasses” (Matthew 6:14-15). We would do well to recall the adage that “he who cannot forgive destroys the bridge over which he also must one day pass.” If we expect to be forgiven, then let us be prepared to forgive.
(2) Let us remember that mercy and grace demand action on our part. Mercy is to feel “sympathy with the misery of another, and especially sympathy manifested in act.” Luke recorded an example of Christ’s mercy in healing ten lepers who “lifted up their voices saying, ‘Jesus, Master, have mercy on us’ ” (Luke 17:13). Did these diseased and dying men want merely a few kind words uttered in their direction? Hardly. They wanted to be healed! When the publican prayed so penitently, “God, be thou merciful to me a sinner” (Luke 18:13), he was asking for more than tender feelings of compassion. He wanted something done about his pitiful condition. Mercy and grace are compassion in action.
(3) Let us remember that nothing must take precedence over our Savior. If we have to choose between Christ and a friend, spouse, or child, Christ comes first. He demands no less (Luke 14:25-35)—but His demands are consistent with His sufferings on our behalf. He insists that we take up our cross: He took up His. He insists that we lose our life to find it: He lost His. He insists that we give up our families for His sake: He gave up His for ours. He demands that we give up everything for Him: He had nowhere to lay His head, and His only possession—the robe on His back—was taken from Him. Yes, the costs sometimes are high; but the blessings that we receive in return are priceless. He dispenses mercy and grace, and offers eternal salvation to all those who will believe in and obey Him.
In Luke 15, Jesus spoke of a wayward son who had sinned against his father and squandered his precious inheritance. Upon returning home, he decided to say to his father: “make me as one of thy hired servants” (15:19). He was prepared for the worst.
But he received the best. His father, “while he was yet afar off,...was moved with compassion, and ran, and fell on his neck, and kissed him” (Luke 15:20). The son did not receive what he deserved; he received what he did not deserve. He received mercy and grace. His father wanted him back!
Does our heavenly Father want us back? Oh, yes! Paul wrote: “For ye were bought with a price” (1 Corinthians 6:20). Let us yearn for the day when we can stand before His throne and thank Him for granting us mercy and grace—and for paying the debt we could not pay, and the debt He did not owe.
Vine, W.E. (1940), An Expository Dictionary of New Testament Words (Old Tappan, NJ: Revell).
June 30, 2016, 04:25 in Technology • 0
Hosted services can be defined as contracting with an outside vendor to host network services such as electronic security, data backup, file storage, email, etc. on servers that are accessed over the Internet, as opposed to accessing a server that is physically on-site at the customer location. Typically customer data is housed on high-capacity servers that are shared among many customers.
Many critical line-of-business software packages do not perform well under the hosted model. These include nearly any database application based on Microsoft SQL, MySQL, Oracle, Progress, Omnis, SyBase, Microsoft Access, as well as other database platforms. Databases require very reliable connections between the client desktop and where the data is stored. If this connection is interrupted, even for a very short time (a few seconds or less), the application will crash, requiring the user to restart the software, and resulting in lost productivity and possible data corruption. Note that database applications with a web front-end may indeed work well in a hosted environment, but may require significant additional expense in configuration, hosting, and administration.
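To make that fragility concrete, here is a toy Python sketch of the problem. The `FlakyConnection` class and `ConnectionLost` exception are invented for this illustration; real database drivers raise their own error types (e.g. an `OperationalError`). The point is that a client over an unreliable link must reconnect and retry, whereas a naive desktop application that calls `execute()` directly crashes the moment the link drops:

```python
import random

class ConnectionLost(Exception):
    """Stands in for a driver-specific error raised when the link drops."""
    pass

class FlakyConnection:
    """Invented for this sketch: a connection where any single query may fail."""
    def __init__(self, drop_rate):
        self.drop_rate = drop_rate  # probability that a query is interrupted

    def execute(self, sql):
        if random.random() < self.drop_rate:
            raise ConnectionLost("link interrupted mid-query")
        return f"rows for: {sql}"

def run_with_retry(conn, sql, retries=3):
    """What robust client software must do; a naive desktop app that calls
    conn.execute() directly simply crashes when the connection drops."""
    for _ in range(retries):
        try:
            return conn.execute(sql)
        except ConnectionLost:
            continue  # in a real client: reconnect first, then retry
    raise ConnectionLost(f"gave up after {retries} attempts")

random.seed(0)  # deterministic for the demo
wan = FlakyConnection(drop_rate=0.05)  # unreliable internet link (assumed rate)
print(run_with_retry(wan, "SELECT * FROM orders"))
```

Hosted database front-ends effectively have to supply this kind of retry and reconnection machinery, which is part of the "significant additional expense in configuration" mentioned above.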
Hosted services are often marketed as a way to decrease operating expenses and management overhead, while supposedly improving reliability. Many companies and individuals in the information technology industry promote moving all network services in small environments to hosted solutions. It should be noted that many of those promoting such solutions have a vested interest in selling them.
It makes good technical and financial sense for some organizations to move carefully selected services to hosted solutions via the Internet. You are advised, however, to avoid the hype and marketing propaganda and to look very carefully at all of the arguments for and against hosted services prior to making the jump! Some applications lend themselves very well to the hosted model. For instance, email, email virus & spam protection, and computer anti-virus protection can be hosted externally for many customers with excellent results for relatively little expense. Of course, websites have been hosted externally for many years.
The connection between desktop computers and an on-site server is often twenty times or more faster than the organization's Internet download speed, and eighty times faster than many business-class Internet upload speeds. Database applications such as those listed above will not function over low-speed Internet connections. In rural communities, higher-capacity Internet connections may not be available, and would be extremely expensive in any event.
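The speed gap can be illustrated with rough arithmetic. The specific speeds below are assumptions chosen to match the "twenty times" and "eighty times" ratios in the text, not figures from the article:

```python
# Idealized transfer times for a 500 MB file, ignoring protocol overhead.
# All three speeds are illustrative assumptions:
LAN_MBPS = 1000       # on-site gigabit Ethernet
DOWNLOAD_MBPS = 50    # organization's internet download speed
UPLOAD_MBPS = 12.5    # business-class internet upload speed

FILE_MEGABITS = 500 * 8  # 500 megabytes expressed in megabits

def transfer_seconds(megabits, mbps):
    """Idealized transfer time: size divided by line rate."""
    return megabits / mbps

lan = transfer_seconds(FILE_MEGABITS, LAN_MBPS)
down = transfer_seconds(FILE_MEGABITS, DOWNLOAD_MBPS)
up = transfer_seconds(FILE_MEGABITS, UPLOAD_MBPS)

print(f"LAN:      {lan:6.1f} s")
print(f"Download: {down:6.1f} s ({down / lan:.0f}x slower)")
print(f"Upload:   {up:6.1f} s ({up / lan:.0f}x slower)")
```

Under these assumptions the same file that moves over the LAN in seconds takes minutes over the internet link, which is why chatty database traffic becomes unusable.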
Ted believes that most organizations, regardless of size, have the same basic information technology requirements. Regardless of whether an organization has five employees or five thousand, they have the critical needs of security, Internet connectivity, (file) data storage and protection, printing, email, backup, system reliability, etc. The difference in IT needs between small and large organizations is primarily one of scale. The need is the same; the size and cost of the solution may not be.
drive-reduction theory
the idea that a physiological need creates an aroused tension state (drive) that motivates an organism to satisfy the need
homeostasis
a tendency to maintain a balanced or constant internal state; the regulation of any aspect of body chemistry, such as blood glucose, around a particular level
hierarchy of needs
maslow's pyramid of human needs, beginning at the base with physiological needs that must first be satisfied before higher-level safety needs and then psychological needs become active. (physiological, safety, belongingness, esteem, self actualization)
Cannon and Washburn's experiment
inflated a balloon in his stomach, monitored stomach contractions. hunger was felt when stomach contractions occurred.
lateral hypothalamus
If electrically stimulated, well-fed animals begin to eat; if area is destroyed, even starving animals will have no interest in food.
ventromedial hypothalamus
depresses hunger. stimulate: animal will stop eating. destroy: animal's stomach and intestines will process food more rapidly, causing it to become extremely fat.
set point
the point at which an individual's "weight thermostat" is supposedly set. When the body falls below this weight, an increase in hunger and a lowered metabolic rate may act to restore the lost weight
anorexia nervosa
an eating disorder in which a normal-weight person diets and becomes significantly underweight, yet, still feeling fat, continues to starve.
New findings regarding the formation of fullerenes, aka "buckyballs," were recently published in the journal Nature Communications, suggesting that smaller cages grow into larger ones. According to the article abstract, fullerenes self-assemble in a closed network by incorporating atomic carbon and C2. This growth was shown by measuring fullerene response to carbon vapor, and analyzed by Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry. When carbon vapor was present, large fullerenes containing hundreds of carbon atoms appeared but without the vapor, only C60 and a few slightly smaller fullerenes were detected.
The Royal Society of Chemistry reported on these findings, noting that key to the research was the powerful FT-ICR mass spectrometer at Florida's High Magnetic Field Laboratory. The research, led by Harry Kroto, who discovered "buckyballs" more than 25 years ago, enabled the team to analyze at extremely high resolution the compounds produced when buckyball-sized fullerenes reacted with vaporized carbon. According to the report, the researchers concluded that smaller fullerenes must grow to C60 and larger fullerenes by "eating up" carbon atoms. A few bonds may be rearranged but the cages never compromise their closed structure; the researchers confirmed this by trapping metals inside the cages, which were retained after growth.
These findings reveal fundamental processes that govern the self-assembly of carbon networks; the same processes are likely to be involved in the formation of other nanostructures such as nanotubes and graphene. Such fullerene technologies have been incorporated into personal care applications for free radical-scavenging, antiaging, whitening, anti-inflammation, anti-wrinkle, sunscreens, pore-tightening, sebum oxidation control and cellulite control.
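As a rough illustration of the mass "ladder" such C2-by-C2 growth would produce in an FT-ICR mass spectrum, one can compute nominal cage masses directly. This simple sketch ignores isotopes, charge states, and the instrument itself; it only uses the fact that carbon-12 has an exact mass of 12 u by definition:

```python
C12_MASS = 12.000  # exact mass of carbon-12 in unified atomic mass units (by definition)

def cage_mass(n_carbons):
    """Nominal mass of a pure carbon-12 fullerene cage Cn (neutral, monoisotopic)."""
    return n_carbons * C12_MASS

# Closed-cage growth C60 -> C62 -> C64 -> ... by incorporating C2 units:
ladder = [cage_mass(60 + 2 * k) for k in range(6)]
print(ladder)  # peaks spaced 24 u apart, starting from C60 at 720 u
```

A series of peaks spaced 24 u apart above the C60 peak is exactly the signature consistent with cages "eating up" C2 units while remaining closed.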
Picasso and Truth: From Cubism to Guernica by T. J. Clark
Alex Danchev applauds a study of one of the 20th century’s greatest thinker-painters
"How to talk about painting?" asked Paul Valéry. T.J. Clark's answer to that question is downright and uplifting. Clark cleaves to the work, and its view of the world, perhaps even its world view. His Picasso is no lofty agnostic, as he puts it, who accepts that the nature of the world is unknowable and all approaches to it equally valid. "His god is exactitude. And exactitude for him (as for Wittgenstein) is a transitive notion. Representations are true or false, accurate or evasive. The world appears in a painting - if it didn't, who would bother to look?"
Picasso and Truth, then, is a study of art and thought - “the century’s most difficult pictorial thought”, asserts Clark, and also the most influential, “as Picasso’s fellow-artists acknowledged (often against their will)…decisive in changing the language of poetry, architecture, music, sculpture, cinema, theatre, the novel”. More precisely, it is a study of thinking-in-painting, of the thinker-painter. It is centrally concerned with what Clark calls Picasso’s conceptual horizon; “but concepts for Picasso are nothing unless they are kept alive in pictures - entertained on paper, as things or ‘states of affairs’…that might actually be the case”.
Such a study is a mighty undertaking. It is no surprise that this makes for an intensely, almost thuggishly cerebral reading experience - a mix of exposition and excogitation - an intellectual high-wire act, commanding, compelling, thought-provoking…thrilling.
So what will Art be, as part of this spectacle - along with all the other practices of knowledge on which it fed, from Giotto to high Cubism - without a test of truth for its findings, its assertions; without even a will to truth?
Clark takes no prisoners. He remarks on “the abominable character of most writing on the artist”, excoriating “its prurience, its pedantry, the wild swings between (or unknowing coexistence of) fawning adulation and false refusal-to-be-impressed, the idiot X-equals-Y biography, the rehearsal of the same few banal pronouncements from the artist himself; the pretend- moralism, the pretend-feminism, the pretend intimacy…and above all the determination to say nothing, or nothing in particular, about the structure and substance of the work Picasso devoted his life to”. He names names. Or blanks them out: John Richardson, author of three volumes of biography, rates two mentions in this book, both of them in the endnotes.
He appeals instead to Nietzsche, and also to Wittgenstein. “Is not Picasso Nietzsche’s painter? Is not his the most unmoral picture of existence ever pursued through a life?” There is something of a Nietzschean echo in Clark himself (and not only in the rhetoric). He tells of reading On the Genealogy of Morals and being struck by the concluding passages on “these hard, strict, abstinent, heroic spirits who constitute the honour of our age, all these pale atheists, anti-Christians, immoralists, nihilists…these last idealists of knowledge in whom the intellectual conscience today dwells and has become flesh…These are by no means free spirits, for they still believe in truth.” The will to truth, says Nietzsche, poses itself as a problem. From this there is no going back - “morality will gradually perish: that great spectacle in a hundred acts that is reserved for Europe’s next two centuries, the most terrible, the most questionable, and perhaps also the most hopeful of all spectacles”.
“Perhaps,” comments Clark, the self-confessed socialist atheist, laconically. “We have roughly a century to go.” He quotes the note he scribbled in response: “So what will Art be, as part of this spectacle - along with all the other practices of knowledge on which it fed, from Giotto to high Cubism - without a test of truth for its findings, its assertions; without even a will to truth? It seems to me that Picasso and Matisse made just that question their life’s work - and gave the question real aesthetic dignity - in ways that mark them off from the artists who first posed the question (Nietzsche’s contemporaries), for whom it seems to have made painting either a brilliant charade - I think of Gauguin - or an unsustainable agony - I think of Van Gogh.”
The note is revealing. It is the germ of the book, or rather the lectures on which the book is based, as Clark himself concedes. This applies equally to its principles (painting as a practice of knowledge) and to the pattern of its attention, for the artist who posed the truth question most fundamentally is strangely absent: Cézanne. Cézanne had no time for charades, as Picasso surely knew.
Picasso and Truth focuses on the painting of the interwar period. It concludes with a bravura treatment of Guernica (1937); yet much of the book is devoted to Cubism, in particular to Picasso’s “three great reimaginings of Cubism”, Guitar and Mandolin on a Table (1924) in the Guggenheim Museum, New York, The Three Dancers (1925) in Tate Modern, and The Painter and His Model (1927) in the Tehran Museum of Contemporary Art. On this account, Picasso’s truth is founded on the Cubism originally developed in concert with Braque (whose truth goes unexamined) in the years before the First World War. The maturation of this project is what Clark calls high Cubism. “If high Cubism was not true,” he writes in typical style, “it was nothing. Of course even high Cubism was engaged in constant negotiation with painting’s limits, with painting’s playfulness, with its necessary offer of pleasure and its need to draw back from the black hole of analysis. But always at its finest and freest moments…there is a claim to have gotten the structure of the world right in ways that no previous picturing had”. Cubist space is Picasso’s world view. For Clark, the emphasis is on the room - the Cubist room - a room with a view.
Clark’s treatment of Guernica is a kind of rejoinder to Nietzsche, and incidentally to Clausewitz. “The horror and inquisitiveness of the women - their bearing witness even at the point of extinction - have been given sufficient substance. What fixes and freezes them is felt as a mechanism, a rack. The bomb is the abstractness of war - war on paper, war as war rooms imagine it, war as ‘politics by other means’ - perfected. Here is what happens when it comes to earth.”
Where is the Picasso of the drone?
Picasso and Truth is a magisterial work: in many ways a summation. It retains its character as a spoken text; it is full of marvellous obiter dicta. “Every age has the atheism it deserves.” “Nostalgia can be enervating or electrifying. It depends on the past one harks back to, and whether in practice it can be made to interfere with the givens of the present.” “The Communist Manifesto, we see in retrospect, is as much under the spell of Adam Smith and Balzac as looking for a way to set the world on fire. It is the great poem of capitalism’s potential.”
It is a measure of the power of the work that the conclusion is reminiscent of another great artist. “Providing room - the sine qua non of the human for Picasso - just is, for him, providing a room, a specific and familiar floor, wall, wainscot, window…” In the Duino Elegies, Rilke asks:
“What if we’re here just for saying: house, bridge, fountain, gate, jug, tree, window, at most: column, tower…but for saying, understand, oh for such saying as the things themselves never hoped so intensely to be.”
Until his retirement in 2010, Bristol-born scholar Timothy James Clark held the George C. and Helen N. Pardee chair of art history at the University of California, Berkeley.
Today, he says, “I live in London (for the past three years, after 30 years in the US) with Anne Wagner”. What he finds most notable about the city is “the possibility that the reality of London as a ‘world city’ might one day defeat ‘Britishness’ “. Were he to live elsewhere, it would be in “New York, where an analogous victory took place long ago”.
Clark was (“briefly”) part of the Situationist International, a network of radical artists, intellectuals and political activists. It was, he says, “an intense, indelible experience - it still is basic to my view of politics. For 30 years the name of the SI was unspoken (unspeakable) in the academy and most other places. That it now has its moment in the sun is, well, a mixed blessing.” Asked whether those in the later punk movement who cited Situationism as an inspiration were its worthy heirs, he says, “the Gang of Four and the Mekons (way before the Situationist ‘boom’) did manage to give the SI’s ideas a good beat”.
This book emerged from his 2009 A.W. Mellon Lectures in the Fine Arts. “I try to make everything I write be a version - obviously an artificial version - of spoken English. Lecturing breeds all kinds of bad rhetorical habits, but it can, with luck, keep academic prose at bay. I hope it has in Picasso and Truth.”
“I guess I saw Picasso first in books, as a schoolkid; it was Cubism’s strangeness and coolness and optimism and analytic temper that were exciting,” he recalls. “They still are, but I suppose that now I am more interested in why and how the coolness and optimism gave way to monstrosity. I’m still not sure which, in the end, is Picasso’s deepest note.”
Asked about non-academic pastimes, Clark says his current hobby is “torturing myself with the spectacle of present-day ‘politics’”.
Picasso and Truth: From Cubism to Guernica
By T.J. Clark
Princeton University Press, 352pp, £29.95
Published 5 June 2013
Review originally published as: Confluence of art and thought (13 June 2013)
Alex Danchev is professor of international relations, University of Nottingham, and author, most recently, of Cézanne: A Life (2012).
The Fundamental Principles of Seating and Positioning in Children with Physical Disabilities
The prescription of appropriate seating equipment for children and young people with physical disabilities is important, in order to provide an optimal seated position from which they may engage in functional activities.
Research has evidenced the benefits of adaptive seating to include improved postural alignment (Miedaner 1990; Myhr and von Wendt 1991), development of motor skills (Green and Nelham 1991), helping the prevention of fixed deformity (Pountney et al 2002) and facilitation of upper extremity function (Myhr and von Wendt 1991; Myhr et al 1995; van der Heide 2003). It is imperative that health professionals prescribing and engineers designing seating equipment are well informed regarding the fundamental seating principles that dictate the sitting postures of children and young people and the impact they have on long term health and function.
Croup: Is It Serious?
Croup is a viral infection that causes swelling of your child's vocal cords. It is commonly associated with a cold and fever over 101 degrees. Your child may have a "barking, seal like" cough and a hoarse voice. Croup does not involve your child's lungs.
Croup usually lasts for 5 to 6 days and the cough seems to increase at night. Croup is a mild illness in most children. In mild croup, your child may be hoarse and when coughing, may sound like a dog or seal barking.
Swelling of the vocal cords causes a narrowing of your child's airway. With increased swelling of your child's airway, your child may develop "stridor." Stridor is a sound that is heard when your child takes a breath in.
Description Of Stridor
- When your child breathes in, you hear a harsh, raspy, vibrating sound.
- Stridor will be louder when your child is crying, upset, scared, or, coughing.
- As croup worsens, stridor will be heard when your child is sleeping or at rest.
- Stridor with retractions (chest wall and rib cage collapsing inward with each breath) is a sign of severe croup and requires immediate treatment in the emergency room.
Treatment Of Croup
If your child develops mild stridor or a barky cough, and is having no difficulty breathing, try the following:
Keep Yourself and Your Child Calm!
Most children become scared by the barking cough of croup. This tends to worsen any stridor your child may have. The more upset you are, the more upset your child will be. Staying calm, cool and collected will allow you to help your child.
The Foggy Bathroom
Warm moist air works well to relax the vocal cords and lessen stridor. Run the hot shower with the bathroom door closed. Once the room is fogged up, take your child in there for at least 10 minutes. Try to keep your child calm by cuddling or reading a story.
Fill a humidifier with warm water and have your child breathe deeply from the stream of humidity.
If a humidifier is not available, have your child breathe through a warm, wet washcloth placed loosely over his/her nose and mouth.
What If I Run Out Of Hot Water?
Cold air works as well as humidity to lessen your child's stridor. Dress your child in warm clothes and go for a walk outside in the cold winter air.
Mist And Humidity
Dry air usually makes a cough worse. Keep your child's bedroom humidified. Use a cool air vaporizer if your child's room is warm or a warm air humidifier if the room is cold. Run it 24 hours a day. Remember to clean the vaporizer or humidifier everyday. The room should be moist, but water should not be dripping down the walls.
Warm Fluids For Coughing Spasms
Coughing spasms are often due to sticky mucus caught on the vocal cords. Warm fluids may help relax the vocal cords and loosen up the mucus. Use clear fluids (ones you can see through) such as apple juice, lemonade or herbal tea.
Robitussin DM can be used to lessen your child's coughing. Robitussin Pediatric has one-half the dextromethorphan (cough suppressant) as Adult Robitussin in one teaspoon. The dose of Adult Robitussin is ¼ to ½ teaspoon for each 10 pounds of body weight; for Pediatric Robitussin, the dose is ½ to 1 teaspoon for each 10 pounds of body weight. For example, if your child is 20 pounds, he/she can have ½ to 1 teaspoon of the Adult Robitussin. This dose can be repeated every 4-6 hours as needed for relief of the cough. The maximum single dose of Adult Robitussin is 2 teaspoons or 4 teaspoons of Pediatric Robitussin.
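For illustration only, the per-weight rule stated above can be written out as a small calculation. This is a sketch of the handout's arithmetic, not medical advice; always follow your pediatrician's instructions. The function name and the dose caps are taken directly from the text:

```python
def robitussin_dose_tsp(weight_lb, formulation):
    """Teaspoon range per the handout's rule: Adult = 1/4 to 1/2 tsp per
    10 lb (max single dose 2 tsp); Pediatric = 1/2 to 1 tsp per 10 lb
    (max single dose 4 tsp)."""
    if formulation == "adult":
        low, high, max_tsp = 0.25, 0.5, 2.0
    elif formulation == "pediatric":
        low, high, max_tsp = 0.5, 1.0, 4.0
    else:
        raise ValueError("formulation must be 'adult' or 'pediatric'")
    units = weight_lb / 10  # number of 10-pound increments
    return (min(low * units, max_tsp), min(high * units, max_tsp))

# The handout's own example: a 20-pound child
print(robitussin_dose_tsp(20, "adult"))      # (0.5, 1.0) teaspoons
print(robitussin_dose_tsp(20, "pediatric"))  # (1.0, 2.0) teaspoons
```

Note how the maximum single dose caps the range for heavier children, matching the handout's stated limits.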
If your child has a fever (over 101 degrees) you may give acetaminophen (Tylenol). Refer to the Tylenol dose chart, which is available on our fever sheet in the office or on our Web Page.
What Should I Expect?
Most children settle down with the above treatments and then sleep peacefully through the night. If your child is having any difficulty breathing, call our office immediately.
During the first several nights of croup, you may feel more comfortable sleeping in your child's room. However, once your child is better, return to your normal sleeping arrangements so this does not become a habit.
Will Smoking Affect Croup?
Smoking will make croup worse. Do not let anyone smoke around your child at any time.
The virus that causes croup is contagious to other children as long as your child has a fever over 101 degrees. Your child can return to school or child care once he/she feels better and the fever is less than 101 degrees for at least 24 hours.
Call Our Office Immediately if:
- Your child's color around the lips or inside the mouth appears blue.
- Your child is having trouble breathing.
- Your child has increased drooling or spitting, or starts having difficulty swallowing.
- The warm mist fails to improve the stridor in 20 minutes.
- Your child is acting sick.
Call Our Office Within 24 Hours if:
- Your child has stridor.
- Fever (over 101 degrees) with croup lasting more than 3 days.
- Croup lasting more than 5 days.
- If you have any concerns or questions.
- The pictures below show views of the nose, mouth, throat, upper airway and lower airway (lungs).
- The areas labeled "Vocal fold" and "Trachea" (above the lungs) are the areas affected in croup.
- In Croup, the trachea becomes swollen and this results in the typical seal bark.
- The area marked "Lungs" is the area affected by Asthma.
- The pictures to the right show the area involved in croup.
- The x-ray on the left shows the narrowing in the trachea, also called the "steeple sign."
- The picture on the right shows the narrowed trachea (subglottic area).
Secret of spilling coffee revealed
Scientists have only recently discovered the answer to a very modern dilemma: why does coffee spill when you walk? Dr Karl spills the beans on the very serious science of sloshing liquids.
Coffee is probably the world's most popular legal drug. And so, in our busy day, at some stage we will usually walk from here to there carrying a cup of coffee. And every now and then, the coffee will spill. Now this might be a surprise to you, but it took till 2012 before two engineers systematically explored this very familiar phenomenon.
The engineers were H C Mayer and R Krechetnikov from the University of California. The problem of spilling coffee is very complex, and involves two separate fields — biomechanics and the engineering of sloshing liquids.
Let's deal with sloshing liquids first. This field is, surprisingly, very important. Liquids that are out-of-control and sloshing can sink a tanker ship, starve a car engine of fuel, and make a liquid-fuelled rocket fail.
Now for biomechanics, which leads to the coffee cup going through some very complicated motions.
As you walk, your centre of mass follows a rather strange pathway. Your gait depends on many factors such as your gender, age, state of health and so on. After all, walking has been described as "a series of controlled falls". Your centre of mass is continually speeding up and slowing down in your direction of travel, as well as rising and falling — and oscillating from side to side to boot. When you are walking, you typically rock from side to side at about 1.25 hertz, while you oscillate back and forth at around 2.5 hertz. But that's just the motion of your centre of mass.
Your cup of coffee is joined to your centre of mass via your hand, and your wrist, elbow and shoulder joints. Each of these can move in a motion that is very different from what your centre of mass is doing. Your cup of coffee can tilt to the left or right, it can pitch down or up in your direction of travel, or it can even swivel to the left or right.
The engineers drew the appropriate diagrams of a walking person and of a cup, and then labelled all the relevant positions, velocities and accelerations. What they called "a frictionless, vorticity-free, and incompressible liquid", you and I would call "coffee". And we would know "an upright cylindrical container" as a "cup".
And then they began to work out the natural resonant frequency of the coffee oscillating in the cup.
But what is a 'natural resonant frequency'?
Suppose you half fill a bathtub with water. Get something with a decent surface area (such as a breadboard) and gently pat it onto the surface of the water at one end of the bathtub - and then remove the breadboard. You'll see a wave head to the other end of the bathtub. It will then bounce off and head back to your end. And it will continue back and forth, bouncing off the ends of your bathtub every few seconds. So for water, your bathtub has a natural resonant frequency of a few seconds.
Your coffee cup is much smaller, so it has a higher natural resonant frequency. Depending on whether you have your coffee as an exquisite espresso or a cavernous cappuccino, the frequency might range between 4.3 and 2.6 hertz.
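For an upright cylindrical cup, the lowest sloshing mode has a standard closed-form frequency, f = (1/2π)·√((gλ/R)·tanh(λh/R)), where R is the cup radius, h the coffee depth, and λ ≈ 1.841 is the first zero of the Bessel-function derivative J₁′. A quick sketch (the cup radii and coffee depths below are my own illustrative guesses, not measurements from the paper):

```python
import math

def slosh_frequency_hz(radius_m, depth_m, g=9.81):
    """Lowest natural sloshing frequency of liquid in an upright cylinder:
    f = (1 / 2*pi) * sqrt((g * lam / R) * tanh(lam * h / R)),
    with lam ~ 1.8412, the first zero of the Bessel derivative J1'."""
    lam = 1.8412
    omega_sq = (g * lam / radius_m) * math.tanh(lam * depth_m / radius_m)
    return math.sqrt(omega_sq) / (2 * math.pi)

# Guessed dimensions: a narrow espresso cup vs. a wide cappuccino mug.
print(round(slosh_frequency_hz(0.030, 0.040), 2))  # espresso cup, ~3.9 Hz
print(round(slosh_frequency_hz(0.045, 0.080), 2))  # cappuccino mug, ~3.2 Hz
```

As the formula shows, the narrower cup resonates at the higher frequency, matching the ordering of the espresso and cappuccino figures quoted above.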
Typically, you pick up your cup of coffee while you are stopped, and then you accelerate. This acceleration generates the initial slosh of coffee. You continue to accelerate for a few more steps until you reach cruising speed. Typically, the initial slosh will continue to amplify until you get your first coffee spill around the sixth step.
The engineers suggest a few solutions.
First, if you make the walls of the coffee cup flexible, they will absorb the energy of the incoming wave and dampen down the initial slosh. Second, you could install (inside the top of the cup) a series of concentric rings (like egg rings). These would break down the large mass of a single slosh into a bunch of smaller sloshes, which would be much easier to control. A third solution would be to perforate or to drill holes in these rings. Not only would this make the rings lighter, but it would further dampen down the sloshing.
Of course, you can always try the low-tech approach of "the targeted suppression of resonance frequencies", otherwise known as "watching your step".
Published 29 May 2012
© 2015 Karl S. Kruszelnicki Pty Ltd
Polar Bears of Churchill – Waiting for the Ice
By mid-October, cooler temperatures bring a change in the bears. Restless, sensing a change in the season, many return to Churchill and gather along the coast in eager anticipation of the coming ice. Once here, some pace along the coast, roll around the willows, graze on some kelp or spar with a friend.
Most, however, pass the time in a ‘day bed’. A ‘day bed’ is simply a good resting spot, comfortable and sheltered. Preferred locations include willow thickets along the shallow tundra ponds and the deep kelp beds on the coast. There are certain ‘day beds’ along Cape Churchill that are used day after day, often year after year.
Resting is not without its complications. Larger, or simply more aggressive bears, often approach and displace sleeping bears. Once ‘victorious’, however, they may occupy the bed for only a short time.
Many times, the displaced bear will return after walking a large, cautionary circle back to its original resting area. Once there, it is again time to rest and wait.
Waiting pays off. With each succeeding tide, a little more ice clings to the shore. Each day, the ice reaches out a little further into the bay. And each day, the bears test the ice more and more.
Though considered a marine mammal, bears still prefer not to get wet in cold temperatures. Walking on newly formed ice, polar bears often spread all four legs further and further apart until their belly almost touches the ice. Their large paws distribute their weight very effectively, allowing them to walk on ice that would not support the weight of a person…and especially not the weight of a tundra vehicle.
Alternately, they may test the strength of the new ice as they progress, giving a little pump with their front paws. As the shore ice builds so does the bears’ anticipation. By season’s end, many bears will be seen wandering along the ice’s edge, far out onto the bay.
Polar Bears of Churchill – How the Ice Forms
Salt water becomes heavier as it freezes. This leaves a greasy soup of ice washing in and out with the tide, each wave leaving just a little more ice clinging to the shore. In Churchill, high tide returns every 12.5 hours and it does not take long for the shore ice to extend well out into the tidal zone.
As well, Hudson Bay’s watershed extends west to the Canadian Rockies and south to Minnesota. This means that a tremendous amount of fresh water pours into the bay from several northern rivers. This inflow results in brackish water (a mix of salt and fresh water) along the coast and surface of Hudson Bay. Since fresh water begins to freeze at a higher temperature than salt water, this further contributes to the speed of freeze up.
All the while, the ice builds along the northwestern coast of Hudson Bay. Soon, the ‘grease ice’ forms into little ice floes called pancake ice. A strong north wind and consistently cold temperatures of -20C (-4F) or lower will push this ice together and pack it onto the coast of Cape Churchill.
Once these sheets have frozen together, it signals the bears’ departure. They will venture out to hunt seals even with only a few kilometres of ice. As winter progresses, the ice continues to encroach eastward until the bay is completely frozen, usually occurring in early December.
Almost every year, initial freeze-up occurs around mid-November. However, in both 1991 and 2002, conditions prevailed for an early freeze. The freeze-up was so sudden in 1991 that the bears departed near Halloween night. In other years, winter takes its time – 1999 and 2003 saw the bears remain ashore well into December. While a late freeze-up is not as critical to the bears’ health as an early breakup, it does result in an extreme increase in polar bear occurrences within the community of Churchill.
Deanna Copland offers some advice on how to eat well when money's tight.
One key thing you can do is sit down and plan meals for the following week. Organise meals around what is already in the fridge/freezer and pantry, and then check out what foods are on sale at the supermarket. There is a perception that eating well costs more, but it doesn't have to.
Make a list
Take a list of what you need, not want. Also, it is best not to go to the supermarket when you are hungry, or several other temptations may fall into the trolley. Online grocery shopping is also a great way to do your shopping, as you can take unnecessary things back out of the basket if the cost is creeping up.
Shop in season
Try to eat locally grown, seasonal produce for sustainability, nourishment and budget-friendly meals. When you are eating fruit and vegetables that are in season, they will be fresher, more nutrient-dense and better tasting than produce that is not in season and is shipped to a faraway location, e.g. watermelons.
Buying local can often save you money because there are fewer hands involved and there is less of a carbon footprint. For example, if you buy vegetables from the farmers' market, you will probably save money because the farmer does not have to increase his costs to pay all the people involved in getting it on the shelves, unlike the supermarkets.
Pumpkin, for example, is in season now and stores well in a cool, dark place, so you can stock up while it is cheap. It is best stored on flattened cardboard or wood and on an angle to prevent it rotting from air being trapped underneath.
Locally produced foods such as fruit and vegetables, honey, free-range meats, eggs and nut butters can often be bought from local farmers' markets, and you are supporting the local community.
Shop from the bulk bins
Grains, nuts and seeds, spices and dried beans can be cheaper when bought from the bulk bins rather than in packets from the shelves. This also means less plastic. If you only need a small amount for meals, you can buy the exact amount you need from the bulk bin.
Substitute with frozen
Frozen fruit and vegetables are a great way to save money on groceries because you can buy larger amounts and keep them until needed. If you are after fruit or vegies that are not in season, it can often be cheaper down the freezer aisle. Frozen berries are often cheaper than fresh berries, and the same goes for peas, beans and mixed vegetables for stir-fries.
Try different cuts
Trying out different cuts of meat can often make a huge saving to your grocery spend. Boneless chicken thighs have slightly more fat than the breast so are cheaper yet have more flavour. If you buy direct from a butcher or fishmonger, you can get exactly the amount you need and it is often cheaper and fresher.
People presume fish is expensive and often omit it from regular weekly shops but it does not have to be. A single hot-smoked salmon portion can be spread through a whole frittata for a family dinner.
Cutting down on meat and trying out more vegetarian options can often help keep the grocery bill down and is better for your health also. A pumpkin and chickpea Thai red curry served on cauliflower rice is a delicious, cost-effective, warming wintery meal that should keep the family happy.
Grow your own
One of the best ways to save money on fruit and vegetables is to grow your own. If you do not have space, start with herbs - mint, basil, parsley and coriander growing on your benchtop in the kitchen. Meals always look more gourmet and appealing when sprinkled with finely chopped parsley or served with a sprig of basil.
If you do have space to grow your own produce, kale, broccoli, tomato, zucchini and peas are all relatively easy to grow and look after. Raised planter boxes can be a good idea in Otago, as the elevation off the ground helps when it is particularly cold during our autumn and winter months. If you know of a handy friend or builder, these are relatively easy to build and look smart in your backyard.
An example weekly meal plan for autumn dinners may look like this:
Monday: Chickpea and pumpkin Thai red curry served on cauliflower rice
Tuesday: Thick roast pumpkin soup with sourdough toast
Wednesday: Potato, spinach and hot-smoked salmon frittata
Thursday: Beef and vegetable stir-fry with almonds (make the majority of it vegetables that are both fresh and frozen)
Friday: Thai fish cakes with coleslaw (see recipe)
Thai kumara fish cakes
500g firm white fish fillets, coarsely chopped
1 large kumara, scrubbed (or 2-3 medium potatoes)
½ cup fresh coriander leaves, roughly chopped
¼ cup cornflour
2 Tbsp fish sauce
2 Tbsp sweet chilli sauce
1 egg, lightly whisked
3 green shallots, ends trimmed, finely chopped
50g green beans, finely chopped (or baby spinach)
80ml (⅓ cup) coconut oil, melted
lemon wedges and coriander
1. Chop the kumara (leave skin on for more nutrients and fibre) and cook in boiling water for around 15 minutes. Once cooled slightly, mash and set aside.
2. Place the fish in the bowl of a food processor and process until smooth. Add the coriander, cornflour, fish sauce, sweet chilli sauce and egg, and process until well combined.
3. Transfer the fish mixture to a large bowl. Add the kumara, shallots and beans and stir until well combined. Heat some of the oil in a large frying pan over medium heat. Divide the fish mixture into about 8 equal portions. Cook for 4 minutes each side or until golden brown. Transfer to a plate lined with paper towel. Repeat with the remaining fish mixture, reheating the pan between batches.
4. Divide the hot fish cakes among serving plates on top of coleslaw so that some of it wilts slightly. Serve with lemon wedges and a sprig of coriander.
By age 13, Miami native Delaney Reynolds had published the third in a series of children’s books about Floridian ecology. But what started as a project to enlighten young people about the beauty of Miami's coastal environment turned into a somber realization about that environment's expiration date. Reynolds’ research led her to learn about “sea level rise,” the harrowing reality that while the levels of our seas have been stable for thousands of years, they've risen about 10 inches in the past century alone.
That may not seem like much, but this is already causing increased flooding and altered shorelines along coastal cities like Miami, Boston, and New York. And if things continue at this rate, sea levels could rise as much as 10 feet by the end of the century. The city of Miami could be underwater by 2100, and up to 2.5 million Miami residents could become climate refugees.
Miami residents are already feeling the effects. For example, in 2015 Reynolds visited the Miami Ad School, which had moved from Miami Beach to the community of Wynwood in an attempt to escape increasingly frequent flooding. Yet despite the move, flooding continued to prevent students from entering the building. One staff member told Reynolds that she keeps a pair of rain boots in her office so she can reach her car on severe flood days, and also encourages students to bring rain boots with them to ensure access to the school.
Reynolds has relentlessly pushed for her community to address flooding, and Miami-Dade County implemented a budget for sea level rise for the first time in 2015 thanks in no small part to her efforts: After learning that just one sentence of the County’s 1,000 page, three-volume budget mentioned climate change, Reynolds spoke to the commission and mayor to press for more funding. That year, $300,000 was budgeted to address the effects of climate change, and a Chief Resilience Officer, who will advise the city’s efforts in that regard, was also appointed to Miami-Dade County local government.
The budget has since increased to $1.7 million, according to Reynolds, who will also represent the youth perspective on the issue on Miami-Dade County’s Rockefeller Foundation 100 Resilient Cities Steering Committee. The city of Miami Beach also recently allocated $500 million to raise roads and install pumping systems to address flooding that has resulted from sea rise.
As admirable as Reynolds’ fight for change is, the hard truth is that even these measures — increased local budgets and local political action — are only temporary fixes. The effects of sea level rise are all but inevitable: According to one 2013 study, much of Miami will be “locked in” to a future underwater by 2041 — changing this course would be impossible by this time. Water levels have only continued to rise since the study was published.
“The ice is melting quicker and quicker. It will continue. It’s now unstoppable,” John Englander, President of the International Sea Level Institute, told MTV News. While “we should try and slow the warming to slow the rate of melting and the rate of sea level rising,” he added, we ultimately need to adapt to the inevitable reality of sea level rise.
Such adaptation will have to extend to residents of communities beyond Miami. Warmer oceans cause heavier rainfall across the world, Englander explained, and we’re already seeing record rainfalls throughout the country — a source of flooding that can affect even inland communities. Increased temperatures, as well as this rainfall, in turn affect agricultural production and alter normal ecological phenomena.
For example, warmer weather led bark beetles to proliferate and, in turn, decimate 46 million of the United States’ 850 million acres of forest in 2015, Mother Jones reported that year. Our fresh water supply and wastewater treatment systems could also be drastically affected by sea level rise.
Politicians are hesitant to acknowledge this reality for reasons both economic and ideological. “This is a revolutionary change,” Englander explained. “It’s going to be expensive and it’s very disruptive to accept that the sea will be five feet higher one day … We’re struggling with how to adapt because we don’t want to give the land back to the sea, but it’s inevitable.”
Political division over and denial about the issue doesn’t exactly encourage progress. Miami’s own Governor Rick Scott instituted an unwritten policy banning the terms “climate change,” “global warming,” and “sustainability” from any official communications, emails, or reports from the Florida Department of Environmental Protection, the Florida Center for Investigative Reporting revealed in 2015. A number of President Trump’s appointees have substantial records of climate change denial, and the President himself recently withdrew from the Paris Climate Agreement, an agreement for environmental action signed by 195 nations.
But ultimately, as Englander put it, “the ocean doesn’t care who is in the White House.” And neither should young people concerned about climate change be deterred by who presently occupies it.
And if Reynolds' experience is any indication, young people are hardly letting politics — or their inability to vote — impede their passion for change. One of Reynolds’ greatest sources of optimism in her work is seeing “how engaged and how passionate” the many children she has met and spoken to about this issue are. “The truth is,” she concluded, “the facts don’t lie, the science doesn’t lie, and I have confidence we absolutely will be able to solve this problem.”
As part of MTV’s An Inconvenient Special, Vice President Gore will be joined by Delaney Reynolds, Fat Joe, and Steve Aoki to discuss the challenges of climate change. Tune in on August 2 at 7:30 p.m., #SaveMiami by supporting Delaney’s project, Sink or Swim, and visit climate.mtv.com for more ways to take action.
E. Cobham Brewer 1810–1897. Dictionary of Phrase and Fable. 1898.
Glencoe (2 syl.).
The massacre of Glencoe. The Edinburgh authorities exhorted the Jacobites to submit to William and Mary, and offered pardon to all who submitted on or before the 31st of December, 1691. Mac-Ian, chief of the Macdonalds of Glencoe, was unable to do so before the 6th of January, and his excuse was sent to the Council at Edinburgh. The Master of Stair (Sir John Dalrymple) resolved to make an example of Mac-Ian, and obtained the king’s permission to extirpate the set of thieves. Accordingly, on the 1st of February, 120 soldiers, led by a Captain Campbell, marched to Glencoe, told the clan they were come as friends, and lived peaceably among them for twelve days; but on the morning of the 13th, the glenmen, to the number of thirty-eight, were scandalously murdered, their huts set on fire, and their flocks and herds driven off as plunder. Campbell has written a poem, and Talfourd a play on the subject.
In this class, Ralph D. Anske discusses the underlying causes of the American Revolution. He emphasizes that new taxes imposed by the British provoked discontentment and upheaval among the colonists, giving rise to the slogan "no taxation without representation." Anske also examines the many differences between the North American and Latin American colonial experiences. According to Anske, the North American colonies enjoyed a strong sense of community, which made it possible to build a nation under one vision.
Ralph D. Anske is a retired U.S. Foreign Service officer. During his career he was assigned primarily to economic and political matters, in Mali, the Philippines, Mexico, Pakistan, Guatemala, and Kenya. He also worked on energy, democracy, and human rights issues at the Department of State and was an analyst for Central American affairs at the Bureau of Intelligence and Research. Anske earned the equivalent of a bachelor of science degree in economics at the National Foreign Affairs Training Center and did doctoral work at Carnegie-Mellon University. He has an MA in history and a BA in history, political science, and English from St. Mary’s University.
The Founding Fathers: Colonial and Early National America (Day 3, Part II) Ralph D. Anske
Academic Building, EN-203 Universidad Francisco Marroquín Guatemala, November 24, 2009
A New Media - UFM production. Guatemala, April 2010 Camera: Mario Estrada; digital editing: Adrián Méndez; index and synopsis: Ximena García; content reviser: Jennifer Keller; publication: Carlos Petz/Daphne Ortiz
This work is licensed under a Creative Commons 3.0 License.
Saint George has some of the most amazing landscapes Utah has to offer. With the red rock mountains around the valley, Saint George has some of the most beautiful golf courses. The weather in Saint George is also superior to any other location in Utah.
With the nice landscaping and amazing weather, there is one thing that is a downer, and that is the ants.
What ants survive in Saint George?
Of the 1000 different ant species, Saint George is home to five: the Argentine ant, the Black ant, the Carpenter ant, the Fire ant, and the Pharaoh ant.
What does the Argentine ant look like and what does it like to eat?
Argentine ants are about 1/16 of an inch to ¼ of an inch long. The color of the Argentine ant will be a dark brown or black, and they are very shiny. Argentine ants eat just about anything, including eggs, meat, and oil. The Argentine ant will leave a pheromone trail everywhere it goes. The trail also helps them not waste time when they are out looking for food.
What does the Argentine ant do?
The Argentine ant’s colony will be located near its food source, in a wet environment. They will be found in gardens and in your back yard. The Argentine ant’s colony can grow to epic sizes, covering a whole back yard or garden. Argentine ant bites will hurt, but luckily they will not create any health threats.
What does a Black ant look like and what does it like to eat?
The Black ant is one of the smallest ants Saint George will have. It will only grow to be about 1/16 of an inch; that is about the thickness of a dime. The body of the Black ant is a dark, shiny black color. The Black ant has a very powerful jaw and will chew its food. They will eat anything a human leaves out. When you are outside having a picnic, these ants will try to invade.
What does the Black ant do?
Black ants will be out day and night. The colony will be found in small craters in the soil. Inside the colony they will house about 2000 workers. When the Black ant workers are out of the colony enjoying the weather, they will be looking for food to eat. They are very annoying. Call a professional to estimate how big of a problem you may have, and have them treat for the ants.
What does a Carpenter ant look like and eat?
Carpenter ants will vary in size, but they usually will be about ¼ inch to ½ inch long. Carpenter ants come in a variety of colors, ranging from tan to black. They can also be orange or red, and may have a combination of black and red. Carpenter ants will eat other bugs and plant juices.
What does the Carpenter ant do?
The Carpenter ant will be found in wood that has been buried, or in old rotten logs. Outside the home, they can be found around porch stands, telephone poles, and any other place where the wood meets the ground. Inside the wood, the Carpenter ant will make tunnels to travel throughout, and build their colony up. The colony of a Carpenter ant can hold up to about 2000 workers. The 2000 workers will destroy your wood property. The best way to save your home from the Carpenter ants will be to call an exterminator.
What does the Fire ant look like and what does it eat?
The Fire ant is a red color and has tiny hairs coming off its body. Fire ants vary in size, from 1.6mm to 5mm. The Fire ant eats anything that gets in its way; it will eat other insects, small mammals, earthworms, frogs, and even lizards. When the Fire ant bites, it also stings the prey at the same time, placing venom inside its prey. The Fire ant’s venom will paralyze the prey.
What does the Fire ant do?
The Fire ant may be small, but it will create huge mounds, growing to be 3 feet tall and 2 feet wide. In the ground mounds, the Fire ant will have hundreds of tunnels leading throughout the whole colony. Sometimes the colony mound can reach 8 feet deep. The colonies of the Fire ants can accommodate around 250,000 workers. The queen Fire ant can average about 1,600 eggs per day. These colonies can be found inside wall voids, rain gutters, bath traps, and under carpets, as well as in electrical equipment. The Fire ant can pose a health threat, so call a pest control company to come take care of them.
What does the Pharaoh ant look like and eat?
The Pharaoh ant can grow to be about 1/16 of an inch. Pharaoh ants are a reddish-orange color and are very hard to see. The list of what the Pharaoh ant eats is extensive, including a variety of foods like honey, sugar, fruit juices, dead bugs, and anything really sweet. They will also be caught eating silk and rubber.
What does the Pharaoh ant colony look like?
The Pharaoh ant colony will be infested with hundreds of thousands of tiny worker ants. The colony can be found inside walls of the home, in small cracks in the sidewalk, and in areas where the grass and concrete meet. The Pharaoh ant colony will have a mound that sticks up from the grass a couple of inches and will be about 6 inches wide. The Pharaoh ant will infest your home and yard, and will become a huge problem. To prevent a problem, get a hold of a professional to come take a look at your home and treat it.
St. George Ant Control
56 North 500 East
St. George, UT 84770
Truly Nolen Pest Control
630 North 3050 East
St. George, UT 84790
3568 West 900 South
Salt Lake City, UT 84104
Published by Bulwark
Backyard Conservation - Tree Planting
Trees add beauty and so much more
In Your Backyard
Trees in your backyard can be home to many different types of wildlife. Trees can also reduce your heating and cooling costs, help clean the air, add beauty and color, provide shelter from the wind and the sun, and add value to your home.
Choosing a Tree
Choosing a tree should be a well thought-out decision. Tree planting can be a significant investment in money and time. Proper selection can provide you with years of enjoyment as well as significantly increase the value of your property. An inappropriate tree for your property can be a constant maintenance problem or even a hazard. Before you buy, take advantage of the abundant references on gardening at local libraries, universities, arboretums, parks where trees are identified, native plant and gardening clubs, and nurseries. Some questions to consider in selecting a tree include:
What purpose will this tree serve? Trees can serve numerous landscape functions including beautification, screening of sights and sounds, shade and energy conservation, and wildlife habitat.
Is the species appropriate for your area? Reliable nurseries will not sell plant material that is not suitable for your area. However, some mass marketers have trees and shrubs that are not winter hardy in the area sold. Even if a tree is hardy, it may not flower consistently from year to year at the limits of its useful range due to late spring freezes. If you are buying a tree for the spring flowers and fall fruits, this may be a consideration. In warmer climates, there may not be a long enough period of cool temperatures for some species, such as apples, to develop flowers. Apples and other species undergo vernalization -- a period of near-freezing temperatures that cause changes in the plant, resulting in the production of flowers.
Be aware of microclimates. Microclimates are very localized areas where weather conditions may vary from the norm. A very sheltered yard may support vegetation not normally adapted to the region. On the other hand, a north-facing slope may be significantly cooler or windier than surrounding areas and survival of normally adapted plants may be limited.
Select trees native to your area. They will be more tolerant of local weather and soil conditions, enhance natural biodiversity in your neighborhood, and be more beneficial to wildlife than many non-native trees. Avoid exotic trees that can invade other areas, crowd out native plants, and harm natural ecosystems.
How big will it get? When planting a small tree, it is often difficult to imagine that in 20 years it could be shading your entire yard. Unfortunately, many trees are planted and later removed when the tree grows beyond the dimensions of the property.
What is the average life expectancy of the tree? Some trees can live for hundreds of years. Others are considered "short-lived" and may live for only 20 or 30 years. Many short-lived trees tend to be smaller ornamental species. Short-lived species should not necessarily be ruled out when considering plantings. They may have other desirable characteristics, such as size, shape, tolerance of shade, or fruit, that would be useful in the landscape. These species may also fill a void in a young landscape, and can be removed as other larger, longer-lived species mature.
Does it have any particular ornamental value such as leaf color or flowers and fruits? Some species provide beautiful displays of color for short periods in the spring or fall. Other species may have foliage that is reddish or variegated and can add color in your landscaping year round.
Trees bearing fruits or nuts can provide an excellent source of food for many species of wildlife. However, some people consider some fruit and nut bearing trees to be "dirty."
Does it have any particular insect, disease, or other problem that may reduce its usefulness? Certain insects and diseases can be serious problems on some desirable species in some regions. Depending on the pest, control of the problem may be difficult and the pest may significantly reduce the attractiveness, if not the life expectancy, of the plant. Other species such as the silver maple (Acer saccharinum) are known to have weak wood that is susceptible to damage in ice storms or heavy winds.
How common is this species in your neighborhood or town? Some species are over-planted. Increasing the natural diversity will provide habitat for wildlife and help limit the opportunity for a single pest to destroy all plantings. An excellent example of this was the American elm (Ulmus americana). This lovely tree was widely planted throughout the United States. With the introduction of Dutch elm disease, thousands of communities lost all their street trees in only a few years.
Is the tree evergreen or deciduous? Evergreen trees will provide cover and shade year round. They may also be more effective as a barrier for wind and noise. Deciduous trees will give you summer shade but allow the winter sun to shine in. This may be a consideration for where to place the tree in your yard.
Placement of Trees
Proper placement of trees is critical for your enjoyment and their long-term survival. Check with local authorities about regulations pertaining to placement of trees. Some communities have ordinances restricting placement of trees within a specified distance of a street, sidewalk, streetlight, or other utilities.
Before planting your tree, consider the tree's ultimate size. When the tree nears maturity, will it be too near your house or other structures? Be considerate of your neighbors. An evergreen tree planted on your north side may block the winter sun from your next door neighbor. Will it provide too much shade for your vegetable and flower gardens? Most vegetables and many flowers require considerable amounts of sun. If you intend to grow these plants, consider how the placement of trees will affect these gardens. Will it obstruct driveways or sidewalks? Will it cause problems for buried or overhead utilities?
Planting a Tree
A properly planted and maintained tree will grow faster and live longer than one that is incorrectly planted. Trees can be planted almost any time of the year as long as the ground is not frozen. Late summer or early fall is the optimum time to plant trees in many areas. This gives the tree a chance to establish new roots before winter arrives and the ground freezes. When spring arrives, the tree is ready to grow. The second choice for planting is late winter or early spring. Planting in hot summer weather should be avoided. Planting in frozen soil during the winter is difficult and tough on tree roots. When the tree is dormant and the ground is frozen, there is no opportunity for the growth of new roots.
Trees are purchased as container grown, balled and burlapped (B&B), and bare root. Generally, container grown are the easiest to plant and successfully establish in any season, including summer. With container grown stock, the plant has been growing in a container for a period of time. When planting container grown plants, little damage is done to the roots as the plant is transferred to the soil. Container grown trees range in size from very small plants in gallon pots up to large trees in huge pots. B&B plants frequently have been dug from a nursery, wrapped in burlap, and kept in the nursery for an additional period of time, giving the roots opportunity to regenerate. B&B plants can be quite large. Bare root trees are usually extremely small plants. Because there is no soil on the roots, they must be planted when they are dormant to avoid drying out. The roots must be kept moist until planted. Frequently, bare root trees are offered by seed and nursery mail order catalogs or in the wholesale trade. Many state operated nurseries and local conservation districts also sell bare root stock in bulk quantities for only a few cents per plant. Bare root plants usually are offered in the early spring and should be planted as soon as possible upon arrival.
Carefully follow the planting instructions that come with your tree. If specific instructions are not available, follow these tips:
Before digging, call your local utilities to identify the location of any underground utilities.
Dig a hole twice as wide as, and slightly shallower than, the root ball. Roughen the sides and bottom of the hole with a pick or shovel so that roots can penetrate the soil.
With a potted tree, gently remove the tree from the container. Lay the tree on its side with the container end near the planting hole. Hit the bottom and sides of the container until the root ball is loosened. If roots are growing in a circular pattern around the root ball, slice through the roots on a couple of sides of the root ball. With trees wrapped in burlap, remove the string or wire that holds the burlap to the root crown. It is unnecessary to completely remove the burlap. Plastic wraps must be completely removed. Gently separate circling roots on the root ball. Shorten exceptionally long roots, and guide the shortened roots downward and outward. Root tips die quickly when exposed to light and air, so don't waste time.
Place the root ball in the hole. Leave the top of the root ball (where the roots end and the trunk begins) 1/2 to 1 inch above the surrounding soil, making sure not to cover it unless roots are exposed. For bare root plants, make a mound of soil in the middle of the hole and spread plant roots out evenly over mound. Do not set trees too deep. As you add soil to fill in around the tree, lightly tamp the soil to collapse air pockets, or add water to help settle the soil. Form a temporary water basin around the base of the tree to encourage water penetration, and water thoroughly after planting. A tree with a dry root ball cannot absorb water; if the root ball is extremely dry, allow water to trickle into the soil by placing the hose at the trunk of the tree.
Mulch around the tree. A 3-foot diameter circle of mulch is common.
Depending on the size of the tree and the site conditions, staking may be beneficial. Staking supports the tree until the roots are well established to properly anchor it. Staking should allow for some movement of the tree. After trees are established, remove all support wires. If these are not removed they can girdle the tree, cutting into the trunk and eventually killing the tree.
For the first year or two, especially after a week or so of especially hot or dry weather, watch your trees closely for signs of moisture stress. If you see leaf wilting or hard, caked soil, water the trees well and slowly enough to allow the water to soak in. This will encourage deep root growth. Keep the area under the trees mulched.
Some species of evergreen trees may need protection against winter sun and wind. A thorough watering in the fall before the ground freezes is recommended. Spray solutions are available to help prevent drying of foliage during the winter.
Fertilization is usually not needed for newly planted trees. Depending on soil and growing conditions, fertilizer may be beneficial at a later time.
Young trees need protection against rodents, frost cracks, sunscald, and lawn mowers and weed whackers. Mice and rabbits frequently girdle small trees by chewing away the bark at snow level. Since the tissues that transport nutrients in the tree are located just under the bark, a girdled tree often dies in the spring when growth resumes. Weed whackers are also a common cause of girdling. Plastic guards are an inexpensive and easy control method. Frost cracking is caused by the sunny side of the tree expanding at a different rate than the colder shaded side. This can cause large splits in the trunk. Sunscald can occur when a young tree is suddenly moved from a shady spot into direct sun. Light colored tree wraps can be used to protect the trunk from sunscald.
Usually, pruning is not needed on newly planted trees. As the tree grows, lower branches may be pruned to provide clearance above the ground, or to remove dead or damaged limbs or suckers that sprout from the trunk. Sometimes larger trees need pruning to allow more light to enter the canopy. Small branches can be removed easily with pruners. Large branches should be removed with a pruning saw. All cuts should be vertical. This will allow the tree to heal quickly without the use of sealants. Major pruning should be done in late winter or early spring. At this time the tree is more likely to "bleed" as sap is rising through the plant. This is actually healthy and will help prevent invasion by many disease organisms. Heavy pruning in the late summer or fall may reduce the tree's winter hardiness. Removal of large branches can be hazardous. If in doubt about your ability to prune properly, contact a professional with the proper equipment.
Under no circumstance should trees be topped. Not only does this practice ruin the natural shape of the tree, but it increases susceptibility to diseases and results in very narrow crotch angles, the angle between the trunk and the side branch. Narrow crotch angles are weaker than wide ones and more susceptible to damage from wind and ice. If a large tree requires major reduction in height or size, contact a professionally trained arborist. There are other methods to selectively remove large branches without sacrificing the health or beauty of the tree.
On the Farm
Windbreaks and tree plantings slow the wind and provide shelter and food for wildlife. Trees can shelter livestock and crops; they are used as barriers to slow winds that blow across large cropped fields and through farmsteads. Windbreaks can be beneficial in reducing blowing and drifting snow along roadways. Farmstead and field windbreaks and tree plantings are key components of a conservation system. They also help prevent dust particles from adding to smog over urban areas.
More About Backyard Conservation
The Natural Resources Conservation Service, National Association of Conservation Districts, and Wildlife Habitat Council encourage you to sign up in the "Backyard Conservation" program. To participate, use some of the conservation practices in your backyard that are showcased in this series of tip sheets -- tree planting, wildlife habitat, backyard pond, backyard wetland, composting, mulching, nutrient management, terracing, water conservation, and pest management. Then, simply fill in the Backyard Conservation customer response card, send a Backyard e-mail request to [email protected], or call 1-888-LANDCARE.
While other cultural and historical places light up Christmas trees, Antietam lights up a battlefield. Saturday, December 6, 2008 was the 20th Annual Memorial Illumination Ceremony in which 23,110 lighted candles graced Antietam’s battlefield, one candle for each soldier killed, wounded, or missing during battle.
The battle of Antietam on September 17, 1862 was the bloodiest one-day battle of the Civil War, the bloodiest day in America's history. (Gettysburg was the bloodiest battle overall, but it lasted three days.) The Yankees drove the Confederate army back south of the Potomac, though President Lincoln wasn't happy that General George McClellan didn't pursue the retreating army and destroy it, as Lincoln had instructed. McClellan's reluctance cost him his generalship; following orders could potentially have averted many of the deaths in the subsequent three years of the Civil War.
I wasn’t able to visit the illumination this year, but I did drive down to Antietam a couple weeks ago. I grabbed a guide from the visitor’s center and rode my bike along the pathway, while visitors in heated cars looked at me askance. It was cold, but I’d learned at the Gettysburg battlefield that driving a car was a hindrance, since I wanted to get out every fifty yards to read the plaques, monuments, and guideposts. At Antietam, I stared into the West Woods where over 2,200 Union soldiers were killed or wounded during a twenty minute period; I walked along Bloody Lane, a farm lane which had become an open grave after three hours of slaughter. “[The dead] were laying in the road like the ties of a railroad,” one soldier said. When I got home I wrote a poem to organize my thoughts and emotions, as I’d needed to after visiting Gettysburg for the first time.
A while back I heard the documentarist Ken Burns speak. He’s intelligent and engaging, and I listened closely to his presentation, but now I can only remember two things he said: his documentary on the National Parks will come out in fall 2009, and far too many Americans have never heard of Antietam. Burns told a story about going out to lunch with a young professional woman who had grown up in Maryland and worked in Washington DC. When Burns told her he was headed to Antietam after their meeting, she gave him a blank look. When he described the battle and noted the casualties, she was astounded. “That happened here?” she asked. So many men died?
His story only sort of surprised me. I mean, I do have a student who swears the Holocaust is a myth, and when I mentioned Antietam to a couple people in a writing group, one asked if that battle was the end or beginning of the Civil War. After I told them the number of casualties, another asked if we had lost that many in any battle in Iraq. We haven’t suffered many more casualties in the whole Iraq war, I replied. (A comment which wasn’t meant to downplay the number of American casualties in Iraq, only to remind her of the many, often forgotten, Civil War casualties.)
At an Antietam cemetery, as I stood by gravestones on which were written the names of multiple people from the same family who died in the Civil War, I realized that the majority of Americans can’t comprehend the enormous grief which encapsulated families and the entire nation during 1861-1865. If you didn’t have a family member killed or wounded, it was only a matter of time. When placing myself in their shoes, I’m thankful that my immediate family can be together over the holidays, and I sympathize with friends and relatives who have family members fighting on the other side of the world. Antietam’s lights add sobriety to the holiday season, a sobriety which prompts healthy reflection.
photo credit to http://www.lindsayfincher.com
Many machine vision applications require identification of items at specific locations in an image. These items frequently are identifiable by humans from their characteristic colors. Yet, except for a few specialized applications such as food sorting, color-based recognition is infrequently used in machine vision applications. It's not that the potential of color-based recognition has gone unnoticed, but rather that implementations often have been much more difficult and/or less successful than anticipated. We believe that the difficulties are attributable to extension of a traditional single-vector model into a realm for which it is ill-suited. A statistical model is much more suitable.
The elusive single-color item
When we think of color-based recognition in the abstract, we usually think of single-colored objects. Such objects do exist, often paper, plastic or painted, but without extraordinary care in lighting and camera selection their images are almost never single-colored. Yet most attempts at color-based recognition in machine vision are founded on a model of single- colored objects.
Many technical discussions of color-based recognition start with the fact that colors, as perceived by humans, generally can be represented by three numbers. These numbers may be the relative responses of three types of receptors in the human eye. They also may be the relative intensity of the red, green, and blue phosphors in a computer display, or the cyan, yellow and magenta of printing inks etc. Engineers are accustomed to working with such number triplets; they're referred to as vectors. Vectors are omnipresent in the representation of three-dimensional physical objects in space. Many mathematical tools have been developed to work with vectors, thus aiding the design and fabrication of objects large and small.
It's easy to see how the difference between two colors can be usefully treated as the difference between two vectors. The size of this difference goes to zero when the colors become identical. More than two centuries ago a famous mathematician, Carl Friedrich Gauss, showed how, by making repeated measurements and then averaging the result, we could estimate the true value of the quantity measured (See footnote 1). He also showed that from the variance, or scatter, of the measurements we could estimate the uncertainty in the true value.
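The vector view of color difference can be sketched in a few lines. The RGB triplets below are invented for illustration and are not tied to any particular camera or display; the point is only that the difference, scored as a Euclidean distance, shrinks to zero as two colors become identical:

```python
import math

def color_distance(c1, c2):
    """Euclidean distance between two color vectors (e.g., RGB triplets)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

# Illustrative RGB values only.
red = (200, 30, 40)
similar_red = (205, 35, 38)
green = (40, 180, 60)

# The difference goes to zero as the colors become identical.
print(color_distance(red, red))          # 0.0
print(color_distance(red, similar_red))  # small
print(color_distance(red, green))        # large
```

The same vector machinery used for physical geometry applies unchanged, which is exactly why the single-vector model of color is so tempting.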
Usually overlooked are two assumptions Gauss found necessary to link the measure of mean and variance to the estimate of a true value and its uncertainty: The quantity of interest must be single-valued, and the most likely value must be the mean value. These assumptions are reasonable if one is measuring, say, the length or mass of a beam, or the angle between two sides of a triangle. They are not necessarily valid when considering the colors of an object.
The Vampyre: A Tale entitled, The Return of Lord Ruthven.
The Pale Lady is a short story, perhaps even a novella, that Dumas published in 1849 and holds the distinction of being one of the first, indeed possibly the first, vampire tales to have been set in the Carpathians.
The story is told from the point of view of Hedwig, a Polish maiden whose brothers have been killed in war with the Russians. Her father sends her and a retinue to the safety of a monastery in the Carpathians as the Russians march on their castle.
The first mention of vampires we get is within a song that is sung on their journey:
“’Tis a vampire! The wild wolf
Runs howling from the horrid thing!”
The song is cut short as brigands attack the travellers. Ultimately Hedwig and four guards are left alive when the attack is interfered with. The brigands’ leader is a Moldavian called Kotsaki and the man who interferes is his half-brother Gregoriska. Gregoriska is the elder and lives in a nearby castle with their mother, who clearly favours Kotsaki, whilst the younger leads the brigands from the forest. Interestingly, Kotsaki carries Hedwig to the castle and she likens the ride to Lenore in Bürger’s poem. Though often taken as a vampire poem itself, it isn’t, but it was famously quoted within Dracula.
Kotsaki moves to the castle and declares his love for Hedwig (warning that she will die if she gives her heart to another), but her heart is already given to Gregoriska, who shares her love – though neither declare it at first. News that her father has died gives her an excuse to keep Kotsaki at a distance.
Eventually Gregoriska liquidates his fortune and arranges to elope with Hedwig but Kotsaki obviously gets wind of this and – off page – brother confronts brother; Kotsaki is killed.
It is Kotsaki who haunts Hedwig as a vampire. At quarter to nine in the evening she feels a lethargy overcoming her and swoons onto her bed. She can hear footsteps approach her chamber and the door opening and then senses nothing but a throb of pain in her neck before falling into complete unconsciousness. In the morning she is exhausted (and likens this to exhaustion felt during her menstrual cycle), unnaturally pale and has something like an insect bite, a pinprick, over her carotid artery.
There is an identical incident the next night, and Hedwig confides in Gregoriska. They both realise it is a vampire – she recalls forty graves being opened in a cemetery during her childhood, and seventeen bodies bearing the signs of vampirism – “that is to say, their bodies were found fresh, rosy, and looking as if still alive;” They were all staked and cremated.
Gregoriska gets her “a twig of box consecrated by the priest and still wet with holy water”. This prevents the lethargy and stops Kotsaki approaching. Gregoriska has been given the holy sword of a crusader and forces his brother to admit that his death was not an act of fratricide but the younger brother had thrown himself onto the elder’s sword – in short a suicide. They force the corpse to march back to its grave (some distance away). He gives the younger brother the chance to repent, which is refused, and uses the sword to pin him onto the earth. However the effort (spiritual not physical) kills the elder brother and both end up buried together (“God’s servant keeping watch and ward over the Devil’s”). The association of the vampire with the Devil would be repeated by Dumas 2 years later when he wrote The return of Lord Ruthven. Hedwig has to rub grave earth splashed with the vampire’s blood onto the wound to keep her safe from Kotsaki in the future. She is left with the “mark” of those who survive an attack by a vampire – an unnatural paleness.
One of the interesting things about this is the use of tropes that are familiar from post-Stoker stories. The use of holy relics is explicit (and the twig of box appears to be a folk apotropaic; when I googled it as research for this I found the couplet “A twig of box, a lilac spray, Will drive the goblin-horde away” in Henry Van Dyke’s Eight Echoes from the Poems of Auguste Angellier), and Hedwig is clearly being preyed upon at the neck, although the description of Kotsaki doesn’t mention anything unusual about his dentition. The use of the Carpathians and the reference to Bürger lead me to speculate as to whether Stoker was aware of the story.
The story is, of course, a tragedy. The fact that the threat lasts longer than the duration of the story (hence the use of grave dirt, which is straight from Slavic folklore) and leaves a lasting mark on the victim (the pallor of her skin) lends the tale a wonderfully dark and potentially open ending.
A trademark is a symbol, word, name or device that is used to identify the source of a specific product and distinguish it from others. Trademarks prevent your competitors from passing off your product or service as their own without your permission. It is not completely necessary to register your trademark with the federal government, but a federally registered trademark can inform the public about who owns the rights to a product or service. In order to fully register your trademark, you must fill out an application. The information below will explain what is included.
The Actual Application
In order for your trademark application to be accepted, the U.S. Patent and Trademark Office (USPTO) will expect certain details. Your application should include:
- The applicant’s full name
- An address for correspondence
- A drawing or picture of the mark
- A full list of the services or goods that your trademark will cover
- Money to cover the filing fee
If you do not provide federal officials with the details listed above, your application will likely be rejected and your fee will be refunded.
Many trademark applicants prefer to utilize a paper application. In this case, you must include the information listed above in order to meet the minimum filing requirements. Once you do this, the USPTO will assign a serial number to your application and send you a filing receipt. Always review your receipt, and if you notice any errors, notify the USPTO as soon as possible.
Electronically filed applications must contain the same information in order to receive a filing date. At the time of filing, you will receive an e-mail summary along with a serial number. The office does not issue paper receipts for electronic applications, so it is imperative that you retain the summary for your own records. If your application does not include the required information, your serial number and filing date will be canceled immediately.
The Review Process
Once your application arrives at the USPTO, a clerk will examine the materials that you sent and determine whether or not you have submitted all of the requested items. If you did, the USPTO will classify your application according to the goods or services that you are attempting to trademark. After this occurs, many applicants will have to wait up to three months for a decision.
During this waiting period, a federally trained lawyer will be charged with reviewing your application. The examiner will decide whether your trademark is eligible for registration and whether your mark contains any generic words that cannot be trademarked as your own. If he or she notices any inconsistencies, you will be notified so that you can correct the erroneous information. If your application is judged as sufficient, it will be published in the Official Gazette (USPTO’s publication). If no one objects to your use of the trademark, your application will finally be registered once you begin using it.
Once your trademark is published in the Gazette, readers will have the opportunity to object to its use. This is a rare occurrence, and it usually only occurs if another person, business or organization is already employing your trademark. In order to avoid legal penalties, you will need to hire a trademark lawyer when defending your claim or resolving a dispute. If you’re successful, you will be allowed to use your trademark freely.
In short, filing a trademark application is fairly simple, as long as you remain organized and thorough. The key to success is to do your homework. Always be sure that no one else is utilizing the trademark. This will prevent a plethora of problems from occurring later on. You should also be sure to read the application closely and submit all of the required items. Once you do this, you will have no problem federally registering your trademark.
The content on our website is only meant to provide general information and is not legal advice. We make our best efforts to make sure the information is accurate, but we cannot guarantee it. Do not rely on the content as legal advice. For assistance with legal problems or for a legal inquiry, please contact your attorney.
(PhysOrg.com) -- A research team led by Nancy Speck, PhD, Professor of Cell and Developmental Biology at the University of Pennsylvania School of Medicine, has identified the location and developmental timeline in which a majority of bone marrow stem cells form in the mouse embryo. The findings, appearing online this week in the journal Nature, highlight critical steps in the origin of hematopoietic (or blood) stem cells (HSCs), says senior author Speck, who is also an Investigator with the Abramson Family Cancer Research Institute at Penn.
Because HSCs, found in the bone marrow of adult mammals, generate all of the blood cell types of the body, unlocking the secrets of their origin may help researchers to better manipulate embryonic stem cells to generate new blood cells for therapy.
“The ultimate goal for stem cell therapies is to take embryonic stem cells and push them down a particular lineage to replace diseased or dead cells in human adults or children,” says Speck. For instance, in theory embryonic stem cells could be tweaked in a lab to provide a patient with bone marrow failure a fresh supply of compatible HSCs.
To date, however, Speck says scientists have been unable to coax embryonic stem cells to become HSCs without significant genetic manipulations that are too risky for clinical therapies. First things first, Speck says: “You have to understand what's happening in the embryo.”
Previous studies hinted that HSCs originated from a small population of cells lining the blood vessels, called endothelial cells. But, it was unclear how endothelial cells transitioned to blood stem cells during early development.
Before joining Penn in September 2008, Speck, then at Dartmouth Medical School, led a team that confirmed that HSCs in bone marrow originate from endothelial cells and determined whether the activity of a protein called Runx1, which is known to be critical in the formation of blood cells, is responsible for this important transition.
First, the researchers inactivated the gene that codes for the protein Runx1 in the endothelial cells of mouse embryos. During development, some endothelial cells express Runx1, signaling the production of grapelike clusters of HSCs along the interior walls of several major blood vessels. Upon release from the vessel walls HSCs enter the blood circulation and travel to the fetal liver, and upon birth they relocate to the bone marrow.
By selectively blocking the ability of endothelial cells to express Runx1 during embryo development, the researchers halted HSC production, demonstrating that Runx1 is vital to the endothelial cell to HSC transition.
Next, Speck’s team shut off Runx1 expression in mouse embryos at day 11.5 of gestation -- a time when most newly born HSCs have detached from the vessel wall and migrated to the fetal liver. The researchers found that blocking Runx1 expression had no effect on HSC formation, suggesting that while Runx1 is required for the transition from endothelium to HSCs, the process is complete by the end of the 11th day of gestation.
The researchers also showed that at least 95 percent of all adult HSCs (and therefore almost all adult blood) originate in the endothelium, during this short window of time during development.
“This study helps illustrate a very important step in the transitional stage from embryonic stem cells to HSCs - the need to move through endothelial cells as an intermediary,” Speck says.
Understanding the location and developmental timeline of the origin of blood stem cells will help guide future efforts to coax embryonic stem cells to produce mature blood cells, she says.
Co-authors include Michael Chen and Brandon Zeigler from Dartmouth Medical School (Departments of Biochemistry and Genetics) and Tomomasa Yokomizo and Elaine Dzierzak from Erasmus Medical Center in Rotterdam, Netherlands.
Provided by University of Pennsylvania
You cannot taste them. You cannot see them. But scientists say they are there: traces of prescription drugs in the water that comes from many people's faucets.
"Everything from antidepressants to heart medication to birth control pills to caffeine" has been found in certain drinking water, said Dr. Brian Buckley, environmental scientist at Rutgers University in New Jersey.
In his lab in New Brunswick alone, Buckley has found acne medication, barbiturates, caffeine and birth control medication in the water system.
While most of the medicines we take are absorbed by our bodies, he said, traces do escape via human waste and are flushed into our treatment plants, winding up in the water supply.
While the long-term health risks are unclear, there is evidence that medicines in the water, as well as hormones and chemicals, have negatively affected frogs and fish.
"The concern is we don't know what these chemicals do in the body over a lifetime of exposure," Buckley said.
Utility companies say that medicines can be found in the drinking water, but at levels so low that there is little danger. They say the only reason people even know about it now is because the technology has been developed to detect minute traces.
"One could safely consume 50,000 glasses of water a day without any adverse health effects," said Alan Roberson, director of security and regulatory affairs at the Denver-based American Water Works Association, which advocates for improved water quality and supply.
Even though the traces are minimal, Buckley warns that it is possible there may be potential hazards associated with long-term exposure to small compounds over one's lifetime.
"It is probably better to be safe than sorry," Buckley said. "And, in addition, there may be drug-drug interaction, even though the concentrations are very low."
While the government does not require water treatment plants to test for pharmaceuticals, there was enough concern to justify Congressional hearings in September to discuss emerging contaminants in U.S. waters.
"I am very concerned," said Rep. Carolyn McCarthy, D-N.Y. "We don't know for sure if it's having an effect on human beings and that's what we're trying to find out."
Some researchers, like Buckley, say it's necessary to investigate the water supply; if prescription drugs take action on the body in pill form, they're likely to have some effect when absorbed through another medium like water.
There are ways to protect oneself. ABC News asked researchers to test a widely available water filter for the home. They found it greatly reduced the traces of drugs in the water.
And communities across the country are creating drop-off locations where people can bring expired drugs to be incinerated, preventing them from ending up in rivers and streams and contaminating the water supply.
"I used to flush unused Ibuprofen down the toilet rather than have my small children consume them," said Kirsten Calia, a mother from Connecticut. "But now I know that there are great environmental ramifications to this." | <urn:uuid:bc3bdbf9-5b72-43b6-8f23-ac7bb3b4ddb5> | {
"date": "2016-12-05T00:01:08",
"dump": "CC-MAIN-2016-50",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541426.52/warc/CC-MAIN-20161202170901-00256-ip-10-31-129-80.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9671908617019653,
"score": 2.703125,
"token_count": 618,
"url": "http://abcnews.go.com/Health/story?id=6040196&page=1"
} |
Poems for Birthdays
Birthdays are usually celebrated with a four-line ditty and a cake, but long before the Hill sisters composed the now ubiquitous birthday song (originally called "Good Morning to All") in the late nineteenth century, poets had been writing about birth. Thomas Traherne, for example, marked the occasion in "Salutation" by portraying the awe he felt at his own existence: "From Dust I rise / And out of Nothing now awake."
Many poets use their birthdays as a moment for retrospection, for looking back over the past, or imagining the future. Poems about birthdays are often poems about the passing of time, about age, and an opportunity for change. Joyce Sutphen, for example, writes in "Crossroads":
The second half of my life will be black
to the white rind of the old and fading moon.
The second half of my life will be water
over the cracked floor of these desert years.
Walter Savage Landor summed up 75 years in only four lines in the poem "On His Seventy-fifth Birthday":
I strove with none; for none was worth my strife,
Nature I loved, and next to Nature, Art;
I warmed both hands before the fire of life,
It sinks, and I am ready to depart.
William Blake wrote numerous poems where he imagined his own birth, among them "Infant Joy" and "Infant Sorrow," which contrast the joy of a parent at the birth of a new child to the sorrow the newborn feels upon entering this world. "Pretty Joy! Sweet joy, but two days old" coos a parent in "Infant Joy," against which the infant says in "Infant Sorrow":
My mother groaned, my father wept,
Into the dangerous world I leapt;
Helpless, naked, piping loud,
Like a fiend hid in a cloud.
Like Blake, the twelfth century Chinese poet Su Tung-p’o, wrote a poem about birth to comment on the society that the child would be entering. In his poem, "On the Birth of his Son," for example, he criticized the fact that the poor, no matter how intelligent, rarely rose to the top:
Families, when a child is born
Want it to be intelligent.
I, through intelligence,
Having wrecked my whole life,
Only hope the baby will prove
Ignorant and stupid.
Then he will crown a tranquil life
By becoming a Cabinet Minister.
Of course, poets have also been drawn to write about the birthday tradition of giving gifts. Sylvia Plath, for example, imagines that she's unworthy of her gift in her dark poem "A Birthday Present":
What is this, behind this veil, is it ugly, is it beautiful?
It is shimmering, has it breasts, has it edges?
I am sure it is unique, I am sure it is just what I want.
When I am quiet at my cooking I feel it looking, I feel it thinking
"Is this the one I am to appear for,
Is this the elect one, the one with black eye-pits and a scar?
Finally, Thom Gunn expressed a unique, perhaps Freudian birthday wish, when he wrote, in "Baby Song":
From the private ease of Mother’s womb
I fall into the lighted room.
Why don’t they simply put me back
Where it is warm and wet and black?
For poems on birth consider the following:
"Labor Pains" by Yosano Akiko
"Infant Joy" by William Blake
"Infant Sorrow" by William Blake
"The Angel that Presided O’er My Birth" by William Blake
"A Newborn Girl at Passover" by Nan Cohen
"Baby Song" by Thom Gunn
"Happy Birthday" by Ted Kooser
"Seal Lullaby" by Rudyard Kipling
"On His Seventy-Fifth Birthday" by Walter Savage Landor
"The Birthnight" by Walter de la Mare
"To Miss Charlotte Pulteney, in Her Mother’s Arms" by Ambrose Philips
"Morning Song" by Sylvia Plath
"The Birthday Present" by Sylvia Plath
"Crossroads" by Joyce Sutphen
"The Salutation" by Thomas Traherne
"Sweet and Low" by Lord Alfred Tennyson
"On the Birth of His Son" by Su Tung-p’o | <urn:uuid:e95a4ede-7fb5-4108-9da8-147f84b0d7e0> | {
"date": "2014-12-27T00:06:21",
"dump": "CC-MAIN-2014-52",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447549908.109/warc/CC-MAIN-20141224185909-00099-ip-10-231-17-201.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9411497712135315,
"score": 2.625,
"token_count": 952,
"url": "http://www.poets.org/poetsorg/text/poems-birthdays?page=1"
} |
One of the challenges of being a science journalist is conveying not only the content of a new scientific result but also the feel of what it means. The prominent article in the BBC about the new measurement by the LHCb experiment at the Large Hadron Collider [LHC] (reported yesterday at the HCP conference in Kyoto — I briefly described this result yesterday) could have been worse. But it has a couple of real problems characterizing the implications of the new measurement, so I’d like to comment on it.
The measurement is of how often B_s mesons (hadrons containing a bottom quark and a strange anti-quark, or vice versa, along with many quark/anti-quark pairs and gluons) decay to a muon and an anti-muon. This process (which I described last year — only about one in 300,000,000 B_s mesons decays this way) has three nice features:
- it can be well-predicted in the Standard Model (the equations we use to describe the known particles and forces, including the simplest type of Higgs particle)
- it is relatively straightforward to measure, and
- it is very sensitive to effects of as-yet unknown particles and forces.
Yesterday the LHCb experiment reported the evidence for this process, at a rate that is consistent (but see below) with the prediction of the Standard Model.
The worst thing about the BBC article is the headline, “Supersymmetry theory dealt a blow” (though that’s presumably the editor’s fault, as much as or more than the author’s) and the ensuing prose, “The finding deals a significant blow to the theory of physics known as supersymmetry.” What’s wrong with it? It’s certainly true that the measurement means that many variants of supersymmetry (of which there are a vast number) are now inconsistent with what we know about nature. But what does it mean to say a theory has suffered a blow? and why supersymmetry?
First of all, whatever this new measurement means, there’s rather little scientific reason to single out supersymmetry. The rough consistency of the measurement with the prediction of the Standard Model is a “blow” (see below) against a wide variety of speculative ideas that introduce new particles and forces. It would be better simply to say that it is a blow for the Standard Model — the model to beat — and not against any speculative idea in particular. Supersymmetry is by no means the only idea that is now more constrained than before. The only reason to single it out is sociological — there are an especially large number of zealots who love supersymmetry and an equal number of zealots who hate it.
Now about the word “blow”. New measurements usually don’t deal blows to ideas, or to a general theory like supersymmetry. That’s just not what they do. They might deal blows to individual physicists who might have a very particular idea of exactly which variant of the general idea might be present in nature; certain individuals are surely more disappointed than they were before yesterday. But typically, great ideas are relatively flexible. (There are exceptions — the discovery of a Higgs particle was a huge blow to the idea behind “technicolor” — but in my career I’ve seen very few.) It is better to think of each new measurement as part of a process of cornering a great idea, not striking and injuring it — the way a person looking for treasure might gradually rule out possibilities for where it might be located.
Then there’s the LHCb scientist who is quoted as saying that “Supersymmetry may not be dead but these latest results have certainly put it into hospital”; well… Aside from the fact that this isn’t accurate scientifically (as John Ellis points out at the end of the article), it’s just not a meaningful or helpful way to think about what’s going on at the LHC.
Remember what happened with the search for the Higgs particle. Last July, a significant step forward took place; across a large fraction of the mass range for the Standard Model Higgs particle, it was shown that no such particle existed. I remember hearing a bunch of people say that this was evidence against the Standard Model. But it wasn’t: it was evidence against the Standard Model with a Higgs particle whose mass was in a certain range. And indeed, when the rest of the range was explored, a Higgs particle (or something very much like it) turned up. Failure to find one variant of a theory is not evidence against other variants.
If you’re looking for your lost keys, failing to find them in the kitchen, living room and bedroom is not evidence against their being somewhere else in the house.
Similarly, the new result from LHCb is not evidence against supersymmetry. It is evidence against many variants of supersymmetry. We learn from it about what types of supersymmetry cannot be true in nature — we know which rooms don’t have your keys. But this is not evidence against supersymmetry in general — we still don’t know if your keys are elsewhere in the house… and we won’t know until the search is complete. Nature is what it is — your keys are wherever they are — and the fraction of your search that you’ve completed is not logically related to how likely your search is to be successful. It may be related to how optimistic you are, but that’s a statement about human psychology, not about scientific knowledge. The BBC article has confused a blow to the hopes and optimism of supersymmetry zealots with a blow to supersymmetry itself.
It’s also important to understand that despite the fact that some physicists and certainly the science media spend an inordinate amount of time talking about supersymmetry, the particle physics community actually spends a lot of time on other ideas, and also on testing very carefully the hypothesis that the Standard Model is correct. For good reason, a number of the most interesting results presented so far at the HCP conference, not just the one we’ve been talking about, involve precise tests of the Standard Model.
Now, in a bit more detail, here are a few of the scientific issues surrounding the article.
First, it’s important to notice that the measurement quoted yesterday is still very rough. Yes, it agrees with the prediction of the Standard Model, but it is hardly a precise measurement yet: speaking broadly, the fraction of B_s mesons that decay to a muon/anti-muon pair is now known to lie somewhere between 1 in 100,000,000 and 1 in 1,000,000,000. The Standard Model predicts something between 1 in 240,000,000 and 1 in 320,000,000. So the LHCb measurement and the Standard Model prediction are currently consistent, but a more precise measurement in future might change that. Because of this, we should be careful not to draw an overly strong conclusion. Many variants of supersymmetry and of other speculative ideas will cause a deviation from the Standard Model prediction that is too small for this rough measurement to reveal; if that’s what nature is all about, we’ll only become aware of it in a few years time.
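For readers who want to see the consistency claim spelled out, the comparison of the two bands is simple arithmetic. The sketch below (an illustration only — it checks that the intervals overlap, and is nothing like the experiment's actual statistical analysis) converts the "1 in N" rates into branching ratios:

```python
# "1 in N" rates converted to branching ratios; check whether the
# Standard Model band lies inside the measured band.
measured = (1 / 1_000_000_000, 1 / 100_000_000)      # ~1e-9 .. 1e-8
standard_model = (1 / 320_000_000, 1 / 240_000_000)  # ~3.1e-9 .. 4.2e-9

# Consistent if the (much narrower) SM prediction sits within
# the range allowed by the rough measurement.
consistent = (measured[0] <= standard_model[0] and
              standard_model[1] <= measured[1])
print(consistent)  # -> True
```

Because the measured band is roughly an order of magnitude wide while the prediction spans only ~30%, a future, sharper measurement could still land outside the Standard Model band — which is exactly the article's point.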
One serious scientific problem with the article is that it implies
- that supersymmetry solves the problem of what dark matter is, and
- that if supersymmetry isn’t found, then physicists have no idea what dark matter might be
Both of these are just wrong. Many variants of supersymmetry have at least one proposal as to what dark matter is, but even if supersymmetry is part of nature, none of those proposals may be correct. And even if supersymmetry is not a part of nature, there are plenty of other proposals as to what dark matter might be. So these issues should not be linked together in the way they are in the BBC article; one should not mistake propaganda (sometimes promulgated by supersymmetry zealots) for reality.
Another point worth remembering is that the biggest “blows against” (cornerings of) supersymmetry so far at the LHC don’t come from the LHCb measurement: they come from
- the discovery of a Higgs-like particle whose mass of 125 GeV/c² is largely inconsistent with many, many variants of supersymmetry
- the non-observation so far of any of the superpartner particles at the LHC, effects of which, in many variants of supersymmetry, would have been observed by now
However, though the cornering of supersymmetry is well underway, I still would recommend against thinking about the search for supersymmetry at the LHC as nearly over. The BBC article has as its main title, “Popular physics theory running out of hiding places“. Well, I’m afraid it still has plenty of hiding places. We’re not yet nearing the end; we’re more in the mid-game. [Note added: there were some new results presented today at the HCP conference which push this game a bit further forward; will try to cover this later in the week.]
One more scientific/linguistic problem: left out of this discussion is the very real possibility that supersymmetry might be part of nature but might not be accessible at the LHC. The LHC experiments are not testing supersymmetry in general; they are testing the idea that supersymmetry resolves the scientific puzzle known as the hierarchy problem. The LHC can only hope to rule out this more limited application of supersymmetry. For instance, to rule out the possibility that supersymmetry is important to quantum gravity, the LHC’s protons would need to be millions of billions of times more energetic than they actually are. The same statements apply for other general ideas, such as extra dimensions or quark compositeness or hidden valleys. Is this disappointing? Sure. But that’s reality, folks; we’re only human, and our tools are limited. Our knowledge, even after the LHC, will be limited too, and I expect that our children’s children’s children will still be grappling with some of these questions.
In any case, supersymmetry isn’t in the hospital; many of its variants — more of them than last week — are just plain dead, while others are still very much alive and healthy. The same is true of many other speculative theories. There’s still a long way to go before we’ll really have confidence that the Standard Model correctly predicts all of the phenomena at the LHC.
Probably the most frequently asked question about wireless LANs is how to improve their performance. Although the focus is usually on improving range, what people many times want is to get a better (or any!) wireless connection from a specific location in their home or small office.
There are actually a lot more options than you might think for attacking this problem, and this NTK will attempt to leave no stone unturned in the quest for a more satisfying wireless experience. What it won't cover, however, are techniques for wireless bridging, i.e. connecting multiple wired lans via wireless links. That info is presented in my Wireless Bridging NTK. I'll also be primarily focusing on indoor/short-range WLANs and leave the subject of long-range wireless LANs for another article.
NOTE: Please read references to access points (AP) or wireless routers as applicable to both kinds of products unless otherwise noted.
Before launching into improvement techniques, let's first spend a little time building a "mental model" to use in visualizing how wireless LAN signals travel through your home or office.
The simplest model is that of light - a naked flashlight bulb operating off one battery to be more exact! The analogy works well in an "open field" environment where there is a clear line of sight between the bulb (your Access Point or wireless Router) and your eye (your wireless-equipped laptop), but requires a little bit of tweaking for an indoor environment.
However, if you picture your home's walls and ceilings not as solid objects, but more like translucent panels with varying opacity, the resulting imagery is accurate enough for our purposes. The more panels between the bulb and your eye, the tougher it will be to see the light. The number and location of other "light" sources - 2.4GHz cordless phones, microwave ovens, etc. - will make it difficult, or even impossible to see the flashlight bulb instead of the interfering sources.
It's also helpful to visualize how the radio signal radiates from your access point or wireless router's antenna(s).
Figure 1: Simple dipole antenna radiation pattern
From Antennas Enhance WLAN Security by Trevor Marshall
Used by permission
Figure 1 shows the radiation pattern for the dipole-type antenna that comes with most access points and wireless routers. The red "donut" at the top is a 3D representation of the energy radiating from the antenna, which you should picture as sticking up through the "donut"'s hole. The circular plot at the lower left is an Azimuth plot which shows the energy pattern from a top (or bottom) view, while the right-hand plot is an Elevation (side) view. You should look for Azimuth and Elevation plots for any antenna that you are considering buying, since they give you essential information for determining whether an antenna will work in your intended application.
You can see that a dipole is an omni-directional antenna, since its energy pattern is equally strong over all 360 degrees around it. Note also that the pattern is not a perfect sphere, but is flattened slightly on the bottom and top. If the radiation pattern were a perfect sphere, i.e. spread out equally in all directions, the antenna would be a perfect isotropic radiator.
But since every antenna type concentrates radiated (or received) energy in some way, that concentration increases the radiated signal output or sensitivity to received signals. The energy concentration is referred to as gain, and is expressed in units of dBi (decibels relative to isotropic radiator). Given that the dipole is the simplest antenna type, it also has the lowest gain - about 2.15dBi (usually rounded up to 2.2dBi).
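To make the gain figures concrete, here's a small sketch converting dBi into a linear power ratio and adding antenna gain to transmit power to get effective isotropic radiated power (EIRP). The 15 dBm transmit power is an illustrative assumption, not a figure from any particular product:

```python
def dbi_to_linear(gain_dbi: float) -> float:
    """Convert antenna gain in dBi to a linear power ratio."""
    return 10 ** (gain_dbi / 10)

def eirp_dbm(tx_power_dbm: float, antenna_gain_dbi: float) -> float:
    """Effective isotropic radiated power: transmit power plus antenna gain."""
    return tx_power_dbm + antenna_gain_dbi

# A stock dipole (~2.15 dBi) concentrates energy by a factor of ~1.64
# relative to a perfect isotropic radiator.
print(round(dbi_to_linear(2.15), 2))  # -> 1.64

# Hypothetical 15 dBm (~32 mW) access point with a 2.15 dBi dipole:
print(eirp_dbm(15, 2.15))             # -> 17.15
```

The same conversion explains why swapping in, say, a 5 dBi omni is a meaningful upgrade: every 3 dB of extra gain roughly doubles the effective radiated power in the antenna's favored directions.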
I'll have more on antennas later, but with these basics under our belt, let's move on to some wireless improvement!
Location, location, location
The least-expensive thing you can do to make things better in WLAN-land is locate your Access Point or wireless Router as close as possible to the area where you need the best wireless connection. This may be easier said than done, however, especially if you're tied to a specific spot because of where your cable or DSL modem line enters your home or office. Of the two, a DSL-based connection is probably easier to move, since you may already have other phone jacks that are tied to the same phone line.
Once you've picked your location, the following rules of thumb will help with the final AP placement:
1) Higher is better than lower
2) On top (of a cabinet, bookshelf, desk hutch) is better than inside
3) Away from large metal objects (filing cabinets, steel shelving, etc.) is better than near
If you follow the simple practice of trying to "see" (remember the light bulb and translucent-walls analogy) your access point from wherever you want to use a wireless client, you may quickly find some obvious problems. You might also find some not-so-obvious ones too, like the aquarium that one home networker realized was killing his WLAN connection (water weakens high-frequency radio waves). Watch out, too, for utility rooms and attic spaces that might be lined with foil-backed insulation or metal firewalls or doors. Trying to figure out why you can't get a good signal out on your deck? Aluminium siding or window-screens could be the culprit!
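The "translucent panels" picture can be roughed out numerically with a back-of-the-envelope link budget: free-space path loss (the Friis formula) plus a fixed loss per obstruction. All the specific numbers below — the 15 dBm transmit power and the per-wall losses (3 dB for drywall, 8 dB for brick) — are ballpark assumptions for illustration, not measured values:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB (Friis formula)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def received_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, distance_m, wall_losses_db):
    """Very rough indoor link budget: FSPL plus a fixed loss per obstruction."""
    return (tx_dbm + tx_gain_dbi + rx_gain_dbi
            - fspl_db(distance_m, 2.4e9)
            - sum(wall_losses_db))

# Hypothetical: 15 dBm AP, 2.2 dBi dipoles at both ends, client 10 m
# away through one drywall (~3 dB) and one brick wall (~8 dB).
rssi = received_dbm(15, 2.2, 2.2, 10, [3, 8])
print(round(rssi, 1))
```

Even this crude model shows why one extra masonry wall or a water-filled obstacle can matter more than a few extra meters of distance: distance costs only 6 dB per doubling, while each dense "panel" takes a fixed bite out of the budget.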
The same goes for the locations where you use your wireless clients. You've probably noticed that you get a better signal when you orient your notebook computer in a certain direction, or move to another part of a room. I'll talk more about client-based solutions shortly, but don't overlook moving some furniture around or even moving your favorite chair, if that's where you'll be doing most of your wireless computing.
Get out your wallet
Once you've exhausted the no-cost option of just moving things around, the next steps involve spending additional money. The trick here is to spend wisely and get the maximum bang for your buck. This section will help you define the problem you're trying to solve, which should help guide your hard-earned money in the right direction.
AP or Client?
The first thing to consider is whether you should improve the access point or client end of the connection. The natural inclination is to focus on the AP, especially when your WLAN includes more than one client. But if you have only one, or maybe two wireless clients to deal with, don't rule out the client-based solutions in the next section . You may be surprised how much higher-gain antennas can cost for your AP, vs. how little new wireless client cards are going for!
And even in the cases where the cost of changing your wireless card or improving your access point are about equal, an upgrade of your client card to a current-generation model, may have other benefits such as multi-band access, or improved WEP-enabled speed.
The Centralized Approach
Most people, however,like to focus on beefing up their AP via either a different antenna or signal booster. Some gear-heads also play with boosting the transmit power on Atmel-based APs like the pre-v2.2 Linksys WAP11, SMC2655W, or Netgear ME102. Boosting the transmit power only is the least preferable way to go, because the "hacks" are generally not for beginners, but more importantly are only a one-way solution. Since wireless LANs require two-way communication between AP and client, you may not see the performance improvement you expect by making the signal stronger at only one end of the connection.
The advantage of the centralized approach is that - done correctly - it can benefit most, if not all of your WLAN's clients. This is a definite plus if you have a lot of clients to feed. The disadvantage is that it may improve your WLAN's range enough to make it more widely visible to clients that you don't want on your network.
If you're going to try an AP-boosting approach, your main choice is to use a higher-gain antenna, if your AP's antennas are attached via connectors. Not all 802.11b AP's have upgradeable antennas and if yours doesn't, you'll either need to choose another method or buy a new AP. If you decide to go this way, see the Upgrading your Antenna section.
NOTE: In general, you won't find single or dual-band 802.11a APs or wireless routers with removable antennas. This is due to tighter FCC rules for some of 11a's operating frequencies.
Another alternative for beefing up your AP that might take a little extra work is to use a signal booster. Although commonly used by Wireless ISPs for outdoor "backhaul" type links, Linksys recently brought signal boosting into the consumer market with its WSB24. Although FCC certified (and Linksys-supported) for use only with Linksys' popular WAP11 AP and BEFW11S4 wireless router, it can be used with any 2.4GHz AP if you're willing to provide your own cables. See the review for more info if you want to go this way.
There are two other approaches - wireless repeating and adding Access Points - but I'll save them for last. Instead, let's move on and take a closer look at how to improve the client side of things...
Client-side Helpers - Portables
In my tests of many 802.11b PC Card wireless client adapters, I've found little to differentiate the performance of one from another - aside from WEP-enabled throughput loss in older designs.
The reason for this is simple: the antenna in most 802.11b PC cards is awful! I'll describe just how awful later, but for now, I'll just say that investigating the following alternatives might be a quick way to enhance your wireless laptop's performance:
- Go dual band
Although it may seem a contradiction to what I just said, I have found a clearly better 802.11b PC card client in the form of any Atheros-based dual-band CardBus card. Maybe you don't need the 802.11a aspect of the card, but pop one of these babies into your laptop and you should see a significant difference in 802.11b performance vs. range, thanks to its superior radio and antenna design.
Note that the improvement holds for both the older dual (a/b) and newer tri (a/b/g) mode cards. Check out my reviews of the NETGEAR WAB501 dual-mode, or WAG511 tri-mode cards for more details and performance data. Other models that should offer equivalent performance are the Linksys WPC51AB and ORiNOCO Silver and Gold Dual-Band cards.
- XWing marks the spot
Asante's AL1511 AeroLAN XWing Wireless PC Card uses an effective antenna design that's so simple you wonder why someone else didn't come up with it sooner. My testing found significant, measurable improvement with the antennas in their unfolded vertical position.
- Switch to USB
Huh? Why would you want to switch to an adapter type that has such a bad reputation for slow performance and has to dangle on the end of a cable? Once again, the difference is in the antenna. The newer class of small USB adapters such as the Linksys WUSB12 can plug directly into your notebook's USB port and also sports a flip-up antenna.
I've even seen larger adapters like NETGEAR's MA101 Velcroed to the back of a notebook's screen, again to take advantage of the adapter's superior antenna.
- Go integrated
Although not the cheapest way to fix a flaky laptop connection, switching to a notebook that has integrated wireless capability should help boost your performance. The reason, again, is better antennas - usually built into the laptop's screen, with vertical orientation, too.
- Yes they do exist!
Although hard to find, there are PC card adapters that directly accept connection of higher-gain antennas. The ORiNOCO Gold card sports a proprietary miniature antenna connector in addition to its built-in stripline antenna. While that makes it convenient - the card still works with no external antenna attached - there aren't any little antennas available that just snap onto it.
Zoom's ZoomAir Model 4103 is a little easier to deal with, given its robust RP-SMA connector which accepts a little dipole "whip" antenna that comes with the card.
But since all wireless clients don't have to move around, I'll next look at some alternatives for desktop machines.
Client-side Helpers - Desktops
When improving the wireless connection to a desktop client there are a few more tricks that can be pulled out of the bag. The main thing to avoid, however, is desktop adapters that consist of a laptop-type PC card inserted into a PCI (or ISA) adapter. These will put your antenna in the absolute worst location, i.e. near the floor and behind a metal object (your PC). Depending on your room and desk's location, the antenna may also be facing an outside wall and away from your AP.
It's a must that any desktop adapter have an antenna on a cable long enough that the antenna can be placed where it is clearly visible from all points in your room. The antenna cable should preferably be attached to the adapter via a connector, which allows you to substitute a different antenna should you need to.
The other main approach to WLAN desktop connectivity is via a USB adapter. You may give up a little bit in maximum throughput because of the USB interface, but you'll gain the flexibility of being able to locate the adapter (and its built-in antenna) where needed to get an unobstructed "view". For this application, a cabled USB adapter is preferred over the newer miniature types that can plug directly into a USB port.
Less likely to be used due to their higher cost, are the newer Wireless Ethernet bridge products such as Linksys' WET11. These require that your computer already have an Ethernet port, but don't require the installation of a driver to get up and running. There's no real signal-enhancing advantage that these products provide, however. Same goes for using a Linksys WAP11 or other AP that supports AP Client mode, i.e. the ability to connect to an Access Point or wireless router.
All this talk about antenna placement reminds me that I need to tell you how to select and install them. So next section, please...
Upgrading Antennas - Factors to Consider
Although it's tempting to think that throwing amplifiers at the problem of weak signals will be a quick and "best" fix, experience has shown that using higher-gain antennas is often simpler and more cost-effective in improving problem WLAN connections. Consider the following points:
Amplifiers boost both signal and noise. Although this isn't really a problem in the transmit direction, amplified noise can swamp out a weak wireless client signal.
WLANs are two way systems. It does little good to have an Access Point with a strong transmitted signal if wireless clients don't have equivalent range.
For best results, amplifiers must be located as close as possible to the AP's antenna to avoid losing the amplifier's gain through loss in a long cable. This requirement can complicate an amplifier's installation beyond the point where many home networkers will want to deal with it.
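To see why cable loss matters, it helps to run the numbers. Below is a simplified link-budget sketch with illustrative values - the function and figures are mine, not specs for any real product:

```python
def eirp_dbm(tx_power_dbm: float, antenna_gain_dbi: float,
             cable_loss_db: float) -> float:
    """Effective radiated power: transmit power plus antenna gain,
    minus cable/connector loss (everything is in dB, so we just add)."""
    return tx_power_dbm + antenna_gain_dbi - cable_loss_db

# Typical consumer AP: ~15 dBm transmit power, 2.2 dBi dipole, short jumper.
print(round(eirp_dbm(15, 2.2, 0.5), 1))       # 16.7 dBm

# Add a 10 dB amplifier, but feed the antenna through a lossy 6 dB cable run:
# much of the amplifier's gain never makes it out of the antenna.
print(round(eirp_dbm(15 + 10, 2.2, 6.0), 1))  # 21.2 dBm, not the 27.2 you paid for
```

The arithmetic is why the amplifier belongs right at the antenna: every dB the cable eats comes straight off the bottom line.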
So let's say that you're convinced that using a higher-gain antenna is the way to go. Where do you start? First, your AP's antennas need to be attached via connectors. Although you'll find exceptions, consumer WLAN gear seems to have zeroed in on the two connector types shown below.
Figure 2: Popular WLAN Antenna Connectors
(Photos courtesy of HyperLink Technologies)
Linksys uses the slightly larger RP-TNC connector across their wireless line, and the smaller RP-SMA is widely used by other manufacturers of consumer wireless gear. By the way, the "RP" in each connector-type's name stands for "Reverse Polarity". These are special versions of each connector type that have the gender of their center contact reversed from that of the non "RP" version. This is done to satisfy Part 15.203 of the FCC regulations which says in part:
An intentional radiator shall be designed to ensure that no antenna other than that furnished by the responsible party shall be used with the device. The use of a permanently attached antenna or of an antenna that uses a unique coupling to the intentional radiator shall be considered sufficient to comply with the provisions of this section. The manufacturer may design the unit so that a broken antenna can be replaced by the user, but the use of a standard antenna jack or electrical connector is prohibited.
Translation: "We don't want folks changing antennas on their own and possibly violating FCC specs, so manufacturers can't use 'standard' connectors". Something obviously got lost in the translation, however, since "RP" based antennas and cables are now widely available, and it's unlikely that the FCC is going to come knocking at your door to shut down your wireless LAN!
TIP: The entire FCC Part 15 Rules are located here.
Now that you have your old antenna removed and know its connector type, how do you select a new one? There are four main factors to consider:
Frequency Range
Because of the way that radio waves work, antennas must be designed to work over specific frequency ranges. Generally, the higher the operating frequency, the narrower an antenna's usable frequency range.
For 802.11b, you need an antenna designed for 2.4GHz operation. This antenna won't, however, work for 802.11a purposes, even if you did manage to get it attached (remember that 802.11a gear doesn't usually allow you to change the antenna).
Gain
As I showed earlier, the simple dipole that comes with your AP has a gain of about 2.2dBi. And no, the two antennas on your AP don't provide a total of 4.4dBi of gain - they are there to support antenna diversity, which can improve your WLAN's performance through a different technique.
This is the main thing you're trying to improve by changing antennas and you'll probably spend the most time agonizing over this spec.
Pattern (Beamwidth)
This factor is as important as Gain in determining whether a specific antenna is right for you. It determines the directivity, or coverage area, of the antenna, and if chosen incorrectly can make your wireless connection worse!
Form Factor
It used to be that the antenna type determined the physical form factor of an antenna. However, you can now get both omni and directional antennas in different physical forms - especially handy for overcoming a spouse's objections to your wireless improvement plans.
Upgrading Antennas - Making the Choice
The art of antenna selection can get fairly involved when you're trying to get a long-range outdoor link to work. But for indoor or short-range (house to workshop, across a street, etc.) use, selection is fairly easy if you keep two rules of thumb in mind:
Rule of Thumb #1: It takes a 6-dB (dBi) increase in gain to double the range over what you get with a simple dipole antenna. Doubling would be a best case because your WLAN includes obstructions and other effects that reduce the actual range-boosting effect.
Rule of Thumb #2: The higher an antenna's gain, the higher its directivity - and the narrower its area of coverage. This effect is similar to what happens with binoculars or telescopes: the higher the binoculars' power, the narrower their field of view.
TIP: An antenna's directivity is also commonly referred to as its "beamwidth".
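Rule of Thumb #1 falls out of basic free-space propagation, where path loss grows as 20·log10(distance). The helper below is my own back-of-the-envelope sketch; real indoor ranges, with walls and obstructions, will fall short of it:

```python
def range_multiplier(extra_gain_db: float) -> float:
    """Best-case (free-space) range increase from added antenna gain.
    Every 6 dB of extra gain roughly doubles line-of-sight range."""
    return 10 ** (extra_gain_db / 20)

print(round(range_multiplier(6), 2))   # ~2.0, the range-doubling rule
print(round(range_multiplier(12), 2))  # ~3.98, i.e. roughly four times the range
```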
With these rules in mind, let's look at the types of antennas made for indoor use and their merits for specific applications...
(Photo courtesy of TRENDware)
Gain: 4 dBi
This is a typical omni-directional desk-mount antenna, with slight gain over a normal 2.2dBi dipole.
The primary reason you'd use this would be for flexibility in locating the antenna separate from the AP itself, with minor performance improvement from the 4dBi gain.
HyperGain HG2404CU 3.5dBi Ceiling Mount Antenna
Pattern: 90° vertical, 360° horizontal beamwidth
This antenna's radiation pattern extends in all directions around it, but only in a 90° "cone" below it. Good for keeping signals from escaping to upper floors, or out above a roof.
With no significant gain advantage over a dipole, its primary use is for aesthetics and controlling signal leakage.
TRENDnet TEW-IA06D 6dBi Directional Antenna
Type: Directional "panel" / "patch"
Pattern: 80° beamwidth, horizontal and vertical
The 6dBi gain should provide noticeable performance improvement. This antenna's directional characteristics will also help keep the signal inside the area in front of the antenna.
The 80° horizontal and vertical beamwidth would provide good coverage for remote rooms, but may miss areas in the same or adjacent rooms.
HyperGain Range Extender 8 dBi "Range Doubler" Omni Antenna
Gain: 8 dBi
Appropriately named, this omnidirectional antenna's 8dBi gain meets our first rule of thumb for range-doubling.
But its 16 in. size might not be to everyone's taste, and remember that you'll need two of these bad boys if your AP has dual antennas!
HyperGain Range Extender 8 dBi Flat Patch Antenna
Type: Directional "panel" / "patch"
Pattern: As shown
Although it has the same 8dBi gain as the omnidirectional model above, this "patch" style has a smaller 4.5 x 4.5 in size. The trade-off is its directional pattern which has relatively wide horizontal coverage for indoor use, but perhaps too narrow a vertical pattern for multi-floor coverage.
This type of antenna is about as powerful as you'd want to go for general indoor coverage.
Type: Directional "panel" / "patch"
Pattern: As shown
The 14dBi patch is more for outdoor use in point-to-point applications, or to reach a remote omni-directional station.
Although the high gain is tempting, its beamwidth is too narrow for general indoor coverage.
To sum up, you'll need a gain of at least 5dBi - and no more than about 8dBi - to see a noticeable performance improvement for general indoor use. The choice between omnidirectional and "patch" types depends on antenna placement relative to the coverage area and any signal-containment requirements. Now, that wasn't so hard, was it?
Before we leave the subject of antennas, let me show you why laptop wireless cards have such poor performance. As most any laptop user knows, a WLAN adapter card is highly directional - and Figure 3 shows just how much!
Figure 3: PC card antenna gain plot
(click on image for a larger view)
From Antennas Enhance WLAN Security by Trevor Marshall
Used by permission
This plot shows the relative sensitivity of a typical 802.11b PC card adapter, with the key points being:
- this is definitely not omni-directional performance
- the adapter would perform better if the laptop's body were vertically oriented!
The reason for the second point is simple: notebooks orient a WLAN adapter card's antenna in a horizontal plane, while most access points' antennas are vertically oriented. This also explains the significant performance advantage of built-in WLAN notebook adapters: better orientation (usually vertical when the laptop screen is raised) and better design (the antenna doesn't have to be squeezed into a 1 x 1.5 in. space at the end of the adapter).
Now that you're an expert on antennas, let's explore a wireless performance improvement alternative that has become available to consumers within the past year or so.
Wireless Repeating
A newer alternative for WLAN performance improvement - at least for those who don't have "enterprise-grade" budgets - is wireless repeating. This method has actually been around almost as long as access points themselves, but was available only in commercial-grade APs - with commercial-grade (>$300) pricing!
That all changed in the Fall of 2002, when D-Link introduced a free firmware upgrade to their DWL-900AP+ access point that brought the cost of 802.11b wireless repeating down to about $100! [See Wireless Repeating with the D-Link DWL-900AP+ NTK for more info.] D-Link has pursued this product category the most aggressively, introducing the lower-cost (about $70) DWL-800AP+ that omits some of the DWL-900AP+'s access point modes.
A wireless repeater is basically an AP with a special mode that re-transmits data received from other wireless stations over the same wireless channel. This means that all you need is a repeater and a power outlet to extend the range of a wireless LAN! Figure 4 illustrates the use of a repeater.
Figure 4: WLAN with wireless repeater
This magic comes at a price, however. When running in repeater mode, the AP's Ethernet port no longer passes LAN traffic, and becomes instead the only way to access the admin interface of the repeater. So you really don't want to locate repeaters in hard-to-reach places, because rebooting them when they inevitably lock up will be a major pain.
You also lose about half the normal throughput for each repeater, making a 5Mbps link into one running at about 2.5Mbps. This probably isn't as big a deal as it might sound if you mainly use your wireless LAN for email and web browsing with one or two clients. But if you have a very fast Internet connection or do a lot of file transfer / downloading, you probably won't be satisfied with a wireless repeater's performance.
Although some products allow repeaters to associate with each other - forwarding wireless data through multiple repeaters for longer range extension - others limit the repeat to one "hop" or repeater unit. Even in the latter case, however, you can have multiple repeaters associated with a single AP, which at least will let you extend your WLAN's range in a ring around the AP.
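The halving compounds with every hop, which is why long repeater chains get painful fast. A minimal model of the effect (my own illustration, ignoring other protocol overhead):

```python
def repeater_throughput(base_mbps: float, hops: int) -> float:
    """Each repeater hop retransmits every frame on the same channel,
    roughly halving the usable throughput."""
    return base_mbps / (2 ** hops)

print(repeater_throughput(5.0, 1))  # 2.5 Mbps through one repeater
print(repeater_throughput(5.0, 2))  # 1.25 Mbps through two chained repeaters
```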
Adding Access Points: Non-Ethernet
The last, and perhaps most dreaded, step from both a cost and complexity view is going to a multiple-AP setup. Not long ago, a multiple-AP setup meant running CAT5 Ethernet cable and AC power to each location where an AP was to be located. Because of the expense (and permanence) involved in running that cabling, advance planning in the form of a site survey had to be done first. Given the work involved, most home networkers unhappily put up with their WLAN's performance idiosyncrasies, or gave up in frustration and returned their wireless gear altogether.
But this option really isn't as hard to implement as it used to be due to the emergence of Ethernet alternatives that use other wires already running through your home's walls, i.e. phone and power.
HomePNA (HPNA) equipment uses your telephone wiring to carry digital data at a maximum raw bit rate of 10Mbps, which typically yields a useable data rate of about 5Mbps. HPNA doesn't interfere with the phone line's normal use for voice, data, FAX, or even DSL, and doesn't even require the phone line to actually be in service.
HomePlug products operate in a similar way to HPNA, but use AC power wiring instead of phone lines. HomePlug has a 14Mbps max raw data rate, but delivers useable bandwidth similar to HPNA's.
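The gap between raw and usable rates for both technologies boils down to a simple efficiency figure. The numbers below are the ballpark rates quoted above; the helper function is mine:

```python
def efficiency(usable_mbps: float, raw_mbps: float) -> float:
    """Fraction of the raw bit rate left over after protocol overhead."""
    return usable_mbps / raw_mbps

print(f"HPNA:     {efficiency(5, 10):.0%}")  # about 50% of the 10Mbps raw rate
print(f"HomePlug: {efficiency(5, 14):.0%}")  # about 36% of the 14Mbps raw rate
```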
Both technologies are used in a similar way to get a wired connection to a remote access point. Figure 5 shows an example setup.
Figure 5: Adding an AP using Powerline networking
The diagram shows the basic idea when using alternative networking methods, using a powerline-based example. A HomePlug to Ethernet Bridge connects your Ethernet LAN to the power line on one end, and a second Bridge is used to connect the remote AP. More APs can be added by duplicating the remote HomePlug Bridge / AP pairs.
An HPNA-based setup would look essentially the same, with HPNA to Ethernet Bridges (and connection to a common phone line!) substituted for the HomePlug Bridges and their AC connections.
There are many variations on this basic setup. You can use a separate router and AP, or even a wireless router with built-in HomePlug bridge such as SpeedStream's 2524 Powerline Wireless DSL/Cable Router. Speedstream also has its 2521 Powerline 802.11b Wireless Access Point, which combines an 802.11b AP and HomePlug-to-Ethernet bridge in a "wall-wart" sized package!
I prefer HomePlug based setups to those based on HPNA, mainly because they're more flexible - you can usually find a power outlet in every room, but not a phone jack. I also really like the mini-AP mentioned above because it's just so darned easy to move around! Who needs to spend a lot of time planning where to put additional APs when all you need to do is unplug it from one outlet and move it to another?
The downside of both HPNA and Powerline approaches, however, is that they won't work for everyone. Neither phone nor (especially) power wiring is exactly the best electrical medium for high-speed data signals (I think it borders on magic that this stuff works at all!). Folks with older homes especially may find that their HPNA or HomePlug based networks run too slowly to be of practical use, if they run at all! Range can be limited too, sometimes preventing their use for LAN extension to a backyard office or free-standing garage.
Adding Access Points: Ethernet-based
Which brings us back to the good old standby - Ethernet. CAT5 cabling may be a pain (and expensive) to run, but it's a low-tech, sure-fire way to have the fastest and most reliable network. It has the added bonus of eliminating the power outlet requirement if you use remote APs with Power Over Ethernet (POE) capability. POE puts DC power on the unused wires in a CAT5 cable, making it do double duty as both a data and power cable. Although most consumer-grade equipment doesn't include the POE feature, it's not that hard to roll-your-own POE solution.
Before I head for the Wrap Up, I'll pass along a few more tips to keep in mind when adding Access Points:
Mix it Up!
You don't have to use the same make and model as your main unit when adding APs to your WLAN. Using the same product is more a matter of convenience, since you won't have to learn the admin interface for multiple products.
Using Wireless Routers
Wireless Routers can be put into service as expansion Access Points, but require a little reconfiguration to work in this mode.
Putting it all together
If you've made it this far, you may be more confused than when you started, given all the options available. So let's see if I can boil it all down to a few basic points:
Understand the problem
Now that you know how radio waves travel, look for obvious problems from metal, interference sources, and high water-content items. Eliminate the problems you find if you can, and move stuff around if you can't. Relocation is the cheapest tool you have in improving wireless LAN performance.
Less may be more
If relocation doesn't get you what you need, resist the urge to throw the kitchen sink at the problem. You'll have better results if you use only as much improvement as you need, and your wallet will stay fatter, too!
Keep it simple
Higher-gain omnidirectional antennas are probably the simplest, most cost-effective improvement you can make, especially when used at the Access Point. If your AP has two antennas, make sure you upgrade both with the same make and model. If that doesn't get you what you need, try upgrading clients that are still giving you trouble.
Don't fear expansion APs
With the alternative methods of HPNA and HomePlug available, adding on access points doesn't necessarily mean fishing cables through walls. Sometimes just one additional, properly placed access point is all that's needed to fix what ails you, and the cost can end up being lower than you think.
So don't just complain about your wireless LAN's problems... now you can do something about them!
BIOP #1: Buckeye Battlefield
In 2004, the U.S. presidential election hung on the outcome of Ohio’s electoral votes. Ohio, much like Florida and Pennsylvania, is one of the perennial battleground states, and the winner of Ohio usually has a lock on the White House. Ohio’s place as a bellwether for the nation is not new, but it is unique: there are, as we will show, few states that mirror national trends as accurately. The key question is whether this special status will persist, or whether demographic, economic, and cultural shifts will displace Ohio as the nation’s premier swing state.
In this post, we compare Ohio’s voting in presidential, congressional, and gubernatorial elections to the nation as a whole and to some other key battleground states. Ohio’s political history can be usefully divided into four fifty-year periods: the Foundation era, 1803-1852; the Civil War era, 1853-1902; the Industrial era, 1903-1952; and the Postindustrial era, 1953-2002. These categories imply a fifth, contemporary era (beginning in 2003), which will be the primary focus of most of Buckeye Battleground. Of course, it is far too early to determine the political characteristics of this new era, especially four decades into the future.
Although crude, the four historical periods cover major developments that influence Ohio elections in the contemporary era. Here a geological metaphor is useful, with each of the four previous eras representing a layer of political “sediment” on which subsequent developments rest. Much as layers of sediment eventually harden into layers of rock, time has solidified the earlier political developments in the state. The more distant political developments serve as the “bedrock” of Buckeye politics, having important but less direct influence on present-day elections. Meanwhile, the more recent developments are less solid but more directly relevant to contemporary and future elections. The choice of Republican ballots for this comparison reflects a special feature of Ohio politics: the Buckeye battlefield has tended to vote slightly more Republican over this period of history. For example, since 1856 the Buckeye State has on average voted 50.6 percent Republican in presidential elections compared to 48.1 percent for the nation as a whole, a modest advantage of 2.5 percentage points.
In fact, Ohio’s record is perfect when it comes to electing Republican presidents: no Republican has ever reached the White House without carrying the Buckeye State. In all five cases when Ohio failed to vote for the presidential winner since 1856, it was because Ohioans favored the Republicans’ candidate but the Democrat won the White House. This has happened only twice since 1900.
Figure 1.1 illustrates this pattern by reporting the fifteen most competitive states, measured by the mean margin of victory in the 2004 and 2008 presidential elections (that is, the difference between the major party winner and loser). All these states showed an average margin of victory of 10 percentage points or less in the elections won by Republican George W. Bush and Democrat Barack Obama, with the states listed in declining order from the largest to smallest margin. Most analysts would agree that a victory of 10 percentage points or less constitutes a competitive election.
The first thing to note in figure 1.1 is the position of Ohio at the bottom of the list of states, with the average margin of victory being the smallest across these two close elections (3.3 percentage points). In fact, Ohio is lower than the average for the nation as a whole in these elections (4.9 percentage points). By this measure, the Buckeye State is one of the most competitive states in contemporary presidential elections, especially among large states (Ohio had 20 electoral votes in these elections). Other competitive states include Missouri (3.7 percentage points) and Florida (3.9 percentage points). Florida is also a large state (with 27 electoral votes), but note that the other large states in figure 1.1, such as Pennsylvania (21 electoral votes) and Michigan (17 electoral votes), were much less competitive. All of the rest of the states on this list had markedly fewer electoral votes. So, while Colorado was technically more in line with the national vote, Ohio held over twice as many electoral votes; that it is so close to the national average while carrying that much electoral weight is one reason it is considered the premier battleground state.
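For the curious, here is the arithmetic behind Ohio's 3.3-point figure in figure 1.1. The vote shares below are approximate official two-party results for 2004 and 2008; the code itself is purely illustrative:

```python
# (winner %, loser %) of the major-party vote in Ohio
ohio = [(50.8, 48.7),   # 2004: Bush vs. Kerry
        (51.5, 46.9)]   # 2008: Obama vs. McCain

def mean_margin(results):
    """Average winner-minus-loser margin, the competitiveness measure
    used in figure 1.1 (smaller = more competitive)."""
    return sum(w - l for w, l in results) / len(results)

print(f"{mean_margin(ohio):.1f}")  # roughly 3.3 points
```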
It turns out this has almost always been the case. As is often pointed out, no Republican has ever won the presidency without winning Ohio, but over time Ohio’s vote has also been a fairly good predictor of the national vote. Figure 1.2 plots the percent of the total presidential vote cast for Republican candidates nationally (dashed line) and in Ohio (solid line) from 1856 to 2008 (minor-party ballots are included in the calculation). Florida is included for purposes of comparison (dotted and dashed line).
This figure reveals the modest partisan bias of Ohio alluded to before: the Buckeye battleground has tended to tilt slightly toward the Republicans over this period of history. For example, since 1856 the Buckeye State has on average voted 50.5 percent Republican in presidential elections compared to 48.0 percent for the nation as a whole, a modest advantage of 2.5 percentage points. By comparison, Florida has been much less Republican, at 35.6 percent. The GOP bias of the Buckeye battleground can be seen in the slightly higher GOP vote in good Republican years (such as 1904, 1928, 1956, and 1984) than the national vote--but also in good Democratic years (such as 1912, 1936, 1964, 1992, and 2008). So, one explanation for the Ohio-GOP link is that Republicans usually do well in Ohio and when their presidential candidates do well nationwide, then they are also likely to win Ohio.
Ohio is less “red” than it used to be. The Republican advantage at the polls has declined over time: the average GOP presidential vote was 51.6 percent in the Civil War era (a 3.8 percentage point advantage); 50.1 percent in the industrial era (a 2.7 percentage point advantage); and an even 50.0 percent in the postindustrial era (a 1.0 percentage point advantage). (The Ohio presidential vote was closely associated with the national vote in the foundation era as well.) Thus, Ohio became more evenly divided in the partisanship of its vote even as it lost population and electoral votes. Alternatively, Florida (and the “new” South overall) had become more Republican than Ohio by the postindustrial era (52.5 percent). Indeed, John McCain’s 46.8 percent in 2008 was far below the historical performance of his party, while George W. Bush’s 50.8 percent in 2004 was close to the historical norm; both figures, however, closely resemble the national vote in those years. Of course, this suggests the continuing importance of Ohio as a battleground state.
Ohio’s bellwether status is not limited to presidential elections. In elections for the U.S. House of Representatives, Ohio’s aggregate congressional vote is a fairly good predictor of party fortunes in the national vote, as demonstrated in figure 1.5, which plots the Ohio and national Republican congressional vote from 1856 to 2006. In these elections, Ohioans voted 51.6 percent Republican compared to the national congressional vote of 47.5 percent. So the Buckeye State was a bit more Republican in congressional elections than in presidential contests. Over time, the Florida congressional vote has been much less associated with the national congressional vote. This has, of course, changed in recent years, so that the Sunshine State has come to resemble Ohio in this regard during the postindustrial and contemporary eras.
The small Republican advantage in the Ohio congressional vote increased from the Civil War era (50.8 percent) to the Industrial era (51.3 percent) to the Postindustrial era (52.4 percent). (Although the records of the House vote in the Foundation era are incomplete, the Ohio congressional vote was also associated with the national congressional vote.) From this perspective, the GOP congressional vote of 54.7 percent in 2010 was above the historical norm, while the 49.5 percent in 2008 was below it. Overall, Republicans do better in the congressional vote in Ohio compared to the nation, but given the enormous differences in district makeup across the nation (not to mention partisan gerrymandering), the continued link between the Ohio congressional vote and the national congressional vote is quite remarkable.
What about state elections? Figure 1.4 plots the Republican gubernatorial vote for Ohio and the nation from 1855 (when the first Republican ran for governor in the Buckeye State) to 2010.
From a national perspective, gubernatorial elections are far more complex phenomena than presidential elections. For one thing, governors are chosen at different intervals with varying term lengths, depending on the state. This factor has been especially notable in the Buckeye battleground. From 1855 to 1905, Ohioans elected their governors in “off-off” years--the odd-numbered years between the presidential and congressional elections. Between 1908 and 1956, Ohio governors were chosen in “even” years, during both presidential and congressional elections. And in 1958, the governor’s term was lengthened from two to four years and fixed on nonpresidential years. Similar problems prevent including the Florida gubernatorial results. For the purposes of figure 1.4, the Ohio gubernatorial vote is compared to other gubernatorial elections in the same year.
Overall, Ohioans voted 49.4 percent Republican for governor, compared to 45.0 percent of the national electorate (4.4 percentage point GOP advantage). The Republicans did best in the Civil War era (50.7 percent) and less well in the industrial (47.4 percent) and postindustrial (47.6 percent) eras. Republican Ken Blackwell’s 37.6 percent in 2006 was far below the performance of previous GOP candidates, while John Kasich’s 49 percent was more typical.
The Ohio gubernatorial vote is not as closely associated with the national gubernatorial vote. In this regard, the gubernatorial vote resembles the U.S. Senate vote--a pattern that makes intuitive sense given that both offices are elected statewide and not always in presidential years. Not surprisingly, Ohio has been a poor bellwether in predicting the partisan control of the nation’s state houses, matching the national result only about one-half of the time since 1855.
The party control of the Ohio governorship has changed across the eras. In the Civil War era, the GOP won 72 percent of gubernatorial elections (or 84 percent if the Civil War Unionist governors are counted as Republicans), but then just 33 percent in the industrial era. (The foundation era resembled the industrial era, with alternatives to Democratic candidates also winning about one-third of the time.) However, in the postindustrial era the GOP won 60 percent of the gubernatorial contests. In the contemporary era, both parties have won a gubernatorial contest.
Will Ohio remain a battleground? According to the 2010 Census, Ohio barely gained population (+1.6%) compared to the national average of +9.7% and an average of +3.9% for the Midwest as a whole. In fact, since the 1970s, Ohio has had nearly flat population growth, and Ohio’s clout in the Electoral College is beginning to decline. For example, in the Civil War Era, the Buckeye State averaged 23 Electoral Votes, 25 in the Industrial Era, and 23 in the Post-Industrial Era. The largest number of Electoral Votes occurred after the 1930 and 1960 censuses (26), but Ohio’s 20 Electoral Votes in 2004 were the lowest since the 1820 Census (16). Now, after the 2010 Census, Ohio will lose two more congressional seats and will have just 18 Electoral Votes (since a state’s Electoral Votes equal the number of U.S. Senators, always two for every state, plus the number of U.S. Representatives).
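The apportionment rule in the parenthesis above can be sketched in a few lines. The seat counts are those cited in the text; the helper function itself is illustrative, not from the book:

```python
def electoral_votes(house_seats: int) -> int:
    """A state's Electoral College votes: its U.S. Representatives plus two Senators."""
    return house_seats + 2

# Ohio's House delegation after the 2010 Census (16 seats, per the text above)
print(electoral_votes(16))  # 18 Electoral Votes
```

The same rule explains the floor of three Electoral Votes for even the smallest states, since every state has at least one Representative.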
On the other hand, as noted above, only Florida really matches the Buckeye State as a large competitive battleground. Pennsylvania also lost two seats, and Michigan actually lost population; moreover, both states are probably solid blue states headed into the 2012 election. Other large states such as Texas, Illinois, California and New York have not been competitive in some time. North Carolina and Virginia have gained Electoral Votes, but it is not clear if they will be competitive in 2012, since Obama’s win in these southern states marked a significant departure from their historical Republican support, which traces back over four decades. So Florida is perhaps the largest contested prize in a presidential election, but Ohio remains both large and competitive. In sum, the Buckeye State remains a key national bellwether and is likely to retain this status for at least one more decade.
FUKUSHIMA, Japan — The day after a giant tsunami set off the continuing disaster at the Fukushima Daiichi nuclear plant, thousands of residents at the nearby town of Namie gathered to evacuate.
Given no guidance from Tokyo, town officials led the residents north, believing that winter winds would be blowing south and carrying away any radioactive emissions. For three nights, while hydrogen explosions at four of the reactors spewed radiation into the air, they stayed in a district called Tsushima where the children played outside and some parents used water from a mountain stream to prepare rice.
The winds, in fact, had been blowing directly toward Tsushima — and town officials would learn two months later that a government computer system designed to predict the spread of radioactive releases had been showing just that....
After Japan’s Fukushima catastrophe, Canadian government officials reassured jittery Canadians that the radioactive plume billowing from the destroyed nuclear reactors posed zero health risks in this country.
In fact, there was reason to worry. Health Canada detected massive amounts of radioactive material from Fukushima in Canadian air in March and April at monitoring stations across the country.
The level of radioactive iodine spiked above the federal maximum allowed limit in the air at four of the five sites where Health Canada monitors levels of specific radioisotopes.
On March 18, seven days after an earthquake and tsunami triggered eventual nuclear meltdowns at the Fukushima Daiichi plant in Japan, the first radioactive material wafted over the Victoria suburb of Sidney on Vancouver Island....
Record levels of radiation have been recorded at the damaged Fukushima Daiichi plant reactor, just months after the nuclear accident resulting from the earthquake and tsunami in March.
The Tokyo Electric Power Company (TEPCO) reported that Geiger counters - a hand-held device used to measure radiation - registered their highest possible reading at the site on Monday.
TEPCO said that radiation exceeding 10 sieverts [10,000 millisieverts] per hour was found at the bottom of a ventilation stack standing between two reactors.
Al Jazeera's Aela Callan, reporting from Japan's Ibaraki prefecture, said the level recorded was "fatal to humans" but that it was contained just to the plant's site. However, scientists are planning to carry out more tests on Tuesday....
Don't you wish you could have bought Atomic Energy of Canada (AECL) from the federal government? Had you been the buyer, you would have pocketed $60 million from the transaction. Sound like a strange deal? It is, but it's the kind of deal that is normal in the weird world of nuclear energy.
SNC-Lavalin, which recently bought AECL from the federal government, put up $15 million for the company and then received $75 million in federal supports for research and development. For that, SNC gets most of AECL's $1.1 billion in assets....
What will your electricity bill look like in the next 5-10 years? How much more will you be paying? The answer to these questions comes in a recent report by our friends at the Pembina Institute. Higher costs are almost inevitable, but “investing in renewable energy today is likely to save Ontario ratepayers money within the next 15 years, as natural gas becomes more expensive and as the cost of renewable energy technology continues to decrease.”...
ABORIGINAL RIGHTS - Human Rights
Lyackson First Nation Protection of Rights
and Cultural Heritage
In collaboration with the Lyackson First Nation, we propose to conduct an archaeological inventory of Valdes Island to assist the development of a community heritage management plan for archaeological sites within their traditional territory. Funded by a Capacity Initiative Grant from the Department of Indian and Northern Affairs Canada, this research will help build the administrative database and personnel needed for the Lyackson First Nation to take a greater role in managing their ancestral heritage resources in British Columbia, while contributing new directions to our understanding of regional settlement-subsistence patterns on the Northwest Coast. This collaborative study therefore presents the opportunity to integrate problem-oriented archaeological research with First Nation community development and heritage conservation in British Columbia.
A) Development of a Lyackson First Nation Community Heritage Management Plan:
The objectives of the research relating to heritage management and community development involve:
Identifying the location, condition and significance of archaeological resources is a necessary prelude to developing effective, long-term management strategies to protect archaeological sites (Lipe 1978; Schiffer 1985). The development of a heritage resource management database for Valdes Island will involve: a) identifying the location and distribution of all archaeological site types; b) assessing their present condition; c) documenting past and current natural and cultural impacts; d) determining their sensitivity to potential future impacts; and e) evaluating their scientific and cultural significance. Drawing upon the results of the inventory, we will identify sensitive heritage areas on Valdes Island (such as sacred sites and rapidly eroding sites) and current and potential conservation concerns, and recommend practical measures for developing community stewardship of these heritage resources.
This study provides paid training for at least three local First Nation persons in archaeological field methods and modern heritage resource management issues in British Columbia. Classroom sessions will familiarize First Nations participants with government heritage legislation, provincial resource management agencies, databases and permit processes, the use of archaeological materials for public education, and stewardship ethics. Field research and laboratory sessions will be used as opportunities to instruct students intensively in archaeological methods and techniques. Participants will be trained to provincial Resources Inventory Committee (RIC) standards. Through further education, these individuals are expected to take a leading role in future community heritage work and the management of their traditional lands and resources.
B) Contextualizing Pre-contact Settlement-Subsistence patterns on Valdes Island.
Prior settlement-subsistence pattern research on Valdes Island focused primarily on the study of shell matrix sites located in the coastal environment (McLay 1999a, b). A major working assumption of this previous research was that all sites contained a component dating to the last thousand years, contemporary with the Late Phase (1400/1200-200 B.P.). In 2000, the archaeological inventory on Valdes Island proposes to: a) explore a broader range of environments across the island landscape; and b) define pre-contact settlement-subsistence patterns in a regional chronological framework.
The range of settlement activity and archaeological land-use patterns in interior environments remains relatively unexplored on the Northwest Coast. Prior interior sample survey on Valdes Island indicated that sites in the interior landscape are small, low in visibility, and very limited-activity in nature. An orientation toward the coastal environment is consistent with the maritime economy of Coast Salish culture. However, the Gulf Island Archaeological Survey in 1974 identified 46 interior sites across 17 greater Gulf Islands, and work at False Narrows Bluff identified a complex of archaeological sites distant from the coastal zone (Curtin 1989, 1998; Wilson 1989). Informants and archival research have also suggested the location of unrecorded archaeological sites. Further investigation of interior micro-environments on Valdes Island is necessary for both research and management purposes, and is required to present a more holistic understanding of land and resource use patterns on the island.
The only chronological information for Valdes Island derives from the salvage excavation of DgRv-9, near Blackberry Point (Apland 1980). To assess the significance of archaeological sites, dating information is required to place sites in a chronological framework. Cannon (2000) has provided an interesting paper indicating that site locations relate to sea level change on the coast. The specific relations between sea level changes and archaeological site location have not been definitively explored on the coast (Carlson and Hobler 1994), although it is suggested that, due to sea level rise, most sites date to the last thousand years. Collecting basal dates for archaeological sites on the coast would help date the colonization of Valdes Island and the establishment of village sites across the range of site size classes. Thompson's research suggests that only in the Late Phase did diversified site types occur, indicative of a collector pattern. However, Matson and Coupland (1995:363) suggest this pattern begins in the Locarno Beach Phase. The development of a collector strategy with permanent base camps may date even earlier, to the Charles Culture (4500-3500 B.P.), with evidence for permanent house structures at the Xay:tem and Maurer sites in the Upper Fraser Valley.
ii. Significance of proposed project.
The academic significance of the project centers on understanding pre-contact settlement patterns.
Interior land use is poorly understood. Although considerable forestry-related work has been conducted, little information has been compiled in a comprehensive manner to understand the nature of settlement-subsistence patterns or how First Nation populations utilized the interior landscape. In 1996, coastal survey identified that the majority of sites on Valdes Island were oriented toward the coastal environment (McLay 1999a, 1999b).
The academic significance of this archaeological research will be to: 1) enhance our archaeological database for the island interior landscape; 2) expand our understanding of the range of activities in the island environment; and 3) collect chronological information to observe change in settlement-subsistence patterns through time, including the early colonization of Valdes Island and questions of sea level change (Thompson 1978; Cannon 2000).
The social significance of the project is to address the need within the First Nation community to gain the capacity to protect their historical and sacred sites.
The Traditional Use Study directed by the Hul'quim'inum Treaty Group collected important ethnographic data that demonstrates the historical use of our lands and resources. However, the results of the study identified several notable gaps in the data collected: very little information exists on the use of interior forested environments, and very few traditional place names were recorded in areas where archaeological resources are abundant. It is therefore apparent that there existed a greater intensity and diversity of past uses of Valdes Island than can be documented through traditional knowledge alone. In collaboration with university archaeologists, we determined that only a fraction of the heritage resources on Valdes Island are currently identified. We require an enhanced database to prepare for the management of our lands and to protect all of our First Nation heritage sites. The social significance of the project is that guidelines will be created to establish a First Nation heritage agency that can implement protective measures for managing First Nation heritage resources on Valdes Island. The aim of this archaeological research is to develop a heritage management plan.
A First Nation community-based management plan can be used to recommend educational and economic opportunities and to promote First Nation heritage on Valdes Island, including potential directions for academic research, public education efforts and avenues for cultural tourism. The creation of a heritage management plan will further assist in the development of co-management plans for economic industries on Valdes Island, including the forestry, fisheries, recreational sport and tourism industries. The heritage management plan may also be used as a model by neighboring First Nation groups.
Our archaeological inventory of Valdes Island in 1996 used a very basic stratified sampling design, differentiating between the coast and interior landscapes. As a 100% sample of the 50m-wide strip of coastline is considered sufficient, it is the interior landscape which requires further inventory research. Prior survey research sampling the interior landscape utilized a minimum three-person crew to traverse a 50m-wide transect, oriented west to east, which cross-cut the range of interior micro-environments. This season, we propose to focus our attention on sampling the range of these micro-environments. Using computer TRIM maps, aerial photos and personal familiarity with the landscape, GPS will be used to locate and map the areas of micro-environments, including river valleys, lakes, marshes, cuestas, and forested areas. Non-linear transects will be used to sample each type of micro-environment. Non-linear sampling techniques have been developed and utilized effectively in forested environments in Mesoamerica for the past decade. Generally, transects are closely-spaced pedestrian traverses, utilizing a minimum three-person crew spaced 10-15m apart.
1b SHOVEL TESTING FOR LOW-VISIBILITY SITES IN FORESTED ENVIRONMENTS As many archaeological sites are of very low visibility in forested environments, a systematic program of shovel-testing will be used along these transects as a second stage of sampling. Using the median size of shell midden sites on Valdes Island (500m2), it is suggested that a shovel-test frequency of one test every 20m will be an effective technique to discover low-visibility and buried archaeological deposits.
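The proposed test interval can be sanity-checked with simple geometry. As a rough sketch (not the statistics program the proposal refers to), assume sites are roughly circular: a site of the median 500 m2 area is about 25 m across, so shovel tests spaced 20 m apart along a transect cannot step entirely over it:

```python
import math

def site_diameter(area_m2: float) -> float:
    """Diameter of an idealized circular site with the given area."""
    return 2.0 * math.sqrt(area_m2 / math.pi)

d = site_diameter(500.0)   # median shell-midden size cited in the text
interval = 20.0            # proposed shovel-test spacing in metres
print(round(d, 1))         # about 25.2 m
print(interval < d)        # True: a 20 m interval cannot straddle the site
```

Smaller or irregularly shaped sites would of course require a tighter interval, which is why the median size, not the minimum, anchors this estimate.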
2. CORING SHELL MATRIX SITES FOR BASAL CHRONOLOGICAL INFORMATION Following the methodology outlined by Cannon (2000), we propose to utilize a soil corer to collect deep soil samples for C-14 dating material from deep shell middens on Valdes Island. Cannon's (2000) research used a JMC Environmentalist's Sub-surface Probe. The JMC Probe measures only 4cm in diameter and could extract only enough carbon material for AMS dates; the probe was also not useful for collecting subsistence information.
The machinists at the Earth and Ocean Science Centre at UBC have helped design and fabricate a soil corer specific to this project, measuring 10cm in diameter, large enough to yield carbon material for conventional dates as well as subsistence information. Similar to the JMC Probe, a transparent internal plastic sleeve will extract intact stratigraphic information from selected shell matrix sites. The corer is driven into the subsurface using a post-hole driver and extracted with a mechanical winch. The intact soil core will be delivered to the laboratory for analysis. Carbon material will be extracted from the basal layer of the core to date the initial occupation of the site. Subsistence information will be analyzed to observe the range of species exploited and to observe change and continuity through time. This chronological and subsistence information will be used to assess the scientific and cultural significance of sites, to develop information for community and public education, and for academic and land claim research.
Funds are presently available for approximately 30 conventional C-14 dates or 12 AMS dates.
3. TEST EXCAVATIONS Test excavations at select sites will sample the range of site types to understand their nature. Features intended to be tested include a range of shell midden size classes, rock shelter habitations, cultural depressions, and a defensive earthwork.
Research access to Valdes Island is unrestricted. The majority of Valdes Island is Lyackson First Nation reserve, and Weyerhaeuser Ltd. has generously granted permission in advance to access its lands. Permission to access recreational private properties is in process.
iv. Relation of project to previous work or other work in progress.
In 1996-1997, I completed fieldwork on Valdes Island which explored how shell matrix site locations varied in relation to the environment. A major assumption of this research was that all sites tested (although testing was limited to the topmost layer) dated to the late pre-contact era, or Late Phase (1400/1200-200 B.P.). Sampling the size range of shell midden sites on Valdes Island to obtain basal dates will show whether sites date only to this era, as Thompson's (1978) settlement pattern research suggests that a greater range and number of sites date to the late pre-contact era. How settlement-subsistence patterns vary with time is currently unknown in the Gulf of Georgia region. Matson and Coupland (1995) suggest that the number of occupied sites in the region expands in the last thousand years; however, their sampling was largely biased toward excavated sites. A systematic sampling of basal dates will begin to build the database needed to understand historical changes in settlement variability through time. The research will also explore the range of site types, including the range of shell midden size classes (see the 1996 research), the ethnohistoric village sites at Cardale, Shingle and Porlier Pass, the defensive earthwork at Cardale Point, rockshelter habitations, and cultural depressions.
4. Disposition of materials collected
5. Financial support This research is supported by a Department of Indian and Northern Affairs Capacity Initiative Grant awarded to the Lyackson First Nation for $74,698.00.
6. Schedule of fieldwork and analysis Fieldwork intends to begin by July 3, 2000, and continue over a three month period until late August. Laboratory analysis and report writing will continue to January, 2001. A final report will be submitted to the Archaeology Branch by July 1, 2001.
7. Field personnel
8. Previous permits held by applicant
9. Applicant's resume
I certify that I am familiar with the provisions of the Heritage Conservation Act of British Columbia, and that I will abide by the terms and conditions listed on the front hereof, or any other conditions the Minister may impose, as empowered by said Act.
Date .. June 2, 2000
Cold Weather Tips
Table of Contents
- Learn more about how you can protect your health when it is extremely cold
- Stay safe while heating your home
- Getting help if your apartment, workplace, school or day care is too cold for comfort
- Getting help with heating bills and reducing energy costs
- Additional resources for help this winter
Learn more about how you can protect your health when it is extremely cold
- The World Health Organization recommends keeping indoor temperatures between 64 and 75 degrees Fahrenheit for healthy people. The minimum temperature should be kept above 68 degrees Fahrenheit to protect the very young, the elderly, or people with health problems.
- Watch out for signs of hypothermia. Early signs of hypothermia in adults include shivering, confusion, memory loss, drowsiness, exhaustion and slurred speech. Infants who are suffering from hypothermia may appear to have very low energy and bright red, cold skin.
- When outside, take extra precautions to reduce the risk of hypothermia and frostbite. In high wind conditions, cold weather-related health problems are much more likely. Be sure the outer layer of clothing is tightly woven to reduce body-heat loss caused by wind. If you will be spending time outside, do not ignore shivering - it is an important first sign that the body is losing heat and a signal to quickly return indoors.
- Since cold weather puts an extra burden on the heart, if you have cardiac problems or high blood pressure, follow your doctor's orders about shoveling or performing any strenuous exercise outside. Even otherwise-healthy adults should remember that their bodies already are working overtime just to stay warm, and dress appropriately and work slowly when doing heavy outdoor chores.
Stay safe while heating your home
- Take precautions to avoid exposure to dangerous levels of carbon monoxide.
- Carbon monoxide (CO) is a potentially deadly gas. It is colorless, odorless, tasteless and non-irritating. It is produced by burning fuels such as wood, oil, natural gas, kerosene, coal and gasoline.
- Symptoms of carbon monoxide poisoning are similar to the flu but do not include a fever. At lower levels of exposure, a person may experience a headache, fatigue, nausea, vomiting, dizziness, and shortness of breath. Exposure to very high levels of carbon monoxide can result in loss of consciousness and even death.
- For more information see Hazard Alert: Carbon Monoxide
- The rising costs of natural gas and oil heat may lead many New Yorkers to use alternative home heating methods to reduce their fuel bills this winter – but wood stoves, space heaters, electric heaters, kerosene heaters and pellet stoves can be dangerous unless proper safety precautions are followed. Learn more at Supplemental Space Heaters
- Never try to thaw a pipe with an open flame or torch, and be aware of the potential for electric shock in and around standing water. To keep water pipes from freezing in the home, let faucets drip a little, and open cabinet doors to allow more heat to reach un-insulated pipes under a sink or appliance near an outer wall. Keep the heat on and set no lower than 55 degrees.
Getting help if your apartment, workplace, school or day care is too cold for comfort
If you are cold in your building, first discuss the problem with the building owner, landlord, property manager or maintenance staff. Some regulations, codes or other legal protections may apply in your situation to ensure that adequate heat is available when temperatures dip. The actual temperature requirement will vary depending on what kind of space is involved. If additional help is needed:
- For rented homes, apartments or businesses that are below 68°F, call your local building department.
- For classrooms that are below 65°F, call the New York State Education Department at (518) 474-3906.
- For daycares that are below 68°F, call 1-800-732-5207.
- For resident areas in nursing homes (rooms, dining hall, activity areas, etc.) that are not maintained at a comfortable level, call the NYS Department of Health Division of Quality and Surveillance for Nursing Homes at (518) 408-1282.
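The reporting thresholds in the list above can be summarized in a small lookup table. The temperatures come directly from the bullets; the function and category names are illustrative, and space types not listed (such as nursing homes, which use a comfort standard rather than a fixed number) are outside this sketch:

```python
# Minimum temperatures (in degrees F) below which the text advises calling for help.
# Values are taken from the bullet list above; illustrative summary only.
MIN_TEMP_F = {
    "rented home/apartment/business": 68,
    "classroom": 65,
    "daycare": 68,
}

def too_cold(space: str, temp_f: float) -> bool:
    """True if the measured temperature falls below the threshold for that space type."""
    return temp_f < MIN_TEMP_F[space]

print(too_cold("classroom", 64))  # True: below the 65 degree classroom threshold
print(too_cold("daycare", 70))    # False
```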
Getting help with heating bills and reducing energy costs
- In an emergency, during both business and non-business hours, visit this web site to get local contacts.
- If you are having trouble paying your bills this winter, you can call or visit the NYS Home Energy Assistance Program (HEAP) website or Hotline at 1-800-342-3009.
- For consolidated information on assistance in paying heating bills and ways to make your home energy efficient you can visit the website Heat Smart New York or call the number toll free at 1-877-NYSMART (1-877-697-6278).
- For one-stop shopping information about steps to take to conserve energy, manage utility bills, and stay warm, visit the Public Service Commission's web site. To compare prices of various electric and natural gas providers in your area, see Power To Choose.
- If you would like to make your home more energy efficient, which would help reduce heating and cooling bills, find out your eligibility for services through the Weatherization Assistance Program (WAP) . To learn more about this program, call 1-866-ASK-DHCR (1-866-275-3427).
- To learn more about energy efficient products and practices to reduce energy use in your home, visit these web sites:
Additional resources for help this winter
- The booklet "Don't Be left in the Dark – Weathering Floods, Storms and Power Outages" has a lot of information about preparing for weather emergencies, getting through them and recovering after the event.
- Warm, inexpensive clothing may be available from local charities and thrift stores.
- Check the yellow pages in your phone book under "Charities" to find a listing of the organizations in your area that may be able to help with everything from clothes to food to weatherization services to other support.
- The Occupational Safety & Health Administration provides information on Worker Safety during winter storms.
Indeed, much of the evidence presented seems to be for an association of Darwin with certain ideas or social movements and for Darwin's personal reaction to slavery.

Darwin is considered to be Australia's northern gateway and in 2011 Australians were the No.

Darwin plagiarized the theory of evolution by means of natural selection;

Passengers are advised to not make their way to the terminal as the Darwin Airport Terminal will be closed.

These chapters engagingly introduce readers to the myriad experiences that prepared Darwin for Galapagos, including his time in the Andes, in the rainforests of Brazil, at the Falkland Islands, and at Tierra del Fuego.

It is anticipated that the project will illuminate such private relationships as that of Darwin with his elder surviving daughter, Henrietta.

Perhaps the last surprise is that Darwin was not born the middle-aged man with a beard, seen staring rather anxiously out from so many book covers and museum posters.

Collections manager Mathew Lowe said: "To have rediscovered a Beagle specimen in the 200th year of Darwin's birth is special enough, but to have evidence Darwin himself broke it is a wonderful twist."

Around 50 elderly mourners came to pay their respects to Mr Darwin, but his grandsons, Anthony and Mark Darwin, who disowned their father after his lies came to light, were not among them.

Jack Sennott, Darwin's Chief Operating Officer, added, "The Darwin team is very proud of the franchise we have built over the last five-and-a-half years and is excited about our future as a key component of Allied World."

JOHN and Anne Darwin were each jailed for more than six years yesterday for carrying out a "determined, sustained and sophisticated" pounds 250,000 fraud by faking Mr Darwin's death in a canoeing accident.

BACK-from-the-dead canoeist John Darwin turned himself in to police because he wanted to be reunited with his sons, a court heard.
Several adverse characteristics prevailing in the eighteenth and nineteenth centuries shaped the economic and social conditions in the Eastern Mediterranean region: under-population, marauding Bedouin clans, poverty, malarial sickness and lack of investment in efficient and scientific land utilisation.
The many descriptions of the region provided by travellers and foreign consuls at the time were generally not grounded in hard data or academic research. They failed to take into consideration that conditions which prevailed in some parts of Palestine did not pertain in others. To examine its economic and political development, Palestine must be divided into
- four longitudinal regions paralleling the Mediterranean Sea: (i) the coastal plain, (ii) the hilly region (the Negev and the south), (iii) Judea and Samaria in the central region and (iv) the Galilee in the north;
- the Jordan Valley which lies to the east of the Galilee and includes the Dead Sea and the Sea of Galilee (Tiberias) which forms part of the Great Rift Valley;
- the hills of Transjordan.
(see Y. Karmon, Israel: A Regional Geography, John Wiley & Sons London, 1981)
These regions differed from one another in respect of the ethnic origin, population growth and decline, agricultural development and economic vitality.
- To the extent that land in the coastal and other plains was capable of being cultivated, wild marauding Bedouin tribes present in these areas discouraged any permanent rural settlement or agricultural development. Consequently the lower flat lying areas were more or less desolate and unproductive. In addition:
- the Northern and central coastal plains were swamp-like and malaria-ridden as was the land around the Hula lake and the Lake of Galilee;
- the Southern coastal plains were inundated with sand dunes;
- Consequently, Arab urban and rural settlements tended to avoid the coastal plains and were to be found mainly in the hill country west of the Jordan River in Judea and Samaria and parts of the Galilee,
- Jews, prior to acquiring and developing the barren coastal plains, had a significant urban presence in and around Jerusalem, Hebron, Tiberias, Safad and Jaffa and in other smaller towns.
a. The Land and Its Indigenous Rural Population
For many centuries, travellers to Palestine described it as sparsely populated, poorly cultivated and widely neglected – an expanse of eroded hills, sandy deserts and malarial marshes. European consuls located in Jerusalem and Cairo during the 18th and 19th centuries confirmed these opinions.
Mark Twain, who had visited the Holy Land in 1867, described it as
“[a] desolate country whose soil is rich enough, but is given over wholly to weeds – a silent mournful expanse… Desolation is here that not even imagination can grace with the pomp of life and action… We never saw a human being on the whole route…there was hardly a tree or a shrub anywhere. Even the olive and the cactus, those fast friends of the worthless soil, had almost deserted the country” (Twain “Innocents Abroad” cited in Bard Myths and Facts AICE 2001, p. 30)
The Report of the 1937 Palestine Royal Commission quotes what it believed to be a truthful and unbiased description of the Maritime Plain as it existed in 1913:
”The road leading from Gaza to the north was only a summer track suitable for transport by camels and carts…no orange groves, orchards or vineyards were to be seen until one reached [the Jewish village of] Yabna [Yavne]….Houses were all of mud. No windows were anywhere to be seen….The ploughs used were of wood….The yields were very poor….The sanitary conditions in the village were horrible. Schools did not exist….The western part, towards the sea, was almost a desert. . . . The villages in this area were few and thinly populated. Many ruins of villages were scattered over the area, as owing to the prevalence of malaria, many villages were deserted by their inhabitants”. (Cmd. 5479 p. 233)
The Report also drew on contemporary descriptions of the economic situation in Palestine, written in the 1830s and supplied to the Commission by Lewis French, the British Director of Development:
“We found it inhabited by fellahin who lived in mud hovels and suffered severely from the prevalent malaria…. Large areas…were uncultivated… The fellahin, if not themselves cattle thieves, were always ready to harbour these and other criminals. The individual plots…changed hands annually. There was little public security, and the fellahin’s lot was an alternation of pillage and blackmail by their neighbours, the Bedouin”. (Cmd. 5479 pp. 259-260)
Meyer Levin, the American writer (1905 -1981) recounts in “My Search” that it was impossible to travel directly northwards from Tel Aviv to Netanya, some 25 km away without deviating a considerable distance inland because of the intervening marshland. The present-day route of the “old” Tel Aviv – Haifa road still reflects this.
Drawing on the reports of foreign travellers and early settlers (Oliphant), cartographers (Van de Velde), and foreign exploratory expeditions (the Palestine Exploration Fund (PEF)), Arie Avneri, in a detailed study, provides a description of the topographical and demographic conditions prevailing in the various regions of Palestine immediately prior to Jewish settlement.
(Arie L. Avneri, The Claim of Dispossession: Jewish Land-Settlement and the Arabs 1878-1948, Yad Tabenkin, Efal, Israel, 1982; “Avneri”)
For example, he notes the fertility of the soil but the sparseness of population and lack of agricultural development in the valleys of the Hula, Kinorot, and the Kishon, owing to their marshy and malarial conditions.
In the valleys of Beit-Shean, Jezreel, and Zevulun, located on the trade routes and where permanent human habitation was possible, Bedouin raids on the settlements – especially in drought years – discouraged any permanent Arab settlement.
Mount Carmel was also waste land. Its development was ruined by foreign and local wars, and its western slope was malaria ridden, all of which contributed to the abandonment of seventeen villages before Jewish settlers arrived in 1882 (Avneri, pp. 49-50).
The coastal area of Samaria (Shomron) starting at the foot of Mount Carmel and stretching south to the Sharon Plain was in a state of desolation and completely ravaged after the military campaigns of Napoleon and Ibrahim Pasha of Egypt (see Section 2 below).
The coastal Sharon Plain was poorly cultivated owing to the sandy nature of the soil and the marshlands created by the Alexander River and, further south, by sand dunes. Those villages which did exist, described in 1874 by C. R. Conder, were miserable and half in ruins, the villagers downtrodden and browbeaten by money-thirsty absentee landlords (Avneri, p. 53).
The Mountain Regions were varied in their population. Parts around Tulkarm were relatively well populated, providing a refuge from malaria and protection against Bedouin raiders. Nevertheless, internal feuds between village clans caused many villages to be destroyed, although their inhabitants tended to remain in the area. The lack of security, however, inhibited the fellahin from investing much effort in improving the soil conditions.
Villages lower down the mountain and closer to the sea, such as Auja, Sidna Ali, Ramadan, Kabani and Hadera, were scattered and thinly populated, because of the sandy soil, punctuated by swampy stretches.
Southern Judea and the Negev, although not plagued by malaria, were no better for agricultural use or permanent settlement. These regions lacked rain and were frequently drought ridden, and the soil was sandy, being often invaded by sand dunes.
By way of contrast, Gaza in 1886 was a town with a population of some 20,000 inhabitants (but see section 2 as to their place of origin). Its people were poor and lived mostly from trade with the Egyptians. In the narrow strip between the coastal sands and desert interior, some fellahin were found to be growing fruit, watermelons and vegetables.
b. Lack of Security for Persons and Property
During the first three decades of the 19th century, Palestine, like the remainder of the Ottoman Empire, was in a general state of decline and stagnation. Despite the ten years of Egyptian military occupation of Palestine between 1831 and 1841, which brought in its wake significant Egyptian migration (see section 2 below), the total indigenous population of the area did not exceed 250,000.
Under Ottoman rule the Arab male fellahin were extremely insecure, both personally and economically: they were liable to military conscription while at the same time suffering Egyptian and Bedouin incursions into their homesteads.
Bedouin terror prevented any significant permanent settlement in the principal plains of Palestine – the coastal plain and the Plain of Esdraelon – and compelled the Arab fellahin to retreat to the hill country of Judea and Samaria, which was more secure but less productive.
“According to Turkish registration books from 1596, it seems that the [coastal plain] served as home to Bedouins (Arab nomads) and Turkish and Kurdish nomads. In the eighteenth century, according to tradition, the amir (chief) of the Hawara Bedouins, who hailed from Bilad Hareth …in Eastern trans-Jordan, occupied part of the coastal plain by force. Hawara Bedouins did not cultivate the land; rather they occupied themselves with brigandage and inter-tribal wars. The outcome of their predatory activities was that Wadi Hawarith was described in the nineteenth century as abandoned, swampy, and malaria-ridden and that its passage was dangerous. The lands of the Wadi were described by the Ottoman governor of the Jerusalem region (1906-7) as abandoned lands that were sparsely inhabited by Bedouins”…
“Thus only a small part of the country was being used for agriculture. The towns of Palestine at the beginning of the last [19th] century are best defined as large villages each built on a small area and possessing a limited economic base and a small population of up to 10,000”
(Ruth Kark, ‘Changing Patterns of Land Ownership in Nineteenth-Century Palestine’, (1984) 10 J. of Historical Geography 357, 374; ‘Landownership and Spatial Change in Nineteenth Century Palestine in Transition from Spontaneous to Regulated Spatial Organisation’, Inst. of Geography and Spatial Organisation, Polish Academy of Sciences, Warsaw, 1983 (“Kark 1983”), pp. 185-187)
Even by 1895, after the rural population had descended from part of the hilly areas and had begun to settle in the plains, only ten per cent of the total area of Palestine was under cultivation (Kark 1983, p. 189), notwithstanding that Arab urban entrepreneurs and absentee landlords had begun to assemble large tracts of land for resale, following the Ottoman land reform legislation (see section 3.c.ii. below).
c. Fellah’s Economic Situation
Economically, the fellah was generally in a state of chronic poverty and indebtedness to his absentee landlord, seed suppliers and money lenders, owing to a number of interrelated causes: poor soil, lack of water, poor means of communication with the towns, unsuitable marketing arrangements, frequent crop season failures, and an antiquated land system. Even before the first modern Jewish settlement, established in 1855, Palestinian Arab society was already socially fragmented between the peasantry and landowning interests. This became exacerbated after the Ottoman land reform in 1858.
(Haim Gerber, The Social Origins of the Modern Middle East, Lynne Rienner, London, 1987, p. 75; “Gerber”)
Thus, while Palestine as a whole cannot be said to have been desolate and without population as claimed by the Zionists, its people were certainly not thriving. In the hilly areas, the Arab population, while not poverty stricken, was barely self-sustaining. In the plains and the valleys the travellers’ descriptions were a true reflection of the situation – vast desolate expanses devoid of permanent population, malaria infested and subject to the uncontrolled power of the nomadic Bedouin.
Aside from these environmental conditions there were a number of other factors that also contributed to the complex dynamics of the region.
Until recent years, the term Afro-Latin@ has primarily been used to refer to people of African descent in Latin America and the Caribbean. Along with “negro,” “afrodescendiente” and “afrolatinoamericano,” Afro-Latin@ served to name the constituency of the many vibrant anti-racist movements and causes that have been gaining momentum throughout the hemisphere for several generations, reaching global visibility at the UNESCO conference at Durban in 2001. Since the early 1990s, however, and in part as a result of intellectual cross-fertilization between North and South, the usage has gained increasing traction in the United States.
Considering the widespread counter-position of African Americans and Latin@s characterizing current racial discourse in the United States, Afro-Latin@s as individuals and as a group constitute a potential bridge across that ominous ethno-racial divide.
In Latin America the “afro-” prefix and other racial markers have long been of crucial importance in challenging the homogenizing obfuscations inherent in varied national and regional identity constructs. In the United States, too, “Afro-Latin@” has surfaced as a way of signaling the diversity encompassed within the overly vague idea of “Latin@” and of calling attention to the anti-Black racism within Latin@ communities themselves.
What does the term Afro-Latin@ mean in the U.S. context? Most obviously, Afro-Latin@ can refer to Latin@s of African descent. It is a group designation, the name for a community that historically has shied away from an explicitly racial identity but whose self-recognition has been gaining rapidly and whose past demonstrates a sense of tradition and shared socio-cultural realities. Since the European conquest, Blacks of Spanish-language backgrounds have built up a legacy—shared cultural values and expressions—traversing national particularities and differentiating itself from the group history of African Americans.
Even while focusing on the specific U.S. situation, however, the term Afro-Latin@ also applies to a transnational discourse or identity linking Black Latin Americans and U.S. Latin@s across national and regional lines. “Afro-Latin@” clearly signals what scholar Arjun Appadurai has termed a contemporary “ethnoscape” of global reach, because the real and potential interactions between African-descendant peoples in Latin America and Latin@s remain a central dimension of the Afro-Latin@ concept in the North American context. Indeed, it is thus increasingly important to resist the limitation of Afro-Latin@ to its supposedly national U.S. confines; the same could be said for the concept of “Latin@” itself, as well as African American.
What is perhaps most particular to the U.S. context is that here the Afro-Latin@ problematic, or “lo afro-Latin@,” also has to do with the cross-cultural relation between the Afro and the Latin@, which means, most saliently, the relation between Latin@s and African Americans. Moreover, Afro-Latin@ is at the personal level a unique and distinctive experience and identity, ranging as it does among and between Latin@, Black and North American dimensions of lived social reality. In their quest for a full sense of social identity, Afro-Latin@s are thus typically pulled in three directions at once and share a complex, multi-dimensional optic on contemporary society. In another essay, I have termed this three-pronged web of affiliations, taking my cue from W.E.B. Dubois, “triple-consciousness.”
I would suggest that an adequate conceptualization of the term Afro-Latin@ in the U.S. context needs to activate and encompass four theoretical coordinates: first, the group identity and cultural traditions of Afro-Latin@s themselves; second, the transnational discourse or ethnoscape linking U.S. Afro-Latin@s with their hemispheric counterparts; third, the historical and ongoing relation between Afro-Latin@s (and by extension all Latin@s) and African Americans; and, finally, the distinct lived experience of what it means to be Afro-Latin@ in the United States. This kind of analytical approach allows for a balanced understanding of the relation between the particularity and the generality of the U.S. Afro-Latin@ reality, as well as between the racial and cultural formations specific to U.S. society and those more characteristic of Latin American and Caribbean home countries.
If the rising interest in “lo Afro-Latino” in its U.S. manifestation signals the emergence of a new area of intellectual inquiry and political struggle, as seems to be the case, then navigating these complexities promises to be of paramount importance. Such a multidimensional and nuanced Afro-Latin@ concept poses a needed challenge to the overly circumscribed theoretical frameworks of both Latin@ and African American studies, as well as, more broadly, those prevailing in Latin American studies and even Diaspora studies. In the academic context, the challenge means that all of these fields and disciplines stand to be nourished and impelled in new directions by the sociological and cultural linkages implied in the study of the Afro-Latin@ experience.
At a more practical, grass-roots level, anti-racist organizing and coalition-building take on a new look and potential by the very situation of U.S. Afro-Latin@s as a collective human bridge between the two “largest minority groups” and between millions of African-descendant people throughout the hemisphere. Notably, in the ongoing immigrant struggles Latin@ immigrants and African Americans are often counter-posed against each other in adversarial terms; the social presence of Afro-Latin@s, though generally unacknowledged, is living evidence of the fallacy of that misleading and divisive assumption. For Afro-Latin@s demonstrate, especially when their bridging role is grasped in its full complexity, that many Latin@ immigrants are Black, and that many Black people in the United States today are Latin@s.
Juan Flores is a professor at NYU and CUNY. The present reflection is based on ideas I have developed over the past five years in my seminar on “Afro-Latino Culture and History” at Hunter College and the Graduate Center of the City University of New York, and which I first presented in an invited lecture, “Afro-Latinos on the Color Line,” in the Latino Cultures Seminar at Harvard University on April 14, 2003.
Skills and Ideas Taught: Chronopticon helps players understand relationships between movements of the earth, sun, moon, and stars, as well as how their motions relate to the passage of time.
Goal or Challenge: Players must guide the game heroes, Tim and Moby, from the 19th century back to the present day.
Primary Audience: Students in grades 5-12.
Assessment Approach: Chronopticon records the number of tries (up to nine) players take to solve each of the four problems per level.
Description: Chronopticon is a discovery game in which players must figure out how to use a 19th century time machine. With minimal clues and some conceptual priming from the Chronopticon’s eccentric inventor, players are presented with a steampunk dashboard containing a model of the earth-moon-sun system. Manipulating the physical position of the model’s celestial bodies advances time to a corresponding degree.
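The dashboard mechanic — moving a model body advances simulated time by a proportional amount — reduces to simple proportions. The sketch below is illustrative only, not the game's actual code: the function names are invented here, while the period constants are standard astronomical values (one Earth spin per day, one lunar orbit per synodic month).

```python
# Illustrative sketch of position-to-time mapping: rotating a model body
# through some angle advances simulated time by the same fraction of that
# body's period. Function names are hypothetical, not from the game.

SYNODIC_MONTH_DAYS = 29.53   # one full Moon orbit ~ one lunar month
EARTH_DAY_HOURS = 24.0       # one full Earth spin ~ one day

def hours_advanced_by_earth_spin(degrees: float) -> float:
    """Turning the model Earth by `degrees` advances time proportionally."""
    return degrees / 360.0 * EARTH_DAY_HOURS

def days_advanced_by_moon_orbit(degrees: float) -> float:
    """Moving the model Moon by `degrees` along its orbit advances days."""
    return degrees / 360.0 * SYNODIC_MONTH_DAYS

# A quarter turn of the Earth model advances six hours;
# a quarter of the Moon's orbit advances about a week.
print(hours_advanced_by_earth_spin(90))   # 6.0
print(days_advanced_by_moon_orbit(90))    # ~7.38 days
```

The same proportionality runs in reverse for the puzzle itself: to reach a target date, the player must work out how far each body needs to move.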
Game Engine: Adobe Flash
Operating System: Windows (Web delivered)
Platform: Personal computer
Special Hardware: None
This database contains World War II draft registration cards from multiple registrations filled out by men in select states aged 18–44.
The U.S. officially entered World War II on 8 December 1941 following an attack on Pearl Harbor, Hawaii. About a year before, in October 1940, President Roosevelt had signed into law the first peacetime selective service draft in U.S. history because of rising world conflicts. Multiple registrations held between November 1940 and October 1946 signed up more than 50 million American men aged 18–45 for the draft.
Cards in This Database
This database contains images and indexes for registration cards filled out by men born between the years of 1898 and 1929 from Arkansas, Georgia, Louisiana, and North Carolina. The following states are also found in the index with a link to the images available on Fold3:
- New Mexico
- West Virginia
- District of Columbia
- Virgin Islands
More cards will be added from other states as they become available. The cards are potentially valuable sources of genealogical and family information, with details that can include:
- serial number
- address (some ask for mailing address as well)
- place of birth
- country of citizenship
- employer’s name
- place of employment (address)
- name and address of person who will always know registrant’s address, relationship to registrant
- description: race, eyes, weight, complexion, hair
- year of registration
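For anyone transcribing these cards, the field list above maps naturally onto a simple record type. The sketch below is our own modeling choice, not the database's actual schema: field names are invented, and everything beyond the name is optional because transcriptions vary from card to card.

```python
# A hypothetical record type for a transcribed registration card, based on
# the fields listed above. Only `name` is required; replacement cards for
# destroyed originals may carry nothing else.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftCard:
    name: str
    serial_number: Optional[str] = None
    address: Optional[str] = None
    place_of_birth: Optional[str] = None
    citizenship: Optional[str] = None
    employer: Optional[str] = None
    employment_address: Optional[str] = None
    contact_person: Optional[str] = None   # person who will always know registrant's address
    description: Optional[str] = None      # race, eyes, weight, complexion, hair
    registration_year: Optional[int] = None

# A sparse replacement card versus a fully transcribed one:
sparse = DraftCard(name="John Q. Example")
full = DraftCard(name="John Q. Example", registration_year=1942,
                 place_of_birth="Little Rock, Arkansas")
```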
The collection includes some replacement cards for registration cards that were destroyed. These cards list a name.
(Above: The Moon floats magically in the night sky and is our constant companion on a yearly journey around the Sun)
Our solar system is home to a cast of captivating planets, each with its own set of unique characteristics and each providing evidence of God's masterful creativity and unimaginable scope. However, there are many objects in our midst that exist largely unknown, overshadowed by the planets they patronize or tucked away in the most remote regions of the solar system. A handful of these objects deserve special recognition for their uncommon allure, though let us begin with a very familiar and adoring face...

Moon - Also called "Luna" or even "Selene", the Moon maintains an orbit around the Earth at a distance of approximately 239,000 miles. It is one of the solar system's largest planetary satellites and is composed of a variety of primary elements. Besides the Earth it is the only celestial body upon which humans have trod and has been an object of great cultural significance (Moonlight Sonata and "Shoot for the moon", for example).

Ganymede - The king of Jupiter's moons, Ganymede has the largest diameter of any moon in the solar system (3,270 miles) and is larger than both Pluto and Mercury. Perhaps not surprisingly, it is the only planetary satellite besides our Moon that can be seen with the naked eye (under optimal conditions).

(Above: Jupiter's moon, Io, is the most massive moon in the solar system and resembles a block of moldy cheese more than it does a sphere of molten silicate rock)

Io - Another of Jupiter's 63 documented moons, Io is the fourth largest moon in the solar system and also the most massive*. The wide variations in color on Io's surface are likely the result of extreme volcanic activity which is constantly reshaping the terrain. In fact, Io is the most volcanically active body in the solar system.

* The size of an object does not necessarily tell you anything about its mass (i.e. the amount of "stuff" contained within it). A beach ball can be several times larger than a bowling ball, but the latter is easily more massive. Just try dropping both of these items on your foot and you'll appreciate how the two concepts differ.

Miranda - The smallest of Uranus' major moons (293-mile diameter), Miranda is a patchwork of varying terrain seemingly broken up and haphazardly reassembled by intense geological activity. While the catalyst of this activity is not entirely clear, its effects on the tiny moon's surface are quite striking. Miranda is home to one of the solar system's highest cliffs, a 12-mile-high escarpment known as Verona Rupes.

(Above: A giant space sponge or one of the solar system's most unusual objects? Say hello to Saturn's moon, Hyperion)

Hyperion - One of the largest non-spherical bodies in the solar system (200 miles in diameter at its longest dimension), Hyperion is an oddity in more ways than one. This moon has an appearance not unlike a sponge, though no one is quite sure why. As it travels around its host planet of Saturn, its highly irregular shape is at least partly to blame for a rotation best referred to as "chaotic".

90377 Sedna - The most distant natural body that has been observed in the solar system, Sedna follows a highly elliptical orbit which takes it nearly 86 billion miles away from the Sun (compared to 93 million miles for Earth). This object has no official classification, but is not unlike Pluto and other so-called "dwarf planets". Although 900 times further from the Sun than Earth, Sedna is still within the gravitational influence of our beloved daytime companion.
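The "900 times" comparison in the Sedna entry can be sanity-checked directly from the two distances quoted (86 billion miles at Sedna's farthest, 93 million miles for Earth):

```python
# Quick arithmetic check of the distance ratio quoted for Sedna.
sedna_miles = 86_000_000_000   # Sedna's farthest distance from the Sun
earth_miles = 93_000_000       # Earth's mean distance from the Sun
ratio = sedna_miles / earth_miles
print(round(ratio))  # 925
```

The aphelion figure gives a ratio of roughly 925, consistent with the article's round "900 times" characterization.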
There may be no other time in life that is more complex and turbulent than adolescence. It is a transition stage, where all doubts and uncertainties are manifested and during which, nevertheless, crucial decisions must be made about the future.
This unique work has been developed for adults to know and be able to understand the new generations, as well as for young people to be able to understand their elders. The author, Julián Melgosa, a psychologist and recognized communicator, describes for us the causes of the generation gap, while offering practical advice for bridging this distance, which can often seem impossible.
The book comprises 8 chapters, distributed over more than 190 pages. All of the information is accompanied by photographs, charts, and diagrams which help organize ideas and facilitate understanding.
The speech recognition threshold (SRT) is an important measure, as it validates the pure-tone average (PTA), assists in the diagnosis and prognosis of hearing impairments, and aids in the identification of non-organic hearing impairments. Research has shown that in order for SRT testing to yield valid and reliable measures, testing needs to be performed in the patient's native language. There are currently no published materials for SRT testing in the Samoan language. As a result, audiologists are testing patients with English materials or other materials not of the patient's native language. Results produced from this manner of testing are confounded by the patient's vocabulary knowledge and may reflect a language deficit rather than a hearing loss. The present study is aimed at developing SRT materials for native speakers of Samoan to enable valid and reliable measures of SRT for the Samoan speaking population. This study selected 28 trisyllabic Samoan words that were found to be relatively homogeneous in regard to audibility and psychometric function slope. Data were gathered on 20 normal hearing native speakers of Samoan and the intensity of each selected word was adjusted to make the 50% performance threshold of each word equal to the mean PTA of the 20 research participants (5.33 dB HL). The final edited words were digitally recorded onto compact disc to allow for distribution and use for SRT testing in Samoan.
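The equalization step described above — shifting each word's intensity so its 50% point coincides with the 5.33 dB HL mean PTA — can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual procedure or code: the logistic form of the performance-intensity function, the slope value, and the example measured threshold are all assumptions; only the 5.33 dB HL target comes from the abstract.

```python
# Sketch of threshold equalization, assuming a logistic psychometric
# function. Slope and the 8.10 dB HL example threshold are invented;
# 5.33 dB HL is the mean PTA of the 20 participants.
import math

TARGET_DB_HL = 5.33  # mean pure-tone average of the normal-hearing listeners

def percent_correct(level_db: float, threshold_db: float, slope: float = 0.5) -> float:
    """Logistic performance-intensity function: crosses 50% at threshold_db."""
    return 100.0 / (1.0 + math.exp(-slope * (level_db - threshold_db)))

def gain_to_apply(measured_threshold_db: float) -> float:
    """dB boost (or cut, if negative) moving a word's 50% point to the target."""
    return measured_threshold_db - TARGET_DB_HL

# A word whose 50% threshold was measured at 8.10 dB HL needs ~2.77 dB of
# gain; after the boost, its 50% point sits at the 5.33 dB HL target.
print(round(gain_to_apply(8.10), 2))   # 2.77
print(percent_correct(8.10, 8.10))     # 50.0
```

Applying the per-word gains before recording the final compact disc is what makes the list psychometrically homogeneous: every word then reaches 50% recognition at the same presentation level.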
College and Department
David O. McKay School of Education; Communication Disorders
BYU ScholarsArchive Citation
Newman, Jennifer Lane, "Development of Psychometrically Equivalent Speech Recognition Threshold Materials for Native Speakers of Samoan" (2010). All Theses and Dissertations. 2214.
speech audiometry, speech recognition threshold, SRT, homogeneity, psychometric performance-intensity function, word lists, materials, Samoan, languages
Exploring Communication Options
Communication is at the heart and soul of our lives. Children with hearing loss may build their communication skills using one or more of the communication options described in this section. To help you get started learning more about these options, let's take a look at each one.
This approach encourages children to make use of the hearing they have (called residual hearing) using hearing aids or cochlear implants. Speechreading, sometimes called lipreading, is used to supplement what's detected through residual hearing. In this approach, children learn to listen and speak but do not learn sign language (described below).
A key element of this approach is teaching children to make effective use of their residual hearing, either via hearing aids or a cochlear implant. Therapists work one-on-one with the child to teach him or her to rely only on listening skills. Because parent involvement is an important part of the auditory-verbal approach, therapists also partner with parents and caregivers to provide them with the skills they need to help the child become an auditory communicator. In this approach, neither speechreading nor the use of sign language is taught.
In this system, children learn to both "see" and "hear" spoken language. They focus on the movements that the mouth makes when we talk. This is combined with: (a) eight hand shapes (called cues) indicating groups of consonants, and (b) four positions around the face, indicating vowel sounds. Some sounds look alike on the lipssuch as "b" and "p"and others can't be seen on the lipssuch as "k." The hand cues help the child tell what sounds are being voiced.
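The handshape/position scheme can be pictured as a small lookup table. The group assignments below are invented placeholders for illustration only — they are NOT the real Cued Speech chart — but they show the key idea: two sounds that look identical on the lips, such as "b" and "p", receive distinct cues.

```python
# Toy illustration of cueing: each consonant maps to one of eight
# handshapes, each vowel to one of four face positions. These particular
# assignments are hypothetical, not the actual Cued Speech system.
HANDSHAPE_GROUP = {  # consonant -> handshape number (1-8), placeholder values
    "b": 4, "p": 1, "k": 2,
}
POSITION = {  # vowel -> face position (1-4), placeholder values
    "a": 1, "e": 2,
}

def cue(consonant: str, vowel: str) -> tuple:
    """Return the (handshape, position) pair cueing a consonant-vowel syllable."""
    return (HANDSHAPE_GROUP[consonant], POSITION[vowel])

# "ba" and "pa" are ambiguous to a lipreader but get different handshapes:
print(cue("b", "a"))  # (4, 1)
print(cue("p", "a"))  # (1, 1)
```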
There are two basic types of sign language:
- SEE, which stands for Signed Exact English, and
- ASL, or American Sign Language. SEE is an artificial language that follows the grammatical structure of English. ASL is a language that follows its own grammatical rules. It is often taught as the child's first language. English may then be taught as a second language.
In this communication system, methods are combined. Children learn a form of sign communication. They also use finger spelling, speechreading, speaking, and either hearing aids or cochlear implants.
Confused? Overwhelmed? Wondering how in the world you're supposed to decide which approach to use with your child? Well, that's normal! There's a lot to know about each of these methods. To learn more, take a look at the publications and Web sites we've listed in the section, "Find Out More." Read, ponder and talk with other parents, your child's audiologist and other hearing health-care and education professionals.
Paleontologists recently discovered rodent teeth in Peru dating back over 41 million years, making them the oldest evidence of rodents in the Americas. But despite its location, this rodent was far more closely related to today's African, not American, rodents.
One of the weirder parts of our evolutionary history is how various animals managed to migrate from one landmass to another. The Americas in particular are home to lots of animals that evolved elsewhere and then somehow managed to make their way across the ocean, with our distant cousins the New World monkeys being perhaps the most dramatic example. Now we can add to that list the rodent you see up top, which has been analyzed by Darin Croft and his fellow researchers at Case Western Reserve University.
Dr. Croft explains what they found when they examined the teeth, which were so tiny that they could only be studied using high-powered microscopes:
"As palaeontologists, we're interested in how animals are related to each other, and we do what are called 'phylogenetic analyses.' We did those analyses for our animals and they are very close in the evolutionary tree to African rodents, which suggests that that's where their ancestors came from - from Africa.
The teeth are confidently dated using sediment samples to 41 million years old, making them ten million years older than the next oldest example of rodents in the Americas.
He also comments on the remarkable journey these rodents must have made to get from Africa to America - and why, when you consider the tremendous timespan involved, it isn't as preposterous as it sounds:
"They could have got there on some raft of vegetation. That maybe sounds like a fantastic tale, but in fact we do see things like this happening today. You can get big logjams of vegetation that get pushed out of rivers during storms, and often you will see mammals on them. The odds of them making this crossing are obviously very low, but after millions and millions of years the odds of some animals making it go up considerably. And if we go back to the middle of the Eocene when we think rodents might have made this crossing, the two continents of South America and Africa were actually closer together than they are today - about half the distance." | <urn:uuid:a180f18c-b62a-4d5d-b4b4-fef313c30ce8> | {
"date": "2015-02-02T00:18:46",
"dump": "CC-MAIN-2015-06",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122122092.80/warc/CC-MAIN-20150124175522-00201-ip-10-180-212-252.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9756194353103638,
"score": 3.8125,
"token_count": 457,
"url": "http://io9.com/5850577/the-ancestor-of-all-american-rodents-really-came-from-africa?tag=fossil"
} |
Aggressive driving refers to deliberately careless operation of an automobile: behavior on the road that displays aggression toward other drivers. Such behavior increases the risk of accidents, road mishaps, and other dangerous situations on the road.

Aggressive driving takes the form of speeding, unwarranted lane changes, improper overtaking and, worst of all, tailgating. Road rage grows out of this improper driver behavior, and aggressive driving has been cited as the cause of serious road incidents, including deaths prosecuted as murder or manslaughter resulting from reckless driving.

Over the years, the problem of aggressive driving has increased dramatically. The first cited reason is the rapid growth in the number of cars on the road. While the number of cars keeps increasing, road capacity has changed little in recent years, which is why traffic congestion occurs in every corner of the United States and elsewhere in the world. As congestion builds, some drivers respond by driving faster, relying on speed to escape the congested stretches of road.

In the same mindset, a driver stuck in congestion may rely on speed to meet the demands of a busy schedule. A person may drive aggressively simply to save time, since no one wants to waste a minute in traffic. Congestion also brings noise, higher temperatures and overcrowding, all of which can worsen a driver's behavior and attitude.

It must also be noted that people are naturally territorial: there is always a desire to keep an area for oneself, and this is a natural reaction. In fact, one driver's aggression can ignite an aggressive response from other drivers, so aggressive driving creates a domino effect on the road, resulting in what is termed road rage.
To help you react safely to aggressive drivers, the following tips will keep you safe on the road:
- Do not lose your composure. Remain calm no matter how other drivers behave, and do not let yourself be drawn into road rage.
- Keep a good distance from aggressive drivers. Staying away from them keeps you from becoming a victim of an accident.
- Always remember that the road is shared with other drivers. Do not change lanes unless you have a clear view of all angles.
- As much as you can, do not react to the provocations of an aggressive driver. Even if you cannot change lanes because the aggressive driver is blocking the way, just stay in your lane. Do not look for ways to get even; it will only result in accidents.
Digital image correction and enhancement (Digital ICE) is a set of technologies related to producing an altered image in a variety of frequency spectra. The objective of these technologies is to render an image more usable by Fourier or other filtering techniques. These technologies were most actively advanced in the 1960s and early 1970s in the fields of strategic reconnaissance and medical electronics.
The term "Digital ICE" initially applied specifically to a proprietary technology developed by Kodak's Austin Development Center, formerly Applied Science Fiction, that automatically removes surface defects, such as dust and scratches, from scanned images.
The ICE technology works from within the scanner, so unlike software-only solutions it does not alter any underlying details of the image. Subsequent to the original "Digital ICE" technology, which used infrared cleaning, additional image enhancement technologies were marketed by Applied Science Fiction and Kodak under similar and related names, often as part of a suite of compatible technologies. The ICE technology uses a scanner with a pair of light sources, a normal RGB lamp and an infrared (IR) lamp, and scans twice, once with each lamp. The IR pass reveals the locations of dust and scratches, and inpainting is then applied to the RGB image based on this data. The general concept is to subtract the positions of scratches and dust detected in the IR pass from the RGB image.
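The two-pass idea can be sketched in code. This is an illustrative toy, not Kodak's proprietary algorithm: the threshold value, the simple mask test, and the neighbour-mean fill are all assumptions chosen only to show the general shape of using an IR-derived defect map to correct the RGB scan.

```python
import numpy as np

def infrared_clean(rgb, ir, threshold=0.6):
    """Toy infrared cleaning: dust blocks IR light, so dark pixels in the IR
    pass mark defects; defect pixels in the RGB scan are then filled from
    their clean neighbours (a crude stand-in for real inpainting)."""
    mask = ir < threshold                    # defect map from the IR pass
    cleaned = rgb.copy()
    h, w = ir.shape
    for y, x in zip(*np.nonzero(mask)):
        y0, y1 = max(0, y - 1), min(h, y + 2)
        x0, x1 = max(0, x - 1), min(w, x + 2)
        clean = ~mask[y0:y1, x0:x1]          # neighbours not flagged as dust
        if clean.any():
            cleaned[y, x] = rgb[y0:y1, x0:x1][clean].mean(axis=0)
    return cleaned
```

A real implementation would use a proper inpainting method rather than a neighbour mean, but the structure, an IR-derived defect mask driving a fill pass over the RGB data, is the same.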
Limitations of Digital ICE
Digital ICE detects scratches and dust during transparent film scans and is not applicable to opaque document scanning. While chromogenic black-and-white films are supported by Digital ICE, other black-and-white films containing metallic silver, which forms from silver halides during development, are not. The reason is that long-wave infrared light passes through the slide but not through dust particles; the silver particles reflect infrared light much as dust particles do, and thus respond the same way in visible light and infrared. A similar phenomenon prevents Kodak Kodachrome slides from being scanned with Digital ICE: Kodachrome's cyan layer absorbs infrared.
Kodak's own "pro-lab" scanner, the HR500 Plus, was equipped with a version of Digital ICE that could scan Kodachrome slides effectively; however, this scanner was discontinued in 2005. Nikon produced the Super Coolscan 9000 ED scanner with a new version of ICE (Digital ICE Professional) from 2004 until it was discontinued in 2010. It was capable of scanning Kodachrome slides reliably, dust- and scratch-free, without additional software. LaserSoft Imaging released an infrared dust and scratch removal tool (iSRD - infrared Smart Removal of Defects) in 2008 that allows Nikon's film scanners on Mac OS X and Microsoft Windows, as well as many scanners from other manufacturers, to make high-quality scans of Kodachrome slides. Fujifilm's system for dust and scratch removal, called "Image Intelligence", works on a similar principle to Digital ICE and will also work on Kodachrome film.
Every year around St Patrick’s Day, cabbage becomes a hit again thanks to the traditional meal of corned beef and cabbage. There is typically a 50 to 75% increase in demand for green cabbage beginning about two weeks before the March 17 holiday. California leads the nation in cabbage production accounting for about 24% of total U.S. production. Most of the cabbage from California at this time of the year comes from the state’s southern coast and southwestern desert. In 2010, a new pest of cabbage, the Bagrada bug, made its grand entrance into these desert cabbage fields and threatened the availability of cabbage for St Patrick’s Day.
“2010 was a year that many winter cole crop vegetable growers in the Desert Southwest would rather forget, thanks to the bagrada bug which attacked plant seedlings en masse.
Since then, research conducted at the University of Arizona and the University of California has led to a better understanding of the pest, its biology, and has helped reduce yield and income losses for growers.
When the bagrada bug made its 2010 grand entrance, winter vegetable growers, pest control advisers, and entomologists were stunned.
“The pest caught us blind. Suddenly the bagrada bug was everywhere in the desert,” says John Palumbo, University of Arizona (UA) Extension specialist and entomologist based at the Yuma Agricultural Center.
The pest attacks the underside of leaves during the day, and hides at night in the soil and under dirt clods.
The bagrada bug can quickly destroy a seedling. In Palumbo’s trials, a single insect placed on a cotyledon killed the plant in about 60 hours under laboratory conditions.
In another lab test, small pots were lined up in a row, each containing one of 12 different vegetable seedlings. The bagrada passed right by the head lettuce to feast on cole crops. Its feeding favorites include green cabbage, red cabbage, and radish.
If the plant lives, the damaged plant develops multiple unmarketable small heads instead of a single large marketable head or floret.
First found in South Africa, the insect arrived in the western hemisphere in the U.S. in 2008 in California; possibly as a stow-a-way on a cargo ship arriving at the Port of Long Beach. The insect then scurried into neighboring Orange County and kept moving.
Palumbo has conducted several trials with synthetic insecticides and natural predators. While he said bio-control is a ways off, pyrethroid insecticides currently provide the most effective control.
“Newer pyrethroids on the market appear to be more consistent with good knockdown and residual control.”
Residual activity usually lasts about five days.
Looking to the future, Palumbo says the best insecticidal control of bagrada may lie in neonicotinoid seed treatments, based on trial findings.”
Author: Blake, C.
Title: Researchers making strides against bagrada bug
Source: Western Farm Press. 2013-11-20. Available at: http://westernfarmpress.com/vegetables/researchers-making-strides-against-bagrada-bug
WASHINGTON—Americans nationwide still have a quiver full of queries for experts about climate change.
But the content of their questions — and the sources they are likely to trust with answers — vary depending on their level of concern and engagement with the issue.
That's one of the latest conclusions drawn from an ongoing and wide-ranging study that has tracked how each of the "Six Americas" interprets the threats of global warming since the last presidential election. Researchers at Yale and George Mason universities first identified those half dozen separate audiences after their initial autumn 2008 survey.
Results from the latest questionnaire conducted in the spring, the fourth in a series, were released in late June. They indicate that most Americans want those in the know to explain how they can be sure human activities, rather than natural changes in the environment, are altering the climate.
Drilling down deeper, the questions become more nuanced depending on a respondent's "Six Americas" ranking.
For instance, the 39 percent in the "alarmed" and "concerned" categories want to ask what nations could do to curb heat-trapping gases and if there's still time to act. The 50 percent in the "cautious," "doubtful" and "dismissive" sphere want to hear how global warming is caused by human activities. And the remaining 10 percent in the "disengaged" grouping want to learn what harm global warming will cause if it is actually happening.
Researchers categorized questionnaire respondents by their levels of belief and concern about global warming, with "alarmed" at one end of the scale and "dismissive" at the other. Here's how the latest survey sorts the 981 adults surveyed between April 23 and May 12: alarmed: 12 percent; concerned: 27 percent; cautious: 25 percent; disengaged: 10 percent; doubtful: 15 percent; and dismissive: 10 percent.
"What we're finding out is that there are very different conversations taking place on this issue," Anthony Leiserowitz, director of the Yale Project on Climate Change Communication, tells SolveClimate News in an interview.
"It's sort of like throwing darts in a dark room. Unfortunately, unless you understand that people are coming in from different perspectives and starting points, you might hit the target occasionally but you'll probably miss. And there’s a good chance you'll do collateral damage."
The latest iteration of the survey, Global Warming's Six Americas, is a joint project of the Yale program headed by Leiserowitz and the Center for Climate Change Communication at George Mason University in Virginia.
Funding is provided by the Surdna Foundation, the 11th Hour Project and the Grantham Foundation for the Protection of the Environment.
Its 57 pages are chockablock with figures and 30 tables detailing how the topic of climate change resonates — or doesn't — across America. In an interview with SolveClimate News, Leiserowitz discusses two key pieces of people's perceptions covered in the study. One part examines resources they count on for credible information about global warming. The other looks at how receptive people are to climate and energy policies that hit close to home.
"What we're interested in finding out is why some people get engaged in these issues and why others dismiss them outright," Leiserowitz says. "We want to understand how the public understands or misunderstands the causes, consequences and potential solutions to climate change."
The Trust Factor
Oddly enough, much of the broad information many Americans absorb about climate change is disseminated by the two sources they trust the least — the mainstream news media and their own congressional representatives. Mainstream media and federal legislators finish at the bottom of the barrel — ninth and tenth — just below television weather reporters, among the list of 10 choices the Yale/George Mason survey presented to respondents.
At the other end of scale, respondents offer more stellar marks to government agencies as trustworthy sources of climate change data. For instance, three-quarters of them have high regard for the National Oceanic and Atmospheric Administration as well as scientists overall.
Not surprisingly, those figures drop to 25 percent and 30 percent, respectively, among the "dismissive" audience.
A majority of those surveyed also had kudos for climate change information dispensed by the Environmental Protection Agency, the Centers for Disease Control and Prevention, the National Park Service and the Department of Energy.
Perhaps expectedly, trust in what President Obama espouses about global warming was highly polarized. Survey results reveal that 77 percent of the "alarmed" say they trust him, compared to 21 percent among the "doubtful" and 3 percent of the "dismissive."
"As the glue of society, we know trust is crucial," Leiserowitz notes, adding that on a daily basis Americans are confronted with daunting perils they know nothing about such as climate change, lead in children's toys and salmonella in fresh produce. "We don’t have the time or energy to do an examination to reach our own informed decisions among an ever-more complex landscape of hazards.
"So we look for guides to help us through this dangerous landscape. We take our cues from key trusted individuals and organizations. And different groups tend to trust different messengers."
Indeed, the 10 percent of the respondents that Leiserowitz's team classifies as "dismissive," tend to be unreachable because a fair share of them are conspiracy theorists who distrust any source of climate change data.
But Leiserowitz seems confident that the urgency of the risks of global warming can resonate with the other five categories of Americans — the "alarmed," "concerned," "cautious," "disengaged" and "doubtful" — if the right people can craft appropriate and credible messages.
As former Speaker of the House Tip O'Neill once famously pontificated, "All politics is local." Leiserowitz is convinced those four words can be a communications beacon on the climate change front.
Think Globally, Act Locally
"This is where the rubber hits the road," Leiserowitz says about how critical it is for agencies at the federal, state and municipal level to engage local constituencies. "You can't talk about preparing for climate change in Seattle the same way you would in Phoenix."
As well, the threats of global warming are less likely to sink in with people when the points of reference are distant or unfamiliar geographies instead of their own backyards.
Thus, it's not too shocking that on average, half or more of the respondents in four of the survey groups — the "alarmed," "concerned," "cautious" and "disengaged" — expressed support for safeguarding the public's health in their own communities, as well as the water supplies, agriculture, forests, wildlife, coastlines, sewer systems and public property.
"The vast majority of American are basically local critters," he says. "And who doesn't want to protect their own water and other resources?"
Well, some naysayers do exist. Survey results reveal that very few of those in "doubtful" and "dismissive" categories favored local action to secure those assets because they don't perceive global warming as a danger.
Though it's a slow and laborious process, Leiserowitz is optimistic that most of the American public is reachable and educable about the realities of climate change.
"If there's a window of opportunity for success, it's at the local level where these issues have not become hyperpolarized," Leiserowitz says, adding that people are angry and frustrated with the political structure on the national scene. "Locally is where they are receptive to mitigation and preparing for adaptation."
He lauds a network of federal, state and municipal specialists with expertise in forestry, water, public health, agriculture and public safety who are reaching people closer to their own neighborhoods — despite the lack of a national policy.
"It's a slow process and there's no national television program you can have on it," he stresses. "But it's a conversation that's going on right now. We're talking about changing the knowledge, attitudes and ultimately the behavior of 300 million people. That doesn't happen overnight."
Not Always Doubtful and Dismissive
The Yale/George Mason research indicates that the "doubtful" and "dismissive" survey respondents don't always live up to their pessimistic labels. They have surprisingly high support for a number of the 12 community climate and energy policies that researchers asked them to rank.
For example, three-quarters of those surveyed gave a nod to construction of more bike lanes and bike paths and bumping up the availability of public transportation. This support extends across all six audience groups, with 60 percent of the "dismissive" in favor, compared to 90 percent in the "alarmed" category.
Majorities of the "alarmed," "concerned," "cautious" and "disengaged" found merit in ideas such as requiring new homes to be energy efficient; upgrading zoning codes so mixed-use neighborhoods encourage walking instead of driving, reduce urban sprawl and cut commuting times; and promoting energy-efficient apartment buildings instead of less-efficient, single-family homes.
Interestingly and perhaps fittingly, 57 percent of those in the "dismissive" category gravitate toward the idea of building a nuclear power plant locally. Majorities of the other five groups oppose that idea.
Out of the Box
The narrative about global warming is now stuck in three boxes that the country has to extricate itself from to make headway on climate change with audiences beyond those who are already "alarmed" or "concerned," Leiserowitz says.
Box one contains the false debate over whether there's a consensus on the science of climate change; box two frames climate change as solely an environmental threat that starves polar bears and overwhelms island nations; and box three is full of divisive party politics that become only more divisive when a polarizing figure such as former Vice President Al Gore opens his mouth.
"The issue has been so narrowly framed that the vast majority of Americans don't see why it matters," he says. "They see it as a distant problem in time and space."
Those gaps can close, Leiserowitz maintains, if newer messengers at the climate table hoist a larger legitimate megaphone. They include the medical community, the Pentagon and others tasked with national security, businesses and laborers who want the country to gain a competitive edge with clean technology, and religious leaders who see acting on climate change as a moral responsibility.
"Credibility really matters," he emphasizes, adding that "doctors in white coats with stethoscopes around their necks" and "military leaders with shiny medals on their chest," not politicians, should be educating Americans.
"People want to hear directly from the experts."
For example, he continued, it's incumbent on the World Health Organization, the Centers for Disease Control and Prevention, Leiserowitz and other public health officials to draw direct lines between the impacts of climate change with infectious diseases, respiratory health, malnutrition and food and water supplies.
"When people learn about these connections, they see global warming isn't about polar bears, it's about people," Leiserowitz says. "And that's when they realize, 'Oh, now that actually matters to me.'
"This is an issue that is so big and so fundamental," he continues. "It's about the energy systems that are our lifeblood. The stereotype is that this only matters to the long-haired hippies wearing Birkenstocks. But everybody has a stake."
Advocates for climate change action often fall into two camps, Leiserowitz says. One side argues that national and international policies are the only solutions, while another contingent rallies for on-the-ground action.
With such an all-encompassing topic, he says, such an either/or proposition is unrealistic. Instead, both top-down and bottom-up approaches are necessary, and it's imperative to plug away from both directions.
"That's the genius of democracy," Leiserowitz says. "It allows this laboratory of innovation and experimentation to take place." | <urn:uuid:4803477c-7caf-4f51-aba5-373c9bded1d6> | {
"date": "2014-08-20T12:42:37",
"dump": "CC-MAIN-2014-35",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500808153.1/warc/CC-MAIN-20140820021328-00124-ip-10-180-136-8.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9554364681243896,
"score": 2.671875,
"token_count": 2522,
"url": "http://insideclimatenews.org/news/20110705/yale-george-mason-poll-six-americas-scientists-not-politicians-climate-change-science"
} |
ARTICLE I
COST, PRODUCTIVITY, PROFIT AND EFFICIENCY: AN EMPIRICAL STUDY CONDUCTED THROUGH THE MANAGEMENT ACCOUNTING
BACKGROUND OF STUDY
This study tests the connection that exists among cost, productivity, profit and efficiency through management accounting (Bebeșelea, 2015). The researcher attempts to confirm this connection using the calculation method called Direct-Costing.
Because of competitive concerns in related fields such as marketing and management, cost calculation has become vital, and traditional methods cope poorly with the numerous costs involved in production, such as those of advanced manufacturing technologies. Considering these limitations of the traditional approach, this study analyzes costs using the Direct-Costing calculation.
In conducting the study, the researcher tests the hypothesis of a connection among cost, productivity, profit and efficiency through management accounting. The purpose of the research covers:
- the instrumental research, conducted with the help of the indicators calculated according to the statistical and mathematical model;
- the descriptive research, which aims at the description and evaluation of the cost, profit, productivity and efficiency indicators;
- the explanatory research, which studies the causes that explain the evolution in time and space of the cost, productivity, profit and efficiency indicators;
- the conclusive or confirmation research (Cătoiu, 2002), which aims to test the researcher's hypothesis.
To obtain findings for testing the hypothesis, the researcher uses qualitative research combined with quantitative elements, observing the rules and principles characteristic of a mixed research methodology. The study analyzes the terms involved in order to reach accurate findings on the correlation among cost, profit and efficiency, using management accounting as a tool to evaluate the cost, profit and efficiency indicators and their evolution in time.
The correlation is expressed through Direct-Costing indicators such as the break-even threshold (the point of equilibrium), the coverage factor (the contribution margin ratio), the safety factor and the coefficient of dynamic safety.
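Under standard Direct-Costing definitions these indicators reduce to a few formulas: the break-even point is fixed costs divided by the unit contribution margin, the coverage factor is the contribution margin as a share of sales, and the safety factor is the share of sales above break-even. A minimal sketch follows; the function and variable names, and the single-product constant-cost assumptions, are mine rather than the article's.

```python
def direct_costing_indicators(price, unit_variable_cost, fixed_costs, units_sold):
    """Direct-Costing indicators for one product with constant unit economics."""
    sales = price * units_sold
    contribution_margin = (price - unit_variable_cost) * units_sold
    break_even_units = fixed_costs / (price - unit_variable_cost)  # point of equilibrium
    break_even_sales = break_even_units * price
    return {
        "coverage_factor": contribution_margin / sales,        # contribution margin ratio
        "break_even_units": break_even_units,
        "safety_factor": (sales - break_even_sales) / sales,   # dynamic-safety share
        "profit": contribution_margin - fixed_costs,
    }
```

For example, at a price of 10, variable cost of 6, fixed costs of 2,000 and 1,000 units sold, break-even is 500 units, the coverage factor is 0.4, the safety factor is 0.5 and profit is 2,000; raising the price lifts all three indicators together, consistent with the effect the study reports.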
RESULTS OF THE STUDY
The study shows significant increases in the coverage factor, the coefficient of dynamic safety and profit when a controlled percentage growth of the selling price is applied. Since Direct-Costing treats only the variable cost as the unit cost, any growth in production volume or reduction of variable costs leads to maintaining or losing the balance among the factors discussed in this study.
This study concludes that the Direct-Costing method can serve as a basis for efficient decision-making, determine the level of sales needed to reach targeted benefits, and provide information on variable costs to show their effect on profit or loss. Based on the findings, the researcher has confirmed the hypothesis that a connection among cost, productivity, profit and efficiency exists, as shown through the management accounting test.
THE EFFECT OF INFORMATION LITERACY ON MANAGERIAL PERFORMANCE: THE MEDIATING ROLE OF STRATEGIC MANAGEMENT ACCOUNTING AND THE MODERATING ROLE OF SELF-EFFICACY
BACKGROUND OF STUDY
In this study, the researcher examines the mediating effect of Strategic Management Accounting (SMA) information usage on the relationship between information literacy and managerial performance, and the moderating effect of self-efficacy on the relationship between strategic management accounting and managerial performance (Zenita, Sari, Anugerah, & Said, 2015). In today's challenging environment, managers are expected to make effective and efficient decisions that improve managerial performance. To carry out this function, managers need information for decision-making and for developing and monitoring business activities and strategy, comprising analysis of managerial and financial information on product markets, cost structures and business performance evaluation.
Another aim is to examine the mediating effect of strategic management accounting on the relationship between information literacy and managerial performance, and lastly to examine the moderating effect of self-efficacy on the relationship between strategic management accounting and managerial performance.
The hypotheses proposed for this study are as follows:
H1 : Information literacy affects the usage of strategic management accounting information.
H2 : Strategic management accounting affects managerial performance.
H3 : Strategic management accounting mediates the relationship between information literacy and managerial performance.
H4 : Self-efficacy strengthens the relationship between strategic management accounting and managerial performance.
This study uses questionnaires for sampling and data collection; responses were collected from managerial-rank officers in Pekan Baru, Indonesia.
RESULTS OF THE STUDY
Results were derived from the respondents' demographic details, descriptive statistics of the variables, hypothesis testing, and data analysis. The analysis shows that the respondents possess a high level of information literacy and that all variables tested are valid and reliable. These findings underpin the results of the hypothesis tests for H1, H2, and H4.
Hypothesis testing was done using regression analysis: H1 and H2 were tested with simple linear regression models, and H4 was tested with regression analysis. H3 was tested using the four steps outlined by Frazier, Barron, and Tix (2004). The findings show that all hypotheses are accepted.
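As a rough illustration of the regression-based testing described above, the sketch below runs a classic four-step mediation check on simulated data. The variable names (literacy, sma_usage, performance) and all coefficients are hypothetical, not the study's dataset:

```python
# Illustrative four-step mediation check on simulated data -- not the
# study's data or its exact procedure.
import numpy as np

rng = np.random.default_rng(0)
n = 500
literacy = rng.normal(size=n)                       # predictor (X)
sma_usage = 0.6 * literacy + rng.normal(size=n)     # mediator (M)
performance = 0.5 * sma_usage + 0.2 * literacy + rng.normal(size=n)  # outcome (Y)

def slopes(y, *xs):
    """OLS slopes of y on the given regressors (intercept included)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

c = slopes(performance, literacy)[0]       # step 1: X -> Y (total effect)
a = slopes(sma_usage, literacy)[0]         # step 2: X -> M
b, c_prime = slopes(performance, sma_usage, literacy)  # steps 3-4: M -> Y, controlling X
# Partial mediation: the direct effect c' shrinks relative to the total effect c
# but remains nonzero once the mediator is controlled for.
print(round(c, 2), round(c_prime, 2))
```

A moderation test like H4 would instead add an interaction term (mediator × moderator) to the regression and check its slope; the same least-squares helper applies.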
From the study, the researcher concluded that managers who have a high level of information literacy and self-efficacy and who utilize strategic management accounting information in the decision-making process will show enhanced managerial performance. Nevertheless, the study has limitations: it is confined to a specific profession, and the questionnaire may be biased in measuring managerial performance. The researcher suggested that future research broaden the scope, for example by adding more variables related to psychological factors that may influence managerial performance.
A bat infected with rabies was recently found in the area of the 300 block of Rugby Avenue in Berkeley. Although the bat was removed without incident, Berkeley residents are being advised to be extra vigilant as, unless it is treated promptly, rabies is a fatal disease in humans.
“The general advice is if you see a sick or dead bat, leave them alone,” said Fish and Game Warden Patrick Foy.
People are cautioned to avoid skunks and bats and not to handle dead wild animals. Children should be educated about the dangers of wild animals and warned not to touch any animal they do not know. Any nocturnal animal seen during daylight hours, such as a skunk, bat, or grey fox, should be considered dangerous.
In Berkeley, bats and skunks are the most likely animals to be infected, although un-immunized dogs, foxes, coyotes, badgers, weasels, raccoons and unvaccinated cats can also carry the rabies virus. Rodents (gophers, mice, hamsters, squirrels, rats, opossums, guinea pigs) and rabbits are considered very low risk for rabies. Alameda County has been a Rabies Area since 1958.