What Is Ankylosing Spondylitis?

Ankylosing spondylitis (AS) is a chronic, progressive, and debilitating inflammatory disease and a form of arthritis. It usually develops in young people; the first signs may appear during the late teenage years or in a person's twenties. Men are up to three times more likely to develop AS than women.

The first signs of ankylosing spondylitis often begin in the lower back, in the region of the sacroiliac joints. The condition arises at the junctions where tendons and ligaments attach to bones. These junctions are known as the entheses. Inflammation (called enthesitis) occurs within the entheses and, as ankylosing spondylitis advances, is followed by degradation of the bone. In response to this degradation, new bone is formed and, over time, the new bone growth can cause the spinal vertebrae to fuse together. The patient experiences chronic pain, and flexibility of the spine can be greatly impaired.

Causes of Ankylosing Spondylitis

The exact cause of ankylosing spondylitis is not known, but it is believed to be linked to a gene called HLA-B27. Nine out of ten patients with AS carry the HLA-B27 gene. However, many people in the general population also carry this gene and never go on to develop ankylosing spondylitis. The HLA-B27 gene can be inherited, so having a close family member with AS raises the chance of developing it. When a patient presents with symptoms of ankylosing spondylitis, they may be tested for the presence of HLA-B27. It is also thought that AS can be triggered by environmental factors, although these factors have not yet been identified.

Symptoms of Ankylosing Spondylitis

The symptoms of ankylosing spondylitis usually develop slowly, over the course of months or years. Over time, AS symptoms may appear and then disappear, and the condition can improve or deteriorate. Symptoms usually start in the spine, but other areas of the body can also become affected. The main symptoms of ankylosing spondylitis include:
- Back pain and stiffness. These are the major symptoms of AS. They are often worse at night and first thing in the morning, and the pain may be severe enough to wake the sufferer during the night. There is usually some improvement with movement and exercise.
- Arthritis. Arthritis usually presents as pain and inflammation in the back. The joints of the hips, shoulders, and knees can also be affected.
- Enthesitis. In enthesitis, the entheses, the sites where tendons and ligaments attach to bone, become inflamed. Enthesitis can occur at the top of the shin bone, behind the heel in the Achilles tendon, under the heel, and where the ribs join the breastbone. When the ribs are affected, there may be chest pain and reduced lung function.
- Fatigue. Fatigue may come and go. The sufferer may feel tired and low in energy during a flare-up of the condition.

Other Symptoms of Ankylosing Spondylitis

Although ankylosing spondylitis is primarily a condition of the spine, it can cause symptoms in other parts of the body too. These may include:
- Poor posture
- Loss of appetite and weight loss
- Mild fever
- Inflammation of the bowels (ulcerative colitis or Crohn's disease)
- Foot problems such as Achilles tendonitis or plantar fasciitis
- Uveitis (inflammation of the middle layer of the eye)
Because AS is such a complex condition, patients may also go on to develop other complications. These complications are rare in ankylosing spondylitis, but may include:
- Cardiovascular disease
- Cauda equina syndrome (damage to the nerves at the base of the spine)
- Organ damage due to a condition called amyloidosis

Treatment of Ankylosing Spondylitis

There is no cure for ankylosing spondylitis, but the symptoms can be managed with the help of physicians. Treatment initially involves a combination of exercise and pain management in the form of medication. A range of exercises may be recommended. Gentle stretching exercises can help to maintain flexibility and strength, and yoga, swimming, and good postural practices are all beneficial. Patients with ankylosing spondylitis should also be referred for physical therapy. As in all arthritic conditions, it is very important that people with AS keep as active as possible. Activity improves posture, strength, and spinal movement, and helps to prevent pain and stiffness. Patients whose rib joints are affected may benefit from breathing exercises to improve lung function. Gentle massage of the muscles can help to relieve pain and stiffness; however, the bones of the spine should not be massaged, as manipulation can cause injury.

Various medications are used to help relieve the pain caused by ankylosing spondylitis. Non-steroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen are often recommended, as they help to reduce both inflammation and pain. NSAIDs may not be suitable for all patients, because they can cause or aggravate stomach problems. In these cases, acetaminophen may be used instead or, if necessary, stronger painkillers such as codeine. Corticosteroids such as prednisolone may also be useful. These can be taken in tablet form or administered by injection directly into the affected joint. It is recommended that corticosteroid injections into the same joint be limited to three times a year to reduce the risk of adverse effects. In cases where the symptoms of AS cannot be managed with regular painkillers, anti-tumor necrosis factor (anti-TNF) medication can be used. This is a newer treatment for ankylosing spondylitis, and patients undergoing anti-TNF therapy need to be monitored closely because it suppresses the immune system.

In the majority of cases, people with AS do not need surgical intervention. However, where joints such as the hips or knees have become badly damaged, joint replacement may be considered to improve mobility. In extreme cases of ankylosing spondylitis where the spine has become severely deformed, surgery can be performed to correct and realign it.
The most common and widespread falcon in North America, as well as the smallest and most delicate. The typical falcon shape (a short neck, relatively small head, long and slender pointed wings, and a long tail) gives this bird a streamlined body built for fast flight. Females are slightly larger than males and, unlike in most birds of prey, the sexes have different plumages. Both have a rufous-red back and tail, double black stripes on white cheeks, and a gray head with a rufous crown patch. Wing color and pattern are the most noticeable differences: females have rufous, barred upper wings, while males have blue-gray wings with small black spots and a row of white circles on the darker trailing wing edge. Flight of this small falcon is light and buoyant, with rapid, shallow wingbeats and short glides. It is often seen in flight with the wingtips swept back, or hovering motionless in midair over prey. Head bobbing and flicking the tail up and down are two commonly observed behaviors when this bird is perched.
This in itself is enough to convince me to stick with visual note-taking, but as I have been digging deeper into more and more research to explain the incredible boost in student learning after using the doodle note strategy, I've come across more and more reasons that are probably behind this success for the kids.

The psychological research I have been exploring lately is called "Dual Coding Theory." It originated with Allan Paivio in the 1970s, and it explains how visual and linguistic information is processed in two different areas of the brain. In essence, as new input enters the brain, it's stored in short-term memory in two distinct categories. Graphic information, images, and other sensory input are processed in the VISUAL center, while auditory input, words, and text are processed in the LINGUISTIC center of the brain. This is a great way for our brains to take in both types of information, and the system works very well. However, in order to convert the new information into true learning, we need it to be saved and stored in long-term memory. To do this, we need referential connections between the two zones. We have to CONNECT the information in the visual area with the information in the linguistic area. When we are able to blend the text/auditory input together with the images, we boost the potential for retaining the information! This means that not only are the individual words and ideas committed to long-term memory more effectively, but the associations between them are retained as well. Our students can understand the big ideas and concepts AND remember the vocabulary and details more consistently. It's another huge reason that the student brain responds so well to a visual note-taking strategy!

A related theory, the "Picture Superiority Effect," is supported by studies showing that blending images with text offers a stronger learning experience than using text alone. It turns out that this boosts both the memory of the individual terms and ideas and the associations and connections between the concepts. This is why we use certain visual brain triggers in addition to text. For example, a stop sign has to instantly register an idea in our brains: STOP. So, in combination with the word (text input), we also always see the same shape (graphic input) as well as the color red (additional visual input). These blend together to send the right signal to our brains more effectively.

A good visual note-taking strategy incorporates what I like to call "visual memory triggers." These can be images that contain or represent an analogy that helps the student understand. They can also be graphics that blend text and pictures to stick in the students' brains. These are the types of input that really last in a student's long-term memory. For example, students remember the term "surface area" being written in the handle and bristles of a paintbrush and remember that it represents covering the outside of a shape (like painting). Check out more samples of visual triggers that can be incorporated into doodle notes here.

To learn more about doodle notes, the research behind them, and how to try this strategy to boost your own students' focus and retention, check out these links:
More about the doodle note strategy:
My video explaining Dual Coding Theory:
Dual Coding Theory vs. "Learning Styles": (Guess which is valid and which may be a myth!)
Be sure to sign up for my email list for additional ideas, updates, and resources. Then, check out these related posts:
It was hoped that data from VCAP would provide practical experience of using electron accelerators on later missions, particularly Spacelab-1. Plans were also afoot, in conjunction with the Italian Space Agency, to build a revolutionary 'tethered satellite', which would be trawled through the upper-atmospheric plasma on the end of a 20-km-long conducting cable. The first tethered satellite mission took place in 1992, several years later than planned, and it flew again on board Columbia in February 1996. Such tethers, researchers argued, could provide a steady supply of electrical power for future spacecraft. Several other OSS-1 experiments were also intended as forerunners of more advanced versions planned for later Spacelab missions. Two instruments - the US Naval Research Laboratory's Solar Ultraviolet and Spectral Irradiance Monitor (SUSIM) and Columbia University's Solar Flare X-ray Photometer (SFXP) - were devoted to observations of radiation emitted from the Sun, to better understand the processes responsible for these emissions and their impact on Earth. To support them, the flight plan called for Lousma and Fullerton to orient Columbia to aim her payload bay directly at the Sun for several protracted periods. In fact, positioning the Shuttle in a series of different attitudes also satisfied another item in a long list of tasks that needed to be completed before the vehicle could be declared operational. During their eight days in space, the astronauts oriented her in four 'inertial' attitudes to place different parts under maximum solar heating. Columbia spent 30 hours with her tail facing the Sun, 80 hours with her nose aimed at the Sun and 36 hours with her open payload bay facing the Sun. The men also performed several 'barbecue rolls' to passively thermal-condition the whole spacecraft. During the course of these tests, Lousma and Fullerton exposed the payload bay to its coldest-yet environment as Columbia's tail was pointed at the Sun. The temperatures in the bay were so low that 'outgassed' condensation formed on the aft flight deck windowpanes! When this had been done, the radiators were stowed and latched and the port-side door was closed. In general, the doors performed as advertised under intensely cold conditions, with the exception of a problem when a 'latched' indication was not received for one of the aft bulkhead latches. A spell of passive thermal conditioning quickly resolved this. The week aloft also enabled the two men to indulge in taking photographs of Earth. The mission had, according to oceanographer Bob Stevenson, given them an opportunity to photograph a virtually cloud-free China, and one of their shots almost got them into diplomatic hot water after landing. "Jack and Gordon were invited to China to speak to a huge audience [in] some auditorium," Stevenson said later, "and they showed this picture of [a] lake. It was such a beautiful picture that they had it enlarged and matted and framed and they signed it off to the Premier of China [as a gift]. When they got to this picture, [there was] silence. When [the talk] was over, there was [subdued] clapping and they didn't know what to think about this. So they turned to the [US] ambassador and said 'We want to give the picture to the Premier', and he grabs the picture, looks at it [and says] 'I think let's hold this for a while'. When they were leaving the stage, they said 'What's the problem?' [The ambassador replied] 'Well, see that built-up area [on the photo]?
That's a secret nuclear facility in China that they didn't know anybody even knew about!'" Lousma and Fullerton, it seemed, had inadvertently photographed the top-secret site while taking their Earth-observation photographs and, as Stevenson said later, "Jack wasn't sure [he] was going to come home [alive]!" After the flight, Stevenson and colleague Paul Scully-Power arranged with a Chinese friend to put some important-looking comments and signatures in Mandarin on a blown-up copy of the photograph and presented it to Lousma and Fullerton. The faked inscription read: "If you damn Yankees ever come over China again ..." Meanwhile, as the astronauts continued to put Columbia through her paces, each of the OSS-1 experiments gathered its own treasure trove of scientific and engineering data. In addition to the instruments already mentioned, the pallet carried the Space Shuttle Induced Atmosphere (SSIA), Thermal Canister Experiment (TCE), Contamination Monitor Package (CMP) and - a boon for Britain's space ambitions - the University of Kent's Microabrasion Foil Experiment (MFE). The latter marked the first experiment built by researchers outside the United States to fly on board the Shuttle. In effect, it was a square section of about 50 layers of tin foil. During the mission, it 'operated' in an entirely passive mode, measuring the numbers, chemical composition and density of tiny micrometeoroids in low-Earth orbit. Following Columbia's landing, the foil was removed from its place on top of the cube-shaped TCE, and laboratory analysis enabled scientists to determine not only the depths to which the micrometeoroids had penetrated it but, consequently, also their impact velocities. Heavier particles punched right through the foil and often left debris, while lighter icy ones left craters. The TCE, to which the foil experiment was attached, was built by NASA's Goddard Space Flight Center in Greenbelt, Maryland, and evaluated a novel method of protecting scientific instruments from extremes of heat and cold - from 200 Celsius down to minus 100 Celsius - in Earth orbit. It used a series of heat pipes which maintained several 'dummy' instruments at specific temperatures under various thermal loads and radiated waste heat into space. The canister actually performed better in orbit than it had done in ground tests and would later be used in the electronics module on the ASTRO-1 payload. It also provided useful data for an ambitious experiment slated for Spacelab-2, which sought to better comprehend the physical properties of a peculiar substance known as 'superfluid helium' - the coldest-known liquid - and demonstrate its viability as a cryogenic coolant for future high-energy astronomical instruments. The Spacelab-2 experiment would build on data gathered during STS-3 by evaluating the behaviour of this strange liquid and testing a prototype containment vessel for it. Within NASA, OSS-1 was known as the agency's Pathfinder mission. In many ways, several of its experiments would later find applications on 'operational' Shuttle missions and would fly late into the 1990s and beyond. Its last two pallet-mounted experiments (SSIA and CMP) assessed the impact of clouds and plumes of waste particles ejected from the spacecraft on scientific instruments.
The first measured the brightness of particles emitted from the Shuttle, while CMP consisted of two mirrors - coated with magnesium fluoride over aluminium, commonly used in ultraviolet detectors - whose sensitivity was very carefully determined before and after the mission. Scientific activity was also pursued inside Columbia's cabin, with several important experiments housed in middeck lockers. These were tended by Lousma and Fullerton throughout the mission. One of these experiments utilised a new, filing-cabinet-sized facility known as the Plant Growth Unit (PGU), which was so large that a middeck locker had to be removed in order to make room for it. The unit contained all the equipment necessary - growth lamps to provide 14 hours of artificial 'sunlight' each day, timers, temperature sensors, batteries, fans and a data-storage system - to grow almost a hundred plants in the weightlessness of space. One of the key objectives of the PGU experiments on STS-3 was to test whether 'lignification' was a response to gravity or a genetically determined process with little environmental influence. Lignin is a structural polymer, which allows plants to maintain a vertical posture despite the effects of gravity, and is thus highly important for the plant's ability to grow properly. The experiments tended by Lousma and Fullerton tried to find out if lignin was reduced in the microgravity environment and if this caused plants to lose strength and 'droop'. Earlier experiments on board Skylab and the Russian Salyut space stations throughout the 1970s had revealed that the strange conditions in Earth orbit did indeed cause root and shoot growth to become disorientated, as well as increasing their mortality rates. However, little was known about the physical changes within them. Understanding how plants behave and grow in the absence of gravity was - and, with President George W. Bush's new vision for trips to the Moon and Mars, still is - essential for long-duration missions, in which astronauts will need to grow their own foodstuffs. Chinese mung bean, oat and slash pine seedlings were chosen for STS-3 because all three could grow in closed chambers and under relatively low lighting conditions. Additionally, pine is a 'gymnosperm', which means that it is capable of synthesising large amounts of lignin, and it was believed that its growth was directly affected by gravity. Unlike the mung bean and oat seedlings, which were germinated only hours before Columbia's launch, the pine samples were germinated several days earlier. The seedlings were used in three experiments. One looked at whether lignification was influenced by gravity or determined genetically within the plant. Several of the mung beans did indeed experience orientation problems, although the oats appeared to suffer no ill effects either on Earth or in space. The flight seedlings were all much shorter in stature than the ground control samples, but overall their levels of lignin reduction were only a few percent more than those grown on Earth. As such, although the results did point towards a reduction of lignin in space-grown plants, the difference was deemed statistically insignificant. The second experiment used the mung beans and oats for chromosomal studies, revealing much fragmentation and breakage and confirming that their root cells had been affected by exposure to microgravity. A third experiment investigated how the organisation of the plants' gravity-sensing tissues, including the root cap, was affected by spaceflight.
Within hours of Columbia's landing, the seedlings were removed from the PGU, immersed in fixative, thin-sectioned and stained for light and electron microscopy.
The gingiva, or gums, are part of the soft tissue lining of the mouth. The gingiva surrounds the teeth and provides a seal around them. Compared with the soft tissue linings of the lips and cheeks, most of the gingiva is tightly bound to the underlying bone, which helps it resist the friction of food passing over it. Thus, when healthy, it presents an effective barrier to insults that would otherwise reach deeper tissue. Healthy gingiva has a stippled appearance and is usually coral pink in color, but may contain melanin pigmentation as well.

Studies have shown links between periodontal (gum) disease, heart disease, and other health conditions. Research further suggests that gum disease may be a more serious risk factor for heart disease than hypertension, smoking, cholesterol, gender, or age. Researchers' conclusions suggest that bacteria present in infected gums can break loose and move through the body in the bloodstream. Once the bacteria reach the arteries, they can irritate them in the same way that they irritate gum tissue, contributing to arterial plaque that builds up and can affect blood flow.

Dental plaque is a biofilm, usually pale yellow, that develops naturally on the teeth. Like any biofilm, dental plaque is formed by colonizing bacteria that attach themselves to the tooth's smooth surface. When sugars are introduced, the bacteria in plaque convert them into an acid that sits just above the gum line. Without regular oral care, the acid will start eating away at the teeth, producing cavities, and the plaque will cause gum disease. Plaque that is allowed to sit for a prolonged period of time can cause cavities, gingivitis, and other problems in the mouth. If it is left even longer, serious dental procedures may be required to restore a decaying smile.

Gingivitis is an early stage of gum disease. It develops as toxins, enzymes, and other plaque byproducts irritate the gums, making them tender, swollen, and likely to bleed easily. Gingivitis generally can be stopped with proper oral hygiene and minor treatment from your dentist; if this is achieved, the gums can return to a healthy state.

When the supporting bone tissue starts to deteriorate, the disease has progressed to a form called periodontitis. This happens when the byproducts of plaque attack the tissues that hold the teeth to the bone. The gums begin pulling away from the teeth, forming pockets that allow more plaque to collect below the gum line. When this occurs, the patient becomes more sensitive to hot and cold, and the roots of the teeth become more vulnerable to decay. With severe periodontitis, a great deal of gum and bone tissue is lost. Teeth lose more support as the disease continues to destroy the periodontal ligament and bone; they become loose and may even need to be extracted, causing difficulties with normal everyday chewing and biting. If advanced periodontal disease is left untreated, patients run the risk of other serious health problems.
Written by Ivana Katsarova.

Over one third of the European Union (EU) population – some 170 million citizens – are aged under 30, with half that number under the age of 15. Although education policies in the EU are essentially decided and implemented by the individual EU countries, the EU provides sound evidence and analysis to help national governments make informed policy decisions and drive reforms to improve educational outcomes and the employability of young people. For this purpose, in 2009, the EU set a series of common objectives to address the most pressing concerns in EU education systems by 2020.

In several areas, the EU scores well. In 2015, 39 % of the EU workforce held a higher education degree. Between 2005 and 2015, the percentage of early school leavers decreased by some 30 %, even though during 2016 progress towards meeting the EU target slowed; the rate currently stands at an average of 11 % – one percentage point away from achieving the target. However, the EU faces the major challenge of further upskilling its population and reducing under-achievement in basic skills. In specific terms, the results show that over 22 % of EU students have low achievement levels in mathematics, nearly 18 % in reading, and some 17 % in science. Moreover, by 2020, the EU aims for at least 15 % participation in learning among the population aged 25-64 years. Nevertheless, progress towards this target has been very limited. The EU average in adult learning stood at some 11 % in 2014 (against the 15 % target) and did not increase in 2015. Only urgent and substantive action will enable the EU to reach the benchmark.

On a more optimistic note, the Erasmus student mobility programme, which has allowed more than 9 million Europeans to study abroad, turns 30 in 2017. Widely recognised as one of the most successful EU programmes, Erasmus provides a concrete example of the positive impact of European integration.

Read the complete briefing on 'Creating opportunities: The EU and students'.
Listen to the podcast 'Creating opportunities: The EU and students'.
Elephants have long been known to be part of the Homo erectus diet. But the significance of this specific food source, in relation to both the survival of Homo erectus and the evolution of modern humans, has never been understood — until now. When Tel Aviv University researchers Dr. Ran Barkai, Miki Ben-Dor, and Prof. Avi Gopher of TAU's Department of Archaeology and Ancient Near Eastern Studies examined the published data describing animal bones associated with Homo erectus at the Acheulian site of Gesher Benot Ya'aqov in Israel, they found that elephant bones made up only two to three percent of the total. But these low numbers are misleading, they say. While the six-ton animal may have been represented by only a tiny percentage of bones at the site, it actually provided as much as 60 percent of animal-sourced calories. The elephant, a huge package of food that is easy to hunt, disappeared from the Middle East 400,000 years ago — an event that must have imposed considerable nutritional stress on Homo erectus. Working with Prof. Israel Hershkovitz of TAU's Sackler Faculty of Medicine, the researchers connected this evidence about diet with other cultural and anatomical clues and concluded that the new hominids recently discovered at Qesem Cave in Israel — who had to be more agile and knowledgeable to satisfy their dietary needs with smaller and faster prey — took over the Middle Eastern landscape and eventually replaced Homo erectus. The findings, reported in the journal PLoS One, suggest that the disappearance of elephants 400,000 years ago was the reason that modern humans first appeared in the Middle East. In Africa, elephants disappeared from archaeological sites and Homo sapiens emerged much later — only 200,000 years ago. Unlike other primates, humans' ability to extract energy from plant fiber and convert protein to energy is limited. So in the absence of fire for cooking, the Homo erectus diet could only consist of a finite amount of plant matter and protein and would have needed to be supplemented by animal fat. For this reason, elephants were the ultimate prize in hunting — slower than other prey and large enough to feed groups, the giant animals had an ideal fat-to-protein ratio that remained constant regardless of the season. In short, says Ben-Dor, they were the ideal food package for Homo erectus. When elephants began to die out, Homo erectus "needed to hunt many smaller, more evasive animals. Energy requirements increased, but with plant and protein intake limited, the source had to come from fat. He had to become calculated about hunting," Ben-Dor says, noting that this change is evident in the physical appearance of modern humans, who are lighter than Homo erectus and have larger brains. To confirm these findings, the researchers compared archaeological evidence from two sites in Israel: Gesher Benot Ya'aqov, dating back nearly 800,000 years and associated with Homo erectus; and Qesem Cave, dated 400,000 to 200,000 years ago. Gesher Benot Ya'aqov contains elephant bones, but at Qesem Cave, which is bereft of elephant bones, the researchers discovered signs of post-erectus hominins, with blades and sophisticated behaviors such as food sharing and the habitual use of fire.

Evolution in the Middle East

Modern humans evolved in Africa 200,000 years ago, says Dr. Barkai, and the ruling paradigm is that this was their first worldwide appearance.
Archaeological records tell us that elephants in Africa disappeared alongside the Acheulian culture with the emergence of modern humans there. Though elephants can be found in Africa today, few species survived, and no evidence of the animal appears in archaeological sites after 200,000 years ago. The similarity to the circumstances of the Middle East 400,000 years ago is no coincidence, claim the researchers. Not only do their findings on elephants and the Homo erectus diet give a long-awaited explanation for the evolution of modern humans, but they also call into question what scientists know about the "birthplace" of modern man. Evidence from the Qesem Cave corroborates this revolutionary timeline. Findings from the site, dated from as long as 400,000 years ago, clearly indicate the presence of new and innovative human behavior and a new human type. This sets the stage for a new understanding of the human story, says Prof. Gopher.
Presentation on theme: "Chapter 8 Section one Mr. Snyder American History."— Presentation transcript:

Chapter 8, Section One. Mr. Snyder, American History.

The BIG Idea: Thomas Jefferson's election began a new era in American government.

Main Ideas of Section One: The election of 1800 marked the first peaceful transition in power from one political party to another. President Jefferson's beliefs about the federal government were reflected in his policies. Marbury v. Madison increased the power of the judicial branch of government.

Main Idea One: The Election of 1800. Federalists John Adams and Charles C. Pinckney ran against Democratic-Republicans Thomas Jefferson and Aaron Burr.
John Adams and the Federalists: rule by the wealthy class; strong federal government; emphasis on manufacturing; loose interpretation of the Constitution; British alliance.
Thomas Jefferson and the Democratic-Republicans: rule by the people; strong state governments; emphasis on agriculture; strict interpretation of the Constitution; French alliance.
Problem! Jefferson and Burr tied, with 73 electoral votes each. Solution: The House broke the tie by selecting Jefferson to be president; Burr became vice president.
Happy 12th Birthday! The tie led to the passage of the Twelfth Amendment, which provided for a separate ballot for president and vice president in the next election.

Main Idea Two! The Democratic-Republican-controlled Congress helped put Jefferson's republican ideas into practice: it allowed the hated Alien and Sedition Acts to expire, lowered military spending, and got rid of domestic taxes. Jefferson believed the main functions of the federal government were protecting the nation from foreign threats, delivering mail, and collecting customs duties. He kept some Federalist ideas, like the Bank of the United States.
John Marshall (1755–1835): Federalist leader who served in the House of Representatives and as U.S. Secretary of State; he later became Chief Justice of the U.S. Supreme Court, establishing in Marbury v. Madison the Supreme Court's power of judicial review.

Main Idea Three! Marbury v. Madison. William Marbury was appointed justice of the peace by President Adams just before Adams left office. Marbury's commission was not delivered before Jefferson took office, so Marbury sued the Jefferson administration to get his commission. The Court ruled that, according to the Constitution, the Supreme Court did not hear cases like this one; thus the law Marbury based his claim on, the Judiciary Act of 1789, was unconstitutional.
Importance of Judicial Review: Chief Justice John Marshall wrote the Court's opinion in Marbury v. Madison. The ruling established judicial review, the Court's power to declare an act of Congress unconstitutional, and made the judicial branch equal to the other two branches of government.

Section Assessment. Section 1: Jefferson Becomes President.
1. Who challenged John Adams in the election of 1800? Charles C. Pinckney / Thomas Jefferson / Aaron Burr / all of the above
2. Who finally chose Thomas Jefferson as president after a tie vote? the electoral college / the House of Representatives / the Senate / voters, with a recount
3. Why was the Twelfth Amendment added to the Constitution? to guarantee freedom of speech / to eliminate the electoral college / to create a separate ballot for president and vice president / to authorize Congress to purchase Louisiana from the French
4. Which of the following was one of Jefferson's beliefs?
Government should be enlarged. / Government should be kept small. / The military should be expanded at all costs. / Taxes on wealthy citizens should be raised.
5. What important power was established by the Supreme Court in Marbury v. Madison? the line-item veto / term limits for Congress / separate ballots for president and vice president / judicial review
The internet is such a slowpoke. In principle, it should operate at nearly the speed of light, which is more than 670 million miles per hour. Instead, internet data moves 37 to 100 times slower than that. The technical term for this speed gap is “network latency,” the split-second delay in an internet connection as a signal travels from a computer to a server and back again. We can do better, says Gregory Laughlin, a professor of astronomy in Yale’s Faculty of Arts and Sciences. Laughlin says we can make the internet at least 10 times faster — perhaps 100 times faster — in the United States. Laughlin and colleagues P. Brighten Godfrey at the University of Illinois at Urbana-Champaign, Bruce Maggs at Duke, and Ankit Singla at ETH Zurich are co-leaders of an exploration into what is slowing the internet down — and what can be done to fix it. The project, funded by the National Science Foundation, is called Internet at the Speed of Light. The researchers say a couple of key factors are holding the internet back. For example, the network of underground, fiber optic cable routes the internet depends upon is highly chaotic. It zig-zags beneath highways and railroad tracks, detours around difficult terrain such as mountains, and typically sends a signal hundreds of miles in the wrong direction at some point during a transmission. Secondly, there’s the matter of the fiber optic cable itself, which is essentially glass. Internet data are pulses of light traveling through the cable; light moves significantly slower when it travels through glass. Laughlin and his colleagues say a network of microwave radio transmission towers across the United States would allow internet signals to travel in a straight line, through the air, and speed up the internet. Moreover, Laughlin says, this idea has already been successfully tested on a limited scale. For example, stock traders built a microwave network a decade ago between stock exchanges in Chicago and New Jersey in order to shave valuable microseconds off of high-frequency trading transactions. In their final findings, which they presented at the 19th USENIX Symposium on Networked Systems Design and Implementation in April, Laughlin and his colleagues discovered that microwave networks are reliably faster than fiber networks — even in inclement weather — and that the economic value of microwave networks would make them worth their expense to build. Laughlin spoke with Yale News recently about the project. How did you come to be a part of Internet at the Speed of Light? Gregory Laughlin: I was interested in the economic problem of where “price formation” in the U.S. financial markets occurs. This required the assembly and correlation of data from different markets, for instance the futures markets in the Chicago metro area and the stock markets in the New York metro area. When I began working on the problem [in 2008] it was clear that even when there was a strong motivation to cut latency down as much as possible between disparate locations, the physical telecommunications infrastructure still imposed limits that prevented signaling at speeds approaching the speed of light. Why did this project appeal to you? Laughlin: I like problems where physics, economics, and geography all intersect, and the problem of price formation is the perfect juxtaposition along those lines. How is this approach different from other examinations of internet infrastructure? 
Laughlin: A primary concern within studies of the physical structure of the internet is often bandwidth, where the concern is how much information per second one can transmit on a given line. Other work on latency has focused on ideas related to pre-positioning information, which is the idea behind content delivery networks. Our work takes the perspective of asking, “What would the solution look like if you wanted to speed up small-packet traffic as much as possible across the entire United States?” What surprised you the most as you looked at what was slowing down the internet? Laughlin: One thing, that’s very well known, but which never ceases to amaze me, is the enormous amount of information that can be carried on optical fibers. By transmitting light in different color bands simultaneously, single highly specialized multi-core glass fibers are now capable of carrying hundreds of terabits of data per second. My formative internet experiences occurred in the late 1980s and early 1990s, and so my current Yale office Wifi connection seems really fast. But it’s staggering to realize that a single fiber can now transmit data at a rate that exceeds my office connection by more than a factor of a million. It was thus surprising to realize that with the right hybrid infrastructure, the internet could be both extremely fast and capable of carrying staggering amounts of data. Yet because the internet has arisen in an organic way rather than a top-down pre-planned way, it turns out that there are all these curious pockets of slow performance. You and your colleagues have suggested that a national network of microwave radio transmission towers would make the internet faster. Why is this? Laughlin: Even though an overlay of microwave radio transmission towers would provide only a tiny, seemingly negligible increase in bandwidth for the U.S. internet, the overlay could handle an important fraction of the smallest, most latency-sensitive requests. This type of traffic is associated with procedures that establish a connection between two sites, and which involve a lot of back-and-forth transmissions that are a small number of bytes each. By speeding these up and taking the physically most direct routes, you can get a factor-of-10 to -100 increase for the traffic where it matters most. On the other hand, for applications like streaming video, where it’s possible to buffer the information, the microwave towers don’t need to be used. Fiber is the way to go if you have big blocks of data that need transferring. What would it take, in terms of cost and commitment, to create such a network? Laughlin: In our paper, we created a detailed model of a national microwave network that can transmit 100 gigabits per second between 120 U.S. cities at speeds that average just 5% slower than the speed of light [which provides the ultimate physical limit]. This network would involve roughly 3,000 microwave transmitting sites [that use existing towers], and we estimate that it would cost several hundred million dollars to construct. Does that price tag make it worth doing? Laughlin: We did a detailed cost analysis, and it seems very clear that a project of this type would provide an economic benefit. The applications run the gamut from things like telesurgery to e-commerce and gaming. How often do you think about this as you download a document or click on a website? Laughlin: Only when a site seems slow to load! What reactions have you had to the project’s findings? 
Laughlin: The team presented the findings at one of the leading conferences in the networking field, and the reaction was quite positive. Of course, it’s a big step from designing a network in theory and implementing it in practice. But we definitely feel that it’s something that would work and would be worth building.
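As a rough, back-of-the-envelope illustration of the speed gap the researchers describe, here is a minimal sketch in Python. It is not from the project itself; the city coordinates, the commonly cited two-thirds-of-c speed of light in glass fiber, and the 2x route-overhead factor are all assumptions chosen for illustration:

```python
import math

C_VACUUM_KM_S = 299_792.458          # speed of light in vacuum (km/s)
FIBER_FACTOR = 2 / 3                 # light in glass fiber travels at roughly 2/3 c
ROUTE_OVERHEAD = 2.0                 # assumed: real fiber paths zig-zag, ~2x longer

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth (haversine formula)."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# New York City and Chicago, approximate coordinates
dist = great_circle_km(40.71, -74.01, 41.88, -87.63)

ideal_ms = dist / C_VACUUM_KM_S * 1000                            # straight line at c
fiber_ms = dist * ROUTE_OVERHEAD / (C_VACUUM_KM_S * FIBER_FACTOR) * 1000

print(f"distance: {dist:.0f} km")
print(f"one-way at c (microwave-like path): {ideal_ms:.2f} ms")
print(f"one-way via fiber (assumed 2x route, 2/3 c): {fiber_ms:.2f} ms")
```

Even under these conservative assumptions, the fiber path comes out roughly three times slower than the physical limit; the 37-to-100-times gap quoted at the top of the article comes from stacking routing detours, protocol back-and-forth, and server delays on top of this raw propagation penalty.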
Have you ever wondered how films, animations, comic books, and plays in the theater are produced? They are carefully planned using a storyboard template before they go into production. Storyboards are images or illustrations arranged in chronological order to tell the story a film is all about. The same is true of an animation storyboard. Before the animated films you see on the big screen are produced, the people behind the scenes work hard to carefully craft the amazing story you witness. And that is all thanks to a storyboard.

How Do You Draw a Storyboard?

Need to draw a storyboard but don't know how? Well, then forget your worries, because we have prepared steps for drawing your very own storyboard. And yes, it will be your very own. Read on!
- Before starting on any storyboard, first finish the script you are making for it. A script is as important as a storyboard: while a storyboard is all about the visuals, a script is all about the story. It is like the skeleton of the video or film.
- Have a storyboard template ready. You can draw one or have it printed.
- Leave some space to write notes, dialogue, or additional scripts or scenes that you want to add.
- Just as in a comic book, start drawing the scenes where the story will occur, so that you can establish important scenes and objects in the very first box.
- Arrows can show movement in your storyboard, so use them in your sketches.
- Write down notes, dialogue, and a description of the scene under each box or in the space provided.
- Ask other people for advice about the storyboard you are making, or use storyboard samples as your reference.
- Finalize your work. Make it clean and presentable.

What Are the Uses of a Storyboard?
- A storyboard is used so that producers and directors can preview the whole story before they start filming.
- A storyboard allows filmmakers to spot potential errors in the film or the story. They can then make the changes needed to correct them.
- A storyboard is used to establish the frames in the story.
- The movements of the characters and the camera angles are decided with the help of the illustrations in the storyboard.
- By using a storyboard to tell the story, filmmakers can choose which medium is best for the film.
- The story's dialogue can be changed, added to, or improved by studying the storyboard.
- A storyboard makes it possible to determine which character positions show their maximum emotional content.
- A sample storyboard can be used as a reference for making your own storyboards or for getting ideas for a new story.
- A storyboard can be used to create the story line for a book or novel.
- It helps students execute a particular story for an academic presentation.
- A storyboard serves as a guide for the artists and graphic designers who will develop the design once it is transferred to a final outline or story.
- Another use of a storyboard is to put together story plots for presentations in different industries, such as education, sales, the arts, and marketing.
- In comic making, a storyboard is like a draft that comic artists make, which becomes the basis of the final output.

Storyboards consist of images and illustrations that are either drawn or gathered from various sources and arranged so that they tell a story. Aside from the images and illustrations, there are other elements included in a storyboard; they are listed and briefly discussed below.
- Header or footer: the title of the story, usually positioned on top as the header, with the page number as a footer.
- Template boxes: the boxes, arranged vertically or horizontally, where each image or illustration is placed. Their size varies depending on how the illustrator wants it, similar to the panels we see in a comic storyboard.
- Scenes: how the events in the story will look, such as what the location looks like and what actions occur there.
- Characters: not limited to the main characters; they include the supporting characters and anyone else needed in a scene.
- Notes or space for detailed explanation: where detailed descriptions of the scenes and other important notes are written.
- Slide or page number: helps identify which slide or page an image or scene belongs to.

Tips for Creating a Storyboard
- Use a good layout, or one that is appropriate for your storyboard. Arrange the scenes, characters, and dialogue bubbles so that they don't overlap. This gives your storyboard a neat look.
- Your storyboard should provide information that is valuable to both the filmmakers and the viewers.
- Determine your target audience. Use this as the basis for creating your story, presenting your characters, and deciding how the story progresses.
- Always keep your visuals simple. This helps your readers easily understand what is in the sketch or image.
- Write notes and descriptions clearly. This will guide your readers through what is happening in your storyboard.
- You don't need to be good at drawing. You can use stick figures in your sketches as long as they convey the story you want to tell.
- Make the lines of your template boxes thicker so that they stand out. This also gives the board a neat, framed look.
If you’re enthralled by the Large Hadron Collider, you’ll want to watch QUEST’s story on atom smashers. QUEST journeys back in time to find out how physicists on the UC Berkeley campus in the 1930s, and at the Stanford Linear Accelerator Center in Menlo Park in the 1970s, created so-called “atom smashers” that led to key discoveries about the tiny constituents of the atom – from the nucleus all the way down to the quarks. These homegrown particle accelerators paved the way for the Large Hadron Collider, so big that its 17-mile underground tunnel straddles the border between Switzerland and France. Our 12-minute television story starts with the building of the cyclotron, a particle accelerator that UC Berkeley physicist Ernest Lawrence conceived of in 1930. Its first iteration fit in the palm of his hand. It was a breakthrough because without requiring much energy, it could produce very energetic particles in a small space. This allowed physicists to readily investigate the atom’s nucleus by creating elements with large nuclei. The resulting new field of nuclear science has a complicated legacy, of course. It was used to build the atomic bomb, as well as to create the medical accelerators that are now commonly used to fight cancer. Subsequent versions of the cyclotron were so big that they were housed in their own buildings. For our TV story, we filmed at the 88-inch cyclotron at the Lawrence Berkeley National Laboratory. The Berkeley Lab, as it’s referred to, was the laboratory that Lawrence built above the UC Berkeley campus to house his ever-bigger cyclotrons. The 88-inch cyclotron was built in 1961, three years after Lawrence died, and is very much an active research tool. Physicists are still using it to create elements with big nuclei. But about 40 percent of the cyclotron’s time is dedicated to something completely different. It is one of only two facilities in California where you can test the computer chips that go into satellites, by exposing them to high-radiation conditions similar to what they encounter in space. In our story, we follow this testing process. We also tell part of the history of the Stanford Linear Accelerator Center, now called the SLAC National Accelerator Laboratory. What was then the longest particle accelerator in the world began to operate in Menlo Park in 1966. This linear accelerator sent electron beams traveling down a two-mile row of microwave-oven-like devices and smashed them against a stationary target. Physicists used these accelerated electrons to investigate what was inside the protons and neutrons, and in 1968 they found that these particles were made up of minuscule constituents they called quarks. A few years later, SLAC physicist Burton Richter built a collider, a type of particle accelerator in which particle beams are smashed against each other to reach high energy levels. The so-called SPEAR collider that Richter built led him and his team to discover a more massive quark called the charm quark. This breakthrough helped physicists come up with our current understanding of how matter is organized, a theory called the Standard Model of particle physics. Today, dozens of physicists and graduate students at the Berkeley Lab and SLAC are working on the Large Hadron Collider, making regular trips to Geneva and crunching data back home in their labs in hopes of making discoveries that will answer some of the questions that the Standard Model now leaves unanswered. For example, what is the invisible “dark matter” that makes up 25 percent of the universe?
Both at SLAC and at the Berkeley Lab, particle accelerators are being used for exciting new work. The X-rays emitted by accelerated particles, which were at first considered a nuisance, were quickly harnessed in the 1970s to make detailed images. This synchrotron radiation is now used to understand everything from the structure of proteins that could lead to drug development, to materials that could one day be used to build faster computers, and fossils that help prove Darwin’s theory of evolution.
This calculator provides an easy method to solve percentage calculations such as "What is 33 percent of 100?" See the explanation below. You can solve this type of calculation with your own values by entering them into the calculator's fields, clicking 'calculate', and getting your answer!

Percentages are similar to fractions, with an important difference. In fractions, the whole is represented by the denominator (e.g. the number 5 in the fraction 1/5). In percentages, the whole is represented by the number 100. In fact, "per cent" means "per 100" or "for each 100."

To solve the problem above, let's convert it into equation form:

__ = 33% x 100

In this example, the number 100 represents the whole, so as a percentage it would be equal to 100%. Written as a ratio, we would get:

100% : 100

If a student took a 100-question test and got every answer correct, as a percentage they would score 100% on the test. In our problem we have to evaluate 33 percent of 100. For now, let's call this unknown value "Y". Written as a ratio, we would get:

33% : Y

To see the relationship between these two ratios, let's combine them into an equation:

100% : 100 = 33% : Y

It is critical that both % values be on the same side of their ratios. For instance, if you put the % value on the right side of one ratio, then the other % value should also be on the right side of its ratio. "100 : 100% and Y : 33%" is correct; "100 : 100% and 33% : Y" is wrong.

Let's solve the equation for Y by first rewriting it as:

100% / 100 = 33% / Y

Drop the percentage signs to simplify the calculation:

100 / 100 = 33 / Y

Multiply both sides by Y to move it to the left side of the equation:

Y ( 100 / 100 ) = 33

To isolate Y, divide both sides by 100 / 100 (that is, multiply both sides by its reciprocal, which here is also 100 / 100):

Y = 33 ( 100 / 100 )

Computing the right side, we get:

Y = 33

This leaves us with our final answer: 33% of 100 is 33.
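The same ratio method translates directly into a few lines of code. Here is a minimal sketch of the general problem "what is P percent of W"; the function name percent_of is our own label for illustration, not part of the calculator described above:

```python
def percent_of(percent: float, whole: float) -> float:
    """Solve 100% : whole = percent : Y for the unknown Y."""
    # Dropping the % signs gives 100 / whole == percent / Y, so:
    return percent * whole / 100

# The worked example from the text: 33% of 100
print(percent_of(33, 100))   # 33.0
# Another quick check: 25% of 80
print(percent_of(25, 80))    # 20.0
```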
A type of luminescence, fluorescence is the emission of visible light by a substance as a result of prior absorption of light or other electromagnetic radiation. When a molecule is exposed to light or radiation, its valence electrons absorb the energy and jump to a higher energy state. When an electron relaxes back to its ground state, it emits a photon with a longer wavelength than the light or radiation originally absorbed. A molecule that displays fluorescent properties is known as a fluorophore. Fluorophores have specific excitation and emission ranges and play roles in a wide variety of applications, such as lighting, analytical chemistry, spectroscopy, biochemistry, medicine, microscopy, and forensics. Fluorescence offers sensitivities as low as parts per trillion; because often only one excitation wavelength is used and only one emission wavelength is detected, it also offers incredibly high selectivity.
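Since the emitted photon has a longer wavelength than the absorbed one, it necessarily carries less energy (E = hc/λ); the difference is known as the Stokes shift. A small sketch illustrating this with the commonly cited excitation and emission peaks of fluorescein, roughly 494 nm and 512 nm (treat the exact values as assumptions for illustration):

```python
# Photon energy from wavelength: E = h * c / wavelength
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy in electronvolts of a photon with the given wavelength."""
    joules = H * C / (wavelength_nm * 1e-9)
    return joules / 1.602e-19  # convert joules to eV

excitation_nm, emission_nm = 494, 512   # approximate peaks for fluorescein

e_abs = photon_energy_ev(excitation_nm)
e_emit = photon_energy_ev(emission_nm)

print(f"absorbed photon: {e_abs:.3f} eV")
print(f"emitted photon:  {e_emit:.3f} eV")
print(f"Stokes loss:     {e_abs - e_emit:.3f} eV (dissipated within the molecule)")
```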
Geographical Range: Northern South America
Habitat: Forests, woodlands, plains, savannahs
Scientific Name: Epicrates cenchria cenchria

The Brazilian rainbow boa is the largest of the rainbow boas, reaching six or more feet in length. Rainbow boas get their name from the multicolored sheen of their skin, caused by light reflecting off tiny ridges on their scales. Rainbow boas prowl for food at night and sleep during the day. Although they usually rest in a tree or bush, they spend most of their waking time on the ground. They feed on birds, their eggs, small mammals, lizards, and frogs. Like all boa constrictors, rainbow boas kill their prey by suffocating it as they squeeze the victim's body with their muscular coils.
Exploration for Groundwater

Groundwater is precipitation that has drained through the soil into the gravels and bedrock fractures and faults below. It is found nearly everywhere, but useable, reliable quantities can only be tapped in sand, gravel, and rock formations that have sufficient void space to hold and conduct water. These formations are known as aquifers. Most groundwater used for domestic supply comes from relatively shallow wells (less than 150 feet in depth) in fractured bedrock or unconsolidated materials. The bedrock may be shale, sandstone, siltstone, limestone, or coal. Water can be stored in all these rocks, but rapid movement of water is primarily controlled by secondary fractures--joints or faults that penetrate the rock near the land surface (Wyrick and Borchers, 1981; Kipp and Dinger, 1991).

Joints and faults in the earth's crust may extend from tens of feet up to several miles in length. The longer of these features, called linear terrain features, fracture traces, or lineaments, can be seen on different types of aerial photographs and satellite imagery. These features may collect, store, and transport large amounts of groundwater and can provide sufficient water to communities and industry. Little effort has been made in the past to determine the groundwater resource potential as it relates to high-yield wells. Recent efforts in the upper Kentucky River Basin, in which satellite imagery was used to locate wells, resulted in three of the four wells drilled producing more water than 90 percent of all the recorded wells in the area, each with enough water to supply from 50 to 250 homes.

Exploiting geologic features such as fracture traces and lineaments is a common technique in the exploration for subsurface fluids, including groundwater (Siddiqui and Parizek, 1971; Mabee and others, 1994) and petroleum (Driscoll, 1986). Fracture traces are linear expressions on the earth's surface that are less than 1 mile in length; those greater than a mile are termed lineaments. Linear features that are not readily apparent on the ground can often be distinguished from high altitudes. Currently, private vendors as well as foreign agencies have made high-resolution satellite photos and radar images available. These data can be used in detailed surficial analysis for linear features that can be related to high-production groundwater zones.
Our primary goal is to aid the children in reaching their fullest potential in all aspects of their lives. To quote Dr. Maria Montessori, "Each child is constantly working towards the person they will become." We offer a non-competitive environment where the children are encouraged to work according to their own interests and abilities. Without being pressured academically, the child is allowed to develop at his/her own pace, learn the importance of making responsible choices, and actively direct his/her own experiences. Through careful observation, we can monitor a child's learning process and connect them to the appropriate materials when they are ready. Our goal is to assist the child in developing life skills that will carry them through their adult life. Our role is to prepare the environment so that it meets the needs of the child's development related to:
- social skills (peer problem solving)
- emotional growth (encouraging independence to develop positive self-esteem)
- physical coordination (large and fine motor skills)
- cognitive preparation (enriched curriculum)
A well-prepared environment allows the child to be independent. Not having to rely on the adult as the main source of information promotes success and self-esteem. The environment allows the child to experience the joy of learning and to develop a positive self-image. The classroom consists of mixed ages, which fosters reciprocal learning between the older children and the younger children. Younger children view their older peers as role models from whom they can learn advanced skills, while older children develop self-esteem by sharing their knowledge and assisting younger children. This concept prepares them well for interacting with various age groups throughout their early lives. Fair expectations and ground rules are established so that the children feel secure within themselves and with their surroundings. It is also important that the child learns respect for the materials, the environment and peers.
A performance, in the performing arts, generally comprises an event in which a performer or group of performers behaves in a particular way for another group of people, the audience. Choral music and ballet are examples. Usually the performers participate in rehearsals beforehand, and afterwards audience members often applaud. After a performance, performance measurement sometimes occurs; performance measurement is the process of collecting, analyzing and/or reporting information regarding the performance of an individual, group, organization, system or component.

The means of expressing appreciation can vary by culture. Chinese performers will clap with the audience at the end of a performance; the return applause signals "thank you" to the audience. In Japan, folk performing arts performances commonly attract individuals who take photographs, sometimes getting up on the stage and within inches of performers' faces. Sometimes the dividing line between performer and audience may become blurred, as in the example of "participatory theatre", where audience members get involved in the production.

Theatrical performances can take place daily or at some other regular interval. Performances can take place at designated performance spaces (such as a theatre or concert hall) or in a non-conventional space, such as a subway station, on the street, or in somebody's home. Examples of performance genres include musical, theatrical and other genres. Music performance (a concert or a recital) may take place indoors in a concert hall or outdoors in a field, and may require the audience to remain very quiet or encourage them to sing and dance along with the music.

Live performance event support overview

Live performance events, including theater, music, dance and opera, use production equipment and services such as staging, scenery, mechanicals, sound, lighting, video, special effects, transport, packaging, communications, costume and makeup to convince live audience members that there is no better place they could be right now. This article provides information about many of the possible performance production support tools and services and how they relate to each other. Live performance events have a long history of using visual scenery, lighting and costume, and a shorter history of visual projection and sound amplification and reinforcement. The sections of this article together explain how the tools needed to stage, amplify and reinforce live events are interconnected.

Related topics:
- Audio electronics
- Liquid light shows
- Live sound mixing
- Rock concert
- Rock festival
- Sound technology
- VJ (video performance artist)
Current dynamics of desertification in Africa: facts and statistics

Desertification is considered one of the world's most alarming global environmental problems. It is also the primary cause of environmentally induced displacement in many regions of the world. The term "desertification" has been in use since 1949, when the French ecologist and botanist André Aubréville published a book entitled Climat, Forêts et Désertification de l'Afrique Tropicale. He defined desertification as the changing of productive land into desert as the result of the ruination of land by man-induced soil erosion. According to many estimates, desertification affects at least 135-250 million people worldwide. However, some scientists argue that in China alone the problem of desertification concerns more than 400 million people. The primary areas of the world affected by desertification are the Sahel region, as well as Southern Africa (the Kalahari Desert), China (the Gobi Desert) and Latin America. As Kofi Annan said in 2006, "If we don't take action, current trends suggest that by 2020 an estimated 60 million people could move from desertified areas of sub-Saharan Africa towards North Africa and Europe, and that worldwide, 135 million people could be placed at risk of being uprooted". According to Allen and Ober (2008), over 67 million people in the Sahel already live under the effects and threats of desertification. Desertification of soils appears to be one of the fundamental causes of hunger in many regions of the world. Soil degradation is a particularly dangerous form of land degradation and has severe effects on soil functions. The most important causes of soil degradation include deforestation, overgrazing and various agricultural activities. According to Salfrank and Walicki (2005), over half of the total Central Asian land area is prone to desertification, and over 80 percent of the total land area in Turkmenistan and Uzbekistan is affected by salinization and desertification.

Facts and Statistics:
- Desertification is an especially serious problem in Africa. Two-thirds of the continent is desert or drylands, and 74 per cent of its agricultural drylands are already seriously or moderately degraded.
- Worldwide, desertification is making about 10-12 million hectares useless for cultivation each year, a territory equal to roughly 10% of the total area of South Africa.
- The areas with the greatest dynamics of desertification are concentrated in the Sahelian region, the Kalahari in the south and the Horn of Africa. According to many estimates, 70 percent of African land is already degraded to some degree, and land degradation affects at least 485 million people, or 60-70 percent of the entire African population. (United Nations).
- More than 35 percent of the land area (approximately 83,489 km², 49 out of the 138 districts) of Ghana is prone to desertification. Recent research indicates that the land area prone to desertification/drought in the country has almost doubled during the last two decades. (United Nations).
- Approximately 70 percent of Ethiopia and 80 percent of Kenya is reported to be prone to desertification in recent years. (United Nations).
- Recent estimates suggest that between 48 and 78 percent of the territory of Swaziland is at risk of desertification.
- According to the UN, Nigeria is losing 1,355 square miles of cropland and rangeland to desertification each year. This problem affects each of the 11 states of northern Nigeria.
Nigeria loses approximately 320,000-350,000 hectares of land per year, which causes mass displacement of local communities in the North. At least 35 million people face threats of hunger and economic hardship due to the present scale of desertification.
- Recent estimates suggest that more than 30% of the land area of Rwanda, Burundi, Burkina Faso, South Africa and Lesotho is very severely degraded. These figures continue to rise.
Centipedes are arthropods. They belong to the class Chilopoda and the subphylum Myriapoda. The word centipede is derived from Latin and literally means "hundred feet". Centipedes are elongated animals with a pair of legs on each body segment. A distinctive feature, besides the large number of legs, is a pair of venom claws, or forcipules, formed from a modified first appendage. All centipedes are predatory. Centipedes vary a lot in size: the smallest species reach only a few millimetres in length (orders Lithobiomorpha and Geophilomorpha), while the largest can grow to over 30 cm (order Scolopendromorpha). There are some question marks surrounding which is the largest centipede species in the world. Most will however say that it is the Amazonian giant centipede (Scolopendra gigantea), which can grow to more than 30 cm / 12 inches in length. There are, however, unconfirmed reports of giant centipedes on the Galápagos Islands reaching twice that size. The largest Galápagos Islands giant centipedes in captivity have grown to a mere 20 cm / 8 inches. It is possible that the reports of large specimens in fact refer to a larger, even rarer centipede species on the Galápagos. The now extinct members of Euphoberia, which once lived in Europe and North America, could grow to 1 m / 40 inches. Some scientists regard these animals as centipedes, while others think they were millipedes. Some centipedes are very beautiful with bright colours, but most have fairly dull colorations, displaying different combinations of brown and red. Centipedes are accomplished hunters and can catch even fast and flying insects; the larger species can even catch bats in mid-flight. There are about 3,000 described species of centipede, but researchers estimate that there might be upwards of 8,000 centipede species in the world. Centipedes can be found almost everywhere on the planet, even in areas with very hostile climates such as within the Arctic Circle. They can be found in all types of environments from rainforest to desert, but need access to moist micro-habitats within their habitat. Centipedes often play an important role in keeping insects in check in the areas where they are found. Centipedes have interesting mating rituals. In many species the male lays a sperm package on the ground and then tries to get the female to engulf it through a courtship dance. The males of other species simply deposit their sperm packages in the forest for the females to find on their own. Centipedes breed all year round in tropical and subtropical areas; in colder climates they breed during spring and summer. Some centipede species reproduce through parthenogenesis and do not need a male. Some centipedes, e.g. members of Lithobiomorpha and Scutigeromorpha, simply deposit their eggs in holes in the soil and leave them there. Other groups, like Geophilomorpha and Scolopendromorpha, show parental care: the female guards the eggs and keeps them free from fungi by licking them. Some species even guard the young centipedes for a while after they have hatched. The most extreme form of parental care can be found in some Scolopendromorpha species whose young are matriphagic; in other words, the youngsters eat their mother when they hatch and thereby get a good start in life. Centipedes lay between 10 and 60 eggs depending on the species.
The eggs take a long time to hatch; hatching can take anywhere from one to several months, and it can take many years for a centipede to reach maturity. The fact that they produce a low number of offspring and mature slowly means that centipede populations recover slowly from losses. Some centipede species have bites that can be dangerous to humans: some are venomous, with venom strong enough to kill, while others can cause anaphylactic shock in people with allergies. The bites can also be very painful.
Part of our Civil Rights Movement unit covers the life, times, and significance of Emmett Till. In years past Till's story has impacted students in deep and profound ways, since he was only 14 when he was brutally murdered in August 1955. Keith Beauchamp's stunning documentary The Untold Story of Emmett Till is a great resource for learning about the topic, and therefore worth seeing and discussing. (Read about another important documentary here.) Here's a poem about Till, a great web resource and labor of love devoted to Till, and here Cornel West weighs in. An author has created a Till blog, and here's an FBI report on Till. Finally, here's a story on Till and the use of images in history. With what we've discussed in class about the Civil Rights Movement, and in light of viewing the Till documentary, simply leave your thoughts and reflections about Till in the comments section. Why do you think Till is an important figure to study? In your opinion, what is his significance to the Civil Rights Movement?
Lollard, in late medieval England, a follower, after about 1382, of John Wycliffe, a University of Oxford philosopher and theologian whose unorthodox religious and social doctrines in some ways anticipated those of the 16th-century Protestant Reformation. The name, used pejoratively, derived from the Middle Dutch lollaert (“mumbler”), which had been applied earlier to certain European continental groups suspected of combining pious pretensions with heretical belief. At Oxford in the 1370s, Wycliffe came to advocate increasingly radical religious views. He denied the doctrine of transubstantiation and stressed the importance of preaching and the primacy of Scripture as the source of Christian doctrine. Claiming that the office of the papacy lacked scriptural justification, he equated the pope with Antichrist and welcomed the 14th-century schism in the papacy as a prelude to its destruction. Wycliffe was charged with heresy and retired from Oxford in 1378. Nevertheless, he was never brought to trial, and he continued to write and preach until his death in 1384. The first Lollard group centred (c. 1382) on some of Wycliffe’s colleagues at Oxford led by Nicholas of Hereford. The movement gained followers outside of Oxford, and the anticlerical undercurrents of the Peasants’ Revolt of 1381 were ascribed, probably unfairly, to the influence of Wycliffe and the Lollards. In 1382 William Courtenay, archbishop of Canterbury, forced some of the Oxford Lollards to renounce their views and conform to Roman Catholic doctrine. The sect continued to multiply, however, among townspeople, merchants, gentry, and even the lower clergy. Several knights of the royal household gave their support, as well as a few members of the House of Commons. The accession of Henry IV in 1399 signaled a wave of repression against heresy. In 1401 the first English statute was passed for the burning of heretics. The Lollards’ first martyr, William Sawtrey, was actually burned a few days before the act was passed. In 1414 a Lollard rising led by Sir John Oldcastle was quickly defeated by Henry V. The rebellion brought severe reprisals and marked the end of the Lollards’ overt political influence. Driven underground, the movement operated henceforth chiefly among tradespeople and artisans, supported by a few clerical adherents. About 1500 a Lollard revival began, and before 1530 the old Lollard and the new Protestant forces had begun to merge. The Lollard tradition facilitated the spread of Protestantism and predisposed opinion in favour of King Henry VIII’s anticlerical legislation during the English Reformation. From its early days the Lollard movement tended to discard the scholastic subtleties of Wycliffe, who probably wrote few or none of the popular tracts in English formerly attributed to him. The most complete statement of early Lollard teaching appeared in the Twelve Conclusions, drawn up to be presented to the Parliament of 1395. They began by stating that the church in England had become subservient to her “stepmother the great church of Rome.” The present priesthood was not the one ordained by Christ, while the Roman ritual of ordination had no warrant in Scripture. Clerical celibacy occasioned unnatural lust, while the “feigned miracle” of transubstantiation led men into idolatry. The hallowing of wine, bread, altars, vestments, and so forth was related to necromancy. Prelates should not be temporal judges and rulers, for no man can serve two masters. 
The Conclusions also condemned special prayers for the dead, pilgrimages, and offerings to images, and they declared confession to a priest unnecessary for salvation. Warfare was contrary to the New Testament, and vows of chastity by nuns led to the horrors of abortion and child murder. Finally, the multitude of unnecessary arts and crafts pursued in the church encouraged "waste, curiosity, and disguising." The Twelve Conclusions covered all the main Lollard doctrines except two: that the prime duty of priests is to preach and that all men should have free access to the Scriptures in their own language. The Lollards were responsible for a translation of the Bible into English, made by Nicholas of Hereford and later revised by Wycliffe's secretary, John Purvey.
Rotate the four cogs 90° in the Z axis to stand them up. Delete History and Reset Transformations (Fig.14). You now have your four basic prototype cogs, ready to create the gear system. All the other cogs in the system will be duplicates of these four prototypes.

Creating your First Gear Combination

The next objective is to create a gear combination. This requires two different-sized cogs connected to a spindle (cylinder), as indicated in Fig.15. The example in Fig.13 combines cog10 (10 teeth) with cog30 (30 teeth). Duplicate cog10 and cog30 - Ctrl + [d]. Then align them using the Move tool [w] with Point Snap [v] pressed (Fig.16).

Before we proceed with the tutorial, this is a convenient time to consider the size of the cogs and the relationship between them. Look at Fig.16. The radius of cog30 (large) is three times that of cog10 (small). Likewise, cog30 has three times as many teeth as cog10. The most important value for us in this tutorial is the circumference: the circumference of cog30 is three times that of cog10. This means that if both cogs rotate 360° over the same time period, the teeth on cog30 will have covered three times the distance of the teeth on cog10, and therefore the teeth on cog30 are moving three times faster. "cog30 is three times faster than cog10" is the same as "cog10 is three times slower than cog30" (the arithmetic is sketched in code after this section). In Fig.17, cog10 and cog30 are both rotated 90° over 25 frames - both are rotating at the same angular speed. However, the distance covered by the teeth on cog30 is clearly greater than that covered by the teeth on cog10; therefore the teeth on cog30 are moving three times faster than the teeth on cog10.

Customizing the Cogs

As the objective of this tutorial is rigging, the modeling aspect is kept to a minimum - the cogs are very simple. However, to make things more visually interesting, it's a good idea to customize the cogs as you create the gear system. In Component mode [RMB], select the vertices of each cog and move [w] them to customize the thickness of each cog (Fig.18). Remember to return to Object mode. Create a polygon cylinder for the connecting spindle (Fig.19). Don't forget to Delete History and Freeze Transformations after you have positioned and resized the spindle (cylinder). Then rename the spindle cylinder "gear01" (Fig.20).
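To make the ratio concrete before we wire anything up in Maya, here is a small plain-Python sketch of the tooth-count arithmetic the rig will rely on. This is our own illustration, not part of the tutorial's scene; in Maya the same relationship would typically be driven by an expression or utility node.

```python
# Meshed cogs rotate in opposite directions, scaled by the inverse
# ratio of their tooth counts (equivalently, of their circumferences).
def driven_rotation(driver_degrees: float, driver_teeth: int, driven_teeth: int) -> float:
    """Rotation of the driven cog for a given rotation of the driver."""
    return -driver_degrees * driver_teeth / driven_teeth

# cog10 (10 teeth) driving cog30 (30 teeth): cog30 turns 3x slower, reversed.
print(driven_rotation(90.0, 10, 30))    # -30.0 degrees
print(driven_rotation(360.0, 10, 30))   # -120.0 degrees
```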
from Stephen C. Behrendt; 12/12/07

Not all of these questions will be equally applicable to all of the short stories you will read — or to short stories generally that you will read outside this course. But they will help you to become better, more careful, more insightful, and more confident as a reader. In class discussions, we will emphasize various of these elements, some of them more than others, so that you can get a sense of how we can approach the same story in different ways and with different objectives for ourselves as readers. I suggest that you read through these questions each time you prepare to read an assigned story. Doing so, even though it may seem simplistic and repetitious, will help you to read in a more informed manner, and it will help you, too, to be more receptive to the artistry — the aesthetic values — of these and other stories you will read.

Questions to help you analyze the PLOT
- Who is the protagonist of this short story? Try to establish his/her age, family background, social class and status, and occupation.
- Summarize as briefly as possible the single change which occurs to the protagonist during the course of this story, taking care to specify whether this change is mainly one of fortune, moral character, or knowledge.
- Trace the progress of this change through these detailed stages:
- the original situation of the protagonist (including the initial possibilities of later disequilibrium);
- the precipitating event which begins to involve the protagonist in a central tension;
- the alternative types of action which are available to the protagonist as his involvement intensifies;
- the major steps by which the involvement is intensified (show how each step advances the involvement, and how it changes the relative strength of the alternatives);
- the crisis (show what precipitates the crisis and how);
- the resolution (show what breaks the crisis and how).
- At what point in this story is the tension highest? Is that point the dramatic climax? How is the tension produced, and is it appropriate? Does the story as a whole seem to be high-tension or low-tension?
- Does the story involve an epiphany, or moment of insight, revelation, or self-realization for the protagonist—or perhaps for the reader? If so, does it coincide with the dramatic climax, or crisis, of the story?
- What questions of probability arise in this story? In general, are the events of this story sufficiently probable to support its overall design?

Questions to help you analyze CHARACTERIZATION
- Is the protagonist a round or a flat character? On what evidence do you base your answer? What about the other characters? Why are they made the way they are?
- Evaluate the moral structure of the protagonist:
- To what degree is his/her moral stature defined by the words and actions of contrasting minor characters, or by the testimony of characters who are readily acceptable as witnesses?
- Discuss the protagonist's inclinations toward specific virtues and vices, his/her powers or handicaps with relation to those virtues and vices, and one or two important instances in which his/her moral stature is apparent.
- Describe the psychology of the protagonist:
- What are her/his dominant traits or desires? How did these traits or desires apparently originate? Do they support or oppose one another? Explain.
- Through what modes of awareness is the protagonist most responsive to life – rational, instinctual, sensory, emotional, intuitive? Explain and illustrate.
- Discuss the way in which she/he takes hold of a situation. In what terms does she/he see her/his problems? What does she/he try to maximize or minimize, try to prove or disprove? Do her/his reactions proceed through definable phases? If so, what are they? How may one explain her/his effectiveness or inadequacy in taking hold of a situation or emergency?
- In view of all these matters, what does the author apparently want us to think and feel about what happens to the protagonist?
- Is the protagonist's personality worked out with probability and consistency?

Questions to help you evaluate the story's NARRATIVE MANNER
- What is the predominant point of view in this story, and who seems to be the focal character? Illustrate by citing a very brief passage and showing how it confirms your opinion.
- What kind of ordering of time predominates in this story? Explain.
- At what points does the narrative significantly slow down or speed up? At what points do conspicuous jumps in time occur? Why, in each case?
- Select several passages from this story, each reasonably brief, and use them to illustrate a discussion of the following stylistic matters:
- special qualities of diction and sentence structure;
- the use of style to individualize the speech, thought, and personality of particular characters;
- the implied presence of the narrator or "author"; his/her level of involvement; his/her personality;
- the basic vision of life which the style of the story reflects and extends.

Questions to help you assess IDEA in the story
- What is the theme of the story? Express it in a single declarative sentence.
- According to the story, what kind of behavior makes for lasting human worth or for human waste?
- Evaluate the relative importance in influencing the outcome of the story of the following: physical nature, biological make-up, intimate personal relationships, society. What does the author seem to regard as the chief area in which human destiny is shaped?
- According to the story, to what extent is the individual able to manage these formative conditions?
- To what extent is any individual's final outcome helped or hindered by forces outside his/her control? In the story are these influences benignant, malignant, or indifferent? Explain.

Questions that may help you understand the story's BACKGROUND
- Summarize the facts of the author's birth, family and social position, main gifts or handicaps, education, and entry into writing.
- Describe briefly, with dates, the more important of the author's earlier works, giving special attention to the work immediately preceding the story under study.
- What specific circumstances led the author to write this story? To what extent did she/he depart from the sort of fiction she/he had written up to this point? What persons, events, or other autobiographical materials does this story reflect, and with what modifications? What account of her/his inspirations and problems with this story did the author provide through letters, prefaces, journals, and the like?
- By focusing upon sample details of this story, show how this biographical information (questions 1 and 3) helps to explain the design of the work.
- What main features of social tension or stability in his/her own times did the author treat in this story (e.g., ideology, war, economics, technology, daily life, etc.)? Explain, using both this story and such outside sources as personal statements by the author, histories of the period, etc.
- By focusing upon sample details of the story, show how this historical information (question 5) helps to explain the design of the story. - What authors, literary circles, or movements did the present author support, attack, imitate, join, or depart from? Why? - Show how this literary background (question 7) helps explain the design of the story.
WATER USE AND CONSERVATION

Grade Level: 7-10; can be adapted for elementary grades by summarizing simple facts in the article and shortening the survey to 3 days.
Subject: Environmental Science
Time Allotment: three 45-minute class periods, plus one week for home observations

Students read the article entitled "Getting Up to Speed" on the water cycle and water conservation. Review the content of the article. Students define the key terms in their science journals: Clean Water Act, conservation, evaporation, hydrologic cycle, and transpiration.

LEARNING ACTIVITY 1: World of Water Demonstration

Before the demonstration, the teacher explains to the class that the amounts of water are relative quantities, not exact proportions. (A quick arithmetic check of these volumes appears at the end of this lesson.) Put 3 gallons of water in an aquarium and explain that this water represents all the water on earth. In their science journals, students complete a 3-minute quick-write estimating what percent of this water is:
- Ice caps/glaciers
- Freshwater lakes
- Inland seas/salt lakes

Using a measuring cup, the teacher removes 20 ounces of water from the aquarium. Using food coloring, color the remaining water in the aquarium; the dyed water represents the world's oceans. The water in the measuring cup represents all the water in the world that is NOT ocean water. Pour 15 ounces of water from the measuring cup into a clear container. This water represents ice caps and glaciers; because it is in the form of ice, it is not readily available for use, so it has to be separated from the world's supply of fresh water. The remaining 5 ounces of water in the measuring cup represent the world's available fresh water. Of this water, only a small fraction of an ounce composes the world's freshwater lakes and rivers: use an eyedropper to collect this water and place it in a student's hand. The water remaining in the measuring cup after removing the ice cap/glacier water and the freshwater lakes and rivers (about 4.5 ounces) is groundwater. Pour this water into a cup of sand and explain that groundwater is held in the pore spaces of soil and in cracks in bedrock.

Students complete the World of Water activity worksheet. The answers to the drinking water percentages are: 0.419% total and 2.799% grand total. Students review their estimates from the beginning of class and discuss their reactions to learning that there is such a small percentage of fresh drinking water in the world. Students respond to the following questions in their science journals:
- Why isn't all fresh water usable? It's often not easy to get to; it can be frozen or trapped in the soil; it can be too polluted for use.
- Why do we need to take care of surface and ground water? Water is important for humans, plants, and animals; the more we use and waste, the less water there is available to use.
Discuss responses as a class.

LEARNING ACTIVITY 2: H2O Diary - How much water do you use?

Brainstorm in science journals using the guiding questions: What are some of the ways water is wasted at home? How much do you think is wasted every day? Students share responses in small groups. Student groups record a list of daily activities that require using water and guesstimate how many gallons of water are used for each activity. Pass out the H2O Diary and explain that before they begin the survey, students must make a hypothesis as to how much water an average person uses a day. Students record their hypothesis in their science journals and on the H2O Diary.
Students complete the survey at home for a full week. Make clear that students must make tally marks each time the activity takes place. After students have completed the survey, have them discuss the results in small groups. Students record their responses in their science journals. Use the following suggested discussion questions, then share the group discussions with the whole class:
- What activity happened most often?
- Which activity used the most water?
- How much water is wasted by leaving the water running while brushing your teeth instead of turning it off?
- Electricity is a major user of fresh water: water is used to cool the machinery that produces electricity, so reducing your usage of electricity saves on water usage. Can you think of other industries that might use a lot of water?
- Why might your answers differ from your classmates' answers?
- Based on your survey, what can you and your family do to reduce the amount of water you use every day?
- Estimate how much water you would conserve by reducing your water use: per day, per week, per year.
Review ways in which individuals and families can conserve water.
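Looking back at Learning Activity 1, here is the promised arithmetic check of the demonstration volumes, written as a short Python sketch. This is our own illustration, not part of the lesson plan, and the lesson itself stresses that the volumes are relative rather than exact proportions.

```python
# Fractions of the 3-gallon "world of water" represented by each container
# in Learning Activity 1 (illustrative arithmetic only).
GALLON_OZ = 128
total_oz = 3 * GALLON_OZ           # 384 oz = all the water on Earth
non_ocean_oz = 20                  # removed with the measuring cup
ice_oz = 15                        # poured off as ice caps/glaciers
fresh_oz = non_ocean_oz - ice_oz   # 5 oz of available fresh water

print(f"non-ocean water:   {non_ocean_oz / total_oz:.1%}")  # ~5.2%
print(f"ice caps/glaciers: {ice_oz / total_oz:.1%}")        # ~3.9%
print(f"available fresh:   {fresh_oz / total_oz:.1%}")      # ~1.3%
```

Real-world figures put fresh water at roughly 2.5-3% of all water on Earth, so the demonstration volumes are in the right neighborhood without being exact.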
electric charge: see charge, the property of matter that gives rise to all electrical phenomena (see electricity). The basic unit of charge, usually denoted by e, is that on the proton or the electron; that on the proton is designated as positive (+e).

A basic property of elementary particles of matter. One does not define charge but takes it as a basic experimental quantity and defines other quantities in terms of it. According to modern atomic theory, the nucleus of an atom has a positive charge because of its protons, and in the normal atom there are enough extranuclear electrons to balance the nuclear charge so that the normal atom as a whole is neutral. Generally, when the word charge is used in electricity, it means the unbalanced charge (an excess or deficiency of electrons), so that physically there are enough "non-normal" atoms to account for the positive charge on a "positively charged body" or enough unneutralized electrons to account for the negative charge on a "negatively charged body." In line with this usage, the total charge q on a body is the total unbalanced charge possessed by the body. For example, if a sphere has a negative charge of 1 × 10⁻¹⁰ coulomb, it has 6.24 × 10⁸ more electrons than are needed to neutralize its atoms. The coulomb is the unit of charge in the meter-kilogram-second (mks) system of units. See Coulomb's law, Electrical units and standards, Electrostatics.

The source of an electromagnetic field that is associated with material carriers; an intrinsic characteristic of elementary particles that determines their electromagnetic interactions. Electric charge is one of the basic concepts in the science of electricity. The entirety of electric phenomena is a manifestation of the existence, motion, and interaction of electric charges. Two distinct kinds of electric charges are differentiated and are conventionally designated as positive and negative; it has been noted that bodies (or particles) with like charges repel one another and those with unlike charges attract one another, a fact first established by C. F. Dufay in 1733-34. The charge of an electrified glass rod was designated positive, and the charge of a resin rod (specifically, an amber rod) was designated negative. In accordance with this assumption, the electric charge of an electron is negative (the Greek word elektron means "amber").

Electric charges are discrete: there exists a minimal elementary electric charge, of which the charges of all bodies are multiples. The total electric charge of a closed physical system is equal to the algebraic sum of the charges of its constituent elementary particles (for common macroscopic bodies these particles are protons and electrons). This total charge is rigorously conserved during all interactions and transformations of the particles in the system. The force of interaction between quiescent charged bodies (or particles) obeys Coulomb's law. The relationship between electric charges and an electromagnetic field is defined by Maxwell's equations. In the International System of Units, electric charges are measured in coulombs.

L. I. PONOMAREV
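As a quick worked version of the charged-sphere example above, here is a short Python sketch (our own addition; the value of the elementary charge is the standard physical constant):

```python
# How many excess electrons account for a net charge of 1e-10 C?
e = 1.602e-19   # elementary charge in coulombs (standard constant)
q = 1e-10       # net negative charge on the sphere, in coulombs

excess_electrons = q / e
print(f"{excess_electrons:.3g} excess electrons")  # ~6.24e+08, matching the article
```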
World War II: On the Road to War

Some long-term causes of World War II are found in the conditions preceding World War I and are seen as common to both World Wars. Supporters of this view paraphrase Clausewitz: World War II was a continuation of World War I by the same means. In fact, World Wars had been expected before Mussolini and Hitler came to power and Japan invaded China. Among the causes of World War II were Italian fascism in the 1920s, Japanese militarism and the invasions of China in the 1930s, and especially the political takeover in 1933 of Germany by Hitler and his Nazi Party with its aggressive foreign policy. The immediate cause was Britain and France declaring war on Germany after it invaded Poland in September 1939.

Problems arose in Weimar Germany, which experienced strong currents of revanchism after the Treaty of Versailles concluded its defeat in World War I in 1918. Dissatisfactions with treaty provisions included the demilitarization of the Rhineland, the prohibition of unification with Austria and the loss of German-speaking territories such as Danzig, Eupen-Malmedy and Upper Silesia despite Wilson's Fourteen Points; the limitations on the Reichswehr, making it a token military force; the war-guilt clause; and, last but not least, the heavy tribute that Germany had to pay in the form of war reparations, which became an unbearable burden after the Great Depression. The most serious internal cause in Germany was the instability of the political system, as large sectors of politically active Germans rejected the legitimacy of the Weimar Republic.

After his rise and takeover of power in 1933, to a large part based on these grievances, Adolf Hitler and the Nazis heavily promoted them along with vastly ambitious additional demands based on Nazi ideology, such as uniting all Germans (and further all Germanic peoples) in Europe in a single nation; the acquisition of "living space" (Lebensraum) for primarily agrarian settlers (Blut und Boden), creating a "pull towards the East" (Drang nach Osten) where such territories were to be found and colonized, in a model that the Nazis explicitly derived from the American Manifest Destiny in the Far West and its clearing of native inhabitants; the elimination of Bolshevism; and the hegemony of an "Aryan"/"Nordic" so-called Master Race over the "sub-humans" (Untermenschen) of supposedly inferior races, chief among them Slavs and Jews.

Tensions created by those ideologies and the dissatisfaction of those powers with the interwar international order steadily increased. Italy laid claim to Ethiopia and conquered it in 1935; Japan created a puppet state in Manchuria in 1931 and expanded into China from 1937; and Germany systematically flouted the Versailles treaty, reintroducing conscription in 1935 after the Stresa Front's failure (having secretly started rearmament earlier), remilitarizing the Rhineland in 1936, annexing Austria in March 1938, and annexing the Sudetenland in October 1938. All these aggressive moves met only feeble and ineffectual policies of appeasement from the League of Nations and the Entente Cordiale, in retrospect symbolized by the "peace for our time" speech following the Munich Conference, which had allowed the annexation of the Sudetenland from interwar Czechoslovakia.
When the German Führer broke the promise he had made at that conference to respect that country's future territorial integrity in March 1939, sending troops into Prague, its capital, breaking off Slovakia as a German client state, and absorbing the rest of the country as the "Protectorate of Bohemia-Moravia", Britain and France tried to switch to a policy of deterrence. As Nazi attention turned towards resolving the "Polish Corridor question" during the summer of 1939, Britain and France committed themselves to an alliance with Poland, threatening Germany with a two-front war. For their part, the Germans assured themselves of the support of the USSR by signing a non-aggression pact with it in August, secretly dividing Eastern Europe into Nazi and Soviet spheres of influence. The stage was then set for the Danzig crisis to become the immediate trigger of the war in Europe, which began on September 1, 1939. Following the Fall of France in June 1940, the Vichy regime signed an armistice, which tempted the Empire of Japan to join the Axis powers and invade French Indochina to improve its military situation in its war with China. This provoked the then-neutral United States to respond with an embargo. The Japanese leadership, whose goal was Japanese domination of the Asia-Pacific, thought they had no option but to strike pre-emptively at the US Pacific fleet, which they did by attacking Pearl Harbor on December 7, 1941.
FREQUENTLY ASKED QUESTIONS - WIND POWER

How does a wind turbine make electricity?
The simplest way to think about this is to imagine that a wind turbine works in exactly the opposite way to a fan. Instead of using electricity to make wind, like a fan, turbines use the wind to make electricity. Almost all wind turbines producing electricity consist of rotor blades which rotate around a horizontal hub. The hub is connected to a generator, which is located inside the nacelle. The nacelle is the large part at the top of the tower where all the electrical components are located. The wind turns the blades, this spins the shaft, the shaft drives the generator, and this is where the electricity is made. A generator is a machine that produces electrical energy from mechanical energy, as opposed to an electric motor, which does the opposite!

How much of the time do wind turbines produce electricity?
A modern wind turbine produces electricity 70-85% of the time, but it generates different outputs depending on wind speed. Over the course of a year, it will generate about 30% of its theoretical maximum output. This is known as its load factor. The load factor of conventional power stations is on average 50%.

Example calculation for an 80 kW turbine: one year has (24 × 365) = 8,760 hours. If full power (80 kW) could be generated all the time, the theoretical yearly output would be 700,800 kWh. In practice this will be around 30% of that amount: 210,240 kWh.

Example calculation for a 250 kW turbine: one year has (24 × 365) = 8,760 hours. If full power (250 kW) could be generated all the time, the theoretical yearly output would be 2,190,000 kWh. In practice this will be around 30% of that amount: 657,000 kWh. (These example calculations are worked in a short code sketch at the end of this FAQ.)

What influences the output of the wind turbine?
The average wind speed: more wind means higher electricity production, and the average speed matters more than the speed at any given moment. The rotor diameter: the bigger the rotor diameter, the higher the output. The tower height: wind speeds increase at higher altitudes, so the higher the tower, the more wind the turbine catches. The turbine type, downwind or upwind: most wind turbines face into the wind; these are called "upwind turbines". Turbines that face the other direction are called "downwind turbines". A downwind turbine produces less energy and wears out faster than an upwind turbine, because the airflow around the tower introduces turbulence.

How safe is wind energy?
Wind energy is one of the safest energy technologies. No member of the public has ever been injured by wind energy or wind turbines anywhere in the world, despite the fact that there are now over 100,000 operational wind turbines.

Local law and subsidies
Many countries and local authorities support and stimulate renewable energy. Please contact us to check what kind of subsidies or other support you can get from them when switching to wind energy.

What advantages do I get if I place a wind turbine?
The most important reason: 1. Wind turbines pay for themselves in 3 to 7 years. After this period wind turbines generate electricity for many more years without significant costs. Contact us to make a calculation for you. On top of this there are various other advantages depending on the situation you are in: 2. A wind turbine may stabilize an unstable grid. 3. You decrease your diesel need and cost. 4. You decrease the diesel generator noise. 5. You decrease your electricity cost.
6. You produce green energy for a better environment. 7. You educate your community by setting an example of a better environment.

Do I need permission for placing a wind turbine?
Yes. Please contact us for details.

How much wind is needed to produce electricity?
Wind turbines will start to produce at 3 m/s; however, with an average wind speed of 5 m/s it becomes economically interesting to produce your own electricity with a wind turbine. Stronger wind produces more electricity.

How do I read the characteristics?
Nominal power is measured in kilowatts (kW) or megawatts (MW). Nominal power tells you the maximum amount of power that the turbine can produce. Production is measured in kilowatt hours (kWh) or megawatt hours (MWh). Production tells you how much electrical energy the turbine will generate during a period of time, often one year.

If I supply the overproduction to the grid, will I get paid for it?
This depends on your local regulations. In many countries you get paid for the kWh that you supply to the grid; please consult us to review the tariff options in your country.

What is the life expectancy of a wind turbine?
Most medium to large wind turbines have a life expectancy of 20 years.

My farm / factory uses a lot of electricity. Can I use wind energy?
Yes! As a matter of fact, if there is space to install a wind turbine and there is sufficient wind, then any farm or factory can use a wind turbine. For example: a poultry farmer has a house and a large poultry shed and consumes 120,000 kWh per year. There is sufficient wind available and the local regulations allow a wind turbine. In this case an 80 kW generator can easily produce 150,000 kWh per year, enough for the farmer's own electricity needs. At times when there is not enough wind, the farm will automatically draw electricity from the power grid; in that case you only pay the power company for the electricity you take from them, and the rest you get from your wind turbine. If you produce more electricity than you use (150,000 − 120,000 kWh = 30,000 kWh), you can sell the overproduction to the power company and they will pay you a certain amount per kWh delivered.
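The yearly-output arithmetic promised above can be captured in a few lines of Python. This is a sketch of the FAQ's own rule of thumb, not vendor software; the function name is our own and the default 30% load factor is taken from the examples above.

```python
# Expected annual production = nameplate power x hours per year x load factor.
HOURS_PER_YEAR = 24 * 365   # 8,760 hours

def yearly_output_kwh(nominal_kw: float, load_factor: float = 0.30) -> float:
    """Rough annual energy yield for a turbine of the given nameplate rating."""
    return nominal_kw * HOURS_PER_YEAR * load_factor

print(yearly_output_kwh(80))    # 210240.0 kWh -- the 80 kW example
print(yearly_output_kwh(250))   # 657000.0 kWh -- the 250 kW example
```

The same function gives a first-pass estimate for the sizing question in the farm example; local wind conditions then determine whether the effective load factor is higher or lower than 30%.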
by Staff Writers, Jena, Germany (SPX) Jul 10, 2012

Max Planck scientists have found that the olfactory system in hermit crabs is still underdeveloped in comparison to that of vinegar flies. While flies have a very sensitive sense of smell and are able to identify various odor molecules in the air, crabs recognize only a few odors, such as the smell of organic acids, amines, aldehydes, or seawater. Humidity significantly enhanced the electrical signals induced in their antennal neurons as well as the corresponding behavioral responses to the odorants. The olfactory sense of vinegar flies, on the other hand, was not influenced by the level of air moisture at all. Exploring the molecular biology of olfaction in land crabs and flies thus allows insights into the evolution of the olfactory sense during the transition from life in water to life on land.

Crabs and flies
Odor signals are important cues in the crabs' search for food. In order to detect odor molecules on land, outside the water, the sensory organs of arthropods had to adapt to the new, terrestrial environment. How did sensory perception evolve during the transition from sea to land? "The land hermit crab Coenobita clypeatus is an ideal study object to answer this question," says Bill Hansson, director of the Department of Evolutionary Neuroethology at the Max Planck Institute for Chemical Ecology in Jena, Germany. The animals live in humid regions close to the sea and regularly visit water sources. Females release the larvae into the sea, where they grow into young crabs. These young crabs look for empty snail shells and live on land, eating fruits and plants. This way of life suggests that the olfactory sense in crabs is still at an early stage of development.

Voltage and behavior
A striking feature of the subsequently performed bioassays was that the crabs' behavioral responses to odorants were more obvious and much faster at significantly increased humidity, presumably due to an enhanced electrical excitability of their antennal neurons. The EAG (electroantennogram) in fact showed a reaction at the neurons that was three to ten times stronger when active odors were applied at higher humidity. In contrast, antennal neurons of vinegar flies did not show any differences and responded evenly, independently of the degree of humidity.

Evolution of olfaction
In the water flea genome, no genetic information was found for so-called olfactory receptors, which are responsible for the highly sensitive olfactory system in insects such as vinegar flies. Although the receptor genes present in the hermit crab genome have not been elucidated yet, the scientists assume that olfaction in crabs is mediated by the original, evolutionarily older ionotropic receptors. It is generally believed that the ancestors of many insect species made the transition from the seas to the continents during much earlier geological eras and that insects have adapted their olfactory system to life on land very well. Terrestrial crustaceans, on the other hand, may be able to use their sense of smell on land thanks to a basic molecular "equipment", but their olfaction is still quite underdeveloped in comparison to insects. Therefore hermit crabs usually stay near the coast: not only because of the short way back to the sea where they reproduce, but also because their limited sense of smell does not allow them to orient themselves without problems in the dry air of the interior.
Dividing tasks among different individuals can be a more efficient way to get things done, whether you are an ant, a honeybee or a human. A new study published in the Proceedings of the National Academy of Sciences suggests that this efficiency may also explain a key transition in evolutionary history: from single-celled to multi-celled organisms. The scientists found that the cost of switching between different tasks gives rise to the evolution of division of labor in digital organisms. In human economies, these costs could be the mental shift or the travel time required to change from one activity to another.

According to Anna Dornhaus, an associate professor in the University of Arizona department of ecology and evolutionary biology who collaborated with researchers at the BEACON Center for the Study of Evolution in Action at Michigan State University in East Lansing, Mich., social insects are often thought to derive their evolutionary success from delegating tasks among highly specialized individuals, allowing the whole colony to be more competitive than groups lacking such organization. However, previous research in her lab involving more than 1,000 individually marked ants failed to support that assumption. Instead, it appears that the primary benefit of division of labor comes from avoiding the costs of task switching, a hypothesis proposed by economist Adam Smith in the 1700s but not tested in social insects until now.

Led by Heather Goldsby, now a postdoctoral researcher at the University of Washington, the team created a virtual ant colony of self-replicating computer programs and imposed a time cost on the digital organisms, which had to complete various computational tasks to reap rewards. "More complex tasks received more rewards," Goldsby said. "They evolved to perform these more efficiently by using the results of simpler tasks solved by neighboring organisms and sent to them in messages." In this way, the organisms were breaking the tasks down into smaller computational problems and dividing them up among each other.

"Our idea here was to emulate a system with relatively simple individual workers in a computer, to see if the division of labor that evolved in social insects would emerge even given just a few assumptions," Dornhaus said. "Indeed it turns out that gains in individual efficiency are not necessary for specialization to evolve."

What's more, the researchers discovered that even task allocation based on spatial position or communication signals, both strategies found in social insects, might evolve from a set of simple rules, such as, "If you find yourself in position X, do Y." "Task allocation based on spatial position means every job is located somewhere, and if I happen to be standing at that point, I do it," explained Dornhaus. "No one else will attempt to do it because they would bump into me. This ensures that not everyone tries to do the same thing, and if you have good 'coverage' of individuals, all tasks will get done."

The division of labor did not come about by bringing together individuals with different abilities: each member of a community was genetically identical, in the same way that all of the cells in a human body contain the same genetic material. Instead, the organisms had to have flexible behavior and a communication system that allowed them to coordinate tasks. The authors said the most surprising result was that the organisms evolved to become dependent on each other.
“The organisms started expecting each other to be there, and when we tested them in isolation, they could no longer make copies of themselves,” said Charles Ofria, an associate professor of computer science and engineering at Michigan State who co-authored the study. The team’s findings have major implications for understanding the transition from single-celled to multi-cellular life forms. “In embryonic development as in the evolution of multicellularity, the initially identical cells divide up the tasks such that some become skin, others become muscle, et cetera,” Dornhaus said. “Not having to switch between these jobs makes them more successful, even if they potentially could do all of them since they all have the same genetic potential. Our result means that multicellularity might have evolved even if cells did not individually become more efficient.”
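The core task-switching argument is easy to illustrate outside the study's digital-evolution platform. The following is a minimal toy sketch, not the model the researchers used: all parameters (time budget, task duration, switch cost) are invented, and both tasks are assumed equally timed and rewarded, whereas the study rewarded complex tasks more. It simply compares two generalists, who alternate between tasks A and B and pay a fixed cost at every switch, with two specialists who each stick to one task:

```python
# Toy illustration of why task-switching costs favor division of labor.
# Hypothetical parameters; not the model used in the PNAS study.

TIME_BUDGET = 100   # work time available to each individual
TASK_TIME = 2       # time to complete one unit of either task A or B
SWITCH_COST = 3     # extra time paid whenever an individual changes tasks

def generalist_output(time_budget):
    """Alternate A, B, A, B, ... paying the switch cost between tasks."""
    completed, t, current = 0, 0.0, None
    while True:
        cost = TASK_TIME + (SWITCH_COST if current is not None else 0)
        if t + cost > time_budget:
            return completed
        t += cost
        completed += 1
        current = "B" if current == "A" else "A"

def specialist_output(time_budget):
    """Do the same task repeatedly; no switching cost is ever paid."""
    return int(time_budget // TASK_TIME)

pair_generalists = 2 * generalist_output(TIME_BUDGET)   # both alternate: 40 tasks
pair_specialists = 2 * specialist_output(TIME_BUDGET)   # one does A, one does B: 100 tasks
print(f"two generalists: {pair_generalists} tasks")
print(f"two specialists: {pair_specialists} tasks")
```

Under these made-up numbers the specialist pair completes 100 tasks to the generalists' 40, with no difference in individual ability, which mirrors the paper's point that specialization can pay off even when individuals gain no per-task efficiency.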
International Holocaust Remembrance Day January 27, 2018 marks the 73rd anniversary of the liberation of Auschwitz in 1945 and is memorialized as International Holocaust Remembrance Day. International Holocaust Remembrance Day commemorates the survivors and victims of the Holocaust, the mass genocide that took the lives of an estimated 6 million Jews and millions of others during World War II. On the 29th, the Consulate General of Italy will pay homage to those who were affected by the Holocaust. There will be a reading of the names of Italian Jews who were deported from Italy and its territories. At this ceremony, the importance of New York City will also be recognized: it became a safe haven for Jews seeking refuge from their home countries. New York City gave them freedoms and the opportunity to start a new life safe from Nazi-Fascist persecution. In addition, the Consulate General of Italy, the Italian Cultural Institute, Centro Primo Levi, Casa Italiana Zerilli-Marimò at NYU, the Italian Academy at Columbia University, the Calandra Institute at CUNY, and La Scuola d'Italia Guglielmo Marconi will host a series of educational programs to educate young people about topics like totalitarianism, racism, and antisemitism, ideologies that led to the Holocaust. It is important to remember, and to learn from, the hatred, racism, and discrimination of the past to ensure that history does not repeat itself.
Degenerate matter is a bizarre form of exotic matter created in the cores of massive stars, where atoms or even subatomic particles are packed so closely that the primary source of pressure is no longer thermal but quantum - dictated by limitations set by the Pauli exclusion principle, which asserts that no two fermions (such as electrons or neutrons) can occupy the same quantum state. It is also useful in some circumstances to treat conduction electrons in metals as degenerate matter, because of their high density. Degenerate matter, specifically metallic hydrogen, has been created in a laboratory before, using pressures over a million atmospheres (>100 GPa). Degenerate matter is unique in that its pressure is only partially dictated by temperature, and pressure would in fact remain even if the temperature of the matter were decreased to absolute zero. This is quite different from the ideal gases we learn about in physics class, where temperature and pressure/volume are closely related. In order of increasing density, common forms of degenerate matter include metallic hydrogen, present in large amounts in the cores of massive planets such as Jupiter and Saturn; white dwarf matter, found in white dwarfs, which our Sun will one day become; neutronium, found in neutron stars, the endpoint of stellar evolution for stars from 1.35 to about 2.1 solar masses; and strange matter, or quark matter, postulated to exist within very massive stars. In white dwarfs, the material is referred to as electron-degenerate matter, because there is not sufficient energy to collapse the electrons into atomic nuclei and produce neutronium. In neutron stars, the material is called neutron-degenerate matter, because the pressure is so great that electrons fuse with protons to create matter consisting of nothing but neutrons. Under normal conditions, a free neutron decays into a proton, an electron, and an antineutrino in about 15 minutes, but under the tremendous pressure of a neutron star, neutron-only matter is stable. The most extreme form of degenerate matter, strange matter, is thought to exist in quark stars, stars with a mass somewhere between neutron stars and black holes, in which the constituent quarks of neutrons decouple and a quark soup is created. Quark stars are a possible candidate for the mysterious dark matter that makes up most of the mass of observed galaxies.
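The temperature independence described above can be made concrete with the standard textbook expression for the pressure of a non-relativistic, fully degenerate electron gas (a general result, not a formula from this article):

$$P_{\mathrm{deg}} \;=\; \frac{(3\pi^2)^{2/3}}{5}\,\frac{\hbar^2}{m_e}\,n_e^{5/3}, \qquad \text{compared with} \qquad P_{\mathrm{ideal}} \;=\; n\,k_B\,T.$$

Here $n_e$ is the electron number density, $m_e$ the electron mass, and $\hbar$ the reduced Planck constant. The degeneracy pressure depends only on density, not on temperature, which is why it persists even as $T \to 0$; the ideal-gas pressure, by contrast, vanishes at absolute zero.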
Aristotle explains that poetry can have errors in two ways. 1) Explain these two types of errors. 2) Provide an example of each type of error. Aristotle teaches through "Poetics" that poets may use words metaphorically, and he explains that this is done in many ways. 1) Explain various ways in which a character uses words metaphorically. 2) Differentiate between various forms of metaphors. 3) Provide examples of how Aristotle says Homer's "Iliad" uses metaphors. Like any art, poetry is a kind of imitation, as Aristotle asserts. Aristotle differentiates arts from one another in three ways: medium, the objects, and the manner of imitation. 1) Define the three ways in which art is differentiated. 2) Compare poetry to prose. 3) Give an example of how poets may differ in how they represent the objects. Choose one of the three options to write about: 1) Explain why men are drawn to write poetry based on Aristotle's "Poetics".
The deep-sea sponge, Forcepia species, produces a series of compounds called lasonolides, which exhibit promising biomedical properties for the treatment of pancreatic cancer. In 2003, researchers from Harbor Branch Oceanographic Institution traveled to the Gulf of Mexico to explore marine habitats in search of organisms possessing bioactive compounds with potential as pharmaceutical products or biomedical research tools. Harbor Branch Oceanographic Institution has had an ongoing drug discovery research program since 1984. To date, the group has found over 250 compounds exhibiting anti-inflammatory, anti-cancer or anti-microbial effects. People who are not familiar with ocean exploration often believe that the primary reason for investigating deep-sea ecosystems is little more than scientific curiosity. This perspective quickly changes, however, when they learn that these ecosystems are the source of promising new drugs for treating some of the most deadly human diseases. Most drugs in use today come from nature. Aspirin, for example, was first isolated from the willow tree. Penicillin was discovered from common bread mold. To date, almost all of the drugs derived from natural sources come from land-dwelling organisms. But recently, systematic searches for new drugs have shown that marine invertebrates produce more antibiotic, anti-cancer, and anti-inflammatory substances than any group of terrestrial organisms. Particularly promising invertebrate groups include sponges, tunicates, ascidians, bryozoans, octocorals, and some molluscs, annelids, and echinoderms. Marine animals produce a range of chemicals that may be useful in treating human diseases. A striking feature of the organisms that produce them is that nearly all (the cone snail being a notable exception) are sessile (non-moving) invertebrates. To date, this has been true of most marine invertebrates that produce pharmacologically active substances. Several reasons have been suggested to explain why sessile marine animals are particularly productive of potent chemicals. One possibility is that they use these chemicals to repel predators, because they are basically "sitting ducks." Another possibility is that since many of these species are filter feeders, they may use powerful chemicals to repel parasites or as antibiotics against disease-causing organisms. Competition for space may explain why some of these invertebrates produce anti-cancer agents. If two species are competing for the same piece of bottom space, it would be helpful to produce a substance that would attack rapidly dividing cells of the competing organism. Since cancer cells often divide more rapidly than normal cells, the same substance might have anti-cancer properties.
The South China Sea is part of the Pacific Ocean, partially enclosed from the open ocean by islands, archipelagos, and peninsulas. Its waters stretch from the Karimata Strait, which connects the South China Sea to the Java Sea, and the Malacca Strait, running from the Malay Peninsula to the island of Sumatra in Indonesia, all the way to the Taiwan Strait, which separates Taiwan from the People's Republic of China. The sea is located to the south of China, west of the Philippines, north of the Bangka-Belitung Islands and Borneo, and east of Vietnam and Cambodia. Nine major rivers flow into the sea, including the Min, Mekong, Pearl, Red, Pampanga, Pahang, Pasig, and Jiulong Rivers. Several natural resources are found in the sea, such as crude oil and natural gas. It is an important ecosystem with diverse marine life, despite the depletion of fish due to excessive fishing. 5. Historical Background of the Disputes - In the early parts of the 20th century, the islands within the sea had not been occupied, but after the end of the Second World War, in 1946, China started to establish temporary settlements on Woody Island. The following year, French and Vietnamese forces attempted to occupy the same island but instead settled on nearby Pattle Island. At the time, interest in the sea was limited and there was no rush to claim it. However, between 1955 and 1956, interest accelerated among the neighboring nations. China and Taiwan were the first to establish permanent settlements on the major islands in the sea. The rush to occupy the islands cooled off until the early 1970s, when oil was suspected to lie below the sea. The Philippines became the first country to begin oil exploration in this oil-rich area, while China staged an invasion to occupy other islands. China complained of the Philippine exploration, which later led to its halting. Disputes over both island and maritime claims arose because most of the world's trade passes through this particular sea. The sovereign states interested in controlling the sea want the rights to fishing areas, exploration, mining, and exploitation of crude oil and natural gas. 4. Multiple Countries, Disputes, and Incentives - The countries of China, Taiwan, Brunei, the Philippines, Indonesia, Malaysia, and Vietnam all desire to have control over different parts of the South China Sea and its maritime routes, and therefore disputes involving maritime boundaries and the possession of islands therein have arisen. The first notable dispute was over the nine-dash line area, which was claimed by the Republic of China (Taiwan) and was later claimed by the People's Republic of China, Brunei, Indonesia, Malaysia, the Philippines, Taiwan, and Vietnam. Dialogues among these nations have been conducted by Singapore, which has played a neutral role. The second dispute, between the People's Republic of China, Taiwan, Malaysia, and Vietnam, concerned the maritime boundary along the Vietnamese coast. Another dispute arose between Brunei, China, Taiwan, the Philippines, Malaysia and Vietnam over the maritime border north of Borneo. Some islands in the sea, such as the Spratly Islands, have become other centerpieces of conflict between Brunei, China, Malaysia, the Philippines, Taiwan, and Vietnam. The fifth dispute arose between Cambodia, China, Indonesia, Taiwan, Malaysia, and Vietnam over the maritime boundary north of the Natuna Islands.
On top of that, the maritime boundary off the coasts of Palawan and Luzon was the center of disagreements between Brunei, China, Taiwan, Malaysia, the Philippines, and Vietnam. Another dispute, between Indonesia, Malaysia, and the Philippines, arose over the maritime boundary, land territory, and the islands of Sabah. The last dispute, between Singapore and Malaysia over the maritime boundary and islands of Pedra Branca, east of Singapore, was resolved amicably between the two countries. 3. Petroleum Reserves, Trade and Commerce, and Strategic Military Presence - Research conducted in the South China Sea has revealed the presence of over 7.7 billion barrels of known oil reserves and, further fueling the territorial disputes, the entire sea has been estimated to contain up to 28 billion barrels of oil cumulatively. Natural gas, another important resource, has been estimated at a volume of up to 266 trillion cubic feet under the sea. Together with fishing and the exploitation of the sea's natural resources, international trade passing through the region adds up to some 5 trillion US dollars, making it an important region for both trade and commerce. The sea is the second most used shipping lane in the world. It has been estimated that more than 10 million barrels of crude oil are shipped through the Strait of Malacca and the Sunda Strait. The People's Republic of China has expanded its military activities in the South China Sea by creating islets from the reefs. These islets have been used for military purposes, such as the deployment of missiles and of aircraft used for conducting drills in the region. In response to the military activities of the People's Liberation Army Navy, India, the Philippines, and Vietnam have joined the United States in conducting patrols as well. 2. Notable Maneuvers to Expand Territory - The contention over the rights to exploit the oil and natural gas found in the South China Sea has led to the growth of China's military presence in the region. China has sought to modernize its military, particularly its naval capabilities, to enable it to reinforce its claimed jurisdiction and sovereignty over the sea. With various contingencies rising among the nations with interests in the sea, China's aim has been to ensure that, in a time of conflict, United States military forces would be at risk and Chinese control would not be overthrown. 1. Current Situation - Due to the disputes that have arisen over the years, the Philippines launched an arbitration case against the People's Republic of China in January of 2013. The arbitration proceedings examined China's claim that it had historically exercised rights over the nine-dash line area. On July 12th, 2016, the arbitrators concluded that there was no substantial evidence for China's historic claims. Both Taiwan and China rejected the ruling, claiming that it was not based on reliable facts and evidence. The United States, on the other hand, has also increased its military presence in the surrounding areas to reassure its partners of its commitment to their security against Chinese forces.
Thermometers measure temperature. "Thermo" means heat and "meter" means to measure. You can use a thermometer to measure the temperature of many things, including the temperature of the air, the temperature of our bodies, and the temperature of food when we cook. Temperature is a measure of the hotness or coldness of an object. Thermometers usually have a bulb at the base and a long glass tube that extends to the top. The glass tube of a thermometer is filled with alcohol or mercury. Both mercury and alcohol grow bigger when heated and smaller when cooled. Inside the glass tube of a thermometer, the liquid has no place to go but up when the temperature is hot and down when the temperature is cold. Numbers placed alongside the glass tube mark the temperature when the line is at that point. In a mercury bulb thermometer, for example, the temperature is read from the number next to the thin line of liquid that goes partly up the tube. Other types of thermometers include dial thermometers and electronic thermometers. Electronic thermometers measure temperature much more quickly than mercury and dial thermometers. Thermometers can measure temperature in Fahrenheit, Celsius, or another scale called Kelvin. Fahrenheit is used mostly in the United States, and most of the rest of the world uses Celsius. Kelvin is used by some scientists.
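For readers who want to move between the three scales, the standard conversion formulas (general knowledge, not specific to this article) are easy to express in a short Python sketch:

```python
# Standard temperature-scale conversions.

def f_to_c(f):          # Fahrenheit -> Celsius
    return (f - 32.0) * 5.0 / 9.0

def c_to_k(c):          # Celsius -> Kelvin
    return c + 273.15

# Example: water boils at 212 degrees Fahrenheit.
c = f_to_c(212.0)       # 100.0 C
k = c_to_k(c)           # 373.15 K
print(f"{c:.2f} C = {k:.2f} K")
```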
The hip joint is a ball and socket joint connecting the upper thigh bone (femoral head) with the pelvic bone (acetabulum). It is one of the largest and most stable joints in the body. The surfaces of the bones are covered by a gristle-like surface called articular cartilage. It is smooth and slippery. This allows for a pain-free range of movement, as well as cushioning of the underlying bone. The joint is covered by a thin lining of soft tissue called the synovium, which produces a small amount of fluid in a healthy joint. Osteoarthritis is a degenerative condition where the gristle, or articular surface, wears away. The surfaces become deformed, rough and irregular, eventually leading to bone rubbing on bone. The synovium gradually becomes inflamed and thickened with time, and this, in addition to the irregular surfaces, causes significant loss of joint movement. Osteoarthritis of the hip Osteoarthritis of the hip is a very common condition in middle-aged and older people. As the hip joint is a weight-bearing joint, osteoarthritis is much more common in the hip than it is in the joints of the upper limb. It usually presents with pain and stiffness, particularly during activity. Causes of osteoarthritis in the hip It most commonly occurs in middle-aged and older patients, and is degenerative in nature. Some families have a strong history of hip osteoarthritis. - Hip dysplasia, or underdevelopment of the hip joint, is a significant cause of hip osteoarthritis. It is more commonly seen in female patients. - Femoroacetabular impingement is another causative factor. It is more commonly found in younger to middle-aged male patients who have played a high level of fast running sports such as football. - Obesity is a significant factor in the development of osteoarthritis: obese patients have twice the incidence of this condition compared with those of normal weight. - Childhood hip conditions can also lead to osteoarthritis. These include slipped upper femoral epiphysis and Perthes disease. - Previous injuries to the hip, such as fractures of the neck of femur or acetabulum, are a further cause. The most common symptom is pain. The pain may be in a number of areas but is usually in the groin or deep in the hip region. It may also occur in the side of the hip and in the buttock. In some patients the pain will refer to the thigh, knee, and even the shin. It gradually progresses over a number of years but sometimes may deteriorate quite quickly. It is often associated with stiffness, which causes difficulty in changing position and putting on shoes and socks. Some patients complain of locking or catching in the hip. The surgeon will observe the patient walking as well as check for any leg length inequality. The range of movement of the hip is usually diminished and there may be some pain or discomfort on moving the joint. In most cases x-rays of the hip are sufficient to make the diagnosis. Only occasionally would an MRI be helpful, where the diagnosis is not clear-cut on x-rays. Modification of lifestyle can be of benefit in reducing the activities that aggravate hip pain. It is important however to remain as active as possible. It is recommended that patients switch from high impact activities such as running or tennis to lower impact activities such as swimming, water aerobics or cycling. Weight reduction can also be helpful in reducing symptoms. Maintaining an exercise regime will also assist in reducing pain.
If the condition is particularly disabling, then using a walking stick can be of benefit; it can be of some help but probably offers only minor gains in improving symptoms. Medications that may be helpful include Panadol Osteo, taken on a regular basis, as well as non-steroidal anti-inflammatory drugs such as Nurofen or Voltaren. Sometimes a steroid injection into the hip can be helpful in the short to medium term. However, this injection is best avoided if the patient is contemplating hip replacement surgery in the near future, as it may potentially increase the risk of infection if done within 3 months prior to surgery. When osteoarthritis is advanced, hip arthroscopy will be of no benefit. There are, however, some patients who present with quite significant femoroacetabular impingement and mild arthritis; hip arthroscopy can often provide symptomatic relief in these cases. A further procedure is performed for people with hip dysplasia. It can be performed when the arthritis is mild to moderate. Once the arthritic condition is more advanced, it is preferable to simply perform a hip replacement. In a total hip replacement, the worn-out hip joint is replaced. Prosthetic devices include a femoral stem and ball head, and an acetabular cup and liner. This is an excellent procedure when conservative treatment has failed or when the pain and stiffness are quite disabling. These options will be discussed with patients at the time of consultation. For more information view Total Hip Replacement.
The Goldilocks Theory Just as Goldilocks found the porridge that was just right, the Earth seems to be just right for living creatures. The Earth seems to be the perfect distance from the sun for lots of water. Venus is too close to the sun, and too hot for flowing water on its surface. In fact, it is so hot that, like a sauna, all the water has been evaporated into the atmosphere, and Venus has a thick and heavy atmosphere. Mars is too far from the sun, and too cold for flowing water on its surface. Mars also has no continental drift, so particles of the atmosphere which become trapped within the ground stay trapped within the ground. Thus over time the atmosphere of Mars has become thin, and all the water is frozen into the ground. The temperature of Earth is just right for flowing water on the surface, and for the rock which allows for continental drift. With continental drift, particles of the atmosphere which become trapped within the ground are brought back to the atmosphere through eruptions of volcanoes. These conditions cause refreshment of the planet's atmosphere and help keep the temperature just right. These conditions are just right for abundant life.
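The "just right" distance can be made roughly quantitative with the standard blackbody equilibrium-temperature formula (a textbook result, not from this article). The sketch below uses approximate published values for the solar luminosity, orbital distances, and Bond albedos, and deliberately ignores greenhouse warming, which is exactly what lifts Venus's real surface temperature far above this estimate and Earth's to a comfortable average near 288 K:

```python
import math

# Blackbody equilibrium temperature: T = (L(1-A) / (16*pi*sigma*d^2))^(1/4).
# Approximate constants; albedos are rough Bond albedo values.
L_SUN = 3.828e26        # solar luminosity, W
SIGMA = 5.67e-8         # Stefan-Boltzmann constant, W m^-2 K^-4
AU = 1.496e11           # astronomical unit, m

def t_eq(d_au, albedo):
    d = d_au * AU
    return (L_SUN * (1.0 - albedo) / (16.0 * math.pi * SIGMA * d * d)) ** 0.25

for name, d_au, a in [("Venus", 0.72, 0.75), ("Earth", 1.00, 0.31), ("Mars", 1.52, 0.25)]:
    print(f"{name}: ~{t_eq(d_au, a):.0f} K")   # roughly 232, 254, 210 K
```

Note that Venus's high-albedo clouds actually give it a lower equilibrium temperature than Earth's; it is the thick atmosphere described above, not raw sunlight alone, that makes its surface so hot.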
According to reporting by the Centers for Disease Control in 2014, most youth involved in bullying don't display suicidal behavior. They do, however, concede that teenage suicide and bullying are rising public health issues, and that there is a link between bullying and suicide-related behavior. Unfortunately, teenage suicide rates due to bullying are not available because it's difficult to attribute teenage suicide singularly to bullying. There are typically other factors which affect a person's suicidal tendency. - Emotional instability - Exposure to violence in the home, on television, or in video games - Struggles with family dynamics - Social or romantic problems - Dislike of school and feeling of non-belonging - Substance abuse, including drugs and alcohol - Peer differences, including physical and intellectual challenges - Feeling of having no one to turn to for guidance with the above problems The Centers for Disease Control and Prevention (CDC) say that bullying is aggressive behavior toward one person by another. Sibling rivalry isn't usually grouped into the "bullying" category. Conflict between dating partners isn't considered bullying either. The CDC indicates that the aggressor displays a force of power over the victim and that the behavior is usually repetitive. Bullying is harmful or distressing and can cause long-term damage, either physically or emotionally. Physical, verbal and emotional bullying have been around for centuries, but new forms are developing as our culture evolves. Physical bullying involves aggressive acts by an instigator against another person who is perceived as weak. Kicking and punching are common types of physical bullying. Verbal bullying is the use of language to belittle another person. Teasing and antagonizing are forms of verbal bullying. Sarcasm is prominent in verbal bullying. The bullying is meant to hurt another's feelings in order to cause humiliation. Emotional bullying is the deliberate alienation of a person from a group. For example, a group of teenagers excludes a single person from sitting at their lunch table. That teen goes to table after table and is repeatedly rejected. This often leads to loneliness and isolation and can cause depression. As the technological world continues to develop, cyber-bullying has emerged and continues to evolve. It's common for teens to carry and use smartphones. Most have access to a computer and to a plethora of social media outlets which are great hosts for cyber-bullying. It combines verbal and emotional bullying, but reaches a larger audience and lives in a medium that isn't going away soon. Teacher bullying is another form of bullying that is often overlooked. This is educational bullying. It's when a student gets singled out and chastised by a teacher. Perhaps the student is late to class often, or he speaks out of turn, doesn't use manners, forgets his homework or pens and paper. Maybe the teacher doesn't like the way the student dresses, or maybe she just doesn't like his name. The teacher outwardly acknowledges the student's weaknesses or learning differences in front of the other students. It's a show of power: "This is MY classroom!" The student doesn't report it to his parents or to the administration, primarily because he doesn't feel he has support at home or from the school staff. The student fears the teacher will retaliate by giving an unfair grade. This is defeating to the student, and the teacher wins.
For more information on bullying, look into the site StopBullying.gov for access to U.S. Government information on bullying topics. In 2009 the CDC reported that suicide was the 10th leading cause of death among persons over age 10 in the United States (36,891 deaths). Among youth between 15 and 24, suicide was the third leading cause of death. Causes of Teenage Suicide Rarely is a single isolated event the primary cause of suicide. An event such as the loss of a loved one or bullying by peers may trigger suicidal behavior, but there are typically other underlying factors which increase an individual's suicidal tendencies. Underlying factors may include mood disorders, substance abuse, chronic illness, and membership in LGBT populations. Bullying is one of those isolated events being studied as a trigger toward suicidal behavior. It's currently considered a major public health concern as a result of the number of school shootings in recent years thought to be directly related to bullying. The first National Strategy for Suicide Prevention was issued in 2001 by U.S. Surgeon General David Satcher. In 2012 U.S. Surgeon General Regina M. Benjamin, MD, MBA, also co-leader of the National Strategy for Suicide Prevention Task Force, updated the initial strategy because of the alarming increase in suicide rates in the United States. Several achievements toward suicide prevention were made as a result of the National Strategies: the Garrett Lee Smith Memorial Act, the creation of the National Suicide Prevention Lifeline (800-273-TALK/8255), and the establishment of the Suicide Prevention Resource Center. The National Strategy endeavors to continue developments toward the prevention of suicide, and will continue to look at the relationship between suicide and mental illness, trauma, violence and substance abuse. Key Facts About Suicide: - Suicide is the 10th leading cause of death in the US, higher than the homicide rate. - Over 33,000 Americans die from suicide each year. - Over a million people in the past year have attempted suicide. - Over 15 percent of U.S. high school students gave suicide serious consideration. - Nearly 8 percent of U.S. high school students have attempted suicide in the past year. What Can You Do? Help Prevent Bullying Be able to recognize bullying. Talk to your teen. Often they're embarrassed or ashamed because bullying is humiliating. Talking about it to a parent or teacher is like re-living the horror. Recognize and be open to the signs of a teen who's being bullied. Talk with school administrators, teachers, and counselors about your teen's attitude toward and interaction with peers. Remember that bullies will most often bully when they feel they are the person with authority. They won't bully others when parents or school administrators are around. Asking teachers or school administrators outright if they're witnessing bullying will be ineffective because they typically don't see it. Learn to ask the right people the right questions. Most importantly, let your teen know you're on their side and you'll stand behind them when they have social issues. Validate their problems as real, no matter how trivial they seem to you. Educate your teen about bullying. Explain what it is and what to do if they experience it firsthand or if they see someone else being bullied. Teach your teen not to be the bully. Find ways early on to stress the impact of bullying on other people. Understand and recognize some of the common reasons that teens commit suicide.
What many adults may see as trivial may be the trigger that pushes a teen to suicidal behavior. These events include major disappointments such as rejection, failing a major exam, or the loss of a family member, boyfriend, or girlfriend. These can be crippling for some teens, who may not see the problems as temporary; they fail to see that suicide is a permanent response to a temporary problem. Warning signs include: - Changes in personality and mood, such as becoming hostile or overly sad - Withdrawal from activities they typically enjoy - Weight gain or loss, appetite gain or loss - Sleep issues, too much or too little - Lack of interest in personal hygiene and appearance - Loss of focus and decline in academic performance - Substance use or abuse It's important to note that a teen suffering from any of these symptoms is not necessarily suicidal. Additionally, many of these signs aren't easily recognizable. One of the biggest and most obvious signs of suicidal behavior is a teen giving away or throwing away personal effects, especially things they cherish. They're saying goodbye. Open communication with anyone who is suspected of suicidal tendencies or who is exhibiting suicidal behavior. Sometimes just knowing someone cares, or knowing that someone has noticed them or their problems, helps them turn things around. Any talk of suicide or death should be taken very seriously. Many who commit suicide talked about it beforehand, but fewer than half had a medical diagnosis of mental illness. One-third of teens who commit suicide have made previous attempts and should therefore be watched closely.
saving the world's most endangered antelope Promotes the conservation of the hirola antelope and its fragile habitat in partnership with communities in eastern Kenya. Hirola are grassland specialists and require open habitat for survival. Much of the hirola's historical range consisted of semi-arid grasslands inhabited by nomadic people and wildlife. However, colonial policies led to a shift from nomadism to sedentary pastoralism by encouraging settlements around boreholes and other fixed infrastructure. A recent analysis of long-term satellite imagery across the hirola's native range revealed a nearly 300% increase in tree cover in the last 27 years. The imagery, spanning 1985 to 2012, shows a stark decrease in grassland across the hirola's historic range, which is crossed by the narrow band of the Tana River (the longest river in Kenya). The increase in tree cover poses one of the greatest threats to the survival of hirola, through food limitation and predation risk. For instance, our study shows hirola perceive wooded areas as riskier than open spaces; if these trends are not reversed, the obstacles to hirola recovery will become insurmountable. Our restoration effort aims at restoring grasslands in areas where hirola currently persist, as well as at future reintroduction sites, through bush clearing, grass reseeding and fertilization, which we anticipate will have the knock-on benefit of improving local livelihoods. To restore grassland habitats, we are implementing the following practices: 1) the physical cutting, uprooting or breaking of branches in an attempt to restore grassland at scales of hundreds of hectares in prioritized areas within the hirola range, 2) the planting of native grass seeds alongside fertilizer (manure) at scales of hundreds of hectares, and 3) community-based protection of elephants (in the form of anti-poaching squads and enhanced communication between villages) to encourage elephant herds to reside on community lands. In addition to hirola, we suspect that a suite of other wild ungulates will benefit from these attempts to improve the range. The Hirola Conservation Program is taking the lead in reinstating and restoring Arawale National Reserve for hirola conservation in collaboration with the Garissa County Government. The reserve was established in 1973, but operations were short-lived due to misunderstandings between local communities and authorities. Consequently, the protection and management of the reserve had halted by the late 1980s. Subsequently, the hirola experienced a 95% population decline and was listed as a Critically Endangered species in 1996. Until we started our work in 2012, Arawale lacked formal protection and management. Read more about our work in Arawale here.
data display: an output device for computer data (usually the results of processing of input data) that presents the information in a form suitable for visual perception and decision-making by a person (for example, in the form of alphanumeric text or a map, table, curve, circuit, or blueprint). Displays are widely used as terminals for digital computers in data transmission systems, in diagnostic systems and teaching machines, in research and the design of many technical devices, in automatic control and design systems, and in signaling and inspection systems (and in similar "man-machine" systems). A distinction is made between individual and collective displays. The main element of individual displays (Figure 1) is a cathode-ray tube (CRT). The coordinates of characters that are frequently reproduced (letters, numerals, signs, special symbols, and so on) are stored in an auxiliary memory; the central processor of the computer system merely issues an address for such characters, and they are then reproduced automatically on the screen. This kind of display can reproduce a page of book text on the screen in 0.02-0.05 sec. To prevent the image on the screen from flickering, it is reproduced repeatedly (regenerated) at a rate of 20-50 times per second. Exchange of information with the central processor takes place only when it is necessary to make a change in the image or to transmit the operator's commands to the processor. In such displays the operator can use a light pencil to erase characters, lines, and portions of a text, to replace elements in circuits and blueprints, to rotate the image in the plane of the screen, and to change the scale of the image. In addition to ordinary CRTs, data displays use numerical indicator tubes, multiple-beam tubes for synchronous presentation of several rapidly changing variables, tubes with an optical port (for combining a composite background, such as a map of a locality or a blueprint, from a slide projector with the cathode-ray image), and color television tubes. The main drawback of CRT displays is the difficulty of matching the displays with digital computers, which requires extra equipment. "Plasma panels" are more convenient from the standpoint of compatibility with a digital computer. Such a panel is composed of three glass plates. The middle plate has holes (cells) filled with a mixture of neon and nitrogen, and the outer plates have select lines (parallel semitransparent strips of gold), which are arranged in such a way that each hole lies between two perpendicular lines. When a control voltage (signal) is fed to the lines, the gas in the cells glows, and the glow persists after the control signal is cut off (the discharge is supported by a DC voltage). To clear an element, a signal of opposite polarity is fed to the pair of lines. Matrix fluorescent screens are arranged in a similar manner (the middle plate is coated with spots of phosphor with an area of about 0.25 sq mm). Screens have been developed that use light-emitting diodes (LEDs) and liquid crystals. LED screens are based on the phenomenon of luminescence of certain semiconductors (for example, gallium phosphide and arsenide) when a voltage is applied to them. Liquid-crystal screens are based on the change in the position of the molecules in certain artificial organic substances under the influence of an electric field; this causes a change in the transparency or color of the corresponding portions of the screen.
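The character-regeneration scheme described above can be sketched in modern terms. The following is a purely hypothetical Python sketch, not anything from the article: a display list of character codes and coordinates stands in for the auxiliary memory, and a loop redraws ("regenerates") the frame at a fixed rate so the image does not flicker, while the processor touches the list only when the picture must change:

```python
import time

# Hypothetical sketch of a refresh-type character display.
# The "auxiliary memory" is a display list of (x, y, character) entries;
# the refresh loop redraws it repeatedly, independent of the CPU.
display_list = [(0, 0, "A"), (1, 0, "B"), (0, 1, "7")]

REFRESH_HZ = 30     # within the article's 20-50 regenerations per second

def draw_frame(entries):
    # Stand-in for deflecting the beam to each coordinate and
    # stroking the stored character shape there.
    for x, y, ch in entries:
        pass  # draw ch at (x, y)

def run(frames):
    for _ in range(frames):
        draw_frame(display_list)        # regenerate the whole image
        time.sleep(1.0 / REFRESH_HZ)    # hold until the next refresh slot

run(frames=3)   # the CPU only edits display_list when content changes
```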
In a collective display, a primary image produced on an intermediate carrier (the phosphor of a cathode-ray tube) is enlarged and projected onto a screen. The resolution and brightness provided by such displays are sufficient only for relatively small screens (with an area of the order of 2.5 sq m); on larger screens these parameters deteriorate. By replacing the phosphor with a thin oil film that is maintained under a constant potential, a film light modulator is produced (Figure 2). When a cathode ray acts on the film, a charge develops that deforms the film's surface, producing a raised primary image. Light from a powerful lamp is directed at the primary image by a reflector; when the light is reflected from the irregularities of the oil film, it carries an image of the relief, which is then focused by an objective lens and projected onto a screen. Film-type light modulators provide a high-quality color image on large screens (with areas of up to 200 sq m). Other promising areas are the use of thermoplastic light modulators (similar in design to the film type but using as the primary carrier a material that is first heated to a plastic state) and laser displays (similar to CRT displays, but with a color image transmitted to a large screen by three laser beams of different colors). The data display devices discussed above provide two-dimensional images. However, in a number of cases - for example, in aircraft landing systems and in the design of automobile bodies - a three-dimensional presentation is preferable. A CRT display can reproduce three-dimensional images in an axonometric or other projection by means of a number of attachments; the lines an observer would not see are erased, and the image can be rotated so that the operator can view it from different sides. No less promising is the use of three-dimensional displays based on holography. New possibilities are opened up by volumetric displays, in which the image is created in a volume filled with gas (Figure 3) rather than in a plane. Two beams from external light sources are directed into a gaseous medium; each changes the energy state of the gas molecules in such a way that the gas fluoresces at the point where the beams intersect. With rapid motion of the beams, a luminous trail appears that, upon multiple reproduction, is perceived by an observer as a complete image. REFERENCES Poole, H. Osnovnye metody i sistemy indikatsii. Leningrad, 1969. (Translated from English.) Venda, V. F. Sredstva otobrazheniia informatsii. Moscow, 1969. Temnikov, F. E., V. A. Afonin, and V. I. Dmitriev. Teoreticheskie osnovy informatsionnoi tekhniki. Moscow, 1971. Chachko, A. G. Chelovek za pul'tom. Moscow, 1974. Davis, S. Computer Data Displays. Englewood Cliffs, N.J., 1969. A. G. CHACHKO
Pediatric dentistry (formerly pedodontics) primarily focuses on children from birth through adolescence. A general dentist can also see children and perform routine restorative needs, but if a child is more nervous or needs extensive work, a referral to a pediatric dentist may be necessary. One of the most important components of pediatric dentistry is child psychology. Pediatric dentists are trained to create a friendly, fun, social atmosphere for visiting children, and always avoid threatening words like "drill," "needle," and "injection." Dental phobias beginning in childhood often continue into adulthood, so it is of paramount importance that children have positive experiences. Education - Pediatric dentists educate the child using models, computer technology, and child-friendly terminology, emphasizing the importance of keeping teeth strong and healthy. In addition, they advise parents on disease prevention, trauma prevention, good eating habits, and other aspects of the home hygiene routine. Monitoring growth - By continuously tracking growth and development, pediatric dentists are able to anticipate dental issues and quickly intervene before they worsen. Also, working towards earlier corrective treatment preserves the child's self-esteem and fosters a more positive self-image. Prevention - Helping parents and children establish sound eating and oral care habits reduces the chances of later tooth decay. In addition to providing check-ups and dental cleanings, pediatric dentists are also able to apply dental sealants and topical fluoride to young teeth, advise parents on thumb-sucking, pacifier, and smoking cessation, and provide good demonstrations of brushing and flossing. Intervention - In some cases, pediatric dentists may discuss the possibility of early oral treatments with parents. In the case of oral injury, malocclusion (bad bite), or bruxism (grinding), space maintainers may be fitted, a nighttime mouth guard may be recommended, or reconstructive surgery may be scheduled.
by Jennifer Chu for MIT News Boston MA (SPX) Mar 10, 2014 The Earth's magnetic field, or magnetosphere, stretches from the planet's core out into space, where it meets the solar wind, a stream of charged particles emitted by the sun. For the most part, the magnetosphere acts as a shield to protect the Earth from this high-energy solar activity. But when this field comes into contact with the sun's magnetic field - a process called "magnetic reconnection" - powerful electrical currents from the sun can stream into Earth's atmosphere, whipping up geomagnetic storms and space weather phenomena that can affect high-altitude aircraft, as well as astronauts on the International Space Station. Now scientists at MIT and NASA have identified a process in the Earth's magnetosphere that reinforces its shielding effect, keeping incoming solar energy at bay. By combining observations from the ground and in space, the team observed a plume of low-energy plasma particles that essentially hitches a ride along magnetic field lines - streaming from Earth's lower atmosphere up to the point, tens of thousands of kilometers above the surface, where the planet's magnetic field connects with that of the sun. In this region, which the scientists call the "merging point," the presence of cold, dense plasma slows magnetic reconnection, blunting the sun's effects on Earth. "The Earth's magnetic field protects life on the surface from the full impact of these solar outbursts," says John Foster, associate director of MIT's Haystack Observatory. "Reconnection strips away some of our magnetic shield and lets energy leak in, giving us large, violent storms. These plasmas get pulled into space and slow down the reconnection process, so the impact of the sun on the Earth is less violent." Foster and his colleagues publish their results in this week's issue of Science. The team includes Philip Erickson, principal research scientist at Haystack Observatory, as well as Brian Walsh and David Sibeck at NASA's Goddard Space Flight Center. Mapping Earth's magnetic shield Large space-weather events, such as geomagnetic storms, can alter radio waves passing through the upper atmosphere - a distortion that scientists can use to determine the concentration of plasma particles there. Using this data, they can produce two-dimensional global maps of atmospheric phenomena, such as plasma plumes. These ground-based observations have helped shed light on key characteristics of these plumes, such as how often they occur, and what makes some plumes stronger than others. But as Foster notes, this two-dimensional mapping technique gives an estimate only of what space weather might look like in the low-altitude regions of the magnetosphere. To get a more precise, three-dimensional picture of the entire magnetosphere would require observations directly from space. Toward this end, Foster approached Walsh with data showing a plasma plume emanating from the Earth's surface, and extending up into the lower layers of the magnetosphere, during a moderate solar storm in January 2013. Walsh checked the date against the orbital trajectories of three spacecraft that have been circling the Earth to study auroras in the atmosphere. As it turns out, all three spacecraft crossed the point in the magnetosphere at which Foster had detected a plasma plume from the ground. The team analyzed data from each spacecraft, and found that the same cold, dense plasma plume stretched all the way up to where the solar storm made contact with Earth's magnetic field.
A river of plasma "This higher-density, cold plasma changes about every plasma physics process it comes in contact with," Foster says. "It slows down reconnection, and it can contribute to the generation of waves that, in turn, accelerate particles in other parts of the magnetosphere. So it's a recirculation process, and really fascinating." Foster likens this plume phenomenon to a "river of particles," and says it is not unlike the Gulf Stream, a powerful ocean current that influences the temperature and other properties of surrounding waters. On an atmospheric scale, he says, plasma particles can behave in a similar way, redistributing throughout the atmosphere to form plumes that "flow through a huge circulation system, with a lot of different consequences." "What these types of studies are showing is just how dynamic this entire system is," Foster adds.
Overstory #182 - Remember to touch trees Since 1998, Dr. Alex L. Shigo contributed four articles to The Overstory (editions 68, 69, 70, and 132). Earlier this month, Dr. Shigo passed away at the age of 75. Dr. Shigo led a revolution in the way we think about arboriculture. The following is a selection of passages from Dr. Shigo's prolific writings. See "About the Author" below for a brief biography of Dr. Shigo and for information about purchasing his publications. from Tree Basics (p. 4) A brief overview of some unique features of trees Trees are the tallest, most massive, longest-lived organisms ever to grow on earth. Trees, like other plants, cannot move. However, trees, unlike other plants, are big, woody, and perennial, which means they are easy targets for constant wounding. Trees are super survivors mainly because they grow in ways that give them defense systems that are highly effective against infections from wounds. Trees have the capacity to adjust rapidly to changes that threaten their survival. Animals move to get food, water, and shelter. They move to avoid destructive agents. When animals are injured and infected, processes of restoration and repair start. Animals heal after wounding. When trees are injured and infected, processes of boundary formation start. Trees do not restore or repair wood that is injured and infected. In this sense, trees do not heal. Instead, trees compartmentalize wound infections. Compartmentalization is the tree's defense process after injuries, in which boundaries form that resist the spread of infections. The boundaries also protect systems involving water, air, energy storage, and mechanical support. In a sense, the boundaries are like an inside bark. from Tree Basics (p. 30) Trees provide their associates with food, water, shelter, and home, nesting, and roosting sites. Here are some of the benefits the associates provide for trees: - Facilitate absorption of water and elements - fungi (mycorrhizae). - Break down organic and inorganic materials - bacteria, fungi, insects, animals. - Aerate soils - worms, insects, fungi, animals. - Fertilize - droppings from worms, insects, and other animals. - Detoxify harmful substances - bacteria and fungi. - Help adjust pH - bacteria, fungi. - Convert nitrogen in air to a usable form (fix nitrogen) - bacteria and actinomycetes. - Protect roots against pathogens - bacteria, fungi (mycorrhizae). - Hold water - actinomycetes, bacteria (cell coatings). - Regulate slow-release fertilizers - bacteria. - Resist decay - anaerobic bacteria (wetwood), non-decay-causing fungi (discolored wood). - Disseminate seeds - birds, animals, insects. - Pollinate flowers - insects, animals, especially birds and bats. - Facilitate branch shedding - rot-causing fungi. - Protection against wound infection by decay-causing fungi - bacteria, non-decay-causing fungi. from A New Tree Biology (p. 161) Survival means to remain alive under conditions that have the potential to kill. Trees as we know them today have been evolving on this earth for over 200 million years. They have survived the killing forces of countless pathogens and the ravages of environmental extremes. Somehow, some trees have remained alive under all types of conditions that had the potential to kill. But trees did not accomplish this by acting as individuals. Trees connected or interacted with a great number of other living things, and together they survived. The power of interactions and connections kept many living things alive. Indeed, trees evolved in groups.
They had group protection and group defense. They were protected and defended by their neighbors and associates, and trees protected and defended them. A tight circle or web of connections was the way the individuals within the group survived. Now the connections are being broken. The heart of the survival system is being threatened. The most deadly words to the survival system are "suddenly" and "repeated." Given enough time, most members of the large circle adapted to adverse conditions. But when adverse conditions repeat faster than adaptation can occur, then the entire system is threatened. Trees never knew a stump until axes and saws came into the forest. Trees never knew complete removal of trunks, machine compaction of soils, sudden changes in water drainage patterns due to roads, pollution, and disruption of niches for soil organisms. The list goes on. These actions have come suddenly. They are being repeated. from A New Tree Biology (p. 458) A hundred times, at least, I have heard it said, "....the client wants their trees topped, and we (the arborist talking) give them what they want and what they will pay for. We must make a profit, or we will not be in business long. And, believe me, the next company that comes along will take the topping job." A dilemma begins to happen. If the arborist does not do the requested topping, somebody else will and the trees still will be injured, so the first arborist often feels that he might as well do the injurious job and get the profit. Some arborists will refuse to top trees. I accept the fact that there is a dilemma. And I believe that, like wound dressings, topping will never go away, but I also believe that there are ways to greatly reduce this injurious practice and still make a profit, or even more profit. First, some order is needed on the subject. Let us not confuse topping - the internodal removal of a leader trunk - with early pruning or training of young trees, pollarding, bonsai pruning, or crown reduction by cutting at crotches (this goes by many names: drop-crotch pruning, dehorning, etc.). Topping is done internodally; proper crown reduction is done at nodes, or at crotches. So the first separation must be: nodes - good, internodes - bad. from 5 Minute Tree Care (p. 1) For all people who care about trees, but do not have the time to read long articles. CORRECT 5 simple primary tree problems and you will prevent many costly secondary problems caused by insects, diseases, heat and cold extremes, and drought. You will also reduce the chances of your trees becoming hazards. Five major problems and their solutions: - SELECT HEALTHY TREES. Do not buy or plant trees that have roots crushed or crowded in a bag or container. - PLANT PROPERLY. Do not plant too deep. - PLANT THE RIGHT TREE IN THE RIGHT PLACE. Do not plant large-maturing trees near buildings or power lines. - PRUNE BRANCHES CORRECTLY. Do not remove branch collars or leave stubs. - PRUNE TREES CORRECTLY. Do not top trees. from Modern Arboriculture (p. xvi) Modern arboriculture means - The right tree in the right place. - Building designs that give trees space to grow. - Beautiful trees growing in clusters. - Healthy trees growing below grade. - Young trees with space to grow and with proper early pruning. - The target is removed, not the tree. - No sprouts from a correct pruning cut. - Early training regulates size and shape of trees. - The sidewalk is cut, not the tree or its roots. - Proper care for old trees, and respect for their dignity. - Planting trees at the proper depth.
- People touching trees and learning how they work, before they work on them.
- Treatments that destroy defense systems must be stopped.
- Treatments that cause serious internal injuries must be stopped.
- Treatments that start other problems must be stopped.
- Treatments that injure and kill transplanted trees must be stopped.

from A New Tree Biology Dictionary (p. 26)

Concentrations and survival

What keeps you alive can kill you. No water or no salt will kill you, and too much water or too much salt will kill you. It is not water and salt that are essential for survival. It is the proper concentrations of each that are essential. Concentrations of all the factors essential for survival are always changing in nature. Constant adaptation is needed to survive in such an environment. The static state does not exist, and what may be good today could be bad tomorrow for a tree.

A cavity full of water will be bad for the decay-causing organisms and good for the tree. As the water evaporates, a point of wood moisture will be reached that will be very good for the wood-decaying organisms. Then they will grow rapidly, within the boundaries set by the tree. When the moisture concentration falls below a certain level, the wood-decaying organisms do not grow further. Moisture, temperature, and all the essential elements are constantly changing, and conditions that are too extreme for best growth of the tree or the pathogens are also always changing.

Vibrations in concentrations of essential survival factors are ways natural systems constantly rid themselves of the weak individuals. We must be very careful not to disrupt the natural fluctuations by adding too much water or too much fertilizer, or by disturbing the tree at a critical time in the vibration period. This is why we need to understand tree biology. Too many times our honest and loving attempts to help are really actions that hurt the plant.

from New Tree Health (p. 1)

TREES grow taller, live longer, and become more massive than any other living thing because trees are perennial woody plants. WOOD gives trees superior mechanical support, which is the trees' unique feature. ROT destroys the trees' unique feature. PREVENT WOUNDS THAT LEAD TO ROT: lawnmowers, cars, fire, construction, climbing spikes, improper pruning, topping, deep injection and implant holes - the list goes on and on!

from Shigo on Trees (back cover)

Trees have dignity, too. There comes a time when trees in cities, parks, and near homes should be removed and new ones planted. When possible, plant trees in groups or clusters. LEARN about trees and their associates so that you can help make better decisions for their long-term, high-quality survival.

from Tree Hazards (p. 1)

A tree hurts, too! Most tree hazards do not just happen. They are usually started by mistreatments by people. When a hazardous tree breaks, it may hurt not only people; the tree hurts, too, in the sense of wounds or even death. MOST TREE HAZARDS CAN BE PREVENTED by regular checkups and proper treatments by tree professionals - arborists.

from Tree Hazards (p. 3)

CAUTION! Before more trees and people are injured and killed, we must STOP doing some old injurious tree practices and START doing some new beneficial ones. The list is long. Here are a few examples.

STOP:
- Removing tops of upright leader stems on big trees - TOPPING.
- Removing tips of large branches on big trees - TIPPING.
- Removing branch collars when pruning - FLUSH CUTTING.
- Planting trees that grow big, under power lines or in small spaces.
- Planting trees that have many low branches with tight crotches.
- Wounding trees, especially during construction.
- Crowding trees with roads, walkways, and buildings.
- Planting the wrong tree in the wrong place.

START:
- Developing a tree hazard prevention plan with arborists.
- Checking trees for health and safety at least once a year.
- Consulting arborists for advice before construction starts.
- Learning more about trees - read A NEW TREE BIOLOGY.
- Making decisions based on an understanding of tree biology.
- Talking to elected officials about realistic tree support.
- Recognizing early signs of problems; consult arborists.

from 100 Tree Myths (various pages)

Old arboriculture is based on the heartrot concept, where the tree is considered a passive organism and wood is considered dead. Modern arboriculture is based on the concept of compartmentalization, where the tree is considered an active, responding organism, and wood is understood to have many living cells among the dead cells.

Many trees tolerate injurious treatments. This does not mean that such treatments are good for trees.

Engineers are straight lines. Biologists are circles. More round cluster plantings of trees are needed in our straight cities!

SEE, not just look. ACT, not just wait. LISTEN, not just hear. TOUCH, not just watch.

from Modern Arboriculture (p. v)

Modern arboriculture is about the tree system - how it grows, how it defends itself, and how it eventually dies. I hope you will give trees and their associates - the tree system - a fair chance. Learn about them. Touch them.

These passages were excerpted with the kind permission of Dr. Shigo's daughter and co-publisher, Judy Shigo Smith, from:
- Shigo, A.L. 1986. A New Tree Biology Dictionary.
- Shigo, A.L. 1989. A New Tree Biology, 2nd Ed.
- Shigo, A.L. 1991. Modern Arboriculture.
- Shigo, A.L. 1993. 100 Tree Myths.
- Shigo, A.L. undated. New Tree Health.
- Shigo, A.L. undated. 5 Minute Tree Care.
- Shigo, A.L. undated. Shigo on Trees.
- Shigo, A.L. undated. Tree Hazards.

All of the above publications are available from the publisher.

About the author

Alex L. Shigo was chief scientist with the US Forest Service, and known by many as "the father of modern arboriculture". He is recognized internationally for the development of expanded interpretations of decay based on compartmentalization and microbial succession. His research includes over 15,000 longitudinal tree dissections with a chainsaw. He published over 15 textbooks used in many universities worldwide, and hundreds of other publications. He received numerous honors and awards. Dr. Shigo passed away October 6, 2006. To purchase Dr. Shigo's publications or for more information, contact the publisher.

Related editions to The Overstory:
- The Overstory #181 - Dispelling Misperceptions About Trees
- The Overstory #144 - How trees stand up
- The Overstory #143 - Dendrology
- The Overstory #132 - How Trees Survive
- The Overstory #92 - Trees and Their Energy Transactions
- The Overstory #70 - Troubles in the Rhizosphere
- The Overstory #69 - Some Tree Basics
- The Overstory #68 - Twelve Tree Myths
Can you please answer these questions:
1. How did an understanding of the theory of evolution enable Shubin to predict the age and type of sedimentary rock in which he would find Tiktaalik?
2. Why did Shubin look for fossils in Canada's north?
3. It took five years of field work to find Tiktaalik. What does this suggest about the nature of palaeontology? What challenges do you think Shubin's team faced?
4. Some fish have primitive lungs. Do you think Tiktaalik had lungs in addition to gills? Why or why not? Use the Internet and other sources to check your prediction.
5. Articles submitted to the science journal Nature undergo a rigorous peer review process before being accepted. What is the benefit of such a process? Why would scientists not publish their findings in journals that do not require a review?
Using Diffusion Tubes

Many local councils use diffusion tubes to measure nitrogen dioxide because they are relatively cheap, easy to use, don't require any power supply, and can give a good indication of air pollution levels. The low cost allows them to be used across a large area, so a picture of air pollution can be built up. Non-professionals such as environmental campaigners and schools also use them, as the tubes require only limited attention and are easy to operate.

The tubes are small and made of plastic, with a cap at each end. Inside the top end of the tube is a metal mesh disc coated with a substance that absorbs nitrogen dioxide: triethanolamine (TEA). The tube is placed vertically in a holder, usually attached to something like a lamppost or drainpipe, and the bottom cap is removed, allowing air to travel into the tube. When nitrogen dioxide in the atmosphere diffuses into the sampler, it reacts with the TEA and is converted to nitrite. The nitrite remains in the TEA and more nitrogen dioxide diffuses into the sampler.

The diffusion tube is left at the site with the bottom cap removed for a month. After this time the cap is replaced and the tube is taken to a laboratory for analysis. In the laboratory, the metal mesh is removed and washed with water. This water is collected in a small container and ultraviolet (UV) light is shone through it. The amount of UV light the water absorbs indicates how much nitrite was collected, and from this the average concentration of nitrogen dioxide in the air over that month can be calculated.

In some areas, local communities are involved in positioning and changing the diffusion tubes every month. This gives local residents access to air pollution measurements right where they live. In Sheffield, the information collected from the diffusion tubes that measure nitrogen dioxide can be found on the Sheffield Airmap website www.sheffieldairmap.org
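To make that last calculation step concrete, here is a minimal sketch of the chain from measured absorbance to an air concentration. The calibration slope and effective uptake rate below are placeholder values for illustration only, not real laboratory constants; a real analysis would use lab-specific calibration curves and site corrections.

```python
# Hypothetical post-processing for a passive diffusion tube.
# All constants are illustrative placeholders, not measured values.

EXPOSURE_DAYS = 30          # tube left out for one month
UPTAKE_RATE_ML_MIN = 1.2    # assumed effective sampling rate (mL of air per minute)
CAL_SLOPE_UG_PER_ABS = 2.5  # assumed calibration: micrograms of nitrite per absorbance unit

def no2_concentration(absorbance: float) -> float:
    """Estimate the mean NO2 concentration (ug/m3) from measured UV absorbance."""
    nitrite_ug = absorbance * CAL_SLOPE_UG_PER_ABS        # mass collected on the mesh
    minutes = EXPOSURE_DAYS * 24 * 60
    air_volume_m3 = (UPTAKE_RATE_ML_MIN * minutes) / 1e6  # mL -> m3 of air sampled
    return nitrite_ug / air_volume_m3

print(f"{no2_concentration(0.42):.1f} ug/m3")  # ~20 ug/m3 for these example numbers
```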
The main difference between Van der Waals radius and covalent radius is that the Van der Waals radius is a measure of the size of an atom derived from the interatomic distances observed in various molecular crystals, whereas the covalent radius is a measure of the size of an atom based on the assumption that atoms in a covalent bond share electrons and form a bond distance that is characteristic of that particular type of bond. Van der Waals radius and covalent radius are two distinct concepts that describe the size of atoms in different contexts.

Key Areas Covered

1. What is Van der Waals Radius – Definition, Features, Applications
2. What is Covalent Radius – Definition, Features, Applications
3. Similarities Between Van der Waals Radius and Covalent Radius – Outline of Common Features
4. Difference Between Van der Waals Radius and Covalent Radius – Comparison of Key Differences
5. FAQ: Van der Waals Radius and Covalent Radius – Frequently Asked Questions

Key Terms: Van der Waals Radius, Covalent Radius

What is Van der Waals Radius

The Van der Waals radius is a measure of the effective size of an atom or molecule. It is defined as half the distance between the nuclei of two adjacent, non-bonded atoms of the same element in a solid or molecular crystal when they are at their closest approach without any significant repulsive forces between them. In simpler terms, it represents the distance at which two atoms, if they were not bonded, would come closest to each other due to the attractive and repulsive forces between their electron clouds.

For example, imagine two helium (He) atoms that are not chemically bonded but are brought close together in a solid. At a certain distance, the attractive forces between their electron clouds dominate, causing them to approach each other. However, if they get too close, the repulsive forces between the electron clouds and the positively charged nuclei start to push them apart. The Van der Waals radius for helium represents the equilibrium distance at which these attractive and repulsive forces are balanced.

Role of Van der Waals Radius

The Van der Waals radius holds significant importance in chemistry and physics, playing a pivotal role in several key areas. Firstly, it is central to understanding intermolecular forces, where the balance between attractive London dispersion forces and repulsive forces, regulated by the Van der Waals radius, dictates whether atoms or molecules form bonds or engage in weak interactions. In the solid state, the Van der Waals radius becomes crucial for molecular packing in a crystal lattice, determining packing efficiency and crystal structure across various materials. Additionally, the Van der Waals equation of state, incorporating the Van der Waals radius, is instrumental in analyzing the behavior of real gases. This equation provides corrections for the finite size of gas molecules, particularly at high pressures and low temperatures, explaining deviations from ideal gas behavior.

What is a Covalent Radius

The covalent radius is often defined as half the distance between the nuclei of two identical atoms that are bonded together by a single covalent bond. In other words, it represents the equilibrium distance between two nuclei in a covalently bonded molecule. To illustrate this concept, consider a diatomic molecule such as hydrogen or chlorine. In these molecules, two identical atoms are bonded together by a single covalent bond.
The covalent radius of each atom is half the distance between the two nuclei, which is also the bond length. For example, in a hydrogen molecule (H2), the covalent radius of a hydrogen atom is half the H–H bond length, and in a chlorine molecule (Cl2), the covalent radius of a chlorine atom is half the Cl–Cl bond length.

The covalent radius is a crucial parameter in chemistry, carrying significant implications across multiple aspects. Firstly, it plays a pivotal role in explaining chemical bonding by providing a quantitative measure of atomic size when atoms are bonded in covalent molecules. The resulting bond length, determined by the covalent radius, influences the strength of the bond and the overall molecular structure. Additionally, in molecular geometry, the covalent radius affects bond angles and the three-dimensional arrangement of atoms, influencing molecular shape and properties. Covalent radii are essential for estimating bond lengths in various molecules, aiding in the prediction of molecular properties such as functional group sizes and interatomic distances. This information proves vital in fields like organic chemistry, where it bears on the reactivity and behavior of molecules. Furthermore, covalent radii contribute to the calculation of interatomic distances in crystal structures, offering insights into the arrangement of atoms in diverse materials. Lastly, covalent radii are integral to molecular modeling and computer simulations, supporting predictions of molecular behavior, atom interactions, and the structures of complex chemical systems.

Similarities Between Van der Waals Radius and Covalent Radius

- Van der Waals radius and covalent radius are both measures of atomic size.
- Both are empirical values.

Difference Between Van der Waals Radius and Covalent Radius

The Van der Waals radius is defined as half the distance between the nuclei of two non-bonded atoms of the same element when they are at their closest approach in a molecular crystal or a gas. The covalent radius is defined as half the distance between the nuclei of two identical atoms that are bonded together by a single covalent bond in a molecule.

The Van der Waals radius is primarily used to describe the size of atoms when they are not bonded but are in proximity to other atoms. The covalent radius is specifically used to describe the size of atoms when they are involved in chemical bonds within covalent compounds.

The Van der Waals radius considers the entire electron cloud of an atom and takes into account the repulsive interactions between electron clouds as atoms approach each other closely. The covalent radius represents the distance between the nuclei of bonded atoms, where electrons are actively shared between the two atoms.

FAQ: Van der Waals Radius and Covalent Radius

Is the Van der Waals radius greater than the covalent radius?
- Yes. The Van der Waals radius is generally greater than the covalent radius.

Why is the covalent radius smaller?
- The covalent radius is smaller because of the overlapping of the electron orbitals of the atoms in the covalent bond.

Are the Van der Waals radius and ionic radius the same?
- No, the Van der Waals radius and the ionic radius are two different types of radii.

In brief, the covalent radius is specific to atoms involved in covalent bonds, while the Van der Waals radius is a more general measure that describes the effective size of an atom when considering non-bonded interactions. This is the main difference between Van der Waals radius and covalent radius.
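As a quick numerical illustration of the two definitions, the sketch below halves approximate homonuclear single-bond lengths to get covalent radii and compares them with tabulated van der Waals radii. The numbers are rounded textbook figures (H2 bond ~74 pm, Cl2 bond ~199 pm; van der Waals radii ~120 pm for H and ~175 pm for Cl), so treat them as approximate.

```python
# Compare covalent radii (half the X-X bond length) with van der Waals radii.
# Values in picometres; rounded textbook figures, not precision data.

BOND_LENGTHS_PM = {"H": 74.0, "Cl": 199.0}  # homonuclear single-bond lengths (H2, Cl2)
VDW_RADII_PM = {"H": 120.0, "Cl": 175.0}    # tabulated van der Waals radii

for element, bond_length in BOND_LENGTHS_PM.items():
    covalent = bond_length / 2               # half the internuclear bond distance
    vdw = VDW_RADII_PM[element]
    print(f"{element}: covalent ~{covalent:.0f} pm, van der Waals ~{vdw:.0f} pm")
    assert vdw > covalent                    # vdW radius exceeds the covalent radius
```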
What is Nutrient Pollution?

Nutrient pollution, caused by excess nitrogen and phosphorus in water, is quickly becoming Teton County, Wyoming's most widespread, costly, and challenging environmental problem. Nutrient pollution has impacted many streams, rivers, lakes, bays, and coastal waters across the nation for several decades, resulting in serious environmental and human health issues and harming the economy. Climate change, population growth, and increased development exacerbate nutrient pollution through warmer waters and greater nutrient loads from fertilizer, wastewater, and urban land use.

Nutrient Pollution & Algae

When too much nitrogen and phosphorus enter the environment – usually from a wide range of human activities – water can become polluted. Too much nitrogen and phosphorus in the water causes algae to grow faster than ecosystems can handle. Excess algae, called algal blooms, result in a myriad of negative impacts, including:
- Harm to water quality, food resources, and habitats.
- Severely reduced or eliminated oxygen in the water, leading to illness and even death in fish and other aquatic life.
- Elevated toxins and bacterial growth that can make people sick if they come into contact with polluted water, consume tainted fish, or drink contaminated water.
The easiest explanation of DOMAIN and RANGE is given in this video.

Domain. The domain of a function is the complete set of possible values of the independent variable. In plain English: the domain is the set of all possible x-values which will make the function "work" and output real y-values.

To determine the domain from a graph, identify the set of all the x-coordinates on the function's graph. To determine the range, identify the set of all y-coordinates. In addition, ask yourself what the greatest and least x- and y-values are.

The domain is the set of all first elements of ordered pairs (x-coordinates). The range is the set of all second elements of ordered pairs (y-coordinates). Only the elements "used" by the relation or function constitute the range.

Example: to find the domain of f(x) = √(4 − x), solve the inequality 4 − x ≥ 0, which gives x ≤ 4. Thus, all numbers less than or equal to 4 make up the domain of this function.

When trying to find the domain and range from a graph, the domain is found by reading the graph from left to right, and the range by reading it from bottom to top.
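The worked example translates directly into code. Below is a small sketch checking which inputs lie in the domain of f(x) = √(4 − x); inputs greater than 4 raise an error, because the square root of a negative number is undefined over the reals.

```python
# Check the domain of f(x) = sqrt(4 - x): defined only when 4 - x >= 0, i.e. x <= 4.

import math

def f(x: float) -> float:
    return math.sqrt(4 - x)

for x in [-5, 0, 4, 4.1]:
    try:
        print(f"f({x}) = {f(x):.3f}")          # in the domain, outputs a real y >= 0
    except ValueError:
        print(f"f({x}) is undefined: {x} is outside the domain x <= 4")
```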
The outer layer of the Earth, the solid crust we walk on, is made up of broken pieces, much like the shell of a broken egg. These pieces, the tectonic plates, move around the planet at speeds of a few centimetres per year. Every so often they come together and combine into a supercontinent, which remains for a few hundred million years before breaking up. The plates then disperse or scatter and move away from each other, until they eventually – after another 400-600 million years – come back together again.

The last supercontinent, Pangea, formed around 310 million years ago, and started breaking up around 180 million years ago. It has been suggested that the next supercontinent will form in 200-250 million years, so we are currently about halfway through the scattered phase of the current supercontinent cycle. The question is: how will the next supercontinent form, and why?

There are four fundamental scenarios for the formation of the next supercontinent: Novopangea, Pangea Ultima, Aurica and Amasia. How each would form depends on different conditions, but all are ultimately linked to how Pangea separated, and to how the world's continents are still moving today.

The breakup of Pangea led to the formation of the Atlantic ocean, which is still opening and getting wider today. Consequently, the Pacific ocean is closing and getting narrower. The Pacific is home to a ring of subduction zones along its edges (the "ring of fire"), where ocean floor is brought down, or subducted, under continental plates and into the Earth's interior. There, the old ocean floor is recycled and can go into volcanic plumes. The Atlantic, by contrast, has a large ocean ridge producing new ocean plate, but is only home to two subduction zones: the Lesser Antilles Arc in the Caribbean and the Scotia Arc between South America and Antarctica.

1. Novopangea

If we assume that present day conditions persist, so that the Atlantic continues to open and the Pacific keeps closing, we have a scenario where the next supercontinent forms in the antipodes of Pangea. The Americas would collide with the northward-drifting Antarctica, and then into the already collided Africa-Eurasia. The supercontinent that would then form has been named Novopangea, or Novopangaea.

2. Pangea Ultima

The Atlantic opening may, however, slow down and actually start closing in the future. The two small arcs of subduction in the Atlantic could potentially spread all along the east coasts of the Americas, leading to a reforming of Pangea as the Americas, Europe and Africa are brought back together into a supercontinent called Pangea Ultima. This new supercontinent would be surrounded by a super Pacific Ocean.

3. Aurica

However, if the Atlantic were to develop new subduction zones – something that may already be happening – both the Pacific and Atlantic oceans may be fated to close. This means that a new ocean basin would have to form to replace them. In this scenario the Pan-Asian rift currently cutting through Asia, from west of India up to the Arctic, opens to form the new ocean. The result is the formation of the supercontinent Aurica. Because of Australia's current northwards drift, it would be at the centre of the new continent as East Asia and the Americas close the Pacific from either side. The European and African plates would then rejoin the Americas as the Atlantic closes.

4. Amasia

The fourth scenario predicts a completely different fate for future Earth. Several of the tectonic plates are currently moving north, including both Africa and Australia.
This drift is believed to be driven by anomalies left by Pangea deep in the Earth's interior, in the part called the mantle. Because of this northern drift, one can envisage a scenario where the continents, except Antarctica, keep drifting north. This means that they would eventually gather around the North Pole in a supercontinent called Amasia. In this scenario, both the Atlantic and the Pacific would mostly remain open.

Of these four scenarios, we believe that Novopangea is the most likely. It is a logical progression of present day continental plate drift directions, while the other three assume that another process comes into play. There would need to be new Atlantic subduction zones for Aurica, the reversal of the Atlantic opening for Pangea Ultima, or anomalies in the Earth's interior left by Pangea for Amasia.

Investigating the Earth's tectonic future forces us to push the boundaries of our knowledge, and to think about the processes that shape our planet over long time scales. It also leads us to think about the Earth system as a whole, and raises a series of other questions: what will the climate of the next supercontinent be? How will the ocean circulation adjust? How will life evolve and adapt? These are the kinds of questions that push the boundaries of science further, because they push the boundaries of our imagination.
Every year, technology seems more and more present in our lives. This is especially true for access to information. A research project when we were kids consisted of finding a topic, heading to the library, using the card catalog, and searching library shelves using our knowledge of the Dewey Decimal System to find books that might (or might not) be helpful. Kids today don't realize how easy they've got it. Digital research projects allow students to do their research from any device with internet access.

What are Digital Research Projects?

Digital research projects are projects where student research is done over the internet. Oftentimes, the research is displayed digitally as well, as a slide or media presentation. Images, animations, charts, and tables can be incorporated into these projects, whether they are found online or created by students. Digital research is easy to conduct: simply open the search engine you want to use, type your topic into the search bar, and scroll through the resources available. By using internet research to create a digital research project, your students will learn and accomplish a lot with just a computer or tablet.

Although everything seems to be going digital, still teach your students to use the library for their research. It is important that our students know how to find books and documents in libraries. Not everything is available online yet!

How can I teach my students to create a digital research project?

Teaching students how to create digital research projects should follow a few steps to ensure that the information found is accurate and appropriate for students. Some of the skills that have always been necessary for research projects are still important for digital research projects. For example, note-taking, organizing information, quoting, and citing resources are still things your students need to know how to do. In addition to these methods students have been learning for years, they will need to understand how to use a computer, choose safe search engines, and decide which information is credible. Students need to be taught each of these small steps for creating a research project, digital or otherwise. Use mini-lessons to help.

- Communicate Digital Expectations: With the vastness of the internet, expectations for internet use must be clearly defined. Take a few minutes to go over the rules students need to follow online.
- Focus the Topic: Find a specific topic your students can research so that the information available isn't overwhelming.
- Use Legitimate Sources: Help students understand what a credible source looks like, with photos, sources, and information that makes sense. Avoid sources that seem too good to be true or have questionable information.
- Note-Taking Skills: Teach students how to write down the important information as notes, using "cave-man language" or a chart to keep information organized.

A great resource we haven't discussed yet is your media specialist. They are trained to help you and your students navigate the world of media. This usually means they have a pretty good idea of what resources you can use online and how students can best go about creating a digital research project.

How do I know if a source is appropriate and legitimate?

There are many websites out there that we should be leery of.
Students should be closely monitored on the internet, and if you have the option available, internet safety features should be used. An appropriate and legitimate site displays information that makes sense logically. It is a page where edits are not allowed by just anyone (ahem… Wikipedia). A legitimate website will contain sources for the information presented on the page.

There are many great resources out there for your students to create digital research projects. Use safe search websites for your students to find appropriate and legitimate information. Teach your students how to take notes and find the information they need, just as you would with good old-fashioned print media. Soon, your students will be creating digital research projects and will enjoy sharing all of the information they have learned with you.
Electroluminescence is the creation of light by applying an electric bias to a material. It may employ AC or DC voltage, and it is used in LEDs, large-area displays, and similar devices. When the applied forward voltage on the diode of an LED drives electrons and holes into the active region between the n-type and p-type material, the energy can be converted into infrared or visible photons. This means that the electron-hole pair drops into a more stable bound state, releasing energy on the order of electron volts by emission of a photon. The red extreme of the visible spectrum, 700 nm, requires an energy release of 1.77 eV to provide the quantum energy of the photon. At the other extreme, 400 nm in the violet, 3.1 eV is required. See the IDTechEx report Introduction to Printed Electronics.
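The two photon energies quoted above follow directly from E = hc/λ. The short sketch below reproduces them, converting from joules to electron volts:

```python
# Photon energy from wavelength via E = h*c / lambda, converted to electron volts.

H = 6.626e-34   # Planck constant (J*s)
C = 2.998e8     # speed of light (m/s)
EV = 1.602e-19  # joules per electron volt

def photon_energy_ev(wavelength_nm: float) -> float:
    return H * C / (wavelength_nm * 1e-9) / EV

print(f"700 nm (red):    {photon_energy_ev(700):.2f} eV")  # ~1.77 eV
print(f"400 nm (violet): {photon_energy_ev(400):.2f} eV")  # ~3.10 eV
```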
Touching wildlife might seem like a harmless impulse, but it's really important not to. First, imagine strolling through a serene forest and suddenly encountering a majestic deer. Resist the urge to pet it: wildlife can bite or scratch when they feel threatened, and a deer hoof to the face is nobody's idea of a fun souvenir! Second, many critters carry diseases that could jump to humans with a simple touch. So, while that fluffy squirrel might look adorable, it could be harboring germs that turn your outdoor adventure into an unexpected doctor's visit. Unfortunately, it's not just wild animals but also domesticated animals that can harbor serious diseases. In these cases, it's important to practice good hygiene to prevent illness. So, instead of trying to make furry friends, let's cherish them from a safe and respectful distance – it's a wild world out there!

Rabies is a viral disease that affects mammals, including humans. It is primarily contracted through the bite or scratch of an infected animal, with the virus being transmitted through saliva. This potentially fatal disease is caused by the rabies virus, which attacks the central nervous system and leads to severe neurological symptoms. Without prompt medical intervention, rabies is almost always fatal once symptoms appear.

In terms of statistics, rabies is a global concern, with an estimated 59,000 human deaths annually worldwide, primarily in regions with limited access to medical care. Most cases of rabies occur in Asia and Africa, and dogs are the primary source of transmission in developing countries. Vaccination and post-exposure prophylaxis are paramount in preventing the spread of rabies and saving lives.
The term liuli entered the Chinese arts and crafts taxonomy after contacts with the Western Regions in Chinese Central Asia during the Han dynasty. Written in various homophonic characters, liuli is most frequently encountered in Chinese texts as referring to opaque glass or gemstones, but it can mean 'glaze' or anything with a glass-like surface. According to historians of Chinese glass technology, liuli in pre-Tang (618–907) texts denoted all sorts of artificially manufactured objects made from silica-based material. Many of these were imported from locations in the West, such as Kashmir and ancient Rome. Due to its early development of glass-blowing technology, the Roman Empire exported many forms of glass to the rest of the world, including Central Asia and China.

Yet extensive archaeological investigation reveals that glass was produced in China as early as the Zhou dynasty (1066 BCE–256 BCE). Large quantities of opaque glass and eye-beads have been found in Warring States (ca. 770–221 BCE), Qin (221–206 BCE), and Han tombs, which indicate that they 'were comparatively common, made as a cheap imitation of jade for funerary purpose.' Such a function probably accounts for the fact that the standard written form for both the characters liu and li has a 'jade' (Ch. yu) radical.

Since the material and methods involved in making glass and ceramic coatings (i.e. glazes) are similar, the latter became a common referent of liuli. Though this fluidity is not uncommon in Chinese etymology, it poses a problem for deciphering the exact meaning of liuli in early texts. Phrases containing the term liuli may refer to glass objects or coloured stones, rather than glazed ceramics. While in most pre-Han writings liuli means opaque glass, in later writings it could mean anything from glass vessels, ceramic coatings, coloured stones, or brilliant surfaces, or could mean simply 'radiance' or 'shining.'

Perhaps because of this linguistic ambiguity, a new word, boli, appeared in Chinese literature in the Tang dynasty to denote glasswork only. Its coining is thought to be related to the importation of a new type of glass-blown transparent vessel, a novelty in the Tang. Buddhist literature also contains evidence suggesting that both liuli and boli have Sanskrit origins: liuli is a variant of a number of words transcribing the Sanskrit word vaiḍūrya, a gemstone, and boli is a transcription of spātika, meaning crystal or quartz. Despite this general differentiation, the exact referents of liuli are not always clear.

The kinship between glass and glaze is perhaps the main reason that thorough examination and interpretation of the term liuli has been conducted largely by scientists of Chinese glassmaking. Scholars of early Sino-Western trade and cultural exchanges, such as Xinru Liu (b. 1951), and of the history of Chinese science and technology, such as Joseph Needham (1900–1995), opted to read liuli as 'glass.' Historians of Chinese architecture and ceramic technology, on the other hand, prefer to interpret it as the coating on pottery, that is, 'glaze.' This technique so rich in history is today promoted by the LIULIGONGFANG studio, founded by Chang Yi and Loretta H. Yang.
How long do leaves take to decompose?

It's autumn, which means that leaves are falling from the trees and covering the ground. If you're like most people, you probably just see them as a nuisance to rake up. But have you ever stopped to think about how long it takes for leaves to decompose? It turns out that the answer is not as simple as you might think. The rate of decomposition depends on a number of factors, including the type of leaf, the moisture level, and the temperature. In this blog post, we will explore these factors in more detail and give you a better idea of how long it takes for leaves to decompose.

What is decomposition?

Decomposition is the process through which organic matter breaks down into simpler organic or inorganic substances. The decomposition of leaves is driven by the action of microorganisms such as bacteria and fungi. These microorganisms break down the complex carbohydrates and other molecules in leaves into simpler compounds that can be used by plants and animals. The rate at which leaves decompose depends on a number of factors, including the type of leaf, the climate, and the presence of other organisms. In general, however, it takes several months for leaves to fully decompose.

The decomposition process

When leaves decompose, they do so through a process of decay. This process is caused by the activity of microorganisms, such as bacteria and fungi, which break down the organic matter in the leaves. The rate at which this happens depends on a number of factors, including the type of leaf, the temperature, and the amount of moisture present. In general, however, it takes leaves several weeks or even months to fully decompose.

Factors that affect the rate of decomposition

There are many factors that affect how quickly leaves decompose. Some of these include:
- The type of leaf: Some leaves decompose faster than others. For example, oak leaves take much longer to break down than maple leaves.
- The size of the leaf: Larger leaves decompose more slowly than smaller ones.
- The temperature: Warmer temperatures speed up the decomposition process.
- The moisture level: If the leaves are too dry, they will not decompose as quickly. If they are too wet, however, decomposition will also be slowed.
- The presence of other organic matter: Leaves decompose more quickly if other organisms are present to help break them down (such as worms or other insects).

How long does it take leaves to decompose?

When leaves fall from trees in autumn, they begin to decompose. The process of decomposition is slower in colder climates, but generally it takes leaves a few weeks to several months to completely break down and become part of the soil. Leaves are broken down by bacteria and fungi, which release enzymes that accelerate the decomposition process. Oxygen is also necessary for decomposition to occur, so leaves decompose more quickly in moist conditions where plenty of oxygen is available. As leaves decompose, they release nutrients that are essential for plant growth back into the soil, making leaf litter an important part of the ecosystem.

It's clear that leaves decompose at different rates depending on the type of leaf and the conditions it's in. However, in general, it takes leaves a few weeks to several months to completely decompose. So if you're looking to add some extra nutrients to your garden, consider using fallen leaves as compost.
With a little patience, you can turn those dead leaves into something alive and beautiful.
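Ecologists often summarize these rate differences with a single-exponential litter decay model, m(t) = m0 · e^(−kt), where the constant k captures leaf type and conditions. Here is a minimal sketch of that model; the k values are purely illustrative placeholders, not measured rates, though they reflect the point above that oak litter breaks down more slowly than maple.

```python
# Single-exponential litter decay: m(t) = m0 * exp(-k * t).
# The decay constants below are made-up illustrative values (units: 1/year).

import math

K_PER_YEAR = {"maple": 0.9, "oak": 0.35}

def mass_remaining(m0_g: float, k: float, t_years: float) -> float:
    return m0_g * math.exp(-k * t_years)

for leaf, k in K_PER_YEAR.items():
    half_life = math.log(2) / k  # time until half the litter mass is gone
    print(f"{leaf}: ~{half_life:.1f} years to lose half the litter, "
          f"{mass_remaining(100, k, 1):.0f} g of 100 g left after one year")
```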
Summary: A new computational model sheds light on the complexity of smell identification. Researchers report that different brains know how to associate new similar odors, so long as they have experienced an overlap in odors over their lifetimes.

Source: Zuckerman Institute.

In a new study, Columbia scientists have discovered why the brain's olfactory system is so remarkably consistent between individuals, even though the wiring of brain cells in this region differs greatly from person to person. To make sense of this apparent paradox, the researchers developed a computational model showing that two brains need not have previously sniffed the same exact set of odors in order to agree on a new set of scents. Instead, any two brains will know to associate new similar odors with each other (such as two different flowers) so long as both brains have experienced even the smallest overlap in odors during their lifetimes. This work was published last week in Neuron.

"Many of the brain cells, or neurons, in our olfactory system are wired together seemingly at random, meaning that the neurons that activate when I smell a rose are different than yours. So why do we both agree with certainty what we're smelling?" said the paper's senior author Larry Abbott, PhD, a computational neuroscientist and principal investigator at Columbia's Mortimer B. Zuckerman Mind Brain Behavior Institute. "By creating this model, we could detect, for the first time, the patterns that underlie seemingly random activity, revealing a mathematical consistency to how our brains are identifying scents."

The journey an odor takes from the nose to the brain is labyrinthine. When an odor enters the nasal cavity, specialized proteins called olfactory receptors send information about that scent to a designated location in the brain called the olfactory bulb. In a series of pioneering studies in the 1990s, Richard Axel, MD, a co-director at Columbia's Zuckerman Institute and a co-author of the new Neuron paper, discovered the more than 1,000 genes that encode these olfactory receptors. This work, performed alongside his colleague Linda B. Buck, PhD, earned them both the 2004 Nobel Prize in Physiology or Medicine.

Today's paper focuses on how information leaves the olfactory bulb and is interpreted by a brain region called the piriform cortex. The piriform cortex is believed to be a crucial structure for processing odors. Because no two whiffs of an odor are identical, the brain must make associations between odors that are similar. This process, called generalization, is what helps the brain to interpret similar smells.

"Generalization is critical because it lets you take the memory of a previous scent — such as coffee — and connect it to the odor of coffee you're currently smelling, to guide you as you stumble to the kitchen in the morning," said Evan Schaffer, PhD, a postdoctoral researcher in the Axel lab and the paper's first author.

However, as scientists have investigated the concept of generalization, they have been puzzled by two paradoxes about the piriform cortex. First, the neural activity in the piriform cortex appeared random, with no apparent logic or organization, so researchers could not tie a particular pattern of neural activity to a class of scents. And second, the piriform cortex itself seemed too big. "Scientists could deduce a need for only about 50,000 of the roughly one million piriform cortex neurons in the human brain," said Dr. Schaffer.
"Given how energetically expensive neurons are, this raised the question: why are there so many neurons in this part of the brain?"

The researchers developed a mathematical model that offered a resolution to both paradoxes: two brains could indeed agree on a class of scents (i.e. fragrant flowers versus smelly garbage) if the neural activity came from a large enough pool of neurons. The idea is similar to crowdsourcing, whereby different people each analyze one part of a complex question. That analysis is then pooled together into a central hub.

"This is analogous to what is happening in the piriform cortex," said Dr. Schaffer. "The different patterns of neural activity generated by these one million neurons, while incomplete on their own, when combined give a complete picture of what the brain is smelling."

By then testing this model on data gathered from the brains of fruit flies, the team further showed that this neural activity helps two brains to agree on common odors, even with limited common experience. Scientists have long argued that two brains must share a common reference point, such as each having previously smelled a rose, in order to identify the same scent. But this model suggests that the reference point can be anything: the memory of the scent of a rose can help two people agree on the smell of coffee.

"Even the tiniest bit of common experience seems to realign the brains, so that while my neural activity is different than yours, the association we each make between two related scents — such as flowers — is similar for both of us," said Dr. Schaffer.

This model, while lending insight into a long-held paradox of perception, highlights an underlying elegance of the olfactory system: despite containing different neurons, memories, and experiences, two brains can still come to an agreement. "You and I don't need to have sniffed every type of odor in the world to come to an agreement about what we're smelling," said Dr. Schaffer. "As long as we have a little bit of common experience, that's enough."

Funding: This research was supported by the Gatsby Charitable Foundation, the National Science Foundation NeuroNex Award (DBI-1707398), the Simons Collaboration on the Global Brain, and the Howard Hughes Medical Institute. The authors report no financial or other conflicts of interest.

Source: Anne Holden – Zuckerman Institute
Publisher: Organized by NeuroscienceNews.com.
Image Source: NeuroscienceNews.com image is in the public domain.
Original Research: "Odor Perception on the Two Sides of the Brain: Consistency Despite Randomness" by Evan S. Schaffer, Dan D. Stettler, Daniel Kato, Gloria B. Choi, Richard Axel, and L.F. Abbott in Neuron. Published April 26, 2018.
Odor Perception on the Two Sides of the Brain: Consistency Despite Randomness

Highlights:
- A random model predicts the observed preservation of correlations in piriform responses.
- The model supports consistent agreement about odor quality among individuals.
- Consistent generalization may require the full complement of piriform neurons.

Neurons in piriform cortex receive input from a random collection of glomeruli, resulting in odor representations that lack the stereotypic organization of the olfactory bulb. We have performed in vivo optical imaging and mathematical modeling to demonstrate that correlations are retained in the transformation from bulb to piriform cortex, a feature essential for generalization across odors. Random connectivity also implies that the piriform representation of a given odor will differ among different individuals and across brain hemispheres in a single individual. We show that these different representations can nevertheless support consistent agreement about odor quality across a range of odors. Our model also demonstrates that, whereas odor discrimination and categorization require far fewer neurons than reside in piriform cortex, consistent generalization may require the full complement of piriform neurons.
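The abstract's central claim, that random wiring can still preserve odor correlations, is easy to demonstrate numerically. Below is a toy sketch, not the authors' actual model: two "brains" each project the same odor patterns through their own random bulb-to-piriform matrix, and both preserve which odor pairs are similar. The dimensions and the odor statistics are arbitrary illustrative choices.

```python
# Random projections approximately preserve pairwise similarity, so two brains
# with different random wiring agree on which odors resemble each other.

import numpy as np

rng = np.random.default_rng(0)
n_glomeruli, n_piriform = 50, 2000

# Idealized odor patterns: two similar odors ("flowers") and one unrelated odor.
flower_a = rng.standard_normal(n_glomeruli)
flower_b = flower_a + 0.3 * rng.standard_normal(n_glomeruli)  # overlaps flower_a
garbage = rng.standard_normal(n_glomeruli)                    # unrelated

def corr(u, v):
    return np.corrcoef(u, v)[0, 1]

for brain in (1, 2):
    W = rng.standard_normal((n_piriform, n_glomeruli))  # this brain's random wiring
    a, b, g = W @ flower_a, W @ flower_b, W @ garbage
    print(f"brain {brain}: corr(flower_a, flower_b) = {corr(a, b):.2f}, "
          f"corr(flower_a, garbage) = {corr(a, g):.2f}")
# Both brains report a high flower-flower correlation and a near-zero
# flower-garbage correlation, despite entirely different wiring matrices.
```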
If you drop molten glass into a bucket of cold water, it cools into a weird tadpole shape: a round head with a long, thin tail. It's called a Prince Rupert's Drop, and no matter how hard you hit the head with a hammer, it won't break. But if you nick the tail even slightly, the whole drop explodes into tiny shards of glass.

That's because the cold water cools the outside of the drop very quickly — so quickly that even by the time the outside is solid, the inside is still molten. As the inside cools, it causes the solid outer layer to contract around it. The resulting stresses actually strengthen the glass — except for the tail, which is too thin to have inner and outer layers to balance the stresses, so it's a weak point. And if you nick the drop's tail, it doesn't just break, it explodes into tiny pieces, so fast that you can't really see it happen in real time. One instant, the glass is there; the next, it's gone. You can watch it happen at 130,000 frames per second in this video.

What's a Prince Rupert's Drop good for? Researchers have used Prince Rupert's Drops since the 17th century to study material failure and elasticity. Geologists are also interested in these funny glass tadpoles, because similar structures often form during volcanic eruptions.
Sustainable energy technologies include renewable energy sources – such as hydroelectricity, solar energy, wind energy, wave power, geothermal energy, bioenergy, and tidal power – as well as technologies designed to improve energy efficiency. Costs have decreased immensely over the years and continue to fall. Increasingly, effective government policies support investor confidence, and these markets are expanding. Considerable progress is being made in the energy transition from fossil fuels to ecologically sustainable systems, to the point where many studies support 100% renewable energy.

The environmental impact of wind power includes effects on wildlife, but these can be mitigated if proper monitoring and mitigation strategies are implemented. Thousands of birds, including rare species, have been killed by the blades of wind turbines, though wind turbines contribute relatively insignificantly to anthropogenic avian mortality. For every bird killed by a wind turbine in the US, nearly 500,000 are killed by each of feral cats and buildings. In comparison, conventional coal-fired generators contribute significantly more to bird mortality, by incineration when birds are caught in the updrafts of smoke stacks and by poisoning with emissions byproducts (including particulates and heavy metals downwind of flue gases). Further, marine life is affected by the water intakes of steam-turbine cooling towers (heat exchangers) for nuclear and fossil fuel generators, by coal dust deposits in marine ecosystems (e.g. damaging Australia's Great Barrier Reef), and by water acidification from combustion oxides.

There is one more area where buyers may get a false sense of security: several states in the US have lists of "approved" wind turbines for their rebate programs. An example of this is the California list. The problem is that approval for this list, and the performance data provided (such as rated power and energy production), are essentially self-certified. Less-scrupulous manufacturers can 'manufacture' data and submit it under the pretence that it was measured. The only value of those lists is in telling you what rebates are available; they do not provide reliable turbine information.

Vertical-axis wind turbines (VAWTs) have the main rotor shaft arranged vertically. One advantage of this arrangement is that the turbine does not need to be pointed into the wind to be effective, which is an advantage on a site where the wind direction is highly variable. It is also an advantage when the turbine is integrated into a building, because it is inherently less steerable. Also, the generator and gearbox can be placed near the ground, using a direct drive from the rotor assembly to the ground-based gearbox, improving accessibility for maintenance. However, these designs produce much less energy averaged over time, which is a major drawback.

Sunforce wind generators are primarily used to recharge all types of 12-volt batteries, including lead-acid automotive batteries, deep-cycle (traction type) batteries, gel-cell batteries, and heavy-duty (stationary type) batteries. When using this wind generator to run appliances on a regular basis, the use of deep-cycle marine batteries is recommended. This type of battery is designed to withstand the frequent charge and discharge cycles associated with wind power use. Attempting to run the wind generator on an open circuit without a battery may cause damage to the generator or connected equipment.

Wind turbines need wind to produce energy.
That message seems lost, not only on most small wind turbine owners, but also on many manufacturers and installers of said devices. One of the world's largest manufacturers of small wind turbines, located in the USA (now bankrupt, by the way, though their turbines are still sold), markets their flagship machine with a 12 meter (36 feet) tower. Their dealers are trained to tell you it will produce 60% of your electricity bill. If you are someone who is convinced the earth is flat, this is the turbine for you!

A recent UK Government document states that "projects are generally more likely to succeed if they have broad public support and the consent of local communities. This means giving communities both a say and a stake". In countries such as Germany and Denmark, many renewable projects are owned by communities, particularly through cooperative structures, and contribute significantly to overall levels of renewable energy deployment.

Which is to say that Ross and his co-workers had options. And the city was free to take advantage of them because of a rather unusual arrangement: Georgetown itself owns the utility company that serves the city. So officials there, unlike those in most cities, were free to negotiate with suppliers. When they learned that rates for wind power could be guaranteed for 20 years and solar for 25 years, but natural gas for only seven years, the choice, Ross says, was a "no-brainer."

Commercial concentrated solar power plants were first developed in the 1980s. As the cost of solar electricity has fallen, the number of grid-connected solar PV systems has grown into the millions, and utility-scale solar power stations with hundreds of megawatts are being built. Solar PV is rapidly becoming an inexpensive, low-carbon technology to harness renewable energy from the Sun.

Small-scale turbines are expensive (one manufacturer says a typical system costs $40,000 to $60,000 to install), though some of that outlay can be offset by federal and local tax credits. Experts recommend that you buy one certified by the Small Wind Certification Council. Turbine manufacturers include Bergey Wind Power, Britwind and Xzeres Wind; look on their websites for local dealers.

Several groups in various sectors are conducting research on Jatropha curcas, a poisonous shrub-like tree that produces seeds considered by many to be a viable source of biofuels feedstock oil. Much of this research focuses on improving the overall per-acre oil yield of Jatropha through advancements in genetics, soil science, and horticultural practices. SG Biofuels, a San Diego-based Jatropha developer, has used molecular breeding and biotechnology to produce elite hybrid seeds of Jatropha that show significant yield improvements over first-generation varieties. The Center for Sustainable Energy Farming (CfSEF) is a Los Angeles-based non-profit research organization dedicated to Jatropha research in the areas of plant science, agronomy, and horticulture.
Successful exploration of these disciplines is projected to increase Jatropha farm production yields by 200-300% in the next ten years.

A solar power tower uses an array of tracking reflectors (heliostats) to concentrate light on a central receiver atop a tower. Power towers can achieve higher (thermal-to-electricity conversion) efficiency than linear tracking CSP schemes, and better energy storage capability than dish Stirling technologies. The PS10 and PS20 solar power plants are examples of this technology.
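Returning to the earlier point that "wind turbines need wind": the physics behind why short towers on weak-wind sites disappoint is that available power grows with the cube of wind speed, P = ½ρACpv³. The sketch below uses illustrative small-turbine numbers (rotor size and power coefficient are assumptions, not any specific product's specifications):

```python
# Wind power scales with the cube of wind speed: P = 0.5 * rho * A * Cp * v**3.
# Rotor diameter and power coefficient below are illustrative assumptions.

import math

RHO = 1.225         # air density at sea level (kg/m^3)
ROTOR_DIAM_M = 3.0  # assumed small-turbine rotor diameter (m)
CP = 0.35           # assumed power coefficient (the Betz limit is ~0.59)

def power_watts(wind_speed_ms: float) -> float:
    swept_area = math.pi * (ROTOR_DIAM_M / 2) ** 2
    return 0.5 * RHO * swept_area * CP * wind_speed_ms ** 3

for v in (3, 5, 7, 10):
    print(f"{v} m/s -> {power_watts(v):.0f} W")
# Doubling the wind speed yields roughly eight times the power, which is why
# siting and tower height matter far more than glossy rated-power figures.
```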
Prevention, within the context of ATSA, refers to efforts to stop the perpetration of unhealthy, harmful, dangerous, and illegal sexually oriented behaviors and actions that victimize others. The goal of prevention is to identify the factors that contribute to – and inhibit – sexual abuse, and to use that information to stop sexual abuse before it can begin. These factors are often referred to as "risk" and "protective" factors for perpetration. It also is important to understand the risk and protective factors for victimization. A sound, comprehensive prevention program enhances and assists protective factors and reduces (and in many cases, eliminates) the identified risk factors. For more information about sexual abuse prevention, read What is prevention?

- The Relation Between Sexual Harassment And Sexual Coercion
- 5 Things: Preventing Harmful Sexual Behaviors In Youth (Infographic)
- People Are Talking About Preventing Sexual Abuse (Infographic)

These select handouts from ATSA's Research and Treatment conference focus on the latest prevention theory and treatment.
- Talking About Prevention - Sometimes it's not the Picture, it's the Frame (PDF / PPT)
- How ATSA Chapters Can Get Involved in Prevention
- Talking about Prevention
- Sexual Violence Prevention Fact Sheet

Other Prevention Publications
- Preventing Child Sexual Abuse within Youth-serving Organizations (CDC, 2007)
- Transforming Communities to Prevent Child Sexual Abuse and Exploitation: A Primary Prevention Approach (Prevention Institute, 2009)
- Sexual Violence Prevention: Beginning the Dialogue (CDC, 2004)

The National Coalition to Prevent Child Sexual Exploitation

ATSA is a member of The National Coalition to Prevent Child Sexual Exploitation. The Coalition comprises more than 30 major agencies and experts that have collaborated to develop the first coordinated, collective national plan focused on prevention to end demand for the sexual abuse and exploitation of children. It supports comprehensive prevention strategies, but pays special attention to primary prevention and positive youth development - actions that take place before child sexual abuse or exploitation has been perpetrated. Join ATSA in preventing child sexual exploitation: please download the National Plan and distribute the document to family and community members. Read the National Plan press release here.

National Plan excerpt: The National Plan explains the multiple areas of trauma associated with child sexual abuse and exploitation (CSA/E), as well as the frequency of CSA/E and its economic impact, including increased health care and interdiction costs. The report shows that CSA/E often happens in conjunction with other types of abuse and violence, and can have long-term psychological impacts. The National Plan identifies action steps in several key areas, including research, ending the public demand for sexual exploitation, increasing public awareness and collaborative practices, and funding.

Please contact [email protected] for more information.
How It Works

This diagram shows how an Apricus solar hot water system functions.

System Operation Overview

1. The solar hot water collector converts sunlight into usable heat, warming the liquid in the header pipe.
2. Once the temperature in the header pipe is measured to be hotter than the water at the bottom of the storage tank (T2), the pump turns on. The liquid is slowly circulated through the header pipe in the collector, heating by about 7°C on each pass.
3. Apricus systems are usually installed to circulate water directly from the hot water cylinder. In some areas, a coil in the cylinder is used instead to protect against hard frosts or water-quality issues. Throughout the day, the water in the storage tank is gradually heated.
4. The temperature at the top of the solar tank (T3) is monitored, and the solar system is shut down (or excess heat is dissipated) once a maximum temperature (about 75°C) has been reached.
5. If the water is not already hot enough from solar input, a traditional heating system boosts the solar pre-heated water up to the required temperature. Often the booster is an electrical element inside the solar cylinder, but it can also be a wetback, a central heating system, or a gas water heater. Since the water has already been heated by solar energy, less energy is required to heat it further.

Apricus System Video

For more information on how an Apricus solar hot water system works, including information about evacuated solar collector design, solar hot water system operation, and collector installation guidelines, please watch the video below.
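The pump behavior in steps 2 and 4 is a simple differential thermostat. Below is a minimal sketch of that control loop in Python; the sensor values are simulated, and the 6°C switch-on differential and polling interval are illustrative assumptions, not Apricus specifications.

```python
import random
import time

ON_DIFFERENTIAL_C = 6.0   # assumed: pump on when header exceeds T2 by this much
MAX_TANK_TEMP_C = 75.0    # shutdown threshold described in step 4

def read_temp(sensor: str) -> float:
    """Stand-in for a hardware sensor read; returns simulated values."""
    simulated = {"header": 55.0, "T2": 45.0, "T3": 60.0}
    return simulated[sensor] + random.uniform(-1.0, 1.0)

def set_pump(on: bool) -> None:
    print("pump", "ON" if on else "OFF")

for _ in range(3):        # a real controller would loop indefinitely
    header, t2, t3 = read_temp("header"), read_temp("T2"), read_temp("T3")
    if t3 >= MAX_TANK_TEMP_C:
        set_pump(False)   # tank at maximum: shut the solar loop down
    else:
        set_pump(header > t2 + ON_DIFFERENTIAL_C)
    time.sleep(1)         # a real controller might poll every 30 seconds
```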
Magic 6 / 9

Give your students practice skip counting through a magic trick. This is a sure winner to show off to parents.

"The ballet makes us look at those bodies, it makes us listen to that music, it makes us wonder at the geometry of the way they come together. The way that extraordinary space is controlled and given such emotional force." - John Guare

Standards for Mathematical Practice

MathPickle puzzle and game designs engage a wide spectrum of student abilities while targeting the following Standards for Mathematical Practice:

MP1 Toughen up! This is problem solving where our students develop grit and resiliency in the face of nasty, thorny problems. It is the most sought-after skill for our students.

MP3 Work together! This is collaborative problem solving in which students discuss their strategies to solve a problem and identify missteps in a failed solution. MathPickle recommends pairing up students for all its puzzles.

MP6 Be precise! This is where our students learn to communicate using precise terminology. MathPickle encourages students not only to use the precise terms of others, but to invent and rigorously define their own terms.

MP7 Be observant! One of the things that the human brain does very well is identify patterns. We sometimes do this too well and identify patterns that don't really exist.
Abraham Maslow put physiological needs such as drinking water at the base of his hierarchy of human needs. “Water is a basic need, next to air and getting enough sleep,” soil scientist Dr. Annemieke Farenhorst reminded students Oct. 4, 2017. Only one per cent of the world's freshwater is easily accessible for human consumption. The average Canadian uses 300 litres of water a day, well above the global average, yet more than three billion people in the world do not have access to piped water.

Dr. Farenhorst said, “NASA can send people to Mars, investing billions of dollars to make sure a handful of people have clean drinking water, yet many people still have no access to water on Earth.” The sewage disposal in many First Nations communities is at the same level as in developing countries. Naysayers believe it is not possible to fix these problems, but “Voyager 1 is 20.6 billion kilometers from Earth and still returns data to NASA,” Farenhorst said. If we can do such amazing things in space, why can't we fix the problems in our own backyard?

Microbiologist Dr. Ayush Kumar said, “I had no idea we had pockets of Third World countries within our own backyard, less than 300 kilometers away.” About 80 years ago, if you cut yourself and the wound got infected, you could die. Then antibiotics were discovered, and life expectancy increased. But now bacteria are becoming more resistant, and in 2016 a superbug was discovered that resisted 26 different antibiotics. This occurred due to the overuse and misuse of antibiotics. Kumar talked about a man who had made yogurt for years with organic milk; one day he used grocery store milk, and the antibiotics in the milk killed the yogurt's good bacteria.

There is a large gap in life expectancy between First Nations people living on reserve (females 73 and males 64) and other Canadians (females 83 and males 79). Kumar found genes for antibiotic resistance, in bacteria that can cause severe skin infections, in drinking water on a First Nations reserve. E. coli bacteria were also found in water on some First Nations reserves, especially in homes that store water in cisterns or buckets. Kumar left us with two questions: Does any human being deserve to drink water like this? What do you say to a child who asks if he should be using the water to brush his teeth?

Audio podcasts are also available of seminars in this series.
Psych-Ed provides psycho-educational assessments for children, adolescents, and university/college students.

What is a psycho-educational assessment? It consists of an assessment of psychological aspects of learning and of academic skills.

Psychological aspects include:

- language skills
- working memory
- verbal and visual learning
- attention / concentration
- eye-hand coordination for paper-and-pencil tasks
- planning ability
- reflective / impulsive response style
- time management, organizational skills, flexibility, adaptability (executive functioning)

Academic skills include:

- reading (phonetic skills, sight vocabulary, reading comprehension)
- mathematics (basic numerical operations, mathematical reasoning)
- academic fluency (speed of reading, writing, calculating)
- listening comprehension
- oral expressive skills
Territory: the delimited area over which a state, an individual, or a group exercises control and which is recognized by other states, individuals, or groups.

Nation: a group of people often sharing common elements of culture such as religion or language, or a history or political identity.

States: independent political units with territorial boundaries that are internationally recognized by other states; a political organization in control of territory.

Nation-state: an ideal form consisting of a homogeneous group of people governed by their own state.

Nationalism: the feeling of belonging to a nation as well as the belief that a nation has a natural right to determine its own affairs.

Centrifugal forces: forces that divide or tend to pull the state apart.

Centripetal forces: forces that strengthen and unify the state.

Language: a means of communicating ideas or feelings by means of a conventionalized system of signs, gestures, marks, or articulate vocal sounds.

Language family: a collection of individual languages believed to be related through a common prehistoric origin. Example: the Indo-European languages in Europe. Which languages are not Indo-European? Where did their peoples come from?

Language branch: a collection of languages that possess a definite common origin but have split into individual languages. Example: the Romance languages are based on the common root of Latin, spread as the lingua franca of the Roman Empire.

Dialects: regional variations in standard languages.

Language group: a collection of several individual languages that are part of a language branch, share a common origin, and have similar grammar and vocabulary. Example: the Scandinavian languages Swedish, Danish, and Norwegian are mutually intelligible.

Resources:
- Matt Rosenberg - The Mining Company Geography
- "out of many, one"
- Independent States of the World - U.S. State Department list
- Ethnologue Database - Nations of the World Listed by Language and Country
- International and Supranational Organizations
In two days, the world's fastest-moving glacier shed almost 5 square miles' worth of ice. That's enough to cover Manhattan in about 1,000 feet of ice, the European Space Agency estimated, assuming the ice is 4,600 feet thick. The ice broke off Jakobshavn, a glacier in western Greenland that's known for losing big chunks of ice during the summer. Here's what the glacier looked like on July 31, two weeks before the ice's big move on August 14. And here it is photographed by satellite on August 16, when the ice stopped moving.

The lost ice is the result of a process called "calving." That's what happens when ice breaks off the lowest part of a glacier, in this case the eastern side. The European Space Agency says this might be the farthest inland the glacier's calving front has been pushed since the 1800s — and researchers think the glacier will keep retreating that way because it's unstable. "What is important is that the ice front, or calving front, keeps retreating inland at galloping speeds," Eric Rignot, a glaciologist at the Jet Propulsion Laboratory, said in a news release. The glacier's been speeding up over the past few years — during summer 2012, Jakobshavn moved at a rate of 10 miles per year, according to NASA's Earth Observatory. Here's the outline of the ice that fell off the glacier:

Once the ice gets separated from the main glacier, it turns into icebergs that travel down fjords — long, narrow inlets of water that feed into the ocean, which is the Atlantic in this case. The calving off the Jakobshavn glacier isn't nearly as big as the ones happening in Antarctica. Even so, the speedy glacier poses a big threat: "This glacier alone could contribute more to sea level rise than any other single feature in the Northern Hemisphere," NASA's Earth Observatory notes.
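The ESA's "cover Manhattan" figure is easy to sanity-check. A quick back-of-the-envelope calculation, using the article's 5 square miles and 4,600-foot thickness plus the assumption that Manhattan's land area is roughly 23 square miles:

```python
# Back-of-the-envelope check: spread the calved slab over Manhattan
ice_area_sq_mi = 5.0          # area of ice lost, per ESA
ice_thickness_ft = 4_600.0    # assumed thickness, per ESA
manhattan_sq_mi = 22.8        # approximate land area of Manhattan

depth_over_manhattan_ft = ice_area_sq_mi * ice_thickness_ft / manhattan_sq_mi
print(f"{depth_over_manhattan_ft:.0f} ft")  # ~1,000 ft, matching the estimate
```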
Dengue fever and Zika differ in the potentially serious complications that they can cause, however. For dengue, the complications include shock and bleeding, while Zika brings with it the risk of microcephaly, a congenital abnormality, in infants who are exposed to the virus in utero. The Medical Dictionary defines microcephaly as “An abnormally small head, which is usually associated with neurodevelopmental delay and mental retardation. A standard definition is any brain or head which is ≥ 3 standard deviations below the mean for a person’s age, sex, height, weight and race.” Another neurological disorder associated with Zika is Guillain-Barré Syndrome, defined by the National Institute of Neurological Disorders and Stroke as “a disorder in which the body’s immune system attacks part of the peripheral nervous system. The first symptoms of this disorder include varying degrees of weakness or tingling sensations in the legs. In many instances, the symmetrical weakness and abnormal sensations spread to the arms and upper body. These symptoms can increase in intensity until certain muscles cannot be used at all and, when severe, the person is almost totally paralyzed.” Most individuals affected by this virus do not experience severe symptoms, and there is no way to be sure that someone is infected or free of Zika without lab tests on blood, urine, saliva or semen. In most cases, someone who suspects a Zika infection should follow the normal treatment for colds and flus and consult a physician if symptoms worsen. Nevertheless, the potential complications of a Zika infection are serious enough to warrant taking precautions against contracting the disease.
The Water Cycle

This activity will have the students interacting with different technological devices.

- Students will be asked to sit in groups, preferably of 3 or 4. Seated around their electronic devices, they should be able to access the webpage provided in the link.
- Students should be able to differentiate between precipitation and condensation. Examples may be used for each definition. Students may share their information.
- The water cycle is important to the sustainability of living organisms. Students should be able to state and give examples of why the water cycle is beneficial to living organisms.

Creating a Rain Gauge

- Students should be asked to collect 2-liter bottles. These bottles should be transparent with no labels on them; however, students may add a sticker to identify their rain gauges.
- Teachers will provide scissors and instructions for cutting the bottles about two-thirds of the way up. The top (which includes the cork) should be retained for the rain gauge.
- The bottom of the bottle may be ridged, so modeling clay should be placed inside to give the inner part of the bottle a flat surface. This will help collect water accurately.
- The top part of the bottle is used to direct water: it is placed upside down in the other half of the bottle to act as a funnel for collecting water.
- The teacher may allow the students to store their bottles in the classroom. If the weather forecast says that the community will have rainfall on a particular day, have the students place their rain gauges outside properly before the rain falls. PLEASE DO NOT ALLOW STUDENTS TO GO OUT INTO THE RAIN TO PLACE BOTTLES!
- Students should keep rain diaries in order to track and measure rainfall in millimeters, either with a measuring cup or by placing a ruler beside the rain gauge and converting centimeters into millimeters.

Rainfall data may be submitted to the Met Office for weather reports.

Description: Rain Gauge Tutorial
Description: Information on the water cycle
Description: Water is important. Students, please drink at least four glasses of water per day.
An Up-Close Look at the Tiny Sensory Pits That Ticks Use to Smell

By Meredith Swett Walker

If you ever find a tick before it finds you—that is, when it's still hanging out on vegetation hoping you'll brush past it—you may notice the little bloodsucker waving its "arms in the air like it just don't care." But ticks aren't fans of 1980s hip hop. They're waving their arms because they are trying to get a whiff of you. While insects primarily smell with their antennae, ticks are not insects; rather, they're arachnids, and they don't have antennae. Instead, a tick smells using a structure on its forelegs called the Haller's organ.

The Haller's organ is described as a tiny "sensory pit" that can detect chemicals like carbon dioxide, ammonia, or pheromones. It can even sense humidity and infrared light, which includes the body heat emitted by the warm, blood-filled creatures that the tick wants to find. Despite the importance of the Haller's organ in ticks' ability to find hosts, it hasn't been described in detail for many important tick species. But in research published in December in the Journal of Medical Entomology, Tanya Josek, Brian Allan, Ph.D., and Marianne Alleyne, Ph.D., of the University of Illinois examine the Haller's organ in three medically important species of tick: the blacklegged tick (Ixodes scapularis), the lone star tick (Amblyomma americanum), and the American dog tick (Dermacentor variabilis). Each of these species is an important disease vector: I. scapularis transmits the bacterium that causes Lyme disease, A. americanum transmits the bacteria that cause ehrlichiosis, and D. variabilis transmits the bacterium that causes Rocky Mountain spotted fever. A better understanding of how these ticks find their hosts may aid in reducing disease transmission.

Josek, a graduate student in Alleyne's lab, was interested in arthropod sensory structures. The Haller's organ seemed a great research topic: relatively little was known about it, and the potential findings could have important applications in real-world tick-control efforts. The only problem? Josek had a phobia of ticks stemming from some bad experiences in childhood. "This was a complete shock to me because I have loved spiders and insects my whole life," she says. But Josek conquered her fear with knowledge, reading up on where you're most likely to pick up the parasites and how long it takes them to transmit disease. She trained with Allan to learn tick collection skills. Then, arming herself with a white Tyvek suit and ample duct tape, she set out into the forests and fields of Illinois in search of ticks.

Previous studies of the Haller's organ were mostly qualitative, which, while informative, is not useful for making quantitative comparisons between species. In addition, recent advances in scanning electron microscope technology provided an opportunity to look more closely at the tiny structure. Josek and her collaborators used environmental scanning electron microscopy (ESEM) to get high-resolution images of Haller's organs. The team focused on the overall shape of the Haller's organ in each tick species as well as the organ's major components: the capsule aperture, the anterior pit, and the number of setae and sensilla (hair-like structures) in the pit. They analyzed their images using geometric morphometrics, a technique that essentially translates shape measurements into data that can be used in quantitative comparisons.
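For readers curious what that kind of analysis looks like in practice: geometric morphometrics typically begins with a Procrustes superimposition, which removes position, scale, and rotation so that only shape differences remain. A minimal sketch using SciPy (the landmark coordinates below are made-up placeholders, not data from the study):

```python
import numpy as np
from scipy.spatial import procrustes

# Hypothetical 2-D landmark coordinates digitized from two Haller's organ
# images (rows are landmarks, columns are x/y); real studies use many more.
organ_a = np.array([[0.0, 0.0], [1.0, 0.1], [1.2, 0.9], [0.1, 1.0]])
organ_b = np.array([[0.2, 0.1], [1.4, 0.3], [1.5, 1.2], [0.3, 1.3]])

# Procrustes analysis translates, scales, and rotates the configurations
# to best match, returning a "disparity" (sum of squared differences)
# that quantifies how different the two shapes are.
mtx_a, mtx_b, disparity = procrustes(organ_a, organ_b)
print(f"shape disparity: {disparity:.4f}")
```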
They found that the structure of the Haller's organ was significantly different in each species of tick. In one species, D. variabilis, the morphology of the Haller's organ also differed significantly between females and males. This study did not test the consequences of these differences in shape, but the detailed quantitative measurements it provides can serve as a basis for future studies of functional differences in the Haller's organ between species.

I. scapularis, A. americanum, and D. variabilis have different preferred hosts and different strategies for finding them. The differences in the structure of their Haller's organs may reflect this. In order to understand how Haller's organ morphology relates to tick life histories, "within-genus comparisons, as well as comparisons between ticks with different host-seeking behaviors or between ticks that have a generalist or specialist host-range," are necessary, say the authors.

Meanwhile, Josek is working on determining the specific chemicals, infrared wavelengths, or humidity variables the Haller's organ is sensing. Josek and Alleyne are also looking at the organ from a genomics perspective and will soon publish a paper that examines the chemoreceptors and binding proteins expressed in the Haller's organ. Alleyne's main research interest is bioinspired design, and, while she's not a huge fan of ticks, she finds inspiration in the Haller's organ. "To think that arthropods have an exoskeleton that protects them from the environment ... and yet are able to sense minute amounts of chemicals is amazing to me. The Haller's organ is an example of a multi-functional sensor that is very sensitive yet rather simple in design, compared to a vertebrate's nose, for instance," she says.

If we can better understand the structure ticks use to find us (and their other hosts), we might devise ways to elude them. This could reduce transmission of serious diseases, as well as make ticks less of a creepy problem for people who work or play outdoors. Josek has overcome her tick phobia and is now comfortable collecting and handling ticks, but "I still have the occasional nightmare about them," she admits.

Journal of Medical Entomology

Meredith Swett Walker is a former avian endocrinologist who now studies the development and behavior of two juvenile humans in the high desert of western Colorado. When she is not handling her research subjects, she writes about science and nature. You can read her work on her blogs Pica Hudsonia and The Citizen Biologist or follow her on Twitter at @mswettwalker.
Today I took the bull by the horns and tried to develop a model of the magnetic field with my grade 12 physics class. I was excited about this, since my students typically have trouble visualizing magnetic fields, and we often resort to mnemonics such as blindly applying the Right Hand Rule. We were able to get quite a bit out of a simple set-up: a 40 cm vertical wire supplied with 0 to 2 A of current. I used a rheostat as a resistor and a multimeter to measure the current.

1. On a piece of cardstock, we used a small compass to find the direction of the magnetic field near the wire by comparing (vector subtracting) the direction of the compass arrow in the background magnetic field of Earth and the direction of the arrow in the additional field generated by the wire. This is well explained in the E&M section of the modeling materials.

2. We repeated this on two more pieces of cardstock. This established the form of the magnetic field near a straight wire. By looking at the negative and positive terminals of the power supply, we established the straight-wire Right Hand Rule. We switched the leads and saw that the RHR still held.

3. We rolled the wire into a single coil and talked about the structure of the magnetic field that would result. We checked that prediction and then saw that the RHR for the straight wire turns into the RHR for a coil.

4. I set out some small whiteboards and we used the compass to determine the direction of the magnetic field near a solenoid. This led to drawing field lines, and at this point we agreed on some conventions for drawing field diagrams for magnetic fields, and went back to draw them for coils and wires.

5. I held a couple of strong magnets on opposite sides of the vertical wire. With the current switched on, the wire moves. We pretended the force resisting the movement was spring-like, so the displacement was proportional to the force. Then we varied the current to find that F = kI.

6. By holding lines of magnets beside the wire, we were sort of able to convince ourselves that F = kL. We ran out of time, so I asserted the other two dependencies, giving us F = BIL sin θ.

Pretty successful! Here's a better view of the field around the solenoid.
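For anyone who wants to put numbers to the final relationship, here's a quick sketch of F = BIL sin θ in Python. The field strength is an arbitrary illustrative value, not something we measured in class:

```python
import math

def wire_force_n(b_tesla: float, current_a: float, length_m: float,
                 angle_deg: float = 90.0) -> float:
    """Force on a straight current-carrying wire: F = B * I * L * sin(theta)."""
    return b_tesla * current_a * length_m * math.sin(math.radians(angle_deg))

# e.g. a 0.05 T field across the full 0.40 m wire carrying 2 A:
print(f"{wire_force_n(0.05, 2.0, 0.40):.3f} N")  # 0.040 N at 90 degrees
```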
Server: Web vs. Application

A server is a device or a computer program that accepts and responds to requests made by another program, known as a client. It is used to manage network resources and to run the programs or software that provide services. There are two types of servers:

Web Server

A web server contains only a web (servlet) container. It can be used for servlets, JSP, Struts, JSF, etc., but it can't be used for EJB. It is a computer where web content is stored. In general, a web server is used to host websites, but web servers are also used for other services such as FTP, email, storage, and gaming. Examples of web servers are Apache Tomcat and Resin.

Web Server Working

A web server can respond to a client's request in either of two ways: by sending the client the file associated with the requested URL, or by generating a response by invoking a script and communicating with a database. The block diagram representation of a web server is shown below:

Application Server

An application server contains both web and EJB containers. It can be used for servlets, JSP, Struts, JSF, EJB, etc. It is a component-based product that lies in the middle tier of a server-centric architecture. It provides middleware services for state maintenance and security, along with persistence and data access. It is a type of server designed to install, operate, and host associated services and applications for IT services, end users, and organizations. The block diagram representation of an application server is shown below:

Examples of application servers are JBoss and GlassFish.
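To make the static-versus-dynamic distinction in "Web Server Working" concrete, here is a minimal sketch using Python's standard library (illustrative only; Java-based containers such as Tomcat serve servlets rather than using this API). The same server answers one URL with fixed content and every other URL with a response computed per request:

```python
from http.server import HTTPServer, BaseHTTPRequestHandler
from datetime import datetime

class DemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/static":
            # Static response: fixed content, as if read from a file on disk
            body = b"This content is the same for every visitor.\n"
        else:
            # Dynamic response: generated by server-side logic per request
            body = f"You requested {self.path} at {datetime.now():%H:%M:%S}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Visit http://localhost:8000/static and http://localhost:8000/anything
    HTTPServer(("localhost", 8000), DemoHandler).serve_forever()
```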
The world has come to the point where it needs more energy than ever before, as energy demand grows rapidly on a global scale. But the world doesn't just need energy; it needs energy gained from renewable and ecologically acceptable sources that don't cause major ecological problems such as global warming and air pollution. Ocean energy could well be one of these new renewable energy sources and should play a more significant role in upcoming years. Oceans cover more than 70% of Earth's surface, and they therefore present an interesting energy source that may, with time, provide us with energy to power our households and industrial facilities.

At this moment, ocean energy is a very rarely used renewable energy source: there are only a few ocean energy power plants, most of them very small, so energy gained from oceans is literally negligible on a global scale. But the future should bring more attention to this renewable energy source, and there should be a significant increase in produced energy, especially with more attention on the renewable energy sector.

There are three basic ways to use the ocean for its energy. We can use the waves (wave energy, wave power), ocean tidal power (ocean high and low tides), and we can even use temperature differences in the water to create energy (Ocean Thermal Energy Conversion, OTEC).

WAVE ENERGY (WAVE POWER)

Ocean wave energy is a form of kinetic energy that exists in the moving waves of the ocean, since waves are caused by winds blowing over the surface of the ocean. This energy can be used to power a turbine, and there are many areas in the world where the wind blows with sufficient consistency to provide continuous waves. There is tremendous energy in wave power, which gives this energy source gigantic potential. Wave energy is captured directly from surface waves or from pressure fluctuations below the surface. This energy can then be used to power a turbine, and the simplest, most commonly used working principle is as follows: first the wave rises into a chamber, then the rising water forces the air out of the chamber, and the moving air spins a turbine which turns a generator.

The main problem with wave energy is that the resource isn't the same in all parts of the world, since it varies significantly from place to place. This is the reason why wave energy can't be exploited everywhere, but many researchers are working on solutions to this variability problem. There are still many wave-power-rich areas in the world, like the western coasts of Scotland, northern Canada, southern Africa, Australia, and the northwestern coasts of the United States, all with high potential for wave power exploitation.

There are many different technologies to capture wave power, but very few of them are commercial enough to be fully used. Wave technologies are installed not only near shore and offshore but also in far-offshore locations, and the emphasis of new research projects such as "The OCS Alternative Energy Programmatic EIS" is particularly on offshore and far-offshore wave technologies, where systems are located in deep water, at depths exceeding even 40 meters.
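The scale of the wave resource can be estimated with the standard deep-water approximation for wave energy flux, P = ρg²H²T/(64π) per metre of wave crest, where H is the significant wave height and T the energy period. A small sketch (the sea-state numbers are illustrative):

```python
import math

def wave_power_kw_per_m(height_m: float, period_s: float,
                        rho: float = 1025.0, g: float = 9.81) -> float:
    """Deep-water wave energy flux per metre of wave crest, in kW/m.

    P = rho * g**2 * H**2 * T / (64 * pi), the standard deep-water
    approximation with significant wave height H and energy period T.
    """
    watts_per_m = rho * g**2 * height_m**2 * period_s / (64 * math.pi)
    return watts_per_m / 1000.0

# A moderate swell: 2 m significant height, 8 s period -> roughly 16 kW/m
print(f"{wave_power_kw_per_m(2.0, 8.0):.1f} kW per metre of crest")
```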
The majority of wave energy technologies are still oriented toward installation at or near the water surface. The main differences between these technologies lie in their orientation to the waves with which they interact and in the working principle by which they convert the energy of the waves into the desired energy form. Among the most popular wave energy technologies are terminator devices, point absorbers, attenuators, and overtopping devices.

Terminator devices, such as the oscillating water column, are typically onshore or near shore. They extend perpendicular to the direction of wave travel and capture or reflect the power of the wave; the captured water column then moves up and down like a piston, forcing air through an opening connected to a turbine.

Point absorbers are floating structures with components that move relative to each other because of wave action; energy is produced as this movement drives electromechanical or hydraulic energy converters.

Attenuators are also floating structures, but they are oriented parallel to the direction of the waves. Differing wave heights along the length of the device cause flexing where the segments connect, and this flexing is coupled to hydraulic pumps or other converters that transform it into energy.

Overtopping devices have a different working principle: they are basically reservoirs filled by incoming waves to levels above the average of the surrounding ocean. When the water is released, gravity pulls it back toward the ocean surface, and the energy of the falling water is used to turn hydro turbines.

While the high potential of wave energy is undisputed, certain other aspects need to be considered, such as environmental impacts: these technologies can affect marine habitat, and there is a potential danger of toxic releases into the sea in the form of hydraulic fluids, noise generation above and below the water surface, changes to the seafloor, and so on.

TIDAL ENERGY (TIDAL POWER)

Another type of ocean energy is tidal energy: when tides come into the shore, they can be trapped in reservoirs behind dams. Tidal power is actually a form of hydropower that exploits the movement of water caused by tidal currents or the rise and fall in sea levels. Tidal energy is produced using tidal energy generators, large underwater turbines placed in areas with high tidal movements and designed to capture the kinetic motion of the ebbing and surging of ocean tides in order to produce electricity. Tidal power has enormous potential for future electricity generation because of the massive size of the oceans. The potential of tidal power has been recognized for a very long time (small dams were built along oceans as early as the 11th century). However, compared to river dams, tidal-power projects are much more expensive, since massive structures must be built in a difficult saltwater environment. Cost effectiveness is actually the main reason tidal power hasn't yet found its place among the top renewable energy sources despite its huge potential. In order to function at a sufficient level, tidal power needs a very large tidal range, of at least 16 feet between low tide and high tide, and this is the main reason there aren't many areas on Earth that meet this demand.
However, one of these areas is definitely the Rance estuary in northern France, home to the La Rance station, the largest tidal power station in the world (and the only one in Europe), which provides enough energy to satisfy the demands of 240,000 homes in France. The capacity of this tidal power plant is approximately one fifth that of a regular nuclear or coal-powered plant. The main problem of all tidal power plants is that they can only generate when the tide is flowing in or out, which amounts to only about 10 hours per day. However, there is also the advantage that tides are totally predictable, so we can plan to have other power stations generating at those times when the tidal station is out of action, which is something that can't be done with certain other renewable energy resources (wind energy).

Tidal energy has many advantages: it is a renewable energy source, since tides will continue to ebb and flow; it produces no greenhouse gases or any waste; it needs no fuel in order to work; and since tides are totally predictable, it can produce electricity reliably and, once built, is not expensive to maintain. But there are also some negative sides. Cost effectiveness is still a very serious issue, since building one of these power plants requires a very wide area, and this also brings environmental problems, since it completely changes the environment of the area and affects the life of many ecosystems, especially for birds that rely on the tide uncovering the mud flats so that they can find food. There is also the already-mentioned limited working time of only about 10 hours per day, when the tide is actually moving.

OCEAN THERMAL ENERGY CONVERSION (OTEC)

Ocean Thermal Energy Conversion is a method for generating electricity that uses the temperature difference that exists between deep and shallow waters, since the water gets colder the deeper you go. The bigger the temperature difference, the greater the efficiency of this method; the temperature difference must be at least 38 degrees Fahrenheit between the warmer surface water and the colder deep-ocean water for this method to be efficient. This method has a very long history dating to the 19th century, and some energy experts believe that if it could become cost-competitive with conventional power technologies, OTEC could produce gigawatts of electrical power. That still isn't the case today, as an Ocean Thermal Energy Conversion power plant requires an expensive, large-diameter intake pipe, submerged a kilometer or more into the ocean's depths in order to bring very cold water to the surface, and that is of course very expensive.

The types of OTEC systems are as follows:

- Closed-Cycle: Closed-cycle systems use a fluid with a low boiling point, mostly ammonia, to rotate a turbine which then generates electricity. Warm surface seawater is pumped through a heat exchanger where the low-boiling-point fluid is vaporized, and the expanding vapor drives the turbo-generator. Cold deep seawater is pumped through a second heat exchanger where it condenses the vapor back into a liquid, which is then recycled through the system. In 1979, the Natural Energy Laboratory, with several private-sector partners, developed the mini OTEC experiment, which achieved the first successful at-sea production of net electrical power from closed-cycle OTEC.
The mini OTEC vessel was moored 1.5 miles (2.4 km) off the Hawaiian coast and produced enough electricity to illuminate the ship's light bulbs and run its computers and televisions. And in 1999, the Natural Energy Laboratory tested a 250-kW pilot closed-cycle OTEC plant, the largest such plant ever put into operation.

- Open-Cycle: Open-cycle systems use the tropical oceans' warm surface water to make electricity: when warm seawater is placed in a low-pressure container, it boils. The expanding steam drives a low-pressure turbine attached to an electrical generator and is finally condensed back into a liquid by exposure to the cold temperatures of deep-ocean water. In 1984, the Solar Energy Research Institute (today called the National Renewable Energy Laboratory) developed a so-called "vertical-spout evaporator" to convert warm seawater into low-pressure steam for open-cycle plants. The potential of open-cycle systems was well acknowledged after energy conversion efficiencies as high as 97% were achieved, and in May 1993 an open-cycle OTEC plant at Keahole Point, Hawaii, produced 50,000 watts of electricity during its testing procedure.

- Hybrid: Hybrid systems are designed to combine the positive features of both the closed-cycle and open-cycle systems. In a hybrid system, warm seawater enters a vacuum chamber where it is evaporated into steam (a procedure very similar to the open-cycle evaporation process), and the steam then vaporizes a low-boiling-point fluid (in a closed-cycle loop) that drives a turbine to produce electricity.

Ocean Thermal Energy Conversion has great potential for generating electricity, and it offers other benefits such as air conditioning and aquaculture. Air conditioning can be produced as a byproduct: used cold seawater from an OTEC plant can either chill fresh water in a heat exchanger or flow directly into a cooling system. There is also aquaculture, since cold-water species such as salmon and lobster thrive in the nutrient-rich deep seawater brought up by the OTEC process. However, there are also negative sides, especially in cost effectiveness, since OTEC power plants require large initial investments; there are also environmental issues that need to be addressed, which can be done with appropriate spacing of OTEC plants. Another factor preventing the commercialization of OTEC is that there are only a few hundred land-based sites in the tropics where deep-ocean water is close enough to shore to make OTEC plants feasible projects.

Ocean energy is a renewable energy sector that surely needs more research to satisfy the condition of cost-effectiveness, which is at this point its biggest flaw. Since oceans cover almost two thirds of Earth's surface, they truly present a renewable energy source with extreme potential, one worth further exploration. However, current technologies aren't at the level required to capture this potential, though as the world looks for alternatives to the dominant fossil fuels sector, much research has been done in different renewable energy sectors, including the ocean energy sector. Problems with the size of these power plants, and the cost effectiveness that goes with that size, do stand out, but there are also ecological demands that need to be fulfilled in order to keep the environment as intact as possible.
And though this renewable energy sector hasn't had rapid growth like some other renewable energy sectors (wind energy), a few of its projects, such as the open-cycle OTEC plant at Keahole Point, Hawaii, have shown good signs of its great potential, so the ocean energy sector could have more significance in years to come. The potential is there; all ocean energy needs now is technology capable of exploiting that high potential.
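Two of the figures quoted above can be sanity-checked with short calculations. First, the importance of the 16-foot tidal range: the recoverable energy of a barrage scheme scales with the square of the range, since the stored potential energy of a filled basin is roughly E = ½ρgAh². A sketch with illustrative numbers (the basin size and range are assumptions of roughly La Rance scale, not published specifications):

```python
def tidal_barrage_energy_mwh(basin_area_km2: float, tidal_range_m: float,
                             rho: float = 1025.0, g: float = 9.81) -> float:
    """Theoretical potential energy released by emptying a tidal basin once.

    E = 1/2 * rho * g * A * h**2 -- the centre of mass of the trapped
    water drops by half the tidal range h as the basin empties.
    """
    area_m2 = basin_area_km2 * 1e6
    joules = 0.5 * rho * g * area_m2 * tidal_range_m**2
    return joules / 3.6e9  # J -> MWh

# Illustrative figures: a 22 km^2 basin and an 8 m range -> roughly 2,000 MWh
print(f"{tidal_barrage_energy_mwh(22, 8):,.0f} MWh per tide cycle")
```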
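Second, OTEC's minimum temperature difference of about 38°F (roughly 21°C): even an ideal heat engine operating across typical tropical surface and deep-water temperatures is limited to a few percent efficiency, which is why such a large difference, and so much pumped water, is needed. A quick Carnot-limit check (the temperatures are typical textbook values, not measurements from a specific plant):

```python
def carnot_efficiency(t_warm_c: float, t_cold_c: float) -> float:
    """Upper bound on thermal efficiency for any heat engine (Carnot limit)."""
    t_warm_k = t_warm_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_warm_k

# Typical OTEC conditions: 25 C surface water vs 5 C deep water -> about 6.7 %
print(f"{carnot_efficiency(25.0, 5.0):.1%}")
```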
Attempts to standardize the measurement of type began in the eighteenth century. The point system is the standard used today. One point equals 1/72 inch or .35 millimeters. Twelve points equal one pica, the unit commonly used to measure column widths. Typography can also be measured in inches, millimeters, or pixels. Most software applications let the designer choose a preferred unit of measure; picas and points are standard defaults.

Nerd Alert: Abbreviating Picas and Points
8 picas = 8p
8 points = p8, 8 pts
8 picas, 4 points = 8p4
8-point Helvetica with 9 points of line spacing = 8/9 Helvetica

A letter also has a horizontal measure, called its set width. The set width is the body of the letter plus a sliver of space that protects it from other letters. The width of a letter is intrinsic to the proportions and visual impression of the typeface. Some typefaces have a narrow set width, and some have a wide one. You can change the set width of a typeface by fiddling with its horizontal or vertical scale. This distorts the line weight of the letters, however, forcing heavy elements to become thin, and thin elements to become thick. Instead of torturing a letterform, choose a typeface that has the proportions you are looking for, such as condensed, compressed, wide, or extended.

All the typefaces shown below were inspired by the sixteenth-century printing types of Claude Garamond, yet each one reflects its own era. The lean forms of Garamond 3 appeared during the Great Depression, while the inflated x-height of ITC Garamond became an icon of the flamboyant 1970s.

A type family with optical sizes has different styles for different sizes of output. The graphic designer selects a style based on context. Optical sizes designed for headlines or display tend to have delicate, lyrical forms, while styles created for text and captions are built with heavier strokes.

Scale is the size of design elements in comparison to other elements in a layout as well as to the physical context of the work. Scale is relative. 12-pt type displayed on a 32-inch monitor can look very small, while 12-pt type printed on a book page can look flabby and overweight. Designers create hierarchy and contrast by playing with the scale of letterforms. Changes in scale help create visual contrast, movement, and depth as well as express hierarchies of importance. Scale is physical. People intuitively judge the size of objects in relation to their own bodies and environments.

the xix amendment Typographic installation at Grand Central Station, New York City, 1995. Designer: Stephen Doyle. Sponsors: The New York State Division of Women, the Metropolitan Transportation Authority, Revlon, and Merrill Lynch. Large-scale text creates impact in this public installation.

A basic system for classifying typefaces was devised in the nineteenth century, when printers sought to identify a heritage for their own craft analogous to that of art history. Humanist letterforms are closely connected to calligraphy and the movement of the hand. Transitional and modern typefaces are more abstract and less organic. These three main groups correspond roughly to the Renaissance, Baroque, and Enlightenment periods in art and literature. Historians and critics of typography have since proposed more finely grained schemes that attempt to better capture the diversity of letterforms.
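An aside on the measurement system described at the start of this section: the pica-and-point abbreviations reduce to a couple of constants, sketched here in Python.

```python
POINTS_PER_PICA = 12
POINTS_PER_INCH = 72

def to_points(picas: int, points: int = 0) -> int:
    """Convert a pica-and-point measure (e.g. 8p4) to points."""
    return picas * POINTS_PER_PICA + points

def points_to_inches(points: float) -> float:
    return points / POINTS_PER_INCH

# 8p4 = 100 pt; a 100 pt measure is about 1.39 inches
pts = to_points(8, 4)
print(pts, round(points_to_inches(pts), 2))
```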
Designers in the twentieth and twenty-first centuries have continued to create new typefaces based on historic characteristics.

In the sixteenth century, printers began organizing roman and italic typefaces into matched families. The concept was formalized in the early twentieth century to include styles such as bold, semibold, and small caps.

mcsweeney's Magazine cover, 2002. Design: Dave Eggers. This magazine cover uses the Garamond 3 typeface family in various sizes. Although the typeface is classical and conservative, the obsessive, slightly deranged layout is distinctly contemporary.

A traditional roman book face typically has a small family: an intimate group consisting of roman, italic, small caps, and possibly bold and semibold (each with an italic variant) styles. Sans-serif families often come in many more weights and sizes, such as thin, light, black, compressed, and condensed. A superfamily consists of dozens of related fonts in multiple weights and/or widths, often with both sans-serif and serif versions. Small capitals and non-lining numerals (once found only in serif fonts) are included in the sans-serif versions of Thesis, Scala Pro, and many other contemporary superfamilies.

univers was designed by the Swiss typographer Adrian Frutiger in 1957. He designed twenty-one versions of Univers, in five weights and five widths. Whereas some type families grow over time, Univers was conceived as a total system from its inception.

TRILOGY, a superfamily designed by Jeremy Tankard in 2009, is inspired by three nineteenth-century type styles: sans serif, Egyptian, and fat face. The inclusion of the fat face style, with its wafer-thin serifs and ultrawide verticals, gives this family an unusual twist.

A word set in ALL CAPS within running text can look big and bulky, and A LONG PASSAGE SET ENTIRELY IN CAPITALS CAN LOOK UTTERLY INSANE. Small capitals are designed to match the x-height of lowercase letters. Designers, enamored with the squarish proportions of true small caps, employ them not only within bodies of text but for subheads, bylines, invitations, and more. Rather than Mixing Small Caps with Capitals, many designers prefer to use all small caps, creating a clean line with no ascending elements. InDesign and other programs allow users to create FALSE SMALL CAPS at the press of a button; these SCRAWNY LETTERS look out of place.

amusement magazine Design: Alice Litscher, 2009. This French culture magazine employs a startling mix of tightly leaded Didot capitals in roman and italic. Running text is set in Glypha.

Combining typefaces is like making a salad. Start with a small number of elements representing different colors, tastes, and textures. Strive for contrast rather than harmony, looking for emphatic differences rather than mushy transitions. Give each ingredient a role to play: sweet tomatoes, crunchy cucumbers, and the pungent shock of an occasional anchovy. When mixing typefaces on the same line, designers usually adjust the point size so that the x-heights align. When placing typefaces on separate lines, it often makes sense to create contrast in scale as well as style or weight. Try mixing big, light type with small, dark type for a criss-cross of contrasting flavors and textures.

the word: new york magazine Design: Chris Dixon, 2010.
This content-intensive page detail mixes four different type families from various points in history, ranging from the early advertising face Egyptian Bold Condensed to the functional contemporary sans Verlag. These diverse ingredients are mixed here at different scales to create typographic tension and contrast.

Lining numerals take up uniform widths of space, enabling the numbers to line up when tabulated in columns. They were introduced around the turn of the twentieth century to meet the needs of modern business. Lining numerals are the same height as capital letters, so they sometimes look big and bulky when appearing in running text.

Non-lining numerals, also called text or old style numerals, have ascenders and descenders, like lowercase letters. Non-lining numerals returned to favor in the 1990s, valued for their idiosyncratic appearance and their traditional typographic attitude. Like letterforms, old style numerals are proportional; each one has its own set width.

monthly calendar, 1892 The charming numerals in this calendar don't line up into neat columns, because they have varied set widths. They would not be suitable for setting modern financial data.

A well-designed comma carries the essence of the typeface down to its delicious details. Helvetica's comma is a chunky square mounted to a jaunty curve, while Bodoni's is a voluptuous, thin-stemmed orb. Designers and editors need to learn various typographic conventions in addition to mastering the grammatical rules of punctuation. A pandemic error is the use of straight prime or hatch marks (often called dumb quotes) in place of apostrophes and quotation marks (also known as curly quotes, typographer's quotes, or smart quotes). Double and single quotation marks are represented with four distinct characters, each accessed with a different keystroke combination. Know thy keystrokes! It usually falls to the designer to purge the client's manuscript of spurious punctuation.

type crimes: new york city tour City streets have become a dangerous place. Millions of dollars a year are spent producing commercial signs that are fraught with typographic misdoings. While some of these signs are cheaply made over-the-counter products, others were designed for prominent businesses and institutions. There is no excuse for such gross negligence.

gettin' it right Apostrophes and quotation marks are sometimes called curly quotes. Here, you can enjoy them in a meat-free environment.

gettin' it wrong The correct use of hatch marks is to indicate inches and feet. Alas, this pizza is the hapless victim of a misplaced keystroke. In InDesign or Illustrator, use the Glyphs palette to find hatch marks when you need them.

Writers or clients often supply manuscripts that employ incorrect dashes or faulty word spacing. Consult a definitive work such as The Chicago Manual of Style for a complete guide to punctuation. The following rules are especially pertinent for designers.

word spaces are created by the space bar. Use just one space between sentences or after a comma, colon, or semicolon. One of the first steps in typesetting a manuscript is to purge it of all double spaces. Thus the space bar should not be used to create indents or otherwise position text on a line. Use tabs instead. HTML refuses to recognize double spaces altogether.

en spaces are wider than word spaces.
An en space can be used to render a more emphatic distance between elements on a line: for example, to separate a subhead from the text that immediately follows, or to separate elements gathered along a single line in a letterhead.

em dashes express strong grammatical breaks. An em dash is one em wide: the width of the point size of the typeface. In manuscripts, dashes are often represented with a double hyphen (--); these must be replaced.

en dashes serve primarily to connect numbers (1–10). An en is half the width of an em. Manuscripts rarely employ en dashes, so the designer needs to supply them.

hyphens connect linked words and phrases, and they break words at the ends of lines. Typesetting programs break words automatically. Disable auto hyphenation when working with ragged or centered text; use discretionary hyphens instead, and only when unavoidable.

discretionary hyphens, which are inserted manually to break lines, only appear in the document if they are needed. (If a text is reflowed in subsequent editing, a discretionary hyphen will disappear.) Wayward hyphens often occur in the mid-dle of a line when the typesetter has inserted a "hard" hyphen instead of a discretionary one.

quotation marks have distinct "open" and "closed" forms, unlike hatch marks, which are straight up and down. A single close quote also serves as an apostrophe ("It's Bob's font."). Prime or hatch marks should only be used to indicate inches and feet (5'2''). Used incorrectly, hatches are known as "dumb quotes." Although computer operating systems and typesetting programs often include automatic "smart quote" features, e-mailed, word-processed, and/or client-supplied text can be riddled with dumb quotes. Auto smart quote programs often render apostrophes upside down (‘tis instead of ’tis), so designers must be vigilant and learn the necessary keystrokes.

ellipses consist of three periods, which can be rendered with no spaces between them, or with open tracking (letterspacing), or with word spaces. An ellipsis indicates an omitted section in a quoted text or...a temporal break. Most typefaces include an ellipsis character, which presents closely spaced points.

FontLab and other applications allow designers to create functional fonts that work seamlessly with standard software programs such as InDesign and Photoshop.

The first step in designing a typeface is to define a basic concept. Will the letters be serif or sans serif? Will they be modular or organic? Will you construct them geometrically or base them on handwriting? Will you use them for display or for text? Will you work with historic source material or invent the characters more or less from scratch?

The next step is to create drawings. Some designers start with pencil before working digitally, while others build their letterforms directly with font-design software. Begin by drawing a few core letters, such as o, u, h, and n, building curves, lines, and shapes that will reappear throughout the font. All the letters in a typeface are distinct from each other, yet they share many attributes, such as x-height, line weight, stress, and a common vocabulary of forms and proportions.

You can control the spacing of the typeface by adding blank areas next to each character as well as creating kerning pairs that determine the distance between particular characters. Producing a complete typeface is an enormous task. However, for people with a knack for drawing letterforms, the process is hugely rewarding.

castaways Drawing and finished type, 2001.
Art and type direction: Andy Cruz. Typeface design: Ken Barber/House Industries. Font engineering: Rich Roat. House Industries is a digital type foundry that creates original typefaces inspired by popular culture and design history. Designer Ken Barber makes pencil drawings by hand and then digitizes the outlines. Castaways is from a series of typefaces based on commercial signs from Las Vegas. The shapes of the letters recall the handpainted strokes made by traditional sign painters and lettering artists.

mercury bold Page proof and screen shot, 2003. Design: Jonathan Hoefler/Hoefler & Frere-Jones. Mercury is a typeface designed for modern newspapers, whose production demands fast, high-volume printing on cheap paper. The typeface's bullet-proof letterforms feature chunky serifs and sturdy upright strokes. The notes marked on the proof below comment on everything from the width or weight of a letter to the size and shape of a serif. Many such proofs are made during the design process.

In a digital typeface, each letterform consists of a series of curves and lines controlled by points. In a large type family, different weights and widths can be made automatically by interpolating between extremes such as light and heavy or narrow and wide. The designer then adjusts each variant to ensure legibility and visual consistency.

Create a prototype for a bitmap typeface by designing letters on a grid of squares or a grid of dots. Substitute the curves and diagonals of traditional letterforms with gridded and rectilinear elements. Avoid making detailed "staircases," which are just curves and diagonals in disguise. This exercise looks back to the 1910s and 1920s, when avant-garde designers made experimental typefaces out of simple geometric parts. The project also speaks to the structure of digital technologies, from cash register receipts and LED signs to on-screen font display, showing that a typeface is a system of elements. Examples of student work from Maryland Institute College of Art.

Where do fonts come from, and why are there so many different formats? Some come loaded with your computer's operating system, while others are bundled with software packages. A few of these widely distributed typefaces are of the highest quality, such as Adobe Garamond Pro and Hoefler Text, while others (including Comic Sans, Apple Chancery, and Papyrus) are reviled by design snobs everywhere. If you want to expand your vocabulary beyond this familiar fare, you will need to purchase fonts from digital type foundries. These range from large establishments like Adobe and FontShop, which license thousands of different typefaces, to independent producers that distribute just a few, such as Underware in the Netherlands or Jeremy Tankard Typography in the U.K. You can also learn to make your own fonts as well as find fonts that are distributed for free online. The different font formats reflect technical innovations and business arrangements developed over time. Older font formats are still generally usable on modern operating systems.

PostScript/Type 1 was developed for desktop computer systems in the 1980s by Adobe. Type 1 fonts are output using the PostScript programming language, created for generating high-resolution images on paper or film. A Type 1 font consists of two files: a screen font and a printer font. You must install both files in order to fully use these fonts.

TrueType is a later font format, created by Apple and Microsoft for use with their operating systems.
TrueType fonts are easier to install than Type 1 fonts because they consist of a single font file rather than two.

OpenType, a format developed by Adobe, works on multiple platforms. Each file supports up to 65,000 characters, allowing multiple styles and character variations to be contained in a single font file. In a TrueType or Type 1 font, small capitals, alternate ligatures, and other special characters must be contained in separate font files (sometimes labeled "Expert"); in an OpenType font they are part of the main font. These expanded character sets can also include accented letters and other special glyphs needed for typesetting a variety of languages. OpenType fonts with expanded character sets are commonly labeled "Pro." OpenType fonts also automatically adjust the position of hyphens, brackets, and parentheses for letters set in all-capitals.

typeface or font? A typeface is the design of the letterforms; a font is the delivery mechanism. In metal type, the design is embodied in the punches from which molds are made. A font consists of the cast metal printing types. In digital systems, the typeface is the visual design, while the font is the software that allows you to install, access, and output the design. A single typeface might be available in several font formats. In part because the design of digital typefaces and the production of fonts are so fluidly linked today, most people use the terms interchangeably. Type nerds insist, however, on using them precisely.

character or glyph? Type designers distinguish characters from glyphs in order to comply with Unicode, an international system for identifying all of the world's recognized writing systems. Only a symbol with a unique function is considered a character and is thus assigned a code point in Unicode. A single character, such as a lowercase a, can be embodied by several different glyphs (a, a, a). Each glyph is a specific expression of a given character.

Roman or roman? The Roman Empire is a proper noun and thus is capitalized, but we identify roman letterforms, like italic ones, in lowercase. The name of the Latin alphabet is capitalized.
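As a practical footnote to the punctuation rules above: every typographic character discussed there has its own Unicode code point, which is how fonts and layout software tell an apostrophe from a hatch mark. A small Python reference sketch (the labels are informal):

```python
# Unicode code points for the typographic characters discussed above
GLYPHS = {
    "em dash":      "\u2014",  # strong grammatical break
    "en dash":      "\u2013",  # connects numbers (1-10)
    "hyphen":       "\u2010",  # links words; keyboards usually type U+002D
    "open double":  "\u201C",
    "close double": "\u201D",
    "open single":  "\u2018",
    "close single": "\u2019",  # also serves as the apostrophe
    "prime (feet)": "\u2032",
    "double prime": "\u2033",  # inches
    "ellipsis":     "\u2026",
}
for name, ch in GLYPHS.items():
    print(f"{name:>14}: {ch}  U+{ord(ch):04X}")
```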
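And as a footnote to the interpolation process described earlier (intermediate weights generated between light and heavy masters): at its core, this is point-by-point blending of compatible outlines. A toy sketch; the control-point coordinates are hypothetical, not taken from any real font.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def interpolate_outline(light: List[Point], heavy: List[Point],
                        t: float) -> List[Point]:
    """Blend two compatible glyph outlines point-by-point.

    t = 0 gives the light master, t = 1 the heavy master, and values
    in between produce intermediate weights -- the idea behind
    generating semibold and the like from two drawn extremes.
    """
    if len(light) != len(heavy):
        raise ValueError("masters must have matching point structures")
    return [(lx + t * (hx - lx), ly + t * (hy - ly))
            for (lx, ly), (hx, hy) in zip(light, heavy)]

# Hypothetical control points for one stem of a letter in two weights
light_stem = [(100.0, 0.0), (140.0, 0.0), (140.0, 700.0), (100.0, 700.0)]
heavy_stem = [(80.0, 0.0), (200.0, 0.0), (200.0, 700.0), (80.0, 700.0)]
print(interpolate_outline(light_stem, heavy_stem, 0.5))  # a medium weight
```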
“Colonists Ignore Principles of Self-Government” 1) What does the headline imply about the colonists’ view of government? 2) Close-read in your groups 4.4 Rights of Colonists. How accurate is the headline? 3) Write a new headline that accurately sums up the colonial view of government. “African Merchants Make Fortunes Trading Cloth for Rum” 1) What does the headline imply about how slaves lived? 2) Close-read in your groups 4.5 Life of African Americans. How accurate is the headline? 3) Write a new headline that accurately sums up the life of African Americans in the colonies. Use your outline from yesterday (hook it/state it) to write the following paragraph: 1. In the first sentence introduce your subject. How? Grab the reader’s attention with a hook. Mention a unique feature, like climate, housing or economic activity. You do this in a statement or by asking a question. Keep it brief. Don’t explain too much. Do not mention the name of the subject! 2. In the second sentence reveal more about your subject. Mention its name. 3. In the third and final sentence, state your big point as the last sentence of your introduction. (this is in your outline!) Due end of class. (It’s homework!)
Georgian Court University Feudalism in Medieval Europe Grade Level: 6-8 Time Frame: Two 40-45 minute classes In this lesson students will explore the world of Medieval Europe. They will learn how the people lived and how Phragmites was part of this world. Students will then be assigned a social class role in the system of feudalism and research information about their character's privileges and disadvantages. Students will experience the feudal system through activities and presentations to relay what they learned to their class. Students may choose a variety of creative outlets to express their character's life in their own creative way, with a group or separately. At the conclusion of this lesson, students will be able to reflect on and discuss their experience, feelings, and what they learned about life in the Middle Ages. Image source: http://publish.uwo.ca/~dmann/Feudalism%201.jpg New Jersey Core Curriculum Content Standards STANDARD 6.1 (Social Studies Skills) All students will utilize historical thinking, problem solving, and research skills to maximize their understanding of civics, history, geography, and economics. - Use critical thinking skills to interpret events, recognize bias, point of view, and context. - Analyze data in order to see persons and events in context. - Examine current issues, events, or themes and relate them to past events. - Formulate questions based on information needs. - Use effective strategies for locating information. - Distinguish fact from fiction by comparing sources about figures and events with fictionalized characters and events. - Summarize information in written, graphic, and oral formats. STANDARD 6.3 (World History) All students will demonstrate knowledge of world history in order to understand life and events in the past and how they relate to the present and the future. - Discuss the evolution of significant political, economic, social and cultural institutions and events that shaped European medieval society, including the Catholic and Byzantine churches, feudalism and manorialism, the Crusades, the rise of cities, and changing technology. STANDARD 8.1 (Computer and Information Literacy) All students will use computer applications to gather and organize information and to solve problems. - Choose appropriate tools and information resources to support research and solve real world problems, including but not limited to: o On-line resources and databases o Search engines and subject directories STANDARD 9.2 (Consumer, Family, and Life Skills) All students will demonstrate critical life skills in order to be functional members of society. A. Critical Thinking - Communicate, analyze data, apply technology, and problem solve. - Demonstrate responsibility for personal actions and contributions to group activities. C. Interpersonal Communication - Demonstrate respect and flexibility in interpersonal and group situations. - Organize thoughts to reflect logical thinking and speaking. - Work cooperatively with others to solve a problem. - Demonstrate appropriate social skills within group activities. - Practice the skills necessary to avoid physical and verbal confrontation in individual and group settings. - Participate as a member of a team and contribute to group effort.
Materials and Resources: Teacher will review the list of social groups from the Middle Ages (below) and, using his/her knowledge of each student's strengths and weaknesses, will assign a suitable mix of students to each group (each group will research and act out one social status). Teacher may wish to review the various websites listed at the end of this lesson plan before starting the class so that he/she can better assist the student groups in navigating those websites in order to obtain the necessary information to complete this assignment. If the classroom does not contain enough computers for every student to work in groups or individually, teacher may choose to print out web resources, or plan to schedule media time in the library or other resource room that will allow for individual access to computers in order to offer every student an opportunity to research their topic. Teacher should also ensure that sufficient materials are available for the students to make the posters required for their presentations. Teacher will prompt students to think about and discuss what they know about Medieval times. For example, they may not realize that two Disney movies took place during Medieval times: Robin Hood and The Sword in the Stone. Students can discuss the main concepts of these movies, such as the social structure (monarchy), the role of the church, and the inequities between the lives of those in the upper levels and the hardships experienced by the "common" people. The teacher can relate these social statuses to those in the world we live in today. The teacher may wish to engage students by further discussing how students feel about social status and the privileges that certain people have while others don't, such as movie stars, athletes and other millionaires compared to everyday people. 1) Students will be placed in groups for presentation the following day. 2) Teacher will assign each group one social status (Pope, Merchant, Nobles, Serfs, Knights and Vassals, Peasants, Royalty), to research using the listed websites and other resources that the students find (e.g. books from the library, suitable websites found by the student groups). 3) Teacher will give each member of each group a worksheet to fill in as a guide to be sure that each obtains the desired information for their presentations/role playing. The worksheet should include some or all of the following questions. The rest of the class time should be spent in research. Some useful resources are provided at the end of the lesson plan. Depending on teacher and student preference, students should be asked to communicate what they've learned about the social group assigned to them in one of two ways: The class should start with a discussion about what the power structure of the various groups was during this time, and the relative sizes of each social group. Hopefully, based on what they've learned from their research and the teacher's guidance of the discussion as needed, the students will be able to correctly fill in the Medieval power pyramid (below and on student worksheet) using the following descriptors: Pope/Church, Monarch, Nobles, Knights, Vassals, Merchants, Farmers, Craftsmen, Peasants, Serfs. Things to note here would be that there were very few members of the monarchy, whereas there were a lot of people in the serf/peasant class at the base of the pyramid. Also, the people at the top of the pyramid had a lot of power over their own lives and those of others.
Those at the bottom had little power, even over their own lives. The rest of the class should be dedicated to Role Playing / Poster / PowerPoint Presentation Activities. Extension Activity: "M&M game" (from http://users.manchester.edu/Student/SRKauffman/professionalwebsite/MiddleAgesLessons.pdf) 1. Assign one student to play the King/Queen. 2. Assign two students to be Nobles and two to be Vassals. 3. The remaining students are Peasants. (To make the selection process fairer, students can cast lots for these positions if desired.) 4. Tell the students that one of the Nobles has a bigger estate than the other, and split the Peasants unequally, such that about 1/3 answer to the Vassal of one of the Lords (Lord A) while the other 2/3 of the Peasants live on the estate of Lord B and so answer to his/her Vassal. 5. Give each Peasant a plastic cup with exactly ten M&Ms in it. Let them know that they are not to touch the M&Ms until instructed. Tell the students that the M&Ms represent the crops from the land that the Peasants have tended. Then tell the Peasants that they must pay for the protection that they receive from their Lords with their crops. Their assigned Vassals will confiscate seven M&Ms from each Peasant in that Noble's fiefdom. 6. From each Peasant's payment, the Vassal may keep two of the M&Ms, but he/she must give five of them to his Noble to pay for his loyalty. 7. From each of the Vassal's payments, the Noble may keep two M&Ms for his/her services but must give the remaining three pieces to the King/Queen. 8. At the end of the exercise the Peasants should each have the fewest M&Ms and the King/Queen should have the most. 9. Ask the students in each role how they feel about what they received. 10. If desired, a discussion of the church and its power could be added here. In the Middle Ages, tithing (the biblical suggestion that 10% of what one earned should be given to the church) was taken very seriously. Have the students calculate how much money (M&Ms) would have gone to the church had everyone in the group tithed their income. Given that money and power are closely related in most social systems, have the students discuss the implications of this for the power of the church during this period. 11. Ask the students if they think that the Feudal System was fair and why/why not. 12. Ask them why they think that the people at the bottom might have put up with this for centuries before eventually revolting (http://www.zum.de/whkmla/sp/0910/yes/yes1.html#iv). Indeed most of the wars during this period were between different monarchs, or between different members of the royal family seeking the power of the monarchy. With rare exceptions (e.g. the French Revolution) the peasants did not rise up in protest against this system, and indeed in some cases fought to preserve the status quo. 13. If appropriate (concerns about hygiene may intervene here!), allow students to eat their M&Ms, but don't let the lower-status players have any more, to reinforce the unfairness of the system. 14. Discuss the simulation: Explain that in this system, there are a few winners and many losers. Also note that, if you are higher up the chain, it is better to have more peasants underneath you. Explain that different kings and nobles had different sized kingdoms/estates and so, even within the upper classes, some people were better off than others.
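For teachers who want to sanity-check the totals before class, here is a quick sketch of the game's arithmetic. This is an optional aid, not part of the original lesson; the class size used (20 Peasants) is an assumed example.

    # A quick check of the M&M arithmetic from the game above.
    # The class size here (20 Peasants) is an assumed example; adjust to your class.
    peasants = 20
    start_per_peasant = 10
    paid_per_peasant = 7          # each Peasant pays 7 to a Vassal (keeps 3)
    vassal_keeps_per_peasant = 2  # Vassal keeps 2, passes 5 up to the Noble
    noble_keeps_per_peasant = 2   # Noble keeps 2, passes 3 up to the King/Queen

    peasant_total = peasants * (start_per_peasant - paid_per_peasant)
    vassal_total = peasants * vassal_keeps_per_peasant
    noble_total = peasants * noble_keeps_per_peasant
    monarch_total = peasants * (paid_per_peasant
                                - vassal_keeps_per_peasant
                                - noble_keeps_per_peasant)

    print("Peasants keep:", peasant_total, "(3 each)")
    print("Vassals collect:", vassal_total)
    print("Nobles collect:", noble_total)
    print("Monarch collects:", monarch_total)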
Teacher will review some of the important occurrences that happened during the presentations (specifically if there was a role-play showing how the feudal system worked) and have a discussion about what the students thought about it. Given extra time, have an open discussion about the similarities and differences between the Middle Ages and the present day. Teacher will grade students based on accuracy, creativity, and (if applicable) teamwork during their chosen presentation on the teacher-assigned Middle Ages character on the second day of the lesson plan. Accommodations and Modifications: If there are not enough computers to access the websites for the entire classroom, students may use the given information printed out for them to research and relay the following day. Students with disabilities may move up during the PowerPoint presentation or have the slides printed out. Allow students to listen to a recording of the readings from the website. Give students vocabulary sheets that define unfamiliar words. Assist students in small groups in discussing these terms to assure understanding. Meet with small groups of students before starting the simulation if students have difficulty reading the role cards and interpreting what they are supposed to do during the simulation. Allow students who are not comfortable with role playing to observe the simulation and record what is happening on paper. Ask a couple of students to videotape or digitally record the simulation. Provide a set of questions to students before the discussion to help focus their attention on certain aspects of the simulation. Extending the lesson: Students who finish their research early can play the interactive castle games at: http://www.castlesontheweb.com/search/Castle_Kids/ . Answer Sheet for the Pyramid Exercise Note: The order of the different job descriptions within a tier is not meaningful. For example, on the bottom row peasants and serfs are interchangeable in terms of their location, and on the second row, merchants, farmers and craftsmen can be filled in on the chart in any order. The important thing is which tier each is in, not the order of positions within a tier. TEACHER FEEDBACK REQUEST: We are always working to improve these lesson plans. If you use this lesson plan, we'd love to hear from you with your thoughts, comments and suggestions for future improvements. Please take the time to fill in our survey at http://www.zoomerang.com/Survey/?p=WEB229JA3BEWBD . Thanks! © 2009. Amanda Traina (Author), Louise Wootton and Claire Gallagher (Editors). Although the information in this document has been funded wholly or in part by the United States Environmental Protection Agency under assistance agreement NE97262206 to Georgian Court University, it has not gone through the Agency's publications review process and, therefore, may not necessarily reflect the views of the Agency and no official endorsement should be inferred.
TEMPERATURE EFFECT ON DIODE The following graph shows the effect of temperature on the characteristics of a diode. A-B curve: This curve shows the characteristics of the diode in forward bias for different temperatures. As the figure given above shows, the curve moves towards the left as we increase the temperature. We know that with an increase in temperature, the conductivity of semiconductors increases. The intrinsic carrier concentration (ni) of a semiconductor depends on temperature as given by: ni^2 = A * T^3 * e^(-Eg/kT), where Eg is the energy gap, k is Boltzmann's constant, and A is a constant independent of temperature. When the temperature is high, the electrons of the outermost shell take up thermal energy and become free. So conductivity increases with temperature. Hence with an increase in temperature the A-B curve shifts towards the left, i.e. the curve rises more sharply and the cut-in voltage decreases as temperature increases. A-C curve: This curve shows the characteristics of the diode in the reverse-biased region up to the breakdown voltage, for different temperatures. The intrinsic concentration ni increases with temperature, and hence the number of minority charge carriers increases with temperature. The minority charge carriers are also known as thermally generated carriers, and the reverse current depends on minority carriers only. Hence as the number of minority charge carriers increases, the reverse current also increases with temperature, as shown in the figure given on the previous page. The reverse saturation current approximately doubles with every 10 °C increase in temperature. C-D curve: This curve shows the characteristics of the diode in the reverse-biased region from the breakdown voltage point onwards. At higher temperatures the loosely bound electrons are already free, so freeing the remaining electrons takes more voltage than before. Hence the breakdown voltage increases with an increase in temperature, as depicted in the figure given on the previous page.
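To make the forward-bias shift concrete, here is a minimal numerical sketch based on the Shockley diode equation, I = Is * (exp(V/(n*Vt)) - 1). The ideality factor and the room-temperature saturation current are assumed illustration values, and Is is scaled using the doubles-every-10-°C rule stated above; this is an illustrative model, not part of the original notes.

    # Minimal sketch of the temperature effect using the Shockley diode equation.
    # n and is_25c are assumed values; Is follows the "doubles per 10 °C" rule.
    import math

    k = 1.380649e-23     # Boltzmann constant, J/K
    q = 1.602176634e-19  # electron charge, C

    def forward_voltage(current, temp_c, n=2.0, is_25c=1e-9):
        """Voltage needed to push `current` (A) through the diode at temp_c (°C)."""
        t_kelvin = temp_c + 273.15
        vt = k * t_kelvin / q                      # thermal voltage kT/q
        i_s = is_25c * 2 ** ((temp_c - 25) / 10)   # saturation current doubles per 10 °C
        return n * vt * math.log(current / i_s + 1)

    for t in (25, 50, 75):
        print(f"{t:3d} °C: V_F at 10 mA = {forward_voltage(10e-3, t):.3f} V")
    # The printed forward voltage falls as temperature rises, i.e. the
    # forward I-V curve shifts towards the left, as the A-B curve shows.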
The now-famous Christmas truce of 1914 took place at Flanders field on Christmas Eve. British, German, and French troops were in their respective trenches when some German soldiers began to sing “Stille Nacht.” Before long, all the soldiers came out into no man’s land to sing carols together and even exchange souvenirs and chocolates. There are two interesting aspects of this story: - The power of the music seems to have relaxed the tension of the war. - Their coming together had a permanent effect on them; they were no longer willing to shoot at each other!
Superconductivity Demonstration Kit for physical science and physics provides all of the materials needed (except liquid nitrogen) to successfully demonstrate the characteristics of superconductivity and the Meissner Effect. This item can only be shipped to schools, museums and science centers. Superconductivity is the spectacular characteristic of the total disappearance of electrical resistance when a superconductor is cooled below a critical temperature. Another important and unique property of superconductors is that they expel an applied magnetic field from their interior. This diamagnetic behavior is called the Meissner Effect. It is this Meissner Effect that would enable a train to levitate above its tracks. Since there would be no friction between wheels and track, the train could operate while consuming very little energy. This unique demonstration kit will provide you with all the materials needed (except liquid nitrogen) to successfully demonstrate the characteristics of superconductivity and the Meissner Effect. Each demonstration kit includes one rugged 1" superconductor, one rare earth magnet, one non-metallic forceps and one “how to” instruction manual.
The neuromusculoskeletal system refers to the complete system of muscles, bones, tendons, ligaments and associated nerves and tissues that allow us to move, speak, and sing. This system also supports our body's structure. The "neuro" part of the term "neuromusculoskeletal" refers to our nervous system, which coordinates the ways in which our bodies move and operate. The nervous system consists of the brain, the spinal cord, and the billions of nerve cells responsible for transmitting information from the brain to the rest of the body and back again in an endless cycle. Our nervous systems allow us to move, to sense, and to act in both conscious and unconscious ways. We could not listen to, enjoy, sing, or play music without these structures. In fact, making any change in our approach to movement, particularly to the array of complex movements needed for the performance of music, means working closely with our nervous system so that any automatic, unconscious or poor habits may be replaced with healthy, constructive, and coordinated movement choices. Basic Protection Steps For All Musicians: - Gain the information about the body that will help you move according to the body's design and structure. The parts of the human body most relevant to movement include the nervous system, the muscular system, and the skeletal system. Muscles move our bones at joints. Our bony structure is responsible for weight delivery and contributes to the support we need to move with ease and efficiency. There is nothing inherent in the design of our bodies or our instruments that should cause discomfort, pain or injury. - Learn what behaviors or situations put your neuromusculoskeletal health at risk and refrain from these behaviors and situations. - Always warm up before you practice, rehearse, or perform. It takes about 10 minutes before muscles are ready to fire at full capacity. - Monitor your practice to avoid strain and fatigue. This means taking breaks when needed, and avoiding excessive repetition or practice time if you notice fatigue, strain or discomfort. - Use external support mechanisms when necessary, such as neck straps, shoulder straps, and proper bench or chair height. - For vocal health, be sure to drink plenty of water, at least 8 glasses a day, and limit your consumption of caffeine and alcohol. Avoid smoking. - Be aware that some medications, such as allergy pills, may dry out your tissues. Be aware of side effects and consult your physician if you have questions. - Maintain good general health and functioning by getting adequate sleep, good nutrition, and regular exercise. NOTE: This document has been adapted from the NASM-PAMA documents on Musicians' Health and Safety. Links to Musculoskeletal Health Resources: Andover Educators - Body Mapping A Painful Melody: Repetitive Strain Injury Among Musicians Janet Horvath - Playing Less Hurt Gia Publications - Literary Resources Musicians and their Health Care - A Musical America Special Report The Musician's Way Musician's Health Collective
Dressler’s syndrome is a complication that can occur following a heart attack or heart surgery. It occurs when the sac that surrounds your heart (pericardium) becomes inflamed. An immune system reaction is thought to be responsible for Dressler’s syndrome, which usually develops several weeks or months after heart tissue injury. Dressler’s syndrome causes fever and chest pain, which can feel like another heart attack. Also referred to as post-pericardiotomy and post-myocardial infarction syndrome, Dressler’s syndrome is treated with medications that reduce inflammation. With recent improvements in the medical treatment of heart attack, Dressler’s syndrome is far less common than it used to be. However, once you’ve had the condition, it may recur, so it’s important to be on the lookout for any symptoms of Dressler’s syndrome if you’ve had a heart attack, heart surgery or other heart injury. Dressler’s syndrome causes the following signs and symptoms after heart surgery, a heart attack or an injury to your heart: - Chest pain - Shortness of breath or pain when breathing (pleurisy) - Left shoulder pain Dressler’s syndrome is thought to develop from an overactive immune system response to heart tissue damage, such as from a heart attack or heart surgery. Your body reacts to the injured tissue as it would to any injury, by sending immune cells and proteins called antibodies to clean up and repair the affected area. But this response appears to cause excessive inflammation in the sac enveloping the heart (pericardium), and the symptoms of Dressler’s syndrome develop. Some older studies estimated that Dressler’s syndrome developed in about 3 percent to 4 percent of people who’d had a heart attack. But because of improvements in the treatment of heart attack — which reduce the amount of damage done to heart tissue — the occurrence of Dressler’s syndrome today is less common. Your doctor may diagnose Dressler’s syndrome based on your medical history and signs and symptoms, by listening to your heart, and sometimes by using blood tests. Other diagnostic tests may include: - Echocardiogram. An echocardiogram uses sound waves to produce an image of your heart. This test allows your doctor to see if fluid is collecting around your heart. - Electrocardiogram. An electrocardiogram records the electrical impulses in your heart through wires attached to the skin. Certain changes in the electrical impulses may mean there’s pressure on your heart. - Chest X-ray. This X-ray can help detect fluid building up around the heart or lungs and can help exclude other causes of your symptoms, such as pneumonia. Two rare but serious complications of Dressler’s syndrome are: - Cardiac tamponade. This condition occurs when fluid builds up around the heart and presses on it, reducing its ability to pump well. Treatment requires a procedure called pericardiocentesis, in which fluid is removed with a fine needle. - Constrictive pericarditis. This condition develops from repeated inflammation of the sac around the heart (pericardium). The inflammation causes the pericardium to become thick and scarred. Treatment requires surgery to remove the pericardium (pericardiectomy). Other complications include: - Pleurisy. This is inflammation of the membranes around your lungs. - Pleural effusion. This is a buildup of fluid around your lungs. Mild cases of Dressler’s syndrome may improve without treatment. Your doctor may recommend decreasing your activity until you’re feeling better. 
More-severe cases require medications to reduce the inflammation around your heart. Sometimes hospitalization is necessary. Medications to treat Dressler’s syndrome include: - Aspirin and other nonsteroidal anti-inflammatory drugs (NSAIDs). These drugs work by inhibiting an enzyme called cyclooxygenase (COX). This enzyme is responsible for your body’s production of prostaglandins, hormone-like substances involved in inflammation and pain. NSAIDs are the most common treatment for Dressler’s syndrome. - Other pain medications. If your pain is severe, you might need stronger pain medications, such as a narcotic, for a short time. - Corticosteroids. These drugs, which include prednisone, mimic the effects of certain hormones in your body, such as cortisone, which are produced by your adrenal glands. When an inflammatory illness strikes, additional cortisone in the form of corticosteroids helps suppress inflammation, which reduces the symptoms of Dressler’s syndrome. However, corticosteroids have potential side effects, and there’s a risk of rebound inflammation after you stop taking corticosteroids. Your doctor may recommend corticosteroids if NSAIDs aren’t working for you. - Colchicine. For resistant cases of Dressler’s syndrome or for people who have repeated episodes of pericarditis, colchicine is another type of anti-inflammatory medication that has been effective in some cases. It’s often used to treat gout. Hospitalization sometimes necessary If a complication develops, such as cardiac tamponade, you’ll likely need hospitalization. When cardiac tamponade occurs, you may undergo a technique called pericardiocentesis. In this procedure, a doctor uses a sterile needle or a small tube (catheter) to remove and drain the excess fluid from the pericardial cavity. You’ll receive a local anesthetic before undergoing pericardiocentesis, which is often done with echocardiogram and ultrasound guidance. This drainage may continue for several days during the course of your hospitalization. Repeated episodes of Dressler’s syndrome can lead to a condition called constrictive pericarditis. If you develop this complication, you may need to undergo a surgical procedure (pericardiectomy) to remove the entire pericardium that has become rigid.
Read On 1™ The entertaining reading series for beginning-level ESL students that uses short, believe-it-or-not, real-life stories to stimulate vocabulary acquisition and build reading comprehension. Read On is a new reading series designed for beginning and high-beginning level ESL students. Each book in the series features 20 short, stimulating reading passages on high-interest topics such as a hotel that's 21 feet below the ocean, a woman who lived in a tree, how twin brothers fell in love with twin sisters, and blind people who use seeing-eye horses. Each reading contains vital, high-frequency vocabulary that is recycled throughout the chapters in each book. This gives students a chance to see new vocabulary a number of times and in a variety of contexts. Teachers and students will also enjoy the self-standing format of the series. They can either select chapters according to need and interest or systematically work through each book. - Chapter-opening photographs introduce students to the themes of the readings. - Before You Read activities ask students to make inferences, activate their prior knowledge, and answer questions based on their experience. - Audio recordings of each reading selection provide students with listening practice to increase their comprehension skills. - Key vocabulary items appear in bold-faced type for easy reference. - Main idea questions presented in standardized testing formats help students check their reading comprehension as well as prepare them for success in taking standardized tests. - Learn New Words and Complete the Paragraph activities test students' understanding of new vocabulary words. - Think It Over activities encourage students to use critical thinking skills to examine ideas introduced in the reading selections. - Write It Down activities prompt students to write sentences and paragraphs about their thoughts, opinions, and ideas. - Talk It Over activities ask students to discuss ideas related to the chapter topics. - An Activity Menu section at the end of each pair of chapters gives students the opportunity to practice and recycle new vocabulary and do expansion work through: Tie It Together activities--students use graphic organizers to synthesize information from the previous two chapters. Just for Fun activities--students practice spelling and word order. Go Online activities--students practice Internet search skills. This teacher's manual is designed to accompany the beginning level student book.
What Is Ebola? Ebola is a serious and deadly virus transmitted by animals and humans. It was initially detected in 1976 in Sudan and the Democratic Republic of Congo. Researchers named the disease after the Ebola River. Until recently, Ebola appeared in Africa only. Although the Ebola virus has been present for more than 35 years, the largest outbreak began in West Africa in March 2014. This outbreak has proven more deadly, severe, and widespread than previous outbreaks. While cases have significantly declined since the peak of the outbreak, there's still a chance of further outbreaks. Learning the facts about the virus can help prevent the spread of this deadly infection. What Causes Ebola? The Ebola virus belongs to the viral family Filoviridae. Scientists also call it Filovirus. These virus types cause hemorrhagic fever, or profuse bleeding inside and outside the body, accompanied by a very high fever. Ebola can be further divided into subtypes that are named for the location where they were identified. These include: - Zaire - Sudan - Taï Forest (previously known as Ivory Coast) - Bundibugyo - Reston The Ebola virus likely originated in African fruit bats. The virus is known as a zoonotic virus because it's transmitted to humans from animals. Humans can also transfer the virus to each other. Animals that can transmit the virus include: - fruit bats - monkeys - chimpanzees - forest antelopes - porcupines Since people may handle these infected animals, the virus can be transmitted via the animal's blood and body fluids. Risk Factors and Transmission Unlike other types of viruses, Ebola can't be transmitted through the air or by touch alone. You must have direct contact with the bodily fluids of someone who has it. The virus may be transmitted through: - blood - saliva - sweat - urine - feces - vomit - semen - breast milk These bodily fluids can all carry the Ebola virus. Transmission can occur via the eyes, nose, mouth, broken skin, or sexual contact. Healthcare workers are especially at risk for contracting Ebola because they often deal with blood and bodily fluids. Other risk factors include: - exposure to infected objects, such as needles - interactions with infected animals - attending burial ceremonies of someone who has died from Ebola - traveling to areas where a recent outbreak has occurred What Are the Symptoms of Ebola? According to the Centers for Disease Control and Prevention (CDC), Ebola symptoms typically appear within 8 to 10 days after exposure; however, symptoms can appear as early as two days after exposure or take as long as three weeks. Extreme fatigue is often the first and most prominent symptom. Other symptoms include: - muscle pain - stomach pain - unexplained bleeding or bruising If you've come in contact with or provided care to someone diagnosed with Ebola, or handled infected animals, and have any symptoms, you should seek immediate medical attention. How Is Ebola Diagnosed? The early symptoms of Ebola can closely mimic other diseases like the flu, malaria, and typhoid fever. Blood tests can identify antibodies of the Ebola virus. These tests may also show: - either unusually low or high white blood cell counts - low platelet counts - elevated liver enzymes - abnormal coagulation factor levels In addition to blood tests, a doctor will also consider whether others in the patient's community could be at risk. Since Ebola may occur within three weeks of exposure, anyone with possible exposure is monitored through an incubation period of the same length. If no symptoms appear within 21 days, Ebola is ruled out. How Is Ebola Treated? The Ebola virus does not have a cure or vaccine at this time.
Instead, measures are taken to keep the person as comfortable as possible. Supportive care measures may include: - giving medications to maintain blood pressure - managing electrolyte balances - providing extra oxygen, if needed - providing intravenous and/or oral fluids to maintain hydration - treating coexisting infections - preventing other infections from occurring - administering blood products if indicated Individuals can take several precautions to protect against Ebola. These include: - avoiding contact with blood and body fluids - practicing careful hand hygiene, including washing hands with soap and water or an alcohol-based hand sanitizer - refraining from engaging in burial rituals that involve handling the body of a person who died from Ebola - wearing protective clothing around wildlife - refraining from handling items a person with Ebola has handled (this includes clothing, bedding, needles, or medical equipment) Healthcare workers and lab technicians also must practice precautions. This includes isolating people with Ebola and wearing protective gowns, gloves, masks, and eye shields when coming in contact with the infected person or their belongings. Careful protocol for handling and disposing of these protective materials is also vital for infection prevention. Cleaning crews should use a bleach solution to clean floors and surfaces that may have come in contact with the Ebola virus. Further research is being done to help prevent future outbreaks. As of April 2015, the World Health Organization (WHO) reports that two possible vaccines are being tested for human safety. People's immune systems can respond differently to Ebola. While some may recover from the virus without complication, others can have residual effects. These lingering effects may include: - joint problems - hair loss - extreme weakness and fatigue - inflammation of the liver and eyes - sensory changes According to the Mayo Clinic, such complications can last for a few weeks to several months. Other complications of the virus can be deadly, including: - failure of multiple organs - severe bleeding According to the WHO, the average fatality rate for a person infected with Ebola is 50 percent. Some virus strains are deadlier than others. The earlier the infection is diagnosed, the better the outlook for the infected person. The CDC estimates that Ebola survivors have antibodies to the virus for about 10 years. This means that having had the virus doesn't necessarily make you immune to future infection. Until a vaccine is available, it's important to be on your guard to avoid the spread of Ebola.
Some of the most common first names in the United States include Michael, James, John, Robert and David, as of 2013. Some of the most common last names include Smith, Johnson, Williams, Brown and Jones. Statistically, male names rank higher overall on lists of most common names, because female names are typically more diverse. Some of the most common female names include Mary, Jennifer, Patricia, Linda and Elizabeth. As a whole, however, the distribution of American names is quite homogeneous, with around 30 percent of all citizens having one of the 100 most common names. The data come from a combination of Social Security Administration and U.S. Census data. The SSA has kept a database of first names since 1880. These data can be adjusted to account for life expectancy in order to determine an estimated distribution of first names. Surveyors also adjusted the data to account for immigrants to the United States, whose names are not cataloged in SSA data. Data on last names comes from the 2000 U.S. Census, adjusted to account for population growth in the succeeding years. The significant increase in the Hispanic population led to increases in the predominance of Hispanic surnames such as Garcia, Rodriguez, Martinez, Hernandez and Lopez. Based on analysis of the data, the most common full name in the United States is probably James Smith.
Working with children who have had no formal reading instruction in their first language If the child has not had any formal reading instruction in his/her first language, there are several ways in which you can proceed: Always try to communicate meaning to your student - As much as possible, use objects (or pictures of objects) to teach initial vocabulary. - Use gestures and body movements to teach actions. Use objects or make the movements yourself. - Use dramatic facial expressions to get your message across. Select books with pictures and repetition - Use picture books at the beginning, just as you would with an English-speaking child. However, keep in mind that the ELL child may not be able to give you labels for objects or actions. - Look at the book ahead of time and familiarize the child with names of objects, characters, actions, etc., before you present the book. - Use books that have repetition incorporated into the text. Use a variety of ways to convey a story line: Dramatize the plot of the book using cutouts that you have prepared in advance, or have the child make the characters and paste them on cardboard so that they can stand and be moved around according to the action described in the book. Have the child draw the objects or characters that you'll be reading about. This will reinforce the new vocabulary. Introduce written labels for words after the child understands and produces the label orally. Label objects even if the child cannot read the words yet. Keep in mind: Even though the child may be able to understand the topic of the story, he/she will not be able to verbalize predictions about the story. Words that are very common in English, such as "mat" or "pan," or vocabulary that is mostly home-related, may not be part of the child's vocabulary. Make sure that the child recognizes the meaning of any words before asking him or her to read those words. Depending on the child's native language, it may be difficult for him/her to hear some of the sounds in English. For example: Children for whom Spanish is the first language may have a great deal of difficulty distinguishing between the vowel sounds of "bet" and "bit" or "pat" and "pet." If a child's first language is Japanese, s/he may not hear the difference between "l" and "r" because in Japanese, those two sounds are considered indistinguishable. These differences are learned over time after a fair amount of practice. Do not expect the child to be able to give you rhyming words or words that begin with a particular sound. You will have to provide the different pairs of words that rhyme or the words that begin with the same sound. Try these activities to reinforce some basic reading skills: To emphasize initial sounds, group objects whose labels begin with a specific sound and a group of objects whose labels begin with a different sound. Make sure that the sounds you choose initially are very different from one another. Example: "book, boot, baby, bag, ball" as compared to "fist, fan, father, foot." Introduce the letter that corresponds to the sound, stick the letter to a paper bag or box, and play a game of placing the objects or pictures in the bag that has the initial letter of that object or picture. To reinforce the learning of the two sounds, use the same pictures to play concentration.
Fill a Balloon Using Warm Air is a simple science experiment that demonstrates one of the major properties of air. We know that hot air expands. In this experiment we are simply demonstrating that fact. This experiment uses hot water (or the heating of water). Please execute this experiment only under the supervision of an adult. Try it yourself Step 1: Pour water into the saucepan and heat it. Step 2: Put the balloon on the mouth of the bottle. Step 3: Take the saucepan away from the heater. Step 4: Keep the bottle, with the balloon attached to its mouth, inside the hot water. You will see that the balloon inflates all by itself. Logic of ‘Fill a Balloon Using Warm Air’: It is nothing more than hot air expanding. As the air in the bottle warms and expands, it escapes from the bottle and fills the balloon. Learn about the properties of air here. A similar experiment that demonstrates the expansion of hot air is egg in a bottle. Read that experiment here.
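For anyone who wants to quantify the effect, here is a small sketch of the expansion using Charles's law (V1/T1 = V2/T2 at constant pressure). The temperatures and bottle volume below are assumed example values, not measurements from the experiment.

    # Charles's law sketch: at constant pressure, V1/T1 = V2/T2.
    # All values below are assumed examples; adjust to your own setup.
    T1 = 295.0  # room-temperature air in kelvin (about 22 °C)
    T2 = 330.0  # air warmed by the hot water (about 57 °C)
    V1 = 0.5    # litres of air in the bottle (an assumed bottle size)

    V2 = V1 * T2 / T1
    print(f"Warmed air occupies {V2:.2f} L; the extra {V2 - V1:.2f} L inflates the balloon")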
Crop It is a four-step hands-on learning routine where teachers pose questions and students use paper cropping tools to deeply explore a visual primary source. In our fast-paced daily activities we make sense of thousands of images in just a short glance. Crop It slows the sense-making process down to provide time for students to think. It gives them a way to seek evidence, multiple viewpoints, and a deeper, more detailed, understanding before determining the meaning of a primary source. This routine helps young students look carefully at a primary source to focus on details and visual information and use these to generate and support ideas. Students use evidence from their “crops” to build an interpretation or make a claim. Crop It can be completed as part of a lesson, and can be used with different kinds of visual sources (for example, cropping a work of art, a poem, or a page from a textbook). - Print a collection of primary sources related to the unit or topic under study. The collection may include: - various types of sources that include images, such as photographs, cartoons, advertisements, and newspaper articles. Consider images that challenge students to use varying amounts of background knowledge and vocabulary, or that can be read by students working on different reading levels; - sources representing different perspectives on the topic; - sources depicting the people, places, and events that will be tested in a unit; - sources representing perspectives that are missing from the textbook’s account. Print enough copies so each student can have one source: it’s fine if some students have the same image. Step One: Choose an Image Ask students to choose a source from the collection that either: - connects to an experience that you have had; - relates to something that you know a lot about, and/or - leaves you with questions. *Note: other criteria may be substituted, such as: choose an image that relates to a question you have about the unit, that relates to your favorite part of this unit, or that represents the most important topic or idea of this unit. Step Two: Explore the Image - Pass out a set of two Crop It tools to each student. Demonstrate how to use the Crop It tools to focus on a particular piece of a source. Students can make various sizes of triangles, rectangles, and lines to “crop” or focus attention on an important part of the source. - Invite students to carefully explore their image by using the tools. Pose a question and ask students to look carefully and “crop” to an answer. For example, ask students to: Crop the image to the part that first caught your eye. Think: Why did you notice this part? Crop to show who or what this image is about. Think: Why is this person or thing important? Crop to a clue that shows where this takes place. Think: What has happened at this place? (See Question Sets Handout for additional sample questions.) Invite students to revise their answer by choosing another crop that could answer the same question. Encourage students to consider: if they could only have one answer, then which crop would be best? Why? Step Three: Identify the Evidence Collect the types of evidence students cropped on large chart paper by asking them to recall the different types of details that they cropped. These charts encourage students to notice details and can be used later when adding descriptions to writing or as supports for answers during class discussions.
The charts might look like the example below and will constantly grow as students discover how details help them build meaning. Step Four: Close the Lesson Conclude the lesson by asking students what they learned about the topic related to the collection. Ask them to reflect on what they learned about looking at sources, and when in their life they might use the Crop It routine to understand something. Avoid asking too many questions during Step Two: Explore. Keep the questions and the cropping moving fairly quickly so students stay engaged and focused on their primary source. To increase the amount of thinking for everyone, don't allow students to share their own crops with a partner or the class right away. Ask students to revise their own crop by trying different ideas before sharing. See the Image Set Handout for samples that you might use with this strategy. These images represent some events key to understanding the Great Depression of the 1930s (e.g., FDR's inauguration and the Bonus Army's march on Washington) and could be used to review or preview a unit of study. Finding Collections of Primary Sources to Crop Use Federal Resources for Educational Excellence to find collections of photographs. Find Primary Source Sets at the Library of Congress. Visible Thinking, Project Zero, Harvard Graduate School of Education. Artful Thinking, Project Zero, Harvard Graduate School of Education. Ritchhart, R., Palmer, P., Church, M., & Tishman, S. (April 2006). Thinking Routines: Establishing Patterns in the Thinking Classroom. Paper prepared for the American Educational Research Association. Crop It was developed by Rhonda Bondie through the Library of Congress Teaching with Primary Sources Northern Virginia.
Bits are central to computers. All information in a computer is represented as bits. Bits are also fundamental to how programs make decisions and how arithmetic is approximated in a computer. So it is worth your energy to understand bits and how they are used. While this section of the course does not directly concern itself with programming and Pascal, it does provide a way of looking at what you are doing that will deepen your understanding and allow you to `look through' Pascal and see the machine and the ideas behind the machine that make everything work. Pascal does not concern itself with bits and their representations; it pretends they are not there. But underneath, when you run a Pascal program the machine is following its own rules, and it has its own way of doing things. It is important that you understand this mechanism when you program in Pascal, because every operator and statement in a Pascal program is bound by the limits of the computer and the bits that make it work. More important, Pascal pretends that Boolean values, integers, and characters are different things, when underneath they are all just combinations of the same thing with different operations. Seeing the unity beneath the diversity of the Pascal types allows you to understand the means of creating new types of your own devising. No matter how hard Pascal tries to be `machine independent' and mathematical, its basic concepts are machine concepts, and its programs are bound by computer limitations. Mathematics is tied to the infinite; there are an infinite number of integers; there are an infinite number of fractions between 0.0001 and 0.0002. In mathematics, whenever you need a number to represent something the number appears magically and is available to use. Some of these numbers are hard to write out because they are not a simple fraction, like pi and the square root of 2, but they are still there. But Pascal is tied to computing machines and finite ways of doing things. The value pi cannot be represented in the common methods of representing numbers inside a computer. However, as programmers we are usually satisfied with a machine representation of some fraction which comes close to the value of pi. Computer arithmetic works so well across the usual domain of numbers that we experience daily that it lulls us into a false sense of security. We forget that the computer doesn't really handle the generality of mathematical arithmetic; numbers don't go on forever inside a computer; fractions are only approximate; and danger lurks around every method of computation. It is dangerous to reason about programs using mathematics without thinking about machine limitations, because often things which are mathematically correct and provable just don't `compute'. We will meet several examples later. Because all practical programming languages, including Pascal and C, are bound to the computer and thus to particular ways in which the computer hardware is constructed, you need to understand these ideas. By understanding the basic representations of numbers used in computers you can reason out why the mathematics doesn't work. You have coded several programs, in 07.100 and in the first part of this course. You should begin to feel confident that you can write a simple Pascal program. This should help you put these ideas into a framework so you can more clearly see the computing machine that will carry out your program. This section of the course will help you deepen your understanding and give you these insights.
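As a concrete taste of "fractions are only approximate," here is a tiny example, written in Python for brevity (the course itself uses Pascal); the same behavior appears in any language that uses binary floating point.

    # 0.1 has no exact binary representation, so mathematically equal
    # expressions need not "compute" equal.
    a = 0.1 + 0.2
    print(a)                     # 0.30000000000000004, not 0.3
    print(a == 0.3)              # False: a mathematically provable identity fails
    print(abs(a - 0.3) < 1e-9)   # the usual workaround: compare within a tolerance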
You should read and reread these notes until you understand everything. The material covered here is so basic and important that you have to know it intuitively. Everything else you learn about computers will be based on these ideas. In computer studies there are many different ways of looking at, defining, and using a bit. This short survey is aimed to give you the basics of each of those ways to look at this most fundamental of computer concepts -- a bit. A bit represents information, any information. Call it an information carrier. The minimum number of bits required to hold some data is a measure of how much information the data contains. The ever classic example is: How much information is required to represent the outcome of flipping a coin? Assume we have an ideal coin: when flipped, we expect the coin to land with equal numbers of heads and tails being visible; we never expect it to land on its edge and stand upright. Flipping this ideal coin represents two equally likely outcomes: TAIL or HEAD. If the coin lands heads one can write down the bit value `1' to completely represent the outcome of the coin toss. This represents the amount of information held by a bit: zero or one. A bit represents one outcome of two equally likely alternatives from an experiment. So a sequence of bits can represent a series of successive coin tosses. A mechanism, like a box that flips coins, which produces a series of bits, is sometimes called an information source. It is a little odd to call something that produces random bits an information source, but in communications a bit is a bit, it doesn't matter where it came from, so from that perspective information is the same as noise. Later we will show that results which you do not expect give you more information than ones you do expect; the more unexpected an event, the more information it gives. This is not your usual notion of information, but it is a technical definition that gives insight into what you can do with computers. Therefore from that technical perspective flipping a coin gives you maximal information, because you never know what the next bit will be. This is another sense in which a random series of bits and information are the same thing. As an aside, the ceiling function, ceil(p), is one that is handy and will get used a lot. Ceil is defined so that if the argument, p, has no fractional component then ceil(p) gives back the same integer number as a result, and if p does have a fractional component ceil(p) gives the next higher integer. The result of ceil is always an integer. Thus: the ceil( 3.00001 ) is 4, the ceil( 3.9 ) is 4, and the ceil( 4 ) is 4. The ceiling function is sometimes written with a peculiar set of brackets: ⌈3.9⌉ is 4. There is also a floor function which works like ceiling. Thus: floor( 3.00001 ) is 3, the floor( 3.9 ) is 3, and the floor( 4 ) is 4. Floor can also be represented as ⌊3.9⌋. Now, back to the topic, how many equally likely outcomes are there if a coin is flipped twice? Four equally likely outcomes can be represented by two bits. For three coin flips there are 8 different equally likely outcomes and three bits. For four coin flips, 16 different equally likely outcomes and four bits. And so on. This tells us that if something has N equally likely possibilities it will take ceil( log2 N ) bits to represent the outcomes. (Why do we take the ceiling?) For example, if one is to toss a pair of dice, there are 36 equally likely possibilities: log2 36 is approximately 5.169925, thus it takes 6 bits to represent the toss of a pair of dice.
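The rule can be checked mechanically. Here is a minimal sketch, again in Python rather than the course's Pascal, using the standard math module.

    # ceil(log2 N): bits required to label N equally likely outcomes.
    import math

    def bits_needed(n_outcomes):
        return math.ceil(math.log2(n_outcomes))

    print(bits_needed(2))    # 1 bit for one coin flip
    print(bits_needed(16))   # 4 bits for four coin flips
    print(bits_needed(36))   # 6 bits for a toss of a pair of dice (log2 36 ~= 5.17)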
You will find it useful for your future in computing to memorize the first 16 or so powers of two. And I mean memorize: it will help you finish exam after exam faster, leaving you time to work on other problems. By memorizing them you will be able to spot powers of two when you see them, and that will help you see deeper relationships that you may exploit. The small powers of two are: 2^0 = 1, 2^1 = 2, 2^2 = 4, 2^3 = 8, 2^4 = 16, 2^5 = 32, 2^6 = 64, 2^7 = 128, 2^8 = 256, 2^9 = 512, 2^10 = 1024, 2^11 = 2048, 2^12 = 4096, 2^13 = 8192, 2^14 = 16384, 2^15 = 32768, and 2^16 = 65536. Note that 2^5 is 32 and 2^6 is 64. The log2 of 36 (which we looked up earlier) was 5.169925, a number just greater than the log2 of 32, which is exactly 5. So the exponents of the powers of two give you a feeling for the log base 2 of a number. This is another reason to remember the powers of two table. A good number to remember is 3.321928, the log2 of 10. It tells you how many bits are in ten things, like ten fingers or ten digits. To get a feel for the number of bits you need to represent 500 things, follow this reasoning: since 500 has 3 digits, multiply the number of digits by 3.32 to get 9.96, or 9 or 10 bits. Note the estimate is right on. Ten is a little over the answer, which should be 9 bits. This is because 3-digit decimal numbers include all the numbers up to 999, and 999 requires ten bits to represent. Here the trick is to use the number of decimal digits as a measure of the log10 of the number to get a quick, worst-case estimate of the number of bits required. A more accurate way is to compute ceil( log2 N ) to find the number of bits required. It is easy to compute the log2 of a number on a calculator by using the following formula: log2 N = log10 N * 3.3219. So log2 500 can be computed by finding log10 500 then multiplying that by the magic number 3.3219, and you get 8.9657. Taking ceil( 8.9657 ) gives the correct answer, 9 bits. If you can't remember the magic number 3.3219 then use the formula log2 N = log10 N / log10 2, in which the calculator can figure out all the numbers in the formula. Computationally this is not as good because you have to remember what to divide by what. Multiplication is easier; it can be done in any order. Another example: if you have a bit representation that is 52 bits long, how many decimal digits will be needed to represent the values? The answer is 52 / 3.3219, giving 15.66, so it will take 16 decimal digits to uniquely identify each of the 2^52 cases (i.e. using ceil( 52 / log2 10 )).
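The digit-to-bit conversions above are easy to script. The following Python sketch (again an aside, not from the course notes) reproduces the worked examples.

    # Converting between decimal digits and bits using log2 of 10.
    import math

    LOG2_OF_10 = math.log2(10)   # the "magic number" 3.321928...

    def bits_for_decimal_digits(digits):
        """Worst-case bits needed for a number with the given count of decimal digits."""
        return math.ceil(digits * LOG2_OF_10)

    def digits_for_bits(bits):
        """Decimal digits needed to uniquely represent any value of the given bit width."""
        return math.ceil(bits / LOG2_OF_10)

    print(math.ceil(math.log2(500)))   # 9 bits for 500 things
    print(bits_for_decimal_digits(3))  # 10: worst case for 3-digit numbers (up to 999)
    print(digits_for_bits(52))         # 16 decimal digits for a 52-bit value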
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. 2012 January 4 Explanation: Lurking behind dust and stars near the plane of our Milky Way Galaxy, IC 10 is a mere 2.3 million light-years distant. Even though its light is dimmed by intervening dust, the irregular dwarf galaxy still shows off vigorous star-forming regions that shine with a telltale reddish glow in this colourful skyscape. A member of the Local Group of galaxies, IC 10 is in fact the closest known starburst galaxy. Compared to other Local Group galaxies, IC 10 has a large population of newly formed stars that are massive and intrinsically very bright, including a luminous X-ray binary star system thought to contain a black hole. Located within the boundaries of the northern constellation Cassiopeia, IC 10 is about 5,000 light-years across.
Plasma cutting is a method developed in the 1950s for cutting metals that could not be cut by oxyfuel cutting. Such materials include stainless steel, aluminum and copper. Subsequently the method has also been used for cutting and precision-cutting of mild and low-alloyed steel. Plasma is a state of matter in which the gas is ionized: it consists of positive ions and electrons, making the medium electrically conductive. Plasma is very energy-rich. The plasma melts the material locally, and the melted material is removed from the cut by means of the gas jet. Plasma cutting is thus a melt-cutting method, where the energy of the hot plasma arc is used for blowing away molten material. Plasma cutting is carried out using gases adapted to the specific application. For example, mild steel is often cut with oxygen or nitrogen as the plasma gas, while stainless steel is often cut with an argon- (or nitrogen-) based gas including hydrogen as a reducing medium. Avoid risks by following the safety instructions for hot work.
Falconry is the art and sport of hunting with raptors. In this modern age it is a highly regulated sport that demands time and serious commitment. Currently there are an estimated 4,000 falconers in the United States with roughly 5,000 birds. Falconry has been practiced in many forms for thousands of years by many cultures. Some speculate that falconry dates back as far as 4000-6000 BC in Mongolia, Egypt, and possibly elsewhere in Asia; however, there is no concrete evidence to support that. It is known that falcons were given as presents to Chinese princes as early as 2200 BC, but these may have been pets rather than hunting birds. Modern falconry, particularly as practiced in North America, has elements of many ancient practices, yet looks modern in many other ways. The modern falconry lifestyle is varied, yet the integration of the people with their raptors is common through all practices. A Falconry Timeline A special thanks to Noriko Otsuka for her information on Japanese falconry history and to historian David Zincavage for his review of some of this data.
- 722-705 BC - An Assyrian bas-relief found in the ruins at Khorsabad during the excavation of the palace of Sargon II depicts falconry. A. H. Layard's statement in his 1853 book Discoveries in the Ruins of Nineveh and Babylon is "A falconer bearing a hawk on his wrist appeared to be represented in a bas-relief which I saw on my last visit to those ruins."
- 680 - Chinese records describe falconry
- E. W. Jameson suggests that evidence of falconry in Japan surfaces
- 4th Century BC - A gold coin pictures Alexander the Great with a hawk on his fist. It is assumed that the Romans learned falconry from the Greeks, although it was uncommon; there are accounts of Caesar using trained falcons to destroy pigeons carrying messages
- 384 - Aristotle and other Greeks made references to falconry
- 200 BC - Japanese records note falcons given to Chinese princes
- 355 AD - The Nihon-shoki, a historical narrative, records falconry. It is said that the first Japanese falconer was a woman named Kochiku, and her only daughter was also a falconer.
- 500 - E. W. Jameson pins the earliest actual evidence of falconry in Europe to a Roman floor mosaic of a falconer and his hawk hunting ducks.
- 600 - Germanic tribes practiced falconry
- 8th and 9th century and continuing today - Falconry flourished in the Middle East
- 9th century - Japanese records mark the presence of women falconers
- 875 - Falconry was widely practiced in Western Europe and Saxon England; Crusaders are credited with bringing falconry to England and making it popular in the courts
- 1066 - Normans wrote of the practice of falconry; following the Norman conquest of England, falconry became even more popular. The Norman word 'fauconnerie' is still used today.
- 1600s - Dutch records of falconry; the Dutch city of Valkenswaard was almost entirely dependent on falconry for its economy
- 1801 - James Strutt of England writes, "the ladies not only accompanied the gentlemen in pursuit of the diversion [falconry], but often practiced it by themselves; and even excelled the men in knowledge and exercise of the art."
- 1934 - The first US falconry club, The Peregrine Club, is formed; it subsequently died out during World War II
- 1961 - The North American Falconers Association (NAFA) is formed
- 1970 - The Peregrine Fund is founded, mostly by falconers, to conserve raptors, with a focus on Peregrines

Between 500 AD and 1600 AD falconry was an incredibly popular sport, art, and pastime in Western Europe, as popular throughout society as golf is today. Just about any historical figure who could be named during this time had an association with falconry or dabbled in it. Falconry still has strength throughout the Middle East and Asia, but its popularity declined throughout Europe during the 18th century with the invention of the gun and land restrictions. Perhaps the most famous falconer was Frederick II of Hohenstaufen, Holy Roman Emperor, King of Sicily and King of Jerusalem. He was such an avid falconer that he wrote a comprehensive book, De Arte Venandi cum Avibus (The Art of Hunting with Birds or The Art of Falconry), taking over 30 years to complete. As he was such an opponent of the Church, he did not receive much credit for his work for many years; his writings were even prohibited. Finally published in 1596, it was only "discovered" by ornithologists in 1788. This was one of the first scientific works and laid the foundation for ornithology. Frederick was so devoted to the sport that he was said to have lost a military campaign when he opted to go hawking instead of maintaining a fortress siege. Pope Leo X was a frequent hunter with his birds. The Bayeux tapestry depicts King Harold taking a falcon and hounds on his visit to William of Normandy, and the two are known to have hawked together during this meeting. William brought Flemish falconers with him when he conquered England. Albertus Magnus, the Catholic saint, wrote extensively on falcons and falconry. His real name was Albert von Bollstädt, and he was a teacher and doctor at Cologne and also a Dominican friar. As a chemist he was the first to make arsenic in its free form. There is some speculation that Genghis Khan and Attila the Hun were also falconers. Genghis Khan's grandson Kublai Khan most definitely was a falconer. Marco Polo wrote of him, "takes with him full 10,000 falconers and some 500 gerfalcons, besides peregrines, sakers, and other hawks in great numbers, and goshawks able to fly at the water-fowl..." Nobility and falconry were synonymous for centuries.
Some of the ruling class who were avid falconers were:
- Empress Catherine of Russia - her favorite falcon was the Merlin
- Mary, Queen of Scots - was allowed to fly a Merlin from her window during her imprisonment
- Edward III of England - during the invasion of France, he brought 30 falconers and 70 foxhounds to occupy his knights between campaigns
- Ethelbert II of England - likely the first English king to be a falconer
- Alfred the Great - also wrote on falconry
- King Henry - called Henry the Fowler for his love of falconry
- Canute the Great - King of England
- Edward the Confessor - King of England
- Athelstan of England
- Henry VII
- Henry VIII - had very elaborate mews built where the National Gallery in Trafalgar Square stands today
- Francis I of France - kept 300 falcons and 50 masters of falconry
- Queen Elizabeth I - one source claims she had a woman Grand Master of Falconry, Mary of Canterbury
- Maximilian I - Holy Roman Emperor
- King Richard - took his birds with him on the Crusades; when he was captured, part of his ransom was 2 white Gyrfalcons
- King John - had a passion for crane hawking with a cast of Gyrfalcons, which were a gift from the King of Norway
- James I - commissioned the translation of the Bible into English; a falconer, but also experimented with cormorants and osprey to take fish; kept white-tailed sea eagles for hunting teal
- James IV - ran large, organized hunts on horseback; believed to have spent 1,000 pounds on a pair of Gyrfalcons from Scotland
- Henry II - favorite birds were eyass Peregrines from Ramsey Island (Wales); he and his nobles were known to bring their hooded birds to the table during meals
- Charlemagne - believed all gentlemen should be trained in falconry
- Ottoman Sultan Beyazid - kidnapped the son of Philip the Bold and turned down a ransom of 200,000 gold ducats, accepting instead 12 white Gyrfalcons and a jeweled gauntlet paid for by Carl VI of France
- King Cardoman
- Edward III
- Edward IV
- Edward the Black Prince - took 30 falconers with him when he invaded France

Other famous falconers, or falconers well-known outside the falconry community, are:
- Heinz Meng - one of the first to breed the peregrine falcon in captivity; credited with being one of the most influential people in saving the Peregrine from extinction and one of the most influential figures of the 20th-century environmental movement; developed a style of perch known as the Meng perch
- Pedro Lopez de Ayala - Spanish statesman, historian, and poet
- Gace de la Bigne
- Juliana Berners - prioress of Sopwell nunnery near St Albans, wrote the Boke of St Albans
- George Turberville - English poet
- Symon Latham - author
- Edmund Bert - English author of Treatise of Hawks and Hawking (1619)
- Colonel Thomas Thornton - noted sportsman and founding President of The Confederate Hawks of Great Britain
- Gerald Lascelles - Deputy Surveyor of the New Forest; instrumental in keeping the sport of falconry alive in Britain as Secretary of the Old Hawking Club through the 19th century
- Gilbert Blaine - author
- Frank Beebe - Canadian artist and naturalist; favorite bird is a tiercel gyrfalcon
- Frank and John Craighead - American naturalists
- Field Marshal Hermann Goering - Commander-in-Chief of the Luftwaffe, President of the Reichstag, Prime Minister of Prussia
- Jack Mavrogordato - Attorney-General in the Sudan; expert with both shortwings and longwings
- James Robertson Justice - British actor; enjoyed grouse hawking
- T.H. White - author (The Once and Future King, The Goshawk)
- Philip Glasier - British naturalist
- Edward Blair Michell - barrister (at one time legal adviser to the King of Siam) and author; possibly the greatest authority on Merlins until his death in 1926
- Capt. Guy Aylmeri - falconer who developed the revolutionary two-piece jess system

Sources:
Ancient Falconry http://www.firstscience.com/SITE/ARTICLES/dobney.asp
Sports and Pastimes of the People of England http://www.sacred-texts.com/neu/eng/spe/spe06.htm
Native American communities actively managed North American prairies for centuries before Christopher Columbus and his ilk arrived in the New World, according to a new study. Fire was an important indigenous tool for shaping North American ecosystems, but the relative importance of indigenous burning versus climate on fire patterns remains controversial in scientific communities, researchers say. As reported in the Proceedings of the National Academy of Sciences, researchers found that, contrary to popular thinking, burning by indigenous hunters combined with climate variability to amplify the effects of climate on prairie fire patterns. “The important contribution of this research to paleoenvironmental science is a demonstration of the impact that relatively small groups of mobile hunter-gatherers could have on amplifying the broader climatic effect on wildfires,” says coauthor María Nieves Zedeño, a professor in the School of Anthropology at the University of Arizona. “We have added a new human dimension to the discussion of interactions between people and climate by actually going back in time and showing how mobile hunter-gatherers manipulated the environment by improving the grassland through fire.” The relative importance of climate and human activities in shaping fire patterns is often debated and has implications for how we approach fire management today, researchers say. “While there is little doubt that climate plays an important top-down role in shaping fire patterns, it is far less clear whether human activities—including active burning—can override those climate influences,” says lead author Christopher Roos, an associate professor of anthropology at Southern Methodist University. “Too often, if scientists see strong correlations between fire activity and climate, the role of humans is discounted.” Anthropologists and historians have documented a wide variety of fire uses by Native peoples in the Americas, but fire scientists have also documented strong fire-climate relationships spanning more than 10,000 years. “People often think that hunter-gatherers lived lightly on the land,” says coauthor Kacy L. Hollenback, an assistant professor at Southern Methodist University. “Too often, we assume that hunter-gatherers were passive in their interaction with their environment. On the Great Plains and elsewhere, foragers were active managers shaping the composition, structure, and productivity of their environments. “This history of management has important implications for contemporary relationships between Native American and First Nations peoples and their home landscapes—of which they were ecosystem engineers.” Off the cliff Working in partnership with the Blackfeet Tribe in northern Montana, researchers combined landscape archeology and geoarcheology to document changes in prairie fire activity in close spatial relationship to hunting features known as drivelines. Drivelines consisted of a series of rock piles, spaced a few steps apart and arranged in a funnel-like shape up to five miles long, which were used to drive herds of bison off cliffs to be harvested en masse. “We surveyed the uplands for stone features that delineate drivelines within which bison herds would be funneled toward a jump,” Zedeño says.
“By radiocarbon dating prairie fire charcoal deposits from the landscape near the drivelines, we were able to reconstruct periods of unusually high fire activity that are spatially associated with the drivelines,” Roos says. The overlap between peak periods of driveline use, between about 900 and 1650 CE, and prairie fire activity, between 1100 and 1650 CE, suggests that fire was an important tool in the hunting strategy involving the drivelines. The researchers suggest that hunters used fire to freshen up the prairie near the mouth of the drivelines to attract herds of bison, who prefer to graze recently burned areas. Episodes of high fire activity also correspond to wet climate episodes, when climate would have produced abundant grass fuel for prairie fires. The absence of deposits indicating high prairie fire activity before or after the period of driveline use, even though comparable wet climate episodes occurred, suggests that burning by Native hunters amplified the climate signal in prairie fire patterns during the period of intensive bison hunting. “We need to consider that humans and climate have more complicated and interacting influences on historical fire patterns,” Roos says. “Moreover, we need to acknowledge that hunter-gatherers can be active influences in their environments, particularly through their use of fire as a landscape tool. “We expect that future studies of human/climate/fire interactions will further document the complexity of these relationships. Understanding that complexity may prove important as we try to navigate the complex wildfire problems we face today.” Source: University of Arizona
Groundbreaking advancements in the realm of space engineering may soon see the moon sown with the first gardens to grow on the lunar surface. As part of the Google Lunar X Prize, Paragon Space Development Corporation has recently teamed with Odyssey Moon to develop a pressurized mini greenhouse to deploy on the surface of the moon, grow a plant from seed, and hopefully see it flower and seed itself. It's a complicated endeavor, but it marks a critical stage of development for extending life beyond the confines of our planet. In order to successfully grow a plant on the moon, Paragon has developed a very specialized greenhouse that can safely contain a plant and provide it with all the elements it needs to survive. The greenhouse will need to protect the plant from the sun's intense rays while providing it with enough water, balanced soil, and carbon dioxide, and removing its waste oxygen. They are basically creating a space suit for the plant. For this trial, Paragon has chosen a species of Brassica (a genus in the mustard family), due to its quick growth and the abundance of knowledge about the plant. A typical Brassica needs 14 days of light in order to grow, flower, and then set seed. A lunar day is 14 Earth days long, so if the landing is timed perfectly, it will allow just enough time for the plant to grow to maturity and possibly re-seed. That is, if everything goes as planned on the Lunar Oasis Lander, which Paragon and Odyssey Moon are developing. Growing a plant in a controlled environment on the moon will be a groundbreaking development, because this is a crucial step toward colonizing life outside of the Earth's atmosphere. Whatever you think about expanding beyond our own limits, the technology to be able to sustain life in such harsh conditions is pretty incredible; it would be amazing to see the accelerated footage of a plant growing in space. The plant growth payload is just one aspect of the Lunar Oasis Lander being developed for the Google Lunar X Prize. The competition will award $30 million to the first private company to land a craft on the moon by 2012. As part of the requirements, the craft must safely land, send live video feed back to Earth, travel at least 500 meters across the surface, send more video, and carry a payload. Paragon is specifically responsible for the plant payload as well as the lander design and thermal control systems. Interestingly enough, the CEO of Paragon, Taber MacCallum, and his wife Jane Poynter are experts in closed biological systems; they were two of the eight people who spent two years inside Biosphere 2 in Arizona. Tip via Starkadhr
Scientific Methods used in Criminology and Criminal Justice

The scientific method is applied to garner valuable information from physical evidence taken at crime scenes. DNA samples from hair or body fluids, fingerprint analyses, weapons, and clothing fibers are studied by forensic experts using scientific methods. Forensic scientists are responsible for properly storing evidence, however minute or major it may be; analyzing and classifying it; documenting their observations scrupulously; and taking precautions to ensure that evidence is not tampered with or improperly handled. Because of their specific expertise, they are frequently summoned to court to provide expert testimony on crime lab methodologies or on their scientific observations relating to a criminal trial. Forensic scientists' work involves close coordination with law enforcement, attorneys, and medical examiners, besides allied personnel. Contrary to what is shown on TV, only a few forensic scientists visit crime scenes; most work in forensic laboratories, and others work in universities, hospitals, and morgues. Science and technology play major roles in the development of crime scene examination as well as criminal justice. Scientific methods are used to reconstruct a crime scene and analyze physical evidence, which helps to reduce misapprehension and manual inaccuracies.

Fingerprint identification methods have been used by crime investigators to identify suspected criminals as well as the victims of crime. Weapons and objects at crime scenes often carry the fingerprints of the people who were present. Since fingerprint patterns are unique to each individual, they are matched to identify the criminal.

Deoxyribonucleic acid (DNA) is a nucleic acid that contains the biological and genetic information of all known living organisms. DNA samples can be collected from blood, tissue, hair, and other biological material. Crime scenes are searched for DNA samples for various purposes, such as, in some cases, identifying a dead body.

Gunshot residue is composed of burnt and unburnt particles from the explosives, along with components from the bullet, the cartridge case, and the firearm used. While investigating a crime, forensic experts apply gunshot residue (GSR) kits to determine elements such as the composition of the particles and the patterns revealed by the deformation of bullets.

Bloodstain Pattern Study: Forensic experts study bloodstain patterns to work out the circumstances under which an object was used to strike the victim. The angle and direction of a bullet's path, and the distance from the source to the target, can often be inferred from the bloodstains spread in the vicinity of the crime.

These scientific methods evolved from the work of many professionals over the centuries. They are applied to observe, think, and finally resolve problems in a systematic way. Five main steps are involved in the scientific method:
- defining the problem
- forming a hypothesis
- collecting data by observation and experiment
- analyzing and interpreting the data
- drawing conclusions

Criminal Justice Research and its Objectives: The criminal justice system is all about people and how they interact with the social norms that define the human behavior society will accept.
However, society's acceptance of a given behavior or attitude is never universal. The causes and effects of social problems on society generally become noticeable through the criminal justice system. The criminal justice system brings one into contact with people of various ethnic, cultural, and religious backgrounds. Understanding why people act the way they do is crucial to determining the best way to address behavior within a socially acceptable framework. Generally, a reason can always be traced for every situation if proper observation is made in the proper context. The essence of social study in the criminal justice field is the careful scrutiny of normal behavior, and that scrutiny explains the reason for carrying out research on criminal justice. Criminal justice research is much like defusing a bomb: one has to understand the bomb's components and their functions, and then work out the best possible way to defuse it with the least impact on life and property. Scientific methods in criminal justice make it possible to understand social science research methods and to evaluate and conduct research on crime and criminal justice problems. Various data collection techniques are used in criminological and criminal justice research. An overview of criminal justice management covers the theory and practice of management and policy, commencing with an introduction to organizational and bureaucratic theories, scientific management, and human relations, along with the temperamental approach, with core emphasis on the application of each theoretical perspective to criminal justice agencies.
Taissumani, Jan. 17 Who Were the Kinipetu? Whaling and early anthropological literature abounds with references to a group of Inuit called the Kinipetu. The word was used by whalers, explorers, some anthropologists, and early policemen. As is often the case with early renditions of Inuit terms, it was spelled in various ways. But in fact there was never any such tribal designation as Kinipetu. It refers to a group that does not, and never did, exist by that name. The word comes to us as a misunderstanding, but one that was repeated a number of times. The people called Kinipetu were in fact a group of Inuit called the Qairnirmiut, people who traditionally lived between Baker Lake and the seacoast near Chesterfield Inlet. In his anthropological classic, The Central Eskimo, published in 1888, Franz Boas referred to these Inuit as the Kinipetu or Agutit. He gave no explanation of the latter term and it did not appear again in his writings. Another writer in 1910 wrote, “The people here are called by the rest, Kinnepetu, which may be Englished [sic] ‘Damp Place People’.” So how did this misunderstanding come about? And what did early white visitors to the Kivalliq region think it meant? As is often the case, an explanation comes to us from the ethnographer and explorer Knud Rasmussen. During his travels in the region in the early 1920s, he met an old man, Auruattiaq, who provided an explanation for the term, which Rasmussen paraphrased as follows: “One summer at Marble Island his mother had been on board a whaling ship, and her clothing was wet. The rain was pouring down, and she had pointed to her clothing, which was dripping wet, saying: ‘kinipatoot’: ‘See how wet they are’; she had said this in order that she might be allowed to dry them at the white men’s stove, but the word, which no one understood, has since been taken to be the name of her tribe, and this is why all white men call the Qaernermiut the Kinipetu.” Rasmussen noted that the Qairnirmiut (his spelling was Qaernermiut) were formerly inland dwellers who had modified their settlement patterns to benefit from proximity to the whalers on the west coast of Hudson Bay, in the process becoming “skilful sailors in whale boats.” He observed also that they were “famous for their great skill at treating caribou skins and for the beautiful and festively trimmed clothing they nearly always wear.” The word kinipavoq appears in the “List of Words” that Rasmussen included in his well-known Fifth Thule Expedition reports. The meaning is given as “is soaked through.” He gives its Greenlandic equivalent as qauserpoq, which is also its equivalent in Baffin Island. “It is this word that, owing to a misunderstanding, has given the Qaernermiut the name of Kinepetu,” Rasmussen concludes. The well-known whaler, Captain George Comer, who spent many years among the Inuit near Repulse Bay, called these Inuit the Kenepetu, but also referred to them by a wildly inaccurate spelling of their correct name, Kiackennuckmiut. Boas, writing again about the Inuit in 1901, and having learned from Comer, called them Kinipetu, but noted that “their proper name is Kiaknukmiut.” Rasmussen’s colleague, Kaj Birket-Smith, noted that Inuit themselves never used the term. It was used only by white men. He ridiculed the Canadian explorer, A. P. Low, saying, “Low calls them Kenipitumiut, which is pure nonsense, as the suffix [-mio] presupposes a local designation,” and suggested that the term should be “expunged from scientific terminology,” and replaced with Qaernermiut.
He glosses the meaning of that term as “the dwellers of the flat land.” Perhaps the last word on this peculiar term should go to the Inuit. In 1994, Dorothy Harley Eber, a writer to whom northerners owe a large debt of gratitude for her writings on Inuit in the whaling era, published an article, “A Feminine Focus on the Last Frontier,” about the Inuit photographs taken by Geraldine Moodie in 1903 and 1904. She recounted a 1987 visit to Joan Attuat, a respected elder in Baker Lake, who had been born at Cape Fullerton: “According to Attuat, it was the womenfolk of her family who were responsible for her people becoming known as the Kenepetu. Her grandmother Kookoo, and probably her great-grandmother Silu with her daughters, were out fishing. Hearing music, the women went aboard a whaling ship. ‘It was raining, and Kookoo was wet, and somebody called her over and said, ‘Come over here and dance with me,’ but she had a knapsack she was taking off and said, ‘Oh, just a minute! Kenepetu! I’m all wet.’ They nicknamed her Kenepetu and that way they all became Kenepetu.’”
“They display an extraordinary interest in the writings of the ancients, singling out in particular those which make for the welfare of soul and body” (Josephus, Jewish War II, viii, 6). The sectarians attached supreme importance to the study of the Scriptures, to biblical exegesis, to the interpretation of the law (halakha), and to prayer. The hundreds of scrolls discovered at the site and the rules of the Community preserved in them indicate that they took the biblical injunction, “Let not this Book of the Teaching cease from your lips, but recite it day and night” (Joshua 1:8), quite literally. Their laws enjoined them to ensure that shifts of community members be engaged in study around the clock, in order to reveal the “divine mysteries” of the law, history, and the cosmos. The sectarians’ scribal and literary activities apparently took place in several rooms in the communal center at Khirbet Qumran, mainly in the “scriptorium” on the upper floor. Most of the scrolls were written on parchment, with a small number on papyrus. The scribes used styluses made from sharpened reed or metal, which were dipped into black ink – a mixture of soot, gum, oil, and water. Inscribed bits of leather and pottery shards found at the site attest to the fact that they practiced before beginning the actual copying work. Most of the Hebrew and Aramaic scrolls found at Qumran were written in “Jewish” or square script, common during the Second Temple period. A few scrolls, however, were written in ancient Hebrew script, a very small number in Greek, and fewer still in a kind of secret writing (cryptographic script) used for texts dealing with mysteries that the sectarians wished to conceal. Scholars believe that some of the scrolls were written by the community scribes, but others were written outside of Qumran. “Being versed from their early years in the holy books [and] various forms of purification . . .” (Josephus, Jewish War II, viii, 12) All the books of the Hebrew Bible, except for Nehemiah and Esther, were discovered at Qumran. In some cases, several copies of the same book were found (for instance, there were thirty copies of Deuteronomy), while in others, only one copy came to light (e.g., Ezra). Sometimes the text is almost identical to the Masoretic text, which received its final form about one thousand years later in medieval codices; and sometimes it resembles other versions of the Bible (such as the Samaritan Pentateuch or the Greek translation known as the Septuagint). Scrolls bearing the Septuagint Greek translation (Exodus, Leviticus) and an Aramaic translation (Leviticus, Job) have survived as well. The most outstanding of the Dead Sea Scrolls is undoubtedly the Isaiah Scroll (Manuscript A) – the only biblical scroll from Qumran that has been preserved in its entirety (it is 734 cm long). This scroll is also one of the oldest to have been preserved; scholars estimate that it was written around 100 BCE. In addition, among the scrolls are some twenty additional copies of Isaiah, as well as six pesharim (sectarian exegetical works) based on the book; Isaiah is also frequently quoted in other scrolls. The prominence of this particular book is consistent with the Community’s messianic beliefs, since Isaiah (Judean Kingdom, 8th century BCE) is known for his prophecies concerning the End of Days. Apocrypha in the Scrolls “Against them, my son, be warned!
The making of many books is without limit” (Ecclesiastes 12:12) Besides the biblical books, there are many other literary works of the Second Temple period which, for religious and other reasons, were forbidden to be read (in public?) and were therefore not preserved by the Jews. Ironically, many of these works were preserved by Christians. Apocryphal books such as Tobit and Judith were preserved in Greek in the Septuagint translation of the Bible, and in other languages based on this translation. Pseudepigraphical books (attributed to fictitious authors) were preserved as independent works in a variety of languages. The Book of Jubilees, for example, survived in Ge’ez (classical Ethiopic), and the Fourth Book of Ezra survived in Latin. These apocryphal and pseudepigraphical books were cherished by the members of the Judean Desert sect. Prior to the discovery of the Dead Sea Scrolls, some of the books had been known only in translation (such as the book of Tobit and the Testament of Judah), while others were altogether unknown. Among these are rewritten versions of biblical works (such as the Genesis Apocryphon), prayers, and wisdom literature. In some cases, several manuscripts of the same work were discovered, indicating that the sectarians highly valued these compositions and even considered a few of them (such as the First Book of Enoch) as full-fledged “Holy Scriptures.” Sectarian Scrolls: The Pesharim “Being versed from their early years in . . . apophthegms of the prophets; and seldom if ever do they err in their predictions” (Josephus Jewish War II, viii, 12) The Bible was the basis for the intellectual and spiritual experience of the members of the Qumran Community, and the purpose of its interpretation was “to do what is good and right before Him as He commanded by the hand of Moses and all His servants the prophets” (Community Rule 1:1–3). The exegetical works written by the sectarians deal with the interpretation of the laws of the Pentateuch (such as the Temple Scroll), of various biblical stories (such as the Testament of Levi), and, in particular, of the words of the Prophets. The method of biblical interpretation known as pesher is unique to Qumran. The pesharim may be divided into two types: those dealing with a specific subject (such as 4QFlorilegium), and those written as running commentaries. In pesharim of the second type, the biblical text is copied passage by passage in the original order, and each passage is explained by turn. Most of the “running” pesharim, of which there are about seventeen, are based on books of the Prophets, such as Isaiah, Nahum, or Habakkuk; there is also one pesher on the book of Psalms, which the Community also regarded as a prophetic work. The interpretations themselves are prophetic in nature and allude to events related to the period in which the works were composed (hence their importance for historical research). With a few exceptions, they name no historical personalities, but employ such expressions as “Teacher of Righteousness,” “Priest of Wickedness,” or “Man of Falsehood.” The Community Rule: The Sect’s Code “They live together formed into clubs, bands of comradeship with common meals, and never cease to conduct all their affairs to serve the general weal” (Philo, Apologia pro Iudaeis 11.5) Prior to the discovery of the Dead Sea Scrolls, the only evidence of the Essenes’ way of life was provided by classical sources (Josephus Flavius, Philo, and Pliny the Elder) and by a few allusions in rabbinic literature.
The discovery of the scrolls allowed a rare first-hand glimpse of the lives of those pietists, through the “Rule” literature that governed their lives. This literature, later to evolve in a Christian monastic context, is unknown in the Bible, and its discovery at Qumran represents the earliest testimony to its existence. The work known as the “Community Rule” is considered a key to understanding the Community’s way of life, for it deals with such topics as the admittance of new members, rules of behavior at communal meals, and even theological principles. The picture that emerges from the scroll is one of a community that functioned as a collective unit and pursued a severe ascetic lifestyle based on stringent rules. The scroll, written in Hebrew, was found in twelve copies; the copy displayed in the Shrine of the Book, which is almost complete, was discovered in 1947. The Temple Scroll “They shall not profane the city where I abide, for I, the Lord, abide amongst the children of Israel for ever and ever” (Temple Scroll XLV: 13–14). The Temple Scroll, which deals with the structural details of the Temple and its rituals, proposes a plan for a future imaginary Temple, remarkably sophisticated, and, above all, pure, which was to replace the existing Temple in Jerusalem. This plan is based on the plan of the Tabernacle and of Solomon and Ezekiel’s Temples, but it is also influenced by Hellenistic architecture. The scroll is written in the style of the book of Deuteronomy, with God speaking as if in first person. Some authorities consider it an alternative to the Mosaic Law; others, a complementary legal interpretation (midrash halakha). This work combines the various laws relating to the Temple with a new version of the laws set out in Deuteronomy 12–23. Its author probably belonged to priestly circles and composed it at a time before the Community left Jerusalem for the desert, in the second half of the second century BCE. It was apparently written against the background of the controversy centering on the Temple in Jerusalem. The Temple Scroll, the longest of the Dead Sea Scrolls (8.148 m), comprises 66 columns of text. Prayers, Hymns, and Thanksgiving Psalms The profoundly religious, reclusive community living at Qumran devoted all its energies to the worship of God. The sectarians believed that the angels were their companions and that their spiritual level elevated them to the border between the human and the divine. The atmosphere of sanctity that enveloped them is evident from the one hundred biblical psalms and more than two hundred extra-biblical prayers and hymns preserved in the scrolls. Most of the latter were previously unknown; they include prayers for different days (even the End of Days), magical spells, and so forth. Among this abundance of literary texts is a unique genre of hymns called hodayot or “Thanksgiving Hymns,” on the basis of their fixed opening formula, “I thank Thee, O Lord.” Scholars have divided the eight manuscripts of the Thanksgiving Hymns into two main types: “Hodayot of the Teacher,” in which an individual (the sect’s “Teacher of Righteousness”?) thanks God for rescuing him from Belial (Satan in the sect’s writings) and the forces of evil, and for granting him the intelligence to recount God’s greatness and justice; and “Hodayot of the Community,” hymns concerned with topics relevant to the Community as a whole.
Both types extensively employ such terms as “mystery,” “appointed time,” and “light” and express ideas characteristic of the Community’s beliefs, such as divine love and predestination. The End of Days: The “War of the Sons of Light and the Sons of Darkness” “This is the day appointed by Him for the defeat and overthrow of the Prince of the kingdom of wickedness” (War of the Sons of Light and the Sons of Darkness XVII:5–6) The members of the Community of the yahad retired to the desert out of a profound conviction that they were living in the End of Days and that the final Day of Judgment was close at hand. They believed that all the stages of history were predetermined by God, and thus any attempt by the forces of the “Prince of Darkness” and “all the government of sons of injustice” to corrupt the “Sons of Righteousness” was destined to fail; salvation would ultimately arrive, as we read in Pesher Habakkuk (VII:13–14): “All the ages of God reach their appointed end as He determines for them in the mysteries of His wisdom.” The sectarians divided humanity into two camps: the “Sons of Light,” who were good and blessed by God – referring to the sectarians themselves; and the “Sons of Darkness,” who were evil and accursed – referring to everyone else (Jews and gentiles alike). They believed that in the End of Days these two camps would battle each other, as described in detail in the scroll now known as “The War of the Sons of Light and the Sons of Darkness.” This work, which provides a detailed account of the mobilization of troops, their numbers and division into units, weaponry, and so forth, states that at the end of the seventh round of battles, the forces of the “Sons of Light,” aided by God Himself and His angels, would vanquish the “Forces of Belial” (as Satan is called in the sect’s writings). Only then would the members of the Community be able to return to Jerusalem and engage in the proper worship of God in the future Temple, which would meet with the stringent requirements set out, for example, in the scroll known as “The New Jerusalem.”
Sometimes known as "nature's origami", the way that proteins fold is vital to ensuring they function correctly. But researchers at the University of Leeds have discovered this is a 'hit and miss' process, with proteins potentially folding wrongly many times before they form the correct structure for their intended purpose. The body's proteins carry out numerous functions and play a crucial role in the growth, repair, and workings of cells. Sheena Radford, Professor of Structural Molecular Biology at the University of Leeds, says: "There's a fine balance between a protein folding into the correct shape so that it can carry out its job efficiently and it folding incorrectly, which can lead to disease. Just one wrong step can tip that balance." Proteins are made of amino acids arranged in a linear chain, and the sequence of these amino acids is determined by the gene producing them. How these chains of amino acids are preprogrammed to fold into their correct protein structure is one of the mysteries of life. The culmination of many years' work, the collaborative study looked at the Im7 protein, a simple protein which is present in bacteria and has a crucial role to play in ensuring that bacteria do not kill themselves with the toxins they produce. "Im7 is like an anti-suicide agent," says Professor Radford. "We studied it partly because of its simplicity and partly because of the known evolutionary pressure on the protein to fold correctly to enable the bacteria to survive." The study has revealed that these proteins misfold en route to their intended structure and, importantly, has shown the forces at work during the folding process. While the chain of amino acids determines which shape a protein needs to take, the researchers discovered that it was the very amino acids central to the protein's function that were causing the misfolding. "This breakthrough could have huge implications for understanding the…"
Spaghetti with the Yeti Interactive Read-aloud. In the third lesson, we will answer the Integration of Knowledge and Ideas Core Standard's text-dependent questions. To learn more about text-dependent questions, visit this blog post here. Then students will come up with their own creature and what it would like to eat. We came up with a Sasquatch who only eats squash. The students will really get to be creative with their own creatures. The lesson will be completed as students create a food craft. Students can make spaghetti for the Yeti, or they can make the food to go with their own story. We used paper, yarn, glue, and fuzz balls for this craft. Students can use similar materials to make their own food.
On February 22, 2017, NASA broke the news that a star system had been discovered with seven Earthlike planets. Their Spitzer Space Telescope revealed the discovery of the seven planets orbiting a single star. At least three of the planets lie in the star's habitable zone, where conditions could allow liquid water and, possibly, life. According to NASA, this "discovery now sets a new record for the greatest number of habitable-zone planets found around a single star outside our solar system." The Earthlike planets orbit Trappist-1, a nearby dwarf star 39 light-years away. Trappist-1 is an ultra-cool dwarf star; unlike our sun, planets orbiting close to it can still have liquid water. According to the Daily Mail, no previously discovered star system has been found to have such a large number of Earth-sized planets. It is believed that these planets have rocky compositions like Earth. They are roughly the same size as our planet, and six have surface temperatures between 32 and 212 °F. Researchers are suggesting that three of the newly found planets could have oceans of water, with life perhaps already evolving on them. It is also being suggested that we may have proof of alien life within the next 10 years.
Doctor insights on: What Is The Difference Between Pink Eye And A Sty

Pink Eye (Conjunctivitis) (Definition): "Pink eye" refers to a viral infection of the conjunctiva. These infections are especially contagious among children. Newborns can be infected by bacteria in the birth canal. This condition is called ophthalmia neonatorum, and it must be treated immediately to preserve eyesight.

Considerable: A stye is like a pimple of the glands in the lids, causing pain, local swelling, and discharge. It is in the skin of the lid and not the eye. Pink eye is usually a viral infection of the lining of the eye, causing redness, discharge, and occasionally lowered vision. It is on the surface of the eye and not the lid.

Different locations: Pink eye refers to an infection in the conjunctiva, the clear membrane over the white of your eye. A sty is an infection in one of the glands in the eyelid.

The area under my left eye is red, swollen and tender to touch. I do not believe it is a sty or pink eye. What could this be?

Inflamed skin: If skin is red, swollen, and tender to touch, it is inflamed. The most concerning cause of inflammation is infection, so keep an eye out. Chronic irritation from rubbing or watering eyes can do it. Bug bites can do it. Sometimes allergies can do it, although these usually itch more than hurt until they get infected. You might also be in the early stage of a stye. Keep an eye out for a bump on the eyelid.

I woke up with my right eyelid swollen and a reddish purple. No sty, no pink eye, and I haven't been hit, and it hurts. Please help!

Styes occur: Any time; conjunctivitis could potentially cause a stye to form.

Better be checked: If you're concerned about your eye (a stye is still not a common occurrence), see your doctor or eye doctor.

I have had a stye in my eye for 3 weeks and have now developed pink eye in the same eye. What should I do?

I have pink eye and a stye in my eye at the same time. Can this be dangerous?

Not additive: These by themselves are uncomfortable but rarely ever harmful. If both occur at the same time, that compounds the discomfort but does not make either condition worse. There is no especial danger in these occurring together.

Have a stye or pink eye in right eye, swollen up suddenly under the bottom lid. Is there a quick home cure to help it heal fast?

Warm compresses: Hard to say for sure from your description what exactly is the problem, but it certainly could be a stye or chalazion. It is ok to try a washcloth soaked in very warm water as a warm compress a few times a day, for about 5-10 minutes at a time. If there is no improvement within a couple of days, or worsening at any time, you should see an ophthalmologist promptly.

I am 24 and have been experiencing pain in the back of my right eye. No pink eye or stye present. No sinus infection. Ideas?

See ophthalmologist: Your vision is too precious to fool with, and you should make very sure very quickly that nothing is going to damage your eye. Following that, if nothing is found, it may be necessary to have an MRI with thin orbital slices done with contrast to determine if there may be a tumor in back of the eyeball.
Please go and get evaluated soon.

Have recurring styes on both eyes. No conjunctivitis, but the upper and lower lids are swollen and hurt much. Stye so large it blocks sight. What to do?

Itchy sore yellowish/grey lesion lower inside corner of eye. Not pinkeye/allergies/foreign obj/stye. Itching began yesterday, lesion this am. Cause?

Eye lesion: It is usually a bacterial infection. It needs frequent washing with water, and antibiotic eye ointment if not cleared within 24 hours.

Pink eye: A good question. Pink eye is occasionally caused by allergy or chemicals but usually spreads from hand to eye as a virus, less commonly a bacterium. They can look similar, but pink eye often goes away in 7 to 10 days, whereas allergic eye problems may persist. There is more discolored eye drainage from a viral pink eye than from an allergic conjunctivitis.
This demonstration initializes with a concave mirror and an object represented by a green arrow. Two rays are traced to show the reflection of the object. A small point moves along each ray to highlight the direction of light travel. Real rays are indicated by solid lines, and virtual rays (rays that appear to pass through the mirror surface) are drawn with dotted lines. Three important points are also highlighted: the center of curvature, the focus, and the vertex. A label for each point appears when you point at it. The focus and center of curvature may be moved to alter the shape of the mirror. A toggle button has been provided to switch between a concave mirror and a convex mirror.
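The geometry behind such a demonstration follows the standard mirror equation, 1/d_o + 1/d_i = 1/f, with the focal length f equal to half the radius of curvature. Here is a minimal sketch in Python (my own illustration, not part of the demonstration; the sign conventions are assumptions: f is positive for a concave mirror, negative for a convex one, and a negative image distance marks a virtual image behind the mirror):

```python
def image_distance(f: float, d_o: float) -> float:
    """Mirror equation 1/d_o + 1/d_i = 1/f, solved for the image distance d_i.

    An object exactly at the focus (d_o == f) sends rays out parallel,
    so no finite image distance exists in that case.
    """
    return 1.0 / (1.0 / f - 1.0 / d_o)

def magnification(d_o: float, d_i: float) -> float:
    """Lateral magnification; a negative value means the image is inverted."""
    return -d_i / d_o

# Concave mirror with radius of curvature R = 20 (focus at f = R/2 = 10):
R = 20.0
f = R / 2
d_i = image_distance(f, d_o=30.0)     # object beyond the center of curvature
print(d_i, magnification(30.0, d_i))  # 15.0, -0.5: real, inverted, reduced image
```

Dragging the focus or center of curvature in the demonstration corresponds to changing f (and hence R = 2f) in this calculation.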
It's been an exciting week for gamers with the release of the highly anticipated Fallout 4 video game, which takes place decades after a nuclear war has destroyed civilization and only those in fallout shelters have survived. But could humans really survive a nuclear apocalypse and spend years in a bunker? Scientists at the American Chemical Society (ACS) have examined the real-life challenges of living in a fallout shelter and say time, distance, and shielding will determine who lives and who dies. The key to survival is to minimise your exposure to ionising radiation: the stream of alpha particles, beta particles, and gamma rays after a nuclear attack. Those who are exposed for the shortest time, are the furthest distance away, and have the most shelter are significantly more likely to survive. Once the initial explosion is over, the biggest health risk is nuclear fallout, the radiation that spreads from debris lifted into the fireball during the explosion. Experts suggest fleeing high-population areas and locations close to military bases. "NASA has developed radiation shielding for space flights which could be adapted for a fallout shelter," says chemist Dr. Raychelle Burks. "Current research suggests that carbon nanotubes provide protection from radiation and ounce for ounce are at least a hundred times stronger than steel." Scientists also suggest that water could be purified using graphene oxide and food could be grown using a complex aquaponics system which cycles nutrients between plants and fish. Daniel Salisbury from King's College London has recently completed research on nuclear attacks and survival strategies. He suggests a simple test to determine whether you're in danger from radiation, presented in an image credited to the Centre for Science & Security Studies.
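The article gives no formulas, but the time-distance-shielding rule can be made concrete with two standard results from radiation physics: dose rate from a point source falls off with the inverse square of distance, and each half-value layer (HVL) of shielding material cuts gamma intensity in half. A minimal sketch (all numbers hypothetical, for illustration only):

```python
def relative_dose(d0: float, d: float, shield_cm: float, hvl_cm: float) -> float:
    """Fraction of a reference dose rate that remains after moving from
    distance d0 to distance d (inverse-square law for a point source) and
    adding shield_cm of material whose half-value layer is hvl_cm."""
    distance_factor = (d0 / d) ** 2
    shielding_factor = 0.5 ** (shield_cm / hvl_cm)
    return distance_factor * shielding_factor

# Hypothetical example: moving from 10 m to 100 m away and sheltering behind
# 10 cm of concrete (HVL of roughly 5 cm for high-energy gamma rays) leaves
# about 0.25% of the original dose rate.
print(relative_dose(d0=10, d=100, shield_cm=10, hvl_cm=5))  # 0.0025
```

Exposure time then scales the accumulated dose linearly, which is why all three factors matter together.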
We live in a fast-changing world where technology continually affects the way people work and interact. It is no longer a myth but a well-established fact that 90 percent of future jobs will require ICT (Information Communications Technology) skills. The challenge goes far beyond that: how can we plan educational provisions for skills and jobs that do not yet exist? How should students and learners plan their educational path in such a world? Soon, people will no longer merely need to know how to carry out a particular job. They will also need to have transversal skills that allow them to up-skill and adapt to new working conditions. Change will be constant. As such, increasing both the quality and relevance of learning is essential if Europe’s citizens are to actively contribute to its future. Rigid environments are preventing innovation In parallel, the demand for education is increasing. To satisfy the increase in worldwide demand for higher education alone, large new universities would need to be built on a weekly basis. Increasing access to education is vital for our societies. This means ensuring that more people can go to university, that they can upgrade their knowledge regardless of age, and that high-quality resources are not a privilege for the few. In short, we need access for all – open education. Open education has the potential to increase the effectiveness, equity and efficiency of the provision of education. It is for this reason that the European Commission launched the “Opening up Education” initiative, a strategy for innovative teaching and learning through new technologies and open educational resources (OER). It focuses on three main areas: 1) Creating opportunities for organizations, teachers, and learners to innovate as well as encouraging educational institutions to promote innovation. For example, teachers must be equipped with the right training and skills to make the most of technology in personalized learning. I know from my visits to schools, colleges and universities that many teachers are passionate about embedding new technologies in their teaching practices and want to work closely with their counterparts in other countries. But, in some cases, rigid environments are preventing such innovation from taking place. 2) Open educational resources (OER): these are teaching and learning materials, frequently created by teachers themselves, made available through an open license. This allows other teachers to not only use these materials, but also to adapt them either to local contexts or to allow for recent developments. A more structured use of OER could simplify the educational process, allowing teachers to focus on what should be their core activity: supporting students in learning. Open licenses will contribute to widening access to learning opportunities. For example, adults who are currently employed but don’t have the time to return to formal education to upgrade their knowledge and skills can now sign up for Massive Open Online Courses (MOOCs) without being formally enrolled in a university. 3) Connectivity and innovation: we need to ensure proper infrastructure to connect educational institutions. Schools need to have the right equipment, and they need to be able to tap into the vast potential of open educational resources and new pedagogies. They also need to be connected to the rest of the world. Far too many schools still have no or only slow Internet connections. There are ways to bring technology efficiently and flexibly into the classroom. 
For example, a “bring-your-own-device” policy might be the quickest way to enable new forms of learning, but this requires interoperable tools and materials in formats that work across devices. A call for action The potential of open education does not come without challenges. We can see certain trends which should be considered by EU member states and educational institutions when revisiting their education systems and organizational models: MOOCs and certification: students can now acquire knowledge without being enrolled in an institution, for example, through MOOCs. This creates a challenge as regards the validation and recognition of such learning. Personalized learning: in ICT-rich learning environments, a teacher can have access to more accurate information on what each individual student is learning. Different learning materials can be provided, in real time, to different students to better meet their learning needs. Quality assessment and transparency: students must have transparent information regarding the quality of different MOOCs and OER to decide how relevant they are to their needs; peer-based reviews are now being explored by many institutions as a possible solution. The openness of these resources also brings increased transparency and makes it easier to monitor the quality of the resources. Education is the most valuable investment that our society can make for its future. During my mandate as European Commissioner for Education, Culture, Multilingualism and Youth, I have had the privilege to contribute to the process of modernizing and reinvigorating Europe’s education systems. I am proud of what we have achieved and I know that the Commission will continue to ensure that open education remains a priority for the future of the European Union.