Watch this video from ESA on Newton’s 3rd Law. Answer the following questions in the comments (and/or your workbook):
- Name one of the action-reaction pairs from when the astronauts push against each other.
- If they both experienced the same force, why does the astronaut carrying the heavy battery not move as fast as his colleague? (See the sketch below.)
- Why did only one student move from the end of the Newton’s Cradle?
- What is the action-reaction pair for a rocket?
Use this quiz to check your understanding.
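If it helps to see the second question worked through numerically, here is a minimal Python sketch of Newton's 2nd law applied to the push-off; the force and masses are invented for illustration and are not taken from the video:

```python
# Minimal sketch (invented numbers): equal and opposite forces, unequal masses.
force = 50.0           # N, the equal-and-opposite push each astronaut feels (assumed)
m_light = 80.0         # kg, astronaut without the battery (assumed)
m_heavy = 80.0 + 40.0  # kg, astronaut plus the heavy battery (assumed)

# Newton's 2nd law: a = F / m, so the same force gives different accelerations.
a_light = force / m_light
a_heavy = force / m_heavy
print(f"lighter astronaut: {a_light:.2f} m/s^2")
print(f"battery-carrying astronaut: {a_heavy:.2f} m/s^2")
# The astronaut holding the battery has more mass, so he accelerates less and
# drifts away more slowly, even though the action-reaction forces are equal.
```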
This image of Martian clouds illustrates the fact that they are found only in the equatorial region. (Image: Hubble) Unlike the Earth, where clouds are found around the entire globe, on Mars clouds seem to be plentiful only in the equatorial region, as shown in this Hubble telescope image. This may be because water on Mars may only be found in equatorial regions. As early as 1796, scientists were reporting "yellow" and "white" or "bluish" clouds in the Martian atmosphere. With data from the Mariner 9 mission, scientists could finally prove that the clouds were made of water. Mars Global Surveyor is providing more proof of the existence of water clouds. More study is needed to understand just how the clouds come and go in the Martian atmosphere. For example, even though clouds have been found, there is no proof it actually rains on Mars! Precipitation of water depends upon how cold it is: the temperatures in the atmosphere may be too cold for water to fall to the ground as droplets. As a first step in answering some of those questions, Mars Pathfinder took measurements of many clouds in the Martian sky from the surface of Mars itself. Scientists are studying images of the Martian sky from the 80-day mission. The Mars '98 mission will carry a weather satellite, just like the instruments that are used to bring you the weather on the evening news, so scientists expect to receive much more complete data about Martian weather.
Let’s consider the basic physics behind each step of vacuum or siphon brewing. Initially, heat is applied to water in a globe—as shown in the lead photo—and this system is “open” to the environment. Because you are applying heat to the system, some water is converted to steam. But that water vapor is free to exit the globe. By installing the funnel, as above, you effectively “close” the system. The gasket at the neck of the funnel keeps steam from escaping the globe. As long as heat is applied to the system, water continues to be converted to water vapor. Since a gas takes up more space than a liquid, more and more water is displaced as the amount of steam in the globe increases. In effect, the water vapor pushes or “kicks up” the column of water into the funnel. This process continues until the level of the steam in the globe reaches the bottom of the neck of the funnel, as shown in the photo above. At this point, any additional water vapor escapes the system by bubbling up through the column of water or coffee slurry in the funnel. The liquid remains in the funnel as long as you maintain an adequate amount of steam pressure in the globe. A gentle simmer in the globe will do the trick, and ensure that the slurry does not overheat. When the brewing cycle is complete, you initiate “drawdown” by turning off the heat source. Once you stop applying heat to the globe, you can actually trigger drawdown by simply blowing across the globe. Cooling the globe causes water vapor to convert back to water. Because water takes up less space than water vapor, a low pressure system forms in the globe as the steam condenses, which sucks or pulls the brew in the funnel back down into the globe. This is the vacuum stage from which the brewer gets its name. If you listen very closely at the end of drawdown, you will hear a sucking sound as the partial vacuum in the globe pulls the last of the liquid out of the bed of grounds, and some air rushes in behind it to equalize the atmospheric pressure between the globe and the environment. In many cases, you will see a transient crema-like bloom on the bed of grounds, some of which gets vacuumed through the filter and into the globe. You will also notice that the seal at the neck of the funnel has been sucked more tightly into the mouth of the globe. Simply rock the funnel and pop it loose. While the eponymous “vacuum” gets all the credit, steam is actually the primary motive force behind the siphon brewing process, which is perfectly fitting given that this is a steam age brewing system.
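To put a number on "a gas takes up more space than a liquid," here is a small back-of-envelope Python sketch using the ideal gas law; the figures are my own illustration, not from the article:

```python
# Rough estimate of how much room one gram of water occupies as steam vs. liquid.
R = 8.314       # J/(mol*K), ideal gas constant
M = 0.018       # kg/mol, molar mass of water
P = 101_325.0   # Pa, roughly atmospheric pressure
T = 373.0       # K, water near its boiling point

mass = 0.001                   # kg, one gram of water
moles = mass / M
v_vapor = moles * R * T / P    # ideal-gas volume as steam, in m^3
v_liquid = mass / 1000.0       # liquid water is about 1000 kg/m^3

print(f"as steam : {v_vapor * 1e3:.2f} L")
print(f"as liquid: {v_liquid * 1e3:.3f} L")
print(f"expansion factor: ~{v_vapor / v_liquid:.0f}x")
# Condensing steam removes roughly 1700 times its liquid volume of gas from the
# sealed globe, so the pressure drops and the brew is pulled back down during drawdown.
```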
In many respects, Americans have begun to face the gruesome threads of history that are sewn into the country’s fabric. The mass genocide of indigenous peoples is generally understood to have been cruel, ruthless, murderous, and without humanity. The enslavement of African people and their descendants has been widely accepted as a despicable and vile institution that was leveraged to build the economic and physical infrastructure of the country. As of late, virtually all monuments to the Confederacy have been identified as inherently racist and rooted in the preservation of anti-black sentiment. But in the United States, there are still horrors that we’ve yet to fully grapple with as we work to confront our racial past and its effects. At the top of that list is lynching, a form of often-racialized terror where an individual or group is put to death — especially by hanging — for a perceived offense, with or without a trial. The act is usually carried out by a mob, and it happened with great frequency throughout U.S. history. Nearly 4,100 black Americans were lynched in the United States between 1877 and 1950, according to a report from the EJI, and those are just the ones on record. Most of these “racial terror lynchings,” as the EJI describes them in its report, remain undocumented because white people generally had no incentive to record the senseless, extrajudicial murders of black Americans at the hands of vigilante white mobs. Initially, many of these acts of racial terror were a direct response to the period of Reconstruction, between 1865 and 1877, when black American political, economic, and social access was temporarily invigorated. While most lynchings occurred in the South, they were also common in states such as Illinois, Indiana, Missouri, and Ohio. Read the full article at Teen Vogue.
The Georgian period of architecture and style ran from 1714 to 1830, covering the reigns of George I to George IV. The term is occasionally extended to buildings from the reign of King William IV, who ruled until 1837, though this is often called the ‘late Georgian’ period. After William’s death at the age of 71, his niece Victoria took the throne and the UK entered the Victorian age. Grand stately homes were built during this period as some families accumulated wealth, which they spent on country houses with landscaped grounds and often follies and gatehouses. Yet the most popular type of home built at this time was the townhouse. These buildings are usually now protected with listed status, and they still form large parts of the core of cities including London, Bath, Edinburgh, Dublin, Newcastle upon Tyne and Bristol.

Early Georgian architecture
The arrival of a new monarchy from Hanover in 1714 represented a major break with the past, which was reflected in new architecture for the nation's buildings. This meant a change from the Baroque style that can be found at St Paul’s Cathedral, Castle Howard and Blenheim Palace. Georgian architecture, in contrast, favours proportion and balance, symmetry and simplicity. Early Georgian architecture was in the Palladian style, based heavily on ancient Rome and inspired by the works of Andrea Palladio (1508-1580). Palladio believed there was a perfect symmetry and proportion in nature which could be replicated in buildings, and he created a set of architectural rules inspired by the buildings of ancient Rome. British designers used his rules to create Palladian exteriors that were plain and carefully proportioned, in contrast to richly decorated interiors. Palladianism was most fashionable from about 1715 to 1760, after which it fell out of favour. Characteristics included symmetry and balance, columns, scallop shell motifs, pediments and masks. Some architects did not want to be restricted by the Palladian rules, which led to neo-classical design; this still included symmetry, columns and pediments, but without the strict rules of proportion. The vase shape was an architectural motif, along with swags, festoons and classical figures. The late eighteenth and early nineteenth centuries saw the emergence of the Regency style, taking motifs from ancient Greece and Rome together with elements of nature and art from Egypt and France in a combination of colours and patterns that were incredibly rich. The Regency style can still be seen in towns and cities with elegant rows of terraced houses, including the famous Royal Crescent in Bath (built between 1767 and 1775), Regent Street in London and the Esplanade in Weymouth. The early nineteenth century began to absorb more exotic ideas alongside ancient Greek and Roman motifs. One of the best examples of this is the Royal Pavilion in Brighton, which became a curious mixture of Indian, Chinese, Tudor and Gothic styles.

Gothic and Renaissance
In the nineteenth century, Augustus Pugin, a co-architect of the Houses of Parliament, declared Gothic to be a morally superior style, and John Ruskin tried to define which styles of architecture were "truthful". Yet the styles of architecture in Britain included Moorish, Hindu, Italianate, French, Byzantine, Dutch and Grecian details applied to buildings.
By the 1880s this had been dismissively named "Renaissance bric-a-brac" by some architects, with Harrods department store being one of the most famous examples. Brunswick House has a square, symmetrical shape and is carefully proportioned according to fashionable classical design principles from the Georgian era. Some of the typical structural features for the period that are visible at Brunswick House include the following:
- The houses often had two or three storeys.
- They can be two rooms deep and symmetrical both internally and externally.
- Large, detached houses often have a panelled door in the centre of the house.
- A cellar visible below the ground floor is common and is where the kitchen is located.
- Windows in Georgian houses are often small and six-paned towards the top of the property, while there are larger nine- or even twelve-paned windows on the main floors.
- Georgian houses almost exclusively have sash windows, which slide up and down on a series of weights and pulleys. Most also originally had internal shutters.
- The roof was often hidden behind a parapet, a low wall built around the edge of the roof which makes the buildings look totally rectangular.
- The chimneys were often paired and located on both sides of the houses. This enabled fireplaces in almost every room, as coal had now largely replaced wood.
If you're thinking about hiring a venue for an event, check out our editor's picks.
Understanding pi is as easy as counting to one, two, 3.1415926535… OK, we'll be here for a while if we keep that up. Here's what's important: Pi (π) is the 16th letter of the Greek alphabet, and is used to represent the most widely known mathematical constant. By definition, pi is the ratio of the circumference of a circle to its diameter. In other words, pi equals the circumference divided by the diameter (π = c/d). Conversely, the circumference of a circle is equal to pi times the diameter (c = πd). No matter how large or small a circle is, pi will always work out to be the same number. That number equals approximately 3.14, but it's a little more complicated than that.

Value of pi
Pi is an irrational number, which means that it is a real number that cannot be expressed by a simple fraction. That's because pi is what mathematicians call an "infinite decimal" — after the decimal point, the digits go on forever and ever. When starting off in math, students are introduced to pi as a value of 3.14 or 3.14159. Though it is an irrational number, some use rational expressions to estimate pi, like 22/7 or 333/106. (These rational expressions are only accurate to a couple of decimal places.) While there is no exact value of pi, many mathematicians and math fans are interested in calculating pi to as many digits as possible. The Guinness World Record for reciting the most digits of pi belongs to Rajveer Meena of India, who recited pi to 70,000 decimal places (while blindfolded) in 2015. Meanwhile, some computer programmers have calculated the value of pi to more than 22 trillion digits. Calculations like these are often unveiled on Pi Day, a pseudo-holiday that occurs every year on March 14 (3/14).

Digits of pi
The first 100 digits of pi are: 3.14159 26535 89793 23846 26433 83279 50288 41971 69399 37510 58209 74944 59230 78164 06286 20899 86280 34825 34211 70679. The website piday.org has pi listed to the first million digits.

The life of pi
Pi has been known for nearly 4,000 years and was discovered by ancient Babylonians. A tablet from somewhere between 1900-1680 B.C. found pi to be 3.125. The ancient Egyptians were making similar discoveries, as evidenced by the Rhind Papyrus of 1650 B.C. In this document, the Egyptians calculated the area of a circle by a formula giving pi an approximate value of 3.1605. There is even a biblical verse where it appears pi was approximated: And he made a molten sea, ten cubits from the one brim to the other: it was round all about, and his height was five cubits: and a line of thirty cubits did compass it about. — I Kings 7:23 (King James Version) The first calculation of pi was carried out by Archimedes of Syracuse (287-212 B.C.). One of the greatest mathematicians of the ancient world, Archimedes used the Pythagorean Theorem to find the areas of two polygons. Archimedes approximated the area of a circle based on the area of a regular polygon inscribed within the circle and the area of a regular polygon within which the circle was circumscribed. The polygons, as Archimedes mapped them, gave the upper and lower bounds for the area of a circle, and he approximated that pi lies between 3 10/71 and 3 1/7. The symbol π was first used for the constant in 1706 by the British mathematician William Jones. Jones used 3.14159 as the calculation for pi.

Pi r squared
In basic mathematics, pi is used to find the area and circumference of a circle. The area is found by multiplying pi by the square of the radius (A = πr²).
So, to find the area of a circle with a radius of 3 centimeters, you would calculate π × 3² ≈ 28.27 cm². Because circles occur naturally in nature, and are often used in other mathematical equations, pi is all around us and is constantly being used. Pi has even trickled into the literary world. Pilish is a dialect of English in which the numbers of letters in successive words follow the digits of pi. Here's an example from "Not A Wake" by Mike Keith, the first book ever written completely in Pilish: Now I fall, a tired suburbian in liquid under the trees, Drifting alongside forests simmering red in the twilight over Europe. Now has 3 letters, I has 1 letter, fall has 4 letters, a has 1 letter, and so on, and so forth. This article was updated on Oct. 19, 2018 by Live Science Senior Writer, Brandon Spektor.
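Here is a short Python sketch of the two calculations described above, using only the standard library; the polygon bounds use trigonometry as a modern stand-in for Archimedes' geometric construction:

```python
import math

# 1) Area of a circle with radius 3 cm: A = pi * r^2
r = 3.0
print(f"area = {math.pi * r ** 2:.2f} cm^2")   # ~28.27 cm^2

# 2) Archimedes-style bounds on pi from regular polygons drawn inside and
#    outside a circle of radius 1 (half-perimeters of the two polygons).
def polygon_bounds(n):
    inscribed = n * math.sin(math.pi / n)       # lower bound on pi
    circumscribed = n * math.tan(math.pi / n)   # upper bound on pi
    return inscribed, circumscribed

low, high = polygon_bounds(96)   # Archimedes worked with 96-sided polygons
print(f"96-gon bounds: {low:.5f} < pi < {high:.5f}")
# Compare with his fractions: 3 10/71 ~ 3.14085 and 3 1/7 ~ 3.14286
```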
This artist’s concept depicts astronauts and human habitats on Mars. (Image: NASA) A new report has called for NASA to update its rules preventing the spread of viruses, bacteria, and other microorganisms to other planets in the course of space exploration. The report was written by the Planetary Protection Independent Review Board (PPIRB), a recently created board that reviews the guidelines in place for preventing contamination of other planets or bodies by human activities, as well as preventing astronauts from bringing any potential contamination from elsewhere back to Earth with them. An example of a recent contamination issue was the Israeli craft Beresheet, which may have spilled thousands of tardigrades onto the moon when it crashed. Although it is uncertain whether the tardigrades survived in that case, and they are unlikely to have done any harm even if they did, the potential is there for serious contamination if humans carry other forms of life with them to off-world locations such as Mars. The report points out that many of the guidelines for planetary protection were written at the beginnings of human space exploration and are in serious need of updating. In particular, it calls for recognizing the diversity which exists across planetary surfaces, rather than treating all of Mars as one entity, for example. In the case of the moon, the entire body is classified as being of interest in terms of potential development of life. But scientists now know that if life ever could have developed on the moon, it would have happened at its poles where ice has been found. Therefore, most of the moon could have this classification removed, which would make planning and executing lunar missions much easier. Some other aspects of planetary protection guidelines could be softened as well, such as the building of spacecraft in dedicated cleanrooms. These precautions take a great deal of time and money to implement and may not be necessary, according to some scientists. NASA officials welcomed the report as a chance to update the planetary protection guidelines. “The landscape for planetary protection is moving very fast. It’s exciting now that for the first time, many different players are able to contemplate missions of both commercial and scientific interest to bodies in our solar system,” Thomas Zurbuchen, associate administrator for NASA’s Science Mission Directorate, said in a statement. “We want to be prepared in this new environment with thoughtful and practical policies that enable scientific discoveries and preserve the integrity of our planet and the places we’re visiting.”
All teachers have experienced students who obviously have questions about the class or their homework but don’t ask even when given the opportunity, and the same can be true for whole classes. For teachers of some nationalities and age groups, such students can be more common than those who are happy to stick up their hands. Signs that they have questions they aren’t asking include: - Reading the instructions on the worksheet after you have finished explaining what they should do (sometimes meaning not following your instructions when the two are not the same). - Asking each other questions. - Waiting until the other students start and following their lead. - Frantically searching for the right exercise when the activity starts, e.g. the CD begins. - A student who is not at such a high level, and so must have questions, but never asks about anything. - Many changes when they corrected their own work (e.g. with the answer key or during self-correction of writing) but no questions afterwards. - Things they haven’t changed when being asked to correct their own written work in the places you have underlined. - Spending a lot of time doing one thing, e.g. one grammar question or one place they should correct their own written work, or skipping it. - Reaching for their dictionary or grammar book. - Quickly scanning their book or flicking through it, e.g. to the grammar section at the back. - No questions about what you have written even when your handwriting is difficult to read. - Questions always coming from the same few students. - No questions after an explanation of a difficult point, an explanation that contradicts what they have been told elsewhere, or an explanation the teacher thinks is probably not the best. - Many errors and few questions. - Body language and facial expressions, e.g. looking around hoping someone else will ask a question or fingers lifted off the table as if they were going to raise their hand but couldn’t quite do it. Reasons why they might be reluctant to ask questions are mainly connected to shyness, language problems, relevance, and the teacher’s and students’ roles. Examples of these and other reasons include: - They are afraid of asking a silly question. - They don’t want to be the first person to ask a question. - They are hoping someone else will ask about the thing they have questions about, or that the teacher will just answer the question anyway. - They’d prefer not to ask in front of the other students. - They don’t think they will understand the answer. - They can’t say their question in English, e.g. because of a lack of grammatical jargon like “adjective”. - They can’t say their question in correct English and are afraid of making (public) mistakes. - They need time to think before they can formulate their question. - They’d prefer to get an explanation in L1. - They missed a lesson or part of it and are afraid of asking something that has already been covered. - They think they should already know the answer. - They want to get the student questions stage out of the way quickly and get on with the lesson. - They have the impression that teachers who ask “Any questions?” are usually just killing time. - They think they can find answers to all their questions elsewhere, e.g. on Google or in a grammar book, and want to spend class time on other things. - They aren’t sure if their question is relevant to the topic of the class or the interests of the other students. - They are only interested in things that are on the test.
- They think they are the only one with that question. - They are not sure if it is a point that is too trivial to be worth asking about. - They have too many questions and can’t decide which to ask. - They are worried that it is too tricky or too big a point and so may take up too much class time. Teacher’s and students’ roles - They think that students shouldn’t ask teachers questions, or at least are used to that classroom atmosphere. - They think that the teacher should anticipate and answer all questions without needing to be asked. - They want the teacher to decide what they should and shouldn’t know. - They want to spare the teacher the embarrassment of not being able to answer a question. - They expect the teacher to nominate who should ask questions. - They are simply not in the habit of asking questions in class. - They had a negative experience when asking questions before, e.g. a brush-off by the teacher, being told the question would be answered later but then it was forgotten about, being laughed at, or other students looking bored while their question was being answered. - They think they dominate too much and so are waiting for questions from others. What student questions we want and why As we have seen, there are many reasons why students might not want to ask questions, and some of them are legitimate (e.g. wanting to spend class time on other things) or connected to intractable things like personality. It could therefore be argued that students who don’t want to ask questions should just be given that choice. However, there are plenty of reasons why we might want all our students to ask us questions: - The questions we are asked give us information about our students and how they are receiving the classes, so it helps us tailor our classes to their level, interests and needs. It can also help us give them individual self-study tips. If questions are mainly coming from a few students, we get a distorted view of what students are having problems with and need. - Classroom questions are the most real kind of classroom communication. - Classroom questions bring up language they will need to find answers to their questions elsewhere, e.g. questions like “How do you spell…?” that they can use in their real life and the kinds of jargon they will find in monolingual self-study books like English Grammar in Use. - Asking questions could help make them less shy about speaking out in English more generally. - Getting into the habit of formulating questions should help train them to think more carefully about the language that they are being taught. - Classroom questions save classroom time, e.g. students checking their own work and then asking questions is quicker than going through all the answers as a class. - Explanations in response to questions are likely to be more understandable and memorable for students than those which the teacher chooses with no prompting from the students. Things teachers might want to encourage questions about include: - If the answers which students have written are also possible, e.g. if there are options which are not given in the workbook answer key. - Why their answers are wrong. - Explanations of the language used, e.g. what words mean. - Language they could use, e.g. in speaking tasks. - If there is an error, e.g. in the book, on a worksheet, or on the board. - What they are expected to do, e.g. the rules of the game or how to do the homework. - Tips on strategies, skills and self-study, e.g.
recommendations for reading, ways to approach a test task or the best summer school abroad. - Language they have encountered outside the classroom, e.g. a word they heard in an underground announcement or a grammar explanation they had in high school. - Justifications, e.g. for what is being covered in class or for homework and how. - Their strengths and weaknesses and priorities. - Their progress. - Parts of speech. - Pronunciation (number of syllables, word stress, homophones, vowel sound, silent letters, how well they just pronounced something, etc) - Requests to write something on the board. - Meaning and things they can write in their notebooks to help remember it (definition, opposites, synonyms, translation, etc). Obtaining more student questions Perhaps the simplest way to get more questions out of your students is to give them examples of the kinds of questions they might want to ask. For example, if they check their own answers with the answer key after a workbook homework, you could suggest the questions “Is this answer also possible?”, “Why is this wrong?” and “What’s the difference between… and…?” These questions can also be put on the board, on a poster or on worksheets. Another way of encouraging them is to tell them how important this point is, e.g. “This point will be in the test”, “I’m going to ask you to use this language in a minute”, “The rest of today’s lesson/ the rest of this week/ this unit/ the next homework is about this point”, or “This is a particular problem for Japanese speakers/ for people studying abroad/ in this class/ in the IELTS test”. A related method is to tell them “This is your last chance to ask me questions before…” or “If you don’t have any questions for me, let me ask some questions to you”. This in turn is connected to the important tip of making sure that they will be able to use what you tell them after (preferably straight after) you answer their questions. The other major tip for making students ask more questions is to teach them the language they will need in order to do so, mainly meaning typical phrases like “Can you write it on the board?” and language to talk about language like “adverb” and “syllable”. Practice of these phrases can be combined with vocabulary revision by getting them to test each other on the spelling, word stress, different parts of speech etc of a list of vocabulary from the course up to that point. They can then use the same language to ask the teacher about any words and phrases they aren’t sure about. It may be possible to tie this kind of practice in with language points on your syllabus. For example requests and question formation can be tied in with typical classroom questions like “How do you pronounce this word?”, and determiners and giving advice can be tied in with self-study tips, e.g. “If I was you I would learn as many new words as possible”. A good way of making sure they do make the leap to using those questions in front of the whole class is to ask them the question you think they probably should be asking you, e.g. “What is the difference between ‘last week’ and ‘in the last week’?” If they can’t answer, get them to ask the same question back to you before you answer it. It may also be possible to change their attitudes about asking questions, e.g. do a lesson on cultural differences in education that includes the willingness or not to ask questions in lectures in different countries. This is particularly useful if they are likely to face that situation themselves, e.g. when studying abroad. 
A more basic way of trying to change their attitudes is simply by telling them when something, e.g. not following your instructions or doing badly in a test, is a symptom of not asking questions. There are also certain classroom and homework activities that are likely to prompt questions. One is giving them the answer key in the next class rather than with their homework exercises, so that they can call you over and ask you questions as they are checking their answers. A more unusual one is to give them a test and tell them after you take it in that they will be able to make changes to their answers in the next lesson. This should make them very motivated to study the relevant points at home and then ask questions before their last chance to get the answers right. Other useful things to say when inviting them to ask questions are: - “Don’t worry, I’m sure everyone else has the same questions” - “No questions? Does that mean it was too easy??” Despite all the nice tips above in some cases you might want to actually almost force them to ask you questions, e.g. by telling them that you expect one question each by the end of the class and you will pick on people in the last five minutes if they haven’t all asked you something by then. You can also do this by giving them all two pieces of card that represent questions and telling them they all have to use at least one of them by the end of the class, getting them to pass them to you as they do so. You could also allow them to pass them to each other for questions to their partners as long as their questions are in English. In other cases, you might want to assume that questions in front of the class are not likely to be forthcoming in the near future and find other ways of making sure you find out what they want to know and of answering their questions. Tips include: - Making yourself available outside class, e.g. telling them that you will be in the classroom or in the teachers’ room outside class time to ask questions and when. - Allowing questions by email or other internet forums such as a Moodle. - Going round and offering to answer questions while the other students are busy doing something else. - Getting them to ask each other questions in pairs or small groups and going round helping them with any questions they can’t answer.
[Figure: (A) Daedalia Planum context, Arsia Mons in upper right. (C) Compared with Amboy.] These pictures compare an image of wind features on a lava field on Mars with similar features on a lava field in southern California. The first picture (above, left) shows that the martian example occurs in western Daedalia Planum, a region covered by long, dark-toned lava flows southwest of Arsia Mons, the southernmost of the three large Tharsis Montes. The second picture (above, center) is Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image no. AB1-10905. It was acquired on January 29, 1998. What struck the MOC Science Team as most exciting about this image was that the relationship between lava flows, bright windblown sediment, and dark wind tails behind craters in AB1-10905 reminded them of a similar scene near Amboy, California, in the Mojave Desert (above, right). Based upon observations of Daedalia Planum from the Viking orbiters in the late 1970s, it has been assumed for 20 years that most of Daedalia Planum is a lava flow field that is mantled by bright dust. However, the similarity to the Amboy lava field in California has caused some scientists to re-think the situation on Mars. Instead of bright dust, it now appears that bright sand might be present in this portion of Daedalia Planum. What's the difference between dust and sand? Observations from the Viking and Mars Pathfinder landers have suggested that martian dust consists of very, very tiny particles of less than 10 micrometers (less than 1/10th the width of a human hair). Sand, on the other hand, is defined by sedimentologists as consisting of particles with sizes in the range 62.5 to 2000 micrometers (2000 micrometers is 2 millimeters, or about 8-hundredths of an inch). In the martian environment, sand moves close to the ground by bouncing and hopping when strong enough gusts of wind come along; this is called saltation. Dust, on the other hand, gets picked up by the wind and travels by being suspended in the air. When dust settles back to the ground, it forms a coating that blankets surfaces in a fairly uniform manner, whereas sand makes drifts, tails, and streaks as it interacts with obstacles such as craters, hills, and the lumpy surfaces of lava flows. At the Amboy lava field in California, bright sand is being blown across the dark lava from adjacent dry streambeds. When this sand encounters a volcanic cinder cone that rises above the lava field (pictures "B" and "D" in the above, right figure), the sand is deflected around the cone and leaves a dark "shadow" in which very little bright sand gets deposited. A similar situation is seen with respect to craters formed by meteor impact in the Daedalia Planum image AB1-10905 (pictures "A" and "C" in the above, right figure). The spectacular Amboy wind streak, lava flows, and cinder cone can often be seen from an airplane by passengers flying into or out of Los Angeles International Airport (LAX) from points east such as Denver, Colorado. MOC image AB1-10905 is illuminated from the left. The Amboy lava flows and cinder cone volcano are illuminated from the lower right. The Amboy photographs were taken from an airplane and are from the U.S. Geological Survey. Wind has blown material from right to left in the MOC image, and from upper left toward lower right in the Amboy pictures. North is up in all figures. A higher-resolution view of the AB1-10905 MOC image (2.4 MBytes) is also available.
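A toy Python sketch of the grain-size definitions quoted above; the thresholds come from the text, while the function and labels are just for illustration:

```python
# Classify a particle by diameter, using the sizes given in the article:
# dust < 10 micrometers; sand between 62.5 and 2000 micrometers.
def classify_particle(size_um: float) -> str:
    if size_um < 10:
        return "dust: carried in suspension, settles as a fairly uniform coating"
    if 62.5 <= size_um <= 2000:
        return "sand: moves by saltation, forming drifts, tails and streaks"
    return "outside the dust/sand ranges given in the text"

for size_um in (5, 100, 1500, 30):
    print(f"{size_um:>6} um -> {classify_particle(size_um)}")
```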
Figure: Early Altimeters, from "Evolution of the Modern Altimeter".
Evolution of the Modern Altimeter
- Probably the first suggestion that atmospheric pressure decreased with altitude came from Isaac Beeckman (1588-1637), but this had to await the invention of the mercury barometer by Torricelli about 1644 before being verified by Pascal in 1648. The altitude of free balloons was measured by portable mercury barometers some time after 1783.
- Although the aneroid barometer had been invented by Zaiken in 1758, it did not reach a practical form until improved by Bourdon, in about 1845, from which time it was used both in free balloons and dirigibles.
- Thus, a practical altimeter existed long before the Wright Brothers' first flight in 1903, but it is curious that no well authenticated record exists of altimeters being carried in aeroplanes before about 1913.
- The mechanism of early altimeters was essentially similar to that of present day household barometers, which themselves have changed little since Bourdon's day.
- The pointer made one revolution over the full range, which varied from 0 - 7,000 ft in 1914 and 0 - 20,000 ft in about 1920. By 1945 the range requirements were 0 - 30,000 ft; the pointer then performed one and a half revolutions, being read against an inner scale for altitudes in excess of 17,000 ft. Rotatable scales were introduced at that time — so that the scale-zero could be aligned with the pointer before takeoff.
- Though this altimeter performed fairly well under static conditions, it was of little use in the critical landing phase because of the change of indication with attitude, inertial forces, hysteresis and because the indicated altitude depended on the height above sea level of the landing field relative to the takeoff field and the differences in barometric pressure between them.
- From 1928 onwards flying in bad weather began to be attacked scientifically, and the altimeter was developed to become a useful landing instrument. Apart from improvement in accuracy at low altitudes the mechanism was fitted with automatic temperature compensation and static balance to eliminate attitude and inertial effects. In the mid-thirties when radio links were established a baro-set adjustment was provided to allow the datum to be changed. Landing requirements also started the trend to pointer indications of 1,000 ft per revolution, necessitating two or even three pointers. The culmination of this development was the three-pointer sensitive altimeter, with a range of 0 - 35,000 ft, introduced in 1935.
- This altimeter set the pattern for the next 20 years, during which time the mechanism was improved and the range extended to 0 - 60,000 ft, and even, in some instruments, 0 - 80,000 ft.
- It was technically superseded by the servo altimeter in 1958 although it still survives in large numbers. Its inadequacy for high-altitude jet flight points to its decline for all but light or low performance aircraft.
- The principle of the servo altimeter, introduced by Smiths Industries in 1958, was that aneroid capsules were relieved of all but the lightest mechanical work involved in the electrical detection of their position. The operation of the instrument was performed by an electrically powered servo. This resulted in significant gains in accuracy, the possibility of extension of the range to 100,000 ft, and the use of five-digit counter presentation making misreading virtually impossible.
Air is composed mostly of nitrogen (78%), with some oxygen (21%), and a bunch of other gases.
It can also have water vapor which displaces all that other stuff. The important takeaway here is that air is made of stuff and stuff weighs more than zero. All that air on top of you is pushing you down. More than that, it is pushing you all around. As you climb in altitude, even if just by foot when climbing up a mountain, there is less stuff on top of you, so less pressure. You can measure just how much pressure is being exerted on you with a mercury barometer. You take a sealed glass tube with a vacuum at the top and a pool of mercury in a dish at the bottom. The mercury wants to slip out of the tube but the pressure of the air on the dish keeps it from doing so. If you measure the column of mercury you get a representation of how much pressure is being exerted in "inches of mercury." More about this: Properties of the Atmosphere. Hysteresis is simply "lag time." This is what we had in the mighty T-37. It worked fine but you had to be careful with reading the correct pointer. How does it work? Figure Sensitive Altimeter Components, from Instrument Flying Handbook, Figure 3-3. [Instrument Flying Handbook, pg. 3-3] The sensitive element in a sensitive altimeter is a stack of evacuated, corrugated bronze aneroid capsules like those shown in figure 3-3. The air pressure acting on these aneroids tries to compress them against their natural springiness, which tries to expand them. The result is that their thickness changes as the air pressure changes. Stacking several aneroids increases the dimension change as the pressure varies over the usable range of the instrument. Have you ever wondered why a bag of potato chips is filled with air? Well chances are the "air" is nitrogen to keep them fresh and the bag appears to be mostly plump to keep the chips from being crushed when the bag is handled. The nitrogen is under a little pressure so the bag stays inflated. But I digress . . . Take that bag of chips with you on a flight and notice that as you climb the cabin altitude also climbs. As that happens the cabin pressure decreases. But even as the pressure around the bag decreases the pressure inside the bag stays the same. That means the bag appears to get even plumper. So what does this have to do with altimeters? The aneroid inside the altimeter is composed of a stack of sealed chambers, just like that bag of chips. As the pressure around the aneroid decreases, the aneroid itself expands, moving the gears and levers that eventually move the needle on your altimeter. Flying a glass cockpit? Well it is the same principle, but the gears and levers are connected to electronic gizmos that feed the computers with the same information. du Feu, A. N., "Evolution of the Modern Altimeter", Flight International, 26 Dec 1968, pg. 1066. FAA-H-8083-15, Instrument Flying Handbook, U.S. Department of Transportation, Flight Standards Service, 2001.
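As a rough illustration of the pressure-to-altitude conversion that the aneroid stack performs mechanically, here is a small Python sketch based on the standard-atmosphere relation; it is a simplification of my own, not a formula from the cited handbook, and real instruments also deal with temperature and calibration effects:

```python
# Approximate indicated altitude from static pressure, ICAO standard atmosphere
# below the tropopause, referenced to the barometric (Kollsman) setting.
def pressure_altitude_ft(pressure_inhg: float, baro_setting_inhg: float = 29.92) -> float:
    # h = 145366.45 * (1 - (P / P0) ** 0.190284), with pressures in inches of mercury
    return 145_366.45 * (1.0 - (pressure_inhg / baro_setting_inhg) ** 0.190284)

print(round(pressure_altitude_ft(29.92)))                          # standard sea level -> ~0 ft
print(round(pressure_altitude_ft(24.90)))                          # ~5,000 ft on a standard day
print(round(pressure_altitude_ft(24.90, baro_setting_inhg=30.12))) # same pressure, higher baro set
```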
Black holes may have grown incredibly rapidly in the newborn universe, perhaps helping explain why they appear so early in cosmic history, researchers say. Black holes possess gravitational pulls so powerful that not even light can escape their clutches. They are generally believed to form after massive stars die in gargantuan explosions known as supernovas, which crush the remaining cores into incredibly dense objects. Supermassive black holes millions to billions of times the mass of the sun occur at the center of most, if not all, galaxies. Such monstrously large black holes have existed since the infancy of the universe, some 800 million years or so after the Big Bang. However, it remains a mystery how these giants could have grown so big in the relatively short amount of time they had to form. [Images: Black Holes of the Universe] In modern black holes, features called accretion disks limit the speed of growth. These disks of gas and dust that swirl into black holes can prevent black holes from growing rapidly in two different ways, researchers say. First, as matter in an accretion disk gets close to a black hole, traffic jams occur that slow down any other infalling material. Second, as matter collides within these traffic jams, it heats up, generating energetic radiation that drives gas and dust away from the black hole. "Black holes don't actively suck in matter — they are not like vacuum cleaners," said lead study author Tal Alexander, an astrophysicist at the Weizmann Institute of Science in Rehovot, Israel. "A star or a gas stream can be on a stable orbit around a black hole, exactly as the Earth revolves around the sun, without falling into it," Alexander told Space.com. "It is actually quite a challenge to think of efficient ways to drive gas into the black hole at a high enough rate that can lead to rapid growth." Alexander and his colleague Priyamvada Natarajan may have found a way in which early black holes could have grown to supermassive proportions — in part, by operating without the restrictions of accretion disks. The pair detailed their findings online today (Aug. 7) in the journal Science. The scientists began with a model of a black hole 10 times the mass of the sun embedded in a cluster of thousands of stars. They fed the simulated black hole continuous flows of dense, cold, opaque gas. "The early universe was much smaller and hence denser on average than it is today," Alexander said. This cold, dense gas would have obscured a substantial amount of the energetic radiation given off by matter falling into the black hole. In addition, the gravitational pull of the many stars around the black hole "causes it to zigzag randomly, and this erratic motion prevents the formation of a slowly draining accretion disk," Alexander said. This means that matter falls into the black hole from all sides instead of getting forced into a disk around the black hole, from which it would swirl in far more slowly. The "supra-exponential growth" observed in the model black hole suggests that a black hole 10 times the mass of the sun could have grown to more than 10 billion times the mass of the sun by just 1 billion years after the Big Bang, researchers said. "This theoretical result shows a plausible route to the formation of supermassive black holes very soon after the Big Bang," Alexander said. Future research could examine whether supra-exponential growth of black holes could occur in modern times as well. 
The high-density and high-mass cold flows seen in the ancient universe may exist "for short times in unstable, dense, star-forming clusters, or in dense accretion disks around already-existing supermassive black holes," Alexander said. You can read the abstract of the new study here.
New model of Earth’s interior reveals clues to hotspot volcanoes Scientists at UC Berkeley have detected previously unknown channels of slow-moving seismic waves in Earth’s upper mantle, a discovery that helps explain “hotspot volcanoes” that give birth to island chains such as Hawaii and Tahiti. Unlike volcanoes that emerge from collision zones between tectonic plates, hotspot volcanoes form in the middle of the plates. The prevalent theory for how a mid-plate volcano forms is that a single upwelling of hot, buoyant rock rises vertically as a plume from deep within Earth’s mantle – the layer found between the planet’s crust and core – and supplies the heat to feed volcanic eruptions. However, some hotspot volcano chains are not easily explained by this simple model, suggesting that a more complex interaction between plumes and the upper mantle is at play, said the study authors. The newfound channels of slow-moving seismic waves, described in a paper published today (Thursday, Sept. 5) in Science Express, provide an important piece of the puzzle in the formation of these hotspot volcanoes and other observations of unusually high heat flow from the ocean floor. The formation of volcanoes at the edges of plates is closely tied to the movement of tectonic plates, which are created as hot magma pushes up through fissures in mid-ocean ridges and solidifies. As the plates move away from the ridges, they cool, harden and get heavier, eventually sinking back down into the mantle at subduction zones. But scientists have noticed large swaths of the seafloor that are significantly warmer than expected from this tectonic plate-cooling model. It had been suggested that the plumes responsible for hotspot volcanism could also play a role in explaining these observations, but it was not entirely clear how. “We needed a clearer picture of where the extra heat is coming from and how it behaves in the upper mantle,” said the study’s senior author, Barbara Romanowicz, UC Berkeley professor of earth and planetary sciences and a researcher at the Berkeley Seismological Laboratory. “Our new finding helps bridge the gap between processes deep in the mantle and phenomena observed on the earth’s surface, such as hotspots.” The researchers utilized a new technique that takes waveform data from earthquakes around the world and then analyzes the individual “wiggles” in the seismograms to create a computer model of Earth’s interior. The technology is comparable to a CT scan. The model revealed channels – dubbed “low-velocity fingers” by the researchers – where seismic waves traveled unusually slowly. The fingers stretched out in bands measuring about 600 miles wide and 1,200 miles apart, and moved at depths of 120-220 miles below the seafloor. Seismic waves typically travel at speeds of 2.5 to 3 miles per second at these depths, but the channels exhibited a 4 percent slowdown in average seismic velocity. “We know that seismic velocity is influenced by temperature, and we estimate that the slowdown we’re seeing could represent a temperature increase of up to 200 degrees Celsius,” said study lead author Scott French, UC Berkeley graduate student in earth and planetary sciences. The formation of channels, similar to those revealed in the computer model, has been theoretically suggested to affect plumes in Earth’s mantle, but it has never before been imaged on a global scale.
The fingers are also observed to align with the motion of the overlying tectonic plate, further evidence of “channeling” of plume material, the researchers said. “We believe that plumes contribute to the generation of hotspots and high heat flow, accompanied by complex interactions with the shallow upper mantle,” said French. “The exact nature of those interactions will need further study, but we now have a clearer picture that can help us understand the ‘plumbing’ of Earth’s mantle responsible for hotspot volcano islands like Tahiti, Reunion and Samoa.” Vedran Lekic, a graduate student in Romanowicz’s laboratory at the time of this research and now an assistant professor of geology at the University of Maryland, co-authored this study. The National Science Foundation and the National Energy Research Scientific Computing Center helped support this research.
Electrical fires are typically caused by poor wiring, faulty contacts, overloaded circuits or short circuits. Although newer homes have circuit breakers in place to shut down overloaded circuits, they can fail to activate before a fire starts. Therefore, it's important to be safe when installing and designing electrical circuits. When electricity travels through wire, it creates heat via friction from electrons traveling along the wire. Wire that is designed to carry higher amounts of current is thicker for this reason; it can accommodate more electrons. If there is too much resistance in a wire, it can create enough heat to ignite surrounding material, such as the insulation. Unsteady or loose electrical contacts pose a similar risk. If there is inadequate space for electrons to flow, they generate excess heat. This is why appliances should be completely plugged in and the wires inspected and tested regularly. Overloaded circuits or outlets also can cause an electrical fire if failsafes are not present or don't activate in time. Each electrical circuit in a building is designed to carry a set amount of current. Exceeding this by turning on too many appliances at once can lead to overheating sufficient to start a fire before the electricity is shut off. A short circuit is caused when two or more uninsulated wires come into contact with each other, which interferes with the electrical path of a circuit. The interference destabilizes normal functioning of electricity flow. The resistance generates a lot of heat in the wires and can lead to a fire.
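To make the overloaded-circuit idea concrete, here is a small hypothetical Python sketch; the circuit voltage, breaker rating and appliance wattages are illustrative assumptions rather than values from the article:

```python
# Check whether a set of appliances on one branch circuit exceeds its breaker rating.
CIRCUIT_VOLTAGE = 120.0  # volts, typical North American branch circuit (assumed)
BREAKER_AMPS = 15.0      # amps, breaker rating protecting the circuit (assumed)

appliance_watts = {
    "space heater": 1500,
    "microwave": 1100,
    "toaster": 900,
}

total_amps = sum(appliance_watts.values()) / CIRCUIT_VOLTAGE  # I = P / V
print(f"total draw: {total_amps:.1f} A on a {BREAKER_AMPS:.0f} A circuit")
if total_amps > BREAKER_AMPS:
    print("Overloaded: the breaker should trip, but the wiring can overheat "
          "first if the breaker fails or the wiring is undersized.")
```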
Many American authors have used railroads symbolically in their novels, essays, or short stories. Some titles include The Octopus by Frank Norris, Song of the Lark by Willa Cather, and Walden by Henry David Thoreau. Teachers can provide students with excerpts from novels or other writings that use trains in this and other ways. Have students determine the significance of the train to the plot if appropriate, and the overall meaning of the work. Why did the author use the railroad as opposed to other forms of transportation? What does the use of a train convey to the reader? Students can then search the collection on the region where the story takes place or on names of railroads, such as Thoreau's Fitchburg Railroad. If maps of the region or railroad do not exist in the collection, students can search for others of the same time period or place. What additional information do the maps provide about the railroad system at that time? How do they enhance your understanding of the novel or story? Do they prove or disprove the historical accuracy or symbolic meaning of the writing?
Since Charles Darwin heralded evolution more than 150 years ago, scientists have sought to better understand when and how the vast variety of plants today diverged from common ancestors. A new University of Georgia study, just published in Nature, demonstrates key events in plant evolution. It allows scientists to infer what the gene order may have looked like in a common ancestor of higher plants. And it shows one way plants may have differentiated from their ancestors and each other. "By studying the completed sequence of the smallest flowering plant, Arabidopsis, we showed that most of its genes were duplicated about 200 million years ago and duplicated again about 80 million years ago," said Andrew Paterson, a UGA plant geneticist and director of the study. "The ensuing loss of 'extra genes' caused many of the differences among modern plants." Two years ago, scientists finished the genetic sequencing of Arabidopsis, a small, weedy plant. It was a major event, the first plant to be completely sequenced. Arabidopsis had been chosen with the assumption that it would be fairly easy, since it was small. Sometimes small packages aren't so simple. Seeded throughout its five chromosomes were thousands of genes that seemed to be "junk." When UGA scientists compared all of the genes, they found evidence of duplicated "blocks" of similar sets of genes in two, four or eight different places along the chromosomes. It's well known that many plants contain two or more copies of most genes. But why these copies exist and when they occurred has been unknown. Their surprising abundance in the tiny, well-studied Arabidopsis indicates that genome duplications may have played a bigger evolutionary role than was previously thought. Why were these blocks of genes duplicated? When did this happen? Answering these questions involved a lot of computerized comparing and contrasting. The scientists repeatedly compared related pairs of Arabidopsis genes with genes from other plants to figure out which genes had been "hanging out with each other," said UGA graduate student Brad Chapman, who coauthored the study, along with John Bowers, Junkang Rong and Paterson. "Genomes with similar blocks of duplication, 'spelled' in similar ways, had been hanging out together for longer periods of time," Chapman said. "We tested many, many combinations," Paterson said. "We tested Arabidopsis with cotton, cauliflower, alfalfa, soybeans, tomatoes, rice, pine trees and moss." After more than 22,000 such comparisons, the results were pooled, and the scientists looked for breakpoints. The breakpoints indicate duplication events, Paterson said. And the study shows that Arabidopsis has duplicated at least twice, and perhaps a third time. Each time a duplication event occurred, the entire genetic sequence of Arabidopsis doubled. The plant lived on with spare copies of all of its genetic material. And over time, the "extra genes" were shuffled around or lost. It is suspected that this may be one explanation for how different species emerged. "The duplication event that occurred 200 million years ago occurred in virtually all plants," Paterson said. "The duplication event 80 million years ago affected a lot of plants, but not as many." The study is attracting attention in the scientific community, because it combines an evolutionary approach with genomic data to learn more about the natural world. This information will have a significant economic impact because it permits scientists to make better use of the Arabidopsis sequence. 
It will allow them to study and improve other plants whose DNA hasn't yet been completely sequenced, such as peanuts, cotton or wheat, saving both time and money. "For example, we can take the 2,000 genes known on the cotton map, compare them with the Arabidopsis sequence and, with this analysis, make good, educated guesses about where the other 48,000 cotton genes are," Paterson said. The above story is based on materials provided by University Of Georgia.
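The "computerized comparing and contrasting" described above comes down to looking for stretches of chromosome where the same genes appear in the same order. As a rough, self-contained illustration of that idea (a toy sketch with invented gene names, not the UGA team's actual pipeline, which also models sequence divergence to date the duplications), the following function reports runs of genes shared in order between two chromosome segments:

    # Toy illustration: find runs of genes that appear in the same order on two
    # chromosome segments. Gene names are invented; real analyses also weigh
    # sequence divergence to estimate when the duplicated blocks arose.
    def shared_blocks(chrom_a, chrom_b, min_len=2):
        positions_b = {gene: i for i, gene in enumerate(chrom_b)}
        blocks, current, prev_b = [], [], None
        for gene in chrom_a:
            b = positions_b.get(gene)
            if b is not None and (prev_b is None or b == prev_b + 1):
                current.append(gene)          # block continues in the same order
            else:
                if len(current) >= min_len:
                    blocks.append(current)
                current = [gene] if b is not None else []
            prev_b = b
        if len(current) >= min_len:
            blocks.append(current)
        return blocks

    print(shared_blocks(["g1", "g2", "g3", "g9", "g4"],
                        ["g7", "g1", "g2", "g3", "g8"]))   # -> [['g1', 'g2', 'g3']]

Run on the two invented segments it prints [['g1', 'g2', 'g3']], a small block of conserved gene order of the kind that, at genome scale, points to ancient duplication events.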
Three Examples of Collaborative Learning

Collaborative learning activities can help students develop problem-solving and group work skills. There are many types of collaborative activities that students can complete. However, teachers should be prepared for some talking and movement in the classroom during these activities.

One simple activity that you can do with a variety of content is the jigsaw activity. The content or reading assignment is a metaphor for a "puzzle" that students break into smaller pieces to learn.
- "Puzzles" or groups can be small, such as four to six students. Each student will be given a piece of the puzzle to learn or to investigate. For example, a long reading assignment can be broken into six smaller "chunks." Each student will take a chunk and become an expert on the content.
- When the pieces of the "puzzle" are put together or when the group comes back together, each student will share what he or she learned.
- After the group has its information organized and compiled, it can share its knowledge with the class.
Situations to use the jigsaw strategy:
- Covering a great deal of content in a textbook
- Researching a new concept or idea
- Learning new vocabulary from a list
- Learning Greek and Latin roots

When a group investigates a new topic, it can be a lot of fun, and the group will take ownership of the topic and the presentation. The teacher should select a broad topic, such as the Civil War.
- The group of three to five students should narrow the broad topic down to one that they can research and later share in a five- to ten-minute presentation, depending on the age of the students.
- The group will need to assign each person a subtopic of the group topic to research.
- The group will come back together to share and organize the information.
- The group will present its information to the class.

Double Entry Journal
A double entry journal can be completed by a pair of students. Students each make one on their own. Then, the pair can collaborate and compile their ideas into one double entry journal. Teachers can use this with novels, textbook reading assignments, news articles, research information, etc.
- Students need to create a large T on a piece of notebook paper.
- On one side, the student needs to write down interesting or important information from the reading assignment.
- On the other side, the student needs to write what he or she thinks about the information.
- When the journal is complete, the student needs to share his or her journal with another student.
- The students need to compile one journal with information that the pair believes is important or intriguing.

The jigsaw activity, group investigation and double entry journal are just three ways that students can collaborate and learn important content. The best part is that students are bouncing ideas off each other and helping create interesting products.
Hydroelectric generation is the primary source of electrical energy across Canada and has been used in the Yukon since the Gold Rush. Hydroelectricity is a clean, renewable energy source that provides reliable power throughout the year. Hydroelectricity is created when water is used to rotate a turbine and generator. The force of the water spins the turbine, which then turns the generator to create electrical energy.

Hydroelectric facilities can be run-of-river systems or storage reservoir systems. Run-of-river systems temporarily divert water from the river as it moves and require no (or minimal) water storage. Rivers naturally have flows that fluctuate. For example, in the Yukon the spring and summer water flows are higher when precipitation falls primarily as rain and when snow and glaciers are melting. In the winter, flows are low due to freezing conditions. Water is stored from the summer in reservoirs to make up for low water flows in winter.

Figure 1 above shows a typical run-of-river hydro project. Water is diverted from the river through a penstock to a powerhouse and generating unit, where it goes through the turbines and produces power. Figures 2 and 3 below show storage hydro. Hydro reservoirs store water throughout the year. The water is channeled into a penstock (a large pipe) that carries it to the turbines to generate power. Water is discharged back into the river or waterway at the end of the powerhouse.
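For a sense of scale, the electrical output of a hydro unit follows a simple relationship between the flow through the turbines, the head (the height the water drops) and the efficiency of the machinery. The sketch below is a generic back-of-the-envelope calculation; the flow, head and efficiency figures are illustrative assumptions, not data for any Yukon facility:

    # Rough hydroelectric output: P = efficiency * water density * g * flow * head.
    # The flow, head and efficiency below are illustrative assumptions only.
    RHO_WATER = 1000.0   # kg per cubic metre
    G = 9.81             # m/s^2

    def hydro_power_mw(flow_m3_per_s, head_m, efficiency=0.9):
        """Approximate electrical output in megawatts."""
        return efficiency * RHO_WATER * G * flow_m3_per_s * head_m / 1e6

    # Example: 60 m^3/s falling through a 20 m head at 90% efficiency -> about 10.6 MW.
    print(round(hydro_power_mw(60, 20), 1))

Doubling either the flow or the head roughly doubles the output, which is one reason storage reservoirs that keep water available through the low-flow winter months matter for year-round supply.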
Chemistry by Chance: A Formula for Non-Life
by Charles McCombs, Ph.D.

Scientists observe life today in order to determine what processes were at work when life originated on this planet. It would be like looking at a 100-year-old photograph to determine which camera was used. The best result this type of analysis can provide is conjecture, and conjecture is the best that chemical evolution can produce. Evolutionists tell the tale that life was formed from chemicals, in some primordial soup from which life arose by accident. Can random chemical "accidents" produce the building blocks of life? The following eight obstacles in chemistry ensure that life by chance is untenable.

1. The Problem of Unreactivity
The components necessary for life can be formed only by certain chemical reactions occurring in a specific environment. Water is an unreactive environment for all naturally occurring chemicals. In a watery environment, amino acids and nucleotides cannot combine to form the polymeric backbone required for proteins and DNA/RNA. In the laboratory, the only way to cause a reaction to form a polymer is to have the chemical components activated and then placed in a reactive environment. The process must be completely water-free, since the activated compounds would react with water. How could proteins and DNA/RNA be formed in some primordial, watery soup if the natural components are unreactive and if the necessary activated components cannot exist in water?

2. The Problem of Ionization
The problem of ionization also involves the issue of unreactivity. To produce a protein, the amine group of one amino acid must react with the acid group of another amino acid to form an amide bond. Such reactions must take place hundreds of times to build a protein. As mentioned above, the amino acid must be chemically activated to form the polymer, because, without activation, every amino acid would be ionized by an acid-base reaction. The amine group is basic and will react quickly with the acid group also present. This acid-base reaction of amino acids is instantaneous in water, and the components necessary for protein formation are not present in a form in which they can react. This is the problem of ionization.

3. The Problem of Mass Action
There is another major problem that will be encountered while trying to form the polymeric backbone of a protein or DNA/RNA. Every time one component reacts with a second component forming the polymer, the chemical reaction also forms water as a byproduct of the reaction. According to Le Chatelier's Principle and the Law of Mass Action, an equilibrium shifts away from whichever component is present in excess, so a reaction that produces water as a byproduct is driven backward when it is carried out in a large excess of water. This provides a total hindrance to protein, DNA/RNA, and polysaccharide formation, because even if the condensation took place, the water from a supposed primordial soup would immediately hydrolyze the products. Thus, if they are formed according to evolutionary theory, the water would have to be removed from the products, which is impossible in a "watery" soup.

4. The Problem of Reactivity
Chemical reactivity involves the speed at which components react. If life began in some primordial soup through natural chemical reactions, then the laws of chemistry must be able to predict the sequence of those polymer chains.
If a pool of amino acids or nucleotides came together in this environment, reacting to form the polymer chain of a protein or DNA/RNA, then there would have to be a chemical mechanism that determines the sequence of the individual components. In chemical reactions, there is only one way that all chemicals react: according to their relative reaction rates. Since all amino acids and nucleotides have different chemical structures, those differences in structure will cause each component to react at a different rate. Consequently, each of the known amino acids and nucleotides has a known relative reaction rate, but this fact causes a serious problem for evolution. The relative reaction rate tells us how fast they react, not when they react. In a random chance chemical reaction, the sequence of amino acids can only be determined by their relative reaction rates. The polymer chain found in natural proteins and DNA/RNA has a sequence that does not correlate with the individual components' reaction rates. In reality, all of the amino acids have relatively similar structures, and, therefore, they all have similar reaction rates. The same holds true for the polymerization of nucleotides to form DNA/RNA. The problem is that since all of the amino acid or nucleotide components would react at about the same rate, all proteins and all DNA/RNA would have a polymeric sequence different from that observed in our bodies. The product of natural or random reactions could never provide the precise sequences found in proteins and DNA/RNA.

5. The Problem of Selectivity
Chemical selectivity concerns where components react. Since every chain has two ends, the reacting components can add to either end of the chain. Even if by some magical process a single component would react first, followed by a second component, the products formed would be a mixture of at least four isomers, because there are two ends to the chain. If there is an equal chance of one component reacting in two different locations, then half will react at one end and half at the other end. When the addition of the second component occurs, it will react at both ends of the chain of both products already present. Since the reaction rates for the amino acids are similar, as are those of the nucleotides, you would see all components adding randomly to both ends of the building chain. The result is a mixture of several isomers of which the desired sequence is only a minor product, and this is the problem after adding only two amino acids. As the third amino acid begins to add, it can react at both ends of each of the four products, and so on. But since proteins may contain hundreds of amino acids in an exact sequence, imagine the huge number of undesired isomers that would be present from a random chance process. DNA/RNA contain billions of nucleotides in a precise sequence. Evolutionists might argue that all DNA/RNA and proteins were formed in this random manner and nature just selected the ones that worked. However, this assumption ignores the fact that there are not billions of "extra" DNA/RNA and proteins in the human body.

6. The Problem of Solubility
As the polymer chain becomes longer and as more components are added to the chain, the reactivity or rate of formation of the polymer becomes slower and slower, and the chemical solubility of the polymer in water decreases. Solubility is a vital factor because both the activated component and the polymeric chain to which it is being added must be soluble in water for the desired reaction to work.
In fact, there is a point where the length of the polymer will decrease its solubility, eventually making the polymer insoluble in water. When this happens, the addition of more components will stop and the chain will not get any longer. As a result, the long proteins and DNA/RNA found in the body would never be formed, because the growing chains become insoluble before they reach that length.

7. The Problem of Sugar
Nucleotides, necessary for DNA and RNA, are formed by the reaction of a sugar molecule with one of four different heterocycles. Evolutionary theory requires that sugar must be present in that primordial soup. However, the presence of sugar creates another problem. The sugars required for DNA and RNA synthesis are called reducing sugars. Reducing sugars can cause the formation of undesired reaction products, and they also remove the components necessary for the reaction. If amino acids (to form proteins) and sugars (to form nucleotides) were present in that soup, they would instantly react with each other, thereby removing both components from the mixture. The product of this undesired reaction cannot react with amino acids to form a protein chain, and that same product cannot react with heterocycles to form a nucleotide leading to DNA or RNA.

8. The Problem of Chirality
Chirality is a property of many molecules with three-dimensional structures. Many molecules may have the same number and type of atoms and bonds, but differ only by being mirror images of each other. Such molecules are said to possess chirality or "handedness." Every single amino acid of every natural protein is a "left-handed" molecule, and every nucleotide of every DNA/RNA molecule is built from a "right-handed" sugar. Proteins and DNA/RNA work as they do in the human body because they possess chirality; they work because chirality gives them the correct three-dimensional structure. Only one configuration works; the others do not. If proteins and DNA/RNA were formed by random chemistry, then the products formed would be a mixture of both handednesses and, therefore, would lack the correct three-dimensional structure. Molecules of the wrong chirality do not support life in our bodies.

The chemical control needed for the formation of a specific sequence in a polymer chain is just not possible through random chance. The synthesis of proteins and DNA/RNA in the laboratory requires the chemist to control the reaction conditions, to thoroughly understand the reactivity and selectivity of each component, and to carefully control the order of addition of the components as the chain is building in size. The successful formation of proteins and DNA/RNA in some imaginary primordial soup would require the same level of control as in the laboratory, but that level of control is not possible without a specific chemical controller. Any one of these eight problems could prevent the evolutionary process from forming the chemicals vital for life. Chirality alone would derail it. This is why evolutionary scientists hope you don't know chemistry. Darwin asserted that random, accidental natural processes formed life, but the principles of chemistry contradict this idea. The building blocks of life cannot be manufactured by accident.

* Dr. McCombs is Associate Professor of the ICR Graduate School, and Assistant Director of the National Creation Science Foundation.

Cite this article: McCombs, C. A. 2009. Chemistry by Chance: A Formula for Non-Life. Acts & Facts. 38 (2): 30. This article was originally published in February 2009.
"Chemistry by Chance: A Formula for Non-Life", Institute for Creation Research, http://www.icr.org/article/chemistry-by-chance-formula-for-non-life (accessed July 23, 2018).
In 1800, only 10 percent of Americans lived west of the Appalachians. By 1824, however, 30 percent of Americans had moved west in search of fertile new land to farm. In 1800, there were only two states west of the Appalachians (Kentucky and Tennessee); by 1820, there were eight. This westward movement was the pivotal change that broadened democracy in the United States.

Participation By a Few
Prior to this migration westward, the center of power in the country lay along the eastern seaboard, particularly in the Northeast. Presidents and state governors were elected by electors chosen by state legislatures, and the men who made up these legislatures held property. Poor, non-property-holding whites, African-Americans, and women were denied the right to vote, so that while America was nominally a democracy, there were large blocs of people who were unable to participate. The movement of settlers west did not immediately change this voter status, but gradually Americans west of the Appalachians voted democratic ideals into their constitutions as Ohio, Louisiana, Illinois, Indiana, Mississippi, and Alabama became states. Up until 1824, presidential nominees were picked by caucuses of influential congressmen who got together to decide who might best represent their party. But this system was unsatisfactory to the thousands of new voters from west of the Appalachians who wanted their voices to be heard and who had been enfranchised when property requirements were removed as a prerequisite for suffrage.

The First Popular Vote
The 1824 presidential election, fought mainly between John Quincy Adams and Andrew Jackson, became the first election in American history where the winning candidate was not picked by a caucus and in which the majority of states chose their candidate by popular vote (of the 24 states at the time, only six left the choice up to state legislatures). John Quincy Adams won in a contested election thrown into the House of Representatives, but in 1828, Jackson came back to claim the prize of the presidency in an election that represented the triumph of the western states over the eastern ones: Jackson, who lived in Nashville, became the first president-elect from west of the Appalachians.

Opening The Democratic Process
Jackson was the great populist leader, and his Democratic-Republican followers were the forerunners of today's Democrats. After 1828, other changes occurred that opened America's democratic process. In 1831, the first national party conventions were held, which would become a fractious staple of American political life. And in the election of 1876, 81.8 percent of the voting-age population actually turned out to vote, the highest percentage in American history. Within three-quarters of a century, democracy had broadened to include the great majority of the eligible electorate, who became avid participants in their political process. However, women still could not vote, and blacks in Southern states often voted only with great difficulty.
As the Mississippi River enters the Gulf of Mexico, it loses energy and dumps its load of sediment that it has carried on its journey through the mid-continent. This pile of sediment, or mud, accumulates over the years, building up the delta front. As one part of the delta becomes clogged with sediment, the delta front will migrate in search of new areas to grow. The area shown on this image is the currently active delta front of the Mississippi. The migratory nature of the delta forms natural traps for oil. Most of the land in the image consists of mud flats and marsh lands. There is little human settlement in this area due to the instability of the sediments. The main shipping channel of the Mississippi River is the broad stripe running northwest to southeast.

This image was acquired on May 24, 2001 by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA's Terra satellite. With its 14 spectral bands from the visible to the thermal infrared wavelength region, and its high spatial resolution of 15 to 90 meters (about 50 to 300 feet), ASTER will image Earth for the next 6 years to map and monitor the changing surface of our planet. ASTER is one of five Earth-observing instruments launched December 18, 1999, on NASA's Terra satellite. The instrument was built by Japan's Ministry of Economy, Trade and Industry. A joint U.S./Japan science team is responsible for validation and calibration of the instrument and the data products. Dr. Anne Kahle at NASA's Jet Propulsion Laboratory, Pasadena, California, is the U.S. science team leader; Bjorn Eng of JPL is the project manager. The Terra mission is part of NASA's Earth Science Enterprise, a long-term research and technology program designed to examine Earth's land, oceans, atmosphere, ice and life as a total integrated system. The broad spectral coverage and high spectral resolution of ASTER will provide scientists in numerous disciplines with critical information for surface mapping, and monitoring dynamic conditions and temporal change. Example applications are: monitoring glacial advances and retreats; monitoring potentially active volcanoes; identifying crop stress; determining cloud morphology and physical properties; wetlands evaluation; thermal pollution monitoring; coral reef degradation; surface temperature mapping of soils and geology; and measuring surface heat balance.

Size: 54 x 57 km (33 x 39 miles)
Location: 29.2 degrees North latitude, 89.4 degrees West longitude
Orientation: North at top
Image Data: ASTER bands 1, 2, and 3
Original Data Resolution: 15 m
Date Acquired: May 24, 2001
Every year the monsoon brings not only the seasonal rains but also distinct seasonal diseases. Viral fever is one of these diseases, and it comes every year with different intensity. This year also, we have been facing influenza in states like Gujarat, Rajasthan and Punjab, and dengue fever in states like Kerala and Tamil Nadu. The damage is severe and the death toll has been increasing.

"Feeling hot" or sweaty does not necessarily mean fever. Fever is diagnosed only when a body temperature of over 38 °C has been recorded; shivering often occurs with a rapid rise in body temperature.

Influenza and its types
Influenza is an acute systemic viral infection that primarily affects the respiratory tract. Mortality is greater in the elderly, in those with medical co-morbidities, and in pregnant women.

Management and Prevention of Influenza
- It spreads mainly through droplets from coughing and sneezing and through contact with infected secretions.
- We may prevent the disease through frequent hand washing, avoiding sick patients and wearing face masks.
- Manage it with proper anti-viral drugs under the careful supervision of a physician.
- Prevention relies on seasonal vaccination of the elderly and of people with respiratory problems or immune compromise.
- The vaccine composition changes every year to cover the "predicted" seasonal strains, but vaccination may fail when a new pandemic strain emerges.

Avian influenza is caused by the transmission of avian influenza A virus (H5N1) to humans. Most cases have a history of contact with sick poultry rather than person-to-person spread. Infections with H5N1 viruses have been severe, with enteric features and respiratory failure. Vaccination against seasonal 'flu' does not adequately protect against avian influenza.

In swine influenza, the virus is transmitted from pigs to humans. Re-assortment of swine, avian and human influenza strains can occur in pigs.

Dengue is a mosquito-borne disease caused by the dengue virus, and it is usually self-limiting in most cases. However, in some people it can present with life-threatening complications such as dengue haemorrhagic fever and dengue shock syndrome. Dengue is not contagious and does not spread by physical contact.

What to do
- Follow the complete blood count.
- Watch for dehydration and take the necessary steps for rehydration.
- Watch for the warning signs, including the platelet count and an increasing haematocrit.
- Watch for defervescence (indicating the beginning of the critical phase).
- Prevent the spread of dengue within the house.

Treatment in Siddha System
Guidelines followed by the Central Council for Research in Siddha (CCRS):
- Nilavembu kudineer
- Papaya leaf juice
- Adathodai kudineer
Treatment and medication should be taken strictly as per a physician's recommendation.
Definition of cohesion
1 : the act or state of sticking together tightly; especially : unity — the lack of cohesion in the Party — The Times Literary Supplement (London); cohesion among soldiers in a unit
2 : union between similar plant parts or organs
3 : molecular attraction by which the particles of a body are united throughout the mass
cohesionless \kō-ˈhē-zhən-ləs\ adjective

Examples of cohesion in a Sentence
There was a lack of cohesion in the rebel army.

Recent Examples of cohesion from the Web
The contemporary drawings showed less cohesion (consisting of separate rather than related objects) and included less detail than those done by students when the study was initially conducted 20 years ago. Guards are easier to replace than tackles but o-line cohesion is no small thing. Richard Ferrand has also stood down as minister for territorial cohesion to lead the group of lawmakers elected under the banner of Macron's party at the National Assembly. An awkward narrative framing device and frequent shifts between time periods further disrupt plot cohesion as Knudsen's script attempts to backfill the history of rivalry among the trio. This and a handful of the neighboring homes were constructed by the same master craftsman (Lawrence Reese), so there's a cohesion to the homes that makes this part of Darlington exceptional and unique. The steady stream of Executive Orders, the ongoing promise of sweeping budget cuts to many parts of government and the ongoing lack of cohesion in Congress are creating anxiety for many safety-net programs that rely on federal funding. At least from a sonic perspective, the time away hardly seemed to affect the band's cohesion. Thanks to the guidance of Juan Carlos Osorio–the methodical Colombian manager with an eye for tactical detail and team cohesion–

Did You Know? Cohesion is one of the noun forms of cohere; the others are cohesiveness and coherence, each of which has a slightly different meaning. Coherence is often used to describe a person's speech or writing. An incoherent talk or blog post is one that doesn't "hang together;" and if the police pick up someone who they describe as incoherent, it means he or she isn't making sense. But to describe a group or team that always sticks together, you would use cohesive, not coherent. And the words you'd use in Chemistry class to describe the way molecules hang together—for example, the way water forms into beads and drops—are cohesion, cohesive, and cohesiveness.

Definition of cohesion for English Language Learners
: a condition in which people or things are closely united

Definition of cohesion for Students
1 : the action of sticking together
2 : the force of attraction between the molecules in a mass
Australian, South American Marsupials Share Common Ancestry

All current Australian marsupials can trace their ancestry back to South America, according to a new study by German researchers from the University of Munster's Institute of Experimental Pathology. "While marsupials like the Australian tammar wallaby and the South American opossum seem to be quite different, research by Maria Nilsson and colleagues at the University of Munster… shows otherwise," a press release dated July 27 says. "Using sequences of a kind of 'jumping gene,' the team has reconstructed the marsupial family to reveal that all living Australian marsupials have one ancient origin in South America," it continues. "This required a simple migration scenario whereby theoretically only one group of ancestral South American marsupials migrated across Antarctica to Australia." Previously, scientists had theorized that marsupials had originated in Australia, but that different lineages had diverged when the continent and South America separated approximately 80 million years ago, the researchers say. Past studies on the genes of these creatures "have revealed contradictory results about which lineages are most closely related and which split off first," according to the press release, which adds that the new study shows that modern Australian marsupials "appear to have branched off from a South American ancestor to form all currently known marsupials–kangaroos, the rodent-like bandicoots, and the Tasmanian devil. It is still a mystery how the two distinct Australian and South American branches of marsupials separated so cleanly, but perhaps future studies can shed light on how this occurred." "I think this is pretty strong evidence now for the hypothesis of a single migration [to Australia] and a common ancestor," research team member Juergen Schmitz told BBC News Environment Correspondent Richard Black on Tuesday. "Maybe it's around 30-40 million years ago, but we cannot say because jumping genes do not give this information… It's now up to other people, maybe from the paleontology field, to find out when exactly it happened." The study appears online in PLoS Biology, an open-access journal published by the Public Library of Science.
Way out: Theoretical blueprint for invisibility

Invisibility cloaks and starship cloaking devices may not belong only in the worlds of Harry Potter and Star Trek, according to researchers at Duke University's Pratt School of Engineering and at Imperial College London. Using a new design theory, the researchers developed a blueprint for an invisibility cloak with many uses, they suggest, such as hiding a refinery, electrical or other transmission towers, or manufacturing facilities in the way of a beautiful view. It could also clear an electromagnetic pathway, researchers say, for improved wireless communications. A cloak like this could hide an object so well that observers would be completely unaware of its presence. The researchers' invisibility cloak could be realized with artificial composite materials called metamaterials. "The cloak would act like you've opened up a hole in space," said David R. Smith, Augustine Scholar and professor of electrical and computer engineering at Duke's Pratt School. "All light or other electromagnetic waves are swept around the area, guided by the metamaterial to emerge on the other side as if they had passed through an empty volume of space." Electromagnetic waves would flow around an object hidden inside the metamaterial cloak just as water in a river flows virtually undisturbed around a smooth rock, they say. First demonstrated by Smith and his colleagues in 2000, metamaterials can be made to interact with light or other electromagnetic waves in very precise ways. The cloak has not actually been created, but the researchers claim to have begun to produce metamaterials with suitable properties. "There are several possible goals one may have for cloaking an object," said David Schurig, a research associate in electrical and computer engineering. "One goal would be to allow electromagnetic fields to essentially pass through a potentially obstructing object. For example, you may wish to put a cloak over the refinery that is blocking your view of the bay." By eliminating the effects of obstructions, cloaking also could improve wireless communications, researchers said. Along the same principles, an acoustic cloak could serve as a protective shield, preventing the penetration of vibrations, sound, or seismic waves, they say. Cloaking would only be the first among a variety of uses for the design method, the researchers suggest. With fine-tuned metamaterials, electromagnetic radiation at frequencies ranging from visible light to electricity could be redirected at will for virtually any application. One example could be the development of metamaterials that focus light to provide a more perfect lens through optimization of its shape. "To exploit electromagnetism, engineers use materials to control and direct the field: a glass lens in a camera, a metal cage to screen sensitive equipment, 'black bodies' of various forms to prevent unwanted reflections," the researchers said in their article. The design theory provides the precise mathematical function describing a metamaterial with structural details that would allow its interaction with electromagnetic radiation in the manner desired, they said, to guide fabrication of metamaterials with those precise characteristics. The theory is simple, Smith said. "It's nothing that couldn't have been done 50 or even 100 years ago," he said. "However, natural materials display only a limited palette of possible electromagnetic properties.
The theory has only now become relevant because we can make metamaterials with the properties we are looking for." The team's next major goal is an experimental verification of invisibility to electromagnetic waves at microwave frequencies. Such a cloak, the scientists said, would have utility for wireless communications, among other applications.

—Edited by Lisa Sutor, Control Engineering contributing editor
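As optional background on the "design theory" described above (this is a summary of the standard transformation-optics prescription associated with this line of research, not an equation quoted from the article): the idea is to treat a coordinate transformation of space as a recipe for material properties. If a region is mapped by x -> x'(x) with Jacobian Lambda, the permittivity and permeability that make fields in the new coordinates behave as they did in the old ones are, in the usual formulation,

    \Lambda^{i'}_{\ i} = \frac{\partial x'^{i'}}{\partial x^{i}}, \qquad
    \varepsilon'^{\,i'j'} = \frac{\Lambda^{i'}_{\ i}\,\Lambda^{j'}_{\ j}\,\varepsilon^{ij}}{\det \Lambda}, \qquad
    \mu'^{\,i'j'} = \frac{\Lambda^{i'}_{\ i}\,\Lambda^{j'}_{\ j}\,\mu^{ij}}{\det \Lambda}

A cloak corresponds to a transformation that expands a point into a finite hidden volume; the resulting anisotropic permittivity and permeability are what the metamaterial has to be engineered to supply.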
UNDERSTANDING CAMERA LENS FLARE Lens flare is created when non-image forming light enters the lens and subsequently hits the camera's film or digital sensor. This often appears as a characteristic polygonal shape, with sides which depend on the shape of the lens diaphragm. It can lower the overall contrast of a photograph significantly and is often an undesired artifact, however some types of flare may actually enhance the artistic meaning of a photo. Understanding lens flare can help you use it — or avoid it — in a way which best suits how you wish to portray the final image. WHAT IT LOOKS LIKE The above image exhibits tell-tale signs of flare in the upper right caused by a bright sun just outside the image frame. These take the form of polygonal bright regions (usually 5-8 sides), in addition to bright streaks and an overall reduction in contrast (see below). The polygonal shapes vary in size and can actually become so large that they occupy a significant fraction of the image. Look for flare near very bright objects, although its effects can also be seen far away from the actual source (or even throughout the image). Flare can take many forms, and this may include just one or all of the polygonal shapes, bright streaks, or overall washed out look (veiling flare) shown above. BACKGROUND: HOW IT HAPPENS All but the simplest cameras contain lenses which are actually comprised of several "lens elements." Lens flare is caused by non-image light which does not pass (refract) directly along its intended path, but instead reflects internally on lens elements any number of times (back and forth) before finally reaching the film or digital sensor. Note: The aperture above is shown as being behind several lens elements. Lens elements often contain some type of anti-reflective coating which aims to minimize flare, however no multi-element lens eliminates it entirely. Light sources will still reflect a small fraction of their light, and this reflected light becomes visible as flare in regions where it becomes comparable in intensity to the refracted light (created by the actual image). Flare which appears as polygonal shapes is caused by light which reflects off the inside edges of the lens aperture (diaphragm), shown above. Although flare is technically caused by internal reflections, this often requires very intense light sources in order to become significant (relative to refracted light). Flare-inducing light sources may include the sun, artificial lighting and even a full moon. Even if the photo itself contains no intense light sources, stray light may still enter the lens if it hits the front element. Ordinarily light which is outside the angle of view does not contribute to the final image, but if this light reflects it may travel an unintended path and reach the film/sensor. In the visual example with flowers, the sun was not actually in the frame itself, but yet it still caused significant lens flare. REDUCING FLARE WITH LENS HOODS A good lens hood can nearly eliminate flare caused by stray light from outside the angle of view. Ensure that this hood has a completely non-reflective inner surface, such as felt, and that there are no regions which have rubbed off. Although using a lens hood may appear to be a simple solution, in reality most lens hoods do not extend far enough to block all stray light. This is particularly problematic when using 35 mm lenses on a digital SLR camera with a "crop factor," because these lens hoods were made for the greater angle of view. 
In addition, hoods for zoom lenses can only be designed to block all stray light at the widest focal length. Petal lens hoods often protect better than non-petal (round) types. This is because petal-style hoods take into account the aspect ratio of the camera's film or digital sensor, and so the angle of view is greater in one direction than the other. If the lens hood is inadequate, there are some easy but less convenient workarounds. Placing a hand or piece of paper exterior to the side of the lens which is nearest the flare-inducing light source can mimic the effect of a proper lens hood. On the other hand, it is sometimes hard to gauge when this makeshift hood will accidentally become part of the picture. A more expensive solution used by many pros is using adjustable bellows. This is just a lens hood which adjusts to precisely match the field of view for a given focal length. Another solution to using 35 mm lenses and hoods on a digital SLR with a crop factor is to purchase an alternative lens hood. Look for one which was designed for a lens with a narrower angle of view (assuming this still fits the hood mount on the lens). One common example is to use the EW-83DII hood with Canon's 17-40 f/4L lens, instead of the one it comes with. The EW-83DII hood works with both 1.6X and 1.3X (surprisingly) crop factors as it was designed to cover the angle of view for a 24 mm lens on a full-frame 35 mm camera. Although this provides better protection, it is still only adequate for the widest angle of view for a zoom lens. Despite all of these measures, there is no perfect solution. Real-world lens hoods cannot protect against stray light completely since the "perfect" lens hood would have to extend all the way out to the furthest object, closely following the angle of view. Unfortunately, the larger the lens hood the better — at least when only considering its light-blocking ability. Care should still be taken that this hood does not block any of the actual image light. INFLUENCE OF LENS TYPE In general, fixed focal length (or prime) lenses are less susceptible to lens flare than zoom lenses. Other than having an inadequate lens hood at all focal lengths, more complicated zoom lenses often have to contain more lens elements. Zoom lenses therefore have more internal surfaces from which light can reflect. Wide angle lenses are often designed to be more flare-resistant to bright light sources, mainly because the manufacturer knows that these will likely have the sun within or near the angle of view. Modern high-end lenses typically contain better anti-reflective coatings. Some older lenses made by Leica and Hasselblad do not contain any special coatings, and can thus flare up quite significantly under even soft lighting. MINIMIZING FLARE THROUGH COMPOSITION Flare is thus ultimately under the control of the photographer, based on where the lens is pointed and what is included within the frame. Although photographers never like to compromise their artistic flexibility for technical reasons, certain compositions can be very effective at minimizing flare. The best solutions are those where both artistic intent and technical quality coexist. One effective technique is to place objects within your image such that they partially or completely obstruct any flare-inducing light sources. The image on the left shows a cropped region within a photo where a tree trunk partially obstructed a street light during a long exposure. 
Even if the problematic light source is not located within the image, photographing from a position where that source is obstructed can also reduce flare. The best approach is to of course shoot with the problematic light source to your back, although this is usually either too limiting to the composition or not possible. Even changing the angle of the lens slightly can still at least change the appearance and position of the flare. VISUALIZING FLARE WITH THE DEPTH OF FIELD PREVIEW The appearance and position of lens flare changes depending on the aperture setting of the photo. The viewfinder image in a SLR camera represents how the scene appears only when the aperture is wide open (to create the brightest image), and so this may not be representative of how the flare will appear after the exposure. The depth of field preview button can be used to simulate what the flare will look like for other apertures, but beware that this will also darken the viewfinder image significantly. The depth of field preview button is usually found at the base of the lens mount, and can be pressed to simulate the streaks and polygonal flare shapes. This button is still inadequate for simulating how "washed out" the final image will appear, as this flare artifact also depends on the length of the exposure (more on this later). Lens filters, as with lens elements, need to have a good anti-reflective coating in order to reduce flare. Inexpensive UV, polarizing, and neutral density filters can all increase flare by introducing additional surfaces which light can reflect from.
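Returning to the lens-hood discussion earlier: a hood designed for a full-frame 35 mm camera under-protects on a crop-sensor body because the crop body records a narrower angle of view, so the hood's wide opening leaves a margin of angles through which stray light can still strike the front element. A small sketch makes the difference concrete (my own illustration with generic sensor sizes, not figures for any particular lens or hood):

    import math

    def angle_of_view_deg(focal_length_mm, sensor_diagonal_mm):
        """Diagonal angle of view for a simple rectilinear lens focused at infinity."""
        return math.degrees(2 * math.atan(sensor_diagonal_mm / (2 * focal_length_mm)))

    FULL_FRAME_DIAG = 43.3                  # mm, for a 36 x 24 mm sensor
    CROP_1_6_DIAG = FULL_FRAME_DIAG / 1.6   # same lens mounted on a 1.6x crop body

    for f in (17, 24, 40):
        print(f"{f} mm lens: full frame {angle_of_view_deg(f, FULL_FRAME_DIAG):.0f} deg, "
              f"1.6x crop {angle_of_view_deg(f, CROP_1_6_DIAG):.0f} deg")

For a 24 mm lens, for example, the diagonal angle of view drops from roughly 84 degrees on full frame to roughly 59 degrees on a 1.6x body, which is why a hood sized for the narrower view, like the alternative hood mentioned earlier, blocks stray light more effectively.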
Some scientific progress is made by developing new concepts, and some is made by throwing monkey wrenches into existing ideas. A letter published today in Nature looks like an instance of the latter. It's pretty well accepted that the Moon formed from material ejected after a Mars-sized body collided with the Earth, but the timeline of how the Moon came together after that impact is an area of active research. Our general understanding of the formation of planetary bodies involves chemical differentiation during the solidification of molten material. As vast "oceans" of magma slowly cool, certain minerals crystallize earlier than others, removing their constituents from the mix. On the Moon, a group of rocks called ferroan anorthosites (or FANs) are thought to have accumulated atop the magma ocean as they crystallized, forming the first lunar crust. FANs have proven very difficult to date because of the poorly constrained isotopic geochemistry of the oldest Moon rocks. As a result, their calculated ages have had rather large error bars attached to them. The ages that have emerged so far indicate that the FANs formed soon after the lunar material was ejected from Earth. In other words, the magma oceans cooled fairly quickly. The authors of this letter developed improved methods for dating FANs that allowed them to calculate ages with unprecedented precision. They used three isotopic systems commonly applied to these rocks—207Pb-206Pb, 147Sm-143Nd, and 146Sm-142Nd. For the first time, the researchers were able to calculate a concordant age—that is, an age on which these different series agree exactly. Previous attempts to date FANs had encountered too much error for the series to agree so precisely. The work resulted in an age of 4.36 billion years (give or take 3 million). That’s several tens of millions of years more recent than we had thought the Moon's crust formed. This leads to one of two possibilities: either the Moon took much longer to accrete and solidify than we thought, or the assumptions about FANs forming in the last stages of magma oceans are incorrect. A different process (known as serial magmatism) could explain the measured FAN ages, but the magma ocean theory was partly based on the characteristics of FANs. If FANs are in fact a product of a different process, our understanding of how planetary bodies solidify and differentiate could take a step backward.
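For readers unfamiliar with how such ages are computed, the basic radiometric age relation for a single parent-daughter pair is (standard background, not a formula or any data taken from the Nature letter itself):

    t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{D^{*}}{P}\right)

where P is the surviving parent isotope, D* is the radiogenic daughter accumulated since the rock closed, and lambda is the decay constant of the system. The systems named above are variations on this idea (Pb-Pb combines two uranium decay chains, and 146Sm-142Nd is an extinct-nuclide system), but the notion of concordance is the same: independent chronometers returning one consistent age, which is what the improved FAN measurements achieved.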
Phonics Dance an effective teaching tool, researcher says Incorporating student chanting and movement into language instruction may help children learn words faster. An innovative phonics instruction program created by an Ohio first-grade teacher in 1999 compared favorably to a popular, more traditional program in recent research conducted by Dr. Amy Mullins, assistant professor of education at Bluffton University. While the 74 participating first-graders from two northwest Ohio elementary schools improved in four assessed areas regardless of the program used, the group taught with The Phonics Dance showed a slightly higher average increase in word identification skills, Mullins found. The first-year Bluffton faculty member presented her doctoral research results during a campus colloquium March 14. She earned her Ph.D. in educational curriculum and instruction from the University of Toledo last year. Calling phonics “the most debated aspect of reading instruction for several decades,” Mullins said the term “phonics” alone is confusing to many people, maybe because it has multiple meanings. It is “a system for encoding speech sounds into written words,” she said, but also “a method of teaching learners relationships between letters and sounds and how to apply the code to recognize words.” The Phonics Dance is a daily 20-minute lesson during which children chant, rhyme and write, as well as move, while learning word recognition skills. Ginny Dowd devised the program “after teaching first grade for several years and observing that some of her students had difficulty decoding words,” Mullins explained. “She was inspired to create an engaging approach to learning strategies to sound out words.” Educators nationwide have purchased and implemented The Phonics Dance, whose components include alphabet sound review; word association; “hunk and chunks” (a way of learning parts of letter patterns); and “monster” (confusing or tricky to spell) words, each of which has a specific chant. In the program, letter names and letter sounds are reviewed every day, and letter patterns are introduced beginning in the third week of school and continuing until Christmas. “With the basal phonics program,” she noted, “letter patterns are introduced, but later, and they are not all covered until the end of the school year.” “Students can decode many words without knowledge of letter patterns. However, students are able to decode words with automaticity if they know letter patterns,” she added. In her research, Mullins assessed the first-graders’ letter naming skills, identification of letter sounds and nonsense words, and phoneme segmentation—ability to break words down into individual sounds. She did her work at their rural, public schools during the first week of classes, after eight weeks of school and again after eight more weeks. The former elementary teacher said she wanted to study The Phonics Dance because, while many people have bought the program, there has been little research about its effectiveness. The school where she tested The Phonics Dance had a higher percentage of students considered socioeconomically disadvantaged—a factor that would normally correlate with students having more difficulty learning reading skills, she pointed out. While phonics should be a small part of an overall reading program, Mullins added, research has indicated that systematic phonics instruction is beneficial for children. 
Teacher candidates are required to take a phonics course while in Bluffton’s teacher education program.
Margie the cake baker comes into the lobby of the First National Bank of Ceelo to deposit a check for $20,000 into her savings account. Margie trusts that her money is safe at the bank, and she knows that at any time, she can return to the bank and withdraw her money. As she deposits the check, she sees Bob the business owner sitting in an office with an officer of the bank. Bob is talking to a loan officer about borrowing money to buy a new ginormous commercial mower with heated seats, gold plating and anti-lock brakes. The bank serves an important purpose by connecting savers like Margie, who want to earn a return on their money, with borrowers like Bob, who are willing to pay a price for the use of that money in their businesses. The bank pays a relatively low interest rate to customers like Margie who deposit money into savings, and the bank loans out most of this money to borrowers like Bob at a higher interest rate, keeping the difference. That's how the bank makes money. Although Margie may not realize it, two things will happen with the $20,000 she's depositing. A fraction of this deposit, say 10%, will be set aside by the bank as a reserve. The rest of this money, or 90% in this case, will get loaned out to borrowers like Bob the business owner, who wants to invest into his business.

In this lesson, we're talking about the fractional reserve banking system, which is a fancy way of saying that banks loan out most, but not all, of the money that gets deposited into accounts. Most modern economies are based on this system. Let's take a look at Margie's deposit and Bob's loan from the bank's perspective and see how the fractional reserve banking system works in banks across the economy.

First, a little history. The fractional reserve banking system is a system in which banks hold back a small fraction of their deposits in a reserve and loan out the rest of their deposits to borrowers. This whole idea started in the Middle Ages, when people used gold as money and needed a place to store it. The bankers, called goldsmiths at the time, agreed to hold onto a customer's gold for a small fee. Every person who deposited gold with the goldsmiths was given a gold receipt that they could hold onto and return whenever they wanted to redeem the gold. Eventually, people started using these gold receipts as money to buy goods and services. Then the goldsmiths observed that most people deposited their gold but only withdrew a small part of it. So, they came up with the idea to loan out some of this gold and charge interest, and that's where we get the fractional reserve banking system.

The fractional reserve banking system legally permits banks to hold less than 100% of their deposits as a reserve. Banking serves as the foundation of the economy because entrepreneurs and businesses borrow money to invest, and their investment produces economic growth. The fractional reserve idea was designed to ensure that while they are loaning out money, they have enough reserves on hand to cover any withdrawals that consumers want to make from their accounts. Of course, the danger of this strategy is that when people become fearful, they sometimes show up to the bank and demand all of their money at the same time, in which case, there isn't enough money to give everyone.
That's what economists call a run on the banks, and it's one of the reasons that the central bank was created - to lend money to banks if they run out of reserves. When Margie deposited $20,000 into her account at the bank, the bank calls that a demand deposit because she can place a demand on that money and withdraw it at any time. Her $20,000 deposit increased the bank's demand deposits by $20,000. In accounting, we describe this as a liability, because it is money that the bank owes to Margie. The cash the bank receives is also considered an asset, and according to accounting, assets must always equal liabilities.

The fractional reserve system requires banks to reserve a portion of every deposit as a safety precaution. Banks call this required reserves. How do they know what to reserve? There is a required reserve ratio for all banks in the economy. Who decides what this number is? The central bank. The required reserve ratio is the percentage of deposits that banks are required to reserve.

Now let's talk about calculating required reserves. Required reserves cannot be loaned out. For example, a required reserve ratio of 10% means that 10% of all demand deposits must be set aside on reserve. To figure out how much a bank has to reserve, you simply multiply the amount of their deposits by the required reserve ratio. For example, with a required reserve ratio of 10%, and demand deposits of $20,000, the required reserves would be 10% x $20,000, which is $2,000. These reserves will not be held in the bank; they'll be held at the Federal Reserve Bank. Whatever is left after reserving this amount can be loaned out, something that banks call excess reserves. Excess reserves are bank reserves above and beyond the reserve requirement set by a central bank. Excess reserves may be loaned out by the bank in order to generate profits. For example, when the required reserve ratio is 10%, and a demand deposit of $20,000 is made at the First National Bank of Ceelo, required reserves of $2,000 would first be set aside. That means $20,000 minus $2,000, or $18,000, is considered excess reserves and can be loaned out to borrowers like Bob. In the fractional reserve banking system, when a bank lends to a customer, this increases the money supply.

Okay, it's time to take off your economics hat, and put on your accounting hat for a minute. When a consumer deposits money into a bank checking or savings account, this demand deposit is considered a liability to the bank because they owe this money to the customer. Just think about that - when you deposit money in your bank, you can decide to take this money out if you need it or want it. The bank owes this money to you. You didn't give it to them; you allowed them to hold it for you. From their perspective, the bank's reserves and loans are both considered assets. In accounting, we have what's called a T-account, with reserves and loans on the asset side of the bank's balance sheet and demand deposits on the liability side. The assets and liabilities always have to balance. That's why we call it a balance sheet.

Let's review Margie's deposit of $20,000 and Bob's loan of $18,000, this time wearing our accounting hat, so we can look at it from the bank's perspective. Here's how we account for it:

First National Bank of Ceelo Balance Sheet as of Dec 22, 2020
| Assets | Liabilities |
| Reserves $2,000 | Demand Deposits $20,000 |
| Loans $18,000 | |

As you can see, Margie's $20,000 deposit is on the right side, under demand deposits, while the required reserves, as well as Bob's loan of $18,000, are on the left side.
Notice that both sides of this balance sheet total $20,000. Now suppose that Bob has a great year, and he gets the White House as a lawn customer. How awesome is that? Each month he mows the White House lawn, he earns a $100,000 paycheck. Wow, that must be a really big lawn. Anyway, Bob takes one of his $100,000 checks and deposits it into the First National Bank of Ceelo. As a result, not only does the bank president take Bob to Ruth's Sally Steak House, but excess reserves go up by $80,000. What we want to know is: what is the required reserve ratio?

You can probably see already that $20,000 would be held back as required reserves, and out of $100,000, $20,000 represents 20%. However, it is trickier when we're not using $100,000 as an example. So let's look at this one step at a time. To find the answer, remember what excess reserves mean. Excess reserves are money that the bank has available to loan out. In other words, excess reserves are not required reserves. Here's the formula we need to answer this question:

Required reserves = deposit amount x required reserve ratio

In this situation, required reserves = $100,000 minus $80,000, which is $20,000. Plugging the numbers we know into the formula gives us: $20,000 = $100,000 x the required reserve ratio. Now we solve for the required reserve ratio by dividing each side by $100,000, and get: $20,000 / $100,000 = required reserve ratio, or 20%.

Let's summarize what we've learned in this lesson. The fractional reserve banking system is a system in which banks hold back a small fraction of their deposits in a reserve and loan out the rest of their deposits to borrowers. The fractional reserve banking system legally permits banks to hold less than 100% of their deposits as a reserve. The required reserve ratio is the percentage of deposits that banks are required to reserve. To figure out how much a bank has to reserve, you simply multiply the amount of new deposits by the required reserve ratio. Excess reserves are bank reserves above and beyond the reserve requirement set by a central bank. Excess reserves may be loaned out by the bank in order to generate profits. In the fractional reserve banking system, when a bank lends to a customer, this increases the money supply. When a customer deposits money into a bank checking or savings account, this demand deposit is considered a liability to the bank, because they owe this money to the customer.
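The lesson's two worked examples reduce to a couple of one-line formulas. Here is a small sketch of that arithmetic (illustrative only; the deposit amounts and the 10% and 20% ratios are the lesson's made-up numbers):

    def required_reserves(deposit, reserve_ratio):
        """Portion of a new deposit the bank must hold back."""
        return deposit * reserve_ratio

    def excess_reserves(deposit, reserve_ratio):
        """Portion of a new deposit the bank may loan out."""
        return deposit - required_reserves(deposit, reserve_ratio)

    def implied_reserve_ratio(deposit, excess):
        """Back out the required reserve ratio when only the excess reserves are known."""
        return (deposit - excess) / deposit

    # Margie's $20,000 deposit with a 10% requirement.
    print(required_reserves(20_000, 0.10))         # 2000.0 held in reserve
    print(excess_reserves(20_000, 0.10))           # 18000.0 available to lend to Bob

    # Bob's $100,000 deposit that raises excess reserves by $80,000.
    print(implied_reserve_ratio(100_000, 80_000))  # 0.2, i.e. a 20% required reserve ratio

The last call backs out the 20% ratio exactly as the lesson does: required reserves are the $20,000 of Bob's deposit that did not become excess reserves, and $20,000 / $100,000 = 0.2.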
The Woolly Rhino is an extinct species of rhinoceros that lived in Europe during the Pleistocene Epoch. It had a thick coat and a layer of fat to keep it warm. It was 11-12 feet (3.8 m) in length, 6 feet (2 m) in height, and weighed about 1.5 to 2 tons. It had two horns on its snout; the largest horn was 3 ft. (1.2 m) long, made of matted hair, and very flat, giving it the ability to brush away snow to find vegetation to graze on. This Ice Age mammal had short thick legs and a stocky body, with long hair and small ears. It may have been best adapted to a tundra existence. Cave paintings made in Southern France 30,000 years ago depict it with what may have been a wide dark band between the front and hind legs. This prehistoric beast ranged throughout Northern Europe and Asia, living from about 500,000 years ago until it went extinct about 10,000 years ago, although carbon dating suggests that it may have survived in Western Siberia as recently as 8,000 years ago. It is most closely related to today's Sumatran Rhinoceros.
As Published on Wired.com. By Mary Bates Scientists speculate two factors may influence why some animal species are smarter than others: the foraging behavior of a species (for instance, how cognitively demanding it is for the animals to obtain food) and the social complexity of the animals’ society. A new study looked at problem-solving skills, which reflect animals’ ability to understand and solve a novel situation, and whether they’re related to a species’ social complexity or foraging ecology. Anastasia Krasheninnikova, Stefan Bräger, and Ralf Wanker of the University of Hamburg, Germany, tested four parrot species with different social systems and diets: spectacled parrotlets, green-winged macaws, sulphur-crested cockatoos, and rainbow lorikeets. “One of the characteristics of complex cognition in animals is the ability to understand causal relationships spontaneously, and one way of testing this is asking the animal to obtain a reward that is out of reach,” says Krasheninnikova. She and her colleagues gave the birds five variations on a string-pulling task, involving strings that varied in their relationship to each other or to a food reward, to see whether the birds really understood the means-end relationship between the string and the food. The first test was a basic string-pulling task in which the bird must figure out how to pull up a piece of food suspended from a perch by a single piece of string. Almost all the birds of all species solved this test immediately. In the second task, there were two hanging strings, but only one was attached to a piece of food. If the bird really understood the string as a means to obtain the reward, it should pull only the rewarded string. Most of the birds (more than 75%) were able to solve this test on their first try. To make sure that the bird really understood the functional relationship between food and string and was not just pulling the string closest to the food reward, the third task used a pair of crossed strings. In this test, pulling the string directly above the food would not result in obtaining the food, while pulling the further string that is actually attached to the food would. The spectacled parrotlets and rainbow lorikeets outperformed the macaws and cockatoos on this test, and only the parrotlets were able to figure out the test when the strings were the same color. Krasheninnikova says this study is the first to document a parrot species solving the crossed-strings task spontaneously. The fourth task probed the flexibility of the bird’s behavior. The string was longer, so the bird could obtain the food from the ground rather than pulling the string up. Several members of all species adapted their problem-solving strategies by stopping string-pulling behavior and obtaining the food from the ground, but only the parrotlets and lorikeets clearly preferred the alternative strategy. In the fifth and final task, there were two rewarded strings, but one had a gap between its end and the reward. Solving this task required the bird to understand the mere presence of the reward does not guarantee the reward can be obtained; the food had to be connected to the string to work properly. Parrotlets were the only species to successfully solve task five. When Krasheninnikova and her colleagues compared their results to the birds’ lifestyles, they found the pattern in performance was best explained by differences in the species’ social structures rather than their diets. 
Spectacled parrotlets performed best of the four species tested and they live in what’s known as a fission-fusion society. These birds live in large groups where they form different social subunits that split and merge, providing the opportunity for many different kinds of social interactions. They are also the only one of the four species tested to form crèches where young birds pass through the socialization process. Green-winged macaws and sulphur-crested cockatoos live in small, stable family groups centered around a breeding pair and their offspring. These species failed tests four and five. The social organization of rainbow lorikeets falls somewhere between the parrotlets and the macaws and cockatoos — as does their performance on the string-pulling tasks. Lorikeets live in social groups of 10-40 individuals, but do not form subunits such as crèches. They performed better than macaws and cockatoos, but not as well as parrotlets. While these results support the social complexity hypothesis, the correlation between social structure and cognitive performance is mostly indirect. The reasoning behind the hypothesis is that living in social groups is cognitively demanding. “Individuals have to recognize group members and infer relationships among them,” Krasheninnikova says. “These demands favor the evolution of understanding functional relationships, such as which actions cause which outcomes.” Socially living animals might be able to apply this cause and effect thinking to their physical world as well as their social lives. Ann Zych – FunTime Birdy
This large, handsome shorebird is often seen on our coast, calling in loud springtime territorial displays, hunkered together in small winter flocks and prying limpets off rocks. Yet the species is rare across its range and poorly understood in California in particular. We're taking steps to improve our understanding of this unique marine bird and help safeguard its future. You can help, too. (photo by Ron LeValley) Black Oystercatcher (Haematopus bachmani) shares its genus with the American Oystercatcher, found in Baja California and points east. It is one of five "rocky intertidal obligate" shorebirds on the west coast, and of this guild, the only one that does not undertake large-scale seasonal migration – many individuals likely stay in the same place year-round. (The other species in this guild are Wandering Tattler, Rock Sandpiper, Surfbird, and Black Turnstone.) It is also the rarest: the global population is estimated at 10,000-12,000 individuals, with perhaps 10% of that total in California. Oystercatchers favor sheltered areas of high tidal variation supporting plentiful invertebrates such as limpets, snails and mussels, making them sensitive indicators of intertidal habitat quality. We were inspired to take action in response to a dearth of baseline information on the species in California, enthusiasm on the part of a number of coastal chapters, and the opportunity for us to take the lead in organizing research and conservation here. We have great starting points in the 2007 Conservation Action Plan developed by federal agency scientists after Black Oystercatcher was designated a Focal Species by US Fish and Wildlife Service, and the lessons learned and protocols developed by agency scientists and citizen volunteers tracking the species in Oregon over the last five years. We decided to kick off our involvement by conducting a pilot breeding season survey at just a few sites in 2011, to better understand what it would require to elevate the effort to larger scales. The subsequent response we received from chapters, independent birders, and agencies was substantial enough to expand the scope of this year's effort to representative sites across the entire state. A number of Audubon chapters including Redwood Region, Mendocino Coast, Madrone, and Monterey are participating at a very significant level, as are US Fish and Wildlife Service, State Parks, National Park Service and Bureau of Land Management California Coastal National Monument. The goals of this year's survey are to gain an understanding of densities at suitable habitats in the breeding season; establish a baseline to understand trends in use of these habitats over time; generate a new population estimate for the state; use lessons learned from this first year to inform future activities; and build a network of citizen and professional biologists who are tracking the species and communicating about it. Tools including eBird, Christmas Bird Count, Google Earth and Google Sites are already playing a key role. Ultimately, we seek to answer questions related to securing the future of the species: how many Oystercatchers are in California, how are they distributed, and why? Are their population numbers and reproductive success increasing, decreasing, or stable? What are the primary threats to their population, and how important are these threats? What actions, such as the Marine Life Protection Act, are needed to safeguard their future? You can help Oystercatchers thrive in California.
When you are at rocky intertidal shorelines in the spring and summer, on both islands and mainland areas, take care to avoid their nests, which are located just above the high tide line. Parent Oystercatchers, which may abandon a nest once it has been disturbed, will let you know you are getting too close by flying overhead and calling loudly. When you see Oystercatchers, put your observations in eBird. Finally, just enjoy the sight and sound of these quintessential rocky intertidal birds. We will report on 2011 survey results this summer. For more information, please visit: https://sites.google.com/site/blackoystercatcherca/home
Hives (Urticaria) Chicago
What are hives? Hives, known medically as urticaria, are red, itchy bumps that are:
- Raised off of the skin
- Variable in size, from small bumps to silver dollar size or larger
- Made worse by scratching of the skin (referred to as dermatographism, which literally means "writing on the skin")
- Migratory - hives move from one place on the body to another over the course of the day
- Able to occur on any part of the body
- Very itchy
Key point: Many different types of rashes are often confused with hives. One factor that distinguishes hives from other types of rashes is that hives are migratory - meaning they move from one part of the body to another over a period of hours (an individual hive rarely stays in the same location for more than several hours at a time).
What causes hives? Hives occur when mast cells (inflammatory cells of the skin) are triggered to release a chemical called histamine. Histamine affects the blood flow in the skin, causing the red itchy bumps. Anti-histamines are taken to block the effect of histamine in the body.
What are other symptoms that can occur with the hives? The most common symptom in addition to the hives is angioedema (see below), which can occur with hives approximately half of the time. Most people experiencing hives or angioedema will not experience more life-threatening allergic symptoms. When hives do occur as part of a more severe allergic reaction (referred to as anaphylaxis), other symptoms can include:
- Tongue or throat swelling
- Light-headedness, feeling faint, loss of consciousness
- Abdominal pain, diarrhea, nausea, or vomiting
If hives occur with any of these symptoms, call 911 immediately. If they have occurred in the past, follow up with an allergist immediately to discuss important ways to treat severe allergic reactions in the future.
My doctor told me that idiopathic hives (urticaria) are the most common type of hives. What does idiopathic mean? Idiopathic literally means arising spontaneously, or without a known cause. Many patients with hives will not be able to identify any allergic (external) triggers for their hives. This confuses many patients because there are many different allergic triggers that can also cause hives (see list below). So while allergic triggers are possible, many people with hives do not have an allergic trigger. This point is illustrated by the frustration most patients experience from not being able to identify a consistent trigger for their symptoms. Most patients discover that changing such things as their soaps, detergents, and personal care products does not eliminate their symptoms.
Are idiopathic hives the same thing as autoimmune hives? Many patients with idiopathic hives are actually experiencing an "autoimmune" reaction. These patients make proteins that bind to mast cells, causing histamine release. We do not yet know why patients develop these proteins. Although autoimmune hives can be associated with thyroid problems, they are only rarely associated with other more serious autoimmune disorders.
How long do hives last? Assuming the hives are idiopathic and no allergic cause is identified, hives lasting less than 6 weeks are considered acute and are more likely to resolve quickly. Hives lasting longer than 6 weeks are called chronic and are more likely to continue for a longer period. Essentially, the longer you have had your hives, the longer they are likely to continue. Chronic hives can last for many months or even years.
Some patients will have recurrent episodes over the years with long symptom-free periods in between. Unfortunately, we do not know what turns this process on and off. Ultimately, however, the hives will stop in most cases.
So besides idiopathic or autoimmune urticaria, what are other possible causes? Chronic hives are much less likely to have an allergic trigger. Even with acute hives, we are often not able to identify any allergic triggers. However, it is still essential to consult your doctor to make sure that you are not at risk for a more severe allergic reaction. Possible allergic triggers include:
- Foods: When a food allergy is responsible for causing hives, the allergic reaction typically occurs immediately after eating the food but may occur up to two hours later. A food allergy will rarely cause hives (or other allergic symptoms) more than several hours after eating the food. Also, allergic symptoms will usually occur every time that the food is eaten. Of course, there are always some exceptions and it is important to discuss your symptoms with a physician.
- Medications: Although any medication can cause an allergic reaction, some are more likely to do so than others. Because medication allergies can be potentially severe, it is extremely important to discuss your symptoms with a physician knowledgeable about medication allergies. Information here is intended only for informational purposes.
- Insect stings (such as bees, wasps, hornets, yellow jackets, or fire ants)
- Latex allergy
- Heat or cold-induced
- Viral infections (may be responsible for some cases of acute hives in children)
- Pressure-induced (areas of more intense pressure such as the feet, belt/waist area, or bra strap)
So let me get this straight. Are you saying that despite all of the things that may cause hives, many people with hives do not have any type of allergy? Yes, but a person with hives should always follow up with their doctor to make sure there are no allergic causes. I know it is confusing. But ask yourself, is there anything that seems to consistently trigger your hives? For most people, the answer is "no." Routine viral infections may be a common cause of acute hives (lasting less than six weeks), however there is no routine way of testing for this. Because an allergy causing hives should cause symptoms with each exposure (there are some exceptions to this rule), an allergist should be able to identify possible allergic triggers in most cases. If no allergy is found, the hives are likely to be idiopathic.
Why do I sometimes experience swelling of my lips or eyes with the hives? This type of swelling is called angioedema and occurs when histamine is released in deeper areas of the skin. The lips and eyelids are most commonly involved, although other areas of the body such as the hands and feet can also swell. Swelling can last several days if not treated. The swelling can occur with or without the hives. Swelling only rarely involves the throat or tongue. If you have experienced possible throat or tongue swelling in the past, you should notify your doctor immediately.
What else can cause angioedema? There are two types of angioedema that can occur without hives:
- Hereditary angioedema
- ACE inhibitor-induced angioedema: ACE inhibitors are a popular class of blood pressure medications that can cause angioedema without hives. The following medications all contain an ACE inhibitor.
If you experience angioedema and are taking one of these medications, notify your doctor immediately. The angioedema can be potentially life threatening.
- Lotensin HCT
- Vaseretic
Most medications are assigned two different names.
- Generic name: When a medication is first discovered, it is given a chemical name that describes certain biochemical properties of the medication. This is referred to as the generic (official) name. It is usually too long and cumbersome for general use. (For example, ibuprofen is the generic name of a type of anti-inflammatory pain medication.)
- Trade (branded or proprietary) name: Drug manufacturers create a name for their version of the medication that they produce. This trade name of the medication is the exclusive property of the company. The trade name is typically followed by the symbol ®. (For example, Advil® is the trade name given to ibuprofen by one drug manufacturer. Motrin® is the trade name of ibuprofen given by a different drug manufacturer.)
Can stress cause hives? Although hives can be stressful, stress does not appear to cause hives for most people (think of all of the stressful times in your life that were not associated with hives!).
Can hives or angioedema be life threatening? Fortunately, most people with idiopathic or autoimmune hives and/or angioedema do not develop swelling of the tongue, throat, or other life-threatening symptoms. However, if tongue swelling, throat swelling, or any difficulty breathing (respiratory distress) does occur, an EpiPen® can be lifesaving and should be available at all times. An EpiPen® contains the medication epinephrine (adrenaline) in an injectable form and is used to reverse severe allergic reactions. It is very important to discuss your symptoms with your physician, particularly if you have had possible swelling of the throat, tongue, or other potentially life-threatening symptoms.
Hives treatment / swelling (angioedema) treatment options in Chicago
There are a number of different hives treatment options that can be discussed with your doctor. Anti-histamines are the most commonly prescribed medications for treating hives. A number of different hives treatment medications are used to treat hives. (Trade names are capitalized and followed by ®, while the generic names are in lower case without the symbol ®.)
Hives treatment with anti-histamine medications
Hives treatment with anti-histamine medications is the most common form of hives treatment and helps block the histamine release that causes the itching and swelling.
Hives treatment with over-the-counter anti-histamines
- Zyrtec® (cetirizine) is a very effective anti-histamine and is available without a prescription. A minority of patients can experience sedation.
- Claritin® (loratadine) is non-sedating and available over-the-counter but less effective than Zyrtec®.
- Benadryl® (diphenhydramine) is effective for hives treatment but sedation is a problem for many people. It has a shorter duration of action (4-6 hours) than Zyrtec® or Claritin®.
Hives treatment with prescription anti-histamines:
- Xyzal® (levocetirizine) is felt by many allergists to be the most effective anti-histamine for hives treatment. It is the new version of Zyrtec® and appears to cause less sedation.
- Allegra® (fexofenadine) is one of the least sedating anti-histamines.
- Clarinex® (desloratadine) is the new version of Claritin® and is non-sedating.
- Atarax® (hydroxyzine) has a shorter duration of action than most other anti-histamines and is also sedating.
Hives treatment with H2 blockers
Although normally used to treat acid reflux (GERD), H2 blockers can also be helpful for some patients with hives when used in combination with anti-histamines.
Over-the-counter hives treatment H2 blockers:
- Pepcid® (famotidine)
- Zantac® (ranitidine)
Hives treatment with leukotriene modifiers
These medications are usually used for asthma and allergic rhinitis but can sometimes be helpful for hives treatment. I usually try this medication when anti-histamines and H2 blockers have not been effective. No over-the-counter leukotriene modifiers are available.
Hives treatment with prescription leukotriene modifiers
- Singulair® (montelukast)
- Accolate® (zafirlukast)
- Zyflo® (zileuton) is not used commonly because of the need to follow liver tests while taking the medication.
Hives treatment with oral corticosteroids
Although the most effective medication for hives treatment, oral corticosteroids should be reserved for more severe hives that have not responded to other medications. Side effects with short-term use are not common, but long-term use should be avoided when possible. It is important to discuss dosing instructions carefully with your doctor and to take the medication as prescribed. Steroid injections should not be routinely used to treat chronic hives.
- Medrol Dosepak® (methylprednisolone)
- Orapred®¹, Pediapred®², Prelone®² (prednisolone)
¹Available in liquid form or as an orally disintegrating tablet (ODT)
²Available only in liquid form
Hives treatment with ketotifen
A medication that can block histamine release and that is available only in Europe and Canada. It can be effective when other medications have not worked.
Hives treatment with doxepin
Available by prescription, doxepin was originally developed as an anti-depressant, but it is a potent anti-histamine and is sometimes used to treat more severe hives. Sedation and weight gain have limited its use.
Hives treatment with other medications
Other medications such as Plaquenil® (hydroxychloroquine), dapsone, and cyclosporine are occasionally used to treat severe chronic hives.
Hives treatment and testing
- For idiopathic hives lasting less than 6 weeks, no testing is usually required. However, there are times when testing should be considered as determined by your doctor. Hives that are painful or that leave pigment changes of the skin may be a sign of a more serious underlying disorder.
- If a specific allergic trigger for the hives is suspected, allergy testing can be helpful.
- Routine blood work is most often normal but can be checked in patients with chronic hives.
- Rarely, a skin biopsy (a piece of skin is taken and sent to a lab for interpretation) is necessary for more severe hives that do not respond to medications.
Hives treatment in Chicago and Arlington Heights
Clarity Allergy Center provides comprehensive hives testing, diagnosis and treatment in the Chicago area.
A study from the Carnegie Institution for Science, led by Kate Marvel of Lawrence Livermore National Laboratory, has revealed that there is enough wind power to meet all of the world's energy demands. The US team say that with the help of airborne wind turbines (those that convert steadier and faster high-altitude winds into energy) the planet would be able to generate even more power than with ground- and ocean-based units alone. Carnegie's Ken Caldeira, who aided Kate Marvel on the project, explained that high-altitude wind power could have a massive effect on the world's renewable energy needs. Using models, the Carnegie team were able to quantify the amount of power that could be generated from both surface and atmospheric winds. They also looked at the geophysical limitations of these techniques to find which were the most efficient. The team's research found that turbines create drag, or resistance, which removes momentum from the winds and tends to slow them. As the number of wind turbines increases, the amount of energy that is extracted increases, but at some point the winds would be slowed so much that adding more turbines would not generate more electricity. Combining their assorted research, the team determined that more than 400 terawatts of power could be extracted from surface winds and more than 1,800 terawatts could be generated by winds extracted throughout the atmosphere. Currently, the planet uses only about 18 TW of power, so harnessing near-surface winds could supply more than 20 times today's global power demand, and wind turbines on kites could potentially capture 100 times the current global power demand! "Looking at the big picture, it is more likely that economic, technological or political factors will determine the growth of wind power around the world, rather than geophysical limitations," Caldeira said. via Science Daily
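As a quick back-of-the-envelope check on the figures quoted above (not something from the article itself), the ratios work out as follows; the terawatt values are simply those reported in the text, and the variable names are illustrative.

```python
# Sanity check of the quoted figures; values are as reported in the article summary above.
GLOBAL_DEMAND_TW = 18        # current global power use cited in the article
SURFACE_WIND_TW = 400        # estimated extractable power from surface winds
ATMOSPHERIC_WIND_TW = 1800   # estimated extractable power from all atmospheric winds

print(f"Surface winds: ~{SURFACE_WIND_TW / GLOBAL_DEMAND_TW:.0f}x current demand")            # ~22x
print(f"High-altitude winds: ~{ATMOSPHERIC_WIND_TW / GLOBAL_DEMAND_TW:.0f}x current demand")  # 100x
```

These ratios match the "more than 20 times" and "100 times" figures in the article.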
It is important to understand the concept of speed and how it is related to other concepts in physics. Students are often asked to calculate the speed of an object in various situations. In this article you will learn how to calculate the impact speed of an object when it is dropped from a given height.
1. Read the problem carefully, drawing a diagram and identifying key information like the height and the initial speed of the object.
2. Convert the speed to meters per second (m/s) and the height to meters (m).
3. Square the initial speed of the object in m/s by multiplying the speed by itself. If the object is dropped from rest, the initial speed is zero.
4. Multiply the height of the object in meters by 2 and by 9.8 m/s^2 (the acceleration due to gravity).
5. Add the two products found in Steps 3 and 4.
6. Take the square root of the sum. This number is the impact speed of the object in m/s.
Write out each step. Use a pencil in case you make mistakes.
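For readers who prefer to see the steps as a formula, the procedure above amounts to v = sqrt(v0^2 + 2gh). Here is a short sketch of it in Python; the function name and the example numbers are illustrative, not from the original article.

```python
import math

def impact_speed(height_m, initial_speed_ms=0.0, g=9.8):
    """Impact speed (m/s) of an object falling from height_m,
    ignoring air resistance: v = sqrt(v0**2 + 2*g*h)."""
    return math.sqrt(initial_speed_ms ** 2 + 2 * g * height_m)

# Dropped from rest off a 20 m ledge:
print(impact_speed(20))        # about 19.8 m/s
# Thrown downward at 5 m/s from the same height:
print(impact_speed(20, 5.0))   # about 20.4 m/s
```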
Analyze literature like a professor by taking careful notes, being aware of the multiple interpretations that can exist for a work, paying attention to different literary elements and determining what purpose these elements serve. Reading like a professor takes practice but can unlock new dimensions in a literary work. The first step to reading like a professor is to take ample notes when reading through a work of literature. It's crucial to mark up the text, highlighting important passages, sections that seem to relate to one another, recurring thematic elements and anything else that is interesting or confusing. Taking good notes allows for better readings and re-readings. Professors are always aware of the multiple meanings that can exist in any work of literature. Far beyond a single interpretation, professors recognize the wealth of viewpoints that can offer different perspectives on a work. Even if a professor does not agree with a possible interpretation, she must try to be aware of other potential meanings when reading through a text. One of the main jobs of literary analysis is to interpret the different aspects of a work and determine how they contribute to the author's message. Professors work to interpret features such as structure, character, plot, symbolism and point of view, determining how these aspects function in a text and what greater meanings they elicit.
What is an Ignition Coil? An ignition coil (also called a spark coil) is an induction coil in an automobile's ignition system that transforms the battery's low voltage into the thousands of volts needed to create an electric spark in the spark plugs to ignite the fuel. Some coils have an internal resistor, while others rely on a resistor wire or an external resistor to limit the current flowing into the coil from the car's 12-volt supply. The wire that goes from the ignition coil to the distributor and the high voltage wires that go from the distributor to each of the spark plugs are called spark plug wires or high tension leads. Originally, every ignition coil system required mechanical contact breaker points and a capacitor (condenser). More recent electronic ignition systems use a power transistor to provide pulses to the ignition coil. A modern passenger automobile may use one ignition coil for each engine cylinder (or pair of cylinders), eliminating fault-prone spark plug cables and the distributor used to route the high voltage pulses. Ignition systems are not required for diesel engines, which rely on compression to ignite the fuel/air mixture.
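As a rough illustration of the step-up described above, the coil can be treated as a transformer whose turns ratio scales the primary-side voltage. This is a simplification: a real ignition coil is a flyback device, and the spark voltage comes from the sudden collapse of the primary field when the points or transistor open, which briefly drives the primary winding to a few hundred volts. The 300 V primary spike and 100:1 turns ratio below are assumed example values, not specifications from the article.

```python
# Simplified ideal-transformer view of an ignition coil's voltage step-up.
# All numbers here are illustrative assumptions.

def secondary_voltage(primary_voltage, turns_ratio):
    """Ideal-transformer approximation: Vs = Vp * (Ns / Np)."""
    return primary_voltage * turns_ratio

PRIMARY_SPIKE_V = 300   # rough order of magnitude when the primary current is interrupted
TURNS_RATIO = 100       # secondary turns : primary turns

print(secondary_voltage(PRIMARY_SPIKE_V, TURNS_RATIO))  # 30000.0, i.e. roughly 30 kV at the plug
```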
In a new scientific review, a team of 70 scientists from 19 countries warned that if no steps are taken to shield insects from the consequences of climate change, it will “drastically reduce our ability to build a sustainable future based on healthy, functional ecosystems.” Citing research from around the world, the team painted a bleak picture of the short- and long-term effects of climate change on insects, many of which have been in a state of decline for decades. Global warming and extreme weather events are already threatening some insects with extinction—and it will only get worse if current trends continue, scientists say. Some insects will be forced to move to cooler climes to survive, while others will face impacts to their fertility, life cycle and interactions with other species. Such drastic disruptions to ecosystems could ultimately come back to bite people, explained Anahí Espíndola, an assistant professor of entomology at the University of Maryland and one of the paper’s co-authors. “We need to realize, as humans, that we are one species out of millions of species, and there's no reason for us to assume that we’re never going to go extinct,” Espíndola said. “These changes to insects can affect our species in pretty drastic ways.” Insects play a central role in ecosystems by recycling nutrients and nourishing other organisms further up the food chain, including humans. In addition, much of the world’s food supply depends on pollinators like bees and butterflies, and healthy ecosystems help keep the number of pests and disease-carrying insects in check. These are just a few of the ecosystem services that could be compromised by climate change, the team of scientists cautioned. Unlike mammals, many insects are ectotherms, which means they are unable to regulate their own body temperature. Because they are so dependent on external conditions, they may respond to climate change more acutely than other animals. One way that insects cope with climate change is by shifting their range, or permanently relocating to places with lower temperatures. According to one study cited by Espíndola and other scientists, the ranges of nearly half of all insect species will diminish by 50% or more if the planet heats up 3.2°C. If warming is limited to 1.5°C—the goal of the global Paris Agreement on climate change—the ranges of 6% of insects will be affected. Espíndola, who studies the ways in which species respond to environmental changes over time, contributed to the sections of the paper that address range shifts. She explained that drastic changes to a species’ range can jeopardize their genetic diversity, potentially hampering their ability to adapt and survive. On the other hand, climate change may make some insects more pervasive—to the detriment of human health and agriculture. Global warming is expected to expand the geographical range of some disease vectors (such as mosquitoes) and crop-eating pests. “Many pests are actually pretty generalist, so that means they are able to feed on many different types of plants,” Espíndola said. “And those are the insects that—based on the data—seem to be the least negatively affected by climate change.” The team noted that the effects of climate change are often compounded by other human-caused impacts, such as habitat loss, pollution and the introduction of invasive species. Combined, these stressors make it more difficult for insects to adapt to changes in their environment. 
Though these effects are already being felt by insects, it is not too late to take action. The paper outlined steps that policymakers and the public can take to protect insects and their habitats. Scientists recommended "transformative action" in six areas: phasing out fossil fuels, curbing air pollutants, restoring and permanently protecting ecosystems, promoting mostly plant-based diets, moving towards a circular economy and stabilizing the global human population. The paper's lead author, Jeffrey Harvey of the Netherlands Institute of Ecology (NIOO-KNAW) and Vrije Universiteit Amsterdam, said in a statement that urgent action is needed to protect insects and the ecosystems they support. "Insects are tough little critters, and we should be relieved that there is still room to correct our mistakes," Harvey said. "We really need to enact policies to stabilize the global climate. In the meantime, at both government and individual levels, we can all pitch in and make urban and rural landscapes more insect-friendly." The paper suggested ways that individuals can help, including managing public, private or urban gardens and other green spaces in a more ecologically-friendly way—for instance, incorporating native plants into the mix and avoiding pesticides and significant changes in land usage when possible. Espíndola also stressed the value of encouraging neighbors, friends and family to take similar steps, explaining that it's an easy yet effective way to amplify one's impact. "It is true that these small actions are very powerful," Espíndola said. "They are even more powerful when they are not isolated." Their paper, titled "Scientists' warning on climate change and insects," was published in Ecological Monographs on Nov. 7, 2022.
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills. This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms. Click here for a complete list of Reading Like a Historian lessons, and click here for a complete list of materials available in Spanish.
Orbital Period and Rotation Period The time it takes a planet to complete one orbit around the Sun is called a year. The time it takes a planet to complete one rotation is called a day. Venus is the second planet from the Sun and the third smallest planet in the solar system. Some people call Venus "Earth's Sister" because it is almost the same size as Earth, but its surface is different from Earth's. Venus is the hottest planet. Venus is the slowest rotating planet and it rotates in the opposite direction of any other planet. Venus is the second fastest orbiting planet in the solar system after Mercury. Venus has no moons. Earth is the third planet from the Sun and is the fourth smallest planet in the solar system. Earth is the only planet in the solar system we know that supports life and the only planet we live on. Earth is sometimes called "The Goldilocks Planet" because it is not too hot like Venus and it is not too cold like Uranus. Earth is the third fastest orbiting planet and the fifth fastest rotating planet in the solar system. Earth has 1 known moon. Mars is the fourth planet from the Sun and the second smallest planet in the solar system after Mercury. Mars has a rocky, dusty surface of iron oxide, which gives it a reddish appearance and is why some people call Mars "The Red Planet". Mars is the fourth fastest orbiting planet, and its rotation period is nearly the same length as Earth's. People are interested in Mars because they think that people could live there with the help of some special equipment. Mars has a thin atmosphere made of carbon dioxide and a tiny amount of oxygen. Mars has 2 known moons: Phobos and Deimos. Jupiter is the fifth planet from the Sun and the largest planet in the solar system. Jupiter is called a "gas giant" because it is so big. People can't land on Jupiter because there is no ground to land on. Even if there were ground to land on, Jupiter is covered by terrifying storms, stronger than the storms on Earth. Jupiter is the stormiest planet in the solar system. The biggest Jovian storm is called the Great Red Spot. If people tried to land on Jupiter, the gravity is so intense that it could crush them flat. Jupiter is the most massive planet. Its gravity is the most intense in the solar system. Jupiter is the fourth slowest orbiting planet and the fastest rotating planet in the solar system. Jupiter has 67 known moons. Saturn is the sixth planet from the Sun and the second largest planet in the solar system. Saturn is super famous for its beautiful rings made of ice and rocks. Saturn has crystals of ammonia which give it a pale yellow color. Saturn is the second fastest rotating planet and the third slowest orbiting planet in the solar system. Saturn has 62 known moons. Uranus is the seventh planet from the Sun and the third largest planet in the solar system. Uranus has rings around it, but they are much thinner than Saturn's rings. Uranus is the only planet to be tilted so much that it rotates sideways, or up and down. Uranus is sometimes called "The Sideways Planet". Uranus is the coldest planet in the solar system, with an atmosphere containing helium and methane, and the methane gives it a lovely blue color. Uranus is also called the most boring planet because it has no identifiable features. Neptune is the eighth planet from the Sun, the fourth largest planet, and the farthest planet in the solar system. Neptune is a darker blue than Uranus and both of them are similar to each other and scientists are not sure why.
There is a huge Neptunian storm called the Great Dark Spot, similar to Jupiter's Great Red Spot, but it disappeared completely in the late 20th century. Neptune is the windiest planet, with faster winds than any other planet. Since Neptune is so far away from the Sun, Neptune is the slowest orbiting planet. Poor Neptune takes a very long time to finish orbiting around the Sun: 165 Earth years. The last time Neptune was where it is now, 165 years ago, was before the American Civil War and before computers, phones, tablets, televisions and cars had been invented. Neptune has the longest orbit of any planet in the solar system. Neptune is the third fastest rotating planet. Neptune has 14 known moons.
The Use of Thermus Aquaticus in the Polymerase Chain Reaction Scientists use an understanding of the various biological processes common to all life forms in order to further our ability to research and investigate life. In the Polymerase Chain Reaction (PCR) scientists use DNA polymerase taken from the bacterium Thermus aquaticus to amplify segments of DNA sequences for DNA fingerprinting and other applications. The polymerase chain reaction is a technique that copies a specific segment of DNA quickly (Sadava, Hillis, Heller, Berenbaum 2009). In the scientific field it is necessary to make multiple copies of a DNA sequence in order to study DNA or perform genetic manipulations (Sadava, Hillis, Heller, Berenbaum 2009). DNA amplification is a process in which the polymerase chain reaction automatically replicates DNA multiple times in a test tube. The PCR amplification process involves three steps. In the first step the reaction is heated in order to denature the two strands of DNA, in the second step the reaction cools in order for the primers to anneal to the strands of DNA, and in the third step the reaction is warmed in order for the DNA polymerase to catalyze the production of the complementary new strands (Sadava, Hillis, Heller, Berenbaum 2009). The three steps cycle and repeat until enough DNA is produced. The PCR mixture must contain a double-stranded DNA to act as a template, two primers, the four dNTPs, a DNA polymerase that can withstand high temperatures, and salts and buffers to maintain a neutral pH (Sadava, Hillis, Heller, Berenbaum 2009). In the first step of the PCR reaction, DNA must be heated to more than 90 degrees Celsius. This is one of the main problems encountered in the PCR reaction because most DNA polymerases also denature at these temperatures, which means that during each cycle DNA polymerase must be added (Sadava, Hillis, Heller, Berenbaum 2009). Thomas Brock investigated this issue and realized that a bacterium called Thermus aquaticus lives at high temperatures, up to 95 degrees Celsius, and the DNA polymerase of Thermus aquaticus is heat resistant; it does not denature at high temperatures (Sadava, Hillis, Heller, Berenbaum 2009). Scientists built on Thomas Brock's discovery and used Thermus aquaticus DNA polymerase in PCR. This meant the polymerase did not have to be added during each cycle and could withstand the high temperatures (Sadava, Hillis, Heller, Berenbaum 2009). PCR can be used for chemical analysis, identification of a person or organism, and to detect diseases (Sadava, Hillis, Heller, Berenbaum 2009). This essay will review three peer-reviewed articles on the PCR reaction, Thermus aquaticus, and research done regarding these two topics, and will conclude with modern applications that use this technique. "Deoxyribonucleic Acid Polymerase from the Extreme Thermophile Thermus aquaticus" is an article by Alice Chien, David B. Edgar, and John M. Trela. Chien, Edgar, and Trela's article discusses the attributes of thermophiles and attempts to purify the DNA polymerase from the bacterium Thermus aquaticus. The purpose of the article is to discuss the characterization and purification of a thermophilic polymerase compared to DNA polymerases from mesophilic microorganisms (Chien, Edgar, and Trela 1976). A mesophilic microorganism is an organism that is too small to be seen by the naked eye and lives at moderate temperatures (Sadava, Hillis, Heller, Berenbaum 2009).
A thermophile is an organism that lives at high, extreme temperatures (Sadava, Hillis, Heller, Berenbaum 2009). The article explains that thermophiles are ubiquitous in nature and that many prokaryotic species live at temperatures above 45 degrees Celsius (Chien, Edgar, and Trela 1976). The materials used included a strain of Thermus aquaticus; the cells were grown in a defined mineral salt medium containing glutamic acid, which served as the culture medium (Chien, Edgar, and Trela 1976). The growth conditions consisted of Erlenmeyer flasks in a water bath shaker, initially maintained at a temperature of 75 degrees Celsius, and then transferred to carboys that were placed in hot-air incubators. The cultures were allowed to grow for twenty hours before being collected. Then the enzyme extract was prepared, followed by enzyme assays and polyacrylamide gel electrophoresis. The results of the article concluded that the attempts to remove the BSA (bovine serum albumin) from the enzyme sample resulted in a substantial loss of the catalytic activity of the DNA polymerase (Chien, Edgar, and Trela 1976). This result could have stemmed from the low protein concentration of the DNA polymerase after it was separated from the BSA. The conclusion of the research states that it is unknown whether the DNA polymerase, when separated from Thermus aquaticus, represents the native form of the enzyme in vivo or is a result of proteolytic cleavage during the isolation (Chien, Edgar, and Trela 1976). The article did conclude that the enzyme functioned at a temperature of 80 degrees Celsius. This temperature is higher than that of the DNA polymerase from Bacillus stearothermophilus (Chien, Edgar, and Trela 1976). The article stated that because of the temperature range there is a possibility of using the enzyme in gene synthesis. The article was written April 12, 1976. Due to further research on the subject we now know that we can use the enzyme in gene synthesis, such as in the PCR reaction, and that this is partially because of the temperature range. I kept this article as a primary research article because even though the information has been further tested and more detailed conclusions have been discovered, this article shows how scientific research is used as a stepping stone for further research. The conclusion that the enzyme functions at higher temperatures is one of the reasons that Thermus aquaticus DNA polymerase is used in the polymerase chain reaction. "DNA sequencing with Thermus aquaticus DNA polymerase and direct sequencing of polymerase chain reaction-amplified DNA" is an article by Michael A. Innis, Kenneth B. Myambo, David H. Gelfand, and Mary Ann D. Brow. Innis, Myambo, Gelfand, and Brow were experimenting with modifying the conditions of the PCR reaction for the direct DNA sequencing of PCR products by using Thermus aquaticus DNA polymerase. They expected that the preparation of the DNA template and the direct sequencing would facilitate automation for larger sequencing projects (Innis, Myambo, Gelfand, and Brow 1988). Innis, Myambo, Gelfand, and Brow realized that Thermus aquaticus DNA polymerase (Taq polymerase) simplified the PCR procedure because it would not denature at higher temperatures, which meant that it was not necessary to resupply the enzyme after each cycle. The article states that Taq polymerase increases the yield and length of the products that can be amplified, which increases the sensitivity of PCR.
The increase in sensitivity of PCR allows for the detection of rare target sequences (Innis, Myambo, Gelfand, and Brow 1988). The materials used in their experiments included a variety of enzymes, such as Taq DNA polymerase and polynucleotide kinase from T4-infected Escherichia coli, as well as nucleotides, oligonucleotides, DNA, 3'-dideoxynucleotide 5'-triphosphates, and dNTPs (Innis, Myambo, Gelfand, and Brow 1988). Many methods were used within the research, such as the annealing reaction, labeling reaction, extension-termination reaction, asymmetric PCRs, and sequencing of PCR products. The article concluded that Taq DNA polymerase is very rapid and progressed through the replication process automatically (Innis, Myambo, Gelfand, and Brow 1988). The article also concluded that Taq DNA polymerase is sensitive to free magnesium ion concentration. Innis, Myambo, Gelfand, and Brow found that Taq DNA polymerase incorporated the ddNTPs with varied efficiency. They presented efficient protocols for sequencing using Taq DNA polymerase, and suggest that Taq DNA polymerase does hold advantages in different applications for sequencing. The results of the article showed that Taq polymerase operated at higher temperatures and lower salt concentrations, which supported the conclusion that Taq DNA polymerase is highly efficient and superior to previously used products (Innis, Myambo, Gelfand, and Brow 1988). "Effective amplification of long targets from cloned inserts and human genomic DNA" is an article by Suzanne Cheng, Carita Fockler, Wayne M. Barnes, and Russell Higuchi. Cheng, Fockler, Barnes, and Higuchi used the PCR reaction to amplify a gene cluster from human genomic DNA and phage λ DNA. The purpose of their research was to amplify DNA sequences to make the speed and simplicity of the polymerase chain reaction function to facilitate studies in molecular genetics (Cheng, Fockler, Barnes, and Higuchi 1994). The materials included DNA and all four dNTPs; the DNA was from a human placenta, and human genomic clones were purchased and grown as directed (Cheng, Fockler, Barnes, and Higuchi 1994). The thermostable DNA polymerases used were AmpliTaq DNA polymerase, rTth DNA polymerase (which is from Thermus thermophilus), Pyrococcus furiosus polymerase, and rTaq, along with polymerase buffers, dimethyl sulfoxide, and glycerol (Cheng, Fockler, Barnes, and Higuchi 1994). The methods used included analysis of PCR products and two thermal cycling profiles. During the analysis of PCR products the samples were regularly analyzed on standard gels or by using inversion gel electrophoresis. The research within the article concludes that PCR was able to amplify longer strands of genomic DNA than was previously thought possible (Cheng, Fockler, Barnes, and Higuchi 1994). The applications of the article's research include genome maps, the ability to make longer DNA templates without having to do labor-intensive cloning, rapid sequencing, and possibly closing gaps that are unclonable. It could lead to automated genome sequencing (Cheng, Fockler, Barnes, and Higuchi 1994). The three articles I chose all contain experiments and research done with Taq DNA polymerase in the PCR reaction. In the first article, also the earliest article, the authors experiment with separating the DNA polymerase from Thermus aquaticus. The results were inconclusive but did show that Thermus aquaticus thrives at higher temperatures, which could be used in the first step of the PCR reaction, when the DNA strand is denatured by high temperatures.
In the second article, the authors experimented with modifying the conditions of the PCR reaction so that direct sequencing of PCR products could occur. They did this by using Thermus aquaticus DNA polymerase. Innis, Myambo, Gelfand, and Brow thought that because of the preparation of the DNA template and direct sequencing it would be possible to sequence larger projects. The hypothesis was supported by their research. In the third article, the latest article, the purpose of the research was to amplify DNA sequences in order to make the speed and simplicity of the PCR reaction function more efficiently in studies in molecular genetics. The research supported this aim and also shows the potential processes that can be used due to their findings, such as automated genome sequencing. All three of the articles build on each other, allowing for further progress within the field of study. The PCR reaction has many uses in modern science; it is used not only for scientific studies and experiments but also for crime solving as well as medical diagnosis.
Suzanne Cheng, Carita Fockler, Wayne M. Barnes, and Russell Higuchi. June 1994. Effective amplification of long targets from cloned inserts and human genomic DNA. Proc. Natl. Acad. Sci. Vol. 91, pp. 5695-5699. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC44063/?tool=pubmed Accessed Friday, February 26, 2010.
Alice Chien, David B. Edgar, and John M. Trela. September 1976. Deoxyribonucleic Acid Polymerase from the Extreme Thermophile Thermus aquaticus. Journal of Bacteriology. Vol. 127, No. 3, pp. 1550-1557. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC232952/?tool=pubmed Accessed Friday, February 26, 2010.
Michael A. Innis, Kenneth B. Myambo, David H. Gelfand, and Mary Ann D. Brow. December 1988. DNA sequencing with Thermus aquaticus DNA polymerase and direct sequencing of polymerase chain reaction-amplified DNA. Proc. Natl. Acad. Sci. Vol. 85, pp. 9436-9440. http://ukpmc.ac.uk/articlerender.cgi?tool=pubmed&pubmedid=3200828 Accessed Friday, February 26, 2010.
David Sadava, David M. Hillis, H. Craig Heller, May R. Berenbaum. 2009. Life: The Science of Biology, Ninth Edition. Printed in USA. The Courier Companies Inc. pp. 286.
Round-ups and Deportations to Killing Centres
Related Images: See the photographs related to this lesson.
Review the information on Jewish Councils (Judenräte) from the United States Holocaust Memorial Museum's website with students. Ask students what questions they have about the role of the Judenrat in the Lodz Ghetto. Introduce Lawrence L. Langer's concept of "choiceless choice." Ask students to work in pairs to discuss and unpack how the concept of choiceless choice relates to the Lodz Ghetto's Jewish Council (Judenrat) and the role its chairman, Mordechai Chaim Rumkowski, played in the running of the Lodz Ghetto. Ask partner groups to report back to the class for a larger discussion.
Save the Last Word for Me
Make enough copies of each photograph so that each student can have one. For example, if you have a class of 30 students, make three copies of each image so you have a total of 30. Display the images on a table. At the start of the class, ask students to come to the table and select one image that interests them. On a piece of paper or an index card, each student should write responses to the following questions: What stands out for you in this image? What do you think is happening? Why did you choose this image? Once they have completed this exercise, students with different images should get into groups of three and complete a Save the Last Word for Me strategy. After students complete the Save the Last Word for Me exercise, transition to a whole-group discussion or journal-entry session. Some questions that you might want to use for this discussion or as a journal prompt are:
- What new information have you learned about the deportation of Jews from the Lodz Ghetto to the killing centres?
- How have these images altered or affirmed your knowledge about and understanding of the deportation of Jewish people from the Lodz Ghetto to killing centres?
- What evidence do you see of the humanity and struggle for survival of the residents of Lodz?
- How did the process of genocide attempt to strip the Jewish people of Lodz of their humanity? What evidence do you see?
- What evidence do you see of the leadership of Mordechai Chaim Rumkowski in the photographs of the roundups and deportation to killing centres?
- What are the moral and ethical implications of the "choiceless choices" the Jewish Council and Rumkowski faced in carrying out the edicts of the Nazis?
Information on the role of Jewish Councils in ghettos. A short piece on the concept of "choiceless choices" by Lawrence L. Langer. Transcript for "Give Me Your Children": Voices from the Lodz Ghetto.
Primary Documents Collection for the Sugar Act of 1764
On April 5, 1764, Britain passed the Sugar Act, which extended the Molasses Act of 1733. The purpose of the Sugar Act, which was designed by Prime Minister George Grenville, was to raise revenue through the collection of customs duties on shipments of specific goods and to encourage American merchants to purchase sugar and molasses from plantations in the British West Indies. For the first time, Britain levied a tax on the American colonies for the purpose of raising revenue. However, the colonies, like many other towns in England and colonies throughout the Empire, did not have elected delegates that represented them in Parliament. Americans were used to paying taxes that were levied on them by their colonial legislatures and felt that taxes levied by Parliament were unconstitutional and illegal. They responded to the Sugar Act with the slogan, "No Taxation Without Representation." In New England, where there was a significant rum production industry, the rallying cry was strongest and led to the publication of documents, letters, and pamphlets that helped shape the ideology of the American Revolution.
Instructions to Boston's Representatives
In May 1764, the Boston Town Meeting sent instructions to its representatives in the Massachusetts legislature. The letter, which was written by Samuel Adams, is significant because it marked the first time that any political body in the American Colonies publicly declared that Parliament did not have the constitutional authority to levy taxes on the colonies without their consent. In the letter, Adams wrote, "these unexpected proceedings may be preparatory to new taxations upon us: For if our trade may be taxed, why not our lands? Why not the produce of our lands, and every thing we possess or make use of? This we apprehend annihilates our charter right to govern and tax ourselves — It strikes at our British privileges, which as we have never forfeited them, we hold in common with our fellow subjects who are natives of Britain: If taxes are laid upon us in any shape without our having a legal representation where they are made, are we not reduc'd from the character of free Subjects to the miserable state of tributary slaves."
Rights of the British Colonies Asserted and Proved
On July 30, 1764, Boston lawyer James Otis published this pamphlet. Otis had gained a reputation for not being afraid to criticize Parliament for passing laws to govern the colonies. He gained fame for his stance in 1761 when he argued against the legality of Writs of Assistance. In this pamphlet, he argued the property of British citizens in America could only be taxed by Parliament if the colonists were represented by elected officials. Otis also argued that men could not make laws to supersede natural laws. "Every British subject born on the continent of America, or in any other of the British dominions, is by the law of God and nature, by the common law, and by act of Parliament (exclusive of all charters from the crown), entitled to all the natural, essential, inherent, and inseparable rights of our fellow subjects in Great Britain.
Among those rights are the following, which it is humbly conceived no man or body of men, not excepting the Parliament—justly, equitably, and consistently with their own rights and the constitution—can take away.” Petition from the Massachusetts House of Representatives to the House of Commons On November 3, 1764, the Massachusetts House of Representatives sent a letter to the House of Commons. In the letter, the members of the House of Representatives explained the difficulties the Sugar Act had placed on businesses in Massachusetts and made several arguments about their opposition to the provisions of the Sugar Act. The House specifically pointed out that the people of Massachusetts felt they were not being afforded the same rights as other British subjects. The letter stated, “…every Act of Parliament, which in this respect distinguishes his Majesty’s subjects in the colonies from their fellow subjects in Great Britain, must create a very sensible concern and grief.” Reasons Against Renewal of the Sugar Act A group of merchants in Boston worked together to publish a pamphlet that showed how the Sugar Act would not only harm commerce in the American Colonies but also commerce in Britain. Rights of the Colonies Examined Stephen Hopkins was the Governor of Rhode Island. After he returned from a trip to London in 1764, Hopkins wrote this pamphlet. It was published in 1765. Hopkins echoed many of the same sentiments as James Otis. Hopkins said, “British subjects are to be governed only agreeable to laws by which themselves have in some way consented.”
Objectives: To increase awareness of the diversity of life. To show different lifestyles of marine organisms.

Method: Students will use a worksheet to match marine organisms with their proper lifestyle.

Background: Water in the ocean creates different living conditions because of changing currents, light intensities and temperature gradients. These conditions increase diversity of organisms in the ocean. Many organisms begin their life as plankton (drifters). They move around on the currents. Some plankton are plants (phytoplankton) and some are animals (zooplankton). Some zooplankton live their whole life as plankton, while others change to become nekton and benthos. Nekton are free-swimmers. They can swim against the current and direct their movements. Turtles, fish, squid, birds and people are all nektonic. The bottom-dwellers are referred to as benthos. Some benthic organisms move around, like flounder, sea stars, crabs and shrimp. Others are sessile - firmly attached to a hard surface - like oysters, barnacles, sponges and corals.

- Have students name creatures of the ocean. Ask if all these creatures live in the same place.
- Discuss the different lifestyles: plankton, nekton and benthos.
- Hand out copies of the enclosed worksheet "What's Your Lifestyle?"
- Have students cut out and paste the organisms where they belong in the water column according to their lifestyle.

Name the three different lifestyles. Explain how they are different. Name one example of each.
New Direction for Varroa Control Research

The varroa mite is the #1 threat to honey bees and the beekeeping industry around the world. These parasites infest just about every honey bee colony in the world, except for those in Australia. Originally a pest of the Asian honey bee, Apis cerana, these mites made the jump to a novel host, Apis mellifera, after people began moving the gentle European honey bee around the world. The trade and movement of bee stocks has exposed honey bee populations to many new and exotic diseases and pests, many of which have made their way to the U.S. in recent years.

What are varroa mites and where did they come from?

The varroa mite was introduced into North America in the mid-1980s and within a few years had spread coast to coast. Varroa is largely blamed for the demise of nearly all feral honey bee colonies in the U.S. within a few years. About the same time, U.S. beekeepers were faced with record low honey prices, due to competition from cheaper honey imported from Asia. Many beekeepers went out of business, while others began to move bees around for crop pollination in order to make ends meet. With fewer wild bees and fewer beekeepers, pollination services were increasingly needed, especially as the scale of agricultural production continued to grow. However, this increased movement of bees to meet these needs, combined with the widespread shipment of bees for hobbyist beekeepers, has resulted in the near universal distribution of honey bee pests and diseases, further compounding the bees' problems.

How do varroa mites affect a honey bee hive?

While these parasites are tiny to us (about 1/16 inch across), they are actually one of the largest parasites known in proportion to the size of their host's body. Varroa mites are able to chemically mimic the odor of honey bees, allowing them to invade a colony and escape detection. Once in the bee hive, mites damage the bees in multiple ways over their unusual life cycle. They can feed directly on the hemolymph (blood) of adult bees, which weakens the bees and shortens their already brief lives; this feeding also spreads viruses and other disease pathogens. Even more dangerously, the mites reproduce only in the sealed brood cells of honey bees. Safe beneath the wax capping, a female mite lays eggs, and then she and her offspring feast on the developing bee pupae. This feeding causes significant damage, reduces the bee's life expectancy, and transmits viruses while the mites reproduce exponentially. Some viruses have even adapted to reproduce more quickly inside the mites. Mites that pick up virus particles from one bee will vector that virus to other bees when feeding again later.

Now, nearly 30 years after their introduction, varroa mites are still thriving. They appear able to quickly evolve resistance to the many chemical treatments and medications that beekeepers have tried in order to get rid of these pests -- a problem that has been made worse by over-use of the few effective products that have been available. There are many varroa treatment products available, but unfortunately none are the silver bullet that bees and beekeepers urgently need. Chemical pesticides leave residues in the wax honey combs and cannot be applied while the bees are storing honey for human consumption. Many also have sub-lethal effects on bee health.
Organic acids and various essential plant oils can be useful for killing mites, but care must be taken when applying any of these, as they may be very sensitive to temperature or may work only when the hive is broodless (a very short period of time in the south). Some products can also affect honey quality, or even be very dangerous to bees and beekeepers if used improperly.

New research on varroa mites

New research led by Zachary Huang, at Michigan State University, may lead to a whole new direction of mite control. Using a process called RNA interference (or RNAi), scientists were able to effectively "silence" specific genes in the varroa mites in order to determine the specific role of those genes in the mite's biology. Using this technique, the team was able to identify two genes that caused high mortality in the mites when knocked out, and another four genes that appeared to control mite reproduction. Other research has shown that a mixture of specific double-stranded RNA molecules (dsRNA) can be fed to bees, and varroa mites will take up these molecules when feeding on the bees' blood.

Using RNAi in bees and in other medical applications is not new, but it is a science that has not yet been perfected. Other researchers have worked to stop honey bee viruses from reproducing using similar techniques. While the process can work well in a laboratory, getting it to perform well in the real world is much more difficult. Beekeepers need a product that can be fed directly to bees, that has a long shelf life, and that does not have any adverse effects on the honey bees themselves. Working at a molecular level poses many challenges. Not least among these is the ability to identify one or more specific genes that are critical to mite survival and/or reproduction but are not found in honey bees or any other organism that may be affected, and then to knock them out without interfering with the host's health or biology. That is a tall order of very specific needs, and a practical product is therefore likely still some way off. But as the frontiers of this research continue to expand, and more precise tools become available for scientists to use, this technique may open up other areas of research. The same technique could be applied to mosquito control, household pest control, or field crop pests. Someday, therapeutic RNAi medications may even be able to help humans suppress conditions caused by genetic disorders.
What is the relationship between loudness & intensity?
The intensity of a sound is the power of the sound divided by the area it passes through, in square meters. The loudness of a sound is the ratio of the intensity of that sound to the threshold of hearing. It is measured in decibels, or dB.

How is intensity related to wavelength?
Intensity depends on the amplitude of a wave, not on its wavelength: two waves with the same wavelength can have different amplitudes and therefore carry different intensities. The wavelength of light matters for a different reason - it determines the nature and character of the light.

What is wave intensity?
Wave intensity is the average power of a wave as it travels through an area. The decibel scale is used to measure the intensity of sound waves.

How are intensity and amplitude related?
Misconception alert: students might be confused about the meanings of intensity and amplitude. Although sound intensity depends on amplitude, they are different physical quantities. Sound intensity is defined as sound power per unit area. Amplitude refers to the distance between the peak of a wave and its resting position.

What is intensity equal to?
Intensity is the power per unit area carried by a wave. In equation form, I = P/A, where P is the power passing through an area A. The SI unit for I is W/m². The intensity of a sound wave is related to the square of its pressure amplitude by I = (Δp)² / (2ρv_w), where Δp is the pressure amplitude, ρ is the density of the medium and v_w is the speed of the wave.

Is intensity proportional to energy?
The energy of a wave is proportional to its amplitude squared (for a spring, for example, the restoring force is F = kx, so the work done in stretching it scales with x²). This definition of intensity applies to all energy in transit, including waves. Intensity is measured in watts per square meter (W/m²).

What does intensity mean in color?
Intensity (also known as saturation or chroma) refers to the purity of a particular color. Mixing a color with another color will cause it to lose its intensity. The intensity of the colors that you mix is affected by how far apart they are on the color wheel.

What are high intensity colors?
High intensity colors are pure hues that appear very bright. The degree of purity or brightness of a color, or its dullness, is called intensity.

Which has more intensity, red or blue?
Blue light has a shorter wavelength than red light and travels at exactly the same speed as red light (in vacuum). It also has a higher frequency, which gives each blue photon a higher energy.
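Returning to the sound formulas above, here is a minimal numerical sketch (in Python, not part of the original Q&A) that plugs values into I = P/A, I = (Δp)²/(2ρv_w) and the decibel definition. The reference intensity of 10⁻¹² W/m² (the threshold of hearing), the air density of 1.29 kg/m³ and the sound speed of 331 m/s are standard textbook values assumed here; the example power and pressure amplitude are made up for illustration.

```python
import math

I0 = 1e-12  # threshold of hearing, W/m^2 (standard reference intensity)

def intensity(power_w, area_m2):
    """I = P / A, in W/m^2."""
    return power_w / area_m2

def intensity_from_pressure(dp_pa, rho=1.29, v_w=331.0):
    """I = (dp)^2 / (2 * rho * v_w): pressure amplitude in Pa, air density in kg/m^3, sound speed in m/s."""
    return dp_pa**2 / (2 * rho * v_w)

def loudness_db(i):
    """Loudness in decibels, relative to the threshold of hearing."""
    return 10 * math.log10(i / I0)

print(loudness_db(intensity(0.001, 1.0)))          # 1 mW spread over 1 m^2 -> 90 dB
print(loudness_db(intensity_from_pressure(0.05)))  # a 0.05 Pa pressure amplitude -> roughly 65 dB
```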
Be Aware That a Flu Vaccine Won’t Protect You from a Cold Getting a flu vaccination before flu season protects only against some pathogens of influenza predicted to be common for that particular flu season.7 So, while the vaccine may offer you protection from some influenza viruses that are expected to be common in the upcoming season, it cannot protect you from all. Washing your hands often with soap and water is one easy way to help prevent catching cold and flu.8 Take cover from Coughs and Sneezes Respiratory viruses spread in three ways:9 - Through small droplets that are aerosolized by coughs or sneezes. These droplets do not settle and can carry germs over relatively long distances through the air that others can inhale. - Through large droplets that are similarly transmitted through the air over relatively short distances and settle rapidly on objects and body parts. - Through direct contact with contaminated hands or surfaces. Sleep Off a Cold Not getting at least eight hours of sound sleep decreases your immune system’s ability to fight off a cold. Try to get a consistent seven to eight hours of good, quality sleep every night.10 Chill in the air? Don’t let it worry you. Cold weather doesn’t give you a cold or flu, viruses do.10 However, spending more time indoors with other people during the cold season increases the likelihood you will be exposed to cold and flu viruses,11 especially because cold and flu viruses tend to thrive in the dry conditions that are typical in this season.4 You may breathe more germ-infested air, which can contribute to why you get sick more often in the winter. Use Antibiotics Appropriately Unless you are diagnosed with a bacterial infection, avoid asking a doctor to prescribe antibiotics for cold or flu symptoms. Antibiotics are used to kill bacteria and therefore are ineffective in treating viral infections resulting from cold and flu viruses.12 In fact, the CDC warns that taking antibiotics unnecessarily can lead to dangerous antibiotic-resistant strains of bacteria.13 Fight Off Germs While Travelling The US Centers for Disease Control and Prevention (CDC) recommends that you only travel when you feel well, especially in the winter season.14 You can keep germs at bay by washing your hands often with soap and water when available.6 It may be wise to carry a bottle of alcohol-based hand sanitizer, for use when regular soap and water aren’t available or convenient.6
In Kumon Grade 6 Math Workbook: Fraction, children gain foundational skills for calculating fractions. Sections on greatest common factor and lowest common multiple prepare children for adding and subtracting fractions with unlike denominators, as well as multiplying and dividing fractions and mixed numbers.
Topics Covered in this Book:
- Reducing fractions to lowest possible denominator
- Factors and multiples
- Greatest common factor (GCF)
- Lowest common multiple (LCM)
- Addition & subtraction of fractions with unlike denominators
- Multiplication & division of fractions and mixed numbers
- Changing decimals into fractions
- Calculations with three fractions
Specifications: 8 1/2 x 11 inches, paperback, 96 pages, full color.
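As a quick illustration of why GCF and LCM underpin the fraction work described above, here is a small sketch (in Python, not from the workbook) that adds two fractions with unlike denominators by rewriting them over the lowest common multiple and then reducing with the greatest common factor; the fractions 1/4 and 1/6 are just example values.

```python
from math import gcd

def lcm(a, b):
    """Lowest common multiple, computed via the greatest common factor."""
    return a * b // gcd(a, b)

# Add 1/4 + 1/6: rewrite both fractions over the LCM of the denominators.
a_num, a_den = 1, 4
b_num, b_den = 1, 6
common = lcm(a_den, b_den)                                     # 12
total = a_num * (common // a_den) + b_num * (common // b_den)  # 3 + 2 = 5
g = gcd(total, common)                                         # reduce with the GCF
print(f"{total // g}/{common // g}")                           # 5/12
```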
Engaging "my neighbor" in the issue of sustainability
By members of the Critical Issues Committee, Geological Society of America

To some people, space travel could solve Earth's population problem, because they believe we will be able to emigrate to other worlds when push comes to shove. Space travel may be the special privilege of a few adventurous astronauts, but as a solution to our earthly problems it needs a reality check. Let's start with a trip to the moon and a look back to our Blue Planet – a spectacular view now common on many advertising pages. But look elsewhere in the Universe. Nothing else that we see looks any bigger than it did from Earth. Planets are still points of light, as are the stars and galaxies. We are a long way from even our nearest planetary neighbor!

And then there's simple math. Earth's current population (about 6 billion) is increasing by about 1% annually. The U.S. Census Bureau projects that by 2050 there will be a somewhat smaller rate of increase of 0.46%, and a probable median population of nearly nine billion people. The ANNUAL population increase in 2050 (9 billion x 0.0046) will be an estimated 41.5 million persons, down from the present annual increase of about 60 million. On a DAILY basis (41.5 million ÷ 365), the population increase in 2050 will be about 115,000 persons, down from the present DAILY increase of about 170,000 persons. Daily permanent emigration of that many people, even given possible technological advances in space vehicles and propulsion by 2050, might be a bit optimistic. For all practical purposes, we must internalize and make plans for Spaceship Earth as the only realistic habitation for humans. Bringing human occupancy of this planet into balance with available ecological areas and terrestrial and oceanic resources must be one of our highest priorities if we wish a sustainable future for our descendants.

DEMONSTRATION 1. Have your students check the Web for information about the energy and material requirements, including support infrastructure, for a single Space Shuttle launch. Have them consult the World Resources Institute's annual compilation of human demands for resources and estimate how emigration of all excess population, if possible, would affect the present crunch on Earth resources.

DEMONSTRATION 2. Have the students calculate how far from Earth, within our solar system, an astronaut would have to travel so that Mars, or one of Jupiter's moons, would look the same size as the Blue Planet does from our moon. What does this tell us about the reality of regular daily exodus of humans (and their life support) on such trips?
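The population arithmetic above is easy to reproduce, and it complements Demonstration 1. Here is a small sketch (in Python, not part of the original piece) that recomputes the annual and daily increases from the quoted population sizes and growth rates, so students can vary the assumptions themselves.

```python
def population_increase(population, annual_growth_rate):
    """Return (annual increase, daily increase) for a given population and growth rate."""
    annual = population * annual_growth_rate
    return annual, annual / 365

# Figures quoted in the text: ~6 billion growing at 1% today, ~9 billion at 0.46% in 2050.
for label, pop, rate in [("today", 6e9, 0.01), ("2050 projection", 9e9, 0.0046)]:
    annual, daily = population_increase(pop, rate)
    print(f"{label}: {annual / 1e6:.1f} million per year, about {daily:,.0f} per day")
# today: 60.0 million per year, about 164,384 per day
# 2050 projection: 41.4 million per year, about 113,425 per day
```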
John was an able administrator interested in law and government but he neither trusted others nor was trusted by them. Heavy taxation, disputes with the Church (John was excommunicated by the Pope in 1209) and unsuccessful attempts to recover his French possessions made him unpopular. Many of his barons rebelled, and in June 1215 they forced King John to sign a peace treaty accepting their reforms. This treaty, later known as Magna Carta, limited royal powers, defined feudal obligations between the King and the barons, and guaranteed a number of rights. The most influential clauses concerned the freedom of the Church; the redress of grievances of owners and tenants of land; the need to consult the Great Council of the Realm so as to prevent unjust taxation; mercantile and trading relationships; regulation of the machinery of justice so that justice be denied to no one; and the requirement to control the behaviour of royal officials. The most important clauses established the basis of habeas corpus ('you have the body'), i.e. that no one shall be imprisoned except by due process of law, and that 'to no one will we sell, to no one will we refuse or delay right or justice'. The Charter also established a council of barons who were to ensure that the Sovereign observed the Charter, with the right to wage war on him if he did not. Magna Carta was the first formal document insisting that the Sovereign was as much under the rule of law as his people, and that the rights of individuals were to be upheld even against the wishes of the sovereign. As a source of fundamental constitutional principles, Magna Carta came to be seen as an important definition of aspects of English law, and in later centuries as the basis of the liberties of the English people. As a peace treaty Magna Carta was a failure and the rebels invited Louis of France to become their king. When John died in 1216 England was in the grip of civil war and his 9 year old son, Henry III, became King.
Twilight is the interval before sunrise or after sunset during which the sky is still somewhat illuminated. Twilight occurs because sunlight illuminates the upper layers of the atmosphere. The light is scattered in all directions by the molecules of the air, reaches the observer and still illuminates the surroundings. The map shows which parts of the world are in daylight and which are in night. If you want to know the exact time of dawn or dusk at a specific place, that information is available in the meteorological data.

Why do we use UTC?
Coordinated Universal Time, or UTC, is the main time standard by which the world regulates clocks and time. It is one of several closely related successors to Greenwich Mean Time (GMT). For most common purposes, UTC is synonymous with GMT, but GMT is no longer the most precisely defined standard for the scientific community.
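A short sketch (in Python, not from the original page) shows the practical point: a single event time stored in UTC can be displayed unambiguously on any local clock. The sunrise time and the zone names are made-up examples, and the zoneinfo module requires Python 3.9 or later.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A hypothetical sunrise time published once, in UTC.
sunrise_utc = datetime(2024, 3, 21, 6, 30, tzinfo=timezone.utc)

# The same instant rendered for different local clocks.
for zone in ("Europe/Madrid", "America/New_York", "Asia/Tokyo"):
    local = sunrise_utc.astimezone(ZoneInfo(zone))
    print(zone, local.strftime("%Y-%m-%d %H:%M %Z"))
```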
When the Japanese bombed Pearl Harbor, the United States quickly learned that the war that raged in Europe could very easily make its way across the sea. Distrust and fear of those who had attacked the United States led many people to fear anyone who looked as if they might be from Japan. President Roosevelt faced tremendous pressure to do something about the fear that was radiating throughout the west coast of the United States, and he therefore set forth a policy that would deprive thousands of Americans of their rights. The internment of Japanese Americans, along with German and Italian Americans, has become a dark spot in the country's history as a time when the government overstepped its bounds and gave in to fear.

President Roosevelt Even Imported Enemy Aliens From Latin America to Intern Them

President Roosevelt was not only concerned with Axis sympathizers in the United States but in Latin America as well. In July of 1940, he authorized the FBI to station agents at U.S. embassies throughout Latin America. Their goal was to compile a list of names of people within those countries who they believed could have ties to the Axis powers. These agents were to keep an eye on those individuals and present the list should it become necessary to detain them. In many Latin American countries, after the outbreak of World War II, people who came from countries that were part of the Axis powers became targets. Because those of Japanese descent stood out, they became easy targets. In May 1940, as many as 600 homes, schools and businesses belonging to citizens of Japanese descent were burned down. With the animosity toward those of Japanese descent in their countries, it was little surprise that many Latin American countries were willing to comply with American requests to monitor and potentially detain those individuals.

After the attack on Pearl Harbor, the fear and hatred for people of Japanese descent only grew. President Roosevelt asked a dozen different Latin American countries to arrest citizens of Japanese descent. The idea was to detain all those individuals and then use them for hostage exchanges in order to get captured Americans back. Several countries complied, and more than 2,000 people were deported from their countries and sent to internment camps in the United States. Many of those who were deported to the U.S. were angry that they were forced to leave the lives they had built for themselves in their home countries. Families were broken apart, and when mothers took their children to try to find their husbands in the U.S., they would end up detained themselves. In the internment camps the focus was on teaching everyone Japanese, German or Italian so that when they were used in a hostage exchange or deported they could speak the language.
Paleontologists of Ludwig-Maximilians-Universitaet (LMU) in Munich are currently studying a new specimen of Archaeopteryx, which reveals previously unknown features of the plumage. The initial findings shed light on the original function of feathers and their recruitment for flight. A century and a half after its discovery and a mere 150 million years or so since it took to the air, Archaeopteryx still has surprises in store: The eleventh specimen of the iconic “basal bird” so far discovered turns out to have the best preserved plumage of all, permitting detailed comparisons to be made with other feathered dinosaurs. The fossil is being subjected to a thorough examination by a team led by Dr. Oliver Rauhut, a paleontologist in the Department of Earth and Environmental Sciences at LMU Munich, who is also affiliated with the Bavarian State Collection for Paleontology and Geology in Munich. The first results of their analysis of the plumage are reported in the latest issue of Nature. The new data make a significant contribution to the ongoing debate over the evolution of feathers and its relationship to avian flight. They also imply that the links between feather development and the origin of flight are probably much more complex than has been assumed up to now. “For the first time, it has become possible to examine the detailed structure of the feathers on the body, the tail and, above all, on the legs,” says Oliver Rauhut. In the case of this new specimen, the feathers are, for the most part, preserved as impressions in the rock matrix. “Comparisons with other feathered predatory dinosaurs indicate that the plumage in the different regions of the body varied widely between these species. That suggests that primordial feathers did not evolve in connection with flight-related roles, but originated in other functional contexts,” says Dr. Christian Foth of LMU and the Bavarian State Collection for Paleontology and Geology in Munich, first author on the new paper. To keep warm and to catch the eye Predatory dinosaurs (theropods) with body plumage are now known to predate Archaeopteryx, and their feathers probably provided thermal insulation. Advanced species of predatory dinosaurs and primitive birds with feathered forelimbs may have used them as balance organs when running, like ostriches do today. Moreover, feathers could have served useful functions in brooding, camouflage and display. Indeed, the feathers on the tail, wings and hind-limbs most probably fulfilled functions in display, although it is very likely that Archaeopteryx was also capable of flight. “Interestingly, the lateral feathers in the tail of Archaeopteryx had an aerodynamic form, and most probably played an important role in its aerial abilities,” says Foth. On the basis of their investigation of the plumage of the new fossil, the researchers have been able to clarify the taxonomical relationship between Archaeopteryx and other species of feathered dinosaur. Here, the diversity in form and distribution of the feather tracts is particularly striking. For instance, among dinosaurs that had feathers on their legs, many had long feathers extending to the toes, while others had shorter down-like plumage. “If feathers had evolved originally for flight, functional constraints should have restricted their range of variation. And in primitive birds we do see less variation in wing feathers than in those on the hind-limbs or the tail,” explains Foth. 
These observations imply that feathers acquired their aerodynamic functions secondarily: Once feathers had been invented, they could be co-opted for flight. “It is even possible that the ability to fly evolved more than once within the theropods,” says Rauhut. “Since the feathers were already present, different groups of predatory dinosaurs and their descendants, the birds, could have exploited these structures in different ways.” The new results also contradict the theory that powered avian flight evolved from earlier four-winged species that were able to glide. Archaeopteryx represents a transitional form between reptiles and birds and is the best-known, and possibly the earliest, bird fossil. It proves that modern birds are directly descended from predatory dinosaurs, and are themselves essentially modern-day dinosaurs. The many new fossil species of feathered dinosaurs discovered in China in recent years have made it possible to place Archaeopteryx within a larger evolutionary context. However, when feathers first appeared and how often flight evolved are matters that are still under debate. The eleventh known specimen of Archaeopteryx is still in private hands. Like all other examples of the genus, it was found in the Altmühl valley in Bavaria, which in Late Jurassic times lay in the northern tropics, and at the bottom of a shallow sea, as all Archaeopteryx fossils found so far have been recovered from limestone deposits. Note : The above story is based on materials provided by Ludwig-Maximilians-Universitaet Muenchen (LMU).
This recently discovered, and underappreciated, form of carbon has been added to the list of carbon structures that includes:
- coal, from anthracite to peat
- charcoal, lump and briquette
Biochar is created by pyrolysis in extreme oxygen deprivation, which yields a shiny, jet-black substance that is brittle and lightweight. It is extremely chemically stable and has been observed in the Amazon River basin to last thousands of years in the soil. It is extremely porous, suggesting a lattice-like structure that enhances its seemingly enzyme-like properties, as one finds little, if any, chemical decomposition of the biochar. The mechanisms of its seemingly miraculous properties are not as yet scientifically understood, but it is observed to hold large amounts of moisture and to adjust soil pH upwards, seemingly "unlocking" many soil nutrients in what have previously been described as infertile soils. It seemingly stabilizes soil, acting to prevent leaching of critical plant nutrients, yet simultaneously holds them available for plant uptake. It is observed to "unlock" potash in large amounts. In tests it acts to reduce chlordane and DDX uptake in plants by 68% and 79%. In other tests it has reduced both nitrous oxide and methane emissions by up to 80%. It reduces leaching of E. coli and salmonella from the soils. It is porous and extremely lightweight, which creates a wonderful environment for beneficial soil organisms.

Biochar may be a key to locking away CO2 gas. When scrap wood products are properly pyrolyzed, copious amounts of biochar are produced and a net energy gain of syngas and bio-oil is realized. Biochar may prove to be the carbon "sink", or storage, that the world so desperately needs to fight CO2 pollution of our atmosphere. The storage of biochar does not depend upon the effort or expense of trying to bury CO2 in caves, wells or mineshafts. Instead it can be used on our fields to increase crop production.

The negative: it probably reduces the effectiveness of pesticides.

Link to Wikipedia - biochar
Of Russian origin: Cossacks

There is hardly a single simple definition for them. They are not a nationality or a religion, they don't represent a political party or movement, and there is still no complete agreement among historians and anthropologists on who the Cossacks are. Wikipedia defines them as "the militaristic communities of various ethnicities living in the steppe regions of Ukraine and also southern Russia." Described in a few words, Cossacks are free men or adventurers. In fact, their name is derived from the Turkish Qasaq, which means exactly that.

There are also different versions of the origin of the Cossacks. According to some historians, in Russia and Ukraine Cossacks were the men who lived freely in the outlying districts. Usually they were serfs who had run away to find their own freedom. The government tried to find and punish them, but the number of those on the run became so great that it was impossible to catch them all, and soon the state had to give up and recognize the newly established communities on its borders. The first such self-governing warrior Cossack communities were formed in the 15th century (or, according to some sources, in the 13th century) in the Dnieper and Don River regions. Cossacks also accepted Tatars, Germans, Turks and other nationalities into their communities, but there was one condition – they had to believe in Christ. Once accepted into the community, they stopped being Germans, Russians or Ukrainians – they became Cossacks. Cossacks had their own elected headman, called an ataman, who had executive powers and was supreme commander during war. The Rada (the Band Assembly) held the legislative powers. The senior officers were called starshina and the Cossack settlements were called stanitsas. The Cossacks were named by their geographical locations. Some of the most famous ones were the Zaporozhian, Don and Kuban Cossacks.

Military might of the Cossacks

Cossack military traditions are strong, and boys were trained as warriors from a very young age. As soon as a baby had cut his teeth, he was brought to the church and a service to St. John the Warrior was held, so that the boy would grow strong and fearless, and dedicated to Orthodoxy. At the age of three the child could already mount a horse and by five he was a confident rider. Fathers would also teach their sons the art of sharpshooting, adroitness and coordination from a very young age. Recognizing the Cossacks' military skills, the Russian government tried to control them and make them serve the Tsar. However, not all Cossacks were loyal to the Tsar and some participated in peasants' revolts. The most famous rebellions were led by the Cossacks Stepan Razin, Kondratiy Bulavin and Emelyan Pugachev. In the 18th century the government turned the Cossacks into a special social estate, which was to serve the Russian Empire. Their main responsibility was to guard the country's borders. In order to keep the Cossacks loyal to the Tsar, the government gave them special privileges and vast social autonomy, which they valued. At the same time the Cossacks, remaining true to their free spirits, mostly respected the Tsar and the Patriarch but hated state bureaucracy, and when they felt the Tsar was unjust they didn't hesitate to start rebellions. However, especially during the Romanov Dynasty, Cossacks were the most vigorous defenders of Russia. This continued up until the October Revolution of 1917.
After the Bolshevik Revolution During the Russian Civil War the Cossacks fought mainly for the White Army, therefore, after the victory of the Red Army they were heavily persecuted, their lands were subjected to famine and they suffered many repressions. During the Second World War the Cossacks were split; some fought for the Soviet Union and some supported Nazi Germany. Many historians say that the reason some Cossacks supported the Nazis was that they saw it as a war against Stalin and against the demonic regime that killed their Tsar and Russia. The revival of the Cossacks and their traditions began in 1989, during the Perestroika period. In 2005, Vladimir Putin, then President of Russia, introduced a bill approved at the State Duma that recognized the Cossacks not only as a distinct ethno-cultural entity, but also as a potent military force. Today there are even special Cossack schools, where, along with the usual subjects like math and literature, students are taught Cossack traditions and history. Vast groups of Cossacks can now be found in the south of Russia and numerous Cossack groups inhabit the northwestern Caucasus, Kuban, Krasnodar and Stavropol regions.
LEARNING OBJECTIVES
- Introduce the basic tools for analyzing processes and quality.
- Describe acceptance sampling and when it can be used to control quality in services.
- Introduce statistical process control and describe how and when it can be used to analyze service processes.
- Define the types of sampling errors that can occur when statistical sampling is used.
- Distinguish between the statistical analysis of attributes and variables.

THE SEVEN BASIC QUALITY TOOLS
- Process Flow Diagrams
- Checksheets
- Histograms/Bar Charts
- Pareto Charts
- Scatterplots
- Run Charts
- Cause-and-Effect Diagrams

Exhibit 12S.1 Process Flow Diagram: Drycleaning Service
Exhibit 12S.10 Example of a Cause-and-Effect Diagram
Exhibit 12S.11 The Eight-Step Quality Improvement Process

TWO BASIC MODELS FOR STATISTICAL QUALITY CONTROL
- Acceptance Sampling: This involves testing samples after the goods or services have been produced.
- Statistical Process Control: This involves testing samples of goods or services as they are being produced.

Exhibit 12S.12 The Relationship Between Population Quality and Sample Findings
- Sample finding good, population quality actually good: good lot accepted - no error.
- Sample finding good, population quality actually bad: bad lot accepted - Type II error (β).
- Sample finding bad, population quality actually good: good lot rejected - Type I error (α).
- Sample finding bad, population quality actually bad: bad lot rejected - no error.
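To make the two sampling errors concrete, here is a minimal sketch (in Python; the sampling plan and quality levels are invented for illustration and are not from the slides). It uses a binomial model of acceptance sampling: a lot is accepted if a sample of n items contains at most c defectives, so the producer's risk α is the chance of rejecting a good lot and the consumer's risk β is the chance of accepting a bad lot.

```python
from math import comb

def prob_accept(n, c, p):
    """Probability that a sample of n items contains c or fewer defectives
    when the true lot fraction defective is p (binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

n, c = 50, 2                     # hypothetical plan: inspect 50 items, accept if <= 2 defective
good_lot, bad_lot = 0.01, 0.10   # hypothetical "good" and "bad" quality levels

alpha = 1 - prob_accept(n, c, good_lot)  # Type I error: good lot rejected (producer's risk)
beta = prob_accept(n, c, bad_lot)        # Type II error: bad lot accepted (consumer's risk)
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")  # roughly alpha = 0.014, beta = 0.112
```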
Ch. 6 – La nourriture et les courses

This chapter deals with more food items (grocery shopping and supermarkets vs. small stores in France), the partitive, the partitive in the negative, and the verb “faire” – to do / to make (which is used a lot in the French language!) as well as the verbs “vouloir” – to want and “pouvoir” – to be able to (can). (Here is a game to practice the conjugation of vouloir and pouvoir.)

We discussed how in France they will most often go to these small specialty shops that just have certain types of food products. We will learn more about this later in the chapter. We’ll also discuss more of the French dining habits and French food specialties.

- Exercise that you might enjoy to help you understand more about going grocery shopping in France (has a listening component as well)
- Read this online preview of a book on France (read pages 78-79) that explains the differences between all of these shops
- France for families gives a nice overview of shopping and a list of helpful vocabulary
- Here are games to help you practice the vocabulary from “Mots 1” and “Mots 2”

We will be putting the verbs “faire”, “vouloir”, and “pouvoir” in our verb books and learning about their usage. “Faire les courses” = to go grocery shopping. To learn more about “faire”, visit: about.com conjugation of “faire”, quizlet “faire” conjugation practice (has flashcards and the like)
- This is a more in-depth explanation of “faire” and its uses

The partitive is talking about “some” or a “part of” something. When you say you like carrots, you use the definite article: “J’aime les carottes”. But to say you want carrots, you don’t use the definite article. You would use the partitive: “Je veux des carottes.” The partitive forms can be “de, des, du, or de la”, or before a vowel “d’” or “de l’”.

Here is a great quia game called “Challenge Board” (like Jeopardy) to review a wide variety of important chapter concepts!
The oxygen stores of the body are small, so life-threatening hypoxaemia can develop very rapidly with few clinical signs. The availability of robust and reliable pulse oximeters has revolutionised the safe monitoring of patients with unstable cardiorespiratory conditions, and those having medical and surgical procedures. While oximetry is now best practice in these circumstances, care must be taken in interpreting the results. There are confounding factors that may produce an erroneous signal and physiological factors that will affect the interpretation of the result. In the absence of these factors, the instruments are accurate detectors of arterial oxygen saturation, in the range between 100% and 70% with varying but reasonable performance down to 55%. The basic principles of operation are important to understand so that physiological interpretation is adequate and erroneous results can be identified. Patients at risk of hypoxaemia may need continuous monitoring of their oxygenation. Blood gas analysis requires arterial puncture and only measures the oxygenation at the time of the sample. By measuring oxygen saturation (instead of partial pressure) pulse oximetry enables non-invasive monitoring. The continuous measurement of the pulse rate is a bonus. The technology supporting clinical oximetry has been available for more than 80 years, but pulse oximeters have only been commercially available for about 20 years. Early oximeters required a cumbersome heated probe to 'arterialise' blood in the ear lobe. They were also difficult to calibrate and notoriously unstable. Nowadays relatively cheap and reliable oximeters have revolutionised the in vivo monitoring of patients' oxygenation during a wide range of critical clinical situations.1 While oximeters may be used to assess the efficiency of pulmonary gas exchange, at least in relation to oxygen uptake, they are more suited to assessing the adequacy of tissue oxygen delivery. Measuring oxygen in the arterial blood is important because serious acute hypoxaemia is notoriously difficult to detect clinically and by the time clinical cyanosis develops, the patient is usually in a parlous state. The oxygen stores of the body are small so the viability of many tissues is critically dependent on continuous delivery of an adequate oxygen supply. Oxygen delivery is proportional to the blood flow and arterial oxygen content (CaO2 [mL O2 per 100 mL blood]). For the whole body: oxygen delivery = cardiac output (Q) x CaO2 x 10 These variables are difficult to measure directly and rapidly, however CaO2 is linearly related, at least over relatively short periods of time, to the saturation of haemoglobin in arterial blood (SaO2). As oximeters provide a rapid and reliable in vivo measure of SaO2 this variable can be substituted for CaO2. This has been a very valuable advance, as long as the principles and sources of error are understood. A pulse oximeter detects the change in transmission of two wavelengths of light across a capillary bed, usually in the finger. The sensor is placed on the nail with the light source against the finger pulp. The detectors can be small because they are only receiving two wavelengths, one to detect oxygenated haemoglobin (O2Hb) and one to detect reduced haemoglobin (HHb). The absorption of light is related to the expansion of the capillary bed with the pulse (Fig.1). 
By comparing the light transmission through the pulsatile 'arterialised' capillary blood with the non-pulsatile venous blood the oximeter can calculate the haemoglobin saturation. Saturation is calculated as: O2Hb / [O2Hb + HHb]. This is the so-called 'functional saturation', and is expressed as a percentage.2 It has been suggested that pulse oximeters should be called 'pulse spectrophotometers'. This would emphasise that they are inferring oxygen saturation from the well-known colour change between oxygenated and reduced haemoglobin and that, despite the elegant use of the pulse form to separate arterial blood from the other light absorbing structures in the finger, there are sources of error inherent in this technique which need to be appreciated.

While the vast majority of devices in clinical use measure transmitted light, newer devices are being designed to measure the light reflected off pulsating tissue surfaces. These devices are being used in perinatal monitoring and in patients whose peripheral perfusion may be compromised, as in open heart surgery. Reflectance devices are currently hampered by poor signal-to-noise ratio and the need to detect very small pulsatile signals, but advances in technology are likely to overcome these difficulties.

Sources of error

The results of pulse oximetry can be affected by technical problems and physiological factors. The machines do not need regular calibration by the operator and the probes and electronics are extraordinarily robust. The machine will not display a result and will warn if it cannot detect an adequate pulse signal. It is fitted with an alarm, which can be set at low (or high) saturation levels as desired. Original calibration by the manufacturer is based on the empirical relation between in vivo pulse oximetry (SpO2) and the SaO2 measured on simultaneously sampled arterial blood in a CO-oximeter. CO-oximeters, so called because they measure carboxy- or CO haemoglobin, are now fitted to all modern blood gas analysers. They use multiple wavelengths of light to detect the four different forms of haemoglobin. The calibration process generally relies on data generated from healthy volunteers made hypoxaemic to generate SaO2 values between 70% and 100%. When the SaO2 is reduced to between 70% and 40% the pulse oximeters become significantly inaccurate, particularly below 55%, and fail to track rapidly developing profound hypoxaemia.3 However, it can be argued that the accurate detection of falls between 85% and 70% is of most use in clinical monitoring.

Physiological factors (Table 1)

The presence of abnormal haemoglobins disturbs the relation between SaO2 and CaO2. 'Functional saturation' ignores the possible presence of methaemoglobin, carboxyhaemoglobin and other abnormalities of haemoglobin. These abnormal forms will not carry oxygen normally and will add to the denominator of the O2Hb / [O2Hb + HHb] ratio. Usually abnormal haemoglobins only comprise a few percent of the total, even in heavy smokers who have increased concentrations of carboxyhaemoglobin. However, common drugs such as paracetamol and sulfa drugs can induce the formation of methaemoglobin. Anaemia will also reduce the oxygen content without changing the calculated functional saturation. There will be difficulties relating the SpO2 to PaO2 if the position of the oxyhaemoglobin dissociation curve has been shifted by influences such as acid-base balance and carbon dioxide tension.
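The last point, relating SpO2 to PaO2, can be illustrated with the widely used Severinghaus approximation to the normal oxyhaemoglobin dissociation curve. The sketch below (in Python) is not from this article and assumes a normal curve position (normal pH, temperature and carbon dioxide tension); shifts of the curve, as noted above, would change these numbers.

```python
def so2_from_pao2(pao2_mmhg):
    """Severinghaus' empirical fit to the normal adult oxyhaemoglobin dissociation curve.
    Returns saturation (%) for a given PaO2 in mmHg; assumes the curve is not shifted."""
    return 100.0 / (23400.0 / (pao2_mmhg**3 + 150.0 * pao2_mmhg) + 1.0)

for pao2 in (27, 40, 60, 80, 100, 150):
    print(f"PaO2 {pao2:>3} mmHg -> saturation ~{so2_from_pao2(pao2):.1f}%")
# Saturation climbs steeply up to about 60 mmHg and then flattens:
# between 80 and 150 mmHg it only moves from about 96% to 99%.
```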
If the SpO2 is above 92% the partial pressure of oxygen (PaO2) can change rapidly with very little change in saturation (Fig. 2). This latter physiological feature limits the usefulness of pulse oximetry in, for example, the assessment of pulmonary gas exchange efficiency and the detection of hyperoxia in preterm babies. Measurement of the partial pressure of arterial oxygen (PaO2) from in vitro samples or transcutaneous electrodes may be preferable for monitoring hyperoxia in the 'flat' part of the dissociation curve. Pulse oximetry has also been an enormous boon in intensive care units. However, measurements can be difficult to obtain in low perfusion states or where inotropes such as dopamine are being used to sustain blood pressure.

Confounding factors (Table 2)

Abnormal haemoglobins and anaemia are 'physiological' confounders but these abnormalities can also affect the accuracy of the measurements. In animal experiments, SpO2 decreases as methaemoglobin increases up to 35%, and SpO2 increases as carboxyhaemoglobin increases up to 70%. Modest concentrations of these haemoglobins will not substantially change SpO2, which is a functional saturation. Anaemia has to be severe (50 g/L) before it interferes significantly with the measurement. Abnormal dyes and pigments such as methylene blue (used to treat methaemoglobinaemia) and severe hyperbilirubinaemia may interfere. In most clinical circumstances, these disturbances will not be present to a significant degree, but they need to be kept in mind. Strong superficial pigments such as nail polish must be removed and signal failure may occur in black patients although careful positioning on the less pigmented nail bed usually overcomes this problem. Venous pulsation may confuse the signal, reducing the displayed saturation, particularly where a tourniquet is applied above the probe or in the presence of right heart failure or tricuspid incompetence. Excessive motion of the probe and strong incident light can also cause an erroneous or inadequate signal. Motion artifact is also a problem in many longer-term settings where movements can be interpreted as a pulse.

Reliable pulse oximeters are now indispensable in all emergency departments, intensive care units (adult and neonatal) and operating theatres. Their use is considered to be good practice for procedures requiring sedation or instrumentation of the respiratory tract ranging from cardiac catheterisation to endoscopy and bronchoscopy. Their use in these procedures has uncovered quite alarming transient hypoxaemia requiring the use of supplemental oxygen. It is desirable to maintain the SpO2 above 90%.4 The routine use of pulse oximeters in operating theatres and recovery rooms has coincided with a dramatic decrease in perioperative morbidity and mortality although, interestingly, a cause and effect relation has not been confirmed.5 Clearly, disastrous errors such as incorrect connections in anaesthetic machines can be quickly recognised. Patients presenting to emergency departments with cardiorespiratory disorders are routinely monitored with pulse oximetry. However, it is important to identify when the additional information available from in vitro analysis of an arterial blood gas sample is critical for management.
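The distinction between the 'functional saturation' a pulse oximeter estimates and the fractional saturation a CO-oximeter reports can be made concrete with a small sketch (in Python, not from the article). The haemoglobin fractions below, including an 8% carboxyhaemoglobin level for a hypothetical heavy smoker, are illustrative numbers only.

```python
def functional_saturation(o2hb, hhb):
    """What a pulse oximeter estimates: O2Hb / (O2Hb + HHb), as a percentage."""
    return 100 * o2hb / (o2hb + hhb)

def fractional_saturation(o2hb, hhb, cohb=0.0, methb=0.0):
    """What a CO-oximeter reports: O2Hb as a fraction of all four haemoglobin species."""
    return 100 * o2hb / (o2hb + hhb + cohb + methb)

# Hypothetical heavy smoker: 90 parts O2Hb, 2 parts HHb, 8 parts COHb.
print(functional_saturation(o2hb=90, hhb=2))          # ~97.8% - looks reassuring
print(fractional_saturation(o2hb=90, hhb=2, cohb=8))  # 90.0% - the true oxygen-carrying fraction
```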
Patients with worsening asthma, deteriorating pneumonia or left heart failure and those with chronic obstructive pulmonary disease developing clouded consciousness on supplemental oxygen all need arterial carbon dioxide partial pressure (PaCO2), pH and base excess measurements (Fig. 2). Pulse oximeters in sleep investigation laboratories have substantially contributed to the explosion of knowledge about sleep and breathing over the last few decades. They are also extensively used during exercise testing in pulmonary function and cardiac stress test units. Here, small falls in PaO2 in the higher range will be difficult to detect, but hazardous falls will be readily identified. Finally, pulse oximeters have a place in the non-procedural doctor's office where the detection of acute or chronic hypoxaemia may be important - as in the assessment of patients requiring home oxygen therapy. An SpO2 above 90% in a patient with chronic obstructive pulmonary disease is reassuring whereas a lower measurement would suggest the need for confirmatory measurement of arterial blood gases. Experience with these devices, and their widespread adoption, have emphasised their status as a truly important advance in non-invasive patient monitoring and investigation. - International symposium on innovations and applications of pulse oximetry (ISIAPO 2002). Anesth Analg 2002;94 (Suppl 1S). - Schnapp LM, Cohen NH. Pulse oximetry: uses and abuses. Chest 1990;98:1244-50. - Severinghaus JW, Naifeh KH. Accuracy of response of six pulse oximeters to profound hypoxia. Anesthesiology 1987;67:551-8. - Young IH, Le Souëf PN. Clinical oximetry. Med J Aust 1993;159:60-2. - Cooper JB, Cullen DJ, Nemeskal R, Hoaglin DC, Gevirtz CC, Csete M, et al. Effects of information feedback and pulse oximetry on the incidence of anesthesia complications. Anesthesiology 1987;67:686-94.
Cancer becomes especially lethal when it metastasizes from a primary tumor to other organs. But this spread does not occur randomly — within a population of cancer cells, certain subgroups preferentially seek out and colonize specific organs. New research from scientists at Memorial Sloan Kettering and Weill Cornell Medical College has found that tumor cells send signals throughout the body to prepare specific organs for the arrival of metastatic cells. These signals are transmitted through small vesicles — microscopic bubble-like compartments — known as exosomes, which act as location scouts to set the stage at distant sites where cancer cells can take root and thrive. After being secreted by cancer cells, the exosomes circulate throughout the body and are taken up by other cells. At particular metastatic sites, they prime what is called the microenvironment — noncancerous cells, molecules, and blood vessels that will eventually surround the tumor — to be nurturing to cancer cells after they arrive. “If the cancer cells are seeds, then the soil of the microenvironment needs to be appropriately fertilized for the seeds to grow,” says MSK medical oncologist Jacqueline Bromberg, who has led pioneering research into exosomes in collaboration with David Lyden of Weill-Cornell Medical College. “We know that patients with metastatic disease to one organ can shed millions of cancer cells into the circulation and yet all the other organs in the body can remain cancer free for a long time,” she explains. “The question we asked was whether cancer exosomes could selectively prepare their favorite organs before the seeds arrive.” Directed by Surface Proteins Earlier research had suggested that exosomes play a role in metastasis, but the details were unclear. While exosomes can be taken up by a wide variety of cells, in most cases they are not retained, nor do they transmit signals to change their surroundings. In the new study, led by Drs. Bromberg and Lyden and published online by the journal Nature, the researchers sought to learn whether certain molecules in the exosomes were “addressing” them to specific organs, which might shed light on why tumor cells later preferentially go to those same organs. They discovered that exosomes display a variety of receptor proteins called integrins on their surface, and that the integrin type determines which organ the exosome will target. For example, an integrin called αvΒ5 directs exosomes to the liver, while the integrin α6Β4 causes them to home in on the lung. The specific integrin makes it easier for the exosome to be taken in and prime a particular organ because the integrin binds to other proteins, called adhesion molecules, amid the organ’s cells. “These integrins are like cellular Velcro, and they’re looking for the right sticky ‘hook’ to adhere to,” Dr. Bromberg says. “When they connect with the right adhesion molecule, it allows the exosome to start doing its work preparing the organ to accept and nurture the cancer cells.”Back to top Powerful Effects on Organ Selection The researchers also showed that when they blocked the expression of αvΒ5 or α6Β4 in cancer cells, the exosomes those cells shed were no longer able to educate their respective target organs — the liver and lung. In experiments with mice, cancer cells that usually take root and grow in these organs could not do so without the exosomes carrying out their advance work. 
Another mouse study showed the exosomes can even redirect cancer cells to spread to organs they don’t usually target, further demonstrating their powerful effect. “Remarkably, if we treated mice with lung-targeting exosomes, we could redirect breast cancer cells that would normally spread to the bone, causing them to spread to the lungs instead,” Dr. Bromberg says. The researchers also found an important clue as to how the exosomes condition the target organs to be welcoming to cancer cells: They appear to stimulate the cells in the microenvironment to produce a protein group called S100, which are already known to precondition host cells for metastasis. “We think the exosome integrins not only help with adhesion to the microenvironment but also trigger key signaling pathways and inflammatory responses in target cells, resulting in the education of that organ to permit the growth of metastatic cells,” Dr. Bromberg explains. She says that the main short-term practical application of these findings might be to analyze exosomes, and the expression of integrins on their surface, to predict where a patient’s tumor is most likely to spread. In the longer term, having earlier knowledge about the likely site of metastasis could someday help clinicians devise focused treatments that prevent the emergence of metastases at these sites. “The next big challenge is to determine if this process is reversible — and if so, is there a step of no return?” Dr. Bromberg says. “Specifically, can we uneducate or reeducate these organs, rendering them inhospitable for metastasis?” She explains that many have suggested using exosomes to deliver drugs or nucleic acids to cancers. “Identifying the integrin ‘zip codes’ that target specific cell types and tissues provides us with a unique opportunity to deliver such therapies to the organs that sustain the cancer,” she says. “You could render future metastatic sites inhospitable so that cancer cells would not grow there.”Back to top
The fateful French Revolution (1789 – 1799) began under King Louis XVI. Among the contributing factors leading up to the revolution were three things: - The salon culture: private home talks about politics and ideas. This is significant because it helped to develop Enlightenment ideas that would later fuel the revolution. - Special privileges for the nobility. These privileges were mainly tax exemptions, which caused worse finances for the French government. - The major debt crisis. Due to all the wars, and participation in wars, the French were just managing to pay the interest rates of their loans. This alone consumed fifty-percent of their budget. So, in 1789 the king called a meeting of the Estates General. This was a collection of three parties that voted for laws, taxes, etc…. One party was the 1st Estate, made up of the clergy. Another party was the 2nd Estate, consisting of the nobility. Yet another party was the 3rd Estate, the commoners, in other words the rest of France. In the Estates General, the voting was made by estate (3 votes), instead of by head. This made the voting unfair. Even though the 3rd Estate greatly outnumbered the 1st and 2nd combined, the latter often voted together, making the vote count 2:1. In 1789, for the first time in history the 3rd Estate took sudden charge of the meeting. They outright decided without any legal permission to become a separate assembly: the National Assembly otherwise called the Assembly of the People. Then, after being thrown out of the main building, they assembled in a tennis court and made an oath (The Oath of the Tennis Court): we will not disperse until we can have a constitution of sorts for France. Then, the National Assembly approved and passed, instead of a constitution (which would come later), the Declaration of the Rights of Man and of the Citizen. This document was directly influenced by Thomas Jefferson who worked together with General LaFayette. The Catholic Church suffered deeply throughout the French Revolution. In 1789, church lands were confiscated and monastic vows were abolished and forbidden. In 1790 the Civil Constitution of the Clergy was passed. This stated that the clergy’s salaries were to be paid by the government, that priests and bishops would be locally elected, rather than appointed by the pope, and that bishops must give an oath of allegiance to the government. This last would separate the priests into two categories: the juring priests and the non-juring priests (oath givers and non-oath givers). In 1791, the revolution began to undermine to a greater degree the king’s political powers. The Civil Constitution was written, which stated that there would be a single legislative body and the king would be able to give only a suspensive veto, versus an all-out prohibition. The National Convention was assembled and constant international warfare, to distract the people from the plight, was decided upon. A year later the king’s guards were killed, Louis was placed under house arrest, under constant guard and not allowed to hear mass. That same year the September Massacres occurred, cruel and brutal, and then, in 1793 the final blow was dealt: the French king, Louis XVI, was beheaded. With the absence of a monarch the Reign of Terror began (1793 – 1794). The Committee of Public Safety was formed by the Convention. The leader was a man named Robespierre. The Law of Suspects was passed and the committee was free to execute all people whom they took to be anti-revolutionary (or rather anti-reign of terror). 
Two-thirds of the victims were from the former 3rd Estate. What followed was a sweeping de-Christianization of France, beginning with the closing of the churches themselves. (This grew out of the ideas of the Enlightenment, chiefly the appeal to reason alone.) The title of saint was abolished and clerical dress was forbidden. Priests were no longer distinguished as juring or non-juring; instead, all were ordered to give up the priesthood. This forced tens of thousands of priests to emigrate to other countries. Italy and Austria refused them, but they were welcomed in Spain, the Papal States and England (a non-Catholic country). In addition, a new calendar was made. There were twelve thirty-day months, each divided into three ten-day weeks (the seven-day week was rejected because it recalled the biblical account of Creation). The remaining five days were used for feasting (bringing the total to 365). The years were no longer counted from Christ’s birth but from the founding of the French Republic in 1792. However, the Reign of Terror soon got out of hand. Robespierre became a virtual dictator, and soon Danton, formerly one of the Terror’s own men, was fighting him. Danton was subsequently imprisoned and then executed. At his execution Danton told Robespierre that he too would soon be killed. True to this prophecy, Robespierre was overthrown shortly afterwards on 9 Thermidor (a date in the new calendar) and executed. Here is a quotation from Robespierre justifying the use of terror: What is the fundamental principle of the democratic or popular government…? It is virtue… Republican virtue can be considered in relation to the people and in relation to the government; it is necessary in both. When only the government lacks virtue, there remains a resource in the people’s virtue; but when the people themselves are corrupted, liberty is already lost… If the spring of popular government in time of peace is virtue, the springs of popular government in revolution are at once virtue and terror. Virtue without terror is fatal; terror without virtue is powerless. Terror is nothing other than justice: prompt, severe, inflexible. It is therefore an emanation of virtue… a consequence of the general principle of democracy applied to our country’s most urgent needs. It has been said that terror is the principle of despotic government. Does your government therefore resemble despotism? Yes, as the sword that gleams in the hands of the heroes of liberty resembles those in the hands of the henchmen of tyranny. Let the despot govern by terror his brutalised subjects; he is right, as a despot. Subdue by terror the enemies of liberty, and you will be right… The government of the revolution is liberty’s despotism against tyranny…
Low level of oxygen in Earth's middle ages delayed evolution for two billion years A low level of atmospheric oxygen in Earth's middle ages held back evolution for 2 billion years, raising fresh questions about the origins of life on this planet. New research by the University of Exeter explains how oxygen was trapped at such low levels. Professor Tim Lenton and Dr Stuart Daines of the University of Exeter Geography department, created a computer model to explain how oxygen stabilised at low levels and failed to rise any further, despite oxygen already being produced by early photosynthesis. Their research helps explain why the 'great oxidation event', which introduced oxygen into the atmosphere around 2.4 billion years ago, did not generate modern levels of oxygen. In their paper, published in Nature Communications, Atmospheric oxygen regulation at low Proterozoic levels by incomplete oxidative weathering of sedimentary organic carbon, the University of Exeter scientists explain how organic material - the dead bodies of simple lifeforms - accumulated in the earth's sedimentary rocks. After the Great Oxidation, and once plate tectonics pushed these sediments to the surface, they reacted with oxygen in the atmosphere for the first time. The more oxygen in the atmosphere, the faster it reacted with this organic material, creating a regulatory mechanism whereby the oxygen was consumed by the sediments at the same rate at which it was produced. This mechanism broke down with the rise of land plants and a resultant doubling of global photosynthesis. The increasing concentration of oxygen in the atmosphere eventually overwhelmed the control on oxygen and meant it could finally rise to the levels we are used to today. This helped animals colonise the land, leading eventually to the evolution of mankind. The model suggests atmospheric oxygen was likely at around 10% of present day levels during the two billion years following the Great Oxidation Event, and no lower than 1% of the oxygen levels we know today. Professor Lenton said: "This time in Earth's history was a bit of a catch-22 situation. It wasn't possible to evolve complex life forms because there was not enough oxygen in the atmosphere, and there wasn't enough oxygen because complex plants hadn't evolved - It was only when land plants came about did we see a more significant rise in atmospheric oxygen. "The history of life on Earth is closely intertwined with the physical and chemical mechanisms of our planet. It is clear that life has had a profound role in creating the world we are used to, and the planet has similarly affected the trajectory of life. I think it's important people acknowledge the miracle of their own existence and recognise what an amazing planet this is." Life on earth is believed to have begun with the first bacteria evolving 3.8 billion years ago. Around 2.7 billion years ago the first oxygen-producing photosynthesis evolved in the oceans. But it was not until 600 million years ago that the first multi-celled animals such as sponges and jellyfish emerged in the ocean. By 470 million years ago the first plants grew on land with the first land animals such as millipedes appearing around 428 million years ago. Mammals did not rise to ecological prominence until after the dinosaurs went extinct 65 million years ago. Humans first appeared on earth 200,000 years ago.
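The feedback described above, in which freshly exposed sedimentary organic carbon consumes oxygen faster as the oxygen level rises, can be illustrated with a toy calculation. The sketch below is not the Lenton and Daines model; it is a minimal Python illustration with invented parameter values, using a saturating consumption term to show why a modest oxygen source gets pinned at a low steady state while a roughly doubled source overwhelms the sink and keeps rising.

```python
# Toy illustration only: invented units and rate constants, not values
# from the Nature Communications paper.

def run_o2(source, years=2000, dt=1.0, k_max=0.15, half_sat=1.0, o2_start=1.0):
    """Euler-integrate dO2/dt = source - k_max * O2 / (O2 + half_sat).

    The consumption term saturates at k_max, mimicking 'incomplete
    oxidative weathering': it can only destroy oxygen so fast.
    """
    o2 = o2_start
    for _ in range(int(years / dt)):
        consumption = k_max * o2 / (o2 + half_sat)
        o2 += (source - consumption) * dt
    return o2

low_source = run_o2(source=0.10)    # early photosynthesis: the sink keeps up
high_source = run_o2(source=0.20)   # photosynthesis roughly doubled by land plants

print(f"O2 after 2000 steps, low source:  {low_source:.1f}")   # settles near 2.0
print(f"O2 after 2000 steps, high source: {high_source:.1f}")  # still climbing
```

In the low-source case the balance point sits where production equals consumption, so oxygen stabilizes at a low value; in the high-source case the saturating sink can no longer keep pace, which is the "overwhelmed" behaviour described in the article.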
Using "Quotient" as a representation for fractions, how would you introduce the four operations: 1) addition, 2) subtraction, 3) multiplication, and 4) division? Introduce a word problem for each operation, then describe how quotient as a representation for operations is used. Also describe, in detail, how you would teach each operation to elementary students so they can easily understand. I would first explain that problems related to solving fractions have four levels of difficulty/understanding. The first level is to understand an "all case" approach that works with any fraction. The second level requires recognition of fractions that can use a shortcut method, which requires that the student apply the concept of factors. The third level requires reducing the final answer to simplest form, which also requires the application of factors. The fourth level involves solving problems with mixed numbers. Then, I would return to level one and give the following explanation: For addition, use the pattern: a/b + c/d = (a*d)/(b*d) + (b*c)/(b*d). I would show the pattern with arrows pointing to the numbers in the first expression and how those ... Learning operations on fractions is a key mathematics skill for elementary students. This posting explains a concise strategy to teach the operations of addition, subtraction, multiplication and division for students just beginning with this topic. The posting is written for the teacher who is looking for a strategy to teach this topic.
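To make the "all case" pattern concrete, here is a short sketch in Python (my illustration, not part of the original posting) that applies the same cross-multiplication rule to addition and subtraction, the simpler rules for multiplication and division, and then reduces each answer to simplest form with the greatest common divisor, which is the "factors" step described in levels two and three.

```python
from math import gcd

def reduce(n, d):
    """Reduce n/d to simplest form (level three: apply common factors)."""
    g = gcd(n, d)
    return n // g, d // g

def add(a, b, c, d):        # a/b + c/d = (a*d + b*c) / (b*d)
    return reduce(a * d + b * c, b * d)

def subtract(a, b, c, d):   # a/b - c/d = (a*d - b*c) / (b*d)
    return reduce(a * d - b * c, b * d)

def multiply(a, b, c, d):   # a/b * c/d = (a*c) / (b*d)
    return reduce(a * c, b * d)

def divide(a, b, c, d):     # a/b divided by c/d = (a*d) / (b*c)
    return reduce(a * d, b * c)

# Example word problem for addition: 1/2 of a pizza plus 1/3 of a pizza.
print(add(1, 2, 1, 3))       # (5, 6)  -> 5/6 of a pizza
print(subtract(3, 4, 1, 2))  # (1, 4)
print(multiply(2, 3, 3, 5))  # (2, 5)
print(divide(1, 2, 1, 4))    # (2, 1)  -> 2
```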
Of the various forms of omega fatty acids, it is the families of polyunsaturated fats known as omega-6 and omega-3 that are most essential for health. These types of fat contain numerous sub-types, each of which has different actions and numerous by-products. Maintaining balance between these families is vital for long-term health and excess levels of one family over another is detrimental. It is well documented that most people following a typically Western diet are deficient in these essential fats and in particular the omega-3 family. Implications of omega-3 deficiency for health are profound and increase susceptibility to health issues throughout life. Babies born from mothers deficient in omega-3 have insufficient levels to support brain and eye development, while school age deficiency may manifest in attentional or behavioural problems. Through the teenage years, mood is likely to be affected by omega-3 deficiency (with particular risk of depression onset), while adults may be prone to anger and irritability. Older adults with low omega-3 brain levels are at higher risk of stroke, memory problems and early onset dementia. Low omega-3 levels at any age predispose individuals to higher risks for mental health issues such as depression, bipolar disorder and schizophrenia. Omega-3 fatty acids There are many different types of omega-3 fatty acids, therefore it is not so simple to say that you have a good intake of omega-3 fatty acids, as they are used very differently in the body and therefore have completely different health outcomes. The main differentiating feature of fatty acids is the ‘chain length’ – put simply, this is the amount of carbon atoms in the molecule. The more carbon atoms, the longer the chain. Short-chain fatty acids Short-chain omega-3 fatty acids are those found in plant sources such as linseeds and echium seeds, and are labelled as ‘essential fatty acids’ (EFAs) because they cannot be manufactured by the body, hence we must obtain them from our diet. It is from these short-chain fatty acids that our bodies derive the beneficial long-chain omega-3 fatty acids, requiring the use of enzymes in the body. The majority of short-chain fatty acids are utilised as fuel and therefore only some go on to be metabolised to the long-chain fatty acids. It is now well-known that these conversions are not very efficient in many people and while the conversion from ALA to EPA and DHA is greater in women compared with men, on average the estimated conversion of ALA to EPA is between 0.2% to 8% and ALA to DHA around 0.05%. [1-4] Diets that are rich in LA (common in Western populations) not only decrease the conversion of ALA by as much as 40%, but also influence the balance of pro-inflammatory to anti-inflammatory eicosanoids. This is simply because of the competition between ALA and LA for desaturation and elongation enzymes and competition between AA and EPA for COX and LOX enzymes. Modern lifestyle and diet can have a huge effect on the efficiency of conversion, for example, zinc, vitamin B6 and magnesium are all required for the enzymes to support this process and are classified as ‘co-factors’. Deficiencies in these co-factors are common, however, due to high consumption of refined foods (whose natural micronutrients are stripped during the manufacturing process – for example, with white bread and white pasta) and low intake of nutrient rich foods such as fruit and vegetables, grass-fed meat and eggs from free range hens. 
Other factors which inhibit normal fatty acid metabolism include viruses, trans fats, alcohol, caffeine and stress. Long-chain fatty acids Long-chain fatty acids such as omega-3 EPA and DHA are found in fish and fish oil, seafood and a small amount in grass-fed meat and dairy. DHA plays a vital role in maintaining the structure and fluidity of our cell membranes, whereas the majority of functional health benefits associated with omega-3 fatty acids are due to the effects of the long-chain omega-3 EPA, as hormone-like by-products of EPA called eicosanoids are important anti-inflammatory substances, and are also required for optimal functioning of the brain. The body requires the long-chain omega fatty acids for health, so it is not advisable to rely solely on intake of short-chain plant-sourced fats due to poor conversion rates. Directly consuming long-chain fats in the diet (whether through food or via supplementation with fish oil) effectively by-passes the many enzyme-dependent and difficult steps of fatty acid metabolism. When EPA levels are low, DHA is ‘sacrificed’ from the cell membrane to ‘step in’ for EPA and so low EPA intake directly increases our rate of brain structure loss. - Burdge GC: Metabolism of alpha-linolenic acid in humans. Prostaglandins, leukotrienes, and essential fatty acids 2006, 75:161-168. - Burdge GC, Calder PC: Conversion of alpha-linolenic acid to longer-chain polyunsaturated fatty acids in human adults. Reproduction, nutrition, development 2005, 45:581-597. - Burdge GC, Jones AE, Wootton SA: Eicosapentaenoic and docosapentaenoic acids are the principal products of alpha-linolenic acid metabolism in young men*. Br J Nutr 2002, 88:355-363. - Burdge GC, Wootton SA: Conversion of alpha-linolenic acid to eicosapentaenoic, docosapentaenoic and docosahexaenoic acids in young women. Br J Nutr 2002, 88:411-420. - Emken EA, Adlof RO, Gulley RM: Dietary linoleic acid influences desaturation and acylation of deuterium-labeled linoleic and linolenic acids in young adult males. Biochimica et biophysica acta 1994, 1213:277-288. Omega-6 fatty acids As with omega-3, the short-chain omega-6 fatty acids are also essential fatty acids as they cannot be made in the body and therefore must come from our diets. Linoleic acid (LA) is the essential short-chain omega-6 fatty acid, which is found predominantly in vegetable and nut oils such as corn, almond and sunflower oil. The short-chain omega-6 fatty acids must also be converted to longer-chain fatty acids in the body, but the end products can vary and are either inflammatory or anti-inflammatory. This makes omega-6 fatty acids a little confusing. You may have read that omega-6 is anti-inflammatory and beneficial for balancing hormones, though you likely have also read that excess omega-6 can exacerbate inflammatory conditions such as cardiovascular disease and arthritis. Both are true, and to understand our requirements for omega-6, we must look to the ratio between omega-3 and omega-6 – it is the balance of these two omega families which is of most importance for health. More specifically, the ratio between omega-3 EPA and omega-6 arachidonic acid (AA) is the key biomarker for long-term health. Meat from animals fed on grains is the main source of AA in our diets. 
To simplify the requirements of omega-6, consider that if omega-6 and omega-3 are balanced in a healthy ratio (for example,2:1), omega-6 is a healthy anti-inflammatory fatty acid, but if omega-6 is consumed in excess, (a ratio of 7:1 or higher), this can produce significant inflammation in the body due to the shift in end products from either family. Omega-6 consumption has certainly increased over the last few decades, partly due to farming techniques now that most animals are fed grains rich in omega-6, as opposed to their natural pasture-based diets, which were higher in beneficial omega-3. The use of refined vegetable oils in processed refined foods has also significantly increased our consumption of omega-6 fatty acids. To ensure that you are getting a healthy balance of omega-3 and omega-6, as our diets are so much higher in omega-6, ensure that supplements are higher in omega-3 to keep inflammation regulated. The omega-6 gamma linolenic acid (GLA) sourced from evening primrose oil is anti-inflammatory, as long as it is balanced with omega-3 EPA, therefore a supplement containing both omega-3 EPA from fish and omega-6 GLA from evening primrose oil, with a higher dose of EPA, is a great way to get both omega-3 and omega-6 and ensure the by-products from both families are anti-inflammatory. Omega-7 & 9 We sometimes forget that these other omega fatty acids exist, as there is so much emphasis placed on the wonderful health benefits of omega-3 fatty acids, and omega-6 to an extent, but being able to understand the importance and source of these other fatty acids is of course important. Omega-7 and omega-9 are not considered to be essential fats as they can be manufactured in the body, though intake of these fats is still beneficial for health and may reduce our risk of cardiovascular disease. Consumption of these fatty acids is high in the ‘Mediterranean’ diet. Rich sources of omega-7 include macadamia nut oil, sea buckthorn oil and coconut oil. Omega-9 can be found in olive oil and rapeseed oil. The most interesting point to note with the omega-7 and omega-9 fatty acids is that they are more heat stable than the delicate essential omega-3 and omega-6 fatty acids, which means that they make much more suitable oils for cooking with. Cooking omega-3 fats at a high temperature can ruin the fatty acid structure, making it difficult for our bodies to process and therefore no longer offering health benefits. Coconut oil on the other hand, and olive oil and rapeseed oil to a lesser extent, are much more heat-stable and therefore can be used in cooking while still providing benefits for our health.
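As a rough back-of-the-envelope illustration of the two quantitative points made above, the low ALA-to-EPA conversion range (0.2% to 8%) and the difference between a roughly 2:1 and a 7:1 or higher omega-6 to omega-3 ratio, the short sketch below does the arithmetic for hypothetical intakes. The intake figures are invented purely for illustration and are not dietary recommendations.

```python
# Illustrative arithmetic only; intake values are hypothetical examples.

def epa_from_ala(ala_mg, conversion_rate):
    """Estimate EPA derived from ALA at a given conversion efficiency."""
    return ala_mg * conversion_rate

def omega_ratio(omega6_mg, omega3_mg):
    """Omega-6 : omega-3 ratio expressed as a single number (n : 1)."""
    return omega6_mg / omega3_mg

ala_intake_mg = 2000  # a plant-source serving of ALA, hypothetical figure
for rate in (0.002, 0.08):  # the 0.2% to 8% range quoted above
    epa = epa_from_ala(ala_intake_mg, rate)
    print(f"ALA {ala_intake_mg} mg at {rate:.1%} conversion gives about {epa:.0f} mg EPA")

# Two hypothetical diets: one balanced, one typical Western-style.
print(f"balanced diet ratio:      {omega_ratio(4000, 2000):.0f}:1")
print(f"Western-style diet ratio: {omega_ratio(14000, 2000):.0f}:1")
```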
Diagnostic Characteristics: Females are larger than males. Horseshoe crabs have a large, arched forebody covered by a horseshoe-shaped carapace, or upper shell, followed by a smooth abdomen with spines on the sides, and a thin tail. There are two pairs of simple eyes, or eyes with one lens, on top of the carapace and a pair of compound eyes, or eyes with multiple lenses, on ridges toward the sides. The mouthparts are made up of a pair of pincher-like mouthparts and a pair of clawed leg-like appendages. There are four pairs of clawed walking legs. The walking legs have seven segments, the last two form pinchers on the first four pairs of legs. The bases of the fourth pair of legs are fitted with special structures called flabella. The flabella are used to clean the book gills. The last pair of legs ends in four leaf-like structures. These legs are used to push through, and sweep away mud, silt, and sand as the horseshoe crab burrows through the sea bottom in search of food.The solid midsection, or abdomen, has six pairs of flap-like limbs. The first pair is joined together and protects the reproductive opening, through which the crab lays its eggs. The other five pairs form the gills, the organs through which the crab breathes underwater. They are called book gills because they resemble the pages of a book. Movable spines stick out on each side of the midsection. A long thin tail extends from the end of the midsection and is used for steering through the water and flipping over. Size: Adult horseshoe crabs range in length from 3.5 to 33.5 inches (89 to 850 millimeters). Habitat, biology and fisheries: Adults migrate inshore to intertidal sandy beaches to spawn in the spring. In the fall, adults move to deep bay waters or migrate to the Atlantic continental shelf to overwinter. Spawning generally occurs on protected sandy beaches from March through July, with peak activity occurring on the evening new and full moon high tides in May and June. Delaware Bay has the largest concentration of spawning horseshoe crabs.The horseshoe crab is a benthic or bottom-dwelling arthropod that utilizes both estuarine and continental shelf habitats. Spawning adults prefer sandy beach areas within bays and coves that are protected from wave energy. Horseshoe crabs spawn multiple times per season. Egg development is dependent on temperature, moisture, and oxygen content of the nest environment. Spawning habitat varies throughout the horseshoe crab range. In Massachusetts, New Jersey, and Delaware beaches are typically coarse-grained and well-drained as opposed to Florida beaches, which are typically fine-grained and poorly drained. Optimal spawning beaches may be a limiting reproductive factor for horseshoe crabs because they typically select beaches based on geochemical criteria. For example, results from a geomorphology study conducted along the New Jersey side of the Delaware Bay estimated that only 10.6 percent of the New Jersey shore adjacent to Delaware Bay provided optimal horseshoe crab spawning habitat and only 21.1 percent provided suitable spawning habitat. Nursery Habitat - The shoal water and shallow water areas of bays (e.g., Delaware Bay and Chesapeake Bay) are important nursery areas. Juveniles usually spend their first two years on intertidal sand flats. Older juveniles move out of intertidal areas to a few miles offshore, except during breeding migrations. Adults are exclusively subtidal, except during spawning. Specific requirements for adult habitat are not known. 
Although horseshoe crabs have been taken at depths >200 meters, scientists suggest that adults prefer depths <30 meters. During the spawning season, adults typically inhabit bay areas adjacent to spawning beaches and feed on bivalves. In the fall, adults may remain in bay areas or migrate to the Atlantic Ocean to overwinter on the continental shelf. Deep water areas are used by larger juveniles and adults to forage for food. They play a vital ecological role in the migration of shorebirds along the entire Atlantic seaboard, as well as providing bait for the American eel and conch fisheries along the coast. Additionally, their unique blood is used by the biomedical industry to produce Limulus Amoebocyte Lysate (LAL), an important tool in the detection of contaminants in patients, drugs and other medical supplies. Distribution: Maine to the Gulf of Mexico, but are most abundant from New Jersey to Virginia with their center of abundance around Delaware Bay.
Mask, 19th–early 20th century Probably Timor–Leste (East Timor) Wood, fiber, traces of paint, lime, and hair; H. 9 1/16 in. (23 cm) Purchase, Discovery Communications Inc. and Rogers Fund, 2000 (2000.444) The island of Timor gave rise to a distinctive tradition (or traditions) of dance masks whose precise origins and significance remain uncertain. What information exists suggests that many of the masks originated in Timor-Leste (East Timor). Portraying both male and female ancestors, they were worn by men during dances and other ceremonies, including celebrations of victory in war. When in use, the masks were typically painted, adorned with strips of hide or bristles representing facial hair, and worn with a headdress or hood that covered the head to further conceal the dancer's identity. The present mask has no eye holes and the wearer would have looked out through the mouth. The holes on the upper lip and forehead likely served for the attachment of a mustache and eyebrows. Some masks were made from perishable materials, but wood examples, such as this highly polished and deeply patinated work, were evidently preserved and reused many times.
[Image caption: "I wouldn't be caught dead in that fur coat you're wearing." Photo by Naypong at freedigitalphotos.net.]
- Conduction is when heat moves from a hotter area to a colder area across a still surface. If you stand barefoot on a cold sidewalk, the heat in your feet is going to transfer to the cooler surface of the sidewalk by conduction and you will get cooler (which is nice in the hot summer, but uncomfortable when the weather starts to get chilly). Conduction can happen when the body is in contact with a solid (like a sidewalk), a liquid (like a bath), or a gas (like the air around you).
- Convection is essentially conduction with movement, and this movement makes the transfer of heat even faster. If you are standing inside and it is 70ºF in the building, you will likely be fairly comfortable. But if you are outside on a windy 70º day, even though the environment is the same temperature, you will get colder faster.
- Radiation is familiar to us from the warming effects of the sun, but in reality, all objects give off electromagnetic radiation. We perceive radiation within the visible spectrum as colored light, but most radiation is outside our visible range.
- Evaporation happens when water (like sweat or moist breath) converts from a liquid state to a gaseous state, taking heat away from the body.
Animals are always in contact with something (like surfaces, air, or water), so conduction is always occurring.
[Figure: imagine a circle representing an animal's body, where Tb is the animal's body temperature and Te is the environmental temperature.] The bigger (Tb - Te), the faster the animal will lose heat and cool down. This works the other way around, too: the bigger (Te - Tb), the faster the animal will heat up.
But reptiles (as well as amphibians and fish) are ectotherms. They get almost all of their heat from their environments. They maintain their body temperatures behaviorally, by choosing what environment to hang out in and what position to put their body in. If they are cold, they go bask in the sun to absorb radiated heat or lie on a warmed rock to absorb conducted heat. If they are hot, they lie on a cool rock in the shade to lose heat by conduction or soak in a cool stream to lose heat by convection. To maintain a relatively constant body temperature, they are constantly moving between warm and cool areas to adjust their body temperature one direction or another. Many ectotherms rely on their ability to adjust their body temperatures quickly, and this ability depends on creating large driving forces of heat exchange. If an ectothermic reptile were to have an insulation layer, like fur, it would reduce its ability to adjust its body temperature by conduction and convection. It would lose its heat slowly and not be able to replace it fast enough. In the end, it would become too cold. It may seem paradoxical, but a lizard in a fur coat would likely die of cold-related physical issues (if not embarrassment). Interestingly enough, just because lizards don't have fur doesn't mean they couldn't have hair. In fact, some of them do have hair, but not how you may think. Hair, fur, feathers, and scales are all made up in large part of keratin proteins. Many gecko species are well known for their wide, sticky toes that help them climb smooth, vertical surfaces (like walls). Their secret? Ultra-thin keratin hairs growing out of the geckos' feet provide a chemical adhesive force to keep the animal secured to the wall surface.
So reptiles may not have a need for fur, but some of them have an innovative use for hair. Want to know more about hairy geckos? Autumn K, Liang YA, Hsieh ST, Zesch W, Chan WP, Kenny TW, Fearing R, & Full RJ (2000). Adhesive force of a single gecko foot-hair. Nature, 405 (6787), 681-5 PMID: 10864324
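The driving-force idea in the article above, that heat flows faster when the gap between body temperature (Tb) and environmental temperature (Te) is larger and that insulation slows the exchange, can be sketched with Newton's law of cooling. This is a simplified illustration with invented coefficients, not a physiological model; the "furred" case simply uses a smaller heat-transfer coefficient to stand in for insulation.

```python
# Newton's law of cooling: dTb/dt = -k * (Tb - Te)
# k bundles surface area, conduction/convection, and insulation into one number.
# All values are illustrative, not measured physiological constants.

def body_temp(tb0, te, k, minutes, dt=0.1):
    """Integrate body temperature over time with simple Euler steps."""
    tb = tb0
    for _ in range(int(minutes / dt)):
        tb += -k * (tb - te) * dt
    return tb

basking_lizard = dict(tb0=20.0, te=35.0, minutes=30)   # cold lizard on a warm rock
print(f"bare skin (k=0.10): {body_temp(k=0.10, **basking_lizard):.1f} C after 30 min")
print(f"'furred'  (k=0.02): {body_temp(k=0.02, **basking_lizard):.1f} C after 30 min")
# The insulated animal warms far more slowly, which is exactly why an ectotherm
# that relies on fast behavioural temperature adjustment would be handicapped
# by a fur coat.
```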
The New International Encyclopædia/Harmonic Stop HARMONIC STOP. An organ-stop, having pipes double the usual length, and pierced mid-way, so that the tone produced is an octave higher than the ordinary pitch. Harmonic stops are composed generally of more than a single rank of pipes, tuned in octaves, double octaves, and double or triple thirds and fifths above the natural pitch of the keys; they comprise the mixture, furniture, cornet, etc. Those which have only a single rank of pipes tuned in thirds, fifths, with their octaves above the pitch represented on the keyboard, are called ‘mutation stops.’ They were introduced to give additional power to the ‘foundation stops,’ and also to produce a more brilliant effect in the performance of certain styles of music. See Organ.
Leadership refers to the role or process that enables systems and individuals to achieve their goals. Curriculum refers to all the experiences that learners have to go through in a program of education. Curriculum leadership, therefore, is the act of exercising the functions that enable the achievement of a school's goal of providing quality education. The definition of curriculum leadership involves functions and goals. A curriculum leader has to take charge of making sure that the curriculum goals are achieved. The ultimate goal is to maximize student learning by providing quality in the content of learning. Curriculum leadership focuses on what is learned (the curriculum) and how it is taught (the instruction). As school head, the principal is responsible for making sure that the school has a quality curriculum and that the curriculum is implemented effectively. Achieving educational excellence is the goal. To attain this goal, the principal needs to manifest curriculum leadership.
The Roles and Functions of a Curriculum Leader
Glatthorn (1997) was an educator interested in how curriculum development could be used to make teaching effective. He provided the following list of the essential functions of curriculum leadership carried out at the school and classroom levels:
Curriculum leadership functions at the school level:
a. develop the school's vision of a quality curriculum
b. supplement the state's or district's educational goals
c. develop the school's own program of studies
d. develop a learning-centered schedule
e. determine the nature and extent of curriculum integration
f. align the curriculum
g. monitor and assist in curriculum implementation
Curriculum leadership functions at the classroom level:
a. develop yearly planning calendars for operationalizing the curriculum
b. develop units of study
c. enrich the curriculum and remediate learning
d. evaluate the curriculum
The roles and functions show that, regardless of whether they are carried out at the school level or the classroom level, curriculum leadership involves tasks that guarantee quality education. The tasks and functions may be further grouped into four major tasks:
a. ensuring curriculum quality and applicability
b. integrating and aligning the curriculum
c. implementing the curriculum efficiently
d. regularly evaluating, enriching, and updating the curriculum
Exhibiting curriculum leadership means that the principal has to be vigilant in overseeing the many instructional activities in the school so that educational goals will be achieved. This implies that curriculum leadership is also a component of instructional leadership. (Activity: Given the four major tasks of curriculum leadership, write some specific ways in which these tasks can be manifested.)
Source: Module: Lead Curriculum Implementation and Enrichment. EXCELS Flexible Course on Leading Curricular and Instructional Processes. SEAMEO INNOTECH, © 2005.
Early Motion-capture Techniques To master the skill of creating imaginary characters or natural scenes—regardless of artistic style—painters, sculptors, and graphic artists spend a great deal of time replicating reality by working from models, landscapes, or other references. It is not any different for animators: One has to understand and completely embrace human or animal motion to be able to create expressive, stylistic, or realistic-looking animations. To study the mechanics of a talking face, only a mirror is needed; a galloping horse or a running cheetah, however, is a more complicated subject. Stepping through video sequences or 3D motion-captured data could be essential for learning such complex motion—without good references, the subtle nuances of the movement are impossible to identify. But where should we look for quality references? There are some fine albums published more than 100 years ago that you should check out first! The demand to capture animal and human motion emerged way before computers or even animation and motion pictures were born. Artists, doctors, and scientists were desperate to find out how different animals move, how birds and insects fly, or how cats manage to always land on their feet. One problem of particular interest was the locomotion of four-legged animals. As later research has proved, all four-legged animals have the same walking pattern for maximum stability. However, a lot of old paintings showed running horses with all four legs stretched out, a pose that never occurs during any type of gait. Interestingly, horses and dogs in the shape of toys, sculptures, book illustrations, and stuffed animals are often depicted incorrectly. The famous photographic sequence by Eadweard Muybridge, “The Horse in Motion” (1878). In 1872, Leland Stanford, governor of California, businessman, and race-horse owner, set off to find the answer for the popularly-debated question of the time: Did all four hooves of a horse leave the ground at the same time during a gallop? Supporters of one side believed that one leg has to provide support at any moment during the gait, but Stanford (and others) claimed the contrary, and wanted scientific evidence to back his belief. He commissioned the English photographer Eadweard Muybridge to develop some technique to capture the moment. Since motion-picture cameras did not exist at that time, Muybridge assembled a device with a special electrical trigger mechanism and custom chemical formulas for processing, and managed to photograph Occident, Stanford’s race horse, completely airborne in 1877. (This can be seen in the famous 1878 photographic sequence by Eadweard Muybridge called “The Horse in Motion.”) Bottom: An incorrect depiction of horse gaits on the painting “Le derby d’Epsom” by Théodore Expanding the experiment—still funded by Stanford—Muybridge devised a new (and, arguably, the very first) motion-photography, or chronophotography, scheme involving 24 high-speed cameras positioned side by side and covering 20 feet of a long shed. Using innovative ideas to release the shutters as the horse (or other subjects) passed in front of the cameras, or alternatively by a clockwork mechanism, he created photographic sequences of a wide variety of animals and athletes in motion. The images were not of particularly good quality, but the wood engravings based on them were successfully published in scientific and photographic journals. 
Muybridge ultimately produced more than 100,000 sequence photographs, of which approximately 20,000 were reproduced as collotype prints, and the reader is likely to find multiple albums containing these on the shelves of nearby bookstores—invaluable references for all animators. Another significant figure of early motion capture, and a friend and competitor of Muybridge, was Étienne-Jules Marey, a French scientist and chronophotographer. He was obsessed with human and animal motion, and developed new, innovative techniques to aid his studies. His revolutionary idea was to record several phases of movement on the same photographic plate. In order to achieve this, he needed a fundamentally new kind of instrument, and built his high-speed “chronophotographic gun” in 1882 that was capable of shooting 12 consecutive frames a second. Using his device—which he later improved significantly—he studied horses, birds, dogs, donkeys, sheep, elephants, fish, and more. He was the first to capture how birds and insects fly—a significant accomplishment employed by aviation engineers not much after. Marey also conducted a study about cats always landing on their feet, and found that without any external force, they are indeed capable of twisting their body in air. Chickens, rabbits, and puppies were subjected to the same test, but only the rabbits came out on top.... Being a man of medicine, Marey became fascinated by the internal movements of the body and studied blood circulation, respiration, heart beats, and skeletal movements. For the latter, he used a similar technique to current optical motion capture: He attached reflective markers to the joints of actors wearing a black suit so he could capture the skeletal movement during walking and running cycles. While Muybridge was primarily a photographer with great engineering and scientific skills, Marey was an educated scientist executing innovative measurements. Both these men and their colleagues had a great influence on photography, medicine, aviation, engineering, and other sciences, as well as art—including animation and motion pictures. They produced countless photographs of human and animal motion, perfect references for computer animators.
rhyolite
rhyolite, extrusive igneous rock that is the volcanic equivalent of granite. Most rhyolites are porphyritic, indicating that crystallization began prior to extrusion. Crystallization may sometimes have begun while the magma was deeply buried; in such cases, the rock may consist principally of well-developed, large, single crystals (phenocrysts) at the time of extrusion. The amount of microcrystalline matrix (groundmass) in the final product may then be so small as to escape detection except under the microscope; such rocks (nevadites) are easily mistaken for granite in hand specimens. In most rhyolites, however, the period of such crystallization is relatively short, and the rock consists largely of a microcrystalline or partly glassy matrix containing few phenocrysts. The matrix is sometimes micropegmatitic or granophyric. The glassy rhyolites include obsidian, pitchstone, perlite, and pumice. The chemical composition of rhyolite is very like that of granite. This equivalence implies that at least some and probably most granites are of magmatic origin. The phenocrysts of rhyolite may include quartz, alkali feldspar, oligoclase feldspar, biotite, amphibole, or pyroxene. If an alkali pyroxene or alkali amphibole is the principal dark mineral, oligoclase will be rare or absent, and the feldspar phenocrysts will consist largely or entirely of alkali feldspar; rocks of this sort are called pantellerite. If both oligoclase and alkali feldspar are prominent among the phenocrysts, the dominant dark silicate will be biotite, and neither amphibole nor pyroxene, if present, will be of an alkaline variety; such lavas are the quartz porphyries or “true” rhyolites of most classifications. Certain differences between rhyolite and granite are noteworthy. Muscovite, a common mineral in granite, occurs very rarely and only as an alteration product in rhyolite. In most granites the alkali feldspar is a soda-poor microcline or microcline-perthite; in most rhyolites, however, it is sanidine, not infrequently rich in soda. A great excess of potassium over sodium, uncommon in granite except as a consequence of hydrothermal alteration, is not uncommon in rhyolites. Rhyolites are known from all parts of the Earth and from all geologic ages. They are mostly confined, like granites, to the continents or their immediate margins, but they are not entirely lacking elsewhere. Small quantities of rhyolite (or quartz trachyte) have been described from oceanic islands remote from any continent.
San Francisco State University astronomer Stephen Kane and an international team of researchers have announced the discovery of a new rocky planet that could potentially have liquid water on its surface. The new planet, dubbed Kepler-186f, was discovered using NASA's Kepler telescope, launched in March 2009 to search for habitable zone, Earth-sized planets in our corner of the Milky Way Galaxy. A habitable zone planet orbits its star at a distance where any water on the planet's surface is likely to stay liquid. Since liquid water is critical to life on Earth, many astronomers believe the search for extraterrestrial life should focus on planets where liquid water occurs. "Some people call these habitable planets, which of course we have no idea if they are," said Kane, an assistant professor of physics and astronomy. "We simply know that they are in the habitable zone, and that is the best place to start looking for habitable planets." Kepler-186f is the fifth and outermost planet discovered orbiting around the dwarf star Kepler-186. The planets were discovered by the transit method, which detects potential planets as their orbits cross in front of their star and cause a very tiny but periodic dimming of the star's brightness. After the astronomers were able to confirm that Kepler-186f was a planet, they used the transit information to calculate the planet's size. Kepler-186f is slightly bigger than Earth, measuring about 1.1 Earth radii. (An Earth radius is the distance from the Earth's center to its surface). The researchers don't know yet what the mass of the planet might be, but they can make an estimate based on other planets of similar radii, Kane noted. Having the mass and radii of a planet allows the astronomers to calculate other features such as a planet's average density, "and once you know the average density of a planet, then you can start to say whether it's rocky or not," Kane explained. "What we've learned, just over the past few years, is that there is a definite transition which occurs around about 1.5 Earth radii," he continued. "What happens there is that for radii between 1.5 and 2 Earth radii, the planet becomes massive enough that it starts to accumulate a very thick hydrogen and helium atmosphere, so it starts to resemble the gas giants of our solar system rather than anything else that we see as terrestrial." The planet's size influences the strength of its gravitational pull, and its ability to pull in abundant gases like hydrogen and helium. At Kepler-186f's size, there is a small chance that it could have gathered up a thick hydrogen and helium envelope, "so there's a very excellent chance that it does have a rocky surface like the Earth," Kane said. Rocky planets like Earth, Mars and Venus gained their atmospheres as volcanic gasses like carbon dioxide and water vapor were released from the planets' interiors. Habitable zone planets like Earth orbit at a distance from a star where water vapor can stay liquid on the surface. Planets like Venus that orbit a little closer to the Sun lose their liquid water and are cloaked mostly in carbon dioxide. Planets like Mars that orbit further out from the Sun than Earth have their liquid water locked up as ice. Kepler-186f appears to be orbiting at the outer edge of the habitable zone around its star, which could mean that any liquid surface water would be in danger of freezing, Kane said. 
"However, it is also slightly larger than the Earth, and so the hope would be that this would result in a thicker atmosphere that would provide extra insulation" and make the surface warm enough to keep water liquid. Although Kepler-186f shows exciting signs of being Earth-like, Kane points out that its differences are also fascinating. "We're always trying to look for Earth analogs, and that is an Earth-like planet in the habitable zone around a star very much the same as our Sun," said Kane, who is the chair of Kepler's Habitable Zone Working Group. "This situation is a little bit different, because the star is quite different from our sun." Kepler-186 is an M-dwarf star, much smaller and cooler than the Sun. These stars are numerous in our galaxy, and have some features that make them promising places to look for life. "For example, small stars live a lot longer than larger stars," Kane explained, "and so that means there is a much longer period of time for biological evolution and biochemical reactions on the surface to take place." On the other hand, small stars tend to be more active than stars the size of our Sun, sending out more solar flares and potentially more radiation toward a planet's surface. "The diversity of these exoplanets is one of the most exciting things about the field," Kane said. "We're trying to understand how common our solar system is, and the more diversity we see, the more it helps us to understand what the answer to that question really is." "An Earth-sized Planet in the Habitable Zone of a Cool Star" by Elisa V. Quintana, Thomas Barclay, Sean N. Raymond, Jason F. Rowe1, Emeline Bolmont, Douglas A. Caldwell, Steve B. Howell, Stephen R. Kane, Daniel Huber, Justin R. Crepp, Jack J. Lissauer, David R. Ciardi, Jeffrey L. Coughlin, Mark E. Everett, Christopher E. Henze, Elliott Horch, Howard Isaacson, Eric B. Ford, Fred C. Adams, Martin Still, Roger C. Hunter, Billy Quarles and Franck Selsis was published in the April 18 issue of Science. Cite This Page:
British White cattle
The British White is a naturally polled British cattle breed, white with black or red points, used mainly for beef. It has a confirmed history dating back to the 17th century, and may be derived from similar cattle kept in parks for many centuries before that. The British White has shortish white hair, and has dark points – usually black, but sometimes red. The coloured points include the ears, feet, eyelids, nose and often even teats. It is naturally polled (hornless), medium-sized and compactly built. There may be some coloured spots on the body fur, and the skin beneath the fur is usually coloured (grey or reddish), or pink with coloured spots. The colour-pointed pattern is found in many unrelated cattle breeds throughout the world – it is an extreme pale form of the similarly widespread colour-sided or lineback pattern. The red-pointed variant shows in about two per cent of British Whites, but since red colouration is genetically recessive to black in cattle, many of the black-pointed animals also carry the red allele. The colour-pointed pattern shows strongly in crosses with other breeds, often with additional dark spotting if the other parent was solid-coloured. As in other cattle the polled characteristic is dominant over horns, so first crosses are also polled. White cattle (often with black or red ears) are believed to have been highly regarded in Britain and Ireland in very early times, and herds of white cattle were kept as ornamental and sporting animals in enclosed parks for many centuries. They gave rise to the horned White Park cattle, and contributed to the polled British White. However, British Whites are not as genetically distinct from other British breeds as White Parks are, and so there is some doubt about their exact origins; other breeds such as Shorthorn may have contributed to their development. These cattle were kept in the Park of Whalley Abbey, in the Forest of Bowland near Clitheroe. After that time the major portion of the herd was moved to Norfolk, in the early 19th century. This herd was sold off in small lots, largely to nobility in the surrounding countryside, and formed the basis of the British White breed. By the early 20th century these cattle had declined to about 130 registered animals, mainly in the eastern counties of England. By the end of the 20th century numbers had grown to over 1,500 registered animals in the UK and perhaps 2,500 in the US, as well as many in other parts of the world such as Australia, where the breed was first imported by Mrs A Horden in 1958. The UK Rare Breeds Survival Trust lists it as a "minority" breed. In Britain, pedigrees are now maintained by the British White Cattle Society, although in the past British Whites and White Parks formed different sections in the same herdbook. The British White Cattle Society of Australia governs the breed in that country. Its first Herd Book was published in 1985. In North America the breed is represented by two separate societies, the British White Cattle Association of America and the American British White Park Association (confusingly, the latter does not cover the horned White Park). - The White Park is very similar to the British White, being white with black or red points, but with white, dark-tipped horns. It is more rangy, and usually has somewhat less spotting and less dark on the points.
Related, similarly-coloured types include the Chillingham and Vaynol cattle.
- Swedish Mountain or Fjäll cattle, a dairy type, may be colour-pointed.
- The Irish Moiled is a red colour-sided traditional breed from Northern Ireland; it may be white with red points, but it is more lightly built and of somewhat more dairy type than most British Whites.
- The Belgian Blue (and its crosses) is often largely white with grey ears, but this heavily muscled, intensive beef breed is of very different type to the British White.
- Holstein cattle may be nearly all-white, and such cattle sometimes have black ear tips; again these intensive dairy cattle are of very different type to the British White.
- The White Galloway is a colour variety of the Galloway with dark points.
- Hemmings, Jessica, Bos primigenius in Great Britain; or, Why do Fairy Cows Have Red Ears, Folklore Magazine, London, 2002
- Parsons, JM (2003). Cattle Breeds in Australia: a complete guide. Mt Waverley, Victoria: CH Jerram & Associates. p. 224. ISBN 978-0-9579086-2-8.
- Rare Breeds Survival Trust watch list, accessed 21 May 2008
- The British White Cattle Society of Australia, Herd Book Vol 1. Sydney: The British White Cattle Society of Australia. 1985. p. 112.
- British White Cattle Association of America official web site.
- American British White Park Association official web site.
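The inheritance points mentioned in the article, black pointing dominant over red and polled dominant over horned, so that black-pointed animals can carry a hidden red allele, follow ordinary Mendelian dominance. The short sketch below is my illustration rather than breed-society data; it enumerates a cross between two black-pointed carriers to show why roughly a quarter of their calves would be expected to show red points.

```python
from itertools import product
from collections import Counter

# E = black points (dominant), e = red points (recessive): illustrative labels only.
def offspring_colours(parent1, parent2):
    """Enumerate the equally likely allele combinations from two parents."""
    colours = Counter()
    for a, b in product(parent1, parent2):
        genotype = "".join(sorted((a, b)))          # e.g. "Ee"
        colour = "black-pointed" if "E" in genotype else "red-pointed"
        colours[colour] += 1
    return colours

# Two black-pointed animals that both carry the recessive red allele.
print(offspring_colours("Ee", "Ee"))
# Counter({'black-pointed': 3, 'red-pointed': 1})  -> a 3:1 ratio, which is why
# red points keep reappearing at low frequency in a mostly black-pointed breed.
```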
We are living in the midst of a revolution in astronomy, with unprecedented images of the cosmos sent back from outer space. Take a guided tour through some of the best images in the universe, brought to you from Earth orbit and beyond. Since 1990, the Hubble Space Telescope has looked out to the cosmos from an orbital vantage point more than 350 miles above the earth's surface. After launch, scientists discovered that Hubble's optics were flawed, and it took three years and a dramatic spacewalk to correct its vision. Since then, the $2 billion, 12.5-ton orbiting observatory has ranked as one of NASA's greatest success stories. Hubble's best-known images include an iconic look at the starbirth region in the Eagle Nebula, shown above, as well as the Hourglass Nebula, which has been dubbed “the eye of God.” The slide show above takes you through those greatest hits. Hubble may be the best-known platform for space imagery, but scores of other missions have sent back images that are visually stunning as well as scientifically significant.
Slideshow: Jewels of Jupiter
“Jewels of Jupiter” presents snapshots of Jupiter and its moons, sent back by the Galileo and Cassini spacecraft. Galileo arrived at Jupiter in 1995 to sample the giant planet's atmosphere and record its cloud patterns, including the Great Red Spot. The mission was extended twice, so that Galileo could focus on the moons of Jupiter. Among the probe’s most intriguing findings were indications that there may be watery oceans beneath the icy crusts of two of those moons, Europa and Callisto. Some scientists believe such alien oceans could harbor life — but this hypothesis will have to be tested by future probes. Galileo was sent on a mission-ending plunge into Jupiter's atmosphere in 2003. Cassini, meanwhile, snapped pictures of Jupiter on its way to a 2004 rendezvous with Saturn. The marvels of Mars have been studied for more than 25 years by space probes including Viking landers and orbiters, the Hubble Space Telescope, Mars Pathfinder and Global Surveyor. The planet is now dry and cold, but scientists believe the Red Planet was once much more like Earth. The images from NASA spacecraft reveal canyons and flood plains where water once flowed. Liquid water may still exist far below the planet’s surface. Could life have developed on Mars billions of years ago? Might microbial life still exist in underground aquifers or beneath polar caps? Such questions will be the focus of future missions.
Slideshow: Best of Cassini
"Starring Saturn" highlights imagery of the ringed planet from the Cassini orbiter, which began its work in 2004. This slideshow highlights Saturn's atmosphere and rings, but we have other views from Cassini as well. You can continue your exploration with a slideshow focusing on Titan, Saturn's smog-covered moon, which boasts lakes of chilled hydrocarbons and may be following in primeval Earth's footsteps. You can also catch views of tiger-striped Enceladus, an ice-covered moon that may possess subsurface oceans of liquid water and perhaps life as well. Yet another slideshow features imagery of Saturn from Cassini plus earlier space missions. And as a bonus, we have a slideshow featuring Saturn and its moons that celebrates the 10th anniversary of Cassini's launch in 1997.
“The Voyage of the Millennium” is our three-part retelling of America’s early space saga, in audio and historic imagery.
Photojournalist Roger Ressmeyer went through stacks of NASA images and selected his favorites to show how Mercury and Gemini led up to the Apollo program and 1969's first moon landing.
Slideshow: ‘We choose to go to the moon’
Part 1: In the beginning, there were so many questions: Did the success of Sputnik mean the Soviets were taking control of the skies? Could the Americans ever hope to catch up in the space race? Would astronauts survive being sent up on rockets that had an annoying tendency to blow up? In 1962, President John Kennedy addressed those questions dramatically: “We choose to go to the moon,” he said, not because it was easy, but because it was hard. For seven years, hundreds of thousands of people — engineers and explorers — worked to answer the challenge. And some paid a terrible price.
Slideshow: ‘One giant leap for mankind’
Part 2: Three men rose into space on July 16, 1969, to begin the world's greatest adventure: Apollo 11. On July 20, while Michael Collins stayed aboard the Apollo command module in lunar orbit, Neil Armstrong and Buzz Aldrin headed for the surface in the lunar lander, known as Eagle. With the fuel supply dwindling, Armstrong realized that the computerized trajectory was sending them toward a field of boulders. He overrode the computer controls, setting the lander down on the lunar shore with 20 seconds’ worth of fuel to spare. The Eagle had landed. The space race was won. The images were icons for a new age: Earth and moon against the blackness of space ... a flag that falsely seemed to flutter in the vacuum ... bootprints in moondust. It was, as Armstrong said, “one giant leap for mankind” — a mental as well as a technological leap.
Slideshow: ‘We leave as we came’
Part 3: Apollo 11 marked the achievement of Kennedy’s goal: America had proven its prowess in space. What else was there to prove? For scientists, there was still a world of questions to be answered: How was the moon formed? What forces shaped it over billions of years? What could the moon tell us about Earth’s origins, and its fate? New tools, such as a lunar rover, were devised to make the quest more efficient. But for astronauts, there were the same old risks, the same potential price, as demonstrated by the near-tragedy of Apollo 13. Further giant leaps would have to wait. After Apollo, America would have to take more gradual, safer steps. When Apollo 17’s Gene Cernan stepped off the lunar soil in 1972, he knew it would be a long time before the next moonwalker arrived, although he never expected that the gap would extend beyond a quarter-century. “We leave as we came,” he said, “and God willing, as we shall return, with peace and hope for all mankind.”
The active and dynamic process of acquiring skills and understandings which are needed for survival and well-being. At the individual level, learning improves the quality of life of the participant. At a broader social level, it has the potential to transform cultures, societies, politics and the world we live in. The most profound period of learning takes place in childhood where accelerated development of the brain takes place (researchers estimate that around 16 billion synaptic receptors per second are developing in the brains of children between 12 - 18 months of age). Tragically, the damaging effects of childhood trauma, stress, neglect and exposure to drugs and alcohol in the uterus are affecting the physical development of an alarming number of children. This has frightening implications, not only for the life prospects and outcomes of individuals but also for society as a whole. Learning is a cornerstone for participation in life at every level. It is what makes humans the most powerful and influential species on the planet. It is vital that lifelong learning be nurtured, respected and participated in by all. Learning is as natural as breathing and is equally essential.
RATE GYROS are used in weapons control equipment, aircraft instrumentation, inertial navigation, and in many other applications to detect and measure angular rates of change. A rate gyro (sometimes called a rate-of-turn gyro) consists of a spinning rotor mounted in a single gimbal, as shown in figure 3-13. A gyro mounted in this manner has one degree of freedom; that is, it is free to tilt in only one direction. The rotor in a rate gyro is restrained from precessing by some means, usually a spring arrangement. This is done to limit precession and to return the rotor to a neutral position when there is no angular change taking place. Remember, the amount of precession of a gyro is proportional to the force that causes the precession. Figure 3-13. - Rate gyro (single degree of freedom). If you attempt to change the gyro's plane of rotor spin by rotating the case about the input axis, the gyro will precess as shown in figure 3-14. From what you learned earlier in this chapter, the gyro does not appear to be obeying the rules for precession. However, turning the gyro case has the same effect as applying a torque on the spin axis. This is illustrated by arrow F in figure 3-14. You can determine the direction of precession by using the right-hand rule we discussed earlier. Figure 3-14. - Rate gyro precession. The force applied at F will cause the gyro to precess at right angles to the force. Likewise, attempting to turn the gyro case will cause the same result. The gyro will precess, as shown by the arrows, around the Y-Y axis (output axis). Since the rate of precession is proportional to the applied force, you can increase the precession by increasing the speed with which you are moving the gyro case. In other words, you have a rate gyro. The faster you turn the case, the more the gyro will precess, since the amount of precession is proportional to the rate at which you are turning the gyro case. This characteristic of a gyro, when properly used, fits the requirements needed to sense the rate of motion about any axis. Figure 3-15 shows a method of restraining the precession of a gyro to permit the calculation of an angle. Springs have been attached to the crossarm of the output shaft. These springs restrain the free precession of the gyro. The gyro may use other types of restraint, but no matter what type of restraint is used, the gyro is harnessed to produce some useful work. Figure 3-15. - Precession of a spring restrained rate gyro. As the gyro precesses, it exerts a precessional force against the springs that is proportional to the momentum of the spinning wheel and the applied force. For example, suppose you rotate the gyro case (fig. 3-15) at a speed corresponding to a horizontal force of 2 pounds at F. Obviously, the gyro will precess; and as it does, it will cause the crossarm to pull up on spring A with a certain force, say 1 pound. (This amount of force would vary with the length of the crossarm.) If you continue to turn the gyro case at this rate, the precession of the gyro will continually exert a pull on the spring.
More precisely, the gyro will precess until the 1 pound pull of the crossarm is exactly counterbalanced by the tension of the spring; it will remain in a fixed position, as shown in figure 3-15. That is, it will remain in the precessed position as long as you continue to rotate the gyro case at the same, constant speed. A pointer attached to the output axis could be used with a calibrated scale to measure precise angular rates. When you stop moving the case, you remove the force at F, and the gyro stops precessing. The spring is still exerting a pull, however, so it pulls the crossarm back to the neutral position and returns the pointer to "zero." Suppose you now rotate the gyro case at a speed twice as fast as before, and in the same direction. This will be equal to a 4-pound force applied at F and a resulting 2-pound pull by the crossarm on spring A. In this situation the gyro will precess twice as far before the tension on the restraining spring equals the pull on the crossarm. Precession increases when the rate of rotation increases, as shown in figure 3-16. Figure 3-16. - Precession is proportional to the rate of rotation. Another type of rate gyro (often used in inertial navigation equipment) is the floated gyro unit. This unit generally uses a restraint known as a torsion bar. The advantage of the torsion bar over the spring is that the torsion bar needs no lever arm to exert torque. The torsion bar is mounted along the output axis (fig. 3-17), and produces restraining torque in either direction by twisting instead of pulling. Also, there is no gimbal bearing friction to cause interference with gyro operation. Figure 3-17. - Torsion bar-restrained floated rate gyro. A fluid surrounds the gyro sphere and provides flotation. It also provides protection from shock, and damps the oscillations resulting from sudden changes in the angular rate input. In this gyro, the inner gimbal displacement must be measured with some type of electrical pickoff. As the gyro case is rotated about the input axis, clockwise or counterclockwise, a precession torque will be developed about the output axis that will cause the inner gimbal to exert torque against the torsion bars. The torsion bars provide a restraining torque proportional to the amount of the inner gimbal's displacement. When the exerted gimbal torque is exactly opposed by the restraining torque provided by the torsion bars, the inner gimbal displacement will be proportional to the rate of rotation of the gyro case about the input axis. The pickoff measures this displacement and provides a signal whose amplitude and polarity (or phase) represent the direction and magnitude of the input angular velocity. The important point to remember is that every "rate" gyro measures the RATE OF ROTATION ABOUT ITS INPUT AXIS. Up to this point, we have illustrated only basic gyros. We used these basic, or simple, gyros to explain their principles of operation. In actuality, the rate gyros used in typical modern day weapon systems are considerably more complex, and in some cases, very compact. Figure 3-18 shows a cutaway view of a rate gyro used in our Navy's missile systems and aircraft. Figure 3-18. - Rate gyro, cutaway view.
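The steady-state behaviour described above can be summed up in one relationship: the gyroscopic torque about the output axis equals the rotor's spin angular momentum multiplied by the input rate, and at equilibrium this torque is balanced by the spring or torsion-bar restraint. The short Python sketch below illustrates that relationship; it is not taken from the manual, and the numerical values for angular momentum and stiffness are assumptions chosen only to show the proportionality.

```python
# Illustrative sketch of a restrained rate gyro at steady state.
# Gyroscopic torque about the output axis = H * omega_in, where H is the
# rotor spin angular momentum and omega_in is the case rotation rate.
# At equilibrium this torque is balanced by the restraint torque k * theta.

def rate_gyro_deflection(H, omega_in, k):
    """Steady-state output-axis deflection (radians).

    H        -- rotor spin angular momentum, N*m*s (assumed value)
    omega_in -- rotation rate of the case about the input axis, rad/s
    k        -- spring or torsion-bar stiffness, N*m per radian (assumed value)
    """
    return H * omega_in / k

H, k = 0.05, 2.0                          # illustrative numbers only
print(rate_gyro_deflection(H, 0.10, k))   # 0.0025 rad
print(rate_gyro_deflection(H, 0.20, k))   # 0.0050 rad
# Doubling the input rate doubles the deflection, which is exactly the
# "precession is proportional to the rate of rotation" behaviour in the text.
```

A pointer or electrical pickoff attached to the output axis then reads this deflection directly as an angular-rate signal.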
As is the case for geoscientists on Earth, it is of fundamental importance for planetary researchers to have precise maps of the areas they are investigating. The creation of global image mosaics and atlases of the icy moons is therefore an important long-term project of the Cassini mission. The Cassini team at the DLR Institute of Planetary Research in Berlin uses the images from the camera system on the Cassini space probe for this purpose. The system has collected image data from various distances during the numerous flybys of Saturn's large icy moons. The example shows a map sheet from the northern hemisphere of the icy moon Enceladus at a scale of 1:500,000; this means that one centimetre on the original map sheet (105 centimetres by 75 centimetres) corresponds to five kilometres (500,000 centimetres) in reality. This representation is referred to in cartography as the 'Lambert conformal conic projection'. Enceladus has a diameter of 504 kilometres and is of particular interest to planetary researchers because of its 'cryovolcanic' ice eruptions, which are ejected into space from the southern polar region. The map is called 'Shahrazad' (Scheherazade). DLR cartographers give all surface phenomena on Enceladus, such as craters, plains or fissures, names from the oriental tales of the Arabian Nights – following rules set by the International Astronomical Union (IAU). Shahrazad is the storyteller in these tales. Credit: NASA/JPL/DLR/Space Science Institute.
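The stated scale can be turned into a small conversion helper. The sketch below is illustrative and not part of the DLR text; it simply encodes the 1:500,000 ratio quoted above, so that 1 cm measured on the sheet corresponds to 5 km on Enceladus.

```python
# Minimal sketch: convert a distance measured on a printed map to ground
# distance, given the scale denominator (1:500,000 for the Enceladus sheet).

def map_to_ground_km(map_cm, scale_denominator=500_000):
    """Ground distance in kilometres for map_cm centimetres measured on the map."""
    ground_cm = map_cm * scale_denominator   # same units as the map measurement
    return ground_cm / 100_000               # 100,000 cm per kilometre

print(map_to_ground_km(1.0))   # 5.0  -> 1 cm on the sheet is 5 km on the surface
print(map_to_ground_km(2.4))   # 12.0 -> a 2.4 cm feature spans about 12 km
```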
1850 United States Census The United States Census of 1850 was the seventh census of the United States. It was done on June 1, 1850. It found the population of the United States to be 23,191,876. This was an increase of 35.9 percent from the 1840 Census. The total population included 3,204,313 slaves. This was the first census where there was an attempt to collect information about every member of every household, including women, children, and slaves. It was also the first census to ask about place of birth.
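The quoted growth figure is easy to check with a quick calculation. The snippet below is my own arithmetic verification rather than part of the article; the 1850 total is taken from the text, while the 1840 total of 17,069,453 is the commonly cited figure for the previous census and should be treated here as an assumption.

```python
# Quick check of the 35.9 percent growth between the 1840 and 1850 censuses.
# pop_1850 comes from the text above; pop_1840 is an assumed value.

pop_1850 = 23_191_876
pop_1840 = 17_069_453          # assumed 1840 census total

growth = (pop_1850 - pop_1840) / pop_1840
print(f"{growth:.1%}")          # 35.9% -- matches the increase quoted above
```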
New Zealander George Vernon Hudson proposed the modern idea of daylight saving in 1895. Germany and Austria-Hungary organized the first implementation, starting on 30 April 1916. Many countries have used it at various times since then, particularly since the energy crisis of the 1970s. The practice has received both advocacy and criticism. Putting clocks forward benefits retailing, sports, and other activities that exploit sunlight after working hours, but can cause problems for evening entertainment and for other activities tied to the sun (such as farming) or to darkness (such as fireworks shows). Although some early proponents of DST aimed to reduce evening use of incandescent lighting (formerly a primary use of electricity), modern heating and cooling usage patterns differ greatly, and research about how DST currently affects energy use is limited or contradictory. DST clock shifts sometimes complicate timekeeping and can disrupt meetings, travel, billing, record keeping, medical devices, heavy equipment, and sleep patterns. Software can often adjust computer clocks automatically, but this can be limited and error-prone, particularly when various jurisdictions change the dates and timings of DST changes.
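The point about software adjusting clocks automatically can be illustrated with a timezone-aware library. The sketch below uses Python's standard-library zoneinfo (3.9+); the zone and date are purely illustrative examples of a spring-forward transition, not anything prescribed by the text.

```python
# Illustrative sketch: how timezone-aware software represents a DST change.
# The zone and date are example choices only (US spring-forward, 14 March 2021).
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

new_york = ZoneInfo("America/New_York")
before = datetime(2021, 3, 14, 6, 30, tzinfo=timezone.utc)  # 01:30 local, standard time
after = before + timedelta(hours=1)                          # one hour later in UTC

print(before.astimezone(new_york).isoformat())  # 2021-03-14T01:30:00-05:00
print(after.astimezone(new_york).isoformat())   # 2021-03-14T03:30:00-04:00
# The UTC offset changes from -05:00 to -04:00 and the local wall clock jumps
# from 01:30 to 03:30: the hour from 02:00 to 03:00 simply does not exist,
# which is why naive schedulers and billing systems can misbehave around DST.
```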
Organic acid anhydride
An organic acid anhydride is an acid anhydride that is an organic compound. An acid anhydride is a compound that has two acyl groups bonded to the same oxygen atom. A common type of organic acid anhydride is a carboxylic anhydride, where the parent acid is a carboxylic acid and the formula of the anhydride is (RC(O))2O. Symmetrical acid anhydrides of this type are named by replacing the word acid in the name of the parent carboxylic acid by the word anhydride. Thus, (CH3CO)2O is called acetic anhydride. Mixed (or unsymmetrical) acid anhydrides, such as acetic formic anhydride (see below), are also known. One or both acyl groups of an acid anhydride may be derived from another type of organic acid, such as a sulfonic acid or a phosphonic acid, and one of the acyl groups can be derived from an inorganic acid such as phosphoric acid. The mixed anhydride 1,3-bisphosphoglycerate, an intermediate in the formation of ATP via glycolysis, is the anhydride of 3-phosphoglyceric acid and phosphoric acid. Acidic oxides are often classified as acid anhydrides.
Acid anhydrides are prepared in industry by diverse means. Acetic anhydride is mainly produced by the carbonylation of methyl acetate, while maleic anhydride is produced by the oxidation of benzene or butane. Laboratory routes emphasize the dehydration of the corresponding acids; the conditions vary from acid to acid, but phosphorus pentoxide is a common dehydrating agent. Acid chlorides are also effective precursors; for example, acetic formic anhydride can be prepared from acetyl chloride and sodium formate:
- CH3C(O)Cl + HCO2Na → HCO2COCH3 + NaCl
Mixed anhydrides containing the acetyl group can be prepared from ketene:
- RCO2H + H2C=C=O → RCO2C(O)CH3
Acid anhydrides are a source of reactive acyl groups, and their reactions and uses resemble those of acyl halides. In reactions with protic substrates, they afford equal amounts of the acylated product and the carboxylic acid:
- RC(O)OC(O)R + HY → RC(O)Y + RCO2H
for HY = HOR (alcohols), HNR'2 (ammonia, primary or secondary amines), or an aromatic ring (see Friedel-Crafts acylation). Acid anhydrides tend to be less electrophilic than acyl chlorides, and only one acyl group is transferred per molecule of acid anhydride, which leads to a lower atom efficiency. The low cost of acetic anhydride, however, makes it a common choice for acetylation reactions.
Applications and occurrence of acid anhydrides
Naphthalenetetracarboxylic dianhydride, a building block for complex organic compounds, is an example of a dianhydride. The mixed anhydride 1,3-bisphosphoglycerate occurs widely in metabolic pathways, and 3'-phosphoadenosine-5'-phosphosulfate (PAPS), a mixed anhydride of sulfuric and phosphoric acids, is the most common coenzyme in biological sulfate transfer reactions. Acetic anhydride is a major industrial chemical widely used for preparing acetate esters, e.g. cellulose acetate. Maleic anhydride is the precursor to various resins by copolymerization with styrene and is a dienophile in the Diels-Alder reaction. Dianhydrides, molecules containing two acid anhydride functions, are used to synthesize polyimides and sometimes polyesters and polyamides. Examples of dianhydrides include pyromellitic dianhydride (PMDA), 3,3',4,4'-oxydiphthalic dianhydride (ODPA), 3,3',4,4'-benzophenone tetracarboxylic dianhydride (BTDA), 4,4'-(hexafluoroisopropylidene)diphthalic anhydride (6FDA), benzoquinonetetracarboxylic dianhydride, and ethylenetetracarboxylic dianhydride.
Polyanhydrides are a class of polymers characterized by anhydride bonds that connect the repeat units of the polymer backbone. Natural organic acid anhydrides are rare because of the reactivity of the functional group; examples include cantharidin, from species of blister beetle such as the Spanish fly (Lytta vesicatoria), and tautomycin, from the bacterium Streptomyces spiroverticillatus. Sulfur can replace oxygen, either in the carbonyl group or in the bridge. In the former case, the name of the acyl group is enclosed in parentheses to avoid ambiguity, e.g., (thioacetic) anhydride (CH3C(S)OC(S)CH3). When two acyl groups are attached to the same sulfur atom, the resulting compound is called a thioanhydride, e.g., acetic thioanhydride ((CH3C(O))2S).
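The remark above that only one acyl group of an anhydride is transferred can be made concrete with an atom-economy estimate. The sketch below is my own illustration, not part of the article: it uses the acetylation of salicylic acid to aspirin as a hypothetical example and computes molecular weights from standard atomic masses.

```python
# Illustrative sketch: atom economy of an acetylation using acetic anhydride
# versus acetyl chloride. Only one acyl group of the anhydride ends up in the
# product; the other leaves as acetic acid.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999, "Cl": 35.453}

def mol_wt(formula):
    """Molecular weight from an {element: count} dict."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

salicylic_acid   = {"C": 7, "H": 6, "O": 3}
acetic_anhydride = {"C": 4, "H": 6, "O": 3}
acetyl_chloride  = {"C": 2, "H": 3, "O": 1, "Cl": 1}
aspirin          = {"C": 9, "H": 8, "O": 4}

def atom_economy(product, reactants):
    """Fraction of reactant mass that ends up in the desired product."""
    return mol_wt(product) / sum(mol_wt(r) for r in reactants)

print(f"anhydride route:     {atom_economy(aspirin, [salicylic_acid, acetic_anhydride]):.1%}")
print(f"acyl chloride route: {atom_economy(aspirin, [salicylic_acid, acetyl_chloride]):.1%}")
# ~75% with the anhydride versus ~83% with the acyl chloride: the anhydride
# route has the lower atom efficiency noted in the text, but the reagent's low
# cost often outweighs that.
```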
- Understand the progressive stages of development in number.
- Understand how strategy and knowledge are interrelated.
- Identify clear links between the Number Framework and National Curriculum levels and National Standards.
Now if I'm being honest, when I read these key points at the beginning of the meeting, my enthusiasm was not at its highest, and I was expecting to leave the staffroom at 5pm in a more typical 'post staff-meeting fashion'. However, once we got started, I realised that these were in fact the exact points I have been working to understand better in my own inquiry into maths. In my own head I translated the points of focus to:
- What do learners need to understand at each stage?
- Where, when and how do strategy and knowledge relate?
- How do the Pink books connect to the curriculum?
A selection of slides from Jo's presentation
A key point that I took away from the PD was that learners who are stuck at a lower stage of maths in their knowledge will have a hard time using (or be unable to use) a strategy from a higher stage. I suddenly thought about my boys who seem to be stuck at stage 5, and realised this is part of where I have been going wrong. They are still battling with grouping and place value, and this is preventing them from moving onto new strategies for solving trickier problems. I feel more confident in targeting their specific needs now going forward in their lessons.
The prime directive of all organisms is to survive and reproduce; this is also true of viruses, which in most cases are considered a nuisance to humans.
Viruses - An Overview
Viruses possess both living and non-living characteristics. The characteristic that sets viruses apart from other organisms is that they require another organism to host them in order to survive, hence they are deemed obligate parasites. Viruses can be spread in the following exemplar ways:
- Airborne - Viruses that infect their hosts from the open air
- Blood Borne - Transmission of the virus between organisms when infected blood enters an organism's circulatory system
- Contamination - Caused by the consumption of materials, such as water and food, that contain viruses
Therefore viruses have many means of getting transmitted from one organism to another.
Cell Assimilation by a Virus
Viruses are tiny micro-organisms, and due to their size and simplicity, they are unable to replicate independently. Therefore, when a virus is situated in a host, it requires a means of reproducing before it degrades without having produced more viruses. This is done by altering the genetic make-up of a host cell so that it starts coding for the materials required to make more viruses. By altering the cell's instructions, more viruses can be produced which, in turn, can infect more cells and continue their existence as a species. The following is a step-by-step guide to how an example bacteriophage (a virus that infects bacteria) takes control of its host cell and reproduces itself.
- The virus approaches the bacterium and attaches itself to the cell membrane
- The tail gives the virus the means to thrust its genetic information into the bacterium
- Nucleotides from the host are 'stolen' in order for the virus to create copies of itself
- The viral DNA alters the genetic coding of the host cell to create protein coats for the newly created viral DNA strands
- Each viral DNA strand is packaged into its protein coat
- The cell, swollen with many copies of the original virus, bursts, allowing the viruses to attach themselves to other nearby cells
- The process begins all over again with many more viruses attacking the hosts' cells
Without a means of defence, the host that is under attack from the virus would soon die. The next page looks at how organisms defend themselves from these ruthless viruses.
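A toy calculation shows how quickly the lytic cycle described above amplifies virus numbers. The sketch below is my own illustration, not part of the text: it assumes every infected cell completes one cycle and releases a fixed burst of new virions, each of which infects a fresh host cell. Real burst sizes, cycle times, and infection efficiencies vary widely.

```python
# Toy model of exponential growth through repeated lytic cycles.
# burst_size and initial_virions are illustrative assumptions only.

def virions_after(cycles, burst_size=100, initial_virions=1):
    """Number of free virions after a given number of complete lytic cycles."""
    virions = initial_virions
    for _ in range(cycles):
        virions *= burst_size       # each infecting virion yields a full burst
    return virions

for n in range(4):
    print(f"after {n} cycles: {virions_after(n):,} virions")
# after 0 cycles: 1 | after 1: 100 | after 2: 10,000 | after 3: 1,000,000
# Without a host defence response, a handful of virions quickly becomes millions.
```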
Climate of the Arctic
The climate of the Arctic is characterized by long, cold winters and short, cool summers. There is a large amount of variability in climate across the Arctic, but all regions experience extremes of solar radiation in both summer and winter. Some parts of the Arctic are covered by ice (sea ice, glacial ice, or snow) year-round, and nearly all parts of the Arctic experience long periods with some form of ice on the surface. Average January temperatures range from about −34 °C to 0 °C (−29 to +32 °F), and winter temperatures can drop below −50 °C (−58 °F) over large parts of the Arctic. Average July temperatures range from about −10 to +10 °C (14 to 50 °F), with some land areas occasionally exceeding 30 °C (86 °F) in summer. The Arctic consists of ocean that is largely surrounded by land. As such, the climate of much of the Arctic is moderated by the ocean water, which can never have a temperature below −2 °C (28 °F). In winter, this relatively warm water, even though covered by the polar ice pack, keeps the North Pole from being the coldest place in the Northern Hemisphere, and it is also part of the reason that Antarctica is so much colder than the Arctic. In summer, the presence of the nearby water keeps coastal areas from warming as much as they might otherwise.
Overview of the Arctic
There are different definitions of the Arctic. The most widely used definition, the area north of the Arctic Circle (where, on the June solstice, the sun does not set), is used in astronomical and some geographical contexts. In the context of climate, however, the two most widely used definitions are the area north of the northern tree line and the area in which the average temperature of the warmest month is less than 10 °C (50 °F); the two are nearly coincident over most land areas (NSIDC). This definition of the Arctic can be further divided into four different regions:
- The Arctic Basin, which includes the Arctic Ocean within the average minimum extent of sea ice.
- The Canadian Arctic Archipelago, which includes the large and small islands, except Greenland, on the Canadian side of the Arctic, and the waters between them.
- The entire island of Greenland, although its ice sheet and ice-free coastal regions have different climatic conditions.
- The Arctic waters that are not covered by sea ice in late summer, including Hudson Bay, Baffin Bay, Ungava Bay, the Davis, Denmark, Hudson and Bering Straits, and the Labrador, Norwegian (ice-free all year), Greenland, Baltic, Barents (southern part ice-free all year), Kara, Laptev, Chukchi, Okhotsk, and sometimes the Beaufort and Bering Seas.
Moving inland from the coast over mainland North America and Eurasia, the moderating influence of the Arctic Ocean quickly diminishes, and the climate transitions from Arctic to subarctic, generally in less than 500 kilometres (300 mi), and often over a much shorter distance.
History of Arctic climate observation
Due to the lack of major population centres in the Arctic, weather and climate observations from the region tend to be widely spaced and of short duration compared to the midlatitudes and tropics.
Though the Vikings explored parts of the Arctic over a millennium ago, and small numbers of people have been living along the Arctic coast for much longer, scientific knowledge about the region was slow to develop; the large islands of Severnaya Zemlya, just north of the Taymyr Peninsula on the Russian mainland, were not discovered until 1913, and not mapped until the early 1930s (Serreze and Barry, 2005).
Early European exploration
Much of the historical exploration of the Arctic was motivated by the search for the Northwest and Northeast Passages. Sixteenth- and seventeenth-century expeditions were largely driven by traders in search of these shortcuts between the Atlantic and the Pacific. These forays into the Arctic did not venture far from the North American and Eurasian coasts, and were unsuccessful at finding a navigable route through either passage. National and commercial expeditions continued to expand the detail on maps of the Arctic through the eighteenth century, but largely neglected other scientific observations. Expeditions from the 1760s to the middle of the 19th century were also led astray by attempts to sail north because of the belief by many at the time that the ocean surrounding the North Pole was ice-free. These early explorations did provide a sense of the sea ice conditions in the Arctic and occasionally some other climate-related information. By the early 19th century some expeditions were making a point of collecting more detailed meteorological, oceanographic, and geomagnetic observations, but they remained sporadic. Beginning in the 1850s regular meteorological observations became more common in many countries, and the British navy implemented a system of detailed observation (Serreze and Barry, 2005). As a result, expeditions from the second half of the nineteenth century began to provide a picture of the Arctic climate.
Early European observing efforts
The first major effort by Europeans to study the meteorology of the Arctic was the First International Polar Year (IPY) in 1882 to 1883. Eleven nations provided support to establish twelve observing stations around the Arctic. The observations were not as widespread or long-lasting as would be needed to describe the climate in detail, but they provided the first cohesive look at the Arctic weather. In 1884 the wreckage of the Jeannette, a ship abandoned three years earlier off Russia's eastern Arctic coast, was found on the coast of Greenland. This caused Fridtjof Nansen to realize that the sea ice was moving from the Siberian side of the Arctic to the Atlantic side. He decided to use this motion by freezing a specially designed ship, the Fram, into the sea ice and allowing it to be carried across the ocean. Meteorological observations were collected from the ship during its crossing from September 1893 to August 1896. This expedition also provided valuable insight into the circulation of the ice surface of the Arctic Ocean. In the early 1930s the first significant meteorological studies were carried out on the interior of the Greenland ice sheet. These provided knowledge of perhaps the most extreme climate of the Arctic, and also the first suggestion that the ice sheet lies in a depression of the bedrock below (now known to be caused by the weight of the ice itself). Fifty years after the first IPY, in 1932 to 1933, a second IPY was organized.
This one was larger than the first, with 94 meteorological stations, but World War II delayed or prevented the publication of much of the data collected during it (Serreze and Barry 2005). Another significant moment in Arctic observing before World War II occurred in 1937 when the USSR established the first of over 30 North-Pole drifting stations. This station, like the later ones, was established on a thick ice floe and drifted for almost a year, its crew observing the atmosphere and ocean along the way.
Cold-War era observations
Following World War II, the Arctic, lying between the USSR and North America, became a front line of the Cold War, inadvertently and significantly furthering our understanding of its climate. Between 1947 and 1957, the United States and Canadian governments established a chain of stations along the Arctic coast known as the Distant Early Warning Line (DEWLINE) to provide warning of a Soviet nuclear attack. Many of these stations also collected meteorological data. The Soviet Union was also interested in the Arctic and established a significant presence there by continuing the North-Pole drifting stations. This program operated continuously, with 30 stations in the Arctic from 1950 to 1991. These stations collected data that are valuable to this day for understanding the climate of the Arctic Basin. This map shows the location of Arctic research facilities during the mid-1970s and the tracks of drifting stations between 1958 and 1975. Another benefit from the Cold War was the acquisition of observations from United States and Soviet naval voyages into the Arctic. In 1958 an American nuclear submarine, the Nautilus, was the first ship to reach the North Pole. In the decades that followed, submarines regularly roamed under the Arctic sea ice, collecting sonar observations of the ice thickness and extent as they went. These data became available after the Cold War, and have provided evidence of thinning of the Arctic sea ice. The Soviet navy also operated in the Arctic, including a sailing of the nuclear-powered ice breaker Arktika to the North Pole in 1977, the first time a surface ship reached the pole. Scientific expeditions to the Arctic also became more common during the Cold-War decades, sometimes benefiting logistically or financially from the military interest. In 1966 the first deep ice core in Greenland was drilled at Camp Century, providing a glimpse of climate through the last ice age. This record was lengthened in the early 1990s when two deeper cores were taken from near the center of the Greenland Ice Sheet. Beginning in 1979 the Arctic Ocean Buoy Program (the International Arctic Buoy Program since 1991) has been collecting meteorological and ice-drift data across the Arctic Ocean with a network of 20 to 30 buoys. The end of the Soviet Union in 1991 led to a dramatic decrease in regular observations from the Arctic. The Russian government ended the system of drifting North Pole stations, and closed many of the surface stations in the Russian Arctic. Likewise the United States and Canadian governments cut back on spending for Arctic observing as the perceived need for the DEWLINE declined. As a result, the most complete collection of surface observations from the Arctic is for the period 1960 to 1990 (Serreze and Barry, 2005). The extensive array of satellite-based remote-sensing instruments now in orbit has helped to replace some of the observations that were lost after the Cold War, and has provided coverage that was impossible without them.
Routine satellite observations of the Arctic began in the early 1970s, expanding and improving ever since. A result of these observations is a thorough record of sea-ice extent in the Arctic since 1979; the decreasing extent seen in this record (NASA, NSIDC), and its possible link to anthropogenic global warming, has helped increase interest in the Arctic in recent years. Today's satellite instruments provide routine views of not only cloud, snow, and sea-ice conditions in the Arctic, but also of other, perhaps less-expected, variables, including surface and atmospheric temperatures, atmospheric moisture content, winds, and ozone concentration. Civilian scientific research on the ground has certainly continued in the Arctic, and it is getting a boost from 2007 to 2009 as nations around the world increase spending on polar research as part of the third International Polar Year. During these two years thousands of scientists from over 60 nations will co-operate to carry out over 200 projects to learn about physical, biological, and social aspects of the Arctic and Antarctic (IPY). Modern researchers in the Arctic also benefit from computer models. These pieces of software are sometimes relatively simple, but often become highly complex as scientists try to include more and more elements of the environment to make the results more realistic. The models, though imperfect, often provide valuable insight into climate-related questions that cannot be tested in the real world. They are also used to try to predict future climate and the effect that changes to the atmosphere caused by humans may have on the Arctic and beyond. Another interesting use of models has been to combine them with historical data to produce a best estimate of the weather conditions over the entire globe during the last 50 years, filling in regions where no observations were made (ECMWF). These reanalysis datasets help compensate for the lack of observations over the Arctic. Almost all of the energy available to the Earth's surface and atmosphere comes from the sun in the form of solar radiation (light from the sun, including invisible ultraviolet and infrared light). Variations in the amount of solar radiation reaching different parts of the Earth are a principal driver of global and regional climate. Averaged over a year, latitude is the most important factor determining the amount of solar radiation reaching the top of the atmosphere; the incident solar radiation decreases smoothly from the Equator to the poles. This variation leads to the most obvious observation of regional climate: temperature tends to decrease with increasing latitude. In addition the length of each day, which is determined by the season, has a significant impact on the climate. The 24-hour days found near the poles in summer result in a large daily-average solar flux reaching the top of the atmosphere in these regions. On the June solstice 36% more solar radiation reaches the top of the atmosphere over the course of the day at the North Pole than at the Equator (Serreze and Barry, 2005). However, in the six months from the September equinox to the March equinox the North Pole receives no sunlight. Images from the NOAA's North Pole Web Cam illustrate Arctic daylight, darkness and the changing of the seasons. The climate of the Arctic also depends on the amount of sunlight reaching the surface and on how much of that sunlight the surface absorbs.
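The 36% figure quoted above can be reproduced from the standard expression for daily-mean insolation at the top of the atmosphere. The sketch below is my own check rather than part of the article; the solar constant of about 1361 W/m^2 is an assumed value, and the small variation in the Earth-Sun distance is ignored.

```python
# Sketch: daily-mean top-of-atmosphere insolation on the June solstice,
# used to check that the North Pole receives about 36% more than the Equator.
import math

S0 = 1361.0                                # assumed solar constant, W/m^2
DECLINATION_SOLSTICE = math.radians(23.44)  # solar declination on the June solstice

def daily_mean_insolation(lat_deg, declination=DECLINATION_SOLSTICE):
    """Daily-average solar flux (W/m^2) at the top of the atmosphere."""
    phi, delta = math.radians(lat_deg), declination
    # Sunrise/sunset hour angle, clamped for polar day (h0 = pi) or night (h0 = 0).
    cos_h0 = max(-1.0, min(1.0, -math.tan(phi) * math.tan(delta)))
    h0 = math.acos(cos_h0)
    return (S0 / math.pi) * (h0 * math.sin(phi) * math.sin(delta)
                             + math.cos(phi) * math.cos(delta) * math.sin(h0))

pole, equator = daily_mean_insolation(90.0), daily_mean_insolation(0.0)
print(round(pole), round(equator), f"{pole / equator - 1:.0%}")
# ~541 W/m^2 at the pole versus ~397 W/m^2 at the equator: about 36% more,
# because the midnight sun shines for the full 24 hours at the pole.
```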
Variations in the frequency of cloud cover can cause significant variations in the amount of solar radiation reaching the surface at locations with the same latitude. Changes in surface conditions, such as the appearance or disappearance of snow or sea ice, can cause large changes in the surface albedo, the fraction of the solar radiation reaching the surface that is reflected rather than absorbed. In the Arctic, during the winter months of November through February, the sun remains very low in the sky or does not rise at all. Where it does rise, the days are short, and the sun's low position in the sky means that, even at noon, not much energy is reaching the surface. Furthermore, most of the small amount of solar radiation that reaches the surface is reflected away by the bright snow cover. Cold snow reflects between 70% and 90% of the solar radiation that reaches it (Serreze and Barry, 2005), and most of the Arctic, with the exception of the ice-free parts of the sea, has snow covering the land or ice surface in winter. These factors result in a negligible input of solar energy to the Arctic in winter; the only things keeping the Arctic from continuously cooling all winter are the transport of warmer air and ocean water into the Arctic from the south and the transfer of heat from the subsurface land and ocean (both of which gain heat in summer and release it in winter) to the surface and atmosphere. Arctic days lengthen rapidly in March and April, and the sun rises higher in the sky during this time as well. Both of these changes bring more solar radiation to the Arctic during this period. During these early months of Northern Hemisphere spring most of the Arctic is still experiencing winter conditions, but with the addition of sunlight. The continued low temperatures, and the persisting white snow cover, mean that this additional energy reaching the Arctic from the sun is slow to have a significant impact because it is mostly reflected away without warming the surface. By May, temperatures are rising as 24-hour daylight reaches many areas, but most of the Arctic is still snow-covered. The Arctic surface therefore reflects more than 70% of the sun's energy that reaches it over all areas except the Norwegian Sea and southern Bering Sea, where the ocean is ice free, and some of the land areas adjacent to these seas, where the moderating influence of the open water helps melt the snow early (Serreze and Barry, 2005). In most of the Arctic the significant snow melt begins in late May or sometime in June. This begins a feedback, as melting snow reflects less solar radiation (50% to 60%) than dry snow, allowing more energy to be absorbed and the melting to take place faster. As the snow disappears on land, the underlying surfaces absorb even more energy, and begin to warm rapidly. The interior of Greenland differs from the rest of the Arctic. The low springtime cloud frequency there and the high elevation, which reduces the amount of solar radiation absorbed or scattered by the atmosphere, combine to give this region the highest surface flux of solar radiation anywhere in the Arctic. However, the high elevation, and corresponding lower temperatures, help keep the bright snow from melting, limiting the warming effect of all this solar radiation. At the North Pole on the June solstice, around 21 June, the sun circles the sky at 23.5° above the horizon.
This marks noon in the Pole's year-long day; from then until the September equinox, the sun will sink slowly toward the horizon, offering less and less solar radiation to the Pole. This period of the setting sun also roughly corresponds to summer in the Arctic. Across the rest of the Arctic the sun gets lower in the sky and the days become progressively shorter. As the Arctic continues receiving energy from the sun during this time, the land, which is mostly free of snow by now, can warm up on clear days when the wind is not coming from the cold ocean. Over the Arctic Ocean the snow cover on the sea ice disappears and ponds of melt water start to form on the sea ice, further reducing the amount of sunlight the ice reflects and helping more ice melt. Around the edges of the Arctic Ocean the ice will melt and break up, exposing the ocean water, which absorbs almost all of the solar radiation that reaches it, storing the energy in the water column. By July and August, most of the land is bare and absorbs more than 80% of the sun's energy that reaches the surface. Where sea ice remains, in the central Arctic Basin and the straits between the islands in the Canadian Archipelago, the many melt ponds and lack of snow cause about half of the sun's energy to be absorbed (Serreze and Barry, 2005), but this mostly goes toward melting ice since the ice surface cannot warm above freezing. Frequent cloud cover, exceeding 80% frequency over much of the Arctic Ocean in July (Serreze and Barry, 2005), reduces the amount of solar radiation that reaches the surface by reflecting much of it before it gets to the surface. Unusually clear periods can lead to increased sea-ice melt or higher temperatures (NSIDC). The interior of Greenland continues to have less cloud cover than most of the Arctic, so during the summer period, like in spring, this area receives more solar radiation at the surface than any other part of the Arctic. Again though, interior Greenland's permanent snow cover reflects over 80% of this energy away from the surface. In September and October the days get rapidly shorter, and in northern areas the sun disappears from the sky entirely. As the amount of solar radiation available to the surface rapidly decreases, the temperatures follow suit. The sea ice begins to refreeze, and eventually gets a fresh snow cover, causing it to reflect even more of the dwindling amount of sunlight reaching it. Likewise, the northern land areas receive their winter snow cover, which, combined with the reduced solar radiation at the surface, ensures an end to the warm days those areas may experience in summer. By November, winter is in full swing in most of the Arctic, and the small amount of solar radiation still reaching the region does not play a significant role in its climate. The Arctic is often perceived as a region stuck in a permanent deep freeze. While much of the region does experience very low temperatures, there is considerable variability with both location and season. Winter temperatures average below freezing over all of the Arctic except for small regions in the southern Norwegian and Bering Seas, which remain ice free throughout the winter. Average temperatures in summer are above freezing over all regions except the central Arctic Basin, where sea ice survives through the summer, and interior Greenland. The maps at right show the average temperature over the Arctic in January and July, generally the coldest and warmest months.
These maps were made with data from the NCEP/NCAR Reanalysis, which incorporates available data into a computer model to create a consistent global data set. Neither the models nor the data are perfect, so these maps may differ from other estimates of surface temperatures; in particular, most Arctic climatologies show temperatures over the central Arctic Ocean in July averaging just below freezing, a few degrees lower than these maps show (Serreze and Barry, 2005; USSR, 1985; CIA, 1978). An earlier climatology of temperatures in the Arctic, based entirely on available data, is shown in this map from the CIA Polar Regions Atlas (1978). The coldest location in the Northern Hemisphere is not in the Arctic, but rather in the interior of Russia's Far East, in the upper-right quadrant of the maps. This is due to the region's continental climate, far from the moderating influence of the ocean, and to the valleys in the region that can trap cold, dense air and create strong temperature inversions, where the temperature increases, rather than decreases, with height (Serreze and Barry, 2005). The lowest officially recorded temperature in the Northern Hemisphere is the subject of controversy, due to the type of instrumentation used. These temperatures were measured with spirit (alcohol) thermometers, which are less accurate than mercury thermometers; such measurements must be corrected, and the correction is usually positive, about 0.2 °C, though it is not so simple. According to "Climate of the USSR, issue 24, part I, Leningrad, 1956," the coldest temperature of −67.7 °C (−90 °F) occurred in Oymyakon on 6 February 1933 and in Verkhoyansk on 5 and 7 February 1892. However, this region is not part of the Arctic because its continental climate also allows it to have warm summers, with an average July temperature of 15 °C (59 °F). In the figure below showing station climatologies, the plot for Yakutsk is representative of this part of the Far East; Yakutsk has a slightly less extreme climate than Verkhoyansk. The Arctic Basin is typically covered by sea ice year round, which strongly influences its summer temperatures. It also experiences the longest period without sunlight of any part of the Arctic, and the longest period of continuous sunlight, though the frequent cloudiness in summer reduces the importance of this solar radiation. Despite its location centered on the North Pole, and the long period of darkness this brings, this is not the coldest part of the Arctic. In winter, the heat transferred from the −2 °C (28 °F) water through cracks in the ice and areas of open water helps to moderate the climate somewhat, keeping average winter temperatures around −30 to −35 °C (−22 to −31 °F). Minimum temperatures in this region in winter are around −50 °C (−58 °F). In summer, the sea ice keeps the surface from warming above freezing. Sea ice is mostly fresh water since the salt is rejected by the ice as it forms, so the melting ice has a temperature of 0 °C (32 °F), and any extra energy from the sun goes to melting more ice, not to warming the surface. Air temperatures, at the standard measuring height of about 2 meters above the surface, can rise a few degrees above freezing between late May and September, though they tend to be within a degree of freezing, with very little variability during the height of the melt season. In the figure above showing station climatologies, the lower-left plot, for NP 7–8, is representative of conditions over the Arctic Basin.
This plot shows data from the Soviet North Pole drifting stations, numbers 7 and 8. It shows the average temperature in the coldest months is in the −30s, and the temperature rises rapidly from April to May; July is the warmest month, and the narrowing of the maximum and minimum temperature lines shows the temperature does not vary far from freezing in the middle of summer; from August through December the temperature drops steadily. The small daily temperature range (the length of the vertical bars) results from the fact that the sun's elevation above the horizon does not change much or at all in this region during one day. Much of the winter variability in this region is due to clouds. Since there is no sunlight, the thermal radiation emitted by the atmosphere is one of this region's main sources of energy in winter. A cloudy sky can emit much more energy toward the surface than a clear sky, so when it is cloudy in winter, this region tends to be warm, and when it is clear, this region cools quickly (Serreze and Barry, 2005). In winter, the Canadian Archipelago experiences temperatures similar to those in the Arctic Basin, but in the summer months of June to August, the presence of so much land in this region allows it to warm more than the ice-covered Arctic Basin. In the station-climatology figure above, the plot for Resolute is typical of this region. The presence of the islands, most of which lose their snow cover in summer, allows the summer temperatures to rise well above freezing. The average high temperature in summer approaches 10 °C (50 °F), and the average low temperature in July is above freezing, though temperatures below freezing are observed every month of the year. The straits between these islands often remain covered by sea ice throughout the summer. This ice acts to keep the surface temperature at freezing, just as it does over the Arctic Basin, so a location on a strait would likely have a summer climate more like the Arctic Basin, but with higher maximum temperatures because of winds off of the nearby warm islands. Climatically, Greenland is divided into two very separate regions: the coastal region, much of which is ice free, and the inland ice sheet. The Greenland Ice Sheet covers about 80% of Greenland, extending to the coast in places, and has an average elevation of 2,100 m (6,900 ft) and a maximum elevation of 3,200 m (10,500 ft). Much of the ice sheet remains below freezing all year, and it has the coldest climate of any part of the Arctic. Coastal areas can be affected by nearby open water, or by heat transfer through sea ice from the ocean, and many parts lose their snow cover in summer, allowing them to absorb more solar radiation and warm more than the interior. Coastal regions on the northern half of Greenland experience winter temperatures similar to or slightly warmer than the Canadian Archipelago, with average January temperatures of −30 °C to −25 °C (−22 °F to −13 °F). These regions are slightly warmer than the Archipelago because of their closer proximity to areas of thin, first-year sea ice cover or to open ocean in the Baffin Bay and Greenland Sea. The coastal regions in the southern part of the island are influenced more by open ocean water and by frequent passage of cyclones, both of which help to keep the temperature there from being as low as in the north. As a result of these influences, the average temperature in these areas in January is considerably higher, between about −20 °C and −4 °C (−4 °F and +25 °F). 
The interior ice sheet escapes much of the influence of heat transfer from the ocean or from cyclones, and its high elevation also acts to give it a colder climate since temperatures tend to decrease with elevation. The result is winter temperatures that are lower than anywhere else in the Arctic, with average January temperatures of −45 °C to −30 °C (−49 °F to −22 °F), depending on location and on which data set is viewed. Minimum temperatures in winter over the higher parts of the ice sheet can drop below −60 °C (−76 °F; CIA, 1978). In the station climatology figure above, the Centrale plot is representative of the high Greenland Ice Sheet. In summer, the coastal regions of Greenland experience temperatures similar to the islands in the Canadian Archipelago, averaging just a few degrees above freezing in July, with slightly higher temperatures in the south and west than in the north and east. The interior ice sheet remains snow-covered throughout the summer, though significant portions do experience some snow melt (Serreze and Barry, 2005). This snow cover, combined with the ice sheet's elevation, help to keep temperatures here lower, with July averages between −12 °C and 0 °C (10 °F and 32 °F). Along the coast, temperatures are kept from varying too much by the moderating influence of the nearby water or melting sea ice. In the interior, temperatures are kept from rising much above freezing because of the snow-covered surface but can drop to −30 °C (−22 °F) even in July. Temperatures above 20 °C are rare but do sometimes occur in the far south and south-west coastal areas. Most of the ice-free seas are covered by ice for part of the year (see the map in the sea-ice section below). The exceptions are the southern part of the Barents Sea and most of the Norwegian Sea. These regions that remain ice-free throughout the year have very small annual temperature variations; average winter temperatures are kept near or above the freezing point of sea water (about −2 °C [28 °F]) since the unfrozen ocean cannot have a temperature below that, and summer temperatures in the parts of these regions that are considered part of the Arctic average less than 10 °C (50 °F). During the 46-year period when weather records were kept on Shemya Island, in the southern Bering Sea, the average temperature of the coldest month (February) was −0.6 °C (30.9 °F) and that of the warmest month (August) was 9.7 °C (49.4 °F); temperatures never dropped below −17 °C (+2 °F) or rose above 18 °C (64 °F; Western Regional Climate Center) The rest of the ice-free seas have ice cover for some part of the winter and spring, but lose that ice during the summer. These regions have summer temperatures between about 0 °C and 8 °C (32 °F and 46 °F). The winter ice cover allows temperatures to drop much lower in these regions than in the regions that are ice-free all year. Over most of the seas that are ice-covered seasonally, winter temperatures average between about −30 °C and −15 °C (−22 °F and +5 °F). Those areas near the sea-ice edge will remain somewhat warmer due to the moderating influence of the nearby open water. In the station-climatology figure above, the plots for Point Barrow, Tiksi, Murmansk, and Isfjord are typical of land areas adjacent to seas that are ice-covered seasonally. The presence of the land allows temperatures to reach slightly more extreme values than the seas themselves. Precipitation in most of the Arctic falls only as rain and snow. 
Over most areas snow is the dominant, or only, form of precipitation in winter, while both rain and snow fall in summer (Serreze and Barry 2005). The main exception to this general description is the high part of the Greenland Ice Sheet, which receives all of its precipitation as snow, in all seasons. Accurate climatologies of precipitation amount are more difficult to compile for the Arctic than climatologies of other variables such as temperature and pressure. All variables are measured at relatively few stations in the Arctic, but precipitation observations are made more uncertain due to the difficulty in catching in a gauge all of the snow that falls. Typically some falling snow is kept from entering precipitation gauges by winds, causing an underreporting of precipitation amounts in regions that receive a large fraction of their precipitation as snowfall. Corrections are made to data to account for this uncaught precipitation, but they are not perfect and introduce some error into the climatologies (Serreze and Barry 2005). The observations that are available show that precipitation amounts vary by about a factor of 10 across the Arctic, with some parts of the Arctic Basin and Canadian Archipelago receiving less than 150 mm (6 in) of precipitation annually, and parts of southeast Greenland receiving over 1200 mm (47 in) annually. Most regions receive less than 500 mm (20 in) annually (Serreze and Hurst 2000, USSR 1985). For comparison, annual precipitation averaged over the whole planet is about 1000 mm (39 in; see Precipitation). Unless otherwise noted, all precipitation amounts given in this article are liquid-equivalent amounts, meaning that frozen precipitation is melted before it is measured. The Arctic Basin is one of the driest parts of the Arctic. Most of the Basin receives less than 250 mm (10 in) of precipitation per year, qualifying it as a desert. Smaller regions of the Arctic Basin just north of Svalbard and the Taymyr Peninsula receive up to about 400 mm (16 in) per year (Serreze and Hurst 2000). Monthly precipitation totals over most of the Arctic Basin average about 15 mm (0.6 in) from November through May, and rise to 20 to 30 mm (0.8 to 1.2 in) in July, August, and September (Serreze and Hurst 2000). The dry winters result from the low frequency of cyclones in the region during that time, and the region's distance from warm open water that could provide a source of moisture (Serreze and Barry 2005). Despite the low precipitation totals in winter, precipitation frequency is higher in January, when 25% to 35% of observations reported precipitation, than in July, when 20% to 25% of observations reported precipitation (Serreze and Barry 2005). Much of the precipitation reported in winter is very light, possibly diamond dust. The number of days with measurable precipitation (more than 0.1 mm [0.004 in] in a day) is slightly greater in July than in January (USSR 1985). Of January observations reporting precipitation, 95% to 99% of them indicate it was frozen. In July, 40% to 60% of observations reporting precipitation indicate it was frozen (Serreze and Barry 2005). The parts of the Basin just north of Svalbard and the Taymyr Peninsula are exceptions to the general description just given. These regions receive many weakening cyclones from the North-Atlantic storm track, which is most active in winter. As a result, precipitation amounts over these parts of the basin are larger in winter than those given above. 
The warm air transported into these regions also means that liquid precipitation is more common than over the rest of the Arctic Basin in both winter and summer. Annual precipitation totals in the Canadian Archipelago increase dramatically from north to south. The northern islands receive similar amounts, with a similar annual cycle, to the central Arctic Basin. Over Baffin Island and the smaller islands around it, annual totals increase from just over 200 mm (8 inches) in the north to about 500 mm (20 inches) in the south, where cyclones from the North Atlantic are more frequent (Serreze and Hurst 2000). Annual precipitation amounts given below for Greenland are from Figure 6.5 in Serreze and Barry (2005). Due to the scarcity of long-term weather records in Greenland, especially in the interior, this precipitation climatology was developed by analyzing the annual layers in the snow to determine annual snow accumulation (in liquid equivalent) and was modified on the coast with a model to account for the effects of the terrain on precipitation amounts. The southern third of Greenland protrudes into the North-Atlantic storm track, a region frequently influenced by cyclones. These frequent cyclones lead to larger annual precipitation totals than over most of the Arctic. This is especially true near the coast, where the terrain rises from sea level to over 2500 m (8200 ft), enhancing precipitation due to orographic lift. The result is annual precipitation totals of 400 mm (16 in) over the southern interior to over 1200 mm (47 in) near the southern and southeastern coasts. Some locations near these coasts where the terrain is particularly conducive to causing orographic lift receive up to 2200 mm (87 in) of precipitation per year. More precipitation falls in winter, when the storm track is most active, than in summer. The west coast of the central third of Greenland is also influenced by some cyclones and orographic lift, and precipitation totals over the ice sheet slope near this coast are up to 600 mm (24 in) per year. The east coast of the central third of the island receives between 200 and 600 mm (8 and 24 in) of precipitation per year, with increasing amounts from north to south. Precipitation over the north coast is similar to that over the central Arctic Basin. The interior of the central and northern Greenland Ice Sheet is the driest part of the Arctic. Annual totals here range from less than 100 to about 200 mm (4 to 8 in). This region is continuously below freezing, so all precipitation falls as snow, with more in summer than in winter (USSR 1985). The Chukchi, Laptev, and Kara Seas and Baffin Bay receive somewhat more precipitation than the Arctic Basin, with annual totals between 200 and 400 mm (8 and 16 in); annual cycles in the Chukchi and Laptev Seas and Baffin Bay are similar to those in the Arctic Basin, with more precipitation falling in summer than in winter, while the Kara Sea has a smaller annual cycle due to enhanced winter precipitation caused by cyclones from the North Atlantic storm track (Serreze and Hurst 2000; Serreze and Barry 2005). The Labrador, Norwegian, Greenland, and Barents Seas and Denmark and Davis Straits are strongly influenced by the cyclones in the North Atlantic storm track, which is most active in winter. As a result, these regions receive more precipitation in winter than in summer.
Annual precipitation totals increase quickly from about 400 mm (16 in) in the northern to about 1400 mm (55 in) in the southern part of the region (Serreze and Hurst 2000). Precipitation is frequent in winter, with measurable totals falling on an average of 20 days each January in the Norwegian Sea (USSR 1985). The Bering Sea is influenced by the North Pacific storm track, and has annual precipitation totals between 400 mm and 800 mm (16 and 31 in), also with a winter maximum. Sea ice is frozen sea water that floats on the ocean's surface. It is the dominant surface type throughout the year in the Arctic Basin, and covers much of the ocean surface in the Arctic at some point during the year. The ice may be bare ice, or it may be covered by snow or ponds of melt water, depending on location and time of year. Sea ice is relatively thin, generally less than about 4 m (13 feet), with thicker ridges (NSIDC). NOAA's North Pole Web Cams have been tracking the Arctic summer sea-ice transitions through spring thaw, summer melt ponds, and autumn freeze-up since the first webcam was deployed in 2002. Sea ice is important to the climate and the ocean in a variety of ways. It reduces the transfer of heat from the ocean to the atmosphere; it causes less solar energy to be absorbed at the surface, and provides a surface on which snow can accumulate, which further decreases the absorption of solar energy; since salt is rejected from the ice as it forms, the ice increases the salinity of the ocean's surface water where it forms and decreases the salinity where it melts, both of which can affect the ocean's circulation (NSIDC). The map at right shows the areas covered by sea ice when it is at its maximum extent (March) and its minimum extent (September). This map was made in the 1970s, and the extent of sea ice has decreased since then (see below), but this still gives a reasonable overview. At its maximum extent, in March, sea ice covers about 15 million km² (5.8 million sq mi) of the Northern Hemisphere, nearly as much area as the largest country, Russia (UNEP 2007). Winds and ocean currents cause the sea ice to move. The typical pattern of ice motion is shown on the map at right. On average, these motions carry sea ice from the Russian side of the Arctic Ocean into the Atlantic Ocean through the area east of Greenland, while they cause the ice on the North American side to rotate clockwise, sometimes for many years. Wind speeds over the Arctic Basin and the western Canadian Archipelago average between 4 and 6 metres per second (14 and 22 kilometres per hour, 9 and 13 miles per hour) in all seasons. Stronger winds do occur in storms, often causing whiteout conditions, but they rarely exceed 25 m/s (90 km/h, 55 mph) in these areas (Przybylak 2003). During all seasons, the strongest average winds are found in the North-Atlantic seas, Baffin Bay, and Bering and Chukchi Seas, where cyclone activity is most common. On the Atlantic side, the winds are strongest in winter, averaging 7 to 12 m/s (25 to 43 km/h, 16 to 27 mph), and weakest in summer, averaging 5 to 7 m/s (18 to 25 km/h, 11 to 16 mph). On the Pacific side they average 6 to 9 m/s (22 to 32 km/h, 13 to 20 mph) year round. Maximum wind speeds in the Atlantic region can approach 50 m/s (180 km/h, 110 mph) in winter (Przybylak 2003). As with the rest of the planet, the climate in the Arctic has changed throughout time.
About 55 million years ago it is thought that parts of the Arctic supported subtropical ecosystems (Serreze and Barry 2005) and that Arctic sea-surface temperatures rose to about 23 °C (73 °F) during the Paleocene–Eocene Thermal Maximum. In the more recent past, the planet has experienced a series of ice ages and interglacial periods over about the last 2 million years, with the last ice age reaching its maximum extent about 18,000 years ago and ending by about 10,000 years ago. During these ice ages, large areas of northern North America and Eurasia were covered by ice sheets similar to the one found today on Greenland; Arctic climate conditions would have extended much further south, and conditions in the present-day Arctic region were likely colder. Temperature proxies suggest that over the last 8,000 years the climate has been stable, with globally averaged temperature variations of less than about 1 °C (2 °F; see Paleoclimate).

There are several reasons to expect that climate changes, from whatever cause, may be enhanced in the Arctic relative to the mid-latitudes and tropics. First is the ice-albedo feedback, whereby an initial warming causes snow and ice to melt, exposing darker surfaces that absorb more sunlight, leading to more warming. Second, because colder air holds less water vapour than warmer air, a greater fraction of any increase in radiation absorbed by the surface in the Arctic goes directly into warming the atmosphere, whereas in the tropics a greater fraction goes into evaporation. Third, because the Arctic temperature structure inhibits vertical air motions, the depth of the atmospheric layer that has to warm in order to cause warming of near-surface air is much shallower in the Arctic than in the tropics. Fourth, a reduction in sea-ice extent will lead to more energy being transferred from the warm ocean to the atmosphere, enhancing the warming. Finally, changes in atmospheric and oceanic circulation patterns caused by a global temperature change may cause more heat to be transferred to the Arctic, enhancing Arctic warming (ACIA 2004).

According to the Intergovernmental Panel on Climate Change (IPCC), "warming of the climate system is unequivocal", and the global-mean temperature has increased by 0.6 to 0.9 °C (1.1 to 1.6 °F) over the last century. This report also states that "most of the observed increase in global average temperatures since the mid-20th century is very likely [greater than 90% chance] due to the observed increase in anthropogenic greenhouse gas concentrations." The IPCC also indicates that, over the last 100 years, the annually averaged temperature in the Arctic has increased by almost twice as much as the global mean temperature. In 2009, NASA reported that 45 percent or more of the observed warming in the Arctic since 1976 was likely a result of changes in tiny airborne particles called aerosols.

Climate models predict that the temperature increase in the Arctic over the next century will continue to be about twice the global average temperature increase. By the end of the 21st century, the annual average temperature in the Arctic is predicted to increase by 2.8 to 7.8 °C (5.0 to 14.0 °F), with more warming in winter (4.3 to 11.4 °C; 7.7 to 20.5 °F) than in summer (IPCC 2007). Decreases in sea-ice extent and thickness are expected to continue over the next century, with some models predicting the Arctic Ocean will be free of sea ice in late summer by the mid to late part of the century (IPCC 2007).
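The ice-albedo feedback mentioned above can be illustrated with a back-of-the-envelope calculation. The albedo and insolation figures in the sketch below are rough assumed values chosen only for illustration, not numbers taken from this article or the studies it cites; the point is simply that replacing reflective ice with dark open water multiplies the solar energy absorbed at the surface.

```python
# Rough illustration of the ice-albedo feedback. The insolation and albedo
# values below are assumptions for the sake of the example, not observations.
def absorbed_shortwave(insolation_w_m2: float, albedo: float) -> float:
    """Shortwave flux absorbed by a surface with the given albedo (W/m^2)."""
    return insolation_w_m2 * (1.0 - albedo)

summer_insolation = 200.0   # W/m^2, assumed summertime average at the surface
albedo_sea_ice = 0.6        # assumed value for snow-covered sea ice
albedo_open_ocean = 0.07    # assumed value for dark open water

ice_absorbed = absorbed_shortwave(summer_insolation, albedo_sea_ice)       # 80 W/m^2
ocean_absorbed = absorbed_shortwave(summer_insolation, albedo_open_ocean)  # 186 W/m^2
print(ocean_absorbed / ice_absorbed)  # open water absorbs roughly 2-3 times more
```

Under these assumed numbers, an area of ocean that loses its ice cover absorbs roughly two to three times as much solar energy, which warms the water, delays the next freeze-up, and reinforces the initial loss of ice.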
A study published in the journal Science in September 2009 determined that temperatures in the Arctic are higher presently than they have been at any time in the previous 2,000 years. Samples from ice cores, tree rings and lake sediments from 23 sites were used by the team, led by Darrell Kaufman of Northern Arizona University, to provide snapshots of the changing climate. Geologists were able to track the summer Arctic temperatures as far back as the time of the Romans by studying natural signals in the landscape. The results highlighted that for around 1,900 years temperatures steadily dropped, caused by precession of earth's orbit that caused the planet to be slightly farther away from the sun during summer in the Northern Hemisphere. These orbital changes led to a cold period known as the little ice age during the 17th, 18th and 19th centuries. However, during the last 100 years temperatures have been rising, despite the fact that the continued changes in earth's orbit would have driven further cooling. The largest rises have occurred since 1950, with four of the five warmest decades in the last 2,000 years occurring between 1950 and 2000. The last decade was the warmest in the record. - Alaska Current - Aleutian Low - Arctic dipole anomaly - Arctic haze - Arctic methane release - Arctic oscillation - Arctic Report Card - Arctic sea ice decline - Arctic sea ice ecology and history - Beaufort Gyre - Climate of Antarctica - Climate of the Nordic countries - East Greenland Current - Ellesmere Ice Shelf - European windstorm - Icelandic Low - Labrador Current - Midnight sun - North Atlantic Current - Oyashio Current - Petermann Glacier - Polar climate - Polar easterlies - Polar high - Polar low - Polar vortex - Siberian High - Squamish (wind) - Subarctic climate - Taiga and tundra - West Greenland Current - 2009 Ends Warmest Decade on Record. NASA Earth Observatory Image of the Day, January 22, 2010. - Kaufman, Darrell S.; Schneider, David P.; McKay, Nicholas P.; Ammann, Caspar M.; Bradley, Raymond S.; Briffa, Keith R.; Miller, Gifford H.; Otto-Bliesner, Bette L.; Overpeck, Jonathan T.; Vinther, Bo M. (2009). "Recent Warming Reverses Long-Term Arctic Cooling". Science. 325 (5945): 1236–1239. Bibcode:2009Sci...325.1236K. doi:10.1126/science.1173983. PMID 19729653. - "Arctic 'warmest in 2000 years'". BBC News. September 3, 2009. Retrieved September 5, 2009. - Derbyshire, David (2009-09-04). "Arctic ice reveals last decade was hottest in 2,000 years". London: Daily Mail. Retrieved September 5, 2009. - Walsh, Bryan (2009-09-05). "Studies of the Arctic Suggest a Dire Situation". Time. Retrieved September 5, 2009. - "Natural cooling trend reversed". Financial Times. 2009-09-04. Retrieved September 4, 2009. - ACIA, 2004 Impacts of a Warming Arctic: Arctic Climate Impact Assessment. Cambridge University Press. - IPCC, 2007: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)). Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 996 pp. - NOAA's annually updated Arctic Report Card tracks recent environmental changes. - National Aeronautics and Space Administration. Arctic Sea Ice Continues to Decline, Arctic Temperatures Continue to Rise In 2005. Accessed September 6, 2007. - National Snow and Ice Data Center. All About Sea Ice. Accessed October 19, 2007. 
- National Snow and Ice Data Center. Cryospheric Climate Indicators: Sea Ice Index. Accessed September 6, 2007. - National Snow and Ice Data Center. NSIDC Arctic Climatology and Meteorology Primer. Accessed August 19, 2007. - Przybylak, Rajmund, 2003: The Climate of the Arctic, Kluwer Academic Publishers, Norwell, MA, USA, 270 pp. - Serreze, Mark C.; Hurst, Ciaran M. (2000). "Representation of Mean Arctic Precipitation from NCEP–NCAR and ERA Reanalyses". Journal of Climate. 13 (1): 182–201. Bibcode:2000JCli...13..182S. doi:10.1175/1520-0442(2000)013<0182:ROMAPF>2.0.CO;2. - Serreze, Mark C. and Roger Graham Barry, 2005: The Arctic Climate System, Cambridge University Press, New York, 385 pp. - UNEP (United Nations Environment Programme), 2007: Global Outlook for Ice & Snow, Chapter 5. - United States Central Intelligence Agency, 1978: Polar Regions Atlas, National Foreign Assessment Center, Washington, DC, 66 pp. - USSR State Committee on Hydrometeorology and Environment, and The Arctic and Antarctic Research Institute (chief editor A.F. Treshnikov), 1985: Atlas Arktiki (Atlas of the Arctic), Central Administrative Board of Geodesy and Cartography of the Ministerial Council of the USSR, Moscow, 204 pp (in Russian with some English summaries). [Государственный Комитет СССР по Гидрометеорологии и Контролю Природной Среды, и Ордена Ленина Арктический и Антарктический Научно-Исследовательский Институт (главный редактор Трешников А.Ф.), 1985: Атлас Арктики, Главное Управление Геодезии и Картографии при Совете Министров СССР, Москва, 204 стр.] - DAMOCLES, Developing Arctic Modeling and Observing Capabilities for Long-term Environmental Studies, Arctic Centre, University of Lapland European Union - Video on Climate Research in the Bering Sea - Arctic Theme Page – A comprehensive resource focused on the Arctic from NOAA - Arctic Change Detection – A near-realtime Arctic Change Indicator website NOAA - The Future of Arctic Climate and Global Impacts from NOAA - Collapsing Coastlines July 16, 2011; Vol.180 #2 Science News - How Climate Change Is Growing Forests in the Arctic June 4, 2012 Time (magazine) - Video of Ilulissat Glacier, Greenland – frontline retreats faster than before (4min 20s)
It is because when the English captured Jamaica in 1655 from the Spanish they found people there who may have been Marranos but must certainly have been Conversos. Interestingly, whilst no Jews were allowed in the Spanish New World, Jamaica, belonging to the Colón family, seemed to shut its eyes at Conversos. There is evidence that they were involved with the introduction of Sugar Cane and the making of Sugar from as far back as 1512, produced by the "Portugals" that were sent to Jamaica.

Why historic? It is because the British colonizers from as early as 1655 did nothing to expel the Jews or to limit their ability to visit and settle the island. As a result the Jewish population flourished, and before the end of the 17th century a small synagogue was established in the infamous town of Port Royal. After the devastating earthquake of 1692 the Jews purchased a plot in the old Spanish capital and the then Jamaican capital of Spanish Town to establish a new house of worship. The synagogue Neveh Shalom was built by the beginning of the 18th century "in the style of the recently completed Bevis Marks synagogue" (London, England). The Spanish Town community was expanded by the formation of the congregation Kahal Kadosh Mikveh Israel, which built its own synagogue in 1796. These congregations flourished for over 100 years until the capital of the island was moved to Kingston in 1872. There were already growing congregations of Sephardim from 1750 and Ashkenazim in Kingston, and these expanded. Much of this story has been captured in "The History of the Portuguese Jews of Jamaica," published in 2000.

Today many of the island's leading professionals, businessmen and leaders generally can trace Jewish ancestry in their genealogy. Jamaican Jewish families today can trace their ancestry to the older congregations of Amsterdam and London as well as to their later American connections. Not only has this link of culture and heritage contributed to the present, but the past is full of the contributions of the Jamaican Jews to the island's rich history. In the fields of Poetry and Literature, in business and commerce generally, in manufacturing and farming, in art and music, Jamaican Jewish contributions have been outstanding. Since the removal of civil disabilities in 1832, Jews began to play a role in the public life of the island, a role that has continued to the present. They have been in the legislature, in the justice system, in elected political office, and have served as Ambassadors and Ministers of Government.

The congregations came together finally in 1921 as the United Congregation of Israelites as noted above. Today the congregation still maintains the Synagogue, one of the few in the world with sand on its floor, designed and built in the traditional Sephardic style by the Jamaican Jew, Rudolph Daniel Cohen Henriques, and his brothers. The congregation sponsored and is still responsible for the Hillel Academy, a private preparatory and secondary school open to all denominations, in Kingston. They also maintain a Jewish home for the aged and less fortunate members of the community. Services and Religion school continue, but the congregation is without a Rabbi at the present time.
A radar system in North America set up during the Cold War for the early detection of a missile attack.
- ‘The north warning system replaced the distant early warning line in the early 1990s and is used to provide surveillance of potential attack routes via Arctic airspace.’
- ‘Fifty years ago last month, Canada and the United States approved the construction of the distant early warning line.’
- ‘The DEW Line was designed to provide distant early warning of manned bomber or intercontinental ballistic missile attacks.’
Vaccines and therapies for infectious diseases have saved many lives. Control of many infectious diseases has been one of medicine’s greatest accomplishments. Before the 20th century, infectious diseases were uncontrollable and a constant danger.

Vaccines are not cures for diseases. They are preventative measures against a disease, and they help healthy people stay healthy by protecting them from illnesses caused by microorganisms such as bacteria and viruses. A vaccine is a substance that provides immunity to a disease. Vaccines can be made from weakened or dead pathogenic microorganisms or from a specific part of the bacterium or virus (an antigen). Scientists are also learning how to make synthetic molecules that resemble molecules from the pathogenic microorganism. These adjuvants strengthen existing vaccines, helping to stimulate (kick-start) the immune system. Immunisation is the process of being vaccinated with a vaccine and becoming immune to the disease. The vaccine stimulates your immune system to make memory cells that will give you protection against that particular disease in the future. If the disease does reappear, these memory cells rapidly make antibodies, which identify and neutralise antigens (pathogenic substances), allowing your body to eliminate the disease before it can do you any harm.

Not just for your protection

Vaccines are not just for your own protection. Getting vaccinated/immunised yourself helps protect the people around you. It keeps you from spreading a serious disease to anyone who is not vaccinated. People not in good health, babies and pregnant women are often not vaccinated because their immune systems may not respond to the vaccine effectively. By choosing to get vaccinated, you may decrease their risk of getting a disease or infection, as well as your own.

Getting vaccinated helps to stop epidemics and pandemics. Some dangerous diseases that once killed huge numbers of people have been stopped or are controlled because of vaccines. For example, polio was a common childhood illness in New Zealand in the middle of last century. However, as a result of immunisation, polio has disappeared from New Zealand and most parts of the world – in 2000, the Western Pacific region was declared polio free. However, diseases may reappear if people stop being vaccinated. This has recently happened with tuberculosis (TB), which is making an appearance globally. The old vaccine for TB is not very effective either, so scientists are exploring new ways of making a more effective TB vaccine.

Although vaccines exist for all sorts of diseases, both viral and bacterial, some diseases cannot be contained by them. The common cold and influenza are two examples. These diseases either mutate so quickly or have so many different strains in the wild that it is impossible to vaccinate against all of them. Each time you get the flu, for example, you are getting a different strain of the same disease.

Therapies are the range of measures that can be used to help the body’s natural defences fight off infectious diseases. They strengthen the immune system to do its job. These include medications (drugs), vitamins, healthy eating, exercise, rest and so on. Antibiotics were originally substances derived from living microorganisms that inhibit the growth of other microorganisms. They work either by destroying bacteria or by preventing them from multiplying.
Penicillin, for example, comes from a living mould (Penicillium fungi) and destroys the cell wall of susceptible bacteria. It was the first antibiotic to be mass produced and saved hundreds of soldiers’ lives when it became available in the early 1940s. Some of the antibiotics in use today are man-made.

Unfortunately, antibiotics do not destroy viruses. As a result, viruses are responsible for many of the serious (and often fatal) infectious diseases today. Although no real cures have been developed for viruses to date, some antiviral treatments are available and others are being developed. These prevent the virus multiplying and cause the illness to run its course more quickly, for example, acyclovir (Zovirax) used to treat herpes and Tamiflu used to treat H1N1 (swine) flu.

For the future

Scientists are continually researching new ways to help the immune system destroy infectious diseases. New and innovative vaccines and drugs (that are synthetic rather than using actual disease microorganisms) are being worked on, but they take many years to produce. It can take 20–30 years to produce a new vaccine or drug because of the huge amount of testing needed. Once scientists consider they might have a new drug or vaccine, it needs to be tested rigorously. First, it may be tested on animals, then it gets tested on a few volunteers. If it still works well, it’s tested on a slightly larger group of people. Finally, it’s tested on large population groups before it can be sold on the market. All of this takes a very long time and a lot of money. One vaccine can cost about $2 billion to develop by the time it reaches the market.
Animals that live primarily in caves are known as troglobites. While there are no known true troglobite lizards, the misconception can occur due to the colloquial names of several species of troglobite salamanders: cave lizards and ghost lizards. Most troglobites, such as cave salamanders, have evolved to lose underused senses such as eyesight, which would be useless in a subterranean setting.

Cave Salamander History

The first true scientific study of a troglobite was that of a cave salamander. This particular cave salamander was Proteus anguinus, initially identified as "dragon's larva" in 1689. About 80 years later, in 1768, Proteus anguinus was scientifically identified and studied by Joseph Nicolaus Lorenz, an Austrian naturalist. In 1822, the many different species of cave salamanders were brought to light when a professor of botany discovered and identified Eurycea lucifuga, the spotted-tail salamander.

Proteus anguinus, also known as an olm, was the first cave salamander to be discovered. A blind amphibian, the olm lives in the waters that flow underground in southeastern Europe. The olm has many nicknames: dragon's larva, white salamander and human fish. Weighing only half an ounce, Proteus anguinus is perhaps most remarkable for its lifespan: the predicted maximum olm lifespan is over 100 years. No one factor can explain why Proteus anguinus lives so long; the olm's metabolism is normal for salamanders.

Eurycea lucifuga, also known as a cave puppet and spotted-tail salamander, is a lungless amphibian with reddish-orange coloring and black spots. The spotted-tail salamander is large for its species; it can range up to nearly eight inches in length, although this is mostly tail, which accounts for more than half its size. Eurycea lucifuga was discovered in 1822 and can be found in moist, humid limestone caves throughout the southeastern United States. The spotted-tail salamander is a predator, feeding on snails, flies and fly larvae.

One of the rarest cave salamanders is the endangered Eurycea rathbuni, known also as the ghost lizard and Texas blind salamander. This troglobite resides only in the San Marcos pool of the Edwards Plateau in San Marcos, Texas. Eurycea rathbuni's bright red external gills absorb oxygen from the water, and it is assumed to feed primarily on snails and freshwater shrimp. Although Eurycea rathbuni is neither a reptile nor a lizard, the nickname "ghost lizard" refers to its lizard-like body type and pale, white skin color.
Wood Thrush Conservation The Wood Thrush (Hylocichla mustelina) has become a symbol of declining Neotropical migratory forest birds, its population having decreased by more than 50% throughout its range since the mid-1960s. Wood Thrushes breed in forests throughout the eastern United States and southeastern Canada. In September, they fly south to winter mostly in primary, broad-leaved forests at lower elevations from southeastern Mexico to Panama. Destruction and fragmentation of forests in both breeding and wintering areas are thought to be factors in the species' declining abundance. Breeding individuals in smaller forest fragments and fragmented landscapes experience more nest predation and more cowbird parasitism and consequently poorer reproductive success than individuals nesting in larger areas and more forested landscapes. Loss of primary forests in the tropics may force birds into secondary habitats, where they may wander and may have higher mortality rates -- one of several unconfirmed aspects of this oft-studied species' biology. Use the web navigation on the upper right to learn more about the Wood Thrush on both the breeding and wintering grounds. The Species Profile includes information on identification, natural history, conservation status, how to help, and additional references. If you have any questions or suggestions, please contact
A motion simulator or motion platform is a mechanism that encapsulates occupants and creates the effect/feelings of being in a moving vehicle. A motion simulator can also be called a motion base, motion chassis or a motion seat. The movement is synchronous with the visual display and is designed to add a tactile element to video gaming, simulation, and virtual reality. When motion is applied and synchronized to audio and video signals, the result is a combination of sight, sound, and touch. All full motion simulators move the entire occupant compartment and can convey changes in orientation and the effect of false gravitational forces. These motion cues trick the mind into thinking it is immersed in the simulated environment and experiencing kinematic changes in position, velocity, and acceleration. The mind's failure to accept the experience can result in motion sickness. Motion platforms can provide movement on up to six degrees of freedom: three rotational degrees of freedom (roll, pitch, yaw) and three translational or linear degrees of freedom (surge, heave, sway).

- Common examples of occupant-controlled motion simulators are flight simulators, driving simulators, and auto racing games. Other occupant-controlled vehicle simulation games simulate the control of boats, motorcycles, rollercoasters, military vehicles, ATVs, or spacecraft, among other craft types.
- Examples of passive ride simulators are theme park rides where an entire theater system, with a projection screen in front of the seats, is in motion on giant actuators. An enhanced motion vehicle moves the motion base along a track in a show building. See Simulator ride and the Ride simulator section of this article for more details on passive motion simulators.

Historically, motion platforms have varied widely in scale and cost. Those in the category of amusement park rides and commercial and military aircraft simulators are at the high end of this spectrum; arcade-style amusement devices fall into the middle of the spectrum, while smaller and lower-cost home-based motion platforms comprise the other end.

Modern motion platforms have become complicated machines, but they have simpler roots. Many of the early motion platforms were flight simulators used to train pilots. One of the first motion platforms, the Sanders Teacher, was created in 1910. The Sanders Teacher was an aircraft with control surfaces fitted to the ground by a simple universal joint. When wind was present, the pilot in training was able to use the control surfaces to move the simulator in the three rotational degrees of freedom. Around 1930, a large advance in motion platform technology was made with the creation of the Link Trainer. The Link Trainer used the control stick and external motors to control organ bellows located under the simulator. The bellows could inflate or deflate, causing the simulator to rotate with three degrees of freedom. In 1958 a flight simulator for the Comet IV was designed using a three-degrees-of-freedom hydraulic system. After the Comet IV, both the range of motion and the degrees of freedom exhibited by motion platforms were increased.
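Before looking at modern platforms, it may help to see how the six degrees of freedom introduced above are typically represented in software. The sketch below is illustrative only; the names, units, and the yaw-pitch-roll rotation order are assumptions, and axis conventions vary between simulators.

```python
# A minimal sketch of a 6-DOF platform pose. The yaw-pitch-roll (Z-Y-X)
# rotation order is assumed for illustration; real systems differ.
import numpy as np
from dataclasses import dataclass

@dataclass
class PlatformPose:
    surge: float  # translation along x, m
    sway: float   # translation along y, m
    heave: float  # translation along z, m
    roll: float   # rotation about x, rad
    pitch: float  # rotation about y, rad
    yaw: float    # rotation about z, rad

    def rotation_matrix(self) -> np.ndarray:
        cr, sr = np.cos(self.roll), np.sin(self.roll)
        cp, sp = np.cos(self.pitch), np.sin(self.pitch)
        cy, sy = np.cos(self.yaw), np.sin(self.yaw)
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx   # apply roll, then pitch, then yaw

    def translation(self) -> np.ndarray:
        return np.array([self.surge, self.sway, self.heave])
```

A cueing algorithm ultimately has to produce a pose like this, frame by frame, from the simulated vehicle's accelerations and rotations.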
The most expensive motion platforms utilize high-fidelity six-degrees-of-freedom motion, often coupled with advanced audio and visual systems. Today you will find motion platforms in many applications, including flight simulation, driving simulation, amusement rides, and even small home-based motion platforms.

The high-end motion platform has been used in conjunction with military and commercial flight instruction and training applications. Today one can find high-end, multiple-occupant motion platforms in use with entertainment applications in theme parks throughout the world. The systems used in these applications are very large, weighing several tons, and are typically housed in facilities designed expressly for them. As a result of the force required to move the weight of these larger simulator systems and one or more occupants, the motion platform must be controlled by powerful and expensive hydraulic or electromagnetic cylinders. The cost of this type of motion platform exceeds US$100,000, and often goes well into the millions of dollars for the multi-occupant systems found at major theme park attractions. The complexity of these systems requires extensive programming and maintenance, further extending the cost.

A typical high-end motion system is the Stewart platform, which provides full 6 degrees of freedom (3 translation and 3 rotation) and employs sophisticated algorithms to provide high-fidelity motions and accelerations. These are used in a number of applications, including flight simulators for training pilots. However, the complexity and expensive mechanisms required to incorporate all degrees of freedom have led to alternative motion simulation technology using mainly the three rotational degrees of freedom. An analysis of the capabilities of these systems reveals that a simulator with three rotational degrees of freedom is capable of producing motion simulation quality and vestibular motion sensations comparable to those produced by a Stewart platform. Historically these systems used hydraulics or pneumatics; however, many modern systems use electric actuators.

The middle of the spectrum includes a number of disclosures involving powered motion platforms aimed at arcade-style amusement games, rides, and other arrangements. These systems fall into a price range from $10,000 to $99,000 USD. Typically the space requirements for such a platform are modest, requiring only a portion of an arcade room, and a smaller range of motion is provided via similar, less expensive control systems than the high-end platforms.

The lower-cost systems include home-based motion platforms, which have recently become a more common device used to enhance video games, simulation, and virtual reality. These systems fall into a price range from $1,000 to $9,000 USD. During the 2000s, several individuals and business entities developed these smaller, more affordable motion systems. Most of these systems were developed mainly by flight simulation enthusiasts, were sold as do-it-yourself projects, and could be assembled in the home from common components for around one thousand US dollars ($1,000). Recently, there has been increased market interest in motion platforms for more personal, in-home use. The application of these motion systems extends beyond just flight training simulation into a larger market of more generalized "craft-oriented" simulation, entertainment, and virtual reality systems.
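As a rough illustration of what the control software for a Stewart platform mentioned above has to compute, the sketch below works out the six actuator (leg) lengths needed to reach a commanded pose. The joint geometry and the pose are invented for the example; real platforms differ in geometry, actuator limits, and the algorithms used.

```python
# Sketch of Stewart (hexapod) platform inverse kinematics: given a commanded
# pose, compute the six leg lengths. Geometry below is invented for illustration.
import numpy as np

def hexapod_joints(radius, angles_deg):
    """Six joint positions on a circle of the given radius (z = 0)."""
    a = np.deg2rad(angles_deg)
    return np.stack([radius * np.cos(a), radius * np.sin(a), np.zeros(6)], axis=1)

def leg_lengths(base_pts, platform_pts, R, t):
    """Distance from each base joint to the matching platform joint after the
    platform is rotated by R and translated by t (both in the base frame)."""
    moved = (R @ platform_pts.T).T + t
    return np.linalg.norm(moved - base_pts, axis=1)

# Invented geometry and pose, for illustration only.
base = hexapod_joints(2.0, [10, 110, 130, 230, 250, 350])
plat = hexapod_joints(1.2, [50, 70, 170, 190, 290, 310])
R = np.eye(3)                      # neutral orientation
t = np.array([0.0, 0.0, 1.5])      # platform raised 1.5 m
print(leg_lengths(base, plat, R, t))
```

A real motion controller would solve this inverse-kinematics step on every frame and then check the resulting lengths and rates against the actuators' travel and speed limits before commanding them.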
Motion platforms are commonly used in the field of engineering for analysis and verification of vehicle performance and design. The ability to link a computer-based dynamic model of a particular system to physical motion gives the user the ability to feel how the vehicle would respond to control inputs without the need to construct expensive prototypes. For example, an engineer designing an external fuel tank for an aircraft could have a pilot determine the effect on flying qualities or a mechanical engineer could feel the effects of a new brake system without building any hardware, saving time and money. Flight simulators are also used by aircraft manufacturers to test new hardware. By connecting a simulated cockpit with visual screen to a real flight control system in a laboratory, integrating the pilot with the electrical, mechanical, and hydraulic components that exist on the real aircraft, a complete system evaluation can be conducted prior to initial flight testing. This type of testing allows the simulation of "seeded faults" (i.e. an intentional hydraulic leak, software error, or computer shutdown) which serve to validate that an aircraft's redundant design features work as intended. A test pilot can also help identify system deficiencies such as inadequate or missing warning indicators, or even unintended control stick motion. This testing is necessary to simulate extremely high risk events that cannot be conducted in flight but nonetheless must be demonstrated. While 6 degree-of-freedom motion is not necessary for this type of testing, the visual screen allows the pilot to "fly" the aircraft while the faults are simultaneously triggered. - Star Tours and its sequel, located at Disneyland and other Disney theme parks, use purpose-modified military flight simulators known as Advanced Technology Leisure Application Simulators (ATLAS) to simulate a flight through outer space. - Wild Arctic at SeaWorld Orlando and SeaWorld San Diego. - Soarin' Over California, located in Disney California Adventure, uses an IMAX dome screen and a hang glider simulation to provide a beautiful simulated flight over many of California's scenic places. - StormRider is a simulator ride at Tokyo DisneySea. - Star Trek: The Experience was located at the Las Vegas Hilton between 1998 and 2008. Its "Klingon Encounter" culminated with a state of the art, 6 degrees-of-freedom flight simulator ride including associated space battle movie footage. - Back to the Future: The Ride, a simulator ride based on the Back to the Future film series, is located at Universal Studios Japan, and formerly at Universal Studios Florida and Universal Studios Hollywood. The ride used DeLorean-based simulator cars that faced a 70-foot-tall IMAX dome screen. In 2008, it was replaced at the Florida and Hollywood parks by another simulator ride, The Simpsons Ride. - The Funtastic World of Hanna-Barbera (now closed) was one of the original attractions at Universal Studios Florida. The ride used rocket-based simulator cars and a theater-sized screen. - Jimmy Neutron's Nicktoon Blast (now closed) was located at the Universal Studios Florida theme park where The Funtastic World of Hanna-Barbera had been located. The ride used rocket-based simulator cars and a theater-sized screen. - The National Air and Space Museum in Washington, D.C., houses a gallery full of two-seat interactive flight simulators doing 360-degree barrel rolls in air combat. 
- Europe in the Air, a simulator ride located in Busch Gardens Williamsburg, uses a motion platform, high-definition footage, and wind effects to simulate flight over Europe's notable icons. Some driving and flying simulation games allow the use of specialized controllers such as steering wheels, foot pedals or joysticks. Certain game controllers designed in recent years have employed haptic technology to provide realtime, tactile feedback to the user in the form of vibration from the controller. A motion simulator takes the next step by providing the player full-body tactile feedback. Motion gaming chairs can roll to the left and right and pitch forward and backward to simulate turning corners, accelerations and decelerations. Motion platforms permit a more stimulative and potentially realistic gaming experience, and allow for even greater physical correlation to sight and sound in game play. The way we perceive our body and our surroundings is a function of the way our brain interprets signals from our various sensory systems, such as sight, sound, balance and touch. Special sensory pick-up units (or sensory "pads") called receptors translate stimuli into sensory signals. External receptors (exteroceptors) respond to stimuli that arise outside the body, such as the light that stimulates the eyes, sound pressure that stimulates the ear, pressure and temperature that stimulates the skin and chemical substances that stimulate the nose and mouth. Internal receptors (enteroceptors) respond to stimuli that arise from within blood vessels. Postural stability is maintained through the vestibular reflexes acting on the neck and limbs. These reflexes, which are key to successful motion synchronization, are under the control of three classes of sensory input: - Proprioceptors are receptors located in your muscles, tendons, joints and the inner ear, which send signals to the brain regarding the body's position. Aircraft pilots sometimes refer to this type of sensory input as the “seat of your pants”. - The vestibular system contributes to balance and sense of spatial orientation and includes the vestibular organs, ocular system, and muscular system. The vestibular system is contained in the inner ear and interprets rotational motion and linear acceleration. The vestibular system does not interpret vertical motion. - Visual input from the eye relays information to the brain about the craft's position, velocity, and attitude relative to the ground. Proprioceptors are receptors located in your muscles, tendons, joints and the inner ear, which send signals to the brain regarding the body's position. An example of a "popular" proprioceptor often mentioned by aircraft pilots, is the "seat of the pants". In other words, these sensors present a picture to your brain as to where you are in space as external forces act on your body. Proprioceptors respond to stimuli generated by muscle movement and muscle tension. Signals generated by exteroceptors and proprioceptors are carried by sensory neurons or nerves and are called electrochemical signals. When a neuron receives such a signal, it sends it on to an adjacent neuron through a bridge called a synapse. A synapse "sparks" the impulse between neurons through electrical and chemical means. These sensory signals are processed by the brain and spinal cord, which then respond with motor signals that travel along motor nerves. Motor neurons, with their special fibres, carry these signals to muscles, which are instructed to either contract or relax. 
The downfall with our internal motion sensors is that once a constant speed or velocity is reached, these sensors stop reacting. Your brain now has to rely on visual cues until another movement takes place and the resultant force is felt. In motion simulation, when our internal motion sensors can no longer detect motion, a “washout” of the motion system may occur. A washout allows the motion platform occupant to think they are making a continuous movement when actually the motion has stopped. In other words, washout is where the simulator actually returns to a central, home, or reference position in anticipation of the next movement. This movement back to neutral must occur without the occupant actually realizing what is happening. This is an important aspect in motion simulators as the human feel sensations must be as close to real as possible. The vestibular system is the balancing and equilibrium system of the body that includes the vestibular organs, ocular system, and muscular system. The vestibular system is contained in the inner ear. It consists of three semicircular canals, or tubes, arranged at right angles to one another. Each canal is lined with hairs connected to nerve endings and is partially filled with fluid. When the head experiences acceleration the fluid moves within the canals, causing the hair follicles to move from their initial vertical orientation. In turn the nerve endings fire resulting in the brain interpreting the acceleration as pitch, roll, or yaw. There are, however, three shortcomings to this system. First, although the vestibular system is a very fast sense used to generate reflexes to maintain perceptual and postural stability, compared to the other senses of vision, touch and audition, vestibular input is perceived with delay. Indeed, although engineers typically try and reduce delays between physical and visual motion, it has been shown that a motion simulator should move about 130ms before visual motion in order to maximize motion simulator fidelity. Second, if the head experiences sustained accelerations on the order of 10 – 20 seconds, the hair follicles return to the “zero” or vertical position and the brain interprets this as the acceleration ceasing. Additionally, there is a lower acceleration threshold of about 2 degrees per second that the brain cannot perceive. In other words, slow and gradual enough motion below the threshold will not affect the vestibular system. As discussed in the preceding “Proprioceptors” section, this shortfall actually allows the simulator to return to a reference position in anticipation of the next movement. The human eye is the most important source of information in motion simulation. The eye relays information to the brain about the craft's position, velocity, and attitude relative to the ground. As a result, it is essential for realistic simulation that the motion works in direct synchronization to what is happening on the video output screen. Time delays cause disagreement within the brain, due to error between the expected input and the actual input given by the simulator. This disagreement can lead to dizziness, fatigue and nausea in some people. For example, if the occupant commands the vehicle to roll to the left, the visual displays must also roll by the same magnitude and at the same rate. Simultaneously, the cab tilts the occupant to imitate the motion. The occupant’s proprioceptors and vestibular system sense this motion. 
The motion and change in the visual inputs must align well enough that any discrepancy is below the occupant’s threshold for detecting differences in motion. In order to be an effective training or entertainment device, the cues the brain receives from each of the body’s sensory inputs must agree.

It is physically impossible to correctly simulate large-scale ego-motion in the limited space of a laboratory. The standard approach to simulating motion (so-called motion cueing) is to reproduce the “relevant” cues as closely as possible, especially the acceleration of the observer. Visual and auditory cues enable humans to perceive their location in space on an absolute scale. On the other hand, the somatosensory cues, mainly proprioception and the signals from the vestibular system, code only relative information. Fortunately (for our purpose), humans cannot perceive accelerations and velocities perfectly and without systematic errors, and this is where the tricky business of motion simulation starts: those imperfections of the human sensory and perceptual systems can be used to cheat intelligently.

In principle, velocity cannot be directly perceived by relative cues alone, like those from the vestibular system. For such a system, flying in space at some constant velocity is no different from sitting in a chair. However, changing the velocity is perceived as acceleration, or force acting on the human body. For the case of constant linear acceleration, a substitute for the real situation is simple. Since the amplitude of the acceleration is not very well perceived by humans, one can tilt the subject backwards and use the gravity vector as a replacement for the correct resultant force of gravity and forward acceleration. In this case, leaning backwards is not perceived differently from being constantly accelerated forwards.

Linear accelerations are detected by the otoliths. The otolith structure is simpler than the three-axis semicircular canals that detect angular accelerations. The otoliths contain calcium carbonate particles that lag behind head movement, deflecting hair cells. These cells transmit motion information to the brain and oculomotor muscles. Studies indicate that the otoliths detect the tangential component of the applied forces, and transfer function models relating the perceived force to the applied force have been fitted to such measurements. Based on centrifuge experiments, threshold values of 0.0011 ft/s² have been reported; values up to 0.4 ft/s² have been reported based on airborne studies in the USSR. The same studies suggest that the threshold is not a linear acceleration but rather a jerk (the third time derivative of position), and the reported threshold value is on the order of 0.1 ft/s³. These findings are supported by early studies showing that human movement kinematics is represented by characteristics of jerk profiles.

Unfortunately, there is no easy way of cheating for rotations. Hence, many motion simulations try to avoid the problem by avoiding quick and large rotations altogether. The only convincing way of simulating larger turns is an initial yaw rotation above threshold followed by a back-motion below threshold. For roll and pitch, the static (otolithic) cues cannot be modified easily due to the ambiguity between linear accelerations and changes in gravitational direction. In real life, the ambiguity is resolved by using the dynamical properties of the vestibular and other sensory signals (most importantly, vision).
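To make the gravity-for-acceleration substitution and the washout idea concrete, the sketch below combines them for a single longitudinal channel: the transient part of a commanded acceleration is passed to the platform as surge, while the sustained part is reproduced by slowly tilting the seat so that gravity supplies the missing force. This is a deliberately minimal illustration, not the algorithm of any particular simulator; the time constant, rate limit, and time step are invented, and the classical washout filters described later in this article operate on all six channels with carefully tuned higher-order filters.

```python
# Simplified single-channel motion cueing: high-pass ("washout") surge plus
# tilt coordination for the sustained component. All constants are assumptions.
import math

G = 9.81
DT = 0.01                              # integration step, s (assumed)
TAU = 2.0                              # washout time constant, s (assumed)
TILT_RATE_LIMIT = math.radians(2.0)    # assumed sub-perception tilt rate, rad/s

class LongitudinalCueing:
    def __init__(self):
        self.low_pass = 0.0   # sustained part of the commanded acceleration, m/s^2
        self.tilt = 0.0       # current seat pitch, rad

    def step(self, accel):
        # Split the commanded acceleration into sustained and transient parts.
        alpha = DT / (TAU + DT)
        self.low_pass += alpha * (accel - self.low_pass)
        transient = accel - self.low_pass          # high-pass component -> platform surge

        # Tilt coordination: aim the seat so gravity supplies the sustained part,
        # slewing no faster than the assumed perception-threshold rate.
        target_tilt = math.asin(max(-1.0, min(1.0, self.low_pass / G)))
        max_step = TILT_RATE_LIMIT * DT
        self.tilt += max(-max_step, min(max_step, target_tilt - self.tilt))
        return transient, self.tilt

cue = LongitudinalCueing()
for _ in range(300):                   # three seconds of a sustained 0.2 g command
    surge_accel, seat_pitch = cue.step(0.2 * G)
```

For a sustained command of 0.2 g, the target tilt works out to arcsin(0.2) ≈ 11.5°; slewing there at roughly the two-degrees-per-second figure quoted earlier takes on the order of six seconds, which is why tilt coordination suits gradual maneuvers but cannot reproduce abrupt onsets of acceleration.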
Angular accelerations are detected by the semicircular canals, while linear accelerations are detected by another structure in the inner ear called the otolith. The three semicircular canals are mutually orthogonal (similar to a three-axis accelerometer) and are filled with a fluid called the endolymph. In each canal, there is a section where the diameter is larger than the rest of the canal. This section is called the ampulla and is sealed by a flap called the cupula. Angular accelerations are detected as follows: an angular acceleration causes the fluid in the canals to move, deflecting the cupula. The nerves in the cupula report the motion to both the brain and the oculomotor muscles, stabilizing eye movements.

A transfer function model between the perceived angular displacement and the actual angular displacement can be derived from the dynamics of the cupula. A second-order model of the cupula angle θ is given by

\[ \ddot{\theta} + 2\zeta\omega_n\dot{\theta} + \omega_n^2\theta = u(t), \]

where ζ is the damping ratio, ω_n is the natural frequency of the cupula, and u(t) is the input angular acceleration. Values of ζ have been reported to be between 3.6 and 6.7, while values of ω_n have been reported to be between 0.75 and 1.9. Thus, the system is overdamped with distinct, real roots. The shorter time constant is 0.1 seconds, while the longer time constant depends on the axis about which the test subject is accelerating (roll, pitch, or yaw); these longer time constants are one to two orders of magnitude greater than the shorter one. Experiments have shown that angular accelerations below a certain level cannot be detected by a human test subject, and threshold values have been reported for pitch and roll accelerations in a flight simulator.

The above studies indicate that the pilot's vestibular system detects accelerations before the aircraft instruments display them. This can be considered an inner control loop in which the pilot responds to accelerations that occur in full-motion simulators and aircraft, but not in fixed simulators. This effect shows that there is a potential negative training transfer when transitioning from a fixed-base simulator to an aircraft and indicates the need for motion systems for pilot training.

It is physically impossible to precisely simulate large-scale egomotion in the limited space of a laboratory. There is simply no way around the physics. However, by exploiting some of the imperfections of the body’s sensory and perceptual systems, it is possible to create an environment in which the body perceives motion without actually moving the subject more than a few feet in any one direction. This is where the tricky business of motion simulation begins. The standard approach to simulating motion (so-called motion cueing) is to reproduce as closely as possible the “relevant” cues that trigger motion perception. These cues can be visual, auditory, or somatosensory in nature. Visual and auditory cues enable humans to perceive their location in space on an absolute scale, whereas somatosensory cues (mainly proprioception and other signals from the vestibular system) provide only relative feedback. Fortunately for us, humans cannot perceive velocity and acceleration directly without some form of error or uncertainty. For example, consider riding in a car traveling at some arbitrary constant speed. In this situation, our sense of sight and sound provide the only cues (excluding engine vibration) that the car is moving; no other forces act on the passengers of the car except for gravity.
Next, consider the same example of a car moving at constant speed, except this time all passengers of the car are blindfolded. If the driver were to step on the gas, the car would accelerate forward, pressing each passenger back into their seat. In this situation, each passenger would perceive the increase in speed by sensing the additional pressure from the seat cushion. However, if the car were traveling in reverse and the driver stepped on the brake pedal instead of the gas, the deceleration of the vehicle would create the same feeling of increased pressure from the seat cushion as in the case of acceleration, so that the passengers would be unable to distinguish which direction the vehicle is actually moving.

Summary of most commonly used “tricks”
- Moving the observer below detection threshold to gain additional simulation space
- Trading the gravity vector for acceleration (tilting the seat)
- Masking not-to-be-detected motions by noise (i.e., vibrations and jitter)
- Guiding the attention of the observer away from the imperfections of the motion simulation

Implementation using washout filters

Washout filters are an important aspect of the implementation of motion platforms, as they allow motion systems, with their limited range of motion, to reproduce the range of vehicle dynamics being simulated. Since the human vestibular system automatically re-centers itself during steady motions, washout filters are used to suppress unnecessary low-frequency signals while returning the simulator back to a neutral position at accelerations below the threshold of human perception. For example, a pilot in a motion simulator may execute a steady, level turn for an extended period of time, which would require the system to stay at the associated bank angle, but a washout filter allows the system to slowly move back to an equilibrium position at a rate below the threshold which the pilot can detect. This allows the higher-level dynamics of the computed vehicle to provide realistic cues for human perception, while remaining within the limitations of the simulator.

Three common types of washout filters are classical, adaptive and optimal washout filters. The classical washout filter comprises linear low-pass and high-pass filters. The signal into the filter is split into translational and rotational signals. High-pass filters are used for simulating transient translational and rotational accelerations, while the low-pass filters are used to simulate sustained accelerations. The adaptive washout filter uses the classical washout filter scheme but utilizes a self-tuning mechanism that is not featured in the classical washout filter. Finally, the optimal washout filter takes into account models of the vestibular system.

Classical Control Representation

The classical washout filter is simply a combination of high-pass and low-pass filters; thus, the implementation of the filter is comparatively easy. However, the parameters of these filters have to be determined empirically. The inputs to the classical washout filter are the vehicle-specific forces and angular rates. Both of the inputs are expressed in the vehicle-body-fixed frame. Since low-frequency force is dominant in driving the motion base, force is high-pass filtered, and yields the simulator translations. Much the same operation is done for angular rate. To identify the tilt of the motion platform, the tilt mechanism first supplies the low-frequency component of force for the rotation calculation.
Then, the high-frequency component 'f' is used to orient the gravity vector 'g' of the simulator platform. Typically, to find position, the low-pass filter (in a continuous-time setting) is represented in the s-domain by a transfer function whose parameters are determined empirically. The inputs to the high-pass filter are then calculated from the force inputs, and the high-pass filter may be represented (for example) as a series containing two integrators; the two integrators in this series represent the integration of acceleration into velocity and of velocity into position, respectively, while the remaining coefficients are the filter parameters. It is evident that the output of the filter will vanish in steady state, preserving the location of the open-loop equilibrium points. This means that while transient inputs will be "passed", steady-state inputs will not, thus fulfilling the requirements of the filter. The present practice for empirically determining the parameters within the washout filter is a trial-and-error subjective tuning process whereby a skilled evaluation pilot flies predetermined maneuvers. After each flight the pilot's impression of the motion is communicated to a washout filter expert, who then adjusts the washout filter coefficients in an attempt to satisfy the pilot. Researchers have also proposed formalizing this tuning paradigm and capturing it with an expert system.

Nonlinear Washout Filter

This washout filter can be regarded as the result of a combination of an adaptive and an optimal washout filter. A nonlinear approach is desired to further maximize the available motion cues within the hardware limitations of the motion system, therefore resulting in a more realistic experience. For example, the algorithm described by Daniel and Augusto computes a gain, α, as a function of the system states; thus, the washout is time varying. The α gain will increase as the platform states increase in magnitude, making room for a faster control action to quickly wash out the platform to its original position. The opposite occurs when the magnitude of the platform states is small or decreasing, prolonging the motion cues, which will be sustained for longer durations. Likewise, the work of Telban and Cardullo added an integrated perception model that includes both visual and vestibular sensation to optimize the human's perception of motion. This model was shown to improve pilots' responses to motion cues.

Adaptive Washout Filter

This adaptive approach was developed at NASA Langley. It is made up of a combination of empirically determined filters in which several of the coefficients are varied in a prescribed manner in order to minimize a set objective (cost) function. In a study conducted at the University of Toronto, the coordinated adaptive filter provided the “most favorable pilot ratings” as compared with the other two types of washout filters. The benefits of this style of washout filter can be summarized with two major points. First, the adaptive characteristics give more realistic motion cues when the simulator is near its neutral position, and the motion is only reduced at the limits of the motion system's capabilities, allowing for better use of those capabilities. Second, the cost function or objective function (by which the washout filter is optimized) is very flexible and various terms may be added in order to incorporate higher-fidelity models.
This allows for an expandable system that is capable of changing over time, resulting in a system that responds in the most accurate way throughout the simulated flight. The disadvantages are that the behavior is difficult to adjust, primarily due to the cross fed channels. Finally execution time is relatively high due to the large number of derivative function calls required. In addition as more complex cost functions are introduced the corresponding computing time required will increase. Although washout filters do provide great utility for allowing the simulation of a wider range of conditions than the physical capabilities of a motion platform, there are limitations to their performance and practicality in simulation applications. Washout filters take advantage of the limitations of human sensing to the appearance of a larger simulation environment than actually exists. For example, a pilot in a motion simulator may execute a steady, level turn for an extended period of time which would require the system stay at the associated bank angle. In this situation, a washout filter allows the system to slowly move back to an equilibrium position at a rate below the threshold which the pilot can detect. The benefit of this is that the motion system now has a greater range of motion available for when the pilot executes his next maneuver. Such behavior is easily applied in the context of aircraft simulation with very predictable and gradual maneuvers (such as commercial aircraft or larger transports). However, these slow, smooth dynamics do not exist in all practical simulation environments and diminish the returns of washout filters and a motion system. Take training of fighter pilots, for example: while the steady, cruise regime of a fighter aircraft may be able to be well simulated within these limitations, in aerial combat situations flight maneuvers are executed in a very rapid manner to physical extremes. In these scenarios, there is not time for a washout filter to react to bring the motion system back to its range equilibrium resulting in the motion system quickly hitting its range of movement limitations and effectively ceasing to accurately simulate the dynamics. It is for this reason that motion and washout filter based systems are often reserved for those that experience a limited range of flight conditions. The filters themselves may also introduce false cues, defined as: 1) a motion cue in the simulator that is in the opposite direction to that in the aircraft, 2) a motion cue in the simulator when none was expected in the aircraft, and 3) a relatively high-frequency distortion of a sustained cue in the simulator for an expected sustained cue in the aircraft. The previous definition groups together all of the cueing errors that lead to very large decreases in perceived motion fidelity. Six potential sources of false cues are: - Software or Hardware Limiting:When the simulator approaches a displacement limit, two methods of protection are provided: 1) software limiting and 2) hardware limiting. In either case the simulator is decelerated to prevent damage to the motion system. Large false cues are often associated with this deceleration. - Return to Neutral: This false cue is attributed to the overshoot of the high-pass filters to step-type inputs. This type of response only occurs if second- or third-order high-pass filters are used. 
- Tilt-Coordination Angular Rate - Tilt-Coordination Remnant: For sustained specific force input in sway or surge, the simulator will achieve a steady-state pitch or roll angle because of tilt-coordination. If the input ends abruptly, then the highpass specific force response will initially cancel out the specific force associated with the tilt, but only for a brief time before the restricted simulator displacement prohibits translational acceleration of the simulator. If the tilt is removed quickly, then a tilt-coordination angular rate false cue will occur; if not, the remaining tilt will create a sensation of acceleration, called a tilt-coordination remnant false cue. - Tilt Coordination Angular Acceleration: This false cue is caused by the angular acceleration generated by the tilt-coordination occurring about a point other than the pilot’s head. The angular acceleration combined with the moment arm from the center of rotation to the pilot’s head results in the specific force false cue at the pilot’s head. The point about which angular rotations are simulated (the so-called reference point) is typically at the centroid of the upper bearing block frame for hexapod motion systems. The use of physical motion applied in flight simulators has been a debated and researched topic. The Engineering department at the University of Victoria conducted a series of tests in the 1980s, to quantify the perceptions of airline pilots in flight simulation and the impact of motion on the simulation environment. In the end, it was found that there was a definite positive effect on how the pilots perceived the simulation environment when motion was present and there was almost unanimous dislike for the simulation environment that lacked motion. A conclusion that could be drawn on the findings of the Response of Airline Pilots study is that the realism of the simulation is in direct relationship to the accuracy of the simulation on the pilot. When applied to video gaming and evaluated within our own gaming experiences, realism can be directly related to the enjoyment of a game by the game player. In other words – motion enabled gaming is more realistic, thus more iterative and more stimulating. However, there are adverse effects to the use of motion in simulation that can take away from the primary purpose of using the simulator in the first place such as Motion Sickness. For instance, there have been reports of military pilots throwing off their vestibular system because of moving their heads around in the simulator similar to how they would in an actual aircraft to maintain their sensitivity to accelerations. However, due to the limits on simulator acceleration, this effect becomes detrimental when transitioning back to a real aircraft. Adverse effects (simulator sickness) Motion or simulator sickness: Simulators work by “tricking” the mind into believing that the inputs it is receiving from visual, vestibular and proprioceptive inputs are a specific type of desired motion. When any of the cues received by the brain do not correlate with the others, motion sickness can occur. In principle, simulator sickness is simply a form of motion sickness that can result from discrepancies between the cues from the three physical source inputs. For example, riding on a ship with no windows sends a cue that the body is accelerating and rotating in various directions from the vestibular system, but the visual system sees no motion since the room is moving in the same manner as the occupant. 
The use of physical motion in flight simulators has been a debated and researched topic. The Engineering department at the University of Victoria conducted a series of tests in the 1980s to quantify the perceptions of airline pilots in flight simulation and the impact of motion on the simulation environment. In the end, it was found that there was a definite positive effect on how the pilots perceived the simulation environment when motion was present, and there was almost unanimous dislike for the simulation environment that lacked motion. A conclusion that could be drawn from the findings of the Response of Airline Pilots study is that the perceived realism of the simulation is directly related to the accuracy of the motion cues delivered to the pilot. When applied to video gaming and evaluated within our own gaming experiences, realism can be directly related to the enjoyment of a game by the player. In other words, motion-enabled gaming is more realistic, and thus more immersive and more stimulating. However, there are adverse effects to the use of motion in simulation that can detract from the primary purpose of using the simulator in the first place, such as motion sickness. For instance, there have been reports of military pilots throwing off their vestibular system by moving their heads around in the simulator as they would in an actual aircraft to maintain their sensitivity to accelerations. Because of the limits on simulator acceleration, this habit becomes detrimental when transitioning back to a real aircraft.
Adverse effects (simulator sickness)
Motion or simulator sickness: Simulators work by "tricking" the mind into believing that the inputs it is receiving from visual, vestibular and proprioceptive sources correspond to a specific desired motion. When any of the cues received by the brain do not correlate with the others, motion sickness can occur. In principle, simulator sickness is simply a form of motion sickness resulting from discrepancies between the cues from the three physical source inputs. For example, riding in a ship with no windows sends a cue from the vestibular system that the body is accelerating and rotating in various directions, while the visual system sees no motion because the room is moving in the same manner as the occupant. In this situation, many people would feel motion sickness.
Along with simulator sickness, additional symptoms have been observed after exposure to motion simulation. These symptoms include feelings of warmth, pallor and sweating, depression and apathy, headache and fullness of head, drowsiness and fatigue, difficulty focusing the eyes, eye strain, blurred vision, burping, difficulty concentrating, and visual flashbacks. Lingering effects of these symptoms have been observed to last as long as a day or two after exposure to the motion simulator.
Contributing factors to simulator sickness
Several factors contribute to simulator sickness; they can be categorized into human variables, simulator usage, and equipment. Common human-variable factors include susceptibility, flight hours, fitness, and medication/drugs. An individual's susceptibility to motion sickness is a dominant contributing factor to simulator sickness. Increasing flight hours can also be an issue for pilots, who become accustomed to the actual motion of the vehicle. Contributing factors due to simulator usage are adaptation, distorted or complicated scene content, longer simulation length, and freeze/reset. Freeze/reset refers to the starting or ending points of a simulation, which should be as close to steady and level conditions as possible; if a simulation is ended in the middle of an extreme maneuver, the test subject's vestibular system is likely to be disturbed. Simulator equipment factors that contribute to motion sickness are quality of the motion system, quality of the visual system, off-axis viewing, poorly aligned optics, flicker, and delay or mismatch between the visual and motion systems. The delay/mismatch issue has historically been a concern in simulator technology, where time lag between pilot input and the visual and motion systems can cause confusion and generally degrade simulator performance.
Debate over performance enhancement from motion simulators
In theory, the concept of motion simulators seems self-explanatory: if the perception of events can be mimicked exactly, the simulator will provide the user an identical experience. However, this ideal performance is next to impossible to achieve. Although the motion of vehicles can be simulated in six degrees of freedom (all that should be required to mimic motion), simulated motion often leaves trainees with a multitude of adverse side effects not seen in real motion. Further, there are many scenarios that may be difficult to simulate in training simulators, raising the concern that replacing real-world exposure with motion simulation may be inadequate. Because of the high cost of adding motion to simulators, military programs have established research units to investigate the impact of motion simulators on skill acquisition. These units have provided results as recently as 2006, despite motion simulators having been in use over the last century.
From an Army study, it was determined that "motion-based simulators are recommended for training when individuals must continue to perform skill-based tasks…while the ground vehicle negotiates rough terrain." However, if individuals are not required to negotiate rough terrain, or motion sickness does not detract from performance in the field, then "motion is not recommended." The existence of adverse side effects of virtual environments has spawned a plethora of studies, ranging from predicting and measuring the impact of the side effects to identifying their specific causes.
Advantages and disadvantages of simulation in training
Advantages:
- Simulators provide a safe means of training in the operation of potentially dangerous craft (e.g., aircraft).
- The expense of training on real equipment can sometimes exceed the expense of a simulator.
- Time between training sessions may be reduced, since resetting the simulator can be as simple as returning the motion system to its initial conditions.
Disadvantages:
- The true environment may not be mimicked identically; the pilot/rider may therefore be confused by the lack of expected sensations, or may not be properly prepared for the real environment.
- Lining up all sensor inputs to eliminate, or at least mitigate, the risk of "simulator sickness" can be challenging.
- The age of the participant, as well as the amount of experience in the true environment, modifies reactions to the simulated environment.
See also
- Degrees of freedom (mechanics)
- Driving simulator
- Flight simulator
- Simulator sickness
- Stewart platform
- Vestibular system
References
- "SimCraft :: Military Grade Full Motion Simulators for SimRacing and FlightSim". SimCraft Corporation. 2006-06-12.
- "Motion Platforms". Moorabbin Flying Services. 2006-06-12.
- von der Heyde, Markus; Riecke, Bernhard E. (December 2001). "How to Cheat in Motion Simulation – Comparing the Engineering and Fun Ride Approach to Motion Cueing". CiteSeerX 10.1.1.8.9350.
- Allerton, D. (2009). Principles of Flight Simulation. John Wiley & Sons, Ltd.
- "Motion Platforms or Motion Seats?" (PDF). Phillip Denne, Transforce Developments Ltd. 2004-09-01.
- "Motion Systems and Visual Displays" (PDF). Phillip Denne. 1994-01-12.
- Scanlon, Charles H. (December 1987). "Effect of Motion Cues During Complex Curved Approach and Landing Tasks" (PDF). NASA. pp. 6–9. Retrieved 2009-07-19.
- Rollings, Andrew; Adams, Ernest (2003). Andrew Rollings and Ernest Adams on Game Design. New Riders Publishing. pp. 395–415. ISBN 1-59273-001-9.
- Page, Ray L. (2000). "Brief History of Flight Simulation". In SimTechT 2000 Proceedings. Sydney: The SimTechT 2000 Organizing and Technical Committee.
- Pouliot, Nicolas A.; Gosselin, Clément M.; Nahon, Meyer A. (January 1998). "Motion Simulation Capabilities of Three-Degree-of-Freedom Flight Simulators". Journal of Aircraft 35 (1): 9–17. doi:10.2514/2.2283.
- "XSimulator DIY Motion Simulator Community". xsimulator.net. 2013-09-24.
- Barnett-Cowan, M.; Harris, L. R. (2009). "Perceived timing of vestibular stimulation relative to touch, light and sound". Experimental Brain Research 198: 221–231. doi:10.1007/s00221-009-1779-4. http://link.springer.com/article/10.1007%2Fs00221-009-1779-4
- Grant, P.; Lee, P.T.S. (2007). "Motion–visual phase-error detection in a flight simulator". Journal of Aircraft 44: 927–935.
- Flash, Tamar; Hogan, Neville (1985). "The coordination of arm movements: an experimentally confirmed mathematical model". The Journal of Neuroscience 5: 1688–1703.
- Chen, S.H.; Fu, L.D. (2010). "An optimal washout filter design for a motion platform with senseless and angular scaling maneuvers". Proceedings of the American Control Conference: 4295–4300.
"An optimal washout filter design for a motion platform with senseless and angular scaling maneuvers". Proceedings of the American Control conference: 4295–4300. - Grant, P.R.; Reid, L.D. (1997). "Motion washout filter tuning: Rules and requirements.". Journal of Aircraft 34 (2): 145–151. doi:10.2514/2.2158. - Springer, K.; Gattringer, H. & Bremer, H. (2011). "Towards Washout Filter Concepts for Motion Simulators on the Base of a Stewart Platform". PAMM 11 (1): 955–956. doi:10.1002/pamm.201110448. - R. Graf and R. Dillmann, "Active acceleration compensation using a Stewart platform on a mobile robot," in Proc. 2nd Euromicro Workshop Advanced Mobile Robots, Brescia, Italy, 1997, pp. 59-64. - Grant, P.R.; Reid, L.D. (1997). "PROTEST: An Expert System for Tuning Simulator Washout Filters". Journal of Aircraft 34 (2): 145–151. - Daniel, B. "Motion Cueing in the Chalmers Driving Simulator: An Optimization-Based Control Approach" (PDF). Chalmers University. Retrieved 14 April 2014. - Telban, R.J. (May 2005). Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches (PDF). NASA Contractor Report CR-2005-213747. - Nahon, M.A.; Reid, L.D. "Simulator motion-drive algorithms-A designer's perspective". Journal of Guidance, Control and Dynamics 13 (2): 356–362. doi:10.2514/3.20557. - Lloyd D Reid; Meyer A. Nahon (July 1988). "Response of airline pilots to variations in flight simulator motion algorithms". Journal of Aircraft 25 (7): 639–646. doi:10.2514/3.45635. - "Effects of Motion on Skill Acquisition in Future Simulators" (PDF). DTIC. - Michael K. McGee. "Assessing Negative Side Effects in Virtual Environments". - U.S. Army Research Institute for the Behavioral and Social Sciences (April 2005). "Introduction to and Review of Simulator Sickness Research" (PDF).
STONE AGE AMPUTATION: Scientists found evidence of sophisticated amputation in this skeleton from the Stone Age. (Courtesy of Antiquity Journal) Stone Age doctors prove to be more medically advanced than we first imagined, as new evidence of surgery undertaken almost 7,000 years ago comes to light. Confirming advanced medical knowledge in 4900 B.C., the findings challenge the existing history of surgery and its development. In a Neolithic site excavated in 2005 at Buthiers-Boulancourt, 40 miles south of Paris, scientists found the skeleton of an old man buried almost 7,000 years ago. Tests showed an intentional and successful amputation in which a sharpened flint was used to cut the man’s humerus bone above the trochlea indent. Impressively, the patient was even anesthetized. The limb was cleanly cut off, and the wound was treated in sterile conditions. It has been common knowledge that Stone Age doctors performed trephinations (that is, cutting through the skull), but amputations have been unheard of up until now. According to a research paper published in the Antiquity Journal, the macroscopic examination has not revealed any infection in contact with this amputation, suggesting that it was conducted in relatively aseptic conditions. Scientists found that the patient survived the operation, and although he suffered from osteoarthritis, he lived for months if not years afterward. According to the Daily Mail, researcher Cécile Buquet-Marcon said that pain-killing plants such as the hallucinogenic Datura were possibly used, and other plants such as sage were probably used to clean the wound. The loss of the patient’s forearm did not exclude him from the community. His grave measures an above average 6.5 feet and contains a schist axe, a flint pick, and the remains of a young animal, which point to a high social rank.
The oldest galaxy known might be a tiny dwarf galaxy orbiting the Milky Way. Segue 1 is very, very small: it appears to contain only a few hundred stars, compared with the few hundred billion stars in the Milky Way Galaxy. Researchers led by Anna Frebel of the Massachusetts Institute of Technology in Cambridge collected detailed information on the elemental composition of six of the brightest of Segue 1's stars using the Las Campanas Observatory's Magellan Telescopes in Chile and the Keck Observatory in Hawaii. The measurements, reported in a paper accepted by The Astrophysical Journal and posted on the arXiv preprint repository, revealed that these stars are made almost entirely of hydrogen and helium, and contain just trace amounts of heavier elements such as iron. No other galaxy studied holds so few heavy elements, making Segue 1 the "least chemically evolved galaxy known." Complex elements are forged inside the cores of stars by the nuclear fusion of lighter elements such as hydrogen and helium. When stars explode in supernovae, even heavier atoms are created. These elements spew into space to infuse the gas that births the next generation of stars, so that each successive generation contains more and more heavy elements, known as metals. "Segue 1 is so ridiculously metal-poor that we suspect at least a couple of the stars are direct descendants of the first stars ever to blow up in the universe," says study co-author Evan Kirby of the University of California, Irvine.
A federal agency has proposed listing 66 species of coral under the Endangered Species Act, which would bolster protections of the animals. The proposed listing comes after a 2009 petition by the Center for Biological Diversity, an environmental group, asserting that the federal government needed to do more to protect coral species. Under the proposal, the National Oceanic and Atmospheric Administration (NOAA) would list seven coral species as endangered and 52 as threatened in the Pacific, with five endangered and two threatened in the Caribbean. The listing could lead to further protections for areas where these corals live, perhaps earning them designation as "critical habitat." Such a step would restrict commercial activities in the areas, while preventing any trade or harvesting of the corals. "Corals provide habitat to support fisheries that feed millions of people; generate jobs and income to local economies through recreation, tourism and fisheries; and protect coastlines from storms and erosion," said NOAA administrator Jane Lubchenco in a statement from the agency. "Yet, scientific research indicates that climate change and other activities are putting these corals at risk. This is an important, sensible next step toward preserving the benefits provided by these species, both now and into the future." NOAA has identified 19 threats to the survival of coral, including ocean acidification, rising ocean temperatures and coral diseases. As the concentration of carbon dioxide increases in the atmosphere, the oceans warm beyond what corals can withstand, leading to coral bleaching and eventually to die-offs. Before the proposed listing is finalized in late 2013, the agency will hold 18 public meetings during a 90-day public comment period.
The McCall-Crabbs books offer 60 short reading passages followed by 8 multiple choice questions. Questions are set up like standardized test questions to familiarize children with that format. They focus on both comprehension and inferential skills. These exercises can be used in a number of ways. The optimal use is to allow exactly three minutes for a student to read the passage and complete the questions. Grade equivalents corresponding to each score (number correct) are shown at the bottom of each page. This gives us an immediate, although not always accurate assessment of how they are doing. Tracking these scores through a number of lessons gives us a much more accurate picture. For some students, the time pressure will be inappropriate, so we can use the lessons untimed, ignoring the grade equivalent scores. In such cases, oral reading of the passages might even be appropriate. There are six books in the series, labeled A-F. These correspond roughly to grade levels 3-8, although children can vary dramatically in reading levels at the same ages. A single teachers manual provides instructions and answer keys for all six books. While there are some concerns because the books are more than 30 years old and don't reflect the Common Core State Standards, homeschoolers and other educators still love them. You cannot correlate these directly to what's now expected at various grade levels since academic standards now expect reading to be taught in kindergarten, something rarely done 30 years ago. Nevertheless, basic reading skills remain the same, and these books are useful for developing and assessing those skills.
Rootstocks and Propagation Fruit trees grown from seed produce inferior fruits and are best used as rootstocks for grafting. Such rootstocks will produce full-size trees, which may not be desirable since such trees may be too big to prune, pick or spray. For example, standard pear trees can grow from 25 to 40 feet tall. This is the reason that stems (called scions) cut from fruit tree cultivars with desirable fruit characteristics are grafted onto rootstocks that will determine the tree’s size. - Fruit trees on dwarf rootstocks mature at eight to 10 feet tall; - Semidwarf rootstocks mature at 12 to 18 feet. (Although dwarf trees can grow in more shallow soils than semidwarf and standard trees, they require much more pruning and training, and are hard to mow under.) The life span of a semidwarf tree is 25 to 30 years; a full-sized tree’s life span is 140 years. Rootstocks also affect: - Years to bearing - How well the tree will withstand drought, waterlogging, cold, disease and other adverse conditions. Grafting to Propagate Grafting is the best way to propagate most fruit trees. Using this method, you can quickly start large numbers of trees of the same cultivar. Grafting techniques take time to master and are best learned by working alongside an experienced tutor. Although there are a number of different grafting methods to choose from, all of them bind two regions of actively dividing cells together as one. Many detailed texts are available on specific techniques for different species. -- Emily Goodman No discussion of growing fruits can avoid mentioning pests and diseases. These are a fact of life for fruit growers, and coping with them is of utmost importance if you wish to sell your produce. You must strike a balance between: - Controlling problems sufficiently to meet consumers’ cosmetic requirements for fruits, and - Using as few poisons as possible to minimize ecological damage and meet consumers’ desire for “natural” or organic produce. Plant Choices and Care Matter Head-off some problems before they start by planting disease- and pest-resistant cultivars of fruit plants wherever possible. In some cases, native American plants are better adapted to local environments than Asian or European species. Then, follow growing techniques that minimize problems: - Maximize air circulation and sunlight for each plant. - Water and prune correctly. - Don’t fertilize after midsummer because this will encourage tender new growth late in the year, when the plant is most vulnerable to winter damage and other problems after that. - Planting large numbers of the same plant in one place makes it easier for the insects and animals that eat them to have their fill, so interspersing different species can help control problems. When you encounter an insect or animal pest, try to minimize its damage with traps, barriers, and other physical deterrents so you don’t have to utilize poisons. For example ... Birds adore mulberries, so planting mulberry trees can distract them from eating your other fruit crops--a win-win solution for everybody. Encourage beneficial insects by planting the small-flowered, herbal plants they use as food and shelter near your fruit plants. Use traps and sticky barriers to catch insects before they reach your fruit. Pesticides should be your last resort. They kill beneficial insects as readily as pests and can harm animals and humans also. 
Organic pesticides, which are made from botanical or biological compounds, such as chemicals found in some plants, are just as toxic as synthetically derived chemicals, although they usually break down faster after use. They are not harmless and must be used with appropriate care. Start lower on the poison chain with soap spray or baking soda compounds you can make yourself and work up. Aim to spray as little as possible throughout the year. Try also to educate your customers. If people understood the chemical price they were paying for “perfect” fruit, they might learn to tolerate produce that looks different, but that’s healthier, less polluting and often better tasting. -- Lorraine Anderson This article contains excerpts from "The Art of Fruit Trees" by Lorraine Anderson and "Twisting Tradition in the Orchard" by Emily Goodman. Read the full articles in Popular Farming Series: Orcharding, a publication with in-depth information for those who grow or would like to grow orchard crops. Buy one online or call (800) PET-BOOK (738-2665).
Equations and Inequalities: Elementary Problems and Theorems in Algebra and Number Theory
Science Library (Li and Ma), call number QA218 .H4713 2000. Includes bibliographical references and index.
Contents:
1 Algebraic Identities and Equations: 1 Formulas for Powers; 2 Finite Sums; 3 Polynomials; 4 Symmetric Polynomials; 5 Systems of Equations; 6 Irrational Equations; 7 Some Applications of Complex Numbers.
2 Algebraic Inequalities: 1 Definitions and Properties; 2 Basic Methods; 3 The Use of Algebraic Formulas; 4 The Method of Squares; 5 The Discriminant and Cauchy's Inequality; 6 The Induction Principle; 7 Chebyshev's Inequality; 8 Inequalities Between Means; 9 Appendix on Irrational Numbers.
3 Number Theory: 1 Basic Concepts; 2 Prime Numbers; 3 Congruences; 4 Congruences in One Variable; 5 Diophantine Equations; 6 Solvability of Diophantine Equations; 7 Integer Part and Fractional Part; 8 Base Representations; 9 Dirichlet's Principle; 10 Polynomials.
4 Hints and Answers: 1 Hints and Answers to Chapter 1; 2 Hints and Answers to Chapter 2; 3 Hints and Answers to Chapter 3.
Publisher's summary: A look at solving problems in three areas of classical elementary mathematics: equations and systems of equations of various kinds, algebraic inequalities, and elementary number theory, in particular divisibility and Diophantine equations. In each topic, brief theoretical discussions are followed by carefully worked-out examples of increasing difficulty, and by exercises which range from routine to rather more challenging problems. While it emphasizes some methods that are not usually covered in beginning university courses, the book nevertheless teaches techniques and skills which are useful beyond the specific topics covered here. With approximately 330 examples and 760 exercises.
Series: CMS Books in Mathematics. ISBN 0-387-98942-0; 978-0-387-98942-6 (acid-free paper).
This article briefly presents the few simple visual principles of the Hebrew language. The examples are mostly set in fonts by Maxim Iorsh of the Culmus Project. Hebrew text runs from right to left, unlike Latin-script languages, which read left to right. This affects both the structure and direction of the letterforms. As Hebrew evolved, some Hebrew letters were designated for use only at the end of words. These letters have separate glyphs, but in essence they are "versions" of other letters. Hebrew vowels aren't letters; they're small diacritic symbols that decorate the letterforms. Diacritics aren't common in everyday Hebrew, but are essential in text faces. The Hebrew alphabet consists of only a single set of letters, unlike Latin alphabets, which are bicameral: they may feature, for example, both uppercase and lowercase letters in one alphabet. The Hebrew hyphen ("Maqaf") is unique in Hebrew punctuation, because it is used to connect two words to form a single term. Notice how it aligns with the top horizontal strokes. The Hebrew period in "serif" faces usually looks like a tiny tilted square. This is also true for question marks, etc. Furthermore, traditional Hebrew calligraphy extensively utilizes the shape of the diamond in the letterforms themselves. Hebrew uses Western numerals and punctuation, which usually align with the average letter height. Single quotes and double quotes are commonly in "typewriter" style. Only the letter "Lamed" in Hebrew extends above the "x-height". Some letters extend below the "baseline"; the most obvious are four of the end-of-word letters and the letter "Kof", but the "Ayin", in many cases, also descends slightly below the baseline. Horizontal strokes in "serif" Hebrew letterforms are thicker than vertical strokes. Notice the change in thickness of diagonal strokes. Hebrew letterforms, unlike Latin ones, don't conform to simple geometry. In classic faces, there isn't even one completely straight angle! Note the unique, yet "squarish" nature of the letterforms, which is opposite to the Roman "circle-based" shapes. While Latin letterforms can be likened to sophisticated architectural constructs that rise from the (virtual) ground up, Hebrew letterforms are more like wire hangers that hang from an invisible coat rack. "Closed" letterforms (those with vertical strokes on both sides) are usually rendered slightly wider, to satisfy human optical perception. There is no "true" Hebrew italic style. The closest thing to italics in Hebrew is the "David Oblique" typeface, which was designed in accordance with ancient semi-cursive scripts, and therefore can be considered somewhat "italic". The Hebrew cursive alphabet is the Hebrew "script", which is almost always used by literate Hebrew speakers for handwriting. This style of letters looks like a different alphabet altogether, and features a greater variety of shapes and more roundness.
Earth Island News The term “frozen” often connotes “immune to change.” Perhaps no land is so firmly tied to that word as Siberia. Of all parts of Siberia, no place so richly deserves this connection as the grand Lena River Valley. The ninth-longest river in the world, the Lena flows through one of the world’s iciest lands, where the sun is seldom seen during winter. Nearly 80 percent of the watershed is continuous permafrost – earth that never thaws fully, even in summer. However, climate change is reaching this remote outpost; this may have consequences not just locally but for the rest of the world as well. Perhaps the most striking effect of global warming is that it attacks the very foundation of the Lena Basin: permafrost. Permafrost has long been the bane of Siberians. Digging anything from wells to basements or simply laying a foundation requires cutting through many feet of ice. As Siberia warms, though, the permafrost is going the way of the disappearing Arctic ice cap. As permafrost melts, the earth gives way underneath buildings and roads. A United Nations Environment Programme report says that 300 buildings in Yakutsk, the regional capital, have been damaged by permafrost melt. UNEP predicts that over 70 percent of apartment buildings built between 1950 and 1990 will fail by 2010. By 2030, that total reaches 100 percent. The natural landscape is being transformed as well. Dr. Robert Holmes, a researcher at the Woods Hole Research Center in Massachusetts, reports that in some areas of the Arctic, new lakes and marshes are being formed. “As the permafrost thaws, the ground surface drops somewhat,” says Holmes, “causing a depression that when filled with water creates a lake.” Existing lakes are also deepening due to the thaw. Holmes notes that in Alaska, and perhaps elsewhere in the Arctic, the permafrost below some existing lakes has been pierced, draining them completely. Similarly, the New York Times reports that the freeing up of coastal ice is threatening an entire village in northeastern Russia, as the coastline erodes away at a rate of 15 to 18 feet a year. Dr. Holmes and his colleagues have discovered another change affecting the Lena, this one perhaps even more profound. As they reported in an article in the December 13, 2002 issue of Science, the amount of water flowing to the Arctic from the Lena has increased seven percent in the last seven decades (abstract here). With a river as large as the Lena, that is a significant amount. One might think Siberia’s great ice melt is to blame, but Dr. James McClelland, another researcher on the team, says that Eurasia simply doesn’t have enough ice stored in glaciers to account for the Lena’s swelling. “We have calculated that it would have taken a much larger change in permafrost thaw depths than anyone has observed to account for the change in river discharge.” The actual story begins far away. “In the tropics, high temperatures and intense sunlight evaporate huge amounts of water, and the atmosphere transports much of this moisture away from the tropics and toward the poles,” explains Holmes. “Warmer air can hold more moisture, so as the Earth warms, there is more moisture in the atmosphere. So we think that global warming is causing more atmospheric moisture to enter Siberia, leading to more precipitation.” One important local effect may be that increased volume of the Lena will exacerbate the river’s devastating seasonal floods. 
Any snowmelt-fed river rises in the spring, but the Lena and other Russian rivers are unusual in that they flow north. In most other rivers, the headwaters high in the mountains thaw last, but the Lena’s source, a thousand miles south of its delta, thaws first. Ice floats downstream and jams the river, causing flooding. In 2001, a giant ice dam submerged Yakutsk and Kirensk and destroyed much of Lensk. The biggest change, though, could be international. “The ironic part,” says McClelland, “is that global warming could actually lead to the cooling of Northern Europe.” The mechanism is complex. “It has been hypothesized that increases in Arctic river discharge (including the Lena) could slow or stop North Atlantic Deep Water (NADW) formation in the Greenland-Iceland-Norwegian (GIN) Seas and Labrador Sea,” he explains. “The combination of salinity and cold temperatures in the GIN and Labrador seas lead to formation of very dense water that sinks and flows southward along the bottom of the ocean. This water is replaced by warmer (less dense) water flowing north at the ocean’s surface. This warm water flowing from south to north in the Atlantic, mainly via the Gulf Stream, keeps Northern Europe warmer than it otherwise would be.” Without the NADW, this stream of warm water bathing Northern Europe would cease. McClelland continues: “Increasing river inputs work against NADW formation [because] fresh water is much less dense than saltwater. By adding more fresh water to the NADW formation regions, the surface waters become less dense and therefore less likely to sink and move southward.” Global warming may amplify itself as Siberia thaws. “There is a huge amount of organic matter (dead plants and animals) locked up in permafrost, frozen and preserved for thousands of years,” writes Holmes. “As the permafrost thaws, this organic matter is ‘removed from the freezer,’ and much of it may then be decomposed and converted to carbon dioxide. More carbon dioxide leads to more warming, causing more permafrost thaw, releasing more ancient organic matter, leading to more carbon dioxide, etc.” “The potential consequences of changes in discharge from the Lena River warrant conservation measures,” says McClelland. However, neither he nor Russian environmental experts queried knew of any conservation projects specifically targeting the Lena. Russia has ratified the Kyoto Treaty, putting it into effect among signatory nations, but that is only a modest beginning. The Lena River region is a case study in the complexity of global warming. Warmer temperatures caused by industrial activity thousands of miles from the Lena are causing rising temperatures in the tropics, leading to more rain in Siberia, perhaps eventually cooling Northern Europe. It is truly a global phenomenon with enormous local consequences. Even what was frozen in the icebox of Siberia will change.
Forces of weathering In this chapter: The natural chemical or physical processes that change rocks are called weathering Heat and water usually speed up the weathering process There are three types of weathering: physical; chemical; and biotic The changing of rocks by physical forces like temperature changes, heat, wind and frost is called physical weathering Rocks can be changed by chemical reactions that involve water and the rock's minerals Different forces of nature create different landforms. These forces can be internal or external. Internal forces are generated from inside the Earth. External natural forces are generated above the Earth's crust. This chapter looks at the external type of natural force that also plays a very important role in shaping the surface of the Earth. This chapter is about weathering. Weather and weathering Plate tectonics change the Earth's crust, causing different types of rocks to be formed, destroyed or mixed together. Some rocks are lifted to the surface while others are buried deep down in the Earth's crust. As soon as rocks are lifted to the surface they become exposed to wind, rain, frost, the heat of the sun or to water waves. In other words, they are exposed to different weather conditions. These weather conditions change the rocks. The natural chemical or physical processes that change rocks are called weathering. Weathering is a very slow process. Unlike earthquakes and volcanic eruptions that can change landforms in a matter of hours, changes caused by weathering take thousands or millions of years. The speed at which weathering forces work depends a lot on the action of water and the temperature of the area. Heat and water usually speed up the weathering process by speeding up the chemical reactions that change the rocks. Types of weathering There are three types of weathering: physical; chemical; and biotic. The changing of rocks by physical forces like temperature changes, heat, wind and frost is called physical weathering. For example, when water freezes it expands, which means that it takes up more space. A block of ice, for example, will take up more space than the amount of liquid from which it was formed. When water freezes it generates a very strong force that can break the hardest rocks. Physical weathering usually affects rocks near the Earth's surface. Rocks can be changed by chemical reactions that involve water and the rocks' minerals. This sort of weathering is called chemical weathering. Chemical weathering can change the colour of rocks, break them up or cause them to form different materials which, in turn, will form new types of rocks. The rusty, red-coloured spots on some rocks, for example, are the result of chemical weathering. These spots appear because some rocks contain iron. When iron reacts with air and water it becomes rust. Rainwater contains different chemicals which have been absorbed from the air. Rainwater is a very powerful weathering force. Biotic weathering is the combination of physical and chemical weathering caused by plants and animals. A plant's roots, for example, can crumble surrounding rocks while the plant grows. Some animals can produce a type of acid that affects rocks. Forces of erosion Weathering usually involves different natural forces that work together. For example, flowing water, wind or ice contribute to the natural destructive process called erosion. There are two stages in the erosion process: weathering and transporting. Firstly, rocks are broken into smaller pieces by weathering. 
They are then transported from the higher to the lower parts of the Earth's crust. Rivers and sea waves are important rock transporters. The force of flowing water is very strong. Rivers and streams slowly soften and mould the lines of our landscapes. Water wears away the materials in river beds and their banks and carries them from one area to another. The minerals in river water make ocean water salty. The faster the river flows, the more rocks and minerals it can move. Water flows at different speed in different parts of a river. The water flow is slowest near the banks and at the bottom of the river. Movements of earth down a steep slope are called landslides. Landslides happen when hills or mountains become unstable. Slopes become unstable when the pieces of rock that form the slopes are not properly supported. Human activities, water, frost and ice can cause landslides. Sometimes, too many buildings are built on a slope which is not strong enough to support them. Cutting down forests or digging channels can also cause landslides. Rainwater which falls during a storm loosens the soil, turning it into mud that slides down from slopes. This is called a mudslide. Frost can push pieces of rocks up the slope. These pieces melt then settle lower down the slope, causing a very slow movement of soil, called soil creep. Frost also makes rocks split, forming big cracks along the slopes. Glaciers are large streams of ice that flow through mountain valleys. Glaciers slowly slide lower and lower down the valley until they finally melt. The ice in glaciers is made of compressed snow lumps that are packed together.
Solubility is defined as how much of a solute will dissolve in a particular amount of a solvent. The solubility of most solutes varies from one solvent to another. For example, when sugar dissolves in water or in methyl alcohol, the sugar is the solute and the water or alcohol is the solvent. According to HowStuffWorks, solubility differs greatly depending on the state of matter, temperature and pressure. For example, most solids and liquids increase in solubility at higher temperatures, but in the same situation gases decrease in solubility. Under pressure, though, gases become more soluble. An example of this is carbonated drinks. Drinks such as soda are bottled under pressure because gases are more soluble in this state. When the pressure is released by someone opening the container, the carbon dioxide instantly starts to lose its solubility and begins to escape. Based on these properties, there are several examples of solubility. Salt, for instance, is soluble in water, but it isn't soluble in oil. It is possible to add both cream and sugar to coffee because both are soluble in the drink. Another example of solubility is in the air: oxygen is soluble in nitrogen.
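The pressure behavior of gases described above is commonly summarized by Henry's law, which states that the dissolved concentration of a gas is proportional to its partial pressure at a given temperature. The short sketch below is an illustrative calculation only and is not part of the original article; the Henry's-law constant used for carbon dioxide in water is an approximate room-temperature textbook value.

```python
# Henry's law: dissolved gas concentration C = kH * p (at constant temperature).
KH_CO2 = 0.034   # approximate Henry's law constant for CO2 in water, mol/(L*atm), ~25 C

def dissolved_co2(partial_pressure_atm):
    """Equilibrium CO2 concentration in water at the given partial pressure."""
    return KH_CO2 * partial_pressure_atm

sealed = dissolved_co2(3.0)     # a sealed soda bottle holds a few atmospheres of CO2
opened = dissolved_co2(0.0004)  # roughly atmospheric CO2 once the cap is off
# sealed is about 0.1 mol/L while opened is about 1e-5 mol/L,
# which is why the dissolved CO2 escapes as bubbles when the bottle is opened.
```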
Many thousands of enslaved people in the South did not wait for U.S. politicians to work out their fate; they fled to Union army lines, thereby applying pressure for governmental action that would transform a war for the Union into one that would also kill slavery. The cost of the war was appalling. More American soldiers lost their lives than in all other wars combined, from the colonial period through the last phase of the Vietnam War. The war brought wide-scale economic destruction to the Confederate states, which lost two-thirds of their assessed wealth (emancipated slaves accounted for much of this). In contrast, the northern economy thrived. Numbers convey a sense of the relative economic cost: between 1860 and 1870, northern wealth increased by 50 percent; during that same decade, southern wealth decreased by 60 percent. People remembered the war in different ways. Most white northerners recalled a crusade that saved the Union.
There may be a suite of organic chemical reactions occurring in interstellar space that astronomers haven't considered. In 2012, astronomers discovered methoxy molecules, containing carbon, hydrogen and oxygen, in the Perseus molecular cloud, around 600 light years from Earth. But researchers were unable to reproduce this molecule in the lab by allowing reactants to condense on dust grains, leaving a mystery as to how it could have formed. The answer was found in quantum weirdness that can generate a molecule in space that shouldn't exist by the classic rules of chemistry. In short, interstellar space is a kind of quantum chemistry lab that may create a host of other organic molecules astronomers have discovered in space. Methoxy can also be created by combining a hydroxyl radical and methanol gas, both present in space, through a process called quantum tunnelling that gives the hydroxyl radical a chance to tunnel through the energy barrier instead of going over it. Heard and colleagues discovered that, despite the presence of a barrier, the rate coefficient for the reaction between the hydroxyl radical (OH) and methanol, one of the most abundant organic molecules in space, is almost two orders of magnitude larger at 63 K than previously measured at ∼200 K. At low temperatures, the molecules slow down, increasing the likelihood of tunnelling. "At normal temperatures they just collide off each other, but when you go down in temperature they hang out together long enough," says Heard. The team also observed the formation of the methoxy radical, created via a hydrogen-bonded complex that is sufficiently long-lived to undergo quantum-mechanical tunnelling. They concluded that this tunnelling mechanism for the oxidation of organic molecules by OH is widespread in low-temperature interstellar environments. The reaction occurred 50 times faster via quantum tunnelling than it would at room temperature by hurdling the energy barrier. Empty space is much colder than 63 kelvin, but dust clouds near stars can reach this temperature, Heard added. "We're showing there is organic chemistry in space of the type of reactions where it was assumed these just wouldn't happen," says Heard.
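For context, classical over-the-barrier kinetics would predict the opposite temperature trend. The sketch below is purely illustrative, with made-up placeholder values for the activation energy and prefactor (not figures from the study): with a fixed barrier, the Arrhenius rate k = A·exp(-Ea/(R·T)) collapses as the temperature drops, which is why a rate coefficient measured to be larger at 63 K points to tunnelling rather than classical chemistry.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_rate(prefactor, activation_energy_j_mol, temperature_k):
    """Classical over-the-barrier rate: k = A * exp(-Ea / (R * T))."""
    return prefactor * math.exp(-activation_energy_j_mol / (R * temperature_k))

# Placeholder values for illustration only (not taken from the paper):
A = 1.0e-11   # prefactor, cm^3 molecule^-1 s^-1
Ea = 5.0e3    # modest activation barrier, J/mol

k_200 = arrhenius_rate(A, Ea, 200.0)
k_63 = arrhenius_rate(A, Ea, 63.0)
# With these numbers k_63 is several hundred times SMALLER than k_200,
# the reverse of the measured behavior, illustrating why tunnelling is invoked.
```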
“These are the smallest dust grains known, comprising only 10 to 50 atoms; spun up by collisions with atoms or photons, they emit radiation at frequencies between 10 and 60 GHz,” he explains. This region in the constellation of Perseus shown was one of two regions within our Galaxy studied in detail. Thanks to Planck's high sensitivity and to its unprecedented spectral coverage, it has been possible to characterise the anomalous emission arising from these two objects in such great detail that many of the alternative theories could be discarded, and to show that at least a significant contribution to the AME, if not the only one, is due to nano-scale spinning dust grains. Journal reference: Nature Chemistry, DOI: 10.1038/NCHEM.1692 The Daily Galaxy via Nature Chemistry, Space.com, and New Scientist
Sympathy exists when the feelings or emotions of one person lead to similar feelings in another person so that they share feeling. Mostly sympathy means the sharing of unhappiness or suffering, but it can also mean sharing other (positive) emotions. In a broader sense, it can refer to the sharing of political or ideological sentiments, such as in the phrase "a communist sympathiser". The psychological state of sympathy is closely linked with that of empathy, but is not identical to it. Empathy is understanding and feeling another person's emotions as they feel them, but makes no statement as to how they are viewed. Sympathy, by contrast, implies a degree of equal feeling, that is, the sympathiser views the matter similarly to how the person themselves does. It thus implies concern, or care or a wish to reduce negative feelings others are experiencing.