Galileo Galilei’s experiments on the motions of falling and rolling objects, described in his 1638 book, “Two New Sciences,” are considered by many to be the beginning of modern science. Now researchers at MIT have conducted a variation on his experiments that has produced unexpected results. Galileo used rigid materials — metal balls and a wooden ramp — for his tests on how bodies move down an inclined plane. Since then, a few tests have been done with solid balls rolling down a flexible surface. But until recently, nobody had tried rolling flexible objects down a solid plane. MIT professors Pedro Reis and John Bush, along with visiting student Pascal Raux and visiting professor Christophe Clanet, have now carried out experiments using a variety of flexible, hollow cylinders — essentially wide rubber bands — with different degrees of elasticity, and have derived a set of equations to describe the behavior of these "rolling ribbons." The work was described in a paper published in July in the journal Physical Review Letters. Flexible ribbons initially settle into an oval shape because of gravity, and one might expect them to get rounder as they roll due to centrifugal force. But the opposite occurs. The faster the ribbons roll, the more they lose their circular shape, eventually assuming a two-lobed “peanut” shape. At even higher speeds, the sagging middle droops so far that it comes in contact with the bottom of the ribbon, causing friction that makes the loop suddenly lurch backward. The researchers say the behavior is the result of “a delicate coupling between rolling, bending and stretching.” Though this was basic research, Reis speculates that the resulting analysis might ultimately be useful for applications as varied as predicting the motions of microscopic cylinders called carbon nanotubes, the behavior of drill casings in deep wells, and the way blood cells move through veins and arteries. 
(Red blood cells are known to have a similar characteristic peanut-like shape, but that shape has never been fully explained.) Bush, a professor of applied mathematics, explains that “one of our goals is to better understand the role of flexibility in locomotion.” The research was prompted by a suggestion from Clanet, who taught a course at MIT during a stint as a visiting lecturer. He is a professor working at the Laboratory of Hydrodynamics at the Ecole Polytechnique in Palaiseau, outside Paris. Raux, a student visiting MIT from Paris, was the lead author of the paper. Video courtesy of Pascal Raux and Pedro Reis. This video illustrates the shape transformation of a flexible polymer ribbon, otherwise circular, when rotated on a treadmill. Why has it taken almost four centuries for scientists to take this next step from Galileo’s work? Reis explains that despite their apparent simplicity, the behavior of flexible structures is often nonlinear — that is, it requires more complicated mathematical equations to describe than that of simple rigid objects, and so the analysis wasn’t feasible before the advent of computers. In addition, only relatively recently has it become easy to make objects out of flexible polymers with any desired degree of elasticity and shape using modern fabrication tools. In this case, team members had to create the flexible cylinders from scratch — they employed a polymer that dentists use to make casts — because commercially available versions, such as wide rubber bands, do not have precise enough shapes with constant thickness. Lakshminarayanan Mahadevan, a professor of applied mathematics at Harvard University, has published research on a different variation of Galileo’s experiments but was not involved in this work. 
Mahadevan says this basic research “has implications for how surfaces in contact move relative to each other.” But what really matters, he says, are the basic principles discovered in this way: “The particular manifestation is less important than the general principles that underlie these types of deformations.” As with much basic research, the ultimate usefulness of the work is hard to predict, although there are clear examples of areas where such rolling behavior occurs. But for the researchers, there was a real thrill in finding a way to extend work that has such a fabled history. “One thing that made it really neat for us from the start was the connection with Galileo, to take an experiment that has been a classic” and extend it in a new direction, says Reis, the Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering and Mechanical Engineering. This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.
(PhysOrg.com) -- Scientists have taken a critical step toward the development of new and more effective antibacterial drugs by identifying exactly how a specific antibiotic sets up a road block that halts bacterial growth. The antibiotic, myxopyronin, is a natural substance that is made by bacteria to fend off other bacteria. Scientists already knew that this antibiotic inhibited the actions of an enzyme called RNA polymerase, which sets gene expression in motion and is essential to the life of any cell. But until now, researchers did not know the mechanism behind how the antibiotic actually killed the bacteria. Key to investigating this mechanism is the use of the powerful imaging technique X-ray crystallography, which allows researchers to see the fine details of the complex between the antibiotic and its target. In the case of myxopyronin, the antibiotic binds to RNA polymerase in a way that interferes with the enzyme’s ability to use DNA to start the process of activating genes so they can make proteins. “This is the first antibiotic that we know that inhibits polymerase before it even starts RNA synthesis,” said Irina Artsimovitch, a coauthor of the study and an associate professor of microbiology at Ohio State University. The research is published online in the journal Nature. Artsimovitch is co-principal investigator on the work along with Dmitry Vassylyev of the University of Alabama at Birmingham, who led the use of X-ray crystallography to determine the structure. Research teams from the two universities worked with staff at Anadys Pharmaceuticals Inc. of San Diego, which manufactured a synthetic form of the antibiotic used for this study. The results of the study, and additional research planned to explore modifications of the synthetic antibiotic’s structure, could lead to the development and commercial availability of a new class of antibiotic drugs. “As a natural substance, it is what it is. 
If you want to design a better antibiotic, you can only do that if you know what features are important,” Artsimovitch said. Using the synthetic form of the antibiotic, called dMyx, the researchers were able to observe how it inhibits the RNA polymerases of the two bacterial species used for this work: Escherichia coli (E. coli) and Thermus thermophilus. The dMyx attaches to a site on the RNA polymerase enzyme that is near DNA. The enzyme’s interaction with DNA initiates the transcription process by which genes are expressed and make proteins, essential steps for the survival of the bacteria. The researchers observed that dMyx binding effectively halted what is called DNA melting – the separation of the two strands of the DNA double helix. This separation must occur for the enzyme to complete its task, but in the presence of the antibiotic, the enzyme instead formed a loop that blocked access to the DNA. In the context of watching this antibiotic in action, the scientists discovered new information about how RNA polymerase itself works. The enzyme is known to separate the double helix of DNA and use one strand to match nucleotides and make a copy of genetic material. “We think of it as a one-step transition. RNA polymerase melts the DNA to create a bubble of 14 nucleotides, and that’s where it starts its work,” Artsimovitch said. “But what we saw was that the polymerase uses two steps. It melts a little bit of the DNA, stops to check its progress, and then continues the melting further downstream.” “But the antibiotic switches it all into one state and it doesn’t allow the second step. It creates a static clash, a road block that cannot be passed.” The scientists were able to determine that a segment of the enzyme called switch-2 refolded into a loop upon the antibiotic binding. To test the role of the switch-2 segment further, the researchers manipulated the enzyme by predicting mutations that might naturally occur as the bacteria cells tried to defend themselves against the antibiotic. 
These predicted mutations caused switch-2 to refold by itself, even without dMyx. This showed that dMyx is an attractive drug candidate, Artsimovitch said. The changes the bacteria would likely make to defend against this specific antibiotic’s binding activity seem to interfere with the RNA polymerase’s ability to perform its essential task – so even while trying to mutate and become resistant, the bacteria probably would die anyway. The scientists were able to demonstrate that the location of the dMyx binding site is different from the sites used by other antibiotics – thus, dMyx would still be active against bacterial strains already resistant to the existing drugs. Moreover, the site is so specific that this agent would not do damage to the RNA polymerase activity in healthy human cells. “In terms of antibiotics, two things are very important: You want it to kill bacteria and you want it not to kill you,” Artsimovitch said. “For this reason, you have to use substances that inhibit only bacterial polymerase, but do not inhibit ours.” Provided by Ohio State University
FORDHAM UNIVERSITY
Fordham College Lincoln Center
Dept. of Computer and Info. Science
CSLU 3593 Computer Organization
Spring, 2005
Homework Assignment 3
Due date: February 14
- §§B.2,B.3 A combinational circuit is to be designed that compares two 2-bit (unsigned) binary numbers A and B. Thus it has 4 inputs, 2 for A and 2 for B. It has 3 outputs, G, E, and L: G is true if A > B, E is true if A = B, and L is true if A < B numerically. Write down the truth table for this circuit, and express each output as a sum of minterms (using Σm notation).
- §§B.3 A combinational circuit is to be designed for a quality-control station on an assembly line. The circuit has three inputs, A, B, and C, providing the results of quality checks, and one output P that is true if the product passes the test. If C is false, then the product fails regardless of A and B. If C is true, then the product passes if either A or B is true. It is not possible for both A and B to be true at the same time. Write down the function table for this circuit, showing both input and output don't-cares. Write down a boolean expression for P in sum-of-products form, using the don't-cares to make P as simple as possible.
- §B.8 A T (or toggle) flip-flop has a single control input T besides the clock. If T = 0, then the flip-flop holds its output steady. If T = 1, then upon arrival of the clock signal, the flip-flop changes its output to the complement of what it was before. Show how to implement a T flip-flop using an edge-triggered R-S flip-flop and some external logic gates.
- §B.8 Figures B.8.8 and B.8.9 on pages B-55 and B-66 illustrate the implementation of the register file for the MIPS datapath. Pretend that a new register file is to be built, but that there are only two registers, one read port and one write port, and that each register has only 2 bits of data. 
Draw a new figure combining the designs in Figures B.8.8 and B.8.9 into one figure with both write port and read port implementations shown, and in which every wire corresponds to only 1 bit of data (unlike the figures in the text, in which some wires are 1 bit, some are 5 bits, and some 32 bits). Depict the registers as D flip-flops. You do not need to show how to implement a D flip-flop, a decoder, or a multiplexor.
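Though the truth table for the first problem should be worked out by hand, a short script can serve as a sanity check by enumerating all 16 input rows and listing the minterm indices of each output. The bit and output names used here (a1, a0, b1, b0 and G, E, L) are illustrative placeholders, since the assignment's own symbols were garbled in this copy:

```python
# Enumerate the truth table for a 2-bit comparator.
# Inputs: two 2-bit unsigned values a, b in 0..3, with bits a1 a0 b1 b0.
# Outputs: G (a > b), E (a == b), L (a < b).
def comparator_truth_table():
    rows = []
    for a in range(4):
        for b in range(4):
            a1, a0 = (a >> 1) & 1, a & 1  # high and low bits of a
            b1, b0 = (b >> 1) & 1, b & 1  # high and low bits of b
            rows.append((a1, a0, b1, b0, int(a > b), int(a == b), int(a < b)))
    return rows

def minterms(rows, out_index):
    # Row i corresponds to minterm i under the input ordering a1 a0 b1 b0.
    return [i for i, r in enumerate(rows) if r[4 + out_index] == 1]
```

For example, the equality output E is true exactly on the rows where the two numbers match, i.e. minterms 0, 5, 10, and 15.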
Visit the Centers for Disease Control and Prevention for up-to-date information on the Ebola outbreak in West Africa. What is Ebola? Ebola (Ebola virus disease or EVD) is a severe, often fatal, illness that can infect humans. How is Ebola spread? Ebola is spread to people when they touch a sick person’s body fluids (blood, breast milk, urine, saliva, vomit, sweat, semen, or stool). Objects and surfaces that are wet with infected body fluids may spread Ebola, including but not limited to clothing and bed sheets. Ebola is not spread through the air or through water. Ebola is also not usually spread through eating food, but may be spread through handling or eating infected animals (bushmeat) from areas with an Ebola outbreak. Ebola can spread between family and friends if they come in contact with the body fluids of an ill person. Ebola may spread during funerals or burial rituals if people have close contact with the body of a person who died of Ebola. Ebola can also spread in health care settings (such as a clinic or hospital) if the hospital staff does not wear the correct protective equipment, such as masks, gowns, and gloves. Someone with Ebola can only spread the illness to others after they begin feeling sick, not before. What are the symptoms of Ebola? Symptoms usually include fever, headache, feeling weak, joint and muscle pain, nausea, diarrhea, and vomiting. Some people may also have a rash, red eyes, cough, sore throat, or bleeding inside and outside of the body. Symptoms can begin 2 to 21 days after being exposed to infected body fluids, but usually begin in 8 to 10 days. How is Ebola diagnosed? The symptoms of Ebola are similar to other infections that are more common in West Africa, such as malaria. People with flu-like symptoms in Boston may be sick with a number of different diseases. Call your health care provider to see if you should come in to be checked. 
Health care providers can use laboratory tests to find out if someone is sick with Ebola or something different. How is Ebola treated? There is no FDA-approved treatment or vaccine available for Ebola at this time. Supportive care, such as rest, fluids, and medicine to reduce a fever, is given to people who are ill. New treatments are being researched. How can Ebola be prevented? It is important to avoid contact with blood and other body fluids of any ill person. Objects and surfaces contaminated with body fluids can be cleaned with a bleach solution or other approved household cleaner. Health care providers take special measures to prevent the spread of Ebola if they think someone is sick, including:
- Wearing protective clothing (such as masks, gloves, gowns, and goggles)
- Using infection control measures (such as cleaning surfaces and equipment)
- Isolating people who may be sick with Ebola to keep others from getting sick
Dead bodies of Ebola victims can still spread Ebola to others. In areas with an Ebola outbreak, avoid touching dead bodies or fluids from dead bodies. Contact local health officials (such as a Ministry of Health) for assistance. Good hand washing can also help prevent the spread of Ebola and other germs. For help finding a health care provider in Boston, please call the Mayor’s Health Line.
What is a tree? This may seem like an easy question at first. We see trees everywhere and we know what they are when we see them, but what is it that really makes a tree, a tree? The first part of the description is fairly easy. A tree has a woody stem and is a perennial, meaning that it lives for many years. However, there are bushes and other plants that fit this description and aren't really trees. There actually isn't a scientific description of a tree, so most people and books use a rule of thumb. If a plant has a woody stem, is a perennial, and grows to more than 13 feet tall, then it's a tree. Of course, there will always be tree-like bushes and bush-like trees, but, for the most part, we know a tree when we see one.
Types of Trees
- Conifers and Evergreens: Coniferous trees have narrow hard leaves called scales or needles. Most of them are evergreen, meaning that they stay green during the winter and don't have leaves that change colors and drop during the autumn season. Conifers get their name from having cones that house their seeds. Some examples of coniferous trees include cypresses, pines, cedars, firs, and redwoods. Conifers are famous for including the tallest and largest forms of life: the giant sequoias and redwood trees, which can be found at Redwood National Park in California. The giant redwood trees grow to 115 m (379 feet) tall. That's a tree taller than a football field is long!
- Deciduous and Broadleaf Trees: Another type of tree is the broadleaf. Most broadleaf trees are deciduous, meaning that they shed their leaves each fall. The name broadleaf comes from their wide leaves, unlike the thin needles of the conifers. These trees also produce flowers. Sometimes the flowers are in the form of fruit or nuts, which we can often eat. Some examples of broadleaf trees are oaks, beeches, maples, elms, and birches.
How do trees grow?
As trees get older they grow taller, wider, and deeper. 
Trees grow taller by growth from new cells at the tips of their branches. They also grow deeper in the form of roots in the ground, which collect water and nutrients from the soil. The roots grow at the tips like the branches. Trees also grow wider in their trunks and branches. This growth takes place at the outer layer called the cambium. Since growth of the cambium stops during the winter or cold months, tree trunks develop rings. Each ring represents a year of growth. We can see how old trees are by counting their rings. (Diagram of rings in a young conifer, from Fritts, 1976.)
Other Tree Features
- Leaves - The leaves on a tree are important for gathering sunlight for photosynthesis. Some trees have small or narrow leaves, and some trees have huge leaves.
- Bark - Bark is the protective covering, sort of like skin, for tree branches. Bark protects the tree from animals and even diseases.
Trees and Humans
Trees have provided humans with building materials for homes, furniture, and more throughout all of human history. Trees have also been a great source of fuel as fires for keeping warm and cooking food. We also gather a lot of our food from trees, such as fruit and nuts. However, trees are also important to our environment. Trees are a primary source of oxygen. They take in carbon dioxide and in turn provide oxygen. We couldn't live without trees! On top of that, trees provide us with shade and beauty, so be sure to hug a tree today!
Definition - What does Recursive Loop mean? A recursive loop is said to have occurred when a function, module or an entity keeps making calls to itself repeatedly, thus forming an almost never-ending loop. Recursive constructs are used in several algorithms, such as the algorithm used for solving the Tower of Hanoi problem. Most programming languages implement recursion by allowing a function to call itself. Recursive loops are also known simply as recursion. Techopedia explains Recursive Loop A recursive loop is a special type of looping construct where a particular entity tries to invoke itself from within its loop code. Thus the entity keeps calling itself until a specific condition or break is reached. Recursive loops are usually implemented with the help of a recursive function call, where a call to a particular function is placed within the function definition itself. Programming languages capable of implementing recursive loops can solve problems that would otherwise require iterative structures like "while" and "for" by using recursion alone. Thus recursive loops can replace the traditional loop constructs and are sometimes useful in creating less bulky code. Recursion can also simplify code by breaking complex logic down into simple statements. Some of the most common applications of recursive functions include the Tower of Hanoi, computation of the series e = 1/0! + 1/1! + 1/2! + …, and computation of the gcd and the factorial. Recursion is also used in cases when the programmer is not sure about the exact size of the data. Recursion in computing can be classified into the following types:
- Single recursion
- Multiple recursion
- Indirect recursion
- Anonymous recursion
- Structural recursion
- Generative recursion
Using recursive loops may affect the performance of a program. Recursive calls consume space on the call stack, and when the stack is exhausted (a stack overflow), the program may terminate before the intended end of the recursion.
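The ideas above can be made concrete with a minimal sketch: each function calls itself until a base case stops the recursive loop, using the factorial and the series for e mentioned in the text:

```python
# Recursive factorial: the call to factorial() inside its own
# definition is the "recursive loop"; n == 0 is the base case.
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

# The series e = 1/0! + 1/1! + 1/2! + ... computed recursively,
# summing the first `terms` terms.
def e_approx(terms):
    if terms == 0:          # base case: no terms left to add
        return 0.0
    return 1.0 / factorial(terms - 1) + e_approx(terms - 1)
```

Each call consumes a stack frame, which is why very deep recursion can exhaust the stack as described above.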
Once your child is familiar with numbers, it’s time to teach them how to ‘count on’, using our Number Line Charts. Counting on is when your child is able to count from a number other than one, and continue a sequence of numbers picked at random. For example, you can ask your child ‘What is 4 + 3?’. They should be able to count on from 4: 5, 6, 7, landing on the answer, 7. A great way to practice this is on a number line chart. Great for visualization, your child can see at a glance the numbers on a line, and practice counting on from a random number. Another way to practice reciting sequential numbers picked at random is on our Missing Numbers Worksheets. See below for our selection of Missing Number worksheets – there are 4 worksheets with varying levels of difficulty. Using a number line takes a little practice, but once your child gets the hang of it, they will surely love it and gain confidence in counting on. Start with one equation. For example, 4 + 2. Have your child start at 4, then move along (with a pencil or their finger) 2 places, where they will find themselves at number 6. "So, 4 + 2 = 6." These free Number Line charts can be printed out and used as a guide (move along the line with a finger or a lead pencil, which can be erased) so your child can have lots of practice counting on. This chart is great for number line first-timers. Your child will get familiar with the numbers as they have the opportunity to count to 10. As your child is starting to have fun with number lines, this chart has three 0-20 number lines. One number line from 0 to 40 as your child gets confident with higher numbers. One number line from 0 to 100 for large sums and higher numbers. Another way to help your child gain confidence with numbers to one hundred is with our hundreds chart. Missing Numbers Worksheets Once your child has had practice counting on with the number lines, they can put this to practice with our Missing Number worksheets. 
These free, printable worksheets are easy to use – just identify the missing number and write the number in the space provided! Worksheet 1 is the simplest – working with numbers to 20. The second and third worksheets are slightly harder – with higher numbers and, occasionally, 2 missing numbers per line. The final worksheet, 4, has the most missing numbers to complete! We hope that your child enjoys and learns from our number line worksheets. Counting on is an important skill which will be second nature to them before too long! We stress this a lot, but we strongly believe that all learning should be fun – even math! So if your child starts to get distracted, bored or upset after the first worksheet or activity, just save the rest for another time. You will get the best results when your child is eager to learn! More Preschool Math Resources Flashcards: Our number flashcards will help with number recognition and simple counting. Math songs – Sing along to some great songs for teaching math. Hundreds chart – Introducing a hundreds chart is a good idea to teach more advanced numbers.
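For readers who like to see the procedure spelled out, the counting-on strategy described above can be sketched in a few lines of code (a toy illustration, not part of the worksheets):

```python
# "Counting on": start at the first number and step forward one
# place on the number line for each unit of the second number.
def count_on(start, steps):
    path = []
    position = start
    for _ in range(steps):
        position += 1       # move one place along the number line
        path.append(position)
    return path, position
```

For 4 + 2 this visits 5 and 6 and lands on 6, matching the pencil-and-finger walk on the chart.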
Assessing Without Levels In September 2014, the Government made a huge change in the way that children in schools are to be assessed. This ties in with the new National Curriculum that started to be used by all schools at the beginning of this last academic year. This is a new way of thinking for schools, and assessment will look very different to how it has done for the past 20 years. The new curriculum sets out what is to be taught within each year group or phase, but does not provide any system or structure for the ongoing assessment of pupils’ progress. So why are levels disappearing? The DfE want to avoid what has been termed ‘The Level Race’, where children have moved through the old National Curriculum levels quickly to achieve higher attainment. The old National Curriculum was sub-divided into levels, but these were not linked to National Curriculum year groups. For example, a child in Year 4 could be a Level 3 or even a Level 5. Children were achieving Levels 5 and 6 at the end of Key Stage 2, but the DfE thought that a significant number were able to achieve a Level 5 or 6 in a test but were not secure at that level. The feeling from the DfE was that the old national curriculum and the levels system failed to adequately ensure that children had a breadth and depth of knowledge at each national curriculum level. Assessing Without Levels We have spent a long time researching various methods of assessing pupils, and we have had demonstrations of various commercial software tracking systems, as well as a system developed by Surrey Local Authority. Almost all of the systems used the same format, which was similar to the system used in the Early Years and Foundation Stage. This was to take the end of year expectations for each year group and to split these into 3 categories as follows:
- Entering — Yet to be secure in the end of year expectations.
- Developing — Secure in the majority of the end of year expectations. 
- Secure — Secure in almost all or all of the end of year expectations, and able to use and apply knowledge and skills confidently. Pupils can be assessed as being on any step at any time, regardless of their actual age. What level should my child be? Previously, if you have had a child in school, teachers will have given you a level to represent your child’s attainment. For example, ‘3C’: the number gave the level and the letter denoted steps within that level. So 3C would be a child just entering Level 3, and 3A a child who was secure in the level and ready to move on to Level 4. However, this system does not match the content of the new National Curriculum, where Age Related Expectations (ARE) have been given for the end of each year. As children travel from Year 1 to Year 6 in our school, they will be tracked against the Age Related Expectations. At The Dawnay these are numbered bands; the bands give the level of attainment. So Year 1 is band 1, and so on, until Year 5 is band 5 and Year 6 is band 6. Because all children are individual, develop at different rates and have differing needs, they will work in the band which is appropriate to them to make sure that learning makes sense. Extra help or challenge is given to make sure they are learning at the right level.
Chapter 14. Network Configuration To communicate with each other, computers must have a network connection. This is accomplished by having the operating system recognize an interface card (such as Ethernet, ISDN modem, or token ring) and configuring the interface to connect to the network. The Network Administration Tool can be used to configure several types of network interfaces. It can also be used to configure IPsec connections, manage DNS settings, and manage the /etc/hosts file used to store additional hostname and IP address combinations. To use the Network Administration Tool, you must have root privileges. To start the application, select it from the main menu on the panel, or type the command system-config-network at a shell prompt (for example, in an XTerm or a GNOME terminal). If you type the command, the graphical version is displayed if X is running; otherwise, the text-based version is displayed. To use the command line version, execute the command system-config-network-cmd --help as root to view all of the options. Figure 14.1. Network Administration Tool To configure a network connection with the Network Administration Tool, perform the following steps:
1. Add a network device associated with the physical hardware device.
2. Add the physical hardware device to the hardware list, if it does not already exist.
3. Configure the hostname and DNS settings.
4. Configure any hosts that cannot be looked up through DNS.
This chapter discusses each of these steps for each type of network connection.
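As an illustration of the /etc/hosts format mentioned above, each line pairs an IP address with one or more hostnames for hosts that cannot be resolved through DNS; the addresses and names below are examples only, not values the tool generates:

```
127.0.0.1     localhost.localdomain localhost
192.168.1.10  fileserver.example.com fileserver
```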
Awareness of concussion injury exploded with the movie Concussion in 2015. The correlation between multiple concussions and neurodegenerative conditions was brought to the big screen. The common analogy of concussion being a “brain bruise” suggests that it is a minor injury that should recover in a few days without any permanent consequences. However, this is not true. Our brains are uniquely designed and complex, more sophisticated than any computer. Every activity is a function of networks in the brain. The brain tissue injury that occurs as the result of concussion is complex and disruptive. Concussion is the result of many kinds of trauma that cause the brain to be shaken around inside the skull, causing diffuse axonal injury (DAI). DAI creates lesions all throughout the brain, damaging the myelin (the fatty coating around axons), and may cause damage to deeper axonal structures. Diffuse injury disturbs network communication in the brain. Synapses—where one brain cell or neuron uses neurotransmitters to communicate with another brain cell—can be pulled apart, therefore disrupting cell-to-cell communication. The microenvironment of electrolytes and other brain-specific chemicals around the brain cells, glia and other support cells is disrupted and may affect electrical conductivity and functioning of cells. Even a minor concussion may cause breaches in the blood brain barrier (BBB). A breach in the BBB allows chemicals, toxins and infections access to the brain when they would otherwise be blocked. The microtubular structure that internally organizes the neuron and allows transport of neurotransmitters to the synapse may be disrupted and disable cellular function. Brain cells that are damaged and not restored to function begin a process of degeneration; this dying back of the cell leads to cell death and brain atrophy. This is the definition of a neurodegenerative process. Neurodegenerative processes include dementia, Parkinson’s disease and ALS. 
Functional recovery—restoring the ability to read, sleep and balance, visual function and depth perception, cognitive activities and more—can be addressed by therapy, where some re-wiring and re-networking of the brain may occur. However, decline may still occur because functional recovery is different from biologically and physiologically healing the brain. Military veterans, first responders and athletes in contact sports are at great risk for concussion and recurrent concussion, which causes cumulative injury and a high risk for neurodegenerative conditions. Traditional Brain MRI sequences, even with contrast, do not aid in diagnosing concussion because they do not highlight the disconnections. Brain MRI – DTI (diffusion tensor imaging) sequencing and/or Brain Quality SPECT scanning are the imaging tools of choice. Objective computer-based functional testing, including the RightEyeQ test (a standardized objective test of visual fixation and follow), reaction time testing and formalized balance testing, are objective and predictive, which is a good way to establish a baseline. Take brain health seriously and pursue meaningful testing and active treatment for concussion. Carol L. Henricks, M.D. is a neurologist specializing in the use of hyperbaric oxygen therapy (HBOT) and PEMF at NorthStar Hyperbaric, in Tucson. HBOT saturates the body with oxygen, reducing inflammation and enhancing recovery from central nervous system injury. Connect at 520-229-1238 or NorthStarHBOT.com.
Watching ice melt and water vaporize by increasing the temperature would not surprise anybody. Watching the nucleus of an atom transform from solid to liquid to gas is less common but is now possible and could lead to a better understanding of the properties of atomic nuclei. To study how nuclei undergo changes from solid to liquid to gas, Texas A&M University scientists smash atomic nuclei together in an on-site accelerator, the Cyclotron. The two nuclei mix together, creating a very hot intermediate state where some nuclei are liquefied and others are vaporized. "We are talking about two things colliding, mixing for a very short period of time, and breaking up, and then we see the pieces of that," says Sherry Yennello, associate professor of chemistry at Texas A&M. "The phase transition - the passage from a solid state to a liquid state or a liquid state to a gas state - exists only for less than a billionth of a billionth of a second," Yennello says. "So we have to backtrack and say: 'How did this thing happen?'" To study the fragments of the collision and look for hints of a phase transition, Texas A&M physicists have built a detector called NIMROD. "With the NIMROD detector, we can study all of the charged fragments that are emitted in the reaction," says Joseph Natowitz, professor of chemistry at Texas A&M and director of the Cyclotron Institute. "There have been three versions of this detector since 1989, and the last version started working about a year and a half ago." NIMROD is cylindrical in shape, and consists mainly of concentric layers of numerous small detectors used to detect charged particles, and a large detector surrounding the small detectors, used to detect neutral particles. Scientists working with the NIMROD detector try to determine the nature of the phase transition of nuclei from liquid to gas. "A nucleus undergoes a phase transition that has strong similarities with a change from liquid water to water vapor," says Natowitz.
"This is called a first-order phase transition, which is a very sharp transition." "But nuclei could also undergo a second-order phase transition, which is more gradual," he says. "Nuclei would go from a liquid phase to a gas phase more smoothly." In fact, during the collision, nuclei expand and release the particles they contain: neutrons and protons, which undergo a liquid-gas phase transition. Neutrons and protons can tell us whether the transition occurred sharply or smoothly. "If there is a different number of neutrons and protons, all the protons may undergo a phase transition before all the neutrons, or vice versa," says Yennello. "So as you start making the transition to the gas phase, neutrons may make that transition first. Then, the intermediate state explodes, and you end up with a liquid-gas mixture and a neutron-rich gas." Though physicists have not yet analyzed all the collected data, there is some evidence that a neutron-rich gas is present during the collisions, according to Yennello, suggesting a smooth liquid-gas transition of the nucleus. Understanding the liquid-gas transition of nuclei can be used to determine how nuclei behave under various temperature and pressure conditions. "Like a gas, for which the relationship between pressure, volume and temperature is well known, nuclei form a particular type of matter, called nuclear matter, for which the thermodynamic properties can be investigated," Natowitz says. Studying liquid-gas transitions of nuclei also has applications in astrophysics and chemistry. "These studies have significant relevance to astrophysical questions, like the understanding of supernova explosions and neutron star formation," says Natowitz. "There are also applications in the study of chemical clusters of, say, 50 atoms or even 1000 atoms," he adds. "This is a very interesting area of exploration because chemical systems have lots of potential applications, in terms of catalysis for instance." "NIMROD is just beginning," Yennello says.
"This detector is a very powerful state-of-the-art device that will give us opportunities to look at lots of great science for many years to come." The above story is based on materials provided by Texas A&M University.
Important Things People Need to Know about Brain Hemorrhage Cerebral hemorrhage, or intracranial hemorrhage, is the medical term for brain hemorrhage: bleeding in the brain or the soft tissues surrounding it. There are different forms of cerebral hemorrhage, which are classified as:
- Intracerebral Hemorrhage. The bleeding is inside the brain itself. The prognosis, or chance of recovery, for patients with this condition depends on the size and location of the bleed.
- Subarachnoid Hemorrhage. In this form of cerebral bleeding, the ruptured blood vessel is located between the brain and the membranes covering the brain.
- Subdural Hemorrhage. The bleed in this type of cerebral hemorrhage is between the layers of the meninges, the brain's covering.
- Epidural Hemorrhage. Here, the bleeding occurs between the meninges and the skull.
Brain hemorrhage may be due to risk factors and causes like:
- Head injury or head trauma. This is the most common cause of cerebral hemorrhage among individuals younger than 50.
- Hypertension, or high blood pressure. Extremely high blood pressure may cause small blood vessels to rupture and bleed; the most common site of such bleeding is the brain, because the blood vessels in this organ are more fragile.
- Diabetes mellitus is one common risk factor for developing cerebral bleeding. Constant hyperglycemia makes the blood thicker and more viscous; this elevates blood pressure, and hypertension is, of course, one of the most common causes of brain hemorrhage.
- Aneurysms are weakened and swollen blood vessels. They are one of the major causes of hemorrhagic stroke among elderly patients because these blood vessels can rupture at any time and cause bleeding in the subarachnoid space.
- Arteriovenous Malformation, or AVM.
This is a congenital anomaly of the blood vessels in and around the brain. It is considered a form of birth defect but can only be diagnosed when symptoms appear. When these blood vessels rupture, symptoms may appear abruptly, but the signs and symptoms vary depending greatly on the size and location of the bleed.
- Amyloid Angiopathy, also known as Cerebral Amyloid Angiopathy. This is a disorder that develops when amyloid deposits accumulate inside the blood vessels supplying the central nervous system.
- Hyperlipidemia. Elevated cholesterol levels may also cause hypertension, making this condition a risk for cerebral hemorrhage. Cholesterol is deposited on the walls of blood vessels, narrowing the lumen and disrupting blood circulation, which increases the pressure on the arterial or venous walls; because of the elevated blood pressure, an artery or vein may rupture and cause brain hemorrhage.
These are only some of the most common risk factors and causes of brain hemorrhage that we should all be aware of in order to prevent this life-threatening condition. If an individual is suspected of having a brain hemorrhage because of these causes, emergency management is always the immediate action; patients with this condition should be rushed to the nearest hospital for a much greater chance of recovery.
Here are the most significant warning signs and symptoms that you should watch out for:
- Severe headache
- Sudden loss of consciousness
- Nausea and vomiting
- Sudden ocular pain and difficulty seeing
- Sudden blurring of vision or diplopia
- Nape pain
- Drooping eyelids
- Generalized body malaise
- Cognitive impairment
- Behavioral changes
- Slurring of speech
- Pupillary dilatation
- Balance and body coordination problems
- Unresponsiveness or coma
If you observe that an individual has some or all of these signs and symptoms, bring him/her to the nearest hospital, because chances are there is already bleeding in the brain, and this condition requires urgent medical attention and treatment. After a patient is brought to the emergency room manifesting some or all of the signs and symptoms of cerebral hemorrhage, the next step is confirming the diagnosis and ruling out other diseases and disorders, such as brain cancer, since some of the warning signs may also indicate other serious medical conditions. Proper diagnostic procedures and evaluation should be done because the plan of treatment will be based on the findings and results. Some of the most effective means to diagnose cerebral bleeding are the following:
- Computerized Axial Tomography scan (CT scan or CAT scan). This diagnostic tool helps confirm the diagnosis of cerebral hemorrhage by generating x-ray images to locate and evaluate the extent of the bleeding.
- Magnetic Resonance Imaging (MRI). This produces a much clearer image of the brain and other structures that might have been affected by cerebral bleeding.
- Lumbar tap or lumbar puncture. If there is blood in the cerebrospinal fluid, then there is most likely bleeding in some part of the brain.
After the doctors have confirmed the diagnosis of cerebral hemorrhage, they should plan the proper medical and surgical treatments for the patient. Treatment Regimen for Cerebral Hemorrhage
- In severe cases, an emergency craniotomy is recommended to drain the blood through a small surgical opening in the skull; this is performed by a neurosurgeon.
- Vital signs, neuro vital signs and reflexes should be monitored and recorded at least every 15 to 30 minutes.
- Blood pressure stabilization, control of blood glucose levels and lowering of cholesterol levels are also very important.
- Proper oxygenation should always be maintained, and in some serious cases the use of a ventilator is required.
- Doctors will also prescribe corticosteroids and diuretics to reduce the swelling of the brain tissues and lower the intracranial pressure.
- Anticonvulsants are also used to prevent seizures.
- Physical and rehabilitative therapy is advised after a patient has recovered from a severe brain hemorrhage, to help regain muscle strength so that the patient can perform activities of daily living and perhaps return to his or her usual activities.
The dental lamina is a band of epithelial tissue seen in histologic sections of a developing tooth. The dental lamina is the first evidence of tooth development and begins (in humans) at the sixth week in utero, or three weeks after the rupture of the buccopharyngeal membrane. It is formed when cells of the oral ectoderm proliferate faster than cells of other areas. Best described as an in-growth of oral ectoderm, the dental lamina is frequently distinguished from the vestibular lamina, which develops concurrently. This dividing tissue is surrounded by, and some would argue stimulated by, ectomesenchymal growth. When it is present, the dental lamina connects the developing tooth bud to the epithelium of the oral cavity. Eventually, the dental lamina disintegrates into small clusters of epithelium and is resorbed. When the clusters are not resorbed (this remnant of the dental lamina is sometimes known as the glands of Serres), eruption cysts form over the developing tooth and delay its eruption into the oral cavity. This invagination of ectodermal tissues is the progenitor of the later ameloblasts and enamel, while the ectomesenchyme is responsible for the dental papilla and the later odontoblasts.
Deafness, partial or total inability to hear. The two principal types of deafness are conduction deafness and nerve deafness. In conduction deafness, there is interruption of the sound vibrations in their passage from the outer world to the nerve cells in the inner ear. The obstacle may be earwax that blocks the external auditory channel, or stapes fixation, which prevents the stapes (one of the minute bones in the middle ear) from transmitting sound vibrations to the inner ear. In nerve deafness, some defect in the sensory cells of the inner ear (e.g., their injury by excessive noise) or in the vestibulocochlear nerve prevents transmission of sound impulses from the inner ear to the auditory centre in the brain. Deafness at birth is nearly always of the nerve type and cannot be improved by medical means.
2.4.1 Quantum Entanglement and Teleportation. A Brief Introduction to Quantum Mechanics To understand the principles of teleportation, one must have a rudimentary grasp of quantum mechanics and quantum field theory. The theory of quantum mechanics began with the work of Kirchhoff in 1859 and then Stefan (1879), both of whom tried to determine the nature of black-body radiation. Their work was extended by a long list of others, most notably Boltzmann, Wien and Rayleigh. This led to the development of two laws for radiation, but the laws were mutually exclusive: one predicted quite well the energy at high frequency, whereas the other predicted the energy at low frequency, and each failed in the other regime. Planck came forward with the answer, saying in principle that the energy of an oscillator at a given frequency was not continuously variable but restricted to discrete multiples of a constant (Planck's constant). This is most easily summarised by some of Einstein's most important work from 1905/1906. Einstein noted that the emission of electrons from metals was similarly quantised. Basically, he showed empirically that intensity had no effect on electron liberation and that only frequency affected it, and only above a threshold level. This, he concluded, was precisely the effect one would expect if light were quantised and discrete. Quantum teleportation is the transmission and reconstruction over arbitrary distances of the state of a quantum system, an effect first suggested by Bennett et al in 1993 (Phys. Rev. Lett. 70:1895). The achievement of the effect depends on the phenomenon of entanglement, an essential feature of quantum mechanics. Individually, an entangled particle has properties (such as momentum) that are indeterminate and undefined until the particle is measured or otherwise disturbed.
Measuring one entangled particle, however, defines its properties and seems to influence the properties of its partner or partners instantaneously, even if they are light years apart. Because the two particles are entangled, an interaction on one causes instantaneous effects on the other. Entanglement is not a new concept at all, though autumn seminars and the December (1998) publications show a considerable amount of new work of relevance. Entanglement was first discussed many years ago, most particularly following the publication in 1935 of the often-quoted Einstein-Podolsky-Rosen paper (Physical Review 47:777, 1935). To begin with, the discussions were limited to "gedanken" (thought) experiments involving two entangled quantum-mechanical entities. More recently, however, there have been laboratory constructions of actual quantum mechanical systems exhibiting such entanglement phenomena. Essential here is that any purely verbal account of quantum mechanical phenomena is severely limited by the constraint that the properties of quantum mechanical systems can be precisely described only by the equations relevant for those systems; all other descriptions usually introduce serious ambiguities. Entanglement arises from the wave function equation of quantum mechanics, which has an array of possible solutions rather than a single solution, with each possible solution describing a set of possible probabilistic quantum states of the physical system under consideration. Upon fixation of the appropriate boundary conditions, the array of possible solutions collapses into a single solution. For many quantum mechanical physical systems, the fixation of boundary conditions is a theoretical and fundamental consequence of some interaction of the physical system with something outside that system, e.g., an interaction with the measuring device of an observer.
In this context, two entities that are described by the same array of possible solutions to the wave function equation are said to be "coherent", and when events decouple these entities, the consequence is said to be "decoherence". Serge Haroche (Ecole Normale Superieure, Paris) reviews quantum mechanical entanglement, decoherence, and the question of the boundary between the physics of quantum phenomena and the physics of classical phenomena. Haroche makes the following points: 1) In quantum mechanics, a particle can be delocalized (simultaneously occupy various probable positions in space), can be simultaneously in several energy states, and can even have several different identities at once. This apparently "weird" behavior is encoded in the wave function of the particle. 2) Recent decades have witnessed a rash of experiments designed to test whether nature exhibits this implausible non-locality. In such experiments, the wave function of a pair of particles flying apart from each other is entangled into a non-separable superposition of states. The quantum formalism asserts that detecting one of the particles has an immediate effect on the other, even if they are very far apart, even far enough apart to be out of interaction range. The experiments clearly demonstrate that the state of one particle is always correlated to the result of the measurement performed on the other particle, and in just the strange way predicted by quantum mechanics. 3) An important question is: Why and how does quantum weirdness disappear (decoherence) in large systems?
In the last 15 years, entirely solvable models of decoherence have been presented by various authors (e.g., Leggett, Joos, Omnes, Zeh, Zurek). These models are based on the distinction in large objects between a few relevant macroscopic observables (e.g., position or momentum) and an "environment" described by a huge number of variables, such as the positions and velocities of air molecules, the number of black-body radiation photons, etc. The idea of these models, essentially, is that the environment is "watching" the path followed by the system (i.e., interacting with the system), and thus effectively suppressing interference effects and quantum weirdness; the result of this process is that for macroscopic systems only classical physics obtains. 4) In mesoscopic systems, which lie between macroscopic and microscopic dimensions, decoherence may occur slowly enough to be observed. Until recently, this could only be imagined in a gedanken experiment, but technological advances have now made such experiments real, and they have opened this field to practical investigation. Entanglement is unique to quantum mechanics and involves a relationship (a "superposition of states") between the possible quantum states of two entities such that when the possible states of one entity collapse to a single state as a result of suddenly imposed boundary conditions, a similar and related collapse occurs in the possible states of the entangled entity, no matter where or how far away the entangled entity is located. The most common form is the polarization of photons. Polarization is essentially a condition in which the properties of photons are direction dependent, a condition that can be achieved by passing light through appropriate media. Bouwmeester et al (Univ.
of Innsbruck) now report an experimental demonstration of quantum teleportation involving an initial photon carrying a polarization that is transferred to one of a pair of entangled photons, with the polarization-acquiring photon an arbitrary distance from the initial one. The authors suggest quantum teleportation will be a critical ingredient for quantum computation networks. In June 1999 the act of measuring a photon repeatedly without destroying it was achieved for the first time, enabling researchers to study an individual quantum object with a new level of non-invasiveness. Physicists have long realized that it is possible to perform non-destructive observations of a photon with a difficult-to-execute technique known as a "quantum non-demolition" (QND) measurement. After many years of experimental effort, researchers in France (Serge Haroche, Ecole Normale Superieure) have demonstrated the first QND measurement of a single quantum object, namely a photon bouncing back and forth between a pair of mirrors (a "cavity"). A conventional photodetector measures photons in a destructive manner, by absorbing the photons and converting them into electrical signals. "Eating up" or absorbing photons to study them is not required by fundamental quantum mechanics laws and can be avoided with the QND technique demonstrated by the French researchers. In their technique, a photon in a cavity is probed without absorbing any net energy from it. (Of course, Heisenberg's indeterminacy principle ensures that counting a photon still disturbs the "phase" associated with its electric and magnetic fields.) In the experiment, a rubidium atom passes through a cavity. If a photon is present, the atom acquires a phase shift which can easily be detected. Sending additional rubidium atoms through the cavity allowed the researchers to measure the photon repeatedly without destroying it.
This technique can allow physicists to study the behavior of a photon during its natural lifespan; it can potentially allow researchers to entangle an arbitrary number of atoms and build quantum logic gates (Nogues et al., Nature, 15 July). The Problems of Indeterminacy as Pertaining to the Scanning of Macroscopic Entities. The reason photons were so readily used was that there were problems in measuring the spin of subatomic particles. However, it must be remembered that for practical macroscopic purposes the scanning of the human body may not be entirely confined to the subatomic. Regarding the inability to track atoms, it should be remembered that the task has become much easier since the days when IBM wrote their name in atoms, with the use of monoatomic materials such as those used in the astronautics industry. Also, if we want to transport a person we are not interested in protons etc.; we essentially want to scan molecules and chains thereof. There are very few sections of the body that are affected by subatomic particles. The three subatomic constituents that would concern us most are free radicals, quantum effects in the neurons of the brain, and photons. Taken one at a time: free radicals would not be an important problem, and their possible loss may not affect any part of the anatomy. The inability to track photons means we would be unable to resolve where any photons go. Research has shown that the eye is responsive to single photons; that does not, of course, mean a single photon will cause visual stimulation, just that there is a probability that a single photon could cause the brain to perceive light. This might mean that a side effect of transportation is unexpected flashes in the eyes of the person being transported.
The firing of certain neurons is a quantum effect, but the neuron itself is quite obviously macroscopic; thus we would not damage the brain but would simply be unaware of whether certain neurons will fire. As with photons stimulating the eye, the brain might be stimulated, and thus our person may or may not receive certain signals, sounds or smells. If one wants to transport a particle and preserve the exact spin configuration, wave form etc., the act of doing so interferes with the original particle. This is the work that won Heisenberg the Nobel Prize in 1932. Though in the construction of matter we are told that the principle of indeterminacy forbids knowledge of the exact location and/or momentum of particles, due to the quantum probability amplitudes for a particle, there are several points to consider, including whether unmeasured spin exists, and also the examination of the work already conducted throughout the world on teleportation. The initial theories of quantum teleportation are based on a process called entanglement, developed as a result of the Einstein-Podolsky-Rosen theories. Originally, it was postulated that if two particles approached to within a certain distance they could become entangled. If the particles are subsequently separated to any distance, the forces between the entangled particles remain the same as they were in close proximity. If either is then disturbed, the entanglement stops. When we consider the forces between the entangled particles, it is as if space did not exist between them. Furthermore, if the theoretical particles known as singlets are the basic building blocks of matter and their interactions are entangled, it would appear that Einstein's 'spooky action at a distance' may be not spooky but simply the basic natural method of communication. Empirical Data on Teleportation. The principle of quantum teleportation is not just a theory but has on many occasions been demonstrated experimentally. The first method used was the teleportation of photons.
Photons possess spin, but in this case the spin is always in the direction of propagation and is thus called polarisation. To teleport a quantum system it is necessary to somehow send all the information needed to reconstruct the system to the remote location. It might be thought that the Heisenberg uncertainty principle makes such a measurement impossible. However, the scheme devised by theorists takes advantage of the previously mentioned entanglement. If two quantum particles are entangled, a measurement on one automatically determines the state of the second - even if the particles are widely separated. Entanglement describes correlations between quantum systems that are much stronger than any classical correlation could be. The phenomenon has been demonstrated for photons more than 10 kilometres apart. A great deal of work has been done by the Innsbruck team. In their experiment we can consider that Alice wants to teleport a photon to Bob (the names are the standard notation for thought experiments in quantum computation). The technique works by sending one half of an "entangled" light beam to Alice and the other to Bob. Alice measures the interaction of this beam with the beam she wants to teleport. She sends that information to Bob, who uses it, together with his half of the entangled beam, to create an exact copy of the beam that Alice wanted to teleport. The original beam is lost in the process. It is quite possible to transmit data in this way, as long as we are prepared to destroy the original in the process. Although teleportation relies on what Einstein once called spooky action-at-a-distance and appears to occur instantaneously, the special theory of relativity remains intact, because neither Alice nor Bob obtains information about the state being teleported. This was something I believe Einstein himself concluded, even though he never fully appreciated QED.
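The Alice and Bob protocol described above can be checked numerically. The following is a minimal sketch of my own (an illustration of textbook qubit teleportation, not a model of the Innsbruck apparatus) that simulates the protocol with state vectors in NumPy: Alice performs a Bell-basis measurement on her two qubits, sends the two classical bits to Bob, and Bob applies the corresponding correction to his half of the entangled pair.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def teleport(alpha, beta, seed=0):
    """Teleport the state alpha|0> + beta|1> from Alice to Bob."""
    rng = np.random.default_rng(seed)
    # Qubit order: (Alice's message, Alice's half of the pair, Bob's half).
    msg = np.array([alpha, beta], dtype=complex)
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
    state = np.kron(msg, bell)

    # Alice's Bell-basis measurement circuit: CNOT (message -> her half), then H.
    state = np.kron(CNOT, I2) @ state
    state = np.kron(H, np.kron(I2, I2)) @ state

    # Measure Alice's two qubits; Bob's conditional state is the remaining axis.
    amps = state.reshape(2, 2, 2)
    probs = np.sum(np.abs(amps) ** 2, axis=2).ravel()
    outcome = rng.choice(4, p=probs)
    m0, m1 = divmod(outcome, 2)
    bob = amps[m0, m1] / np.sqrt(probs[outcome])

    # Alice sends (m0, m1) over a classical channel; Bob applies Z^m0 X^m1.
    if m1:
        bob = X @ bob
    if m0:
        bob = Z @ bob
    return bob

psi = teleport(0.6, 0.8j)
print(np.allclose(psi, [0.6, 0.8j]))  # True: Bob now holds the original state
```

Whatever outcome Alice's measurement gives, Bob's corrected qubit equals the input state exactly, while Alice's original is destroyed by the measurement, just as described; and because Bob needs the two classical bits before he can correct, nothing travels faster than light.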
If one extends this discussion to the transporters of the Star Trek universe, the discussion obviously has to move beyond photons and singlets to include atoms and ions. Recent work in Paris has made progress in the macroscopic direction by entangling pairs of atoms for the first time. Previously, physicists obtained entangled particles as a by-product of some random or probabilistic process, such as the production of two correlated photons, a phenomenon that occasionally occurs when a single photon passes through a special crystal. Though previously only two-state quantum systems such as the polarisation of a photon had been teleported, this new research should allow all quantum states to be teleported. In their "deterministic entanglement" process, the researchers trap a pair of beryllium ions in a magnetic field. Using a predetermined sequence of laser pulses, they entangle one ion's internal spin to its external motion, and then entangle the motion to the spin of the other atom. The group believes that it will be able to entangle multiple ions with this process. Now E. Hagley et al, using rubidium atoms prepared in circular Rydberg states (which means the outer electrons of the atom have been excited to very high energy states and are far from the nucleus in circular orbits), have shown quantum mechanical entanglement at the level of atoms [Phys. Rev. Lett. 79:1]. There is talk that before long quantum mechanical entanglement may be demonstrated for molecules and perhaps even larger entities. There are problems with quantum teleportation, though. In the 1960s John Bell showed that a pair of entangled particles can exhibit individually random behavior that is too strongly correlated to be explained by classical statistics. Unfortunately, Bell inequalities, and the further modifications by other workers, must contend with the fact that real instruments by no means detect every particle.
Assumptions are believed to dominate the picture, and data adjustments are sometimes seen as responsible for many claimed results. The original idea, published in 1964 (Bell, 1964), involved pairs of particles produced together, sent in different directions, then either detected or not. Though the assumptions normally state that all particles are detected, during the last few years detection rates of 5% have been more common, with no one achieving the often implied 100% detection. The abstract of Freedman and Clauser's paper, for example, stated that "Our data, in agreement with quantum mechanics, violate these [Bell] restrictions to high statistical accuracy, thus providing strong evidence against local hidden-variable theories". Some workers dismiss such optimism, given that most often, and certainly in the case of Freedman and Clauser, there is no discussion of the significance of statistical adjustment. What Bell tests are concerned with is the shape of the relationship between coincidence counts and relative detector setting. As said, the detection rate has always been below 10%; the sometimes assumed 100% was believed to be impossible. One of the problems is the ability of detectors to register single photons. It has been argued that we only have a probabilistic relationship determining detection rates at each intensity. The uncertainty comes into the picture in the form of electromagnetic "noise" that is added to the signal before detection. More recently, though, the ability to interlink two quantum particles with practically 100% certainty has been achieved by a NIST group (Quentin Turchette, 303-497-3328). M. Zukowski et al., Phys. Rev. Lett. (1993), describe the use of independent sources to realise an `event-ready' Bell-Einstein-Podolsky-Rosen experiment in which one can measure directly the probabilities of the various outcomes, including the non-detection of both particles.
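The quantity these Bell tests are actually after is easy to state numerically. As a small illustrative sketch (standard textbook values, not a model of any particular experiment): for spin measurements on a singlet pair at analyser angles a and b, quantum mechanics predicts the correlation E(a, b) = -cos(a - b), and the CHSH combination of four such correlations exceeds the bound of 2 that any local hidden-variable theory must obey.

```python
import math

def E(a, b):
    """Quantum correlation for spin measurements at analyser angles a and b
    on a singlet pair: E(a, b) = -cos(a - b)."""
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    """CHSH combination of four correlations; local hidden-variable
    theories require |S| <= 2."""
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

# Angles that maximise the quantum value (Tsirelson's bound, 2*sqrt(2)).
S = chsh(0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(round(S, 3))  # 2.828 -- above the classical bound of 2
```

A measured S near 2.83 rules out local hidden variables only under the fair-sampling assumption, which is exactly the detection-loophole worry raised above: with detection rates below 10%, one must assume the detected pairs are representative of all pairs.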
The most recent work, even in the last few months, is swaying even the more rigid and inflexible voices in the scientific community. Jian-wei Pan et al. published work in Physical Review Letters reporting that they had experimentally entangled freely propagating particles that had never physically interacted with one another or been coupled by any other means (Phys. Rev. Lett. 80:3891-3894, 1998). Their work demonstrated that quantum entanglement requires neither that the entangled particles come from a common source nor that they have interacted in the past. The uses of quantum teleportation reach beyond the transporters of the Enterprise, and as Robert Faulkner pointed out in his post there are serious considerations over the use of entanglement in computation and communication. In 1994 Peter W. Shor of AT&T worked out how to take advantage of entanglement and superposition to find the prime factors of an integer. He found that a quantum computer could, in principle, accomplish this task much faster than the best classical calculator ever could. This was covered in detail in the previous post, and is beyond the scope of what had confused me, namely the disregard for entanglement in the previous posts. Multiple Particle Entanglement: Where Entanglement Is Likely To Lead In the past, evidence of quantum mechanical entanglement had been restricted to elementary particles such as protons, electrons, and photons. Now E. Hagley et al, using rubidium atoms prepared in circular Rydberg states (which means the outer electrons of the atom have been excited to very high energy states and are far from the nucleus in circular orbits), have shown quantum mechanical entanglement at the level of atoms.
What is involved is that the experimental apparatus produces two entangled atoms, one in a ground state and the other in an excited state, physically separated so that the entanglement is non-local. When a measurement is made on one atom, say the atom in the ground state, the other atom instantaneously presents itself in the excited state, the result of the second atom's wave-function collapse thus being determined by the result of the first atom's wave-function collapse. There is talk that before long quantum mechanical entanglement may be demonstrated for molecules and perhaps even larger entities [Phys. Rev. Lett. 79:1 (1997)].

Quantum Entanglement In Computing

Several research groups believe quantum computers based on the molecules in a liquid might one day overcome many of the limits facing conventional computers. There is growing concern that the transistors used in the electronics industry are rapidly approaching an impasse. The effort to build a quantum computer was stimulated by the realisation by Rolf Landauer, Richard Feynman, Paul Benioff, David Deutsch, Charles Bennett and others that computers must obey the laws of physics, and that the realm of microelectronics is fast shrinking to the atomic realm ruled by quantum mechanics. The downscaling of components runs into significant problems when they are built at a size of a few atoms (Roger Highfield). Indeterminacy also means there are quantum effects at small scales, and the facilities for fabricating still more powerful microchips will eventually become prohibitively expensive. The advantage of quantum computers arises from the way they encode a bit, the fundamental unit of information. The state of a bit in a classical digital computer is specified by one number, 0 or 1. A word in classical computing is described by a string of bytes, where each byte comprises eight bits of information.
A quantum bit, called a qubit, might be represented by an atom in one of two different states, which can also be denoted as 0 or 1. Two qubits, like two classical bits, can attain four different well-defined states (0 and 0, 0 and 1, 1 and 0, or 1 and 1). However, unlike classical bits, qubits can exist simultaneously as 0 and 1, with the probability for each state given by a numerical coefficient. Describing a two-qubit quantum computer thus requires four coefficients. In general, n qubits demand 2^n numbers, which rapidly becomes a sizable set for larger values of n. For example, if n equals 50, about 10^15 numbers are required to describe all the probabilities for all the possible states of the quantum machine, a number that exceeds the capacity of the largest conventional computer. A quantum computer promises to be immensely powerful because it can be in multiple states at once (a phenomenon called superposition) and because it can act on all its possible states simultaneously. Thus, a quantum computer could naturally perform myriad operations in parallel, using only a single processing unit (Scientific American, June 1998). Charles Bennett and his colleagues at IBM found a method back in 1993; it works because you can send the quantum information so long as you do not know the details of what you are sending, and that is the idea that has now been demonstrated by Jeff Kimble of Caltech, along with Samuel Braunstein of the University of Wales at Bangor and others. In 1994 Peter W. Shor of AT&T deduced how to take advantage of entanglement and superposition to find the prime factors of an integer. He found that a quantum computer could, in principle, accomplish this task much faster than the best classical calculator ever could. His discovery had an enormous impact. Suddenly, the security of encryption systems that depend on the difficulty of factoring large numbers became suspect.
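The exponential bookkeeping is easy to check directly; a small sketch of the coefficient count (plain arithmetic, no quantum library involved):

```python
# Number of complex coefficients needed to specify a general n-qubit state.
def state_size(n):
    return 2 ** n

print(state_size(2))   # 4, matching the two-qubit count above
print(state_size(50))  # 1125899906842624, i.e. about 1.1e15
```

A classical machine must track every one of these coefficients explicitly; a quantum register of 50 atoms carries them implicitly in a single physical state.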
And because so many financial transactions are currently guarded with such encryption schemes, Shor's result sent tremors through a cornerstone of the world's electronic economy (Daily Telegraph, Nov 1998). Certainly no one had imagined that such a breakthrough would come from outside the disciplines of computer science or number theory. So Shor's algorithm prompted computer scientists to begin learning about quantum mechanics, and it sparked physicists to start working in computer science. While at Los Alamos National Laboratory in New Mexico, Isaac Chuang, with Neil Gershenfeld of MIT, took another important step by demonstrating that quantum computing can be carried out with ordinary liquids in a beaker at room temperature. Each molecule contains atoms, and the nuclei of atoms act like tiny bar magnets. These can point in only two directions, "up" and "down", because of a property called "spin". A single nucleus can therefore act as a qubit, its spin pointing perhaps up for "off" and down for "on". A given spin lasts a relatively long time and can be manipulated with nuclear magnetic resonance, a technique used by chemists for years. Thus each molecule can act as a "little computer" and is capable of as many simultaneous calculations as there are ways of arranging its spins, according to Chuang, now with IBM Research, who has tackled some simple problems with chloroform. Does this mean the first quantum computer is about to appear on the market? His colleague, Charles Bennett, has a standard response: "Definitely in the next millennium."
This science fair project was done to compare the vitamin C concentration in oranges during various stages of ripening. The testing was done by extracting the juice from an unripe orange, a half-ripe orange and a fully ripe orange. It was hypothesized that the unripe orange would have the highest concentration of vitamin C.

Oranges and Vitamin C

Vitamin C, or ascorbic acid, is the nutrient found in citrus fruits like oranges, grapefruit and lemons. The high amount of vitamin C in oranges makes them an ideal food choice, as our body requires the vitamin. However, vitamin C in oranges degrades over time. Oranges plucked earlier and kept in cold storage contain more vitamin C than oranges left to ripen on the trees. Similarly, freshly-squeezed orange juice is also known to lose its nutrients over time, unless refrigerated. Hence, oranges and orange juice need to be stored in a cool place to retain their vitamin C content. The use of pesticides and fertilizers on oranges degrades the vitamin C found in the fruit, so pesticide-free produce may be a more nutritious choice. The type of container used to store fruit can also be important in preserving its vitamin C content, with cardboard boxes lined with foil the best choice for storing oranges.

The materials required for the science fair project:
- 1 ripe orange
- 1 half-ripe orange
- 1 unripe orange
- 3 beakers
- 1 juice extractor
- 1 burette
- 1 bottle of iodine
- starch solution
- 1 glass stirring rod
- 1 stand and holder to support the burette
- 1 measuring cylinder

1. For this science fair project, the independent variable is the type of orange used: ripe, half-ripe and unripe. The dependent variable is the number of iodine-starch drops required to neutralize the vitamin C in the juice, determined by dropping iodine-starch into the orange juice until the solution turns blue.
The constants (control variables) are the amount of orange juice, the concentration of iodine and the temperature of the environment, which will remain at room temperature.

2. Label the 3 beakers as ripe, half-ripe and unripe respectively. Use the juice extractor to extract the juice from each orange. Measure 100ml of each type of orange juice with the measuring cylinder, and pour it into the appropriate beaker.

3. Mount the burette on the stand and pour the iodine-starch solution into the burette.

4. Place the beaker of ripe orange juice under the burette. Add the iodine-starch solution to the orange juice one drop at a time, by adjusting the burette. After each drop, the blue iodine-starch will react with the vitamin C and become clear. Once all the vitamin C has been neutralized, the color of the iodine-starch will no longer clear, but will instead remain blue. Stop the procedure and record the number of drops taken to neutralize the vitamin C in a table, as shown below.

5. Repeat step 4 with juice from the half-ripe and unripe oranges.

The results show that the unripe orange has the most vitamin C, while the fully ripened orange has the lowest amount of vitamin C.

| State of orange | Number of iodine-starch drops |

The above results were then plotted onto a graph, as shown below. The hypothesis that the unripe orange has the highest concentration of vitamin C is supported by the results. Vitamin C is an important part of our nutrition and is also an antioxidant. A lack of vitamin C in our bodies causes scurvy, a disease that causes teeth and bone abnormalities. Vitamin C can be found in abundance in fruits and vegetables, which are the main source of the vitamin for most of us. However, it is normally destroyed during cooking. Therefore, it is recommended to eat fruits and vegetables as raw as possible. The science fair project may be repeated by using different types of fruits like papayas or apples.
The experiment may also be repeated by leaving the freshly squeezed juice in the open for a period of time, before testing for the gradual degradation of vitamin C.
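If the iodine solution is standardized, the drop counts recorded in the procedure can be converted into an absolute vitamin C estimate. This is a hedged sketch: the drop volume and iodine molarity below are assumed illustrative values, not figures measured in this project.

```python
# Assumed, illustrative calibration values (not measured in this project).
DROP_VOLUME_ML = 0.05          # volume delivered per burette drop
IODINE_MOLARITY = 0.005        # mol/L of the iodine solution
ASCORBIC_MG_PER_MMOL = 176.12  # molar mass of ascorbic acid (mg/mmol)

def vitamin_c_mg(drops):
    """Iodine reacts with ascorbic acid roughly 1:1, so the moles of
    iodine added at the endpoint approximate the moles of vitamin C."""
    mmol_iodine = drops * DROP_VOLUME_ML / 1000 * IODINE_MOLARITY * 1000
    return mmol_iodine * ASCORBIC_MG_PER_MMOL

print(round(vitamin_c_mg(100), 2))  # 100 drops -> about 4.4 mg per sample
```

Because all three juice samples are titrated with the same solution, the drop counts remain a valid relative comparison even if these calibration values are off.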
In 1956, Astounding Science Fiction gushed about the wonders of a reactionless space drive invented by Norman L. Dean. It could propel a converted atomic submarine into space, editor John W. Campbell wrote. He added, "The modern nuclear submarine is, in fact, a fully competent space-vehicle . . . lacking only the Dean Drive." He chose a submarine of the Skate class for his thought experiment. With the reactionless space drive, the submarine would lift off the earth at a constant 1 g acceleration, which could be maintained nonstop for months, if necessary. There would be, as a consequence, no sense of free fall for the crew. Gravity would appear to be earth-normal. "In flight," Campbell wrote, "the ship will simply lift out of the sea, rise vertically, maintaining a constant 1,000 cm/sec/sec drive. Halfway to Mars, it would loop its course, and decelerate the rest of the way at the same rate." In adapting the submarine to space duty, he said, "There is one factor that has to be taken into account, however; the exhaust steam from the turbine has to be recondensed and returned to the boiler. In the sea, sea water is used to cool the condenser; in space no cooling water is available." A huge bag-like balloon would be attached to the spaceship, silvered on one side, painted black on the other, and of whatever diameter needed to operate properly (unless it was elastic and self-adjusting). This would act as the condenser for the exhaust steam. "The tough part is the first hundred miles up from Earth; there air resistance will prevent use of the balloon condenser." Campbell suggested that the ship carry along spare water in the form of ice. By the time it melted, the ship would be above the atmosphere. "Under the acceleration conditions described above, a ship can make the trip from Earth to Mars, when Mars is closest, in less than three days . . . 
It would have been nice if, in response to Sputnik I, the United States had been able to release full photographic evidence of Mars Base I." The Dean Drive had been invented by a Washington, D.C., businessman, Norman L. Dean, as a hobby. He built several working models, none of which were able to lift themselves (and ones that allegedly did were claimed to have been destroyed in the process of testing). The Dean Drive, in converting rotary motion into unidirectional motion, supposedly generated a one-way force without any reaction. To do this, a pair of counter-rotating masses generated a nonreactive force. That is, whereas Newton's law holds that for every action there is an equal but opposite reaction, the Dean Drive claimed an action without any reaction, equal or otherwise. Dean's data indicated that, neglecting losses due to friction, a 150-horsepower motor would develop 6,000 pounds of thrust.

Campbell and Dean and the machine

When John Campbell examined the device he admitted that "I do not understand Mr. Dean's theory very clearly; my personal impression is that he doesn't understand the thing in a theoretical sense, himself." In the Drive, two counter-rotating masses (about 1/2 pound apiece) spun on shafts in a light frame. The complete model weighed about 3 pounds. Normally, with such a device, a powerful oscillation would be produced. However, Dean changed the center of rotation of the masses as they spun. This point itself had no mass, so no energy was required to move it. "In the rotation of those counter-rotating masses," explained Dean, "there is a particular phase-angle such that the horizontal vectors are canceled, and the vertical vector is upward, and exactly equal to the weight of the two masses. At that instant, the light framework can be moved upward without exerting any force on the masses." In the demonstration model Campbell saw, a small solenoid moved the frame carrying the rotating masses at the proper instant.
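Campbell's "less than three days" figure follows from simple kinematics: accelerating at 1 g to the halfway point and decelerating thereafter covers a distance d in time t = 2·sqrt(d/a). A quick check; the Mars closest-approach distance used here is a rough assumed round figure:

```python
import math

# Constant-acceleration flight plan: accelerate halfway, flip, decelerate.
# Total trip time over distance d at acceleration a: t = 2 * sqrt(d / a).
def trip_time(distance_m, accel=9.8):
    return 2 * math.sqrt(distance_m / accel)

MARS_CLOSEST_M = 5.6e10  # rough figure for Mars at closest approach
t = trip_time(MARS_CLOSEST_M)
print(t / 86400)  # about 1.75 days, comfortably under Campbell's three
```

The arithmetic, at least, was sound; the difficulty lay entirely in the drive itself.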
The result of forcing the masses "to rotate about two different centers of rotation simultaneously" is "rectified centrifugal force." Dean maintained that the heart of the system was the intricate phasing relationship that had to exist between the counter-rotating eccentrics during operation. The rigid connection between paired shafts and the counter-rotation of masses produced a cancellation of forces and reactions engendered in all directions except in the direction of the desired oscillation. This was always parallel to a plane perpendicular to the axes of rotation of the two masses. The result of the cancellation was an oscillation produced by the resultant forces, which represented the sum of the components of all forces acting in the direction of a plane at right angles to the shaft axes. Thus, claimed Dean, such a freely suspended oscillating system was not subjected to any other reaction or force. The use of six properly phased pairs would produce an almost continuous thrust. Western Gear Corp. ran tests on the Dean device and concluded that it couldn't work, although some computer simulations contradicted this. At the same time, rocket pioneer Alfred Africano recalled that several years earlier he had enjoyed a "ride" on a similar device developed on Long Island by Assen Jordanoff, the famed aviation expert. The 500-pound man-carrying vehicle attained a speed of 1/3 mph. It's important to point out, though, that this movement was horizontal, not vertical. And twenty years before Dean, inventor S. J. Byrne devised, on paper at least, an antigravity method based on displaced inertial masses that he called the "planetary rotor." He did not believe that his invention duplicated Dean's, but rather complemented it, and he offered to join forces with the Washington inventor.
(It is interesting to note that Ernst Mach, of Mach number fame, in his book Die Mechanik in ihrer Entwicklung (1883), published in 1902 in the United States as The Science of Mechanics, with numerous reprints up to the 1960s, described and illustrated a machine very similar to Dean's.) The young Robert Goddard also toyed with an idea similar to Dean's. Ultimately, the Air Force Office of Scientific Research turned the device over to engineer Jacob Rabinow to assess. Rabinow concluded that the system "does not have any unusual properties" and that it "can not produce a unidirectional impulse." He did not even consider it an efficient vibrator or impact machine. The demonstration machine only gave the illusion of generating a force without an equal and opposite reaction by making use of the static friction of the load against the floor, similar to the way in which a person on roller skates can move across a floor by swinging his body to and fro. In the absence of static friction the machine did not perform as claimed. Dean, of course, insisted that Rabinow's tests were improperly performed (and a number of Dean's modern supporters agree, many of whom are convinced inertial propulsion will work). "We are going to have to live with the Drive," he wrote, "whether we want to or not."
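Rabinow's conclusion is what an elementary calculation predicts: the vertical force that a pair of counter-rotating eccentrics exerts on its frame oscillates, but its integral over a full revolution vanishes, leaving vibration and no steady thrust. A sketch with hypothetical mass, radius and speed values (assumptions, not Dean's specifications):

```python
import math

# Hypothetical parameters for a pair of counter-rotating eccentric masses.
m, r, omega = 0.25, 0.05, 100.0  # kg, m, rad/s (illustrative only)

N = 100_000
period = 2 * math.pi / omega
dt = period / N

# The pair's horizontal force components cancel; the vertical component
# is 2*m*r*omega^2*cos(omega*t). Integrate it over one full revolution.
impulse = 0.0
for i in range(N):
    t = i * dt
    impulse += 2 * m * r * omega**2 * math.cos(omega * t) * dt

print(abs(impulse))  # effectively zero: no net one-way impulse per cycle
```

Changing the phasing or shifting the center of rotation reshapes the oscillation but cannot change this integral; only an external force, such as the floor friction Rabinow identified, can move the machine's center of mass.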
American Indians: The Image of the Indian

Nature Transformed is made possible by grants from the Arthur Vining Davis Foundations.

An early twentieth-century elementary school textbook quizzed pupils on their grasp of the lesson devoted to American Indians. It was a time of unblushing certainty about the superiority of civilization to “savagery.” “In what three ways were the Indians different from the white men,” the school text asked, and “What did the white people think of the Indians?” Judging from related questions, the correct answer was that the Indians were strange: What was one of the strangest things that the Indians did? Today it is difficult even to talk about the racial stereotypes once so confidently assumed. Stereotyping as a subject for study may be historical, but the emotions it arouses are eminently present-day. Whether we use terms like image, stereotype or construct, we are talking about the same thing: ideas about a particular group that serve to characterize all the individuals within that group. Certain ideas entrench themselves as fundamental, and the rule of thumb is that such ideas are invariably self-serving—they promote the interests of the group that holds them, and they form the reality upon which that group acts. It is a given today that the idea of the American Indian has been historically significant. It shaped the attitudes of those in the nineteenth century who shaped Indian policy. Indian policy—be it removal of the Eastern tribes in the 1830s, reservation isolationism beginning in the 1850s, or allotment of reservation lands and assimilation in the 1880s—cannot be understood without an awareness of the ideas behind it. Literature and the visual arts provide revealing guides to nineteenth-century assumptions about the Indian. Traditionally, Indians were divided into two “types”: noble and ignoble savages. 
The Indian woman was either a princess or a drudge, the Indian man an admirable brave or a fiendish warrior. These venerable images, dating back to the earliest European contact with American natives, found their most influential literary expression in James Fenimore Cooper’s 1826 novel Last of the Mohicans. Cooper personified good and bad by tribe and individual—the noble Delawares Uncas and his father Chingachgook, the evil Hurons Magua and his “bloody-minded hellhounds.” Lasting influence? Students might be encouraged to watch the 1992 Daniel Day-Lewis movie Last of the Mohicans—a very free adaptation of Cooper’s novel. Better yet, have them watch Dances with Wolves. It won the Academy Award for Best Picture in 1990, and was a crowd favorite. Besides a sympathetic white hero in line with Cooper’s own Natty Bumppo, it starkly contrasts “good” Indians (the ever-so-noble Lakotas) and “bad” Indians (the villainous Pawnees, with their roach-cuts and face paint making them look like English “punks” on a rampage). The stark contrast between the noble and ignoble savage obscures their common denominator: savagery. Savagery referred to a state of social development below civilization and, in some calculations, below an intermediate step, barbarism. Since savagery was inferior to civilization, the reasoning went, a savage was naturally inferior to a civilized person. The noble savage might be admired for certain rude virtues, and the ignoble savage deplored as brutal and bloody-minded, but the fate of each was identical. In time, both would vanish from the face of the earth as civilization, in accordance with the universal law of progress, displaced savagery. The ending of Dances with Wolves echoes this sentiment as an admirable culture, unaware of inexorable fate, is about to be swept away by a more progressive but less admirable one. Swept away. Such was the theory of the Vanishing American. 
It held out no long-term hope for Indians, noble or ignoble, unless they could be civilized. Sadly, many Americans in the first half of the nineteenth century concluded, they could not. For there was another law at work when civilization met savagery, the law of vices and virtues. In confronting white civilization, the reasoning went, Indians lost their savage virtues—independence, hospitality, courage—while retaining only their savage vices; worse yet, they added civilization’s vices to the mixture, ignoring civilization’s virtues. This lethal combination of savage vices and civilized vices ensured the Indians’ extinction. The artist George Catlin (1796–1872), who based his entire body of work—including over 500 paintings done in the 1830s and several books recounting his travels—on the theory of the Vanishing American, provided a vivid description of the process at work: In traversing the immense regions of the Classic West, the mind of a Philanthropist is filled to the brim with feelings of admiration; but to reach this country, one is obliged to descend from the light and glow of civilized atmosphere, through the different grades of civilization, which gradually sink to the most deplorable vice and darkness along our frontier; thence through the most pitiable misery and wretchedness of savage degradation, where the genius of natural liberty and independence have been blasted and destroyed by the contaminating vices and dissipations of civilized society. 
Through this dark and sunken vale of wretchedness one hurries as through a pestilence, until he gradually rises again into the proud and heroic elegance of savage society, in a state of pure and original nature, beyond the reach of civilized contamination … Even here, the predominant passions of the savage breast, of treachery and cruelty, are often found, yet restrained and frequently subdued by the noblest traits of honor and magnanimity,—a race of men who live and enjoy life and its luxuries, and practice its virtues, very far beyond the usual estimations of the world … From the first settlements of our Atlantic coast to the present day, the bane of this blasting frontier has regularly crowded upon them, from the northern to the southern extremities of our country, and, like the fire in a mountain, which destroys every thing where it passes, it has blasted and sunk them, and all but their names, into oblivion, wherever it has traveled.

[Image: Pigeon’s Egg Head (The Light)]

Not everyone accepted such a grim prognosis. Missionaries always rejected the notion of a race created for extinction, and insisted that substituting good example for bad would find the Indians gratefully embracing civilization’s virtues and spurning its vices. Even Catlin held out hope. “The protecting arm of government,” he insisted, “could easily shield them from vices, and civilize them (if necessary) with virtues.” Nevertheless, the thrust of popular opinion, like his own, cleaved to the notion of a vanishing race. “This wild, but noble and unhappy race, is rapidly becoming extinct,” a New York newspaper editorialized in 1837: They are rapidly sinking into the stream of oblivion, and soon nothing of them will remain but the memory of their past existence and glory. Where are now the descendants of Powhattan, the father of Pocahontas, or Tamenend and of Pontiac? Alas! They are blotted from the face of the earth, or swallowed up in the remnants of other tribes. 
Science buttressed popular understanding of the Indian. In the middle of the nineteenth century, polygenesis—the theory of multiple creation of human “types”—provided a race-based explanation for permanent differences in racial capacity, thereby reinforcing notions about the incompatibility of savagery and civilization. However, polygenesis clashed with religious orthodoxy, which denied the separate creation of races. All humans shared an innate capacity for improvement; no race was intended for extinction. Later, evolutionary theorists, in advancing the case for survival of the fittest, gave new credence to the tradition of the Vanishing Indian, since there had to be losers as well as winners in the struggle for survival.

[Images: Title Page; Colton’s The Course of Empire]

Guiding Student Discussion

Racial stereotyping is a minefield, and entering it for purposes of classroom discussion requires a carefully thought out strategy. The truth is that students are often impatient with the past. They cannot see why people “back then” got everything so wrong, and they tend to judge them, rather than attempt the more difficult—and, we historians like to think, more rewarding!—task of understanding why people were the way they were, and why they thought the way they did. In order to discuss historical stereotypes, you have to introduce students to them. This runs the risk of coming across as advocacy. Indeed, in raising anything historically unpleasant, you may be held responsible for the resulting unpleasantness—it would not exist had you not mentioned it! Having introduced stereotypes, you are left to deal with them. Outright condemnation is easy, since it conforms to what students already think. Anything more challenging runs even greater risks. Let me (literally!) illustrate the problem. 
You want to talk about stereotypes of African Americans and American Indians, so you show your class a cartoon of an African American eating watermelon and a photograph of a cigar store Indian. If your point is simply that these images prove the ignorance of EuroAmericans in the past, then you will have no controversy. If you introduce the same images to probe the underlying values of a society that considered them acceptable, then you invite controversy. Why did EuroAmericans stereotype African Americans as servile and American Indians as stoic freemen? And to what ends? What use did the EuroAmerican majority have for each race? The labor of one, of course, and the land of the other. How would those different uses shape stereotypes? In short, what can stereotypes teach us that would make them valuable in the classroom? What can they tell us beyond the obvious? Students may remain unpersuaded. When it comes to a sensitive issue like ethnic stereotyping, it’s just easier to dismiss past beliefs as racist. What else is there to say? Why study the attitudes of another age if, by our standards today, they were deplorable? The answer is in that qualifier—“by our standards today.” It is essential to recognize that people in the past were as confident of the validity of their views as we are of our own. Moral certainty underlay their actions, too. Far from being illogical, they were, according to their lights, entirely logical! And that’s a good departure point for discussion. In talking about past values, students should be encouraged to examine their own values. How are attitudes formed? How do we know what we know? How does experience shape our views? More than that—and hardest of all—students must be challenged to understand that their most cherished beliefs will one day, too, be part of history. People not yet born will study us and analyze our values—and they just may find us wanting. Far from making us feel superior, then, history should chasten us. 
The past has been described as a foreign country. We must visit it with open minds and all due respect for its customs, eager to learn, not simply to judge. Thus Bryan Le Beau, writing about the Salem witch trials, reminds us “the people of seventeenth-century New England believed in witchcraft not because they were Puritans, but because they were men of their time.” And James McPherson, reviewing a book condemning Abraham Lincoln as a racist, observes that Lincoln “shared many of the racist convictions of his time,” but considered slavery “morally wrong” and was able “to transcend his prejudices and to preside over the greatest social revolution in American history, the liberation of four million slaves.” People from the past will never conform to present-day standards; if we would understand them, we must grant them their own worldview in order to evaluate their actions and to draw the critical distinctions that are the heart and soul of history. Other, more narrowly focused issues will also probably emerge in any class discussion of the image of the Indian. Students like to distinguish “good” from “bad”. Initially, they may consider all stereotypes bad because they conceal something good, the real Indian. Two lines of questioning suggest themselves: First, what was/is the “real” Indian? Do we define an Indian racially, by “blood quantum”? Or by an allegiance to traditional culture? Or by federal status (reservation/nonreservation)? How do we define a “real” Indian? Second, are some stereotypes more acceptable than others? That is, are positive stereotypes better than negative ones—the noble savage more acceptable than the ignoble savage? It’s here that Dances with Wolves can be helpful. 
Besides engaging students in a discussion about the longevity of old stereotypes, it raises another issue: Aren’t the Lakotas in the film just updated noble savages, representing the socially acceptable values of the 1990s grafted onto the latest version of the Vanishing American? Just because the Lakotas get to be the good guys in Dances with Wolves, is it okay to stereotype them—or to see them off with tear-dimmed eyes at movie’s end as, faithful to the tradition of the vanishing race, they await the destruction of their way of life? Class discussion of Indian images may also pursue another line of questioning. Granted stereotypes like the noble and ignoble savage and the Vanishing American, who, in particular, believed them—and how do you show that they believed them? Citing a few heavyweight thinkers proves little, and smacks of elitism. How about ordinary people? What did they think—and how do we know? Here the popular culture of any given period is relevant. Today we would look at the electronic media, films, music, etc.; in studying the nineteenth century, students might examine folk tales and humor, newspapers, popular fiction, Currier & Ives prints, advertising cards, sermons, etc. At the very least, the sheer pervasiveness of the major Indian stereotypes in popular culture will be a revelation to most students. Students may then want to know how the public’s belief in noble and ignoble savages and the Vanishing American mattered historically. Given that people held certain views about Indians, So what? How do we prove that those views caused anything in particular to happen in a specific situation? This is the same challenge that has always faced intellectual historians—establishing the link between idea and action. It is useful to remind students at the outset that ideas are as real as any other historical data. Since history itself is a mental exercise, the historian can hardly deny people in the past a fully active mental life of their own. 
As a general proposition, what people believe explains what they do. When, for example, Congressmen in the nineteenth century debated Indian affairs and referred to the bloody savage to promote an aggressive policy, or talked about a noble race that had been dispossessed to advocate a humanitarian policy, we can see a belief system at work with direct, practical consequences. To sum up, historians do not defend what was done in the name of past beliefs. They are not apologists or advocates. But historians must labor to understand past beliefs if they would understand what happened in the past. Ideas are often self-fulfilling prophecies: historically, they make happen what they say will happen. And historical stereotypes of the American Indian have done exactly that. Almost fifty years ago, Roy Harvey Pearce, a literary scholar, in his book The Savages of America: A Study of the Indian and the Idea of Civilization (1953; rev. ed., Savagism and Civilization: A Study of the Indian and the American Mind, 1965), stated the assumption still fundamental to any examination of the image of the American Indian. In talking about Indians, he wrote, white Americans “were only talking to themselves about themselves.” Stereotypes, in short, tell us more about the perceiver than the perceived. Every historical study of Indian images since has worked a variation on Pearce’s premise, be it “the white man’s Indian” (Robert Berkhofer, The White Man’s Indian: Images of the American Indian from Columbus to the Present), “the Vanishing American” (Brian W. Dippie, The Vanishing American: White Attitudes and U.S. Indian Policy), “the invented Indian” (James Clifton, ed., The Invented Indian: Cultural Fictions and Government Policies), “the imaginary Indian” (Daniel Francis, The Imaginary Indian: The Image of the Indian in Canadian Culture), or “the constructed Indian” (Elizabeth S. Bird, ed., Dressing in Feathers: The Construction of the Indian in American Popular Culture). 
Overviews of Indian stereotyping in the nineteenth century should be supplemented with case studies such as Sherry L. Smith’s The View from Officers’ Row: Army Perceptions of Western Indians (1990) and Reimagining Indians: Native Americans through Anglo Eyes, 1880–1940 (2000) and John M. Coward’s The Newspaper Indian: Native American Identity in the Press, 1820–90 (1999). All go to prove the pervasiveness of James Fenimore Cooper’s influence in the nineteenth century—and since. Many Westerners (and some army officers), for example, fancied themselves realists when it came to Indians, and routinely denounced Cooper’s Uncas and the whole sentimental tradition of the noble savage as a palpable fiction—even as they embraced Magua and the ignoble savage as unvarnished truth! Historians have given particular attention to the “So what?” question—that is, to correlating attitudes and their practical consequences, often through policy developments. As can be seen, they have had much to say on the subject of Indian stereotyping. But because of the seminal influence of Cooper’s The Last of the Mohicans in giving memorable form to the noble savage, bloody savage and the Vanishing American, students of American literature continue to lead the way in probing Indian stereotypes. A readable, accessible book is Louise K. Barnett’s The Ignoble Savage: American Literary Racism, 1790–1890 (1975). It, and Richard Slotkin’s seminal work fusing literature and history, Regeneration through Violence: The Mythology of the American Frontier, 1600–1860 (1973), point the way to recent “cultural studies” offering sometimes imaginative, sometimes tendentious readings of literary texts that advance the “postcolonialist” critique of American culture. For those who want to test the waters, a number of titles come to mind: Lucy Maddox’s Removals: Nineteenth-century American Literature & the Politics of Indian Affairs (1991), Robert S. 
Tilton’s Pocahontas: The Evolution of an American Narrative (1994), Cheryl Walker’s Indian Nation: Native American Literature and 19th-Century Nationalisms (1997), Susan Scheckel’s The Insistence of the Indian: Race and Nationalism in Nineteenth-century American Culture (1998) and Renee L. Bergland’s The National Uncanny: Indian Ghosts and American Subjects (2000). Martin Barker and Roger Sabin’s The Lasting of the Mohicans: History of an American Myth (1995) documents the growth industry created by one novel over the years, while Alan Trachtenberg’s Shades of Hiawatha: Staging Indians, Making Americans, 1880–1930 (2004) uses the most famous Indian poem ever written, Henry Wadsworth Longfellow’s Hiawatha (1855), as a launching pad for a broad-gauged investigation of Indian and immigrant stereotypes in the twentieth century. The image of the Indian in art has been comparatively neglected. Two illustrated essays provide different interpretations. Julie Schimmel’s “Inventing ‘the Indian,’” in William Truettner, ed., The West as America: Reinterpreting Images of the Frontier (1991), stresses the construction of the “Indian” in nineteenth-century art, while Brian Dippie’s “The Moving Finger Writes: Western Art and the Dynamics of Change,” in Jules Prown, et al., Discovered Lands, Invented Pasts: Transforming Visions of the American West (1992), focuses on visual representations of the fate of the Indian. Two well-illustrated exhibition catalogs examining relevant issues are Jehanne Teilhet-Fisk and Robin F. Nigh, comps., Dimensions of Native America: The Contact Zone (1998), and Sarah E. Boehme, et al., Powerful Images: Portrayals of Native America (1998). Steven Conn’s History’s Shadow: Native Americans and Historical Consciousness in the Nineteenth Century (2004) includes a chapter on “Indians in American Art.” There has been a growth industry in Edward S. Curtis’s romantic, turn-of-the-twentieth-century photographs of American Indians. Barbara Davis’s Edward S. 
Curtis: The Life and Times of a Shadow Catcher (1985) is the most substantial of the many Curtis picture books, and students always enjoy looking at his work. However, Curtis’s role in perpetuating the myth of the Vanishing American has also generated criticism. Christopher M. Lyman’s The Vanishing Race and Other Illusions: Photographs of Indians by Edward S. Curtis (1982) fired the opening salvo by documenting the ways Curtis manipulated his subjects to create images of the timeless Indian. Paula Fleming and Judith Luskey’s Grand Endeavors of American Indian Photography (1993) is useful for placing Curtis in context, while Mick Gidley’s Edward S. Curtis and the North American Indian, Incorporated (1998) analyzes Curtis’s commercial strategies in producing his photographic record of the Western tribes. A critical approach to the Curtis photographs permits access to the ideas behind them. Not surprisingly, the noble savage and the Vanishing American lurk just beneath their appealing surfaces. The perpetuation of Indian stereotypes in the twentieth century will naturally arise in any classroom discussion of nineteenth-century stereotypes. Students invariably turn to film, television, and music as sources for their own ideas, and I have already mentioned the usefulness of a film like Dances with Wolves in stimulating interest. Consequently, the literature on cinema as a source for Indian stereotypes may prove relevant. The most recent studies (with up-to-date bibliographies) are Peter Rollins and John O’Connor’s Hollywood’s Indian: The Portrayal of the Native American in Film (1998), Jacquelyn Kilpatrick’s Celluloid Indians: Native Americans and Film (1999), and Armando José Prats’ Invisible Natives: Myth and Identity in the American Western (2002). 
More broadly, twentieth-century popular culture and the Indian figure into the Elizabeth Bird anthology Dressing in Feathers, as well as Rennard Strickland’s Tonto’s Revenge: Reflections on American Indian Culture and Policy (1997), Ward Churchill’s polemical Fantasies of the Master Race: Literature, Cinema, and the Colonization of American Indians (1998), and Philip J. Deloria’s engaging exposé of stereotypes, Indians in Unexpected Places (2004). But in bringing the subject of Indian stereotypes in literature and art up to the present, it seems to me useful to end with something else—the contemporary American Indian voice. Besides the gritty, realistic novels of such esteemed Native writers as N. Scott Momaday, Louise Erdrich, James Welch and Leslie Marmon Silko, I recommend Sherman Alexie’s The Lone Ranger and Tonto Fistfight in Heaven (1993), a collection of short stories that keep an eagle eye on some of the absurdities of Indian stereotyping, and that served as the basis for Smoke Signals (1998), another film your students should see. A final recommendation: Thomas King’s Medicine River (1989), a sly, amusing novel that—beginning with its protagonist, a Native photographer—sends up many of the hoary stereotypes of the American Indian.
The Neolithic was a period in the development of human technology, beginning about 10,200 BC, according to the ASPRO chronology, in some parts of the Middle East, and later in other parts of the world, and ending between 4500 and 2000 BC. Traditionally considered the last part of the Stone Age or the New Stone Age, the Neolithic followed the terminal Holocene Epipaleolithic period and commenced with the beginning of farming, which produced the "Neolithic Revolution". It ended when metal tools became widespread (in the Copper Age or Bronze Age; or, in some geographical regions, in the Iron Age). The Neolithic is a progression of behavioral and cultural characteristics and changes, including the use of wild and domestic crops and of domesticated animals. The beginning of the Neolithic culture is considered to be in the Levant (Jericho, modern-day West Bank) about 10,200–8800 BC. It developed directly from the Epipaleolithic Natufian culture in the region, whose people pioneered the use of wild cereals, which then evolved into true farming. The Natufian period was between 12,000 and 9500 BC, and the so-called "proto-Neolithic" is now included in the Pre-Pottery Neolithic (PPNA) between 10,200 and 8800 BC. As the Natufians had become dependent on wild cereals in their diet, and a sedentary way of life had begun among them, the climatic changes associated with the Younger Dryas are thought to have forced people to develop farming. By 10,200–8800 BC, farming communities arose in the Levant and spread to Asia Minor, North Africa and North Mesopotamia. Mesopotamia is the site of the earliest developments of the Neolithic Revolution from around 10,000 BC. Early Neolithic farming was limited to a narrow range of plants, both wild and domesticated, which included einkorn wheat, millet and spelt, and the keeping of dogs, sheep and goats. 
By about 6900–6400 BC, it included domesticated cattle and pigs, the establishment of permanently or seasonally inhabited settlements, and the use of pottery. Not all of these cultural elements characteristic of the Neolithic appeared everywhere in the same order: the earliest farming societies in the Near East did not use pottery. In other parts of the world, such as Africa, South Asia and Southeast Asia, independent domestication events led to their own regionally distinctive Neolithic cultures that arose completely independently of those in Europe and Southwest Asia. Early Japanese societies and other East Asian cultures used pottery before developing agriculture. Unlike during the Paleolithic, only one human species (Homo sapiens sapiens) existed in the Neolithic. The term Neolithic derives from the Greek νέος néos, "new" and λίθος líthos, "stone", literally meaning "New Stone Age". The term was invented by Sir John Lubbock in 1865 as a refinement of the three-age system. In the Middle East, cultures identified as Neolithic began appearing in the 10th millennium BC. Early development occurred in the Levant (e.g., Pre-Pottery Neolithic A and Pre-Pottery Neolithic B) and from there spread eastwards and westwards. Neolithic cultures are also attested in southeastern Anatolia and northern Mesopotamia by around 8000 BC. The prehistoric Beifudi site near Yixian in Hebei Province, China, contains relics of a culture contemporaneous with the Cishan and Xinglongwa cultures of about 6000–5000 BC, neolithic cultures east of the Taihang Mountains, filling in an archaeological gap between the two Northern Chinese cultures. The total excavated area is more than 1,200 square yards (1,000 m2; 0.10 ha), and the collection of neolithic findings at the site encompasses two phases. 
The Neolithic 1 (PPNA) period began roughly around 10,000 BC in the Levant. A temple area in southeastern Turkey at Göbekli Tepe dated around 9500 BC may be regarded as the beginning of the period. This site was developed by nomadic hunter-gatherer tribes, evidenced by the lack of permanent housing in the vicinity, and may be the oldest known human-made place of worship. At least seven stone circles, covering 25 acres (10 ha), contain limestone pillars carved with animals, insects, and birds. Stone tools were used by perhaps as many as hundreds of people to create the pillars, which might have supported roofs. Other early PPNA sites dating to around 9500–9000 BC have been found in Jericho, Israel (notably Ain Mallaha, Nahal Oren, and Kfar HaHoresh), Gilgal in the Jordan Valley, and Byblos, Lebanon. The start of Neolithic 1 overlaps the Tahunian and Heavy Neolithic periods to some degree. The major advance of Neolithic 1 was true farming. In the proto-Neolithic Natufian cultures, wild cereals were harvested, and perhaps early seed selection and re-seeding occurred. The grain was ground into flour. Emmer wheat was domesticated, and animals were herded and domesticated (animal husbandry and selective breeding). In 2006, remains of figs were discovered in a house in Jericho dated to 9400 BC. The figs are of a mutant variety that cannot be pollinated by insects, and therefore the trees can only reproduce from cuttings. This evidence suggests that figs were the first cultivated crop and mark the invention of the technology of farming. This occurred centuries before the first cultivation of grains. Settlements became more permanent, with circular, single-room houses much like those of the Natufians. However, these houses were for the first time made of mudbrick. The settlement had a surrounding stone wall and perhaps a stone tower (as in Jericho). 
The wall served as protection from nearby groups, as protection from floods, or to keep animals penned. Some of the enclosures also suggest grain and meat storage. The Neolithic 2 (PPNB) began around 8800 BC according to the ASPRO chronology in the Levant (Jericho, Palestine). As with the PPNA dates, there are two versions from the same laboratories noted above. This system of terminology, however, is not convenient for southeast Anatolia and settlements of the middle Anatolia basin. A settlement of 3,000 inhabitants was found in the outskirts of Amman, Jordan. Considered to be one of the largest prehistoric settlements in the Near East, called 'Ain Ghazal, it was continuously inhabited from approximately 7250 BC to approximately 5000 BC. Settlements have rectangular mud-brick houses where the family lived together in single or multiple rooms. Burial findings suggest an ancestor cult in which people preserved skulls of the dead, which were plastered with mud to make facial features. The rest of the corpse could have been left outside the settlement to decay until only the bones were left; the bones were then buried inside the settlement, underneath the floor or between houses. The Neolithic 3 (PN) began around 6400 BC in the Fertile Crescent. By then distinctive cultures emerged, with pottery like the Halafian (Turkey, Syria, Northern Mesopotamia) and Ubaid (Southern Mesopotamia). This period has been further divided into PNA (Pottery Neolithic A) and PNB (Pottery Neolithic B) at some sites. Around 10,000 BC the first fully developed Neolithic cultures belonging to the phase Pre-Pottery Neolithic A (PPNA) appeared in the Fertile Crescent. Around 10,700–9400 BC a settlement was established in Tell Qaramel, 10 miles (16 km) north of Aleppo. The settlement included two temples dating to 9650 BC. 
Around 9000 BC during the PPNA, one of the world's first towns, Jericho, appeared in the Levant. It was surrounded by a stone and marble wall and contained a population of 2,000–3,000 people and a massive stone tower. Around 6400 BC the Halaf culture appeared in Lebanon, Israel and Palestine, Syria, Anatolia, and Northern Mesopotamia and subsisted on dryland agriculture. In 1981 a team of researchers from the Maison de l'Orient et de la Méditerranée, including Jacques Cauvin and Oliver Aurenche, divided Near East Neolithic chronology into ten periods (0 to 9) based on social, economic and cultural characteristics. In 2002 Danielle Stordeur and Frédéric Abbès advanced this system with a division into five periods. Domestication of sheep and goats reached Egypt from the Near East possibly as early as 6000 BC. Graeme Barker states "The first indisputable evidence for domestic plants and animals in the Nile valley is not until the early fifth millennium BC in northern Egypt and a thousand years later further south, in both cases as part of strategies that still relied heavily on fishing, hunting, and the gathering of wild plants", and suggests that these subsistence changes were not due to farmers migrating from the Near East but were an indigenous development, with cereals either indigenous or obtained through exchange. Other scholars argue that the primary stimulus for agriculture and domesticated animals (as well as mud-brick architecture and other Neolithic cultural features) in Egypt was from the Middle East. In southeast Europe agrarian societies first appeared in the 7th millennium BC, attested by one of the earliest farming sites of Europe, discovered in Vashtëmi, southeastern Albania, and dating back to 6500 BC. Anthropomorphic figurines have been found in the Balkans from 6000 BC, and in Central Europe by around 5800 BC (La Hoguette). 
Among the earliest cultural complexes of this area are the Sesklo culture in Thessaly, which later expanded in the Balkans giving rise to Starčevo-Körös (Criș), Linearbandkeramik, and Vinča. Through a combination of cultural diffusion and migration of peoples, the Neolithic traditions spread west and northwards to reach northwestern Europe by around 4500 BC. The Vinča culture may have created the earliest system of writing, the Vinča signs, though archaeologist Shan Winn believes they most likely represented pictograms and ideograms rather than a truly developed form of writing. The Cucuteni-Trypillian culture built enormous settlements in Romania, Moldova and Ukraine from 5300 to 2300 BC. The megalithic temple complexes of Ġgantija on the Mediterranean island of Gozo (in the Maltese archipelago) and of Mnajdra (Malta) are notable for their gigantic Neolithic structures, the oldest of which date back to around 3600 BC. The Hypogeum of Ħal-Saflieni, Paola, Malta, is a subterranean structure excavated around 2500 BC; originally a sanctuary, it became a necropolis, the only prehistoric underground temple in the world, and it shows a degree of artistry in stone sculpture unique in prehistory to the Maltese islands. After 2500 BC, the Maltese Islands were depopulated for several decades until the arrival of a new influx of Bronze Age immigrants, a culture that cremated its dead and introduced smaller megalithic structures called dolmens to Malta. In most cases these are small chambers, with the cover made of a large slab placed on upright stones. They are claimed to belong to a population certainly different from that which built the previous megalithic temples. It is presumed the population arrived from Sicily because of the similarity of Maltese dolmens to some small constructions found in the largest island of the Mediterranean sea. 
The earliest Neolithic sites in South Asia are Bhirrana in Haryana, dated to 7570–6200 BC, and Mehrgarh, dated to 7500 BC, in the Kachi plain of Baluchistan, Pakistan; the latter site has evidence of farming (wheat and barley) and herding (cattle, sheep and goats). In South India, the Neolithic began by 6500 BC and lasted until around 1400 BC, when the Megalithic transition period began. The South Indian Neolithic is characterized by ashmounds from 2500 BC in the Karnataka region, which expanded later to Tamil Nadu. The 'Neolithic' (defined in this paragraph as using polished stone implements) remains a living tradition in small and extremely remote and inaccessible pockets of West Papua (Indonesian New Guinea). Polished stone adzes and axes are used in the present day (as of 2008) in areas where the availability of metal implements is limited. This is likely to cease altogether in the next few years as the older generation dies off and steel blades and chainsaws prevail. In 2012, news was released about a new farming site discovered in Munam-ri, Goseong, Gangwon Province, South Korea, which may be the earliest farmland known to date in east Asia. "No remains of an agricultural field from the Neolithic period have been found in any East Asian country before, the institute said, adding that the discovery reveals that the history of agricultural cultivation at least began during the period on the Korean Peninsula". The farm was dated between 3600 and 3000 BC. Pottery, stone projectile points, and possible houses were also found. "In 2002, researchers discovered prehistoric earthenware, jade earrings, among other items in the area". The research team will perform accelerator mass spectrometry (AMS) dating to retrieve a more precise date for the site. In Mesoamerica, a similar set of events (i.e., crop domestication and sedentary lifestyles) occurred by around 4500 BC, but possibly as early as 11,000–10,000 BC. 
These cultures are usually not referred to as belonging to the Neolithic; in the Americas different terms are used, such as Formative stage instead of mid-late Neolithic, Archaic Era instead of Early Neolithic, and Paleo-Indian for the preceding period. The Formative stage is equivalent to the Neolithic Revolution period in Europe, Asia, and Africa. In the southwestern United States it occurred from 500 to 1200 AD, when there was a dramatic increase in population and development of large villages supported by agriculture based on dryland farming of maize, and later, beans, squash, and domesticated turkeys. During this period the bow and arrow and ceramic pottery were also introduced. During most of the Neolithic age of Eurasia, people lived in small tribes composed of multiple bands or lineages. There is little scientific evidence of developed social stratification in most Neolithic societies; social stratification is more associated with the later Bronze Age. Although some late Eurasian Neolithic societies formed complex stratified chiefdoms or even states, states evolved in Eurasia only with the rise of metallurgy, and most Neolithic societies on the whole were relatively simple and egalitarian. Beyond Eurasia, however, states were formed during the local Neolithic in three areas, namely in the Preceramic Andes with the Norte Chico Civilization, Formative Mesoamerica and Ancient Hawaiʻi. However, most Neolithic societies were noticeably more hierarchical than the Paleolithic cultures that preceded them and hunter-gatherer cultures in general. The domestication of large animals (c. 8000 BC) resulted in a dramatic increase in social inequality in most of the areas where it occurred, New Guinea being a notable exception. Possession of livestock allowed competition between households and resulted in inherited inequalities of wealth. 
Neolithic pastoralists who controlled large herds gradually acquired more livestock, and this made economic inequalities more pronounced. However, evidence of social inequality is still disputed, as settlements such as Çatalhöyük reveal a striking lack of difference in the size of homes and burial sites, suggesting a more egalitarian society with no evidence of the concept of capital, although some homes do appear slightly larger or more elaborately decorated than others. Families and households were still largely independent economically, and the household was probably the center of life. However, excavations in Central Europe have revealed that early Neolithic Linear Ceramic cultures (Linearbandkeramik) were building large arrangements of circular ditches between 4800 and 4600 BC. These structures (and their later counterparts such as causewayed enclosures, burial mounds, and henges) required considerable time and labour to construct, which suggests that some influential individuals were able to organise and direct human labour, though non-hierarchical and voluntary work remain possibilities. There is a large body of evidence for fortified settlements at Linearbandkeramik sites along the Rhine, as at least some villages were fortified for some time with a palisade and an outer ditch. Settlements with palisades and weapon-traumatized bones, such as those found at the Talheim Death Pit, demonstrate "systematic violence between groups", and warfare was probably much more common during the Neolithic than in the preceding Paleolithic period. This supplanted an earlier view of the Linear Pottery Culture as living a "peaceful, unfortified lifestyle". Control of labour and inter-group conflict is characteristic of corporate-level or 'tribal' groups, headed by a charismatic individual, whether a 'big man' or a proto-chief, functioning as a lineage-group head. 
Whether a non-hierarchical system of organization existed is debatable, and there is no evidence that explicitly suggests that Neolithic societies functioned under any dominating class or individual, as was the case in the chiefdoms of the European Early Bronze Age. Theories to explain the apparent implied egalitarianism of Neolithic (and Paleolithic) societies have arisen, notably the Marxist concept of primitive communism. The shelter of early people changed dramatically from the Paleolithic to the Neolithic era. In the Paleolithic, people did not normally live in permanent constructions. In the Neolithic, mud-brick houses coated with plaster started appearing. The growth of agriculture made permanent houses possible. Doorways were made on the roof, with ladders positioned both on the inside and outside of the houses. The roof was supported by beams from the inside. The rough ground was covered by platforms, mats, and skins on which residents slept. Stilt-house settlements were common in the Alpine and Pianura Padana (Terramare) region. Remains have been found at the Ljubljana Marshes in Slovenia and at the Mondsee and Attersee lakes in Upper Austria, for example. A significant and far-reaching shift in human subsistence and lifestyle was to be brought about in areas where crop farming and cultivation were first developed: the previous reliance on an essentially nomadic hunter-gatherer subsistence technique or pastoral transhumance was at first supplemented, and then increasingly replaced by, a reliance upon the foods produced from cultivated lands. These developments are also believed to have greatly encouraged the growth of settlements, since it may be supposed that the increased need to spend more time and labor in tending crop fields required more localized dwellings. 
This trend would continue into the Bronze Age, eventually giving rise to permanently settled farming towns, and later cities and states whose larger populations could be sustained by the increased productivity from cultivated lands. The profound differences in human interactions and subsistence methods associated with the onset of early agricultural practices in the Neolithic have been called the Neolithic Revolution, a term coined in the 1920s by the Australian archaeologist Vere Gordon Childe. One potential benefit of the development and increasing sophistication of farming technology was the possibility of producing surplus crop yields, in other words, food supplies in excess of the immediate needs of the community. Surpluses could be stored for later use, or possibly traded for other necessities or luxuries. Agricultural life afforded securities that pastoral life could not, and sedentary farming populations grew faster than nomadic ones. However, early farmers were also adversely affected in times of famine, such as may be caused by drought or pests. In instances where agriculture had become the predominant way of life, the sensitivity to these shortages could be particularly acute, affecting agrarian populations to an extent that otherwise may not have been routinely experienced by prior hunter-gatherer communities. Nevertheless, agrarian communities generally proved successful, and their growth and the expansion of territory under cultivation continued. Another significant change undergone by many of these newly agrarian communities was one of diet. Pre-agrarian diets varied by region, season, available local plant and animal resources, and degree of pastoralism and hunting. Post-agrarian diet was restricted to a limited package of successfully cultivated cereal grains, plants, and, to a variable extent, domesticated animals and animal products. 
Supplementation of diet by hunting and gathering was to variable degrees precluded by the increase in population above the carrying capacity of the land and a high sedentary local population concentration. In some cultures, there would have been a significant shift toward increased starch and plant protein. The relative nutritional benefits and drawbacks of these dietary changes, and their overall impact on early societal development, are still debated. In addition, increased population density, decreased population mobility, increased continuous proximity to domesticated animals, and continuous occupation of comparatively population-dense sites would have altered sanitation needs and patterns of disease. The identifying characteristic of Neolithic technology is the use of polished or ground stone tools, in contrast to the flaked stone tools used during the Paleolithic era. Neolithic people were skilled farmers, manufacturing a range of tools necessary for the tending, harvesting and processing of crops (such as sickle blades and grinding stones) and food production (e.g., pottery, bone implements). They were also skilled manufacturers of a range of other types of stone tools and ornaments, including projectile points, beads, and statuettes. But what allowed forest clearance on a large scale was, above all other tools, the polished stone axe. Together with the adze, used for fashioning wood for shelter, structures and canoes, for example, this enabled them to exploit their newly won farmland. Neolithic peoples in the Levant, Anatolia, Syria, northern Mesopotamia and Central Asia were also accomplished builders, utilizing mud-brick to construct houses and villages. At Çatalhöyük, houses were plastered and painted with elaborate scenes of humans and animals. In Europe, long houses built from wattle and daub were constructed. Elaborate tombs were built for the dead. These tombs are particularly numerous in Ireland, where many thousands still exist. 
Neolithic people in the British Isles built long barrows and chamber tombs for their dead, and causewayed camps, henges, flint mines and cursus monuments. It was also important to figure out ways of preserving food for future months, such as fashioning relatively airtight containers and using substances like salt as preservatives. The peoples of the Americas and the Pacific mostly retained the Neolithic level of tool technology until the time of European contact. Exceptions include copper hatchets and spearheads in the Great Lakes region. Most clothing appears to have been made of animal skins, as indicated by finds of large numbers of bone and antler pins that are ideal for fastening leather. Wool cloth and linen might have become available during the later Neolithic, as suggested by finds of perforated stones that (depending on size) may have served as spindle whorls or loom weights. The clothing worn in the Neolithic Age might be similar to that worn by Ötzi the Iceman, although he was not Neolithic (since he belonged to the later Copper Age). Dates for Neolithic settlements are very approximate and are only given as rough estimates; consult each culture for specific time periods. Periodization: Middle East: 4500–3300 BC; Europe: 3000–1700 BC; elsewhere: varies greatly, depending on region. In the Americas, the Eneolithic ended as late as the 19th century AD for some peoples.
Influenza virus may be transmitted among humans in three ways: by direct contact with infected individuals; by contact with contaminated objects (called fomites, such as toys or doorknobs); and by inhalation of virus-laden aerosols. The contribution of each mode to overall transmission of influenza is not known. But something that most of us touch on a daily basis – paper currency – appears to be able to hold infectious virus for a surprisingly long period of time. The idea that currency can serve as a vector for transmission of influenza virus is attractive since billions of banknotes change hands daily throughout the globe. To determine if virus can remain infectious on banknotes, a small volume (50 microliters) of a viral suspension was added to a 50 franc Swiss note. The note was kept at room temperature, and at different times the inoculated area was cut out, immersed in buffer, and viral infectivity was determined in cell culture. Infectivity of influenza A (H1N1) and influenza B viruses was detected for only 1 and 2 hours, respectively. In contrast, two different influenza A (H3N2) viruses were detected for up to 1 and 3 days. As expected, the more virus placed on the banknote, the longer infectivity could be detected. Addition of respiratory secretions to the viral inoculum also increased the ‘survival’ time. For example, influenza A/Moscow/10/99 (H3N2) remained infectious on banknotes for up to 8 days in the presence of mucus, compared with 2 days without mucus. When higher amounts of virus with mucus were added to banknotes, infectivity could be detected for 17 days. Mucus might provide a protective matrix which slows the loss of viral infectivity. To determine if similar results would be observed using human specimens, nasopharyngeal secretions from children with influenza-like illness were inoculated onto banknotes. Virus from half of the samples could be detected on the currency for 24 hours, and from 36% of specimens for 48 hours. 
These observations demonstrate that influenza virus infectivity remains on banknotes for days. In theory virus could be transferred from currency to the nasal tract by contaminated fingers, initiating an infection. Whether humans acquire influenza by this route is unknown. However, good hand hygiene, which is known to remove influenza virus, is an excellent preventative measure – especially after handling currency. This study was carried out – where else? – in Switzerland, where 7 million individuals exchange 20 – 100 million banknotes each day. Thomas Y, Vogel G, Wunderli W, Suter P, Witschi M, Koch D, Tapparel C, & Kaiser L (2008). Survival of influenza virus on banknotes. Applied and environmental microbiology, 74 (10), 3002-7 PMID: 18359825
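The pattern in these results (a larger inoculum buys only a modest extension of detection time, while mucus multiplies it) is what one would expect from simple first-order decay of infectivity. The sketch below illustrates that reasoning; the titers, detection limit, and half-lives are illustrative assumptions, not values measured in the study.

```python
import math

def detection_time(initial_titer, detection_limit, half_life_hours):
    """Hours until infectious titer falls below the detection limit,
    assuming first-order (exponential) decay of infectivity."""
    decay_rate = math.log(2) / half_life_hours
    return math.log(initial_titer / detection_limit) / decay_rate

# Illustrative numbers only: a 10-fold larger inoculum adds a fixed
# increment of time, whereas mucus (modeled here as a 4x longer
# half-life) multiplies the entire survival time.
base = detection_time(1e4, 1e1, half_life_hours=5)        # no mucus
more_virus = detection_time(1e5, 1e1, half_life_hours=5)  # 10x inoculum
with_mucus = detection_time(1e4, 1e1, half_life_hours=20) # protective matrix
```

Under this toy model, quadrupling the half-life quadruples the survival time, consistent with the jump from 2 days without mucus to 8 days with it.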
BLAST Early Learning Teacher Spotlight: Narrative skill is the ability to describe things and events and to tell stories. There is a strong relationship between spoken language and written language. Once printed words are recognized, text comprehension depends heavily on the reader's oral language abilities. Language development in preschoolers is related to later reading achievement. A number of studies support these conclusions by demonstrating a positive correlation between oral skills and reading. In short, children who have larger vocabularies and a better understanding of spoken language have higher reading scores. - Take time before reading the book to explain an unfamiliar word. - Have children and adults say repeated words along with you as you read the book. - Have children do a motion as they repeat a phrase along with you as you read a book. - Reuse a book from a previous session with a different theme to bring out different aspects of the story. - You may use fewer books and expand on them more. - Retell a story with puppets, a flannel board, props, and/or creative dramatics. - Allow time for children to talk about the theme. - Demonstrate dialogic reading.
Read Genesis 1:1-2:4a 1. Write a list of what was created on each day in your own terms. Do you think the order of creation was important? Why? 2. Some claim that each 'day' was actually a long 'age.' How long was a day? How can you prove your position? 3. What do you think it means that man was created in God's image? 4. What responsibilities were given to man? 5. What was the initial food for man and every creature on the earth? 6. Why did God rest on the seventh day? Do you think He was tired? 7. Write what happened on each of the first seven days of the creation.
Using photographs of windows and doors, students create the profiles of three flatmates before they watch a YouTube video which reveals the true story. You can find the video here: http://www.youtube.com/watch?v=nGeKSiCQkPw You can use this grammar story to introduce: - present continuous (used as an example here) - past simple - indirect questions - reported questions - question tags 1. Show students two matching photos: one of a window and one of the entrance door. You may want to display digital images. You can find some Creative Commons licensed images here: www.flickr.com 2. Introduce the main characters of the story: Three men (named A, B and C) are sharing a flat. These are their windows and the entrance door leading to the flat. 3. Based on that, students develop the men's identities (names, ages, jobs, personalities, how they met, etc.). NB: At this stage, make sure all students believe that the flatmates are humans! 4. Before you play the video (with the sound off!), present the story background and set up the task: You are going to see a conversation between Flatmate A and Flatmate B. Unfortunately, you will only see Flatmate B's reactions. Based on that, answer the question: What is Flatmate A doing? 5. Play the video with the sound off. At this stage, students will discover that Flatmate B is a dog. 6. Before you play the video for the second time, set up the second task: You are going to see and hear what is going on. Answer the question: Who is Flatmate C? 7. Play the video with the sound on. At this stage, students will discover that Flatmate C is a cat and that the dog is in fact taking part in the conversation thanks to a hilarious dubbing idea. 8. Elicit students' answers. Distribute the scripts of the dialogue and have students find the one example of the present continuous structure used in the dialogue (‘You're kidding me.’)
O.K. So you’ve gotten this assignment. It’s for a 5-page essay on a characterization of Sydney Carton in Dickens’s Tale of Two Cities, and you are thinking, “Five-page essay? Does he (the teacher) mean a paper?” The answer is “no,” because there are some basic differences between an essay and a research paper. - An essay, unlike a paper, does not generally have the level of research found in a paper. - Essays are usually shorter than papers but certainly can be five pages long. - Essays come in a variety of types, but papers are either argumentative or analytical. In terms of how to write a 5-page essay, however, the steps are certainly similar to those for a research paper (minus the hefty research, however, so be happy about that!). Step One: Obviously, you have to have a topic, and it must be complex enough to be able to write 5 pages about it. So, if you are in charge of selecting your own topic, make sure that you can write 1000 words on it. Most often, however, at least the general topic area is assigned, and you can choose among options – whew – this step has pretty much been done for you! Step Two: A purpose. Here’s another step that has probably already been assigned. Purposes refer to the type of essay you will write – narrative, comparison/contrast, explanatory, persuasive, descriptive, analytical, etc. In the case of the characterization of Sydney Carton, your essay will be analytical – you have to discuss his personality traits and his character growth during the story, and, yes, you will have to reference passages from the book to prove your points. Step Three: Your Thesis. Every essay must have a point. If it doesn’t, then why are you writing it? I always suggest to students that they devise a “working thesis,” write the essay, and then refine that thesis statement before placing it in the introduction (which, by the way, you should write last). Step Four: Time for the 5-page essay outline!
You cannot write an essay without a plan, and the outline is your plan, or blueprint, for the sequence in which you will cover the points you intend to make. Back to Sydney Carton! He is first introduced as a lawyer who has fallen on bad times. In fact, he has pretty much wasted his life and is a raging alcoholic. He is also in love with Lucie Manette but quite resigned to the fact that he shall never have her – she’s too good and too pure and deserves someone much better. His professional life is a mess too. He doesn’t really practice law but works more as an assistant to a lawyer who doesn’t think much of him either. Still, he is crafty and proves that through a trick during a trial early in the novel. During the remainder of the novel, he transforms, gradually but purposefully, and you will need to describe these changes and prove them with text from the book. Your outline, then, becomes pretty clear. It will be a sequential one. An outline for a persuasive essay will be a bit different, because each of the points of your argument will be listed in the order in which you will cover them – usually most important down to the least impactful. Step Five: The Rough Draft. You know the “drill.” You sit down and actually write the essay. Remember what I said about research? Well, here’s where it gets a little “sticky.” Some essays will require a small amount of research, especially if you need to include some facts and figures – in a comparison/contrast or a persuasive essay most often. These are easily found online, however, so it’s not like you have to go to the library and pore through books and periodicals! Be thankful! Step Six: Refine That Thesis Statement. Once you have the complete essay in front of you and have had some time to reflect on what you have said, you are ready to craft a great thesis statement. In the case of the essay on Sydney Carton, for example, I might write, “The story of Sydney Carton is a story of redemption.” That’s it.
My reader knows what I believe about this character and what I am going to address in my essay – it’s really that simple! And your conclusion should refer back to that thesis statement, perhaps enlarging upon the idea of redemption as a possibility for anyone. Now, some thesis statements may be a bit longer, but they must certainly be clear. If, for example, I were to write a persuasive essay that takes a position on fracking, then after some simple reading about it I might decide that I am currently against it. My thesis statement might be this: “There is simply not enough evidence yet to prove that fracking is safe or dangerous, and it therefore should be halted until the evidence is in.” Step Seven: That Pesky Revision. Unless you are a seasoned professional writer, your rough draft is not submissable (is that even a word? I don’t think so). But you know what I mean. You have to clean it up. Review it first for structural soundness. Does it flow logically, one paragraph to the next? Transition sentences create the flow, so be sure you’ve got good ones! The second revision should look at sentence structure and other grammatical issues (punctuation too). Did you vary sentence lengths? Do you have subject-verb and pronoun-antecedent agreement? Are your verb tenses all aligned? Spell and grammar check will catch most of these, but don’t rely on them completely. Spell check will not catch wrong usage such as in “there, their, and they’re” or “to, two, and too.” And I have yet to see a grammar check program that really gets commas right in all instances! Step Eight: Write the final draft, and format it as you have been instructed. Step Nine: If you have done your job right, you can expect an above-average grade!
Structure your study time to maximize your learning. Your brain needs time and focus to make your new memories permanent, so it's better to schedule shorter study periods. For reading and general reviewing, a maximum of 45-50 minutes with a ten-minute break is recommended, while memorizing should probably be done in shorter time blocks, perhaps 15-30 minutes. Alternate subjects, so that your brain gets a change every study session. Be sure to schedule longer breaks every few hours, including time to relax before going to bed. - Dedicate approximately half of your study time to "output." This "output" is simply the various ways you process and reproduce the information. The form of output will depend on the subject matter. For example, output could include testing your ability to do math problems, making a summary of the major topics, developing a "contrast and compare" chart, creating a mind map (or concept map) to organize important points and concepts within a discipline or new concept, generating a chronology, labelling a blank diagram or writing practice tests. - Briefly review the material you've just learned at the end of every study period. Tell yourself that this material will become a permanent memory very soon. "Lock it in." - Test your knowledge soon and often. Although you might feel testing yourself before you've studied thoroughly is pointless, forcing yourself to recall material helps you to learn that material. Testing early also gives you valuable feedback. Perhaps your self-test indicates that you know a particular topic fairly well, so you can spend your valuable time studying another topic or subject that you don't know as well. If you test yourself early and find that you don't know a topic as well as you thought, you will have time to learn it before the final exam. Reviewing old exams can be an effective way to see what sort of questions could be asked, to test your own knowledge, and to gain experience that develops your test-taking skills.
Check with the USSU Help Centre about getting copies of old exams for your classes. - Arrange with some of your classmates to discuss the material and study in groups. Group study provides new perspectives and knowledge about the material, and provides yet another way to approach or interact with the material (see the handouts on Learning Styles). During group discussions, remember to practise writing your answers, as most exams are not orals. - Spend a few minutes at the beginning of each study period to develop your "mental set". Clear a space, physically, mentally and emotionally, where you are able to learn. Close your eyes briefly and evoke your curiosity. If you become very confused or frustrated, find at least one good thing about the subject you are studying. How does the subject relate to your life experiences, volunteer work, world problems you've pondered, classes you've taken or jobs you've held? At times, your reason may be very practical, such as "I need this class to graduate", and this is okay too. Challenge negative inner thoughts while studying with positive self-talk – "I want to learn this. It's interesting. I'm learning valuable skills and knowledge." - Develop a "ritual" before you begin to study, such as getting yourself a drink, setting out the tools you'll need (pencil, pen, eraser, etc.), and then reviewing your goals for the session. Some students like to stimulate their memory with sensory cues, such as sounds, tastes or smells. Choose something that you can recreate in the exam situation. For example, burning a candle often helps students relax and focus at the beginning of a study session, but bringing a candle into an exam will be problematic. Having a hard candy, such as a peppermint or cough drop, is less conspicuous and easy to obtain. Schedule time for adequate sleep, exercise and healthy meals. Your brain depends on oxygen and glycogen to function. It gets oxygen when you exercise.
To maintain your glycogen levels, eat a combination of protein, carbohydrates and healthy fat at every meal, even snacks. If you need to chew while you study, you're better off with celery, broccoli or carrots with a slice of cheese than munching on cookies. Be mindful about the amount of caffeine you consume. Although a cup or two of coffee or a can of pop may help you study (especially if it is part of your study ritual), too much caffeine may interfere with your sleep. A regular can of pop might contain 16 teaspoons of sugar! This sugar temporarily boosts your blood sugar levels, but the boost will be followed by a low as your body produces insulin to remove the excess sugar from your bloodstream. Not only that, but pop often contains caffeine, sometimes more than the strongest coffee. Your brain and body won't know what hit them! Watch your energy and anxiety levels. Your ability to learn depends on your ability to concentrate. A few minutes of alert, focussed concentration is worth more than an hour spent gazing sleepily at the same sentence! You want to stay in the optimal energy zone for learning, neither too low in energy nor too high in anxiety. A little anxiety actually helps your motivation, but too much interferes with your ability to concentrate, interferes with sleep, and drains your energy. Although it takes practice, you can learn how to relax whenever you wish. Anxiety can surface as physical complaints, such as tense muscles or "butterflies" in the stomach, but it can also express itself as negative emotions. If you feel frustrated, annoyed or worried, take a few minutes to identify the source. Are you worried about a particular exam? Is the material exceptionally difficult? Is negative self-talk undermining your concentration? Are you simply burnt out from studying? Identify the problem and take action to address it.
Macheyeki A.S. (Geological Survey of Tanzania), Mdala H. (Geological Survey of Malawi), Chapola L.S. (The Catholic University of Malawi), Manhica V.J. (National Directorate of Geology), and 14 more authors. Journal of African Earth Sciences, 2015. The East African Rift System (EARS) is subject to natural hazards: earthquakes, volcanic eruptions, and landslides along the faulted margins and in response to ground shaking. Strong damaging earthquakes have occurred in the region along the EARS throughout historical time, an example being the Ms 7.4 event of December 1910. The most recent damaging earthquake is the Karonga earthquake in Malawi, which occurred on 19th December 2009 with a magnitude of 6.2 (Ms). The earthquake claimed four lives and destroyed over 5000 houses. In its effort to improve seismic hazard assessment in the region, the Eastern and Southern Africa Seismological Working Group (ESARSWG), under the sponsorship of the International Program on Physical Sciences (IPPS), carried out a study on active fault mapping in the region. The fieldwork employed geological and geophysical techniques. The geophysical techniques employed were ground magnetic, seismic refraction and resistivity surveys, but these are reported elsewhere. This article gives findings from the geological techniques, which aimed primarily at mapping active faults in the area in order to delineate the presence or absence of fault segments. Results show that the Karonga fault (here referred to as the fault that ruptured to the surface following the 6th–19th December 2009 earthquake events in the Karonga area) is about 9 km long and dominated by dip-slip faulting with a dextral and an insignificant sinistral component, and it is made up of 3-4 segments of length 2-3 km. The segments are characterized by both left and right steps. Although field mapping shows only 9 km of surface rupture, a maximum vertical offset of about 43 cm implies that the surface rupture was a little in excess of 14 km, which corresponds to Mw = 6.4. We recommend the use or integration of multidisciplinary techniques in order to better understand the fault history, mechanism and other behavior of the fault(s) for better urban planning in the area.
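The step from an inferred rupture length to a magnitude can be reproduced with a standard empirical scaling relation. The sketch below uses the Wells and Coppersmith (1994) all-slip-type regression for surface rupture length; treating this as the relation behind the abstract's numbers is my assumption, since the abstract does not name the regression it used.

```python
import math

def mw_from_rupture_length(length_km):
    """Moment magnitude estimated from surface rupture length (km)
    via the Wells & Coppersmith (1994) all-slip-type regression:
    Mw = 5.08 + 1.16 * log10(L)."""
    return 5.08 + 1.16 * math.log10(length_km)

# The mapped 9 km rupture vs. the ~14 km inferred from the 43 cm offset:
print(round(mw_from_rupture_length(9), 1))   # -> 6.2
print(round(mw_from_rupture_length(14), 1))  # -> 6.4
```

Notably, the mapped 9 km rupture scales to about the Ms 6.2 that was recorded, while the 14 km inferred from the vertical offset reproduces the Mw 6.4 quoted in the abstract.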
What are the differences between mass, weight, force and load? (FAQ - Mass & Density) Mass is a measure of the amount of material in an object, weight is the gravitational force acting on a body (although for trading purposes it is taken to mean the same as mass), force is a measure of the interaction between bodies and load usually means the force exerted on a surface or body. Mass is a measure of the amount of material in an object, being directly related to the number and type of atoms present in the object. Mass does not change with a body's position, movement or alteration of its shape unless material is added or removed. The unit of mass in the SI system is the kilogram (abbreviation kg) which is defined to be equal to the mass of the international prototype of the kilogram held at the Bureau International des Poids et Mesures (BIPM) near Paris. Mass can also be defined as the inertial resistance to acceleration. In the trading of goods, weight is taken to mean the same as mass, and is measured in kilograms. Scientifically, however, it is normal to state that the weight of a body is the gravitational force acting on it and hence it should be measured in newtons (abbreviation N), and that this force depends on the local acceleration due to gravity. To add to the confusion, a weight (or weightpiece) is a calibrated mass normally made from a dense metal. So, unfortunately, weight has three meanings and care should always be taken to appreciate which one is meant in a particular context. Force is a measure of the interaction between bodies. It takes a number of forms including short-range atomic forces, electromagnetic, and gravitational forces. Force is a vector quantity, with both direction and magnitude. Load is a term frequently used in engineering to mean the force exerted on a surface or body. Please note that the information will not be divulged to third parties, or used without your permission
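The mass/weight distinction can be made concrete with W = m·g: mass stays fixed while weight (in newtons) varies with the local acceleration due to gravity. A minimal sketch; the g values are standard reference figures, and the specific locations are illustrative.

```python
def weight_newtons(mass_kg, g=9.80665):
    """Gravitational force (N) on a body: W = m * g.
    Mass is invariant; weight depends on local g."""
    return mass_kg * g

m = 10.0  # kg; the same everywhere the object goes
print(weight_newtons(m))           # standard gravity: 98.0665 N
print(weight_newtons(m, g=9.78))   # near the equator: ~97.8 N
print(weight_newtons(m, g=9.83))   # near the poles:   ~98.3 N
print(weight_newtons(m, g=1.62))   # on the Moon:      ~16.2 N
```

The same 10 kg of material "weighs" noticeably less on the Moon, yet a trading scale calibrated in kilograms would be expected to report 10 kg in every case.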
Einstein and Beyond: Who Are NASA's Space Science Explorers? The scientist studying black holes in space. The teacher talking about the secrets of the cosmos. And the student asking if there is life away from Earth. All of these people are Space Science Explorers. They are all curious about our solar system and space. This is a story about a NASA Space Science Explorer. He's one of the greatest space science explorers of all time. And yet most of his discoveries came more than 50 years before the first satellite was launched into space. They came more than 60 years before humans would walk on the moon. And they came more than 70 years before liftoff of the first space shuttle. Image to right: A century ago, Albert Einstein began creating his theory of relativity -- the ideas we use to understand space, time and gravity. Credit: NASA The year 1905 will forever be regarded as Albert Einstein's "miracle year." It was the year a 26-year-old changed the way we view the universe. Einstein's theories about light, motion, gravity, mass and energy began a new era of science. They led to the big-bang theory of how the universe was born. And they led to concepts such as black holes and dark energy. One hundred years later, NASA and others are honoring Einstein. This yearlong celebration is known as the Einstein Centennial. The impact of his findings will surely last for centuries to come. Many current space science projects build on Einstein's famous work. NASA's "Beyond Einstein" research program is a good example. Scientists are using their minds and NASA instruments to answer three important questions: - What could have powered the big bang? - What is dark energy? - What happens at the edge of a black hole? Scientists think that the universe is almost 14 billion years old. There are different theories for how the universe began.
The big-bang theory says that it began when a tiny but dense mass of energy exploded. And it says that the universe has been expanding ever since. Einstein himself did not come up with the theory. But his ideas led scientists to propose it. Charles Bennett leads a NASA mission that has made a "baby picture" of the universe. This was done with the help of powerful telescopes in space. The picture shows what the universe looked like less than a billion years after the big bang would have taken place. Bennett and others are trying to figure out what could have caused the big bang. He says that today's researchers must think creatively, like Einstein did. Image to left: This "baby picture" of the universe shows small changes in temperature from more than 13 billion years ago. That's not long after the Big Bang would have taken place. Scientists captured this image using NASA's Wilkinson Microwave Anisotropy Probe during a sweeping 12-month observation of the entire sky. Credit: NASA "Einstein was a big believer in experiments and observations, both to guide and test theories," Bennett said. "He always kept a keen eye on things that didn't quite add up. He asked himself how he could resolve these problems with creative new ideas." What Is Dark Energy? New ideas are what Sean Carroll is all about. Carroll is a theorist. He dreams up ways that an unseen force could be causing the universe to expand faster and faster. That sounds like a challenging job. But imagine being able to predict this force without knowing the universe is expanding. That's what Einstein did. Einstein came up with mathematical formulas to describe the universe. The formulas only worked if the universe were growing or shrinking. At the time, though, he and others believed the universe was doing neither. So he added an extra term to the formulas. He called it a "cosmological constant." With that the formulas described a universe that did not change. 
Einstein soon dropped the term, however, when he found out later that the universe is expanding. He called the whole idea his "greatest blunder." Einstein's idea may not have been a blunder after all. Scientists have found that the universe isn't just expanding. It is expanding faster and faster, they say. So what is causing this? One way to explain this is "dark energy." This hidden force may be Einstein's cosmological constant. Carroll is trying to figure out what dark energy is and where it comes from. In doing so, he takes after Einstein. "[Einstein] learned as much as he could about what was already understood," Carroll said. "At the same time, he kept an open mind about new ways of doing things." What Happens at the Edge of a Black Hole? It certainly took an open mind for anyone to imagine black holes. Einstein himself did not believe in such things, even though they were predicted by his theories. Mitch Begelman studies how black holes form. He also studies how they affect galaxies. Black holes are areas in space where gravity is so strong that light is unable to escape. Begelman says he's looking forward to when NASA will launch two missions to study black holes. This will be in about 10 years. One mission will use lasers to help scientists learn how two black holes join into one large black hole. The other will measure radiation given off by matter just before it gets sucked into a black hole. Whether exploring black holes or other weird wonders, Begelman takes the same approach. His strategy is similar to Einstein's. "[Einstein] was able to take a hypothesis … and follow it to its logical conclusion. No matter how [strange] that proved to be," Begelman said. Begelman and other scientists mimic Einstein in other ways, too. They treat Einstein's theories the way he treated those that came before his. The scientists use the ideas to explain as much about our universe as possible. But they also know that they can change the theories as needed. 
Or they may invent completely new ones. Who knows? Maybe someday someone will come up with a new set of theories that even Einstein never thought of. It could even be you. See previous Space Science Explorers articles. Related sites: Inside Einstein's Universe; Laser Interferometer Space Antenna. Dan Stillman, Institute for Global Environmental Strategies
Some of the characteristics of the Abenaki culture include a patrilineal system and an agricultural economy that was supplemented by hunting and fishing. The Abenaki people lived in wigwams made from birchbark in the area that currently covers the states of Vermont, Maine and New Hampshire. They spoke the Abenaki-Penobscot language. The Abenaki people lived in fertile areas close to rivers, as they relied on agriculture. Their main diet consisted of beans, squash and corn. They supplemented their food by gathering wild foods, hunting and fishing. The Abenaki people did not have a central system of governance and lived in scattered bands that consisted of members of an extended family. The bands or villages were usually small, with an average of 100 people. They were a patrilineal society, and the bands occupied several hunting territories that the family inherited through the father. The bands would move near the river during the spring and summer months to make it easier to farm and fish. These bands were sometimes fortified if there was warfare in the region. Their wigwams were dome-shaped, though some people preferred oval-shaped longhouses. They wore different clothes depending on the season. During the summer, men wore breechcloths, leather leggings and moccasins. Women wore skirts or dresses made from deerskin, and leggings. In the winter, both men and women wore winter clothes made from buckskin. The Abenaki language belongs to the Algonquian family. The word "Abenaki" means "people of the dawn," and this name refers to their location in the East. Their language is close to extinction, as few young people learn or speak it.
FAA regulations require that an airliner engine be able to survive the impact of a 4-pound bird. The rules don't require that an engine continue to run normally after a 4-pound bird ingestion, just that it can be safely shut down without exploding or disintegrating. And the regulations don't address the problem of simultaneous multiple strikes when a plane encounters a large flock—which is what apparently happened to U.S. Airways Flight 1549. Moreover, birds like the Canada goose, whose population has grown in recent years, typically weigh 7 to 8 pounds; large swans can weigh as much as 25 pounds. An engine shutdown is not a major emergency in a modern jet airliner. If only one engine had been disabled by the bird strike, the plane should have been able to safely return for a landing at LaGuardia. So the early evidence suggests that both engines of the Airbus 320 were rendered useless by the collision. Although the first jet airliners, the 707 and DC-8, had four engines, recent trends in airliner design favor two-engine configurations, even for very large aircraft like the Boeing 777. The reason is simple economics: Two big engines are cheaper to buy and maintain than four smaller ones. The downside, however, is a lack of redundancy in a major bird strike. The trend toward very high-bypass ratio fanjet engines with large-diameter fans makes them more vulnerable to bird strikes as well.
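The gap between the 4-pound certification bird and a 7-to-8-pound goose matters because impact energy grows linearly with mass and with the square of speed. A rough sketch of that arithmetic; the closing speed is an illustrative assumption on my part, not an FAA figure.

```python
def impact_energy_joules(bird_lb, speed_m_s):
    """Kinetic energy of a bird relative to the aircraft: KE = 1/2 m v^2."""
    mass_kg = bird_lb * 0.45359237  # pounds to kilograms
    return 0.5 * mass_kg * speed_m_s ** 2

v = 80.0  # m/s, a plausible climb-out closing speed (assumption)
print(round(impact_energy_joules(4, v)))   # certification-size bird
print(round(impact_energy_joules(8, v)))   # typical Canada goose: double the energy
print(round(impact_energy_joules(25, v)))  # large swan
```

Doubling the bird's mass doubles the energy the fan must absorb, and a swan delivers more than six times the certification case at the same speed.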
Conformations and Cycloalkanes By James Ashenhurst Introduction to Cycloalkanes (1) Last updated: March 21st, 2019 Hydrocarbons Can Form Rings: Two Consequences In the first few weeks of an organic chemistry class, we’ve learned that: - Carbon can form up to four single bonds - Carbon with four single bonds adopts a tetrahedral geometry (ideal bond angle: 109.5°) - Compared to other atoms on the periodic table [O, N, S, Si for example], carbon forms very strong bonds with itself and can therefore form stable chains - Carbon-carbon single bonds can rotate freely, and the three-dimensional shapes that arise (“conformations”) can vary significantly in energy. Nothing particularly strange about that so far. How about this: - in addition to forming chains, carbon atoms can also form rings In this first post in this series on cycloalkanes, we’ll discuss two key consequences of the fact that carbon can form rings, and then move forward with further posts in that vein. 1. Each Ring Decreases The Hydrogen Count By Two One of the first consequences of the fact that carbon can form rings can be found by comparing the condensed molecular formulae of linear alkanes with cyclic alkanes. Notice how the formula of linear alkanes follows the pattern #H = 2n + 2 (where “n” is the number of carbons), whereas the formula for cycloalkanes follows the pattern #H = 2n. Just by forming a ring, the number of hydrogens decreases by two! By the way, every successive ring decreases the hydrogen count by two – the bicyclic molecules below each follow the pattern #H = 2n – 2, and the tricyclic molecule follows the pattern #H = 2n – 4. [Bonus Q – how many hydrogens would be in a tetracyclic molecule with 20 carbons (and no multiple bonds)?] Why does this matter? As you’ll see later, we’ll be able to use the fact that each ring decreases the hydrogen count of the molecule by 2 to help us deduce the structures of unknown compounds in some cases.
It can be a small clue, but an important one nonetheless. [A look ahead – Degree of Unsaturation] 2. A Key Consequence: Small Rings Cannot Be Turned Inside-Out There’s a second interesting observation with cycloalkanes that we’ll talk about in much greater detail next time, but it is important to get out of the way because it’s often overlooked. See how there’s that empty space in the middle of a cycloalkane ring? Many everyday household objects – belts, elastics, wristbands – can be easily turned inside out. Can we do the same with cycloalkanes? What happens when we try to turn them inside out? Because it’s much easier to show this rather than tell it, I made a quick video. The bottom line is that cycloalkanes of fewer than 8 carbons cannot be turned inside out without breaking carbon-carbon bonds. >99% of the rings that you’ll see in Org 1 / Org 2 will fall into this category. This has far-ranging consequences that we’ll talk about in the next post in this series. Not strange either… right? Makes sense? Take a second and see if you can imagine some consequences of that simple fact. How might it affect any of those bullet points we made above? It’s very difficult to imagine situations that may arise if you haven’t at least seen a glimpse of them yourself. I’m going to use this opportunity to make a gratuitous chess analogy. One important rule in chess is that if a pawn makes its way to the end of the board, it can be promoted to a piece of the player’s choice. 99% of the time, the best choice is a Queen, but there are (very rare) situations where it is optimal to promote to a different piece – a knight, for example (see below). [link] Cool, huh? An example like this one flows logically from the rules of the game, but you can’t really imagine situations like this one until you have a lot of board time under your belt. Similarly, there are consequences of the fact that carbon can form rings that are not yet readily apparent.
We’re going to start by exploring two of them today in this first post of a new series on cycloalkanes.
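The hydrogen-count patterns from section 1 can be collapsed into a single rule: #H = 2n + 2 − 2r, where n is the number of carbons and r the number of rings (assuming no multiple bonds). Here is a quick sketch of that rule as code; the function name is my own:

```python
def hydrogen_count(n_carbons: int, n_rings: int = 0) -> int:
    """Hydrogens in a saturated hydrocarbon: start from the linear-alkane
    formula 2n + 2, then subtract two hydrogens for each ring."""
    return 2 * n_carbons + 2 - 2 * n_rings

# Linear hexane (C6H14) vs. cyclohexane (C6H12): one ring costs two hydrogens
assert hydrogen_count(6, 0) == 14
assert hydrogen_count(6, 1) == 12

# The bonus question above: a tetracyclic molecule with 20 carbons
print(hydrogen_count(20, 4))  # → 34
```

Running the rule in reverse is the "small clue" mentioned above: given a molecular formula, the hydrogen deficit relative to 2n + 2 tells you how many rings (or multiple bonds) the structure must contain.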
What's the News: Spotted salamander embryos, a recent study found, have green algae living inside their cells. While scientists have long known that the two species are symbiotic, each helping the other to survive, the new findings show that the arrangement is, in the researchers' words, "more intimate than previously reported." In fact, it's the first such organism-within-cell partnership---known as endosymbiosis---ever observed in vertebrates. How the Heck: Spotting a cell within a cell isn't easy. The researchers used fluorescent techniques to spot the algae, since their chlorophyll glows under certain types of light, and RNA probes to measure whether the algae's genetic material---and therefore, the algae cells themselves---were still intact. Salamanders lay their eggs in ponds, also home to algae of the species Oophila amblystomatis, whose genus means "egg-loving." It may be that that's when the algae burrows into the cells of the salamander embryos. Alternatively, parents might pass the algae on to their offspring. However it happens, algae takes up residence throughout a salamander embryo early on, when different tissues are still differentiating. Later on, the algae is mostly in cells in the salamander's digestive tract. What's the Context: Carrying around an intracellular hitchhiker, or being stuck in someone else's cells, doesn't sound great, but earlier work suggests that each species benefits from the presence of the other. In order to develop normally, salamander embryos need oxygen, which the algae produces. The algae needs lots of nitrogen and a place to stay, and salamander cells meet both criteria. While this is the first time endosymbiosis has been found in vertebrates, it's been observed in lots of other living things, like the nitrogen-fixing bacteria that live on the roots of some plants. 
One prominent theory holds that this kind of arrangement led to the advent of eukaryotes and gave rise to mitochondria and chloroplasts, two self-contained, gene-carrying cellular components found in animal and plant cells, respectively. Image of chlorophyll-colored salamander embryos courtesy of Roger Hangarter
Poetry and Its Definition

What is poetry? Literary work in which special intensity is given to the expression of feelings and ideas by the use of distinctive style and rhythm.

Basic characteristics of poetry:
- quality of beauty
- intensity of emotion
- “poetry and fire are nicely balanced in the music”

Poetry, from the ancient Greek ποιεω (poieo), “I create,” is an art form in which human language is used for its aesthetic qualities in addition to, or instead of, its notional and semantic content. It consists largely of oral or literary works in which language is used in a manner that is felt by its user and audience to differ from ordinary prose. It may use condensed or compact form to transmit emotion or ideas to the reader’s or listener’s mind or ear; it may also use devices such as alliteration and repetition to achieve musical effects. Poems frequently rely for their effect on imagery, word association, and the musical qualities of the language used. The interactive layering of all these effects to generate meaning is what marks poetry. Because of its emphasis on linguistic form rather than on content alone, poetry is notably difficult to translate from one language into another: a possible exception to this might be the Hebrew Psalms, where the beauty is found more in the balance of ideas than in specific vocabulary. In most poetry, it is the connotations and the “baggage” that words carry (the weight of words) that are most important. These shades and classifications of meaning can be difficult to interpret and can cause different readers to “hear” a particular piece of poetry differently. While there are reasonable interpretations, there can never be a definitive interpretation.
All coastal cities are vulnerable to climate change. For millennia, life on the coast was preferred because of the abundance of food, ease of transportation, and potential for defense against adversaries. Today at least 10 percent of the world’s population lives in low-lying coastal areas. What used to be an asset is increasingly a liability. The rapid expansion of coastal cities has broken down natural barriers, destroyed resources and deteriorated water quality. As a result, swelling coastal communities are exposing more and more people to hurricanes, storms, floods, landslides, and sea level rise. Some coastal cities are more at risk than others from sea level rise and other climate-related threats. Over the next few decades, over 570 low-lying coastal cities could experience sea level rise of at least 0.5 meters (1.6 feet). If this scenario happens, more than 800 million people could be at risk and the total economic cost could exceed $1 trillion. While Asian and African cities are particularly exposed, Rio de Janeiro is one of the most endangered cities in Latin America. Climatologists and the city’s planners believe that the built-up area around the city is exposed to sea level rise, flooding, increased rainfall, and heat islands, which could make large parts of it virtually uninhabitable. One reason coastal cities are so exposed to climate-related threats is a combination of poor planning and rapid urbanization. Rising water levels and the increasing frequency and severity of flooding and extreme heat are only part of the problem. Another reason is that many coastal cities are built directly on coastal plains, often near estuaries and lagoons. People, including the poorest residents, are often forced to live on precarious land, including drained wetlands and swamps – or right in the lagoons, as in the slums of Lagos, Nigeria. This not only exposes them to flooding but also to soil compaction, which causes infrastructure to sink.
The areas most susceptible to climate stress are often those with the highest population density and the largest share of residents at the bottom of the income ladder. Rio de Janeiro’s current climate vulnerability is partly a legacy of its historically chaotic and uneven urban development. After Rio ceded the title of national capital to Brasília in 1960, it began to spread uncontrollably. The metropolitan area’s population tripled within five decades. Population growth exacerbated the lack of affordable housing and contributed to the steady expansion of unplanned and improvised districts to the west and north. Informal settlements, or favelas, multiplied along waterways and on slopes. Today Rio de Janeiro has an estimated 6.7 million people. It is the second largest city in the country by the size of its economy, but it ranks only 327th in GDP per capita and 71st in municipal competitiveness, and it is among the most unequal cities not only in Brazil but in the world. Rio is also notoriously violent: over 2,400 people were murdered in the metropolitan area in 2020, including around 1,000 people reportedly killed by police. In addition, an estimated 60 percent of the city is controlled by militias, while drug trafficking factions control dozens of poorer neighborhoods. With this in mind, it is not surprising that the city is struggling to mitigate and adapt to natural disasters and climate change. A combination of turbo-urbanization and disorganized urban planning contributed to the rapid depletion of natural forest cover. Instead of protecting the city with often cheaper nature-based solutions like reforestation and wetland restoration, state and city authorities have poured funds into cement, brick and steel. The reduction in tree cover, coastal erosion and the explosion of concrete have all helped raise average temperatures there by 0.05 degrees Celsius per year.
Average global temperatures are expected to rise by 2 degrees Celsius by 2050 in a “business as usual” scenario. Warming in Rio de Janeiro is likely to result in longer, heavier, more frequent and more deadly heat waves, particularly affecting older and poorer populations. Rising temperatures could also cause sea levels to rise 0.3 to 2.15 meters by 2100, potentially inundating much of Rio de Janeiro’s area, including residential and commercial properties, public parks, ports and power grids. There are already signs of what is to come. The state of Rio de Janeiro has recorded hundreds of natural disasters since the early 2000s. Today researchers estimate that at least 155,000 people live in over 1,300 high-risk areas prone to landslides and flooding. One of the most devastating events, a massive storm and string of landslides in 2011, killed over 800 people, left 30,000 homeless and afflicted tens of thousands more with water-borne diseases such as leptospirosis. The World Bank estimated the cost of the tragedy at over $2 billion. In the ten years since the disaster, however, too little has been invested in rebuilding damaged infrastructure, let alone in climate protection. In 2012, the city began construction of four underground reservoirs and a bypass tunnel to improve control of light to moderate flooding. However, these are not sufficient to counteract the imminent threats. Climate change threatens not only to impose enormous humanitarian costs on Rio de Janeiro, but also to disrupt its main sources of income. The state and city are heavily dependent on oil revenues. When oil prices fell between 2014 and 2016, the state declared a financial “calamity” just before the Summer Olympics were to be held. Oil and gas royalty revenues continued to decline through 2020. With the global divestment from the hydrocarbon industry, future economic planning requires alternative sources of income. The other major source of income for Rio de Janeiro is tourism.
However, it is not just violence that threatens tourism: rising seas and temperatures also undermine the city’s value proposition. Today, Rio lags behind other Brazilian cities like São Paulo, Belo Horizonte and Porto Alegre when it comes to attracting tourist dollars. Brazil as a whole has slipped to 32nd place, behind Belgium and Denmark. Still, there is enormous growth potential for Rio’s tourism industry, if only more emphasis were placed on nature-based solutions that can help reduce some of its climate-related risks. The first step in building climate resilience is to identify climate threats and develop strategies to mitigate and adapt to them. However, there are few current scientific studies that document the scope, extent and consequences of climate change in Rio de Janeiro. For most Brazilian cities, there are few publicly available studies on sea level rise, coastal erosion, or heat islands. Yet over 60 percent of the Brazilian population lives in low-lying coastal cities: Belém, Florianópolis, Fortaleza, Paranaguá, Salvador, Recife and Vitória are particularly at risk. One Brazilian city that has taken measures to adapt to climate change is Santos, home to Latin America’s busiest seaport. Santos processes over a hundred million tons of cargo annually, corresponding to around 27 percent of Brazil’s trade balance. After documenting a sustained rise in sea levels, city authorities introduced tax deductions for investments in alternative energy and encouraged green roofs, reforestation, natural barriers, drainage channels and pumping stations. Curitiba is another Brazilian city recognized worldwide for innovations in climate protection. In the 1980s, the city launched a bold strategy to protect green spaces, encourage recycling, and invest in waste management. The “Green Exchange” program exchanges recyclable items for food.
The city’s roughly 600 square feet of green space per inhabitant is about four times that of São Paulo and well above international standards. Today Curitiba is one of only two cities in Brazil with a climate adaptation plan, and it has been named the most sustainable city in Latin America. These types of innovative solutions could help tackle many of Rio de Janeiro’s competing crises of climate vulnerability, insecurity, inequality and economic decline. Expanding green spaces, cooling heat islands, curbing pollution and improving affordable housing can all help reduce inequality, reduce violence and increase economic opportunity. The alternative, coastal erosion, increasing flooding and scorching heat, could make parts of Rio de Janeiro uninhabitable and exacerbate the crises of an already financially troubled city. Brazilian cities can strengthen their defenses against climate change by creating updated plans that focus on adaptation and mitigation, based on genuine public consultation. Most large cities have set up some kind of council to at least discuss climate protection measures. Still, many Brazilian cities are lagging behind: 11 of the country’s 27 state capitals have outdated master plans that are past the mandatory 10-year revision deadline. So far, only a handful of Brazilian cities are tracking greenhouse gas emissions, and only Belo Horizonte, Curitiba, São Paulo and Rio de Janeiro have developed adaptation or mitigation strategies. Coastal cities like Rio de Janeiro will have to experiment with different strategies to increase climate resilience. There are many ideas, including sponge cities that use a combination of repurposed built-up areas, rain gardens, ponds, and wetlands to store excess water, and ambitious environmental rehabilitation projects such as favela green roofs and green corridors. Nature-based solutions are not just an add-on. They are the key to the city’s survival and to a pioneering role in sustainable economic renewal.
(1878–1968). The Austrian physicist Lise Meitner shared the Enrico Fermi award in 1966 with Otto Hahn and Fritz Strassmann for research leading to the discovery of nuclear fission. Her own primary work in physics dealt with the relation between beta and gamma rays. Meitner was born in Vienna on Nov. 7, 1878. She studied at the University of Vienna, where she received her doctorate in physics in 1907. She then went to Berlin to join chemist Otto Hahn in research on radioactivity. She studied with Max Planck and worked as his assistant. In 1913 Meitner became a member of the Kaiser Wilhelm Institute in Berlin (now the Max Planck Institute). In 1917 she became head of its physics section and codirector with Otto Hahn. They worked together for about 30 years and discovered and named protactinium. They also investigated the products of neutron bombardment of uranium. Because she was Jewish, Meitner fled Germany in 1938 to escape Nazi persecution. She went to Sweden, which remained neutral during World War II. Here, with her nephew Otto Frisch, she studied the physical characteristics of neutron-bombarded uranium and proposed the name fission for the process. Hahn and Strassmann, following the same line of research, noted that the bombardment produced much lighter elements. Later advances in the study of nuclear fission led to nuclear weapons and nuclear power. In 1960 Meitner retired to live in England. She died in Cambridge on Oct. 27, 1968.
Many fishes are able to jump out of the water and launch themselves into the air. Such behavior has been connected with prey capture, migration and predator avoidance. We found that the jumping behavior of the guppy Poecilia reticulata is not associated with any of the above: the fish jump spontaneously without being triggered by overt sensory cues, the species is not migratory, and it does not attempt to capture aerial food items. Here, we use high-speed video imaging to analyze the kinematics of the jumping behavior of P. reticulata. Fish jump from a still position by slowly backing up using their pectoral fins, followed by strong body thrusts that launch them into the air over several body lengths. The liftoff phase of the jump is fast, and fish continue with whole-body thrusts and tail beats even when out of the water. This behavior occurs whether fish are in a group or in isolation. Geography has had substantial effects on guppy evolution, with waterfalls reducing gene flow and constraining dispersal. We suggest that jumping has evolved in guppies as a behavioral phenotype for dispersal.
This information is provided by the National Institutes of Health (NIH) Genetic and Rare Diseases Information Center (GARD). A pineocytoma is a tumor of the pineal gland, a small organ in the brain that makes melatonin (a sleep-regulating hormone). Pineocytomas most often occur in adults as a solid mass, although they may appear to have fluid-filled (cystic) spaces on images of the brain. Signs and symptoms of pineocytomas include headaches, nausea, hydrocephalus, vision abnormalities, and Parinaud syndrome. Pineocytomas are usually slow-growing and rarely spread to other parts of the body. Treatment includes surgery to remove the pineocytoma; most of these tumors do not regrow (recur) after surgery. For more information, visit GARD.
Anthropogenic threats, such as collisions with man-made structures, vehicles, poisoning and predation by domestic pets, combine to kill billions of wildlife annually. Free-ranging domestic cats have been introduced globally and have contributed to multiple wildlife extinctions on islands. The magnitude of mortality they cause in mainland areas remains speculative, with large-scale estimates based on non-systematic analyses and little consideration of scientific data. Here we conduct a systematic review and quantitatively estimate mortality caused by cats in the United States. We estimate that free-ranging domestic cats kill 1.3–4.0 billion birds and 6.3–22.3 billion mammals annually. Un-owned cats, as opposed to owned pets, cause the majority of this mortality. Our findings suggest that free-ranging cats cause substantially greater wildlife mortality than previously thought and are likely the single greatest source of anthropogenic mortality for US birds and mammals. Scientifically sound conservation and policy intervention is needed to reduce this impact. Domestic cats (Felis catus) are predators that humans have introduced globally1,2 and that have been listed among the 100 worst non-native invasive species in the world3. Free-ranging cats on islands have caused or contributed to 33 (14%) of the modern bird, mammal and reptile extinctions recorded by the International Union for Conservation of Nature (IUCN) Red List4. Mounting evidence from three continents indicates that cats can also locally reduce mainland bird and mammal populations5,6,7 and cause a substantial proportion of total wildlife mortality8,9,10. Despite these harmful effects, policies for management of free-ranging cat populations and regulation of pet ownership behaviours are dictated by animal welfare issues rather than ecological impacts11. 
Projects to manage free-ranging cats, such as Trap-Neuter-Return (TNR) colonies, are potentially harmful to wildlife populations, but are implemented across the United States without widespread public knowledge, consideration of scientific evidence or the environmental review processes typically required for actions with harmful environmental consequences11,12. A major reason for the current non-scientific approach to management of free-ranging cats is that total mortality from cat predation is often argued to be negligible compared with other anthropogenic threats, such as collisions with man-made structures and habitat destruction. However, assessing the conservation importance of a mortality source requires identification of which species are being killed (for example, native versus non-native invasive species and rare versus common species) in addition to estimation of total numbers of fatalities. Estimates of annual US bird mortality from predation by all cats, including both owned and un-owned cats, are in the hundreds of millions13,14 (we define un-owned cats to include farm/barn cats, strays that are fed by humans but not granted access to habitations, cats in subsidized colonies and cats that are completely feral). This magnitude would place cats among the top sources of anthropogenic bird mortality; however, window and building collisions have been suggested to cause even greater mortality15,16,17. Existing estimates of mortality from cat predation are speculative and not based on scientific data13,14,15,16 or, at best, are based on extrapolation of results from a single study18. In addition, no large-scale mortality estimates exist for mammals, which form a substantial component of cat diets. 
We conducted a data-driven systematic review of studies that estimate predation rates of owned and un-owned cats, and estimated the magnitude of bird and mammal mortality caused by all cats across the contiguous United States (all states excluding Alaska and Hawaii). We estimate that free-ranging domestic cats kill 1.3–4.0 billion birds and 6.3–22.3 billion mammals annually, and that un-owned cats cause the majority of this mortality. This magnitude of mortality is far greater than previous estimates of cat predation on wildlife and may exceed all other sources of anthropogenic mortality of US birds and mammals.

The magnitude of bird mortality caused by cat predation

After excluding studies that did not meet a priori inclusion criteria designed to increase the accuracy of our analysis, we developed probability distributions of predation rates on birds and mammals. We combined predation rate distributions with literature-derived probability distributions for US cat population sizes, and we also accounted for the proportion of owned cats allowed outdoors, the proportion of owned and un-owned cats that hunt, and imperfect detection of owned cats’ prey items. We generated an estimated range of bird and mammal mortality caused by cat predation by incorporating the above distributions—including separate predation rate distributions for owned and un-owned cats—and running 10,000 calculation iterations. We augmented US predation data by incorporating predation rate estimates from other temperate regions (Supplementary Table S1). For birds, we generated three US mortality estimates based on predation data from studies in: (1) the United States, (2) the United States and Europe and (3) the United States, Europe, and other temperate regions (primarily Australia and New Zealand). Owing to a lack of US studies of un-owned cat predation on mammals, we estimated mammal mortality using data groupings 2 and 3.
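The estimation procedure described here can be sketched as a small Monte Carlo calculation. To be clear: the parameter ranges below are illustrative placeholders of my own (except the 30–80 million range for un-owned cats, which the paper cites); only the model structure — separate owned and un-owned terms, proportions outdoors and hunting, a detection correction, and 10,000 iterations — follows the text.

```python
import random
import statistics

random.seed(42)
N_ITER = 10_000  # the paper runs 10,000 calculation iterations

def draw_owned_mortality() -> float:
    """One draw of annual bird mortality from owned cats (millions of birds).
    All ranges below are illustrative placeholders, not the paper's fitted
    distributions."""
    population = random.uniform(70, 100)      # owned cats, millions (placeholder)
    frac_outdoor = random.uniform(0.4, 0.7)   # proportion allowed outdoors (placeholder)
    frac_hunting = random.uniform(0.5, 0.8)   # proportion that hunt (placeholder)
    predation = random.uniform(4, 18)         # birds per hunting cat per year (placeholder)
    detection = random.uniform(2.0, 3.3)      # correction for prey never seen by owners (placeholder)
    return population * frac_outdoor * frac_hunting * predation * detection

def draw_unowned_mortality() -> float:
    """One draw for un-owned cats: all are outdoors, with higher predation rates."""
    population = random.uniform(30, 80)       # un-owned cats, millions (range cited in the text)
    frac_hunting = random.uniform(0.8, 1.0)   # placeholder
    predation = random.uniform(20, 55)        # placeholder
    return population * frac_hunting * predation

totals = [draw_owned_mortality() + draw_unowned_mortality() for _ in range(N_ITER)]
totals_sorted = sorted(totals)

print(f"median: {statistics.median(totals):.0f} million birds/year")
print(f"2.5th–97.5th percentile: {totals_sorted[250]:.0f}–{totals_sorted[9750]:.0f} million")
```

Because the un-owned term combines a wide population range with much higher predation rates, draws like these reproduce the paper's qualitative finding that un-owned cats dominate the total, even though the absolute numbers here are only placeholders.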
We based all other probability distributions on US studies (distribution details in Table 1; data in Supplementary Table S2). The three estimates of bird mortality varied moderately, with a 19% difference among median estimates (Table 2). We focus interpretation on the estimate generated using US and European predation data because it is the lowest value. Furthermore, this estimate is more likely to be representative of the US than the estimate based on incorporation of data from Australia and New Zealand, where the wildlife fauna and climate are less similar to those of the United States. We estimate that cats in the contiguous United States annually kill between 1.3 and 4.0 billion birds (median=2.4 billion) (Fig. 1a), with ∼69% of this mortality caused by un-owned cats. The predation estimate for un-owned cats was higher primarily due to predation rates by this group averaging three times greater than rates for owned cats.

The magnitude of mammal mortality caused by cat predation

Our estimate of mammal mortality was robust to the choice of predation data, as evidenced by a 1.6% difference between the two median estimates (Table 2). We focus interpretation on the lower estimate, which was based on United States and European predation data and US values of other parameters. We estimate annual mammal mortality in the contiguous United States at between 6.3 and 22.3 billion (median=12.3 billion) (Fig. 1b), with 89% of this mortality caused by un-owned cats. The estimate that incorporated European data (but not data from Australia and New Zealand) may be slightly lower because wildlife across much of Europe were historically exposed to predation by a similarly sized wild cat (Felis silvestris) and, therefore, may be less naive to predation by domestic cats. However, it is unlikely that European wildlife have fully adapted to the unusually high densities of domestic cats in much of this continent9.
Factors explaining estimate uncertainty

For both birds and mammals, sensitivity analyses indicated that un-owned cat parameters explained the greatest variation in total mortality estimates (Fig. 2). Un-owned cat population size explained the greatest variation in mortality estimates (42% for birds and 51% for mammals), and the un-owned cat predation rate explained the second greatest variation (24% for birds and 40% for mammals). The only other parameters that explained >5% of variation in mortality estimates were the owned cat predation rate on birds (16%) and the correction factor for imperfect detection of owned cats’ prey items (8%). Our estimate of bird mortality far exceeds any previously estimated US figure for cats13,14,16, as well as estimates for any other direct source of anthropogenic mortality, including collisions with windows, buildings, communication towers, vehicles and pesticide poisoning13,15,16,17,18,19,20,21. Systematic reviews like ours, which include protocol formulation, a data search strategy, data inclusion criteria, data extraction and formal quantitative analyses22, are scarce for other anthropogenic mortality sources21. Increased rigour of mortality estimates should be a high priority and will allow increased comparability of mortality sources23. Nonetheless, no estimates of any other anthropogenic mortality source approach the value we calculated for cat predation, and our estimate is the first for cats to be based on rigorous data-driven methods. Notably, we excluded high local predation rates and used assumptions that led to minimum predation rate estimates for un-owned cats; therefore, actual numbers of birds killed may be even greater than our estimates. Free-roaming cats in the United States may also have a substantial impact on reptiles and amphibians. However, US studies of cat predation on these taxa are scarce.
To generate a first approximation of US predation rates on reptiles and amphibians, we used the same model of cat predation along with estimates of cat predation rates on these taxa from studies in Europe, Australia and New Zealand. We estimate that between 228 and 871 million reptiles (median=478 million) and between 86 and 320 million amphibians (median=173 million) could be killed by cats in the contiguous United States each year. Reptile and amphibian populations, and, therefore, cat predation rates, may differ between the regions where we gathered predation data for these taxa and the United States. Furthermore, reptiles and amphibians are unavailable as prey during winter across much of the United States. Additional research is needed to clarify impacts of cats on US herpetofauna, especially given numerous anthropogenic stressors that threaten their populations (for example, climate change, habitat loss and infectious diseases) and documented extinctions of reptiles and amphibians due to cat predation in other regions4,24. The exceptionally high estimate of mammal mortality from cat predation is supported by individual US studies that illustrate high annual predation rates by individual un-owned cats in excess of 200 mammals per year6,25,26,27,28 and the consistent finding that cats preferentially depredate mammals over other taxa (Supplementary Table S1). Even with a lower yearly predation rate of 100 mammals per cat, annual mortality would range from 3–8 billion mammals just for un-owned cats, based on a population estimate of between 30 and 80 million un-owned cats. This estimated level of mortality could exceed any other direct source of anthropogenic mortality for small mammals; however, we are unaware of studies that have systematically quantified direct anthropogenic mortality of small terrestrial mammals across large scales. Native species make up the majority of the birds preyed upon by cats. 
On average, only 33% of bird prey items identified to species were non-native species in 10 studies with 438 specimens of 58 species (Supplementary Table S3). For mammals, patterns of predation on native and non-native species are less clear and appear to vary by landscape type. In densely populated urban areas where native small mammals are less common, non-native species of rats and mice can make up a substantial component of mammalian prey29. However, studies of mammals in suburban and rural areas found that 75–100% of mammalian prey were native mice, shrews, voles, squirrels and rabbits26,30,31. Further research of mammals is needed to clarify patterns of predation by both owned and un-owned cats on native and non-native mammals, and across different landscape types. Sensitivity analyses indicate that additional research of un-owned cats will continue to improve precision of mortality estimates. Our finding that un-owned cat population size and predation rate explained the greatest variation in mortality estimates reflects the current lack of knowledge about un-owned cats. No precise estimate of the un-owned cat population exists for the United States because obtaining such an estimate is cost prohibitive, and feral un-owned cats are wary of humans and tend to be solitary outside of urban areas. In addition, human subsidized colonies of un-owned cats are maintained without widespread public knowledge. For example, in Washington DC alone there are >300 managed colonies of un-owned cats and an unknown number of unmanaged colonies. Population size estimates can be improved by incorporating observations of free-ranging cats into a wildlife mortality reporting database23. Context for the population impact of a mortality source depends on comparing mortality estimates to estimates of population abundance of individual species. However, continental-scale estimates of wildlife population abundance are uncertain due to spatio-temporal variation in numbers. 
For mammals, clarification of the population impacts of cat predation is hindered by the absence of nationwide population estimates. For all North American land birds, the group of species most susceptible to mainland cat predation (Supplementary Table S3), existing estimates range from 10–20 billion individuals in North America32. A lack of detail about relative proportions of different bird species killed by cats and spatio-temporal variation of these proportions makes it difficult to identify the species and populations that are most vulnerable. The magnitude of our mortality estimates suggests that cats are likely causing population declines for some species and in some regions. Threatened and endangered wildlife species on islands are most susceptible to the effects of cat predation, and this may also be true for vulnerable species in localized mainland areas5 because small numbers of fatalities could cause significant population declines. Threatened species in close proximity to cat colonies—including managed TNR colonies11,12—face an especially high level of risk; therefore, cat colonies in such locations comprise a wildlife management priority. Claims that TNR colonies are effective in reducing cat populations, and, therefore, wildlife mortality, are not supported by peer-reviewed scientific studies11. Our estimates should alert policy makers and the general public about the large magnitude of wildlife mortality caused by free-ranging cats. Structured decisions about actions to reduce wildlife mortality require a quantitative evidence base. We provide evidence of large-scale cat predation impacts based on systematic analysis of multiple data sources. Future specific management decisions, both in the United States and globally, must be further informed by fine scale research that allows analysis of population responses to cats and assessment of the success of particular management actions.
We are not suggesting that other anthropogenic threats that kill fewer individuals are biologically unimportant. Virtually nothing is known about the cumulative population impacts of multiple mortality sources. Furthermore, comparison of total mortality numbers has limited use for prioritization of risks and development of conservation objectives. Combining per species estimates of mortality with population size estimates will provide the greatest information about the risk of population-level impacts of cat predation. Although our results suggest that owned cats have relatively less impact than un-owned cats, owned cats still cause substantial wildlife mortality (Table 2); simple solutions to reduce mortality caused by pets, such as limiting or preventing outdoor access, should be pursued. Efforts to better quantify and minimize mortality from all anthropogenic threats are needed to increase sustainability of wildlife populations. The magnitude of wildlife mortality caused by cats that we report here far exceeds all prior estimates. Available evidence suggests that mortality from cat predation is likely to be substantial in all parts of the world where free-ranging cats occur. This mortality is of particular concern within the context of steadily increasing populations of owned cats, the potential for increasing populations of un-owned cats12, and an increasing abundance of direct and indirect mortality sources that threaten wildlife in the United States and globally. We searched JSTOR, Google Scholar, and the Web of Science database (formerly ISI Web of Science) within the Web of Knowledge search engine published by Thomson Reuters to identify studies that document cat predation on birds and mammals. We initially focused this search on US studies, but due to a limited sample of these studies, we expanded the search to include predation research from other temperate regions. 
We also searched for studies providing estimates of cat population sizes at the scale of the contiguous United States and for US studies that estimate the proportion of owned cats with outdoor access and the proportion of cats that hunt wildlife. The search terms we used included: ‘domestic cat’ in combination with ‘predation,’ ‘prey,’ ‘diet,’ ‘food item’ and ‘mortality’; all previous terms with ‘domestic cat’ replaced by ‘Felis catus,’ ‘feral,’ ‘stray,’ ‘farm,’ ‘free-ranging,’ and ‘pet’; ‘trap-neuter-return colony’; ‘TNR colony’; and ‘cat predation’ in combination with ‘wildlife,’ ‘bird,’ ‘mammal,’ and ‘rodent’. We checked reference lists of articles to identify additional relevant studies. Lead authors of three studies were also contacted to enquire whether they knew of ongoing or completed unpublished studies of cat predation in the United States.

Classification of cat ranging behaviour

We grouped studies based on the ranging behaviour of cats investigated. We defined owned cats to include owned cats in both rural and urban areas that spend at least some time indoors and are also granted outdoor access. We defined un-owned cats to include all un-owned cats that spend all of their time outdoors. The un-owned cat group includes semi-feral cats that are sometimes considered pets (for example, farm/barn cats and strays that are fed by humans but not granted access to habitations), cats in subsidized (including TNR) colonies, and cats that are completely feral (that is, completely independent and rarely interacting with humans). We did not classify cats by landscape type or whether they receive food from humans because the amount of time cats spend outdoors is a major determinant of predation rates33,34 and because predation is independent of whether cats are fed by humans6,34,35. 
Study inclusion criteria

Studies were only included if: (1) they clearly reported cat ranging behaviour (that is, a description of whether cats were owned or un-owned and whether they were outdoor cats or indoor-outdoor cats), and (2) the group of cats investigated fit exclusively into one of the two groups we defined above (that is, we excluded studies that lumped owned and un-owned cats in a single predation rate estimate). For some studies, we extracted a portion of data that met these criteria but excluded other data from cats with unknown ranging behaviour. We only included mainland and large island (New Zealand and United Kingdom) predation studies, because cat predation on small islands is often exceptionally high36,37 and focused on colony nesting seabirds38. We excluded studies from outside temperate regions and those with predation rate estimates based on fewer than 10 cats, <1 month of sampling, or on cats that were experimentally manipulated (for example, by fitting them with bells or behaviour altering bibs). We included studies that used cat owners’ records of prey returns, but we excluded those that asked owners to estimate past prey returns because such questionnaires may lead to bias in estimation of predation rates39. (For a list of all included and excluded studies, see Supplementary Table S1).

Data extraction and standardization of predation rates

Most studies report an estimate of cat predation rate (that is, daily, monthly or annual prey killed per cat) or present data that allowed us to calculate this rate. When studies only reported predation rate estimates for all wildlife combined, we calculated separate predation rates by extracting taxa-specific prey counts from tables or figures and multiplying the total predation rate by the proportion of prey items in each taxon. If taxa-specific counts were not provided, we directly contacted authors to obtain this information. 
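The inclusion screen described under "Study inclusion criteria" can be expressed as a simple predicate. This is a sketch with hypothetical field names, not the paper's actual data format:

```python
# Sketch of the study-inclusion screen described in the Methods.
# Field names are hypothetical illustrations, not from the paper's data files.

def include_study(study: dict) -> bool:
    """Return True if a candidate predation study meets the stated criteria."""
    return (
        study["ranging_behaviour_reported"]           # criterion 1: behaviour described
        and study["group"] in ("owned", "un-owned")   # criterion 2: no mixed groups
        and not study["small_island"]                 # mainland / large island only
        and study["temperate_region"]                 # temperate regions only
        and study["n_cats"] >= 10                     # at least 10 cats
        and study["sampling_months"] >= 1             # at least 1 month of sampling
        and not study["experimentally_manipulated"]   # e.g. bells, behaviour-altering bibs
        and not study["owner_recall_questionnaire"]   # recall-based estimates excluded
    )
```

A study failing any single criterion is excluded; the chained `and` mirrors the all-or-nothing screening described in the text.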
For studies that presented low, medium and high estimates or low and high estimates, we used the medium and average values, respectively. For studies that presented more than one predation estimate for cats with similar ranging behaviour (for example, owned cats in rural and urban areas), we calculated the average predation rate. Nearly all studies of un-owned cats report numbers or frequencies of occurrence of different taxa in stomachs and/or scats. For studies reporting numbers of prey items, we estimated annual predation rates by assuming one stomach or scat sample represented a cat’s average daily prey intake (for example, an average of one prey item per stomach or scat = 365 prey per cat per year). This assumption likely resulted in conservative estimates because cats generally digest prey within 12 h (ref. 28) and can produce two or more scats each day29. For studies reporting occurrence frequencies of prey items, we assumed this proportion represented a cat’s average daily prey intake (for example, a 10% bird occurrence rate = 0.1 bird per stomach or scat = 36.5 birds per cat per year). This assumption results in coarse predation rate estimates, but estimates from this approach are even more conservative than those from the first assumption because many stomachs and scats undoubtedly included more than one bird or mammal. Predation rate estimates from many studies were based on continuous year-round sampling or multiple sampling occasions covering all seasons. However, seasonal coverage of some studies was incomplete. To generate full-year predation rate estimates in these cases, we adjusted partial-year predation estimates according to the average proportion of prey taken in each month as determined from year-round studies reporting monthly data (birds and mammals8,33, birds only7,40). For partial-year estimates from the northern hemisphere, we offset monthly estimates from southern hemisphere studies by 6 months. 
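The rate conversions described above can be sketched as a few helper functions. This is a minimal illustration of the stated assumptions (one stomach or scat sample equals a day's prey intake, and partial-year rates are scaled by the sampled months' share of annual predation); the function names are hypothetical:

```python
# Sketch of the predation-rate standardization described in the Methods.
# Assumption from the text: one stomach/scat sample represents a cat's
# average daily prey intake, so per-sample counts or occurrence
# frequencies are multiplied by 365 to give annual rates.

def annual_rate_from_counts(prey_items: float, n_samples: int) -> float:
    """Annual predation rate from prey counts in stomachs/scats."""
    return prey_items / n_samples * 365.0

def annual_rate_from_occurrence(occurrence_freq: float) -> float:
    """Annual predation rate from an occurrence frequency (0.10 = 10%)."""
    return occurrence_freq * 365.0

def full_year_rate(partial_rate: float, months_sampled, monthly_proportions) -> float:
    """Scale a partial-year rate to a full year using the proportion of
    annual prey taken in the sampled months (proportions would come from
    the year-round reference studies)."""
    sampled_fraction = sum(monthly_proportions[m] for m in months_sampled)
    return partial_rate / sampled_fraction

# Worked example from the text: a 10% bird occurrence rate
# -> 0.1 bird per stomach or scat -> 36.5 birds per cat per year.
print(annual_rate_from_occurrence(0.10))  # 36.5
```

If, say, half of annual predation falls in the six sampled months, `full_year_rate` doubles the observed rate, which is the correction the text describes for incomplete seasonal coverage.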
The final annual predation rate estimates for all studies are presented in Supplementary Table S1. The year-round studies we used represent different geographical regions (for birds—England, Kansas (US), Australia and New Zealand; for mammals—England and Australia) with varying climates and slightly varying seasonal patterns of predation. For both birds and mammals, averaging across full-year studies resulted in higher proportions of predation in the spring and summer compared with fall and winter, an expected pattern for much of the United States. The reference studies we used, therefore, provide a reasonable baseline for correcting to full-year mortality estimates. This approach greatly improves upon the assumption that mortality is negligible during the period of the year not covered by sampling.

Quantification of annual mortality from cat predation

We estimated wildlife mortality in the contiguous United States by multiplying data-derived probability distributions of predation rates by distributions of estimated cat abundance, following41. Quantification was conducted separately for owned and un-owned cats and for birds and mammals. As there was a relatively small sample of US studies that estimated predation rates (n=14 and 10 for birds and mammals, respectively), we repeated calculations using predation rate distributions that were augmented with predation rates from Europe and all temperate zones. However, we only used studies from the contiguous United States to construct all other probability distributions (listed below). 
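The quantification just described, randomly drawing each parameter from its probability distribution and combining the draws, can be sketched as follows. The authors worked in R (rnorm, runif); this Python sketch uses illustrative placeholder distributions, not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000  # number of random draws, as in the study

# Illustrative placeholder distributions -- NOT the paper's fitted values.
npc = rng.uniform(74e6, 96e6, N)   # owned cats in the contiguous US
pod = rng.uniform(0.40, 0.70, N)   # proportion of owned cats with outdoor access
pph = rng.uniform(0.50, 0.80, N)   # proportion of outdoor owned cats that hunt
ppr = rng.normal(15.0, 3.0, N)     # annual predation rate per owned cat
cor = rng.uniform(2.0, 4.0, N)     # correction for prey not returned to owners
nfc = rng.uniform(30e6, 80e6, N)   # un-owned cats in the contiguous US
pfh = rng.uniform(0.80, 1.00, N)   # proportion of un-owned cats that hunt
fpr = rng.normal(30.0, 6.0, N)     # annual predation rate per un-owned cat

# Mortality model: owned-cat term plus un-owned-cat term.
mortality = npc * pod * pph * ppr * cor + nfc * pfh * fpr

# Report the median and the bounds bracketing the central 95% of values.
median = np.median(mortality)
lo, hi = np.percentile(mortality, [2.5, 97.5])
print(f"median {median:.3g}, central 95% range {lo:.3g} to {hi:.3g}")
```

The paper's sensitivity analysis then regresses these 10,000 mortality values on the 10,000 draws of each parameter and compares adjusted R² values to see which parameter's uncertainty drives the spread.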
We estimated mortality using the following model of cat predation:

mortality(owned) = npc × pod × pph × ppr × cor
mortality(un-owned) = nfc × pfh × fpr
total mortality = mortality(owned) + mortality(un-owned)

where npc is the number of owned cats in the contiguous United States, pod is the proportion of owned cats granted outdoor access, pph is the proportion of outdoor owned cats that hunt wildlife, ppr is the annual predation rate by owned cats, cor is a correction factor to account for owned cats not returning all prey to owners, nfc is the number of un-owned cats in the contiguous United States, pfh is the proportion of un-owned cats that hunt wildlife, and fpr is the annual predation rate by un-owned cats. From the probability distribution of each parameter (see Table 1 and Supplementary Methods for details about the specific probability distributions used), we randomly drew one value and used the above formulas to calculate mortality. Random draws were made using distribution functions in Programme R (rnorm and runif commands for normal and uniform distributions, respectively). We conducted 10,000 random draws to estimate a potential range of annual predation on each wildlife taxon. For all analyses, we report median mortality estimates and lower and upper estimates bracketing the central 95% of values. We used multiple linear regression analysis to assess how much variance in mortality estimates was explained by the probability distribution for each parameter. We treated total mortality estimates as the dependent variable (n=10,000) and we defined a predictor variable for each parameter that consisted of the 10,000 randomly drawn values. We used adjusted R2 values to interpret the percentage of variance explained by each parameter.

How to cite this article: Loss, S. R. et al. The impact of free-ranging domestic cats on wildlife of the United States. Nat. Commun. 4:1396 doi: 10.1038/ncomms2380 (2012).

Baker, P. J., Soulsbury, C. D., Iossa, G. & Harris, S. in Urban Carnivores (eds Gehrt, S. D., Riley, S. P. D. & Cypher, B. L.) 157–171 (Johns Hopkins University Press, 2010). Fitzgerald, B. J. 
in The Domestic Cat: The Biology of its Behaviour (eds Turner, D. C. & Bateson, P.) 123–150 (Cambridge University Press, 1990). Lowe, S., Browne, M. & Boudjelas, S. 100 of the World’s Worst Invasive Alien Species: a Selection from The Global Invasive Species Database (Invasive Species Specialist Group, International Union for Conservation of Nature, 2000). Medina, F. M. et al. A global review of the impacts of invasive cats on island endangered vertebrates. Global Change Biol. 17, 3503–3510 (2011). Crooks, K. R. & Soule, M. E. Mesopredator release and avifaunal extinctions in a fragmented system. Nature 400, 563–566 (1999). Hawkins, C. C., Grant, W. E. & Longnecker, M. T. in Proceedings of the 4th International Urban Wildlife Symposium (eds Shaw, W. W., Harris, L. K. & Vandruff, L.) 164–170 (University of Arizona, Tucson, AZ, 2004). van Heezik, Y., Smyth, A., Adams, A. & Gordon, J. Do domestic cats impose an unsustainable harvest on urban bird populations? Biol. Conserv. 143, 121–130 (2010). Churcher, P. B. & Lawton, J. H. Predation by domestic cats in an English village. J. Zool. London 212, 439–455 (1987). Baker, P. J., Molony, S., Stone, E., Cuthill, I. C. & Harris, S. Cats about town: is predation by free-ranging pet cats (Felis catus) likely to affect urban bird populations? Ibis 150 (Suppl. 1), 86–99 (2008). Balogh, A. L., Ryder, T. B. & Marra, P. P. Population demography of Gray Catbirds in the suburban matrix: sources, sinks, and domestic cats. J. Ornithol. 152, 717–726 (2011). Longcore, T., Rich, C. & Sullivan, L. M. Critical assessment of claims regarding management of feral cats by trap-neuter-return. Conserv. Biol. 23, 887–894 (2009). Lepczyk, C. A. et al. What conservation biologists can do to counter trap-neuter-return: response to Longcore et al. Conserv. Biol. 24, 627–629 (2010). Gill, F. B. Ornithology 2nd edn (W.H. Freeman, 1994). Dauphiné, N. & Cooper, R. J. 
Impacts of free-ranging domestic cats (Felis catus) on birds in the United States: a review of recent research with conservation and management recommendations. Proceedings of the Fourth International Partners in Flight Conference: Tundra to Tropics 205–219 (Partners in Flight, 2009). Banks, R. C. Human related mortality of birds in the United States. Special Scientific Report—Wildlife No. 215 (US Dept of the Interior, Fish and Wildlife Service, 1979). Erickson, W. P., Johnson, G. D. & Young, D. P. Jr. A summary and comparison of bird mortality from anthropogenic causes with an emphasis on collisions. Tech. Rep. PSW-GTR-191, 1029–1042 (US Dept of Agriculture, Forest Service, 2005). Klem, D. Jr. Avian mortality at windows: the second largest human source of bird mortality on earth. Proceedings of the Fourth International Partners in Flight Conference: Tundra to Tropics 244–251 (Partners in Flight, 2009). Coleman, J. S. & Temple, S. A. On the Prowl. Wisconsin Nat. Res. Mag. (1996). Pimentel, D., Greiner, A. & Bashore, T. Economic and environmental costs of pesticide use. Arch. Environ. Con. Tox. 21, 84–90 (1991). Manville, A. II. Towers, turbines, power lines, and buildings—steps being taken by the U.S. Fish and Wildlife Service to avoid or minimize take of migratory birds at these structures. Proceedings of the Fourth International Partners in Flight Conference: Tundra to Tropics 262–272 (Partners in Flight, 2009). Longcore, T. et al. An estimate of mortality at communication towers in the United States and Canada. PLoS ONE 7, e34025 (2012). Pullin, A. S. & Stewart, G. B. Guidelines for systematic review in conservation and environmental management. Conserv. Biol. 20, 1647–1656 (2006). Loss, S. R., Will, T. & Marra, P. P. Direct human-caused mortality of birds: improving quantification of magnitude and assessment of population impact. Front. Ecol. Environ. 20, 357–364 (2012). Henderson, R. W. 
Consequences of predator introductions and habitat destruction on amphibians and reptiles in the Post-Columbus West Indies. Caribb. J. Sci. 28, 1–10 (1992). Nilsson, N. N. The role of the domestic cat in relation to game birds in the Willamette Valley, Oregon. Thesis, Oregon State College (1940). Llewellyn, L. L. & Uhler, F. M. The foods of fur animals of the Patuxent Research Refuge, Maryland. Am. Midl. Nat. 48, 193–203 (1952). Parmalee, P. W. Food habits of the feral house cat in east-central Texas. J. Wildl. Manage. 17, 375–376 (1953). Hubbs, E. L. Food habits of feral house cats in the Sacramento Valley. Calif. Fish Game 37, 177–189 (1951). Jackson, W. B. Food habits of Baltimore, Maryland, cats in relation to rat populations. J. Mammal. 32, 458–461 (1951). Errington, P. L. Notes on food habits of southern Wisconsin house cats. J. Mammal. 17, 64–65 (1936). Kays, R. W. & DeWan, A. A. Ecological impact of inside/outside house cats around a suburban nature preserve. Anim. Conserv. 7, 273–283 (2004). Blancher, P. J. et al. Guide to the Partners in Flight Population Estimates Database, Version: North American Landbird Conservation Plan 2004. Tech. Series No. 5 (Partners in Flight, 2007). Barratt, D. G. Predation by house cats, Felis catus (L.), in Canberra, Australia. I: Prey composition and preference. Wildl. Res. 24, 263–277 (1997). Barratt, D. G. Predation by house cats, Felis catus (L.), in Canberra, Australia. II: Factors affecting the amount of prey caught and estimates of the impact on wildlife. Wildl. Res. 25, 475–487 (1998). Liberg, O. Food habits and prey impact by feral and house-based domestic cats in a rural area in southern Sweden. J. Mammal. 65, 424–432 (1984). Jones, E. Ecology of the feral cat, Felis catus (L.), (Carnivora: Felidae) on Macquarie Island. Aust. Wildl. Res. 4, 249–262 (1977). Bramley, G. N. 
A small predator removal experiment to protect North Island Weka (Gallirallus australis greyi) and the case for single-subject approaches in determining agents of decline. NZ J. Ecol. 20, 37–43 (1996). Bonnaud, E. et al. The diet of feral cats on islands: a review and a call for more studies. Biol. Conserv. 13, 581–603 (2011). Tschanz, B., Hegglin, D., Gloor, S. & Bontadina, F. Hunters and non-hunters: skewed predation rate by domestic cats in a rural village. Eur. J. Wildl. Res. 57, 597–602 (2011). Fiore, C. A. Domestic cat (Felis catus) predation of birds in an urban environment. Thesis, Wichita State University (2000). Blancher, P. J. Estimated number of birds killed by house cats (Felis catus) in Canada. Avian Conservation and Ecology (in press).

S.R.L. was supported by a postdoctoral fellowship funded by the US Fish and Wildlife Service through the Smithsonian Conservation Biology Institute’s Postdoctoral Fellowship programme. P. Blancher provided insight for development of the model of cat predation magnitude, and R. Kays, C. Lepczyk and Y. van Heezik provided raw data from their publications. C. Machtans facilitated data sharing, and participants in the 2011 Society of Canadian Ornithologists’ anthropogenic mortality of birds symposium provided context and perspectives. C. Lepczyk and P. Blancher provided comments on the manuscript. The findings and conclusions in this article are those of the authors and do not necessarily represent the views of the Smithsonian or US Fish and Wildlife Service. All data used for this analysis are available in the Supplementary Materials. The authors declare no competing financial interests.

Cite this article: Loss, S., Will, T. & Marra, P. The impact of free-ranging domestic cats on wildlife of the United States. Nat. Commun. 4, 1396 (2013). 
https://doi.org/10.1038/ncomms2380
The end of the last Ice Age marked the beginning of the Holocene era. We are still in this geological epoch. In terms of human evolution, this is the Mesolithic period, also known as the Middle Stone Age. This is a period of modern human hunter-gatherers, using quite complex flint and stone tools. The earliest definite evidence of human activity in Cambridgeshire dates from this period. Extensive flint scatters have been found along the river valleys and fen edge, showing the importance of water, both for sustenance and probably transport. Mesolithic flint scatters have been found in Caldecote. The transition from the Mesolithic to the Neolithic is marked by the shift from hunter-gatherers, probably nomadic with seasonal settlement, to a more agrarian way of life. The Neolithic sees the first permanent homes. Neolithic people cleared the land by cutting down the forest to create land for farming. This is the period when animals were first domesticated for farming purposes, and the first crops grown rather than picking wild seed. Cambridgeshire has some extensive Neolithic sites with evidence of surviving field boundaries on the fen islands near Chatteris. Neolithic houses, where known, are rectangular, presumably holding families and livestock.
One of the last bird species in the United States to be discovered and described, the Rufous-winged Sparrow is an uncommon resident of local distribution in the Sonoran Desert region from south-central Arizona to northern Sinaloa, Mexico. The first specimens of the species were taken by C. E. Bendire on 10 June 1872, near Fort Lowell (i.e., Tucson), Arizona (Coues 1873b, Bendire 1882b). Between 1872 and 1886, the numbers of this species near Fort Lowell declined, and one last specimen was taken 7 February 1886 (Phillips et al. 1964a) before the species seemingly disappeared from Arizona until 1915. In 1932, however, specimens were taken near Sells, Arizona (Moore 1932), and in 1936 a population was rediscovered near Tucson (Phillips et al. 1964a). The sparrow's preferred habitat of thornbush and mixed bunchgrass is limited, and grazing appears to have diminished its numbers and distribution. The plaintive whistled songs of the Rufous-winged Sparrow are distinctive and may be heard year-round, particularly during the breeding season. The nests of this species are easily found; they are usually placed conspicuously in a spiny tree or shrub, unlike the nests of other North American sparrows, which are hidden in grasses or low in shrubs. Of all North American birds, the Rufous-winged Sparrow may depend the most on rainfall as a stimulus for nesting. It typically nests after summer rains have begun, often building a nest and laying its first egg within 5 or 6 days after the first rain. Incubation lasts 11 days, and young fledge in only 8 or 9 days. In Arizona, in years of unusually heavy winter rainfall, pairs may also nest in spring. Territories are normally maintained throughout the year, and pairs remain mated for life. Most natural history studies of the Rufous-winged Sparrow are from Arizona; little has been reported from Sonora, Mexico, the species' main center of distribution. Key studies of breeding biology are reported in Phillips et al. 
1964a, Phillips 1968d, Austin and Ricklefs 1977, and Wolf 1977. Recent studies have focused on the breeding physiology of this dryland bird, especially its response to stress, to rains, and to the songs of conspecifics (e.g., Deviche et al. 2006b, Deviche et al. 2014, Small et al. 2007, Small et al. 2008a, Small et al. 2008b, Small et al. 2008c).
Deep, well-worn bedrock mortars and metates in Doane Valley are reminders of those many centuries when native Americans maintained seasonal villages, hunted game and gathered acorns and other seed crops here on the slopes of Palomar Mountain. The village sites and ten smaller, temporary camps or gathering stations have been identified within the present-day park. At least two separate groups of native Americans are known to have established exclusive territories on the mountain. The area around Boucher Lookout was called T’ai. Iron Springs near Bailey Lodge was called Paisvi. Other areas were known as Chakuli, Malava and Ashachakwo. These areas were used during the summer and early autumn for hunting and gathering acorns, pine seeds, elderberries and grass seeds. The main native village at the foot of the mountain was called Pauma. Sturdy conical houses known as wikiups or kecha kechumat were made of pine poles covered with bark. Semi-subterranean “sweat houses” were centrally located in the village and used for purification and curing rituals. Handcrafted products included clay jars, woven baskets, throwing sticks, nets for fishing or carrying, bows and arrows and a variety of utensils for cooking and eating. The indigenous people called this mountainous area Wavamai, but when the Spaniards arrived in the 19th century, they named it Palomar, or “place of the pigeons,” a reference to the thousands of bandtailed pigeons that nested in the area. In 1798 Mission San Luis Rey was established four miles upstream from the mouth of the San Luis Rey River, and the local native inhabitants began to be referred to as the Luiseños. Pines and firs from Palomar Mountain were used in its construction. An outpost, or assistencia, was established at Pala in 1816. Father Antonio Peyri, the Franciscan missionary at Mission San Luis Rey from 1798 to 1832, spent several weeks each year working with the tribes who lived in or near what is now Palomar Mountain State Park. 
He was persuasive and soon came to be greatly loved, but the mission way of life both here and elsewhere in California had some terrible effects on the Luiseños. The sudden and complete disruption of age-old living patterns, as well as the introduction of European diseases, quickly resulted in a severe decline in the population. The mission was closed down in 1834 when Governor Figueroa issued direct orders to “secularize” all of the California missions. Today many descendants of the mission period Luiseños live on nearby reservations and continue to follow the Catholic religion though they also maintain some of their earlier cultural and religious beliefs and practices. In 1846 the slopes of Palomar Mountain were included, at least theoretically, in the famous Warner Ranch. In 1851, however, the native inhabitants drove Warner off the land. For a time thereafter, cattle and horse thieves used the remote mountain meadows of Palomar to shelter their stolen animals until it was safe to take them across the border into Mexico. Nathan Harrison, a black slave who came to California during the gold rush, took up residence as a free man near the eastern edge of the present park in the 1860s. He grew hay and raised hogs in Doane Valley despite frequent trouble with bears and mountain lions. At the time of his death in 1920, he was said to be 101 years old. The old road from Pauma Valley is named in his honor. George Edwin Doane came into the area in the early 1880s and built a shake-roof log cabin in the little clearing between Upper and Lower Doane Valley in what is now the Doane Valley Campground. Doane grew hay and raised cattle and hogs on his 640 acres of meadowland, and some of the apple trees he planted survive to this day. During the southern California land boom of the 1880s and afterward, many other people also settled on Palomar Mountain. Four apple orchards within the park date from this period, as do the remains of Scott’s cabin on Thunder Ridge. 
Palomar Mountain State Park was created during the early 1930s, when 1,683 acres of what has been called “the most attractive part of the mountain” was acquired for state park purposes. Matching funds for this acquisition were provided by San Diego County and a group of public-spirited citizens known as the Palomar Park Association. Many of the roads, trails and picnic facilities that are in use to this day were built during the 1930s by the Civilian Conservation Corps.
Violet Sabrewing (Campylopterus hemileucurus)
Spanish name: Colibrí Morado

This hummingbird is frequently found in montane forest understory and edge, ravines, areas around streams, as well as disturbed wooded areas and old second growth. It also often forages in banana plantations. From southern Mexico to western Panama, the Violet Sabrewing can be found at higher elevations, between 1,500 and 2,200 m, but it may descend to lower elevations after the breeding season. The Violet Sabrewing is one of the largest hummingbirds in the world, matched in size only by a few other species, and surpassed only by the Giant Hummingbird. Although the male and female have divergent plumage, they share the wide, long tail with bright white corner feathers, wide wing feathers, conspicuous size, and the long, curved bill reminiscent of a hermit, such as the Long-tailed Hermit. The female's bill is especially curved; she is dark green with a gray underside and a violet throat. The male's plumage grants the species its name: his head and most of his body is a deep, solid violet, with dark green on some wing feathers and the lower back, a blackish tail, and the aforementioned white tail corners. One reason this species is so easy to spot in the forest is that it flies relatively low and loudly, traplining many of the same flower species as hermits, since they have similar beak shapes. The Violet Sabrewing most prefers heliconia and banana flowers. It may also habitually visit certain flowers that open during the night for bats, such as those of Vriesia nephrolepis. The Sabrewing may come at dusk to rob the nectar of buds that have not yet opened, or at dawn to drink any residual nectar left by the bats. One might think that such a large hummingbird would use size to its advantage and act highly territorially towards other hummingbirds. However, compared to smaller hummingbirds, the Sabrewing is not very aggressive, and is rarely protective or defensive at flowers. 
It still dominates at hummingbird feeders, and often other species vacate when they see the Sabrewing coming, before there is any bill sword-fighting. While solitarily feeding, both sexes use sharp vocalizations that may not be as loud as a larger bird's song, but are clearly audible twitter or "chip" sounds. Sabrewings can also be heard approaching because their vibrating wings are comparatively large for a hummingbird but still move incredibly fast. Males may sing individually sometimes, but more often sing during the breeding season, when groups of up to 10 males (more often 4 to 6) join in a single area to sing and attract females. Such a formation is called a lek; males of this species gather in leks 2 to 4 m off the ground and sing from perches in the understory or edge habitats. After a female approaches the lek, chooses a male and copulates, she leaves to build a tiny nest. She will construct an awkward-looking but sturdy cup out of green moss, line it with other soft plant fibers, and bind the work with spider webbing. She usually does this on a low, skinny tree or bamboo branch over the edge of a ravine or water. Like other hummingbirds, the Violet Sabrewing survives almost entirely on nectar. This species most favors heliconia and banana flowers, but also visits ginger plants (Costus), the bromeliad Vriesia nephrolepis, Gesneriads (such as Columnea, Drymonia), and several Acanthaceae flowers. This large hummingbird is generally 15 cm long. Adult males weigh 11.5 g, and females 9.5 g. Amy Strieter, Wildlife Writer
The Northern Hemisphere Jet Stream can be seen crossing Cape Breton Island in Eastern Canada. A NASA photo.

Atmospheric researchers have developed a climate model that can accurately depict the frequently observed winding course of the jet stream, a major air current over the Northern Hemisphere. It demonstrates that the jet stream's wavelike course and subsequent extreme weather conditions like cold air outbreaks in Central Europe and North America are the direct results of climate change.
Two Martian rock samples collected by the Perseverance rover may contain evidence of ancient water bubbles, according to NASA. The rock samples were found to include salt minerals, which may reveal insights about the ancient climate and habitability of Mars billions of years ago — and could even preserve evidence of ancient life, if it existed on the red planet. Perseverance successfully collected its first two rock samples on September 6 and 8, nicknamed Montdenier and Montagnac, from the same rock called Rochette. The rover is currently exploring Jezero Crater, the site of an ancient lake more than 3 billion years ago. “Because these rocks were of such high scientific potential, we decided to acquire two samples here,” said Katie Stack Morgan, Perseverance deputy project scientist at NASA’s Jet Propulsion Laboratory in Pasadena, California. The rocks within the crater could tell scientists about ancient volcanic activity in the area, as well as if water was present for long periods of time, or if it came and went as the climate fluctuated. These two rock samples show that groundwater was likely present for a long time in the area. “It looks like our first rocks reveal a potentially habitable sustained environment,” said Ken Farley, project scientist for the Perseverance mission at the California Institute of Technology, in a statement. “It’s a big deal that the water was there a long time.” The Rochette rock is basaltic in nature, meaning it was likely made by ancient lava flows. Crystalline minerals within rocks like this can help scientists obtain extremely accurate dating and tell when the rock was formed. The salt minerals within the rocks are the result of the rocks being altered over time. They could have formed when groundwater either changed the original minerals within the lava rock or when water evaporated and left the salts behind. 
While the groundwater may have been part of the lake that once filled Jezero Crater and its river delta, scientists can’t discount the fact that the water may have traveled through the rocks even after the lake dried up and disappeared. But the rocks give Perseverance’s science team hope; water was likely present long enough to create a habitable environment where ancient microbial life could have thrived. These two samples are the first of more than 30 that will be collected by the rover and eventually returned to Earth by multiple missions, called Mars Sample Return, by 2031. “What we’re planning to do is to launch a couple of missions,” said Meenakshi Wadhwa, Mars Sample Return principal scientist at JPL and Arizona State University. “One will be a sample retrieval lander which will actually pick up the samples and bring them into Mars orbit. Then there’s an orbiter, the Earth Return Orbiter, which will be capturing these orbiting samples, and then the return orbiter goes back to Earth.” Once returned to Earth, a portion of the samples will be investigated in a multitude of ways, while the rest will remain sealed so that future scientists with better technology can study them — much like the Apollo lunar samples. “These samples have high value for future laboratory analysis back on Earth,” said Mitch Schulte, mission program scientist at NASA headquarters, in a statement. “One day, we may be able to work out the sequence and timing of the environmental conditions that this rock’s minerals represent. This will help answer the big-picture science question of the history and stability of liquid water on Mars.” The more samples Perseverance collects from intriguing points across the crater and river delta, the more likely scientists will be able to piece together the Martian puzzle that answers the ultimate question: Did life ever exist on Mars? 
“One of the reasons we explore Mars is because it holds a rock record that’s been untouched for about three and a half to four billion years,” said Lori Glaze, director of NASA’s Planetary Science Division. The two samples are currently stored within titanium tubes on the rover and will eventually be dropped at a site where a future mission can retrieve them. The rover is gearing up for another drive to its next possible sample site called South Séítah, which is 656 feet (200 meters) away. This region, which was aerially scouted by the Ingenuity helicopter during its last two flights, is filled with sand dune-covered ridges, boulders and rock shards. Farley refers to some of them as “broken dinner plates.” While the first two samples from the Rochette rock probably represent some of the youngest rock on the crater floor, South Séítah will likely be a treasure trove of older rock layers that reveal more about the history of the crater and its lake. But we’ll have to wait. The beginning of October will create a communications blackout between Mars and Earth during the Mars solar conjunction, when the two planets are on opposite sides of the sun. Perseverance will probably begin its sampling exploration of South Séítah after this period, which will last for nearly two weeks, comes to an end. ™ & © 2021 Cable News Network, Inc., a WarnerMedia Company. All rights reserved.
In this lesson, delve into the world of graphic organizers and gain an understanding of their definition and varieties. You’ll also view examples of how they can be applied to foster a myriad of educational objectives. Purpose of Graphic Organizers Visual aids are everywhere today. Take, for example, the smart phone in your pocket or purse – it allows the user to organize applications in a user-friendly manner. Your style of organization depends on your usage. Similar to the organization of our apps, graphic organizers afford students the opportunity to transform information, ideas, and concepts in a visual way. Graphic organizers help students organize ideas, see relationships, and retain information. Visual representations can be used in all disciplines and are quite flexible in their application. How graphic organizers are used depends on the objective. For example, a CEO of a company might organize their smart phone much differently than a college freshman. They both might have similar applications, but their usage of these apps can be vastly different. Many organizers have more than one purpose. Much like our example of applications, these organizers can fall into more than one category depending on their usage. Additionally, graphic organizer options continually change with growing technology. Let’s look at some different examples of organizers. Examples of Graphic Organizers The first type of organizer is sequencing or flow charts. One example of a flow chart is a timeline. These types of charts allow students to organize information chronologically, linearly, or in a cyclical fashion. Another type of organizer is the compare and contrast organizer. These organizers highlight differences and similarities in objects, texts, characters, etc. Examples of compare and contrast organizers include Venn diagrams, matrices, and T notes.
In a Venn diagram, the similarities go in the sections that overlap, and the differences go in the sections that do not overlap. Note-taking is another way to organize. Note-taking organizers allow students to organize ideas graphically, highlighting important information in a user-friendly format. Examples of this type of organizer include Cornell notes, story maps, KWL charts, reading logs, and T charts. KWL charts consist of three columns: what I know, what I want to know, and what I learned. Story maps can take many forms; story maps allow students to record their reading journey. Another type of organizer is the single topic organizer. Single topic organizers allow students to elaborate on a single topic. The main purpose is organizing and generating ideas. Examples of single topic organizers include tree diagrams, clustering, and webbing. Finally, we have problem-solution organizers, which help students to view problems and solutions. Examples include the problem-solution organizer and the two-column problem solution. Problem-solution organizers differ from one another. Literary-based problem-solution organizers focus on the problem, the events of the story, and end with the resolution. More technical disciplines employ problem-solution formats that allow students to record problems, possible solutions, and outcomes. Graphic organizers are visual representations of information, and they have a wide variety of uses and benefits. They help students build reading skills because they can be used to help students identify text structure. Other benefits include their ability to reinforce concepts, organize ideas for clarity, and meet the needs of visual learners. When you are done, you should be able to: - State the purpose of graphic organizers - Name and describe the types of graphic organizers
The blue whale is the largest animal ever known on the planet. Here's what you need to know about the majestic marine mammal. Imagine in your mind a 10-story tall animal walking down the street and you probably start channeling images of Godzilla or King Kong. But if you imagine it as a marine mammal and put it on its side, swimming ... now you have a blue whale. Balaenoptera musculus, the blue whale, is the largest animal ever known on the planet, bigger than the dinosaurs and the monsters of the movies. Even at birth it is one of the largest animals in the world! The planet is covered with amazing and fascinating creatures, but the blue whale is among the most superlative. Consider the following. 1. How big is a blue whale? They are gigantic. Generally 80 to 100 feet (24 to 30 meters) long, the longest ever recorded was 108 feet. 2. How much does a blue whale weigh? These gentle giants weigh up to 200 tons, or about 441,000 pounds. 3. They have big hearts The heart of the blue whale is huge! It is approximately 5 feet long, 4 feet wide, and 5 feet high and weighs about 400 pounds; its beat can be detected from two miles away. 4. … and big tongues The tongue of a blue whale alone weighs as much as an elephant. 5. How big are blue whale babies? Blue whale calves are the largest babies on Earth, and at birth they are already among the largest adult animals. They weigh around 4,000 kg at birth, with a length of about 8 meters, and they gain 90 kg per day! Their growth rate is likely one of the fastest in the animal world, adding billions of cells in the 18 months from conception to weaning. 6. They're noisy… and make free long-distance calls Blue whales, in fact, are the loudest animals on the planet. A jet engine registers at 140 decibels; a blue whale's call reaches 188. Their repertoire of pulses, groans, and moans can be heard by others up to 1,000 miles (1,600 kilometers) away. 7. What do blue whales eat?
Blue whales feast on krill; their stomachs can hold 1,000 kg of the small crustaceans at a time. They require almost 4,000 kg of the little creatures a day, or about 40 million krill daily during the summer feeding season. 8. Are blue whales fast? They travel a lot, spending summers feeding in polar regions and making the long journey to the equator when winter comes. Although they have a cruising speed of 8 km/h, they can accelerate up to 32 km/h when necessary. 9. They have long lives Although not as old as the oldest trees on Earth, blue whales are among the longest-lived animals on the planet. Much like counting tree rings, scientists count layers of wax in the ears to determine the age of a specimen. The oldest whales discovered in this way are estimated to be around 100 years old, although the average lifespan is believed to be around 80 to 90 years. 10. They were once numerous Before whalers discovered the treasure of oil that a blue whale could provide, their numbers were vast. But with the advent of the 20th-century whaling fleets, almost all of them were killed before they received global protection in 1967. According to WWF, from 1904 to 1967, more than 350,000 were killed in the southern hemisphere. In 1931, during the height of whaling, 29,000 blue whales were killed in a single season. 11. Their future is unclear While commercial whaling is no longer a threat, the recovery has been slow and new threats affect blue whales, such as ship strikes and the impact of climate change. There is a population of around 2,000 blue whales off the California coast, but in total there are only 10,000 to 25,000 individuals left. The Red List of the World Conservation Union (IUCN) classifies them as “Endangered”. Hopefully in time, the most gigantic of gentle giants will once again roam the seas.
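The imperial and metric figures quoted above can be cross-checked with simple arithmetic. This snippet is an illustrative sanity check added here, not part of the original article; the conversion factors are standard, and the whale measurements come from the text:

```python
# Quick sanity check of the unit conversions quoted in the article.
FT_PER_M = 3.28084    # feet per meter
LB_PER_KG = 2.20462   # pounds per kilogram

length_ft_low = 24 * FT_PER_M    # 24 m is roughly 78.7 ft
length_ft_high = 30 * FT_PER_M   # 30 m is roughly 98.4 ft

# 200 metric tons = 200,000 kg, which is about 441,000 lb as the article says
mass_lb = 200_000 * LB_PER_KG

print(f"{length_ft_low:.1f}-{length_ft_high:.1f} ft, {mass_lb:,.0f} lb")
```

So the "80 to 100 feet" and "about 441,000 pounds" figures are consistent with the metric values given.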
Why silence? Aren’t we trying to fight against silence? A silent demonstration can be a peaceful way to bring urgent attention to an important issue. Silence as a method of organizing is much different than silence that is coerced or forced through oppressive bullying, harassment and intimidation. A silent demonstration is active, rather than passive, and causes people to pay attention. Silent demonstrations can: - Bring attention to an issue and encourage reflection on the issue; - Simulate how others are silenced; - Focus the attention on the issue or cause and not the protester; - Demonstrate that the demonstrators desire peaceful resolution; - Spark discussion and dialogue. Through your active silence on the Day of Silence you will send a message that the bullying and harassment faced by LGBT and ally youth affects you, your school and community. And remember, the Day of Silence is a moment to open the conversation on this issue. Follow up your participation with a Breaking the Silence event. You can plan a rally at your school, facilitate a workshop for students and teachers about LGBT issues, throw a party with your GSA, or host a discussion group with DOS participants. For more info on how to organize a Breaking the Silence event, check out the Day of Silence Organizing Manual.
Students will put the equations of an ellipse with center at (h, k) in standard form. They will identify the center, vertices, co-vertices, foci, length of the major axis, and length of the minor axis, and they will graph the ellipses. Before the Activity Students need to be familiar with TI 84+ APPS files. During the Activity Students will need to be introduced to the transformation of graphs in a general sense, specifically ellipses. Introduction to the Conics APP. After the Activity The PowerPoint document is an answer file for the equations of the ellipses. Graphs can be checked with the Conics APP file.
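For reference, the standard form in question (a standard fact about ellipses, not content taken from the activity file) is, for a horizontal major axis with a > b > 0:

```latex
\frac{(x-h)^2}{a^2} + \frac{(y-k)^2}{b^2} = 1
```

Here the center is (h, k), the vertices are (h ± a, k), the co-vertices are (h, k ± b), and the foci are (h ± c, k) with c² = a² − b²; the major and minor axes have lengths 2a and 2b. For a vertical major axis, the roles of a and b (and the positions of vertices and foci) are swapped.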
We often forget what lies behind a name, i.e. its etymology. We astronomers are particularly prone to forgetting that constellations have their own story, which is usually connected with Greco-Roman culture, the Renaissance, or the scientific and technical developments brought by the Enlightenment and the voyages of discovery to the south. Constellations are groups of stars aligned by chance, with no physical relation to each other: with a few exceptions, they are not at the same distance and are not of the same age. Their only specificities are their angular proximity when projected on the plane of the sky, and their brightness, which makes them stand out. However, their peculiar shapes have aided navigation and marked the change of seasons, as each constellation can be easily recognized and seen at different times of the year. The Greek poet Hesiod, who wrote “Works and Days” in the 8th century BCE, mentioned this often. For example, in Book 2 we can read: «But when the Pleiades and Hyades and strong Orion begin to set, then remember to plough in season: and so the completed year will fitly pass beneath the earth.» In fact, this is not the first reference to astronomy in Western classical literature. Both in “The Iliad” and “The Odyssey” Homer describes or assumes that the Earth is flat; that the Sun, the Moon and the stars circle around our planet; and that they rise in the East and set in the ocean to the West. They probably return to their starting point via the North, even though this curious movement is explicitly portrayed only in later representations. Homer also mentions the morning and evening stars without realizing that they are the same planet, Venus. He also talks about the Pleiades and Hyades, two stellar associations, Orion, and the constellations of Ursa Major and Boötes. Boötes includes the brightest star in the Northern celestial hemisphere, Arcturus.
Ursa Major is remarkable because it never sets into the sea, which makes it very important for finding North and navigating. It was essential for Odysseus and his return to Ithaca. These two authors could have been contemporaries (if Homer was real and the actual author of these and other less well-known texts), and if we compare them we find similarities and differences: they both mention the same celestial objects, which could mean that the other planets had not been identified as such by the 8th century BCE (or at least by the archaic period, considering the uncertainty about dates). However, Hesiod is much more detailed and specific about how astronomical events mark dates and agricultural tasks; he clearly specifies the solstices and the lunar period, which he considers to be 30 days in length. The difference between these two authors stems from a basic fact: Homer wrote epic poems that praise the glories of warrior heroes, while Hesiod is more pragmatic and only talks about daily chores. This may be less appealing, but it is fundamental for civilization. Hesiod is also said to have written a poem called “Astronomy,” of which only a few fragments remain, though there have been questions about who its author was. In the 3rd century BCE, the poet Aratus wrote «Phaenomena» and mentioned how the Greeks and Phoenicians used different constellations to determine North: Ursa Major among the Greeks (Helice) and Ursa Minor among the Phoenicians (Cynosura in the poem). In the 6th century BCE, according to Callimachus, Thales of Miletus recommended that Ursa Minor be used for navigation, as it includes the true North Pole (or rather lies closer to it, since the pole's position varies over time due to the precession of the equinoxes). Shortly after, in “Catasterismi”, Eratosthenes – or probably a later author writing under his name – stated that both the Hyades and Pleiades are groups of sisters who experienced several adventures.
These are, in fact, stellar associations with physical connections between their members: they were formed from the same cloud of dust and gas, and their members could be called twins (the same age, but with different individual properties). The stars in the constellation of Orion, by contrast, which dominates the sky in the fall, are only related because they are projected in the same direction on the celestial sphere. Or at least we thought so until very recently. The story behind the protagonist of a constellation is always interesting… This is called catasterism, a word taken from Eratosthenes’ work: the transformation of a figure (in principle, a mythological one) into a star or constellation. Often it was due to the intervention of the Olympian gods. However, Eratosthenes, in a remarkable courtly feat, placed queen Berenice II, the wife of Ptolemy III Euergetes of Egypt, a Hellenistic pharaoh, in the sky. Specifically, he praised the beauty of her hair, which became the constellation of Coma Berenices. Or it may have been Conon of Samos who imagined that a lock of the queen’s hair was taken by a goddess as a sacrifice for the king’s return from the Third Syrian War. This kind of praise was repeated by later astronomers, perhaps in appreciation of patronage, or more probably in search of it. Perseus, among other constellations, is particularly conspicuous in the summer. This constellation is the reason why the meteor showers typical of August are called the Perseids. The story of this Greek hero is marked by fate and prophecy. Son of Danae and Zeus and grandson of Acrisius, king of Argos, he was prophesied to kill and succeed the king. One of the most famous paintings by Titian portrays his conception by the lord of Olympus (in the form of a shower of gold) and is perhaps one of the most sensual Renaissance paintings.
Perseus’ story is certainly sad, as it includes a series of fratricidal fights, unfortunate romantic relationships, conspiracies and jealousy, both among humans and gods. But it is also an epic story: Perseus fights hand-to-hand and kills monsters that were said to be indestructible, such as Medusa, who turned to stone all who looked into her eyes. Without doubt, this is an epic as remarkable as “The Odyssey”, with a protagonist who is just as intelligent but probably more daring. Perseus has no reason to envy Odysseus, a better-known figure. This was a true Greek tragedy but, interestingly enough, no related tragic play has survived; there is, however, a Golden Age comedia by Calderón de la Barca: «Andrómeda y Perseo». As for the constellation itself, it can be seen during fall nights in the Northern hemisphere, although, due to its declination, it can also be seen at other times of the year. We could list all the classical constellations visible from the Northern hemisphere and their associated catasterisms, but the above-mentioned examples should be enough. In «Poeticon astronomicon», Gaius Julius Hyginus (a freedman of the emperor Augustus, possibly from Hispania, who lived at the turn of the era) provided several examples of this. However, the author of this text and of another, «Fabulae», has not been clearly identified. It is one of the main, and sometimes the only, sources on mythology and astronomy. Just over a century later, Claudius Ptolemy described 48 constellations in the «Almagest». The voyages to the South, particularly the Portuguese voyages at the end of the 15th century, revealed a renewed celestial sphere, including new stars and constellations. An example of these new asterisms is the constellation of Sextans, created by Johannes Hevelius in his star catalogue «Prodromus Astronomiae», published posthumously by his wife Elisabeth Hevelius in 1690. But the celestial sphere is not only comprised of constellations.
Begun by Ferdinand Magellan, the first circumnavigation of the Earth left us with an unfair celestial legacy: the satellite galaxies of the Milky Way, the Small and Large Magellanic Clouds, took his name. The Portuguese explorer, in the service of Joanna and her son Charles, co-monarchs of Spain and the Indies, died during the expedition, and the voyage was completed by Juan Sebastián Elcano. It would only be fair to rename at least one of them! The star map was completed with the publication in 1763 of «Coelum Australe Stelliferum» by Nicolas Louis de Lacaille, who had died in 1762. The full exploration of the Southern sky involved adding fourteen new constellations, bringing the total to the 88 constellations officially accepted by the International Astronomical Union. The celestial sphere tells us several stories: about astronomy, but also about the evolution of philosophy, from myth to science; the exploration of our planet and the policies of commercial and imperial expansion; scientific progress; and inspiration for writers, poets and artists. In short, this is an intellectual map that has shaped the development of thought and, above all, culture. David Barrado Navascués CAB, INTA-CSIC, European Space Astronomy Centre (ESAC, Madrid)
Instruction for All Students™ begins with an overview of the initiatives that are in the news and influencing our thinking as we implement instructional programs that lead to the achievement of high standards by all students. An important goal of the workshop series is for participants to become comfortable with, and build skill at, using the standards-based planning process. Participants learn to: - Design, implement, and present for peer review a unit of study based on the Common Core State Standards - Identify ways to incorporate the shifts in literacy and math required by the Common Core - Go beyond planning how to teach facts to planning around concept-based essential understandings - Extend their thinking about assessment beyond an event at the end of learning to a continuum of assessment opportunities embedded throughout the instructional process - Use task analysis and pre-assessment to identify the learning experiences most likely to lead to student success - Engage in ongoing job-site practice of research-based approaches for engaging students in active, meaningful learning experiences - Refine and expand their repertoire of questioning strategies - Use student work to assess not only student learning but also the effectiveness of the instructional program What do schools and classrooms look like when they are organized around a commitment to the achievement of high standards by all students? - What makes planning for teaching and learning in a standards-based environment different from planning for teaching and learning in a non-standards-based environment? Why are these differences significant? - What are the key shifts required by the Common Core State Standards and how do teacher and learner behaviors change with the implementation of those shifts? - How can we Frame the Learning so that the what, why, and how of the learning are clear to learners? - What are the ways that we can engage our learners in active, meaningful learning?
- How can we structure learning experiences so that instruction, learning, and assessment are integrated? - How do we ensure balance and appropriateness in the design, selection, and use of a wide range of classroom assessment tools? - How do we design summative assessment tasks that are rigorous, valid, authentic, and engaging? - How do we incorporate best practice into lectures, discussions, demonstrations, and reading? - Instruction for All Students by Paula Rutherford - Instruction for All Students™ Participant’s Manual
ANCSA: What Political Process? On December 18, 1971 the Alaska Native Claims Settlement Act was formally enacted by the Congress of the United States to settle the land claims of Alaska Native people. The timing of this settlement was driven by the largest oil discovery in North American history. The goal was to get that oil to market as quickly as possible. Native claims to traditional lands stood between the oil companies and millions of barrels of 'black gold.' The United States government had never met with the Native peoples of Alaska to resolve their land and water claims. Alaska Native lands had simply been taken by various agencies and organizations recognized by the U.S. government. For example, the lands that were used for forts and public properties were conveyed by the 1867 purchase of Alaska from the Russians. Mining districts in Alaska were created under the authority of the Mining Act of 1874. The U.S. government took land in Alaska to build the railroad during World War I. Some of the lands claimed under these various governmental authorities were disputed by Alaska Natives - they were lands that had been traditionally used and occupied. The only way to resolve these disputed claims was through negotiation between Alaska Natives and the U.S. Congress. Since the early 1900s, Alaska Natives had formed various organizations to try to protect Alaska Native communities, peoples and rights. The Alaska Native Brotherhood (ANB) was one of the earliest, and it is still one of the most important in Alaska history. The ANB had originally been founded by Sheldon Jackson and Presbyterian missionaries to civilize and assimilate Alaska Natives. However, over time, as Natives became more active in the organization, the agenda changed. By World War I the ANB was working for citizenship and tribal rights for Native peoples. Some organizations were formed during territorial times and some followed statehood.
These Alaska Native organizations shared many common features - they were all formed voluntarily with the idea of promoting the well-being of Alaska Native peoples. They wanted to protect the continuing rights of Alaska Natives so they could determine for themselves the shape of the future. Membership in these voluntary organizations was informal, and representation varied widely from group to group. In 1966 some of these active Alaska Natives came together to form a statewide, voluntary, non-profit association. Their goal was to promote the common interests of Alaska Natives in a coordinated effort to get a fair settlement of Alaska Native land rights. This group was called the Alaska Federation of Natives Association. With the discovery of oil, the U.S. government became intensely interested in settling the land claims. In 1971, in a special convention of the Alaska Federation of Natives Association, five hundred and eleven (511) delegates approved the final draft of a Congressional legislative settlement. Fifty-six (56) delegates voted against the settlement. (Delegates from the North Slope made up a disproportionate number of those opposed.) It is also worth noting that President Nixon, even before the vote of the Alaska Federation of Natives, had already signed the legislation into law. The legitimacy of the vote, with five hundred and sixty-seven delegates representing all Native peoples, was questioned at the time and still is. There were no village meetings; the vote was never ratified or debated by Alaska Native communities or individuals. The discovery of oil had moved the resolution of land claims to 'fast forward'. Many would say that the political process was hijacked. Most would at least agree that the process was not perfect. Many argue that the past is past; the decision was made; everyone needs to move on.
Others disagree and argue that the ramifications of this imperfect process have grown over time and that it is more imperative than ever to revisit "the vote." The establishment of a village and regional corporate structure under ANCSA shifted the basis for political power in Alaska Native politics. The regional corporations were the 'winners'. They became the ones responsible for holding all the remaining Alaska Native lands, as well as the monies that had been received in payment for the land. The tribal governments received neither land nor money. Unfortunately, some believe these big decisions occurred 'behind the scenes'; the records of those decisions were never made public. Within a year's time (1972) the Alaska Federation of Natives Association had become the Alaska Federation of Natives Incorporated, and the interests of the regional corporations became part of that organization. With the phenomenal economic success of some of the regional corporations, the influence of their delegates expanded within AFN and in Washington, DC. While AFN is set up to represent the interests of both the tribal governments and the corporations, it seems to many that AFN has become lopsided in its support for corporate interests. Some believe that a powerful alliance among Alaska's Congressional delegation, regional corporation leaders and AFN leaders has developed, and that their agenda is stacked to make sure they will continue to hold power. In the meantime the voices of tribal governments have been weakened. Tribal governments are the traditional means by which Alaska Native individuals participated in the political process. After ANCSA, Alaska Natives continued to hold tribal affiliation but also became shareholders in regional and village corporations. Some competition and tension has developed over time. For example, tribes and corporations sometimes compete for certain federal grant money.
Some tribes have requested that certain lands be permanently protected for their cultural value by being "retribalized", or returned to the tribes. Efforts to return lands to tribal governments have not been supported as amendments to ANCSA. The main argument against retribalization of Native land from Alaska's Congressional delegation is, "You voted for ANCSA in 1971." This argument implies that there was a system in Alaska at that time for Alaska Natives to accept or reject ANCSA. The facts are that most Alaska Natives did not vote in 1971, there was no system to select delegates, and some Natives were not even aware that there was a vote. Some of the regional corporations seem to want it both ways. They say that they represent the interests of the larger Alaska Native communities, and in many ways they do. But when a community challenges the corporation on an issue of interest to them that may conflict with the corporate mission, the corporation will probably respond that its responsibility to shareholders is to make as much profit as it can. It is important that tribal governments continue to serve as a balance to the corporations. We cannot rewrite history, nor deny Alaska Natives the right to be full participants in the political process. It may now be necessary to review the events of 1971 as a reminder that there was no referendum, and that as a consequence many Alaska Natives continue to feel alienated and without adequate representation. 1971 was a rush. Land, money, power - the political and economic structures of Alaska Native society were dramatically reshaped and redirected. Those five hundred and sixty-seven delegates were under enormous pressure. They made tremendous personal sacrifices to do difficult work and do the best job possible. They accomplished a tremendous amount. At the same time, some of the provisions and consequences of ANCSA that are now clear should be open for debate. Power is an integral part of politics.
So is the balance of power.
Svalbard is an archipelago covering an area of around 61,000 km² north of the Arctic Circle. The region is heavily glaciated, with more than 2,100 glaciers covering around 59% of its total area. In a process known as 'Arctic amplification', the Arctic is warming faster than any other area on Earth; as a result, there is growing interest in which meteorological parameter exerts the strongest control on glacier mass balance changes. By empirically analyzing 6 different meteorological parameters at 12 locations on Nordenskioldbreen, Etonbreen, Kongsvegen and Hansbreen, by season, over the 1957–2018 period, the aim of this research was to find out to what extent the climate of Svalbard has changed, how glacier mass balance is changing, and which meteorological parameter controls these variations the most. A significant increase in mean annual air temperature of 1.25–3.5°C was found when comparing the 2001–2018 period to the 1971–2000 reference era, with the largest anomalies, of up to +6.5°C, focused on northern Svalbard during winter. Associated with the significant warming was an increase in relative humidity during winter, a sign of decreasing sea ice and increasing lower-atmospheric air temperatures. This study is the first to assess precipitation anomalies by weather classification over the 2001–2017 period. As supported by previous research, cyclonic south-westerly winds were the most dominant weather classification during the 1957–2017 period. Changes in weather classification frequency were in the region of ±2 days per season when comparing the post-millennial to the reference era, with the most notable change being an increase of 3 days per season of anticyclonic easterlies during summer. Nevertheless, seasonal precipitation characteristics varied, with a slight increase in winter snow and a significant increase in winter rain observed in the post-millennial era.
Summers have become drier, with a decrease in both liquid and solid precipitation seen over the same periods. Similarly, precipitation characteristics by weather type have witnessed notable changes, with daily rain and snow anomalies of up to +1.75 mm d⁻¹ and −4 mm d⁻¹ shown when comparing the post-millennium to the reference era. Among these changes, mean wind speeds in both summer and winter have increased by between 11 and 32%, and are thought to have contributed to mass balance changes via snow redistribution. Contrary to other studies of this type, shortwave incoming radiation was not found to act as a key control on high-ablation months at any of the four study sites.
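The abstract's central computation, a post-millennial period mean compared against a 1971–2000 reference-period mean, can be sketched in a few lines of Python. The station series below is synthetic toy data, not the study's observations.

```python
# Minimal sketch of a period-mean anomaly calculation, assuming annual mean
# air temperatures as (year, degC) pairs. The data here are synthetic.
from statistics import mean

def period_mean(series, start, end):
    """Mean of annual values whose year falls within [start, end]."""
    return mean(t for year, t in series if start <= year <= end)

def anomaly(series, ref=(1971, 2000), recent=(2001, 2018)):
    """Recent-period mean minus reference-period mean, in degC."""
    return period_mean(series, *recent) - period_mean(series, *ref)

# Synthetic station: a steady +0.05 degC/year trend from a -8 degC baseline
station = [(y, -8.0 + 0.05 * (y - 1971)) for y in range(1971, 2019)]
print(round(anomaly(station), 2))  # prints 1.2
```

A steady warming trend of this size yields an anomaly at the low end of the 1.25–3.5°C range reported in the abstract.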
Posted Date: 10/18/2020 During word study, students are taught Greek and Latin roots. Root words carry most of the meaning, and by learning them, students can better understand the whole language! It is important that students understand that most longer words have a root plus a prefix and/or a suffix. Students spend time as a class dissecting words to better understand how to read multisyllabic words and what they mean!
Master of Science (MS) The health benefits associated with appropriate levels of physical activity are well documented, but a large percentage of the population is not sufficiently active to attain those health benefits. Children are also not as active as they should be, and their activity levels decline during adolescence. Given that childhood activity patterns are likely to persist into adulthood, it is important to investigate ways to encourage children and adolescents to be physically active. Since virtually all school students participate in physical education programs, one way to do that is to explore ways that physical education programs can motivate children to be physically active. This study examined adolescents' motivation in middle school physical education and in physically interactive video games within an expectancy-value model developed by Eccles and her colleagues (1983). One hundred and one eighth-grade physical education students completed questionnaires assessing their expectancy-related beliefs, subjective task values, and intention for future participation in both the domains of physical education and physically interactive video games. Participants' activity level was also assessed using the Godin and Shephard (1985) Leisure Time Exercise Questionnaire. Results indicated that expectancy-related beliefs and subjective task values are domain specific. Expectancy-related beliefs and task values are positively related, and both constructs are related to intention to participate in the future in both the domains of physical education and physically interactive video games. Expectancy-related beliefs, task values, and intentions across domains, however, were not related, supporting the hypothesis that physically interactive video games represent a distinct domain from traditional physical education activities.
Physical education was perceived as more important and more useful than physically interactive video games, but findings suggest that girls and less active students found physically interactive video games to be more interesting than traditional physical education activities. Taken together, the findings suggest that physically interactive video games could be a useful tool in physical education programs to increase physical activity levels for students who are at risk for low levels of physical activity. McGregor, Andrew James, "Adolescents' expectancy beliefs and task values for physically interactive video games" (2010). LSU Master's Theses. 3693. Solmon, Melinda A.
Volcanic eruptions are one of Earth's most dramatic and violent agents of change. Not only can powerful explosive eruptions drastically alter land and water for tens of kilometers around a volcano, but tiny liquid droplets of sulfuric acid erupted into the stratosphere can change our planet's climate temporarily. Eruptions often force people living near volcanoes to abandon their land and homes, sometimes forever. Those living farther away are likely to avoid complete destruction, but their cities and towns, crops, industrial plants, transportation systems, and electrical grids can still be damaged by tephra, ash, lahars, and flooding. Fortunately, volcanoes exhibit precursory unrest that, when detected and analyzed in time, allows eruptions to be anticipated and communities at risk to be forewarned. The warning time preceding volcanic events typically allows sufficient time for affected communities to implement response plans and mitigation measures. Hazard response and coordination plans are multi-agency efforts that define the responsibilities and actions to take in the event of a restless or active volcano. Scientists from the five regional volcano observatories of the USGS Volcano Hazards Program participate in developing these plans with state and local governments of at-risk areas. If volcanic unrest or an eruption occurs, scientists from the observatories will keep state and local officials informed of potential hazards so that coordination and response plans can be updated as needed.
Temporal range: Early Cretaceous, 126–125 Ma. I. bernissartensis mounted in a modern quadrupedal posture, Royal Belgian Institute of Natural Sciences, Brussels. Iguanodon (/ɪˈɡwɑːnədɒn/ i-GWAH-nə-don; meaning "iguana-tooth") is a genus of ornithopod dinosaur that existed roughly halfway between the first of the swift bipedal hypsilophodontids of the mid-Jurassic and the duck-billed dinosaurs of the late Cretaceous. While many species have been classified in the genus Iguanodon, dating from the late Jurassic Period to the late Cretaceous Period of Asia, Europe, and North America, research in the first decade of the 21st century suggests that there is only one well-substantiated species: I. bernissartensis, which lived from the late Barremian to the earliest Aptian ages (Early Cretaceous) in Belgium and possibly elsewhere in Europe, between about 126 and 125 million years ago. Iguanodon were large, bulky herbivores. Distinctive features include large thumb spikes, which were possibly used for defence against predators, combined with long prehensile fifth fingers able to forage for food. The genus was named in 1825 by English geologist Gideon Mantell, based on fossil specimens that are now assigned to different genera and species. Iguanodon was the second type of dinosaur formally named based on fossil specimens, after Megalosaurus. Together with Megalosaurus and Hylaeosaurus, it was one of the three genera originally used to define Dinosauria. The genus Iguanodon belongs to the larger group Iguanodontia, along with the duck-billed hadrosaurs. The taxonomy of this genus continues to be a topic of study as new species are named or long-standing ones reassigned to other genera.
Scientific understanding of Iguanodon has evolved over time as new information has been obtained from fossils. The numerous specimens of this genus, including nearly complete skeletons from two well-known bonebeds, have allowed researchers to make informed hypotheses regarding many aspects of the living animal, including feeding, movement, and social behaviour. As one of the first scientifically well-known dinosaurs, Iguanodon has occupied a small but notable place in the public's perception of dinosaurs, its artistic representation changing significantly in response to new interpretations of its remains. Iguanodon were bulky herbivores that could shift from bipedality to quadrupedality. The only well-supported species, I. bernissartensis, is estimated to have weighed about 3 tonnes (3.5 tons) on average, and measured about 10 metres long (33 ft) as an adult, with some specimens possibly as long as 13 metres (43 ft). These animals had large, tall but narrow skulls, with toothless beaks probably covered with keratin, and teeth like those of iguanas, but much larger and more closely packed. The arms of I. bernissartensis were long (up to 75% the length of the legs) and robust, with rather inflexible hands built so that the three central fingers could bear weight. The thumbs were conical spikes that stuck out away from the three main digits. In early restorations, the spike was placed on the animal's nose. Later fossils revealed the true nature of the thumb spikes, although their exact function is still debated. They could have been used for defense, or for foraging for food. The little finger was elongated and dextrous, and could have been used to manipulate objects. The phalangeal formula is 2-3-3-2-4, meaning that the innermost finger has two phalanges, the next has three, etc. The legs were powerful, but not built for running, and each foot had three toes.
The backbone and tail were supported and stiffened by ossified tendons, which were tendons that turned to bone during life (these rod-like bones are usually omitted from skeletal mounts and drawings). Iguanodon gives its name to the unranked clade Iguanodontia, a very populous group of ornithopods with many species known from the Middle Jurassic to the Late Cretaceous. Aside from Iguanodon, the best-known members of the clade include Dryosaurus, Camptosaurus, Ouranosaurus, and the duck-bills, or hadrosaurs. In older sources, Iguanodontidae was shown as a distinct family. This family traditionally has been something of a wastebasket taxon, including ornithopods that were neither hypsilophodontids nor hadrosaurids. In practice, animals like Callovosaurus, Camptosaurus, Craspedodon, Kangnasaurus, Mochlodon, Muttaburrasaurus, Ouranosaurus, and Probactrosaurus were usually assigned to this family. With the advent of cladistic analyses, Iguanodontidae as traditionally construed was shown to be paraphyletic, and these animals are recognised to fall at different points in relation to hadrosaurs on a cladogram, instead of in a single distinct clade. Essentially, the modern concept of Iguanodontidae currently includes only Iguanodon. Groups like Iguanodontoidea are still used as unranked clades in the scientific literature, though many traditional iguanodontids are now included in the superfamily Hadrosauroidea. Iguanodon lies between Camptosaurus and Ouranosaurus in cladograms, and is probably descended from a camptosaur-like animal. At one point, Jack Horner suggested, based mostly on skull features, that hadrosaurids actually formed two more distantly related groups, with Iguanodon on the line to the flat-headed hadrosaurines, and Ouranosaurus on the line to the crested lambeosaurines, but his proposal has been rejected. The cladogram below follows an analysis by Andrew McDonald, 2012.
One of the first details noted about Iguanodon was that it had the teeth of a herbivorous reptile, although there has not always been consensus on how it ate. As Mantell noted, the remains he was working with were unlike any modern reptile, especially in the toothless, scoop-shaped form of the lower jaw symphysis, which he found best compared to that of the two-toed sloth and the extinct ground sloth Mylodon. He also suggested that Iguanodon had a prehensile tongue which could be used to gather food, like a giraffe. More complete remains have shown this to be an error; for example, the hyoid bones that supported the tongue are heavily built, implying a muscular, non-prehensile tongue used for moving food around in the mouth. The giraffe-tongue idea has also been incorrectly attributed to Dollo via a broken lower jaw. Iguanodon teeth are, as the name suggests, like those of an iguana, but larger. Unlike hadrosaurids, which had columns of replacement teeth, Iguanodon only had one replacement tooth at a time for each position. The upper jaw held up to 29 teeth per side, with none at the front of the jaw, and the lower jaw 25; the numbers differ because teeth in the lower jaw are broader than those in the upper. Because the tooth rows are deeply inset from the outside of the jaws, and because of other anatomical details, it is believed that, as with most other ornithischians, Iguanodon had some sort of cheek-like structure, muscular or non-muscular, to retain food in the mouth. The skull was structured in such a way that as it closed, the bones holding the teeth in the upper jaw would bow out. This would cause the lower surfaces of the upper jaw teeth to rub against the upper surface of the lower jaw's teeth, grinding anything caught in between and providing an action that is the rough equivalent of mammalian chewing. Because the teeth were always replaced, the animal could have used this mechanism throughout its life, and could eat tough plant material. 
Additionally, the front ends of the animal's jaws were toothless and tipped with bony nodes, both upper and lower, providing a rough margin that was likely covered and lengthened by a keratinous material to form a cropping beak for biting off twigs and shoots. Its food gathering would have been aided by its flexible little finger, which could have been used to manipulate objects, unlike the other fingers. Exactly what Iguanodon ate with its well-developed jaws is not known. The size of the larger species, such as I. bernissartensis, would have allowed them access to food from ground level to tree foliage at 4–5 metres high (13–16.5 ft). A diet of horsetails, cycads, and conifers was suggested by David Norman, although iguanodonts in general have been tied to the advance of angiosperm plants in the Cretaceous due to the dinosaurs' inferred low browsing habits. Angiosperm growth, according to this hypothesis, would have been encouraged by iguanodont feeding because gymnosperms would be removed, allowing more space for the weed-like early angiosperms to grow. The evidence is not conclusive, though. Whatever its exact diet, due to its size and abundance, Iguanodon is regarded as a dominant medium to large herbivore for its ecological communities. In England, this included the small predator Aristosuchus, larger predators Eotyrannus, Baryonyx, and Neovenator, low-feeding herbivores Hypsilophodon and Valdosaurus, fellow "iguanodontid" Mantellisaurus, the armoured herbivore Polacanthus, and sauropods like Pelorosaurus. Early fossil remains were fragmentary, which led to much speculation on the posture and nature of Iguanodon. Iguanodon was initially portrayed as a quadrupedal horn-nosed beast. However, as more bones were discovered, Mantell observed that the forelimbs were much smaller than the hindlimbs. His rival Owen was of the opinion it was a stumpy creature with four pillar-like legs.
The job of overseeing the first lifesize reconstruction of dinosaurs was initially offered to Mantell, who declined due to poor health, and Owen's vision subsequently formed the basis on which the sculptures took shape. Its bipedal nature was revealed with the discovery of the Bernissart skeletons. However, it was depicted in an upright posture, with the tail dragging along the ground, acting as the third leg of a tripod. During his re-examination of Iguanodon, David Norman was able to show that this posture was unlikely, because the long tail was stiffened with ossified tendons. To get the tripodal pose, the tail would literally have to be broken. Putting the animal in a horizontal posture makes many aspects of the arms and pectoral girdle more understandable. For example, the hand is relatively immobile, with the three central fingers grouped together, bearing hoof-like phalanges, and able to hyperextend. This would have allowed them to bear weight. The wrist is also relatively immobile, and the arms and shoulder bones robust. These features all suggest that the animal spent time on all fours. Furthermore, it appears that Iguanodon became more quadrupedal as it got older and heavier; juvenile I. bernissartensis have shorter arms than adults (60% of hindlimb length versus 70% for adults). When walking as a quadruped, the animal's hands would have been held so that the palms faced each other, as shown by iguanodontian trackways and the anatomy of this genus's arms and hands. The three-toed pes (foot) of Iguanodon was relatively long, and when walking, both the hand and the foot would have been used in a digitigrade fashion (walking on the fingers and toes). The maximum speed of Iguanodon has been estimated at 24 km/h (14.9 mph), which would have been as a biped; it would not have been able to gallop as a quadruped.
Large three-toed footprints are known in Early Cretaceous rocks of England, particularly Wealden beds on the Isle of Wight, and these trace fossils were originally difficult to interpret. Some authors associated them with dinosaurs early on. In 1846, E. Tagert went so far as to assign them to an ichnogenus he named Iguanodon, and Samuel Beckles noted in 1854 that they looked like bird tracks, but might have come from dinosaurs. The identity of the trackmakers was greatly clarified upon the discovery in 1857 of the hind leg of a young Iguanodon, with distinctly three-toed feet, showing that such dinosaurs could have made the tracks. Despite the lack of direct evidence, these tracks are often attributed to Iguanodon. A trackway in England shows what may be an Iguanodon moving on all fours, but the footprints are poor, making a direct connection difficult. Tracks assigned to the ichnogenus Iguanodon are known from locations ranging from places in Europe where the body fossil Iguanodon is known, to Spitsbergen, Svalbard, Norway. The thumb spike is one of the best-known features of Iguanodon. Although it was originally placed on the animal's nose by Mantell, the complete Bernissart specimens allowed Dollo to place it correctly on the hand, as a modified thumb. (This would not be the last time a dinosaur's modified thumb claw would be misinterpreted; Noasaurus, Baryonyx, and Megaraptor are examples since the 1980s where an enlarged thumb claw was first put on the foot, as in dromaeosaurids.) This thumb is typically interpreted as a close-quarter stiletto-like weapon against predators, although it could also have been used to break into seeds and fruits, or against other Iguanodon. One author has suggested that the spike was attached to a venom gland, but this has not been accepted, as the spike was not hollow, nor were there any grooves on the spike for conducting venom.
Although sometimes interpreted as the result of a single catastrophe, the Bernissart finds are now instead interpreted as recording multiple events. According to this interpretation, at least three occasions of mortality are recorded, and though numerous individuals would have died in a geologically short time span (?10–100 years), this does not necessarily mean these Iguanodon were herding animals. An argument against herding is that juvenile remains are very uncommon at this site, unlike modern cases with herd mortality. They more likely were the periodic victims of flash floods whose carcasses accumulated in a lake or marshy setting. The Nehden find, however, with its greater span of individual ages, more even mix of Dollodon or Mantellisaurus to Iguanodon bernissartensis, and confined geographic nature, may record mortality of herding animals migrating through rivers. Unlike other purported herding dinosaurs (especially hadrosaurs and ceratopsids), there is no evidence that Iguanodon was sexually dimorphic, with one gender appreciably different from the other. At one time, it was suggested that the Bernissart I. "mantelli", or I. atherfieldensis (Dollodon and Mantellisaurus, respectively) represented a gender, possibly female, of the larger and more robust, possibly male, I. bernissartensis. However, this is not supported today. Evidence of a fractured hip bone was found in a specimen of Iguanodon, which had an injury to its ischium. Two other individuals were observed with signs of osteoarthritis, as evidenced by bone overgrowths, called osteophytes, in their anklebones. The discovery of Iguanodon has long been accompanied by a popular legend. The story goes that Gideon Mantell's wife, Mary Ann, discovered the first teeth of an Iguanodon in the strata of Tilgate Forest in Whitemans Green, Cuckfield, Sussex, England, in 1822 while her husband was visiting a patient. However, there is no evidence that Mantell took his wife with him while seeing patients.
Furthermore, he admitted in 1851 that he himself had found the teeth. Not everyone agrees that the story is false, though. It is known from his notebooks that Mantell first acquired large fossil bones from the quarry at Whitemans Green in 1820. Because theropod teeth, belonging to carnivores, were also found, he at first interpreted these bones, which he tried to combine into a partial skeleton, as those of a giant crocodile. In 1821 Mantell mentioned the find of herbivorous teeth and began to consider the possibility that a large herbivorous reptile was present in the strata. However, in his 1822 publication Fossils of the South Downs he did not yet dare to suggest a connection between the teeth and his very incomplete skeleton, presuming that his finds represented two large forms, one carnivorous ("an animal of the Lizard Tribe of enormous magnitude"), the other herbivorous. In May 1822 he first presented the herbivorous teeth to the Geological Society of London, but the members, among them William Buckland, dismissed them as fish teeth or the incisors of a rhinoceros from a Tertiary stratum. On 23 June 1823 Charles Lyell showed some to Georges Cuvier during a soiree in Paris, but the famous French naturalist at once dismissed them as those of a rhinoceros. Though the very next day Cuvier retracted, Lyell reported only the dismissal to Mantell, who became rather diffident about the issue. In 1824 Buckland described Megalosaurus and was on that occasion invited to visit Mantell's collection. Seeing the bones on 6 March, he agreed that these were of some giant saurian, though still denying it was a herbivore. Emboldened nevertheless, Mantell again sent some teeth to Cuvier, who answered on 22 June 1824 that he had determined that they were reptilian and quite possibly belonged to a giant herbivore.
In a new edition that year of his Recherches sur les Ossemens Fossiles, Cuvier admitted his earlier mistake, leading to an immediate acceptance of Mantell, and his new saurian, in scientific circles. Mantell tried to corroborate his theory further by finding a modern-day parallel among extant reptiles. In September 1824 he visited the Royal College of Surgeons but at first failed to find comparable teeth. However, assistant-curator Samuel Stutchbury recognised that they resembled those of an iguana he had recently prepared, albeit twenty times longer. Mantell did not describe his findings until 10 February 1825, when he presented a paper on the remains to the Royal Geological Society of London. In recognition of the resemblance of the teeth to those of the iguana, Mantell named his new genus Iguanodon or "iguana-tooth", from iguana and the Greek word ὀδών (odon, odontos or "tooth"). Based on isometric scaling, he estimated that the creature might have been up to 18 metres (60 ft) long, more than the 12 metres (40 ft) length of Megalosaurus. His initial idea for a name was Iguana-saurus ("Iguana lizard"), but his friend William Daniel Conybeare suggested that that name was more applicable to the iguana itself, so a better name would be Iguanoides ("Iguana-like") or Iguanodon. He neglected to add a specific name to form a proper binomial, so one was supplied in 1829 by Friedrich Holl: I. anglicum, which was later amended to I. anglicus. A more complete specimen of a similar animal was discovered in a quarry in Maidstone, Kent, in 1834 (lower Lower Greensand Formation), which Mantell soon acquired. He was led to identify it as an Iguanodon based on its distinctive teeth. The Maidstone slab was utilized in the first skeletal reconstructions and artistic renderings of Iguanodon, but due to its incompleteness, Mantell made some mistakes, the most famous of which was the placement of what he thought was a horn on the nose.
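Mantell's size estimate can be reproduced arithmetically: under isometric scaling, teeth twenty times longer than an iguana's imply an animal roughly twenty times an iguana's length. The 0.9 m iguana length used below is an assumed round figure for illustration, not a number from the source.

```python
# Hedged back-of-the-envelope version of Mantell's isometric scaling.
iguana_length_m = 0.9  # assumed typical iguana total length (not from source)
tooth_ratio = 20       # Stutchbury: teeth "twenty times longer" than an iguana's

estimated_length_m = iguana_length_m * tooth_ratio
print(estimated_length_m)  # prints 18.0, matching Mantell's ~18 m (60 ft) figure
```

The exercise also shows why the estimate was too high: body proportions do not scale isometrically with tooth size, and Iguanodon's teeth were relatively much larger than an iguana's.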
The discovery of much better specimens in later years revealed that the horn was actually a modified thumb. Still encased in rock, the Maidstone skeleton is currently displayed at the Natural History Museum in London. The borough of Maidstone commemorated this find by adding an Iguanodon as a supporter to their coat of arms in 1949. This specimen has become linked with the name I. mantelli, a species named in 1832 by Christian Erich Hermann von Meyer in place of I. anglicus, but it actually comes from a different formation than the original I. mantelli/I. anglicus material. The Maidstone specimen, also known as Gideon Mantell's "Mantel-piece", and formally labelled NHMUK 3741, was subsequently excluded from Iguanodon. It is classified as cf. Mantellisaurus by McDonald (2012); as cf. Mantellisaurus atherfieldensis by Norman (2012); and made the holotype of a separate species Mantellodon carpenteri by Paul (2012). At the same time, tension began to build between Mantell and Richard Owen, an ambitious scientist with much better funding and society connections in the turbulent worlds of Reform Act–era British politics and science. Owen, a firm creationist, opposed the early versions of evolutionary science ("transmutationism") then being debated and used the animals he would soon dub dinosaurs as a weapon in this conflict. With the paper describing Dinosauria, he scaled down dinosaurs from lengths of over 61 metres (200 ft), determined that they were not simply giant lizards, and put forward that they were advanced and mammal-like, characteristics given to them by God; according to the understanding of the time, they could not have been "transmuted" from reptiles to mammal-like creatures.
In 1849, a few years before his death in 1852, Mantell realised that iguanodonts were not heavy, pachyderm-like animals, as Owen was putting forward, but had slender forelimbs; however, his passing left him unable to participate in the creation of the Crystal Palace dinosaur sculptures, and so Owen's vision of the dinosaurs became that seen by the public for decades. With Benjamin Waterhouse Hawkins, Owen had nearly two dozen lifesize sculptures of various prehistoric animals built out of concrete sculpted over a steel and brick framework; two iguanodonts (based on the Mantellodon specimen), one standing and one resting on its belly, were included. Before the sculpture of the standing iguanodont was completed, a banquet for twenty was held inside it. The largest find of Iguanodon remains to that date occurred on 28 February 1878 in a coal mine at Bernissart in Belgium, at a depth of 322 m (1,056 ft), when two mineworkers, Jules Créteur and Alphonse Blanchard, accidentally hit on a skeleton that they initially took for petrified wood. With the encouragement of Alphonse Briart, supervisor of mines at nearby Morlanwelz, Louis de Pauw on 15 May 1878 started to excavate the skeletons, and in 1882 Louis Dollo reconstructed them. At least 38 Iguanodon individuals were uncovered, most of which were adults. Many of them went on public display beginning in July 1883 and are still on public view; nine are displayed as standing mounts, and nineteen more are still in the museum's basement. The exhibit makes an impressive display in the Royal Belgian Institute of Natural Sciences, in Brussels. A replica of one of these is on display at the Oxford University Museum of Natural History and at the Sedgwick Museum in Cambridge. Most of the remains were referred to a new species, I. bernissartensis, a larger and much more robust animal than the English remains had yet revealed, but one specimen was referred to the nebulous, gracile I. mantelli (now Dollodon bampingi).
The skeletons were some of the first complete dinosaur skeletons known. Found with the dinosaur skeletons were the remains of plants, fish, and other reptiles, including the crocodyliform Bernissartia. The science of conserving fossil remains was in its infancy, and new techniques had to be improvised to deal with what soon became known as "pyrite disease". Crystalline pyrite in the bones was being oxidized to iron sulphate, accompanied by an increase in volume that caused the remains to crack and crumble. When in the ground, the bones were isolated by anoxic moist clay that prevented this from happening, but when removed into the drier open air, the natural chemical conversion began to occur. To limit this effect, De Pauw immediately, in the mine-gallery, re-covered the dug-out fossils with wet clay, sealing them with paper and plaster reinforced by iron rings, forming in total about six hundred transportable blocks with a combined weight of a hundred and thirty tons. In Brussels, after opening the plaster, he impregnated the bones with boiling gelatine mixed with oil of cloves as a preservative. Removing most of the visible pyrite, he then hardened them with hide glue, finishing with a final layer of tin foil. Damage was repaired with papier-mâché. This treatment had the unintended effect of sealing in moisture and extending the period of damage. In 1932 museum director Victor van Straelen decided that the specimens had to be completely restored again to safeguard their preservation. From December 1935 to August 1936 the staff at the museum in Brussels treated the problem with a combination of alcohol, arsenic, and 390 kilograms of shellac. This combination was intended to simultaneously penetrate the fossils (with alcohol), prevent the development of mold (with arsenic), and harden them (with shellac).
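The "pyrite disease" the conservators fought is ordinary oxidative weathering of pyrite. A textbook overall reaction (standard chemistry; the stoichiometry is not stated in the source) is:

```latex
2\,\mathrm{FeS_2} + 7\,\mathrm{O_2} + 2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{FeSO_4} + 2\,\mathrm{H_2SO_4}
```

The sulphate products occupy more volume than the pyrite they replace, which is why bones that were stable in anoxic clay began to crack and crumble once moist air reached the crystals.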
The fossils entered a third round of conservation from 2003 until May 2007, when the shellac, hide glue and gelatine were removed and the bones were impregnated with polyvinyl acetate, cyanoacrylate, and epoxy glues. Modern treatments of this problem typically involve either monitoring the humidity of fossil storage or, for fresh specimens, preparing a special coating of polyethylene glycol that is then heated in a vacuum pump, so that moisture is immediately removed and pore spaces are infiltrated with polyethylene glycol to seal and strengthen the fossil. Dollo's specimens allowed him to show that Owen's prehistoric pachyderms were not correct for Iguanodon. He instead modelled the skeletal mounts after the cassowary and wallaby, and put the spike that had been on the nose firmly on the thumb. He was not completely correct, but he also had the disadvantage of being faced with some of the first complete dinosaur remains. A problem that was later recognised was the bend he introduced into the tail. This organ was more or less straight, as shown by the skeletons he was excavating, and the presence of ossified tendons. In fact, to get the bend in the tail for a more wallaby- or kangaroo-like posture, the tail would have had to be broken. With its correct, straight tail and back, the animal would have walked with its body held horizontal to the ground, arms in place to support the body if needed. Excavations at the quarry were stopped in 1881, although it was not exhausted of fossils, as recent drilling operations have shown. During World War I, when the town was occupied by German forces, preparations were made to reopen the mine for palaeontology, and Otto Jaekel was sent from Berlin to supervise. The Allies recaptured Bernissart just as the first fossiliferous layer was about to be uncovered. Further attempts to reopen the mine were hindered by financial problems and were stopped altogether in 1921 when the mine flooded.
Research on Iguanodon decreased during the early part of the 20th century as World Wars and the Great Depression enveloped Europe. A new species that would become the subject of much study and taxonomic controversy, I. atherfieldensis, was named in 1925 by R. W. Hooley, for a specimen collected at Atherfield Point on the Isle of Wight. However, what had been a European genus was now being found worldwide, with material in Africa (teeth from Tunisia and elsewhere in the Sahara), Mongolia (I. orientalis), and in North America (I. ottingeri from Utah). Another North American species, from South Dakota, once assigned to Iguanodon as I. lakotaensis, has since been reclassified as the genus Dakotadon. Iguanodon was not part of the initial work of the dinosaur renaissance that began with the description of Deinonychus in 1969, but it was not neglected for long. David B. Weishampel's work on ornithopod feeding mechanisms provided a better understanding of how it fed, and David B. Norman's work on numerous aspects of the genus has made it one of the best-known dinosaurs. In addition, a further find of numerous Iguanodon skeletons, in Nehden, Nordrhein-Westfalen, Germany, has provided evidence for gregariousness in this genus, as the animals in this areally restricted find appear to have been killed by flash floods. At least 15 individuals, from 2 to 8 metres long (6.6 to 26.2 ft), have been found here, although at least some of them are gracile iguanodontians and belong to the related Mantellisaurus or Dollodon (described as I. atherfieldensis, at that time believed to be another species of Iguanodon). Iguanodon material has also been used in the search for dinosaur DNA and other biomolecules. In research by Graham Embery et al., Iguanodon bones were processed to look for remnant proteins. In this research, identifiable remains of typical bone proteins, such as phosphoproteins and proteoglycans, were found in a rib.
Because Iguanodon is one of the first dinosaur genera to have been named, numerous species have been assigned to it. While never becoming the wastebasket taxon that several other early genera of dinosaurs became (such as Megalosaurus and Pelorosaurus), Iguanodon has had a complicated history, and its taxonomy continues to undergo revisions. Remains of the only well-supported species are known definitively only from Belgium, though additional remains sometimes still attributed to I. bernissartensis have been found in England. However, some researchers have recommended limiting the use of I. bernissartensis to the Bernissart finds, and using I. sp. (meaning undetermined species) for robust iguanodontian remains from Barremian-age rocks of Europe. Thus, after thorough restudy, what had been seen as a quintessentially British dinosaur is in fact poorly known from England. I. anglicus was the original type species, but the holotype was based on a single tooth and only partial remains of the species have been recovered since. In March 2000, the International Commission on Zoological Nomenclature changed the type species to the much better known I. bernissartensis, with the new holotype being IRSNB 1534. The original Iguanodon tooth is held at Te Papa Tongarewa, the national museum of New Zealand in Wellington, although it is not on display. The fossil arrived in New Zealand following the move of Gideon Mantell's son Walter there; after the elder Mantell's death, his fossils went to Walter. Only a few of the many species assigned to Iguanodon are still considered to be valid, and only one may fall within the genus Iguanodon. I. bernissartensis, described by George Albert Boulenger in 1881, is the neotype for the genus. This species is best known for the many skeletons discovered in Bernissart, but is also known from remains across Europe. David Norman suggested that it includes the dubious Mongolian I. orientalis, but this has not been followed by other researchers.
In addition, sixteen other species have since been reclassified, and two Iguanodon species are currently considered to be nomina dubia. The genera Iguanosaurus (Ritgen, 1828), Hikanodon (Keferstein, 1834), and Therosaurus (Fitzinger, 1840) are simply junior objective synonyms, later names for the material of I. anglicus. Since its description in 1825, Iguanodon has been a feature of worldwide popular culture. Two life-size reconstructions of Mantellodon (considered Iguanodon at the time) built at the Crystal Palace in London in 1852 greatly contributed to the popularity of the genus. Their thumb spikes were mistaken for horns, and they were depicted as elephant-like quadrupeds, yet this was the first time an attempt was made at constructing full-size dinosaur models. In 1910 Heinrich Harder portrayed a group of Iguanodon in the classic German collecting cards about extinct and prehistoric animals, "Tiere der Urwelt". Several motion pictures have featured Iguanodon. In the Disney film Dinosaur, an Iguanodon named Aladar served as the protagonist, with three other iguanodonts as other main characters; a loosely related ride of the same name at Disney's Animal Kingdom is based around bringing an Iguanodon back to the present. Iguanodon is one of the three dinosaur genera that inspired Godzilla; the other two were Tyrannosaurus and Stegosaurus. Iguanodon has also made appearances in some of the many Land Before Time films, as well as episodes of the television series. Aside from appearances in movies, Iguanodon has been featured in the BBC television documentary miniseries Walking with Dinosaurs (1999), played a starring role in Sir Arthur Conan Doyle's book The Lost World, and featured in an episode of the Discovery Channel documentary Dinosaur Planet. It was also present in Bob Bakker's Raptor Red (1995), as a Utahraptor prey item. A main-belt asteroid, 1989 CB3, has been named 9941 Iguanodon in honour of the genus.
Because it is both one of the first dinosaurs described and one of the best-known dinosaurs, Iguanodon has been well-placed as a barometer of changing public and scientific perceptions on dinosaurs. Its reconstructions have gone through three stages: the elephantine quadrupedal horn-snouted reptile satisfied the Victorians; then a bipedal but still fundamentally reptilian animal, using its tail to prop itself up, dominated the early 20th century; that view was in turn slowly overturned during the 1960s by the current, more agile and dynamic representation, able to shift from two legs to all fours.
It is possible to increase how much a person values something simply by presenting it along with an irrelevant sound that cues the person to press a button, reports a paper published online this week in Nature Neuroscience. This suggests that people's preferences can be changed without directly manipulating the items they are choosing between. Tom Schonberg and colleagues asked study participants how much they were willing to pay for each one of a set of snack foods. The participants were then presented with pictures of the different foods on a computer screen. Most items were viewed passively, but a few items were always presented along with a tone that indicated that the participant had to quickly press a button. After this phase of the experiment, participants decided between pairs of snack foods that they had deemed equally valuable at the beginning of the experiment. Participants chose the tone-associated item 60-65% of the time, and in a subsequent task they were willing to pay more for these items than for the originally equivalent control items. This behavioral change occurred without the addition of any new incentives, extra information, or increased familiarity, and the effect lasted at least two months. A neuroimaging study with different participants revealed increased preference-related activity in the ventromedial prefrontal cortex, a region often associated with representing subjective value in decision-making. The authors replicated the effect on choice in five independent groups, totaling 240 participants. Schonberg and colleagues believe that these results suggest a way to help modify choices.
Separate but equal

Separate but equal was a legal doctrine in United States constitutional law according to which racial segregation did not necessarily violate the Fourteenth Amendment to the United States Constitution, which guaranteed "equal protection" under the law to all people. Under the doctrine, as long as the facilities provided to each race were equal, state and local governments could require that services, facilities, public accommodations, housing, medical care, education, employment, and transportation be segregated by "race", which was already the case throughout the states of the former Confederacy. The phrase was derived from a Louisiana law of 1890, although the law actually used the phrase "equal but separate". The doctrine was confirmed in the Plessy v. Ferguson Supreme Court decision of 1896, which allowed state-sponsored segregation. Though segregation laws existed before that case, the decision emboldened segregationist states during the Jim Crow era, which had commenced in 1876 and supplanted the Black Codes, which restricted the civil rights and civil liberties of African Americans during the Reconstruction Era. In practice, the separate facilities provided to African Americans were rarely equal; usually they were not even close to equal, or they did not exist at all. For example, in the 1930 census, black people were 42% of Florida's population. Yet according to the 1934–36 report of the Florida Superintendent of Public Instruction, the value of "white school property" in the state was $70,543,000, while the value of African-American school property was $4,900,000. The report says that "in a few south Florida counties and in most north Florida counties many Negro schools are housed in churches, shacks, and lodges, and have no toilets, water supply, desks, blackboards, etc. [See Station One School.]
Counties use these schools as a means to get State funds and yet these counties invest little or nothing in them." At that time, high school education for African Americans was provided in only 28 of Florida's 67 counties. In 1939–40, the average salary of a white teacher in Florida was $1,148, whereas for a Negro teacher it was $585. During the era of segregation, the myth was that the races were separated but were provided equal facilities. No one believed it. Almost without exception, black students were given inferior buildings and instructional materials. Black educators were generally paid less than were their white counterparts and had more students in their classrooms.... In 1938, Pompano white schools collectively had one teacher for every 25 students, while the Pompano Colored School had one teacher for every 54 students. At the Hammondville School, the single teacher employed there had 67 students. "Separate but equal" facilities were found to be unconstitutional in a series of Supreme Court decisions under Chief Justice Earl Warren, starting with Brown v. Board of Education of 1954. However, the subsequent overturning of segregation laws and practices was a long process that lasted through much of the 1950s, 1960s, and 1970s, involving federal legislation (especially the Civil Rights Act of 1964) and many court cases. The American Civil War brought slavery in the United States to an end with the ratification of the Thirteenth Amendment in 1865. Following the war, the Fourteenth Amendment to the United States Constitution guaranteed equal protection under the law to all people, and Congress established the Freedmen's Bureau to assist the integration of former slaves into Southern society. The Reconstruction Era brought new freedoms and laws promoting racial equality to the South.
However, after the Compromise of 1877 ended Reconstruction and withdrew federal troops from all Southern states, many former slaveholders and Confederates were elected to office. The Fourteenth Amendment guaranteed equal protection to all people, but Southern states contended that the requirement of equality could be met in a way that kept the races separate. Furthermore, the state and federal courts tended to reject the pleas by African Americans that their Fourteenth Amendment rights were violated, arguing that the Fourteenth Amendment applied only to federal, not state, citizenship. This rejection is evident in the Slaughter-House Cases and Civil Rights Cases. After the end of Reconstruction, the federal government adopted a general policy of leaving racial segregation up to the individual states. One example of this policy was the second Morrill Act (Morrill Act of 1890). Before the end of the war, the Morrill Land-Grant Colleges Act (Morrill Act of 1862) had provided federal funding for higher education by each state with the details left to the state legislatures. The 1890 Act implicitly accepted the legal concept of "separate but equal" for the 17 states that had institutionalized segregation:

Provided, That no money shall be paid out under this act to any State or Territory for the support and maintenance of a college where a distinction of race or color is made in the admission of students, but the establishment and maintenance of such colleges separately for white and colored students shall be held to be a compliance with the provisions of this act if the funds received in such State or Territory be equitably divided as hereinafter set forth.

Early legal support

Jim Crow laws

In the late 19th century, many states of the former Confederacy adopted laws, collectively known as Jim Crow laws, that mandated separation of whites and African Americans. The Florida Constitution of 1885 mandated separate educational systems.
In Texas, laws required separate water fountains, restrooms, and waiting rooms in railroad stations. In Georgia, restaurants and taverns could not serve white and "colored" patrons in the same room; separate parks for each "race" were required, as were separate cemeteries. These are just examples from a large number of similar laws. Prior to the Second Morrill Act, 17 states excluded blacks from access to the land-grant colleges without providing similar educational opportunities. In response to the Second Morrill Act, 17 states established separate land-grant colleges for blacks, which are now referred to as public historically black colleges and universities (HBCUs). In fact, some states adopted laws prohibiting schools from educating blacks and whites together, even if a school was willing to do so. (The constitutionality of such laws was upheld in Berea College v. Kentucky (1908) 211 U.S. 45.)

Plessy v. Ferguson

The legitimacy of such laws under the 14th Amendment was upheld by the U.S. Supreme Court in the 1896 case of Plessy v. Ferguson, 163 U.S. 537. The Plessy doctrine was extended to the public schools in Cumming v. Richmond County Board of Education, 175 U.S. 528 (1899). In 1892, Homer Plessy, who was of mixed ancestry and appeared to be white, boarded an all-white railroad car between New Orleans and Covington, Louisiana. The conductor of the train collected passenger tickets at their seats. When Plessy told the conductor he was 7⁄8 white and 1⁄8 black, he was informed that he had to move to a coloreds-only car. Plessy said he resented sitting in a coloreds-only car and was arrested immediately. One month after his arrest, Plessy appeared in court before Judge John Howard Ferguson. Plessy's lawyer, Albion Tourgée, claimed Plessy's 13th and 14th Amendment rights were violated. The 13th Amendment abolished slavery, and the 14th Amendment granted equal protection to all under the law. The Supreme Court decision in Plessy v.
Ferguson formalized the legal principle of "separate but equal". The ruling required "railway companies carrying passengers in their coaches in that State to provide equal, but separate, accommodations for the white and colored races". Accommodations provided on each railroad car were required to be the same as those provided on the others. Separate railroad cars could be provided. The railroad could refuse service to passengers who refused to comply, and the Supreme Court ruled this did not infringe upon the 13th and 14th Amendments. The "separate but equal" doctrine applied to all public facilities: not only railroad cars but schools, medical facilities, theaters, restaurants, restrooms, and drinking fountains. However, neither the states nor Congress put "separate but equal" into the statute books, meaning the provision of equal services to non-whites could not be legally enforced. The only possible remedy was through the federal courts, but costly legal fees and expenses meant that this was out of the question for individuals; it took an organization with resources, the NAACP, to file and pursue Brown v. Board of Education. "Equal" facilities were the exception rather than the rule. The facilities and social services offered to African Americans were almost always of lower quality than those offered to white Americans, if they existed at all. Most African-American schools had less public funding per student than nearby white schools; they had old textbooks discarded by the white schools, used equipment, and teachers who were poorly paid, prepared, or trained. In addition, according to a study conducted by the American Psychological Association, black students are emotionally impaired when segregated at a young age. In Texas, the state established a state-funded law school for white students but none for black students. As previously mentioned, the majority of counties in Florida during the 1930s had no high school for African-American students.
African Americans had to pay state and local taxes that were used for the benefit of whites only. (See Florida A&M Hospital for an example.) Although the "separate but equal" doctrine was eventually overturned by the U.S. Supreme Court in Brown v. Board of Education (1954), the implementation of the changes this decision required was long, contentious, and sometimes violent (see Massive resistance and Southern Manifesto). It can be considered ongoing (see Black Lives Matter). While modern legal doctrine interprets the 14th Amendment to prohibit explicit segregation on the basis of race, societal issues surrounding racial discrimination remain topical (see Racial profiling).

Before Warren Court

The repeal of such restrictive laws, generally known as Jim Crow laws, was a key focus of the Civil Rights Movement prior to 1954. In Sweatt v. Painter, the Supreme Court addressed a legal challenge to the doctrine when a black Texan student, Heman Marion Sweatt, was seeking admission into the state-supported School of Law of the University of Texas. Since Texas did not have a law school for black students, the lower court continued the case for six months so that a state-funded law school for black students (now known as the Thurgood Marshall School of Law at Texas Southern University) could be created. When further appeals to the Texas Supreme Court failed, Sweatt, along with the NAACP, took the case to the federal courts, where it eventually reached the Supreme Court of the United States. There, the original decision was reversed and Sweatt was admitted into the University of Texas School of Law. This decision was based on the grounds that the separate school failed to qualify as "equal", because of both quantitative differences, such as its facilities, and intangible factors, such as its isolation from most of the future lawyers with whom its graduates would interact.
The court held that, when considering graduate education, intangible factors must be considered as part of "substantive equality". The same day, the Supreme Court in McLaurin v. Oklahoma State Regents ruled that segregation laws in Oklahoma, which had required an African-American graduate student working on a Doctor of Education degree to sit in the hallway outside the classroom door, did not qualify as "separate but equal". These cases ended the "separate but equal" doctrine in graduate and professional education.

The Warren Court

In 1953, Earl Warren became the 14th Chief Justice of the United States, and the Warren Court started a liberal constitutional revolution which outlawed racial segregation and "separate but equal" throughout the United States in a series of landmark rulings. In Brown v. Board of Education (1954) 347 U.S. 483, attorneys for the NAACP referred to the phrase "equal but separate" used in Plessy v. Ferguson as a custom of de jure racial segregation enacted into law. The NAACP, led by Thurgood Marshall (who became the first black Supreme Court Justice in 1967), was successful in challenging the constitutional viability of the "separate but equal" doctrine. The Warren Court voted to overturn sixty years of law that had developed under Plessy. The Warren Court outlawed segregated public education facilities for blacks and whites at the state level. The companion case of Bolling v. Sharpe, 347 U.S. 497, outlawed such practices at the federal level in the District of Columbia. Chief Justice Earl Warren wrote in the court opinion:

We conclude that, in the field of public education, the doctrine of "separate but equal" has no place. Separate educational facilities are inherently unequal. Therefore, we hold that the plaintiffs and others similarly situated for whom the actions have been brought are, by reason of the segregation complained of, deprived of the equal protection of the laws guaranteed by the Fourteenth Amendment.
Although Brown overturned the doctrine of "separate but equal" in institutions of public education, it would be almost ten more years before the Civil Rights Act of 1964 prohibited racial discrimination in facilities that were deemed public accommodations (transportation, hotels, etc.). Additionally, in 1967, in Loving v. Virginia, the Warren Court declared Virginia's anti-miscegenation statute, the Racial Integrity Act of 1924, unconstitutional, thus invalidating all anti-miscegenation laws in the United States. Chief Justice Earl Warren wrote in the court majority opinion:

Under our Constitution, the freedom to marry, or not marry, a person of another race resides with the individual, and cannot be infringed by the State.

After Warren Court

Although federal legislation prohibits racial discrimination in college admissions, the historically black colleges and universities continue to teach student bodies that are 75% to 90% African American. However, this does not necessarily indicate racial discrimination within college admissions in those schools, when factors such as student preference are taken into account. In 1975, Jake Ayers Sr. filed a lawsuit against Mississippi, stating that the state gave more financial support to the predominantly white public colleges. The state settled the lawsuit in 2002, directing $503 million to three historically black colleges over 17 years.
Breast cancer starts when cells in the breast begin to grow out of control. These cells usually form a tumor that can often be seen on an X-ray or felt as a lump. The tumor is malignant (cancerous) if the cells can grow into (invade) surrounding tissues or spread (metastasize) to distant areas of the body. Breast cancer occurs almost entirely in women, but men can get it, too. Breast cancers can start from different parts of the breast: most begin in the ducts that carry milk to the nipple (ductal cancers); some start in the glands that make breast milk (lobular cancers); and a small number start in other tissues in the breast.

Types of Breast Cancer

Breast cancer can be separated into two main types based on the way the cancer cells look under the microscope:
- Carcinomas: a type of cancer that starts in the cells that line organs and tissues, such as the breast (epithelial cells). In fact, breast cancers are often a type of carcinoma called adenocarcinoma, which is carcinoma that starts in glandular tissue.
- Sarcomas: a type of cancer that starts in the cells of muscle, fat, or connective tissue.

In some cases a single breast tumor can be a combination of different types or a mixture of invasive and in situ cancer, and in some rarer types of breast cancer, the cancer cells may not form a tumor at all. Breast cancer can also be classified based on proteins on or in the cancer cells, into groups like hormone receptor-positive or triple-negative. A number of factors may increase your risk of breast cancer. Some risk factors can be managed, for instance by quitting smoking or drinking, but others, such as your family history, cannot be controlled.
Risk factors for breast cancer include:
- Genetic risk factors
- Family history of breast cancer
- Personal history of breast cancer
- Race and ethnicity
- Dense breast tissue
- Certain benign breast conditions
- Lobular carcinoma in situ
- Menstrual periods
- Previous chest radiation
- Diethylstilbestrol exposure
- Birth control
- Hormone therapy after menopause
- Drinking alcohol
- Being overweight or obese
- Sedentary lifestyle

Signs and Symptoms

Although many types of breast cancer can cause a lump in the breast (usually a painless, hard mass with irregular edges), not all do. There are other symptoms of breast cancer you should watch out for and report to a health care provider, such as:
- Breast lump or mass
- Swelling of all or part of a breast (even if no distinct lump is felt)
- Skin irritation or dimpling
- Breast or nipple pain
- Nipple retraction (turning inward)
- Redness, scaliness, or thickening of the nipple or breast skin
- A nipple discharge other than breast milk

Sometimes a breast cancer can spread to lymph nodes under the arm or around the collar bone and cause a lump or swelling there, even before the original tumor in the breast tissue is large enough to be felt. Swollen lymph nodes should also be reported to your doctor. Breast cancer is now a very common disease in women between 30 and 50 years old. The 5-year relative survival rate for women with stage 0 or stage I breast cancer is close to 100%, and about 93% for those with stage II. In stage III, the 5-year survival rate is roughly 72%, so at this level of development it is less lethal than many other cancers. Nevertheless, breast cancers that have spread to other parts of the body are more difficult to treat and tend to have a poorer outlook. Metastatic, or stage IV, breast cancers have a 5-year relative survival rate of about 22%.
Still, there are often many treatment options available for women with this stage of breast cancer.

Breast Cancer Diagnosis

PLEASE NOTE: EARLY DIAGNOSIS IN CANCER IS VERY IMPORTANT BECAUSE CANCER THAT IS DIAGNOSED AT AN EARLY STAGE, BEFORE IT HAS HAD THE CHANCE TO GROW TOO BIG OR SPREAD, IS MORE LIKELY TO BE TREATED SUCCESSFULLY. IF THE CANCER HAS SPREAD, TREATMENT BECOMES MORE DIFFICULT, AND GENERALLY A PERSON'S CHANCES OF SURVIVING ARE MUCH LOWER.

As shown in the chart below, in breast cancer, as in many other cancers, mortality is substantially higher in patients whose disease is diagnosed at a late stage or has metastasized to other organs.

State of the Art

In terms of breast cancer diagnosis, there is a wide range of detection techniques: breast exam; imaging techniques such as the X-ray scan (more commonly known as mammography), breast ultrasound, and magnetic resonance imaging; and biopsy. However, each has limitations: first, the low resolution of the manual examination, and the need for the suspicious nodule or mass to reach an appreciable size (the palpable threshold); second, the added cost of the sophisticated imaging methods, and the need for the suspicious nodule or mass to reach a minimum size (the visibility threshold); and finally, the stress derived from the biopsy, a highly invasive method. Here is where the OncoBREAST Dx test can help, as an innovative, non-invasive, accurate, and cost-effective diagnostic solution based on a simple blood test that can detect breast cancer with 91.7% sensitivity and 93.6% specificity, while very significantly reducing the number of false positives (FP) and false negatives (FN) typical of other diagnostic procedures. This translates into savings of up to 90% of the biopsies that patients must undergo to confirm or discard malignancy in the face of a suspicious finding.
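For readers unfamiliar with the terms, sensitivity and specificity are simple ratios computed from a test's confusion matrix. The sketch below illustrates the arithmetic; the cohort counts are hypothetical, chosen only so that the resulting rates come out near the figures quoted above, and are not actual OncoBREAST Dx validation data.

```python
# Illustrative only: how sensitivity and specificity are derived from a
# confusion matrix. All patient counts below are hypothetical.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: the proportion of actual cancers the test detects."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: the proportion of healthy patients correctly cleared."""
    return tn / (tn + fp)

# Hypothetical cohort of 1,000 patients, 120 with biopsy-confirmed cancer.
tp, fn = 110, 10   # cancers detected vs. missed (false negatives)
tn, fp = 824, 56   # healthy patients cleared vs. false alarms (false positives)

print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # 110 / 120
print(f"specificity = {specificity(tn, fp):.1%}")  # 824 / 880
```

A high sensitivity limits missed cancers (FN), while a high specificity limits false alarms (FP); it is the latter that drives unnecessary confirmatory biopsies.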
If you want to read more about OncoBREAST Dx, our Multiple Biomarkers Disease Activity Algorithms (MBDAAs) for innovative, non-invasive, accurate, and cost-effective breast cancer diagnostics, please click on the button below:
- To enable the students to identify and name the shapes.
- To identify and differentiate between 2D and 3D shapes.
- To recognize the shapes in the environment.
- To describe properties of shapes/objects, compare them and sort them by size.
- To correlate the topic of shapes to real-life situations.

An Exhibition was conducted in the KG Section related to shapes, to identify and differentiate between 2D and 3D shapes. Children used their senses to learn more about the shapes. They were given opportunities to speak about the shapes. They explained the number of sides, vertices, edges, faces, etc., that different shapes have, and correlated them to real-life situations.
Flying geckos (Ptychozoon sp.) have been known to Western science since at least 1809, but many aspects of their natural history remain a mystery. Scientists don't even agree on how many species of flying gecko exist; between five and nine species are recognized by various authorities. At least this much is known: flying geckos are small, tropical lizards with extraordinary adaptations to suit their arboreal lifestyle. Flying geckos reach about 6 to 8 inches from tongue to tail tip. They feature prominent skin flaps along their sides, tail and feet. Though the color and pattern vary greatly from lizard to lizard, most are incredibly well camouflaged and blend in superbly with their habitat, a quality that helps them to avoid predators. Though the skin flaps likely help the lizard's camouflage, a 1976 study by Dale L. Marcellini and Thomas E. Keefer, published in "Herpetologica," found that camouflage was not the primary purpose of the flaps -- rather, the flaps evolved to enable gliding behavior. Flying geckos are not capable of true powered flight. Rather, they will extend their legs and tails, maximizing their surface area, and passively glide from tree to tree. Though flying geckos don't have control over their "wings" like flying dragons (Draco sp.) do, they have similar surface area and flight capabilities, as was shown in a 2001 study by Anthony P. Russell et al., published in the "Journal of Morphology." Flying geckos are found in tropical forests throughout Southeast Asia. After spending their days clinging to tree trunks and sleeping, these nocturnal lizards emerge at nightfall to hunt insects, spiders and small vertebrates. Snakes, monitor lizards and predatory birds are important predators of the geckos, which will try to defend themselves by running or gliding away. Flying geckos will breed and produce eggs every three or four weeks, when the climatic conditions are favorable.
Like most other geckos, flying geckos deposit their eggs in pairs. Wild lizards deposit their eggs in concealed locations on trees, while captive lizards may attach them to the cage glass. The young hatch in about three weeks, looking like smaller versions of the adults. In captivity, flying geckos thrive in a tropical terrarium of sufficient size. A 10-gallon aquarium will serve a pair of geckos, though bigger is preferred. The habitat needs a thermal gradient, typically created by placing a heat lamp or heating pad at one end of the enclosure. The warm side of the habitat should reach the mid-80s Fahrenheit during the day and drop into the mid-70s at night. Feed the geckos crickets or roaches five or six times per week, and mist the enclosure daily to maintain humidity and supply drinking water. Live or artificial plants and numerous hiding spots should be provided so the geckos feel secure. Wild-caught animals will require veterinary care to address any parasites that may be present.
- Herpetologica: Background Color Selection and Antipredator Behavior of the Flying Gecko (Ptychozoon kuhli)
- The Reptile Database: Ptychozoon kuhli
- Zootaxa: A New Species of Parachute Gecko (Squamata: Gekkonidae: Genus Ptychozoon) from Kaeng Krachan National Park, Western Thailand
- Herpetologica: Analysis of the Gliding Behavior of Ptychozoon lionatum (Reptilia: Gekkonidae)
Greenhouse gases such as carbon dioxide are damaging because they remain in the Earth's atmosphere and trap heat. This contributes to global warming, which is believed to be connected to climate change. Climate change is responsible for a whole host of problems, from desertification and drought through to extreme weather events such as hurricanes and floods. It is also responsible for the melting polar ice caps and unforeseen damage to delicate ecosystems.

Different Types Of Greenhouse Gas

The main types of greenhouse gas are carbon dioxide (CO2), methane, nitrous oxide and fluorinated gases, which are released during industrial processes. Carbon dioxide is released into the atmosphere when fossil fuels such as coal, oil and natural gas are burned, as well as biofuels and organic materials such as wood, trees and solid waste. It can also be released by processes such as cement manufacture. CO2 can be re-absorbed by plants and prevented from causing damage to the atmosphere. However, this requires large swathes of greenery to remain across the world, such as the rainforests, which are being destroyed by logging and farming. Methane is released during the transport and production of oil, coal and natural gas, and it is a by-product of dairy farming and other farming practices, including solid waste incineration. Nitrous oxide is released during industrial and farming activities as well as from burning solid waste and fossil fuels.

The Effects Of These Gases

Each greenhouse gas can stay in the atmosphere for a varying amount of time, ranging from several years to many thousands of years. Each of the main gas types stays in the atmosphere long enough to mix and spread across the world, regardless of where the source is concentrated. This means that local activities affect the globe as a whole, and policies to combat climate change must be signed up to by all nations rather than just a select few.
As the gases trap heat and warm the planet, the Earth's natural balance is thrown out of kilter, and this leads to extreme weather, desertification and failing crops. Natural ecosystems are changed and sometimes destroyed. The increase in the amount of greenhouse gases is almost entirely due to human activity over the past 150 years. The USA is the biggest source of the emissions to date, but developing nations are catching up. Primary sources of emissions include electricity production, industry, transport, commerce, agriculture and homes. From 1990, emissions levels in the USA were growing by about 5% annually. However, this reversed in 2012 as the impact of renewable energy investments began to be felt, and it is hoped that this downward trend will continue as the adoption of new renewable energies, including biofuels for transport, increases and becomes more widespread.

Efforts To Combat Climate Change

Global investment in renewable energy sources is diminishing the reliance on fossil fuels and helping to create clean, green energy, as well as a large green economy. Examples of these renewable energies include solar PV and solar thermal, onshore and offshore wind energy, biofuel, hydropower and combined heat and power. Recycling, waste management and better education about greenhouse gases and environmental protection are slowly helping to reverse the changes. However, further work is needed in government policy, as well as among individual communities and the action they can take to preserve their children's futures.
To consider tradeoffs between learning and performance and examine instructional strategies that support both. Kapur researches an instructional strategy called productive failure. Productive failure encourages learners to create incorrect or incomplete solutions, get stuck during problem solving, or otherwise fail to produce a right answer when they first start learning a new procedure. The underlying theory is that this strategy encourages students to try to apply their prior knowledge to the problem, recognize whether it works, and identify the new knowledge they need to complete the solution. Once learners have gone through this process of failing, they are primed to fill in the gaps in their knowledge through instruction. A critical feature of productive failure is that the failure during the problem-solving phase is followed by productive learning during instruction, called the consolidation phase.

Productive vs. Unproductive

The two dimensions that Kapur examines in this review article are learning (productive vs. unproductive) and performance (failure vs. success). For learning, a productive instructional strategy is one that teaches learners the problem-solving procedure so that they remember it long-term and apply it to novel problems. That is, productive instruction produces retention and transfer. Key features that make learning productive when teaching problem-solving procedures are:
- Allows the learner to determine how prior knowledge applies
- Helps the learner to identify gaps in their knowledge
- Fills gaps with conceptual knowledge
- Does not overload the student's cognitive capacity, but doesn't necessarily minimize cognitive load

Failure vs. Success

Failure vs. success, in this context, refers to learners' performance on learning activities. For example, if you give practice problems in class, do students complete them correctly? In contrast, long-term performance and knowledge is determined by productive or unproductive learning.
In general, more guidance or scaffolding during activities is associated with more success. These successes sometimes translate to better grades on homework and tests of declarative knowledge. They are effective in the short term. However, when comparing students who failed and those who succeeded on learning activities, there is often no difference in procedural knowledge. In addition, students who fail to create correct solutions AND consider more possible (but wrong) solutions have better conceptual knowledge than those who succeed, or who fail but consider only a few possible solutions. So if learners are going to fail, they should fail a lot. Unproductive failure is bad, obviously. There's some evidence that productive failure produces better conceptual knowledge than productive success, but both are productive. What about unproductive success? If I tell you exactly how to solve a problem, you are going to be very successful at solving that problem. But you won't learn much about why that solution is effective (even if I tell you why it's effective, the research suggests it won't stick). Thus, you won't be able to solve problems that are very different from the highly scaffolded problems I've given you. Highly scaffolded instruction, therefore, should be reserved for cases where declarative and procedural knowledge are important, but conceptual knowledge is not.

Why this is important

This review article considers the four possible outcomes of instruction. Though there are limited cases in which short-term success is more important than productive learning, Kapur argues that the goal of instruction should be to maximize productive learning, even at the expense of success. A key conclusion is that productive learning does not rely on successful completion of learning activities. It might even be most productive to experience many incorrect or incomplete solutions before learning the canonical solution. Kapur, M. (2016).
Examining productive failure, productive success, unproductive failure, and unproductive success in learning. Educational Psychologist, 51(2), 289-299. doi: 10.1080/00461520.2016.1155457

For more information about the article summary series or more article summary posts, visit the article summary series introduction.
Statistics Definitions > Reliability and Validity

Outside of statistical research, reliability and validity are used interchangeably. For research and testing, there are subtle differences. Reliability implies consistency: if you take the ACT five times, you should get roughly the same results every time. A test is valid if it measures what it's supposed to. Tests that are valid are also reliable. The ACT is valid (and reliable) because it measures what a student learned in high school. However, tests that are reliable aren't always valid. For example, let's say your thermometer was a degree off. It would be reliable (giving you the same results each time) but not valid (because the thermometer wasn't recording the correct temperature). Reliability is a measure of the stability or consistency of test scores. You can also think of it as the ability of a test or research finding to be repeated. For example, a reliable medical thermometer gives the same reading each time it is used under the same conditions. In the same way, a reliable math test will measure mathematical knowledge consistently for every student who takes it, and reliable research findings can be replicated over and over. Of course, it's not quite as simple as saying you think a test is reliable. There are many statistical tools you can use to measure reliability. For example:
- Kuder-Richardson 20: a measure of internal reliability for a binary test (i.e. one with right or wrong answers).
- Cronbach's alpha: measures internal reliability for tests with multiple possible answers.

Internal vs. External Reliability

Internal reliability, or internal consistency, is a measure of how consistently the items within your test measure the same underlying construct. External reliability means that your test or measure can be generalized beyond what you're using it for. For example, a claim that individual tutoring improves test scores should apply to more than one subject (e.g. to English as well as math).
A test for depression should be able to detect depression in different age groups and in people of different socio-economic statuses or personality types. One specific type is parallel forms reliability, where two equivalent tests are given to students a short time apart. If the forms are parallel, then the tests produce the same observed results. A reliability coefficient is a measure of how consistent a test's scores are. It is the proportion of variance in observed scores (i.e. scores on the test) attributable to true scores (the theoretical "real" score that a person would get if a perfect test existed). The term "reliability coefficient" actually refers to several different coefficients. Several methods exist for calculating a coefficient, including test-retest, parallel forms and alternate forms:
- Cronbach's alpha — the most widely used internal-consistency coefficient.
- A simple correlation between two scores from the same person is one of the simplest ways to estimate a reliability coefficient. If the scores are taken at different times, this is one way to estimate test-retest reliability; different forms of the test given on the same day can estimate parallel forms reliability.
- Pearson's correlation can be used to estimate the theoretical reliability coefficient between parallel tests.
- The Spearman-Brown formula is a measure of reliability for split-half tests.
- Cohen's Kappa measures interrater reliability.

The range of the reliability coefficient is from 0 to 1. Rules of thumb for preferred levels of the coefficient:
- For high-stakes tests (e.g. college admissions), > 0.85. Some authors suggest this figure should be above 0.90.
- For low-stakes tests (e.g. classroom assessment), > 0.70. Some authors suggest this figure should be above 0.80.

Related terms:
- Composite Reliability.
- Concurrent Validity.
- Content Validity.
- Convergent Validity.
- Consequential Validity.
- Criterion Validity.
- Curricular Validity and Instructional Validity.
- Ecological Validity.
- External Validity.
- Face Validity.
- Formative Validity & Summative Validity.
- Incremental Validity.
- Internal Validity.
- Predictive Validity.
- Sampling Validity.
- Statistical Conclusion Validity.
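As a concrete illustration of the most widely used coefficient above, Cronbach's alpha can be computed directly from the item variances and the variance of the total scores. The sketch below is in Rust; the function names and the item scores are made up for illustration and are not from the original article:

```rust
// Population variance (divide by n). Using the same divisor for item and
// total variances leaves the alpha ratio unchanged vs. sample variance.
fn variance(xs: &[f64]) -> f64 {
    let n = xs.len() as f64;
    let mean = xs.iter().sum::<f64>() / n;
    xs.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n
}

// Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(totals)).
// `items` holds one Vec per test item, each with one score per respondent.
fn cronbachs_alpha(items: &[Vec<f64>]) -> f64 {
    let k = items.len() as f64;
    let sum_item_var: f64 = items.iter().map(|col| variance(col)).sum();
    let n_respondents = items[0].len();
    // Each respondent's total score is their sum across all items.
    let totals: Vec<f64> = (0..n_respondents)
        .map(|i| items.iter().map(|col| col[i]).sum())
        .collect();
    k / (k - 1.0) * (1.0 - sum_item_var / variance(&totals))
}

fn main() {
    // Hypothetical data: 3 test items answered by 4 students.
    let items = vec![
        vec![2.0, 3.0, 4.0, 5.0],
        vec![2.0, 4.0, 4.0, 5.0],
        vec![3.0, 3.0, 5.0, 5.0],
    ];
    println!("alpha = {:.3}", cronbachs_alpha(&items));
}
```

For this toy data the program prints an alpha of about 0.939, which would clear even the high-stakes threshold (> 0.85) in the rules of thumb above.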
Using "if" in a "let" statement

An 'if' expression can appear on the right-hand side of a 'let' statement; the value of the 'if' expression is then bound to the variable declared by 'let'.

Syntax of 'if in a let'

In this form, if the condition is true then the value of the 'if' block is assigned to the variable, and if the condition is false then the value of the 'else' block is assigned instead. Let's see a simple example.

value of a is: 1

In this example, the condition is true, so the variable 'a' is bound to the value of the 'if' expression and now contains the value 1. Let's see another simple example.

Some errors occurred: E0308

In this example, the 'if' block evaluates to an integer while the 'else' block evaluates to a string. The program therefore fails to compile, because both blocks of an 'if' expression must evaluate to values of the same type.
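The code listings for the two examples above are missing from this copy of the tutorial. A sketch of what they likely looked like (reconstructed from the described output and error, not the original listing):

```rust
fn main() {
    let condition = true;

    // The whole `if` expression evaluates to a value,
    // which `let` binds to the variable `a`.
    let a = if condition { 1 } else { 2 };
    println!("value of a is: {}", a);

    // The second example does NOT compile: the `if` block evaluates to
    // an integer while the `else` block evaluates to a string, so the
    // compiler reports error E0308 (mismatched types).
    // let b = if condition { 5 } else { "six" };
}
```

Running the first example prints "value of a is: 1"; uncommenting the second binding reproduces the E0308 error described above.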
Definition - What does Sepal mean? In plant biology, a sepal is the outermost part of the flower in angiosperms (flowering plants). A group of sepals encloses a developing bud, and sepals are mostly green in color. A sepal typically functions as protection for the flower while it is budding and often supports the petals when in bloom. Sepals are collectively referred to as a calyx, the plural of which is calyces. Once the flower has fully bloomed, it usually has no more use for the sepals, so they either wither away or degenerate.

MaximumYield explains Sepal

Attached directly to the top of the stem of the flowering plant, sepals come in a variety of shapes and sizes. Some sepals are long and thin, while others are short and thick. While some sepals are individual, others are fused together to form a cup around the petals of a flower. Together, the sepals and petals make up the perianth, also known as the floral envelope. The petals usually have distinctive, showy colors that are meant to attract pollinators such as bees for cross-pollination. Meanwhile, the sepals are usually just green in color and often resemble reduced leaves. Sometimes sepals and petals are indistinguishable, for example in lilies and tulips. When it is difficult to differentiate between the two, they are sometimes referred to as tepals.
Chickenpox is often considered a fairly harmless disease. Many parents, and even some medical professionals, may view the disease as a childhood rite of passage. However, before the varicella vaccine was routinely given, chickenpox infections resulted in over 10,000 hospitalizations and 100–150 deaths each year in the United States alone. That's about two or three deaths a week! The older you get, the greater the risk of complications from chickenpox. It is not better to get chickenpox naturally. The potential risks of the disease are severe, and no one can predict which child will develop a life-threatening case of chickenpox. Shingles is a complication of "natural" chickenpox.

What is chickenpox?

Chickenpox is caused by the varicella virus and was once a common childhood disease. It is usually mild but it can be serious, especially in young infants and adults. The varicella virus causes rash, itching, fever and tiredness. Severe complications are rare but do occur and include: skin infection, scars, pneumonia, brain damage or death. Chickenpox is highly contagious. It is an airborne disease. The virus spreads through the air when an infected person coughs or sneezes, or by touching chickenpox lesions (sores). A person with chickenpox is contagious from 1–2 days before rash onset until the lesions have crusted over. It takes 10–21 days after exposure to the virus for someone to develop chickenpox. Centers for Disease Control and Prevention (CDC) studies state that 90% of the susceptible people (i.e., those unvaccinated) in a house with an infected person will also become infected with chickenpox. Most often chickenpox lesions are concentrated on the trunk, with fewer lesions on the arms and legs. If you see lesions or vesicles on the palms, soles and oral cavity, it may suggest hand, foot and mouth disease. A person who has had chickenpox can get a painful rash called shingles years later. See below for more information on shingles and chickenpox.
Did You Know? Modified varicella, also known as breakthrough varicella, can occur in vaccinated people. Breakthrough varicella is usually mild, with fewer than 50 lesions, low or no fever and a shorter duration of rash. The rash may look different from the normal chickenpox rash. Breakthrough varicella is infectious, but less infectious than varicella in an unvaccinated person. –CDC

The varicella vaccine is a live attenuated vaccine, meaning the actual virus is still alive but has been altered so that it is very unlikely to cause an infection. However, it will still trick the immune system into building up antibodies against it. It was licensed for use in 1995. CDC recommends two doses of chickenpox vaccine for children, adolescents and adults. Two doses of the vaccine (usually at ages 1 and 4) are about 88–98% effective at preventing chickenpox. A vaccine, like any medicine, is capable of causing serious problems, such as severe allergic reactions. The risk of the chickenpox vaccine causing serious harm, or even death, is extremely small. Getting the chickenpox vaccine is much safer than getting chickenpox disease. Most people who get the chickenpox vaccine do not have any problems with it. Reactions are usually more likely after the first dose than after the second. Adverse reactions to the vaccine include:
- Fever (1 in 10, or fewer)
- Mild rash up to a month after vaccination (1 in 25); infecting others is rare
- Sore arm after the injection (20–25%)
- Seizure caused by fever (very rare)
Other serious problems, including severe brain reactions (encephalitis) and low blood count, have been reported after chickenpox vaccination. These happen so rarely that experts cannot tell whether they are caused by the vaccine or not. Encephalitis and low blood counts can also occur with a natural case of chickenpox.
Note: The first dose of measles, mumps, rubella and varicella (MMRV) vaccine has been associated with rash and higher rates of fever than MMR and varicella vaccines given separately. Rash has been reported in about 1 person in 20 and fever in about 1 person in 5. Seizures caused by a fever are also reported more often after MMRV. These usually occur 5 to 12 days after the first dose. (CDC) Parents may choose the combined MMRV vaccine to reduce the number of injections their child gets, or choose to get the MMR and varicella vaccines separately.

Chickenpox and shingles

The varicella zoster virus (VZV) causes two distinct diseases. Chickenpox (varicella) is one and commonly occurs in children. Many years later the virus can reactivate to cause herpes zoster, often called shingles: a painful rash on one side of the body that most commonly occurs in adults. In other words, the same virus that caused chickenpox when we were young stays dormant in our nerve cells. What causes a reactivation of the virus is unclear. However, what does seem to be clear is that as we age, the immune responses that keep varicella zoster virus dormant weaken. About 1 in 3 people will get shingles during their lifetime, and at least half of all people 85 and older will. One of the many perks of living longer. Immune-suppressed people, such as those with cancer or HIV, are at higher risk for shingles. Shingles cases have been on the rise for at least a decade in the US. One popular theory behind this rise is the increase in chickenpox vaccinations of children. Because most children no longer get chickenpox, their parents no longer get the immunological boost that comes from being exposed to the virus when their children are infected. Although it is widely repeated, there are a few reasons to doubt this theory. First, studies (see references below) have found that shingles was on the rise even before the chickenpox vaccine was licensed for children in 1995.
Secondly, adults in states with mandatory chickenpox immunization didn't have higher rates of shingles than those in states where children weren't as well vaccinated, and therefore more likely to get sick and provide immune boosters to parents and grandparents. Finally, as the population in the U.S. gets older, more people will eventually get shingles. There are still a lot of unanswered questions and unknowns. Researchers don't know if the two doses given during childhood will be enough to protect today's children against shingles when they are older adults. The vaccine hasn't been around long enough to determine that. The good news is that adults ages 60 and older, whether or not they had chickenpox as a child, can be vaccinated against shingles. So, what's the upside? If you didn't get wild-type (natural infection) chickenpox as a young child because you were vaccinated, you probably won't get shingles as an adult. But the jury is still out on that. Meanwhile, we do know the varicella vaccine is safe.

Is the US the only country routinely vaccinating healthy kids for chickenpox?

While the varicella vaccine is not routinely given in many countries, it is used to vaccinate healthy children in the following countries: United States, Australia, Canada, Costa Rica, Germany, Greece, Korea, Qatar, Saudi Arabia, Spain, Switzerland, United Arab Emirates and Uruguay. If you are traveling with young children, or an older child who has not been immunized, consider vaccination prior to traveling, since chickenpox is still common in many countries. Interestingly, countries in Europe that do not vaccinate for chickenpox have done cost-benefit analyses on the issue. For example, while Belgium still does not include varicella in its vaccine schedule, scientific studies there indicate it would be beneficial for its medical system to do so.

Bringing It Home

Party! Party! Party! The pox party is not an urban myth and, on a certain level, it makes sense.
Why not attempt to control the inevitable, since chickenpox is such a "harmless" disease? Unfortunately, there is simply no way to predict the outcome of a natural (wild-type) infection of chickenpox. We forget that before the vaccine there were 10,000 hospitalizations a year due to chickenpox. There is also no way to guarantee that your child has acquired chickenpox until you see the rash, so you could unwittingly expose pregnant women, immune-compromised children and the elderly in the meantime. (Barbara Walters was 83 when she was hospitalized for chickenpox.) With natural chickenpox you are contagious; with the vaccine you are not. So, the question is why? Why, when a safe vaccine exists, would we not vaccinate? We know pox diseases can be eradicated, so why not start that process with our generation of children? But let's take a big, community-wide view of this issue. Who are the promoters of chickenpox parties? Are they middle and upper-middle-class stay-at-home moms? Is it because they can afford to stay at home with sick children for extended periods of time? In our society the burden (so to speak) of chickenpox is not equitably distributed. Even uncomplicated cases of chickenpox cause children to miss a week or more of school, with a caregiver also missing work to care for the sick child. That means low-income families and single-parent families are more likely to suffer economic and educational setbacks from a bout or bouts of chickenpox in the household. Imagine the family with three children who all get chickenpox, one after the other. A parent could potentially miss more than three weeks of work, which may have serious consequences for a family with no safety net.

References:
Centers for Disease Control and Prevention. Varicella outbreak among vaccinated children: Nebraska, 2004. MMWR Morb Mortal Wkly Rep. 2006;55:749-752.
Davis MM, Patel MS, Gebremariam A.
Decline in varicella-related hospitalizations and expenditures for children and adults after introduction of varicella vaccine in the United States. Pediatrics. 2004;114:786-792.
Gershon AA. Varicella-zoster vaccine. In: Human Herpesviruses: Biology, Therapy, and Immunoprophylaxis. Department of Pediatrics, Columbia University College of Physicians and Surgeons, New York, NY, USA.
Leung J, Harpaz R, Molinari NA, Jumaan A, Zhou F. Herpes zoster incidence among insured persons in the United States, 1993–2006: evaluation of impact of varicella vaccination. Clin Infect Dis. 2011 Feb 1;52(3):332-40.
U.S. Department of Health & Human Services, Centers for Disease Control and Prevention: Vaccine Information Statement.
Yawn BP, Saddier P, Wollan PC, St Sauver JL, Kurland MJ, Sy LS. A population-based study of the incidence and complication rates of herpes zoster before zoster vaccine introduction. Mayo Clin Proc. 2007 Nov;82(11):1341-9.
A pandemic is characterized by the global spread of a novel disease strain that has the potential to cause serious illness and elevated mortality in the human population. The ability of any pathogen to infect people and inflict illness or death depends on that pathogen's virulence and transmissibility. For example, certain diseases such as influenza can cause serious illness or death and are easily spread from person to person. Other diseases, such as Ebola virus disease, cause very high mortality rates but are less readily transmissible. However, not all people are equally at risk from pandemics, because pandemic emergence is more likely under certain conditions. For example, geographic regions where many people live alongside large populations of animals that serve as "mixing vessels"—which, depending on the pathogen, can include pigs, domestic poultry, or bats—are more likely to ignite pandemics. In addition, a person's chance of developing serious illness after contracting a disease depends on that person's age, sex, and general health. Access to supportive medical care, antivirals, antibiotics, and vaccines also plays an important role. Travel restrictions may help by slowing the spread of a pandemic, allowing the dissemination of medical supplies and vaccines before the first wave of illness arrives. In fact, if mitigation measures are promptly enacted, many deaths may be prevented.
The dugong is a large marine mammal which, together with the manatees, is one of four living species of the order Sirenia. It is also the only sirenian in its range, which spans the waters of at least 37 countries throughout the Indo-Pacific, though the majority of dugongs live in the northern waters of Australia between Shark Bay and Moreton Bay. The dugong is heavily dependent on seagrasses for subsistence and is thus restricted to the coastal habitats where they grow, with the largest dugong concentrations typically occurring in wide, shallow, protected areas such as bays, mangrove channels and the lee sides of large inshore islands. Its snout is sharply downturned, an adaptation for grazing and uprooting benthic seagrasses. The dugong has been hunted for thousands of years for its meat and oil, although dugong hunting also has great cultural significance throughout its range. The dugong's current distribution is reduced and disjunct, and many populations are close to extinction. The IUCN lists the dugong as a species vulnerable to extinction, while the Convention on International Trade in Endangered Species limits or bans the trade of derived products based on the population involved. Despite being legally protected in many countries throughout their range, the main causes of population decline remain anthropogenic and include hunting, habitat degradation, and fishing-related fatalities. With its long lifespan of 70 years or more, and slow rate of reproduction, the dugong is especially vulnerable to these types of exploitation. Dugongs are also threatened by storms, parasites, and their natural predators: sharks, killer whales, and crocodiles.
Cloudy Comparisons - 2 sets of 12 cloud cards (24 total); 1 Venn diagram. Compare and contrast the clouds game! Put all the cards into a pile and have the students draw a card. Here are some ways you can play this game:
1.) Go Fish type of game: target both expressive and receptive language by dealing the cards 5 to a student. Then have a student ask, "Do you have a (insert cloud description here)?" Or, you could target your artic kiddos at the sentence/structured conversation level by having them describe and define the clouds using their good speech sounds.
2.) Guess the cloud: one student draws a card and describes the cloud they have drawn. Either you have one of each cloud lined up for the other students to see and match the described card, OR your other students draw the card the first student is describing. Whoever's drawing best matches the original cloud gets to draw the next card and describe.

Pouring Pronouns (he/she/they) - 18 pronoun cards; 3 pronoun mats. Oh no! It's pouring pronouns! Have your students draw a card and use the correct pronoun pictured on the card (he/she or they). For every correct answer, they earn an umbrella to protect them from the pouring pronouns, but get one wrong OR draw a storm card, and lose your umbrellas! The student with the most umbrellas at the end of the game wins! Here are some prompts for this game:
1. What is he/she doing?
2. Who is holding the umbrella? (SHE is.)

Windy WH Questions - 30 WH cards (6 of each: who, what, where, when, why). Put the cards into one pile. Answer the weather-related WH questions.

Dreary Descriptions - Match the lightning bolt items with their cloudy category! Mix up the cards and have each student draw a card. They can either guess the category (cloud) from the lightning bolt (item), or the categories (clouds) can be laid out in front of them so they can sort items into the appropriate category.

Weather Idioms.
30 cards (15 idioms; 15 descriptions) Separate the cards into two piles, idioms (rainbows) and meanings (clouds). Match the description with the weather-related idiom.
Speech Problems: Normal Disfluency

Normal disfluency is stuttering that begins during a child's intensive language-learning years and resolves on its own sometime before puberty. It is considered a normal phase of language development. About 75 out of 100 children who stutter get better without treatment. The most common normal disfluency in children younger than age 3 is the repetition of one-syllable words or parts of words, especially at the beginning of sentences ("I-I want that"). After age 3, children with normal disfluencies most often repeat whole words ("You-you-you") or phrases ("I see—I see—I see"). Other problems may include:
- Hesitation with interjection. ("I played on the ... uh ... swing.")
- Incomplete sentences with change of focus. ("My bear—the towel is dry.")
Symptoms may occur in phases. There may be periods of days or weeks when they occur frequently, and then almost disappear, only to begin again. Children with normal disfluencies do not usually have physical symptoms, such as eye-blinking or obvious frustration. They do not try to avoid speaking or seem bothered by their speech. They may not even appear to notice. Stuttering that follows the pattern of normal disfluency occurs only once in every 10 sentences or less. Many parents recognize these symptoms as a normal part of speech development. If you have any concerns about your child's speech, talk with your child's doctor.

Primary Medical Reviewer: Susan C. Kim, MD - Pediatrics
Specialist Medical Reviewer: Louis Pellegrino, MD - Developmental Pediatrics
Current as of: September 9, 2014
Are you confused about the language of gender identity? How do we communicate about sexuality and gender without a common vocabulary in English? Lesbian, gay, bisexual and transgender people are everywhere, and it takes a lot of courage for them to come out as members of the LGBT community. Well, that is what this lesson is all about. We will look at the common terms used in the English language for gender identification. LGBT is an inclusive initialism for those self-identifying as lesbian, gay, bisexual, or transgender. Another term used for the LGBT community is Queer. Lesbian – A term used for women sexually attracted to other women. Slang words used for lesbians are: Dyke (offensive); Butch (a masculine lesbian). Gay is a term that primarily refers to a homosexual person or the trait of being homosexual, especially a man attracted to another man. A slang word used for gay men is Fag (offensive). Bisexual – A term for a person sexually attracted or responsive to both the same sex and the opposite sex. Slang terms used for bisexual people are: Heteroflexible; Swings both ways. Transgender is the state of one's gender identity or gender expression not matching one's assigned sex. A transsexual is a person in which the sex-related structures of the brain that define gender identity are exactly opposite the physical sex organs of the body. Put even more simply, a transsexual is a mind that is literally, physically, trapped in a body of the opposite sex.
Oral allergy syndrome, also known as pollen-food syndrome, is caused by cross-reacting allergens found in both pollen and raw fruits, vegetables, or some tree nuts. The immune system recognizes the pollen and the similar proteins in the food and directs an allergic response to them. People affected by oral allergy syndrome can usually eat the same fruits or vegetables in cooked form, because the proteins are distorted during the heating process so that the immune system no longer recognizes the food. Oral allergy syndrome typically does not appear in young children; the onset is more common in older children, teens, and young adults who have been eating the fruits or vegetables in question for years without any problems. Those with oral allergy syndrome typically have an allergy to birch, ragweed, or grass pollens.
Vitamin B12 Deficiency in the Millions. Vitamin B12 deficiency occurs when your blood levels of vitamin B12 drop to an unhealthy low. If you have vitamin B12 deficiency for an extended period, then you are at risk for pernicious anemia. Today, experts believe that vitamin B12 deficiency is an overlooked epidemic striking millions of US citizens. How common is B12 deficiency? In 2000, the United States Department of Agriculture stated that nearly two-fifths of all US citizens had some form of vitamin B12 deficiency. Their source of information was the Framingham Offspring Study, which found vitamin B12 deficiency in nearly 40% of 3,000 Framingham, Massachusetts residents between the ages of 26 and 83. “I think there is a lot of undetected vitamin B12 deficiency out there,” said study author Katherine Tucker. Today, reports indicate that close to 47 million Americans suffer from middle-low to nearly depleted levels of vitamin B12. So why do government reports such as the National Health and Nutrition Examination Survey claim that the prevalence of vitamin B12 deficiency among Americans is much lower: closer to 3% with severely low levels, and 20% with borderline B12 anemia? Vitamin B12 deficiency is often misdiagnosed and ignored by doctors for many reasons. First, we’ve been led to believe that pernicious anemia is no longer a fatal or even detrimental disease, so it has essentially fallen off the radar. Many doctors no longer test for vitamin B12 deficiency in their patients, because they believe that it is a non-issue. Second, standards for detecting vitamin B12 deficiency are remarkably lax and inefficient. Serum vitamin B12 screenings only look for lethally low levels of vitamin B12, which occur in only a small percentage of people with pernicious anemia. Middle-low ranges of vitamin B12 depletion that nevertheless cause debilitating symptoms are often ignored.
Finally, even people with “normal” levels of vitamin B12 in their system may exhibit symptoms of vitamin B12 deficiency, as the blood screenings don’t separate active vitamin B12 from stored vitamin B12. This is an important yet overlooked distinction, as only active molecules of vitamin B12 are able to carry out the biochemical functions necessary for survival. B12 deficiency in vegetarians. According to a recent report on vitamin B12 deficiency among vegetarians, vegans are at a higher risk for developing anemia from low vitamin B12 levels compared with vegetarians, and people who follow a vegetarian diet from birth are more at risk than those who made a change to their diet in adulthood. In the scientific study conducted by the Department of Nutrition Science, the risk for vitamin B12 deficiency among vegetarians is as follows: - Pregnant women: 62% - Children: 25-86% - Teens: 21-41% - Elderly: 11-90% Signs of B12 deficiency. Some of the early signs of vitamin B12 deficiency are often mistaken for chronic depression, anxiety, or age-related dementia. Since vitamin B12 is needed for maintaining myelin, some of the symptoms of low vitamin B12 mimic those of multiple sclerosis. Symptoms of vitamin B12 deficiency include: - Constant fatigue - Memory loss - Brain fog - Poor concentration - Decreased motor control - “Pins and needles” in hands and feet - Muscle spasms, twitches - Sleep disturbances - Sore, burning red tongue Do you currently get prescriptions for vitamin B12 shots? If so, do you feel that you don’t get enough to prevent symptoms between doses?
Hello, students. My name is Dr. Martina Shabram. And I will be your instructor for today's lesson. I'm genuinely excited to teach you these concepts. So let's get started. What are we learning today? Well, this lesson covers some important grammatical issues that can arise with pronouns and antecedents. We'll review what an antecedent is, how pronouns and antecedents need to agree, and we'll practice finding and correcting errors in agreement. So a pronoun is a word that stands in for a noun or noun phrase. And an antecedent is what we call the word that a pronoun refers to and stands in for. These elements, then, need to agree with each other, which we describe as pronoun-antecedent agreement, which is agreement in number and other features of a pronoun and its antecedent. Now in order to create clear sentences, pronouns need clear, unambiguous antecedents. The one exception to this rule is when we use indefinite pronouns, which we'll review a little later in this lesson. If we're not using indefinite pronouns, we'd end up with pronoun reference errors if the antecedent isn't clearly referenced by the pronoun, or if it's not clear which antecedent the pronoun is referencing. So let's look at those relationships a little more closely. Pronouns should always agree with their antecedent in number and gender. Personal pronouns, for example, are different based on the gender of the person being described. And they are always either plural or singular. So if the pronoun is singular, so too must the antecedent that it's referencing be singular. Likewise, if an antecedent is plural, then the pronoun that will refer to it needs to also be plural. Here are some examples. We went into the classroom and took our seats. Notice here that the pronoun we is plural. So our is likewise plural. My dad ate his cookies. Here, my dad is singular and specifically male. So his replaces my dad with the singular male version of those words.
If you find pronoun-antecedent errors, you'll definitely want to fix them to make sure that your readers understand your intended meaning. That's one of the things we often do in the editing stage of the writing process. Now sometimes, as I've already previewed, we have pronouns that correctly don't refer to any specific antecedent. These pronouns replace nouns without being specific about which nouns they are standing in for. This is one way to refer broadly, such as if I wanted to say, everyone is going to love my cookies. Everyone there is the indefinite pronoun because it refers broadly to a nonspecific group of people. Even though indefinite pronouns are a special kind of pronoun, they still have to follow the rest of the rules. They need to be singular or plural. They need to agree with the number of their verbs. And they need to match in number and gender any pronouns that end up referring to them. For example, something about my cookies makes them delicious. Something is our indefinite. And it is singular. So the verb makes-- that's also given in the singular form. Now sometimes, these indefinite pronouns will be mistaken as plural, even though they're actually singular. So be particularly careful with the singular indefinites-- anyone, someone, nobody, everybody, anything, and something. Just think about those roots-- one, body, thing. Those are all singular. That helps you remember that those indefinite pronouns are also singular. If, in contrast, you need to use a plural indefinite, try something like few, several, or both. So in order to identify and correct errors in pronoun and antecedent relationships, I'm going to give you a very short essay. Take a moment to read through it by pausing and spot the pronoun errors that you can find. And then press play when you're ready. So what kinds of pronouns and pronoun errors do we see here? Let's start by highlighting all of the pronouns we see. OK. Here we see a pronoun doing its job correctly.
They is the plural personal pronoun that isn't gender specific. And it's referring to teachers, which is also plural and which isn't gendered. And here, we have an indefinite pronoun that refers to a hypothetical student. See how the verb has is singular to match? If the pronoun itself were plural, we'd need to write have. So now here, I do see an error. When we're referring to anyone but aren't specifying gender, we need to use he or she, not they, because they is plural. So let's correct that and also correct the antecedents. Any other errors? Well, here's another one-- they is. They is plural. So it needs the plural form of to be-- they are. Conversely, here the error is that everything is singular. And so the verb here should be is. And here's another one. Robots is plural. So it here should be they. So what did we learn today? We covered the relationships between pronouns and their antecedents. We learned about how some particular kinds of pronouns work, how they relate to their antecedents, and how to spot and remedy errors in pronoun-antecedent reference. Well, students, I hope you had as much fun as I did. Thank you. Antecedent: the word that a pronoun refers to and stands in for. Pronoun: a word that stands in for a noun or noun phrase. Pronoun-antecedent agreement: agreement in number and other features of a pronoun and its antecedent.
What is DNA fingerprinting? Students will learn how gel electrophoresis works and how it is used in fields like forensics to study DNA of various individuals. They will conduct an experiment to find out if different food dyes use the same colors. Gel Electrophoresis Materials Checklist (Please Review) A presentation to go along with the gel electrophoresis activity is located here:
STAINING METHODS IN MICROBIOLOGY. Simple stains are also referred to as monochrome stains, since only one dye is employed for the colouration of a bacterial smear (the act of taking bacteria from a lesion or area, spreading them on a slide, and staining them for microscopic examination). The surface of a bacterial cell has an overall acidic characteristic because of the large number of carboxyl groups located on the cell surface due to acidic amino acids. Therefore, when ionization of the carboxyl groups takes place, it imparts a negative charge to the cell surface as per the following equation: COOH → COO− + H+. The H+ is removed and the surface of the bacterium becomes negatively charged, so a positively charged dye such as methylene blue attaches to the negatively charged surface and gives it a coloured appearance: Methylene blue chloride → Methylene blue+ + Cl−. Negative staining is a technique (used mainly in electron microscopy) by which bacterial cells are not stained, but are made visible against a dark background. Acidic dyes like eosin and nigrosin are employed for this method. Though this staining technique is not very popular, it has an advantage over the direct or positive staining methods for the study of cell morphology, because the cells do not receive vigorous physical or chemical treatments. The colouring component of an acidic dye, e.g. eosin in sodium eosinate, carries a negative charge, so it does not combine with the negatively charged bacterial cell surface. Instead, it forms a deposit around the cell, so that the bacterial cell appears colourless against a dark background. Some suitable negative stains include ammonium molybdate, uranyl acetate, uranyl formate, phosphotungstic acid, osmium tetroxide, osmium ferricyanide and auroglucothionate. These have been chosen because they scatter electrons well and also adsorb well to biological matter.
The method is used to view viruses, bacteria, bacterial flagella, biological membrane structures and proteins or protein aggregates, which all have a low electron-scattering power. A staining procedure which differentiates or distinguishes between types of bacteria is termed a differential staining technique. Simple staining methods impart the same colour to all bacteria and other biological material, though there may be slight variation in shade. Differential staining methods, on the other hand, impart a distinctive colour only to certain types of bacteria. The basic principle underlying this differentiation is that cells differ in their chemical and physical properties and, as a result, react differently with the staining reagents. A differential staining procedure utilizes more than one stain. In some techniques the stains are applied separately, while in others they are applied in combination. There are two most important differential stains, namely, (A) Gram stain and (B) Acid-fast stain. (A) Gram Stain. The Gram stain is one of the most important and widely used differential stains. It has great taxonomic significance and is often the first step in the identification of an unknown prokaryotic organism. This technique divides bacteria into two groups: (i) Gram positive, those which retain the primary dye (crystal violet) and appear deep violet in colour, and (ii) Gram negative, which lose the primary dye on application of the decolourizer and take the colour of the counterstain, such as safranin or basic fuchsin. Gram-positive bacteria have a thick mesh-like cell wall made of peptidoglycan (50-90% of the cell wall), which stains purple, while gram-negative bacteria have a thinner layer (10% of the cell wall), which stains pink. Gram-negative bacteria also have an additional outer membrane which contains lipids.
There are four basic steps of the Gram stain: applying a primary stain (crystal violet) to a heat-fixed smear of a bacterial culture, adding a trapping agent (Gram's iodine), rapid decolorization with alcohol or acetone, and counterstaining with safranin. Basic fuchsin is sometimes substituted for safranin since it will more intensely stain anaerobic bacteria, but it is much less commonly employed as a counterstain. Crystal violet (CV) dissociates in aqueous solutions into CV+ and chloride (Cl−) ions. These ions penetrate through the cell wall and cell membrane of both gram-positive and gram-negative cells. The CV+ ion interacts with negatively charged components of bacterial cells and stains the cells purple. Iodine (I− or I3−) interacts with CV+ and forms large complexes of crystal violet and iodine (CV–I) within the inner and outer layers of the cell. Iodine is often referred to as a mordant, but it is in fact a trapping agent that prevents the removal of the CV–I complex and thereby colors the cell. When a decolorizer such as alcohol or acetone is added, it interacts with the lipids of the cell membrane. A gram-negative cell loses its outer membrane, and the lipopolysaccharide layer is left exposed. The CV–I complexes are washed from the gram-negative cell along with the outer membrane. In contrast, a gram-positive cell becomes dehydrated by the ethanol treatment. The large CV–I complexes become trapped within the gram-positive cell due to the multilayered nature of its peptidoglycan. After decolorization, the gram-positive cell remains purple and the gram-negative cell loses its purple color. The counterstain, usually positively charged safranin or basic fuchsin, is applied last to give decolorized gram-negative bacteria a pink or red color. Acid-Fast Stains. Acid-fast staining is another widely used differential staining procedure in bacteriology.
This stain was developed by Paul Ehrlich in 1882, during his work on the etiology of tuberculosis (5). Some bacteria resist decolourization by both acid and alcohol and hence are referred to as acid-fast organisms. Acid alcohol is a very intensive decolourizer. This staining technique divides bacteria into two groups: (i) acid-fast and (ii) non-acid-fast. The procedure is extensively used in the diagnosis of tuberculosis and leprosy. The acid-fastness of certain Mycobacteria and some species of Nocardia is correlated with their high lipid content. Because of the high lipid content of the cell wall, in some cases 60% (w/w), acid-fast cells have relatively low permeability to dye and hence are difficult to stain. To stain these bacteria, penetration of the primary dye is facilitated with 5% aqueous phenol, which acts as a chemical intensifier; in addition, heat is applied, which acts as a physical intensifier. Once these cells are stained, they are difficult to decolourize. The acid-fast stain proposed independently by Ziehl (6) and Neelsen (7) in 1882-1883 is the one commonly used today; its staining reagents are much more stable than those described by Ehrlich. The procedure for staining is as follows: 1. Prepare a smear and fix it by gentle heat. 2. Flood the smear with carbol fuchsin (S19) and heat the slide from below until steam rises, for 5 minutes. Do not boil, and ensure that the stain does not dry out. 3. Allow the slide to cool for 5 minutes to prevent breakage of the slide in the subsequent step. 4. Wash well with water. 5. Decolourize the smear with 20% sulphuric acid until red colour no longer comes out. 6. Wash with water. 7. Counterstain with a 1% aqueous solution of malachite green or Loeffler's methylene blue (S18) for 15-20 seconds. 8. Wash, blot dry and examine under the oil-immersion objective. Bacterial endospores are metabolically inactive, highly resistant structures produced by some bacteria as a defensive strategy against unfavorable environmental conditions.
The bacteria can remain in this suspended state until conditions become favorable and they can germinate and return to their vegetative state. The primary stain applied is malachite green, which stains both vegetative cells and endospores. Heat is applied to help the primary stain penetrate the endospore. The cells are then decolorized with water, which removes the malachite green from the vegetative cell but not the endospore. Safranin is then applied to counterstain any cells which have been decolorized. At the end of the staining process, vegetative cells will be pink, and endospores will be dark green.
A crane is a lifting machine equipped with a winder, wire ropes or chains, and sheaves that can be used both to lift and lower materials and to move them horizontally. It uses one or more simple machines to create mechanical advantage and thus move loads beyond the normal capability of a human. Cranes are commonly employed in the transport industry for the loading and unloading of freight; in the construction industry for the movement of materials; and in the manufacturing industry for the assembling of heavy equipment. The first cranes were invented by the Ancient Greeks and were powered by men or beasts of burden, such as donkeys. These cranes were used for the construction of tall buildings. Larger cranes were later developed, employing human treadwheels and permitting the lifting of heavier weights. In the High Middle Ages, harbour cranes were introduced to load and unload ships and assist with their construction; some were built into stone towers for extra strength and stability. The earliest cranes were constructed from wood, but cast iron and steel took over with the coming of the Industrial Revolution. For many centuries, power was supplied by the physical exertion of men or animals, although hoists in watermills and windmills could be driven by harnessed natural power. The first 'mechanical' power was provided by steam engines, the earliest steam crane being introduced in the 18th or 19th century, with many remaining in use well into the late 20th century. Modern cranes usually use internal combustion engines or electric motors and hydraulic systems to provide a much greater lifting capability than was previously possible, although manual cranes are still utilised where the provision of power would be uneconomic. Cranes exist in an enormous variety of forms, each tailored to a specific use.
Sizes range from the smallest jib cranes, used inside workshops, to the tallest tower cranes, used for constructing high buildings, and the largest floating cranes, used to build oil rigs and salvage sunken ships. This article also covers lifting machines that do not strictly fit the above definition of a crane, but are generally known as cranes, such as stacker cranes and loader cranes. The crane for lifting heavy loads was invented by the Ancient Greeks in the late 6th century BC. The archaeological record shows that no later than c.515 BC, distinctive cuttings for both lifting tongs and lewis irons begin to appear on stone blocks of Greek temples. Since these holes point to the use of a lifting device, and since they are to be found either above the center of gravity of the block, or in pairs equidistant from a point over the center of gravity, they are regarded by archaeologists as the positive evidence required for the existence of the crane. The introduction of the winch and pulley hoist soon led to a widespread replacement of ramps as the main means of vertical motion. For the next two hundred years, Greek building sites witnessed a sharp drop in the weights handled, as the new lifting technique made the use of several smaller stones more practical than fewer, larger ones. In contrast to the archaic period with its tendency to ever-increasing block sizes, Greek temples of the classical age like the Parthenon invariably featured stone blocks weighing less than 15-20 tons. Also, the practice of erecting large monolithic columns was practically abandoned in favour of using several column drums.
Although the exact circumstances of the shift from ramp to crane technology remain unclear, it has been argued that the volatile social and political conditions of Greece were more suitable to the employment of small, professional construction teams than of large bodies of unskilled labour, making the crane preferable to the Greek polis over the more labour-intensive ramp which had been the norm in the autocratic societies of Egypt or Assyria. The first unequivocal literary evidence for the existence of the compound pulley system appears in the Mechanical Problems (Mech. 18, 853a32-853b13) attributed to Aristotle (384-322 BC), but perhaps composed at a slightly later date. Around the same time, block sizes at Greek temples began to match their archaic predecessors again, indicating that the more sophisticated compound pulley must have found its way to Greek construction sites by then. The heyday of the crane in ancient times came under the Roman Empire, when construction activity soared and buildings reached enormous dimensions. The Romans adopted the Greek crane and developed it further. We are relatively well informed about their lifting techniques thanks to rather lengthy accounts by the engineers Vitruvius (De Architectura 10.2, 1-10) and Heron of Alexandria (Mechanica 3.2-5). There are also two surviving reliefs of Roman treadwheel cranes offering pictorial evidence, with the Haterii tombstone from the late first century AD being particularly detailed. The simplest Roman crane, the Trispastos, consisted of a single-beam jib, a winch, a rope, and a block containing three pulleys. Having thus a mechanical advantage of 3:1, it has been calculated that a single man working the winch could raise 150 kg (3 pulleys x 50 kg = 150 kg), assuming that 50 kg represents the maximum effort a man can exert over a longer time period.
Heavier crane types featured five pulleys (Pentaspastos) or, in the case of the largest one, a set of three by five pulleys (Polyspastos), and came with two, three or four masts, depending on the maximum load. The Polyspastos, when worked by four men at both sides of the winch, could lift 3000 kg (3 ropes x 5 pulleys x 4 men x 50 kg = 3000 kg). If the winch was replaced by a treadwheel, the maximum load even doubled to 6000 kg with only half the crew, since the treadwheel possesses a much bigger mechanical advantage due to its larger diameter. This meant that, in comparison to the construction of the Egyptian pyramids, where about 50 men were needed to move a 2.5-ton stone block up the ramp (50 kg per person), the lifting capability of the Roman Polyspastos proved to be 60 times higher (3000 kg per person). However, numerous extant Roman buildings which feature much heavier stone blocks than those handled by the Polyspastos indicate that the overall lifting capability of the Romans went far beyond that of any single crane. At the temple of Jupiter at Baalbek, for instance, the architrave blocks weigh up to 60 tons each, and the corner cornice blocks even over 100 tons, all of them raised to a height of ca. 19 m above the ground. In Rome, the capital block of Trajan's Column weighs 53.3 tons and had to be lifted to a height of ca. 34 m. It is assumed that Roman engineers accomplished lifting these extraordinary weights by two measures: First, as suggested by Heron, a lifting tower was set up, whose four masts were arranged in the shape of a quadrangle with parallel sides, not unlike a siege tower, but with the column in the middle of the structure (Mechanica 3.5). Second, a multitude of capstans were placed on the ground around the tower, for, although having a lower leverage ratio than treadwheels, capstans could be set up in higher numbers and run by more men (and, moreover, by draught animals).
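The capacity arithmetic quoted in this passage can be sketched in a few lines of Python. The 50 kg sustained effort per man, the pulley counts, and the Baalbek figures are the numbers from the text; the function and variable names are my own:

```python
# Sketch of the Roman crane capacity arithmetic described above.
# The 50 kg sustained effort per man is the figure the text assumes.

EFFORT_PER_MAN_KG = 50

def winch_capacity(pulleys: int, men: int) -> int:
    """Liftable load = mechanical advantage (pulley count) x crew x effort."""
    return pulleys * men * EFFORT_PER_MAN_KG

# Trispastos: 3 pulleys, one man at the winch -> 150 kg
trispastos = winch_capacity(pulleys=3, men=1)

# Polyspastos: 3 ropes x 5 pulleys, four men -> 3000 kg
polyspastos = winch_capacity(pulleys=3 * 5, men=4)

# A treadwheel doubled the load while halving the crew -> 6000 kg with 2 men
treadwheel_load = 2 * polyspastos

# Per-person comparison with the pyramid ramp (50 kg per person):
per_person_treadwheel = treadwheel_load / 2   # 3000 kg per person
ratio_vs_ramp = per_person_treadwheel / 50    # the 60x figure quoted above

# Baalbek capstan estimate from the surrounding passage:
# a ~60-ton architrave block with 8 lewis-iron holes
tons_per_capstan = 60 / 8   # 7.5 tons per lewis iron, i.e. per capstan
```

This is only the text's own back-of-the-envelope model (ideal mechanical advantage, no friction), not a full engineering treatment.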
This use of multiple capstans is also described by Ammianus Marcellinus (17.4.15) in connection with the lifting of the Lateranense obelisk in the Circus Maximus (ca. 357 AD). The maximum lifting capability of a single capstan can be established by the number of lewis iron holes bored into the monolith. In the case of the Baalbek architrave blocks, which weigh between 55 and 60 tons, eight extant holes suggest an allowance of 7.5 tons per lewis iron, that is, per capstan. Lifting such heavy weights in a concerted action required a great amount of coordination between the work groups applying the force to the capstans. During the High Middle Ages, the treadwheel crane was reintroduced on a large scale after the technology had fallen into disuse in western Europe with the demise of the Western Roman Empire. The earliest reference to a treadwheel (magna rota) reappears in archival literature in France about 1225, followed by an illuminated depiction in a manuscript of probably also French origin dating to 1240. In navigation, the earliest uses of harbor cranes are documented for Utrecht in 1244, Antwerp in 1263, Brugge in 1288 and Hamburg in 1291, while in England the treadwheel is not recorded before 1331. Generally, vertical transport was done more safely and cheaply by cranes than by customary methods. Typical areas of application were harbors, mines and, in particular, building sites, where the treadwheel crane played a pivotal role in the construction of the lofty Gothic cathedrals. Nevertheless, both archival and pictorial sources of the time suggest that newly introduced machines like treadwheels or wheelbarrows did not completely replace more labor-intensive methods like ladders, hods and handbarrows. Rather, old and new machinery continued to coexist on medieval construction sites and harbors.
Apart from treadwheels, medieval depictions also show cranes powered manually by windlasses with radiating spokes, by cranks and, by the 15th century, by windlasses shaped like a ship's wheel. To smooth out irregularities of impulse and get over 'dead spots' in the lifting process, flywheels are known to have been in use as early as 1123. The medieval treadwheel was a large wooden wheel turning around a central shaft with a treadway wide enough for two workers walking side by side. While the earlier 'compass-arm' wheel had spokes directly driven into the central shaft, the more advanced 'clasp-arm' type featured arms arranged as chords to the wheel rim, making it possible to use a thinner shaft and thus providing a greater mechanical advantage. Contrary to a popularly held belief, cranes on medieval building sites were placed neither on the extremely lightweight scaffolding used at the time nor on the thin walls of the Gothic churches, which were incapable of supporting the weight of both hoisting machine and load. Rather, cranes were placed in the initial stages of construction on the ground, often within the building. When a new floor was completed, and massive tie beams of the roof connected the walls, the crane was dismantled and reassembled on the roof beams, from where it was moved from bay to bay during construction of the vaults. Thus, the crane 'grew' and 'wandered' with the building, with the result that today all extant construction cranes in England are found in church towers above the vaulting and below the roof, where they remained after building construction for bringing material for repairs aloft. Less frequently, medieval illuminations also show cranes mounted on the outside of walls, with the stand of the machine secured to putlogs.
In contrast to modern cranes, medieval cranes and hoists - much like their counterparts in Greece and Rome - were primarily capable of a vertical lift and were not used to move loads a considerable distance horizontally as well. Accordingly, lifting work was organized at the workplace in a different way than today. In building construction, for example, it is assumed that the crane lifted the stone blocks either from the bottom directly into place, or from a place opposite the centre of the wall from where it could deliver the blocks to two teams working at each end of the wall. Additionally, the crane master, who usually gave orders to the treadwheel workers from outside the crane, was able to manipulate the movement laterally by a small rope attached to the load. Slewing cranes, which allowed a rotation of the load and were thus particularly suited for dockside work, appeared as early as 1340. While ashlar blocks were directly lifted by sling, lewis or devil's clamp (German Teufelskralle), other objects were first placed in containers like pallets, baskets, wooden boxes or barrels. It is noteworthy that medieval cranes rarely featured ratchets or brakes to keep the load from running backward. This curious absence is explained by the high friction force exerted by medieval treadwheels, which normally prevented the wheel from accelerating beyond control. According to the present state of knowledge, the stationary harbor crane was unknown in antiquity and is considered a new development of the Middle Ages. The typical harbor crane was a pivoting structure equipped with double treadwheels. These cranes were placed dockside for the loading and unloading of cargo, where they replaced or complemented older lifting methods like see-saws, winches and yards.
Two different types of harbor cranes can be identified with a varying geographical distribution: while gantry cranes, which pivoted on a central vertical axle, were commonly found on the Flemish and Dutch coast, German sea and inland harbors typically featured tower cranes, where the windlass and treadwheels were situated in a solid tower with only the jib arm and roof rotating. Interestingly, dockside cranes were not adopted in the Mediterranean region and the highly developed Italian ports, where authorities continued to rely on the more labor-intensive method of unloading goods by ramps beyond the Middle Ages. Unlike construction cranes, where the work speed was determined by the relatively slow progress of the masons, harbor cranes usually featured double treadwheels to speed up loading. The two treadwheels, whose diameter is estimated at 4 m or larger, were attached to each side of the axle and rotated together. Today, according to one survey, fifteen treadwheel harbor cranes from pre-industrial times are still extant throughout Europe. Besides these stationary cranes, floating cranes, which could be flexibly deployed in the whole port basin, came into use by the 14th century. There are two major considerations in the design of cranes: the crane must be able to lift a load of a specified weight, and it must remain stable and not topple over when the load is lifted and moved to another location. Cranes, like all machines, obey the principle of conservation of energy: the energy delivered to the load cannot exceed the energy put into the machine. For example, if a pulley system multiplies the applied force by ten, then the load moves only one tenth as far as the applied force.
Since energy is proportional to force multiplied by distance, the output energy is kept roughly equal to the input energy (in practice slightly less, because some energy is lost to friction and other inefficiencies). For a crane to be stable, the sum of all moments about any point, such as the base of the crane, must equal zero. In practice, the magnitude of load that is permitted to be lifted (called the "rated load" in the US) is some value less than the load that will cause the crane to tip. Under US standards for mobile cranes, the stability-limited rated load for a crawler crane is 75% of the tipping load. The stability-limited rated load for a mobile crane supported on outriggers is 85% of the tipping load. Standards for cranes mounted on ships or offshore platforms are somewhat stricter because of the dynamic load imposed on the crane by vessel motion. Additionally, the stability of the vessel or platform must be considered. For stationary pedestal- or kingpost-mounted cranes, the moment created by the boom, jib, and load is resisted by the pedestal base or kingpost. Stress within the base must be less than the yield stress of the material or the crane will fail. Different types of crane are used for maintenance work, recovery operations and freight loading in goods yards. To increase the horizontal reach of the hoist, the boom may be extended by adding a jib to the top. The jib can be fixed or, in more complex cranes, luffing (that is, able to be raised and lowered). The tower crane is a modern form of balance crane. Fixed to the ground (or "jacked up" and supported by the structure as the structure is being built), tower cranes often give the best combination of height and lifting capacity and are used in the construction of tall buildings. To save space and to provide stability, the vertical part of the crane is often mounted on large beams, braced onto the completed structure, and lifted from one floor to the next as the structure grows.
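The US rated-load percentages quoted earlier can be sketched in a few lines. This is an illustrative toy only, not an engineering tool; the function name and the factor table are my own, and they simply encode the 75% and 85% figures from the text:

```python
# Stability-limited rated load as a percentage of the tipping load,
# per the US figures quoted in the text (illustration only).
DERATING_PERCENT = {
    "crawler": 75,    # crawler crane: 75% of tipping load
    "outrigger": 85,  # mobile crane on outriggers: 85% of tipping load
}

def rated_load(tipping_load_tons: float, crane_type: str) -> float:
    """Return the stability-limited rated load for a given tipping load."""
    return tipping_load_tons * DERATING_PERCENT[crane_type] / 100

print(rated_load(100.0, "crawler"))    # 75.0
print(rated_load(100.0, "outrigger"))  # 85.0
```

Real load charts depend on boom length, radius, counterweight configuration and more; the single percentage here is only the stability limit described above.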
The jib (colloquially, the 'boom') and counter-jib are mounted to the turntable, where the slewing bearing and slewing machinery are located. The counter-jib carries a counterweight of concrete blocks, and the jib suspends the load from the trolley. The hoist motor and transmissions are located on the mechanical deck on the counter-jib, while the trolley motor is located on the jib. The crane operator either sits in a cabin at the top of the tower or (more rarely) controls the crane by radio remote control from the ground. In the first case the operator's cabin is usually located at the top of the tower, attached to the turntable, but it can be mounted on the jib or partway down the tower. The lifting hook is operated by using electric motors to manipulate wire rope cables through a system of sheaves. In order to hook and unhook the loads, the operator works in conjunction with a signaller (known as a 'rigger'). They are most often in radio contact, and always use hand signals. The rigger directs the schedule of lifts for the crane, and is responsible for the safety of the rigging and loads. A tower crane is usually assembled by a telescopic-jib (mobile) crane of greater reach, and in the case of tower cranes that have risen while constructing very tall skyscrapers, a smaller crane (or derrick) will be lifted to the roof of the completed tower to dismantle the tower crane afterwards. A self-assembling tower crane lifts itself off the ground using jacks, allowing the next section of the tower to be inserted at ground level. It is often claimed that a large fraction of the tower cranes in the world are in use in Dubai; the exact percentage remains an open question. The hammerhead design evolved first in Germany around the turn of the 20th century and was adopted for use in British shipyards to support the battleship construction program from 1904 to 1914.
The ability of the hammerhead crane to lift heavy weights was useful for installing large pieces of battleships such as armour plate and gun barrels. Hammerhead cranes were also installed in naval shipyards in Japan and in the USA. The British Government also installed a hammerhead crane at the Singapore Naval Base (1938) and later a copy of the crane was installed at Garden Island Naval Dockyard in Sydney (1951). These cranes provided repair support for the battle fleet operating far from Great Britain. The principal engineering firm for hammerhead cranes in the British Empire was Sir William Arrol & Co Ltd. A crane mounted on a truck carrier provides the mobility for this type of crane. Generally, these cranes are designed to be able to travel on streets and highways, eliminating the need for special equipment to transport a crane to the jobsite. When working on the jobsite, outriggers are extended horizontally from the chassis then down vertically to level and stabilize the crane while stationary and hoisting. Many truck cranes possess limited slow-travelling capability (just a few miles per hour) while suspending a load. Great care must be taken not to swing the load sideways from the direction of travel, as most of the anti-tipping stability then lies in the strength and stiffness of the chassis suspension. Most cranes of this type also have moving counterweights for stabilization beyond that of the outriggers. Loads suspended directly over the rear remain more stable, as most of the weight of the truck crane itself then acts as a counterweight to the load. Factory-calculated charts (or electronic safeguards) are used by the crane operator to determine the maximum safe loads for stationary (outriggered) work as well as (on-rubber) loads and travelling speeds. Truck cranes range in lifting capacity from about 14.5 US tons to about 1300 US tons. 
A rough terrain crane is mounted on an undercarriage with four rubber tires and is designed for pick-and-carry operations and for off-road and "rough terrain" applications. Outriggers that extend horizontally and vertically are used to level and stabilize the crane for hoisting. These telescopic cranes are single-engine machines, where the same engine powers the undercarriage and the crane, similar to a crawler crane. However, in a rough terrain crane, the engine is usually mounted in the undercarriage rather than in the upper, as in the crawler crane. An all-terrain (AT) crane is a mobile crane that has the necessary equipment to travel at high speed on public roads and highways and on the job site over rough terrain, using all-wheel and crab steering. ATs combine the roadability of a truck-mounted crane with the manoeuvrability of a rough terrain crane. ATs have 2 to 9 axles and are designed for lifting loads up to 1,200 metric tons. Crawler cranes range in lifting capacity from about 40 US tons to 3,500 US tons. A gantry crane has a hoist in a trolley which runs horizontally along gantry rails, usually fitted underneath a beam spanning between uprights which themselves have wheels, so that the whole crane can move at right angles to the direction of the gantry rails. These cranes come in all sizes, and some can move very heavy loads, particularly the extremely large examples used in shipyards or industrial installations. A special version is the container crane (or "Portainer" crane, named after the first manufacturer), designed for loading and unloading ship-borne containers at a port. Floating cranes are used mainly in bridge building and port construction, but they are also used for occasional loading and unloading of especially heavy or awkward loads on and off ships. Some floating cranes are mounted on a pontoon; others are specialized crane barges with a lifting capacity exceeding 10,000 tons that have been used to transport entire bridge sections.
Floating cranes have also been used to salvage sunken ships. Cranes located on ships themselves are used for cargo operations where no shore unloading facilities are available. Most are diesel-hydraulic or electric-hydraulic. A jib crane is a type of crane in which a horizontal member (jib or boom), supporting a moveable hoist, is fixed to a wall or to a floor-mounted pillar. Jib cranes are used in industrial premises and on military vehicles. The jib may swing through an arc, to give additional lateral movement, or be fixed. Similar cranes, often known simply as hoists, were fitted on the top floor of warehouse buildings to enable goods to be lifted to all floors. More technically advanced types of such lifting machines are often known as 'cranes', regardless of the official definition of the term. Some notable examples follow. A loader crane (also called a knuckle-boom crane or articulating crane) is a hydraulically powered articulated arm fitted to a truck or trailer, used for loading and unloading the vehicle. The numerous jointed sections can be folded into a small space when the crane is not in use. One or more of the sections may be telescopic. Often the crane will have a degree of automation and be able to unload or stow itself without an operator's instruction. Unlike most cranes, the operator must move around the vehicle to be able to view the load; hence modern cranes may be fitted with a portable cabled or radio-linked control system to supplement the crane-mounted hydraulic control levers. In the UK and Canada, this type of crane is almost invariably known colloquially as a "Hiab", partly because this manufacturer invented the loader crane and was first into the UK market, and partly because the distinctive name was displayed prominently on the boom arm.
Cancer starts in our cells. Cells are tiny building blocks that make up the organs and tissues of our body. We have about 10 trillion cells in our bodies. Usually, our cells divide to make new cells in a controlled way. This is how our bodies grow and repair themselves. Inside almost every cell of your body is a copy of your genome, made of DNA. The genome can be thought of as the instructions for running a cell. It tells the cell what kind of cell to be – is it a skin cell or a liver cell? It also has the instructions that tell the cell when to grow and divide, and when to die. When a cell divides to become two cells, your genome is copied. Sometimes when our cells divide, mistakes happen when copying the genome. These are called mutations. They are caused by natural processes in our cells, or just by chance. They can be caused by external factors in our environment too – like radiation from sunlight. Usually, cells can repair mutations in their genome. In fact, most DNA damage is repaired immediately, with no ill effects. If the damage is very bad, cells may self-destruct instead. Or the immune system may recognise them as abnormal and kill them. This helps to protect us from cancer. But sometimes mutations in critical genes mean that a cell no longer understands its instructions, and starts to multiply out of control. It doesn't repair itself properly, and it doesn't die when it should. The abnormal cell keeps dividing and making more and more abnormal cells. These cells form a lump, which is called a tumour. In the 100,000 Genomes Project, we sequence the DNA from both the tumour and healthy cells. This means we can compare the two. Cancer whole genome sequencing allows us to detect two types of changes: germline mutations and somatic mutations. Germline mutations are changes that are inherited from your parents, or that occur spontaneously in the very early development of the foetus. These changes to the genome don't match the human reference genome that we use as a guide.
They are often passed down through generations. They can affect your chances of developing cancer; for example, changes in the BRCA1 and BRCA2 genes can lead to an increased risk of breast and ovarian cancers. We look for these changes if they are thought to be the cause of someone's cancer. We can also look for these changes in patients who don't have cancer – but only if they want us to. In this case they are called 'additional findings'. Read more about additional findings. Somatic mutations are changes that occur in the genome of cancer cells. They can be caused by environmental factors. For example, damaging UV rays from the sun cause a C base in DNA to change to a T, which can lead to tumour formation. These changes can be detected by whole genome sequencing, by comparing the sequence from the patient's blood sample with the sequence from a sample of their tumour. We can also detect changes to the cancer genome caused by a range of other environmental factors, such as smoking and viral infections, or the random changes that occur when a cancer is rapidly expanding. Both somatic and germline mutations are important when we consider a patient's care. In the 100,000 Genomes Project, we sequence the DNA from a patient's tumour and healthy cells. We compare the two sequences. This gives insight into the exact nature of the genomic changes that are causing an individual's cancer. This information can improve diagnosis. It can also help clinicians to select the treatments most likely to be effective in each individual case. WGS can also show which patients are not likely to benefit from a specific treatment. This can save unnecessary treatments and toxic side effects. Genomics makes personalised medicine possible – with a real impact on patients and their health outcomes. By the end of the Project we aim to return whole genome sequencing (WGS) results to people in time to help with their care.
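The tumour-versus-blood comparison described above can be caricatured in a few lines of code. This is a deliberately simplified sketch, not how real variant-calling pipelines work (they operate on aligned sequencing reads with quality scores, not raw strings); the function name and the short example sequences are made up for illustration:

```python
# Toy somatic-variant finder: flag positions where the tumour sequence
# differs from the patient's own germline (blood) sequence.
def somatic_candidates(germline: str, tumour: str):
    """Yield (position, germline_base, tumour_base) wherever they differ."""
    for i, (g, t) in enumerate(zip(germline, tumour)):
        if g != t:
            yield i, g, t

germline = "ACGTCCGTA"
tumour   = "ACGTTCGTA"  # contains a C-to-T change, the kind UV damage produces
print(list(somatic_candidates(germline, tumour)))  # [(4, 'C', 'T')]
```

Because the germline sequence is the patient's own, anything flagged this way is specific to the tumour rather than inherited, which is exactly the distinction between somatic and germline mutations drawn in the text.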
During the early stages of the cancer programme, most patients taking part will not benefit personally. But taking part will improve our understanding of cancer, enable research and improve care for the future. Visit the Cancer Research UK website for more resources on genetics and cancer.
Religion Curriculum Standards The development of the Archdiocesan Religion Curriculum Standards (ARCS), which encompasses both parish catechetical programs and Catholic schools, is the work of great commitment and dedication to the mission of the Catholic Church of the Archdiocese of Hartford to witness to and teach the Good News of Jesus Christ as articulated in the Scriptures and in the teachings of the Church. More than any other subject in the curriculum, Catholic religious teaching defines the nature of the Catholic school and parish catechetical programs. Through the study of religion, the students will progress beyond knowledge of precepts of the Faith to a deeper understanding and appreciation of the Spirit of the Living God dwelling in each and every person. From that awareness comes a deep respect for the dignity integral to every human being and an acceptance of the Christian's role as disciple in the building of the Kingdom. The information in this document is based on the Catechism of the Catholic Church (1997), the National Directory for Catechesis (2005), and the United States Conference of Catholic Bishops' (USCCB) publication, Doctrinal Elements of a Curriculum Framework for the Development of Catechetical Materials for Young People of High School Age (2008). References throughout the document are made to To Teach as Jesus Did (1973) as well as Pope Benedict XVI's address, "Christ Our Hope," from his Apostolic Journey to the United States in April 2008. It is a working document that evolved from the Archdiocese of Hartford Religion Guidelines, designed to be annotated by the teachers who use it. At every grade level, the religion curriculum standards are structured in strands that represent the four pillars of the Catechism of the Catholic Church: The Profession of Faith (Creed), Celebration of the Christian Mystery (Sacraments and the Mass), Life in Christ (The Ten Commandments and the Beatitudes), and Christian Prayer (The Prayer of the Believer).
All of these strands should be integrated with one another to maximize learning, and the study of religion should be an integral part of all content areas. Achievement Standards are the primary instructional targets that outline essential topics and skills in the religion curriculum that students should know, be able to do, and fully comprehend by the end of high school. Daily standards-based lesson planning enables educators to align curriculum and instruction with the standards as they have been adapted by this Archdiocese, thereby keeping the goals of our students in mind. The purpose of a standards-based curriculum is to empower all students to meet new, challenging standards of religious education. Student Objectives are the primary tasks students should be able to achieve as a result of successful instruction in the suggested numbered activities and the sub-skills listed under enabling outcomes. Student objectives must be continually assessed to ensure that all students progress toward mastery. ARCS is designed to meet the learning needs of all students in a Catholic school program and a parish catechetical program. The full curriculum is a requirement for Catholic school programs, where religion classes meet every day, and is assessed as a graduation criterion. Enabling outcomes are skills taught that will result in mastery of the student objective. Teachers are encouraged to check off outcomes as they are taught or assessed, as this will drive instruction. Enabling outcomes are suggested skills; it is at the discretion of each teacher to assess the needs of the students in a class and determine which, or all, of the outcomes should be taught. Indeed, teachers may design their own outcomes based on their mastery of the content and experience in the classroom. Therefore, it is suggested that teachers list the text correlations, resources, and assessments that work best for the outcomes listed and the outcomes they design.
To request a copy of the curriculum, contact Laura McCaffrey.
Barnacles and other marine organisms that grow on the hulls of ships present a big problem to vessel operators. Such biofouling, as it is called, slows ships down, causes them to burn more fuel, and requires that they be taken out of service every couple of years to be cleaned. Scientists are turning to nature for solutions. Consider: Studies have revealed that the skin of the long-finned pilot whale (Globicephala melas) has self-cleaning abilities. It is covered with tiny ridges, called nanoridges, which are too small to allow barnacle larvae to get a good grip. The spaces between these ridges are filled with a gel that attacks algae and bacteria. The whale secretes fresh gel as it sheds its skin. Scientists plan to adapt the whale’s self-cleaning system for the hulls of ships. In the past, antifouling paints were applied. Yet, the most commonly used of such paints have recently been banned because they are toxic to marine life. The researchers’ solution is to cover ships’ hulls with a metal mesh over an array of holes that exude a biosafe chemical. The chemical thickens into a viscous gel on contact with seawater, forming a skin that coats the entire hull. In time, this skin, about 0.7 millimeters [0.03 in.] thick, wears away, taking with it any organisms that may have been hitching a ride. The system then secretes a fresh coating of gel to cover the hull. Laboratory tests have shown that this system could reduce biofouling of ships 100-fold. And that would be a huge advantage for shipping companies, because bringing a vessel into dry dock for cleaning costs a great deal of money. What do you think? Did the pilot whale’s self-cleaning skin evolve? Or was it designed?
Learning objectives are the foundation of every course, as they determine content, presentation, in-class activities, assignments, and assessment. Well-written learning objectives clarify for students what they should be getting from the course. Ideally, learning objectives for each class meeting will map to larger objectives for the course, which then map to larger competencies for the program as a whole. Learning objectives should reflect exactly what the student should achieve. This requires use of verbs that accurately define how a student will demonstrate that they have met the learning objective. Oftentimes, we defer to Bloom's Taxonomy, a series of steps of student learning that progress from knowledge to understanding, then application, analysis, evaluation, and finally creation. Others prefer Fink's Taxonomy of Significant Learning Experiences, which includes foundational knowledge, application, integration, the human dimension, caring, and learning how to learn. These taxonomies and how they relate to learning objectives are described here: Effective Use of Learning Objectives. Here is an example: I wish students to learn the skill of reading vaginal cytology slides. This is a skill that I want them to demonstrate and that requires them to be at the level of analysis and evaluation. A learning objective might be: At the conclusion of this course, the student will be able to interpret vaginal cytology specimens for extent of cornification of vaginal epithelial cells and presence or absence of inflammation. This learning objective makes clear to the students what kind of detail I expect from them as they learn this skill, and makes clear to me what kind of activities I need to provide to give students a chance to learn the skill and to demonstrate their learning. A second example might be my desire to make sure students understand the pathogenesis, diagnosis, and management of pyometra.
I wish them to demonstrate understanding, so this is more at the level of application; I don't want them just to be able to repeat facts about pyometra, I want them to be able to use those facts. A learning objective might be: At the conclusion of this course, the student will be able to diagnose and manage uncomplicated cases of canine pyometra. If I want them to be able to diagnose pyometra, my assessment must be at a higher level than multiple-choice questions of fact. The learning objective leads me to case-based questions for assessment. Here is a list of verbs to help you create learning objectives that best define where you see students achieving a competency for a given topic using Bloom's taxonomy. Ronald M. Harden's "Learning outcomes and instructional objectives: Is there a difference?" and Marcy H. Towns' "Developing Learning Objectives and Assessment Plans at a Variety of Institutions: Examples and Case Studies" are two manuscripts describing the creation and use of learning objectives.
What's the Latest Development? Examining data from NASA's Kepler space telescope, astronomers have found that an alien planetary system bears many similarities to our own, confirming parts of planet formation theory which suggest that most planetary systems begin in similar fashion. "Researchers studying the star system Kepler-30, which is 10,000 light-years from Earth, found that its three known worlds all orbit in the same plane, lined up with the rotation of the star—just like the planets in our own solar system do." The planetary system consists of three known extrasolar planets circling a sunlike star, all of which are much larger than Earth. What's the Big Idea? Since its launch in March 2009, the Kepler space telescope has located more than 2,300 potential alien worlds, 700 of which have been confirmed as genuine planets. "Kepler uses the 'transit method,' noting the telltale brightness dips caused when a planet crosses, or transits, a star's face from the telescope's perspective. In the new study, the scientists studied Kepler observations of the extrasolar system even more closely." While we once believed our position in the cosmos was privileged, it is becoming clearer that our solar system, sun and planet may be rather ordinary. Astronomers have identified between five and ten alien planetary systems where they can further test their theories. Photo credit: Shutterstock.com
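The transit method quoted above can be illustrated with a toy light curve: a planet crossing the star's face causes a small, periodic dip in measured brightness. The synthetic data, threshold value, and function name below are invented for this sketch; real Kepler photometry requires detrending, period-folding, and statistical vetting:

```python
# Toy transit detection: flag samples where relative brightness (flux,
# normalized so 1.0 is the star's ordinary brightness) dips below a threshold.
def find_dips(flux, threshold=0.995):
    """Return indices where relative flux drops below the threshold."""
    return [i for i, f in enumerate(flux) if f < threshold]

# Flat light curve with a 1% dip (a "transit") at samples 5-7.
flux = [1.0, 1.0, 1.0, 1.0, 1.0, 0.99, 0.99, 0.99, 1.0, 1.0]
print(find_dips(flux))  # [5, 6, 7]
```

The depth of such a dip scales with the planet-to-star area ratio, which is why Kepler favors large planets like the ones found around Kepler-30.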
Daniel Solow's How to Read and Do Proofs begins with the simpler methods of mathematical proof-writing and gradually works toward the more advanced techniques typically presented in an introduction to advanced mathematics. This book accomplishes the vast majority of what it was written to do. Solow develops, with careful detail and numerous examples, all of the proof techniques typically introduced in a proof-writing class. Starting with the basic idea of logically manipulating both sides of the proposition in an attempt to connect them, he works up to more subtle and more powerful techniques involving uniqueness, methods of induction, and proof by contrapositive. Every chapter offers examples of proofs using that chapter's namesake technique, each of which is followed by a step-by-step analysis. A strong point in this book's favor is that it offers so many examples from so many different fields of mathematics. The last hundred pages or so are devoted to examples from particular subjects, including linear and abstract algebra, analysis, and set theory. Throughout the book, the author presents many ubiquitous mathematical definitions. An introduction to such a wide range of mathematics early in the curriculum will provide up-and-coming math majors with a more accurate idea of their future studies, helping to dispel the freshman fear that all mathematics is like Calculus II. The exercises are quite varied and offer a great test of the student's understanding. The explanation of proof by contradiction and the description of induction are wonderful: they are extremely intuitive and provide the reader with a satisfying understanding of how these tools function. Not everything is so intuitive, though. Some of the proofs feel a bit contrived, the logic a bit obscure; this gap may be indicative of the maturity that comes with mathematical experience. Proofs by construction don't always seem obvious.
The language that Solow creates to describe general proof methods at times becomes hard to read. Some of the sentences seem as though they should have been spoken rather than read; maybe some italics are needed to set off longer phrases. For example, "the something that happens" is often used to stand for a generic outcome following the phrase "suppose there is an object such that." When this phrase comes up every other sentence, it makes for wordy, long-winded prose. Definitions are sometimes used in exercises before the chapters in which they are introduced. This mix-up is merely confusing for an experienced reader, but could really be a problem for a beginner, particularly when the purpose of the book is to teach precise logical argumentation. This book is a very solid, detailed introduction to mathematical proof-writing. It would function well as the main text if the focus of the class were writing proofs, and it would serve excellently as a supplement to classes that offer proof-writing as a side goal: a course in discrete mathematics, an introductory class in set theory, or a basic course in topology. William Porter is an undergraduate mathematics-physics double major at a small liberal arts college. He enjoys ballroom dancing, cooking, and T. S. Eliot's poetry. He thinks that Rachmaninoff writes amazing music and rain is the best weather. His favorite Big Bang Theory character is Sheldon.
Welcome to Adventures in Chemistry! During this three-day camp, students in grades 1-3 will get the opportunity to explore the sciences through various small-scale, instructor-supervised experiments. These experiments include making silly putty, discovering the harmful effects of oil spills, exploring static electricity and the Van de Graaff generator, making our very own ice cream from liquid nitrogen, and many more exciting activities! During this week kids explore numerous types of introductory sciences, including but not limited to anatomy, ecology, physics, and engineering. Children build miniature-scale greenhouses to look at the effects of greenhouse gases. Campers get to experience how smell affects the taste of food. (Food allergies will be taken into consideration: no nuts or nut-based products will be used. It will be mainly fruits and vegetables.) Weather pending, children may collect macroinvertebrates from a small stream on campus. For a day or two, kids work in the engineering field on several small projects known as build-it-better. It is a fun-filled, thought-provoking week to get boys and girls alike interested in science. The camp sets out to show young ones the science that surrounds their daily life, leading girls and boys through lessons that they are reminded of every day. During the camp kids begin to understand what it means to be an engineer, an astronaut, a biologist and more. Over the course of the camp, well thought-out projects, examples, games and lessons cover the science behind the phases of water, water quality, gravity, geology, solar energy, bridges, how soap works, plant life, bugs, cooking and much more! In this camp children learn how to use various computer programs to create 3D objects. By the end of the camp, children will bring their creations to life using 3D printers. During our camp children will discover that the only limit to what can be printed is the limit of imagination.
Children enrolled in the energy camp will explore the many ways in which energy impacts their daily lives. Participants will learn how energy is used in transportation, heating, and in generating electricity. A focus will be given to sources of energy used in western Pennsylvania. Students will discuss energy's benefits as well as the effects that it can have on human health and the environment. Hands-on activities will include a solar heating experiment, experimentation with model wind turbines, an energy scavenger hunt and more. Students will come away with a better understanding of the amazing power of energy. In this camp we will go over basic concepts in chemistry. First and foremost, we will use the scientific method for every experiment we conduct. We will also go over concepts like gas laws and rates of reactions. There will be a good bit of learning during this camp, but there will be just as much fun and entertainment too. During the hands-on STEM K'NEX camp, students in grades 4-6 will build K'NEX catapults, K'NEX bridges, K'NEX wind-powered cars, the tallest K'NEX tower that can hold the greatest amount of weight and, finally, a Rube Goldberg K'NEX contraption. This camp gives the students hands-on experience and helps them find out why things happen, to get them interested in STEM and hopefully get them to explore STEM fields in their future. For additional camps for grades 4-6, please see the Hollidaysburg Public Library information below. In this camp we will investigate nature at both microscopic and macroscopic scales. On Day 1, students will learn about biological scales of organization and will become familiar with the use of the microscope by viewing common objects/organisms at microscopic scales. On Day 2, the students will investigate aquatic invertebrate community structure and food web relationships at Lake Saint Francis. Samples will be taken at the lake and analyzed in the lab using microscopes and other lab tools.
Finally, on Day 3 the students will hike the Saint Francis Watershed Trail and learn about watersheds, ecological succession, and ecosystem function. Students will leave the camp with an understanding of how living organisms at all levels of biological organization interact, and will look at nature from a brand new perspective.

How fast is the Flash? How strong is Superman? How can a villain who controls magnetism defeat the X-Men? This new four-day course explains the real science behind the superpowers of your favorite comic heroes using basic math and science principles and experiments.

In this camp, students learn about constellations, stars and the nature of light. The campers progress from discovering why we see rainbows, a blue sky, and green grass to figuring out the chemical composition of distant stars. Among the activities offered in this camp, students conduct experiments to investigate fundamental properties of light, make solar observations, compare the sun to other stars, and discuss solar activity and its effect on our everyday life on Earth. This camp is designed to introduce children to a wide range of concepts and laws in astronomy, physics, chemistry, biology, and Earth sciences.

This camp is designed for all students who are preparing for high school and are curious about the "water world" around them: it provides children with an in-depth look at, and appreciation for, the water they drink, swim in, flush and canoe down. In the camp students learn about the quality of water, the common and innovative processes water goes through, and the natural and geographical behavior of water. Students take field trips to learn about local water systems and treatment. Through introductions to the fields of environmental engineering, chemistry, physics and microbiology, these young scientists can get a glimpse of what their future careers could hold.
In this camp students will gain experience in modeling 3D objects using Tinkercad, Autodesk 123D, Sculptris, and Meshmixer computer software. By the end of the camp, students will bring their creations to life using 3D printers. During our camp students will discover that the only limit to what can be printed is the limit of imagination.

Hollidaysburg Area Public Library, 1 Furnace Rd., Hollidaysburg, PA 16648. Download the flyer.

Come join us to learn about the wonders of our wetlands and watersheds. Students engage in hands-on activities and investigations of hydrology, wildlife, soils, and habitat, and learn about our role in it all. Trace our watershed from the headwaters of Brush Run to the estuary of the Chesapeake Bay. Explore the adaptations of plants and animals in this ecosystem. Collect macroinvertebrates to learn about water quality. Together, let's splash into summer with "WOW"! July 18-20, Monday-Wednesday, 9 am – 4 pm at Hollidaysburg Area Public Library. $35 per day per kid; second kid, $30 per day; second day, $30 per kid.

Learn how to extract your own DNA. Create your DNA model. Solve a crime scene mystery. Understand blood typing. Learn about cells and so much more!

Learn about the history of flight and space by creating the following: a hot air balloon, a delta dart plane, and the best rocket using propellants of your choice. Are you up to the challenge?

In this camp, students will explore computer game development through YoYo Games' free GameMaker Studio software. Students will learn what makes a good game and how to take their games from idea to reality effectively. They will gain experience creating game resources (images, sounds, etc.) and programming game functionality. Participants will end the camp having finished a complete simple 2D video game.

Saint Francis University's youth camps are designed for school children in grades 1 through 9.
These camps introduce children to several branches of science: astronomy, biology, chemistry, environmental engineering, and physics. In each camp, children become scientists who critically observe the surrounding environment, pose scientific questions, conduct experiments, collect and analyze data, and draw their own conclusions. Each camp includes daily (weather permitting) outside activities: field trips, short hikes, games, and physical exercises.

Download the complete Kids' College schedule of events here. Space is limited. Registrations are accepted until the chosen camp's start date. Complete the online registration form (by clicking "register here" above) to register your child for one of the Summer Science Youth Camps listed above. Print the confirmation email and mail it (if time permits) with your payment. Alternatively, you can bring the completed form and the payment to the first day of the camp. Checks can be made payable to Saint Francis University. We do not accept credit cards at this time.

Mail your registration form and payment to: SFU Science Outreach Center, Summer Science Youth Camps, 117 Evergreen Drive, Loretto, PA 15940. Contact us at [email protected] or by calling 814.472.3878.
Congress has voted to suspend the debt limit through March 15, 2017. But how could future lawmakers address this issue in order to save the federal government money and avoid future disruptions? Today's WatchBlog examines the debt limit.

What is the debt limit?

Despite its name, the debt limit doesn't limit how much the government can spend—it limits the government's ability to pay its bills. Raising the debt limit doesn't authorize new spending—spending and revenues (like taxes or fees) are changed when Congress approves and the president signs spending bills and tax laws. Raising or suspending the debt limit allows the Treasury to borrow more money to pay the bills for spending that has already been approved. Listen to Sue Irving, a director in our Strategic Issues team, explain:

Congress used to be involved in the minutiae of taking on debt, approving for each case the type of security, its duration, and its interest rate. But the demands of World War I called for a simpler process, and Congress gave more autonomy to the Treasury while setting overall limits on the debt it could issue. During World War II, Congress and the president set a single limit on the Treasury's outstanding debt obligations, creating the debt limit we know today. Since then, the debt limit has been increased more than 80 times.

Taking it to the limit

There are a number of extraordinary measures that the Treasury can use to temporarily manage debt near the limit. For example, it can temporarily suspend certain investments in federal employee retirement funds and cash out some of its own investments earlier than normal. But once all extraordinary measures are exhausted, the Treasury can't do anything else without action from Congress. If this happens, the Treasury could be forced to delay or even default on payments to investors—such as holders of certain bonds—until money becomes available.
Consequences of delays

Delays in raising the debt limit can disrupt financial markets—even if action is taken in time to pay investors. Treasury securities are usually viewed like cash, since you can almost always find someone willing to buy them for what they're worth, but they are also considered better than cash, since you're earning money on them. However, when the nation neared the debt limit in 2013, investors feared not being paid on time and reported avoiding certain Treasury securities. As buyers of securities dried up, investors couldn't unload their securities and the Treasury's cost for selling the nation's debt to investors increased. This ultimately added costs for American taxpayers. (Excerpted from GAO-15-476)

What are the alternatives?

Through interviews with budget and policy experts and an interactive web forum, we identified three potential approaches:
- Link action on the debt limit to the budget resolution so decisions on borrowing and spending are made at the same time.
- Allow the administration to propose increases to the debt limit, subject to a congressional motion of disapproval.
- Allow the administration to borrow as necessary to fund laws enacted by Congress and the president.

These alternative approaches better link decisions about the debt limit with decisions about spending and revenue at the time those decisions are made.
punctuate (verb)

1. Insert punctuation marks in (text): they should be shown how to set out and punctuate direct speech. [no object]: style manuals tell you how to punctuate.
Example sentences:
- Journalists at the press conference questioned the feasibility of this project, and The Beijing News punctuates the headline of its article with a question mark.
- I bet he had no idea when he sent in his badly spelled and badly punctuated letter that he would be ordered to cut off his hands and bleed over the keyboard.
- She answered in a fluently written letter punctuated by dashes about the death of her husband.

2. Occur at intervals throughout (an area or period): the country's history has been punctuated by coups.
Example sentences:
- War is sometimes described as long periods of boredom punctuated by short moments of excitement.
- Three dozen illustrations punctuate Stokes's reissued text of 1934.
- At Nili's bedside, she reads her latest novel, extracts of which punctuate the text.

2.1 (punctuate something with) Interrupt or intersperse something with: she punctuates her conversation with snatches of song.
Example sentences:
- Sarah hated how her life was punctuated with 'buts'.
- The same what-the-hell attitude returns on 'Out-Side,' a song where lyrics about dogs and trains are punctuated with cheap sound effects.
- I can still hear his rhythmic South American accent in my mind - soft 'r's, long vowels - and see him punctuating his words with his hands.

Origin: mid 17th century (in the sense 'point out'): from medieval Latin punctuat- 'brought to a point', from the verb punctuare, from punctum 'a point'.
There, we explained the terms Factors or Divisors, Factorization of Polynomials, Prime Polynomial, etc. That knowledge is a prerequisite here. Here, we factorize higher-degree (> 2) polynomials using the Factor Theorem, which is based on the Remainder Theorem.

Remainder Theorem: If a polynomial in x, say f(x), is divided by (x - a), then the remainder is f(a).

Proof: We know Dividend = Divisor × Quotient + Remainder. Let q(x) be the quotient and r be the remainder when f(x) is divided by (x - a). Then we have f(x) = (x - a) × q(x) + r, wherein the degree of q(x) is one less than that of f(x) and r is a constant. Since the equation is true for all real values of x, putting x = a in the equation, we get f(a) = (a - a) × q(a) + r = 0 + r = r ⇒ r = f(a). Thus, when f(x) is divided by (x - a), the remainder is f(a). (Proved.)

Note: If f(x) is divided by (ax + b), then the remainder is f(-b⁄a). The proof for this is similar to the one given above.

Example on Remainder Theorem: Factoring Polynomials

Find the remainder when 3x³ - 2x² + x + 2 is divided by (i) x - 1 (ii) x + 2 (iii) 2x - 1 (iv) 3x + 2.

Solution: Let f(x) = 3x³ - 2x² + x + 2.

(i) When f(x) is divided by (x - 1), the remainder = f(1) = 3(1)³ - 2(1)² + (1) + 2 = 3 - 2 + 1 + 2 = 4. Ans.

(ii) When f(x) is divided by (x + 2), the remainder = f(-2) = 3(-2)³ - 2(-2)² + (-2) + 2 = 3(-8) - 2(4) - 2 + 2 = -32. Ans.

(iii) When f(x) is divided by (2x - 1), the remainder = f(1⁄2) = 3(1⁄2)³ - 2(1⁄2)² + (1⁄2) + 2 = 3(1⁄8) - 2(1⁄4) + 1⁄2 + 2 = (3 - 2×2 + 1×4 + 2×8)⁄8 = 19⁄8. Ans.

(iv) When f(x) is divided by (3x + 2), the remainder = f(-2⁄3) = 3(-2⁄3)³ - 2(-2⁄3)² + (-2⁄3) + 2 = 3(-8⁄27) - 2(4⁄9) - 2⁄3 + 2 = (3×(-8) - 2×4×3 - 2×9 + 2×27)⁄27 = -12⁄27 = -4⁄9. Ans.

An important application of the Remainder Theorem is the Factor Theorem, which is applied in factoring polynomials.

Factor Theorem: Factoring Polynomials

We know that, if the remainder is zero when a polynomial f(x) is divided by (x - a), then (x - a) is a factor of f(x).
But, as per the Remainder Theorem, remainder = f(a). ∴ If f(a) = 0, then (x - a) is a factor of f(x). Also, if (x - a) is a factor of f(x), then f(a) = 0. This is the Factor Theorem:

(x - a) is a factor of f(x) ⇔ f(a) = 0

The Factor Theorem is very useful in factoring polynomials of degree higher than 2.

Method of Factoring Polynomials using the Factor Theorem:

The method of factoring polynomials involves the following steps.

STEP 1: Let the polynomial be f(x). Check whether (x - a) is a factor of f(x) or not. To check whether (x - a) is a factor of f(x), we check whether f(a) is zero or not. Choosing values of a is done by trial and error. To reduce the number of trials, we take the constant term of the polynomial f(x), find its factors, attach + and - signs to the factors, and take those values for a, checking in each case whether f(a) is zero. This is done till we find an a such that f(a) is zero.

STEP 2: After deciding that (x - a) is a factor, we divide the polynomial f(x) by (x - a) to get the quotient. This division is better done by synthetic division, which is explained in Solved Example 1 of Set 3 below.

STEP 3: We then factorize the quotient for further factors. This factorizing is done either by the Factor Theorem (repeat from Step 1) or by other methods, depending upon the degree of the quotient.

The method of factoring polynomials will be clear from the following set of solved examples.
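The three steps above can also be sketched in code. The following Python is our own illustration (the function names are ours, not from the original text); synthetic division is just Horner's rule, and the remainder it produces is exactly f(a), as the Remainder Theorem states:

```python
from fractions import Fraction

def synthetic_division(coeffs, a):
    """Divide a polynomial (coefficients, highest degree first) by (x - a).
    Returns (quotient_coefficients, remainder); the remainder equals f(a)."""
    out = []
    carry = 0
    for c in coeffs:
        carry = carry * a + c  # Horner's rule / synthetic-division carry
        out.append(carry)
    return out[:-1], out[-1]

def find_root(coeffs):
    """STEP 1: trial and error over the +/- integer factors of the constant term."""
    const = coeffs[-1]
    if const == 0:
        return 0
    for d in range(1, abs(const) + 1):
        if const % d == 0:
            for a in (d, -d):
                if synthetic_division(coeffs, a)[1] == 0:
                    return a
    return None  # no integer root found

def integer_roots(coeffs):
    """STEPS 2-3: peel off factors (x - a) until no integer root remains."""
    roots = []
    while len(coeffs) > 1:
        a = find_root(coeffs)
        if a is None:
            break
        roots.append(a)
        coeffs, _ = synthetic_division(coeffs, a)
    return roots

# Remainder Theorem check with the worked example f(x) = 3x^3 - 2x^2 + x + 2:
f = [3, -2, 1, 2]
print(synthetic_division(f, 1)[1])                # dividing by (x - 1): 4
print(synthetic_division(f, -2)[1])               # dividing by (x + 2): -32
print(synthetic_division(f, Fraction(-2, 3))[1])  # dividing by (3x + 2): -4/9

# Factor Theorem: x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
print(integer_roots([1, -6, 11, -6]))             # [1, 2, 3]
```

Note that dividing by (ax + b) is handled by evaluating the carry at a = -b⁄a as a `Fraction`, matching the note under the Remainder Theorem.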
Example 1 of Factoring Polynomials

Solve the following problem on factoring polynomials: If a and b are unequal and x² + ax + b and x² + bx + a have a common factor, show that a + b + 1 = 0.

Solution: Let f(x) = x² + ax + b and p(x) = x² + bx + a.

Let (x - k) be the common factor of f(x) and p(x). ⇒ f(k) = 0 and p(k) = 0.

f(k) = 0 ⇒ k² + ak + b = 0 ............(i)
p(k) = 0 ⇒ k² + bk + a = 0 ............(ii)

(i) - (ii) gives (k² + ak + b) - (k² + bk + a) = 0 - 0 = 0 ⇒ ak + b - bk - a = 0 ⇒ k(a - b) - 1(a - b) = 0 ⇒ (a - b)(k - 1) = 0 ⇒ (k - 1) = 0 [since by data a and b are unequal ⇒ (a - b) ≠ 0] ⇒ k = 1.

Substituting this value of k in (i), we get (1)² + a(1) + b = 0 ⇒ a + b + 1 = 0.

Thus, if a and b are unequal and x² + ax + b and x² + bx + a have a common factor, we have shown that a + b + 1 = 0 using the Factor Theorem in factoring polynomials.

Solve the following problem on factoring polynomials: If f(x) = x² + 5x + a and p(x) = x² + 3x + b have a common factor, then (i) find the common factor and (ii) show that (a - b)² = 2(3a - 5b). For the answer, see the bottom of the page.
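The result of Example 1 can be sanity-checked numerically. The short loop below is our own illustration: since the derivation gives k = 1, whenever a + b + 1 = 0 with a ≠ b, x = 1 should be a root of both quadratics.

```python
# Whenever a + b + 1 = 0 (with a != b), the derivation gives k = 1,
# so x = 1 should be a root of both x^2 + ax + b and x^2 + bx + a.
for a in range(-10, 11):
    b = -1 - a                # enforce a + b + 1 = 0
    if a == b:                # the problem assumes a and b are unequal
        continue
    assert 1 + a * 1 + b == 0  # f(1) = 1^2 + a*1 + b
    assert 1 + b * 1 + a == 0  # p(1) = 1^2 + b*1 + a
print("x = 1 is a common root in every case checked")
```

For example, a = 2 and b = -3 give f(x) = (x + 3)(x - 1) and p(x) = (x - 1)(x - 2), sharing the factor (x - 1).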
Some say that through natural selection, human nature developed a moral sense, a disposition to be good. If true, morality could be understood as a result of evolution rather than of (1) divine revelation or (2) human reason or will. Evolutionary Ethics is based on a naturalistic philosophy, which seeks to explain how moral traits and behavior evolved. The theory's objective is to demonstrate how moral values developed, and how they confer a selective advantage for the survival of human beings.

Helpmewithbiblestudy.org. All rights to this material are reserved. We encourage you to print the material for personal and non-profit use or link to this site. Please do not distribute articles to other web locations for retrieval or mirror at any other site. If you find this article to be a blessing, please share the link.
About Middle East Respiratory Syndrome Coronavirus (MERS-CoV)

Current Outbreak Update: Middle East Respiratory Syndrome (MERS) is a respiratory disease caused by a virus (specifically a coronavirus, CoV). It was first identified in Saudi Arabia in 2012. MERS can affect anyone, but most infected people either live in the Arabian Peninsula or recently traveled from the Arabian Peninsula before they became ill. A few people became infected after having close contact with an infected person who had recently traveled from the Arabian Peninsula. Starting in May 2015, South Korea (Republic of Korea) began experiencing an outbreak of MERS. While cases have been reported in other countries, this is the largest outbreak reported outside of the Arabian Peninsula. While there is a level-1 travel advisory in effect for South Korea, CDC does not recommend that Americans change their travel plans.

What is Middle East Respiratory Syndrome (MERS)?

Middle East Respiratory Syndrome (MERS) is a respiratory disease caused by a virus (specifically a coronavirus, CoV). MERS primarily affects the respiratory system with symptoms such as fever, cough and shortness of breath, but gastrointestinal symptoms have been reported as well. The majority of infections are attributed to human-to-human contact, but camels are believed to be a major reservoir for MERS-CoV. The virus does not spread easily from person to person unless there is close contact, such as caring for, or living with, a symptomatic person.

How is it transmitted?

MERS-CoV is normally spread by an infected person through respiratory secretions, including droplets and mucus. An infected person should avoid coughing or sneezing near close contacts and should not share food, utensils, or drinks. MERS-CoV has most often spread in healthcare settings where there is close contact between infected persons and their caregivers.
The virus does not appear to pass easily from person to person unless there is close contact, such as providing unprotected care to an infected patient. Researchers studying MERS have not seen any ongoing spread of MERS-CoV in the community.

Signs and Symptoms

The clinical spectrum of MERS-CoV infection ranges from no symptoms (asymptomatic) or mild respiratory symptoms to severe acute respiratory disease and death. A typical presentation of MERS-CoV disease is fever, cough and shortness of breath, which in some cases may develop into pneumonia. Gastrointestinal symptoms, including diarrhea, have also been reported. Like many illnesses, MERS-CoV is most harmful to older adults and those with underlying medical conditions. Pre-existing conditions from reported cases thus far include cancer, chronic lung disease, diabetes, and heart or kidney disease. The time between when a person is exposed to MERS-CoV and when they start to have symptoms is usually about 5 or 6 days, but can range from 2 to 14 days. Should you feel concerned about a potential exposure, contact 215-746-3535 and press #1 to speak with a healthcare provider.

No vaccine or specific treatment is currently available. The U.S. National Institutes of Health is exploring the possibility of developing one. Medical care is supportive treatment to reduce and relieve symptoms. People should avoid contact with camels, drinking raw camel milk or camel urine, or eating meat that has not been properly cooked. Good hand hygiene (e.g., soap + water + 30 seconds) and safe food practices (e.g., pasteurization and full cooking of meats) should be followed.

Practice good hand hygiene:
- Wash your hands with soap and warm water for at least 30 seconds (an alcohol-based hand sanitizer that contains at least 60% alcohol works too).
- Avoid touching or rubbing your eyes, nose or mouth. Wash your hands immediately after.
Practice good self-care: - Cover your nose and mouth with a tissue or your sleeve when you cough or sneeze, then throw the tissue in the trash and wash your hands. - Clean and disinfect frequently touched surfaces and objects, such as doorknobs. - Avoid contact with individuals who appear sick. - Avoid personal contact, such as kissing, or sharing cups or eating utensils, with sick people. If you are caring for or living with a person confirmed to have, or being evaluated for, MERS-CoV infection, see Interim Guidance for Preventing MERS-CoV from Spreading in Homes and Communities.
Hearing the phrase "I pinched a nerve" is very common, and many people have experienced nerve pain or related muscle weakness before. But what exactly is a pinched nerve, and how do you know if you have one?

What is it?

To understand what exactly a pinched nerve is, we must first know what nerves are and how they work. Nerves are very special cells that make up our nervous system. They send messages in the form of electrical signals called nerve impulses from one area of the body to another. There are many different types of nerves, and their functions and roles are complex.

A pinched nerve occurs when the surrounding tissues compress the nerve. The surrounding tissue that compresses the nerve could be ligaments, tendons, muscles or bone, depending on where the compression occurs. The pressure causes the nerve to become irritated and inflamed, and disrupts the nerve's function, causing many different symptoms including pain, weakness, and numbness or tingling. If you think of a nerve as a hose, when a hose is pinched the water can't flow as quickly. When a nerve has pressure against it, the signals do not flow as easily. This disrupts the nerve's control of the muscle that it connects with. Some nerves travel long distances throughout the body, so compression in one area of a nerve might cause related symptoms somewhere else along the nerve's path. Often pain, tingling and numbness occur along the path of the nerve.

What are the symptoms?

The most common symptoms of a pinched nerve are:
- Pain
- Weakness in a muscle
- Tingling (pins and needles)
- Numbness

Symptoms often occur farther along the path of where the nerve is compressed. For example, if a nerve is compressed in the low back, you may feel weakness or numbness in the thigh down towards the foot. Some areas of the body are more prone to injuries that involve a pinched nerve. For example, sciatica involves pressure on the sciatic nerve, which causes radiating pain and numbness down the leg.
Or carpal tunnel syndrome is the result of a pinched nerve that causes weakness in grip strength and numbness and pain in the thumb, index and middle fingers.

What does it mean?

The good news is that pinched nerves are treatable once they have been properly diagnosed. A physiotherapist can identify which nerve is impacted and plan the proper course of treatment. Pinched nerves are more common with repetitive motions, and often occur in athletes. Long-held poor posture habits, arthritis, obesity and many other situations can also cause pinched nerves. If the issue is addressed quickly, there is most often no long-term nerve damage. However, if a pinched nerve is not treated or addressed, it can lead to long-term nerve damage. If you think you might be experiencing a pinched nerve, come on in and get it checked out sooner rather than later!

A physiotherapist will help you manage and treat a pinched nerve in multiple ways. The first step is to help reduce inflammation of the nerve to relieve pain symptoms.
- Reduce Inflammation - ice and rest are important to help reduce inflammation of the irritated nerve and help relieve pain.
- Posture Education and Pain Management
- Manual Therapy
- Stretching and Range of Motion Exercises
Hugh Pickens writes "Heart attacks usually cause irreversible damage to heart muscle and, because cells lost from the heart do not grow back naturally, leave the organ in a weakened and vulnerable state that may cause another serious condition — called heart failure — if the victim survives. Now a team of scientists led by Tal Dvir from Ben-Gurion University of the Negev in Beer-Sheva has developed a tissue-engineering technique, using the body as a 'bioreactor,' to create a 'patch' made from heart muscle that can be used to fix scarring left over from a heart attack. First, a biodegradable 'scaffold' is seeded with immature cells taken from the hearts of newborn rats. For 48 hours, the scaffold is exposed to a cocktail of growth-promoting chemicals in the laboratory and is then transplanted into a rat's abdomen where it develops a network of blood vessels and muscle fibers. After seven days the patch is removed and grafted onto the animal's heart. A month later the patch has completely integrated itself into the heart, synchronizing its 'beat' with that of the surrounding tissue. 'Using the body as a bioreactor to engineer cardiac tissue with stable and functional blood vessel networks represents a significant improvement in cardiac patch performance over ex vivo (outside the body) methods currently used for patch production,' write the authors. The technique is also being developed for livers and bladders."
Researchers and scientists in Ohio are developing exciting new classes of materials with unusual properties. Their ground-breaking studies are grounded in atomic and molecular physics and chemistry and involve the processing of polymers, metals, ceramics and composite materials. For example, a chemist seeks a delicate balance of compounds to create a new class of magnets that could lead to new ways to monitor medical implants. Other researchers are unlocking the properties of a new material that could replace silicon in the computer chip industry. And another scientist is investigating "metamaterials" that could make cloaking devices a reality. World-class materials manufacturing industries have long driven the state's economy, with just under 105,000 workers across 1,184 establishments, according to a recent report by Battelle. The creation and testing of computational models through the Ohio Supercomputer Center continues to set the bar high for materials science research in Ohio, as described on the next few pages.
- Creating organic magnets from elusive compounds
- Calculating surfaces for graphene growth
- Controlling nanometer-scale structures
- Devising production methods for graphene
- Investigating new opportunities for doping
- Evaluating organic materials for solar energy
- Exploring EM wave behavior in metamaterials
A generation ago, Carl Sagan popularized the notion of "billions upon billions" of stars in the universe. He mentioned planets as well, but in Sagan's day astronomers had scant evidence of extrasolar planets, those beyond our own solar system. That's all changed in the past few years, and the plenitude of possible worlds to explore was confirmed Wednesday when a California Institute of Technology team reported there are billions and billions of planets, just within this galaxy. They're known as exoplanets, and the recent spate of discoveries has changed astronomers' beliefs about the nature of planetary systems. Rocky, Earth-sized planets very close to their star are now considered the most common kind in the universe. "If you look up at the night sky, probably every star, statistically speaking, has a planet or two," said John Johnson, who leads Caltech's planet-hunting Exolab. "That is a big new discovery that's ushered in this new era of exoplanets. It was charging forward before that, but now it's almost overwhelming." Johnson led the study that determined there are at least 100 billion planets in the Milky Way, based on the characteristics of a star named Kepler-32. There are five planets orbiting the star, and all of them are visible because the disc-shaped planetary system faces Earth on its edge, so astronomers can see the planets blocking the star's light as they orbit. Detecting planets that way is known as the transit method, and it's the bread and butter of NASA's Kepler telescope, an orbiting spacecraft that trains its eye on a specific patch of the Milky Way containing about 150,000 stars. It launched in 2009, and in November began an extended mission for another four years. The extra time will give astronomers a greater chance to find Earth-sized, terrestrial planets in longer orbits around their stars, and many of those could be in the habitable zone, the sweet spot in planetary systems where liquid water could exist.
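The size of a transit signal follows from simple geometry: the fraction of starlight blocked equals the planet-to-star area ratio. The short Python sketch below is our own illustration (the function name is ours; radii are standard reference values in kilometers) of why Earth-sized planets are so much harder to detect than gas giants:

```python
# Transit depth: fraction of starlight blocked = (R_planet / R_star)^2
R_SUN = 695_700.0      # km
R_EARTH = 6_371.0      # km
R_JUPITER = 69_911.0   # km

def transit_depth(r_planet_km, r_star_km=R_SUN):
    """Fractional dip in a star's brightness during a planetary transit."""
    return (r_planet_km / r_star_km) ** 2

# An Earth-size planet dims a Sun-like star by under 0.01 percent,
# while a Jupiter-size planet dims it by about 1 percent.
print(f"Earth:   {transit_depth(R_EARTH):.6%}")
print(f"Jupiter: {transit_depth(R_JUPITER):.4%}")
```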
The first exomoon finding is also possible. "Even in the next few months, several new discoveries like this are going to pour out," said Nick Gautier, the Kepler project scientist at Jet Propulsion Laboratory. Some scientists already believe a big discovery will be just around the corner. "I'm very positive that the first Earth twin will be discovered next year," Abel Mendez of the Planetary Habitability Laboratory in Puerto Rico told space.com in December. The possibility of finding life on other planets is the potential gold mine of habitable zone exoplanets, and ideas about how to overcome light years to visit another star no longer seem so far-fetched. Reality before long could mirror the planet-hopping depictions in science fiction such as "Star Trek" and "Star Wars." "I grew up with visions of Tatooine, Hoth, Dagobah," Johnson said. "Now we're finding things that could be like those planets." A Tatooine-style planet orbiting two stars was among last year's exoplanet discoveries. Researchers also found that one planet is made largely of diamonds. It's discoveries like those that have contributed to an exoplanet boom, not only because the tools are improving but because more people are interested in exploring space. At Caltech, the search for exoplanets has prompted the November formation of the Center for Planetary Astronomy, which combines two distinct camps: astronomers, who mostly look beyond our own solar system, and planetary scientists, who mostly look within it. The center would rival the Harvard-Smithsonian Center for Astrophysics as one of the only university environments where students could study planets such as Jupiter along with the gas giants elsewhere in the universe. "This field moves so fast," Johnson said. "If you try to do old-fashioned astronomy where you do a paper on any topic, you're going to get left behind. 
"When we announced the three smallest planets that had ever been discovered (in January 2012), for us in our group it was old news." Johnson also leads Project Minerva, a planned telescope array at Palomar Mountain that will be the first observatory dedicated exclusively to exoplanets. The telescope is designed to find Earth-like planets fast, searching nearby, bright stars where planets close enough to have a 50-day orbit are easily visible. Minerva will also search for super-Earths, planets three to 15 times Earth's mass that are within the habitable zone. Johnson called it a low-cost expressway for finding exoplanets, thought up by his Exolab students. "If we lived in an ideal world, we wouldn't do Minerva because we'd have money from our funding agencies," he said. The European Space Agency also has plans to seek super-Earths, recently announcing the Cheops telescope, a spacecraft similar to Minerva. The HARPS telescope in Chile is also among the most trusted planet-finding tools, though it uses the Doppler method, measuring how a star wobbles as a planet circles around it. That has an advantage over the transit method because planets don't have to pass directly in front of their stars. NASA's Hubble telescope and other missions can also find exoplanets, but none like Kepler. Since its launch in 2009, the Kepler telescope has led to a flood of data and discoveries, now listing 105 confirmed planets and more than 2,000 other candidates. NASA's Ames Research Center in Mountain View manages the mission, while Caltech's Exoplanet Science Institute maintains a database of information. Kepler might not last much longer than the end of its mission in 2016, because it's operating without one of its reaction wheels, which are used to point the spacecraft and turn it to send data back to Earth, Gautier said. But the spacecraft can still run well on three wheels.
"I'm confident that Kepler, under any circumstances, was producing so much interesting science that we would have kept it going," Gautier said. Once planets are identified, the next step is characterizing them beyond just their mass and orbit. "Can we answer questions like what kind of planet is it, is it a gas giant, is it mostly rocky, can we answer the question of what temperature it is, does it have an atmosphere, what the atmosphere is made of, what's the climate like, things like that," said Heather Knutson, an assistant professor of geological and planetary science at Caltech. "I try to fill in the more detailed picture." That includes detecting atmospheric molecules such as methane, carbon dioxide and water, mostly on gas giants. Characterizing the small, rocky planets that have emerged in the past year is more difficult because current telescopes don't yet have enough capabilities, Knutson said. However, astronomers can look toward our own solar system to find some of those answers. While the sun isn't a common type of star in the universe, it's Jupiter that tends to define the solar system, Caltech astronomer Mike Brown said. Brown, known as the man who killed Pluto, has helped astronomers change their understanding of how the solar system formed, with the planets previously much closer together. Jupiter and Saturn likely entered an orbital resonance at one point early in their history, circling around each other and causing a cataclysmic event, he said. It's one explanation for the late heavy bombardment that caused many of the craters on Earth's moon, and a source of debris in the Kuiper belt beyond Neptune, which can serve as a fossil record for the planets. "It's like the giant planets are a big shipwreck that happened four and a half billion years ago," Brown said. "What we see is debris washed up on the shore. We don't know what the ships did four and a half billion years ago, but we see the little piles. 
There's a pile over here, there's a bunch of underwear stuck over here, and there's some shoes, and we can try to figure out what must have happened by looking at all these things that are still stuck in the ocean." Brown now focuses his attention on Sedna, a stray body in a 12,000-year orbit around the sun that doesn't conform to the usual rules of a circular orbit. Brown hopes to find more objects like it, which could validate the idea that another star may have once passed close enough to influence its orbit. "If this is true, if Sedna is in this orbit because of other stars passing by, then it really is this fossil record not of the early solar system, but of the birth of the sun itself," Brown said. "If you can put this fossil record together, then you have the earliest possible history that you can construct around here."
The Tobacco Plant (Originally Published 1938) The English gentlemen-adventurers who colonized America in 1607 came in search of gold. They found tobacco. The tender leaf which the world now smokes became the basis for the imperiled settlements' prosperity. Tobacco soon yielded riches greater than all the mines of Spain. From Jamestown, John Rolfe sent the first shipment of Virginia leaf in 1613, seven years before the Pilgrims touched at Plymouth Rock to northward. Within one generation, the gentle product was known the world around. Rolfe was the husband of the famous Indian Princess, Pocahontas. This daughter of Powhatan, red emperor of Virginia, saved the colony from disaster several times. It is likely that Rolfe learned tobacco cultivation from the Indian girl. The couple lived at Varina near the place where Richmond, birthplace of the modern cigarette, later was established. Their unusually happy marriage brought peace between the settlers and the Indians, so agriculture thrived and soon production of the native plant became the chief pursuit of all the English in America. Before Rolfe's experiments showed Virginians how to raise the plant, the nobility of Britain and the Continent obtained supplies from the Spanish colonies in the tropics. The Spaniards named it after the "taboca," a pipe used by the Carib tribes. Sir Walter Raleigh and his suite made it popular in England and Jean Nicot, French ambassador to Portugal, brought it to the Medici court at Paris, where scholars termed it "Nicotiana Tabacum." The mild, less fibrous Virginia leaf soon became the favorite of smokers everywhere. Trade out of Jamestown grew with great rapidity. Production increased from 20,000 pounds in 1619 to 500,000 in 1627. By 1639, exports totaled 1,500,000 pounds. So important became the staple that tobacco soon supplanted gold as legal tender in Virginia. It was accepted as payment for all commodities, for taxes, fines and other dues of the colony.
A cavalier who would not go to church was fined 200 pounds of tobacco for his delinquency. Bondservants bought their freedom with tobacco. Every store and tavern had its warehouse for the fragrant currency. The Rolfes grew wealthy. Pocahontas became one of the most charming matrons of the colony. Tragedy, however, soon interrupted this idyll. During a visit to England in 1617, the princess died. Rolfe returned, but in 1622, he was murdered by some of his savage kinsmen at his border farm. His son Thomas succeeded to his estates, and founded a family many members of which still produce tobacco. As the leaf was "as good as gold," every inhabitant of Virginia sought to raise an annual crop. Rich land was plentiful; labor cheap. In fact, so many citizens abandoned trades to enter agriculture that members of the Assembly became alarmed. The pay of carpenters was thirty pounds of leaf a day and board, but nevertheless they threw away their saws and hammers and took up the hoe. Seamen deserted their ships to try their hand at it. Physicians preferred to treat the soil instead of feverish patients. Throughout the colonies of the south, there developed a passionate enthusiasm for tobacco that continues to this day. All of the domestic types from which modern cigarettes are made have evolved from two varieties developed during the early days. "Orinoco" and "Sweet Scented" were the first. Later soil, climate and curing methods yielded others. One of the important types now used in the blend of Lucky Strike is "Bright." By the English it is still known as "Virginia." This orange-colored leaf with its distinctive flavor and its high natural sugar content is produced in North Carolina, South Carolina, Georgia and northern Florida, as well as in Virginia. Since it is prepared for market by the distinctive flue-curing process, it also is known as "flue-cured."
A second important type used in the blend is "Burley," the fragrant brown leaf raised mainly in Kentucky, Tennessee, and in states bordering on the Ohio River. Burley is sometimes called "air-cured," since it differs from Bright tobacco in this respect. Other tobaccos used in Luckies are "Maryland," a rich, fine-burning, dark-brown product from north of the Potomac, and "Turkish," a highly aromatic small-leafed variety, many types of which are imported from the Near East. Incidentally, this Turkish, the only non-domestic tobacco used in Lucky Strikes, has an interesting historical background. The cigarette in its first primitive form was invented by the Turks. During the Crimean War when England and France were allies of Turkey, the European officers learned to like the Oriental paper-wrapped "cigars." They brought the materials back with them to Paris and to London, and before 1870, a new smoking vogue was well established. American tobacconists, always alert, quickly accepted the innovation, and by 1872, cigarettes similar to those we have today were being manufactured in Richmond. The Oriental tobacco soon was supplanted to a large degree by the lighter, smoother American leaf, but to this day, cigarette smokers desire some Turkish in their blend. The development of the modern cigarette in the seventies gave the tobacco business a mighty forward impulse. For the first time, the finest tobacco became available to poor as well as rich. Tobacco of the better grades began to sell at prices farmers had not enjoyed since 1619. Again the staple became one of the richest Southern crops, and today, with the growing demand for cigarettes, its economic importance is greater than ever.
Taking Lecture Notes

Accurate notes will be helpful when you need to review material for an exam or assignment. Beyond merely helping you remember the contents of a lecture, your note-taking strategy can help you grapple with the material and more fully understand a historical topic, event, or question. Thus, you should consider note taking an interactive process rather than just a secretarial skill. It is more than simply an aid to memory. Note taking and review are part of the process of analyzing the material. Current research supports these ideas and also shows that final results on exams and papers can be improved if certain methods for taking notes are employed. This guide will suggest:
- Methods and practices of taking notes.
- Ways to use your notes in studying for exams and papers.

- Read the text before class. This allows you to develop an overview of the main ideas, secondary points, and definitions for important concepts.
- Identify familiar and unfamiliar terms. Look up terms before class. Be prepared to listen for explanations during the lecture. Ask the professor to explain unclear ideas.
- Note portions of the reading that are unclear. Before class, develop questions to ask. (Listen for an explanation during the lecture.)
- Sit near the front. There are fewer distractions and it is easier to hear, see and understand the material.
- Date and number every page, assignment and handout. This will help when you begin studying for an exam or preparing notes for an essay.
- Do not try to write everything down. Make notes brief. The more time you devote to writing, the less attention you can give to understanding the main points and identifying the outline and argument of the lecture. Never use a sentence when you can use a phrase, or a phrase when you can use a word. Use abbreviations and symbols whenever possible.
- Be aware of the outline of the lecture. Most lectures are based on a simple outline.
Listen for key phrases and words that identify what that structure is, and recognize where you are in the outline at any given time.
- Begin notes for each lecture on a new page. This allows for more freedom in organization, for instance, so that you can put the notes on a subject from the lecture with the notes on the same subject from the reading.
- Generally, use your own words, rather than simply quoting the words of the lecturer. Formulas, definitions, rules and specific facts should be copied exactly.
- Develop a code system of note-taking to indicate questions, comments, important points, due dates of assignments, etc. This helps separate extraneous material from the body of notes (for instance, ‘!’ for important ideas, ‘?’ for questions, or [brackets] for personal comments). You might even develop your own symbols for commonly used words or ideas (for instance, ‘∆’ for change, or ‘C’ for century).
- Watch for clues from the instructor. If the instructor writes something on the board or overhead, it is likely important. If the instructor repeats a point during the lecture, make sure to note it. Dramatic voice changes and long, intentional pauses usually indicate emphasis as well.
- Review your notes as soon as possible after the lecture. This dramatically improves retention.
- Merge notes from the lecture and readings. Keep notes from the lecture with notes from the readings on the same topic. Look for gaps in your understanding in each, and identify where they complement or contradict each other. Ask your instructor if you still do not understand a point.
- Highlight key words, phrases, or concepts. This helps you reduce the amount of reading you have to do when studying. Use margins for questions, comments, notes to yourself on unclear material, etc. Color coding is often helpful for organizing material.
- Recite by covering over the main body of notes and using only the key words in the margin to recall everything you can about the lecture.
State the facts and ideas of the lecture as much as you can in your own words.
- Reflect on the content of your notes. Consider especially how these notes relate to other things you have learned.
Carbon has the electronic configuration [He]2s²2p² and a main formal oxidation state of +4 (there are other oxidation states, but all use all of carbon's valence electrons in bonding). Alongside the central role of carbon in organic chemistry, it forms numerous compounds, both inorganic and organometallic. There are two main isotopes, with relative abundances: 12C (98.9%, I = 0) and 13C (1.1%, I = 1/2). I is the nuclear spin, and the half-integer value of the nuclear spin for 13C gives it its usefulness in structure determination by NMR. Elemental carbon occurs in several different forms, i.e. it displays a complex allotropy. The main forms are diamond and graphite, and they exhibit markedly different properties due to the very different structures they adopt.

Structures and Descriptions

Diamond: an electrical insulator with a 3D-lattice crystal structure. This is the hardest known substance (because it is made up of very strong C-C covalent bonds). Each C atom forms four bonds, tetrahedrally arranged, to other C atoms, resulting in an open but strongly bonded lattice with tetrahedral carbon coordination.

Graphite: an electrical conductor with a layered lattice crystal structure. It is slippery and used as a lubricant (a property of its layered structure, with the lubricating effect coming from the ability of the layers to slide over one another, as they are only weakly held together by van der Waals forces). Here, each C atom forms three covalent σ-bonds to further C atoms. These σ-bonds are made up of sp² hybrid orbitals. The remaining p-orbitals, which are perpendicular to the plane of the σ-bonds, overlap to form a delocalized π-system. The planes are widely separated as they are held together only by the weak van der Waals forces, giving planar carbon coordination. [Figure: the lines show that the carbon atoms in every other layer are in line with one another, not those in adjacent layers.]

Diamond does not convert to graphite under standard conditions, even though the conversion is thermodynamically spontaneous (ΔG° = −2.90 kJ mol⁻¹).
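To put a number on that last point, here is a short sketch using the standard relation K = exp(−ΔG°/RT). The ΔG° value is the one quoted above; R and T are the usual gas constant and standard temperature, which are assumptions added here, not from the text:

```python
import math

# Standard free-energy change for diamond -> graphite (value quoted in the text)
delta_G = -2.90e3   # J mol^-1
R = 8.314           # J K^-1 mol^-1 (gas constant)
T = 298.15          # K (standard temperature)

# A negative deltaG means the conversion is thermodynamically favourable:
# the equilibrium constant K = exp(-deltaG/(R*T)) comes out greater than 1.
K = math.exp(-delta_G / (R * T))
print(round(K, 2))  # ≈ 3.22 -- graphite is favoured, yet diamonds persist
```

The point of the calculation is that thermodynamics favours graphite, so the survival of diamond is purely a matter of kinetics, as the next paragraph explains.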
This is a kinetic phenomenon, and diamond is thus described as metastable. The electrical conductivity of graphite is direction-dependent: the π-system of delocalized electrons allows metallic conduction parallel to the planes, while the much lower conductivity perpendicular to the planes, which nevertheless increases with temperature, suggests semiconductor behavior in that direction. The directionality of the conductivity suggests a band structure for graphite with a fully filled valence band separated by only a small gap from the empty conduction band (the overlapping p-orbitals, with one electron from each C atom, form a π-system in which the bonding orbitals are fully occupied and the anti-bonding orbitals fully unoccupied). Hence, graphite may form intercalation compounds with species which act either as electron donors (where graphite acts as an electron acceptor, incorporating the donated electrons into the vacant conduction band), or as electron acceptors (where graphite now donates electrons from the full valence band).

Reactions of Graphite
- Reduction by K: extra electrons from the K enter the graphite conduction band, and therefore increase the conductivity.
- Oxidation by Br₂: electrons from the graphite valence band are lost to form Br⁻ anions, leaving holes in the valence band which therefore increase the conductivity.
- With F₂: produces (CF)n, an electrical insulator, which has a structure resembling continuous fused cyclohexane rings, containing sp³ hybridised C atoms.

If sheets of graphite were bent, which in practice is achieved by replacing some of the six-membered rings with five-membered rings, then other forms of carbon may be formed. These are known as fullerenes (so named after the inventor of the geodesic dome, Buckminster Fuller; his domes have shapes similar to these compounds), or bucky-balls. In practice, they are formed when an electric arc is struck across graphite electrodes in an inert atmosphere.
The most important of these is C60, often referred to as Buckminster-fullerene, but others such as C70, C76 and C84 occur in smaller amounts.
- C60: an individual C60 molecule has the shape of a soccer ball, and Ih (icosahedral) symmetry. Each pentagon is surrounded by hexagons, and each hexagon is surrounded by three pentagons and three hexagons. It crystallizes to give a magenta solid, and dissolves in benzene to give a magenta solution. The 13C NMR spectrum shows one signal, i.e. all the C atoms in C60 are in equivalent environments. [Figure: plan view, looking down the C5 axis; the bonds between sp² hybridised carbon atoms make up the familiar shape of a football, and the pentagons and hexagons can be seen.]
- C70: this has the structure of C60 with an extra strip of five hexagons around the center of the soccer ball. It crystallizes to give a red-brown solid, and dissolves in benzene to give a red solution. The 13C NMR spectrum shows five signals, so C70 contains C atoms in five different environments.

Reactions of Fullerenes: the reactivity of fullerenes is somewhere between that of an arene (with an extended graphite-like π-system) and an alkene (with an isolated C=C double bond). The addition product with K is a superconductor below 18 K, whose structure is a face-centered cubic array of C60 molecules, with K⁺ ions occupying all the octahedral and tetrahedral holes; the product with OsO4 is a standard alkene-like addition, where the OsO4 adds across a C=C bond. The reaction with Na and NH3 is known as the Birch reduction. If a C60 molecule is split in half, the two hemispheres can be placed on the ends of a rolled-up sheet of graphite to produce what is known as a nanotube. Many different sizes of tube may be formed, capped or uncapped by bucky-balls of different sizes, and these may have huge future technological use as molecular wires, as they retain some of the conductivity of the parent graphite structure.
The search for habitable, alien worlds needs to make room for a second “Goldilocks,” according to a Yale University researcher. For decades, it has been thought that the key factor in determining whether a planet can support life was its distance from its Sun. In our Solar System, for instance, Venus is too close to the Sun and Mars is too far, but Earth is just right. That distance is what scientists refer to as the “habitable zone,” or the “Goldilocks zone.” It also was thought that planets were able to self-regulate their internal temperature via mantle convection – the underground shifting of rocks caused by internal heating and cooling. A planet might start out too cold or too hot, but it would eventually settle into the right temperature. The new study suggests that simply being in the habitable zone isn’t sufficient to support life. A planet also must start with an internal temperature that is just right. “If you assemble all kinds of scientific data on how Earth has evolved in the past few billion years and try to make sense out of them, you eventually realise that mantle convection is rather indifferent to the internal temperature,” says Jun Korenaga, a professor of geology and geophysics at Yale. Korenaga presents a general theoretical framework that explains the degree of self-regulation expected for mantle convection and suggests that self-regulation is unlikely for Earth-like planets. “The lack of the self-regulating mechanism has enormous implications for planetary habitability,” Korenaga says. “Studies on planetary formation suggest that planets like Earth form by multiple giant impacts, and the outcome of this highly random process is known to be very diverse.” Such diversity of size and internal temperature would not hamper planetary evolution if there was self-regulating mantle convection, Korenaga says. 
“What we take for granted on this planet, such as oceans and continents, would not exist if the internal temperature of Earth had not been in a certain range, and this means that the beginning of Earth’s history cannot be too hot or too cold.”
In the previous part of this tutorial, I discussed the basics of generators and some differences between functions and generators, and I hinted at some benefits of the new yield from syntax. I recommend reading that part of the tutorial first before continuing, unless you're already familiar with generators in Python. All examples in this part of the tutorial are in Python 3.3 unless stated otherwise.

Basic binary tree

In this second part I'll build upon a basic binary tree example that demonstrates the uses of the yield from syntax. For the sake of this example, I'll let each node in the tree have a list of children rather than, as is often done, let each node point to its parent node. Here is the implementation of this data structure without the new syntax:

    class Node:
        def __init__(self, value):
            self.left = []
            self.value = value
            self.right = []

        def iterate(self):
            for node in self.left:
                yield node.value
            yield self.value
            for node in self.right:
                yield node.value

    def main():
        root = Node(0)
        root.left = [Node(i) for i in [1, 2, 3]]
        root.right = [Node(i) for i in [4, 5, 6]]
        for value in root.iterate():
            print(value)

    if __name__ == "__main__":
        main()

As expected, this prints the values 1 through 3 (of the left children), 0 (of the root node) and the values 4 through 6 (of the right children). However, this example only iterates over the root node and its immediate children; it won't recursively iterate over the children of the child nodes (if there happened to be any). Let's modify the method to make that happen:

    def iterate(self):
        for node in self.left:
            for value in node.iterate():
                yield value
        yield self.value
        for node in self.right:
            for value in node.iterate():
                yield value

This version also calls iterate() on each of the child nodes, so it yields each of the nodes in the tree. The code is rather cumbersome, though: for each of the left and right children we yield all the values by using explicit iteration.
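To see the recursive iterate() at work, here is a self-contained sketch. The Node class and tree values are repeated from the example above so the snippet runs on its own; the grandchild with value 7 is an illustrative addition:

```python
class Node:
    def __init__(self, value):
        self.left = []       # list of child nodes to the left
        self.value = value
        self.right = []      # list of child nodes to the right

    def iterate(self):
        # Recursively yield every value in the subtree rooted at this node.
        for node in self.left:
            for value in node.iterate():
                yield value
        yield self.value
        for node in self.right:
            for value in node.iterate():
                yield value

root = Node(0)
root.left = [Node(i) for i in [1, 2, 3]]
root.right = [Node(i) for i in [4, 5, 6]]
root.left[0].left = [Node(7)]  # a grandchild, to show the recursion

print(list(root.iterate()))  # → [7, 1, 2, 3, 0, 4, 5, 6]
```

Note that the grandchild's value (7) is yielded before its parent's (1), because each subtree yields its left children first.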
This is where yield from simplifies the code:

    def iterate(self):
        for node in self.left:
            yield from node.iterate()
        yield self.value
        for node in self.right:
            yield from node.iterate()

Other aspects of generators

Now, the story would be over pretty quickly if that was all there was to it. To show the other benefits of yield from, I'll need to explain an alternative way of using generators. A generator can be controlled using methods such as send(). This method and its relatives allow you to start, stop and continue a generator rather than having Python handle most of the generator's execution. For example, instead of the basic loop above:

    for value in root.iterate():
        print(value)

we can also write:

    it = root.iterate()
    while True:
        try:
            print(it.send(None))
        except StopIteration:
            break

The send() method allows you to "send" a value into the generator, which means the yield expression receives that value. That value can be used by assigning it to a variable (i.e., v = yield self.value). In this case we repeatedly send None into the generator, and our generator doesn't use the value that is sent into it. Effectively this leads to the same result as the previous loop that prints the generator's yielded values.

Benefits of yield from

The primary benefits of yield from come when you've written a generator that uses these techniques and it needs to be refactored. This means you'll have to subdivide the generator into multiple subgenerators and send / receive values to and from those subgenerators. Rather than rewriting the generator to forward values to the subgenerator by hand, you can simply use yield from and the semantics will remain the same. There are some caveats and some situations which aren't handled, but that's beyond the scope of this tutorial (I'll refer you to the official proposal, PEP 380, for details).

Another example generator

Let's create a small generator that demonstrates the above:

    def node_iterate(self):
        for node in self.left:
            input_value = yield node.value
            # ...
        input_value = yield self.value
        # ...
        for node in self.right:
            input_value = yield node.value
            # ...

This generator only iterates over the node and its immediate children. Any value sent into the generator is stored in the input_value variable, which is then available for computations (the # ... sections). For example, the code that uses this generator may perform various computations and pass intermediate values back into the generator so that they can be used there. The following shows how the yielded values are summed and the subtotals are passed back into the generator:

    total = 0
    it = root.node_iterate()
    value = it.send(None)  # prime the generator; receive the first value
    while True:
        total += value
        try:
            value = it.send(total)
        except StopIteration:
            break

Refactoring iteration over children

It seems repetitive to have the same code for both the left and right children, so we can refactor that into:

    def child_iterate(self, nodes):
        for node in nodes:
            input_value = yield node.value
            # ...

    def node_iterate(self):
        yield from self.child_iterate(self.left)
        input_value = yield self.value
        # ...
        yield from self.child_iterate(self.right)

Note that it is not recommended practice to call a method on self (i.e., self.child_iterate) by passing it an instance variable (i.e., the self.left and self.right arguments), but that is a topic for a different post. The important part is that we can only refactor in this way as a result of the new yield from semantics. When we send values into node_iterate(), they are automatically passed on to the subgenerator whenever that subgenerator receives a value. Prior to Python 3.3, if we had written:

    for value in self.child_iterate(self.left):
        yield value

then the yield expressions inside child_iterate() would only ever receive None. To make it work as expected, we would need to explicitly send the values into the subgenerator ourselves. Lastly, we can perform one more refactoring step, leaving us with only one section where the input_value is received and used:

    def process(self):
        input_value = yield self.value
        # ...
    def child_iterate(self, nodes):
        for node in nodes:
            yield from node.process()

    def node_iterate(self):
        yield from self.child_iterate(self.left)
        yield from self.process()
        yield from self.child_iterate(self.right)

As you become more familiar with using send() and related methods, you'll find that yield from makes life easier. I suggest reading the exact semantics in the specification to know when and how you'll be able to use it.
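Pulling the final refactoring together into a runnable sketch — the tree values, the received list and the driver loop are illustrative additions, not part of the original tutorial:

```python
class Node:
    def __init__(self, value):
        self.left = []
        self.value = value
        self.right = []
        self.received = []  # records values sent into process() (illustrative)

    def process(self):
        # The single place where a value is yielded and an input is received.
        input_value = yield self.value
        self.received.append(input_value)

    def child_iterate(self, nodes):
        for node in nodes:
            yield from node.process()

    def node_iterate(self):
        yield from self.child_iterate(self.left)
        yield from self.process()
        yield from self.child_iterate(self.right)

root = Node(0)
root.left = [Node(i) for i in [1, 2]]
root.right = [Node(i) for i in [3, 4]]

it = root.node_iterate()
value = next(it)  # prime the generator; receive the first yielded value
values = [value]
while True:
    try:
        # Send a derived value back in; yield from routes it to process().
        value = it.send(value * 10)
        values.append(value)
    except StopIteration:
        break

print(values)  # → [1, 2, 0, 3, 4]
```

Because yield from forwards sent values transparently, each process() call sees exactly the value the driver sent, no matter how deeply the subgenerators are nested.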
Pierre Lescot, (born c. 1515, Paris, Fr.—died 1578, Paris), one of the great French architects of the mid-16th century who contributed a decorative style that provided the foundation for the classical tradition of French architecture. In his youth Lescot, who came from a wealthy family of lawyers, studied mathematics, architecture, and painting. There is no evidence that he visited Italy, although much of his design was classical; it appears that he acquired his knowledge of architecture from illustrated books and from Roman ruins in France. Lescot’s most important contribution to architecture was his rebuilding of the Louvre, which he began in 1546 as a commission from Francis I. The style and design of Lescot’s work on the Louvre reflect a revolution in French architecture marked by the influence of classical elements. His work on the facade combined traditional French elements and classical features to create a unique style of French classicism. Lescot’s other work includes the Hôtel Carnavalet (1545), which still survives in part; a screen at Saint-Germain-l’Auxerrois (1554); the Fontaine des Innocents (1547–49); and the château of Vallery. Unfortunately, none of these works has survived intact.