Language is a key element in satisfactory interpersonal and social relationships. Every day, students are faced with understanding language used in social situations. They try to understand what a teacher’s tone of voice implies or what a classmate’s joke means. Students may require guidance in properly interpreting language in a social context. Activities that target a student’s understanding of language that carries social meaning (tone, intonation, word choice, use of sarcasm, etc.) will benefit a student in the classroom as well as in everyday life. Here are some strategies to help students process language in social settings. - Interpreting another person’s feelings is complex. To develop a valid sense of another person’s emotions, the listener must listen actively and also search his or her memory for similar social situations. - Use films, videos, and plays to discuss how characters feel and what signs, expressions, etc. indicate those feelings. Have students dramatize reading passages and listen to each other speaking, characterizing tone of voice and what it implies. - Promote students’ understanding of body language and body movement as cues to how one feels. Give students practice with both “reading” and “projecting” appropriate body language. - It can be very helpful for students to develop an understanding of the language of their peers (peer lingo), even if they do not use that lingo themselves.
Lecture 8 by Julie Zelenski for the Programming Abstractions course (CS106B) in the Stanford Computer Science Department. Julie talks about solving problems recursively. She covers functional recursion with the simple example of writing an exponential function using recursion, then moves from that simple version to a more efficient recursive implementation. Her next example is binary search and how recursion is used in that instance.
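The lecture's examples are written in C++; as a rough companion sketch (our illustration, not Julie's actual code), here are the same three ideas in Python: a naive recursive exponential, the more efficient halving version, and a recursive binary search.

```python
def power(base, exp):
    """Naive recursive exponentiation: exp multiplications, O(exp)."""
    if exp == 0:
        return 1
    return base * power(base, exp - 1)

def fast_power(base, exp):
    """More efficient recursion: halve the exponent each call, O(log exp)."""
    if exp == 0:
        return 1
    half = fast_power(base, exp // 2)
    if exp % 2 == 0:
        return half * half
    return base * half * half

def binary_search(items, key, lo=0, hi=None):
    """Recursive binary search over a sorted list; returns an index or -1."""
    if hi is None:
        hi = len(items) - 1
    if lo > hi:                 # base case: empty range, key not present
        return -1
    mid = (lo + hi) // 2
    if items[mid] == key:
        return mid
    if items[mid] < key:        # recurse on the upper half
        return binary_search(items, key, mid + 1, hi)
    return binary_search(items, key, lo, mid - 1)

print(power(2, 10), fast_power(2, 10))          # 1024 1024
print(binary_search([1, 3, 5, 7, 9, 11], 7))    # 3
```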
How to Analyze the Use of Metaphors in Literature Analyzing any element of writing requires you to think beyond a basic summary and consider the purpose and intent of the element along with its relationship to the whole text. This technique also applies to analyzing literary elements and devices, such as a metaphor. Metaphors -- comparisons of two unlike things without using “like” or “as” -- are commonly used in prose and poetry to advance a theme, setting, character or plot. Uncovering the Meaning To analyze a metaphor, you must first identify it. Mark the passage with a highlighter or pen to indicate the exact metaphor. A simple metaphor will consist of a single phrase or line, while an extended metaphor may span the entire passage. Next, determine the two elements of comparison. Use the margins of the text to write out the literal element and the descriptive or unlikely comparison. Once you have identified the metaphor’s components, analyze them for relevance, application and purpose. By interpreting the metaphor and applying it to the entire text, you will discover a new layer of meaning in the text. Analyzing metaphors may reveal more about the setting by providing the reader with a vivid mental picture. For example, an excerpt from Gary Soto’s poem, “Oranges,” reads, “I peeled my orange / That was so bright against / The gray of December / That, from some distance / Someone might have thought / I was making a fire in my hands.” The metaphor “the gray of December” exposes a dull and dreary day, which, when applied to the rest of the statement and the entire poem, juxtaposes a drab winter day against the joy the narrator is experiencing on his first date. Analyzing a metaphor often reveals a text’s theme. For example, Edgar Lee Masters’ poem “George Gray” compares a sailboat to a person’s life: “And now I know that we must lift the sail / And catch the winds of destiny / Wherever they drive the boat.” Masters suggests that a life and a sailboat are both at the mercy of destiny: The captain isn’t fully in control of either. The use of metaphor in this poem creates depth and helps the reader understand the theme. Authors use metaphor to develop dynamic characters both physically and emotionally. Because metaphors enable writers to create new meanings and account for the many lapses in our ability to understand the world around us, characters become more vibrant and accessible through metaphors. For example, in the play “Romeo and Juliet,” Shakespeare has Romeo describe Juliet as “a rich jewel in an Ethiop’s ear-- / Beauty too rich for use, for earth too dear!” By comparing Juliet’s appearance to a rich man’s jewel, Romeo fixes her in the reader’s mind as beautiful and highly valued.
Project WET is an interdisciplinary water education program intended to supplement a school's existing curriculum. Using water as a theme, Project WET provides hands-on activities to enhance the teaching of science, math, social studies, language arts, and many other required subjects. The goal of Project WET is to promote the awareness, appreciation, knowledge and stewardship of water resources through the development and dissemination of classroom-ready teaching aids. By using Project WET services and resource materials, educators and their students will gain the knowledge, skill, and commitment needed to make informed decisions about water resource use and conservation. Project WET is primarily designed for classroom teachers of grades K-12, but natural resource professionals, youth leaders, nature center instructors, and other educators who work with students in these age groups will also find it particularly useful. The heart of Project WET is the 500-page Curriculum and Activity Guide, which contains more than 90 hands-on water activities and includes instructions, background materials, and cross-references for each activity. Project WET also offers a guide to hands-on wetland education entitled WOW! The Wonders of Wetlands. Additional water education resources cover water history, groundwater flow models, children's stories, and more. Project WET materials are distributed through workshops of at least six hours; there is a $10 per person fee for the workshop. Project WET is co-sponsored in Oklahoma by the Oklahoma Conservation Commission, the Department of Environmental Quality, and the Water Resources Board. To learn more about Project WET in Oklahoma, contact Karla Beatty at the Oklahoma Conservation Commission, 405-521-6788, or e-mail her at [email protected].
As the CosmosID blog illustrates regularly, microbes are remarkable for their ability to shape and affect just about every aspect of the world we live in. Yet, each week we are newly awed by publications that highlight discoveries of microbial feats and applications. This week was no different, as we were captivated by a story about a researcher who aims to cure cancer using salmonella. The idea of treating cancer with bacteria is not new. In fact, research on this particular topic dates back to at least the 1890s. However, up until now, research efforts have been inhibited by the toxicity of salmonella. The person behind this more recent effort is the research director for the Cancer Research Center in Columbia, Missouri, Abe Eisenstark, who has a background in salmonella research. It was this experience that drove him to direct Alison Dino, a scientist at the University of Missouri, to start experimenting with using old salmonella samples as weapons against cancer. Entertaining this wild notion, Dino began by putting these bacterial samples into the same petri dishes occupied by tumor cells. As she combined the two biological warriors for a showdown, she wanted to see if the salmonella would be drawn to the tumor cells, and if the bacteria would also leave healthy cells alone. A win would mean the bacteria attacked the cancer cells while leaving the healthy cells unharmed. To her amazement, she found a promising candidate in a strain labeled CRC 1674. It helps to first understand that research had already shown that salmonella is drawn to tumor cells. But the novel idea behind this research story is Dr. Eisenstark’s insight to use aged salmonella. He predicted that the bacteria samples, having survived in isolated vials for decades, would have adapted to low energy environments by inhibiting their own toxicity, as that characteristic demands energy that the bacteria could not have spared.
So, what is color theory? Color theory is the theory behind color mixing and color pairing. It provides methods and guidance for understanding how colors influence one another. Understanding colors and how they can work together or against one another can really help your designs look their absolute best. Having a general working knowledge of color theory will help you when you are choosing not only the colors of your designs, but also the colors of the products you choose for your designs. The 3 Characteristics of Color These terms describe the basic attributes of any color. Understanding these terms, and learning how to manipulate the characteristics of the colors you work with, can be a highly valuable skill. Hue refers to the color itself. So, if you are dealing with a red, the hue would be red. Saturation refers to the amount of hue that is present in any given color. If you completely desaturate a color, it takes the hue away entirely, and you are left with some variation of gray. If you add saturation, it intensifies the hue. Value refers to the darkness or lightness of a color. Yellow has a lighter value than purple. The Color Wheel Color wheels are great tools for beginners and experts alike! The color wheel helps to visualize the way colors work together. It can also help show what happens when one color is added to another. Color wheels can be simple or very complex! If color theory interests you, I would highly recommend purchasing a color wheel of your own! Red, Yellow, Blue Theoretically, all colors are some variation of the primary colors. Primary colors also cannot be created by mixing other colors. They are the foundation of all color! Green, Purple, Orange Secondary colors are colors that are created when you mix together two primary colors. For example, when you mix blue and yellow, you get green. When you mix blue and red, you get purple. Yellow-orange, red-orange, red-purple, blue-purple, blue-green, yellow-green Tertiary colors are created when you mix a secondary color and a primary color together. There are two main ways to create color harmony: you can use either complementary or analogous color pairings. Complementary colors sit opposite one another on the color wheel, and they appear more vibrant and intense when placed next to each other. If you really want your colors to POP, use complementary colors! Blue and blue-purple are opposite on the color wheel from orange and yellow-orange; these pairings intensify and enhance each other. Yellow and yellow-green are opposite on the color wheel from purple and red-purple; these colors make each other pop! Analogous colors appear next to each other on the color wheel. If you choose any three colors that sit alongside one another on the color wheel, you are sure to come up with a harmonious pairing. Yellow-orange, orange, and red-orange are all next to one another on the color wheel, and all three colors work well together. Blue-purple, purple, and red-purple are also all next to one another on the color wheel, and all three of these colors also look great together! You can do this with any 3 colors that are connected on the color wheel. Hopefully this lesson in color theory was helpful to you all! If you have any questions or comments, please let us know in the comments section down below!
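If it helps to see the wheel as data, here is a small illustrative Python sketch (the wheel list and function names are our own, not from any color library) that looks up complementary and analogous hues on the standard 12-step wheel described above.

```python
# The twelve-step artist's color wheel, listed in wheel order.
WHEEL = [
    "yellow", "yellow-orange", "orange", "red-orange", "red", "red-purple",
    "purple", "blue-purple", "blue", "blue-green", "green", "yellow-green",
]

def complementary(color):
    """The complement sits directly opposite: six steps around a 12-slot wheel."""
    i = WHEEL.index(color)
    return WHEEL[(i + 6) % len(WHEEL)]

def analogous(color):
    """An analogous trio: the color plus its two immediate neighbors."""
    i = WHEEL.index(color)
    return [WHEEL[(i - 1) % len(WHEEL)], color, WHEEL[(i + 1) % len(WHEEL)]]

print(complementary("blue"))   # orange
print(complementary("yellow")) # purple
print(analogous("orange"))     # ['yellow-orange', 'orange', 'red-orange']
```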
From Past to Future, Germanium May Revolutionize Computers Again Over sixty years ago the first transistors were developed, and they were quite different from today's critical computing component in a number of ways. Perhaps the most obvious difference is that in the beginning they were the size of a thumbnail, while today they are measured in nanometers. Another difference, though, may be undone in the future, thanks to researchers at The Ohio State University. The dominant material in electronics today is the semiconductor silicon, but those first transistors were made from germanium, another semiconductor that is still used today, though less commonly. That may change, as the researchers have isolated germanane, the germanium equivalent of graphene. Both graphene and germanane are single-atom-thick sheets of their respective elements that conduct electrons at high speed, but germanane has the important property of being a semiconductor, unlike graphene. Researchers had been trying to create it for some time, but the germanium atoms of one sheet kept binding to others, forming multilayered crystals. To overcome this, the researchers forced calcium atoms between the layers before replacing them with hydrogen atoms. The hydrogen-terminated layers bond to each other so weakly that the sheets of germanane could be successfully pulled apart. While the electron mobility of germanane is definitely valuable, another property should be as well. Unlike silicon and bulk germanium, germanane has a 'direct band gap,' which means it absorbs and emits light very easily; an important property for future optoelectronics. Source: The Ohio State University
Symbiotic Upcycling: Turning “Low Value” Compounds into Biomass Scientists Discover the First Known Sulfur-oxidizing Symbiont to be Entirely Heterotrophic - Image caption: Kentrophoros stained with a fluorescent dye that stains its DNA green. The three bright spots in the middle are the three cell nuclei of the eukaryotic host. The green "haze" elsewhere is the tightly packed bacterial symbiont cells, too small to be individually distinguished in this image. Image: Brandon Seah/Max Planck Institute for Marine Microbiology. Kentron, a bacterial symbiont of ciliates, turns cellular waste products into biomass. It is the first known sulfur-oxidizing symbiont to be entirely heterotrophic. Researchers from the Max Planck Institute for Marine Microbiology now report on this unexpected bacterium that turns waste into food. Plants use light energy from the sun for photosynthesis to turn carbon dioxide (CO2) into biomass. Animals can’t do that. Therefore, some of them have teamed up with bacteria that carry out a process called chemosynthesis. It works almost like photosynthesis, except that it uses chemical energy instead of light energy. Many animals rely on chemosynthetic bacteria to supply them with food. The symbionts turn CO2 into biomass and are subsequently digested by their host. Kentron, the bacterium nourishing the ciliate Kentrophoros, was thought to be ‘just another’ chemosynthetic symbiont. However, recent results indicate that it is not. Turning waste into food An international team led by scientists from the Max Planck Institute for Marine Microbiology sequenced the genome of Kentron, the sulfur-oxidizing symbiont of the ciliates. “Contrary to our expectations, we couldn’t find any of the known genes for the fixation of CO2,” reports first author Brandon Seah. Without being able to fix CO2, what does Kentron grow on? “From their genes, it seems that Kentron uses small organic compounds and turns those into biomass,” explains Nicole Dubilier, director at the Max Planck Institute for Marine Microbiology and senior author of the study. These include compounds such as acetate or propionate, which are typical ‘low value’ cellular waste products. “In this sense, Kentron is upcycling the garbage. It most probably recycles waste products from the environment and from their hosts into ‘higher value’ biomass to feed their hosts.” Underpinning genetic analyses with isotope fingerprinting Kentrophoros is a thin, ribbon-like ciliate that lives in sandy marine sediments, where it can easily squeeze and move between sand particles. It relies almost entirely on its symbionts for nutrition and has even given up its own mouth. Seah, who now works at the Max Planck Institute for Developmental Biology in Tübingen, and his colleagues collected specimens at sites in the Mediterranean, Caribbean and Baltic Seas. However, Kentrophoros does not grow and reproduce in the lab. So how could the researchers investigate Kentron’s food preferences? “Our collaborators in Calgary and North Carolina have developed a way to estimate the stable isotope fingerprint of proteins from the tiny samples that we have,” Seah explains. This fingerprint tells a lot about the source of carbon an organism uses. The Kentron bacteria have a fingerprint that is completely unlike that of any other chemosynthetic symbiont from similar habitats.
“This clearly shows that Kentron is getting its carbon differently than other symbionts.” Textbook knowledge put to the test This research provides a counterexample to textbook descriptions, which usually say that symbiotic bacteria make most of their biomass from either CO2 or methane. In contrast, Kentron does not appear to have this ability to make biomass from scratch. “Uptake of organic substrates from the environment and recycling of waste from their hosts might play a bigger role in these symbioses than previously thought,” senior author Harald Gruber-Vodicka from the Max Planck Institute for Marine Microbiology concludes. “This has implications for ecological models of carbon cycling in the environment, and we are excited to look further into the details and pros and cons of either strategy.” Brandon K. B. Seah, Chakkiath Paul Anthony, Bruno Huettel, Jan Zarzycki, Lennart Schada von Borzyskowski, Tobias J. Erb, Angela Kouris, Manuel Kleiner, Manuel Liebeke, Nicole Dubilier, Harald R. Gruber-Vodicka: Sulfur-oxidizing symbionts without canonical genes for autotrophic CO2 fixation, mBio (2019); DOI: 10.1128/mBio.01112-19.
Black Lives Matter This article was initially published on our blog by the Fashion Revolution Brazil team and written by Bárbara Poerner, on Monday, June 1st. It has been translated into English below. Black lives matter. And black lives were used for cotton production in the United States during slavery times. Cotton has long been a protagonist in the textile industry. If today this natural fibre is the most used and produced around the world, it is because throughout history, it has been a driving product of our capitalist system. Cotton cultivation is very old – it predates the emergence of markets – but cotton was one of the first textiles to be industrialised, during the period of the European Industrial Revolution, mainly in England. Cotton production widely spurred on the Industrial Revolution, and while women and children worked extensively in spinning mills in places like Manchester (United Kingdom), African people were kidnapped on their continent of origin and enslaved on huge cotton plantations in the United States and elsewhere. The southern United States was a major cotton exporter and, consequently, a major producer. Many cotton workers were slaves, and although fibre and fabric were not the main items in global trade at the time, millions of black people were enslaved to produce them. In the years that followed, those same southern US states legislated racism through racial segregation. Today, this segregation is ongoing, no longer in laws, but institutionalized and intrinsic throughout the country. On May 25, 2020, a man named George Floyd was brutally murdered by the US police. This case, along with so many others, is part of the legacy of violence that racism promotes, and of how years of slavery and racism fuel a policy of extermination of black people. It reveals how the productive systems of fashion (textiles and clothing) rest on exploitative and perverse systems, which leave their marks of death to this day. The cotton of the slavery period was the product of the slavery system; it was the product of the deaths of black people. That same system reveals itself today, in the murder of George Floyd, in the stray bullets that always find black bodies like João Pedro and João Vitor, in the deaths of others who are marginalized, and in the death of being able to dream and live in peace, breathing. This problem is not exclusive to the USA; Brazil has its systems rooted in the same context of structural racism. And although colonisation and slavery here were different, with fewer cotton fields and more coffee and sugar fields, we see a manifestation of that same racism that also kills black people, expels quilombola [Afro-Brazilian] communities from their territories, marginalises the black population and leaves behind inequality. This process also occurs through the exploitation of nature, which happens simultaneously with the exploitation of people in this collection of injustices. Violence is renewed and runs through the entire fashion production chain, where many people, mostly women, are subjected to conditions of modern slavery, and racism continues to show its face in the fields through which the chain passes. We need to look at the past of the fashion industry and understand its implications for the present. We need to affirm our commitment to a revolution in fashion, to walk the walk, and accept the need to build firmly anti-racist actions in the present, so that there is no longer a future for racism.
It is urgent to revolutionize fashion because black lives matter. They always have, and they always will. by Bárbara Poerner
The full exploration of Mars began in earnest in 1976, when the audacious Viking mission arrived at the Red Planet. Previous spacecraft had carried out an initial reconnaissance, but Viking went much further. It brought not just one, but two spacecraft, each of which dispatched a complex laboratory to land on the surface. What's more, both of these Viking orbiters were designed not for a quick fly-by, but for a long-term mapping mission. Together, the Viking 1 and 2 orbiters captured many thousands of high-quality images of the surface. To this day, Mars explorers consider the data Viking collected part of a valuable knowledge base. Viking 1 spent more than four years in active service, and toward the end of its life it snapped hundreds of pictures during the early northern Martian summer. The U.S. Geological Survey combined those images into a series of extraordinary mosaics. Instead of focusing on one region or feature on the surface, these new compilations recreated how the entire face of Mars would look to an observer in a spacecraft flying about 2500 kilometers above the ground. These USGS hemispheric mosaics were prepared in high resolution and somewhat enhanced color. The result is a crisp, clear studio portrait of a planet. There's a word that's highly overused when describing pictures from space, but these images are, in fact, stunning. The first mosaic below is one of the most commonly-used pictures of Mars. It includes a couple of particularly striking features: the hemisphere-wide gash of Valles Marineris, and three towering volcanoes in Tharsis. But it's not the only portrait in the series. Here are several more, some much less often seen. Together, they reveal the several faces of a world. For all the excitement of following the rovers as they pick their way carefully through an explorer's playground in Gale and Endeavour, it's refreshing to see a reminder of this planet's grand, stark sweep.
Of all the categories of materials out there, using recyclables to make recycled crafts ranks at the top of our list! Recycled crafts for kids teach resourcefulness, one of the most important traits any creative person can have. Not only that, they teach kids how things are made, the different materials we use in products, and how these materials impact our planet. In this post we’ll cover why we love using recyclables in creative projects and the best materials to save, and share a GIGANTIC list of projects to try, organized by material! Why Recycled Materials Rock: - They teach resourcefulness. Using recycled materials forces kids to think of new ways to use a common material or product. It’s the ultimate creative challenge to transform something made for one purpose into something entirely new. - They are free or cheap! Why buy materials when you can reuse something that would have otherwise gone in the recycling bin? Raiding the bin for project supplies is a great way to save money and time. - It’s an introduction to the environmental impact of our choices. The materials a product is made from impact our environment both positively and negatively. None of us is 100% perfect in our choices, but by learning about materials, which ones decompose and which do not, we can begin to teach kids about making responsible choices in the products they buy. The Best Recycled Materials to Hoard Once you start using recycled materials in projects you may find yourself turning into a hoarder…that’s okay! I have a few bins dedicated entirely to recycled materials in my garage. It’s a good idea to collect and store materials in bins so that you can keep everything organized and not drive your family crazy. It can also take time to build up a stash of certain materials, so having a dedicated area to store them will help you find them down the line. Case in point: I spent 6 months collecting coffee cans for our camp (and drank a lot of coffee), and each time I was done with a can I threw it in a bag in the garage. How to use this list: To see projects for each recyclable, click on the material links below to be taken to the project ideas section in this post. - Cardboard Tubes - Plastic Caps - Plastic Lids - Tin Cans - Coffee Cans - Bubble Wrap - Egg Cartons - Plastic Baskets - Disposable Baking Trays - Plastic Cups - Plastic Cutlery - Milk Cartons - Packing Peanuts - Plastic Bags - Paper Bags - Cereal Boxes Recycled Crafts Idea List This list is long because there are so many amazing projects out there featuring recycled materials! Making recycled materials crafts is a wonderful way to teach kids how to be resourceful and inventive, but also to think about the choices we make as consumers. I hope this massive list of projects leaves you inspired AND reflective about the everyday materials we use.
Freeboard is a design characteristic of a vessel. For small boats, it is the minimum distance from the top edge of the side of the boat, or gunwale, down to the waterline. The weight of the loaded boat equals the weight of the volume of water it displaces. The boat's weight affects the freeboard because the heavier it is, the lower the boat sits in the water. The displacement or weight of the vessel and its physical dimensions determine the theoretical freeboard of the boat. Find the length of the boat from the boat's specifications or measure the length at the expected waterline. Find or measure the height of its side or gunwale from the keel. This is the minimum vertical distance from the bottom of the hull to the top of the side. Find the boat's beam or measure its maximum width. Calculate the effective area of the hull by multiplying the length at the waterline by the beam and a factor of 0.6. This factor corrects for the shape of the hull in boats without flat bottoms, as most of them are. Find or estimate the weight of the boat and add it to that of the passengers and contents. This is the displacement of the vessel. Divide the displacement by the density of water, which is approximately 62.4 pounds (28.3 kilograms) per cubic foot. This is the volume of the water displaced by the submerged boat. Divide the displaced volume by the cross-sectional area of the submerged part of the boat calculated above. The result is the submerged depth of the boat. Subtract the submerged depth from the gunwale height. The result is the freeboard. A boat with a V-shaped hull is 12 feet long with a width of 36 inches, or three feet. The boat weighs 100 pounds (45.4 kilograms) and, with two passengers and gear, its displacement is 500 pounds (227 kilograms). The shortest vertical distance from the keel to the top of the gunwale is 15 inches. The effective area of the hull is 12 feet * 3 feet * 0.6, or 21.6 square feet. The 500-pound displacement divided by the water density of 62.4 pounds per cubic foot is about 8 cubic feet of water. Dividing the 8 cubic feet by the boat's area of 21.6 square feet yields about 0.37 feet, or 4.5 inches of water. Subtracting 4.5 from the 15-inch height of the gunwale produces a freeboard of 10.5 inches. - If the boat has a relatively flat bottom and the hull shape is simple enough to approximate with a small number of polygons, calculate the boat hull's area directly without using the 0.6 factor. In this case, measure the hull width at several points and approximate its shape with composite figures. Calculate the overall area as the sum of the areas of the individual polygons. This will usually be a rectangle for the aft part of the boat and a triangle or rhombus for the bow.
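To check the arithmetic, here is a short Python sketch of the same procedure (the function name and its default arguments are ours, for illustration; the density default is fresh water in pounds per cubic foot):

```python
def freeboard_inches(length_ft, beam_ft, gunwale_in, displacement_lb,
                     hull_factor=0.6, water_density_lb_ft3=62.4):
    """Estimate small-boat freeboard following the steps in the text.

    hull_factor ~0.6 corrects for non-flat hull shapes; use 1.0 (or a
    polygon-based area) for a flat-bottomed hull.
    """
    area_sqft = length_ft * beam_ft * hull_factor          # effective hull area
    volume_cuft = displacement_lb / water_density_lb_ft3   # displaced volume
    draft_ft = volume_cuft / area_sqft                     # submerged depth
    return gunwale_in - draft_ft * 12.0                    # freeboard, inches

# Worked example from the text: 12 ft V-hull, 3 ft beam, 15 in gunwale,
# 500 lb total displacement.
print(round(freeboard_inches(12, 3, 15, 500), 1))  # -> 10.5
```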
Primary storage, or memory, is the working space a computer uses while programs are running; it is held in chips, not on the hard drive. Memory consists of four types of memory chips: RAM, ROM, CMOS and flash. RAM stands for random access memory and ROM stands for read only memory; these are also called the primary memory of a computer. RAM is a type of chip used in primary storage. It is temporary storage, holding software instructions and short-term working data for the processor. RAM can be increased in most computers by using the expandable memory slots. Primary storage (RAM) is called 'primary' because it is the main memory that is accessible to the CPU. It is used to store data that is currently being used. Random access memory Random access memory, or RAM, is a form of data storage used in computers. Taking the form of integrated circuits that provide primary or temporary storage, it allows stored data to be accessed in any order, which is why it is called random. Random rather than sequential access greatly increases the speed at which the computer can operate, since no time is wasted moving to the place where the needed data is stored (as in tape backups). Volatile describes RAM as short-term memory: when the computer loses power, the contents of this temporary storage are lost. To prevent data from being lost, it must be saved to a hard disk or another form of permanent storage. RAM is in most cases faster than a hard drive, and some RAM is faster than flash memory (which is not volatile). This greater speed is why it is still used in computers today. Computers will run faster and more efficiently with more RAM. The suggested amount of RAM for operating systems such as Windows Vista/XP and OS X 10.5 to run smoothly, given the weight of shared graphics memory and programs, is 1024 megabytes (1 GB). RAM currently comes in four types: DDR1, DDR2, DDR3 and DDR4, the last available since 2014. DDR1 and DDR2 are currently the most used. DDR3 operates at a higher frequency and supports more data-transfer channels, increasing bandwidth. Newly developed SSD hard disks are capable of meeting and exceeding the transfer rate of DDR and DDR2. A 32-bit operating system is only capable of utilizing 4 GB of RAM; a 64-bit operating system can support much more. Compared to ROM, RAM is costlier. ROM (read only memory) refers to a read-only memory chip that cannot be written to or erased by the computer user without special equipment. ROM contents are not lost when power to the computer is no longer available. Since it does not need power and cannot be rewritten, the only things put on ROM are starting (booting) instructions. CMOS stands for complementary metal–oxide–semiconductor, a technology used in chips and analog circuits. A CMOS chip does not lose its contents when the computer is turned off, even if the content was not saved, and it keeps the time and date current even when the computer is powered down.
By Steven McGinty One of the biggest myths of modern times is that all children and young people are ‘digital natives’. That is, that they have developed an understanding of digital technologies as they’ve grown up, rather than as adults. But this view has been heavily contested, with research highlighting that young people are not a “homogeneous generation of digital children”. In the media, the issue is rarely given attention. Instead, news reports focus on the use of futuristic technologies in the classroom, such as East Renfrewshire Council’s recent announcement of its investment of £250,000 in virtual reality equipment. The less-discussed truth is that many children and young people are leaving school without basic digital skills. In 2017, the Carnegie UK Trust published a report challenging the assumption that all young people are digitally literate. It highlighted that as many as 300,000 young people in the UK still lack basic digital skills, and that although more are becoming digitally engaged, the divide is deepening for those who remain excluded. In particular, the report highlighted that vulnerable young people are most at risk, such as those who are unemployed, experiencing homelessness, living in care, in secure accommodation, excluded from mainstream education, or seeking asylum. Research by the UK Digital Skills Taskforce has also found that many young people lack digital skills. However, an arguably more worrying finding from their study was that 23% of parents did not believe digital skills were relevant to their children’s future career success. This suggests that digital literacy is as much associated with socio-cultural values as with whether you are Generation X or Generation Y. Similarly, the CfBT Education Trust examined the digital divide in access to the internet for school students aged five to 15. It found that children from households of the lowest socio-economic class access the internet for just as long as those from other backgrounds, but they are significantly less likely to use the internet to carry out school work or homework. As a result, the report recommended that interventions should not focus on improving access but rather on ensuring that students are using technology effectively. Further research by the CfBT Education Trust found that only 3% of young people did not have access to the internet, and suggested that schemes which provide students with free equipment are in danger of wasting resources. Many believe digital skills are essential for academic success. This includes the House of Lords Select Committee on Communications, which in 2017 recommended that digital skills should be taught alongside reading, writing and mathematics, rather than in specialist computer science classes. Research, however, is unclear on the digital divide’s impact on educational performance (for example, research has shown that smartphone use has no impact on educational attainment). But teachers are concerned about their pupils, and in a 2010 survey 55% of teachers felt that the digital divide was putting children at a serious disadvantage. However, there are organisations offering hope to young people. For instance, Nominet Trust’s Digital Reach programme is working with leading youth organisations to increase digital skills amongst some of the UK’s most disadvantaged young people.
Vicki Hearn, director at Nominet Trust, explains: “Digitally disadvantaged young people are amongst the hardest-to-reach, and we need new models to engage with them to disrupt the cycle of disadvantage and exclusion. Our evidenced approach gives us confidence that Digital Reach will have a tangible impact on the lives of those who have so far been left behind.” Whether someone has digital skills often comes down to a mix of their socio-economic class, cultural values, and even personality traits. However, if everyone is to prosper in a digital society, it will be important that all children and young people are encouraged to develop these digital skills, so they can utilise the technologies of tomorrow. The Knowledge Exchange provides information services to local authorities, public agencies, research consultancies and commercial organisations across the UK. Follow us on Twitter to see what developments in policy and practice are interesting our research team.
Sapphires, like any naturally occurring gemstone, are formed by the shifts, mixings and chemical changes constantly taking place in the earth. Sapphires are created through certain shifts in heat and pressure, and can be found in both metamorphic and igneous rocks. Rocks in which sapphires can be found include granite, schist, gneiss, nepheline syenite and a variety of others. They may also be found in deposits of alluvium. Naturally formed sapphires are hexagonal crystals of the mineral corundum. Due to their remarkable hardness, second only to diamond, sapphires are highly prized. Corundum can be found in a variety of colors; however, it is only considered a sapphire when it is not red. Red corundum is referred to as a ruby. During the formation of corundum, the coloring of the stone depends on what minerals are present. For instance, when iron is present, sapphires may have a green or yellow hue, whereas the presence of vanadium will create purple sapphires. The most prized sapphires are blue, a result of titanium being present when the stone is formed. With advances in science and technology, methods have been created for artificially growing sapphire crystals. The original process was discovered in 1902, and it consisted of alumina powder being added to an oxyhydrogen flame, which is in turn directed downward. Alumina in this flame is slowly "deposited" in a teardrop shape called a boule. A variety of chemicals can be added throughout this process to create sapphires of multiple hues, as well as red rubies. While other processes have been discovered since the early 1900s, it is these artificial sapphires that have opened up the use of the stone for technological purposes, including use in panes of glass and as focusing devices in lasers.
More than one-tenth of children exhibit some sort of developmental delay. A child may be slow in learning to climb stairs, toileting independently, catching on to reading or picking up on social cues. As a preschooler, Albert Einstein dealt with delayed speech. And, like Einstein, most children with one or more developmental delays will ultimately close the gap between their abilities and those of their typical peers. There are certain behavioral techniques parents can apply to help a child cooperatively participate in activities focused on areas of concern. The most effective behavioral techniques for developmentally delayed children begin with a list of reward items your child finds especially enticing. He may like swinging, chocolate-chip cookies or listening to a certain music group. Compile a list of free or inexpensive items that your youngster particularly enjoys. Each of these items can serve as a reward or motivator as you work with your child on any objective related to a life-skill area of concern. For instance, if you want your child to independently wash her hands and she loves playing computer games, match the task of hand-washing with the reward of extra computer time. For developmentally delayed children, life can be frustrating. Even young children can quickly become discouraged and display avoidance behaviors toward whatever facet of life gives them grief. Part of an effective behavioral technique that encourages children to tackle projects not to their liking is to chunk the task down into easy-to-accomplish segments. For instance, for hand-washing, instruct your daughter to turn on the water, then turn it off. Have her do this five times. Then, allow her a short time with a reward item for a job well done. Now, model for her turning the water on, squirting soap on her hands and rubbing them together. Have her follow through on these short, simple steps a few times. Again, allow her reward time. Finally, demonstrate for her turning on the water, soaping and rinsing the hands, turning off the water and towel-drying them. As she successfully follows through on the multiple steps, again reward her with a predetermined item from her reward list. When working with developmentally delayed children, there are two very important communication techniques that set them up for a successful learning experience. First, after giving your child a directive -- such as turn the water faucet on and off -- have him repeat back to you what he is to do. You may need to give the instruction more than once; continue to request that he repeat the direction back to you until he gets it right. Consistently employ this checking technique to be sure that your child understands up front what he is to do. Then, catch him doing it correctly! Quick and frequent verbal encouragement provides oral guidance that both reassures your child that he's on the right track and motivates him to repeat it. At the end of any activity with your developmentally delayed child, take a moment to review with her what just took place. Questions should be short, to the point and centered on the nature of the task: "What did you just do?" She may answer, "I washed my hands all by myself." This is your opportunity to reassure yourself that she understood the skill she was working on and to join with her in a high-five for a job well done. Re-employ these same behavioral techniques to motivate your child to expend effort on any life skill she may need extra work on.
National Human Genome Research Institute Bethesda, Md., Thurs., May 6, 2010 - Researchers have produced the first whole genome sequence of the 3 billion letters in the Neanderthal genome, and the initial analysis suggests that up to 2 percent of the DNA in the genome of present-day humans outside of Africa originated in Neanderthals or in Neanderthals' ancestors. The international research team, which includes researchers from the National Human Genome Research Institute (NHGRI), part of the National Institutes of Health, reports its findings in the May 7, 2010, issue of Science. The current fossil record suggests that Neanderthals, or Homo neanderthalensis, diverged from the primate line that led to present-day humans, or Homo sapiens, some 400,000 years ago in Africa. Neanderthals migrated north into Eurasia, where they became a geographically isolated group that evolved independently from the line that became modern humans in Africa. They lived in Europe and western Asia, as far east as southern Siberia and as far south as the Middle East. Approximately 30,000 years ago, Neanderthals disappeared. That makes them the most recent extinct relative of modern humans; both Neanderthals and humans share a common ancestor from about 800,000 years ago. Chimpanzees diverged from the same primate line some 5 million to 7 million years ago. The researchers compared DNA samples from the bones of three female Neanderthals who lived some 40,000 years ago in Europe to samples from five present-day humans from China, France, Papua New Guinea, southern Africa and western Africa. This provided the first genome-wide look at the similarities and differences between humans and their closest evolutionary relative, and may even identify, for the first time, genetic variations that gave rise to modern humans. "This sequencing project is a technological tour de force," said NHGRI Director Eric D. Green, M.D., Ph.D. "You must appreciate that this international team has produced a draft sequence of a genome that existed 400 centuries ago. Their analysis shows the power of comparative genomics and brings new insights to our understanding of human evolution." The Neanderthal DNA was extracted from bones discovered at Vindija Cave in Croatia and prepared in the clean room facility of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, to prevent contamination with contemporary DNA. The Max Planck group is led by the Department of Evolutionary Genetics director Svante Pääbo, Ph.D., a well-known pioneer in Neanderthal genome research. The team deposited the Neanderthal genome sequence in GenBank, the publicly available NIH genetic sequence database. To understand the genomic differences between present-day humans and Neanderthals, the researchers compared subtle differences in the Neanderthal genome to the genomes found in the DNA from the five people, as well as to chimpanzee DNA. An analysis of the genetic variation showed that Neanderthal DNA is 99.7 percent identical to present-day human DNA, and 98.8 percent identical to chimpanzee DNA. Present-day human DNA is also 98.8 percent identical to chimpanzee DNA. "The genomic calculations showed good correlation with the fossil record," said coauthor Jim Mullikin, Ph.D., an NHGRI computational geneticist and acting director of the NIH Intramural Sequencing Center. "According to our results, the ancestors of Neanderthals and modern humans went their separate ways about 400,000 years ago."
The comparison between Neanderthal and present-day human genomes has produced a catalog of genetic differences that allows the researchers to identify features that are unique to present-day humans. For example, the catalog includes differences in genes that code for functional elements, such as proteins, in which the Neanderthal versions are more like those of the chimpanzee than of present-day humans. Some evolutionary changes were found in known genes involved in cognitive development, skull structure, energy metabolism, skin morphology and wound healing. Anthropologists have used the fossil record to construct tree-shaped diagrams that show how the different branches of hominins, a group that includes humans and human ancestors, split off from one another. These diagrams tend to proceed in a straight line, from the tree-trunk base of a common ancestor through progressively smaller branches until the species of interest is reached. The Neanderthal data suggest evolution did not proceed in a straight line. Rather, evolution appears to be a messier process, with emerging species merging back into the lines from which they diverged. The view emerging from the genomic data suggests that Neanderthals - who migrated out of Africa a few hundred thousand years ago - re-encountered anatomically modern humans, who began migrating out of Africa some 80,000 years ago. Humans migrating out of Africa likely traveled in small pioneering groups and appear to have encountered Neanderthals living in the Fertile Crescent of the Middle East about 60,000 years ago. "It was a very unique series of events, with a founding population of modern humans of greatly reduced size -- tens to hundreds of individuals," Dr. Mullikin said. Geneticists can detect a population constriction or bottleneck where certain genetic markers are concentrated; that only occurs when the population is small. "At that time," Dr. Mullikin continued, "where the population was greatly reduced, the modern humans migrating out of Africa encountered Neanderthals and inter-breeding occurred between the two groups, leaving an additional, but subtle, genetic signature in the out-of-Africa group of modern humans." As modern humans migrated out of the Middle East after encountering Neanderthals, and dispersed across the globe, they carried Neanderthal DNA with them. The research team concluded that 2 percent of the genomes of present-day humans living from Europe to Asia - and as far into the Pacific Ocean as Papua New Guinea - was inherited from Neanderthals. The team did not find traces of Neanderthal DNA in the two present-day humans from Africa. It is not known, however, whether a more systematic sampling of African populations will reveal the presence of Neanderthal DNA in some indigenous Africans. "The data suggests that the genes flowed from Neanderthal to modern humans," Dr. Mullikin said. "That had to have occurred at least once during the 20,000 to 30,000 years in which modern humans and Neanderthals both lived on the Eurasian continent." The researchers have not yet detected any signs that DNA from modern humans can be found in the Neanderthal genome. Previous studies, such as the International HapMap Project, which created a comprehensive catalog of human genetic variation, examined common genetic variation in populations across the globe and concluded that the average genetic variation among people in Asia, Europe and Africa was essentially identical.
The current study raises the possibility that Europeans and Asians, whose genomes include Neanderthal DNA, may be slightly more distinct from Africans than previously appreciated - a difference at the DNA sequence level that could not be seen at the resolution of the HapMap. "These are preliminary data based on a very limited number of samples, so it is not clear how widely applicable these findings are to all populations," said Vence L. Bonham, Jr., J.D., senior advisor to the NHGRI Director on Societal Implications of Genomics. "The findings do not change our basic understanding that humans originated in Africa and dispersed around the world in a migration out of that continent." NHGRI is one of the 27 institutes and centers at the NIH, an agency of the Department of Health and Human Services. The NHGRI Division of Intramural Research develops and implements technology to understand, diagnose and treat genomic and genetic diseases. Additional information about NHGRI can be found at its website, www.genome.gov. The National Institutes of Health - "The Nation's Medical Research Agency" - includes 27 institutes and centers, and is a component of the U.S. Department of Health and Human Services. It is the primary federal agency for conducting and supporting basic, clinical and translational medical research, and it investigates the causes, treatments and cures for both common and rare diseases. For more, visit www.nih.gov.
Information about Leatherback turtles Common Name: Leatherback - named for its unique shell, which is composed of a layer of thin, tough, rubbery skin, strengthened by thousands of tiny bone plates, that makes it look "leathery." Scientific Name: Dermochelys coriacea Description: The head has a deeply notched upper jaw with 2 cusps. The leatherback is the only sea turtle that lacks a hard shell. Its carapace is large, elongated and flexible, with 7 distinct ridges running the length of the animal. Composed of a layer of thin, tough, rubbery skin strengthened by thousands of tiny bone plates, the carapace does not have scales, except in hatchlings. All flippers are without claws. The carapace is dark grey or black with white or pale spots, while the plastron is whitish to black and marked by 5 ridges. Hatchlings have white blotches on the carapace. Size: 4 to 6 feet (121-183 cm). The largest leatherback ever recorded was almost 10 feet (305 cm) from the tip of its beak to the tip of its tail and weighed 2,019 pounds (916 kg). Weight: 550 to 1,545 pounds (250-700 kg). Diet: Leatherbacks have delicate, scissor-like jaws that would be damaged by anything other than a diet of soft-bodied animals, so they feed almost exclusively on jellyfish. It is remarkable that this large, active animal can survive on a diet of jellyfish, which are composed mostly of water and appear to be a poor source of nutrients. Habitat: Leatherbacks are primarily found in the open ocean, as far north as Alaska and as far south as the southern tip of Africa, though recent satellite tracking research indicates that they also feed in areas just offshore. Leatherbacks are known to be active in water below 40 degrees Fahrenheit, the only reptile known to remain active at such low temperatures. Nesting: Leatherbacks nest at intervals of 2 to 3 years, though recent research indicates they can nest every year. They nest 6 to 9 times per season, with an average of 10 days between nestings, laying an average of 80 fertilized eggs, the size of billiard balls, and 30 smaller, unfertilized eggs in each nest. Eggs incubate for about 65 days. Unlike other species of sea turtles, leatherback females may change nesting beaches, though they tend to stay in the same region. Range: The most widely distributed of all sea turtles, found worldwide with the largest north-south range of all the sea turtle species. With its streamlined body shape and powerful front flippers, a leatherback can swim thousands of miles over open ocean and against fast currents. Status: U.S. - Listed as Endangered (in danger of extinction within the foreseeable future) under the U.S. Federal Endangered Species Act. International - Listed as Critically Endangered (facing an extremely high risk of extinction in the wild in the immediate future) by the International Union for Conservation of Nature and Natural Resources. Threats to Survival: The greatest threats to leatherbacks are incidental take in commercial fisheries and marine pollution (such as balloons and plastic bags floating in the water, which are mistaken for jellyfish). Population Estimate*: 35,860 nesting females.
Reducing greenhouse gas (GHG) emissions, which result from the burning of fossil fuels, also reduces the incidence of health problems caused by particulate matter (PM) in these emissions. A team of scientists at the Lawrence Berkeley National Laboratory (Berkeley Lab), the National Institute of Environmental Health Sciences (NIEHS), RAND Corp., and the University of Washington has calculated that the economic benefit of reduced health impacts from GHG reduction strategies in the U.S. ranges between $6 and $14 billion annually in 2020, depending on how the reductions are accomplished. This equates to a health benefit of between $40 and $93 per metric ton of carbon dioxide reduction. “The importance of this result,” says Dev Millstein, a Berkeley Lab project scientist who participated in the research, “is that avoiding adverse health impacts from particulate matter can help offset the cost of implementing policies that reduce GHG emissions.” Millstein is in the Environmental Energy Technologies Division (EETD) of Berkeley Lab. The team compared ten different strategies, each equal to one “U.S. wedge.” A wedge is a scenario of activities that reduces CO2 emissions by 150 million metric tons per year in 2020, increasing to 750 million metric tons per year in 2060. Increasingly implemented in the marketplace over time, the strategies in each wedge provide greater and greater reductions in carbon emissions compared to what the emissions would have been without the measure (business as usual). The wedge concept was originally devised by Stephen Pacala and Robert Socolow of Princeton University and has become a standard method of analyzing the impact of mitigation measures on greenhouse gas emissions. Jeffery Greenblatt, a Berkeley Lab author of the health benefits study, contributed to writing the original wedge paper and several follow-up papers. “This paper provides an alternative approach to comparing different ways to reduce greenhouse gases according to how much they improve health,” said John Balbus, M.D., NIEHS Senior Advisor for Public Health. “Decisions about how to address climate change need to be informed by many factors, but we believe this analysis helps advance the thinking on how to bring health considerations into these decisions.” Strategies considered by the team encompassed efficiency improvements in light- and heavy-duty vehicles, buildings and coal power plants; reducing light-duty vehicle-miles traveled; and substitution of coal electricity with lower-carbon energy sources. Examples of increased building efficiency included adding insulation, sealing and energy-efficient windows; increasing the efficiency of appliances, lighting and miscellaneous plug-load devices such as televisions and computers; and more efficient furnaces and water heaters. Coal power substitution options included natural gas, nuclear, wind and solar photovoltaic power. In 2020, health savings from one U.S. wedge of GHG reduction could range from $6 billion to $14 billion per year, depending on the strategy, the researchers calculated. If measures were implemented at an accelerated pace, resulting in reductions of 300 million metric tons of CO2 per year by 2020, the savings would range from $10 to $24 billion. The accelerated case represents a situation in which extremely aggressive policies aimed at reducing greenhouse gas emissions are implemented across the U.S.
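The per-ton figures follow directly from dividing the annual health savings by one wedge's annual reduction; here is a quick Python check using the numbers quoted above:

```python
# One wedge cuts 150 million metric tons of CO2 per year in 2020;
# the estimated health savings are $6-14 billion per year.
wedge_tons_2020 = 150e6

for savings in (6e9, 14e9):
    print(f"${savings / wedge_tons_2020:.0f} per metric ton")
# -> $40 per metric ton
# -> $93 per metric ton
```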
Health benefits would come primarily from the reduction in particulate matter emissions that would result from movement away from burning of fossil fuels. Particulate matter less than 10 micrometers in size (PM-10) can enter the lungs and bloodstream. PM less than 2.5 micrometers in size (PM-2.5) is considered especially dangerous, and has been linked in numerous studies to increased respiratory symptoms such as coughing and difficulty breathing, decreased lung function, premature death in people with pre-existing conditions, heart attacks, and aggravated asthma. The researchers estimated how much each of the ten strategies could decrease GHG emissions and particulate matter emissions for each decade through 2060. Well-established health impact functions used in public health studies provided the rate of change between reductions in PM-2.5 and estimates of the number of health outcomes. Their sources included the EPA's Regulatory Impact Analysis (RIA) for PM-2.5 as well as Centers for Disease Control and Prevention databases. "The results of this study provide policymakers with a better understanding of some of the health-related co-benefits of reducing greenhouse gas emissions, an area that is very rarely mentioned when the costs and benefits of reducing climate change impacts are discussed. It provides them with a more complete picture of the true costs and benefits of climate change mitigation programs," says Greenblatt. The study "A wedge-based approach to estimating health co-benefits of climate change mitigation activities in the United States" was written by John M. Balbus (NIEHS), Jeffery B. Greenblatt (Berkeley Lab), Ramya Chari (RAND Corp.), Dev Millstein (Berkeley Lab), and Kristie L. Ebi (University of Washington, School of Public Health). Climatic Change, November 2014, Volume 127, Issue 2, pp 199-210 This research was supported in part by Laboratory Directed Research and Development funding at the Lawrence Berkeley National Laboratory. # # # Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit www.lbl.gov.
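The article does not give the exact functional form the team used, but a common log-linear concentration-response ("health impact") function from the public health literature looks like the sketch below. The coefficient beta, the baseline incidence rate, and the exposed population here are illustrative placeholders, not values from the study:

#include <cmath>
#include <cstdio>

// Expected reduction in annual health outcomes for a drop of deltaPM
// (micrograms per cubic meter) in PM-2.5 concentration.
double avoidedCases(double beta, double baselineRate, double population, double deltaPM) {
    return baselineRate * (1.0 - std::exp(-beta * deltaPM)) * population;
}

int main() {
    // Hypothetical inputs: beta = 0.006, baseline rate = 0.008 outcomes per
    // person per year, 1 million exposed people, 2 ug/m^3 reduction.
    std::printf("avoided outcomes per year: %.1f\n", avoidedCases(0.006, 0.008, 1e6, 2.0));
    return 0;
}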
Diabetes mellitus is a metabolic disorder in which inadequate production of the hormone insulin or a resistance to its actions in the body can lead to high blood sugar levels. Insulin is needed to get sugar into cells of the body, where it is used for energy. When sugar cannot get into cells, it remains in the blood at high levels. Complications of diabetes arise from long-term exposure to high blood sugar. The cardiovascular, nervous, visual and urinary systems are most commonly affected by chronically high blood sugars. The cardiovascular system includes the heart and blood vessels. High blood sugar and increased blood fat levels commonly found in people with diabetes contribute to fatty deposits called plaques on the inner walls of blood vessels, causing inflammation. This leads to decreased blood flow and hardening of the blood vessels called atherosclerosis. High blood sugar also results in glycation, where sugars attach to proteins, making them sticky. This occurs on proteins found in blood vessels, also resulting in inflammation. When this occurs in the heart, it can lead to cardiovascular disease. According to a 2016 report from the American Heart Association, 68 percent of people with diabetes older than 65 die of heart disease. Nerve damage called diabetic neuropathy is common in people with diabetes. Symptoms typically appear after several years but may be present when diabetes is diagnosed, as the disease may have gone undetected for many years. Diabetic nerve damage known as peripheral neuropathy is most common in the legs and feet. According to a 2005 statement by the American Diabetes Association, up to 50 percent of people with diabetes have peripheral neuropathy. This typically starts as numbness or tingling that progresses to loss of pain and heat and cold perception in the feet or hands, making it difficult to sense an injury. Another type of nerve damage called diabetic autonomic neuropathy affects nerves regulating the heart, blood vessels, and digestive and other systems. This condition can lead to problems with blood pressure, heart rhythm and digestion, among others. The Centers for Disease Control and Prevention reports that in 2005 to 2008, 28.5 percent of adults with diabetes 40 years or older had diabetic retinopathy. This eye disease is caused by high blood sugar levels leading to blood vessel damage and fluid leakage in the vision-sensing part of the eye called the retina. Diabetic macular edema is a complication of diabetic retinopathy wherein the center of the retina, which is responsible for detailed vision, is affected. These conditions can eventually lead to blindness. High blood sugar can also lead to an increased risk of cataracts and glaucoma. These eye disorders occur earlier and more often in people with diabetes, compared to those without the disease. In 2011, the CDC reported that diabetes was the primary cause of kidney failure in 44 percent of people newly diagnosed with the condition. High levels of blood sugar can damage the kidneys. The result is an illness known as diabetic nephropathy that can eventually lead to kidney failure. High blood sugar levels initially damage the blood vessels in the kidneys. As diabetic nephropathy progresses, there is thickening of kidney tissue and scarring. When the kidneys are damaged, they cannot filter the blood properly. This results in waste and fluid buildup in the blood, and leakage of important blood proteins into the urine.
Genetic variation: Genetic variation refers to variation in the alleles of genes. It occurs both within and among populations. Genetic variation is important because it provides the genetic material for natural selection. Genetic variation is brought about by mutation, which is a permanent change in the chemical structure of a gene. Polyploidy is an example of chromosomal mutation. Polyploidy is a condition wherein organisms have three or more complete sets of chromosomes. Genetic variation can also be detected by examining phenotypic variation or genotypic variation, or by examining variation at the level of enzymes using the process of protein electrophoresis. Polymorphic genes: Polymorphic genes have more than one allele at each locus. Half of the genes that code for enzymes in insects and plants may be polymorphic, whereas polymorphisms are less common in vertebrates. Causes of genetic variation: Ultimately, genetic variation is caused by variation in the order of bases in the nucleotides in genes. Examination of DNA has shown that genetic variation can occur in both the coding exon regions and the non-coding intron regions of genes. Results of genetic variation: Genetic variation will result in phenotypic variation if variation in the order of nucleotides in the DNA sequence results in a difference in the order of amino acids in proteins coded by that DNA sequence, and if the resultant differences in amino acid sequence influence the shape, and thus the function, of the enzyme. Geographic variation in genes: Geographic variation in genes often occurs among populations living in different locations. Geographic variation may be due to differences in selective pressures or to genetic drift. Factors that cause genetic variation: Mutations are the ultimate source of genetic variation. Mutations are likely to be rare and most mutations are neutral or deleterious, but in some instances the new alleles can be favored by natural selection. Genetic variation can also be produced by the recombination of chromosomes that occurs during sexual reproduction, called independent assortment. Crossing over and random segregation during meiosis can result in the production of new alleles or new combinations of alleles. Furthermore, random fertilization also contributes to variation. Variation and recombination can be facilitated by transposable genetic elements, commonly known as endogenous retroviruses, LINEs, SINEs, etc. A variety of factors maintain genetic variation in populations. Potentially harmful recessive alleles can be hidden from selection in the heterozygous individuals in populations of diploid organisms. Natural selection can also maintain genetic variation in balanced polymorphisms, which occur when heterozygotes are favored or when selection is frequency dependent.
Ultrasound: Sound with a higher frequency than we can hear (above 20000Hz). The range of human hearing is 20 to 20000Hz. Used for: ultrasonic scanning in medicine (prenatal scans of a baby in the womb), destruction of kidney stones. Ultrasound waves are partly reflected at a boundary between two different types of body tissue. How ultrasound imaging works: - A wave passes from one medium into another. Some of the wave is reflected off the boundary and some is transmitted (refracted). This is partial reflection. - You can point a pulse of ultrasound at an object and wherever there are boundaries between one substance and another, some of the ultrasound gets reflected back. - The time taken for the reflections to reach a detector can be used to calculate how far away the boundary is: the pulse travels to the boundary and back, so distance = speed × time ÷ 2.
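A minimal sketch of that distance calculation. The ~1540 m/s speed of sound in soft tissue is a standard textbook figure, and the echo time below is made up for illustration:

#include <cstdio>

int main() {
    const double speed = 1540.0;     // m/s, typical speed of sound in soft tissue
    const double echoTime = 6.5e-5;  // s, time for the reflection to return (example value)
    const double depth = speed * echoTime / 2.0; // halved: the pulse travels there and back
    std::printf("boundary depth: %.3f m\n", depth); // ~0.050 m, i.e. 5 cm
    return 0;
}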
C++ Programming/Code/Standard C Library/Functions/atol #include <cstdlib> long atol( const char *str ); The function atol() converts str into a long, then returns that value. atol() will read from str until it finds any character that should not be in a long. The resulting truncated value is then converted and returned. For example, x = atol( "1024.0001" ); results in x being set to 1024L.
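For completeness, a full program built around that fragment might look like this; atol() stops reading at the '.', so the fractional part is discarded:

#include <cstdio>
#include <cstdlib>

int main() {
    long x = atol("1024.0001"); // conversion stops at the '.'
    printf("%ld\n", x);         // prints 1024
    return 0;
}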
December 5, 2002: A unique peanut-shaped cocoon of dust, called a reflection nebula, surrounds a cluster of young, hot stars in this view from NASA's Hubble Space Telescope. The "double bubble," called N30B, is inside a larger nebula, named DEM L 106. The larger nebula is embedded in the Large Magellanic Cloud, a satellite galaxy of our Milky Way located 160,000 light-years away. The wispy filaments of DEM L 106 fill much of the image. The hot stars are inside the double bubble. Hubble captures the brilliant blue-white colors of these stars. The very bright star at the top of the picture, called Henize S22, illuminates the dusty cocoon like a flashlight shining on smoke particles. This searing supergiant star is only 25 light-years from the N30B nebula. Viewed from N30B, the brilliant star would appear 250 times brighter than the planet Venus does in our sky.
Chemotherapy is the use of drugs to kill cancer cells. Chemotherapy is sometimes given after surgery for colorectal cancer to try to prevent the disease from recurring, or coming back. This additional treatment is called adjuvant therapy. The doctor may use one drug or a combination of drugs. Chemotherapy is usually given in cycles: a treatment period followed by a recovery period, then another treatment period, and so on. Anticancer drugs may be taken by mouth or given by injection into a blood vessel or body cavity. Chemotherapy is a systemic therapy, meaning that the drugs enter the bloodstream and travel through the body. Researchers are studying ways of putting chemotherapy drugs directly into the area to be treated. For colorectal cancer that has spread to the liver, drugs can be injected into a blood vessel that leads directly to the liver. (This treatment is called intrahepatic chemotherapy.) Researchers are also investigating a method in which the doctor puts anticancer drugs directly into the abdomen during or after surgery. (This is called intraperitoneal chemotherapy.) Usually a person has chemotherapy as an outpatient at the hospital, at the doctor's office or at home. However, depending on which drugs are given, how they are given, and the patient's general health, a short hospital stay may be necessary.
JON LOMBERG / SCIENCE PHOTO LIBRARY Gravitational lens. Artwork of a black hole (centre) distorting the view of a galaxy behind it in space. This is known as gravitational lensing. It occurs when light from a distant object is bent by the gravitational field of a massive object lying in the same line of sight. The galaxy appears as two images because the black hole is slightly offset from the line of sight. If the black hole were exactly aligned with the galaxy, a ring of light would be seen. The idea that light could be bent by gravity was put forward by Albert Einstein in his general theory of relativity (1915). Several examples of gravitational lenses have been found in recent years.
When a person's ability to function independently is limited only by his/her environment Multiple Choice Questions (10 points) Please underline the best answer. Each question is worth one point. 1. When a person's ability to function independently is limited only by his/her environment, this is referred to as a(n): 2. Under which disability category are most students receiving special education services: a. speech and language impairments b. mental retardation c. physical disability d. specific learning disability 3. IDEA requires that an IEP be developed and implemented for every child with disabilities: a. between the ages of 3 and 21 b. between the ages of 6 and 21 c. between birth to age 21 d. or at risk for developing disabilities between the ages of 6 and 21 4. Significantly subaverage intelligence and poor academic performance must be present to classify students as intellectually disabled. What other variable must be present for the classification? a. a biological or genetic condition b. deficits in adaptive behavior c. a discrepancy between achievement and ability d. distractibility and hyperactivity in any context 5. Which of the following students would most likely be considered for a self-contained classroom? a. Michael demonstrates severe deficits in math, but for all other subjects his performance is on grade level. He is motivated to learn and attempts to abide by the teacher's recommendations. b. Janie has difficulty in reading, which affects her performance in other content areas. In addition, she is highly disorganized, distractible, and physically aggressive with her classmates. c. Dominic has a reading disability. He has learned strategies to compensate for his disorder. His performance in most academic areas is satisfactory. d. Betty Ann was in a self-contained classroom in elementary school because of significant academic deficits. In middle school, Betty Ann was taught strategies to learn more efficiently, which has drastically improved her academic performance. 6. Most children with emotional and behavioral disorders score: a. above normal on IQ tests and academically achieve below what their scores indicate. b. in the average range on IQ tests and academically achieve at their grade level. c. below normal on IQ tests and academically achieve below what their test scores predict. d. below normal on IQ tests and academically achieve above what their test scores predict. 7. Francine, a fourth grader, says "thleep" when attempting to say "sleep." What kind of articulation disorder does she have? 8. The legal definition of visual impairment is based on: a. visual acuity and field of vision b. visual acuity and the extent to which the person learns through the visual channel c. the extent to which the person learns through other senses d. depth perception and peripheral vision. 9. When P.L. 94-142 was amended in 1990, which of the following disability categories was added? a. limb deficiency disabilities b. muscular dystrophy c. traumatic brain injury 10. Which of the following is the most widely supported employment approach? a. sheltered workshops b. mobile work crew c. individual placement model d. small business enterprise model II- Essay Questions (10 points) Directions: Please address every aspect of the question and give details and examples to support your answers. Use your own words when answering these questions. If you use an author's exact words, you need to use quotes and give page numbers.
Attach a reference list of at least two different sources in addition to the textbook in APA style at the end of your answers to each question. Your answers to these questions may be gathered from a number of sources, including the textbook, my possible answers to the weekly discussion questions, the chats, and the sources listed in my possible answers and in the syllabus. Please go beyond the textbook for your answers to these questions. Please do and turn in your own work. a. What is the difference between mainstreaming and inclusion? Is the least restrictive environment (LRE) always the regular education classroom? What is the philosophy of including students with disabilities in general education classrooms? What are the advantages and disadvantages of inclusion? How could inclusion be implemented in your school or in a school that does not have inclusion? What must the school administrator do to promote the inclusion of children with disabilities in this school? Give details, examples, and sources to support your arguments. (3 points) b. Do most students who are identified as having learning disabilities have a true disability, or are they just low achievers or victims of poor instruction? How can you distinguish between a child who has a learning disability and a child who is behind academically because of poor schooling? Would both of them qualify for special education services? Would you use the same approach in educating these two students? Give details, examples, and sources to support your arguments. (3 points) c. Why do students who are gifted need special education? After all, these children are bright enough that they can learn on their own. Should these students be educated with the same-age peers or with older students who share the same intellectual and academic talents and interests? Describe the enrichment and acceleration models and the advantages and disadvantages of each model. Which placement would you prefer for your own gifted child and why? Give details, examples, and sources to support your arguments. d. How can special education programs for children with disabilities prepare them for successful transition to post-secondary education or to the world of work? Who decides on the future of these young men and women? When do you stop pushing them through the general school curriculum and start putting more emphasis on their functional skills and job-related skills? What jobs and services are available in your community to help young adults with disabilities become productive members of society? Give details, examples, and sources to support your arguments. (2 points) Mainstreaming is the placement of a child with an exceptionality or disability in a general education classroom situation with expectations that the child will manage to work and produce results and assignments at a rate similar to that of………………….
The standard deviation is a measure of the spread of scores within a set of data. Usually, we are interested in the standard deviation of a population. However, as we are often presented with data from a sample only, we can estimate the population standard deviation from a sample standard deviation. These two standard deviations - sample and population standard deviations - are calculated differently. In statistics, we are usually presented with having to calculate sample standard deviations, and so this is what this article will focus on, although the formula for a population standard deviation will also be shown.
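A minimal sketch of both calculations (the data values below are made up): the sample version divides the summed squared deviations from the mean by n - 1, while the population version divides by n.

#include <cmath>
#include <cstdio>
#include <vector>

double stdev(const std::vector<double>& x, bool sample) {
    double mean = 0.0;
    for (double v : x) mean += v;
    mean /= x.size();

    double ss = 0.0; // sum of squared deviations from the mean
    for (double v : x) ss += (v - mean) * (v - mean);

    const double divisor = sample ? x.size() - 1.0 : x.size();
    return std::sqrt(ss / divisor);
}

int main() {
    std::vector<double> scores = {4, 8, 6, 5, 3, 7}; // made-up data
    std::printf("sample:     %.3f\n", stdev(scores, true));  // 1.871
    std::printf("population: %.3f\n", stdev(scores, false)); // 1.708
    return 0;
}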
According to Robertson (2008), fire prevention includes any fire service activity that lessens the incidence and severity of uncontrolled fire. Typically, the fire prevention methods employed by the fire service focus on inspection, which includes engineering, code enforcement, public fire safety education, and fire investigations. Emergency planning and preparedness have always been a part of fire prevention and an objective of the fire service. Their significance has been highlighted in the past by the scope of emergency response required in cases such as terrorist attacks and Hurricane Katrina. Fire service personnel are prepared for fire prevention functions through various departments, which are organized in a number of ways. Rural volunteer fire service personnel may be organized by battalions based upon shifts of work or the geographical distribution of individual fire stations. Municipal fire departments and county fire departments or fire divisions of county public safety departments are typically organized into two divisions that represent the varied responsibilities of the department or division. These divisions commonly include the following: administration, operations, prevention, investigation, training, communication, emergency management, and equipment maintenance. The majority of fire departments are typically charged with the duty of fire suppression, emergency response, and rescue activities (Arthur, 2003). For example, in North Carolina, the state building code requires that a fire safety and building evacuation plan be prepared for a number of types of building occupancies. Building occupants have to take part in fire drills, and employees must also be trained in fire emergency procedures. Other emergency planning activities in which fire departments take part so as to ensure safety within the fire service department include preparation for disaster responses with partners such as urban search and rescue teams from other jurisdictions, the national guard, and other state and federal agencies. Additionally, various fire departments engage in a wide range of activities that are designed to inform the public concerning the need to prevent fires and be prepared in the event fire does occur. However, it is vital to note that this kind of preparation is bound to change with time, since new recruits are required to acquire the needed knowledge in regards to fire protection services (James, 2004). Carson and Clinker (2000) assert that formal fire safety education as a means of prevention is recognized today as one of the most viable methods of combating many fire emergencies facing a number of countries around the world. Campaigns and education in regards to fire emergency prevention, among other hazards, have made great strides in dealing with critical and costly issues that affect the quality of life in communities. Fire departments all over the world have seen prevention education as a means of fighting fire, in an effort to minimize the loss of life and property from such catastrophic events. This approach has proved effective, with many lives and much property saved since its inception. For example, the LaGrange Fire Department has participated in prevention education programs for the past decade. This effort began in an elementary school and has developed to include middle school students, as well as adults.
During this period, the community has witnessed very few cases of fire and loss of life, a result credited to the fire education programs. In conclusion, general fire safety preparedness is vital in the department of fire and rescue services. However, the ongoing training program should be carefully planned, evaluated, and revised as required. It is also important that personnel from all levels of the department be included in such a program so that their particular needs and concerns are not overlooked. An inclusive process is also likely to help in achieving support from all levels of the organization, even for training that might be unpopular. It is therefore important to note that effective fire prevention depends on the adoption of up-to-date codes and standards and a personnel network (Carson and Clinker, 2000).
Researchers from the University of Granada have discovered that children who eat meals which are prepared and eaten at home have a reduced risk of childhood obesity. Children who eat meals prepared outside the home are more likely to become overweight or obese. The study has made a connection between the health of a child and who has responsibility for preparing their lunch. According to an entry in the Nutricion Hospitalaria journal, mothers know the nutritional requirements of a child and are able to use that knowledge of nutrition to prepare a healthy diet for their children. Jamie Oliver, the celebrity chef, has been campaigning for healthy school meals since 2005. In 2006, national standards were introduced which all school meals in the UK have had to meet. Each school meal has to provide two portions of fruit and vegetables, with oily fish and good quality meat featured on the menu on a regular basis. Schools may serve deep-fried foods no more than twice a week. However, a number of academy schools are not meeting the national standards, as they are not under any obligation to do so. Jamie Oliver expressed his concern about the quality of school dinners in these schools towards the end of 2011. The researchers studied 718 children aged nine to 17 from 13 schools. The researchers looked at the family, how often the child exercised and the frequency of eating specified foods. The children's size, weight and Body Mass Index were also measured. The study revealed that children who spend more time on computer games, watching TV or surfing the net are more likely to suffer childhood obesity or be overweight. The study concluded that a child's family was instrumental in passing on healthy eating and exercise habits.
The disadvantages of centrally planned economies include the inefficient distribution of resources and the suppression of economic freedom. Centrally planned economies are generally associated with dictatorial political states. In a centrally planned economy, planners cannot accurately predict consumer preferences, surpluses and shortages, so they cannot efficiently allocate resources. This results in an abundance of goods that cannot be sold in some areas and a shortage of goods that are in high demand in others. In a free market economy, the allocation of scarce resources is dictated by the price system, so resources go where supply and demand dictate. One example of a centrally planned economy is the former Soviet Union, which operated as a centrally planned economy from the Bolshevik Revolution of 1917 until the 1991 fall of the Communist Party. Central economic planning also stifles economic freedom, as citizens have no incentive to innovate or to take entrepreneurial risks. The desire to earn profit is a foundation of the free market system. Central planners suppress the profit motive by taking decisions from businessmen and transferring them to the government. Economist Adam Smith believed that society functioned best when the economy was guided by an "invisible hand" that rewarded personal economic freedom and risk taking. Central planning handcuffs the "invisible hand."
Stands for "Heat Sink and Fan." Nearly all computers have heat sinks, which help keep the CPU cool and prevent it from overheating. But sometimes the heat sink itself can become too hot. This can happen if the CPU is running at full capacity for an extended period of time or if the air surrounding the computer is simply too hot. Therefore, a fan is often used in combination with the heat sink to keep both the CPU and heat sink at an acceptable temperature. This combination is creatively called a "heat sink and fan," or HSF. The fan moves cool air across the heat sink, pushing hot air away from the computer. Each CPU has a thermometer built in that keeps track of the processor's temperature. If the temperature becomes too hot, the fan or fans near the CPU may speed up to help cool the processor and heat sink.
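The control loop described here can be illustrated with a toy sketch. The thresholds, duty-cycle values and the readTemperature() stand-in below are hypothetical; real fan control lives in motherboard firmware and works differently from platform to platform:

#include <cstdio>

// Stand-in for reading the CPU's built-in temperature sensor.
double readTemperature() { return 72.0; } // degrees Celsius (made-up value)

// Map the current temperature to a fan duty cycle, in percent.
int fanDutyPercent(double celsius) {
    if (celsius < 45.0) return 20;  // idle: keep the fan quiet
    if (celsius < 65.0) return 50;  // moderate load
    if (celsius < 80.0) return 80;  // heavy load
    return 100;                     // near the limit: full speed
}

int main() {
    std::printf("fan duty: %d%%\n", fanDutyPercent(readTemperature())); // prints 80%
    return 0;
}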
Turn natural curiosity into deep, lasting learning! Inquiry is what drives us all toward new knowledge, but how do we transform children's natural ability to notice and wonder into the full learning cycle of observing, thinking, and critically questioning? Through this new edition of the bestselling Why Are School Buses Always Yellow? you'll find simple, yet systematic ways to develop authentic student inquiry that fosters deep learning. This new edition features: - Updates based on the latest research around inquiry-based teaching - Emphasis on turning inquiry into critical thinking, assessing students' inquiry, and involving families in the inquiry process - Examples for K–8 across subject areas - New emphasis on critical thinking about technologies - New and updated activities, checklists, templates, and implementation tools - Alignment with Common Core and Next Generation Science Standards With this invaluable resource, help students transform their playful wonderings into deeper questions about content, and develop the higher-level thinking skills they need for success in school and in life. "Educators often talk about developing lifelong learners - our team has had great success using Why Are School Buses Always Yellow? to catalyze professional conversations about how we can better cultivate curiosity through an inquiry approach. I strongly recommend this [new edition] for those who are interested in unlocking the uniquely creative capacity of our youngest learners." Devin Vodicka, Superintendent Vista Unified School District, Vista, CA - Updated to reflect the newest research. - New stories and examples, broadened to grades K through 8. - Discussion and alignment of the inquiry process with Common Core State Standards and Next Generation Science Standards. - New chapter on how to use technology to pursue inquiry and also build professional practice - Each chapter has new "Try this!" activities, new graphics, and updated checklists, templates, reflection questions with room for note-taking, a glossary of key terms, and other implementation tools
A new history of Antarctic ice [Image: A field camp for glacial geologists on James Ross Island, northern Antarctic Peninsula.] 19 June 2015 by Bethan Davies There's still a huge amount we don't know about the history of the southern polar ice sheet. Bethan Davies was part of an international team that brought together the latest findings to reveal a complex and dynamic Antarctica. How sensitive is the Antarctic Ice Sheet to climate change? How does it react to a warmer atmosphere and ocean, and how can we estimate how much ice it is losing now? To answer these questions, we need to know how it has retreated and thinned since the last ice age. The Scientific Committee for Antarctic Research (SCAR) commissioned an international team of 78 scientists from 14 countries to compile the most complete review of the history of the ice sheet to date. The team includes scientists from many backgrounds, including marine geologists, those studying changes recorded in lake muds, ice-sheet modellers, terrestrial glacial geologists and glaciologists. The review looked at the ice sheet's extent and thickness at the Last Glacial Maximum - the peak of the last ice age, 25,000 years ago - and then every 5,000 years until the present. The challenge we faced was to take information from many sources and many different international teams, and turn it into a unified history of the Antarctic ice cap over the last few tens of thousands of years. The Earth is still rebounding after the weight of the ice sheets was removed at the end of the last ice age, and measurements of the modern sheet's height above sea level or changes in mass have to take this into account. This in turn means we need detailed knowledge of past ice volumes. And rates and magnitudes of ice-sheet change during periods of past rapid climate change - such as the transition between the last ice age and the present interglacial period - and over the last few thousand years put current ice-sheet change into context, and help us understand thresholds and tipping points beyond which even more rapid changes may occur. The changing ice cap You might imagine the relationship between ice and temperature is simple - when the climate gets colder the ice increases, and when it warms the ice melts. But our results show the ice sheet is dynamic, with various areas gaining and losing ice at different times. Around much of the continent, the glaciers reached their peak about 20,000 years ago, when large areas of the ice sheet reached the edge of the continental shelf. Yet in parts of the colder, drier East Antarctic Ice Sheet the story was very different, with the ice sheet remaining relatively stable for the last 25,000 years. In fact, the ice surface in its interior was up to 100m lower than the present day, because the cold, dry air meant there was less snowfall. In several cases, our work highlighted apparent contradictions that will need more work to understand. For example, the Weddell Sea was a particular area of contention, with apparent discrepancies between the evidence from land and sea. Part of the problem is that this region is remote, even by Antarctic standards, so data are sparse and the extensive sea ice makes investigation from ships difficult. The limited marine data we do have from the Weddell Sea region suggest that at its maximum, the ice sheet extended to near the outer edge of the continental shelf, implying it was thicker than the present one.
In contrast, geological evidence from mountain ranges inshore suggests that there was very little change in the ice sheet's elevation between the Last Glacial Maximum and today. This limited change in thickness indicates that the ice in the Weddell Sea embayment was thin and floating, rather than grounded in the deep troughs on the continental shelf. This leaves scientists with two alternative scenarios to test and investigate over coming years. [Figure: Timeline showing the dynamic changes occurring in different parts of the Antarctic continent, alongside mean annual air temperature relative to the present from the Vostok ice core in East Antarctica. Different parts of the continent reach their maximum ice volume and then deglaciate at different times.] Again and again, we learned that things are more complicated than we'd thought. For example, a key finding is that the ice sheet's retreat from its maximum extent was asynchronous; that is, it did not respond to a warming climate in a uniform way. In the west around the Antarctic Peninsula, Bellingshausen Sea and sub-Antarctic islands, the recession was well under way by 18,000 years ago. But at that time most of the East Antarctic Ice Sheet was still grounded near its greatest extent, and the grounding line - the transition between floating ice and ice that rests on the sea floor - in the Ross Sea barely moved at all. By 10,000 years ago, the Antarctic Peninsula Ice Sheet, ice in the Amundsen Sea, and glaciers on the sub-Antarctic islands had largely receded to the inner continental shelf. By 5,000 years ago, their configurations were similar to the present. Conversely, ice-sheet recession in East Antarctica didn't really get under way until 12,000 to 6,000 years ago, when the oceans were warming significantly. At a more local scale, individual ice streams shrank back at different times, often depending on the landscape they flowed through. Some of our discoveries have implications far beyond Antarctica. One was that the ice sheet contributed less than 10m to global sea levels as it melted - less than earlier studies estimated - and probably contributed relatively little to the rapid rise of around 20m in global sea levels around 14,500 years ago. Some scientists have thought Antarctica was the source for this massive increase, but our research suggests that other ice sheets probably played a bigger role. The review reveals a complex Antarctic Ice Sheet that responded sensitively to changes in air and ocean temperature in different ways at different times. Patterns of ice recession varied strongly from place to place, controlled in part by the landscape and ocean water depths. Ice-sheet modellers will now use the project's results to test their computer models on historical data; if the models can successfully simulate what we know happened in the past - known as hindcasting - then we can be more confident in their projections of the future. This will in turn help us understand how the Antarctic ice will respond to a changing environment. It is clear from the project that the glacial history of large areas of Antarctica remains little-studied and poorly understood, and in some areas we still can't decide between competing hypotheses. It's a complicated story that highlights the complexities involved in reconstructing ancient ice sheets. Major reviews like this identify the state of the art and clarify future research directions.
They give the scientific community access to a wide range of data sources; data that often would not have been available otherwise. Other specialists can then use these data; for example, computer modellers need accurate geological information against which to test their models. By understanding how, how quickly and how much the ice sheet responded to oceanic and atmospheric changes in the past, we will be better able to judge how it will react to similar changes in the future, and will be able to provide better projections of future sea-level rise. Bethan Davies is a glaciologist and physical geographer at Royal Holloway, University of London. Email: [email protected]. The review described in this article appears as seven papers in a special issue of Quaternary Science Reviews. Funding came from a variety of sources, including NERC and SCAR.
A Research Project is an independent piece of work in which you formulate the question(s) to be addressed, with guidance from your supervisor. It is intended to provide you with an opportunity to further your intellectual and personal development by undertaking a significant academic project. It can be defined as a scholarly inquiry into a problem or issue, involving a systematic approach to the gathering and analysis of information/data, leading to the production of a high-standard and well-structured report. The purpose of a Project is to develop and apply students' research skills, which may form the basis of professional research practice, academically and otherwise. More specifically, stress will be laid upon skills of: - Identification of a field of study; - Formulation of research questions; - Problem solving; - Data collection and analysis; - Inference (assumption, conclusion, conjecture, consequences, deduction, etc.); - Proof (confirmation, substantiation); - Dissemination (broadcasting, publication). The Project is one element of your degree where you have a certain level of freedom to select what to study or investigate. It can be one of the most valuable learning experiences you could ever go through. The two basic types of skills required from any researcher are: Core skills and abilities There is a common core of skills and attitudes which all researchers should possess. A researcher should be able to apply these skills in different situations with different topics and problems. Ability to integrate theory and method An understanding of the inter-relationship between theory, method and research design, practical skills and particular methods, the knowledge base of the subject and methodological foundations. Why are you doing a Project? The purpose of the Project is to develop and apply students' research skills which may form the basis of professional research practice in a wide range of areas in the business discipline – with a special reference to the international market and businesses. Hence, research in the form of a Project is intended to help you to find solutions and answers to a question after a thorough study and analysis of related theories and evidence. This process not only enhances your understanding of different theories, it also allows you to examine and/or test these theories in an applied and practical way. Doing research is a craft skill, which is why the basic educational process that takes place is that of learning by doing. The main aims and objectives of a Research Project The main aims of the Project are: - To test a student's performance against the educational objectives of breadth, depth, and synthesis in the field; - To train students in the recognition, formulation, execution, and writing-up of a specific project; - To give students experience of working independently on a single medium-size project under staff supervision.
The major purpose of a Project/research is to: - Provide you with the opportunity to demonstrate the ability to devise, select and use a range of methodologies appropriate to the chosen project; - Allow you to show the application of the skills of data collection, critical analysis and concept synthesis necessary for the formation of defensible conclusions and/or recommendations; - Allow you the opportunity to demonstrate an ability to draw appropriate conclusions argued from the evidence presented; - Provide a forum in which you may demonstrate the skills of structuring and presenting a balanced, informed, complete, clear and concise written argument. Different types of research Research has traditionally been classified into two types: pure and applied. Pure research supplies the theories, and applied research uses and tests out these theories. In reality, such a classification tends to be too rigid and restrictive. There are three basic types of research: - Exploratory research, which aims to tackle a new problem, issue or topic about which little is known. - Testing-out research, which tries to find the limits of previously proposed generalisations. The amount of testing-out to be done is endless and continuous, because in this way we are able to improve the existing generalised theory or idea. - Problem-solving research, which focuses on a particular problem and uses a wide range of intellectual resources to solve this problem. The role of the supervisor Supervisors play an important role in guiding and supporting students. They are expected to: - Assign some directed reading as appropriate; - Stimulate and enthuse you; - Provide a steady stream of interaction of ideas and guidance; - Help you develop a suitable methodology; - Help you draw up your individual detailed project plan; - Encourage you to produce an in-depth literature survey chapter at an early stage; - Stay in contact with you at regular intervals; - Monitor your progress so as to ensure targets are met on time; - Indicate deficiencies in drafts of the project; - Approve the final draft; - Examine the submitted Project, together with a second examiner and an external examiner. You should be in regular contact with your supervisor throughout the project. The frequency of the contact with your supervisor is up to you to decide, but the recommendation is at least once per month. You must take the initiative: the frequency of your meetings with your supervisor and the responsibility for successfully completing the Project on time remain solely with you. You are expected to keep a record of your meetings with your supervisor by filling in the online contact sheet (available on the WebCT Project site, or on paper). Things that any research project must have: - The ability to develop a purposeful, feasible and manageable dissertation and to frame concise, meaningful questions; - The ability to choose and apply a clear and appropriate methodology for the project; - An awareness of the subtleties of the subject area and the ability to handle complex issues; - An ability to see both the "big picture" and significant details; - Evidence that the relevant literature has been read, digested and appropriately made use of; - Some evidence of the ability to think independently and critically (i.e. originality); - Evidence that work has been done carefully, that proper attention has been given to detail, that the subject has been treated comprehensively (i.e.
thoroughness); - A systematic, logical and appropriate structure for the dissertation; - Evidence that the dissertation is indeed the work of the person who submitted it; - Compliance with the formal requirements set by the University. Things that any research project should NOT have: - Use of your own personal experience – "anecdotal evidence" – unless it is treated in a rigorous manner; - Use of the words "I think" or "I believe"; - Straying outside your discipline and subject area; - Poor English; - Disregard for the University's regulations; - Late submission. Source: Business School, University of Greenwich
Fin whales, or fin-backed whales, are found in all major oceans and open seas. Some populations are migratory, moving into colder waters during the spring and summer months to feed. In autumn, they return to temperate or tropical oceans. Because of the difference in seasons in the northern and southern hemispheres, northern and southern populations of fin whales do not meet at the equator at the same time during the year. Other populations are sedentary, staying in the same area throughout the year. Non-migratory populations are found in the Mediterranean Sea and the Gulf of California. (Gambell, 1985; Jefferson, Leatherwood, and Webber, 1994; Nowak, 1991) In summer in the North Pacific Ocean, fin whales migrate to the Chukchi Sea, the Gulf of Alaska, and coastal California. In the winter, they are found from California to the Sea of Japan, the East China and Yellow Seas, and into the Philippine Sea. (Gambell, 1985) During the summer in the North Atlantic Ocean, fin whales are found from the North American coast to Arctic waters around Greenland, Iceland, north Norway, and into the Barents Sea. In the winter these fin whale populations are found from the ice edge toward the Caribbean and the Gulf of Mexico and from southern Norway to Spain. (Gambell, 1985) In the southern hemisphere, fin whales enter and leave the Antarctic throughout the year. Larger and older whales tend to travel further south than younger ones. (Gambell, 1985) Biogeographic Regions: Arctic Ocean (native); Indian Ocean (native); Atlantic Ocean (native); Pacific Ocean (native); Mediterranean Sea (native) Other Geographic Terms: Cosmopolitan - Jefferson, T., S. Leatherwood, M. Webber. 1994. Marine Mammals of the World. Rome, Italy: Food and Agriculture Organization of the United Nations.
The past continuous is an English verb tense used for continuous actions in the past. This lesson shows you how to use it correctly. You can start with the structure: Past Continuous Structure Subject + Was/Were + Verb + ING Past Continuous Examples: - I was sleeping. - They were dancing. Past Continuous Uses Past continuous is used to show an action that was continuing in the past until another action interrupted it. The continuing action is past continuous and the interrupting action is simple past. Here is an example: - I was sleeping when you called me. In this example, the action of sleeping was continuing over time until the telephone call interrupted the sleeping. Here are some more examples: - She was playing in the park when it started to rain. - While they were talking, the pizza arrived. - The car crashed when we were walking across the street. An expression of time can be used as an interruption as well. - We were eating dinner at 4 o'clock. - At 6 o'clock we were still studying for our test. Here are some more uses: Past continuous is used to show two actions happening at the same time in the past, like in these examples: - I was reading and he was watching TV. - While she was sleeping, Jack was doing his homework. Notice the word "while" is often used in these situations: - While I was cleaning, she was watching television. - He was talking while she was studying. Repetition in the Past Past continuous can also be used to show actions that happened many times in the past. - He was always breaking something when he was a child. - I couldn't enjoy the movie because the people behind me were talking the whole time. Here are some more tips for using the past continuous: Past continuous can often be used in the same places as simple past. - Yesterday I read a book. - Yesterday I was reading a book. Often, it is determined by the question. - What were you doing yesterday? I was reading. - What did you do yesterday? I read. Remember that, just like present continuous, you cannot use non-continuous verbs in continuous tenses. - Incorrect: I was liking pizza when I was young. - Correct: I liked pizza when I was young. - Incorrect: I was wanting to eat pizza yesterday. - Correct: I wanted to eat pizza yesterday.
Video modeling has been used in the field of Applied Behavior Analysis (ABA) to teach skills such as conversational speech (Charlop & Milstein, 1989), perspective taking (Charlop-Christy & Daneshvar, 2003), and complex play sequences (D'Ateno et al., 2003). Some research evaluating the models has also been conducted, including an investigation of effectiveness when siblings serve as the identified model (Taylor et al., 1999). In a video modeling procedure, the learner watches the video (often looped 2 or 3 times), then is given the materials to play/work with. Using a carefully created task analysis, the teacher scores the steps the individual completed correctly and the ones they completed incorrectly. As a general rule, if the learner has steps 1, 2, and 3, begin teaching step 4. If a learner displays more splinter skills, such as steps 1, 2, 5, 7, and 10, then begin by teaching the total task. Deciding whether or not to include language in the video depends on the skill you are targeting and the skills the learner has. For social skills, and commenting during play, many times my teaching videos do include comments.
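The "general rule" in the paragraph above can be written out as a small decision procedure. This is only an illustrative sketch of the stated rule, not a clinical tool; the function name and data layout are invented for the example:

#include <cstdio>
#include <vector>

// Returns the 1-based step to teach next, 0 to teach the total task
// (splinter skills), or -1 if every step is already mastered.
int nextTarget(const std::vector<bool>& passed) {
    size_t firstFail = passed.size();
    for (size_t i = 0; i < passed.size(); ++i) {
        if (!passed[i]) { firstFail = i; break; }
    }
    if (firstFail == passed.size()) return -1; // all steps mastered
    for (size_t i = firstFail + 1; i < passed.size(); ++i) {
        if (passed[i]) return 0; // passes after a failure: splinter skills
    }
    return static_cast<int>(firstFail) + 1; // teach the first unmastered step
}

int main() {
    std::vector<bool> chained  = {true, true, true, false, false};
    std::vector<bool> splinter = {true, true, false, false, true};
    std::printf("%d\n", nextTarget(chained));  // 4: teach step 4
    std::printf("%d\n", nextTarget(splinter)); // 0: teach the total task
    return 0;
}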
From Wikipedia, the free encyclopedia A single-stage-to-orbit (or SSTO) vehicle reaches orbit from the surface of a body without jettisoning hardware, expending only propellants and fluids. The term usually, but not exclusively, refers to reusable vehicles. No Earth-launched SSTO launch vehicles have ever been constructed. To date, orbital launches have been performed either by multi-stage fully or partially expendable rockets, or by the Space Shuttle which was multi-stage and partially reusable. A large proportion of the cost of Earth space launches comes, not from fuel, but from damage and destruction of hardware in launch and reentry. Single stage to orbit flight from Earth with a complete return of hardware to Earth offers the promise of significant reductions in the cost of launching people, equipment and supplies into orbit. It is considered to be marginally possible to launch a single stage to orbit spacecraft from Earth. The principal complicating factors for SSTO from Earth are: high orbital velocity of over 7400 m/s; the need to overcome the earth's gravity, especially in the early stages of flight; and flight within the Earth's atmosphere, which limits speed in the early stages of flight and influences engine performance. The marginality of SSTO can be seen in the launch of the space shuttle. The shuttle and main tank combination successfully orbits after booster separation from an altitude of 45 kilometers (140,000 ft) and a speed of 4,828 kilometers per hour (3,000 mph). This is approximately 12% of the gravitational potential energy and just 3% of the kinetic energy needed for orbital velocity (4% of total energy required). Notable single stage to orbit research spacecraft include Skylon, the DC-X, the X-33, and the Roton SSTO. However, despite showing some promise, none of them has come close to achieving orbit yet due to problems with finding the most efficient propulsion system. Single-stage-to-orbit has been achieved from the Moon by both the Apollo program's Lunar Module and several robotic spacecraft of the Soviet Luna program; the lower lunar gravity and absence of any significant atmosphere makes this much easier than from Earth. There have been various approaches to SSTO, including pure rockets that are launched and land vertically, air-breathing scramjet-powered vehicles that are launched and land horizontally, nuclear-powered vehicles, and even jet-engine-powered vehicles that can fly into orbit and return landing like an airliner, completely intact. For rocket-powered SSTO, the main challenge is achieving a high enough mass-ratio to carry sufficient propellant to achieve orbit, plus a meaningful payload weight. One possibility is to give the rocket an initial speed with a space gun, as planned in the Quicklaunch project. For air-breathing SSTO, the main challenge is system complexity and associated research and development costs, material science, and construction techniques necessary for surviving sustained high-speed flight within the atmosphere, and achieving a high enough mass-ratio to carry sufficient propellant to achieve orbit, plus a meaningful payload weight. Air-breathing designs typically fly at supersonic or hypersonic speeds, and usually include a rocket engine for the final burn for orbit. Whether rocket-powered or air-breathing, a reusable vehicle must be rugged enough to survive multiple round trips into space without adding excessive weight or maintenance.
In addition, a reusable vehicle must be able to reenter without damage, and land safely. While single-stage rockets were once thought to be beyond reach, advances in materials technology and construction techniques have shown them to be possible. For example, calculations show that the Titan II first stage, launched on its own, would have a 25-to-1 ratio of fuel to vehicle hardware. It has a sufficiently efficient engine to achieve orbit, but without carrying much payload. Hydrogen might seem the obvious fuel for SSTO vehicles. When burned with oxygen, hydrogen gives the highest specific impulse of any commonly used fuel: around 450 seconds, compared with up to 350 seconds for kerosene. Hydrogen therefore has clear advantages, but it also has significant disadvantages. These issues can be dealt with, but at extra cost. While kerosene tanks can be 1% of the weight of their contents, hydrogen tanks often must weigh 10% of their contents. This is because of both the low density and the additional insulation required to minimize boiloff (a problem which does not occur with kerosene and many other fuels). The low density of hydrogen further affects the design of the rest of the vehicle: pumps and pipework need to be much larger in order to pump the fuel to the engine. The end result is that the thrust/weight ratio of hydrogen-fueled engines is 30–50% lower than comparable engines using denser fuels. This inefficiency indirectly affects gravity losses as well; the vehicle has to hold itself up on rocket power until it reaches orbit. The lower excess thrust of the hydrogen engines due to the lower thrust/weight ratio means that the vehicle must ascend more steeply, and so less thrust acts horizontally. Less horizontal thrust results in taking longer to reach orbit, and gravity losses are increased by at least 300 meters per second. While this does not appear large, the mass ratio to delta-v curve is very steep to reach orbit in a single stage, and this makes a 10% difference to the mass ratio on top of the tankage and pump savings. The overall effect is that there is surprisingly little difference in overall performance between SSTOs that use hydrogen and those that use denser fuels, except that hydrogen vehicles may be rather more expensive to develop and buy. Careful studies have shown that some dense fuels (for example liquid propane) exceed the performance of hydrogen fuel when used in an SSTO launch vehicle by 10% for the same dry weight. Operational experience with the DC/X experimental rocket has caused a number of SSTO advocates to reconsider hydrogen as a satisfactory fuel. The late Max Hunter, while employing hydrogen fuel in the DC/X, often said that he thought the first successful orbital SSTO would more likely be fueled by propane. Some SSTO vehicles use the same engine for all altitudes, which is a problem for traditional engines with a bell-shaped nozzle. Depending on the atmospheric pressure, different bell shapes are optimal. Engines operating in the lower atmosphere have shorter bells than those designed to work in vacuum. Having a bell not optimized for the height makes the engine less efficient. One possible solution would be to use an aerospike engine, which can be effective in a wide range of ambient pressures. In fact, a linear aerospike engine was used in the X-33 design. Still, at very high altitudes, the extremely large engine bells tend to expand the exhaust gases down to near vacuum pressures. As a result, these engine bells are counterproductive due to their excess weight.
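A brief sketch of why the mass-ratio-to-delta-v relationship mentioned above is so steep, using the standard Tsiolkovsky rocket equation, m0/mf = exp(delta_v / (Isp * g0)). The ~9,400 m/s delta-v figure (orbital velocity plus gravity and drag losses) is a common rule-of-thumb assumption, not a number from the article; the Isp values are the ones quoted above:

#include <cmath>
#include <cstdio>

// Required ratio of initial (fueled) mass to final (dry) mass.
double massRatio(double ispSeconds, double deltaV) {
    const double g0 = 9.80665; // standard gravity, m/s^2
    return std::exp(deltaV / (ispSeconds * g0));
}

int main() {
    const double deltaV = 9400.0; // m/s to low orbit, losses included (assumed)
    std::printf("hydrogen/oxygen (Isp ~450 s): mass ratio %.1f\n", massRatio(450, deltaV)); // ~8.4
    std::printf("kerosene/oxygen (Isp ~350 s): mass ratio %.1f\n", massRatio(350, deltaV)); // ~15.5
    return 0;
}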
Some SSTO vehicles simply use very high pressure engines, which permit high expansion ratios to be used from ground level. This gives good performance, negating the need for more complex solutions.

Some designs for SSTO attempt to use airbreathing jet engines that collect oxidizer and reaction mass from the atmosphere to reduce the take-off weight of the vehicle. The approach has significant drawbacks, however, chiefly the weight and complexity of the airbreathing engines themselves and the penalties of sustained high-speed flight within the atmosphere. Thus, with scramjet designs such as the X-43, the mass budgets do not seem to close for orbital launch. Similar issues occur with single-stage vehicles attempting to carry conventional jet engines to orbit: the weight of the jet engines is not sufficiently compensated by the reduction in propellant. On the other hand, LACE-like precooled airbreathing designs such as the Skylon spaceplane (and ATREX), which transition to rocket thrust at rather lower speeds (Mach 5.5), do seem to give, on paper at least, enough of an improvement in orbital mass fraction over pure rockets (even multistage rockets) to hold out the possibility of full reusability with a better payload fraction.

Mass fraction is an important concept in the engineering of a rocket, but it may have little to do with a rocket's costs, as the cost of fuel is very small compared with the cost of the engineering program as a whole. As a result, a cheap rocket with a poor mass fraction may be able to deliver more payload to orbit with a given amount of money than a more complicated, more efficient rocket.

Many vehicle concepts are only narrowly suborbital, so practically anything that gives a relatively small delta-v increase can be helpful, and outside assistance is therefore desirable. Proposed schemes include external launch assists, such as the space guns described above, and on-orbit resources that reduce the delta-v the vehicle must supply itself.

Due to weight issues such as shielding, many nuclear propulsion systems are unable to lift their own weight, and hence are unsuitable for launching to orbit. However, some designs, such as Project Orion and some nuclear thermal designs, do have a thrust-to-weight ratio in excess of 1, enabling them to lift off. Clearly one of the main issues with nuclear propulsion would be safety, both for passengers during a normal launch and in the event of a failure during launch. No current program is attempting nuclear propulsion from Earth's surface.

Because beamed energy is not limited by the chemical potential energy of an onboard fuel, some laser- or microwave-powered rocket concepts have the potential to launch vehicles into orbit in a single stage. In practice, this area is relatively undeveloped, and current technology falls far short of it.

The high cost per launch of the Space Shuttle sparked interest throughout the 1980s in designing a cheaper successor vehicle. Several official design studies were done, but most were basically smaller versions of the existing Shuttle concept. Most cost analyses of the Space Shuttle have shown that workforce is by far the single greatest expense. Early shuttle discussions envisioned airliner-type operations, with a two-week turnaround. However, senior NASA planners envisioned no more than 10 to 12 flights per year for the entire shuttle fleet, and the absolute maximum for the fleet was limited to 24 flights per year by external tank manufacturing capacity. Very efficient (hence complex and sophisticated) main engines were required to fit within the available vehicle space. Likewise, the only known suitable lightweight thermal protection was delicate, maintenance-intensive silica tiles.
These and other design decisions resulted in a vehicle that requires great maintenance after every mission. The engines are removed and inspected, and prior to the new "block II" main engines, the turbopumps were removed, disassembled, and rebuilt. While Space Shuttle Atlantis was refurbished and relaunched in 53 days between missions STS-51-J and STS-61-B, months were generally required to prepare an orbiter for a new mission. Many in the aerospace community concluded that an entirely self-contained, reusable single-stage vehicle could solve these problems. The idea behind such a vehicle is to reduce the processing requirements from those of the Shuttle.

The early Atlas rocket is an expendable SSTO by some definitions. It is a "stage-and-a-half" rocket, jettisoning two of its three engines during ascent but retaining its fuel tanks and other structural elements. By modern standards, however, the engines ran at low chamber pressure, and thus achieved neither particularly high specific impulse nor particularly low weight; engines with a higher specific impulse would have eliminated the need to drop engines in the first place.

The first stage of the Titan II had the mass ratio required for single-stage-to-orbit capability with a small payload. A rocket stage is not a complete launch vehicle, but this demonstrates that an expendable SSTO was probably achievable with 1962 technology.

It is easier to achieve SSTO from a body with a lower gravitational pull than Earth's, such as the Moon or Mars. The Apollo Lunar Module descended from lunar orbit to a soft landing and returned to lunar orbit, using a single stage for each leg.

A detailed study into SSTO vehicles was prepared by Chrysler Corporation's Space Division in 1970–1971 under NASA contract NAS8-26341. Their proposal (Shuttle SERV) was an enormous vehicle with more than 50,000 kg of payload, utilizing jet engines for (vertical) landing. While the technical problems seemed to be solvable, the USAF required a winged design (for cross-range capability), which led to the Shuttle as we know it today.

The unmanned DC-X technology demonstrator, originally developed by McDonnell Douglas for the Strategic Defense Initiative (SDI) program office, was an attempt to build a craft that could lead to an SSTO vehicle. The one-third-size test craft was operated and maintained by a tiny crew of three people based out of a trailer, and was once relaunched less than 24 hours after landing. Although the test program was not without mishap (including a minor explosion), the DC-X demonstrated that the maintenance aspects of the concept were sound. The project was cancelled when the craft crashed on its fourth flight after management was transferred from the Strategic Defense Initiative Organization to NASA.

The Aquarius Launch Vehicle was designed to bring bulk materials to orbit as cheaply as possible.

The British Government partnered with ESA in 2010 to promote a single-stage-to-orbit spaceplane concept called Skylon. This design was pioneered by Reaction Engines Limited, a company founded by Alan Bond after HOTOL was canceled. The Skylon spaceplane has been positively received by the British government and the British Interplanetary Society. Pending a successful engine test in June 2011, the company planned to begin Phase 3 of development, with the first orders expected around 2011–2013.
On June 1, 2012, the Romanian organization ARCA announced that it was constructing an expendable rocket, named Haas 2C, that will attempt to reach orbit in one stage. The rocket has a 520 kg empty weight and can carry 15.5 tons of fuel. It will use kerosene as fuel and liquid oxygen as oxidizer. In the spring of 2012 ARCA successfully tested a lightweight composite kerosene fuel tank. The liquid oxygen tank is being designed and will also be made of composite materials. The launch is expected to take place in the spring of 2013.

Many studies have shown that, regardless of the selected technology, the most effective cost reduction technique is economies of scale. Merely launching a large total quantity reduces the manufacturing costs per vehicle, similar to how the mass production of automobiles brought about great increases in affordability. Using this concept, some aerospace analysts believe the way to lower launch costs is the exact opposite of SSTO. Whereas reusable SSTOs would reduce per-launch costs by making a reusable high-tech vehicle that launches frequently with low maintenance, the "mass production" approach views the technical advances as a source of the cost problem in the first place. By simply building and launching large quantities of rockets, and hence launching a large volume of payload, costs can be brought down. This approach was attempted in the late 1970s and early 1980s in West Germany with the OTRAG rocket, based in Zaire (now the Democratic Republic of the Congo), and might have been successful had the project not been killed following political pressure from France, the Soviet Union, and other parties.

A related idea is to obtain economies of scale by building simple, massive, multi-stage rockets using cheap, off-the-shelf parts, dumping the vehicles into the ocean after use. This strategy is known as the "big dumb booster" approach. It is somewhat similar to the approach some previous systems have taken, using simple engine systems with "low-tech" fuels, as the Russian and Chinese space programs still do; these nations' launches are significantly cheaper than their Western counterparts'.

An alternative to economies of scale is to make the discarded stages practically reusable: this is the goal of the SpaceX reusable launch system development program and its Grasshopper demonstrator.
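The economies-of-scale argument can be illustrated with a toy cost model. All figures below are invented for illustration; the point is only the shape of the curve: when fixed program and workforce costs dominate, as the Shuttle cost analyses cited above found, cost per flight falls steeply with flight rate.

```python
def cost_per_flight(fixed_annual: float, marginal: float, flights: int) -> float:
    """Amortize fixed yearly program costs over the annual flight rate."""
    return fixed_annual / flights + marginal

# Hypothetical figures in USD, chosen only to illustrate the curve.
FIXED = 2_000_000_000    # workforce, facilities, and overhead per year
MARGINAL = 50_000_000    # hardware and propellant per flight

for rate in (2, 6, 12, 24, 48):
    print(f"{rate:>2} flights/year -> "
          f"${cost_per_flight(FIXED, MARGINAL, rate) / 1e6:,.0f}M per flight")
```

With these assumed numbers, going from 2 to 24 flights per year cuts the cost per flight by almost a factor of eight, which is the common logic behind both the reusable-SSTO approach and the mass-production approach.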
The Civil War impacted the lives of everyone in the United States, and this was no different for the children of the time. Some children actually served in the army as soldiers, while others witnessed the horror of war from afar. Many children had to grow up quickly, taking on new responsibilities at home or on the warfront.

Boys in the Army
Although soldiers were officially supposed to be at least 18 years old, both sides needed soldiers and were willing to look the other way when it came to age. As a result, thousands of young boys between the ages of 13 and 17 fought in the Civil War. Many of these boys were killed or wounded in battle.

Drummer Boys and Messengers
The youngest of the boy soldiers usually ended up being drummers or messengers. Boys as young as 10 years old are on record as serving as drummers during the Civil War. Drummers were used for communication on the battlefield: different drum rolls signaled different commands like "retreat" or "attack." Other boys were used as messengers. They were usually fast runners who would bravely run important battle messages from one commander to another.

The most famous of the boy soldiers during the Civil War was Johnny Clem. Johnny first tried to join the Union Army at the age of 9, but was rejected because of his age and size. However, he didn't give up. He followed along with the 22nd Michigan regiment until they adopted him as their drummer. He officially joined the Union Army two years later at the age of 13. He became famous when he shot a Confederate officer and escaped during a battle at Chickamauga, GA. Throughout the war Johnny's adventures and exploits became legendary. He continued on as a soldier after the war, rising to the rank of Brigadier General.

Children in the Army Camps
Some children served in the army camps. They would help wash dishes, fix meals, and set up the camp when it moved. These children were in less danger than the soldiers doing the fighting, but were often near the front lines.

Children at Home
War wasn't easy for the children at home, either. Most children had a relative who was off fighting the war, such as a father, brother, or uncle. They had to work extra hard and sometimes take on the jobs of adults to help make ends meet. They also lived in fear that their father or brother might never return.

Children in the South
Children living in the South had an added fear because much of the fighting took place in the South. If their home was near a battle, they would hear gunfire and cannon through the night. They might also see soldiers marching by on their way to fight or returning from a battle. They hoped the enemy soldiers wouldn't destroy their crops or their home.

Interesting Facts about Children in the Civil War
- Some boys would put a note with the number 18 in their shoes when applying for the army. This way they could say "I'm over 18" without really lying.
- Johnny Clem was the last veteran of the Civil War to retire from the U.S. Armed Forces, in 1915.
- The Civil War is sometimes called "The Boys' War" because so many young men fought as soldiers.
- Some historians estimate that as many as 20% of the soldiers who fought in the Civil War were under the age of 18.
A tropical cyclone is a storm system characterized by a large low-pressure center and numerous thunderstorms that produce strong winds and heavy rain. Tropical cyclones feed on heat released when moist air rises, resulting in condensation of water vapor contained in the moist air. They are fueled by a different heat mechanism than other cyclonic windstorms such as nor'easters, European windstorms, and polar lows, leading to their classification as "warm core" storm systems. Tropical cyclones originate in the doldrums near the equator, about 10° away from it.

The term "tropical" refers to both the geographic origin of these systems, which form almost exclusively in tropical regions of the globe, and their formation in maritime tropical air masses. The term "cyclone" refers to such storms' cyclonic nature, with counterclockwise rotation in the Northern Hemisphere and clockwise rotation in the Southern Hemisphere. Depending on its location and strength, a tropical cyclone is referred to by names such as hurricane, typhoon, tropical storm, cyclonic storm, tropical depression, and simply cyclone.

While tropical cyclones can produce extremely powerful winds and torrential rain, they are also able to produce high waves and damaging storm surge as well as spawning tornadoes. They develop over large bodies of warm water, and lose their strength if they move over land. This is why coastal regions can receive significant damage from a tropical cyclone, while inland regions are relatively safe from receiving strong winds. Heavy rains, however, can produce significant flooding inland, and storm surges can produce extensive coastal flooding up to 40 kilometres (25 mi) from the coastline. Although their effects on human populations can be devastating, tropical cyclones can also relieve drought conditions. They also carry heat and energy away from the tropics and transport it toward temperate latitudes, which makes them an important part of the global atmospheric circulation mechanism. As a result, tropical cyclones help to maintain equilibrium in the Earth's troposphere, and to maintain a relatively stable and warm temperature worldwide.

Many tropical cyclones develop when the atmospheric conditions around a weak disturbance in the atmosphere are favorable. The background environment is modulated by climatological cycles and patterns such as the Madden-Julian oscillation, El Niño-Southern Oscillation, and the Atlantic multidecadal oscillation. Others form when other types of cyclones acquire tropical characteristics. Tropical systems are then moved by steering winds in the troposphere; if the conditions remain favorable, the tropical disturbance intensifies, and can even develop an eye. On the other end of the spectrum, if the conditions around the system deteriorate or the tropical cyclone makes landfall, the system weakens and eventually dissipates. It is not possible to artificially induce the dissipation of these systems with current technology.

All tropical cyclones are areas of low atmospheric pressure near the Earth's surface. The pressures recorded at the centers of tropical cyclones are among the lowest that occur on Earth's surface at sea level. Tropical cyclones are characterized and driven by the release of large amounts of latent heat of condensation, which occurs when moist air is carried upwards and its water vapor condenses. This heat is distributed vertically around the center of the storm.
Thus, at any given altitude (except close to the surface, where water temperature dictates air temperature), the environment inside the cyclone is warmer than its outer surroundings.

A strong tropical cyclone will harbor an area of sinking air at the center of circulation. If this area is strong enough, it can develop into a large "eye". Weather in the eye is normally calm and free of clouds, although the sea may be extremely violent. The eye is normally circular in shape, and may range in size from 3 kilometres (1.9 mi) to 370 kilometres (230 mi) in diameter. Intense, mature tropical cyclones can sometimes exhibit an outward curving of the eyewall's top, making it resemble a football stadium; this phenomenon is thus sometimes referred to as the stadium effect.

There are other features that either surround the eye or cover it. The central dense overcast (CDO) is the concentrated area of strong thunderstorm activity near the center of a tropical cyclone; in weaker tropical cyclones, the CDO may cover the center completely. The eyewall is a circle of strong thunderstorms that surrounds the eye; this is where the greatest wind speeds are found, the clouds reach highest, and the precipitation is heaviest. The heaviest wind damage occurs where a tropical cyclone's eyewall passes over land.

Eyewall replacement cycles occur naturally in intense tropical cyclones. When cyclones reach peak intensity they usually have an eyewall and radius of maximum winds that contract to a very small size, around 10 kilometres (6.2 mi) to 25 kilometres (16 mi). Outer rainbands can organize into an outer ring of thunderstorms that slowly moves inward and robs the inner eyewall of its needed moisture and angular momentum. When the inner eyewall weakens, the tropical cyclone weakens; in other words, the maximum sustained winds weaken and the central pressure rises. The outer eyewall replaces the inner one completely at the end of the cycle. The storm can be of the same intensity as it was previously, or even stronger, after the eyewall replacement cycle finishes, and may strengthen again as it builds a new outer ring for the next replacement.

|Size descriptions of tropical cyclones|
|ROCI|Type|
|Less than 2 degrees of latitude|Very small/midget|
|2 to 3 degrees of latitude|Small|
|3 to 6 degrees of latitude|Medium/Average|
|6 to 8 degrees of latitude|Large|
|Over 8 degrees of latitude|Very large|

One measure of the size of a tropical cyclone is the distance from its center of circulation to its outermost closed isobar, also known as its ROCI. If the radius is less than two degrees of latitude, or 222 kilometres (138 mi), then the cyclone is "very small" or a "midget". A radius between 3 and 6 degrees of latitude, or 333 kilometres (207 mi) to 670 kilometres (420 mi), is considered "average-sized". "Very large" tropical cyclones have a radius of greater than 8 degrees, or 888 kilometres (552 mi). Use of this measure has objectively determined that tropical cyclones in the northwest Pacific Ocean are the largest on earth on average, with Atlantic tropical cyclones roughly half their size. Other methods of determining a tropical cyclone's size include measuring the radius of gale-force winds and measuring the radius at which its relative vorticity field decreases to 1×10⁻⁵ s⁻¹ from its center.

A tropical cyclone's primary energy source is the release of the heat of condensation from water vapor condensing at high altitudes, with solar heating being the initial source for evaporation.
Therefore, a tropical cyclone can be visualized as a giant vertical heat engine supported by mechanics driven by physical forces such as the rotation and gravity of the Earth. Viewed another way, tropical cyclones could be seen as a special type of mesoscale convective complex, which continues to develop over a vast source of relative warmth and moisture. While an initial warm-core system, such as an organized thunderstorm complex, is necessary for the formation of a tropical cyclone, a large flux of energy is needed to lower atmospheric pressure more than a few millibars (0.10 inch of mercury). The inflow of warmth and moisture from the underlying ocean surface is critical for tropical cyclone strengthening. A significant amount of the inflow in the cyclone is in the lowest 1 kilometre (3,300 ft) of the atmosphere.

Condensation leads to higher wind speeds, as a tiny fraction of the released energy is converted into mechanical energy; the faster winds and the lower pressure associated with them in turn cause increased surface evaporation and thus even more condensation. Much of the released energy drives updrafts that increase the height of the storm clouds, speeding up condensation. This positive feedback loop continues for as long as conditions are favorable for tropical cyclone development. Factors such as a continued lack of equilibrium in air mass distribution also supply energy to the cyclone. The rotation of the Earth causes the system to spin, an effect known as the Coriolis effect, giving it a cyclonic characteristic and affecting the trajectory of the storm.

What primarily distinguishes tropical cyclones from other meteorological phenomena is deep convection as a driving force. Because convection is strongest in a tropical climate, it defines the initial domain of the tropical cyclone. By contrast, mid-latitude cyclones draw their energy mostly from pre-existing horizontal temperature gradients in the atmosphere. To continue to drive its heat engine, a tropical cyclone must remain over warm water, which provides the needed atmospheric moisture to keep the positive feedback loop running. When a tropical cyclone passes over land, it is cut off from its heat source and its strength diminishes rapidly.

The passage of a tropical cyclone over the ocean can cause the upper layers of the ocean to cool substantially, which can influence subsequent cyclone development. Cooling is primarily caused by wind-driven upwelling of cold water from deeper in the ocean. The cooler water causes the storm to weaken. This is a negative feedback process that can cause storms over the open sea to weaken through their own effects. Additional cooling may come in the form of cold water from falling raindrops (because the atmosphere is cooler at higher altitudes). Cloud cover may also play a role in cooling the ocean, by shielding the ocean surface from direct sunlight before and slightly after the storm passage. All these effects can combine to produce a dramatic drop in sea surface temperature over a large area in just a few days.

Scientists at the US National Center for Atmospheric Research estimate that a tropical cyclone releases heat energy at the rate of 50 to 200 exajoules (10¹⁸ J) per day, equivalent to about 1 PW (10¹⁵ watts). This rate of energy release is equivalent to 70 times the world energy consumption of humans and 200 times the worldwide electrical generating capacity, or to exploding a 10-megaton nuclear bomb every 20 minutes.
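The energy figures above can be checked with unit conversion alone. The comparison below assumes a world primary energy consumption of roughly 15 TW, a commonly quoted ballpark that does not appear in this article.

```python
SECONDS_PER_DAY = 86_400

# Quoted heat release: 50 to 200 exajoules (1e18 J) per day.
for ej_per_day in (50, 200):
    watts = ej_per_day * 1e18 / SECONDS_PER_DAY
    print(f"{ej_per_day} EJ/day = {watts / 1e15:.2f} PW")

# The quoted ~1 PW average against an assumed ~15 TW world consumption:
print(f"1 PW / 15 TW = {1e15 / 15e12:.0f}x")  # close to the '70 times' above
```

The range works out to roughly 0.6 to 2.3 PW, bracketing the quoted 1 PW average.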
While the most obvious motion of clouds is toward the center, tropical cyclones also develop an upper-level (high-altitude) outward flow of clouds. These originate from air that has released its moisture and is expelled at high altitude through the "chimney" of the storm engine. This outflow produces high, thin cirrus clouds that spiral away from the center. The clouds are thin enough for the sun to be visible through them, and these high cirrus clouds may be the first signs of an approaching tropical cyclone. As air parcels are lifted within the eye of the storm, the vorticity is reduced, causing the outflow from a tropical cyclone to have anti-cyclonic motion.

|Basins and WMO Monitoring Institutions|
|Basin|Responsible RSMCs and TCWCs|
|North Atlantic|National Hurricane Center (United States)|
|North-East Pacific|National Hurricane Center (United States)|
|North-Central Pacific|Central Pacific Hurricane Center (United States)|
|North-West Pacific|Japan Meteorological Agency|
|North Indian Ocean|India Meteorological Department|
|South-West Indian Ocean|Météo-France|
|Australian region|Bureau of Meteorology† (Australia); Meteorological and Geophysical Agency† (Indonesia); Papua New Guinea National Weather Service†|
|Southern Pacific|Fiji Meteorological Service; Meteorological Service of New Zealand†|
|†: Indicates a Tropical Cyclone Warning Center|

There are six Regional Specialized Meteorological Centers (RSMCs) worldwide. These organizations are designated by the World Meteorological Organization and are responsible for tracking and issuing bulletins, warnings, and advisories about tropical cyclones in their designated areas of responsibility. Additionally, there are six Tropical Cyclone Warning Centers (TCWCs) that provide information to smaller regions. The RSMCs and TCWCs are not the only organizations that provide information about tropical cyclones to the public. The Joint Typhoon Warning Center (JTWC) issues advisories in all basins except the Northern Atlantic for the purposes of the United States Government. The Philippine Atmospheric, Geophysical and Astronomical Services Administration (PAGASA) issues advisories and names for tropical cyclones that approach the Philippines in the Northwestern Pacific to protect the life and property of its citizens. The Canadian Hurricane Center (CHC) issues advisories on hurricanes and their remnants when they affect Canada.

On 26 March 2004, Cyclone Catarina became the first recorded South Atlantic hurricane and subsequently struck southern Brazil with winds equivalent to Category 2 on the Saffir-Simpson Hurricane Scale. As the cyclone formed outside the authority of another warning center, Brazilian meteorologists initially treated the system as an extratropical cyclone, although they subsequently classified it as tropical.

Worldwide, tropical cyclone activity peaks in late summer, when the difference between temperatures aloft and sea surface temperatures is the greatest. However, each particular basin has its own seasonal patterns. On a worldwide scale, May is the least active month, while September is the most active; November is the only month in which all the tropical cyclone basins are active. In the Northern Atlantic Ocean, a distinct hurricane season occurs from June 1 to November 30, sharply peaking from late August through September; the statistical peak of the Atlantic hurricane season is 10 September. The Northeast Pacific Ocean has a broader period of activity, but in a similar time frame to the Atlantic.
The Northwest Pacific sees tropical cyclones year-round, with a minimum in February and March and a peak in early September. In the North Indian basin, storms are most common from April to December, with peaks in May and November. In the Southern Hemisphere, the tropical cyclone year begins on July 1 and runs all year round, encompassing the tropical cyclone seasons, which run from November 1 until the end of April, with peaks in mid-February to early March.

|Season lengths and seasonal averages|
|Basin|Season start|Season end|Tropical Storms|Tropical Cyclones|Category 3+ TCs|
|Australia/Southwest Pacific|November|April|9|4.8|1.9|

The formation of tropical cyclones is the topic of extensive ongoing research and is still not fully understood. While six factors appear to be generally necessary, tropical cyclones may occasionally form without meeting all of the following conditions. In most situations, water temperatures of at least 26.5 °C (79.7 °F) are needed down to a depth of at least 50 m (160 ft); waters of this temperature cause the overlying atmosphere to be unstable enough to sustain convection and thunderstorms. Another factor is rapid cooling with height, which allows the release of the heat of condensation that powers a tropical cyclone. High humidity is needed, especially in the lower-to-mid troposphere; when there is a great deal of moisture in the atmosphere, conditions are more favorable for disturbances to develop. Low amounts of wind shear are needed, as high shear is disruptive to the storm's circulation. Tropical cyclones generally need to form more than 555 km (345 mi), or 5 degrees of latitude, away from the equator, allowing the Coriolis effect to deflect winds blowing towards the low-pressure center and creating a circulation. Lastly, a formative tropical cyclone needs a pre-existing system of disturbed weather; without a circulation, no cyclonic development will take place. Low-latitude and low-level westerly wind bursts associated with the Madden-Julian oscillation can create favorable conditions for tropical cyclogenesis by initiating tropical disturbances.

Most tropical cyclones form in a worldwide band of thunderstorm activity called by several names: the Intertropical Front (ITF), the Intertropical Convergence Zone (ITCZ), or the monsoon trough. Another important source of atmospheric instability is found in tropical waves, which cause about 85% of intense tropical cyclones in the Atlantic Ocean and become most of the tropical cyclones in the Eastern Pacific basin.

Tropical cyclones move westward when equatorward of the subtropical ridge, intensifying as they move. Most of these systems form between 10 and 30 degrees away from the equator, and 87% form no farther away than 20 degrees of latitude, north or south. Because the Coriolis effect initiates and maintains tropical cyclone rotation, tropical cyclones rarely form or move within about 5 degrees of the equator, where the effect is weakest. However, it is possible for tropical cyclones to form within this boundary, as Tropical Storm Vamei did in 2001 and Cyclone Agni did in 2004.

Although tropical cyclones are large systems generating enormous energy, their movements over the Earth's surface are controlled by large-scale winds, the streams in the Earth's atmosphere. The path of motion is referred to as a tropical cyclone's track and has been compared by Dr. Neil Frank, former director of the National Hurricane Center, to "leaves carried along by a stream".
Tropical systems, while generally located equatorward of the 20th parallel, are steered primarily westward by the east-to-west winds on the equatorward side of the subtropical ridge, a persistent high-pressure area over the world's oceans. In the tropical North Atlantic and Northeast Pacific oceans, trade winds, another name for the westward-moving wind currents, steer tropical waves westward from the African coast toward the Caribbean Sea, North America, and ultimately into the central Pacific Ocean before the waves dampen out. These waves are the precursors to many tropical cyclones within this region. In the Indian Ocean and Western Pacific (both north and south of the equator), tropical cyclogenesis is strongly influenced by the seasonal movement of the Intertropical Convergence Zone and the monsoon trough, rather than by easterly waves. Tropical cyclones can also be steered by other systems, such as other low-pressure systems, high-pressure systems, warm fronts, and cold fronts.

The Earth's rotation imparts an acceleration known as the Coriolis effect, Coriolis acceleration, or, colloquially, the Coriolis force. This acceleration causes cyclonic systems to turn towards the poles in the absence of strong steering currents. The poleward portion of a tropical cyclone contains easterly winds, and the Coriolis effect pulls them slightly more poleward. The westerly winds on the equatorward portion of the cyclone pull slightly towards the equator, but, because the Coriolis effect weakens toward the equator, the net drag on the cyclone is poleward. Thus, tropical cyclones in the Northern Hemisphere usually turn north (before being blown east), and tropical cyclones in the Southern Hemisphere usually turn south (before being blown east) when no other effects counteract the Coriolis effect.

When a tropical cyclone crosses the subtropical ridge axis, its general track around the high-pressure area is deflected significantly by winds moving towards the general low-pressure area to its north. When the cyclone track becomes strongly poleward with an easterly component, the cyclone has begun recurvature. A typhoon moving through the Pacific Ocean towards Asia, for example, will recurve offshore of Japan to the north, and then to the northeast, if it encounters southwesterly winds (blowing northeastward) around a low-pressure system passing over China or Siberia. Many tropical cyclones are eventually forced toward the northeast by extratropical cyclones in this manner, which move from west to east to the north of the subtropical ridge. An example of a tropical cyclone in recurvature was Typhoon Ioke in 2006, which took such a trajectory.

Officially, landfall is when a storm's center (the center of its circulation, not its edge) crosses the coastline. Storm conditions may be experienced on the coast and inland hours before landfall; in fact, a tropical cyclone can unleash its strongest winds over land yet not make landfall, in which case the storm is said to have made a direct hit on the coast. Because of the narrowness of this definition, the landfall area has already experienced half of the storm by the time landfall actually occurs. For emergency preparedness, actions should be timed from when a certain wind speed or intensity of rainfall will reach land, not from when landfall will occur.

When two cyclones approach one another, their centers will begin orbiting cyclonically about a point between the two systems.
The two vortices will be attracted to each other, and eventually spiral into the center point and merge. When the two vortices are of unequal size, the larger vortex will tend to dominate the interaction, and the smaller vortex will orbit around it. This phenomenon is called the Fujiwhara effect, after Sakuhei Fujiwhara.

A tropical cyclone can cease to have tropical characteristics in several different ways. One such way is if it moves over land, which deprives it of the warm water it needs to power itself, so that it quickly loses strength. Most strong storms lose their strength very rapidly after landfall and become disorganized areas of low pressure within a day or two, or evolve into extratropical cyclones. A tropical cyclone could regenerate if it managed to get back over open warm water, but if it remains over mountains for even a short time, weakening will accelerate. Many storm fatalities occur in mountainous terrain, as the dying storm unleashes torrential rainfall, leading to deadly floods and mudslides, similar to those that happened with Hurricane Mitch in 1998. Additionally, dissipation can occur if a storm remains in the same area of ocean for too long, mixing the upper 60 metres (200 ft) of water and dropping sea surface temperatures more than 5 °C (9 °F). Without warm surface water, the storm cannot survive.

A tropical cyclone can dissipate when it moves over waters significantly cooler than 26.5 °C (79.7 °F). This will cause the storm to lose its tropical characteristics (i.e. thunderstorms near the center and a warm core) and become a remnant low-pressure area, which can persist for several days. This is the main dissipation mechanism in the Northeast Pacific Ocean. Weakening or dissipation can also occur if the storm experiences vertical wind shear, causing the convection and heat engine to move away from the center; this normally ceases development of a tropical cyclone. Additionally, interaction with the main belt of the Westerlies, by means of merging with a nearby frontal zone, can cause tropical cyclones to evolve into extratropical cyclones. This transition can take 1–3 days. Even after a tropical cyclone is said to be extratropical or dissipated, it can still have tropical-storm-force (or occasionally hurricane/typhoon-force) winds and drop several inches of rainfall. In the Pacific and Atlantic oceans, such tropical-derived cyclones of higher latitudes can be violent and may occasionally remain at hurricane- or typhoon-force wind speeds when they reach the west coast of North America. These phenomena can also affect Europe, where they are known as European windstorms; the extratropical remnants of Hurricane Iris in 1995 are an example. Additionally, a cyclone can merge with another area of low pressure, becoming a larger area of low pressure; this can strengthen the resultant system, although it may no longer be a tropical cyclone. Studies in the 2000s have given rise to the hypothesis that large amounts of dust reduce the strength of tropical cyclones.

In the 1960s and 1970s, the United States government attempted to weaken hurricanes through Project Stormfury by seeding selected storms with silver iodide. It was thought that the seeding would cause supercooled water in the outer rainbands to freeze, causing the inner eyewall to collapse and thus reducing the winds. The winds of Hurricane Debbie, a hurricane seeded in Project Stormfury, dropped as much as 31%, but Debbie regained its strength after each of two seeding forays.
In an earlier episode, in 1947, disaster struck when a hurricane east of Jacksonville, Florida, promptly changed its course after being seeded and smashed into Savannah, Georgia. Because there was so much uncertainty about the behavior of these storms, the federal government would not approve seeding operations unless the hurricane had a less than 10% chance of making landfall within 48 hours, greatly reducing the number of possible test storms. The project was dropped after it was discovered that eyewall replacement cycles occur naturally in strong hurricanes, casting doubt on the results of the earlier attempts. Today, it is known that silver iodide seeding is not likely to have an effect because the amount of supercooled water in the rainbands of a tropical cyclone is too low.

Other approaches have been suggested over time, including cooling the water under a tropical cyclone by towing icebergs into the tropical oceans. Other ideas range from covering the ocean in a substance that inhibits evaporation, to dropping large quantities of ice into the eye at very early stages of development (so that the latent heat is absorbed by the ice, instead of being converted to kinetic energy that would feed the positive feedback loop), to blasting the cyclone apart with nuclear weapons. Project Cirrus even involved throwing dry ice on a cyclone. These approaches all suffer from one flaw above many others: tropical cyclones are simply too large and short-lived for any of the weakening techniques to be practical.

Tropical cyclones out at sea cause large waves, heavy rain, and high winds, disrupting international shipping and, at times, causing shipwrecks. Tropical cyclones stir up water, leaving a cool wake behind them, which causes the region to be less favorable for subsequent tropical cyclones. On land, strong winds can damage or destroy vehicles, buildings, bridges, and other outside objects, turning loose debris into deadly flying projectiles. The storm surge, or the increase in sea level due to the cyclone, is typically the worst effect from landfalling tropical cyclones, historically resulting in 90% of tropical cyclone deaths. The broad rotation of a landfalling tropical cyclone, and vertical wind shear at its periphery, spawn tornadoes. Tornadoes can also be spawned as a result of eyewall mesovortices, which persist until landfall.

Over the past two centuries, tropical cyclones have been responsible for the deaths of about 1.9 million people worldwide. Large areas of standing water caused by flooding lead to infection, as well as contributing to mosquito-borne illnesses. Crowded evacuees in shelters increase the risk of disease propagation. Tropical cyclones significantly interrupt infrastructure, leading to power outages, bridge destruction, and the hampering of reconstruction efforts.

Although cyclones take an enormous toll in lives and personal property, they may be important factors in the precipitation regimes of the places they impact, as they may bring much-needed precipitation to otherwise dry regions. Tropical cyclones also help maintain the global heat balance by moving warm, moist tropical air to the middle latitudes and polar regions. The storm surge and winds of hurricanes may be destructive to human-made structures, but they also stir up the waters of coastal estuaries, which are typically important fish breeding locales. Tropical cyclone destruction spurs redevelopment, greatly increasing local property values.
Intense tropical cyclones pose a particular observation challenge, as they are a dangerous oceanic phenomenon, and weather stations, being relatively sparse, are rarely available on the site of the storm itself. Surface observations are generally available only if the storm is passing over an island or a coastal area, or if there is a nearby ship. Usually, real-time measurements are taken in the periphery of the cyclone, where conditions are less catastrophic and its true strength cannot be evaluated. For this reason, there are teams of meteorologists that move into the path of tropical cyclones to help evaluate their strength at the point of landfall.

Tropical cyclones far from land are tracked by weather satellites capturing visible and infrared images from space, usually at half-hour to quarter-hour intervals. As a storm approaches land, it can be observed by land-based Doppler radar. Radar plays a crucial role around landfall by showing a storm's location and intensity every several minutes.

In-situ measurements can be taken in real time by sending specially equipped reconnaissance flights into the cyclone. In the Atlantic basin, these flights are regularly flown by United States government hurricane hunters. The aircraft used are WC-130 Hercules and WP-3D Orions, both four-engine turboprop aircraft. These aircraft fly directly into the cyclone and take direct and remote-sensing measurements. They also launch GPS dropsondes inside the cyclone. These sondes measure temperature, humidity, pressure, and especially winds between flight level and the ocean's surface. A new era in hurricane observation began when a remotely piloted Aerosonde, a small drone aircraft, was flown through Tropical Storm Ophelia as it passed Virginia's Eastern Shore during the 2005 hurricane season. A similar mission was also completed successfully in the western Pacific Ocean. This demonstrated a new way to probe the storms at low altitudes that human pilots seldom dare to fly.

Because of the forces that affect tropical cyclone tracks, accurate track predictions depend on determining the position and strength of high- and low-pressure areas, and on predicting how those areas will change during the life of a tropical system. The deep layer mean flow, or the average wind through the depth of the troposphere, is considered the best tool in determining track direction and speed. If storms are significantly sheared, using wind speed measurements at a lower altitude, such as at the 700 hPa pressure surface (3,000 metres or 9,800 feet above sea level), will produce better predictions. Tropical forecasters also smooth out short-term wobbles of the storm, which allows them to determine a more accurate long-term trajectory. High-speed computers and sophisticated simulation software allow forecasters to produce computer models that predict tropical cyclone tracks based on the future position and strength of high- and low-pressure systems. By combining forecast models with an increased understanding of the forces that act on tropical cyclones, and with a wealth of data from Earth-orbiting satellites and other sensors, scientists have increased the accuracy of track forecasts over recent decades. However, they are not as skillful at predicting the intensity of tropical cyclones. The lack of improvement in intensity forecasting is attributed to the complexity of tropical systems and an incomplete understanding of the factors that affect their development.
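As a sketch of the deep-layer-mean idea described above: a weighted average of winds at several pressure levels gives a first-guess steering vector. The levels, the pressure weighting, and the sample winds below are illustrative assumptions, not an operational algorithm; real steering calculations weight by layer thickness and are tuned to storm intensity.

```python
# Each entry: (pressure level in hPa, u wind in m/s, v wind in m/s).
# The sample winds are invented for illustration.
levels = [
    (850, -6.0, 1.0),
    (700, -5.0, 2.0),
    (500, -3.0, 4.0),
    (300, -1.0, 6.0),
    (200,  2.0, 8.0),
]

def deep_layer_mean(levels):
    """Pressure-weighted mean wind as a crude steering estimate."""
    total = sum(p for p, _, _ in levels)
    u = sum(p * u for p, u, _ in levels) / total
    v = sum(p * v for p, _, v in levels) / total
    return u, v

u, v = deep_layer_mean(levels)
print(f"steering vector: u = {u:.1f} m/s, v = {v:.1f} m/s")
```

Consistent with the text, for a strongly sheared storm one would restrict the average to a lower level such as the 700 hPa surface.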
Tropical cyclones are classified into three main groups, based on intensity: tropical depressions, tropical storms, and a third group of more intense storms whose name depends on the region. For example, if a tropical storm in the Northwestern Pacific reaches hurricane-strength winds on the Beaufort scale, it is referred to as a typhoon; if a tropical storm passes the same benchmark in the Northeast Pacific Basin or in the Atlantic, it is called a hurricane. Neither "hurricane" nor "typhoon" is used in the Southern Hemisphere or the Indian Ocean; in these basins, storms of tropical nature are referred to simply as "cyclones". Additionally, as indicated in the table below, each basin uses a separate system of terminology, making comparisons between different basins difficult. In the Pacific Ocean, hurricanes from the Central North Pacific sometimes cross the International Date Line into the Northwest Pacific, becoming typhoons (such as Hurricane/Typhoon Ioke in 2006); on rare occasions, the reverse will occur. Typhoons with sustained winds greater than 67 metres per second (130 kn) or 150 miles per hour (240 km/h) are called Super Typhoons by the Joint Typhoon Warning Center.

A tropical depression is an organized system of clouds and thunderstorms with a defined, closed surface circulation and maximum sustained winds of less than 17 metres per second (33 kn) or 39 miles per hour (63 km/h). It has no eye and does not typically have the organization or the spiral shape of more powerful storms. However, it is already a low-pressure system, hence the name "depression". The practice of the Philippines is to name tropical depressions from its own naming convention when the depressions are within the Philippines' area of responsibility.

A tropical storm is an organized system of strong thunderstorms with a defined surface circulation and maximum sustained winds between 17 metres per second (33 kn; 39 miles per hour; 63 km/h) and 32 metres per second (62 kn; 73 miles per hour; 117 km/h). At this point, the distinctive cyclonic shape starts to develop, although an eye is not usually present. Government weather services, other than that of the Philippines, first assign names to systems that reach this intensity (thus the term named storm).

A hurricane or typhoon (sometimes simply referred to as a tropical cyclone, as opposed to a depression or storm) is a system with sustained winds of at least 33 metres per second (64 kn) or 74 miles per hour (119 km/h). A cyclone of this intensity tends to develop an eye, an area of relative calm (and lowest atmospheric pressure) at the center of circulation. The eye is often visible in satellite images as a small, circular, cloud-free spot. Surrounding the eye is the eyewall, an area about 16 kilometres (9.9 mi) to 80 kilometres (50 mi) wide in which the strongest thunderstorms and winds circulate around the storm's center. Maximum sustained winds in the strongest tropical cyclones have been estimated at about 85 metres per second (165 kn) or 195 miles per hour (314 km/h).
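The three-group scheme just described maps directly onto wind-speed thresholds. A minimal sketch using the metric thresholds quoted above (maximum sustained winds in metres per second):

```python
def classify(max_sustained_wind_ms: float) -> str:
    """Classify a tropical cyclone by maximum sustained wind speed,
    using the thresholds quoted in the text above."""
    if max_sustained_wind_ms < 17:
        return "tropical depression"        # below 17 m/s (39 mph)
    if max_sustained_wind_ms < 33:
        return "tropical storm"             # 17-32 m/s (39-73 mph)
    return "hurricane/typhoon/cyclone"      # 33 m/s (74 mph) and above

for wind in (12.0, 25.0, 50.0):
    print(f"{wind} m/s -> {classify(wind)}")
```

The name applied to the third group depends on the basin; the table below shows how the various warning centers label the same wind ranges.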
|Tropical Cyclone Classifications (all winds are 10-minute averages)|
|Beaufort scale|10-minute sustained winds (knots)|N Indian Ocean (IMD)|SW Indian Ocean (MF)|Australian region (BOM)|SW Pacific (FMS)|NW Pacific (JMA)|NW Pacific (JTWC)|NE Pacific & N Atlantic (NHC, CHC & CPHC)|
(blank cells continue the classification from the row above)
|0–6|<28 knots (32 mph; 52 km/h)|Depression|Trop. Disturbance|Tropical Low|Tropical Depression|Tropical Depression|Tropical Depression|Tropical Depression|
|7|28–29 knots (32–33 mph; 52–54 km/h)|Deep Depression|Depression| | | | | |
| |30–33 knots (35–38 mph; 56–61 km/h)| | | | | |Tropical Storm|Tropical Storm|
|8–9|34–47 knots (39–54 mph; 63–87 km/h)|Cyclonic Storm|Moderate Tropical Storm|Tropical Cyclone (1)|Tropical Cyclone (1)|Tropical Storm| | |
|10|48–55 knots (55–63 mph; 89–102 km/h)|Severe Cyclonic Storm|Severe Tropical Storm|Tropical Cyclone (2)|Tropical Cyclone (2)|Severe Tropical Storm| | |
|11|56–63 knots (64–72 mph; 104–117 km/h)| | | | | |Typhoon|Hurricane (1)|
|12|64–72 knots (74–83 mph; 119–133 km/h)|Very Severe Cyclonic Storm|Tropical Cyclone|Severe Tropical Cyclone (3)|Severe Tropical Cyclone (3)|Typhoon| | |
| |73–85 knots (84–98 mph; 135–157 km/h)| | | | | | |Hurricane (2)|
| |86–89 knots (99–102 mph; 159–165 km/h)| | |Severe Tropical Cyclone (4)|Severe Tropical Cyclone (4)| | |Major Hurricane (3)|
| |90–99 knots (100–110 mph; 170–180 km/h)| |Intense Tropical Cyclone| | | | | |
| |100–106 knots (115–122 mph; 185–196 km/h)| | | | | | |Major Hurricane (4)|
| |107–114 knots (123–131 mph; 198–211 km/h)| | |Severe Tropical Cyclone (5)|Severe Tropical Cyclone (5)| | | |
| |115–119 knots (132–137 mph; 213–220 km/h)| |Very Intense Tropical Cyclone| | | |Super Typhoon| |
| |>120 knots (140 mph; 220 km/h)|Super Cyclonic Storm| | | | | |Major Hurricane (5)|

The word typhoon, which is used today in the Northwest Pacific, may be derived from Urdu, Persian, and Arabic ţūfān (طوفان), which in turn originates from Greek tuphōn (Τυφών), a monster in Greek mythology responsible for hot winds. The related Portuguese word tufão, used in Portuguese for typhoons, is also derived from Greek tuphōn. It is also similar to Chinese "dafeng" ("daifung" in Cantonese) (大風, literally "big winds"), which may explain why "typhoon" came to be used for East Asian cyclones. The word hurricane, used in the North Atlantic and Northeast Pacific, is derived from the name of a native Caribbean Amerindian storm god, Huracan, via Spanish huracán. (Huracan is also the source of the word Orcan, another word for the European windstorm. These events should not be confused.) Huracan became the Spanish term for hurricanes.

Storms reaching tropical storm strength were initially given names to eliminate confusion when multiple systems are active in any individual basin at the same time, which assists in warning people of the coming storm. In most cases, a tropical cyclone retains its name throughout its life; however, under special circumstances, tropical cyclones may be renamed while active. These names are taken from lists that vary from region to region and are usually drafted a few years ahead of time. The lists are decided on, depending on the region, either by committees of the World Meteorological Organization (called primarily to discuss many other issues), or by national weather offices involved in the forecasting of the storms. Each year, the names of particularly destructive storms (if there are any) are "retired", and new names are chosen to take their place.

Tropical cyclones that cause extreme destruction are rare, although when they occur, they can cause great amounts of damage or thousands of fatalities. The 1970 Bhola cyclone is the deadliest tropical cyclone on record, killing more than 300,000 people and potentially as many as 1 million after striking the densely populated Ganges Delta region of Bangladesh on 13 November 1970. Its powerful storm surge was responsible for the high death toll.
The North Indian cyclone basin has historically been the deadliest basin. Elsewhere, Typhoon Nina killed nearly 100,000 in China in 1975 due to a 100-year flood that caused 62 dams, including the Banqiao Dam, to fail. The Great Hurricane of 1780 is the deadliest Atlantic hurricane on record, killing about 22,000 people in the Lesser Antilles. A tropical cyclone does not need to be particularly strong to cause memorable damage, primarily if the deaths are from rainfall or mudslides. Tropical Storm Thelma in November 1991 killed thousands in the Philippines, while in 1982 the unnamed tropical depression that eventually became Hurricane Paul killed around 1,000 people in Central America.

Hurricane Katrina is estimated as the costliest tropical cyclone worldwide, causing $81.2 billion in property damage (2008 USD), with overall damage estimates exceeding $100 billion (2005 USD). Katrina killed at least 1,836 people after striking Louisiana and Mississippi as a major hurricane in August 2005. Hurricane Andrew is the second most destructive tropical cyclone in U.S. history, with damages totaling $40.7 billion (2008 USD), and, with damage costs at $31.5 billion (2008 USD), Hurricane Ike is the third most destructive tropical cyclone in U.S. history. The Galveston Hurricane of 1900 is the deadliest natural disaster in the United States, killing an estimated 6,000 to 12,000 people in Galveston, Texas. Hurricane Mitch caused more than 10,000 fatalities in Latin America. Hurricane Iniki in 1992 was the most powerful storm to strike Hawaii in recorded history, hitting Kauai as a Category 4 hurricane, killing six people, and causing U.S. $3 billion in damage. Other destructive Eastern Pacific hurricanes include Pauline and Kenna, both causing severe damage after striking Mexico as major hurricanes. In March 2004, Cyclone Gafilo struck northeastern Madagascar as a powerful cyclone, killing 74, affecting more than 200,000, and becoming the worst cyclone to affect the nation for more than 20 years.

The most intense storm on record was Typhoon Tip in the northwestern Pacific Ocean in 1979, which reached a minimum pressure of 870 mbar (25.69 inHg) and maximum sustained wind speeds of 165 knots (85 m/s) or 190 miles per hour (310 km/h). Tip, however, does not solely hold the record for the fastest sustained winds in a cyclone: Typhoon Keith in the Pacific and Hurricanes Camille and Allen in the North Atlantic currently share this record with Tip. Camille was the only storm to actually strike land while at that intensity, making it, with 165 knots (85 m/s) or 190 miles per hour (310 km/h) sustained winds and 183 knots (94 m/s) or 210 miles per hour (340 km/h) gusts, the strongest tropical cyclone on record at landfall. Typhoon Nancy in 1961 had recorded wind speeds of 185 knots (95 m/s) or 215 miles per hour (346 km/h), but recent research indicates that wind speeds from the 1940s to the 1960s were gauged too high, and Nancy is no longer considered the storm with the highest wind speeds on record. Similarly, a surface-level gust caused by Typhoon Paka on Guam was recorded at 205 knots (105 m/s) or 235 miles per hour (378 km/h). Had it been confirmed, it would be the strongest non-tornadic wind ever recorded on the Earth's surface, but the reading had to be discarded because the anemometer was damaged by the storm. In addition to being the most intense tropical cyclone on record, Tip was the largest cyclone on record, with tropical storm-force winds 2,170 kilometres (1,350 mi) in diameter.
The smallest storm on record, Tropical Storm Marco, formed during October 2008 and made landfall in Veracruz. Marco generated tropical storm-force winds only 37 kilometres (23 mi) in diameter. Hurricane John is the longest-lasting tropical cyclone on record, lasting 31 days in 1994. Before the advent of satellite imagery in 1961, however, the durations of many tropical cyclones were underestimated. John is the second longest-tracked tropical cyclone in the Northern Hemisphere on record, behind Typhoon Ophelia of 1960, which had a path of 8,500 miles (12,500 km). Reliable data for Southern Hemisphere cyclones is unavailable.

Most tropical cyclones form on the side of the subtropical ridge closer to the equator, then move poleward past the ridge axis before recurving into the main belt of the Westerlies. When the subtropical ridge position shifts due to El Niño, so will the preferred tropical cyclone tracks. Areas west of Japan and Korea tend to experience far fewer September–November tropical cyclone impacts during El Niño and neutral years. During El Niño years, the break in the subtropical ridge tends to lie near 130°E, which favors the Japanese archipelago. During El Niño years, Guam's chance of a tropical cyclone impact is one-third of the long-term average. The tropical Atlantic Ocean experiences depressed activity due to increased vertical wind shear across the region during El Niño years. During La Niña years, the formation of tropical cyclones, along with the subtropical ridge position, shifts westward across the western Pacific Ocean, which increases the landfall threat to China.

While the number of storms in the Atlantic has increased since 1995, there is no obvious global trend; the annual number of tropical cyclones worldwide remains about 87 ± 10. However, the ability of climatologists to make long-term data analyses in certain basins is limited by the lack of reliable historical data in some basins, primarily in the Southern Hemisphere. In spite of that, there is some evidence that the intensity of hurricanes is increasing. Kerry Emanuel stated, "Records of hurricane activity worldwide show an upswing of both the maximum wind speed in and the duration of hurricanes. The energy released by the average hurricane (again considering all hurricanes worldwide) seems to have increased by around 70% in the past 30 years or so, corresponding to about a 15% increase in the maximum wind speed and a 60% increase in storm lifetime."

Atlantic storms are becoming more destructive financially, since five of the ten most expensive storms in United States history have occurred since 1990. According to the World Meteorological Organization, "recent increase in societal impact from tropical cyclones has largely been caused by rising concentrations of population and infrastructure in coastal regions." Pielke et al. (2008) normalized mainland U.S. hurricane damage from 1900–2005 to 2005 values and found no remaining trend of increasing absolute damage. The 1970s and 1980s were notable because of the extremely low amounts of damage compared to other decades. The decade 1996–2005 was the second most damaging among the past 11 decades, with only the decade 1926–1935 surpassing its costs. The most damaging single storm is the 1926 Miami hurricane, with $157 billion of normalized damage.
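The normalization by Pielke et al. mentioned above adjusts a historical damage figure for how much the exposed society has grown since the storm. The sketch below assumes the commonly described form of the method, multiplying base-year damage by inflation, real wealth per capita, and affected-area population ratios; the ratios shown are invented, and the actual study uses county-level datasets.

```python
def normalize_damage(damage: float, inflation: float,
                     wealth: float, population: float) -> float:
    """Scale a historical damage figure to a target year.

    Each ratio is the target-year value divided by the base-year value:
    price level, real wealth per capita, and affected-area population.
    """
    return damage * inflation * wealth * population

# Invented example: $100M of damage in 1926 scaled to 2005-era society.
print(f"${normalize_damage(100e6, 10.5, 5.0, 6.0) / 1e9:.1f}B normalized")
```

Under this kind of adjustment, modest historical dollar figures can normalize to tens of billions, which is how the 1926 Miami hurricane tops the list above.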
Often in part because of the threat of hurricanes, many coastal regions had sparse population between major ports until the advent of automobile tourism; therefore, the most severe portions of hurricanes striking the coast may have gone unmeasured in some instances. The combined effects of ship destruction and remote landfall severely limit the number of intense hurricanes in the official record before the era of hurricane reconnaissance aircraft and satellite meteorology. Therefore, although the record shows a distinct increase in the number and strength of intense hurricanes, experts regard the early data as suspect.

The number and strength of Atlantic hurricanes may undergo a 50–70 year cycle, also known as the Atlantic Multidecadal Oscillation. Nyberg et al. reconstructed Atlantic major hurricane activity back to the early 18th century and found five periods averaging 3–5 major hurricanes per year and lasting 40–60 years, and six other periods averaging 1.5–2.5 major hurricanes per year and lasting 10–20 years. These periods are associated with the Atlantic Multidecadal Oscillation. Throughout, a decadal oscillation related to solar irradiance was responsible for enhancing or dampening the number of major hurricanes by 1–2 per year.

Although more common since 1995, few above-normal hurricane seasons occurred during 1970–94. Destructive hurricanes struck frequently from 1926–60, including many major New England hurricanes. Twenty-one Atlantic tropical storms formed in 1933, a record only recently exceeded in 2005, which saw 28 storms. Tropical hurricanes occurred infrequently during the seasons of 1900–25; however, many intense storms formed during 1870–99. During the 1887 season, 19 tropical storms formed, of which a record 4 occurred after 1 November and 11 strengthened into hurricanes. Few hurricanes occurred in the 1840s to 1860s; however, many struck in the early 19th century, including an 1821 storm that made a direct hit on New York City. Some historical weather experts say these storms may have been as high as Category 4 in strength.

These active hurricane seasons predated satellite coverage of the Atlantic basin. Before the satellite era began in 1960, tropical storms or hurricanes went undetected unless a reconnaissance aircraft encountered one, a ship reported a voyage through the storm, or a storm hit land in a populated area. The official record, therefore, could miss storms in which no ship experienced gale-force winds, recognized it as a tropical storm (as opposed to a high-latitude extra-tropical cyclone, a tropical wave, or a brief squall), returned to port, and reported the experience.

Proxy records based on paleotempestological research have revealed that major hurricane activity along the Gulf of Mexico coast varies on timescales of centuries to millennia. Few major hurricanes struck the Gulf coast during 3000–1400 BC and again during the most recent millennium. These quiescent intervals were separated by a hyperactive period between 1400 BC and 1000 AD, when the Gulf coast was struck frequently by catastrophic hurricanes and their landfall probabilities increased by 3–5 times. This millennial-scale variability has been attributed to long-term shifts in the position of the Azores High, which may also be linked to changes in the strength of the North Atlantic Oscillation. According to the Azores High hypothesis, an anti-phase pattern is expected to exist between the Gulf of Mexico coast and the Atlantic coast.
During the quiescent periods, a more northeasterly position of the Azores High would result in more hurricanes being steered towards the Atlantic coast. During the hyperactive period, more hurricanes were steered towards the Gulf coast as the Azores High was shifted to a more southwesterly position near the Caribbean. Such a displacement of the Azores High is consistent with paleoclimatic evidence that shows an abrupt onset of a drier climate in Haiti around 3200 14C years BP, and a change towards more humid conditions in the Great Plains during the late Holocene as more moisture was pumped up the Mississippi Valley through the Gulf coast. Preliminary data from the northern Atlantic coast seem to support the Azores High hypothesis. A 3,000-year proxy record from a coastal lake in Cape Cod suggests that hurricane activity increased significantly during the past 500–1,000 years, just as the Gulf coast was amid a quiescent period of the last millennium.

The U.S. National Oceanic and Atmospheric Administration Geophysical Fluid Dynamics Laboratory performed a simulation to determine if there is a statistical trend in the frequency or strength of tropical cyclones over time. The simulation concluded "the strongest hurricanes in the present climate may be upstaged by even more intense hurricanes over the next century as the earth's climate is warmed by increasing levels of greenhouse gases in the atmosphere". In an article in Nature, Kerry Emanuel stated that potential hurricane destructiveness, a measure combining hurricane strength, duration, and frequency, "is highly correlated with tropical sea surface temperature, reflecting well-documented climate signals, including multidecadal oscillations in the North Atlantic and North Pacific, and global warming". Emanuel predicted "a substantial increase in hurricane-related losses in the twenty-first century". In more recent work published by Emanuel (in the March 2008 issue of the Bulletin of the American Meteorological Society), he states that new climate modeling data indicate "global warming should reduce the global frequency of hurricanes." The new work suggests that, even in a dramatically warming world, hurricane frequency and intensity may not substantially rise during the next two centuries. Similarly, P.J. Webster and others published an article in Science examining the "changes in tropical cyclone number, duration, and intensity" over the past 35 years, the period when satellite data has been available. Their main finding was that, although the number of cyclones decreased throughout the planet excluding the north Atlantic Ocean, there was a great increase in the number and proportion of very strong cyclones.

[Table: costliest Atlantic hurricanes, in 2005 USD; only one row survives in this copy: rank 6, the 1938 "New England" hurricane, $39.2 billion. Main article: List of costliest Atlantic hurricanes.]

The strength of the reported effect is surprising in light of modeling studies that predict only a one-half category increase in storm intensity as a result of a ~2 °C (3.6 °F) global warming. Such a response would have predicted only a ~10% increase in Emanuel's potential destructiveness index during the 20th century rather than the ~75–120% increase he reported. Secondly, after adjusting for changes in population and inflation, and despite a more than 100% increase in Emanuel's potential destructiveness index, no statistically significant increase in the monetary damages resulting from Atlantic hurricanes has been found.
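Emanuel's destructiveness measure is commonly operationalized as a power dissipation index (PDI): the cube of a storm's maximum sustained wind speed integrated over its lifetime, summed over all storms. A minimal sketch, with made-up six-hourly track data standing in for real best-track records:

```python
# Power dissipation index (PDI): sum over storms of the integral of the cube
# of maximum sustained wind speed over the storm's lifetime.
def pdi(tracks, dt_hours=6.0):
    """tracks: one list per storm of max sustained winds (m/s) at fixed intervals."""
    dt = dt_hours * 3600.0  # sampling interval in seconds
    return sum(sum(v ** 3 for v in storm) * dt for storm in tracks)

season = [
    [20, 30, 45, 50, 40, 25],  # hypothetical storm 1: six-hourly winds, m/s
    [18, 25, 33, 28],          # hypothetical storm 2
]
print(f"seasonal PDI ~ {pdi(season):.2e} m^3 s^-2")
```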
Sufficiently warm sea surface temperatures are considered vital to the development of tropical cyclones. Although neither study can directly link hurricanes with global warming, the increase in sea surface temperatures is believed to be due to both global warming and natural variability, e.g. the hypothesized Atlantic Multidecadal Oscillation (AMO), although an exact attribution has not been defined. However, recent temperatures are the warmest ever observed for many ocean basins.

In February 2007, the United Nations Intergovernmental Panel on Climate Change released its fourth assessment report on climate change. The report noted many observed changes in the climate, including atmospheric composition, global average temperatures, and ocean conditions, among others. The report concluded the observed increase in tropical cyclone intensity is larger than climate models predict. Additionally, the report considered that it is likely that storm intensity will continue to increase through the 21st century, and declared it more likely than not that there has been some human contribution to the increases in tropical cyclone intensity.

However, there is no universal agreement about the magnitude of the effects anthropogenic global warming has on tropical cyclone formation, track, and intensity. For example, critics such as Chris Landsea assert that man-made effects would be "quite tiny compared to the observed large natural hurricane variability". A statement by the American Meteorological Society on 1 February 2007 stated that trends in tropical cyclone records offer "evidence both for and against the existence of a detectable anthropogenic signal" in tropical cyclogenesis. Although many aspects of a link between tropical cyclones and global warming are still being "hotly debated", a point of agreement is that no individual tropical cyclone or season can be attributed to global warming. Research reported in the 3 September 2008 issue of Nature found that the strongest tropical cyclones are getting stronger, particularly over the North Atlantic and Indian oceans. Wind speeds for the strongest tropical storms increased from an average of 140 miles per hour (230 km/h) in 1981 to 156 miles per hour (251 km/h) in 2006, while the ocean temperature, averaged globally over all the regions where tropical cyclones form, increased from 28.2 °C (82.8 °F) to 28.5 °C (83.3 °F) during this period.

In addition to tropical cyclones, there are two other classes of cyclones within the spectrum of cyclone types. These kinds of cyclones, known as extratropical cyclones and subtropical cyclones, can be stages a tropical cyclone passes through during its formation or dissipation. An extratropical cyclone is a storm that derives energy from horizontal temperature differences, which are typical in higher latitudes. A tropical cyclone can become extratropical as it moves toward higher latitudes if its energy source changes from heat released by condensation to differences in temperature between air masses; additionally, although not as frequently, an extratropical cyclone can transform into a subtropical storm, and from there into a tropical cyclone. From space, extratropical storms have a characteristic "comma-shaped" cloud pattern. Extratropical cyclones can also be dangerous when their low-pressure centers cause powerful winds and high seas.

A subtropical cyclone is a weather system that has some characteristics of a tropical cyclone and some characteristics of an extratropical cyclone.
They can form in a wide band of latitudes, from the equator to 50°. Although subtropical storms rarely have hurricane-force winds, they may become tropical in nature as their cores warm. From an operational standpoint, a tropical cyclone is usually not considered to become subtropical during its extratropical transition.

In popular culture, tropical cyclones have made appearances in different types of media, including films, books, television, music, and electronic games. These depictions can be entirely fictional or based on real events. For example, George Rippey Stewart's Storm, a best-seller published in 1941, is thought to have influenced meteorologists into giving female names to Pacific tropical cyclones. Another example is the hurricane in The Perfect Storm, which describes the sinking of the Andrea Gail by the 1991 Perfect Storm. Hypothetical hurricanes have also been featured in parts of the plots of series such as The Simpsons, Invasion, Family Guy, Seinfeld, Dawson's Creek, and CSI: Miami. The 2004 film The Day After Tomorrow includes several mentions of actual tropical cyclones as well as featuring fantastical "hurricane-like" non-tropical Arctic storms.
Dinosaurs on the Jurassic Coast

About 225 million years ago, the first dinosaur appeared in the forests of what is now South America. It was a small two-legged animal about the size of a chicken. It was quick and agile on its two strong legs, and probably a very effective hunter. Within about 30 million years its dinosaur relatives had become the dominant creatures on the planet; they were to reign for about 165 million years.

Most people know the dinosaur is a reptile, but many erroneously think the term applies to all giant reptiles. In fact dinosaurs came in all shapes and sizes; but certainly some of the larger ones were the biggest, fastest and most ferocious animals that have ever walked the Earth. The main distinguishing feature of a dinosaur is that it is a reptile that developed an upright gait. This gave it an immediate advantage over other reptiles with their awkward sideways gait.

At the beginning of the Triassic period there was a mass extinction of many species. These events have happened a number of times during geological history and have continued to puzzle geologists. Certainly much competition to the emerging dinosaurs was removed by this global catastrophe. The dinosaurs diversified rapidly during the Triassic and, with little competition, got bigger and bigger. Other reptiles also evolved, some taking to the air.

During the Jurassic dinosaurs diversified further and became truly dominant. Huge plant-eating varieties evolved along with ferocious predators. Other forms of life also flourished and, in the sea, other reptiles grew to immense size. The shallow, tropical seas in which Dorset's rocks were deposited must have been home to a fantastic array of life.

The Cretaceous period saw dinosaurs continue to dominate. Some of the most well known dinosaurs, like Tyrannosaurus, lived in this period, and the shallow lagoons where the Purbeck rock was formed have helped preserve their bones for us to study. At Durlston Bay and Worbarrow Bay it is possible to see the footprints of these massive creatures. The most likely example you will find is the three-pronged footprint of the Iguanodon, a plant eater that was up to 10 metres long and 5 metres high.

At the end of the Cretaceous period (about 65 million years ago) the dinosaurs died out. Whether or not it was the impact of a huge meteorite, as has been suggested, is still open to debate. The evidence is strong, but it should be remembered that mass extinctions have taken place at other times.

Text: Fossils and Rocks of the Jurassic Coast by Robert Westwood
Adequate vision and healthy eyes are crucial for a child's success in school. Just ask your child's teacher. If a child isn't able to see letters and symbols on a classroom whiteboard clearly, or view numbers and shapes in a math book, their ability to learn will be compromised. In the United States, about 58.8 million children have prescription glasses.

There are several reasons a child might have vision problems. He might struggle seeing objects far away or his work close up. Or one set of eye muscles might be weaker than the other, making it difficult for the eyes to work together. Anytime parents have concerns about their child's eyes, they should discuss them with the child's pediatrician or family doctor.

The American Academy of Ophthalmology recommends the following guidelines for childhood vision examinations:

- Newborns — A pediatrician or family doctor will check the eyes of a baby for overall eye health. If problems arise, the doctor may refer you to a pediatric ophthalmologist.
- Preschool-age children — Between the ages of 3 and 4 years, a child's ability to see clearly should be tested along with overall eye health and eye alignment.
- School-age children — As children enter kindergarten, they should have a formal eye exam by an optometrist or ophthalmologist.
- Children with glasses — An annual eye exam should be scheduled for children who have prescription glasses.

In addition, your child's doctor or eye specialist may recommend more frequent eye exams for other eye health issues. Don't forget to talk to your doctor about eye safety if your child plays sports.

School nurses routinely screen students' vision in kindergarten, first, third, fifth, eighth and 10th grades. But a school nurse can offer that service any time a parent or teacher has a concern. If abnormal results are found, the nurse will contact parents and advise them on the next step, usually an exam by an optometrist or ophthalmologist.

If your child has glasses or contacts, teach her to wear them every day as prescribed. Here are some ways to do that:

- Model how to wear glasses or contacts properly. Children learn a lot from watching their parents. Parents who wear glasses or contacts should take time to show a child how they take care of their own glasses or contacts, including cleaning and storing them.
- Encourage your child to personalize his eyeglass case. Help prevent glasses from getting broken or lenses scratched by teaching your child to use an eyeglass case for safe storage. Consider buying a couple of durable but generic eyeglass cases. Have your child choose fun stickers or use markers to personalize the outside of the case.
- Make a reward chart. Look in the education section of stores for chore charts. Have your child choose stickers to add to the chart for each day that she wears her glasses at home and to school. Have a small reward for your child when she has accumulated a certain number of stickers.

People who have difficulty paying for the cost of an eye exam or purchasing eyeglasses can seek financial help through a variety of resources in Wichita. They should not hesitate to get more information from their child's school nurse.
The pharynx is a hollow tube about five inches long that starts behind the nose and leads to the upper food pipe (esophagus) and the upper windpipe (trachea). The pharynx has three parts: - Nasopharynx: The nasopharynx, the upper part of the pharynx, is behind the nose. - Oropharynx: The oropharynx is the middle part of the pharynx. The oropharynx includes the back of the mouth (soft palate), the posterior third of the tongue (base of the tongue), and the tonsils. - Hypopharynx: The hypopharynx is the lower part of the pharynx and is the uppermost part of the food pipe (esophagus). Throat Cancer Risk Factors More than 13,000 throat cancers are diagnosed in the United States every year, affecting about three in every 100,000 Americans. The main risk factors for throat cancers are tobacco use and heavy drinking. Infection with the sexually transmitted Human Papilloma Virus (HPV) is also a significant risk factor, especially for oropharyngeal cancers. Throat Cancer Symptoms - A lump in the neck - Difficulty swallowing, or pain when swallowing - Hoarseness or change in voice - Persistent sore throat - Ear pain - Persistent bad breath - Breathing difficulty - Blood-tinged saliva - Double vision - Hearing loss in one ear The chance of a cure in throat cancer patients is high when the cancer is diagnosed at an early stage. Survival rates also vary by the location of the cancer. Overall, the five-year survival rate is over 65 percent for cancers of the oropharynx, and about 30 percent for cancers of the hypopharynx.
VCOP is a core component of Ros Wilson's 'Big Writing' method for raising standards in writing that has been implemented in thousands of primary schools throughout the United Kingdom and around the world. The method relates to the English language only. VCOP and 'Big Writing' itself are based on the premise that 'if a child can't say it, they can't write it!' 'Big Writing' and its associated strategies stress the importance of talk, and that 'boys love to talk and what is good for the boys is good for the girls!'

Rather than discussing subordinate clauses and the like with children, VCOP simplifies matters to discuss:

- V: The range of ambitious vocabulary a pupil knows; WOW words.
- C: The range of ways pupils have of joining ideas, phrases & sentences.
- O: The strategies pupils have for opening sentences; especially the 3 key openers: connectives, 'ly' words & 'ing' words.
- P: The range of punctuation a pupil can use & the accuracy with which they use it.

Pupils are encouraged to be ambitious, to up-level their work and to pay attention to their use of VCOP, which, with the Punctuation Pyramid, becomes a toolkit for writing at a higher level. Stealing and borrowing are encouraged when pupils see elements of VCOP in peers' work that they like. The concept is simple: children do not need to understand the educationalist terminology to use the skills in their work.
Small-angle neutron scattering

Small-angle neutron scattering (SANS) is an experimental technique that uses elastic neutron scattering at small scattering angles to investigate the structure of various substances at a mesoscopic scale of about 1–100 nm.

Small-angle neutron scattering is in many respects very similar to small-angle X-ray scattering (SAXS); both techniques are jointly referred to as small-angle scattering (SAS). Advantages of SANS over SAXS are its sensitivity to light elements, the possibility of isotope labelling, and the strong scattering by magnetic moments.

During a SANS experiment a beam of neutrons is directed at a sample, which can be an aqueous solution, a solid, a powder, or a crystal. The neutrons are elastically scattered by nuclear interaction with the nuclei or by interaction with the magnetic moment of unpaired electrons. In X-ray scattering, photons interact with the electronic cloud, so the bigger the element, the bigger the effect is. In neutron scattering, neutrons interact with nuclei, and the interaction depends on the isotope; some light elements like deuterium show a scattering cross section similar to that of heavy elements like Pb. In zero-order dynamical theory of diffraction the refractive index is directly related to the scattering length density and is a measure of the strength of the interaction of a neutron wave with a given nucleus. Typical neutron coherent scattering lengths for a few chemical elements (in 10−12 cm) are: H −0.374, D 0.667, C 0.665, N 0.936, O 0.580. Note that the relative scale of the scattering lengths is the same. Another important point is that the scattering from hydrogen is distinct from that of deuterium. Also, hydrogen is one of the few elements that has a negative scattering length, which means that neutrons deflected from hydrogen are 180° out of phase relative to those deflected by the other elements. These features are important for the technique of contrast variation (see below).

SANS usually uses collimation of the neutron beam to determine the scattering angle of a neutron, which results in an ever lower signal-to-noise ratio for data that contains information on the properties of a sample at relatively long length scales, beyond ~1 μm. The traditional solution is to increase the brightness of the source, as in Ultra Small Angle Neutron Scattering (USANS). As an alternative, Spin-echo Small-angle Neutron Scattering (SESANS) was introduced, using neutron spin echo to track the scattering angle and expanding the range of length scales which can be studied by neutron scattering to well beyond 10 μm.

SANS in biology

A crucial feature of SANS that makes it particularly useful for the biological sciences is the special behavior of hydrogen, especially compared to deuterium. In biological systems hydrogen can be exchanged with deuterium, which usually has minimal effect on the sample but has dramatic effects on the scattering. The technique of contrast variation (or contrast matching) relies on the differential scatter of hydrogen vs. deuterium. Figure 1 shows the scattering length density for water and various biological macromolecules as a function of the deuterium concentration (adapted from Jacrot, 1976). Biological samples are usually dissolved in water, so their hydrogens are able to exchange with any deuteriums in the solvent. Since the overall scatter of a molecule depends on the scatter of all its components, this will depend on the ratio of hydrogen to deuterium in the molecule.
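To see how the solvent's scattering can be tuned against a dissolved molecule's, consider this minimal numerical sketch. The scattering-length-density (SLD) values are approximate literature numbers assumed for illustration, and the sketch deliberately ignores the fact that a protein's own SLD drifts upward as its labile hydrogens exchange for deuterium:

```python
# Solvent scattering-length density (SLD) as a function of D2O fraction, and
# the fraction at which it equals a protein's SLD. Values in 1e-6 / A^2 are
# approximate, assumed here for illustration only.
SLD_H2O = -0.56    # pure H2O (note the negative sign, inherited from hydrogen)
SLD_D2O = 6.36     # pure D2O
SLD_PROTEIN = 2.3  # a typical protonated protein, order-of-magnitude value

def solvent_sld(f_d2o: float) -> float:
    """SLD of an H2O/D2O mixture, linear in the D2O volume fraction."""
    return f_d2o * SLD_D2O + (1.0 - f_d2o) * SLD_H2O

# Solve solvent_sld(f) == SLD_PROTEIN for the D2O fraction f.
f_match = (SLD_PROTEIN - SLD_H2O) / (SLD_D2O - SLD_H2O)
print(f"solvent matches the protein at ~{100 * f_match:.0f}% D2O")  # ~41%
```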
At certain ratios of H2O to D2O, called match points, the scatter from the molecule will equal that of the solvent, and thus be eliminated when the scatter from the buffer is subtracted from the data. For instance, the match point for proteins is typically around 40–45% D2O, and at that concentration the scatter from the protein will be indistinguishable from that of the buffer. To use contrast variation, different components of a system must scatter differently. This can be based on inherent scattering differences, e.g. DNA vs. protein, or arise from differentially labeled components, e.g. having one protein in a complex deuterated while the rest are protonated. In terms of modelling, small-angle X-ray and neutron scattering data can be combined with the program MONSA. An example in which SAXS, SANS and EM data have been used to build an atomic model of a large multi-subunit enzyme has recently been published; for examples of this method, see the references below.

- Jacrot, B (1976). "The study of biological structures by neutron scattering from solution". Reports on Progress in Physics. 39 (10): 911–53. Bibcode:1976RPPh...39..911J. doi:10.1088/0034-4885/39/10/001.
- Kennaway, Chris; Taylor, James; et al. (2012). "Structure and operation of the DNA-translocating type I DNA restriction enzymes". Genes & Development. 26 (4): 92–104. doi:10.1101/gad.179085.111. PMID 22215814.
- Perkins, SJ (1988). "Structural studies of proteins by high-flux X-ray and neutron solution scattering". Biochemical Journal. 254 (2): 313–27. PMID 3052433.
- Fejgin, Lev A.: Structure analysis by small-angle X-ray and neutron scattering. New York: Plenum (1987).
- Higgins, Julia S.; Benoît, Henri: Polymers and neutron scattering. Oxford: Clarendon Press (1994).
Recent fossil discoveries in Kenya have sharpened debate on the earliest origins of humans and have thrown open the question of how long hominids (as all human species are known) have walked upright. Leg and arm bones of about 21 individuals and a few jaw bones were found near Lake Turkana last month and have been dated to between 3.9 and 4.4 million years ago. They are said to belong to a new species called Australopithecus anamensis, and show distinct traits indicating that they walked upright - a finding that pushes the earliest known date for hominid bipedalism back about half a million years.

The jaw bones show ape-like characteristics - placement of the teeth, small ear openings in the skull. But the teeth also show characteristics which are unique to humans - namely, the tooth enamel is much thicker than in apes, a feature common to hominids. According to the international team of paleontologists working on the site, the arm and leg bones clearly belong to a species of upright walkers. The upper part of the shin bone is shaped so as to bear much more weight than that of the four-legged apes, thus allowing the hominid to walk erect. This feature was crucial for allowing later increases in brain size. With upright walking, more air and blood could flow to the brain, allowing it to grow. The body was better able to cool itself, and the hands were freed to be used for creating tools and processing food.

Since these finds belong to a species much older than that of the famous Lucy fossils discovered in the 1970s, many paleontologists and anthropologists are faced with a radical rethinking of previous assumptions on human lineage. It now seems, according to Dr. Meave Leakey of the National Museums of Kenya and wife of noted fossil hunter Dr. Richard Leakey, that there were a number of hominid species coexisting during man's formative years. Several of the earliest hominids may simply have died out and may have no relation to Homo sapiens sapiens, the sole surviving hominid species. This has opened a new field of inquiry into how the earliest hominids, if at all, interacted with one another. Were they bitter rivals, with one species killing off another? Did they cooperate with one another? Or did they keep their distance and occupy different and unrelated niches in the food chain of East Africa?

These latest finds may help in the understanding of man's relation to the apes and to other surviving species. If paleontologists and anthropologists can piece together the lifestyles of the earliest hominids, it may aid in the understanding of human nature, instinct, and the climatic and genetic factors which eventually led to the rapid increase in human brain size and to the evolution of human culture.
What is PBL?

Problem Based Learning (PBL) is an educational method that engages students in inquiry-based real world problem-solving. Used extensively in medical education since the 1970s, PBL is an instructional approach that teaches students "how to learn" by collaboratively solving authentic industry problems. While already adopted in fields including business and law, it is only beginning to emerge in science, technology, engineering and mathematics (STEM) education.

PBL is an exciting and challenging alternative to traditional lecture-based instruction that provides students with learning experiences that engage them directly in the types of problems and situations they will encounter in the 21st century workplace. Students of PBL become active participants in their own learning as they encounter new and unfamiliar learning situations where problem parameters are ill-defined and ambiguous — just like in the real world. When utilizing the PBL approach, learning occurs collaboratively in small groups, problems are presented before any formal preparation has occurred — the problem itself drives the learning — and new information is acquired via self-directed learning.

Research shows that compared with traditional lecture-based instruction, PBL improves:

- Student understanding and retention of ideas.
- Critical thinking and problem-solving skills.
- Motivation and learning engagement.
- The ability to work in teams.
- The ability to transfer skills and knowledge to new situations.

The PBL Challenge model is designed to scaffold student learning by acclimating students to PBL through their own learned experience. Instructors have the option to choose an implementation approach to PBL ranging from the structured (entirely instructor-led, least student autonomy), to the guided (instructor-guided, increased student autonomy), to the open-ended (instructor as facilitator, most student autonomy) level, based on students' experience with PBL.
What is absorption in simple words?

Absorption is a condition in which something takes in another substance. It is a physical or chemical phenomenon or process in which atoms, molecules, or ions enter the inner part (called the "bulk") of a gas, liquid, or solid material.

What is the definition for absorption?

1a: the process of absorbing something or of being absorbed absorption of water — compare adsorption. b: interception of radiant energy or sound waves. 2: entire occupation of the mind his absorption in his work.

What is the definition of absorption in the digestive system?

Absorption. The simple molecules that result from chemical digestion pass through cell membranes of the lining in the small intestine into the blood or lymph capillaries. This process is called absorption.

What do you mean by absorption in chemistry?

In chemistry, absorption is a process by which a substance incorporated in one state is transferred into another substance of a different state (e.g., gases being absorbed by a liquid or liquids being absorbed by a solid).

What is a real life example of absorption?

One example of absorption is black pavement, which absorbs energy from light. The black pavement becomes hot from absorbing the light waves, and little of the light is reflected, making the pavement appear black. A white stripe painted on the pavement will reflect more of the light and absorb less.

What is absorption in the human body?

Absorption is the process by which the products of digestion are absorbed by the blood to be supplied to the rest of the body. During absorption, the digested products are transported into the blood or lymph through the mucous membrane.

What is the importance of absorption?

Good digestion is paramount to overall animal health, but, no matter how excellent the digestion process is, proper absorption is required for the animal to utilize the nutrients in the feed. Because of this, absorptive capacity is arguably an important component of overall gut health or functionality.

What is the absorption of energy?

Absorption, in wave motion, is the transfer of the energy of a wave to matter as the wave passes through it. If there is only a small fractional absorption of energy, the medium is said to be transparent to that particular radiation, but, if all the energy is lost, the medium is said to be opaque.

What is an example of transmission?

An example of transmission is when something travels over cable wires to get to its destination. An example of the transmission of a virus is when a person spreads a cold virus by sneezing on someone else. The act of transmitting, e.g. data or electric power. The fact of being transmitted.

What are the 6 main organs of the digestive system?
The main organs that make up the digestive system (in order of their function) are the mouth, esophagus, stomach, small intestine, large intestine, rectum and anus. Helping them along the way are the pancreas, gall bladder and liver.

What are the six processes of digestion?

The six major activities of the digestive system are ingestion, propulsion, mechanical breakdown, chemical digestion, absorption, and elimination. First, food is ingested, chewed, and swallowed.

Where is absorption in the digestive system?

Digestion begins in the mouth and continues as food travels through the small intestine. Most absorption occurs in the small intestine.

What is absorption and its types?

There are 2 types of absorption processes: physical absorption and chemical absorption, depending on whether there is any chemical reaction between the solute and the solvent (absorbent).

Where is adsorption used?

Adsorption is present in many natural, physical, biological and chemical systems and is widely used in industrial applications such as heterogeneous catalysts, activated charcoal, capturing and using waste heat to provide cold water for air conditioning and other process requirements (adsorption chillers), and synthetic …

How does the absorption of food take place in the human body?

The small intestine is the part of the gastrointestinal tract between the stomach and the large intestine where much of the digestion of food takes place. The primary function of the small intestine is the absorption of nutrients and minerals found in food.
How can you tell if two lines are perpendicular?

Explanation: If the slopes of two lines can be calculated, an easy way to determine whether they are perpendicular is to multiply their slopes. If the product of the slopes is −1, then the lines are perpendicular.

With this consideration in mind, how do you know if two lines are parallel or perpendicular? Note that two lines are parallel if their slopes are equal and they have different y-intercepts. In other words, perpendicular slopes are negative reciprocals of each other.

Additionally, what is the rule for perpendicular lines? If two non-vertical lines in the same plane intersect at a right angle then they are said to be perpendicular. Horizontal and vertical lines are perpendicular to each other, i.e. the axes of the coordinate plane. The slopes of two perpendicular lines are negative reciprocals.

Subsequently, the question is, how do you know two lines are parallel? We can determine from their equations whether two lines are parallel by comparing their slopes. If the slopes are the same and the y-intercepts are different, the lines are parallel. If the slopes are different, the lines are not parallel. Unlike parallel lines, perpendicular lines do intersect.

Related questions and answers

Perpendicular lines are lines that intersect at right angles. If you multiply the slopes of two perpendicular lines in the plane, you get −1. That is, the slopes of perpendicular lines are opposite reciprocals.

The alternate angle theorem states that when two parallel lines are cut by a transversal, then the resulting alternate interior angles or alternate exterior angles are congruent. To prove: if two parallel lines are cut by a transversal, then the alternate interior angles are equal.

Ways to Prove Two Lines Parallel

- Show that corresponding angles are equal.
- Show that alternate interior angles are equal.
- Show that consecutive interior angles are supplementary.
- Show that consecutive exterior angles are supplementary.
- In a plane, show that the lines are perpendicular to the same line.

Geometric formulation. In projective geometry, any pair of lines always intersects at some point, but parallel lines do not intersect in the real plane. The line at infinity is added to the real plane. This completes the plane, because now parallel lines intersect at a point which lies on the line at infinity.

Step-by-step explanation: First, if a transversal intersects two lines so that corresponding angles are congruent, then the lines are parallel. Second, if a transversal intersects two lines so that interior angles on the same side of the transversal are supplementary, then the lines are parallel.

When the lines are parallel, there are no solutions, and sometimes the two equations will graph as the same line, in which case we have an infinite number of solutions. Some special terms are sometimes used to describe these kinds of systems.

If two parallel lines are cut by a transversal, then corresponding angles are congruent. If two lines are cut by a transversal and corresponding angles are congruent, then the lines are parallel. If two lines intersect to form a linear pair of congruent angles, then the lines are perpendicular.
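The slope rules above are easy to check computationally. A minimal sketch (vertical lines, which have undefined slope, are left out for simplicity):

```python
# Classify a pair of (non-vertical) lines by their slopes.
def classify(m1: float, m2: float, tol: float = 1e-9) -> str:
    """Return 'parallel', 'perpendicular', or 'neither' for slopes m1 and m2."""
    if abs(m1 - m2) < tol:        # equal slopes -> parallel
        return "parallel"
    if abs(m1 * m2 + 1.0) < tol:  # product of slopes is -1 -> perpendicular
        return "perpendicular"
    return "neither"

print(classify(2.0, 2.0))   # parallel
print(classify(2.0, -0.5))  # perpendicular: 2 * (-1/2) = -1
print(classify(1.0, 3.0))   # neither
```

Note that this checks slopes only; for two lines to be distinct parallel lines they must also have different y-intercepts, as discussed above.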
In a plane, if a transversal is perpendicular to one of two parallel lines, then it is perpendicular to the other line. In a plane, if two lines are perpendicular to the same line, then they are parallel to each other.

If two lines intersect to form a linear pair of "congruent angles", the lines are therefore perpendicular. Congruent angles are just angles that are equal to each other! If two lines are perpendicular, they will intersect to form four right angles.

If two lines are cut by a transversal so that a pair of alternate exterior angles are congruent, then the lines are parallel. If two lines are cut by a transversal so that a pair of vertical angles are congruent, it does not follow that the lines are parallel, since vertical angles don't prove the lines cut by a transversal are parallel.

Perpendicular lines intersect at a 90-degree angle. The two lines can meet at a corner and stop, or continue through each other.

Two distinct lines intersecting each other at 90° or a right angle are called perpendicular lines. Here, AB is perpendicular to XY because AB and XY intersect each other at 90°. By contrast, two parallel lines do not intersect each other, and they can never be perpendicular to each other.

Perpendicular lines are two or more lines that intersect at a 90-degree angle. These 90-degree angles are also known as right angles.

When two parallel lines are crossed by a transversal, the pair of angles formed on the inner side of the parallel lines, but on the opposite sides of the transversal, are called alternate interior angles. These angles are always equal.

Lines that intersect each other forming a right angle are called perpendicular lines. Example: the steps of a straight ladder; the opposite sides of a rectangle.

Since parallel lines never cross, there can be no intersection; that is, for a system of equations that graphs as parallel lines, there can be no solution. This is called an "inconsistent" system of equations, and it has no solution.

In spherical geometry, parallel lines do not exist. In Euclidean geometry a postulate exists stating that through a point, there exists only one parallel to a given line. In spherical geometry, however, any great circle (line) through a point must intersect our original great circle, so parallel lines do not exist.

Parallel lines are lines in a plane that are always the same distance apart. Parallel lines never intersect. Perpendicular lines are lines that intersect at a right (90 degrees) angle.

In real life, while railroad tracks, the edges of sidewalks, and the markings on streets are all parallel, the tracks, sidewalks, and streets go up and down hills and around curves. Those three real-life examples are good, but not perfect, models of parallel lines.

What are the parallel lines? Parallel lines are equidistant lines (lines having equal distance from each other) that will never meet.
In geometry, parallel lines are lines in a plane which do not meet; that is, two straight lines in a plane that do not intersect at any point are said to be parallel.
In this century, extreme heat would wipe away hundreds of thousands of tons of fish available for catching in a country's seas, on top of long-term climate change-related fish population declines, according to new UBC research.

Researchers from the University of British Columbia's Institute for the Oceans and Fisheries (IOF) used a complex model to incorporate extreme annual ocean temperatures in Exclusive Economic Zones, which account for the majority of global fish catches, into climate-related projections for fish, fisheries, and the human communities that rely on them.

Worst-Case Scenario Model

Modeling a worst-case scenario in which no action is taken to reduce greenhouse gas emissions, they predicted a 6% loss in annual prospective catches and a 77% reduction in biomass, or the amount of fish by weight in a particular region, owing to sweltering years. These reductions are in addition to those expected as a result of long-term decadal-scale climate change. Researchers predicted that during severe ocean temperature events, on top of expected temperature changes per decade, fisheries earnings would be cut by 3% globally and employment by 2%, resulting in a possible loss of millions of jobs.

Dr. William Cheung, professor and head of UBC's Institute for the Oceans and Fisheries, remarked, "These high yearly temperatures would be an extra shock to an overburdened system. We show that in nations where long-term trends, such as ocean warming and deoxygenation, have already damaged fisheries, adding the shock of temperature extremes would increase the consequences to a point where these fisheries will likely be unable to adapt. It's similar to how COVID-19 puts a strain on the healthcare system by introducing a new burden."

According to co-author Dr. Thomas Frölicher, a professor at the University of Bern's division of climate and environmental physics, extreme temperature occurrences are expected to become more common in the future: "Today's maritime heatwaves and their severe consequences on fisheries are foreshadowing events, as these events are producing environmental conditions that long-term global warming will not produce for decades."

Worsening Situation Around the Globe

According to the researchers, some regions would be hit worse than others: EEZs in the Indo-Pacific region, notably waters surrounding South and Southeast Asia and the Pacific Islands; the Eastern Tropical Pacific, which runs along the Pacific coast of the Americas; and certain nations in the West African region.

An extreme marine heat event in Bangladesh, where fisheries-related sectors employ one-third of the country's workforce, is expected to eliminate 2% of the country's fisheries jobs, or about one million jobs, in addition to the more than six million jobs that will be lost by 2050 due to long-term climate change. Ecuador's position is similarly bleak, with severe high-temperature occurrences predicted to wreak havoc on an additional 10% of the country's fisheries earnings, or about $100 million, on top of the 25% decrease expected by mid-century.

An Urgent Situation

"This study emphasizes the urgent need to create solutions to maritime temperature extremes," Cheung added. "Temperature extremes are difficult to anticipate in terms of when and where they will occur, especially in hotspots with little capacity to give reliable scientific projections for their fisheries. When planning responses to long-term climate change, we must account for this uncertainty."
Active fisheries management, according to Cheung, is critical. For example, adjusting catch quotas in years when fish populations are suffering from high-temperature events or, in severe situations, shuttering fisheries to allow stocks to recover are two possible responses. "We need systems in place to deal with it," Cheung explained.
Over the past few decades, roboticists and computer scientists have developed a variety of data-based techniques for teaching robots how to complete different tasks. To achieve satisfactory results, however, these techniques should be trained on reliable and large datasets, preferably labeled with information related to the task they are learning to complete. For instance, when trying to teach robots to complete tasks that involve the manipulation of objects, these techniques could be trained on videos of humans manipulating objects, which should ideally include information about the types of grasps they are using. This allows the robots to easily identify the strategies they should employ to grasp or manipulate specific objects.

Researchers at the University of Pisa, Istituto Italiano di Tecnologia, Alpen-Adria-Universität Klagenfurt and TU Delft recently developed a new taxonomy to label videos of humans manipulating objects. This grasp classification method, introduced in a paper published in IEEE Robotics and Automation Letters, accounts for movements prior to the grasping of objects, for bi-manual grasps and for non-prehensile strategies.

"We have been working for some time now (some of us for a long time) on studying human behavior in grasping and manipulation, and on using the great insights that you get from the human example to build more effective robotic hands and algorithms," Cosimo Della Santina, one of the researchers who carried out the study, told TechXplore. "In this process, one exercise that we do a lot is finding ways of representing the wide variety of human capabilities."

In the past, other research teams introduced taxonomies that characterize human grasping strategies and actions. Nonetheless, the taxonomies they proposed so far were not developed with video labeling in mind, thus they have considerable limitations when applied to this task.

"For example, existing taxonomies do not have the right trade-off between granularity and ease of implementation and usually discard important aspects that are present in hand-centered video material, such as bimanual grasps," Matteo Bianchi, another researcher involved in the study, told TechXplore. "For these reasons, we propose a new taxonomy that was specifically developed for enabling the labeling of human grasping videos. This taxonomy can account for pre-grasp phases, bimanual grasps, nonprehensile manipulation, and environmental exploitation events."

In addition to introducing a new taxonomy, Della Santina, Bianchi and their colleagues used this taxonomy to create a labeled dataset containing videos of humans performing daily activities that involve object manipulation. In their paper, they also describe a series of MATLAB tools for labeling new videos of humans completing manipulation tasks.

"We show that there is a lot to gain in looking for the right tradeoff between capability of explaining complex human behaviors and easing the practical endeavor of labeling videos which are not explicitly produced as output of scientific research," Della Santina said. "Our study opens up the possibility of leveraging the large abundance of videos involving human hands that can be easily found on the web (e.g., YouTube) to study human behavior in a precise and scientific way."

When considering, for instance, videos of cooks preparing a meal, the camera filming these videos is generally focused on the hands of the cook featured in the video.
However, so far there was no well-defined language that allowed engineers to use this video footage to train machine learning algorithms. Della Santina, Bianchi and their colleagues introduced such a language and validated it. In the future, the labeled dataset they compiled could be used to train both existing and new algorithms on image recognition, robotic grasping and robotic manipulation tasks. In addition, the taxonomy introduced in their paper could help to compile other datasets and to label other videos of humans manipulating objects.

"Empowered by this new tool we plan to keep doing what we all like the most: be amazed by the capability of humans during even the most mundane activities involving grasping and manipulation and think of ways of transferring these capabilities to robots," Della Santina said. "We believe that the new language we developed will multiply our capability of doing these things, by incorporating non-scientific material in our scientific investigations."

Understanding human manipulation with the environment: a novel taxonomy for video labelling. IEEE Robotics and Automation Letters (2021). DOI: 10.1109/LRA.2021.3094246.
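To give a feel for what such video labels might look like in practice, here is a hypothetical sketch of a label record. The field names and grasp-class names are invented for illustration; they are not the schema or taxonomy published by the authors:

```python
# Hypothetical label record for one annotated segment of a manipulation video.
from dataclasses import dataclass

@dataclass
class GraspLabel:
    start_frame: int        # first frame of the labeled event
    end_frame: int          # last frame of the labeled event
    hand: str               # "left", "right", or "both" (bimanual grasps)
    phase: str              # e.g. "pre-grasp", "grasp", "manipulation"
    grasp_class: str        # a class from the taxonomy, e.g. "power-sphere"
    uses_environment: bool  # environmental exploitation, e.g. sliding on a table

label = GraspLabel(120, 310, "both", "grasp", "power-sphere", False)
print(label)
```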
Nitrogen dioxide (NO2) is a key component of urban air pollution. The nitrogen oxides ("NOx", of which NO2 is one component) are emitted from any combustion process. Coal- and gas-fired power plants and vehicles constitute the major anthropogenic (human-produced) sources. Forest fires and lightning are natural sources of NO2, but globally it is clear that anthropogenic sources dominate. High levels of NO2 are significant as they are associated with: 1) haze that reduces visibility; 2) irritation of the eyes, nose, throat, and lungs; 3) acid rain; 4) reduced terrestrial plant growth; 5) oxygen-depleting algal blooms; and 6) corrosion of building materials.

Because of the importance of trace pollutants such as NO2 for air quality and human health around the world, instruments have been developed and installed on research satellites to measure NO2 on Earth. Collaborative teams from space agencies around the world build the sensors and retrieval algorithms to make these space-based images available to the world scientific and policy-making communities. This animation shows a time series of NO2 measured from such satellites from October 2004 to December 2009. These particular images were captured by the Dutch-Finnish Ozone Monitoring Instrument (OMI) on NASA's Earth Observing System AURA satellite. One of the objectives of the AURA mission is to measure trace pollutants such as NO2, ozone, carbon monoxide, and aerosols by their interaction with light of visible and non-visible wavelengths. These data have been converted to false-color images so the viewer can "see" hot spots in NO2 concentration around the globe.

NO2 is a pollutant with a relatively short atmospheric lifetime, so it does not get transported far from its source. Thus, these satellite images provide a direct indication of where NO2 sources are located. In remote, unpolluted regions of the globe, NO2 will be uniformly low. Several striking observations can be made from this global view of the NO2 air pollutant.

First, the dominant NO2 source regions correspond primarily to areas with high population and large industry, for example, eastern China, northern Europe, and the eastern United States. Because most electricity is produced by burning fossil fuels in power plants that emit NO2, these hot spots are strongly correlated with electricity usage and fuel sources.

Second, urban centers show up as smaller hot spots. High vehicle traffic leads to elevated NO2. For example, Mexico City, Tokyo, and Los Angeles are all clearly marked by their NO2 plume. If one zooms in to California, the north-south transit corridor of the I-5 freeway is actually visible from space via its NO2 signature.

Third, a clear seasonal pattern is apparent. In the northern hemisphere winter, peak NO2 is much higher in all the northern hemisphere hot spots. This is attributed both to heavier use of combustion power plants for wintertime home heating, as well as the fact that NO2 stays in the air longer in the winter. The atmospheric lifetime of NO2 is driven primarily by reactions initiated by sunlight. With less sunlight in the wintertime, reactions that break down NO2 are not easily initiated, and the NO2 is removed more slowly from the atmosphere. Urban areas report NO2 air quality standard "exceedance" events, when the level of this pollutant is higher than deemed safe by environmental agencies. This happens most frequently during the cold winter months.
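The winter buildup described above can be illustrated with a one-line box model: if emissions are roughly steady, the steady-state concentration is the emission rate times the atmospheric lifetime (C = E × τ), so a longer wintertime lifetime alone raises NO2. The lifetimes below are made-up illustrative numbers, not measurements:

```python
# Steady-state box model: concentration = emission rate x lifetime.
E = 1.0            # arbitrary steady emission rate (concentration units / hour)
tau_summer = 4.0   # assumed summertime NO2 lifetime, hours (strong photolysis)
tau_winter = 12.0  # assumed wintertime NO2 lifetime, hours (weak photolysis)

c_summer = E * tau_summer
c_winter = E * tau_winter
print(f"winter/summer NO2 ratio at equal emissions: {c_winter / c_summer:.1f}x")
```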
Similarly, one sees hot spots over urban centers in the southern hemisphere during their local winter (June–September). In addition, some years see spread-out elevated regional NO2 over southern Brazil and sub-Saharan Africa, typically peaking in September. This is the signature of "biomass burning," when large swathes of forest burn in those regions. Some seasonal burning is apparent in northern Australia as well. These are significant contributions to southern hemisphere NO2 but are dwarfed in comparison to northern hemisphere industry.

Finally, we can look for evidence of a long-term trend in regional NO2 emissions. Comparing December 2009 to December 2004, there is higher regional NO2 in China compared to North America or Northern Europe in 2009. This correlates with China's booming economy in recent years. In December of 2004 the relative amounts of NO2 over these regions were more consistent. This can be attributed both to economic activity as well as to effective policy efforts to reduce NOx emissions in some countries. These satellite datasets provide a rich basis for interpretation of urban/rural, seasonal, and decadal (long-term) trends in air quality.

C1 Patterns. Students identify similarities and differences in order to sort and classify natural objects and designed products. They identify patterns related to time, including simple rates of change and cycles, and use these patterns to make predictions.

C3 Scale, Proportion, and Quantity. Students recognize natural objects and observable phenomena exist from the very small to the immensely large. They use standard units to measure and describe physical quantities such as weight, time, temperature, and volume.

C4 Systems and System Models. Students understand that a system is a group of related parts that make up a whole and can carry out functions its individual parts cannot. They can also describe a system in terms of its components and their interactions.

C1 Patterns. Students recognize that macroscopic patterns are related to the nature of microscopic and atomic-level structure. They identify patterns in rates of change and other numerical relationships that provide information about natural and human-designed systems. They use patterns to identify cause and effect relationships, and use graphs and charts to identify patterns in data.

C3 Scale, Proportion, and Quantity. Students observe time, space, and energy phenomena at various scales using models to study systems that are too large or too small. They understand phenomena observed at one scale may not be observable at another scale, and the function of natural and designed systems may change with scale. They use proportional relationships (e.g., speed as the ratio of distance traveled to time taken) to gather information about the magnitude of properties and processes. They represent scientific relationships through the use of algebraic expressions and equations.

C6 Structures and Functions. Students model complex and microscopic structures and systems and visualize how their function depends on the shapes, composition, and relationships among its parts. They analyze many complex natural and designed structures and systems to determine how they function. They design structures to serve particular functions by taking into account properties of different materials, and how materials can be shaped and used.

C1 Patterns. Students observe patterns in systems at different scales and cite patterns as empirical evidence for causality in supporting their explanations of phenomena.
They recognize classifications or explanations used at one scale may not be useful or may need revision at a different scale, thus requiring improved investigations and experiments. They use mathematical representations to identify certain patterns and analyze patterns of performance in order to re-engineer and improve a designed system.

C3 Scale, Proportion, and Quantity. Students understand the significance of a phenomenon is dependent on the scale, proportion, and quantity at which it occurs. They recognize patterns observable at one scale may not be observable or exist at other scales, and some systems can only be studied indirectly as they are too small, too large, too fast, or too slow to observe directly. Students use orders of magnitude to understand how a model at one scale relates to a model at another scale. They use algebraic thinking to examine scientific data and predict the effect of a change in one variable on another (e.g., linear growth vs. exponential growth).

C4 Systems and System Models. Students can investigate or analyze a system by defining its boundaries and initial conditions, as well as its inputs and outputs. They can use models (e.g., physical, mathematical, computer models) to simulate the flow of energy, matter, and interactions within and between systems at different scales. They can also use models and simulations to predict the behavior of a system, and recognize that these predictions have limited precision and reliability due to the assumptions and approximations inherent in the models. They can also design systems to do specific tasks.

ESS2.D Weather & Climate. Weather is the combination of sunlight, wind, snow or rain, and temperature in a particular region and time. People record weather patterns over time.

ESS3.C Human Impact on Earth Systems. Societal activities have had major effects on the land, ocean, atmosphere, and even outer space. Societal activities can also help protect Earth's resources and environments.

PS1.A Structure of Matter. Because matter exists as particles that are too small to see, matter is always conserved even if it seems to disappear. Measurements of a variety of observable properties can be used to identify particular materials.

PS1.B Chemical Reactions. Chemical reactions that occur when substances are mixed can be identified by the emergence of substances with different properties; the total mass remains the same.

ESS2.D Weather & Climate. Complex interactions determine local weather patterns and influence climate, including the role of the ocean.

ESS3.C Human Impact on Earth Systems. Human activities have altered the biosphere, sometimes damaging it, although changes to environments can have different impacts for different living things. Activities and technologies can be engineered to reduce people's impacts on Earth.

LS2.A Interdependent Relationships in Ecosystems. Organisms and populations are dependent on their environmental interactions both with other living things and with nonliving factors, any of which can limit their growth. Competitive, predatory, and mutually beneficial interactions vary across ecosystems, but the patterns are shared.

PS1.A Structure of Matter. The fact that matter is composed of atoms and molecules can be used to explain the properties of substances, diversity of materials, states of matter, phase changes, and conservation of matter.

PS1.B Chemical Reactions. Reacting substances rearrange to form different molecules, but the number of atoms is conserved.
Some reactions release energy and others absorb energy.

ESS2.D Weather & Climate. The role of radiation from the sun and its interactions with the atmosphere, ocean, and land are the foundation for the global climate system. Global climate models are used to predict future changes, including changes influenced by human behavior and natural factors.

ESS3.C Human Impact on Earth Systems. Sustainability of human societies and the biodiversity that supports them requires responsible management of natural resources, including the development of technologies that produce less pollution and waste and that preclude ecosystem degradation.

LS2.A Interdependent Relationships in Ecosystems. Ecosystems have carrying capacities resulting from biotic and abiotic factors. The fundamental tension between resource availability and organism populations affects the abundance of species in any given ecosystem.

PS1.A Structure of Matter. The sub-atomic structural model and interactions between electric charges at the atomic scale can be used to explain the structure and interactions of matter, including chemical reactions and nuclear processes. Repeating patterns of the periodic table reflect patterns of outer electrons. A stable molecule has less energy than the same set of atoms separated; one must provide at least this energy to take the molecule apart.

PS1.B Chemical Reactions. Chemical processes are understood in terms of collisions of molecules, rearrangement of atoms, and changes in energy as determined by properties of the elements involved.

PS2.C Stability & Instability in Physical Systems. Systems often change in predictable ways; understanding the forces that drive the transformations and cycles within a system, as well as the forces imposed on the system from the outside, helps predict its behavior under a variety of conditions. When a system has a great number of component pieces, one may not be able to predict much about its precise future. For such systems (e.g., with very many colliding molecules), one can often predict average but not detailed properties and behaviors (e.g., average temperature, motion, and rates of chemical change, but not the trajectories or other changes of particular molecules). Systems may evolve in unpredictable ways when the outcome depends sensitively on the starting condition and the starting condition cannot be specified precisely enough to distinguish between different possible outcomes.

Reference: Boersma, K.F., H.J. Eskes, J.P. Veefkind, E.J. Brinksma, R.J. van der A, M. Sneep, G.H.J. van den Oord, P.F. Levelt, P. Stammes, J.F. Gleason, and E.J. Bucsela, Near-real time retrieval of tropospheric NO2 from OMI, Atmos. Chem. Phys., 7, 2103-2118, 2007.
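For readers who want to try this kind of analysis themselves, the regional comparison described above reduces to averaging a gridded NO2 field over bounding boxes and comparing months. The sketch below is illustrative only: it uses random placeholder arrays in place of real satellite retrievals, and the region boundaries are rough assumptions, not the boundaries used in any particular study.

```python
import numpy as np

# Hypothetical monthly-mean tropospheric NO2 columns on a 1-degree grid:
# shape (lat, lon) = (180, 360), cell-center latitudes -90..89, longitudes -180..179.
# In practice these arrays would be read from a satellite data file.
rng = np.random.default_rng(0)
no2_dec_2004 = rng.random((180, 360))
no2_dec_2009 = rng.random((180, 360))

def regional_mean(grid, lat_min, lat_max, lon_min, lon_max):
    """Average a lat/lon grid over a rectangular region (degrees)."""
    lats = np.arange(-90, 90)
    lons = np.arange(-180, 180)
    lat_idx = (lats >= lat_min) & (lats <= lat_max)
    lon_idx = (lons >= lon_min) & (lons <= lon_max)
    return grid[np.ix_(lat_idx, lon_idx)].mean()

# Rough bounding boxes, assumed for illustration.
regions = {
    "Eastern China":   (20, 45, 100, 125),
    "Eastern US":      (30, 48, -95, -70),
    "Northern Europe": (45, 60, -5, 30),
}

for name, box in regions.items():
    m2004 = regional_mean(no2_dec_2004, *box)
    m2009 = regional_mean(no2_dec_2009, *box)
    print(f"{name}: Dec 2004 = {m2004:.3f}, Dec 2009 = {m2009:.3f}")
```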
During the Nara period, the Japanese court actively sought to absorb the body of advanced ideas and knowledge of astronomy, engineering and city-building, medicine, technology, arts and music and governmental organization that was coming out of the Chinese Tang Dynasty's capital city Changan (Cho-an to the Japanese). These ideas were transmitted via two-way diplomatic missions between Nara and the Changan capital, and for the most part the incoming corpus of knowledge and ideas was studied, embraced and adopted enthusiastically, much of it almost wholesale, including its fashions. The city of Changan, … "continued to be the principal capital of the empire and entered the greatest period of its development under the Tang Dynasty (618-907). "At the height of its glory in the mid-eighth century, Chang'an was the most populous, cosmopolitan, and civilized city in the world" (Richard B. Mather, foreword to Xiong, p. ix), occupying some 84 sq. km. with around one million inhabitants. It suffered major damage during the An Lushan rebellion in the mid-8th century, but even toward the end of the Tang period, when the empire was in disarray, the "enormous size" of the city impressed an Arab visitor. Under the Tang, the city was a major religious center, not only for Buddhism and Taoism but also for several religions which were relatively recent arrivals in China: Zoroastrianism, Nestorianism and Manichaeism… a Japanese pilgrim noted in 844 that there were over 300 Buddhist temples in Chang'an." — "Chang'an (Xian)" (a University of Washington resource) The writer Anthony Aveni, in "Bringing the Sky Down to Earth," gives us a good account and concise summary of the Tang Empire-Son of Heaven's worldview that would have been transmitted to the Japanese during the Nara period. "…cities with long written histories – like Beijing – provide us with some unanticipated connections. The written legacy helps us understand the reasons behind the desire to orientate one's capital to the stars. A strong bond existed between astrology and good government: a mandate from heaven underlay all Chinese dynastic ideology. Chinese society has always been bureaucratically organized. Family histories contain lengthy chapters on astronomy, with data such as where and when celestial objects appeared or disappeared, their colour, brightness, direction of motion and their gathering together in one place. These histories also suggest implications that such data might have on family affairs: thus one Chinese historian and court astrologer explains that when planets gather, either there is great fortune or there is great calamity. He knows this because when they gathered in Room (Scorpio), the Zhou dynasty flourished, but when they gathered in Winnowing Basket (Sagittarius), Qi became the emperor. The Chinese called their constellations the 'heavenly minions'. But when they looked among them in the north they saw not a pair of wheeling bears flanked by a dragon as we do, but rather a celestial empire. Which constellations did they recognize and what do the Chinese stars tell us about their ideas concerning rulership and the orientation of the city? Confucius compared the emperor's rule with Polaris, the north star: just as the emperor was the axis of the earthly state, so his celestial pivot was the polar constellation. The economy revolved around the fixed emperor the way the stars turn about the immoveable pole.
According to one legend, the Divine King was born out of the light radiated upon his mother by the Pole Star. Four of the seven stars in what we know as the Little Dipper, plus two others, constituted the Kou Chen or 'Angular Arranger' described in the Chin Shu, the official history of the Jin dynasty. These stars made up the great 'Purple Palace' and each of their celestial functionaries had its terrestrial social counterpart. One member of the group was the crown prince who governed the moon while another, the great emperor, ruled the sun. A third, son of the imperial concubine, governed the five planets, while a fourth was the empress, and a fifth the heavenly palace itself. When the emperor's star lost its brightness, his earthly counterpart would sacrifice his authority, while the crown prince would become anxious when his star appeared dim, especially when it lay to the right of the emperor. The four surrounding stars of the palace proper are Pei Chi, the 'Four Supporters'. On Chinese star maps they appear well situated to perform their task, which is to issue orders to the rest of the state. The 'Golden Canopy' is made up of seven stars, most of them corresponding to the pole-centred stars of our constellation Draco. It covered the palatial inhabitants and emissaries. Beyond them lay the stars of the Northern Dipper. More concerned with realizing celestial principles in the earthly realm, these 'Seven Regulators' are aptly situated to possess the manoeuvrability to come down close to Earth so that they can inspect the four quarters of the empire. According to one version, the Big Dipper is the carriage of the great theocrat who periodically wheels around the central palace to review conditions. Its stars are the source of Yin and Yang, the two-fold way of knowing what resolves the tension between opposing polarities: male and female, light and dark, active and passive. Yin and Yang wax and wane with cosmic time and make up the potentiality of the human condition. For every affair of state the starry winds of good and bad fortune blow across the sky. Why this royal fixation with the stars of the north? Like the power invested in royalty, they were eternally visible, never obscured by the horizon. Indeed in temperate latitudes the stars that turn about the pole are raised quite high in the sky. The fixity of the polar axis is a cosmic metaphor for the constant power of the state. Given the close parallel between the events surrounding the palace economy and the celestial arrangement, it seems logical to enquire whether Chinese royal architecture, like that of Stonehenge and Teotihuacan, is also situated in perfect harmony with the land- and skyscape. To harmonize the arrangement of the royal capital with the local contours of cosmic energy, the king would call in a geomancer to perform the art of feng shui. This expert would decide where to select and how to arrange a site. His sources of cosmic knowledge were the local magnetic field, the paths of streams and the land forms; he might also consult oracle bones, engraved pieces of bone and shell used in divination. Sometimes workers would need to remove vast quantities of boulders or plant forests of trees to regulate the disposition of Yin and Yang energies passing in and out of the site. There is an account of the foundation ritual associated with the city of Lo-yang of the Zhou dynasty at the close of the second millennium BC.
On the second day of the third month: Diog-Kung, Duke of Zhou, began to lay the foundations and establish a new and important city at Glak (Lo) in the eastern state. The people of the four quarters concurred strongly and assembled for the corvée … In the second month, the third quarter, on the sixth day in the morning the King walked from the capital of Diog (Chou) and reached P'iong (Feng). The Great Protector preceded Diog-Kung to inspect the site. When it came to the third month … on the third day the Great Protector arrived at Glak in the morning and took the tortoise oracle as bearing on the site. When he had obtained the oracle, he planned and laid out the city. On the third day the Great Protector and all the people of Yin began work on the public emplacements in the loop of the Glak river. The attention to detail regarding place and time suggests that acquiring proper urban form depended on getting things right with nature – especially the cardinal axes. If it were to function properly, the city needed to be accurately partitioned into its quarters. Beijing still preserves its ancient cosmic plan. If you stand in Tiananmen Square you can line up the Bell and Drum Towers, the Monument to the People's Heroes, and the Mausoleum of Mao Zedong on a perfect north-south axis. Continue that line and you'll discover that it runs through the gates of the old city. Today the cosmic axis is defined by a marble pavement that marks the imperial meridian. The Hall of Supreme Harmony, which houses the emperor's throne, lies at its northern terminus; this symbolizes the circumpolar region where the earth meets the sky. Beijing offers a lasting reminder of the cosmically ordained duties of the emperor. He had to perform a specific task at the beginning of the first month of each season, these being determined by the court astronomers who followed the course of the moon and sun and the five planets across the lunar mansions of the Chinese zodiac. The emperor would go to the eastern quarter of his domain to start the new year every spring equinox to pray for a sound harvest; then, followed by his ministers, he would plough a ceremonial furrow in a field. At the other seasonal pivots he would visit the other quarters of his city. This calendar would have been familiar to any farmer, for it was based on what he could see in the sky. At the beginning of summer Antares lay due south at sunset, while on the first of winter the Tristar of Orion's Belt took its place. Of course, farmers knew well when they could plant, but they needed to be aware that the official time to do so occurred when the handle of the Dipper pointed straight down, for then it was the first day of spring – the time for the king to come forth and speak to the people about the new year's harvest. The keeping of the observations and the preparation of the calendar resided in the state observatory. This institution lay hidden within the bowels of the Purple Palace. The importance of astronomical observing in the world of politics made secrecy a necessity. One directive issued by a ninth-century Tang Dynasty king reads: If we hear of any intercourse between the astronomical officials or of their subordinates, and officials of any other government departments, or miscellaneous common people, it will be regarded as a violation of security relations which should be strictly adhered to. From now on, therefore, the astronomical officials are on no account to mix with civil servants and common people in general. Let the Censorate see to it.
And so the astronomers, spurred on by their government, performed their appointed task: to give the correct time so that the affairs of state might be properly conducted."

Source of article excerpt: "Bringing the Sky Down to Earth" by Anthony Aveni, published in History Today, Volume 58, Issue 6, 2008 (retrieved online Jan 25, 2014: http://www.historytoday.com/anthony-aveni/bringing-sky-down-earth)

Anthony F. Aveni is the Russell Colgate Distinguished Professor of Astronomy and Anthropology and Native American Studies at Colgate University. He is the author of People and the Sky: Our Ancestors and the Cosmos (Thames & Hudson).

Further recommended sources and readings:

Nara capital built in the shadow of the Chinese empire & under the influences of the Silk Road (Heritage of Japan wordpress blog)

C. Cullen, Astronomy and Mathematics in Ancient China (Needham Research Institute, 2007)

Victor Cunrui Xiong, Sui-Tang Chang'an: A Study in the Urban History of Medieval China (Ann Arbor: Center for Chinese Studies, The University of Michigan, 2000).
December 12, 2019

How Risso's dolphins strike a balance between holding their breath and finding food

What do marine mammals eat? It's a simple question with profound implications for marine-mammal conservation and fisheries research. But it can be a tough question for scientists to answer because they can't see what these animals are doing underwater. MBARI researcher Kelly Benoit-Bird is finding new ways to answer this question using specialized echosounders mounted on ships and undersea robots. In a recent paper, Benoit-Bird demonstrated for the first time how researchers can simultaneously measure the distribution, abundance, type, size, and movement of both predators and their prey in the deep sea.

For this study, Benoit-Bird and her collaborators focused on Risso's dolphins, a common marine predator. Risso's dolphins have historically been considered "specialist predators," eating almost nothing but relatively large deep-sea squid. But the new paper shows that the dolphins can modify their behavior to feed on less nutritious prey such as small fish and small, shallow-dwelling squid. Such "prey switching" has previously been considered uncommon and inefficient. But Benoit-Bird's study shows that some Risso's dolphins do it all the time. The study also showed that Risso's dolphins regularly forage during the daytime as well as at night—something that biologists had not even considered before.

Risso's dolphins are found in coastal waters around the world. Since they are so common, you would think that marine biologists would already know what, where, and when they eat. But there's a lot that scientists don't know because historically they haven't been able to observe the dolphins hunting deep below the sea surface. In the last decade, however, scientists have taken a cue from the dolphins themselves, finding new ways to see underwater using bursts of high-frequency sound. Benoit-Bird, a marine biologist and acoustics researcher, has developed echosounder techniques that allow her to see, identify, and measure individual animals underwater—not just large animals like Risso's dolphins, but also their smaller prey, whether it be deep-sea squid or small fishes and krill that live closer to the surface.

As described in two recent research papers, Benoit-Bird and her coauthors used two echosounders—one mounted on a ship and the other on an underwater robot—to observe Risso's dolphins hunting near Catalina Island, off Southern California. They combined the echosounder data with data from tags attached to a few dolphins, which recorded the animals' movement and vocalizations while they were underwater.

The ship-mounted echosounder revealed at least three layers of animals (potential prey) lurking at different depths: 1) a shallow layer of animals that stayed within 50 meters (160 feet) of the surface; 2) a layer of animals that lurked about 300 meters (1,000 feet) below the surface in the daytime, but moved up toward the surface at night; and 3) a deep layer of animals (where most of the squid were found) that stayed about 425 meters (1,400 feet) below the surface both day and night. The echosounder on the underwater robot (an autonomous underwater vehicle, or AUV) allowed researchers to identify the types and sizes of individual prey animals in each of these layers. This allowed them to determine which layers were inhabited by fishes, squids, and/or crustaceans (such as krill). The researchers confirmed the AUV echosounder observations by dragging nets through the different layers.
“By combining all these different research approaches we were able to see in detail the behavior of both predators and prey,” Benoit-Bird said. “One thing we found out was that Risso’s dolphins don’t always forage the way people thought.”

Risso’s dolphins have historically been considered “specialist predators,” eating almost nothing but deep-sea squid. However, Benoit-Bird’s research shows that they also eat small fish and (much less often) crustaceans, especially when they are heading back toward the surface after a feeding dive. This means the dolphins, far from being locked into a single type of prey, are able to switch from one type of prey to another, even in a single dive.

“Prey switching has generally been thought of as something that happens over months or years,” said Benoit-Bird. “The idea that dolphins could do this during a single dive is pretty remarkable. Our research shows that the dolphins plan their dives in advance, but also make decisions quickly during each dive. They’re continually balancing their need for food in the depths with their need for oxygen at the surface.”

Historically, Risso’s dolphins have been thought to hunt at night, when many deep-sea animals swim up toward the surface. In theory, this would allow the dolphins to spend less time, energy, and oxygen finding and catching prey. However, the new data clearly show the dolphins hunting both day and night. Even at night the dolphins made lots of deep dives because their preferred prey (deep-sea squid) don’t migrate toward the surface, but stay in the depths all night long.

The researchers estimated that squid in the deepest layer accounted for over 60 percent of the food energy gained by the dolphins. But smaller, vertically migrating squid and fish accounted for about one third of their food, especially at night. As Benoit-Bird explained, “It might be easier to feed at night, but you might not get enough food during that time.”

During their five- to ten-minute dives, the dolphins sought out layers with the densest patches of prey, particularly squid. But they also used less dense or less nutritious prey to “fill in the corners.” The authors suggest that Risso’s dolphins in this study area may be “working near the edge of their energy needs, where small gains may be important to the individual’s overall success.”

The authors note that although “prey switching” has been considered a relatively inefficient method of foraging, it is apparently a key strategy in the survival of Risso’s dolphins in this area. The researchers suggest that simple models of prey availability and nutritive value cannot explain the dolphins’ behavior because the dolphins are constantly balancing prey availability with their need to breathe.

In conclusion, Benoit-Bird noted, “Because prey in the ocean are patchy and always shifting, I suspect prey switching may be relatively common among marine animals. In most cases we just don’t have data from all these different sources to document this.”

Article by Kim Fulton-Bennett

Original journal articles:

Benoit-Bird, K.J., Southall, B.L., Moline, M.A. (2019). Dynamic foraging in Risso’s dolphins revealed in 4 dimensions. Marine Ecology Progress Series, 632, 221–234. https://doi.org/10.3354/meps13157

Arranz, P., Benoit-Bird, K.J., Southall, B.L., Calambokidis, J., Friedlaender, A.S., Tyack, P.L. (2018). Risso’s dolphins plan foraging dives.
Journal of Experimental Biology, 221. https://doi.org/10.1242/jeb.165209

For additional information or images relating to this article, please contact Kim Fulton-Bennett.
Knights were required to provide, of course, military service. That was, after all, their primary vocation. Their oath of fealty to a lord would include a promise to protect their lord's life, honor, and property. Besides fighting on horseback during battle, a knight might also do duty at the lord's garrison, act as an arbiter or even a judge in minor grievances, and proffer advice and counsel to his master on a variety of issues. It was common for knights to be assigned administrative tasks such as garnering supplies. In return, the lord would defend the knight and his family from their enemies, and avenge wrongs. The lord would grant fiefs of land, access to forests, rents from houses, and income from a plethora of sources such as mills and mines. The fiefs were given to support the knight's family and military needs such as horses, squires, training, and additional stores. Usually the fiefs were hereditary, and passed to the knight's children, but not always. For example, if a knight was assigned as castellan (governor of a castle), the lord could take that fief away, particularly if there was rebellion in the knight's family. It was not uncommon for knights to swear fealty to more than one lord, acquiring more fiefs in the process. Many ministeriales became wealthy by such devices, and tiptoed around conflicting oaths with difficulty. Lords were not always averse to such practices, as they could turn to their wealthy knights for financial support in waging war or providing ransom for kidnapped familiae. The penalties for breaking the knightly code were severe. Confiscation of fiefs was the customary punishment. Depending on the gravity of the offense, the knight could also lose his wife and his children, and be declared an outlaw, whom no one was allowed to assist in any way. On the other hand, lords who did not fulfill their duty suffered little consequence. If neglect was an ongoing problem, or widespread across all of the lord's retinue, rebellions did occur. Royal courts might have to be involved, and that outcome usually was to neither party's benefit, but rather to the royal household's. Thus, the incentive was to settle matters within the familia and maintain the peace.
Translations of Linear Functions

The parent function, or most basic function in the linear family, is the linear function f(x) = x. Its graph is a line that passes through the origin and has a slope of 1. Other linear functions can be graphed as transformations of the parent function f(x) = x.

A translation, or shift, is a transformation in which a graph is moved vertically or horizontally. For any function f(x), the function g(x) can be translated vertically k units or horizontally h units. For k > 0 and h > 0:

- The graph of g(x) = f(x) + k is the graph of f(x) translated up k units.
- The graph of g(x) = f(x) - k is the graph of f(x) translated down k units.
- The graph of g(x) = f(x - h) is the graph of f(x) translated right h units.
- The graph of g(x) = f(x + h) is the graph of f(x) translated left h units.

A translation of the parent linear function can be viewed as either a horizontal or vertical translation. For example, with the parent function f(x) = x, the translation g(x) = x + 3 can be read either as a shift up 3 units or as a shift left 3 units.

Stretches, Compressions, and Reflections of Linear Functions

For g(x) = a·f(x), where a is a scaling factor:

- The graph of g(x) = a·f(x) is a vertical stretch or compression of the graph of f(x) by a factor of |a|. For |a| > 1, the graph of f(x) is stretched. For 0 < |a| < 1, the graph of f(x) is compressed.
- The graph of g(x) = -f(x) is the reflection of the graph of f(x) across the x-axis.

The function that shows a reflection across the x-axis is g(x) = -f(x).

Combined Transformations of Linear Functions

Consider the linear function g(x) = 2x + 3 built from the parent function f(x) = x. The coefficient 2 stretches the graph vertically by a factor of 2, giving f(x) = 2x. The next term in the given linear function is 3. It shows that 3 is added to the vertically stretched function f(x) = 2x, so the vertically stretched graph is then translated 3 units up.
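As a quick numeric check of the combined transformation just described, the short sketch below (in Python, purely as an illustration) evaluates the parent function and its stretched-and-shifted version at a few points.

```python
# Parent linear function and its combined transformation g(x) = 2*f(x) + 3:
# a vertical stretch by a factor of 2 followed by a translation 3 units up.
def f(x):
    return x  # parent function f(x) = x

def g(x):
    return 2 * f(x) + 3

for x in (-1, 0, 2):
    # Each g-value is twice the parent value, then shifted up by 3.
    print(f"x = {x}: f(x) = {f(x)}, g(x) = {g(x)}")
# x = -1: f(x) = -1, g(x) = 1
# x = 0:  f(x) = 0,  g(x) = 3
# x = 2:  f(x) = 2,  g(x) = 7
```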
“Auto” has several meanings. In our present society the word auto is commonly used to describe any kind of motorized vehicle. A car is generally defined as a wheeled vehicle used for travel. Most definitions of automobiles say that they run on roads, seat up to eight people, have a chassis, and generally transport individuals rather than goods. The broader term “motor vehicle,” on the other hand, can also cover machines that do not run on roads, such as boats. The term “automobile” means any type of motor vehicle powered by an internal combustion engine.

One of the main components in an automobile is the engine. The engine is typically a gasoline- or diesel-fueled engine paired with either an automatic or a manual transmission. An automatic transmission differs from a manual transmission in that it does not require driver input for the gears to change: the gearbox shifts on its own as driving conditions demand. This is different from a manual transmission, in which the driver must operate the clutch and move the shift lever in order to initiate a gear change. The stick shift is the gear-selection mechanism of a manual transmission. On a manual, the driver presses the clutch pedal while moving the gear lever; on an automatic, the driver simply moves the selector, usually while pressing the brake pedal.

Another term worth understanding is “CVT.” CVT is an abbreviation for “continuously variable transmission,” a transmission that can move smoothly through a continuous range of effective gear ratios rather than stepping between a fixed set of gears. If you study automotive engineering you will also learn that many factors reduce the power actually delivered to the wheels, including air resistance, cooling-fan load, air-intake restriction and more.

The next main component we are going to look at is the torque converter, and why a manual transmission doesn’t need one. The torque converter on an automatic transmission takes the place of the clutch: it is a fluid coupling that transfers power from the engine to the main shaft. In a manual transmission, the driver-operated clutch performs this role instead. When you go through a gear change, your transmission is working under torque. To sum it up, a clutch-based manual transmission does not need a torque converter to transfer power.

The best way to characterize any transmission is by the gear ratio between the engine and the output shaft. A gear ratio is a mathematical ratio set by the relative number of teeth on the meshing gears; it determines how many times the engine turns for each turn of the transmission’s output. In other words, for a car to drive at its maximum efficiency, the transmission must keep the engine near its most efficient speed, which is why it offers a range of gear ratios. Modern transmissions from Honda, GMC, Mercedes-Benz, VW, and others have gear ratios chosen for maximum fuel efficiency.
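To make the gear-ratio arithmetic concrete, here is a minimal sketch of how engine speed, gear ratio, and final-drive ratio combine to give road speed. All numbers (the gear ratios, final-drive value, and tire diameter) are made up for illustration and do not describe any particular vehicle.

```python
import math

def road_speed_kmh(engine_rpm, gear_ratio, final_drive, tire_diameter_m):
    """Approximate road speed from engine speed and drivetrain ratios."""
    # The engine turns (gear_ratio * final_drive) times per wheel revolution.
    wheel_rpm = engine_rpm / (gear_ratio * final_drive)
    wheel_circumference_m = math.pi * tire_diameter_m
    # meters per minute -> kilometers per hour
    return wheel_rpm * wheel_circumference_m * 60 / 1000

# Made-up example ratios for a five-speed gearbox.
gears = {1: 3.5, 2: 2.1, 3: 1.4, 4: 1.0, 5: 0.8}
for gear, ratio in gears.items():
    speed = road_speed_kmh(engine_rpm=3000, gear_ratio=ratio,
                           final_drive=4.0, tire_diameter_m=0.63)
    print(f"Gear {gear}: ~{speed:.1f} km/h at 3000 rpm")
```

The same engine speed yields a higher road speed in a taller (numerically lower) gear, which is why cruising gears improve fuel efficiency.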
Information on the risks of acrylamide and how you can reduce the chances of being harmed by it.

Acrylamide is a chemical substance formed when starchy foods, such as potatoes and bread, are cooked at high temperatures (above 120°C). It can be formed when foods are fried, baked, toasted or roasted. Acrylamide is not deliberately added to foods – it is a natural by-product of the cooking process and has always been present in our food. It is found in a wide range of foods, including:

- roasted potatoes and root vegetables

Potential health effects of acrylamide

Laboratory tests show that acrylamide in the diet causes cancer in animals. Scientists agree that acrylamide in food has the potential to cause cancer in humans as well. We recommend that the amount of acrylamide we all consume is reduced, as a precaution.

What the food industry is doing to reduce acrylamide

The food industry has undertaken a lot of work to identify and implement measures to reduce acrylamide levels in food. This includes developing guidance on ways to limit acrylamide formation in a variety of foods and processes. Legislation now requires food business operators to put in place simple, practical steps to manage acrylamide within their food safety management systems, including the sourcing of ingredients and appropriate storage.

How to reduce acrylamide at home

To reduce your consumption of acrylamide when preparing food at home, we advise that you:

- aim for a golden yellow colour or lighter when frying, baking, toasting or roasting starchy foods
- follow the cooking instructions on the pack when cooking packaged foods like chips and roast potatoes
- eat a healthy, balanced diet and get your 5 A Day to help reduce your risk of cancer

We previously advised consumers against storage of raw potatoes in the fridge at home, as it was thought this could lead to the formation of additional sugars (known as cold sweetening), which can then convert into acrylamide when the potatoes are fried, roasted or baked. A recent study, which has been reviewed by the Committee on the Toxicity of Chemicals in Food, Consumer Products and the Environment (COT), has shown that home storage of potatoes in the fridge doesn't materially increase acrylamide-forming potential when compared to storage in a cool, dark place. So, if you wish to help avoid food waste, you can choose to store either in the fridge or in a cool, dark place.

Acrylamide is formed during high-temperature cooking, when water, sugar and amino acids combine to create a food's characteristic flavour, texture, colour and smell. This process is called the Maillard reaction. Long cooking times and higher temperatures form more acrylamide than short cooking times and lower temperatures.

Organisations including the World Health Organisation, the European Food Safety Authority (EFSA) and UK scientific advisory committees have assessed the risks posed by acrylamide. In 2015, the EFSA published its risk assessment of acrylamide in food. The assessment confirms that acrylamide levels found in food have the potential to increase the risk of cancer for people of all ages. However, it's not possible to estimate how much the risk is increased. Acrylamide in your diet could contribute to your lifetime risk of developing cancer. As it's not possible to establish a safe level of exposure for acrylamide, to quantify the risk the EFSA used a 'margin of exposure' approach. The margin of exposure (MOE) approach provides an indication of the level of health concern posed by a substance's presence in food.
EFSA's Scientific Committee states that, for substances that are genotoxic and carcinogenic, an MOE of 10,000 or higher is of low concern for public health. The MOEs identified in our total diet study on acrylamide indicate a concern for public health: they range from about 300 for an average adult consumer down to 120 for toddlers.

Our work on acrylamide

To understand more about acrylamide and how to reduce the risk it presents, we are:

- supporting food manufacturers' initiatives to reduce acrylamide in foods
- conducting and publishing annual monitoring data for acrylamide in a range of foods
- working with industry to help manufacturers comply with the new legislation
- advising people what they can do to reduce acrylamide in food they cook at home
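To see how the margin-of-exposure figures above arise, note that the MOE is simply a toxicological reference point divided by the estimated dietary exposure. The sketch below uses EFSA's published reference point for acrylamide (a BMDL10 of 170 micrograms per kg of body weight per day); the exposure values are back-calculated to reproduce the MOEs quoted above and are for illustration only, not survey data.

```python
# Margin of exposure (MOE) = toxicological reference point / estimated exposure.
# Reference point: EFSA's BMDL10 for neoplastic effects of acrylamide.
BMDL10 = 170.0  # micrograms per kg body weight per day

# Illustrative exposure estimates, back-calculated to match the quoted MOEs.
exposures = {"average adult": 0.57, "toddler": 1.42}  # ug/kg bw/day

for group, exposure in exposures.items():
    moe = BMDL10 / exposure
    concern = "low concern" if moe >= 10000 else "possible health concern"
    print(f"{group}: MOE ~ {moe:.0f} -> {concern}")
```

Because both MOEs fall far below the 10,000 threshold, the arithmetic itself shows why the study flags a concern for public health.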
Having learned about the importance of speech projection and delivery speed in the first two articles in this series on vocal variety, it's now time to turn our attention to pitch and resonance.

What is pitch and why is it important?

In vocal variety terms, pitch relates to whether you are speaking in a high, low or natural voice, and it is defined as the rate at which your vocal folds vibrate. The faster the vibration, the higher your pitch; the slower the vibration, the lower your pitch. Your "natural" pitch is influenced by a number of physical factors, such as the length and thickness of your vocal folds as well as the size and shape of your body, but it can also be affected by things like your emotions and your mood. For example, if you are feeling stressed, the muscles around your vocal cords will automatically contract, and your pitch will rise. There has been a lot of research into the effects of vocal pitch and how it is perceived, and there is now a body of academic literature supporting the idea of a general preference for lower vocal pitch. Participants tend to ascribe more positive personality traits to lower-pitched voices (O'Hair & Cody, 1987; Imhof, 2010).

A well-known example

Margaret Thatcher became Britain's first female Prime Minister in 1979, but when she first entered parliament in 1959 her voice was described by many as "shrill and hectoring". Researchers who have studied recordings of her speeches over the intervening period noted that her voice gradually became deeper and more "steely." Her biographer claimed that, around 1975, when she became the leader of her party, Mrs Thatcher engaged the services of a voice coach from the National Theatre to help make her voice more authoritative. Please note: I am not advocating that you need to go to such lengths. However, there are some simple vocal exercises that can help you develop your tonal flexibility. They can make your speaking voice more engaging and interesting as well as help you to project more gravitas and authority.

Vocal Variety – placement and timbre

Have you ever heard somebody speak and felt that the richness of their voice was intoxicating? The term timbre relates to the texture of your voice, and it is distinct from both pitch and loudness. In the human voice, timbre is a product of the way the sound made by our vocal cords resonates in the various air-filled cavities in our body. These cavities include:

- Nose (nasal cavity, including sinuses)
- Mouth (oral cavity)
- Throat (pharynx)

The voice that our audience hears is a blend, based on the relative amount of resonance from each of these three cavities. You have probably heard people who, for a variety of reasons, speak mainly through their nasal cavities. This type of voice is often described as thin and reedy or sharp. In comparison, a voice that includes more resonance from the throat and pharynx is warmer, richer and more rounded.

A simple vocal placement exercise

You can start to become aware of your natural placement, and how to change it, by reciting a simple nursery rhyme out loud.

Step 1 – Place your fingertips lightly on the bridge of your nose and recite your chosen rhyme whilst focusing your attention on the vibration in your fingertips.

Step 2 – Repeat this exercise with your fingertips touching the sides of your throat.

Step 3 – Finally, repeat the exercise again with the palm of one hand resting lightly on the front of your chest.

As you change the placement of your voice, you will notice a change in tone and timbre.
Once you have become aware of the physical sensations associated with speaking from each of these resonating chambers, you can experiment with blending them to produce higher or lower and richer or thinner tones.
At Bucklesham Primary School, teachers provide a wide range of contexts for spoken language throughout the school day. Teachers and other adults in school model speaking clearly. This includes clear diction, reasoned argument, using imaginative and challenging language, and use of Standard English. Listening is modelled, as is the appropriate use of non-verbal communication and respecting the views of others. Teachers are also sensitive in encouraging the participation of retiring or reticent children. Spoken language outcomes are planned for in all areas of the curriculum. Roles are shared amongst pupils: sometimes a pupil will be the questioner, presenter, etc. Learning takes place in a variety of situations and group settings. For example, these could include reading aloud as an individual, working collaboratively on an investigation, reporting findings as a newscaster, interviewing people as part of a research project, acting as a guide for a visitor to school or responding to a text in shared or guided reading. Spoken language will be a focus across the curriculum and across the school day in a variety of settings.

Pupils will:
- Feel their ideas and opinions are valued
- Listen to verbal instructions which are clear
- Offer ideas and opinions which may differ from others
- Verbalise ideas in a variety of situations
- Ask and answer questions appropriately
- Think before they speak – plan out what they will say
- Appreciate opinions of others
- Speak aloud with confidence for the appropriate audience

Teachers will:
- Plan for speaking and listening
- Consider oral outcomes
- Encourage discussion, debate and role play
- Value and build on pupils' contributions
- Understand how to develop skills progressively
- Use resources effectively
- Set realistic goals
- Use different approaches
achondrite, any stony meteorite containing no chondrules (small, roughly spherical objects that formed in the solar nebula). The only exclusions are carbonaceous chondrites of the CI group, which, though they are clearly chondrites, are so heavily altered by water that any evidence for their having contained chondrules is lost. Achondrites, constituting about 4 percent of known meteorites, are similar in appearance to terrestrial igneous rocks that have a low silica content, such as basalts, peridotites, and pyroxenites. Most formed by various melting and crystallization processes within asteroids. The majority of achondrites belong to one of the following groups: acapulcoites, angrites, aubrites, chassignites, diogenites, eucrites, howardites, lodranites, nakhlites, shergottites, and ureilites. The shergottites, nakhlites, and chassignites almost certainly came from Mars. In addition, a small group of achondrites is believed to be derived from the Moon. See also chondrite.
The Chemistry Tutor: Learning by Example DVD Series provides an introduction to chemistry through step-by-step example problems. Emphasis is placed on giving students confidence in their skills by gradual repetition so that the skills learned are committed to long-term memory. This episode covers ions and ionic charge. Students will learn what an ion is and how it relates to a neutral atom, as well as how to calculate the ionic charge on an atom, with practical examples. Grades 9-Adult. 34 minutes on DVD. DVD playable in Bermuda, Canada, United States and U.S. territories. Please check if your equipment can play DVDs coded for this region.
An authoring system is a program that has pre-programmed elements for the development of interactive multimedia software titles. Authoring systems can be defined as software that allows its user to create multimedia applications for manipulating multimedia objects.

In the development of educational software, an authoring system is a program that allows a non-programmer to easily create software with programming features. The programming features are built in but hidden behind buttons and other tools, so the author does not need to know how to program. Generally authoring systems provide lots of graphics, interaction, and other tools educational software needs.

An authoring system usually includes an authoring language, a programming language built (or extended) with functionality for representing the tutoring system. The functionality offered by the authoring language may be programming functionality for use by programmers or domain representation functionality for use by subject experts. There is overlap between authoring languages with domain representation functionality and domain-specific languages.

An authoring language is a programming language used to create tutorials, computer-based training courseware, websites, CD-ROMs and other interactive computer programs. Authoring systems (packages) generally provide high-level visual tools that enable a complete system to be designed without writing any programming code, although the authoring language is there for more in-depth usage.

- Hollywood (programming language) with its Hollywood Designer graphical interface.
- Learning management system
- Web design program
- XML editor
UNIVERSITY PARK, PA. -- While most farmers consider viruses and fungi potential threats to their crops, these microbes can help wild plants adapt to extreme conditions, according to a Penn State virologist. Discovering how microbes collaborate to improve the hardiness of plants is a key to sustainable agriculture that can help meet increasing food demands, in addition to avoiding possible conflicts over scarce resources, said Marilyn Roossinck, professor of plant pathology and environmental microbiology, and biology. "It's a security issue," Roossinck said. "The amount of arable land is shrinking as cities are growing, and climate change is also affecting our ability to grow enough food, and food shortages can lead to unrest and wars." Population growth makes this research important as well, Roossinck added. "The global population is heading toward 9 billion and incidents of drought like we had recently are all concerns," said Roossinck. "We need to start taking this seriously." Roossinck, who reports on the findings today (Feb. 17) at the annual meeting of the American Association for the Advancement of Science in Boston, said that she and her colleagues found an example of a collaboration between plants and viruses that confers drought tolerance on many different crop plants. The researchers tested four different viruses and several different plants, including crops such as rice, tomato, squash and beets, and showed that the viruses increased the plants' ability to tolerate drought. Virus infection also provided cold tolerance in some cases. A leafy plant, related to a common weed known as lamb's quarter, was also infected with a virus that caused a local infection. The infection was enough to boost the plant's drought tolerance and may mean that the virus does not have to actively replicate in the cells where the resistance to drought occurs, according to Roossinck. In studies on plants that thrive in the volcanic soils o

Contact: Matthew Swayne
Aerosol research is still in its infancy. Further studies will reveal the definitive long-term impact of these tiny particles. Scientists are yet to unravel all the mysteries of these tiny and yet complex three-dimensional particles. Key details about their properties and their effects remain elusive. So far, it has not been possible to make reliable measurements to determine their distribution and properties on a global scale. Large uncertainties remain in estimating the magnitude of various aerosols emitted from different sources. Studies based on INDOEX data are quite preliminary in nature; however, the latest results provide an indication of their potential to impact climate scenarios, rainfall and agriculture. The INDOEX experiment showed that aerosols can be transported long distances away from the emitting region. Soot lodged in sub-micron sulphate and organic aerosols was seen far to the south, showing that these have a lifetime of over a week. The project also fuelled doubts about similar large-scale transport of pollution occurring in other regions of the Earth. Pollutants from Asia are being transported across the Pacific Ocean by winds. Massive dust storms in Asia, which also carry sulphate and organic aerosols, transport soil eastward to Japan and across the Pacific to the US. However, developing countries have borne far worse consequences of industrialisation than the Northern countries. A recent study by Leon Rotstayn and Ulrike Lohmann (Dalhousie University, Canada) has revealed that sulphate aerosols formed upon oxidation of sulphur dioxide emissions from industries in North America and Europe may have been responsible for causing severe droughts in the Sahel region of Africa in the 1980s. Precipitation in this region has fallen by between 20 and 50 per cent in the last 30 years. Also, these emissions may have led to greater rainfall in Australia as the tropical rain belt shifted southwards (see map: A season of drought). On the other hand, dust storms from Africa's arid Sahelian region, which can rise up to four kilometres into the sky, travel across the Atlantic Ocean to Florida and put about 500 million to 1 billion tonnes of dust into the atmosphere. The incidence of dust clouds reaching Florida has increased since the drought in the Sahel region. Transcontinental dust clouds originating from Asia and Africa are the most significant for North America, as the Intertropical Convergence Zone (ITCZ) along the equator acts as a sort of wind barrier that separates the storms of the Northern and Southern Hemispheres. The Sahel region, the Sahara desert, the Indus valley in India, the Taklimakan desert north of the Himalayas and the Gobi desert in Mongolia are a few of the main sources of dust in the world. Surface winds blowing at high speeds over the desert and sparse low grassland regions in Mongolia and western China can inject enormous plumes of dust high into the atmosphere. These are then transported as far as the Pacific Ocean basin before being depleted by rain or gravitational settling. Compared with the clouds from Africa, Asian dust shows higher concentrations of human-caused air pollutants such as sulphates, perhaps due to Asia's dense population and industrial cities. For instance, dust storms from the Mongolian region start from soils severely eroded by overgrazing and overfarming, and as these dust clouds pass over Beijing and other large cities, they gather industrial pollutants.
Apart from the health effects associated with particulate matter, the mineral constituents of dust present an added hazard (see box: Killers at large). Fine iron particles, which give the reddish tinge to the African dust, can generate hydroxyl radicals on the lung surface, which can scar lung tissue over time and decrease its effectiveness. Another important feature of transcontinental dust clouds is the microbes they transport. Initially it was believed that ocean distances were too large for bacteria to survive, but research indicates that intercontinental dust may transmit viable pathogens. In thick clouds of dust, the UV exposure at the bottom can be just half of that at the upper surface, so microbes in the lower layer can be protected and can survive the transport. Pathogens carried in dust can cause skin infections such as rashes and open sores, and may affect crops like cotton, peaches and rice. Indirectly, dust aerosols can impact nutrition by reducing crop yields -- through erosion of soil nutrients and by shading crops from needed light. They also cause allergic reactions and diseases.

Lifetimes of most anthropogenic aerosols are in the range of 5 to 10 days. If aerosols are mostly confined to an altitude of one kilometre, they can travel about a thousand kilometres in as many days. But when lifted to higher layers, as in the case of the INDOEX haze, where the maximum concentration of aerosols was at about three kilometres, aerosols can travel halfway around the globe within a week. While emerging research has answered some questions, many nagging ones remain unanswered. Does the haze balance the warming due to greenhouse gases (GHGs) or does it intensify it? How does air pollution from Asia affect worldwide concentrations of pollutants? Since INDOEX was conducted during January to March, there is no data on the extent of the haze during the rest of the year. Scientists like Mitra caution that a lot more research is needed on the effect of haze on monsoons and the effect of reduced sunlight on the water budget and soil moisture before any conclusive statements can be made. He suggests a holistic policy framework taking into consideration GHGs, aerosols and short-lived gases, to emphasise interactions between global processes, like the warming of the Earth due to GHGs, and local and regional processes, like air pollution (see chart: Pollutants at work). In short, we need to learn more. But we also have to act on the basis of what we know today.

Some people are beginning to think that the heat-absorbing soot is a godsend, as it may help to reduce global warming. Nothing could be further from the truth. As scientists working on INDOEX show, the short-term fix of aerosol cooling could well be deadly for the regional climate -- its rainfall and sunlight. By making the region more drought-prone, it could act as a double whammy, with vulnerable people becoming even more vulnerable to the impacts of global climate change as they occur. But as Mitra says, it is important to distinguish the causes and impacts of greenhouse gases, with lifetimes ranging from decades to centuries, from those of aerosols. However, this should not take away from the need to learn more about the impacts of aerosols -- from the health of people to the health of our local climate system. Interestingly, the action agenda in both cases, greenhouse gases and regional haze, is complementary -- to reduce emissions from fossil fuel burning. But the actors will be different.
In the case of global warming, the first accused is the industrialised world, which continues to emit far beyond its share of the global atmospheric space. Its emissions from its era of industrialisation have added to the concentration of gases in the atmosphere. This historical grandfathering of the atmospheric pollution load is what is creating the problem today. Developing countries have little space left for economic development. Climate change policy is, therefore, about creating the ecological space for developing countries to grow, by limiting the emissions of the industrialised world. In the case of regional haze, the accused are the users of fossil fuel in the developing world -- from thermal power plants to automobiles to the burning of biomass. The 'survival' emissions of the poor -- burning firewood to cook food -- cannot be compared with the 'luxury' emissions of the rich -- driving a car, for instance. Therefore, in this case, the onus of change lies with the rich of the fast-developing world. We have to recognise, and fast, that air pollution is doing more damage than just killing us softly.
Bullying is unwanted aggressive behavior by an individual or group among school-aged children. The behavior is usually intended to cause hurt or harm to another person in hopes of making the victim feel defenseless and/or intimidated, thereby creating a power imbalance. Bullying is most often a repeated occurrence.

MOST COMMON EXAMPLES OF BULLYING
- Spreading false rumors
- Intentional exclusion
- Ridiculing someone publicly
- Taking property
- Damaging property
- Any unwelcome physical contact

DID YOU KNOW? The month of OCTOBER is National Bullying Prevention Month.

DO YOU NEED HELP?
* Report the bully to any school official
* Tell a responsible adult/parent
* Tell the School Resource Officer
* Call 911
* Talk with any medical physician or counselor
* Suicidal thoughts? Call 1-800-273-TALK (8255)
* TEXT "NOBULLY" to 444999 (For Montgomery County)
Difference between mole and molecule – mole vs molecule

While studying the chapter on gas laws you must have come across the term 'mole' or 'mol'. You may have seen gram-mol as well in that chapter. I find many students confused about the difference between a mole (or mol) and a molecule. Here we will discuss and explain the concept of the mole with some examples. We'll also go through a quick definition of molecules. This will help you to understand the difference between mole and molecule. Related words covered in this post are mole or mol, molecule, gram mole, g mol, gram mol, molar mass and mole vs molecule.

Mol or Mole? Any difference between mol and mole? A big NO. Note that mole and mol are synonymous (the same thing). So we can use either mol or mole to refer to that quantity (to be discussed in the para below). But note that a molecule is different from a mole or mol. So let's discuss the concepts in the next section.

Mole or Mol – Concepts

One mole (also abbreviated as mol) is the amount of substance that contains a specific number of particles (6.023 × 10^23). These particles may be atoms or molecules or ions, etc. Just to simplify, 1 mole of sand means a sand sample containing 6.023 × 10^23 sand particles. So if we say 1 mol of Oxygen gas, that means an amount of Oxygen sample which contains 6.023 × 10^23 Oxygen molecules (in gas form, Oxygen exists as molecules). Similarly, 1 mol of atomic Carbon contains 6.023 × 10^23 Carbon atoms (here Carbon is considered in its atomic form). This specific number (6.023 × 10^23) is called the Avogadro number.

Difference between mole and molecule – Mole vs Molecule

If you already have the concept of molecules, the concept of the mole we just discussed will certainly help you to understand the difference between mole and molecule. Otherwise, to brush up, I am adding a definition of the molecule here:

Definition of a molecule: A molecule is the smallest particle of an element or compound that possesses the chemical properties of that element or compound. Molecules are made up of atoms that are held together by chemical bonds.

Summary: Now we know that a mole (or mol) contains a specific number of particles (atoms, molecules or ions). But the mole (or mol) is an entirely different concept from the molecule, and you should understand their differences as you go through this post!

Gram-mole: representation of molar mass in grams

1 gram mole of Oxygen gas: Oxygen gas exists in molecular form in nature. So in this case we have to take the molecular mass of Oxygen (O2 = 2 × 16) and attach the gram unit to it to get the mass of 1 gram mole of Oxygen. So 1 gram mole of Oxygen gas means 32 grams of Oxygen gas.

Now 1 gram mole of atomic Carbon: Carbon is available in atomic form. So in this case we have to take the atomic mass of Carbon (C = 1 × 12) and again attach the gram unit to it. So 1 gram mole of atomic Carbon means 12 grams of Carbon.

As water (steam/vapour) is available in molecular form, 1 gram mole of water (H2O) means 2 × 1 + 16, that is, 18 grams of water molecules.

Gram mole and Avogadro number

So we can say that 32 grams of Oxygen gas will contain the Avogadro number (6.023 × 10^23) of Oxygen molecules. Similarly, 12 grams of atomic Carbon contains the Avogadro number (6.023 × 10^23) of Carbon atoms. And 18 grams of water will contain the Avogadro number (6.023 × 10^23) of water molecules.

**Remember that 6.023 × 10^23 is called the Avogadro number.

Q1.
Find out the number of moles in 132 grams of CO2.
Solution: Molar mass of CO2 = gram molecular mass of CO2 (as it is available in molecular form) = [12 + 2 × 16] gram = 44 gram.
We know: total mass of the sample = mass per mole × number of moles = molar mass × number of moles.
So, number of moles (n) = mass of the sample / mass per mole = 132/44 = 3.
Q2. Find out the number of molecules in 8 grams of oxygen.
Solution: you can try and do it yourself. (Ans: 1.506 × 10^23 molecules)
Suggested Post: physics problems
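To make these conversions concrete, here is a minimal Python sketch of the mass–mole–particle arithmetic used in Q1 and Q2 (the function names are illustrative, not from the original post):

```python
# Mole arithmetic: n = mass / molar mass; particles = n * Avogadro number
AVOGADRO = 6.023e23  # particles per mole, the value used in this article

def moles_from_mass(sample_mass_g, molar_mass_g_per_mol):
    """Number of moles n = sample mass / molar mass."""
    return sample_mass_g / molar_mass_g_per_mol

def particles_from_mass(sample_mass_g, molar_mass_g_per_mol):
    """Number of molecules (or atoms) = n * Avogadro number."""
    return moles_from_mass(sample_mass_g, molar_mass_g_per_mol) * AVOGADRO

# Q1: 132 g of CO2 with molar mass 12 + 2*16 = 44 g/mol
print(moles_from_mass(132, 44))      # 3.0 moles

# Q2: 8 g of O2 with molar mass 2*16 = 32 g/mol
print(particles_from_mass(8, 32))    # ~1.506e23 molecules
```

The same two functions cover the gram-mole examples as well: `particles_from_mass(18, 18)` for water or `particles_from_mass(12, 12)` for atomic carbon each return the Avogadro number.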
University Degree: Wordsworth

Write a close analysis of “Lines left upon a Seat in a Yew-tree which stands near the lake of Esthwaite” from William Wordsworth and S. T. Coleridge’s Lyrical Ballads, discussing whatever features of language or themes seem important.

The poem The Dungeon is an extract from a play by Coleridge. It is part of a soliloquy from Osorio spoken by the hero after being imprisoned. The poem focuses on the social injustices made by mankind; specifically, that prison is no place for a man to be or to reflect on his guilt. Therefore, the poem argues that one crime cannot be 'cured' by another crime (imprisonment); this is described as unjust and immoral, but Coleridge suggests that 'nature' is the 'true cure' of this indignation: "O nature/ Healest thy wandering and distempered child:"2.
- Word count: 1285

In the first half of stanza one the poet uses enjambment to create a natural flow and rhythm that mirrors the poet's view of nature at this point in the poem. Wordsworth's use of enjambment, along with end-stopped lines, is also used throughout the first stanza to modulate emotion and create pace by speeding up and slowing down the language; the speaker goes from recalling the beauty, silence and calmness of the woods to the noisy 'merciless' ravaging of the trees, then back to peace once more.
- Word count: 1422

Write an essay of 1,500 words, in which you compare and contrast the way nature is represented in the following Romantic poem and extract from a Romantic poem: Percy Bysshe Shelley’s “Mont Blanc” and lines 452-542 from The Prelude.

The 'second generation', however, in which Shelley is included, belongs to the post-war period, and having lived neither through the Revolution itself nor the reaction, they saw this change of view as a betrayal. Shelley's writing can be characterized as a continuous rebellion aiming at the establishment of the reign of love and freedom in human society. 'Mont Blanc' constitutes an impressive statement of his belief in a benevolent force in Nature and of moral activity in man. Likewise, Wordsworth's Book 6 from The Prelude, entitled 'Cambridge and the Alps', aims at charting 'the growth of a poet's mind', with particular emphasis on the importance of Nature, which is always a key notion in his philosophy and poetry.
- Word count: 1736

For ease of reference to the film, I use names and terms as they appear in the English-dubbed version of Nausicaä released in 2005, to convey the Shinto and Christian elements found in the film, looking at broad themes as well as symbols. Film synopsis: The story in Nausicaä takes place a thousand years after a global war, the "Seven Days of Fire." Great Warriors, biological weapons with nuclear capabilities, destroyed everything. However, enclaves of surviving human colonies exist throughout the Fukai, or the Sea of Decay.
- Word count: 2464

He speaks about the relationship of the tree with the earth, the man's sense of introspection with regards to the tree, and the almighty power of God. These innate human and natural characteristics contrive fundamentally "good" poetic emotions from the poem, and create the predisposition that is in opposition to the argument Cleanth Brooks and Robert Penn Warren issue. Degrees of expression are used with each instant of thought to supply a particular depth to resolve different intensities of signification.
- Word count: 1164

I predict that whatever has taken place will change Snow's homecoming from what he expects it to be like. "Snow was not any too eager to reach home" is given as the reason for his decision to take the longer route (l.4). Tennant's use of litotes is effective in emphasizing how, contrary to what one might expect, this prodigal father's homecoming is "never the scene of wild enthusiasm" (l.5). Tennant subtly characterizes Snow through the reactions of others, albeit from times prior to this one. The lack of enthusiasm is clear in the son's "Hey, Mum, Dad's here," which lacks the diction of joy or surprise, and in the wife's "grimly" said "Hello! so you're back, are you?" (7)
- Word count: 1481

By doing this, the poet portrays an image of an axe striking tree trunks. The poet proceeds by using alliteration to display the way in which the shadows of the trees could be seen along the "wind-wandering weed-winding bank" where the river and meadow met. In order to emphasise the shadows of the trees, the poet also uses internal rhyme: "dandled a sandalled". In the second stanza, Hopkins speaks of the aftermath of the destruction. The poet begins this verse by using strong words such as "hew" and "delve" to describe our harsh actions upon the earth, for our "country is so tender" that even harming it a little can permanently alter it.
- Word count: 772

Write an essay of 1500 words, in which you compare and contrast the treatment of the City in the following Romantic poem and extract from a Romantic poem: Mary Robinson's 'January 1795' and lines 624-741 from Book Seventh of The Prelude by William Wordsworth.

Indeed the poet presents four different visions of London - tranquillity, chaos, loss of social order and a return to order - which I shall analyse further. In the first section of this extract (lines 624-642) Wordsworth celebrates the tranquillity of the city streets at night. He uses metaphor to compare human life to a tide that 'stands still' (line 631). This peacefulness is echoed in lines 634-635: 'The calmness, beauty of the spectacle; Sky, stillness, moonshine, empty streets and sounds' Wordsworth employs an alliterative technique, repeating the letter 's' to produce an audible swishing sound akin to the sea lapping against the shore, inducing an idyllic sound of calmness.
- Word count: 1790

William Wordsworth, one of the best English romantic poets ever, gave us this beautiful poem 'Daffodils'. Thanks to his Lyrical Ballads, we saw the Romantic movement in literature. The Prelude is supposed to be the best work of this man, but this poem, based on nature, happens to be one which we can't dare to avoid. I was forced to study this one more than once in my school days, which means that I still have every line going through my mind, especially while I am close to nature! Wordsworth was often called the poet of nature, thanks to his poems which give new meaning to nature!
- Word count: 596

Angrily, the poet accuses the modern age of losing its connection to nature and to everything meaningful. Man no longer appreciates nature and instead he exploits it for his own material gain. As a result, we are "out of tune" with nature. This relatively simple poem states that humans are too preoccupied with the material "Getting and spending" and they have lost touch with the spiritual. Hence, this will not help people in life, "It moves us not." In the sestet, the poet proposes an impossible personal solution to his problem; he wishes he could have been raised as a pagan.
- Word count: 566

Here it was clear that humans had total and complete control and Mother Nature had no power. The sun was high in the sky shining brightly overhead, bearing down with hot burning fingers piercing the skin and tearing at the flesh within. A slow stream snaked lazily round a bend, licking the corners but never quite touching them, it seemed. Then the stream rose up to meet the main hall of the abbey, and the two joined like lovers as the stream ran down to the east wing. Now the noise was ceasing, dying back as if cut by an invisible sword; it was losing its grip on the would-be quiet abbey ruins.
- Word count: 1167

On Wenlock Edge and Beeny Cliff - Compare and contrast the ways in which two poets communicate feelings about the passing of time

To show the change in tone, in stanza 1, line 3 the poet says: "The woman whom I loved so, and who loyally loved me." But in stanza 5, line 3 the mood is different: "And nor knows nor cares for Beeny, and will laugh there nevermore." In the first passage the tone is joyful, as the poet uses "loved" twice, which is seen as something blissful, to be loved or to love someone. It also shows everyone is happy as it says "woman who I loved so" and "who loyally loved me".
- Word count: 1345

Romanticism in 'The Tyger' by William Blake, 'On This Day I Complete My Thirty-Sixth Year' by Lord Byron and 'The World is Too Much with Us' by William Wordsworth.

The tiger is not revealed as a good or bad animal, but as something amazing and frightening. The poet begins this poem in the first stanza by imagining the tiger burning in the jungle at night: 'Tyger, Tyger, burning bright, in the forests of the night...' This also suggests that the tiger was born from fire; it was imitated rather than created. He then asks: 'What immortal hand or eye could frame thy fearful symmetry?' Here the immortal hands and eye refer to God; the symmetry refers to the tiger. This is the first question asked in the poem; from here onwards each following stanza has further questions, all of which refine the first.
- Word count: 1682

Illustrate and explain how different poets make use of the traditional imagery of nature in a range of poems you have studied.

Nature's symbols and images have been used to express a range of ideas. The theme of nature can be used to help describe human behaviour and emotions, and as a source of inspiration to help draw ideas and help develop them in the poet's mind. The natural world has been written about by many authors and poets. 'Welcome to Spring' by John Lyly is a nature poem, but it is about human nature, and human behaviour. It is about dark human behaviour, about a rape, by a Greek King. It is describing a Greek myth about a King who raped his wife's sister and cut out her tongue so she could not tell anyone.
- Word count: 1247

The second chapter contains two major ideas. The first is Turner's defense and explanation of the appropriateness of anger. Turner thinks that society wrongly taught the people to repress and fear their emotions. Turner finds primal emotions to be necessary to our survival, as well as the survival of the wild. He explains that anger occurs when we defend something we love or something we feel is sacred. He reminds us to cherish our anger and use it to fuel rebellion.
Turner criticizes the cowardice of modern environmentalists in the following passage: "The courage and resistance shown by the Navajos at Big Mountain, by Polish workers, by blacks in South Africa, and, most extraordinarily, by Chinese students in Tiananmen
- Word count: 3470

Why I believe that 'She Dwelt Among the Untrodden Ways' by William Wordsworth and 'Muliebrity' by Sujata Bhatt are memorable poems.

Wordsworth's explicit love for nature is obvious due to his mastery of the language, which allows him to bring such emotion and power into his poem without the use of sophisticated words. Wordsworth shows his love for nature through his poem 'She Dwelt Among the Untrodden Ways', where he shows his sentiments for a girl living alone with nature. In this poem the girl, Lucy, is considered a child of nature. She is pure like the earth and perhaps has grown up along with nature.
- Word count: 775

I will explore the romantic aspects in William Wordsworth's poems 'The Daffodils,' Percy Shelley's poem 'Ozymandias' and William Blake's poem 'The Tyger.' The poem 'Daffodils' contains various characteristics that would classify it as a romantic poem.

A prime example from his work to prove this is, 'Little we see in Nature that is ours.' Anti-establishmentism was another aspect of romanticism, as all romantics were opposed to established institutions such as the Church and the Monarchy. Percy Bysshe Shelley was one of the main romantics, alongside William Blake, who disputed institutions such as those mentioned above. The ancient and exotic were another attribute of romanticism, as romantics were fascinated by different cultures which differed by either time or distance.
- Word count: 1468

After several incidents at school, he was taken out of school and taught at home by his father. Eventually, he reentered school and received an eighth grade diploma from the Wilkins School. Around the age of 13, he began taking piano lessons and became seriously interested in music. It was through music that he became a better disciplined person and learned how to use art to channel his emotions. He studied under the very prominent pianist, Frederich Zech. It was through his training that Adams grew to love music and was an accomplished pianist himself.
- Word count: 800

NATURE, natural, and the group of words derived from them, or allied to them in etymology, have at all times filled a great place.

According to the Platonic method, which is still the best type of such investigations, the first thing to be done with so vague a term is to ascertain precisely what it means. It is also a rule of the same method that the meaning of an abstraction is best sought for in the concrete, of an universal in the particular. Adopting this course with the word Nature, the first question must be, what is meant by the "nature" of a particular object?
- Word count: 15520

As the mariner goes in search of understanding and redemption, the supernatural world clearly engulfs him. His world is based in a nightmare universe, always with elements of the realistic world present. For much of the poem, it is set in an empty ocean, the mariner adrift on a boat by himself, symbolically cut off and isolated from the rest of the world and human companionship.
- Word count: 547

Compare and contrast the views on human nature and conflict of any two of the following thinkers: Thucydides, Augustine, Machiavelli, Hobbes, Schmitt, Morgenthau, Kissinger or Mearsheimer.

Machiavelli and Thucydides

For Machiavelli, man is alone and helpless in this world.
Even if God is perhaps friend to the valiant, and he, or Christ, may at times bring some relief to the wretched, man's condition in this world remains disconsolate.1 Human Nature in Machiavelli is a simple concept. It would not do Machiavelli's reputation justice to miss out what he 'generally' thinks of 'all men': "...one can generally say this about men: they are ungrateful, fickle, simulators and deceivers, avoiders of danger, and greedy for gain. While you work for their benefit they are completely yours, offering you their blood...when the need to do so is far away."
- Word count: 2987

Human nature in Thucydides

Thucydides says, after describing the Corcyra civil war: "Then, with the ordinary conventions of civilized life thrown into confusion..." We cannot simply take the words and read into them that they pertain to any human situation at any time. Rather we should take them as they were presented, in the context of the human situation in which they are given - that of war (impending, possible or dissuadable). We are able to judge for ourselves that 'human nature' at any point necessarily depends upon all the forces surrounding it, and with this The History agrees. Additionally, it only remains to be said that Thucydides is thus obviously relating to us how the 'warring' part of human nature reveals itself:
- Word count: 556

"Design, pattern or what I am in the habit of calling inscape, is what I above all aim at in poetry." Discuss Hopkins' poetry in the light of this statement.

This however is juxtaposed by the following sestet when Hopkins speaks of God; the rhyming scheme changes and becomes less precise, the language also becomes more complicated, and the use of repetition combined with alliteration and assonance throughout creates a confused atmosphere. Unusually however, it is not this closing which is harder to understand, but the opening octet that appears cloudy and unclear. 'Stones ring; like each tucked string tells, each hung bell's Bow swung finds tongue to fling out broad its name;' These two lines are taken from the octet which Hopkins has used to represent mankind.
- Word count: 1774

The supernatural in Coleridge's "The Rime of the Ancient Mariner" & the uncanny in Hoffmann's "The Sandman"

Whereas the uncanny "has to do with a sense of strangeness, mystery or eeriness. More particularly it concerns a sense of unfamiliarity which appears at the very heart of the familiar, or else a sense of familiarity which appears at the very heart of the unfamiliar." (An Introduction to Literature, Criticism and Theory, 1995, p.33). The "Rime of the Ancient Mariner" tells the story of how a Ship having passed the Line was driven by storms to the cold Country towards the South Pole; and how from thence it made her course to the tropical Latitude of the Great Pacific Ocean; and of the strange things that befell; and in what manner the Ancient Mariner came back to his own Country.
- Word count: 2251

The children find themselves trapped on an island, isolated from society and civilisation. It is an island sufficient for their survival; there is plenty of fruit and nuts for their consumption, and they are free from predation. And it is in this absence of fear for survival that their Freudian "Id"1 responses of desire begin to manifest themselves; the children begin wanting to hunt, wanting to exclude the weak, and wanting power. Golding first dramatises the children's Id response in the first election.
- Word count: 1172
No race can afford to neglect the enlightenment of its mothers. — Frances Ellen Watkins Harper

National Association of Colored Women

Black women like Sojourner Truth, Frances Ellen Watkins Harper, and Harriet Tubman participated in women’s rights movements throughout the nineteenth century. Despite their efforts, black women as a whole were often excluded from organizations and their activities. Black female reformers understood that in addition to their sex, their race significantly affected their rights and available opportunities. White suffragists and their organizations ignored the challenges that African American women faced. They chose not to integrate issues of race into their campaigns.

In the 1880s, black reformers began organizing their own groups. In 1896, they founded the National Association of Colored Women (NACW), which became the largest federation of local black women’s clubs. (While the term “Colored Women” was a respectable term in the early twentieth century, the phrase is no longer in use today.) Suffragist Mary Church Terrell became the first president of the NACW.

Suffrage was an important goal for black female reformers. Unlike predominantly white suffrage organizations, however, the NACW advocated for a wide range of reforms to improve life for African Americans.

Jim Crow laws in the South enforced segregation. Blacks and whites attended separate schools, used separate drinking fountains, and even swore on separate Bibles in court. Black students had fewer opportunities to receive a good education, much less go to an elite college, than white students. Schools for African Americans often had old textbooks and dilapidated buildings. The 1896 Supreme Court case Plessy v. Ferguson supported Jim Crow laws as long as segregated facilities were “separate but equal.” But, as the 1954 Brown v. Board of Education case ruled, separate facilities were never equal.

The NACW’s motto was “Lifting as We Climb.” They advocated for women’s rights as well as to “uplift” and improve the status of African Americans. For example, black men officially had won the right to vote in 1870. Since then, impossible literacy tests, high poll taxes, and grandfather clauses prevented many of them from casting their ballots. NACW suffragists wanted the vote for women and to ensure that black men could vote too.

Racism persisted even in the most socially progressive movements of the era. The National American Woman Suffrage Association, the dominant white suffrage organization, held conventions that excluded black women. Black women were forced to march separately in suffrage parades. Furthermore, the History of Woman Suffrage volumes by Elizabeth Cady Stanton and Susan B. Anthony in the 1880s largely overlooked the contributions of black suffragists in favor of a history that featured white suffragists. The significance of black women in the movement was overlooked in the first suffrage histories, and is often overlooked today.

By Allison Lange, Ph.D.

- What was the role of black women suffragists in the Movement?
- Why were African American women often forced to establish their own organizations?
- How did their priorities differ from those of white women?
- What was the National Association of Colored Women's Clubs?
To show that carbon dioxide is necessary for photosynthesis. Take two young potted plants which have been kept in the dark to make the leaves starch-free. Place each of them under a large bell-jar, and place these on vaselined glass sheets. Along with one of the plants is placed a lighted candle, which will use up the oxygen under the jar by combustion and replace it with carbon dioxide. Beside the other plant is placed a dish of caustic soda to absorb the carbon dioxide in the air under the bell-jar. After leaving these two in the light for a few hours, some leaves are removed from each plant and tested for starch. It is observed that the leaves of the plant surrounded by plenty of carbon dioxide show the presence of starch, whereas the others without carbon dioxide have no starch. Therefore carbon dioxide is necessary for photosynthesis.
What are lysimeters?

Lysimeters are delineated soil columns with a known volume and surface area. The column can be either filled manually with disturbed soil material or equipped with an undisturbed soil monolith excavated from the site of investigation. Lysimeter experiments can be conducted either in the laboratory (laboratory lysimeter) or in the field (field lysimeter). Experiments using lysimeters have been carried out for more than 200 years (Goss and Ehlers, 2009). The lysimeter technique has been improved constantly and has been applied to ever more complex research topics ever since.

Application of lysimeters

Lysimeter experiments are a convenient method to determine water balance variables. In combination with precipitation measurements, it is possible to directly calculate the evapotranspiration rate by using the recorded mass change of a weighable lysimeter. Lysimeter experiments are utilized to investigate water balances of ecosystems or crop water use of rain-fed crops. Lysimeters equipped with a leachate collecting system allow the quantitative and qualitative (in the laboratory) investigation of the seepage water. Further, lysimeters can be equipped with additional sensors such as tensiometers, soil moisture probes, thermometers, and suction probes, which allow the investigation of the functioning and mechanisms of ecosystems. The results can be transferred from small to large scales. Due to the possibility of long-term field investigations under given site conditions, lysimeter experiments can be used to derive statements about the water balance under certain climate scenarios. A comparison of several identical lysimeters in areas with different weather conditions, or a comparison of different soil types or different vegetation under the same weather conditions over a longer time period, is also a common research goal. These investigations provide the foundation for many models to estimate the effect of climate change, the spread of contamination in the soil, or the success of remediation measures.

Fields of application

Lysimeter experiments can be conducted either in the laboratory or in the field. Typical areas of application are agricultural sites, forest sites, landfill sites, and post-mine landscapes, as well as contaminated sites in need of rehabilitation. The combination of several lysimeters is recommended for statistically verified statements. Up to 4 lysimeters can be put together in autonomous UGT-lysimeter stations made of PE-HD. For larger test setups, several stations can be combined, or large lysimeter systems erected with concrete basements are also possible. The size of the soil monoliths can vary from very small dimensions (95 cm² in area, less than 1 m deep) to large lysimeters (2 m² in area, up to 3.5 m deep). For location-based research targets it is necessary to investigate undisturbed soil monoliths. The patented UGT-Excavation Technology provides a non-destructive excavation of the soil monolith with minimal impact on the environment. Lysimeter experiments can also be conducted in the laboratory. Under defined and controlled boundary conditions, it is possible to investigate the behavior of natural soils or vegetation under special environmental conditions, as well as physical/hydrological soil properties of manually imported soils or processes of (contaminant) substance distribution, relocation and leaching.

Goss, M.J. and Ehlers, W., 2009: The role of lysimeters in the development of our understanding of soil water and nutrient dynamics in ecosystems. Soil Use and Management, 25, 213-223.
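The evapotranspiration calculation mentioned under "Application of lysimeters" follows from a simple water balance. Here is a minimal Python sketch of that idea, assuming a weighable lysimeter with a leachate collector; the function and variable names are illustrative, not from any particular lysimeter product:

```python
def evapotranspiration_mm(precip_mm, mass_change_kg, drainage_mm, area_m2):
    """
    Simple water balance for a weighable lysimeter:
        ET = P - delta_S - D
    where the storage change delta_S comes from the recorded mass change
    (1 kg of water over 1 m^2 corresponds to a 1 mm water column).
    """
    delta_storage_mm = mass_change_kg / area_m2  # kg per m^2 equals mm of water
    return precip_mm - delta_storage_mm - drainage_mm

# Example: 12 mm rain fell, a 1 m^2 lysimeter gained 6 kg, and 2 mm drained off,
# so 4 mm must have left the column as evapotranspiration.
print(evapotranspiration_mm(12.0, 6.0, 2.0, 1.0))  # 4.0
```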
Browsers, grazers, miners and suckers. From the tiniest caterpillar to the most majestic red deer stag, herbivores – the animals that eat plants – have a huge effect on vegetation throughout Scotland’s native forests. Indeed, most of the Earth’s terrestrial ecosystems are heavily influenced by these animals.

The action of herbivores, known as herbivory, differs from predation in that predators generally kill the animal they eat, whereas plants usually survive after being fed upon by herbivores. Also, a plant can support a large population of herbivores, as there are so many parts of it which can be eaten: leaves, buds, bark, wood, stem, sap, flowers, pollen, nectar, roots, fruits and seeds.

Take a rowan tree (Sorbus aucuparia) as an example. A deer might browse some emerging buds, twigs and leaves in spring. A host of insects may feed on the nectar and pollen from its flowers, caterpillars of several moths make mines in the leaves, and in the autumn fieldfares (Turdus pilaris) and their relatives will gorge on the berries. These are just a fraction of the herbivores the tree may support. There are a myriad of herbivore species in the forest and their interactions with plants are far-reaching and complex. Below we will look at some of the main herbivore groups. Using examples of the more obvious and influential species and their effects, we will gain a glimpse into the fascinating world of herbivory.

Insects have been referred to as major architects of the plant world. Although much of the attention given to forest insects has been related to pest outbreaks in monoculture plantations, native forests are more diverse and generally less vulnerable to pest attacks. On the whole it is not in an insect population’s interest to completely kill its host. When insects do kill plants, it is often the weak ones, and this contributes to a stronger, healthier gene pool, a diverse forest structure, and more light reaching the forest floor. Caterpillars, including those of the pine looper moth (Bupalus piniaria) and the pine sawfly (Neodiprion sertifer), feed on the needles of Scots pine (Pinus sylvestris), but their numbers are controlled by predators such as wood ants (Formica aquilonia and F. lugubris). The frass, or excrement, and other organic matter dropped by the caterpillars enriches the forest floor.

Bees and butterflies such as the pearl-bordered fritillary (Boloria euphrosyne) are among the more obvious forest insect herbivores. Feeding on sugar-rich nectar provided by flowers, they spread pollen between plants. Along with the provision of berries, this is a good illustration of how plants have adapted to utilise herbivores for their mutual benefit.

Less visible are the various tiny insects which ‘mine’ leaves. These live inside the leaf itself and move around as they consume the cellulose there. Some insects, particularly the true bugs such as aphids, suck liquids from plants. There are a range of insect (and other invertebrate) herbivores which induce plants to produce galls. Laying their eggs in the plant’s tissues, they cause an abnormal growth in the plant which provides protection and shelter for the insect larva. Some of the spangle galls – small discs found on the underside of the leaves of oaks (Quercus spp.) – are induced by a tiny wasp (Neuroterus quercusbaccarum). These do little harm to the tree, and provide food for birds such as blue tits (Parus caeruleus).
The main mammal herbivores in the forest include red deer (Cervus elaphus), roe deer (Capreolus capreolus), the red squirrel (Sciurus vulgaris), wood mice (Apodemus sylvaticus), voles (Microtus agrestis) and hares (Lepus spp.). In the past, there were also other large mammalian herbivores, but they are now extinct in Britain (see below). Each species has (or had) its own unique feeding habits, adding to the complexity and richness of the Caledonian Forest.

Deer play a vital role in the forest, as their grazing and browsing helps to create a diverse structure in the vegetation. It is quite natural that some tree seedlings are browsed, and this can help produce glades, and promotes interesting growth forms on the trees as they mature. However, in large areas of the Highlands the grazing pressure is so intense that native forest is unable to regenerate at all. The high deer population at present (combined with the effects of sheep and other livestock) means that a very large proportion of seedlings are being overgrazed. Bitten back year after year, the young trees eventually die, leading to an unnaturally low proportion of woodland cover. The influence of deer can also be seen on more mature trees. Bark is sometimes stripped in order to access the more nutritious inner bark, and in parts of the forest (and even some parks and gardens), a definite ‘skirt’ can be seen around some trees, where deer have browsed the lower branches. This can give the trees the appearance of having been trimmed with a hedge trimmer!

Although they are much smaller than deer, red squirrels can also be significant as herbivores. By one estimate, a single squirrel can eat the seeds from 20,000 Scots pine cones in the course of a year. They also feed on broadleaved tree seeds such as acorns and hazelnuts, and aid the regeneration and dispersal of these trees, through their caching of the seeds for the winter. Not all of the stored seeds are recovered and some grow on to become new trees, often at a considerable distance from the parent.

Birds are among the most mobile herbivores, and their feeding habits can be crucial for seed dispersal. The capercaillie (Tetrao urogallus) helps to spread the seeds of plants such as blaeberry (Vaccinium myrtillus), and it grazes on pine needles in the tops of ‘granny’ pines, contributing to the trees’ unique growth forms. Another native pinewood bird, the Scottish crossbill (Loxia scotica), has evolved its characteristic crossed beak so that it can prise open pine cones and extract the seeds from inside.

Members of the thrush family are well-known for their fondness for berries. Redwings (Turdus iliacus) among others distribute rowan berries considerable distances. Clusters of these trees are often found growing beneath Scots pines, where perching birds deliver the seeds in their droppings. Rowan can also be found by rocky perches, far from any other tree cover, which highlights the importance of birds in expanding the forest. The mistle thrush (Turdus viscivorus) is a major distributor of holly berries (Ilex aquifolium), and it is possible that a concentration of holly on the south shore of Loch Beinn a Mheadhoin in Glen Affric is the work of this bird. Jays (Garrulus glandarius) rely on acorns in the autumn. They bury thousands for storage, and while they remember the locations of many of them, some are inevitably left, and grow into mature oaks.
It is interesting to note that jays may be seen more readily in Glen Moriston, which has a considerable number of oaks, than just to the north, in Glen Affric, where there are very few oaks. Many insectivorous bird species play a role in regulating herbivory, by feeding on the insects which eat the leaves of trees and other plants.

Impacts on plants

The impact of herbivores on plants can be huge, but it is often difficult to measure accurately, as the influence varies depending on what part of the plant is eaten. For example, if a deer eats 10 grams of birch buds, the long-term effect is greater than if it were to eat 10 grams of mature birch leaves. This is because the potential growth from the buds would have been lost. Moderate levels of herbivory can actually stimulate plant growth, and make plants more vigorous.

Herbivores and plant evolution

As well as affecting the distribution and vigour of plants in the forest ecosystem in the short term, constant pressure from herbivores over millions of years has forced plants to evolve a variety of defences. Obviously plants are not nearly as mobile as most animal species, but they are far from defenceless. Physical defences such as spines or thorns can be an effective way of deterring certain hungry mammals. Holly and thistles (Cirsium spp.) are familiar examples. The nuts of hazel (Corylus avellana) have a hard shell which helps to protect the nut itself, and grasses contain high levels of silica (the substance from which sand and glass are formed), which wears down the teeth of herbivores.

Chemical defences are not as immediately obvious, but are certainly very powerful. Plants can produce a wide range of toxic chemicals, such as tannins and alkaloids. Tannins act by inhibiting the absorption of proteins by herbivores, which can eventually lead to malnourishment and death. Oak trees contain high levels of this substance, to protect their leaves and other parts from herbivore attack. Interestingly, the tannin from oak was once much in demand for tanning leather. Research in the United States has suggested that some trees may actually release chemical signals (pheromones) when under heavy insect attack, to stimulate the production of defensive chemicals in trees elsewhere in the forest. Does this process take place in the Caledonian Forest? It would certainly be an interesting area for research.

Timing can be an effective defence strategy. Some deciduous tree species will avoid bursting into leaf at exactly the same time as their neighbours, to avoid the risk of coinciding with a ‘boom’ caterpillar emergence. Another major strategy is spatial defence – that is, keeping vulnerable tissues out of reach of hungry herbivores, so the very ‘treeness’ of trees – a stem which generally keeps the foliage high off the ground – is partly a defence.

The story is not one-sided, however. Over long periods of time herbivores themselves have had to adapt to tackle plants’ defences. Rodents such as the red squirrel have front incisors able to tackle the shells protecting energy-rich hazelnuts. Deer and cattle have evolved a complex digestive system. Cellulose is a major structural component of plants. A substance called cellulase is required to break it down, and while vertebrates are unable to produce their own cellulase, certain microbes can. All animals have these micro-organisms in their digestive tracts, but some, known as ruminants, have a specially-adapted stomach called a rumen, which has evolved for this job.
These large herbivores have also developed high-crowned teeth to resist the wear from eating silica-rich grasses.

Past and future: the role of herbivores in ecological restoration

Studying the effects of large mammals is relevant in restoring our native forest ecosystems. In the past, a fully functioning ecosystem had a wide range of herbivores that were crucial in keeping the forest diverse and healthy. Centuries ago, Scottish forests were home to the moose (Alces alces), aurochs (Bos primigenius) (the huge, prehistoric ancestor of today’s domestic cattle), reindeer (Rangifer tarandus), beaver (Castor fiber) and wild boar (Sus scrofa), as well as the current complement of herbivores. Each had its own unique effect on the forest. Dutch ecologist Frans Vera suggests that grazing animals would have played a more significant role than has previously been assumed. No one is certain what Scotland’s prehistoric forests would have looked like, but the extensive woodland had a very diverse structure (including some open areas), and was influenced by the wide range of large herbivores and their complex interactions. In addition, the population and distribution of the herbivores themselves was affected by their predators. Clearly the extirpation of some of our native herbivores and their predators has had major knock-on effects for the Caledonian Forest.

There have been some innovative projects involving the use of large mammals in European conservation, many of which are relevant because the insights they offer may help with ecological restoration in the Highlands. One of these is Oostvaardersplassen in the Dutch polders. There, a range of different herbivores are being used to try and mimic prehistoric grazing patterns. These include large Heck cattle, which have been bred to closely resemble the auroch. There are also konik ponies, which are similar to the prehistoric wild horse known as the tarpan (Equus caballus gmelini). Another example is the ancient Polish forest of Bialowieza, which has a dynamic mosaic of woodland and open ground maintained by herbivores such as the reintroduced population of European bison (Bison bonasus).

In native woodland in Glen Garry, south of Glen Affric, Forestry Commission Scotland has reduced grazing pressure so effectively that birch regeneration has been extremely dense. Highland cattle have now been introduced to the site to help create a more natural vegetation structure, and their browsing and trampling action is helping to open up some areas and break up the ground. As far as can be judged within a few years, Scots pine seedlings seem to be benefiting from this intervention. At their Corrimony Nature Reserve, the RSPB are also using cattle to keep an area open for black grouse (Tetrao tetrix). These birds utilise leks, in which the males perform ritual courtship displays, and if those areas become too overgrown the population would suffer. It is exciting to think that grouse may have depended on aurochs in a similar way.

The role of the European beaver (Castor fiber) is currently topical with the proposed trial reintroduction in Argyll. These herbivores have a key role in creating habitats for a wide range of other forest organisms. They help to create standing dead wood by ring-barking some trees, as well as flooding some areas. This provides habitat for a host of dead wood-dependent insects and fungi, as well as for hole-nesting birds such as the great spotted woodpecker (Dendrocopos major).
Beavers also keep wetland areas open, to the benefit of many other species. Wild boar are omnivores, although much of their feeding may still be classed as herbivory. Roots and tubers form an important part of their diet and the boars’ rooting behaviour helps to create seedbeds for trees and expose food for birds. Herbivores play a fundamental role in keeping the forest healthy and diverse. Human influences over the millennia have shifted the balance to the point where we have lost some herbivores, and now have excessive populations of others, which has a profound effect on the vegetation. Developing a fuller understanding of the role of herbivory, and re-establishing a more natural herbivore fauna, are key steps in restoring our native forests.
224 hands-on science experiments and ideas with step-by-step instructions delight and amaze children as they experience nature, the human body, electricity, floating and sinking, and more. Categorized by curriculum areas, each activity includes a list of vocabulary words and easily accessible materials. 157 pages.

Children learn best through hands-on exploration, observation, and discovery. With more than 100 activities, this book gives them the opportunity to actively engage, experiment, create, and discover the exciting world of science. Children will have fun developing problem-solving skills while becoming comfortable with exploring their world.

With 100 classroom-tested activities, this book makes it easy for teachers to incorporate discussions about caring for the Earth into any curriculum. Each activity features learning objectives, vocabulary, related children's books, materials, preparation (if necessary), directions, and an assessment component. Paperback.

100 easy-to-do activities give children a peek into the lives of their creepy-crawler friends. Each activity features learning objectives, vocabulary, related children's books, materials, preparation (if necessary), directions, and an assessment component. Level J reading level.

Our planet Earth has a great diversity of ecosystems, both beautiful and dangerous. Would you want to visit the wildest places left on Earth, even with the dangers of wild animals and harsh climates? Through these books, follow the adventures of child explorers as they travel through the world's biomes. Visit a Temperate Forest, a Desert, a Prairie, a Rainforest, a Coral Reef and a Wetland. Written by Bridget Heos.
Neurospora crassa is an ascomycete, the red bread mold. Like all fungi, it reproduces by spores. It produces two kinds of spores:
- Conidia are spores produced by asexual reproduction. Mitosis of the haploid nuclei of the active, growing fungus generates the conidia.
- Ascospores, on the other hand, are formed following sexual reproduction. If two different mating types ("sexes") are allowed to grow together, they will fuse to form a diploid zygote. Meiosis of this zygote then gives rise to the haploid ascospores.

Neurospora is particularly well suited for genetic studies because:
- it can be grown quickly on simple culture medium;
- it spends most of its life cycle in the haploid condition, so any recessive mutations will show up in its phenotype;
- when the diploid zygote undergoes meiosis, the nuclei produced are confined to a narrow tube, the ascus. Because the nuclei cannot slip past one another, if the zygote nucleus is heterozygous for a gene (shown here as a and A) and no crossing over near that locus occurs during meiosis I, the ascus will finally have four spores at one end containing one allele (a) and four spores at the other end containing the other allele (A). (If crossing over should occur, what spore patterns could be produced?)

Sucrose, a few salts, and one vitamin — biotin — provide the nutrients that Neurospora needs to synthesize all the macromolecules of its cells.

Geneticists George W. Beadle and E. L. Tatum:
- exposed some of the conidia of one mating type of Neurospora to ultraviolet rays in order to induce mutations.
- Then individual irradiated spores were allowed to germinate on a "complete" medium; that is, one enriched with various vitamins and amino acids.
- Once each had developed a mycelium, it was allowed to mate with the other mating type.
- The ascospores produced were dissected out individually and each one placed on complete medium.
- After growth had occurred, portions of each culture were subcultured on minimal medium.
- Sometimes growth continued; sometimes it didn't.
- When it did not ("1st" in the figure), the particular strain was then supplied with a mixture of vitamins, amino acids, etc. until growth did occur ("2nd").
- Eventually each mutated strain was found to have acquired a need for one nutrient; in the example illustrated here, the vitamin thiamine ("3rd").

Beadle and Tatum reasoned that radiation had caused a gene that permits the synthesis of thiamine from the simple ingredients in minimal medium to mutate to an allele that does not. The synthesis of thiamine from sucrose requires a number of chemical reactions, each one catalyzed by a specific enzyme. By adding, one at a time, the different precursors of thiamine to the medium in which their mutant mold was placed, they were able to narrow down the defect to the absence of a single enzyme:
- If they added to the minimal medium any precursor further along in the process, growth occurred.
- Any precursor before the blocked step could not support growth.

Thus, in this example, the conversion of precursor C to precursor D was blocked because of the absence of the needed enzyme (c). This led them to postulate the one gene – one enzyme theory: each gene in an organism controls the production of a specific enzyme. It is these enzymes that catalyze the reactions that lead to the phenotype of the organism. Today, we know that, in fact, not only enzymes but all the other proteins from which the organism is built are encoded by genes.

Galagan, J. E., et al.
report in the 24 April 2003 issue of Nature the completion of the sequencing of the entire genome of N. crassa. Its 38,639,769 base pairs of DNA encode:
- 10,082 proteins (9,200 of them longer than 100 amino acids)
- 424 tRNAs
- 74 5S rRNAs
- 175-200 copies of the 25S/17S/5.8S rRNA gene cluster.
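Beadle and Tatum's deduction (any precursor after the blocked step rescues growth; any precursor before it does not) can be expressed as a small program. The Python sketch below uses a hypothetical linear pathway with illustrative precursor names, mirroring the A/B/C/D/thiamine example in the text; it is not their actual data:

```python
# Hypothetical linear pathway: each arrow is one enzyme-catalyzed step,
# as in the A -> B -> C -> D -> thiamine example above.
PATHWAY = ["A", "B", "C", "D", "thiamine"]

def grows(supplement, blocked_step):
    """A mutant lacking the enzyme for `blocked_step` grows only if the
    supplement enters the pathway at or after that step's product."""
    return PATHWAY.index(supplement) >= blocked_step

def infer_blocked_step(growth_results):
    """Given {supplement: grew?}, the missing enzyme sits just before the
    first precursor that rescues the mutant."""
    supporting = [PATHWAY.index(s) for s, grew in growth_results.items() if grew]
    return min(supporting)

# The mutant in the text grows on D and thiamine but not on A, B, or C:
results = {s: grows(s, blocked_step=3) for s in PATHWAY}
print(results)   # {'A': False, 'B': False, 'C': False, 'D': True, 'thiamine': True}

step = infer_blocked_step(results)
print(PATHWAY[step - 1], "->", PATHWAY[step])   # C -> D is the blocked conversion
```

Running the inference over all possible mutants of this hypothetical pathway reproduces the one gene, one enzyme logic: each distinct growth pattern points to exactly one missing enzyme.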
Number Games and Activities for 0 – 10 is part of the Exploring Maths series for teachers of 5 – 7 year old students. This 86-page book contains 10 easy-to-use, multi-purpose games to help numbers come alive in your classroom. The 10 themes have immediate appeal for young students, who will love manipulating frogs, dogbones and freckles as part of their daily maths activities. Each activity is designed to help students recognise, match and use numbers 0 – 10 in an imaginative way, with follow-up ideas for using numbers 11 – 19. There are also 8 posters showing how to count in other languages. All the illustrations are by Janice Bowles, who loves to create happy characters that will charm your students. Everything is designed to make your teaching life easier. You’ll never run out of ideas again. For example, your students will love to try “Find a Bone”, “Something Fishy” or “Busy Bees”. Click here to purchase a copy directly from the publisher. If you are a subscriber and you would like to download 25 sample pages, just click here.
This button, from a set of nine, offers the viewer a chance to peek into the Age of Enlightenment, a period of time when the human mind was breaking free from the constraints of the Church and the limitations of the Middle Ages. The Renaissance, primarily spanning the fifteenth through sixteenth centuries, is often thought to be the period of time when great advances in the arts and sciences were taking place, yet there was still significant suppression of individual free thought. This was true until the seventeenth century, when the Age of Enlightenment, with its focus on science and rationality, emerged. That period saw the rise of men of science who refuted theological teachings and pushed against the restraints that restricted learning on every level. Enlightenment began with the scientific revolution and affected every level of society by placing an emphasis on change, understanding, questioning and learning.

The Enlightenment lasted from the mid-seventeenth through the eighteenth centuries, and created among its adherents a great interest in collecting specimens of all things natural, from insects to whale bone, the very small to the very grand. Collectors sought the rarest examples of rocks, shells, insects, and the like, or items like cups, knives or jewelry featuring these curiosities. Avid collectors dedicated spaces, called curiosity cabinets, to their collections. Depending on the standing of the collector, these spaces ranged from small cabinets to entire rooms. Placing items under glass domes was a common way to preserve and display them in a collection.

Although notable men like Descartes, Voltaire and Newton in Europe and Franklin and Jefferson in the United States were associated with the movement, this period of rationality encouraged learning and discovery among women. Women were known to form their own collections, often picking up specimens along the shorelines near their homes. These motifs were even translated into fashion as leaves, flowers and even seaweed became patterns for fabrics used for dresses.

Buttons like these are much like miniature display cases and could easily have been applied to either men’s or women’s garments as decorative elements. These display buttons built upon the notion of “habitat” buttons which emerged during the years following the Renaissance and were a form of keepsake, housing dried flowers or locks of hair under glass. Made in France during the eighteenth century, this group of buttons in the Cooper Hewitt collection features small views of naturalia, and is a fine example of how an idea and practice from society at large could be distilled to its essence and successfully applied to the decorative arts.

Susan Teichman is a design historian who specializes in jewelry history and the architectural history of synagogues.
With timely diagnosis, testicular cancer is most likely treatable and most often curable. It is the most common cancer in men 15 to 34 years old. Cancer starts when cells in the body begin to grow out of control. Cells in nearly any part of the body can become cancer, and can spread to other areas of the body. Cancer that starts in the testicles is called testicular cancer.

Germ cell tumours

More than 90% of cancers of the testicle develop in special cells known as germ cells. These are the cells that make sperm. The 2 main types of germ cell tumours (GCTs) in men are:
- Seminomas
- Non-seminomas, which are made up of embryonal carcinoma, yolk sac carcinoma, choriocarcinoma, and/or teratoma

Doctors can tell what type of testicular cancer you have by looking at the cells under a microscope. These 2 types occur about equally. Many testicular cancers contain both seminoma and non-seminoma cells. These mixed germ cell tumours are treated as non-seminomas because they grow and spread like non-seminomas.

Tumours can also develop in the supportive and hormone-producing tissues, or stroma, of the testicles. These tumours are known as gonadal stromal tumours. They make up less than 5% of adult testicular tumours but up to 20% of childhood testicular tumours. The 2 main types are Leydig cell tumours and Sertoli cell tumours.

Secondary testicular cancers

Cancers that start in another organ and then spread to the testicle are called secondary testicular cancers. These are not true testicular cancers – they are named and treated based on where they started. Lymphoma is the most common secondary testicular cancer. Testicular lymphoma occurs more often than primary testicular tumours in men older than 50. The outlook depends on the type and stage of lymphoma. The usual treatment is surgical removal, followed by radiation and/or chemotherapy.

In boys with acute leukemia, the leukemia cells can sometimes form a tumour in the testicle. Along with chemotherapy to treat the leukemia, this might require treatment with radiation or surgery to remove the testicle.

Cancers of the prostate, lung, skin (melanoma), kidney, and other organs also can spread to the testicles. The prognosis for these cancers tends to be poor because these cancers have usually spread widely to other organs as well. Treatment depends on the specific type of cancer.
In this lesson, students rotate between 7 stations. The primary objective at each station is to observe the behavior of the charged objects and draw a charge diagram based on that observation. Each station requires the supplies listed on the Station Description Sheets.

Applied NGSS include Science Practice 1: Asking questions about their observations and Science Practice 2: Developing and using models, where students draw charge diagrams. Given the supplies, students also apply Science Practice 3: Planning and carrying out investigations, Science Practice 6: Constructing explanations and Science Practice 7: Engaging in argument from evidence. CCSS Math Practice 3: Construct viable arguments and critique the reasoning of others is also involved. This is all in the context of NGSS performance standard HS-PS2-4: Use mathematical representations of Newton’s Law of Gravitation and Coulomb’s Law to describe and predict the gravitational and electrostatic forces between objects.

Before class begins I set up the seven stations described on the Station Description Sheets. I also put a description sheet at each station. Then I project a countdown timer, which is on my computer and set to 5:25, on the board. At the start of class, I tell the students that today they will experience a variety of charged objects. They are to play around with the situations and construct an explanation of what they observe by making a series of charge diagrams like they learned in All Charged Up. They also answer questions and construct explanations on what they observe. They have 5 minutes per station plus 25 seconds to transition from one station to the next.

I want to set up heterogeneous groups of four, so I count the number of students in the class and divide that number by 4. For example, if I have 28 students, then there are 7 groups. I go around the room and have students count off, 1-7. I instruct all 1s to go to station 1, etc. This method creates random groupings which tend to be heterogeneous. Once the students are in their groups, I hand out the Station Write-Ups. On the sheet there are roles for each student to perform so that all students have a responsibility that helps the whole group succeed. Also, they should change roles from one station to the next. I start the timer and instruct the students to begin at their current station.

While students engage with the items at each station, I circulate the room and observe them to make sure students are on task. I also want to help when there are questions. I usually answer one of their questions with a question of my own as I try to lead them to their own explanations. It is great to watch the students figure out how to make things move without direct contact, such as the Can Quiver station, the water attraction station and the two balloons station. The timer is set to loop, so I don't have to do anything at the front of the room. Students are self-directed throughout this activity. I just walk around and watch the students work. If I see an example of exemplary work, I put a star right on the student handout. For the work that shows misunderstandings, I ask leading questions on why they believe their answer is correct. For instance, if they show two objects sticking together but with the same charge, I ask them what happens when you bring like charges together. If I see multiple groups with the same errors, I grab a sample and take a picture of it with my document camera for review at the end of the class.
At the end of the station work, students return to their original station. I let students know that it is ok for them to correct any misunderstandings they have while we look at some samples. I display some of the common mistakes that I see and ask students to explain what is wrong with the diagram. For instance, with this balloon diagram, the charge is spread out over the balloon surfaces as though they were conductors, but they are insulators, so the charge will stay where the fur was wiped. Or this can diagram shows the can as having a net charge, but it does not. The can should be neutral. I ask more questions to get students to explain that the charge on the can becomes polarized due to the rod being near it. As students exit, I collect their work for assessment. For the next lesson, we review the students' work.
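Since HS-PS2-4 asks students to use the mathematical forms of Coulomb's Law and Newton's Law of Gravitation, a short numerical comparison can anchor the closing discussion. The Python sketch below is illustrative (I chose the electron-electron example; it is not part of the original handout):

```python
# Compare electrostatic and gravitational forces between two electrons 1 m apart.
K = 8.99e9        # Coulomb constant, N*m^2/C^2
G = 6.674e-11     # gravitational constant, N*m^2/kg^2
Q_E = 1.602e-19   # elementary charge, C
M_E = 9.109e-31   # electron mass, kg

def coulomb_force(q1, q2, r):
    """Coulomb's Law: F = k*q1*q2 / r^2"""
    return K * q1 * q2 / r**2

def gravity_force(m1, m2, r):
    """Newton's Law of Gravitation: F = G*m1*m2 / r^2"""
    return G * m1 * m2 / r**2

fe = coulomb_force(Q_E, Q_E, 1.0)
fg = gravity_force(M_E, M_E, 1.0)
print(f"Electric force:      {fe:.3e} N")   # ~2.3e-28 N
print(f"Gravitational force: {fg:.3e} N")   # ~5.5e-71 N
print(f"Ratio Fe/Fg:         {fe/fg:.3e}")  # ~4.2e42, electricity dominates
```

The enormous ratio helps explain why the station effects (balloons, bending water, the quivering can) are all electrostatic: at this scale, gravity between the objects is utterly negligible.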
What Causes AIDS?

HIV is a type of virus called a retrovirus. Like all viruses, it must invade the cells of other organisms to survive and reproduce. HIV multiplies in the human immune system's CD4+ T cells and kills vast numbers of the cells it infects. The result is disease symptoms.

Nice To Know: There are two forms of HIV:
- HIV-1 is the more common and more potent form. This form of HIV has spread throughout the world.
- HIV-2, which is less potent than HIV-1, is found predominantly in West Africa. It is also more closely related to two HIV-like viruses found in monkeys.

There also are different strains of the virus, which makes it difficult to find one single treatment.

About The Immune System

Our bodies use a natural defense system to protect us from bacteria, fungi, viruses, and other microscopic invaders. This system includes general, nonspecific defenses as well as weapons custom-designed against specific health threats:
- Innate, or nonspecific, immunity is the first line of defense. Our skin, tears, mucus, and saliva, as well as the swelling that occurs after an infection or injury, contain types of immune cells and chemicals that attack disease-causing agents attempting to invade the body.
- Adaptive, or specific, immunity uses specialized cells and proteins called antibodies to attack invaders that get past the first line of defense. These weapons target specific proteins called antigens, found on the surface of the invading organism. The immune system can quickly rally these custom-tailored defenses if this particular invader attacks again.

There are two types of adaptive immune responses:
- The humoral immune response involves the action of specialized antibody-producing white blood cells. The antibodies (proteins produced by the immune system to fight infectious agents such as viruses), which circulate in the blood and other body fluids, can recognize specific antigens. They latch onto the viruses, bacteria, toxins, and other substances that bear these antigens, targeting them for destruction.
- The cell-mediated immune response involves the action of another group of specialized white blood cells that direct and regulate the body's immune responses or directly attack cells that are infected or cancerous.

How Do White Blood Cells Help Fight Disease?

White blood cells, particularly macrophages and lymphocytes, defend the body in several ways:
- Macrophages contribute to both nonspecific and specific immune responses. These versatile cells act as scavengers, engulfing and digesting microbes and other foreign material in a cell-eating process called phagocytosis. They also, upon encountering an invading organism, release chemical messengers that alert other cells of the immune system and summon T lymphocytes to the scene.
- B lymphocytes, or B cells, serve as the body's antibody factories. Each antibody is targeted to recognize and bind to an antigen from a specific invader. When antibodies circulating through blood and body fluids encounter this invader, they mark it for destruction.
- T lymphocytes, or T cells, are part of the cellular immune response. Some T cells, like CD4+ T cells (also called "helper" T cells), direct and regulate the body's immune responses. Others are killer cells that attack cells that are infected or cancerous.

How Does HIV Infection Become Established In The Body?
HIV targets cells in the immune system that display a protein called CD4.

Nice To Know: When HIV encounters a CD4+ cell, a protein called gp120 that protrudes from HIV's surface recognizes the CD4 protein and binds tightly to it. Another viral protein, p24, forms a casing that surrounds HIV's genetic material. HIV's genetic material contains the information needed by the virus to infect cells, produce new copies of virus, or cause disease. For example, these genes encode enzymes that HIV requires to reproduce itself. Those enzymes are reverse transcriptase, integrase, and protease.
Among the most prominent diseases that cause learning disabilities in the people they affect, the Martin-Bell syndrome, or Fragile X syndrome, deserves a special mention. It is a condition that leads to visible physical deformities in patients and can also adversely affect their mental abilities. It is a disease that affects both men and women; a defect in a gene called FMR1 leads to its occurrence. On average, one out of every 4000 females and one in every 3600 males of any ethnicity or race can be affected by this malady. This is an inherited disorder in which the defective gene is passed from the parents to their child. The mutation of the gene that causes the syndrome may vary from person to person. People with a trivial alteration in the FMR1 gene show very few symptoms of the ailment, whereas people with a greatly altered gene have severe symptoms. People affected with this disease have a faulty gene on their X chromosome. In individuals who have a normal variant of this gene, a protein required for the development of the brain is formed. On the contrary, a faulty gene does not generate any protein at all. Since men have a single X chromosome, they are more susceptible to this malady. Women, on the other hand, possess another X chromosome, so they often experience only mild effects. They can also act as carriers of the defective FMR1 gene. Men affected with a fragile X gene cannot pass it on to a son, as they give only a Y chromosome to a male child. But they can transmit it to their daughters, since daughters receive an X chromosome from the father. Women acting as carriers of the FMR1 gene can pass it to both their daughters and sons. The sons of such women who get the faulty gene have a high chance of developing intellectual limitations later in their life. As a matter of fact, not all victims of Fragile X syndrome exhibit identical symptoms. Affected people belonging to the same family can show different symptoms. The symptoms affect the victim's mental abilities, physical attributes, sensory perceptions, and language skills. The most significant effect of this disease is psychological impairment. The IQ levels of affected people can vary, and a person with reduced IQ can face severe hardships in learning. Other important traits include behavioral and emotional problems, hyperactivity, and unpredictable mood swings. Among the physical problems that the ailment causes, large ears and a long face are commonplace in the victims. In some affected people, the ears can become protruding, and they can have flat feet. Some of them can also have larger testicles. A victim can also suffer from speech problems: the speech of the victims can be cluttered, and other difficulties of pronunciation may also arise. Another severe consequence of this disease is autism. The syndrome can be identified by a DNA test, and antenatal testing is also available. Unfortunately, no specific cure is available for Fragile X syndrome. However, the families of affected children should provide educational and emotional support to them in order to overcome the difficulties. Often, long-term rehabilitative measures are essential in managing this malaise. Doctors feel that further research into and understanding of the gene and the underlying causes of its mutation can bring hope for the victims. As of now, doctors rely on providing specialized education to the affected children.
Behavioral therapy can also be used to treat patients with severe behavioral limitations. Patients with a family history of the disease are advised to undergo genetic counseling. This helps them ascertain the likelihood of having children with the ailment. It also indicates how severe the symptoms may become in the next generation. To sum up, there is no proper cure for this genetic ailment. Care should, however, be taken to make sure that this disease does not pass on to the next generation, as it can then assume dangerous proportions.
What You Need:
1. Chairs in a circle, with one less chair than there are players; one person stands in the middle. Example: 15 players, 14 chairs in a circle, 1 player standing in the middle.
2. Word list with affixes or root words that you have studied in class for the player in the middle to call from. Example Class 1: This class had learned the affixes shown in red (see list below). Example Class 2: This class had learned the root words in red (see list below).
[Example Class 1 word list] [Example Class 2 word list]
Example Class 1: These note cards (1 for each player, but only 3 shown here) would correspond with the above Example Class 1 word list.
1. Start off by putting the word list on the document camera, and read it through chorally with your class. Point out the target affixes and/or root words to build confidence and show them that they already know a significant portion of the word because they have studied these word parts.
2. Explain that when they are in the middle they must read one word—ask them to please challenge themselves, but of course if they are not sure of a word, they can choose a word from the list that they know with certainty. They also have the option of asking a teacher to help them read a word from the list. This is also a chance to encourage students to have you repeat any words that they are unsure of.
3. Hand out the note cards to the entire class, one per student, with target affixes and/or root words written down. Tell students not to show the note cards to each other; they have one minute to read the words on their note cards. You can tell them to put the note card face down when they are sure they know the words; teachers may want to help students who leave the note cards face up for 30 seconds or more.
4. The teacher starts in the middle for the practice round. Explain that you are going to read a word from the list. If a student's target affix and/or root word is in the word, they must get up and move to a different chair. They MAY NOT move to the chair that is directly beside them. The person in the middle tries to get a chair, and the one left standing is in the middle and reads the next word from the list. Example Class 1: The person in the middle reads attractive; all the players with -ive on their card must get up and switch chairs, and the person in the middle tries to sit down. Example Class 2: The person in the middle reads benefit; all the players with bene- on their card must get up and switch chairs, and the person in the middle tries to sit down.
5. Set a time limit of 10 minutes, and if you want you may offer a prize to the students who never end up in the middle.
Stories from the Classroom
1. This game really does challenge students' auditory processing skills. We play this game only after they are able to decode a word part with automaticity: students can recognize -ive and bene- all day long when I show them a flash card, or a word, and ask them what the suffix or root word is. Thus, they are able to decode (read) these word parts. With this game we are really asking them to take the first step in encoding (spelling), which is hearing a word, or saying a word to yourself, and then identifying the parts that make up the word without seeing them.
2. I played this game with my students when I was 7 months pregnant. It can be a slightly physical game, and at one point, one of my more concerned students screamed, "Stay away from Ms. Young's belly!" I hope you enjoy playing Fruit Cocktail in your classroom!
A heart attack, or myocardial infarction, occurs when blood flow to the heart is blocked. Heart attacks are caused by blockages in coronary arteries, the blood vessels that carry oxygen-rich blood to the heart. A heart attack is a medical emergency. Blocked or reduced blood flow to the heart damages the heart muscle. If blood flow is not restored quickly, the heart muscle will begin to die. Blood flow to the heart can become completely cut off or severely reduced when a blood clot gets lodged in an artery that has previously been narrowed by a build-up of plaque. People with a build-up of plaque in their arteries have coronary heart disease, a major risk factor for heart attacks. Plaque is a combination of fat, cholesterol, and other substances that build up in the inner lining of the artery walls. This condition is often referred to as atherosclerosis, or "hardening of the arteries."

Heart Attacks and Cardiac Arrest

The term "heart attack" is often incorrectly used to describe cardiac arrest — when the heart suddenly stops beating. While heart attacks can lead to cardiac arrest, the heart doesn't always stop beating during a heart attack. Someone in the United States has a heart attack every 43 seconds, according to the Centers for Disease Control and Prevention (CDC). In a 2014 report, the American Heart Association estimated that around 1 million people in the United States have a heart attack each year. About 1 in 6 people who suffer a heart attack die as a result. Heart disease is the leading cause of death among adults in the United States.

Types of Heart Attack

Heart attacks are divided into types based on severity.

STEMI Heart Attacks: STEMI heart attacks are the deadliest type of heart attack. STEMI is short for ST-segment elevation myocardial infarction. Sometimes called a massive heart attack or a widowmaker heart attack, a STEMI heart attack happens when a coronary artery is completely blocked. As a result, a large portion of the heart cannot receive blood, and the heart muscle quickly begins to die.

NSTEMI Heart Attacks: NSTEMI heart attacks happen when blood flow to the heart through a coronary artery is severely restricted but not entirely blocked. NSTEMI stands for non-ST segment elevation myocardial infarction. An NSTEMI heart attack, sometimes referred to as a mini heart attack or mild heart attack, usually causes less damage to the heart than a massive heart attack.

Silent Heart Attacks: Some people have heart attacks with no or few symptoms. These are often referred to as silent heart attacks. Though they come with no symptoms, silent heart attacks are not harmless. They can cause permanent damage to the heart muscle.

Heart Attack Complications

Complications may arise after a heart attack, depending on the location and extent of damage to the heart. Common heart attack complications include:

Arrhythmia: Arrhythmias happen when the electrical signals that control heartbeats can't travel properly through the heart. An arrhythmia may cause heart palpitations or an irregular heartbeat.

Heart Failure: Damage to the heart from a heart attack or coronary heart disease can lead to problems pumping blood to and from the heart. Heart failure happens when the heart's pumping action becomes weaker and the heart cannot pump enough blood to meet the body's needs.

Valve Problems: A heart attack may damage the valves that keep blood flowing in the correct direction through the heart. Valve problems can lead to abnormal heart murmurs.
Depression: A heart attack can be a scary, stressful, life-changing event. About one in five people who survive a heart attack will develop depression shortly after the incident, according to medical studies. - Heart attack prevalence; CDC. - Heart disease statistics; American Heart Association. - Atherosclerosis; American Heart Association. - Types of heart attacks; The Society for Cardiovascular Angiography and Interventions. - Silent heart attacks; National Heart, Lung, and Blood Institute. - RB Williams. (2011). “Depression After Heart Attack.” Circulation. Last Updated: 6/9/2015
This interactive math activity helps students practice multiplying decimals. In this math lesson, students will solve multiplication problems in which both of the factors are numbers with decimals. The interactive multiplication problems are presented in a variety of question formats. Some questions are horizontal multiplication problems. Others are vertical multiplication problems. There are also a number of word problems in this multiplying decimals activity.

Students are allowed a specific number of hints in this multiplication lesson. The hint will help students figure out the first step in the multiplication problem. For example, the hint may say, "Step 1: multiply the tenths." The hint will also display a graphic with this first step of the multiplication question completed. When students have used up all their hints in a lesson, the hint window will display a message saying there are no more hints available.

One of the amazing features of this interactive multiplying decimals lesson is the detailed explanation page that appears when a student answers a question incorrectly. The explanation graphic contains a detailed breakdown of the multiplication problem in four easy-to-follow steps: multiply the tenths; multiply the ones; multiply the tens; add the partial products. The explanation also includes important reminders, such as "Remember to regroup when necessary" and "Move the decimal three places." The thorough explanations in this engaging math lesson help students learn from their mistakes and improve their understanding of multiplication math skills.

When students are immersed in this multiplying decimals math activity, they will feel like they are playing a fun, challenging math game. Teachers will see each "I Know It" lesson as an invaluable opportunity for their students to practice basic math skills. It's a win-win for students and teachers alike!

You'll love the kid-friendly interface in this interactive math lesson. The questions are displayed in an easy-to-read format. The hint button is just a click away. In the upper-right corner of the screen, a progress icon tells students how many questions they've answered in the lesson. Below the progress icon, a score-keeper lets students know how many points they've accumulated so far as they solve the decimal multiplication questions correctly.

Another incredible feature of this multiplying decimals activity for kids is the audio option available for every question in the math lesson. Simply click the audio icon next to the question text, and the question will be read aloud in a clear voice. This option is great for students who have difficulty reading or students with ESL/ELL needs.

We're excited for you and your students to try out this interactive multiplication game in your multiplying with decimals lesson. Will you give it a try today? Learning how to multiply with decimals has never been more fun than with math practice from "I Know It"! Remember to browse our growing collection of 5th grade math activities today!

Did you know you can try any "I Know It" lesson for free? That's right! The number of questions available for you to answer will be limited until you become a member of the website, but we're convinced you'll love the quality content in each of our math lessons, including multiplying decimals.
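To make the four steps quoted above concrete, here is a short worked example (the numbers are our own, not taken from the lesson):

    2.4 × 1.3
    Multiply the tenths: 2.4 × 0.3 = 0.72
    Multiply the ones:   2.4 × 1   = 2.4
    Add the partial products: 0.72 + 2.4 = 3.12

    Check with the whole-number shortcut: 24 × 13 = 312; the two factors
    have two decimal places in total, so move the decimal two places: 3.12

Either route gives 2.4 × 1.3 = 3.12.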
Besides the excellent variety and quality in the math activities themselves, we think you're also going to enjoy the administrative tools for teachers on “I Know It.” Teachers can create student logins, view their students' scores for completed lessons, adjust lesson settings, and access student progress reports. Additionally, educators are able to assign lessons based on the skill level and individual needs of each student. This is possible because grade levels on “I Know It” are alternately referred to as “Level A,” “Level B,” “Level C,” instead of “1st Grade,” “2nd Grade,” “3rd Grade,” etc. Students won't see a grade level attached to the lesson they're working on, making it easy for teachers to adjust settings to meet the child's educational needs. We've categorized this multiplying decimals activity as a Level E lesson. It may be appropriate to use at a fourth grade or fifth grade level. Number and Operations in Base Ten Perform Operations with Multi-Digit Whole Numbers and with Decimals to Hundredths Students should be able to add, subtract, multiply, and divide decimals to the hundredths place, using the assistance of concrete models or drawings to develop strategies based on place value and the properties of operations. Students should then be able to relate the strategy to a written method and explain the reasoning used. You might also be interested in.... Multiplying Decimals by Whole Numbers In this math activity, students will multiply decimal numbers by whole numbers. Regrouping and moving the decimal are math skills included in this lesson. Multiplying 4-Digit by 2-Digit Numbers In this math game, students will multiply 4-digit numbers by 2-digit numbers. Questions are presented in a vertical, horizontal, and word problem format.
When we say reversible computing, we mean performing computation in such a way that any previous state of the computation can always be reconstructed given a description of the current state. Such a computation is "reversible" since the reconstruction of previous states could be applied to allow progressing backwards through the computer's sequence of states, in a time-reversed fashion. Maintaining the property of reversibility requires that no information about the state of the computer can ever be thrown away. This might seem to be a severe restriction, causing a reversible computer's memory to fill up quickly with all the intermediate results that are normally thrown away during a computation. However, it turns out that reversibility can actually be maintained with only a very gradual increase in storage requirements, given some modest overhead in terms of computation time. Moreover, it seems that many interesting tasks can be performed reversibly without any significant memory or time overhead. Also, reversibility isn't an all-or-nothing proposition; if the overhead of total reversibility is too high, it can be greatly reduced by allowing information to be thrown away occasionally, while still allowing almost all of the benefits of the reversibility to be realized. Reversible computing may have applications in computer security and transaction processing, but we anticipate that the main long-term benefit will be to allow faster, smaller computers. You see, physics itself is actually reversible, and so whenever a computer is supposedly "erasing" some piece of information (e.g. by grounding a circuit node that is holding charge representing a bit of information), in actuality that information is only being changed into a different form, namely heat, and the removal of this heat puts severe limits on the amount of computation that can be performed in a small, compact volume of space in a short time. If, on the other hand, the computer's operation is reversible, then any unwanted information can be managed in a much more efficient fashion, and can often be reabsorbed back into the state of the computation, never requiring dissipation to heat, or removal from the system. We believe this trick of avoiding information removal will ultimately allow much higher computational density to be achieved than if computers remain irreversible and continuously produce unwanted information which must be removed by cooling mechanisms which are relatively inefficient.
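To see what a reversible operation looks like in practice, here is a tiny C++ sketch (our own illustration, not from the text above) of the Toffoli gate, a classic reversible logic gate. An ordinary AND gate destroys information -- from the output alone you cannot recover the inputs -- but the Toffoli gate computes an AND while keeping enough state around that applying the very same gate a second time undoes the computation:

    #include <cstdio>

    // Toffoli (controlled-controlled-NOT) gate: flips c only when a and b
    // are both 1. It is its own inverse, so no information is ever lost.
    void toffoli(bool a, bool b, bool &c) {
        c = c != (a && b);   // XOR c with (a AND b)
    }

    int main() {
        bool a = true, b = true, c = false;

        toffoli(a, b, c);    // with c starting at 0, c now holds a AND b
        std::printf("a AND b = %d\n", c);

        toffoli(a, b, c);    // running the same gate again "uncomputes" c
        std::printf("c restored to %d\n", c);
        return 0;
    }

This is the trick the text alludes to: intermediate results can be reabsorbed into the computation by running the steps that produced them in reverse, rather than being erased and dissipated as heat.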
Astronomers have found a diamond the size of Earth. The cooled white dwarf star, a huge chunk of crystallized carbon, is orbiting a pulsar about 900 light-years away, according to National Geographic. Read the rest Okay, maybe I'm an idiot, but this is one of those facts I'd missed until recently. Despite the impression you may have gotten from grade school and/or old Superman cartoons, diamonds are probably not lumps of coal that just got compressed real good—at least, not in exactly the way you might imagine. Diamonds are made out of carbon, but the best evidence suggests that they form far more deeply down in the Earth than coal does. Instead of coal being smushed into diamonds, imagine something more like those "grow crystals out of Borax and water" experiments you did in grade school. Only, in this case, the experiment is performed in the fiery depths of Hell, as very un-coal-like atoms of carbon are compressed and heated deep in the Earth's mantle until they start to bond together and grow into a crystalline structure. Once the crystals are formed, they get to the surface of the Earth via volcanic eruptions. The really interesting thing about all of this is that it's one of those ideas that's very hard to verify. Diamonds form at a depth we can't go observe directly. All we have to work with is indirect evidence. Because of that, nobody knows exactly where the necessary carbon to make diamonds comes from. This is why the "diamonds are coal" story exists. Some scientists think the carbon is stuff that's existed in the Earth since this planet was formed. Others think it might be coming from terrestrial carbon that got shifted down to the lower levels via plate subduction—although, even then, we're talking about carbon, but not necessarily coal. Read the rest
Neanderthal man, or Homo sapiens neanderthalensis, takes its name from the region in which it was first discovered (the Neander valley near the German city of Dusseldorf, in 1856). It had a receding forehead, an unobtrusive jaw, and a large nose. It is believed to have disappeared, for unknown reasons, some 30,000 years ago. Even though the fact that Neanderthals were actually true human beings is now accepted by many evolutionists, others still tend to portray them as primitive cave-dwellers devoid of abstract thought. The fact is, however, that the many discoveries in recent years regarding Neanderthal anatomy and culture have shown that this tendency is unfounded and that Neanderthals were a genuine human race. Above all, anatomical and artistic comparisons between modern and Neanderthal man have revealed no evolutionary superiority. The fact that Neanderthals had powerful bodies or narrow foreheads does not mean they were a primitive species. The large people from Northwest Europe cannot, for instance, be said to be more primitive than the smaller Chinese and pygmies, because skeletal structure is not a defining factor in behaviour and level of intelligence. Furthermore, if anatomical features are to be taken as a criterion, then according to evolutionist logic Neanderthals should be regarded as more intelligent than modern man. This is because evolutionists base human intelligence on brain size, and the Neanderthal brain volume is on average 13% greater than that of modern man. Discoveries from the caves in which Neanderthals lived have provided important clues to the ways in which they behaved like human beings. We know, for instance, that Neanderthals treated the sick and injured and buried their dead with flowers. This of course demonstrates that Neanderthals were social beings who possessed the concepts of love and affection. The following are studies which have revealed that Neanderthals were genuine human beings, and comments by experts based on these:

• Erik Trinkaus, an expert who spent many years studying Neanderthal anatomy, states his conclusions in these terms: "Detailed comparisons of Neanderthal skeletal remains with those of modern humans have shown that there is nothing in Neanderthal anatomy that conclusively indicates locomotor, manipulative, intellectual, or linguistic abilities inferior to those of modern humans." (1)

• One of the most interesting Neanderthal discoveries is a flute made from a bear bone. The musicologist Bob Fink, who analysed this bone, found in a cave in Northern Yugoslavia in 1995, established that the instrument produced four notes and had half and full tones. This discovery shows that the Neanderthals used the seven-note scale which forms the basis of Western music. (2) Fink states that "the distance between the second and third holes on the old flute is double that between the third and fourth." This means that the first distance represents a full note, and the distance next to it a half note. Fink says, "These three notes … are inescapably diatonic and will sound like a near-perfect fit within any kind of standard diatonic scale, modern or antique," thus revealing that Neanderthals were people with an ear for and knowledge of music.

A 26,000-year-old sewing needle, proved to have been used by Neanderthal people, was also found during fossil excavations. This needle, which is made of bone, is exceedingly straight and has a hole for the thread to be passed through.
(3) People who wear clothing and feel the need for a sewing needle cannot be considered "primitive."

Steven L. Kuhn, a New Mexico University professor of anthropology and archaeology, and Mary C. Stiner spent many years studying Neanderthal caves on the shores of Southwest Italy and concluded that Neanderthals engaged in activities requiring as complex a capacity for thought as that of modern-day human beings. (4)

All this is proof that Neanderthals were true human beings. The error of the "primitive Neanderthal man," in which certain evolutionist publications still persist, is an extension of evolutionists' tendency to interpret fossils from a Darwinist perspective. Erik Trinkaus, an expert on Neanderthals, points to this tendency in these words:

'Infuriatingly, the fossils do not speak for themselves. It is the examining scientists who bring them to life, often endowing them with their own best or worst characteristics. Each generation projects onto Neandertals its own fears, culture, and sometimes even personal history. They are a mute repository for our own nature, though we flatter ourselves that we are uncovering theirs rather than displaying ours.

'This is especially evident in one of the more fascinating aspects of the twisting tale of Neandertals and their interpretation: the creation of full-flesh reconstructions…' (5)

Another scientist who refers to the preconceptions on the subject of the Neanderthals is the archaeologist Jan Simek from Pennsylvania University. Himself an evolutionist, Simek carried out excavations in a Neanderthal cave in Southwest France together with Jahn Phipriga, and put forward a thesis, based on the fires that had been lit in the cave and the large number of fish bones there, which gave rise to a wide debate. He stated that large quantities of grasses had been burned, and that this could have had no purpose other than to keep flies off or else to smoke fish. Many evolutionists at that time opposed the thesis by saying that smoking fish in order to use them later required planning ability, and that it was impossible for Neanderthals to have managed such a feat. Simek set out the Darwinist preconceptions opposing his theory in the documentary 'Neandertal Enigma,' broadcast on National Geographic TV on March 26, 2003:

'Some anthropologists only consider the differences. Cultural and biological differences are sought and found, and it is said that they (the Neanderthals) were a different species from modern man, or even a different race. In my view, it is we who are responsible [for the debates concerning whether the Neanderthals were a species eliminated in the fictitious process of evolution or who else carried their genes down to the present day by mixing with Homo sapiens]. As all the many people engaged in archaeology and anthropology, we are guilty of approaching the facts in such a prejudiced manner.'

The attempts to portray as primitive the Neanderthals, who have been proven by scientific discoveries to have possessed abstract thought, stem from Darwinist prejudices. Evolutionists are deceived into thinking that there is constant conflict in nature and that life is a fight for survival. This leads them to support groundless scenarios regarding the Neanderthals and to portray Neanderthal man as a primitive creature eliminated during the struggle with Homo sapiens.

(1) Erik Trinkaus, "Hard Times Among the Neanderthals," Natural History, vol. 87, December 1978, p. 10; R. L.
Holloway, "The Neanderthal Brain: What Was Primitive," American Journal of Physical Anthropology Supplement, vol. 12, 1991, p. 94. (emphasis added) (2) "Neandertals Lived Harmoniously," The AAAS Science News Service, April 3, 1997 (3) D. Johanson, B. Edgar, From Lucy to Language, p. 99. (4) S. L. Kuhn, "Subsistence, Technology, and Adaptive Variation in Middle Paleolithic Italy," American Anthropologist, vol. 94, no. 2, March 1992, pp. 309-310. (5) Trinkaus, E. and Shipman, P. The Neandertals, Alfred A. Knopf, New York, 399, 1992
Anatomy of the Stomach, Gallbladder, and Pancreas A hollow muscular organ about the size of 2 closed fists, the stomach is located inferior to the diaphragm and lateral to the liver on the left side of the abdominal cavity. The stomach forms part of the gastrointestinal tract between the esophagus and the duodenum (the first section of the small intestine). The wall of the stomach contains several layers of epithelium, smooth muscle, nerves, and blood vessels. The innermost layer of the stomach is made of epithelium containing many invaginations known as gastric pits. The cells of the gastric pits produce gastric juice - an acidic mixture of mucus, enzymes and hydrochloric acid. The hollow portion of the stomach serves as the storage vessel for food before it moves on to the intestines to be further digested and absorbed. At the inferior end of the stomach is a band of smooth muscle called the pyloric sphincter. The pyloric sphincter opens and closes to regulate the flow of food into the duodenum. The gallbladder is a 3-inch long pear-shaped sac located on the posterior border of the liver. Connected to the bile ducts of the liver through the cystic duct, the gallbladder receives bile transported from the liver for storage on a regular basis to prepare for the digestion of future meals. During digestion of a meal, smooth muscles in the walls of the gallbladder contract to push bile into the bile ducts that lead to the duodenum. Once in the duodenum, bile helps with the digestion of fats. The pancreas is a 6-inch long heterocrine gland located inferior to the stomach and surrounded by the duodenum on its medial end. This organ extends laterally from the duodenum toward the left side of the abdominal cavity, where it tapers to a point. The pancreas is considered a heterocrine gland because it has both endocrine and exocrine gland functions. Small masses of endocrine cells known as pancreatic islets make up around 1% of the pancreas and produce the hormones insulin and glucagon to regulate glucose homeostasis in the blood stream. The other 99% of the pancreas contains exocrine cells that produce powerful enzymes that are excreted into the duodenum during digestion. These enzymes together with water and sodium bicarbonate secreted from the pancreas are known as pancreatic juice. Physiology of the Stomach, Gallbladder, and Pancreas The stomach, gallbladder, and pancreas work together as a team to perform the majority of the digestion of food. - Food entering the stomach from the esophagus has been minimally processed – it has been physically digested by chewing and moistened by saliva, but is chemically almost identical to unchewed food. - Upon entering the stomach, each mass of swallowed food comes into contact with the acidic gastric juice, which contains hydrochloric acid and the protein-digesting enzyme pepsin. These chemicals begin working on the chemical digestion of the molecules that make up the food. - At the same time, the food is mixed by the smooth muscles of the stomach wall to increase the amount of contact between the food and the gastric juice. The secretions of the stomach also continue the process of moistening and physically softening the food until the food becomes an acidic semi-liquid material known as chyme. - At this point, the stomach begins to push the chyme through the pyloric sphincter and into the duodenum.
- In the duodenum, the bulk of digestion is completed thanks to the preparation of chyme by the stomach and the addition of secretions from the gallbladder and pancreas. Bile from the gallbladder acts as an emulsifier to break large masses of fats into smaller masses. Pancreatic juice contains bicarbonate ions to neutralize the hydrochloric acid of chyme. Enzymes present in the pancreatic juice complete the chemical digestion of large molecules that began in the mouth and stomach. - The completely digested food is then ready for absorption by the intestines. The stomach, gallbladder, and pancreas all function together as storage organs of the digestive system. The stomach stores food that has been ingested and releases it in small masses to the duodenum. The release of small masses of food at a time improves the digestive efficiency of the intestines, liver, gallbladder, and pancreas and prevents undigested food from making its way into feces. As they are accessory organs of the digestive system, the gallbladder and pancreas have no food passing through them. They do, however, act as storage organs by storing the chemicals necessary for the chemical digestion of foods. The gallbladder stores bile produced by the liver so that there is a sufficient supply of bile on hand to digest fats at any given time. The pancreas stores the pancreatic juice produced by its own exocrine glands so that it is prepared to digest foods at all times. The stomach, gallbladder, and pancreas all share the common function of secretion of substances from exocrine glands. The stomach contains 3 different exocrine cells inside of its gastric pits: mucous cells, parietal cells, and chief cells. - Mucous cells produce mucus and bicarbonate ion that cover the surface of the stomach lining, protecting the underlying cells from the damaging effects of hydrochloric acid and digestive enzymes. - Parietal cells produce hydrochloric acid to digest foods and kill pathogens that enter the body through the mouth. - Chief cells produce the protein pepsinogen that is turned into the enzyme pepsin when it comes into contact with hydrochloric acid. Pepsin digests proteins into their component amino acids. The mixture of mucus, hydrochloric acid, and pepsin is known as gastric juice. Gastric juice mixes with food to produce chyme, which the stomach releases into the duodenum for further digestion. The gallbladder stores and secretes bile into the duodenum to aid in the digestion of chyme. A mixture of water, bile salts, cholesterol, and bilirubin, bile emulsifies large masses of fats into smaller masses. These smaller masses have a higher ratio of surface area to volume when compared to large masses, making it easier for them to be digested. The pancreas stores and secretes pancreatic juice into the duodenum to complete the chemical digestion of food that began in the mouth and stomach. Pancreatic juice contains a mixture of enzymes including amylases, proteases, lipases, and nucleases. - Carbohydrates entering the small intestine are broken down into monosaccharides by enzymes such as pancreatic amylase, maltase, and lactase. - Proteins in the duodenum are chemically digested into amino acids by pancreatic enzymes such as trypsin and carboxypeptidase. - Pancreatic lipase breaks triglycerides into fatty acids and monoglycerides. - The nucleic acids DNA and RNA are broken down by nucleases into their component sugars and nitrogenous bases. Several hormones are used to regulate the functions of the stomach, gallbladder, and pancreas. 
The hormones gastrin, cholecystokinin, and secretin are secreted by organs of the digestive system in response to the presence of food and change the function of the stomach, gallbladder, and pancreas. Our pancreas produces the hormones insulin and glucagon to affect the behavior of cells throughout the body. - Gastrin is a hormone produced by the walls of the stomach in response to the filling of the stomach with food. Food stretches the stomach walls and raises the normally acidic pH of the stomach. G cells in the gastric glands of the stomach respond to these changes by producing gastrin. G cells release gastrin into the blood where it stimulates the exocrine cells of the stomach to produce gastric juice. Gastrin also stimulates smooth muscle tissue of the gastrointestinal tract to increase the mixing and movement of food. Finally, gastrin relaxes the smooth muscles that form the pyloric sphincter, causing the pyloric sphincter to open. The opening of the pyloric sphincter allows food stored in the stomach to begin entering the duodenum for further digestion and absorption in the intestines. - Cholecystokinin (CCK), a hormone produced in the walls of the small intestine, is released into the bloodstream in response to the presence of chyme in the intestine that contains high levels of proteins and fats. Proteins and fats are more difficult for the body to digest than carbohydrates are, so CCK is important in making changes to the digestive system to handle these types of foods. CCK travels through the bloodstream to the stomach, where it slows the emptying of the stomach to give the intestines more time to digest the protein- and fat-rich chyme. CCK also stimulates the gallbladder and pancreas to increase their secretion of bile and pancreatic juice to improve the digestion of fats and proteins. Finally, CCK is detected by receptors in the satiety center of the hypothalamus that control the feeling of hunger. The satiety center reads the presence of CCK as an indication that the body is no longer hungry for food. - Secretin is another hormone produced by the intestinal walls, but unlike CCK, it is produced in response to the acidity of chyme that the stomach releases into the duodenum. Secretin flows through the bloodstream to the stomach, where it inhibits the production of hydrochloric acid by parietal cells. Secretin also binds to receptors in the gallbladder and pancreas, stimulating them to secrete increased amounts of bile and pancreatic juice. Sodium bicarbonate present in pancreatic juice neutralizes the acidity of the chyme to prevent damage to the walls of the duodenum and provides a neutral pH environment for the digestion of chyme. - Insulin is a hormone produced by the beta cells of the pancreatic islets of the pancreas. The pancreas produces insulin in response to the presence of high levels of glucose in the blood. Insulin stimulates cells, particularly in the liver and skeletal muscles, to absorb glucose from the blood and use it as an energy source or store it as glycogen. Insulin also stimulates adipocytes to absorb glucose to build triglycerides for energy storage. Our body produces higher levels of insulin following a meal in order to remove glucose molecules from the blood before they can reach high concentrations and become toxic to the body’s cells. - Glucagon is a hormone produced by the alpha cells of the pancreatic islets of the pancreas. 
Glucagon acts as an antagonist to insulin by stimulating the release of glucose into the bloodstream to raise blood glucose levels between meals. Hepatocytes in the liver store glucose in large macromolecules known as glycogen. Glucagon binding to receptors on hepatocytes triggers the breakdown of glycogen into many glucose molecules, which are then released into the bloodstream. Prepared by Tim Taylor, Anatomy and Physiology Instructor
If you want to take an electric car on a long drive, you need a gas-powered generator, like the one in the Chevrolet Volt, to extend its range. The problem is that when it's running on the generator, it's no more efficient than a conventional car. In fact, it's even less efficient, because it has a heavy battery pack to lug around. Now researchers at the University of Maryland have made a fuel cell that could provide a far more efficient alternative to a gasoline generator. Like all fuel cells, it generates electricity through a chemical reaction, rather than by burning fuel, and can be twice as efficient at generating electricity as a generator that uses combustion.

The researchers' fuel cell is a greatly improved version of a type that has a solid ceramic electrolyte, and is known as a solid-oxide fuel cell. Unlike the hydrogen fuel cells typically used in cars, solid-oxide fuel cells can run on a variety of readily available fuels, including diesel, gasoline, and natural gas. They've been used for generating power for buildings, but they've been considered impractical for use in cars because they're far too big and because they operate at very high temperatures—typically at about 900 °C.

By developing new electrolyte materials and changing the cell's design, the researchers made a fuel cell that is much more compact. It can produce 10 times as much power, for its size, as a conventional one, and could be smaller than a gasoline engine while producing as much power. The researchers have also lowered the temperature at which the fuel cell operates by hundreds of degrees, which will allow them to use cheaper materials. "It's a huge difference in cost," says Eric Wachsman, director of the University of Maryland Energy Research Center, who led the research. He says the researchers have identified simple ways to improve the power output and reduce the temperature further still, using methods that are already showing promising results in the lab. These advances could bring costs to a point where they are competitive with gasoline engines. Wachsman says he's in the early stages of starting a company to commercialize the technology.

Wachsman's fuel cells currently operate at 650 °C, and his goal is to bring that down to 350 °C for use in cars. Insulating the fuel cells isn't difficult since they're small—a fuel cell stack big enough to power a car would only need to be 10 centimeters on a side. High temperatures are a bigger problem because they make it necessary to use expensive, heat-resistant materials within the device, and because heating the cell to operating temperatures takes a long time. By bringing the temperatures down, Wachsman can use cheaper materials and decrease the amount of time it takes the cell to start.
The Remembering or Knowledge level (Bloom's Taxonomy) involves bringing to mind the information to be recalled. That's it. Simple. Simple unless you have a difficult time remembering things, or the information to be remembered had little or no significance to you at the time you encountered it. Remembering requires bringing back to mind something we have already memorized. When we think of memorizing, we typically make it more difficult than it really needs to be. For example, there are favorite poems that we can remember line for line without trying. There are parts of a book that we can relate to others with perfect recall. So we can think of this as a simple cause and effect, or input and output. If we want something to be remembered, the mind will need to act on it in some way — even if that means simply enjoying it. Otherwise, if we are strictly memorizing without anything to hang the information on, we can expect the value to be very short term (remember it for a test, and forget it the next minute). At this level, think like a reporter. Just the facts. Who, what, where, when, how. Or, if you prefer verbs: list, recite, tell, repeat, recall, memorize, label. Here are a variety of activities you can use for the Remembering or Knowledge level:
- Create a timeline for a character in history.
- List the characters in a story using this interactive from ReadWriteThink.
- Recite a poem you have memorized.
- Tell me what a story you have read was about.
- Recite the alphabet.
- Repeat the "two times" table.
- Tell me what you saw in the painting you viewed.
- Recall your favorite summer.
- Listen to a classical composer.
- Memorize and recite a favorite Scripture passage.
- Label the states on a map of the United States.
The Natural Application
It is much easier to remember something that has relevance for us or that is important to us. Determine what interests your child. What is he into at the moment? Help him pick out several books to read that deal with this interest.
Up next: Understanding
Younger students can recall their ABCs with the interactive at ReadWriteThink.com.
Math Flash Cards
Students can practice recalling their math facts.
Create Flash Cards
Application that helps you create your own flash cards. Great for reviewing anything and everything — including Latin vocabulary.
Programmable logic controller

A programmable logic controller (PLC) is a digital system used to automate electrical and mechanical processes. PLCs can be found in almost all heavy industries that require process control and sequencing. A PLC contains a set of inputs and outputs, and its logic is programmed so that users can create a highly dedicated control system for their intended purpose.
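As a rough illustration of how a PLC behaves, here is a short C++ sketch (our own simplified illustration -- real PLCs are programmed in languages such as ladder logic or structured text, and the I/O functions below are hypothetical stand-ins for hardware access). It shows the classic scan cycle -- read inputs, execute the programmed logic, write outputs, repeat -- with a standard motor start/stop seal-in rung as the logic:

    #include <chrono>
    #include <cstdio>
    #include <thread>

    // Hypothetical stand-ins for reading/writing the PLC's physical I/O.
    bool readStartButton() { return true; }                 // stub input
    bool readStopButton()  { return false; }                // stub input
    void writeMotorOutput(bool on) { std::printf("motor %s\n", on ? "ON" : "OFF"); }

    int main() {
        bool motorOn = false;
        for (int scan = 0; scan < 5; ++scan) {              // a real PLC loops forever
            bool start = readStartButton();                 // 1. input scan
            bool stop  = readStopButton();
            motorOn = (start || motorOn) && !stop;          // 2. program logic:
                                                            //    "start or seal-in, unless stop"
            writeMotorOutput(motorOn);                      // 3. output scan
            std::this_thread::sleep_for(std::chrono::milliseconds(10)); // 4. next cycle
        }
        return 0;
    }

The dedicated behavior lives entirely in step 2; swapping in a different program is what lets the same controller serve many different industrial processes.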
What is Feudalism?
Feudalism: the dominant social system in medieval Europe. There was a king, noblemen (lords), knights, and serfs.
What is a Manor?
Manor: a self-sufficient estate of a medieval lord. This was the heart of the medieval economy.
People Who Lived in a Manor
The people who lived in a manor were the king, the lord, and the lord's family.
The Feudal Triangle
How Are Society Groups Interdependent?
The social groups depend on each other because they all have different basic needs that other people can meet. The king provides land for the lords while the lords offer safety for the king. The serfs offer food tokens to the lords in exchange for their land. Also, everyone is dependent on the bishop, because if he prays for you then you will make it to heaven. That is how people in this society are dependent on the other social groups.
How Does the Feudal System Affect Politics?
The feudal system affects politics because people were born into their groups, and that meant people had no choice about who their king would be. Also, lords would take money from the serfs, and the serfs had no choice. They were forced to give up their money, but they did get land in trade for it. Lastly, the lords would choose random serfs to fight in war, so serfs were basically forced to do everything. Lords had a lot of power when it came to making decisions for the government. That is how the feudal system affects politics.
How Does the Feudal System Affect Social Structure?
The feudal system affects social structure because once you are born into one social group, you are forced to stay in that group. So if you were born into the group destined to become king, you would be happy because you know you will have power. The same goes for being a lord, because lords basically run the people while the king is just sitting on his throne. On the other hand, some people were born into the serfs' group, and once you are born into it, you are stuck in it. The serfs would feel cheated and overpowered. That is how the feudal system affects social structure.
How Does the Feudal System Affect the Economy?
The feudal system affected the economy in different ways. First of all, most people today depend on imports and exports, but with the feudal system there was no trade. Everybody depended on each other for their basic everyday needs. Also, each piece of land owned by a lord already took care of itself. People would not have to work on it, though a few serfs might have, so this means some people weren't able to work because land was being taken away. That is how the feudal system affects the economy.
A Few Facts About Thoracic Strain

Thoracic strain involves the muscles of the mid and upper back. To understand what constitutes a thoracic strain, it may be helpful to explain a few things about the part of the body under discussion. The spinal column consists of three parts: the cervical spine, made up of the vertebrae in the neck; the thoracic spine, consisting of the vertebrae from the shoulders to the waist; and the lumbar spine, which consists of the vertebrae from the waist down to the tailbone. The region of the lumbar spine is commonly referred to as the lower back, and the region of the thoracic spine is called the upper back. A thoracic strain, therefore, is one that affects the muscles of the upper back. One can also suffer a thoracic sprain, which is a different situation, although in mild cases it can sometimes be difficult to distinguish between the two. Whereas a strain involves irritation of or injury to a muscle, a sprain affects ligaments or joints, and is often the more serious of the two disorders.

A Thoracic Strain Is Not Common - Short of suffering trauma as the result of a serious accident, damage to the thoracic muscles is not particularly common. A thoracic strain, when it does occur, is most often mild, which is not to say that it does not cause any discomfort. A low-level injury to the back can be very uncomfortable, even quite painful, even though the nature of the injury may not be particularly serious. When we suffer strains affecting the back muscles, they are most apt to happen in the neck muscles (cervical) or the lower back (lumbar). The reason for this is that these two regions are constructed with mobility in mind, while the upper back, the thoracic area, is constructed with strength and stability in mind. The thoracic spine consists of 12 vertebrae, each of which attaches to a pair of ribs. Nine of these rib pairs join together in the chest area at the sternum, the other 3 pairs being "floating" ribs. The result is a relatively strong and stable, though slightly flexible, rib cage. The stability of the rib cage has the effect of protecting the thoracic muscles in the back from sudden or excessive movement, and consequent damage or injury.

Strain Happens - Still, a thoracic strain can and does happen. This type of strain most often happens to athletes, who use their back muscles more than most, and are apt to injure those muscles when moving a weight in an awkward or off-balance position. Non-athletes can also suffer a thoracic strain by using these same muscles in activities such as heavy yard work. In this case the strain usually occurs because the muscles have not been used for some time, and are stressed when suddenly called upon to perform a task, like raking leaves or hauling material in a wheelbarrow. A mild version of a thoracic strain can even occur as a result of poor posture, or sitting over a computer keyboard for an extended period of time.

Mild To Severe Strain - A thoracic strain may be mild. A muscle has been stretched or stressed, but no tearing, except possibly at a microscopic level, has occurred. A medium strain indicates that muscle tearing has occurred, or the tendon attaching muscle to bone has been injured. Whereas a mild strain usually involves low-level pain, a medium strain is often accompanied by a loss of strength in the muscle involved, and significantly more pain. A thoracic strain is said to be severe when there has actually been a rupture of a muscle and/or tendon. A fourth condition is chronic strain.
Chronic strain is usually mild, but recurring, and occurs with constant overuse of a muscle or muscle group. The example cited above, hovering over a computer keyboard for long periods of time, is a situation which could contribute to chronic strain.

Treatment - Treatment for a mild thoracic strain, and at times even a more moderate strain, usually consists of a period of rest, the taking of pain pills, and, in the case of swelling, alternating hot and cold treatments (heat lamps and ice packs). Massage can be very effective as well, as long as the injury is not severe. In the event a severe strain occurs, the course of treatment needs to be determined by a physician. If pain persists at any level, it is best to consult with a physician, as if proper treatment is not given, a chronic situation could result. No one wants a chronic back problem, thoracic or otherwise, if they can avoid it.
- What Makes the Declaration Unique?
- What Is Equality?
- What Is the Basis for the Theory of the Declaration?

"The laws of nature and of nature's God" are the beginning point of the political theory of the American founding. They explain the Founders' decision to declare America's independence from England. But what does this phrase mean--"the laws of nature and of nature's God"?

First, it means that nature encompasses laws: certain obligations are prescribed for all human beings by nature--or more specifically, by the fact that all humans share a common nature. Today, some scientists claim that "nature knows no morals." For the Founders, that is what one might expect to hear from a tyrant like Hitler or Stalin, but not from anyone who understands that human nature itself, rightly understood, provides objective standards of how human life should be lived.

Second, "laws of nature" are laws that can be grasped by human reason. The Founders did not believe--as one often hears today--that there is a right to liberty because "who's to say what's right or wrong?" The Founders were not moral relativists. To the contrary, they boldly proclaim that they grasp certain fundamental principles of moral and political conduct.

Third and finally, the "laws of nature," accessible in principle to any person anywhere in the world who thinks clearly about the nature of human beings, mean that the American founding is not based on ideas specifically tied to one people, such as "the rights of Englishmen," but on ideas that are true for all people everywhere. We will begin to see what some of these ideas or principles are in the Declaration's second paragraph.
The mathematics learning area allows learners to develop mathematical skills and understandings that they can apply to many areas of life. The focus of Meaningful Maths [M2] in NT Schools is improving student achievement in mathematics through improving the professional capability of teachers. The approach is informed by developing understandings about mathematical learning and effective professional development. All classroom numeracy programs should implement the Australian Curriculum and be informed by Meaningful Maths philosophies and beliefs. Learners will develop sound strategies for investigating and problem solving, as well as positive attitudes about their capacity to use their mathematics effectively in many life situations. The Alawa Numeracy Plan Overview is Alawa's whole-school approach to Meaningful Maths. For further information about what is covered in each year at school for mathematics, refer to the Australian Curriculum website. For further information about the NZ Maths Number Framework, visit the website.
Imagine that a friend of yours would like to play a game. Your friend writes down two different numbers on two separate slips of paper, which you cannot see. Afterwards you are allowed to choose one of the slips and read the number written on it. The goal of the game is to guess whether the number on the second slip is greater or less than the one you just read. You think there is a 50:50 chance of guessing correctly? Although you may not believe it: the probability of success is definitely higher when you apply the following strategy!

A surprising strategy

The first step of the strategy is making up a random number M. If M is greater than the known first number, then guess that the second number is also greater than the first number. If M is less than the first number, then guess that the second number is less than the first number too. "And how does this help with the probability?", you may ask. Imagine we knew the second number. With the first number F and the second number S we obtain six constellations, which fall into three cases: in case 1, M is less than both F and S; in case 2, M is greater than both; and in case 3, M lies between F and S.

As you can see, we only face a 50:50 chance in case 1 and case 2. This is due to the possibility of giving a wrong guess here. However, in case 3 there is no such option. You will always win as long as your number gets between the known number F and the unknown number S. So the chance of winning has to be greater than 50% when considering all three cases!

Let P1, P2, P3 denote the probabilities that case 1, 2 and 3 occur. Since P1 + P2 + P3 = 1, we can express the probability of a successful guess PS in a mathematical manner:

PS = 0.5·P1 + 0.5·P2 + 1·P3 = 0.5·(P1 + P2 + P3) + 0.5·P3 = 0.5 + 0.5·P3

Even from a mathematician's point of view, the success probability is higher than 0.5, because P3 can only be positive 😉

Can't believe your eyes?

But why is this outcome so counter-intuitive? People tend to think in a too abstract way. They ignore that they are given an extra piece of information, namely the first number. The above strategy makes use of the additional information, while randomly guessing wastes it. Hence, you are better off when applying our guessing strategy. I think you agree that the particular benefit of the strategy depends on M, since its choice determines whether you end up in the third case. I recommend making M equal to 10 or 20, since most people choose pairs like 2 and 87, 1 and 2834, -100 and 100…

Share this fun with your friends 😀
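For readers who want to verify the claim empirically, here is a small Monte Carlo sketch in C++ (our own illustration; the distributions for the friend's numbers and for M are arbitrary assumptions, since the game does not prescribe any):

    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng(42);
        std::normal_distribution<double> hidden(0.0, 50.0); // the friend's two numbers
        std::normal_distribution<double> pivot(0.0, 20.0);  // our random number M

        const int trials = 1000000;
        int wins = 0;
        for (int i = 0; i < trials; ++i) {
            double F = hidden(rng);   // the slip we read
            double S = hidden(rng);   // the slip we must judge
            double M = pivot(rng);

            // Strategy: guess "S > F" exactly when M > F.
            bool guessSecondGreater = (M > F);
            bool secondIsGreater = (S > F);
            if (guessSecondGreater == secondIsGreater) ++wins;
        }
        std::printf("estimated success probability: %.4f\n",
                    (double)wins / trials);  // comes out above 0.5
    }

The size of the edge over 0.5 is exactly 0.5·P3, so it grows with the chance that M lands between the two hidden numbers.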
Define function. Why do we use functions in a program? What is the difference between C functions and C++ functions?

Ans. A function is a set of program statements that can be processed independently. When a function is invoked, it behaves as though its code were inserted at the point of the function call. The communication between a caller (calling function) and a callee (called function) takes place through parameters. Functions are independent because variable names and labels defined within a function body are local to it. The various components associated with functions are as follows:

    void show(int a, int b);   // function declaration (prototype)

    int main() {
        int x = 5, y = 10;     // sample values for the call below
        show(x, y);            // function call
        return 0;
    }

    void show(int a, int b) {  // function definition: declarator plus body
        // ... function body ...
    }

The first statement is the function declaration. This is the line before the beginning of main. It provides the following information to the compiler: (i) the name of the function, (ii) the type of value returned, and (iii) the number and types of the arguments that must be supplied in a call to the function. Function prototyping is one of the key improvements added to C++ functions. When a function call is encountered, the compiler checks the function call against its prototype so that correct argument types are used. The compiler informs the user about any violation in the actual parameters that are to be passed to the function. The function declaration terminates with a semicolon. Function declarations are also called prototypes, since they provide a model or blueprint of the function. The function itself is referred to as the function definition. The first line of the function definition is known as the function declarator and is followed by the function body. Thus, the declarator and the function body make up the function definition. The declarator and the declaration must use the same function name, number of arguments, argument types, and return type. The body of the function is enclosed in braces. C++ allows the definition to be placed anywhere in the program. If the function is defined before its invocation, then its prototype is optional. A function is a dormant entity, which comes to life only when a call to the function is made. A function call is specified by the function name followed by the arguments enclosed in parentheses and terminated by a semicolon. The parameters specified in the function call are known as actual parameters, and those specified in the function declarator are known as formal parameters. When a function call is made, a one-to-one correspondence is established between the actual and the formal parameters. The scope of the formal parameters is limited to their function only. When the function is called, control is transferred to the first statement in the function body. The other statements in the function body are then executed, and control returns to the main program when the closing brace is encountered. A function may or may not return a value. A return statement can occur anywhere in the function body; as soon as it is encountered, execution control returns to the caller. The most important reason to use functions is to divide large programs into small functions. Another advantage of using functions is that it is possible to reduce the size of a program by defining code once and calling it at different places in the program. Any sequence of instructions that appears in a program more than once is a candidate for being made into a function.
The function's code is stored in only one place in memory, even though the function may be executed many times in the course of the program. The use of functions offers flexibility in the design, development, and implementation of the program to solve complex problems.
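To tie the pieces together, here is a minimal, complete sketch of the declaration/call/definition pattern described above (the function name add and the values are arbitrary choices for illustration):

    #include <iostream>

    // Function declaration (prototype): gives the compiler the name,
    // the return type, and the number and types of the arguments.
    int add(int a, int b);

    int main() {
        int x = 3, y = 4;
        int sum = add(x, y);   // function call: x and y are actual parameters
        std::cout << "sum = " << sum << "\n";
        return 0;
    }

    // Function definition: the declarator line followed by the function body.
    int add(int a, int b) {    // a and b are formal parameters, local to add
        return a + b;          // return hands control back to the caller
    }

Because the prototype is visible when the call is compiled, a mismatched call such as add(1, 2, 3) is rejected at compile time, which is exactly the prototype checking described above.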
About Bone Cancer What should people know about bone cancer? Bone is a hard connective tissue that makes up the skeleton and provides shape to the body. Bone provides support and protects many of the body's fragile organs. Most bones start out as cartilage, a firm rubbery tissue that cushions bones and joints. After the bone is formed, some cartilage may remain at the ends to serve as a cushion between bones, such as in the knee or shoulder. Many types of cancer (such as breast cancer, prostate cancer and lung cancer) frequently spread to the bones. This is called metastasis. This fact sheet only includes information about cancer that began in the bone (called primary bone cancer); cancers that began in other parts of the body and spread to the bone (metastases) are not included. There are a number of different types of bone cancer. The most common types are osteosarcoma, chondrosarcoma, and Ewing sarcoma. - Osteosarcoma, also called osteogenic sarcoma, is the most common type of bone cancer. It accounts for about 35% of bone cancers in the US. Osteosarcoma often occurs in the ends of the long bones of the arms and legs, where new tissue forms as young adults grow. It occurs most often in young people (ages 10 to 30), but can occur in older people. - Chondrosarcoma is cancer of the cartilage and is the second most common type of bone cancer. It accounts for about 25% of all bone cancers in the US. Chondrosarcoma is rare in young people, occurring mostly in adults age 50 and over. Chondrosarcomas usually occur in cartilage around the pelvis, knee, shoulder or upper part of the thigh. - Ewing sarcoma is the third most common type of bone cancer, and the second most common type among children. It usually occurs in the shaft of the bone, most often in the hip, ribs, upper arm and thigh. Ewing sarcoma mostly affects children and young adults between the ages of 10 and 30. Ewing sarcoma accounts for about 15% of bone cancers in the US. Each year in New York State, about 100 men and 100 women (including children) are diagnosed with bone cancer. About 40 men and 35 women, again including children, in New York die of the disease each year. Who gets bone cancer? Although bone cancer can occur at any age, two of the most common types (osteosarcoma and Ewing sarcoma) occur primarily in children and young adults. Certain types of bone cancer, such as osteosarcoma, are more common among men than women. Ewing sarcoma occurs most frequently in Whites and is very rare in Blacks and Asians. What factors increase risk for developing bone cancer? At this time, the causes of bone cancer are not well understood. However, scientists agree that certain factors increase a person's risk of developing this disease. These risk factors include: - Hereditary conditions and family history. People with certain inherited diseases (Li-Fraumeni syndrome, Rothmund-Thomson syndrome) are at increased risk for bone cancer. Children who have had hereditary retinoblastoma (a rare cancer of the eye) are at greater risk of developing bone cancer. - Ionizing radiation. Exposure to high doses of ionizing radiation (such as radiation treatment for other cancers) increases the risk of getting bone cancer. - Paget's disease. People with Paget's disease, a condition of abnormal bone growth, are at increased risk for bone cancer. What other risk factors for bone cancer are scientists studying? Scientists are studying other possible risk factors for bone cancer such as bone marrow transplantation and genetic changes. 
They are also examining the role medical implants or injuries may have in the development of bone cancer. Additional research is needed to determine the role, if any, these factors may have in the development of bone cancer.

What can I do to reduce my chances of getting bone cancer?

To help reduce the risk of getting bone cancer:

- Be aware of your family history and discuss any concerns with your health care provider.
- Discuss the risks and benefits of medical imaging, such as CT scans, with your health care provider to avoid unnecessary exposure to ionizing radiation. This is particularly important for children.

How else can I reduce my risk for cancer?

The following may help reduce the risk of developing cancer:

- Choose a healthy diet to achieve and maintain a healthy weight. Eat more vegetables, fruits and whole grains, and eat less red meat and processed meats (e.g., bacon, sausage, luncheon meat, hot dogs). These actions may reduce the risk of developing many types of cancer as well as other diseases.
- Exercise regularly.
- Do not smoke. If you currently smoke, quit. Avoid exposure to secondhand smoke. For more information on quitting smoking, visit the NYS Smoker's Quitline at www.nysmokefree.com or call 1-866-NY-QUITS.
- Talk with your health care provider about recommended cancer screenings.
Dizziness is a condition that occurs when you feel lightheaded, weak, or physically unsteady. Some people may feel as if the room is spinning around them. Vomiting occurs when your stomach contents travel upward from your stomach to your esophagus and out your mouth. Vomiting can be forceful and painful. Chronic vomiting can damage the teeth and the delicate lining of the esophagus and mouth, because vomit is highly acidic.

Causes of dizziness and vomiting include:

- affected cardiac output: When your heart isn't pumping adequately, your blood pressure drops. Dizziness and vomiting can result. A heart attack or stroke can also cause these symptoms.
- anxiety: Intense feelings of anxiety can lead to physical symptoms, such as dizziness and vomiting.
- inner ear inflammation: The inner ear is responsible for helping maintain balance in the body. Inflammation can cause dizziness that leads to nausea and vomiting.
- medications: including sedatives, chemotherapy, tranquilizers, and anti-seizure medications
- vestibular migraine: Migraines are headaches that can cause intense symptoms, including dizziness, nausea, and light and noise sensitivity.

Other common causes include:

- motion sickness
- morning sickness
- low blood sugar
- ingesting poison or breathing in harmful chemicals

Call 911 or have someone drive you to the hospital if you suspect that you're having a heart attack or stroke. See your doctor if you're pregnant and these symptoms affect your ability to eat, drink, or sleep. Dizziness and vomiting often will go away without treatment, but you should seek medical attention if you vomit blood, pass stool that contains blood, or lose consciousness. Seek medical attention if your symptoms don't subside within two to three days. This information is a summary. Always seek medical attention if you're concerned you may be experiencing a medical emergency.

Your doctor will try to determine what's causing your dizziness and vomiting. To do so, they may ask several questions, including:

- Are you taking any new medications?
- Have you experienced these symptoms before?
- When did your symptoms start?
- What makes your symptoms worse or better?

Anti-emetics are medications used to treat vomiting. Some examples are ondansetron (Zofran) and promethazine (Phenergan). Meclizine (Antivert) is available over-the-counter and in prescription strength for dizziness. This type of medication is used to treat motion sickness, nausea, and dizziness. If you're prone to motion sickness and you're planning to travel, your doctor may prescribe you a scopolamine (Transderm Scop) patch. This option is suitable only for adults. If you're taking a new medication, don't discontinue its use unless your physician instructs you to, even if you suspect it may be related to your dizziness and nausea.

Dizziness and nausea will often resolve with rest. Staying hydrated and eating bland foods that don't stimulate or upset your stomach can help. Examples include:

- dry toast
- refined grains

You can prevent dizziness and vomiting due to low blood sugar by eating meals at regular intervals and avoiding taking too much insulin. If you experience motion sickness, avoid boat trips and always sit in the front seat of a vehicle. Avoid foods that upset your stomach, or foods that you're allergic to.
A metamorphic rock used to be some other type of rock, but it was changed inside the Earth to become a new type of rock. The word metamorphism comes from ancient Greek words for "change" (meta) and "form" (morph). The type of rock that a metamorphic rock used to be, prior to metamorphism, is called the protolith. During metamorphism the mineral content and texture of the protolith are changed due to changes in the physical and chemical environment of the rock. Metamorphism can be caused by burial, tectonic stress, heating by magma, or alteration by fluids. At advanced stages of metamorphism, it is common for a metamorphic rock to develop such a different set of minerals and such a thoroughly changed texture that it is difficult to recognize what the protolith was.

A rock undergoing metamorphism remains a solid rock during the process. Rocks do not melt during most conditions of metamorphism. At the highest grade of metamorphism, rocks begin to partially melt, at which point the boundary of metamorphic conditions is surpassed and the igneous part of the rock cycle is entered. Even though rocks remain solid during metamorphism, fluid is generally present in the microscopic spaces between the minerals. This fluid phase may play a major role in the chemical reactions that are an important part of how metamorphism occurs. The fluid usually consists largely of water.

Metamorphic rocks provide a record of the processes that occurred inside Earth as the rock was subjected to changing physical and chemical conditions. This gives the geologist literally "inside information" on what occurs within the Earth during such processes as the formation of new mountain ranges, the collision of continents, the subduction of oceanic plates, and the circulation of sea water into hot oceanic crust. Metamorphic rocks are like probes that have gone down into the Earth and come back, bringing a record of the conditions they encountered on their journey in the depths of the Earth.

FACTORS THAT CONTROL METAMORPHISM

The reason rocks undergo metamorphism is that the minerals in a rock are only stable under a limited range of pressure, temperature, and chemical conditions. When rocks are subjected to large enough changes in these factors, the minerals will undergo chemical reactions that result in their replacement by new minerals, minerals that are stable in the new conditions.

Chemical Composition of the Protolith

The type of rock that undergoes metamorphism is a major factor in determining what type of metamorphic rock it becomes. In short, the identity of the protolith plays a big role in the identity of the metamorphic rock. A fluid phase may introduce or remove chemical substances into or out of the rock during metamorphism, but in most metamorphic rocks, most of the atoms of the protolith will still be present after metamorphism; the atoms will likely be rearranged into new mineral forms within the rock. Therefore, not only does the protolith determine the initial chemistry of the metamorphic rock, but most metamorphic rocks also do not change their bulk (overall) chemical compositions very much during metamorphism. The fact that most metamorphic rocks retain most of their original atoms means that even if the rock was so thoroughly metamorphosed that it no longer looks at all like the protolith, the rock can be analyzed in terms of its bulk chemical composition to determine what type of rock the protolith was.
If the protolith is an arenite, made mostly of the mineral quartz (SiO2), metamorphism cannot turn the rock into a marble, which is made of the mineral calcite (CaCO3). In fact, as a result of metamorphism, a pure quartz arenite will become quartzite. It is still made of quartz, but the quartz has recrystallized during metamorphism, filling in most of the pore space of the arenite with new quartz growth and becoming a denser, harder rock. The reason pure arenite becomes quartzite is that the mineral quartz is stable over a wide range of pressures and temperatures. Under most metamorphic conditions quartz will simply recrystallize, overgrow the existing quartz grains with more quartz, and reorient its quartz crystals to become a new rock type made of quartz.

Many protoliths have chemical compositions consisting of more than three chemical elements, and most protoliths are made of minerals that do not remain stable in the conditions encountered during metamorphism. Such protoliths undergo chemical reactions during metamorphism that replace the protolith minerals with new metamorphic minerals, made of the atoms from the protolith minerals rearranged into new mineral structures.

Temperature is another major factor of metamorphism. There are two ways to think about how the temperature of a rock can be increased as a result of geologic processes. If rocks are buried within the Earth, the deeper they go, the higher the temperatures they experience. This is because temperature inside the Earth increases along what is called the geothermal gradient, or geotherm for short. Therefore, if rocks are simply buried beneath enough sediment, they will experience temperatures high enough to cause metamorphism. This temperature is about 200 ºC (approximately 400 ºF); with a typical geothermal gradient of roughly 25 ºC per kilometer, that corresponds to a burial depth on the order of 7–8 km.

Tectonic processes are another way rocks can be moved deeper along the geotherm. Faulting and folding of the rocks of the crust can move rocks to much greater depths than simple burial can. Yet another way a rock in the Earth's crust can have its temperature greatly increased is by the intrusion of magma nearby. Magma intrusion subjects nearby rock to higher temperature with no increase in depth or pressure.

The upper limit of metamorphism, beyond which igneous conditions occur, is the temperature and pressure at which partial melting of the rocks begins. This limit varies greatly, depending on the pressure, the chemical composition of the rocks, and the presence of a fluid phase. With water present in the fluid, some types of rock begin melting, if the pressure is high enough, at temperatures of about 600 ºC (approximately 1100 ºF) at the low end. Other types of rock, if there is no fluid in the rock to lower the melting temperature, will remain solid and continue undergoing metamorphism to over 1000 ºC (approximately 1800 ºF).

Pressure is a measure of the stress, the physical force, being applied to the surface of a material. It is defined as the force per unit area acting on the surface, in a direction perpendicular to the surface. Lithostatic pressure is the pressure exerted on a rock by all the surrounding rock. The source of the pressure is the weight of all the rocks above. Lithostatic pressure increases as depth within the Earth increases and is a uniform stress: the pressure applies equally in all directions on the rock. If pressure does not apply equally in all directions, differential stress occurs. There are two types of differential stress.
Normal stress compresses (pushes together) rock in one direction, the direction of maximum stress. At the same time, in a perpendicular direction, the rock undergoes tension (stretching), in the direction of minimum stress. Shear stress pushes one side of the rock in a direction parallel to the side, while at the same time, the other side of the rock is being pushed in the opposite direction.

Differential stress has a major influence on the appearance of a metamorphic rock. Differential stress can flatten pre-existing grains in the rock, as shown in the diagram below. Metamorphic minerals that grow under differential stress will have a preferred orientation if the minerals have atomic structures that tend to make them form either flat or elongate crystals. This will be especially apparent for micas or other sheet silicates that grow during metamorphism, such as biotite, muscovite, chlorite, talc, or serpentine. If any of these flat minerals are growing under normal stress, they will grow with their sheets oriented perpendicular to the direction of maximum compression. This results in a rock that can be easily broken along the parallel mineral sheets. Such a rock is said to be foliated, or to have foliation.

Any open space between the mineral grains in a rock, however microscopic, may contain a fluid phase. Most commonly, if there is a fluid phase in a rock during metamorphism, it will be a hydrous fluid, consisting of water and things dissolved in the water. Less commonly, it may be a carbon dioxide fluid or some other fluid. The presence of a fluid phase is a major factor during metamorphism because it helps determine which metamorphic reactions will occur and how fast they will occur. The fluid phase can also influence the rate at which mineral crystals deform or change shape. Most of this influence is due to the dissolved ions that pass in and out of the fluid phase. If, during metamorphism, enough ions are introduced to or removed from the rock via the fluid to change the bulk chemical composition of the rock, the rock is said to have undergone metasomatism. However, most metamorphic rocks do not undergo sufficient change in their bulk chemistry to be considered metasomatic rocks.

Most metamorphism of rocks takes place slowly inside the Earth. Regional metamorphism takes place on a timescale of millions of years. Metamorphism usually involves slow changes to rocks in the solid state, as atoms or ions diffuse out of unstable minerals that are breaking down in the given pressure and temperature conditions and migrate into new minerals that are stable in those conditions. This type of chemical reaction takes a long time.

GRADES OF METAMORPHISM

Metamorphic grade refers to the general temperature and pressure conditions that prevailed during metamorphism. As the pressure and temperature increase, rocks undergo metamorphism at higher metamorphic grade. Rocks changing from one type of metamorphic rock to another as they encounter higher grades of metamorphism are said to be undergoing prograde metamorphism.

Low-grade metamorphism takes place at approximately 200–320 ºC and relatively low pressure. This is not far beyond the conditions in which sediments get lithified into sedimentary rocks, and it is common for a low-grade metamorphic rock to look somewhat like its protolith. Low-grade metamorphic rocks tend to be characterized by an abundance of hydrous minerals, minerals that contain water within their crystal structure.
Examples of low-grade hydrous minerals include clay, serpentine, and chlorite. Under low-grade metamorphism many of the metamorphic minerals will not grow large enough to be seen without a microscope.

Medium-grade metamorphism takes place at approximately 320–450 ºC and at moderate pressures. Low-grade hydrous minerals are replaced by micas such as biotite and muscovite, and non-hydrous minerals such as garnet may grow. Garnet is an example of a mineral which may form porphyroblasts, metamorphic mineral grains that are larger in size and more equant in shape (about the same diameter in all directions), thus standing out among the smaller, flatter, or more elongate minerals.

High-grade metamorphism takes place at temperatures above about 450 ºC. Micas tend to break down. New minerals such as hornblende, which is stable at higher temperatures, will form. However, as metamorphic grade increases even higher, all hydrous minerals, including hornblende, may break down and be replaced by other, higher-temperature, non-hydrous minerals such as pyroxene. During high-grade metamorphism the minerals tend to grow larger. Some varieties of valuable gemstones, such as rubies, emeralds, and jade, come from high-grade metamorphic rocks. At the highest metamorphic grade, if the temperature gets high enough, the rock will start to melt, entering the next stage of the rock cycle, the igneous stage. The temperature at which melting begins ranges from about 600 ºC to over 1000 ºC, depending on the rock and the fluids in the rock.

Index minerals are indicators of metamorphic grade. In a given rock type, which starts with a particular chemical composition, lower-grade index minerals are replaced by higher-grade index minerals in a sequence of chemical reactions that proceeds as the rock undergoes prograde metamorphism. For example, in rocks made of metamorphosed shale, metamorphism may prograde through the following index minerals:

- chlorite characterizes the lowest regional metamorphic grade
- biotite replaces chlorite at the next metamorphic grade, which could be considered medium-low grade
- garnet appears at the next metamorphic grade, medium grade
- staurolite marks the next metamorphic grade, which is medium-high grade
- sillimanite is a characteristic mineral of high-grade metamorphic rocks

Index minerals are used by geologists to map metamorphic grade in regions of metamorphic rock. A geologist maps and collects rock samples across the region and marks the geologic map with the location of each rock sample and the type of index mineral it contains. By drawing lines around the areas where each type of index mineral occurs, the geologist delineates the zones of different metamorphic grades in the region. The lines are known as isograds.

TYPES OF METAMORPHISM

Regional metamorphism occurs where large areas of rock are subjected to large amounts of differential stress for long intervals of time, conditions typically associated with mountain building. Mountain building occurs at subduction zones and at continental collision zones, where two plates, each bearing continental crust, converge upon each other. Most foliated metamorphic rocks—slate, phyllite, schist, and gneiss—are formed during regional metamorphism. As the rocks become heated at depth in the Earth during regional metamorphism, they become ductile, which means they are relatively soft even though they are still solid.
The folding and deformation of the rock while it is ductile may greatly distort the original shapes and orientations of the rock, producing folded layers and mineral veins that have highly deformed or even convoluted shapes. The diagram below shows folds forming during an early stage of regional metamorphism, along with development of foliation, in response to normal stress. The photograph below shows high-grade metamorphic rock that has undergone several stages of foliation development and folding during regional metamorphism, and may even have reached such a high temperature that it began to melt.

Contact metamorphism occurs in solid rock next to an igneous intrusion and is caused by the heat from the nearby body of magma. Because contact metamorphism is not caused by changes in pressure or by differential stress, contact metamorphic rocks do not become foliated. Where intrusions of magma occur at shallow levels of the crust, the zone of contact metamorphism around the intrusion is relatively narrow, sometimes only a few meters (several feet) thick, ranging up to contact metamorphic zones over 1000 m (over 3000 feet) across around larger intrusions that released more heat into the adjacent crust. The zone of contact metamorphism surrounding an igneous intrusion is called the metamorphic aureole. The rocks closest to the contact with the intrusion are heated to the highest temperatures, so the metamorphic grade is highest there and diminishes with increasing distance away from the contact. Because contact metamorphism occurs at shallow to moderate depths in the crust and subjects the rocks to temperatures up to the verge of igneous conditions, it is sometimes referred to as high-temperature, low-pressure metamorphism. Hornfels, which is a hard metamorphic rock formed from fine-grained clastic sedimentary rocks, is a common product of contact metamorphism.

Hydrothermal metamorphism is the result of extensive interaction of rock with high-temperature fluids. The difference in composition between the existing rock and the invading fluid drives the chemical reactions. The hydrothermal fluid may originate from a magma that intruded nearby and caused fluid to circulate in the nearby crust, from circulating hot groundwater, or from ocean water. If the fluid introduces substantial amounts of ions into the rock and removes substantial amounts of ions from it, the fluid has metasomatized the rock—changed its chemical composition.

Ocean water that penetrates hot, cracked oceanic crust and circulates as hydrothermal fluid in ocean floor basalts produces extensive hydrothermal metamorphism adjacent to mid-ocean spreading ridges and other ocean-floor volcanic zones. Much of the basalt subjected to this type of metamorphism turns into a type of metamorphic rock known as greenschist. Greenschist contains a set of minerals, some of them green, which may include chlorite, epidote, talc, Na-plagioclase, or actinolite. The fluids eventually escape through vents in the ocean floor known as black smokers, producing thick deposits of minerals on the ocean floor around the vents.

Burial metamorphism occurs in rocks buried beneath sediments to depths that exceed the conditions in which sedimentary rocks form. Because rocks undergoing burial metamorphism encounter the uniform stress of lithostatic pressure, not differential stress, they do not develop foliation. Burial metamorphism is the lowest grade of metamorphism.
The main type of mineral that usually grows during burial metamorphism is zeolite, a group of low-density silicate minerals. It usually requires a strong microscope to see the small grains of zeolite minerals that form during burial metamorphism.

Dynamic metamorphism is caused mainly by high shear stress along fault zones or shear zones in the crust. The minerals of the protolith may be pulverized and crushed by the high rate of shear strain that occurs in these zones. Rocks metamorphosed in these zones usually exhibit a combination of fractured, partly disintegrated, partly recrystallized versions of the original minerals, along with new mineral growth that occurred during the metamorphism. Because fault zones and shear zones are highly localized, rocks that have undergone dynamic metamorphism are not a widespread type of metamorphic rock; they are much less abundant than regional or contact metamorphic rocks.

Subduction Zone Metamorphism

During subduction, a tectonic plate, consisting of oceanic crust and lithospheric mantle, is recycled back into the deeper mantle. In most subduction zones the subducting plate is relatively cold compared with the high temperature it had when first formed at a mid-ocean spreading ridge. Subduction takes the rocks to great depth in the Earth relatively quickly. This produces a characteristic type of metamorphism, sometimes called high-pressure, low-temperature (high-P, low-T) metamorphism, which only occurs deep in a subduction zone. In oceanic basalts that are part of a subducting plate, the high-P, low-T conditions create a distinctive set of metamorphic minerals, including a type of amphibole, called glaucophane, that has a blue color. Blueschist is the name given to this type of metamorphic rock. Blueschist is generally interpreted as having been produced within a subduction zone, even if the plate boundaries have subsequently shifted and that location is no longer at a subduction zone.

Much as the minerals and textures of sedimentary rocks can be used as windows to see into the environment in which the sediments were deposited on the Earth's surface, the minerals and textures of metamorphic rocks provide windows through which we view the conditions of pressure, temperature, fluids, and stress that occurred inside the Earth during metamorphism. The pressure and temperature conditions under which specific types of metamorphic rocks form have been determined by a combination of laboratory experiments and physics-based theoretical calculations, along with evidence in the textures of the rocks and their field relations as recorded on geologic maps. The knowledge of the temperatures and pressures at which particular types of metamorphic rocks form led to the concept of metamorphic facies. Each metamorphic facies is represented by a specific type of metamorphic rock that forms under specific pressure and temperature conditions. Even though the name of each metamorphic facies is taken from a type of rock that forms under those conditions, that is not the only type of rock that will form in those conditions. For example, if the protolith is basalt, it will turn into greenschist under greenschist facies conditions, and that is what the facies is named for. However, if the protolith is shale, a muscovite-biotite schist, which is not green, will form instead.
If it can be determined that a muscovite-biotite schist formed at around 350 ºC temperature and 400 MPa pressure, it can be stated that the rock formed in the greenschist facies, even though the rock is not itself a greenschist. The diagram below shows metamorphic facies in terms of pressure and temperature conditions inside the Earth. Earth's surface conditions are near the top left corner of the graph, at about 15 ºC, which is the average temperature at Earth's surface, and 0.1 MPa (megapascals), which is about the average atmospheric pressure on the Earth's surface.

Just as atmospheric pressure comes from the weight of all the air above a point on the Earth's surface, pressure inside the Earth comes from the weight of all the rock above a given depth. Rocks are much denser than air, and MPa is the unit most commonly used to express pressures inside the Earth. One MPa equals nearly 10 atmospheres. A pressure of 1000 MPa corresponds to a depth of about 35 km inside the Earth; as a quick check, lithostatic pressure is roughly P = ρgh, and with an average crustal density of about 2800 kg/m³, 2800 kg/m³ × 9.8 m/s² × 35,000 m ≈ 960 MPa, close to 1000 MPa.

Although pressure inside the Earth is determined by the depth, temperature depends on more than depth. Temperature depends on the heat flow, which varies from location to location. The way temperature changes with depth inside the Earth is called the geothermal gradient, or geotherm for short. In the diagram below, three different geotherms are marked with dashed lines. The three geotherms represent different geological settings in the Earth.

High-pressure, low-temperature geotherms occur in subduction zones. As the diagram shows, rocks undergoing prograde metamorphism in subduction zones will be subjected to zeolite, blueschist, and ultimately eclogite facies conditions. High-temperature, low-pressure geotherms occur in the vicinity of igneous intrusions in the shallow crust, underlying a volcanically active area. Rocks that have their pressure and temperature conditions increased along such a geotherm will metamorphose in the hornfels facies and, if it gets hot enough, in the granulite facies.

Blueschist facies and hornfels facies are associated with unusual geothermal gradients. The most common conditions in the Earth are found along geotherms between those two extremes. Most regional metamorphic rocks are formed in conditions within this range of geothermal gradients, passing through the greenschist facies to the amphibolite facies. At the maximum pressures and temperatures the rocks may encounter within the Earth in this range of geotherms, they will enter either the granulite or eclogite facies. Regionally metamorphosed rocks that contain hydrous fluids will begin to melt before they pass beyond the amphibolite facies.

The minerals in a metamorphic rock are often a completely different set of minerals than in the protolith. But, because the mineral assemblage in the metamorphic rock reflects the overall chemical composition of the rock, the set of minerals found in the rock can give us a good idea of the type of protolith, even if the metamorphic rock no longer looks anything like its protolith. The following terms are used to describe protoliths, and the types of metamorphic rocks they turn into, in terms of their general chemical compositions.

- pelitic—pelitic rocks are high in alumina and, as protoliths, were usually shales or mudstones. Pelitic metamorphic rocks contain alumina-rich minerals such as the micas, garnet, andalusite, kyanite, sillimanite, or staurolite. Pelitic metamorphic rocks formed during regional metamorphism include many varieties of slate, phyllite, schist, and gneiss.
- mafic—mafic protoliths and the metamorphic rocks they become are high in magnesium and iron relative to silicon. Basalt is the most common mafic protolith. It can turn into mafic metamorphic rocks such as greenschist and amphibolite, with chlorite, actinolite, biotite, hornblende, or plagioclase in them, depending on metamorphic grade.
- calcareous—calcareous rocks are calcium-rich rocks. Typically, as protoliths, calcareous rocks were either limestone or dolostone, which most commonly turn into marble as metamorphic rocks.
- quartzofeldspathic—protoliths such as granite, rhyolite, and arkose, which consist mostly of a combination of quartz and feldspar, are quartzofeldspathic. High-grade regional metamorphism of quartzofeldspathic rocks produces gneisses containing feldspar, quartz, biotite, and possibly hornblende.

TYPES OF METAMORPHIC ROCKS

Metamorphic rocks fall into two categories: foliated and unfoliated. Most foliated metamorphic rocks originate from regional metamorphism. Some unfoliated metamorphic rocks, such as hornfels, originate only by contact metamorphism, but others can originate either by contact metamorphism or by regional metamorphism. Quartzite and marble are prime examples of unfoliated rocks that can be produced by either regional or contact metamorphism. Both rock types consist of metamorphic minerals that do not have flat or elongate shapes and thus cannot become layered even if they are produced under differential stress.

A geologist working with metamorphic rocks collects the rocks in the field and looks for the patterns the rocks form in outcrops as well as how those outcrops are related to other types of rock with which they are in contact. Field evidence is often required to know for sure whether rocks are products of regional metamorphism, contact metamorphism, or some other type of metamorphism. If only looking at rock samples in a laboratory, one can be sure of the type of metamorphism that produced a foliated metamorphic rock such as schist or gneiss, or a hornfels, which is unfoliated, but one cannot be sure of the type of metamorphism that produced an unfoliated marble or quartzite.

Foliated Metamorphic Rocks

Foliated metamorphic rocks are named for their style of foliation. However, a more complete name of each particular type of foliated metamorphic rock includes the main minerals that the rock comprises, such as biotite-garnet schist rather than just schist.

- slate—slates form at low metamorphic grade by the growth of fine-grained chlorite and clay minerals. The preferred orientation of these sheet silicates causes the rock to break easily along parallel planes, giving the rock a slaty cleavage. Some slate breaks into such extensively flat sheets of rock that it is used as the base of pool tables, beneath a layer of rubber and felt. Roof tiles are also sometimes made of slate.
- phyllite—phyllite is a low-medium grade regional metamorphic rock in which the clay minerals and chlorite have been at least partly replaced by mica minerals, muscovite and biotite. This gives the surfaces of phyllite a satiny luster, much brighter than the surface of a piece of slate. It is also common for the differential stresses under which phyllite forms to have produced a set of folds in the rock, making the foliation surfaces wavy or irregular, in contrast to the often perfectly flat surfaces of slaty cleavage.
- schist—the size of mineral crystals tends to grow larger with increasing metamorphic grade.
Schist is a product of medium grades of metamorphism and is characterized by visibly prominent, parallel sheets of mica or similar sheet silicates, usually either muscovite or biotite, or both. In schist, the sheets of mica are usually arranged in irregular planes rather than perfectly flat planes, giving the rock a schistose foliation (or simply schistosity). Schist often contains more than just micas among its minerals, such as quartz, feldspars, and garnet.

- amphibolite—a poorly foliated to unfoliated mafic metamorphic rock, usually consisting largely of the common black amphibole known as hornblende, plus plagioclase, plus or minus biotite and possibly other minerals; it usually does not contain any quartz. Amphibolite forms at medium-high metamorphic grades. Amphibolite is also listed below in the section on unfoliated metamorphic rocks.
- gneiss—like the word schist, the word gneiss comes from the German language; it is pronounced "nice." As metamorphic grade continues to increase, sheet silicates become unstable and dark minerals such as hornblende or pyroxene start to grow. The dark-colored minerals tend to form separate bands or stripes in the rock, giving it a gneissic foliation of dark and light streaks. Gneiss is a high-grade metamorphic rock. Many types of gneiss look somewhat like granite, except that the gneiss has dark and light stripes, whereas granite has randomly oriented and distributed minerals with no stripes or layers.
- migmatite—a combination of high-grade regional metamorphic rock – usually gneiss or schist – and granitic igneous rock. The granitic rock in migmatite probably originated from partial melting of some of the metamorphic rock, though in some migmatites the granite may have intruded the rock from deeper in the crust. In migmatite you can see metamorphic rock that has reached the limits of metamorphism and begun transitioning into the igneous stage of the rock cycle by melting to form magma.

Names of different styles of foliation come from the common rocks that exhibit such foliation:

- slate has slaty foliation
- phyllite has phyllitic foliation
- schist has schistose foliation
- gneiss has gneissic foliation (also called gneissose foliation)

Unfoliated Metamorphic Rocks

Unfoliated metamorphic rocks lack a planar (oriented) fabric, either because the minerals did not grow under differential stress, or because the minerals that grew during metamorphism are not minerals that have elongate or flat shapes. Because they lack foliation, these rocks are named entirely on the basis of their mineralogy.

- hornfels—hornfels are very hard rocks formed by contact metamorphism of shale, siltstone, or sandstone. The heat from the nearby magma "bakes" the sedimentary rocks and recrystallizes the minerals in them into a new texture that no longer breaks easily along the original sedimentary bedding planes. Depending on the composition of the rock and the temperature reached, minerals indicative of high metamorphic grade, such as pyroxene, may occur in some hornfels, though many hornfels have minerals indicating medium-grade metamorphism.
- amphibolite—amphibolites are dark-colored rocks with amphibole, usually the common black amphibole known as hornblende, as their most abundant mineral, along with plagioclase and possibly other minerals, though usually no quartz. Amphibolites are poorly foliated to unfoliated and form at medium to medium-high grades of metamorphism from basalt or gabbro.
- quartzite—quartzite is a metamorphic rock made almost entirely of quartz, for which the protolith was quartz arenite. Because quartz is stable over a wide range of pressure and temperature, little or no new minerals form in quartzite during metamorphism. Instead, the quartz grains recrystallize into a denser, harder rock than the original sandstone. If struck by a rock hammer, quartzite will commonly break right through the quartz grains, rather than around them as when quartz arenite is broken.
- marble—marble is a metamorphic rock made up almost entirely of either calcite or dolomite, for which the protolith was either limestone or dolostone, respectively. Marbles may have bands of different colors, which were deformed into convoluted folds while the rock was ductile. Such marble is often used as decorative stone in buildings. Some marble, which is considered better quality stone for carving into statues, lacks color bands.

Metamorphic Rock Classification

Foliated Metamorphic Rocks

| Crystal Size | Mineralogy | Protolith | Metamorphism | Rock Name |
|---|---|---|---|---|
| very fine | clay minerals | shale | low grade regional | slate |
| fine | clay minerals, biotite, muscovite | shale | low grade regional | phyllite |
| medium to coarse | biotite, muscovite, quartz, garnet, plagioclase | shale, basalt | medium grade regional | schist |
| medium to coarse | amphibole, plagioclase, biotite | basalt | medium grade regional | amphibolite (note: may be unfoliated) |
| medium to coarse | plagioclase, orthoclase, quartz, biotite, amphibole, pyroxene | basalt, granite, shale | high grade regional | gneiss |

Unfoliated Metamorphic Rocks

| Crystal Size | Mineralogy | Protolith | Metamorphism | Rock Name |
|---|---|---|---|---|
| fine to coarse | quartz | sandstone | regional or contact | quartzite |
| fine to coarse | calcite | limestone | regional or contact | marble |
| fine | pyroxene, amphibole, plagioclase | shale | contact | hornfels |

Note that not all minerals listed in the mineralogy column will be present in every rock of that type, and that some rocks may have minerals not listed here.

- What skill does this content help you develop?
- What are the key topics covered in this content?
- How can the content in this section help you demonstrate mastery of a specific skill?
- What questions do you have about this content?