Where Do Termites Live?

Knowing where termites live can be useful. By blocking access to potential termite habitats, you can prevent infestations in and around your home. In this article I have detailed the three main types of termite, and where each prefers to live.

Recap: What Are Termites?

Termites are social insects that reside in colonies. Termite society is divided into three castes: workers, soldiers, and reproductives. The termite king and queen are the founders of the entire colony. They, along with the alates, are known as primary reproductives. Once established, the king and queen do not leave the nest. Termite alates are reproductive adults that are only produced by mature colonies. Any alate has the potential to become a king or queen, depending on its sex. Alates leave the colony at certain times of the year, and if they survive departing the nest and finding a suitable location, they start new colonies. Secondary reproductives, also known as supplemental reproductives, spend their lives inside the colony. In the event the king or queen dies, a secondary reproductive will assume reproductive duties.

Soldiers are sterile, blind termites that can be male or female. This caste defends and protects the colony. Soldiers either have strong jaws (mandibles) or the ability to excrete a toxic chemical. They can exit the nest if needed to defend it. The final and largest caste consists of workers. Worker termites, like soldiers, are sterile and blind. They feed, groom, and care for the other castes. Workers perform nest construction and maintenance duties, and forage outside of the nest for sustenance and resources.

Where Do Termites Live?

There are three main types of termite that are considered pests in the U.S.: dampwood, drywood, and subterranean termites. Each type of termite prefers a specific habitat. All termites require cellulose to survive, which is found in wood.
For this reason, despite their differences, these pests build their nests in or near wood – and they don’t always resemble the huge termite mounds you see on television.

As their name suggests, dampwood termites build their nests in damp or wet wood. While they require at least some moisture, these termites can extend their nests into nearby drier wood if necessary. Wood that is rotting or saturated with moisture is ideal for dampwood termites, and wood in direct contact with soil is preferable. Outdoors, dampwood termites will nest in sites such as decaying trees. If you live in a wet climate, neglected wood in your yard is a prime site for dampwood termites. This includes wooden porches. Inside your home, these termites will settle in similar areas. If you have leaky pipes that dampen nearby wood, you are at risk of attracting dampwood termites. As dampwood termites also require high humidity, they nest deep inside the wood rather than near the surface.

Drywood termites make their nests in dry wood. Unlike the other two types of termites, they do not need free access to moisture. The drywood termite can burrow deep inside wood, so infestations are difficult to detect until the colony produces swarmers, or you see termite feces. By this stage, the infestation is usually advanced. Drywood termites tend to disperse themselves widely throughout an infested structure. Rather than being centered in one place, they extend their nests through networks of galleries and tunnels, and one home can contain several separate colonies of drywood termite. While they can be found anywhere in a wooden structure, they are partial to higher areas. In your own home, this can include the attic and upper floors.

Subterranean termites, like dampwood termites, cannot survive in dry environments. They require constant access to moisture in order to survive; without it, they would dehydrate and die. For this reason, they build their nests in damp soil.
In addition to the moisture in the soil, there are other advantages of building their nests below ground. There, the bulk of the colony is shielded from predators, such as other insects and reptiles. The soil can also be used to construct tunnels above ground. These shelter tunnels allow worker termites to travel directly from the nest to the structure they are invading. Once they’ve invaded a wooden structure, they bring the food back to the underground nest to feed the colony. There are some circumstances in which subterranean termites can live above ground. As long as there is sufficient humidity and the termites can access water freely, they can thrive. When subterranean termites live above ground, this is known as an isolated infestation. These can occur in locations on your property such as an attic with a leaking roof. Article Last Updated on
Risk Factors & Prevention

Hepatitis C is a blood-borne infection, spread only when the blood of an HCV+ person enters the body of another person through a break in the skin or a mucous membrane. Unfortunately, there are many ways in which this can happen.
- HCV can be prevented by avoiding risk factors. Those who have any risk factors should be tested, regardless of symptoms. Remember, HCV can silently do its deadly damage for many decades without any symptoms.
- Risk factors have changed over the years, so we describe two different sets of risk factors, depending on year of birth. Many baby boomers and the elderly are HCV+ but don’t know it. One-time testing of members of these groups, followed by any required treatment, is essential to prevent cirrhosis, liver cancer, liver failure, the need for a liver transplant, and other extra-hepatic complications of HCV.
- Although HCV is transmitted only through blood-to-blood contact, it is much easier to get than HIV because the virus can live much longer outside the body than HIV! Under the right conditions, HCV can survive several days outside the human body.

First, be clear how the hepatitis C virus (HCV) is NOT transmitted. HCV is NOT airborne. And HCV is absolutely NOT spread by:
- sneezing or coughing
- holding hands or kissing (unless there is deep kissing and open sores are present in both parties)
- using the same eating or cooking utensils
- using the same toilet, shower, or bathtub
- eating food prepared by someone with HCV
- holding a child in your arms
- swimming in the same pool

HCV Risk Factors for Those Born Before 1975:
- Prior to 1992, organs for transplant, blood, and blood products in Canada were not generally screened for HCV. As a result, many Canadians contracted it through a contaminated transplant organ, blood transfusion, or blood product. Blood products included plasma, platelets, gammaglobulin [immune globulin], and, in occasional rare cases, RhoGAM used for Rh-negative mothers.
During the period 1990-1992, testing of transplant organs and donated blood for HCV was phased in. Thanks to modern HCV testing methods, the risk of acquiring hepatitis C from an organ transplant, blood transfusion, or blood products is now practically unknown.
- From the 1940s into the 1960s, mass vaccination devices were occasionally used on groups such as school children and soldiers, which had the potential of spreading HCV from one individual to another. These devices continued to be used in less developed countries for decades longer.
- Improperly sterilized implements and equipment used in many medical/dental procedures (including acupuncture and dialysis) have spread HCV from patient to patient, or between patient and practitioner. Gradually, universal precautions have become accepted, which require practitioners to act as if EVERY PATIENT and EVERY MEDICAL or DENTAL WORKER has HCV. The use of disposable gloves and needles is now taken for granted, as is the use of autoclaves for sterilizing all equipment that gets re-used. Constant vigilance is still needed to make sure universal precautions are followed.
- In addition, those born before 1975 should look at HCV Risk Factors for Everyone (below), as these also apply to them.

HCV Risk Factors for Everyone:
- Shared recreational drug equipment (contaminated needles, syringes, pipes, cookers, straws, etc.) can transmit HCV among users. Transmission occurs not only by piercing the skin, but also via fragile mucous passages which may contain open cuts and sores (lesions).
- Sexual transmission is extremely rare in the general population, though it is more common among MSM (men who have sex with men), those co-infected with HIV or other infections, those who have unprotected sex with multiple partners, and those who engage in rough sex in which blood, bloody semen, or bloody mucus could be transmitted.
It is theoretically possible to transmit HCV sexually when an HCV+ woman is menstruating (although this also requires that her partner have some lesion or cut for the menstrual fluid to enter). PROTECTION: If in doubt, use a condom! For oral sex when lesions are present, consider using a latex dental dam, or simply a sheet of plastic wrap.
- Mother-to-child transmission during childbirth occurs in about 5% of births to HCV+ women. Because of the high spontaneous clearance rate during the first year of life, children of HCV-infected mothers should not be tested until 18 months or later. Transmission during breastfeeding has not been demonstrated, though in case of breast infection or lesions, precautions should be taken.
- Needle-stick injuries and blood-related accidents occur primarily among medical/dental professionals, emergency personnel, and those who clean or handle medical waste. Immediate reporting is essential!
- Traumatic sports or combat injuries involving direct transfer of blood from one party to another can result in HCV transmission among athletes, veterans, and victims of crime.
- RARE but POSSIBLE: Danger of HCV transmission exists throughout the beauty (including body art) industry, particularly via invasive practices such as tattoos, piercing and other body art, manicures, pedicures, etc. Over the last couple of decades, practitioners have gradually adopted universal precautions similar to those used by medical/dental professionals. Beauty industry professionals have generally been better at observing proper precautions than amateurs who work out of their homes or in prison. Note that all body art tools and equipment must be disposable or autoclaved, including ink and ink bottles, templates, etc. Think before you ink!
- RARE but POSSIBLE: Sharing of personal objects such as razors, nail clippers or scissors, toothbrushes, and Waterpiks can spread HCV if a contaminated object enters through a break in the skin or mucous membrane.
- RARE but POSSIBLE: Dialysis (especially long-term), colonoscopy, and other medical procedures are still occasionally implicated in transmitting HCV if components of equipment are not properly sterilized.

Groups which the Public Health Agency of Canada considers at higher risk of HCV than the general population*

8 out of 1000 people in Canada’s general population are HCV+. Some groups of people are known to have a higher rate of HCV than this. This information should not be used to discriminate against members of these groups, only to encourage members to be particularly cautious and to consider getting tested. Successful HCV treatment does not prevent someone from contracting HCV again. It can take several weeks for an HCV test to show positive results.
- 690 out of 1000 current IV drug users are HCV+.
- 476 out of 1000 former IV drug users are HCV+ (including one-time users, even 40+ years ago).
- 398 out of 1000 hemophiliacs are HCV+.
- 280 out of 1000 inmates in federal prisons are HCV+.
- 250-300 out of 1000 HIV+ people are also HCV+ (this statistic from the World Health Organization).
- 50 out of 1000 MSM (men who have sex with men) are HCV+.
- 50 out of 1000 street-involved youth are HCV+.
- 30 out of 1000 aboriginal group members are HCV+.
- Some immigrant populations come from countries with extremely high rates of HCV. These include parts of South America, Africa (especially Central and Northeast Africa), and Asia. In these countries the disease has generally been spread through medical procedures and mass vaccinations.
- In general, men are at about 50% higher risk than women, and people born 1945-1975 (particularly 1950-1965) are at somewhat higher risk than those older or younger.

* Information from Hepatitis C in Canada: 2005-2010 Surveillance Report. Centre for Communicable Diseases and Infection Control, Infectious Disease Prevention and Control Branch, Public Health Agency of Canada; 2011.
Forget melting glaciers, acidifying oceans and changing weather patterns: climate change is now going after goats. New research has found that climate change is causing mountain goats living in the Alps to shrink. The study, which was published Tuesday in the journal Frontiers in Zoology, found that adolescent Alpine chamois mountain goats are significantly smaller than their peers were 30 years ago, weighing about 25 percent less than goats in the 1980s did. The researchers called this change in body mass over 3 decades “striking.” They also said the shrinking “appears to be strongly linked” with increased temperatures in the growing season of the goats’ Alpine habitats. The study noted that climate change has been linked to changes in body mass of other species before. But in those situations, the mass change was typically due to a change in the amount of food available or in the timing in which food was available — changes in bud burst timing in the spring, for instance. That wasn’t the case in the goats’ situation, however. “Instead, our results provide support for our second putative driver: that climate change could be directly affecting chamois behaviour or physiology, limiting their ability to acquire resources,” the report states. The researchers note that in ungulates, a group of mammals that includes goats, cows, horses and other hoofed animals (though whales and dolphins are also sometimes included), “behavioural changes, such as allocating less time to foraging, play an important role in thermoregulation.” This means that, to avoid getting too hot, the chamois goats don’t forage for food as much during the hottest parts of the day, and don’t forage as much in general when the temperature remains high throughout the day. If chamois eat less during warmer growing seasons, they aren’t likely to reach the same body mass as goats who lived through cooler growing seasons. 
“Higher daily temperatures during spring and summer may have led to juvenile chamois spending more time resting and less time foraging than in the past, reducing their ability to store energy reserves and invest in growth,” the report states. As National Geographic points out, the goats’ smaller size may help them better withstand hotter summers, simply because they have a greater surface area to body mass ratio. However, a lower body mass may also mean the goats aren’t as prepared for harsh Alpine winters. As the study notes, this isn’t the first time that scientists have discovered climate change to be behind changes in animal body mass. Six species of salamander in the Appalachian Mountains have been growing shorter over the last 50 or so years, with salamanders living in the southernmost sites that researchers visited showing the most shrinkage. And shrinking sea ice is in turn leading to smaller polar bears — the lack of sea ice means polar bears can’t hunt like they used to, and end up spending less time eating and gaining weight.
All of us need to follow a healthy, balanced diet which is low in fat. Fat is very high in calories, with each gram of fat providing more than twice as many calories as a gram of protein or carbohydrate. Eating too much fat can lead to you taking in more calories than your body needs, causing weight gain, which can affect your diabetes control and overall health. The type of fat is important too. Having too much saturated fat in your diet can cause high levels of what’s known as ‘bad cholesterol’ (low-density lipoprotein or LDL), which increases the risk of cardiovascular disease (CVD). People with diabetes are at increased risk of CVD, so it’s even more important to make healthier food choices.

Should I avoid fat completely?

Fat plays a very important role in the body, so you need to include a small amount of it in your diet. Fat in our body fulfils a wide range of functions, which include:
- supplying energy for cells
- providing essential fatty acids that your body can't make
- transporting fat-soluble vitamins (A, D, E and K)
- providing a protective layer around vital organs
- being necessary in the production of hormones.

However, fats are high in calories, so it’s important to limit the amount you use – especially if you’re trying to manage your weight. Next time you’re cooking or shopping, have a look at the nutritional label to see what types of fats are in the product you’re buying. The main types of fat found in our food are saturated and unsaturated, and most foods will have a combination of these. All of us need to cut saturated fat and use unsaturated fats and oils, such as rapeseed or olive oil, as these types are better for your heart.

Saturated fat is present in higher amounts in animal products and in processed foods, such as:
- meat products and poultry
- processed foods like pastries, cakes and biscuits.

Saturated fats increase the amount of bad cholesterol (LDL) in the body.
LDL transports cholesterol from the liver to the cells, and too much LDL cholesterol can lead to a build-up of fatty material in the artery walls, which increases the risk of CVD.

There are two types of unsaturated fats: monounsaturated and polyunsaturated. They can help to maintain the ‘good cholesterol’ (high-density lipoprotein or HDL) in the body. HDL carries cholesterol away from the cells and back to the liver, where it’s either broken down or passed out of the body as a waste product. Monounsaturated fats are present in higher amounts in olive oil, rapeseed oil and avocado.

Omega 6 and Omega 3 fatty acids

Polyunsaturated fats are further divided into Omega 6 and Omega 3 fatty acids. Most dietary polyunsaturated fat is in the form of Omega 6, found in sunflower, safflower, corn, groundnut and soya oils. Oily fish, such as mackerel, sardines, trout and pilchards, is a good source of Omega 3 oils.

Trans fatty acids or trans fats

Trans fatty acids have a similar effect to saturated fats, in that they increase the amount of LDL in the body, but they also lower the amount of HDL. Trans fats are found in small amounts in milk, cheese, beef and lamb. Trans fats are also produced when ordinary oils are heated to fry foods at a very high temperature, which is why takeaway foods tend to be high in trans fats. The main issue arises because they are also created by the food manufacturing industry using a chemical process known as hydrogenation, which hardens vegetable oil into solid or semi-solid fats. These artificially produced trans fats are found in significant quantities in margarine and manufactured foods that contain partially hydrogenated fat. Many manufacturers have now reduced the amount of trans fats in their products, thanks to a long-running movement to remove them from manufactured foods.

All fats contain the same amount of calories and contribute equally to weight gain. So, whichever fat you choose to use, make sure that you limit the amount.
Cholesterol is a fatty, wax-like substance and is vital for the normal function of the body. It’s mainly made in the liver, but it can also be found in some foods. Cholesterol found naturally in some foods has very little influence on blood cholesterol levels. Foods that contain high amounts of dietary cholesterol, such as liver, egg yolk and shellfish, can be included in the diet; the key is to cook them without fat, or with small amounts of unsaturated fat.

Follow these tips to help you reduce your fat intake – especially when it comes to cutting the amount of saturated fat you eat:
- Use skimmed or semi-skimmed milk and other low-fat dairy products.
- Choose lean cuts of meat and trim any visible fat.
- Remove fat and skin from poultry.
- Swap saturated fats, such as butter, ghee, lard or coconut oil, for small amounts of unsaturated fats and oils like rapeseed, sunflower or olive oil and spreads.
- Choose lower-fat cooking methods, such as grilling, poaching and steaming, or stir-fry with a small amount of oil.
- Limit takeaway foods. Some may be very high in saturated – and often trans – fats.
- Spray oils are brilliant for saving calories – some are as low as 1 kcal per spray.
- Always read the food label – this can tell you how much fat and saturated fat is in the product. Opt for foods that have more green or amber traffic lights to help make healthier choices.

Originally published in Diabetes Balance magazine – become a Diabetes UK member and get your copy.
2018 - 2019 Course Content & Assessment

In Year 10, students will evaluate the outcomes of the First World War, and how peace was established and then lost. Likewise, the policies of Mussolini’s Italy and Hitler’s Germany are examined in relation to the failure of the League of Nations and the outbreak of the Second World War. With the defeat of Hitler, students move on to consider the emergence of the Cold War from 1945 to 1955. In Year 11, students will explore how two countries responded to the social, economic and political issues of the inter-war years. The USA: having entered the First World War, America opted for a policy of isolation. During this period, America experienced boom, technological change, economic collapse and war. In contrast, students evaluate how totalitarianism was established in Germany in 1933 after the failure of Weimar democracy. In the summer term of Year 10, students will complete their controlled assessment.

In GCSE History students will gain the opportunity to study a diverse range of topics spanning the last 1000 years. These include:

Section A - Germany 1890-1945. How the Nazis came to power and what it was like to live in Nazi Germany.
Section B - Conflict and Tension in Asia 1950-75. This includes looking at how America was defeated in Vietnam.
Section C - Britain: Health and the People 1000-present day. This is the sometimes gory part of the course and will give an overview of change in Britain in the context of medical advancements.
Section D - Norman England 1066-1100. Looking at topics such as the Battle of Hastings and castle building.

GCSE History gives students the opportunity to not only discover fascinating aspects of our past but to:

Explore how the past has been represented and interpreted for different purposes.
Develop the ability to ask questions and to investigate the past.

Organise and communicate their knowledge and views in a variety of ways.

Apply their historical knowledge to the present so that they can fulfil their role as responsible citizens.

"I took History because I wanted to know about the events that made our world what it is today. I enjoy lessons as they are well structured and not just copying out of a textbook. We often have class discussions, watch clips that bring the topics to life and work in groups to complete our work. I also value the fact that I am not just learning about the past but am developing transferable skills such as how to structure an argument, essay writing and the analysis of sources. I wasn’t 100% certain about taking the subject in Year 9 but am glad that I did." Sam Josephs - Tudor House

Entry Criteria & Progression Route A: GCSE Grade 6 in History
Thermal conductivity is the property of a material to conduct heat. Heat transfer occurs at a lower rate across materials of low thermal conductivity than across materials of high thermal conductivity, which is why materials of low thermal conductivity are used as thermal insulation. The thermal conductivity of a material may depend on temperature.

The thermal conductivity of refractories is a property required for selecting their thermal transmission characteristics. Users select refractories to provide specified conditions of heat loss and cold-face temperature, without exceeding the temperature limitation of the refractory. This test method establishes the testing for thermal conductivity of refractories using a calorimeter. The procedure requires a large thermal gradient and steady-state conditions, and the results are based upon a mean temperature. The data from this test method are suitable for specification acceptance and for the design of multi-layer refractory construction.

Hot Wire Method

The hot wire technique is a transient method for measuring thermal conductivity. A thin platinum wire is placed between two appropriately prepared 9-inch bricks of the same density. Heat generated by current applied to the wire is conducted away from the wire at a rate dependent on the thermal conductivity of the material.

Fulfilling the directive of our founder and benefactor, the Edward Orton Jr. Ceramic Foundation deploys its profits to support studies and research at the university level to promote and advance the science of materials processing.
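In the ideal line-source model behind the hot wire technique, the temperature near the wire rises linearly with the logarithm of time, and the slope of that rise is inversely proportional to the thermal conductivity of the surrounding material. The sketch below illustrates this relation only; it is not the reference procedure of any standard, and the function name and the sample readings are illustrative assumptions.

```python
import math

def hot_wire_conductivity(q_per_m, t1, T1, t2, T2):
    """Estimate thermal conductivity k in W/(m*K) from the ideal
    line-source solution:

        k = q / (4 * pi) * ln(t2 / t1) / (T2 - T1)

    where q_per_m is the heater power per unit wire length (W/m), and
    (t1, T1), (t2, T2) are two time (s) / temperature (K or degC)
    readings taken in the linear ln(t) region of the heating curve."""
    return q_per_m / (4 * math.pi) * math.log(t2 / t1) / (T2 - T1)

# Made-up example: a 20 W/m heater warms the wire from 25.0 to 27.0 degC
# between t = 10 s and t = 100 s; k comes out at about 1.83 W/(m*K).
k = hot_wire_conductivity(20.0, 10.0, 25.0, 100.0, 27.0)
print(round(k, 3))
```

In practice the slope is fitted over many readings rather than just two, and deviations from linearity in ln(t) signal that boundary effects or contact resistance are corrupting the measurement.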
The permafrost may be about to spring an unwelcome surprise, with Arctic soils thought to be thawing faster than anyone had predicted. This threatens to release vast quantities of frozen methane into the atmosphere and transform the northern landscape. One-fourth of all the land in the northern half of the globe is defined as permafrost. This long-frozen soil is home to the detritus of life over many thousands of years: the remains of plants, animals and microbes. The permanently frozen soils of the region hold, so far in a harmless state, 1,600 billion tonnes of carbon: twice as much as exists in the atmosphere. And as the Arctic warms, this could release ever-greater volumes of a potent greenhouse gas, to accelerate global warming still further, and the consequent collapse of the soil, the flooding and the landslides could change not just the habitat but even the contours of the high latitudes. “We are watching this sleeping giant wake up right in front of our eyes,” said Merritt Turetsky, an ecologist at the University of Guelph in Canada. “We work in areas where permafrost contains a lot of ice, and our field sites are being destroyed by abrupt collapse of this ice, not gradually over decades, but very quickly over months to years.” And Miriam Jones, of the US Geological Survey, said: “This abrupt thaw is changing forested ecosystems to thaw lakes and wetlands, resulting in a wholesale transformation of the landscape that not only impacts carbon feedbacks to climate but is also altering wildlife habitat and damaging infrastructure.” The two scientists are among 14 researchers who argue in the journal Nature that the thaw is happening far faster than anyone had predicted. The Arctic is warming at a rate faster than almost anywhere else on Earth.
So far the thaw affects less than one-fifth of the entire permafrost, but even this relatively small area has the potential to double what climate scientists call “feedback” – the release of hitherto stored greenhouse gases to fuel yet faster warming. It is the latest in a series of increasingly urgent warnings about the rate of change in the Arctic. Stable climate patterns are maintained by stable temperatures. As the polar north warms twice as fast as the average for the rest of the world, the all-important temperature difference between the tropics and the polar regions shrinks, which begins to accelerate the advance of spring and delay the next freeze, bringing weather extremes and ever higher sea level rise which could soon start to exact a toll on human economies on an unprecedented scale. Researchers have been warning for years about the consequences of thaw and the release of ever more carbon into the greenhouse atmosphere. But it is only in recent months that climate scientists have begun to see the effect of ice melt at depth upon the soils that – for now – support Arctic roads, buildings and pipelines, as well as a huge natural ecosystem of plants and animals adapted by thousands of years of evolution to long winters and brief flowering summers.

Goal in jeopardy

Put simply: 195 nations met in Paris in 2015 and agreed to contain average global warming to “well below” 2°C above the long-term level for most of human history. Accelerating thaw in the Arctic puts that goal at risk. The researchers call for better and more reliable observation of change in the region, more investment in on-the-ground measurement, more information about the extent of carbon emissions from the soils, better models of global change in the region, and better reporting of change. “We can’t prevent abrupt thawing of the permafrost, but we can try to forecast where and when it is likely to happen, to enable decision makers and communities to protect people and resources”, the scientists write.
“Reducing global emissions might be the surest way to slow further release of permafrost carbon into the atmosphere. Let’s keep that carbon where it belongs – safely frozen in the stunning soils of the north.”

This story was published with permission from Climate News Network.
What is Geothermal?

Geothermal is the natural heat of the Earth. The temperature at the Earth’s centre is estimated to be 5,500°C – almost as hot as the surface of the Sun. This heat is derived from the original formation of the planet and from the decay of radioactive elements in the Earth’s crust. It is transferred to the subsurface by conduction and convection.

For centuries, geothermal springs have been used for bathing, heating and cooking. But only in the early 20th century did people start to consider the heat from inside the Earth as a practical source of energy with huge potential. Geothermal energy is now used to produce electricity, to heat and cool buildings, and for other industrial purposes such as grain and lumber drying, pulp and paper processing, fruit and vegetable cultivation, soil warming and many others. Exploitable geothermal resources are found throughout the world and are currently utilized in 83 countries. Only a small fraction of the geothermal resource has been used so far, and there is ample room for development in both electricity generation and direct use applications. Geothermal represents a promising energy source to satisfy growing energy needs.
Mountains are powerful symbols in American literature. In the Bible, the mountains are sacred. They are a symbol of divinity and holiness. Mountains are closer to God, which makes them special. When Moses has a vision of God and receives the Ten Commandments, it is on top of Mount Sinai. When Moses goes up to Mount Sinai: “The Lord descended to the top of Mount Sinai and called Moses to the top of the mountain. So Moses went up and the Lord said to him, ‘Go down and warn the people so they do not force their way through to see the Lord and many of them perish. Even the priests, who approach the Lord, must consecrate themselves, or the Lord will break out against them.’” (King James’ Bible, Exodus 19: 19-22). The mountains present a supernatural element to the lives of the Israelites. Even the priests must consecrate themselves before approaching the Lord. The mountain provides a moment for Moses to lift up from his reality and come closer to God. In the Bible, the holiest interactions with God or the Devil are placed on the mountains. They provide opportunity and present supernatural change whenever they are mentioned.

In the “I Have a Dream” speech, the mountain is a symbol of hopes and dreams. Beyond the mountains is where freedom and hope lie. It symbolizes a spiritual, mental, and emotional type of liberation. When delivering his speech, Dr. King says, “This is our hope. This is the faith that I go back to the South with. With this faith we will be able to hew out of the mountain of despair a stone of hope. With this faith we will be able to transform the jangling discords of our nation into a beautiful symphony of brotherhood.” Martin Luther King uses the idea of mountains to express a climb over and beyond oppression, discrimination, and racism. King prophesies about the mountains, viewing them as a heavenly place where any issue can be challenged. Nothing is impossible once you reach the mountains – and that is King’s goal.
He wants people to know that with faith, they can move mountains and change their fate. King uses this metaphor of “moving mountains” to show how, with faith and strength, people of all races can come together and create a force to move a mountain. People look up to the mountains as a symbol of infinite possibility. When Emerson discusses mountains and the transparent, transcendental eye, the mountaintop symbolizes great trials and tribulations – the seemingly important things in life, which, in his opinion, are given too much attention. According to Emerson, “Nobody trips over mountains. It is the small pebble that causes you to stumble. Pass all the pebbles in your path and you will find you have crossed the mountain. The mind does not create what it perceives, any more than the eye creates the rose.” Mountains symbolize the bigger things in life – the more important things. Emerson expresses that the most difficult obstacles in life are the little things that occur. He believes that once people are able to see and overcome the little tribulations in life, they ultimately complete the biggest one. The pebbles in life are often unexpected – they come and go without being seen. Emerson believes that once people have the ability to absorb nature and gain awareness, they will be able to surpass the physical characteristics of life. The mountaintop symbolizes nature and a person’s ability to go beyond it and enter reality. Once you surpass the mountaintop, you have reached a state of enlightenment and transcendentalism. In the famous 1960s song “Ain’t No Mountain High Enough,” mountains symbolize obstacles. During the time when this song was released, schools were being integrated in the U.S., and many white people were against this. The song is the story of a young man losing the love of his life and dealing with racism as he tries to weave his way through life. The song evokes a meaningful message opposing segregation and racism during the Civil Rights Movement. 
The song forges a deep connection with those who faced racial discrimination and segregation in America, and it champions peace and love. The song uses love, and the young man’s dedication to it, to symbolize a force powerful enough to defeat racism. Once the mountain is overcome, those fighting for equality have won.
A controllable transistor engineered from a single phosphorus atom, shown here at the center of an image from a computer model, sits in a channel in a silicon crystal. The smallest transistor ever built — in fact, the smallest transistor that can be built — has been created using a single phosphorus atom by an international team of researchers at the University of New South Wales, Purdue University and the University of Melbourne. The single-atom device was described Sunday (Feb. 19) in a paper in the journal Nature Nanotechnology. Michelle Simmons, group leader and director of the ARC Centre for Quantum Computation and Communication at the University of New South Wales, says the development is less about improving current technology than about building future technology. “This is a beautiful demonstration of controlling matter at the atomic scale to make a real device,” Simmons says. “Fifty years ago when the first transistor was developed, no one could have predicted the role that computers would play in our society today. As we transition to atomic-scale devices, we are now entering a new paradigm where quantum mechanics promises a similar technological disruption. It is the promise of this future technology that makes this present development so exciting.” The same research team announced in January that it had developed a wire of phosphorus and silicon — just one atom tall and four atoms wide — that behaved like copper wire. Simulations of the atomic transistor to model its behavior were conducted at Purdue using nanoHUB technology, an online community resource site for researchers in computational nanotechnology. Gerhard Klimeck, who directed the Purdue group that ran the simulations, says this is an important development because it shows how small electronic components can be engineered. “To me, this is the physical limit of Moore’s Law,” Klimeck says. 
“We can’t make it smaller than this.” Although definitions can vary, simply stated, Moore’s Law holds that the number of transistors that can be placed on a processor will double approximately every 18 months. The latest Intel chip, the “Sandy Bridge,” uses a 32-nanometer manufacturing process to place 2.3 billion transistors on a single processor. A single phosphorus atom, by comparison, is just 0.1 nanometers across, which would significantly reduce the size of processors made using this technique, although it may be many years before single-atom processors actually are manufactured. The single-atom transistor does have one serious limitation: It must be kept very cold, at least as cold as liquid nitrogen, or minus 321 degrees Fahrenheit (minus 196 Celsius). “The atom sits in a well or channel, and for it to operate as a transistor the electrons must stay in that channel,” Klimeck says. “At higher temperatures, the electrons move more and go outside of the channel. For this atom to act like a metal you have to confine the electrons to the channel.

“If someone develops a technique to contain the electrons, this technique could be used to build a computer that would work at room temperature. But this is a fundamental question for this technology.” Although single atoms serving as transistors have been observed before, this is the first time a single-atom transistor has been controllably engineered with atomic precision. The structure even has markers that allow researchers to attach contacts and apply a voltage, says Martin Fuechsle, a researcher at the University of New South Wales and lead author on the journal paper. “The thing that is unique about what we have done is that we have, with atomic precision, positioned this individual atom within our device,” Fuechsle says. Simmons says this control is the key step in making a single-atom device. 
“By achieving the placement of a single atom, we have, at the same time, developed a technique that will allow us to place several of these single-atom devices toward the goal of developing a scalable system.” The single-atom transistor could lead the way to building a quantum computer that works by controlling the electrons and thereby the quantum information, or qubits. Some scientists, however, have doubts that such a device can ever be built. “Whilst this result is a major milestone in scalable silicon quantum computing, it does not answer the question of whether quantum computing is possible or not,” Simmons says. “The answer to this lies in whether quantum coherence can be controlled over large numbers of qubits. The technique we have developed is potentially scalable, using the same materials as the silicon industry, but more time is needed to realize this goal.” Klimeck says despite the hurdles, the single-atom transistor is an important development. “This opens eyes because it is a device that behaves like metal in silicon. This will lead to many more discoveries.” The research project spanned the globe and was the result of many years of effort. “When I established this program 10 years ago, many people thought it was impossible with too many technical hurdles. However, on reading into the literature I could not see any practical reason why it would not be possible,” Simmons says. “Brute determination and systematic studies were necessary — as well as having many outstanding students and postdoctoral researchers who have worked on the project.” Klimeck notes that modern collaboration and community-building tools such as nanoHUB played an important role. “This was a trans-Pacific collaboration that came about through the community created in nanoHUB. Now Purdue graduate students spend time studying at the University of New South Wales, and their students travel to Purdue to learn more about nanotechnology. 
It has been a rewarding collaboration, both for the scientific discoveries and for the personal relationships that were formed.” (Credit: Purdue University image)
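The doubling arithmetic behind the Moore's Law discussion above is easy to make concrete. Below is a minimal sketch; the 18-month doubling period and the 2.3-billion transistor count come from the article, while the function name and the projection itself are purely illustrative, not a manufacturing forecast:

```python
def projected_transistors(start_count: float, months: float, doubling_months: float = 18.0) -> float:
    """Project a transistor count assuming a fixed doubling period (Moore's Law)."""
    return start_count * 2 ** (months / doubling_months)

# Starting from the 2.3 billion transistors cited for the "Sandy Bridge" chip,
# two doubling periods (36 months) would quadruple the count:
baseline = 2.3e9
print(projected_transistors(baseline, months=36))  # → 9200000000.0
```

The exponential form `start * 2**(t / T)` is just the compact way of writing "double every T months", which is why the 36-month projection comes out to exactly four times the baseline.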
To decide whether and where to move in the body, cells must read chemical signals in their environment. Individual cells do not act alone during this process, two new studies on mouse mammary tissue show. Instead, the cells make decisions collectively after exchanging information about the chemical messages they are receiving. "Cells talk to nearby cells and compare notes before they make a move," says Ilya Nemenman, a theoretical biophysicist at Emory University and a co-author of both studies, published by the Proceedings of the National Academy of Sciences (PNAS). The co-authors also include scientists from Johns Hopkins, Yale and Purdue. The researchers discovered that the cell communication process works much like the message relay in the telephone game. "Each cell only talks to its neighbor," Nemenman explains. "A cell in position one only talks to a cell in position two. So position one needs to communicate with position two in order to get information from the cell in position three." And like the telephone game – where a line of people whisper a message to the person next to them – the original message becomes distorted as it travels down the line. The researchers found that, for the cells in their experiments, the message becomes garbled by a factor of about three after passing through about four cells. "We built a mathematical model for this linear relay of cellular information and derived a formula for its best possible accuracy," Nemenman says. "Directed cell migration is important in processes from cancer to the development of organs and tissues. Other researchers can apply our model beyond the mouse mammary gland and analyze similar phenomena in a wide variety of healthy and diseased systems." Since at least the 1970s, and pivotal work by Howard Berg and Ed Purcell, scientists have been trying to understand in detail how cells decide to take an action based on chemical cues. 
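The telephone-game picture lends itself to a quick numerical sketch. The toy simulation below is my own illustration, not the authors' published model: each cell-to-cell hand-off adds independent noise, so the variance of the relayed message grows linearly with the number of hops, and the typical distortion grows like the square root of the hop count.

```python
import random

def relay(value: float, hops: int, noise_sd: float, rng: random.Random) -> float:
    """Pass a value cell-to-cell; each hand-off adds independent Gaussian noise."""
    for _ in range(hops):
        value += rng.gauss(0.0, noise_sd)
    return value

def mean_abs_error(hops: int, trials: int = 20000, noise_sd: float = 1.0) -> float:
    """Average distortion of a message (true value 0.0) after a given number of hops."""
    rng = random.Random(7)  # fixed seed so the sketch is reproducible
    return sum(abs(relay(0.0, hops, noise_sd, rng)) for _ in range(trials)) / trials

# The message degrades as it travels down the line of cells:
for hops in (1, 4, 16):
    print(hops, round(mean_abs_error(hops), 2))
```

Quadrupling the hop count roughly doubles the average distortion in this sketch, which is the square-root scaling characteristic of accumulating independent noise.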
Every cell in a body has the same genome, but cells can do different things and go in different directions because they measure different chemical signals in their environment. Those chemical signals are made up of molecules that move around randomly. "Cells can sense not just the precise concentration of a chemical signal, but concentration differences," Nemenman says. "That's very important because in order to know which direction to move, a cell has to know in which direction the concentration of the chemical signal is higher. Cells sense this gradient and it gives them a reference for the direction in which to move and grow." Berg and Purcell worked out the best possible margin of error – the detection limit – for such gradient sensing. During the subsequent 30 years, researchers established that many different cells, in many different organisms, work at this detection limit. Living cells can sense chemicals better than any man-made device. It was not known, however, that cells can sense signals and make movement decisions collectively. "Previous research has typically focused on cultured cells," Nemenman says. "And when you culture cells, the first thing to go away is cell-to-cell interaction. The cells are no longer a functioning tissue, but a culture of individual cells, so it's difficult to study many collective effects." The first PNAS paper drew from three-dimensional microfluidic techniques from the Yale University lab of Andre Levchenko, a biomedical engineer who studies how cells navigate; research on mouse mammary tissue at the Johns Hopkins lab of Andrew Ewald, a biologist focused on the cellular mechanisms of cancer; and the quantification methods of Nemenman, who studies the physics of biological systems, and Andrew Mugler, a former post-doctoral fellow in Nemenman's lab at Emory who now has his own research group at Purdue. The 3D microfluidics allowed the researchers to experiment with functional organoids, or clumps of cells. 
The method does not disrupt the interaction of the cells. The results showed that epidermal growth factor, or EGF, is the signal that these cells track, and that the cells were not making decisions about which way to move as individuals, but collectively. "The clumps of cells, working collectively, could detect insanely small differences in concentration gradients - such as 498 molecules of EGF versus 502 molecules - on different sides of one cell," Nemenman says. "That accuracy is way better than the best possible margin of error determined by Berg and Purcell of about plus or minus 20. Even at these small concentration gradients, the organoids start reshaping and moving toward the higher concentration. These cells are not just optimal gradient detectors. They seem super optimal, defying the laws of nature." Collective cell communication boosts their detection accuracy, turning a line of about four cells into a single, super-accurate measurement unit. In the second PNAS paper, Nemenman, Mugler and Levchenko looked at the limits to the cells' precision of collective gradient sensing not just spatially, but over time. "We hypothesized that if the cells kept on communicating with one another over hours or days, and kept on accumulating information, that might expand the accuracy further than four cells across," Nemenman says. "Surprisingly, however, this was not the case. We found that there is always a limit of how far information can travel without being garbled in these cellular systems." Together, the two papers offer a detailed model for collective cellular gradient sensing, verified by experiments in mouse mammary organoids. The collective model expands the classic Berg-Purcell results for the best accuracy of an individual cell, which stood for almost forty years. The new formula quantifies the additional advantages and limitations on the accuracy coming from the cells working collectively. "Our findings are not just intellectually important. 
They provide new ways to study many normal and abnormal developmental processes," Nemenman says.
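The accuracy boost from pooling that the studies describe rests on a familiar statistical effect: averaging N independent noisy readings shrinks the typical error by a factor of the square root of N. The sketch below uses made-up numbers (the noise level and group size are illustrative choices, not the paper's parameters) to show a four-cell group roughly halving a single cell's measurement error.

```python
import random

def estimate_error(group_size: int, noise_sd: float = 20.0, trials: int = 5000) -> float:
    """Standard deviation of a group's averaged reading of a true value of 0.0."""
    rng = random.Random(42)  # fixed seed so the sketch is reproducible
    estimates = []
    for _ in range(trials):
        readings = [rng.gauss(0.0, noise_sd) for _ in range(group_size)]
        estimates.append(sum(readings) / group_size)
    mean = sum(estimates) / trials
    return (sum((e - mean) ** 2 for e in estimates) / trials) ** 0.5

solo = estimate_error(1)    # one cell sensing on its own
group = estimate_error(4)   # four cells pooling their readings
print(round(solo, 1), round(group, 1))  # the group's error is roughly half
```

Pooling four readings cuts the standard error by about a factor of two (the square root of four), which is the sense in which a communicating group of cells can outperform any one of its members.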
My favorite examples: - Let’s eat Grandma! - Cathy finds inspiration in cooking her family and her cats. How many punctuation marks are there? Quick! Rattle off as many as you can. (Hint: I have already used about half of them.) In English grammar, there are fourteen different punctuation marks that I think of as the “primary” punctuation marks – the period, comma, question mark, exclamation point, colon, semicolon, dash, hyphen, parentheses, brackets, braces, ellipses, quotation marks, and apostrophes. These are the marks that help us with sentence structure, clarify meaning, and distinguish between different sets of ideas. Putting all of these into smaller groups, we can look at them like this: The Full Stop – the period, the question mark, the exclamation mark. All three of these punctuation marks indicate the end of a sentence. Periods end declarative sentences. Do I really need to explain when to use a question mark? Exclamation points should be self-explanatory! The Pause – the comma, the semicolon, the colon. The comma was rated the punctuation mark people were most grateful for, according to Grammarly, back in 2012. I covered a few common mistakes people make with commas in this post “A Comment About Commas” and later in this post “Comma Drama.” The semicolon continues to be the punctuation mark that befuddles people the most. I offered my take on the humble semicolon here. You use colons before a list or an explanation. Perhaps I will cover this more in-depth in a future post. Connections and breaks – dashes and hyphens. Dashes come in two forms: the en dash (–) and the em dash (—). En dashes are used to connect numbers or to connect elements of a compound adjective (Abraham Lincoln was president of the United States from 1861–1865). 
An em dash (so called because the dash is about the width of the letter M) can be used to separate clauses, introduce a phrase for added emphasis, or – what I’m most guilty of – indicate a break in thought or sentence structure. Hyphens create compound words, particularly modifiers (“She was a well-known cook.”). Hyphens are also used with prefixes (“I wonder if they had any kind of pre-nuptial agreement?”). Next week, we’ll conclude with a quick overview of brackets, parentheses, braces, ellipses, quotation marks, and apostrophes. Have a grammar rule you’d like me to explore? Drop me a line at [email protected]. Author Catherine Spicer is a manager of customer content services at PR Newswire and has never been inspired to cook her family or her cats.
Blending Fiction and Nonfiction to Improve Comprehension and Writing Skills

Grades: 3 – 5
Lesson Plan Type: Standard Lesson
Estimated Time: Five 60- to 90-minute sessions
Queensbury, New York

Students will:
- Use fiction to begin discussion of a content area topic and to generate questions about the topic
- Use nonfiction to answer the questions they have about the content area subject
- Conduct Internet research to further explore the subject and to resolve any questions left unanswered by the nonfiction text
- Demonstrate their knowledge of the subject and an understanding of the basic elements of fiction and nonfiction:
–By contributing to the creation of a class chart
–By writing an original piece that features both narrative and expository elements

The following outline is approximate. You may need to adjust the number of days depending on the length of texts and how much in-class time you can devote to Internet use. Additional time may also be needed to allow for student writing. Depending on the characteristics of your classroom and the abilities of your students, this writing time will vary.

1. Using the Know-Want-Learn (K-W-L) method, begin class with a discussion of what the students already know about the topic and generate a list on the board or projector. [Teacher's note: Use of the K-W-L method is explained in the article cited in the From Theory to Practice section of this lesson. 
The article also discusses several other strategies that work well with paired texts (i.e., Venn diagramming, directed reading-thinking activity, webbing, and activating prior knowledge). You might use any of these strategies as a substitute or supplement in this lesson.]

2. Ask students what questions they have about the topic. What would they like to learn? List students' ideas and questions in a separate column.

3. Depending on grade level and text availability, read the fiction text aloud or have your students read the text silently. If you are reading aloud, stop along the way to fill in a third column of new facts and information that address the questions asked in the second column. If the students have finished reading silently, generate the third column as a class.

4. Are there any questions in the second column still unanswered? In addition, ask students if they have any new questions to add to the list. At this stage, there should still be a lot left to discover. Explain that fiction is enjoyable, but it may not be the best source for gathering factual information. Let them know that in the next class session they will be turning to a nonfiction text to further explore the topic.

1. Review the second column generated by the class in the first session. Is there anything else that can be added to the want-to-learn list before going forward?

2. Depending on grade level and text availability, read the nonfiction text aloud or have your students read the text silently. Add to the third column as you did during the previous session.

3. Again, review the second column. Some unanswered questions are likely to still be on the list. If all the questions have been answered, ask the students if reading the nonfiction text has generated yet more questions. Continue to expand on the second column accordingly. 
4. Explain that whereas nonfiction is often better than fiction at answering questions about a topic, not all questions will be answered by one nonfiction text. Provide content materials and Internet access for students to explore other nonfiction sources about the topic. Have them take notes as they find new facts and information to add to the K-W-L chart.

5. Have students report back as a class what additional information they have found to complete the third column.

1. In a whole-class discussion, chart the genre elements of fiction and nonfiction, having students cite examples from the texts and the Internet. Chart responses so that fiction elements are on one side and nonfiction elements are on the opposite side, leaving space in the middle for a chart that will blend elements from both sides. (See Sample Genre Chart as a guide.)

2. Prompt students about the possibility of texts having both narrative and expository elements. Ask for textual examples from the paired set and, as a class, discuss the types of writing that blend fiction and nonfiction elements: Are they familiar with realistic fiction? The Magic School Bus series? Can a letter be both narrative and expository? A diary? A comic strip?

3. Return to the empty column between the fiction and nonfiction charts. What elements, fiction and nonfiction, can be found in the types of texts just discussed? Chart these elements in the middle column.

4. Explain to students that their task will be to create an imaginative work (narrative) that includes facts about the given topic (expository), blending the genre elements of fiction and nonfiction as you have just done in the middle column. Distribute the criteria checklist (or a modified version of your own) to students, so that they are fully aware of what is expected in the final product.

5. Encourage students to choose the type of writing piece they will create and later share. 
Be sure to let students know what form this sharing will take (several suggestions are given in Session 5) and how this portion of the lesson will be assessed.

Provide a brief time for students to talk out their ideas and share story starters and elements of their writing with the class or with partners. Share possible ideas to get them started.

1. Provide time for students to independently complete their writing pieces, offering support and guidance throughout the process.

2. Encourage students who are writing in one of the following applicable formats to create a final product using an interactive from this website:

3. If more time is needed, have students complete their writing as a homework assignment.

For this part of the lesson, a variety of sharing techniques could be used, such as whole-class presentations, small-group presentations, or partner reads. Other real-life applications could include sharing in the school newspaper, with a younger class, at an author's share or authors' tea, with family members, or in a library display. The timeframe is also flexible, not necessarily limited to one day. For example, sharing could be worked in throughout the school day, as in a daily morning activity.

- Observation, anecdotal notes based on class discussions, and student handouts can be used to assess:
How well students use the fiction text to generate ideas and questions about the subject
How well students approach the nonfiction text as a means of gaining knowledge on the subject
How well students use the Internet as a nonfiction source to answer their questions
- Note students' contribution to and understanding of the class-generated K-W-L and genre charts.
- Evaluate the writing assignment based on how well students blend elements of the two genres and demonstrate knowledge gained on the subject. (The criteria checklist can aid in this assessment.) 
- If sharing takes some form of oral presentation, your classroom or state standards rubric may be applied here.
Women played an important role for the United States in World War II. Although they did not enter combat as soldiers, many women helped by serving in the armed forces. They also helped to keep the country together on the home front. Some of the most common roles for women in the Revolutionary War were cooks, maids, laundresses, water bearers and seamstresses for the army. This was the first time women held these jobs in the military, since these positions were usually reserved for male soldiers. Men fought bravely during the Revolutionary War to defeat the British and form a new nation, but women were also essential to the war effort on both sides. There were many women playing important roles in the Civil War, including nurses, spies, soldiers, abolitionists, civil rights advocates and promoters of women's suffrage. Most women were engaged in supplying the troops with food, clothing, medical supplies, and even money through fundraising. During World War II there were 11,868 enlisted women and 978 female officers in the Coast Guard Women's Reserve. In 1947, the Women's Reserve of the Coast Guard was inactivated; World War II was over, and there was no campaign to encourage women to enlist as SPARs. Even though society did not easily permit females to participate in the Revolutionary War, women did great things by giving to their country in many different ways. Many women had a significant role in the formation of this country during the Revolution, when formal politics did not include women. In other non-combat roles, eighty-eight women were captured and held as POWs (prisoners of war). In 1948, Congress passed the Women's Armed Services Integration Act, granting women permanent status in the military, subject to military authority and regulations and entitled to veterans' benefits. 
Women served in the Civil War as nurses, spies, and vivandières. Explore these stories with students through a video clip and close examination of two dresses and a woman's uniform. This lesson plan (which includes background information, guided analysis questions, and full-color primary sources) was produced to accompany the Smithsonian exhibition The Price of Freedom: Americans at War. Roles for women in the U.S. Army have expanded: Pentagon rules dictate that women may not be assigned to ground combat units, but the nature of the Iraq War has led to a blurring of that distinction. Women's work would be vital to the British war effort in World War Two, so much so that it soon became compulsory (women had to do it by law). Early in 1941, Ernest Bevin, the government minister for labour, declared that 'one million wives' were 'wanted for war work'. Through images in its collections, one website explores women's role in war work during the Second World War, including the life of a young woman entering the world of work for the first time during WWII and the Land Army. The role of women in combat positions has been debated throughout American history, even though women have been on the front lines since the Revolutionary War. The ninth role in which women were found during the Revolution and the Civil War is that of warrior. Women who served as warriors usually gave up their gender identity in order to fight. Sometimes they disguised themselves and kept their true gender secret – in some cases, for decades after the war was over. Women played many roles in the Civil War. They did not sit idly by waiting for the men in their lives to come home from the battlefield. Many women supported the war effort as nurses and aides, while others took a more upfront approach and secretly enlisted in the army or served as spies and smugglers. 
Many women wanted to play an active role in the war, and hundreds of voluntary women's auxiliary and paramilitary organisations had been formed by 1940. A shortage of male recruits forced the military to establish female branches in 1941 and 1942. The Civil War marked a turning point for women and their role in society. Before the Civil War, work for most women was in the home. Women were expected to cook and clean to make the home comfortable for the family and presentable for guests. But many women wanted to take a more active role in the war effort. Inspired by the work of Florence Nightingale and her fellow nurses in the Crimean War, they tried to find a way to contribute. Revolutionary War women were able to play a significant role precisely because men looked down upon them. Because women were considered too simple to understand complex military strategy during the American Revolution, men spoke freely around them. Despite the fact that today's women serve on the front lines as soldiers, engage in diplomacy and lead nations, war is often still considered a man's game. Through this historical shaping of the female gender, it's clear that World War II got women's foot in the door in the industrial world and changed the idea of gender roles in American society. In the end, I believe that ideas about women changed dramatically during and after World War II. Professor Jo Fox considers the use of women as symbols, victims and homemakers in World War One propaganda ('Women's symbolic role in the war story', in Gail Braybon (ed.), Evidence, History and the Great War).
Paleontology and geology During the Permian, the continents were colliding to form the supercontinent Pangea. One of these collisions resulted in a mountain-building event, the Alleghenian Orogeny, that formed the modern Appalachians. The vast Carboniferous swamps of Pennsylvania dried up as the seas drained off the rising landscape. Because the Permian was primarily a time of erosion in the state, there are few outcrops of this age in Pennsylvania. However, ostracodes and a few tiny fish teeth have been recovered from Permian rocks of Greene and Washington Counties in southwestern Pennsylvania.
Lesson 2 of 7 Objective: SWBAT identify the relationships among combinations of 10. They will also use standard notation to represent addition situations. Following the established quick flash routine (http://www.cc.betterlesson.com/lesson/501226/quick-flash), show set one of cards (see resource section), then repeat with set two (see resource section). When you are done, compare the arrangements. The students are becoming proficient at making sense of quantities and their relationships in problem situations (CCSS.Math.Practice.MP2). Introducing Three Columns Advanced Preparation: You will need a die and 60 connecting cubes (2 different colors). Start by asking a student to model the game with you. Explain to the class that they will be playing a new game today called Three Columns. Tell them that they will play with a partner and work together to build three columns. Each column must be the same number of cubes high. Today we will build columns that are 10 cubes high. There is a video of my introduction of this activity to the class. This is a rather tricky task (especially the recording), and I wanted the reader to be able to see the introduction. The video is about 10 minutes, but you could watch part of it and get the gist. However, I wanted to leave the whole video for those who needed to see and hear the discussion. Students will use the recording sheet to play the game with their partner. Each teammate can fill out their own sheet (see resource entitled Three Towers) but should work on the same columns. Playing Three Columns The students will now play Three Columns with their partner. As they play, circulate and ask students how many cubes are in their column so far (an example video clip is in the resource section) and how many more they need. This will get students to start thinking about the complements of 10 (CCSS.Math.Content.1.OA.C.6). 
The students are adding within 20, demonstrating fluency for addition within 10 and using strategies such as counting on and making ten. Some will need to use the visual model to figure out how many more, some will count on, and some will just know the fact (CCSS.Math.Practice.MP2). The students are again making sense of quantities of 10 and the relationships of sums of 10. Session Wrap Up The focus for this discussion will be on how to use standard notation to represent their columns. Start by showing the students a set of three columns. Tell them that this was a set of towers that you saw today (just make up a set of towers). For this case I have blue and yellow cubes. The first column has 2 blue, then 3 yellow, and then 5 blue cubes. The second is 1 blue, 6 yellow, and 3 blue, and the third has 5 yellow and 5 blue. I then ask the students how many blue cubes were used for the first column. Once the total of blue and yellow cubes is determined (for the 1st column), ask how we could represent this on paper. Students will suggest drawing the column on paper. After it is drawn and labeled, repeat that we agree that there were 7 blue and 3 yellow. I will then review the addition notation by writing 10=7+3 on the chart paper (CCSS.Math.Practice.MP2). The students are making sense of the quantity of cubes in each column. Then review the symbols used in the equation and ask how this equation connects with the column we built. Continue doing this process with the other two columns. I have included an example (see Jack's Work in resources) of the finished student sheet. This would be a great example for leading the discussion of the use of equations.
High School Physics/Rotational Motion In a classic beginning physics demonstration, the instructor stands on a swiveling platform and holds a spinning bicycle wheel at arm's length. The wheel is vertical and the instructor is standing still. The instructor then tilts the wheel toward horizontal. This causes the instructor to start spinning slowly on the platform. Bringing the wheel back to vertical and tilting it the other way makes the instructor spin the other way. Why? Imagine the wheel as a collection of small particles. Particles want to move in a straight line. In order for them to move in a circle there must be a force accelerating the particles toward the center of the circle (acceleration is a change in speed or direction or both; in this case just direction). This force is ultimately provided by bonds between the atoms in the wheel and spokes. What happens when the instructor turns the spinning wheel from vertical to horizontal? Consider a particle somewhere on the wheel. If the wheel weren't being tilted, it would be accelerated around the circle as always. But since the wheel is tilting, it now has to follow a new path. A change in path is an acceleration, which in turn requires force (from the instructor's hands, transmitted through the spokes to the rim). Now consider the particle opposite the first particle on the wheel. It also has to change path, but in the opposite direction. Since the forces on opposite sides are in opposite directions, the result is a torque. Each pair of opposite particles on the wheel contributes to the torque that causes the instructor to turn on the platform. The particles that are farther away have a longer lever arm in relation to the instructor, so the net torque is non-zero (Torque = Force × Lever-arm). Tilting the wheel the other direction produces torque in the opposite direction, slowing the instructor's spin and eventually reversing it.
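The pair-wise argument above can be checked with a few lines of arithmetic. The following sketch uses made-up numbers (the force and lever arm are purely illustrative): opposite particles need oppositely directed forces at oppositely signed lever arms, so their torque contributions add instead of canceling.

```python
# Minimal numerical sketch of the pair-wise torque argument.
# Two particles on opposite sides of a tilting wheel each need a
# force to change their path; the forces point in opposite
# directions, so their torques about the platform axis add up.

def pair_torque(force, lever_arm):
    """Torque contributed by one particle: Torque = Force * Lever-arm."""
    return force * lever_arm

# Hypothetical numbers for illustration only.
f = 2.0  # N, force needed to redirect the first particle
r = 0.3  # m, signed lever arm of the first particle

tau_first = pair_torque(+f, +r)     # first particle
tau_opposite = pair_torque(-f, -r)  # opposite particle: force and arm both flip sign
net = tau_first + tau_opposite
print(net)  # the two contributions add rather than cancel: 1.2 N*m
```

The sign flip on both the force and the lever arm is the whole point: each product stays positive, so every opposite pair pushes the platform the same way.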
Semiconductor makers have supplied ever-more-efficient chips. But performance limits may soon be reached, partly because of the difficulty of making transistors small enough. Conventional transistors switch on or off when a burst of current passes through. As the transistor gets smaller, so does the level of current required. For the smallest of the small, labs have made transistors that switch in response to a single electron, but such nanodevices have required cryogenic cooling. Now Princeton electrical engineer Stephen Chou has created a single-electron memory that works at room temperature. Manufacture of such devices, however, is several years off, awaiting greater understanding of the chips' unusual properties.
Artificial intelligence (AI) is the fodder of science fiction past, but reality may be catching up. It is a big subject of research in today's universities and corporations. Some AI systems are designed to handle specific problems and tasks. Others take a more general approach. General intelligence, sometimes known as strong AI, usually involves research in reasoning, planning, learning, communication, perception, and movement. Integrating these areas creates what we know as AI systems. However, an MIT Media Lab research team led by Catherine Havasi is looking to add another area of significance into the mix. The researchers have developed a database called ConceptNet for common sense. Many AI systems currently use methods based on keywords and statistics; computer systems can pick up and understand basic facts using these methods but have a difficult time understanding basic human communication. For example, if someone says, "I want some chips right now," humans will often interpret "chips" as meaning potato chips. But "chips" may easily confuse a computer system. Are we talking about potato chips? Computer chips? Poker chips? The idea behind ConceptNet is to give systems and technical devices a better understanding of the human language. The researchers wrote in a paper that they formed a crowdsourced database meant to give ConceptNet an optimized approach for making "context-based inferences over real-world texts." It also helps computers grasp "unknown or novel concepts by employing structural analogies to situate them within what is already known" by the computer. Robert Sloan and Stellan Ohlsson of the University of Illinois at Chicago recently tested the system. They used the Wechsler Preschool and Primary Scale of Intelligence, a test commonly used to measure a child's IQ. The test focused on the verbal categories, including information, vocabulary, and word reasoning.
For a question such as "What would you wear on your feet?", ConceptNet will search its database for the words most commonly associated with "wear" and "feet." Overall, ConceptNet's verbal IQ was equal to that of a 4-year-old child. The MIT researchers said in a press release that they can improve ConceptNet's performance by using better algorithms. Furthermore, they think the latest version, which includes 17 million statements (the one tested by UIC had 1 million), can achieve an even higher score. A system like ConceptNet could help engineers create extremely beneficial applications that would otherwise be impossible.
Brush fire in Tanzania. (Photo by R. Butler) FIRES IN THE RAINFOREST By Rhett Butler | Last updated July 27, 2012 Rainforests are increasingly susceptible to forest fires today due to degradation from selective logging, fragmentation, and agricultural activities. Scientists are concerned that much of the Amazon is at risk of burning, and that in the future we could see fires similar to those that so damaged Indonesia in recent El Niño years. Today most rainforest fires originate in nearby pasturelands and agricultural fields, where fires are used for land clearing and crop maintenance. Every year, during the burning season, tens of thousands of fires are set by land speculators, ranchers, plantation owners, and poor farmers to clear bush and forest. Under dry conditions these agricultural fires can easily spread into neighboring rainforest. Low-level fires in the rainforest are not unusual. Even in "virgin" forests, fires may burn across thousands of acres of forest during dry years. The distinction between these fires and the fires that forests are increasingly experiencing today is the frequency of occurrence and level of intensity. Natural fires in the Amazon generally do little more than burn dry leaf litter and small seedlings. Typically these fires have flames that only reach a few inches in height and have virtually no impact on tall trees or the canopy itself. However, in passing, the fire sets the path for recurrent fires and subsequent forest loss. Once-burned forests are twice as likely to be deforested as unburned forests, largely because the initial fires—however small—thin out the canopy, allowing more desiccating sunlight to reach the forest floor. Previously burned forests, in addition to having more combustible material, are also often adjacent to fire-maintained pastures and therefore are frequently exposed to sources of ignition. Subsequent fires burn with increased velocity and intensity and cause higher tree mortality.
Fire intervals of less than 20 years may eliminate all trees in the forest stand. Under "normal" rainfall and humidity conditions most of these fires are extinguished by the arrival of the rainy season or monsoon. Usually virgin forests serve as a sort of humid barrier that prevents the spread of agricultural fires. However, under dry conditions—such as those of an El Niño year—fires can spread from pastures and fields into primary forest. In the 1990s, 90 percent of burning in the Brazilian Amazon occurred in El Niño years. The unusually strong El Niño of 1997-98 contributed to massive forest fires. In the Amazon, humidity in the Basin was 45-55 percent lower than usual, and the Woods Hole Research Center estimated that 400,000 square kilometers of forest could go up in smoke during the burning season. In early 1998, some of these fears materialized as 13,200 square miles (34,000 square km) of Roraima state in northern Brazil burned. The fires, started by subsistence farmers, spread rapidly across the dry savanna and advanced into rainforest usually too humid to burn. As many as 3,800 square miles (10,000 sq km) of intact rainforest were damaged or destroyed by these fires. The government firefighting efforts had virtually no effect, and it was only freak heavy showers that extinguished the flames. Dry conditions returned in 2005, when the Amazon experienced the worst drought in recorded history. As rivers went dry and communities were left stranded, tens of thousands of fires burned. That calamity was topped in 2010 by an even worse drought that affected a million square kilometers of Amazon forest. In both of these droughts, dry conditions were linked not to El Niño but to warm temperatures in the tropical Atlantic Ocean, raising fears that climate change could worsen droughts in the Amazon. Besides destroying the rainforest ecosystem and killing wildlife, these fires create other environmental problems.
The "burnings" release thousands of tons of carbon into the atmosphere, and the smoke produced causes local airport closings and hospitalizations for smoke inhalation. These fires are significant sources of greenhouse gases. For example, in a four-month period (July-October) in 1987, about 19,300 square miles (50,000 sq. km) of the Brazilian Amazon burned in the states of Para, Rondonia, Mato Grosso, and Acre. The burning produced carbon dioxide containing more than 500 million tons of carbon, 44 million tons of carbon monoxide, and millions of tons of other particles and nitrogen oxides. The tropical forest fires that have made headlines of late will only worsen as more forest is degraded and the area of previously burned forest expands. A study by IMAZON (the Institute for Man and Nature in the Amazon) found that for every acre burned or cleared which shows up on satellite, at least one acre burns undetected under the forest canopy. These leaf-litter fires can burn for months with warm temperatures and little rain, and subsequent fires in these previously burned areas are more intense and destructive. Other studies have warned that climate change could significantly dry forests in the Amazon Basin and Africa, increasing their risk of burning. In light of this potential scenario, and to better understand the impact of extended drought in the Amazon and the resilience of the forest to fire, the Woods Hole Research Center and NASA have conducted an extensive series of large-scale experiments in the Brazilian rainforest. Separate findings from NASA suggest that heavy smoke from Amazon forest fires inhibits cloud formation and reduces rainfall. This conclusion, combined with other NASA studies suggesting that deforestation can affect regional climate, means that the Amazon rainforest may be on the verge of a significant environmental transformation—one that will increasingly leave the ecosystem vulnerable to fire.
Selection of information sources: Forest fires in the Amazon have had extensive coverage in popular media and academic literature in recent years. The Woods Hole Research Center gives an excellent background of fire in the Amazon in its RisQue98 (Risco de Queimada, or "Risk of Burning," in Amazonia - 1998). Several papers in scientific journals examine the extent of fires and how they move from agricultural lands into intact rainforest, including: Nepstad, D.C. et al., "Large-scale impoverishment of Amazonian forests by logging and fire," Nature Vol. 398 (505-508), 8-April-99; Cochrane, M.A. et al., "Positive feedbacks in the fire dynamic of closed canopy tropical forests," Science Vol. 284 (1832-1835), 11-June-99; Cochrane, M.A., "Forest Fires in the Brazilian Amazon," Conservation Biology Vol. 12 No. 5 (949-950), Oct. 1998. Some press articles on the subject include: Simons, M., "Vast Amazon Fires, Man-Made, Linked to Global Warming," New York Times, 8/12/88; Margolis, M., "Thousands of Amazon Acres Burning," Washington Post, 9/8/88; Wilson, E.O., The Diversity of Life, Cambridge, Mass.: Belknap Press, 1992; Schemo, D.J., "Amazon Jungle Going Up in Smoke Again," New York Times, 10/13/95; Christie, M., "The Amazon Is Burning Again, Officials Say," Reuters, 10/3/97; Donn, J., "Report: Amazon rain forest fading," Associated Press, 4/8/99; and Couzin, J., "The forest still burns," U.S. News & World Report, April 19, 1999.
5.1. Simulation models for ground-motion generation using urn games To start out, we will look at two connected random processes in the form of a simple game. At first glance it seems to have little to do with seismic hazard. Looking closer, however, it will turn out that it contains features which make it quite useful as a very simple simulation model for seismically generated ground motion. It consists of five urns, which are shown in Figs. 5.1.1 and 5.1.2. Each urn contains 100 balls. Figure 5.1.1 Urn with 2 red, 8 green, 22 blue and 68 gray balls. Make it interactive by allowing a ball to be drawn randomly. The urn in Fig. 5.1.1 contains 2 red, 8 green, 22 blue and 68 gray balls. In addition, there are four urns with numerical labels shown in Fig. 5.1.2, one urn for each of the colors present in the urn in Fig. 5.1.1. Figure 5.1.2 Four urns with 100 balls, each carrying a numerical label. The colors of the balls correspond to the colors of the balls in Fig. 5.1.1. Make it interactive by allowing balls to be drawn randomly. Each of the balls in the four urns in Fig. 5.1.2 is labeled with a numerical value according to the scheme shown in Fig. 5.1.3 and listed in Table 5.1.1. Figure 5.1.3 Histograms showing the numbers of particular numerical labels of the balls in the four urns in Fig. 5.1.2. The colors of the histograms correspond to the colors of the balls in Fig. 5.1.1. Table 5.1.1. Numbers of particular numerical labels of the balls in the four urns in Fig. 5.1.2. Let us assume that we have some automatic mechanism to randomly draw balls from any of the urns. We can now combine random draws from the urn in Fig. 5.1.1 and the urns in Fig. 5.1.2 in a single experiment. We first have a ball drawn from the urn in Fig. 5.1.1, followed by a draw from that urn in Fig. 5.1.2 which contains the balls in the color which we obtained from the draw from the urn in Fig. 5.1.1.
So if the first draw results in a blue ball, the second draw will be done from that urn in Fig. 5.1.2 which contains only blue balls. The ball obtained in the second draw carries a numerical label, which is the result of our experiment. If we repeat this experiment a thousand times, we call this a game with a thousand draws. Fig. 5.1.4 shows the resulting histograms of the values of the numerical labels for four games with a thousand draws each. You will note that the resulting histogram changes slightly from game to game, but not by much. Potential extra figure???: Stripped-down version of what is now Fig. 5.6 to illustrate the generation of a single histogram. Figure 5.1.4 Histograms of numerical labels obtained from playing the urn game with a thousand draws from the urns shown in Fig. 5.1.1 and Fig. 5.1.2 four times. In other words, the number of times which a particular numerical label will be obtained in a game of a thousand draws seems to be a more or less stable feature of the experimental setup. The histograms indicate that the majority of the numerical labels obtained show values slightly below 1.0, with their number decreasing with increasing value of the numerical label. One might wonder how this can be related to seismic hazard. Well, if we think of the colors in the top urn as representing earthquake magnitudes, as illustrated in Fig. 5.1.5 (the magnitude concept is discussed in more detail in chapter 3), and the numerical labels of the colored balls in the lower urns as representing ground acceleration, the connection becomes rather obvious. Each single experiment consists of two random processes: one from which a magnitude value is generated, followed by a second, dependent one from which a ground motion value is generated. As in nature, the distributions of ground motion values shift to larger values with increasing magnitudes (see Fig. 5.1.3 and Table 5.1.1).
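The two-stage game can be simulated in a few lines of code. Since the label counts of Table 5.1.1 are not reproduced here, the per-color label values in the sketch below are placeholders chosen only to mimic the qualitative behavior (larger "magnitudes" shift labels upward); only the color proportions (2 red, 8 green, 22 blue, 68 gray) come from Fig. 5.1.1.

```python
import random
from collections import Counter

# Two-stage urn game: first draw a color (Fig. 5.1.1 proportions),
# then draw a numerical label from that color's urn (Fig. 5.1.2).
# The label values below are illustrative placeholders, NOT the
# actual contents of Table 5.1.1.
COLOR_COUNTS = {"red": 2, "green": 8, "blue": 22, "gray": 68}
LABELS = {
    "gray": [0.5, 0.7, 0.9],   # smallest "magnitude": small labels
    "blue": [0.8, 1.0, 1.2],
    "green": [1.1, 1.4, 1.8],
    "red": [1.6, 2.2, 3.0],    # largest "magnitude": large labels
}

def one_draw(rng):
    """One experiment: color draw followed by a dependent label draw."""
    colors = list(COLOR_COUNTS)
    weights = [COLOR_COUNTS[c] for c in colors]
    color = rng.choices(colors, weights=weights, k=1)[0]
    return rng.choice(LABELS[color])

def play_game(n_draws, seed=0):
    """A 'game' is n_draws repeated experiments."""
    rng = random.Random(seed)
    return [one_draw(rng) for _ in range(n_draws)]

results = play_game(1000)
histogram = Counter(results)  # the "Counts" histogram, as in Fig. 5.1.4
print(histogram.most_common(3))
```

Playing the game with different seeds reproduces the observation above: each run's histogram differs slightly, but its overall shape is a stable feature of the setup.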
For this example to hold, the distance of the earthquakes to the site of interest is assumed to be constant. This is of course a simplification, but it is acceptable for the sake of the following arguments. Figure 5.1.5 The same urn as in Fig. 5.1.1, but with numerical labels. Mathematically this defines a random variable. Make it interactive by allowing a ball to be drawn randomly. We can make the connection to earthquake-generated ground shaking even stronger if we add a temporal element to the simulation, by considering that the experiments are conducted sequentially in time. In this case, we can define a quantity which will later be referred to as occurrence or activity rate (in units of number of earthquakes per time), which is simply the total number of values observed divided by the total duration over which we assume the experiments to take place. There are several ways in use to describe the results of this experiment in different types of histograms, e.g.:
- by binning the particular numerical values and counting the number in each bin as in Fig. 5.1.4 (Counts);
- by summing the counts from the bin with the smallest numerical values to the largest one (CumulativeCount);
- by counting the number of values in the bins above a selected value (SurvivalCount);
- by normalizing the counts over all bins so that they sum up to 1 (Probability). For large numbers of draws in a random experiment this can be related to the so-called frequentist definition of probability, which is discussed in chapter 2;
- by normalizing the product of the counts times the bin width in which they occur over all bins so that they sum up to 1 (PDF). This way, the area under the histogram is normalized to one. This can be related to the so-called probability density function (PDF) of a continuous distribution, which is also discussed in chapter 2;
- by cumulating the product of the counts times the bin width in which they occur over all bins from the bin with the smallest to the bin with the largest value (CDF).
This can be related to the so-called cumulative distribution function (CDF), which is also discussed in chapter 2; and finally by taking 1 – CDF. This function is sometimes called the survival function (SF). All of the histogram types above are related to each other through simple operations and/or a single normalization constant, as can be seen in detail in the following interactive figure (Fig. 5.1.6). It allows the reader to display the outcome of the urn game for a varying number of experiments and for the mentioned histogram types. In addition, it allows the reader to choose between an autoscaled plot or a plot with a given maximum value. Figure 5.1.6 Displaying the outcome of the urn game for a varying number of single experiments. Note: Make yourself familiar with the properties of the different histogram types and their mutual relations. Make restart button. Replace label "number of games" by "number of draws".
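The mutual relations among these histogram types can be verified directly in code. The sketch below uses arbitrary sample values and bins chosen purely for illustration, and checks the normalization properties stated above: Probability sums to one, the PDF area sums to one, the CDF ends at one, and SF = 1 – CDF.

```python
# Relations among the histogram types: Counts -> CumulativeCount /
# SurvivalCount -> Probability -> PDF -> CDF -> SF, differing only
# by simple sums and a normalization constant.
# Sample values and bin edges are arbitrary illustrations.

values = [0.6, 0.7, 0.7, 0.9, 1.0, 1.0, 1.1, 1.4, 1.8, 2.2]
edges = [0.5, 1.0, 1.5, 2.0, 2.5]  # bin edges
width = 0.5                         # constant bin width in this example

counts = [sum(lo <= v < hi for v in values)
          for lo, hi in zip(edges[:-1], edges[1:])]            # Counts
n = sum(counts)

cumulative = [sum(counts[:i + 1]) for i in range(len(counts))]  # CumulativeCount
survival_count = [n - c for c in cumulative]                    # SurvivalCount
probability = [c / n for c in counts]                           # Probability
pdf = [p / width for p in probability]                          # PDF
cdf = [sum(probability[:i + 1]) for i in range(len(probability))]  # CDF
sf = [1 - c for c in cdf]                                       # SF = 1 - CDF

assert abs(sum(probability) - 1) < 1e-12          # Probability sums to 1
assert abs(sum(p * width for p in pdf) - 1) < 1e-12  # PDF area is 1
assert abs(cdf[-1] - 1) < 1e-12                   # CDF ends at 1
print(counts, cdf, sf)
```

Every quantity is derived from the same Counts vector, which is the point made above: a single normalization constant and a few running sums connect all of the displays in Fig. 5.1.6.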
Urbanization is a historical phenomenon closely linked to changes in technology and to some extent science that also influences and is influenced by ethical ideals. Both technology and science develop with more intensity in cities, in part promoted by urban models of human behavior, which in turn may be reinforced by notions of technological instrumentalism and scientific objectivity. Urbanization, Ancient and Modern The term urbanization refers to the increasing concentration of people in cities. The first cities appeared after the development of plant cultivation and animal domestication. Formerly nomadic tribes settled in fertile river valleys and became increasingly dependent on agriculture. The ancient cities of Mesopotamia were established between about 4000 and 3000 b.c.e. The cities of ancient Egypt appeared around 3300 b.c.e. and were closely linked to the increasing power of the pharaohs, who, as both secular and spiritual leaders, could use their power to create new cities. By about 2500 b.c.e. urban societies had developed in other parts of the world, such as the Indus River Valley in India and Pakistan and the Yellow River Valley of China. Subsequent urban developments of a classical form occurred in Athens, Rome, and other parts of the eastern Mediterranean. Despite urbanization in these ancient forms, most people continued to live outside cities. The modern city is linked closely to the development of industrialization, especially in Europe and North America. Before the Industrial Revolution cities were primarily centers for trade, political power, and religious authority. The rise of the machine in the late 1700s in both Europe and North America led to new city forms characterized by larger numbers of people living in areas with greater population density. As machines were developed and manufacturing increased, people began to migrate to cities from rural areas as laborers and consumers.
Technological change is not exclusive to the post–Industrial Revolution era. What distinguished that historical period was the unprecedented rapid increase in the number, kind, and effects of technological innovation and associated increases in urbanization. About 3 percent of the world population lived in urban areas in 1800, a number that rose to 13 percent in 1900 and more than 40 percent in 2000. The Modern City The rise of the modern city had significant economic, social, and cultural impacts. Urbanization changed many of the traditional institutions, values, and human experiences that characterized preindustrial cities. For example, while cities grew in importance in economic terms, they also became centers of poverty. Cities also brought together people of different cultures with different worldviews, traditions, and values. In addition, the concentration of people in urban areas created a host of ethical issues related to living together closely. In 1905 the German social theorist Max Weber (1864–1920) observed that industrialization represented a fundamental process of social change that was embedded in the development of rationality and scientific knowledge. According to Weber, "demystification" challenged traditional religious ideas by providing an alternative basis of knowledge. Weber concluded that this brought about a notable decline in the acceptance of the spiritual explanations that are at the heart of religious beliefs and practices. As a result human activities that previously had been dominated by religious authority were controlled by an appeal to scientific and rational thinking. In 1965 the Harvard professor of divinity Harvey Cox observed a close interconnectedness between the rise of urban civilization and the collapse of traditional religion. 
"Urbanization," Cox stated, "constitutes a massive change in the way men live together, and became possible in its contemporary form only with scientific and technological advances which sprang from the wreckage of traditional views" (Cox 1965, p. 1). Cox argued that that epochal change in worldviews resulted directly from the changing nature and character of cities. As cities became more cosmopolitan and as technology fostered greater interconnectedness through travel and communications, religion, Cox argued, lost its centrality in the hearts and minds of people. Nonreligious perspectives on the human condition replaced Christian religious norms and standards for conduct. Urbanization in a Global Context The patterns of economic, social, and cultural changes caused by rapid urbanization in the nineteenth and twentieth centuries are observable in modern cities. In general terms the world population is becoming predominantly urban. Industrialized or more developed countries were more than 75 percent urbanized in 2000, compared with 39 percent for less developed countries. To a certain extent economic gain and higher incomes are associated with urbanization. The expansion of production, communication, knowledge, and trade helped raise standards of living in the more developed countries. In developing countries the urbanization experience has been vastly different: Industrialization accounts for a much lower proportion of the national economy, and these countries also have significantly lower income per capita. The concern in developing countries is the rate at which increases in the numbers of people living in urban areas are occurring. According to the United Nations Center for Human Settlements (2001), 40 percent of the population of developing countries was living in urban areas in 2001. By 2020 that number is expected to increase to 52 percent. 
In 2001 three-quarters of global population growth occurred in urban areas in developing countries, posing significant problems associated with rapid growth in the parts of the world least capable of accommodating it. Most of the projected growth will occur in megacities: cities with a population of ten million or more. These areas already face increasing difficulties in providing their inhabitants with adequate water, food, shelter, employment, sanitation, and basic services. Poverty has become increasingly urbanized as more people migrate from rural to urban areas. The United Nations Center for Human Settlements (2001) estimates that more than a billion people live in crowded slums in inner cities or in squatter settlements on the periphery of large urbanized areas. Not only does this strain local conditions, but the rapid growth and concentration of poverty in urban areas in the developing world also often leads to adverse consequences for national economies. Although modern cities are part of a highly interdependent global network fostered by new information, communication, and transportation technologies, one significant characteristic of cities in the twenty-first century is the growth of disparities between the rich and the poor. The United Nations calls this the "divided city," and it is characteristic of urban areas in both developed and developing countries. Some researchers predict a new wave of rapid technological change in urban areas driven by information and communications technologies, which reinforce urban polarization and cause further erosion of traditional economic, social, and cultural activities. New technologies, they observe, reinforce and extend the reach of the economically and culturally powerful. Those who already have access to new technologies and are most able to benefit from their potential will use them to their advantage to assure their place as the principal beneficiaries of the "information revolution."
Another phenomenon closely linked to the modern city, especially in North America and parts of Europe, is suburbanization. Driven by advances in transportation and communication technologies, sprawl patterns of urbanization from central cities to suburbs began to emerge after 1945. By 1960, 60 million people in the United States were living in suburbs, compared with only 45 million in cities. Since 1980 suburban populations have grown ten times faster than central-city populations. In response to the problems associated with the rapid rise of modern urbanization, urban planning emerged in the United States around the end of the nineteenth century. Although examples of planned cities date back several thousand years, urban planning developed from demands for social reform in both England and the United States. In the early twenty-first century urban planners are part of a distinct occupational skill group that applies a specified body of knowledge and techniques addressing land use, city functions, and a wide variety of other urban characteristics. M. ANN HOWARD Callahan, Daniel, ed. (1966). The Secular City Debate. New York: Macmillan. This collection of essays, critiques, and book reviews by a diverse group of authors was compiled in response to Harvard professor Harvey Cox's book The Secular City. The editor notes that the response to Cox's work was surprising, with more than 225,000 copies sold in the first printing. An afterword by professor Cox is included. Cox, Harvey. (1965). The Secular City, rev. edition. New York: Macmillan. Explores the theological significance of the modern city and includes an interpretation of the relationship between urbanization and modern secular society. Cox contrasts the modern city (technopolis) with traditional forms of human communities, particularly with regard to the influence of religion and religious institutions. Ginsburg, Norton. (1966). "The City and Modernization."
In Modernization, ed. Myron Weiner. New York: Basic Books. Ginsburg's essay reviews the history of cities and their historical functions, and describes the distinctly different characteristics of the modern city. Hetzler, Stanley A. (1969). Technological Growth and Social Change. London: Routledge & Kegan Paul. This collection of essays was originally prepared as lectures for Forum, an educational radio program sponsored by Voice of America. Twenty-five scholars explore how modernization, defined as technological development, occurs and how it can be accelerated. Lefebvre, Henri. (2003). The Urban Revolution, trans. Robert Bononno. Minneapolis: University of Minnesota Press. Henri Lefebvre (1901–1991) was an influential French philosopher and sociologist. This work was originally published in 1970 but not translated into English until this edition. Highly theoretical, connecting urban research with social theory and philosophy, Lefebvre's work marked a new view of "urbanism." Lefebvre posited that "urban society" describes modern societies more aptly than the term "postindustrial society," arguing that all forms of human settlement have been altered by industrialization and urbanization. He noted that agricultural activity is inextricably linked to industrialization: even traditional forms of village life around the globe have been permanently transformed by industrial production and consumption. Mumford, Lewis. (1961). The City in History: Its Origins, Its Transformations, and Its Prospects. New York: Harcourt Brace Jovanovich. An extensive exploration of the history of the city, from ancient Mesopotamia and Egypt through the modern era. In addition to a historical overview, Mumford critiques many historical urban forms. He is especially critical of the modern manifestation of urban communities and of the negative influence of capitalism, which results in resource depletion. 
The annotated bibliography is extensive and very helpful to anyone interested in in-depth material. Prud'Homme, Remy. (1989). "New Trends in Cities of the World." In Cities in a Global Society, ed. Richard V. Knight and Gary Gappert. Newbury Park, CA: Sage. Collection of essays that details the phenomenon of rapid global urbanization and the forces influencing the extraordinary changes in modern human settlements. Prud'Homme observes that cities will have increasingly distinct "global" rather than "national" roles to play as economic forces become more global. United Nations Center for Human Settlements. (2001). Cities in a Globalizing World. London: Earthscan. Compilation of work by more than eighty international researchers. The report reviews the status of the world's cities and summarizes global trends that will impact the cities of the future. Especially notable is the observation regarding the increased isolation of the urban poor in both developed and developing countries. "Urbanization." Encyclopedia of Science, Technology, and Ethics. Encyclopedia.com. Retrieved September 24, 2018 from http://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/urbanization
Negative symptoms are thoughts, feelings, or behaviors normally present that are absent or diminished in a person with a mental disorder. Examples of negative symptoms are social withdrawal, apathy (decreased motivation), poverty of speech (brief replies), inability to experience pleasure (anhedonia), limited emotional expression, and deficits in attention control. The term "negative symptoms" is used specifically in describing schizophrenia, but sometimes more generally in reference to disorders such as depression or dementia. These symptoms may be associated with altered brainwave activity or brain damage. They can be more difficult to diagnose than positive symptoms (hallucinations, delusions, bizarre behavior, or formal thought disorder) because they represent a diminution of normal, desirable activity rather than the presence of undesirable or bizarre behavior. Side effects of certain medications, demoralization (loss of positive emotions such as hope or confidence, usually as the result of situations in which one feels powerless), or a lack of stimulation in one's environment can also cause negative symptoms, so these possibilities must be ruled out before attributing the symptoms to a disorder. Sandra L. Friedrich, M.A.
I figured that before I start posting pages regarding brain cancers and brain disorders, I might as well go back to the beginning and address what animal cells look like under normal circumstances. I’ll talk about the cells’ organelles here and what each of them does. What does an animal cell look like, and what are the structures within it? Animal cells have come a long way since the amoebae: for one, most creatures on Earth today are multi-cellular as opposed to amoebae, which are typically single-celled. In order to truly explain the marvel of the human body, I feel I must get in touch with my inner geek. To paraphrase the Borg from Star Trek (yes, I am a closet Trekkie), humans are like the Collective: we are trillions of cells working as one. Depending on where they are located within the body, the cells each serve a specific function, and each serves the greater whole. Additionally, each cell has several structures within it known as organelles, which each serve their own purpose: I’ll talk about them below. What do each of the organelles do? Animal cells (including human cells) are known as eukaryotic cells. What are eukaryotic cells, you ask? Well, the main difference between eukaryotic cells and prokaryotic cells (which I’ll address in Introduction to Bacteria) is that eukaryotic cells have a nucleus, whereas prokaryotic cells have a nucleoid. The cell’s DNA is contained within the nucleus, and part of it is condensed within the nucleolus (a nucleus within the nucleus, if you will). The edge of the nucleus is known as the nuclear envelope: this structure is not solid; instead it has small holes in it in order to allow the exchange of messenger RNA to and from the nucleus. More details about this process can be found in DNA and RNA: The Basics. Ribosomes are the protein manufacturers of the cell. They are responsible for translating messenger RNA that arrives from the nucleus. 
They can be found either attached to the rough endoplasmic reticulum or floating freely. This structure is one of only a couple of structures that exist within both eukaryotic and prokaryotic cells. The ribosome is actually made up of two parts: a large subunit and a small subunit, which sandwich together when proteins are being created. Mitochondria are only found within eukaryotic cells, and they are the energy producers of the cell. They take our dietary energy sources (sugars, fats and proteins) and convert them to ATP (adenosine tri-phosphate, our main energy source) through the citric acid or Krebs cycle. This cycle requires oxygen to work, which is why we cannot survive without oxygen in our air supply. There are folds within the mitochondria called cristae, which serve a similar function to villi in the small intestine or sulci and gyri within the grey matter of the brain: they increase surface area. The mitochondrial membrane, like the nuclear envelope, is selective in which molecules are allowed to pass through it. The endoplasmic reticulum is made up of membranes, and there are two separate categories of ER: smooth endoplasmic reticulum and rough endoplasmic reticulum. The difference between the two is that the rough variety has ribosomes attached to it and the smooth ER does not. The organ within the human body that contains the most rough ER is the liver, as this is where most of our protein creation takes place. Rough Endoplasmic Reticulum Protein formation may be the most well-known function of rough ER; however, it is not the only function that it serves. Another key function is its role in the production of our body’s enzymes (which are themselves proteins). Smooth Endoplasmic Reticulum Since the smooth endoplasmic reticulum does not have ribosomes, it must serve some purpose other than the production of proteins. In fact, it has several: it is involved in the metabolism of lipids (fat) and carbohydrates, and it helps to rid our body of toxins. 
A lysosome is an organelle involved in waste disposal. This is essential for keeping cells healthy, as it is involved in digesting foreign materials such as invading bacteria, recycling cellular receptors, and breaking down unneeded or damaged organelles. Material from outside the cell is brought to the lysosome through endocytosis, while the recycling of the cell’s own components is known as autophagy. Like the endoplasmic reticulum, the Golgi body is also made up of membranes. Products created in the ER, such as digestive enzymes and hormones, are transferred to the Golgi body, where they are modified and then sorted for secretion. Centrioles are structures composed of tubulin and are found mostly in animal cells. A pair of centrioles, together with the material surrounding them, is known as a centrosome. Microtubules extend across the cell and are responsible for maintaining the cell’s strength: it is similar to how flexible beams hold a tent up when it is put together: without them the tent would just collapse. The plasma membrane is the outer edge of the cell. The cytoplasm is the gel-like fluid which fills the inside of the cell, and one of its primary functions is to act as a shock absorber for the organelles. Because the cytoplasm of our cells is composed mostly of water, and the human body is composed of literally trillions of cells, we need to take in water at least every few days in order to survive, because some of our water is lost every day through our sweat as well as through our urine and stool. The cytoplasm is also the meeting place for molecules that perform the metabolic reactions within our body and a place where calcium is transferred. In the medical lab setting, the cytoplasm is very useful, as it is the secondary site (the primary being the nucleus) where diagnostic stains tend to like to hang around. In tissue biopsies, the H&E (haematoxylin and eosin) stain is used: the pink eosin stain tends to hang around the cytoplasm, whereas the purple haematoxylin attaches to the nuclei. 
The equivalent can be seen in the Gram stain when examining for bacteria: the cytoplasm tends to hold either the crystal violet–Gram’s iodine complex or the safranin (or neutral red) counterstain, making the bacteria appear either purple or reddish-pink under the microscope.
Ever since the recognition of the Neanderthals as an archaic human in the mid-nineteenth century, the fossilized bones of extinct humans have been used by paleoanthropologists to explore human origins. These bones told the story of how the earliest humans—bipedal apes, actually—first emerged in Africa some 6 to 7 million years ago. Starting about 2 million years ago, the bones revealed, as humans became anatomically and behaviorally more modern, they swept out of Africa in waves into Asia, Europe and finally the New World. Even as paleoanthropologists continued to make important discoveries—Mary Leakey’s Nutcracker Man in 1959, Don Johanson’s Lucy in 1974, and most recently Martin Pickford’s Millennium Man, to name just a few—experts in genetics were looking at the human species from a very different angle. In 1953 James Watson and Francis Crick first described the double-helix structure of DNA, the basic building block of all life. In the 1970s it was shown that humans share 98.7% of their genes with the great apes—that in fact genetically we are more closely related to chimpanzees than chimpanzees are to gorillas. And most recently the entire human genome has been mapped—we now know where each gene is located on our chromosomes. In Human Origins: What Bones and Genomes Tell Us about Ourselves, two of the world’s foremost scientists, geneticist Rob DeSalle and paleoanthropologist Ian Tattersall, show how research into the human genome confirms what fossil bones have told us about human origins. This unprecedented integration of the fossil and genomic records provides the most complete understanding possible of humanity’s place in nature, its emergence from the rest of the living world, and the evolutionary processes that have molded human populations to be what they are today. 
Human Origins serves as a companion volume to the American Museum of Natural History’s new permanent exhibit, as well as standing alone as an accessible overview of recent insights into what it means to be human. What Readers Are Saying: "Mention the search for human origins to an intelligent layperson and you most likely evoke the image of a khaki-clad paleontologist, in the mold of Richard Leakey, scrabbling for bones in an African landscape. Great progress in tracing out the human family tree has indeed been made in this fashion, but over the last few decades remarkable advances in several disciplines--molecular biology and genetics most of all--have revolutionized the way we regard the human past. . . Appropriately, the American Museum of Natural History (AMNH) recently decided that it was time to mount a new permanent exhibition on human origins, incorporating diverse fields of study. The exhibit's companion book, written by geneticist Rob DeSalle and paleoanthropologist Ian Tattersall, both on the museum's staff, is an accessible and authoritative summary of the major insights of the new synthesis."--Natural History "Popular books on human evolution are quite common, with a dozen or more published every year. However, this one is different in lots of ways. First of all, it is written by two scientists who really know the material. DeSalle and Tattersall are Curators at the American Museum of Natural History and are world authorities on molecular systematics and human evolution respectively. They also write very well so that the integration of genetic and paleontological information flows seamlessly. Finally, as the authors note, this volume is as much about philosophy as it is about science. 
In presenting the latest evidence on human evolution from both genetic and paleontological perspectives, they also confront, head-on, the role of science and the scientific method (as well as creationism and intelligent design) in the way we understand the world around us and our place in it." --The Quarterly Review of Biology “The conventional wisdom of paleoanthropology has been sometimes challenged, sometimes supported, and most often sharpened by the precision of the new molecular techniques.” --Natural History "This book is the most readable, thorough, and up-to-date summary of human evolution I know, and I envy the successful effort to integrate traditional paleoanthropology with the growing contributions of molecular anthropology. No one interested in human evolution can afford to pass it by." --Richard Klein, Professor of Anthropological Sciences, Stanford University "Through stimulating text and informative figures, DeSalle and Tattersall use their collective expertise in paleoanthropology and genomics to provide a fascinating overview of the science of human origins. The timing of this book could not be more compelling—as new technologies and scientific approaches are providing researchers unprecedented glimpses into the window of our own evolutionary history." --Eric Green, Scientific Director, National Human Genome Research Institute "A truly synergistic analysis of human evolution, smoothly integrating the strengths of both genomics and paleoanthropology into a profoundly coherent and unique account of the human evolutionary saga. 
A first-of-its-kind intellectual tour de force!" --Niles Eldredge, author of Darwin: Discovering the Tree of Life "Rob DeSalle and Ian Tattersall have teamed up to present an outstanding comprehensive review of modern human origins, incorporating recent breakthroughs in the fields of genetics, archeology, neuroscience, and paleoanthropology. The authors explain the history of the sequencing of the human genome and the methods used in both genetic and paleoanthropological studies in a manner that is accessible to a general audience but detailed enough for a specialist. They address a series of important questions such as 'What is it that distinguishes us from our closest living relative, the chimpanzee?' 'When and where did modern humans evolve?' 'What routes did they take as they migrated around the globe?' 'How have modern humans adapted, both genetically and culturally, to changing environments?' 'Why did Neanderthals become extinct?' and 'When and how did modern humans arrive in the New World?'" --Sarah Tishkoff, Associate Professor, Department of Biology, University of Maryland-Baltimore "Rob DeSalle and Ian Tattersall successfully weave together the genetic and fossil evidence for one of nature's greatest experiments: the origins of humans. In this delightful and engaging book, they present incontestable proof for human evolution and document the riveting journey that led us from a common ancestor with the African apes to the emergence of modern humans. This is more than just a companion book to the American Museum of Natural History's Spitzer Hall of Human Origins. It is an insightful and welcome argument for our own evolutionary origins. Read this book and be rewarded. Understand your ancestry as never before." --Donald C. 
Johanson, Director, Institute of Human Origins "With the help of pages and pages of colorful illustrations, DeSalle and Tattersall describe the basic principles of genetics and genomics from DNA through mutation and natural selection to phylogeny reconstruction and population movements. They place human evolution in the context of the evolution of all life forms. The final chapters focus on the uniquely human features of the brain and language, and the epilogue ponders our future. Originally written as a companion volume to accompany the new Hall of Human Origins at the American Museum of Natural History, this book is an exceptionally readable and up-to-date summary of human evolution. It is an authoritative and fun publication that will be accessible to anyone with even the faintest recollection of high school biology or any curiosity whatsoever about how we came to be the way we are." --The Quarterly Review of Biology
Major Genres in the New Testament

The Gospels are the proclamation of the ‘good news’ about Jesus, intended to establish or increase people’s faith in him. They are portraits of the life of Christ: his teachings, his actions, and his death, burial, and resurrection (i.e., Matthew, Mark, Luke, John). The Book of Acts is a partial narrative of the beginnings and growth of early Christianity. It is not a concise history of the early Church but focuses on the actions of its early primary leaders (i.e., Acts). The Letters are actual letters addressing practical and theological issues relevant to particular communities of faith in the first century (e.g., the letters of Paul). The section of New Testament biblical literature called Church Orders is a collection of instructions for the practical organization of religious communities (e.g., 1 Timothy, Titus). A Testament is a document that records a dying person’s last wishes and instructions for his or her successors; in the New Testament these instructions are given by the Apostle Paul and the Apostle Peter (e.g., 2 Timothy, 2 Peter). Homilies/Sermons are exegetical sermons that cite and interpret older biblical texts (the Old Testament) in reference to Jesus (e.g., Hebrews). The Wisdom Collection consists of general instructions on how to live an ethical Christian life (e.g., James). The Epistles are more stylized literary works in letter format; they served as ‘circular letters’ intended for broader audiences (e.g., 1 and 2 Peter). Apocalyptic Literature is a vivid symbolic narrative that “reveals” God’s views about a historical crisis in order to provide encouragement for a difficult present and hope for a better future (e.g., Revelation). The above lists are not comprehensive but include the most prominent categories of biblical literature. There are other smaller genres found within the various books. 
For example, the New Testament Gospels contain narrative literature, discourse material, and some mixed genres. Narrative genres include genealogies, narrator’s introductions, transitions and summaries, miracle stories, conflict and controversy stories, visions, reports, etc. Discourse genres include parables and allegories, hymns and prayers, laws and legal interpretations, exhortations, short individual sayings, longer speeches, discourses and monologues, etc. Mixed genres include longer narratives that contain extended dialogues and pronouncement stories. Many of these sub-genres can be further sub-divided; for example, miracles can include healings, exorcisms, restoration miracles, nature miracles, etc. Another example is the psalms, which include enthronement psalms, processional psalms, individual laments, hymns of praise, etc. Failure to take the type of literature into account may lead to a skewed interpretation of the biblical passage. Figurative language communicates truth in a symbolic way. As we have seen, our ability to understand the Bible will be based primarily on how we answer the four foundational questions: “What is the Bible? How was the Bible written? How did we get the Bible? How do we interpret the Bible?” If this introductory study is going to be successful, the basic concepts of lesson one will set the stage for a lifelong, rewarding, and fulfilling look at the Scriptures.
Science is a truly amazing subject considering that its foundation relies on questioning and understanding the natural world and all its phenomena. As humans, we naturally want to learn why things are the way they are and constantly build on our prior knowledge. With maturity, we come to understand the links science has to the real world and use our problem-solving skills unconsciously. As teachers, we need to become aware of those unconscious actions and the roadmaps used to attain knowledge and connect findings to real-world situations. We need to go back to how we learned science as children, and also apply our understanding of today’s generation and their mental capacities. If we can provide students with hands-on materials and realistic experiments, children will attain higher-level thinking and continuously question scientific reasoning. Not only do we need to provide students with a scientific basis of learning, but we must also establish an understanding that science has its limits, as do technology and engineering, which build off one another and depend on scientific findings. Science is always changing and has its restrictions. One limitation is that science is a social process that relies on people collaborating on procedures and tests and analysing the resulting findings. Publishing and sharing with the public or surrounding community is how society has acquired developmental knowledge. Considering the social nature of science, many findings are subject to biases, and some results may be overlooked. As teachers, it is our job to educate students to constantly question and ask ‘why’, as our society depends on innovation, collaboration and the cultivation of new talents.
How did scientists decide that sponges were animals when they don’t have a mouth, eyes, brains, or any organs or moving parts? The cells that make up the body of a sponge are like animal cells and not like plant cells. When a sponge dies it smells like a decaying animal. The dead brown or grey sponges found on the beach are the remaining skeletons. Live sponges are often very colourful. Only a few sponges are soft enough to be used for washing; the skeletons of most sponges feel more like sandpaper. Sponges need to be attached to something solid, such as rocks. They have small holes all over their bodies so water can pass in. Many of the cells in a sponge have tiny but long whips (flagella). The thousands of cells working together use their whips to make water flow through the sponge’s body. Other cells filter out microscopic items of food, which the cells then digest. The water exits out of larger holes. A sponge the size of a coffee mug can have more than a thousand litres of water pass through it in a day.
THE CONVENTION ON THE RIGHTS OF PERSONS WITH DISABILITIES The Right to Health Convention on the Rights of Persons with Disabilities, Article 25, Health: States Parties recognize that persons with disabilities have the right to the enjoyment of the highest attainable standard of health without discrimination on the basis of disability. States Parties shall take all appropriate measures to ensure access for persons with disabilities to health services that are gender-sensitive, including health-related rehabilitation. In particular, States Parties shall: (a) Provide persons with disabilities with the same range, quality and standard of free or affordable health care and programmes as provided to other persons, including in the area of sexual and reproductive health and population-based public health programmes; (b) Provide those health services needed by persons with disabilities specifically because of their disabilities, including early identification and intervention as appropriate, and services designed to minimize and prevent further disabilities, including among children and older persons; (c) Provide these health services as close as possible to people’s own communities, including in rural areas; (d) Require health professionals to provide care of the same quality to persons with disabilities as to others, including on the basis of free and informed consent by, inter alia, raising awareness of the human rights, dignity, autonomy and needs of persons with disabilities through training and the promulgation of ethical standards for public and private health care; (e) Prohibit discrimination against persons with disabilities in the provision of health insurance, and life insurance where such insurance is permitted by national law, which shall be provided in a fair and reasonable manner; (f) Prevent discriminatory denial of health care or health services or food and fluids on the basis of disability. 
The information contained in this chapter will enable participants to work towards the following objectives: · Understand what is meant by the right to the “highest attainable standard of health”. · Define the relationship between health and disability. · Define the distinction between health care and habilitation/rehabilitation services. · Understand and explain to others the importance of equal access to health care resources for persons with disabilities. · Understand the interrelationship between the right to health and other human rights. · Identify ways in which the right of persons with disabilities to the highest attainable standard of health has been promoted, denied, or misunderstood. · Understand the provisions on health in the Convention on the Rights of Persons with Disabilities (CRPD). GETTING STARTED: THINKING ABOUT HEALTH AS A HUMAN RIGHT What does the right to health include? Is it a right to be healthy? Is it a right to have health care services? Is it something else? We know that with every human right comes a corresponding responsibility for governments and society to ensure that this right is respected, protected, and fulfilled. But no one can guarantee the right to be free from all disease. However, societies and governments do have great control over many underlying determinants of health, including physical conditions in the environment that affect people’s health, such as public sanitation, the availability of clean water, and environmental pollution levels. In addition, societies have laws, policies, and programmes aimed at promoting and protecting human health. Every country has a health system to provide medical care and public health programmes designed to provide information about health risks, disease prevention, and healthy living. Governments are responsible for the quality and equity of national health systems. 
Furthermore, health for all people is also directly affected by other human rights, such as access to education, employment, and an adequate standard of living. Poor or uneducated people are far more likely to suffer ill-health than those with economic security and decent living conditions. These examples demonstrate how the right to health is indivisible, interdependent, and interrelated with other human rights. Violations and Barriers to the Right to Health Poverty, lack of education, poor living conditions, and other human rights issues that impact human health disproportionately affect persons with disabilities. For instance, in many countries clean water may be publicly available but not accessible to persons with disabilities. Likewise, health care is often not accessible or available to persons with disabilities on an equal basis with others because of factors like inaccessible buildings, lack of communications accommodations in the health care setting, and even denial of treatment based on a disability. Health services and important information about health are often inaccessible to persons with disabilities. For example, some countries broadcast information about HIV/AIDS education over the radio but do not provide that information in a manner that is accessible to persons who are deaf. In addition, many health clinics located in rural areas are not physically accessible to persons who use wheelchairs. Health care providers often do not provide important materials, such as consent forms or information about prescription drugs, in a manner that is accessible to persons who are blind or visually impaired. Persons with psychosocial or intellectual disabilities may be stripped of their right to make decisions related to their own health or may only receive limited information about treatment options. 
While governments are not responsible for ensuring good health, they are responsible for addressing factors in the social, economic, legal, and physical environment that impact health. Thus, health as a human rights issue is framed in terms of the “highest attainable standard of health.” In other words, people have a right to the conditions and resources that promote and facilitate a healthy life. In addition to understanding what is meant by the right to health, it is also important to understand what is meant by health. In the Preamble to its Constitution, the World Health Organization (WHO) defines health in the following broad terms: Health is a state of complete physical, mental, and social well-being and not merely the absence of disease or infirmity. The WHO also affirms the definition and importance of the right to health in the Preamble to its Constitution with the following statement: The enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being without distinction of race, political belief, economic or social condition . . . Governments have a responsibility for the health of their peoples which can be fulfilled only by the provision of adequate health and social measures. Disability and Health While it is commonly accepted that there are many issues, such as literacy and poverty level, that can adversely affect human health, disability has traditionally been viewed as inherently a health issue. In reality, persons with disabilities experience disease and illness in the same way that other people do. They can be in good health or poor health, just like anyone else. 
Some persons with disabilities may be more vulnerable to communicable illnesses, such as influenza, and it is certainly true that some disabilities have the potential to create health problems, known as “secondary conditions.” Common examples of secondary conditions include pressure sores and respiratory distress in persons with mobility impairments. It is also true that some health problems can cause permanent disabilities and/or create temporary disabling conditions. In other words, a disability can be both a cause and an effect of a health problem, or a disability can be present in a completely healthy person.

When disability is classified as a “health problem,” people think of a disability as being the same thing as an illness or disease. Therefore, the medical community is regarded as responsible for “curing” or “treating” disability, rather than it being the responsibility of governments and societies to address disability as part of the social or human rights agenda. The “medical model of disability” focuses on prevention, cure, and symptom management of the disability by the health profession. Unfortunately, this approach does nothing to help eliminate the fundamental problems of discrimination, lack of access, and other social and political issues that create barriers to the right to health for persons with disabilities.

Health and Habilitation/Rehabilitation

Closely related to the perception of disability in narrow terms as a health issue, and reinforced by the medical model of disability, is the notion that habilitation and rehabilitation are also medical subjects and therefore part of the health context. Habilitation and rehabilitation include a range of measures – physical, vocational, educational, training-related, and others – necessary to empower persons with disabilities to maximize independence and the ability to participate in society, not simply to achieve physical or mental health.
For this reason, the right to health and the right to habilitation and rehabilitation are addressed separately in the CRPD. The exception, of course, is that health-related rehabilitation is recognized as part of the right to health. This would include, for example, physical therapy to strengthen muscles that are affected by an injury, illness, or disability.

The Medical Model vs. the Social Model

The Medical Model of Disability

Perhaps the most significant and widespread myth affecting human rights and disability is the idea that disability is simply a medical problem that needs to be solved or an illness that needs to be “cured.” This notion implies that a person with a disability is somehow “broken” or “sick” and requires fixing or healing. By defining disability as the problem and medical intervention as the solution, individuals, societies, and governments avoid the responsibility of addressing the barriers that exist in the social and physical environment. Instead they place the burden on the health profession to address the “problem” in the person with the disability. Many governments throughout the world have fuelled the medical model by funding extensive medical research that aims to find the “cure” for certain disabilities, while not providing any funding to remove the barriers that create disability in society.

The Social Model of Disability

The social model envisions disability as something that is created by the barriers and attitudes in society, not a trait or characteristic that is inherent in the person. Under the social model, society creates many of the social and physical barriers we consider “disabling,” and this model focuses on eliminating those barriers, not on “fixing” or “curing” disabilities. This includes modifying the built environment, providing information in accessible formats, and making sure that laws and policies support the exercise of full participation and non-discrimination.
WHAT DOES HUMAN RIGHTS LAW SAY ABOUT THE RIGHT TO HEALTH?

The human right to health was first recognized, although indirectly, in Article 25(1) of the Universal Declaration of Human Rights (UDHR):

Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control.

The UDHR focuses on the human rights associated with an adequate standard of living, but it clearly states that the ultimate objective of those rights is to achieve the “health and well-being” of the individual. Thus, the right to health is inextricably linked to other human rights, such as housing, social security, and, of course, medical care itself.

In 1966, the human right to health was defined in Article 12 of the International Covenant on Economic, Social and Cultural Rights (ICESCR):

The States Parties to the present Covenant recognize the right of everyone to the enjoyment of the highest attainable standard of physical and mental health.

This language remains the fundamental expression of the right to health in the context of human rights. However, given the complexity of the subject, the Committee on Economic, Social and Cultural Rights, which monitors implementation of the ICESCR, issued General Comment 14 to articulate more fully the freedoms, entitlements, and substantive obligations associated with the right to the highest attainable standard of health guaranteed by the ICESCR:

The right to health is not to be understood as a right to be healthy. The right to health contains both freedoms and entitlements.
The freedoms include the right to control one’s health and body, including sexual and reproductive freedom, and the right to be free from interference, such as the right to be free from torture, non-consensual medical treatment, and experimentation. By contrast, the entitlements include the right to a system of health protection that provides equality of opportunity for people to enjoy the highest attainable level of health.

An important analytical framework used to deepen understanding of the content of the right to health is that health services, goods, and facilities, including the underlying determinants of health, shall be available, accessible, acceptable, and of good quality. This framework applies to mental and physical health care and related support services provided to persons with disabilities.

The AAAQ Framework Applied to Persons with Disabilities

Availability: Health care facilities, goods, and services must be available in adequate numbers throughout a State, including adequate numbers of health care providers trained to provide disability-specific support and mental health-related services.

Accessibility: Includes four overlapping dimensions:

· Non-discrimination: Mental and physical health care services must be available without discrimination on the basis of disability or any other prohibited ground. States must take positive measures to ensure equality of access for persons with disabilities. States must also ensure that persons with disabilities receive the same level of medical care within the same systems as others.

· Physical accessibility: Health facilities, goods, and services must be within safe physical reach for persons with disabilities and other vulnerable or marginalized groups, such as ethnic minorities and indigenous populations, women, children, adolescents, older persons, and persons with HIV/AIDS.
Accessibility also implies that medical services and underlying determinants of health, such as safe and potable water and adequate sanitation facilities, are within safe physical reach, including in rural areas. Accessibility further includes adequate access to buildings for persons with disabilities.

· Economic accessibility: Health facilities, goods, and services, including medicines and assistive devices, must be economically accessible (affordable) to consumers with disabilities.

· Information accessibility: Accessibility includes the right to seek, receive, and impart information and ideas concerning health issues. Information relating to health and other matters, including diagnosis and treatment, must be accessible to persons with disabilities. This entitlement is often denied to persons with disabilities because they are wrongly judged to lack the capacity to make or participate in decisions about their treatment and care. However, accessibility of information should not impair the right to have personal health data treated with confidentiality.

Acceptability: Health care facilities, goods, and services provided to persons with disabilities must be culturally acceptable and respectful of medical ethics.

Quality: Health care facilities, goods, and services provided to persons with disabilities must be of good quality, as well as scientifically and medically appropriate. Among other things, this quality requirement mandates skilled medical and other personnel who are provided with disability training, evidence-based interventions, scientifically approved and unexpired drugs, appropriate hospital equipment, safe and potable water, and adequate sanitation.

Source: Adapted from Committee on Economic, Social and Cultural Rights, General Comment 14, The right to the highest attainable standard of health (22nd session, 2000), U.N. Doc.
E/C.12/2000/4 (11 August 2000): http://www1.umn.edu/humanrts/gencomm/escgencom14.htm

General Comment 5 of the ICESCR was developed by the Committee on Economic, Social and Cultural Rights to address disability in the context of the Covenant, including the subject of health. Together, ICESCR General Comments 5 and 14 make it clear that persons with disabilities have the right not only to accessible health care services, but also to equality and non-discrimination in relation to all aspects of the right to health. This includes equal access to available health care services and equality with respect to the resources, conditions, and underlying determinants required for the highest attainable standard of health.

CRPD Article 25, Health, reinforces these previous standards of general equality, non-discrimination, and access, and expands upon States’ obligations in specific areas, in particular:

· The right to sexual and reproductive health services.
· Access to population-based public health programmes.
· Services provided as close as possible to people’s communities.
· Provision of disability-specific health services, including prevention of further disabilities.
· Autonomy and independence in health care decisions, on the basis of free and informed consent.
· Non-discrimination in access to health insurance and life insurance.
· Prohibition against the denial of care, including food and fluids, on the basis of disability.

Article 25 must be read in conjunction with CRPD Article 9, Accessibility, among other cross-cutting provisions. Article 9 addresses the general topic of access, requiring that States Parties take appropriate measures to ensure equal access to facilities and services open or provided to the public, including physical premises and communications and information systems.

The Duty to Respect, Protect, and Fulfil Obligations Relating to the Right to Health

Taken as a whole, States’ obligations with regard to health include:
1. Obligation to respect: States must refrain from denying or limiting equal access to health care services, as well as to the underlying determinants of health, for persons with disabilities.

Example: The State repeals a law that discriminates against persons with disabilities in their access to health care and adopts a law recognizing that persons with disabilities in public or private institutions, such as hospitals or prisons, may not be denied access to health care and related support services, or to water and sanitation.

2. Obligation to protect: States must take all appropriate measures to ensure that third parties, such as health clinic professionals, service provider organizations, or others, do not harm the right to health of persons with disabilities.

Example: The State takes measures to ensure that health care providers do not discriminate against persons with disabilities in the provision of health care.

Example: The State adopts specific measures to ensure that persons with disabilities are effectively reached by public health programmes, such as infectious disease prevention education.

Example: The State provides reasonable accommodations to ensure equal access to health services for persons who are deaf, in the form of on-call sign language interpreter services at medical facilities.

Example: The State investigates reports of discriminatory treatment of patients with disabilities.

3. Obligation to fulfil: States must be proactive in their adoption and implementation of measures to give effect to the principles of equal access and non-discrimination in health care provision.

Example: The State provides disability training to health care providers to help them understand how to effectively accommodate consumers with disabilities.

Example: The State provides information on dental services in accessible formats for persons with disabilities, such as plain language for persons with intellectual disabilities.
Example: The State ensures that the right to health of persons with physical and mental disabilities is adequately reflected in its national health strategy, plan of action, and other policies, such as national poverty reduction plans.

In sum, international human rights law strongly supports the right of persons with disabilities to have equal and effective access to health services. The enjoyment of the right to health facilitates the enjoyment of other rights by persons with disabilities.

Health Promotion and Disease Prevention

Persons with disabilities benefit from healthy choices and suffer from illnesses and accidents just like everyone else. However, the incidence of infectious diseases and other preventable conditions among persons with disabilities is often higher than for the rest of the population because public health programmes fail to provide information in accessible formats and do not make an effort to target persons with disabilities. Participation by persons with disabilities and their representative organizations in the design and implementation of public health efforts is essential to ensuring that persons with disabilities are able to benefit from these crucial programmes.

The CRPD specifically recognizes the importance of gender-sensitive health services and the need for equal access to sexual and reproductive health and population-based health programmes. Even though the CRPD makes it clear that all public health programmes must include persons with disabilities on an equal basis with others, these particular subjects are highlighted because they are areas in which persons with disabilities are often assumed to be asexual, forgotten, de-prioritized, or simply discriminated against in health care systems and national and international health agendas.

Concerning Non-discriminatory Health Care Access

Purohit and Moore v.
The Gambia: In a complaint to the African Commission on Human and Peoples’ Rights on behalf of mental health patients detained in a unit, legislation governing mental health, the Lunatics Detention Act of 1917, was challenged. The complaint alleged that the Act contained no guidelines for making a determination and diagnosis of mental disability; included no safeguards required during the diagnosis, certification, or detention of the person; and lacked requirements for consent to treatment, independent examination of hospital conditions, and provision for legal aid or for compensation in the case of a rights violation. The Commission held, among other things, that The Gambia failed to comply with the requirements of Articles 16 (best attainable standard of physical and mental health) and 18(4) (right to special measures for disabled persons with regard to their moral and physical needs) of the African Charter on Human and Peoples’ Rights. Furthermore, the Commission held that States Parties were required to take concrete and targeted steps to ensure the right to health. Purohit and Moore v. The Gambia, Communication 241/2001 (2003) AHRLR 96 (ACHPR 2003).

Eldridge v. British Columbia: A group of deaf applicants challenged the absence of sign-language interpreters in the publicly funded health care system. The Supreme Court of Canada held that provincial governments had a positive obligation under the Canadian Charter of Rights and Freedoms to address the needs of disadvantaged groups such as persons with disabilities. The Court held that the applicants had a right to publicly funded sign-language interpretation in the provision of health care and that the failure of the authorities to ensure that the applicants benefited equally from the provincial medicare scheme amounted to discrimination. Eldridge v. British Columbia (Attorney General) 2 S.C.R. 624.

Victor Rosario Congo v.
Ecuador: The Inter-American Commission on Human Rights, which monitors the American Convention on Human Rights, held that in the case of persons with mental disabilities, prison settings must also be appropriate for their mental and physical needs. Victor Rosario Congo v. Ecuador, Case 11.427, Report No. 63/99, Inter-Am. C.H.R., OEA/Ser.L/V/II.95 Doc. 7 rev. at 475 (1998).

Keenan v. United Kingdom: The European Court of Human Rights found a violation of the prohibition on inhuman and degrading treatment where a person with a mental disability was detained in squalid, inhumane conditions without receiving appropriate treatment. Although the European Convention for the Protection of Human Rights and Fundamental Freedoms does not include the right to health, this case clearly also reflects violations of the right to health, given the potential for significant physical and mental deterioration or even death. Keenan v. United Kingdom, App. No. 27229/95, 33 Eur. H. R. Rep. 913, 964 (2001).

Persons with Disabilities and HIV/AIDS

In 2004, the World Bank, working in partnership with the Yale School of Public Health, conducted a Global Survey on HIV/AIDS and Disability. Preliminary findings from this and follow-on research suggest that persons with disabilities have infection rates comparable to, and quite possibly significantly higher than, rates found in the general public. Very often, children, adolescents, and adults with disabilities are invisible in HIV/AIDS outreach efforts due to stigma and discrimination, including the common and wholly false assumptions that persons with disabilities are not sexually active, are unlikely to use drugs or alcohol, and/or are at less risk of violence or rape than their non-disabled peers. Persons with disabilities are more vulnerable to infection if they do not have ready access to the information, education, and services necessary to ensure sexual and reproductive health and prevention of infection.
Poverty exposes women and girls with disabilities to sexual exploitation, and research suggests that a large percentage of persons with disabilities will experience sexual assault or abuse during their lifetime. Vulnerability also decreases the likelihood of being able to negotiate safe sex. Persons with intellectual disabilities and persons with disabilities living in institutional settings also experience elevated risks of sexual violence and abuse.

Physical barriers to centres for HIV prevention, as well as for voluntary counselling and testing (hereafter VCT), treatment, and care, limit access for persons with mobility impairments. Likewise, transport may be unavailable or inaccessible to persons with disabilities. Communication barriers limit access to HIV/AIDS messaging, such as radio programming, for persons who are deaf. For individuals with disabilities who are HIV-infected, poverty and barriers such as lack of transport to medical treatment centres hamper effective access to care and treatment, including antiretroviral and other medications for opportunistic infections. Privacy and confidentiality may be compromised for persons with disabilities in the context of HIV testing and counselling owing to the presence of personal assistants or sign language interpreters. Where access to antiretroviral therapy and post-exposure prophylaxis is limited, persons with disabilities may not be prioritized for treatment on account of disability-related stigma and discrimination.

Participation in Medical Decision-making

Under international human rights law, the population is entitled to participate in health-related policy decision-making at all levels. The right to participate extends to persons with disabilities who, like all persons, have the right to participate in decision-making processes that affect their health and development, as well as in every aspect of service delivery.
CRPD Article 25, Health, reinforces the principles in CRPD Article 12, Equal recognition before the law, related to the freedom to make decisions about one’s health care. It specifies that States Parties must require health professionals to “provide care of the same quality to persons with disabilities as to others, including on the basis of free and informed consent” and to adopt measures that raise awareness about the “human rights, dignity, autonomy and needs of persons with disabilities through training and the promulgation of ethical standards for public and private health care.”

Failure to respect the independence, autonomy, and dignity of persons with disabilities in the context of medical decision-making has led to horrific human rights abuses against children and adults with disabilities, including forced sterilization; cruel and scientifically baseless methods to “cure” specific behaviours in persons with psychosocial disabilities; psycho-surgery such as lobotomies; and therapeutic and non-therapeutic biomedical research and experimentation. The right to be free from torture and other forms of violence is addressed in detail in Part 2, Chapter 6, Freedom from Torture and Other Forms of Abuse.

Finally, persons with disabilities, like all people, are entitled to all available treatment and life-sustaining measures, and they are also entitled to forgo such care as a matter of individual choice. This is a matter of equality, both in terms of the right to life and with respect to the right to personal integrity and decision-making regarding one’s own medical treatment.

USEFUL RESOURCES ON THE RIGHT TO HEALTH

· The Center for Universal Design and The North Carolina Office on Disability and Health, Removing Barriers to Health Care: A Guide for Health Professionals: http://www.fpg.unc.edu/~ncodh/rbar
o Provides helpful guidance on making health care accessible.
· Committee on Economic, Social and Cultural Rights, General Comment 14, The right to the highest attainable standard of health (22nd session, 2000), U.N. Doc. E/C.12/2000/4 (11 August 2000): http://www1.umn.edu/humanrts/gencomm/escgencom14.htm
o General Comment providing detailed analysis of the right to health under the ICESCR.
· Office of the High Commissioner for Human Rights/World Health Organization, The Right to Health, Fact Sheet No. 31: http://www.ohchr.org/Documents/Publications/Factsheet31.pdf
o Comprehensive coverage of the right to health under international human rights law.
· Office of the High Commissioner for Human Rights & UNAIDS, International Guidelines on HIV/AIDS and Human Rights: Consolidated Version (Geneva: OHCHR & UNAIDS) (2006): http://data.unaids.org/Publications/IRC-pub07/jc1252-internguidelines_en.pdf
o Detailed guidelines on health and human rights in the context of HIV/AIDS.
· Nora E. Groce, et al., “HIV/AIDS and Disability: Capturing Hidden Voices” (New Haven, Connecticut: World Bank Group/Yale School of Public Health) (2004): http://globalsurvey.med.yale.edu
o Leading study on HIV/AIDS and disability.
· Janet E. Lord, David Suozzi & Allyn L. Taylor, “Lessons from the Experience of the UN Convention on the Rights of Persons with Disabilities: Addressing the Democratic Deficit in Global Health Governance,” 38 J. Law. Med. & Ethics 564 (2010).
o Assessing the implications of the CRPD for global health governance.
· National Council on Disability, The Right to Health: Fundamental Concepts and The American Disability Experience (2005): http://www.ncd.gov/publications/2005/08022005-Concepts
o Overview of health and disability within the US and international human rights contexts.
· Special Rapporteur on the Right of Everyone to the Enjoyment of the Highest Attainable Standard of Physical and Mental Health: http://www.ohchr.org/english/issues/health/right
o Webpage for the Special Rapporteur on the Right to Health.
· Special Rapporteur on the Right to Health, “Mental Disability and the Right to Health” (11 February 2005): http://daccess-dds-ny.un.org/doc/UNDOC/GEN/G05/108/93/PDF/G0510893.pdf?OpenElement
o Detailed report by the Special Rapporteur on mental disability and health rights.
· Michael Stein, Janet E. Lord & Dorothy Weiss, “Equal Access to Health Care under the UN Disability Rights Convention,” in Medicine and Social Justice: Essays on Distribution and Care (Rosamond Rhodes et al. eds., 2012).
o Discussion of health rights in the context of the CRPD.
· UNAIDS, World Health Organization & Office of the High Commissioner for Human Rights, “Disability and HIV Policy Brief” (2009): http://data.unaids.org/pub/Manual/2009/jc1632_pol_brief_disability_long_en.pdf
o Introducing the intersections between HIV/AIDS and disability.
· United Nations Population Fund, “Emerging Issues: Sexual and Reproductive Health of Women with Disabilities”: http://www.unfpa.org/upload/lib_pub_file/741_filename_UNFPA_DisFact_web_sp-1.pdf
o Overview of main issues confronting women with disabilities in the sexual and reproductive health context.
· World Health Organization & World Bank, World Report on Disability (2011): http://whqlibdoc.who.int/publications/2011/9789240685215_eng.pdf
o First ever world report on disability, with comprehensive coverage of health issues.
Not being able to accurately identify species can have major implications for our understanding of biodiversity, and for the setting of conservation priorities in particular. If all of the individuals in a region are thought to be one species, and 70 per cent of populations are lost due to disease or predation, it may not be catastrophic for this species’ overall survival. However, if these same individuals actually belong to a number of different species — each found only in different subregions but misidentified as the same species — then losing the majority of populations could mean the unacknowledged extinction of entire species.

Traditionally, morphological (physical) differences have been used as the basis for identifying different species. Increasingly, scientists are recognising the limitations of morphology for discovering the existence of so-called ‘cryptic’ species — those that cannot be distinguished based on appearance alone. Fortunately, a range of genetic technologies are now available to help researchers discover cryptic species.

Mr Mark Adams from the Museum’s Evolutionary Biology Unit uses ‘allozyme electrophoresis’, one of the ‘oldest’ molecular genetic techniques, to test the validity of the existing taxonomy in a range of native species. While Mark has investigated species in groups as diverse as bats, butterflies and bacteria, his current focus/obsession involves Australia’s freshwater fishes.

One particular species, the mountain galaxias (Galaxias olidus), has been variously identified as between one and six species since it was first described in the 1800s. In the past, the traditional measurements and counts used to define fish species have been inconclusive for mountain galaxias. A review by the world taxonomic expert on this group of fishes determined they were all one species, albeit highly variable. Humans and dogs are both examples of highly morphologically variable species.
Allozyme electrophoresis conducted recently by Mark showed that there were in fact 15 species of mountain galaxias, some only found in the Murray-Darling basin, and others only in coastal rivers across eastern Victoria and southern New South Wales. Locally, two of these species co-occur on the Fleurieu Peninsula.

Mark’s collaborator Dr Tarmo Raadik (Arthur Rylah Institute for Environmental Research, Victoria) undertook traditional taxonomic work to look for morphological differences between the 15 candidate species (species that have not yet been named). Dr Raadik was able to use quantitative measurements of morphometrics — size and shape — to identify all 15 species.

Ecological differences between galaxias species were also identified. For example, Tarmo’s research has found that one species lives only in the ‘riffles’ of fast-flowing mountain streams in central Victoria. More importantly, several species survive only as remnant populations in single rivers; indeed, one species is now restricted to several hundred metres of an isolated, narrow stream. Trout introduced for recreational fishing feed on the juveniles of these fishes, which could place this and other isolated species in grave danger of extinction.

Whether working on fish, frogs or fungi, a good proportion of Mark’s genetic analyses either validate the number of species identified by traditional approaches or discover new candidate species. With candidate species, the Museum ‘voucher’ specimens that formed the basis of these genetic analyses can then be used to determine whether morphological differences can, with hindsight, be used to differentiate the species. The research has highlighted the value of genetic techniques for identifying cryptic species, but also emphasises how molecular studies complement, rather than replace, traditional taxonomic approaches.
Valley, to protect existing properties and open up new lands to agricultural production. By the mid-1930s the progressive vision for water development had become national policy. Initial federal efforts to engage in river basin water management began with the Lower Mississippi Valley Commission during the presidency of Franklin Roosevelt.

The 1934 National Resources Planning Board (NRPB), which undertook the task of defining how the natural resources of the nation could direct that era's weak economy to economic health, argued that water control structures were part of the nation's economic relief and recovery effort; it stated (NRPB, 1934, p. 255):

[I]n the interest of the national welfare there must be national control of all running waters of the United States, from the desert trickle that might make an acre or two productive to the rushing flood waters of the Mississippi.

The NRPB's comprehensive watershed management program also included permanently converting steeply sloped lands that were in agricultural use to forest cover. The purpose served by reforested land was limited: these restored lands would reduce the intensity of runoff in order to reduce flooding. Deep percolation would store rainfall in ground water that would later be available for economic uses.

In 1950, President Truman's Water Policy Commission stated that integrated river basin planning could lead to the development of the nation's economy:

. . . the American people are awakening to the new concept that the river basins are economic units; that many problems center around the use and control of the water resources....

In summarizing the thinking of this era, Gilbert White articulated three elements of what Wengert (1981) later called the "pure doctrine" of river basin development: the multiple-purpose water storage project, an integrated system of projects within river basins, and the goal of water resources management being regional economic development.
Plans for water development projects were expected to be defined through rational analysis by water management scientists, who would foresee the opportunities for water development and formulate the optimal sequence of projects to be put in place over time. This faith in scientific planning could be traced to the progressive era. For example, President Theodore Roosevelt, in a 1908 letter transmitting the report of the Inland Waterways Commission to the Congress (Morell, 1956), stated,
In a democracy with a written constitution, legislators cannot make just any laws they wish. A country's constitution, among other things, defines the powers and limits of powers that can be exercised by the different levels and branches of government. In many countries formed by revolution or an act of independence – the United States is the best example – most constitutional law is contained in a single document. Canada, in contrast, became a country by an act of the Parliament of Great Britain. Consequently, the closest thing to a constitutional document would be the British North America Act of 1867 (the BNA Act, now known as the Constitution Act, 1867), by which the British colonies of Upper and Lower Canada, Nova Scotia, and New Brunswick were united in a confederation called the Dominion of Canada. (Prince Edward Island, although a member of the team that shaped Confederation, did not join until later.) Although there is no single constitution in Canadian law, the Constitution Act – a part of the Canada Act of 1982 – finally "patriated," or brought home from Great Britain, Canada's constitution as created by the BNA Act. The Constitution Act declares the Constitution of Canada to be the supreme law of Canada and includes some 30 acts and orders that are part of it. It reaffirms Canada's dual legal system by stating that provinces have exclusive jurisdiction over property and civil rights. It also includes Aboriginal rights (those related to the historical occupancy and use of the land by Aboriginal peoples) and treaty rights (agreements between the Crown and particular groups of Aboriginal people). Because of Canada's dual legal system (bijuralism), every federal law must be drafted in both official languages and must also respect both the common-law and civil-law traditions in the provinces. Confederation of the colonies into the Dominion of Canada did not involve any break with the Imperial government.
The new country was still part of the British Empire, governed by authority appointed by the monarch on the advice of the British Colonial Secretary at Westminster. The BNA Act provided for confederation, but it did not codify a new set of constitutional rules for Canada or even include a clause for amending or changing the Act. For this reason, until 1982 any amendments to the BNA Act had to be enacted by the Parliament in England. The Constitution sets out the basic principles of democratic government in Canada when it defines the powers of the three branches of government: the executive, the legislative and the judicial. The executive power in Canada is vested in the Queen. In our democratic society, this is only a constitutional convention, as the real executive power rests with the Cabinet. The Cabinet, at the federal level, consists of the Prime Minister and Ministers who are answerable to Parliament for government activities. As well, Ministers are responsible for government departments, such as the Department of Finance and the Department of Justice. When we say “the government” in a general way, we are usually referring to the executive. The legislative branch is Parliament, which consists of the House of Commons, the Senate and the Monarch or her representative, the Governor General. Most laws in Canada are first examined and discussed by the Cabinet, then presented for debate and approval by members of the House of Commons and the Senate. Before a bill becomes a law, the Queen or her representative, the Governor General, must also approve or “assent to” it. This requirement of royal assent does not mean that the Queen is politically powerful; by constitutional convention, the Monarch always follows the advice of the government. The Minister of Justice is responsible for the Department of Justice, which provides legal services such as drafting laws and providing lawyers for the government and its departments. 
This department also develops policies and programs for victims, families, children and youth, and criminal justice. The Minister of Justice is also the Attorney General, or chief law officer, of Canada. In the provinces, the same process applies, but the Queen's provincial representative is called the Lieutenant Governor. Our Constitution also provides for a judiciary, the judges who preside over cases before the courts. The role of the judiciary is to interpret and apply the law and the Constitution, and to give impartial judgments in all cases, whether they involve public law, such as a criminal case, or private (civil) law, such as a dispute over a contract. They also contribute to the common law when they interpret previous decisions or set new precedents. The Constitution provides only for federally appointed judges. Provincial judges are appointed to office under provincial laws. Under Canada's federal system of government, the authority or "jurisdiction" to make laws is divided between the Parliament of Canada and the provincial and territorial legislatures. Parliament can make laws for all Canada, but only about matters assigned to it by the Constitution. A provincial or territorial legislature, likewise, can make laws only about matters over which it has been assigned jurisdiction. This means these laws apply only within the province's borders. Australia and the United States also have federal systems in which jurisdiction is divided between the federal government and the various states. In contrast, in the United Kingdom, Parliament has sole authority to pass laws for the entire country. The federal Parliament deals, for the most part, with issues concerning Canada as a whole, such as trade between provinces, national defence, criminal law, money, patents and the postal service. It is responsible as well for the Yukon, the Northwest Territories and Nunavut.
The provinces have the authority to make laws concerning education, property, civil rights, the administration of justice, hospitals, municipalities and other matters of a local or private nature within the provinces. Federal law allows territories to elect councils with powers similar to those of the provincial legislatures, and citizens of territories thus govern themselves. There are also local or municipal governments. They are created under provincial laws and can make bylaws regulating a variety of local matters, such as zoning, smoking, pesticide use, parking, business regulations, and construction permits. Finally, Aboriginal peoples in Canada have different types of government. For example, Indian bands can have a range of governmental powers over reserve lands under the federal Indian Act. Other Aboriginal governments, such as self-governments, exercise governmental powers as a result of specific agreements negotiated with the federal and provincial or territorial governments. Courtesy: Department of Justice
The Moon has many implications for our earthly gardening endeavors. Gardening by the phases of the Moon can speed the rate of seed germination, and by gardening during specific signs of the zodiac (which correspond to the elements; each plant has a particular preference for which elemental sign it is planted in) you can effectively create a happily blooming garden. Paragon Space Development Corporation has recently teamed up with Odyssey Moon to develop a pressurized mini-greenhouse that will be deployed on the surface of the moon; the hope is to have the first plants growing "on" the Moon by 2012. The goal is to plant a seed within the pressurized greenhouse, watch it grow into a plant, and see whether it will then flower and seed itself. While the greenhouse has been equipped with all of the things the plant will need to thrive (such as ways to protect it from the Sun's glaring radiation, enough soil and carbon dioxide, and oxygen removal), this truly marks one of the first steps to gardening outside the confines of our earthly atmosphere and certainly gives new meaning to the term "lunar gardening." Because even getting such a device to the Moon will take careful planning on the part of aerospace engineers, the likelihood of these engineers including a plan for the greenhouse to make it to the Moon during a specific lunar phase is small. But still, I have to wonder: we know how the Moon affects our earthly gardens; what would be the implications of such lunar energy on a plant grown on (though not actually touching the surface of) the Moon?
The Gun Engine is a green energy engine that has the same mechanism as a conventional gun. The environmentally friendly engine uses liquid or gaseous fuel to move a piston in the same way as a bullet is shot out of a gun's barrel. This highly efficient engine provides a spectacular claimed efficiency of ninety-two percent with almost negligible pollution.

How Does a Gun Work?

The important parts of a basic gun are the barrel, bullet, ammo casing, primer and gunpowder. When the gun's trigger is pulled, the igniter or primer ignites the gunpowder, which explodes and pushes the bullet towards a small piston. As the bullet moves towards the piston, the air in the barrel and cylinder is compressed, which increases the pressure on the piston. The piston compresses a spring, which increases the pressure further. When this pressure reaches a particular point, the bullet stops and the whole process reverses. The reversal thus pushes the bullet out of the barrel with a force that is hundreds of times higher than the initial piston pressure. The gun engine works on almost the same principle. However, in a gun engine, the cylinder resembles the barrel, the piston resembles the bullet, the igniter resembles the primer, and the combustion chamber resembles the ammo casing.

How Does the Gun Engine Work?

In the gun engine, instead of gunpowder, any type of liquid or gaseous fuel is used. The fuel is vaporized before it enters the combustion chamber because vaporized fuel leads to faster combustion. The fuel is ignited by means of a conventional igniter. As the fuel burns, the pressure generated pushes the piston into an air-filled chamber. As the piston moves further inside the chamber, the air compresses and, after a certain point, pushes the piston back like a spring once the combustion is complete. The piston is attached to a mechanical arrangement that generates electricity.
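The air-filled chamber that bounces the piston back behaves like a gas spring, so one rough way to see how the return pressure builds is the adiabatic compression law. The numbers below (initial pressure, volumes, and the diatomic heat-capacity ratio of 1.4) are illustrative assumptions, not figures from the engine's actual design:

```python
def adiabatic_pressure(p1_kpa, v1, v2, gamma=1.4):
    """Pressure after adiabatically compressing the air 'spring' chamber.

    Uses P1 * V1**gamma = P2 * V2**gamma (ideal diatomic gas, no heat loss),
    so P2 = P1 * (V1 / V2)**gamma. Purely illustrative.
    """
    return p1_kpa * (v1 / v2) ** gamma

# Halving the chamber volume raises the pressure by a factor of
# 2**1.4, i.e. roughly 2.64x, not just 2x as in isothermal compression.
print(round(adiabatic_pressure(100.0, 1.0, 0.5), 1))
```

The steeper-than-linear rise is what lets the trapped air stop the piston and shove it back, the role the text assigns to the "spring."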
Water is mixed with the fuel for vaporization in the ratio of eight parts of fuel to one part of water. The water performs two functions: first, it is converted to steam and thus increases the pressure on the piston; second, it keeps the engine cool by absorbing heat. Moreover, the exhaust generated does not require any kind of treatment. The production of NOx is prevented by the special design of the engine, which also prevents the generation of other pollutants. The engine has an internal cooling system that saves energy and thus produces more work. Also, as there are no pollutants generated, there is no deposition of carbon inside the engine and thus less requirement for maintenance. Thus, the Gun Engine not only reduces the loss of energy, but also provides high efficiency with almost no pollution.
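The stated eight-to-one fuel-to-water mixing ratio is simple arithmetic; a minimal helper makes it concrete (the function name and example mass are my own, not from the source):

```python
def water_for_fuel(fuel_mass_kg, fuel_parts=8.0, water_parts=1.0):
    """Mass of water (kg) to mix with a given fuel mass at the
    stated 8:1 fuel-to-water ratio."""
    return fuel_mass_kg * water_parts / fuel_parts

# For 2.0 kg of fuel at 8:1, 0.25 kg of water is needed.
print(water_for_fuel(2.0))  # 0.25
```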
How Do We Read?
More specifically, how does the brain recognize letters?

Pattern Recognition
- How does pattern recognition in the brain work?
- Could a mathematical or computer model be built to emulate the way the brain recognizes letters?

Critical Point Theory
- Critical point theory is the theory our group is testing.
- It states that the brain recognizes how lines in letters intersect, and stores representations of these "critical points."
- When you see a letter, your brain compares the critical points to stored critical point patterns to identify the letter.

Major Questions
- Is this theory valid? If it is, how powerful is it?
- Can we create a mathematical model that can be used to accurately predict how difficult it is to read a letter?
- Aside from critical point theory parameters, what other parameters would be useful to have in such a model?

The Letter Sets
- 50 letters were drawn using an image editor.
- Ten were controls, and were normal letters.
- Ten had critical points covered by "paint."
- Ten were covered by "paint," but the critical points were left uncovered.
- Ten had the critical point relative orientation "bent."
- Ten letters were bent, but the critical point relative orientation was unchanged.

The Experiment
- People were shown these letters and asked to name them.
- Response time, in seconds, was recorded by the experimenter.
- Participants were told explicitly that they did not need to mention the case of the letter.

Sample Critically Occluded Letter

Data Divided into Two Analyses
- Occlusion analysis: the dependent variable is response time; the explanatory variables are occlusion type, percent occlusion, frequency, position, and points occluded.
- Bend analysis: the dependent variable is response time; the explanatory variable is bend type.

Occlusion Set Analysis
- Through ANOVA, it was found that occlusion type was indeed a significant predictor of response time.
- Through ANCOVA, it was shown that differences in occlusion percentage between occlusion types did not falsely suggest that the occlusion types were different.
- Through visual inspection and ANCOVA, it was shown that critically occluded points had overall higher reaction times than noncritically occluded points.
- A final model was derived through stepwise regression.
- Due to a lack of low occlusion percentage representation, occlusion percentage had an insignificant, negative "nonsense" coefficient, and was removed.

Bend Set Analysis
- Unfortunately, we couldn't devise a method of quantifying how bent a letter is in comparison to an unbent version of the same letter. Do you have any ideas as to how we could do this?
- Through ANOVA, it was shown that bend type was a significant factor in predicting response time.

Results
- It was demonstrated that our critical point theory has statistical support.
- Although the model created is poor, from the outset this study was intended to be an initial, exploratory study.
- The study suffered somewhat from a lack of difference across administrations of the experiment.
- If this were to be followed up, it would benefit from computer-generation of unique experiments.
- Overall, the theory was demonstrated to show real evidence of validity. It is safe to call the study a success.

The frequency data used in this experiment was taken from an online source. A formal citation is available upon written request to Jeff Cochran, and will be included in the Project Part II write-up.
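As a sketch of the occlusion-set analysis, the one-way ANOVA F statistic can be computed by hand: it compares between-group variance to within-group variance. The response times below are hypothetical stand-ins, not the study's actual data:

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over lists of response times (seconds).

    F = (between-group sum of squares / (k - 1)) /
        (within-group sum of squares / (n - k))
    A large F suggests the group factor (e.g. occlusion type) predicts response time.
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical response times: control letters vs critically occluded letters.
control  = [0.8, 0.9, 1.0, 0.7, 0.9]
critical = [1.6, 1.9, 1.5, 2.0, 1.8]
print(one_way_anova_F([control, critical]))
```

In a real analysis the F value would be compared against an F distribution with (k - 1, n - k) degrees of freedom to get a p-value; the study's ANCOVA step additionally controls for covariates such as occlusion percentage.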
Reaching Diverse Audiences

In McKeachie's Teaching Tips (2011, Wadsworth, Cengage Learning), authors Svinicki and McKeachie write that "Responding to the individual student may be the most important way to improve your instruction." But in addition to each student having his or her own learning preferences, educators today also encounter students from diverse cultural backgrounds. Svinicki and McKeachie offer ways to adapt your own behavior to enhance the learning environment for culturally diverse students. Keep in mind that behaviors from different cultures have different meanings. For example, Svinicki and McKeachie warn that eye contact – or a lack of it – is not an automatic sign of inattention. In fact, in some cultures it's considered rude to stare at a person of higher status. Also understand that motivation and stress can come from different places than they generally do in Western culture. Not everyone values individual achievement over collectivism, and being in the minority in a classroom or being a first-generation student can bring with it unique stressors and anxiety. Finally, you can make your classroom a warmer environment for ethnic minority students by modifying your teaching methods. Being concrete, choosing appropriate non-verbal behaviors, and being accessible are all helpful to making your environment more inviting to culturally diverse students. A fuller understanding of the meanings behind certain student behaviors puts you in a better position to respond to them appropriately. Avoid making assumptions, and instead consider what cultural motivations or meanings could be at play. (Adapted from Svinicki and McKeachie 2011, 151-169) Content adapted from Svinicki, Marilla and McKeachie, Wilbert J. 2011. McKeachie's Teaching Tips: Strategies, Research, and Theory for College and University Teachers. 13th ed. Belmont, CA: Wadsworth, Cengage Learning.
The Ebola Virus: History, Occurrences, and Effects

Ebola, a virus which acquires its name from the Ebola River (located in Zaire, Africa), first emerged in September 1976, when it erupted simultaneously in 55 villages near the headwaters of the river. It seemed to come out of nowhere, and resulted in the deaths of nine out of every ten victims. Although it originated over 20 years ago, it still remains a fear among African citizens, where the virus has reappeared occasionally in parts of the continent. In fact, an outbreak of the Ebola virus has been reported in Kampala, Uganda just recently, and is still a problem to this very day. Ebola causes severe viral hemorrhagic fevers in humans and monkeys, and has a 90% fatality rate. Though there is no cure for the disease, researchers have found limited medical possibilities to help prevent one from catching this horrible virus. The Ebola virus can be passed from one person into another by bodily...
What is cardiac arrest? Cardiac arrest is the abrupt loss of heart function in a person who may or may not have diagnosed heart disease. The time and mode of death are unexpected. It occurs instantly or shortly after symptoms appear. Each year about 295,000 emergency medical services-treated out-of-hospital cardiac arrests occur in the United States.

Is a heart attack the same as cardiac arrest? No. The term "heart attack" is often mistakenly used to describe sudden cardiac arrest. While a heart attack may cause cardiac arrest and sudden death, the terms don't mean the same thing. Heart attacks are caused by a blockage that stops blood flow to the heart. A heart attack (or myocardial infarction) refers to death of heart muscle tissue due to the loss of blood supply, not necessarily resulting in the death of the heart attack victim. Cardiac arrest is caused when the heart's electrical system malfunctions. In cardiac arrest, death results when the heart suddenly stops working properly. This is caused by abnormal, or irregular, heart rhythms (called arrhythmias). The most common arrhythmia in cardiac arrest is ventricular fibrillation. This is when the heart's lower chambers suddenly start beating chaotically and don't pump blood. Death occurs within minutes after the heart stops. Cardiac arrest may be reversed if CPR (cardiopulmonary resuscitation) is performed or a defibrillator is used to shock the heart and restore a normal heart rhythm within a few minutes.

What is heart failure? The term "heart failure" makes it sound like the heart is no longer working at all and there's nothing that can be done. Actually, heart failure means that the heart isn't pumping as well as it should be. Your body depends on the heart's pumping action to deliver oxygen- and nutrient-rich blood to the body's cells. When the cells are nourished properly, the body can function normally. With heart failure, the weakened heart can't supply the cells with enough blood.
This results in fatigue and shortness of breath. Everyday activities such as walking, climbing stairs or carrying groceries can become very difficult. Heart failure is a serious condition, and usually there's no cure. But many people with heart failure lead a full, enjoyable life when the condition is managed with heart failure medications and healthy lifestyle changes. It's also helpful to have the support of family and friends who understand your condition. Information provided by the American Heart Association
This short video describes how the compression of Antarctic snow into ice captures air from past atmospheres. It shows how ice cores are drilled from the Antarctic ice and prepared for shipment and subsequent analysis. This classroom activity is aimed at an understanding of different ecosystems through the influence of temperature and precipitation. Students correlate graphs of vegetation vigor with temperature and precipitation data for four diverse ecosystems, ranging from near-equatorial to polar and spanning both hemispheres, to determine which climatic factor is limiting growth. This video is accompanied by supporting materials including a background essay and discussion questions. The focus is on changes happening to permafrost in the Arctic landscape, with Alaska Native peoples and Western scientists discussing both the causes of thawing and its impact on the ecosystem. The video shows the consequences of erosion, including mudslides and inland lakes being drained of water. An Inuit resident expresses his uncertainty about the ultimate effect this will have on his community and culture. This activity introduces students to visualization capabilities available through NASA's Earth Observatory, global map collection, NASA NEO and ImageJ. Using these tools, students build several animations of satellite data that illustrate carbon pathways through the Earth system. This short video examines the recent melting of ice shelves on the Antarctic Peninsula, the potential collapse of the West Antarctic ice sheet, and how global sea levels, coastal cities, and beaches would be affected. This video stitches together nine separate videos about energy sources (hydro, coal, geothermal, nuclear, wind, biofuels, solar, natural gas, and oil) from the Switch Energy site. Videos can be viewed as a group, or separately, each under its own title.
What Is Degenerative Disc Disease? Between each of the vertebrae of the human spine are discs. These discs prevent the vertebrae from rubbing against each other, allowing the spine to be flexible. They also provide support for the vertebrae. However, like bones, these spinal discs deteriorate with age. In some people, the deterioration is more severe. This is known as spondylosis or spinal osteoarthritis, commonly referred to as degenerative disc disease. Degenerative disc disease is most common in adults over the age of 30, and can cause a wide range of symptoms. Not every case of degenerative disc disease is the same. Because the discs deteriorate in different places and at different rates, the pain caused by this condition is different for every person. Common symptoms of degenerative disc disease include: - Stiffness, loss of flexibility - Tingling or numbness - Inflammation of the spine - Bone spurs As frustrating as these symptoms can be, most can be treated through non-surgical methods. The important thing is to get your spine inspected early before the disease begins to disrupt your life. Degenerative disc disease is primarily diagnosed through medical imaging, such as a CT scan, an MRI or an X-ray. These tests can reveal deformities in the spine such as shrinkage in certain areas, bone spurs or other abnormalities, and areas where there is more pressure on the nerves or spinal cord. A diagnosis and treatment plan can be developed based on the results of the medical imaging. Since each condition is different, exact treatment programs will be different. However, there are several avenues of treatment that have proven helpful in cases of degenerative disc disease: - Physical Therapy: To help restore flexibility and strength to your spine, physical therapy may be recommended. Treatment may also involve a behavioral portion, designed to teach you better working and movement habits. 
- Spinal Traction: This stretching technique is recommended in cases where the nerves are being compressed by the shrinking vertebrae.
- Medicines: Painkillers, muscle relaxants and anti-inflammatory drugs are often prescribed to assist patients with managing their pain.
- Alternative Treatments: Acupuncture, behavioral modification therapy, deep tissue massage and yoga have been shown to help patients manage their condition. These can be used in conjunction with other forms of treatment.
- Surgery: The symptoms and the progression of the disease will determine whether surgeries such as discectomy, percutaneous disc decompression or spinal fusion are necessary. Surgery is usually considered a last resort for degenerative disc disease.

Early diagnosis and treatment can make degenerative disc disease manageable so those who suffer from the disease are able to maintain a normal lifestyle. Make an appointment with your physician to reach a diagnosis and discuss possible solutions to relieve the pain.
When molten rock (magma) squeezes into preexisting rocks and crystallizes, it forms an igneous rock body called an intrusion. Intrusions can vary in thickness from centimeters to hundreds of kilometers. An intrusion is younger, in relative age, than any rock it cuts through. When molten rock (lava) flows on Earth's surface and solidifies, it forms a mass of igneous rock called an extrusion. Extrusions include lava flows and volcanoes. The extrusion is younger than any rocks beneath it, but will be older than any rocks that may later form on top of it. An inclusion is a body of older rock within igneous rock. Often when magma rises towards Earth's surface, pieces of the rock that the magma is intruding (pushing through) will fall into the magma. Usually these pieces of older rock will melt to become part of the magma. However, if the temperature of the magma is lower, as occurs when the magma is about to solidify, the older body will not melt. The result is the formation of an inclusion. Some of Earth's oldest dated rocks have inclusions in them; thus, scientists know that even older rocks exist. However, they may not know how much older these rocks are. Correlation, the matching of rock layers or geologic events from one place to another, helps geologists order the geologic events in an area. Correlation is also useful in finding certain mineral resources, such as fossil fuels, which are found in rocks of a specific age. Some of the methods of correlation are described below. As life forms on Earth constantly evolve, or change over time, some life forms exist or are dominant only during specific intervals of geologic time. Thus, fossils in rocks can be used to order an area's geologic events according to relative age. For example, the rock record shows that dinosaurs existed in a long interval called the Mesozoic Era.
However, certain types of dinosaurs existed only for shorter time intervals. In general, fossils in rock layers serve to establish the relative age of the rock layers. One of the basic principles geologists use to interpret geologic history is the uniformity of process, which implies that "the present is the key to the past." This principle assumes that geologic processes happening today also occurred in the past and that much of the rock record can be interpreted by observing present geologic processes. Uniformity of process does not mean that different processes could not have happened or that past geologic processes always occurred at the same rate as they do today.
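The relative-age rules above (an intrusion is younger than any rock it cuts; an extrusion is younger than the rocks beneath it) amount to ordering constraints, which can be resolved mechanically with a topological sort. The rock units named here are a made-up illustration, using Python's standard-library `graphlib` (3.9+):

```python
from graphlib import TopologicalSorter

# Each unit maps to the units that must be OLDER than it, per the rules:
# a sandstone layer is cut by a dike (intrusion) and capped by a lava flow (extrusion).
older_than = {
    "dike": ["sandstone"],               # intrusion is younger than the rock it cuts
    "lava flow": ["sandstone", "dike"],  # extrusion is younger than rocks beneath it
    "sandstone": [],
}

# static_order() yields units oldest-first, respecting every constraint.
order = list(TopologicalSorter(older_than).static_order())
print(order)  # ['sandstone', 'dike', 'lava flow']
```

With more units and more cross-cutting relationships, the same sort either produces a consistent geologic history or raises a cycle error, flagging contradictory field observations.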
Yoga, Mindfulness and Social Emotional Skills in the Classroom Discover the art and science of mindfulness, social emotional learning, and yoga-based principles for grades K-12. Focus on neurological readiness for learning and the connection between social emotional learning and cross-curricular academic achievement. Investigate best practices in the newest brain research. Complete integrative lesson planning that is research based, and practical while utilizing the collaborative coaching processes. Explore activities that develop social awareness to establish and maintain positive relationships, and the ability to recognize the thoughts, feelings and perspectives of others, including those different from one’s own. Strategies for use with special populations, such as ADHD, autism, and learning disabled, will be delineated throughout each session through videos, lecture, whole class and small group discussions and reading materials. As a result of participation in this course, students will: - Research results and demonstrate understanding of how the techniques and activities of mindfulness and yoga based activities impact the teacher, student achievement and classroom practices. - Understand the developmental processes for self regulation, motor, social-emotional and communication skills and how to address each during group instruction. - Learn how to design yoga sequences that maximize self regulation through respiration patterns that are differentiated by age. - Analyze and demonstrate through application-based writing the best practices for the behavioral outcomes of sensory processing disorders such as those students with attention deficit, hyperactivity, anxiety, and autism spectrum disorders. - Demonstrate proficiency in application of social-emotional learning standards when designing an integrative lesson plan. 
- Understand how the use of intention and reflection are embedded in a lesson and how each impacts one's relationship with oneself and with students, and one's ability to recognize, support and deepen healthy emotional expression. Gillen, L., & Gillen, J. (2007). Yoga calm for children: Educating heart, mind and body. Portland, OR: Three Pebble Press.
What are the types of rocks and structures formed by igneous and volcanic processes? Igneous Rocks & Processes

1- What is a magma? A melt (usually of silicates) + crystals + gases that forms and occurs beneath the surface of the earth. When a magma reaches the surface and begins to flow, it loses its gases and becomes a lava.

2- What is the chemical composition of a magma? Magmas do not always have the same chemical composition. This is evidenced by the variety of igneous rocks that occur at the surface of the earth or which formed at depth, and the different types of volcanic eruptions. By carefully studying the chemistry of the different types of igneous rocks, and their associations with each other, petrologists were able to classify magmas into four main chemical groups: 1- Acidic: rich in SiO2, Na2O and K2O. Rocks produced from such magmas may have up to 77% by weight SiO2. "Granite" (see below) is an example of an acidic rock, and many acidic magmas are broadly known as "granitic". 2- Intermediate: rich in SiO2, Na2O, K2O as well as CaO and Al2O3. Rocks produced from such magmas have SiO2 values in the range 55 to 65% by weight. 3- Basic: rich in CaO, MgO and FeO. Rocks of this type have SiO2 values of 45 - 55% by weight. Basalt (see below) is an example of a basic rock, and many basic magmas are broadly known as "basaltic". 4- Ultrabasic: magmas poor in SiO2, but with large amounts of FeO and MgO. Ultrabasic rocks may have SiO2 values as low as 38% by weight. Table 1 lists the chemical compositions of some igneous rocks belonging to these four types. Table 1: Average chemical compositions of selected igneous rock types

3- How does a magma form? Most magmas are generated by partial melting in the asthenosphere, but the same process can occur in other layers of the upper mantle or even in the uppermost mantle and the lower crust (i.e., deep parts of the lithosphere!).
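The four chemical groups can be expressed as a small classifier keyed on SiO2 weight percent. The exact boundary handling (which group gets exactly 55% or 65%) is my own choice, since the text quotes ranges rather than strict rules:

```python
def magma_type(sio2_wt_pct):
    """Classify a magma by its SiO2 weight percent, using the ranges in the text:
    acidic (up to ~77%), intermediate (55-65%), basic (45-55%),
    ultrabasic (as low as ~38%). Boundaries assigned to the higher group by choice."""
    if sio2_wt_pct >= 65:
        return "acidic"
    if sio2_wt_pct >= 55:
        return "intermediate"
    if sio2_wt_pct >= 45:
        return "basic"
    return "ultrabasic"

# A granite (~77%), an andesitic magma (~60%), a basalt (~50%), a peridotitic melt (~38%):
print(magma_type(77), magma_type(60), magma_type(50), magma_type(38))
```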
In order for us to understand this process, and the depths at which it occurs, we have to consider three things: (i) the change of T with depth (needed to melt the rocks), i.e., the geothermal gradient; (ii) how different rocks melt at different temperatures; and (iii) how the melting of rocks depends on pressure and water content, i.e., the melting curves. The temperature in the earth generally increases regularly with increasing depth, and the variation of temperature with depth at a specific time in the earth's history is known as the "geothermal gradient". If a rock is buried deep below the surface, it will first be metamorphosed, then at some higher temperature, some of its constituent minerals will begin to melt. Because different minerals have different melting points, and because a rock is an aggregate of different minerals, melting will take place over a range of temperatures. Accordingly, this process is known as "partial melting", since only part of the rock melts at any given temperature. Figures 2a & b show the relationship between two different geothermal gradients (one beneath the continents, a normal or average geothermal gradient, and the other beneath the oceans, a high geothermal gradient), and the melting curves of an acidic rock (a granite, in the presence of H2O) and of an ultrabasic rock (a dry peridotite). Two things are clear from this figure: (i) acidic rocks, which are light in colour, melt at lower temperatures compared to basic and ultrabasic rocks, and (ii) an acidic melt can be generated at depths as shallow as 35 km, whereas a basic magma is generated in the mantle at depths of 300 km!
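A linear geothermal gradient makes the depth-temperature reasoning easy to play with. The surface temperature, the 25 C/km gradient, and the 650 C stand-in for a wet granite solidus below are illustrative assumptions of mine; real gradients are not linear and real melting curves shift with pressure and water content, as the text notes:

```python
def temperature_at_depth(depth_km, surface_temp_c=15.0, gradient_c_per_km=25.0):
    """Temperature along an idealized linear geothermal gradient.
    ~25 C/km is a commonly quoted continental average (illustrative only)."""
    return surface_temp_c + gradient_c_per_km * depth_km

def melt_onset_depth_km(solidus_temp_c, surface_temp_c=15.0, gradient_c_per_km=25.0):
    """Depth at which the gradient first reaches a (here, constant) solidus
    temperature, i.e. where partial melting could begin."""
    return (solidus_temp_c - surface_temp_c) / gradient_c_per_km

print(temperature_at_depth(35))       # 890.0 C at 35 km under these assumptions
print(melt_onset_depth_km(650))       # 25.4 km for an assumed 650 C wet solidus
```

Even this toy model captures the text's point: a water-bearing acidic rock with a low solidus can begin melting at crustal depths, while a dry ultrabasic rock needs the far higher temperatures found only deep in the mantle.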
The movement of magmas from deeper to shallower levels takes place either along fissures, cracks, or bedding planes, or by a process known as "stoping", where the magma interacts with some of the overlying rocks, first by engulfing them and then perhaps melting them, a process known as assimilation. Assimilation will therefore lead to a change in the chemical composition of the melt, and will create new conduits for the continued movement of the magma upwards. In addition to density, viscosity (cf. Chernicoff, p. 74) plays an important role in magma movement.

5- Where does the magma occur or accumulate?
The volume or space occupied by a magma at depth is known as the magma chamber.

6- How does an igneous rock form from a magma?
When a magma rises to shallower levels and begins to lose heat, minerals begin to crystallize. Because melting is the reverse of crystallization, understanding how a rock melts will help us understand how the same rock can form from a magma. In general, for the same composition of magma/rock, minerals that melt last will be the first to crystallize. If these early formed minerals are "left" inside the magma chamber and allowed to react with the cooling liquid, the final rock to form after all the magma has crystallized will be a basic rock similar in composition to the original "parent" magma from which it crystallized. On the other hand, if the early formed crystals are somehow prevented from reacting with the remaining magma, this magma will gradually change its composition, becoming more and more acidic with progressive crystallization.
The process by which a magma forms two or more "bodies" of different chemical compositions, and as a result of which the magma itself changes its own chemical composition, is known as magmatic differentiation (we have already talked about the differentiation of the earth into a core, mantle, crust, hydrosphere and atmosphere in the first chapter; note how magmatic differentiation plays a role in this process by comparing the compositions of the crust and mantle!). The two most common processes involved in magmatic differentiation are:
(i) Fractional crystallization: where the crystals that form from a magma are separated from the melt by settling to the bottom of the magma chamber (if they are denser than the magma), by floating on top of the magma (if they are lighter), or by filter pressing (subjecting the magma chamber to stress which "squeezes out" the molten magma, leaving behind the crystals).
(ii) Assimilation: where the magma engulfs and melts some of the surrounding country rocks, thus changing its own chemical composition (by being "contaminated" by the country rocks). (cf. the section on mineralogy of igneous rocks below!)

7- Where do magmas crystallize? How do the forms or structures of their resulting rocks vary with depth of crystallization?
If the magma is allowed to cool slowly at considerable depths, the minerals have time to form large crystals, and the resulting rock becomes texturally ...
Presentation on theme: "…it's as easy as rolling off a cliff…" — presentation transcript:

1 Projectile Motion …it's as easy as rolling off a cliff…

2 Prediction
Before you participated in the PhET simulation, you made a prediction and explained your reasoning to me via email.

3 A Little History…
A few years ago, researchers went to elementary, middle, and high schools as well as universities, showed students this image, and asked them, "Ignoring air resistance, which of the following correctly shows what an object would do if it rolled off a cliff?"

4 The Results
The breakdown of answers they got was almost exactly the same at all ages. About 60% said "A" was correct: the object will stop in midair, and then start to fall straight down. Because some people referred to the coyote in cartoons, the researchers called it the Wile E. Coyote Effect. About 25% said "B" was correct: the object will move forward at first, but will eventually just fall straight down. About 15% answered "C": the object will continue to move forward the entire time it is falling.

5 So What's the Correct Answer?
I'm not going to tell you. We'll revisit the question at the end of these notes.

6 Observations from the Simulation
As we saw with the simulation, the projectile that fell straight down and the one that was shot from a cannon horizontally hit the ground in the same amount of time. So what effect did horizontal velocity have on the time it took (the downward motion of) the projectile to hit the ground? None! The best conclusion we can make from this is that the horizontal motion of a projectile does not affect the downward motion of the projectile.
7 Observations from the Lab
Intuition will tell you that the horizontally launched object will "hang" in the air. But… YOUR INTUITION (at least in this case) IS WRONG! Here's a video. Here's another video.

8 Horizontal and Vertical Motion
The most important thing you can remember about projectile motion is this: horizontal and vertical motion are completely, 100% INDEPENDENT of each other, even when they are happening at the same time. Your lab question was, "How does initial velocity affect the amount of time it takes a horizontally-launched object to reach the ground?"

9 Does Horizontal Affect Vertical?
The short answer to the lab question is: it doesn't! The horizontal motion of the projectile is unaffected by the downward (vertical) force of gravity. What does affect how long it takes an object to hit the ground (ignoring air resistance)? One thing and one thing only: the height it is launched from!

10 Looking at Velocity Vectors
On the next slide, we're going to look at the paths two projectiles follow. One projectile is shot out of a cannon; the other is dropped at exactly the same moment (yes, just like the lab). When going through the slide, remember: velocity is a vector (it has magnitude & direction); gravity is a constant force; constant forces cause acceleration.

11 Two cannon balls: one is given a horizontal force, the other is just dropped. Gravity acts downward on both. While clicking through this slide, keep in mind that the law of inertia tells us that objects in motion stay in motion at a constant speed and in a straight line. Since horizontal motion is independent of vertical motion, the horizontal vector never changes! Horizontal motion continues; gravity acts on all objects equally; gravity is a constant force that accelerates all falling objects (ignoring air resistance).

12 Back to the Cliff…
So… which path will the red ball travel?
It will follow path C, because its horizontal motion will continue at the same speed and direction (law of inertia) while gravity exerts a downward force at the same time!

13 Reflection
As we saw with the ball and the cliff question, many people, even highly educated people, have misconceptions about falling objects versus objects with a high horizontal velocity. Ask five of your friends or family members which bullet will stay in the air longer: one shot from a pistol or one dropped from pistol height. (Obviously, don't ask students in this class or physics majors.)

14 Reflection
On the discussion board page, join the discussion about why you think most people have the misconception that a bullet fired from a gun will "hang" in the air. What is it about a bullet from a gun (or an arrow from a bow, or a cannon ball out of a cannon, etc.) that makes it so hard to believe that gravity acts on it exactly the same way gravity acts on an object with no horizontal velocity (i.e., one that is dropped)? Make one original post and respond to at least two other students' posts.
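The independence of horizontal and vertical motion emphasized in these slides is easy to check numerically: the fall time depends only on the drop height, never on the horizontal launch speed. A quick sketch (the numbers and names are ours, not from the lab):

```python
import math

G = 9.8  # gravitational acceleration, m/s^2

def fall_time(height_m, g=G):
    """Time to fall from rest through height_m, from h = (1/2) g t^2."""
    return math.sqrt(2 * height_m / g)

def horizontal_range(vx, height_m, g=G):
    """Horizontal distance covered while falling (no air resistance).
    vx never appears in fall_time: it cannot change the fall time."""
    return vx * fall_time(height_m, g)

h = 19.6  # metres
print(fall_time(h))            # 2.0 s, whether dropped or fired
print(horizontal_range(0, h))  # 0.0 m   (dropped straight down)
print(horizontal_range(50, h)) # 100.0 m (fired horizontally; same 2.0 s fall)
```

Both objects are in the air for exactly the same time; the launched one simply covers ground sideways while it falls, which is path C.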
11. Doppler's Effect
When a car at rest on a road sounds its high-frequency horn and you are also standing on the road nearby, you hear the sound at the same frequency at which it is emitted; but when the car approaches you with its horn sounding, the pitch (frequency) of the sound seems higher, and it drops as the car passes. This phenomenon, first described by the Austrian scientist Christian Doppler, is called the Doppler effect. He explained that when a source of sound and a listener are in motion relative to each other, the frequency of the sound heard by the listener is not the same as the source frequency. Let's discuss the Doppler effect in detail for the different cases.

11.1 Stationary Source and Stationary Observer
The figure shows a stationary source of frequency n0 which produces sound waves in air of wavelength λ0 given as

λ0 = v / n0    [v = speed of sound in air]

Although sound waves are longitudinal, here we represent sound waves by a transverse displacement curve, as shown in the figure, to understand the concept more easily. As the source produces waves, these waves travel toward the stationary observer O in the medium (air) with speed v and wavelength λ0. As the observer is at rest, it will observe the same wavelength λ0 approaching it with speed v, so it will hear the frequency n given as

n = v / λ0 = n0    [same as that of the source]    ...(1)

This is why, when a stationary observer listens to the sound from a stationary source of sound, it detects the same frequency of sound that the source is producing. Thus no Doppler effect takes place if there is no relative motion between source and observer.

11.2 Stationary Source and Moving Observer
The figure shows the case when a stationary source of frequency n0 produces sound waves whose wavelength in air is given as

λ0 = v / n0

These waves travel toward an observer moving with velocity v0 toward the source. When the sound waves approach the observer, it receives waves of wavelength λ0 with speed v + v0 (the relative speed).
Thus the frequency of sound heard by the observer can be given as

Apparent frequency n_ap = (v + v0) / λ0 = n0 (v + v0) / v

Similarly, if the observer is receding away from the source, the apparent frequency heard by the observer is

n_ap = n0 (v - v0) / v

11.3 Moving Source and Stationary Observer
The figure shows the situation when a moving source S of frequency n0 produces sound waves in the medium (air) and the waves travel toward the observer with velocity v. Look carefully at the initial situation, when the source starts moving with velocity vs at the same moment it starts producing waves. The period of one oscillation is 1/n0 seconds, and in this duration the source emits one wavelength λ0 in the direction of propagation of the waves with speed v; but in the same duration the source itself moves forward by a distance vs/n0. Thus the effective wavelength of the emitted sound in air is slightly compressed by this distance, as shown in the figure. This is termed the apparent wavelength of the sound in the medium (air) due to the moving source, given as

Apparent wavelength λ_ap = λ0 - vs/n0 = (v - vs) / n0    ...(1)

Now this wavelength approaches the observer with speed v (O is at rest). Thus the frequency of sound heard by the observer can be given as

n_ap = v / λ_ap = n0 v / (v - vs)    ...(2)

Similarly, if the source is receding away from the observer, the apparent wavelength emitted by the source in air toward the observer is slightly expanded, and the apparent frequency heard by the stationary observer is

n_ap = n0 v / (v + vs)

11.4 Moving Source and Moving Observer
Let us consider the situation when both source and observer are moving in the same direction, as shown in the figure, at speeds vs and v0 respectively.
In this case the apparent wavelength emitted by the source behind it (the source is moving away from the observer) is given as

λ_ap = (v + vs) / n0

Now this wavelength approaches the observer at relative speed v + v0 (the observer is moving toward the source, against the direction of propagation), thus the apparent frequency of sound heard by the observer is given as

n_ap = (v + v0) / λ_ap = n0 (v + v0) / (v + vs)

By looking at this expression for the apparent frequency, we can easily develop a general relation for finding the apparent frequency heard by a moving observer due to a moving source:

n_ap = n0 (v + v0) / (v + vs)

Here the signs of v0 and vs are chosen according to the directions of motion of source and observer. The sign convention related to the direction of motion can be stated as:
(i) For both source and observer, v0 and vs are taken in the equation with a -ve sign if they are moving in the direction of propagation of the sound, i.e. from source to observer.
(ii) For both source and observer, v0 and vs are taken in the equation with a +ve sign if they are moving opposite to the direction of propagation of the sound from source to observer.

11.5 Doppler Effect in Reflected Sound
Consider a car moving toward a stationary wall, as shown in the figure. If the car sounds a horn, the wave travels toward the wall and is reflected from it. When the reflected wave is heard by the driver, it appears to be of relatively high pitch. If we wish to find the frequency of the reflected sound, the problem must be handled in two steps.

First we treat the stationary wall as a stationary observer and the car as a moving source of sound of frequency n0. The frequency received by the wall is given as

n1 = n0 v / (v - vc)    ...(1)

Now the wall reflects this frequency and behaves like a stationary source of sound of frequency n1, while the car (driver) behaves like a moving observer approaching with velocity vc. The apparent frequency heard by the car driver can be given as

n_ap = n1 (v + vc) / v = n0 (v + vc) / (v - vc)    ...(2)

The same problem can also be solved in a different manner by using the method of sound images. In this procedure we assume an image of the sound source behind the reflector.
In the previous example we can explain this by the situation shown in the figure. Here we assume that the sound reflected by the stationary wall is coming from the image of the car, which is behind the wall and coming toward it with velocity vc. Now the frequency of sound heard by the car driver can be given directly as

n_ap = n0 (v + vc) / (v - vc)    ...(3)

This method of images for solving problems on the Doppler effect is very convenient, but it is valid only for velocities of source and observer that are very small compared to the speed of sound, and it should not be used when the reflector of sound is itself moving.

11.6 Doppler's Effect for Accelerated Motion
For the case of a moving source and a moving observer, we know the apparent frequency heard by the observer can be given as

n_ap = n0 (v + v0) / (v + vs)    ...(4)

Here v is the velocity of sound and v0 and vs are the velocities of observer and source respectively. When the source or observer has accelerated or retarded motion, then in equation (4) we use the value of v0 at the moment the observer receives the sound, and for the source we use the value of vs at the moment it emitted the wave. The alternative method of solving this case is the traditional one of compressing or expanding the wavelength of sound by the motion of the source and using the velocity of sound relative to the observer.

11.7 Doppler's Effect when Source and Observer are not in the Same Line of Motion
Consider the situation shown in the figure. Two cars, 1 and 2, are moving along perpendicular roads at speeds v1 and v2. When car 1 sounds a horn of frequency n0, it emits sound in all directions; say car 2 is at the position shown in the figure when it receives the sound. In such cases we use the velocity components of the cars along the line joining the source and observer. Thus the apparent frequency of sound heard by car 2 can be given as

n_ap = n0 (v + v2 cos θ2) / (v + v1 cos θ1)

where v1 cos θ1 and v2 cos θ2 are the velocity components along the line joining the two cars, taken with the signs of the convention above.
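All of the cases above follow from the single general relation for a moving source and moving observer. A sketch in Python; the function and variable names and the 340 m/s speed of sound are our own choices, not from the text:

```python
def apparent_frequency(n0, v, v_obs=0.0, v_src=0.0):
    """General Doppler relation: n_ap = n0 (v + v_obs) / (v + v_src).

    Sign convention as in the text: v_obs and v_src are NEGATIVE when
    directed from source to observer (along propagation), POSITIVE
    when directed from observer to source."""
    return n0 * (v + v_obs) / (v + v_src)

v = 340.0    # speed of sound in air, m/s (assumed value)
n0 = 1000.0  # source frequency, Hz

# Reflected sound, step 1: car (source) drives toward a stationary wall
# at 20 m/s.  The car moves along the propagation direction, so v_src = -20.
n1 = apparent_frequency(n0, v, v_obs=0.0, v_src=-20.0)

# Step 2: the wall re-emits n1 as a stationary source; the driver moves
# toward the wall, i.e. against the propagation direction, so v_obs = +20.
n_reflected = apparent_frequency(n1, v, v_obs=+20.0, v_src=0.0)

print(n1)           # 1062.5 Hz
print(n_reflected)  # 1125.0 Hz, equal to n0 (v + vc) / (v - vc)
```

Setting either velocity to zero recovers the stationary-source and stationary-observer cases, which is a convenient check on the sign convention.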
Above: This rendering shows the Lockheed Martin future supersonic advanced concept featuring two engines under the wings and one on top of the fuselage (not visible in this image). Image Credit: NASA/Lockheed Martin.

Supersonic flight is one of the four speeds of flight, called the regimes of flight: subsonic, transonic, supersonic and hypersonic. Vehicles that fly at supersonic speeds are flying faster than the speed of sound, which is about 768 miles per hour (1,236 kilometers per hour) at sea level. These speeds are referred to by Mach numbers. The Mach number is the ratio of the speed of the aircraft to the speed of sound. Flight that is faster than Mach 1 is supersonic; supersonic flight includes speeds up to five times the speed of sound, or Mach 5.

An F/A-18 Hornet aircraft accelerates to supersonic speed. The Hornet is flying through an unusual cloud; this kind of cloud sometimes forms as aircraft break the sound barrier. Credits: Ensign John Gay, USS Constellation, U.S. Navy.

In 1947, Air Force Capt. Charles E. "Chuck" Yeager became the first person to fly an aircraft faster than the speed of sound.
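Since the Mach number is just a ratio, the regime boundaries mentioned above translate directly into a short calculation. A sketch; the 768 mph sea-level figure comes from the text, while the Mach 0.8 transonic threshold is a commonly quoted value we assume here:

```python
SPEED_OF_SOUND_MPH = 768.0  # at sea level, from the text

def mach_number(speed_mph):
    """Mach number: ratio of aircraft speed to the speed of sound."""
    return speed_mph / SPEED_OF_SOUND_MPH

def regime(speed_mph):
    """Flight regime from the Mach number.  The Mach 1 and Mach 5
    boundaries are from the text; 0.8 is an assumed transonic cutoff."""
    m = mach_number(speed_mph)
    if m < 0.8:
        return "subsonic"
    elif m < 1.0:
        return "transonic"
    elif m <= 5.0:
        return "supersonic"
    else:
        return "hypersonic"

print(mach_number(1536.0))  # 2.0
print(regime(1536.0))       # supersonic
```

Note the speed of sound itself varies with altitude and temperature, so a real calculation would not use a single constant.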
The Saharan zone of Mali, an area of fixed dunes and false steppes, contains vegetation made up of thick-leaved and thorny plants (mimosas and gum trees). The vegetation of the Sahelian zone resembles that of the steppes, with thorny plants and shrubby savannas. There are two main vegetation zones that correspond to the climatic regions of Sudan and the Sahel. In the Sudanic zone, localized forest corridors are found along the Guinean border and in the river valleys; the rest of the area is covered with savanna. The Sudanic zone is an area of herbaceous vegetation; its trees are bastard mahogany, kapok, baobab, and shea. The incidence of trees decreases to the north as the Sudanic zone merges with the Sahel. The Sahel is characterized by steppe vegetation, notably drought-resistant trees such as the baobab, doum palm, and palmyra. These trees also disappear to the north, where short, thorny plants such as the mimosa, acacia, and cram-cram (a member of the grass family) grow; all vegetation is absent in the far-northern region of the Sahara. Beginning in the latter half of the 20th century, deforestation, overgrazing, and repeated episodes of drought greatly sped the rate of naturally occurring desertification, resulting in the encroachment of the desert on the Sahel. The Inner Niger Delta is also known for its large waterfowl breeding colonies, with 80,000 breeding pairs of birds among 15 species of cormorant, heron, spoonbill, and ibis (Denny 1991). The Inner Niger Delta is also a breeding stronghold for the endangered West African subspecies of the black-crowned crane; the delta is essential waterfowl habitat because it remains wet in the dry season long after other areas dry up. A notable non-wetland bird species is the endemic Mali firefinch, which is found only in Mali and largely confined to the delta area. The river prinia is considered near-endemic to this ecoregion. The vast floodplains also provide a habitat for the Nile crocodile.
This week Brian Clegg is bidding to highlight the chemistry of sodium sulfate. If you were to consider all the possible ionic compounds that could be given the nickname 'the miraculous salt', or sal mirabilis, there would inevitably be a range of contenders. Perhaps good old sodium chloride, because of its ability to improve the taste of food, or perhaps copper sulfate, because of its stunning blue colour in crystal form. But what might not spring to mind immediately is sodium sulfate, apparently so named because it made a good laxative. Also known as Glauber's salt, after its discoverer, Johann Glauber, sodium sulfate is a simple inorganic compound with the formula Na2SO4. Glauber's discovery of sodium sulfate in mineral spring water in the mid-seventeenth century came at a time when medicine still focussed largely on achieving a balance of four imaginary 'humours': blood, phlegm, yellow bile and black bile. Illness was considered to be the result of the humours getting out of balance, so the unfortunate patient was typically treated to bloodletting, an emetic to produce vomiting, or a laxative to 'purge' the body of the unbalanced substances. When sodium sulfate proved to be a relatively harmless, yet dramatically effective laxative, it was welcomed into the medical armoury. The spring water no doubt picked up its sodium sulfate from a relatively common mineral with the related name mirabilite, which is pretty much a pure hydrous version of the compound, with ten water molecules for each formula unit of sodium sulfate. The source of this material is sodium ions, released by the erosion of igneous rocks, reacting in water with sulfur deposits. For some time in the nineteenth century, sodium sulfate was manufactured from sodium chloride and sulfuric acid in the Leblanc process, as an intermediate in meeting a rising demand for sodium carbonate, until the manufacture was wiped out by the superior Solvay process.
Now we have returned to mirabilite as the primary source, which is easily converted to anhydrous sodium sulfate (also known as the evil-sounding thenardite), as the hydrated form is unstable in a dry atmosphere. Most of us will have used a product containing sodium sulfate, though oddly it is present in a role that has no practical function. Powdered detergents for washing clothes usually contain sodium sulfate simply to bulk up the product, making detergent manufacturers the biggest users of the compound. It does nothing for the wash, but as a very cheap, pH-neutral substance that readily dissolves in warm water, it simply passes through the system, making the product less costly to produce per unit weight. There isn't as much of it around as there used to be, though, as powder has declined in popularity and there is no need for filler in liquids and gels. Our compound is sometimes confused with sodium lauryl sulfate (also known as sodium dodecyl sulfate) and sodium laureth sulfate, which are surfactants used in a range of cleaning products from detergents to toothpastes; but both these compounds are quite complex organic structures which aren't produced from basic sodium sulfate. Our sulfate also turns up as a fining agent, not in the more familiar environment of wine or beer fining, but in making glass. In alcoholic drinks, the fining agent's role is to extract organic substances that make the liquid cloudy, whereas in glass it picks up scum and prevents small bubbles from forming. But perhaps the most interesting application of sodium sulfate is in the rapidly advancing world of solar energy heat storage. As solar thermal power plants, which concentrate incoming light with mirrors to store energy in the form of heat, become more common, there is a need to hold onto that heat before using it.
Sodium sulfate absorbs a large amount of energy in changing from solid to liquid, and goes through a second phase change at around 32 degrees Celsius, when it converts to the anhydrous form, which means that it can store considerably more heat energy than would be expected for any particular mass. Although it isn't appropriate for the high-temperature systems that store heat directly from solar collectors, it has the potential to be valuable in secondary solar facilities, for instance where the heated material gradually releases the heat to warm a building. Like many of the simple substances discovered in the early days of chemistry, sodium sulfate has gone through a range of uses since it took its place on the medical stage as the laxative Glauber's salt. And though these applications continue to change – as, for instance, our use of powdered detergent for laundry declines – it seems likely that we will always have a use for this inoffensive inorganic compound. Science writer Brian Clegg, with the salty chemistry of sodium sulfate. Next week, a compound alleviating the burdens of life. 'If life hands you lemons, make lemonade.' So goes the saying, but you might, instead, make citric acid. Find out its uses in next week's Chemistry in its Element. Until then, thank you for listening. I'm Meera Senthilingam.
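The appeal of the phase-change heat storage described in this episode is easy to see with rough numbers. A sketch; the ~250 kJ/kg latent-heat figure for Glauber's salt at its ~32 °C transition is an assumed illustrative value, not a number from the programme:

```python
LATENT_HEAT_KJ_PER_KG = 250.0   # assumed illustrative value for Glauber's salt
SPECIFIC_HEAT_WATER = 4.18      # kJ/(kg K), for comparison

def latent_storage_kj(mass_kg, latent=LATENT_HEAT_KJ_PER_KG):
    """Heat absorbed or released at the phase change: Q = m * L."""
    return mass_kg * latent

# 100 kg of salt stores this much heat at (roughly) one temperature...
q_salt = latent_storage_kj(100.0)            # 25000 kJ
# ...comparable to warming 100 kg of water through about 60 K:
q_water = 100.0 * SPECIFIC_HEAT_WATER * 60
print(q_salt, round(q_water))
```

The point is that a phase-change store releases its heat near a single convenient temperature (here around 32 °C), rather than cooling steadily the way a tank of hot water does.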
Many Scholastic news articles are perfect to use because they are short and, for the most part, have a structure that is similar to how I want my students to write. The articles often include argument topics such as "The Mint should stop making pennies."

I would imagine that most of the graphic organizers presented on this page would be suitable for any grade level. The "lights" in their eyes just seem to burn more brightly. And, let the lights shine on.

The argument must include sound reasoning and reliable external evidence: stating facts, giving logical reasons, using examples, and quoting reliable experts and original sources. (Beacon Lesson Plan Library)

To analyze:
1. to separate (a material or abstract entity) into constituent parts or elements; to determine the elements or essential features of (opposed to synthesize): to analyze an argument.
2. to examine critically, so as to bring out the essential elements or give the essence of: to analyze a poem.
3. to examine carefully and in detail so as to identify causes, key factors, possible results, etc.

Writing enables deeper thinking and learning in every content area. Let's teach it in every content area. Having students write across the disciplines would transform K–12 education. If grounded in generous amounts of reading and discussion, this practice could have more impact on college and ...

Graphic Organizers for Argumentative Writing

Close Reading Questions: after they have read the excerpt(s), can your students answer these questions? What is the author's argument? What position does the ...
Strategies and Methods Tools

Motivating Students: free downloads are available. Step-by-step examples for planning, implementing, and evaluating inductive and deductive activities that really work with kids. The deductive approach is a great way to deliver concepts quickly and efficiently.

How to Effectively Use Inductive Teaching Activities with Kids: these inductive teaching methods are guaranteed to increase student motivation and participation. Kids learn content while sharpening processing skills. Students learn content while establishing their confidence as learners. Establishing classroom routines, providing warm-up activities, structuring instructional time, the "Going to the Movies" approach, setting expectations, and more.

Organizing to Enhance Discipline and Order: organizing for effective classroom management. Use these reliable strategies to greatly improve discipline and order. A place for everything, and everything in its place. Controlling traffic, preparing students for instruction, obtaining materials, managing the pencil sharpener, maximizing instructional time, and more. How to develop strategies for multiple instructional approaches, tips on how to implement strategies, examples of CHAMPs strategies, and more.

Tools for Teaching Writing

Writing Prompts for Practice Essays, Journal Entries, and More: persuasive and expository essay writing prompts, reader-response questions and statements, and journal writing prompts for every day of the school year. These high-interest prompts will encourage kids to describe, explain, persuade, and narrate every day of the school year. These prompts give students focus and purpose as they respond in writing to fiction and nonfiction they have read. Use them for practice or for the ...

Great Tips for Enhancing Effectiveness: ideas for first-year teachers. Establishing connections with kids, showcasing relevance, managing the classroom, using classroom routines, communicating with parents, and more.
First Day of School: each Flocabulary video has unique printable exercises and handouts that you can find in the Teacher's Guide, just to the right of the video. In addition to those video-specific resources, the following organizers will help your students dive deep into the subject.

PERSUASIVE WRITING GRAPHIC ORGANIZER
Name: _____ Date: _____ Topic:
Opening Sentences:
Transition Word or Phrase:
What vocabulary words will I use to make my argument in a strong but polite way?

Graphic Organizers for Opinion Writing, by Genia Connell (Grades 1–2): both sides of the argument; clearly stated opinions. "I love using the graphic organizers in my Grade 3 Writing Lessons to Meet the Common Core. Other teachers in my building use the resources for their grade level as well."

Eighth Grade Language Arts: here is a list of language arts skills students learn in eighth grade! These skills are organized into categories, and you can move your mouse over any skill name to ...

Argumentative Essay Outline: there are three possible argumentative essay outlines which can be used as a starting point. Ensure that you have brainstormed the PROS and CONS of your topic effectively and have sufficient material on hand which will assist you in making your refutations.
HOW CAN YOU HELP PREVENT BULLYING? Even if there aren’t any incidents of bullying going on in your classroom at the moment, you can still create a safe, bully-free zone to make students feel safe and able to talk to you if anything’s bothering them. Let students know that you’re able to help. Make sure that your pupils know that they can talk to you if anything is worrying them. Especially with things that are going on online, students are often hesitant about going to talk to their teachers, for fear of being judged or told off. Make your students aware of the fact that you’re always available if they want someone to speak to – and that you’ll never pass any information on without telling them first. Put time aside to have lessons on using the internet safely and responsibly. There are many great resources that explain the issues of cyberbullying to children in a clear, informative way. Making children aware of what is acceptable behaviour online will help them recognise the signs of bullying, so that they can get help before things escalate. Reading books as a class, and identifying any bullying to the children, can also work, as this gives children an idea of what the warning signs are. Remind them to follow the rules. Most social media sites have an age restriction of 13 years old. Despite this, many children from as young as 8 years old have Instagram, Snapchat and Facebook. Although there isn’t much that you can do to prevent this, reminding children of these rules, and explaining why these rules exist, will help them to understand that social media can be dangerous, especially for young people. Be firm with little incidents. Although some things, such as name-calling, don’t appear too serious at first, make children aware that this isn’t acceptable behaviour, even if it is intended as a joke. Little things often build up, and being firm at the beginning could prevent bullying before it escalates. HOW CAN YOU DEAL WITH INCIDENTS OF BULLYING? 
It can sometimes be difficult to deal with bullying effectively. Young children can become upset very quickly, and it's crucial that nobody feels uncomfortable at school after the incident has been dealt with. Here are some ideas on how you can handle the situation:

Speak to each child separately first. It's important that you hear each side of the story without the children feeling like they can't say certain things because of somebody else. Make each child comfortable and don't raise your voice; be calm and let them talk to you. Speak to the child or children who carried out the bullying and ask them why they did it. Often, children bully others to gain attention, or because something is going on at home. Try not to sound accusing, and listen to their reasons.

Speak to the parents. Explain what has been going on and ask them to talk to their children as well. Children will be more willing to open up to their parents, so this can help you understand the situation a bit more.

Be wary of sanctions. Punishments such as staying inside at break time can often be ineffective and instead cause resentment and upset. Try to stay away from punishments that don't relate to the bullying: losing play time makes sense if someone wasn't paying attention in lessons, but less so for bullying. Instead, focus on correcting behaviour and helping the bully build positive relationships with the other children. Chat with them regularly and make sure that they are feeling OK. Children who are happy and comfortable are less likely to repeat incidents of bullying.

Don't force friendships. Although you should encourage the bullies to apologise, avoid using language such as "make friends" or "be friends again". Children should feel like they have their own choice when it comes to choosing their friends, and there is no obligation to like somebody just because they have apologised. Instead, make sure that all parties treat each other with respect.
Volcanoes have been in the news a lot recently, one volcano in particular located in Iceland at the Eyjafjallajökull glacier, which brought Europe’s air travel to a halt. But what is a volcano? I’m going to try and explain them, but don’t forget I’m not a volcanologist. Volcanology is the study of volcanoes. The first thing you have to understand is the structure of the earth, which has three layers: 1. The Crust: The crust is the outer layer of Earth. It is made up of plates about 18 miles thick, and these plates sometimes move. It is the bit we live on. 2. The Mantle: The second layer is called the mantle. It is about 1,800 miles thick. 3. The Core: The inner layer is called the core. Between the Earth’s crust and the mantle is a substance called magma, which is made of rock and gases. When plates on the surface collide, one plate slides on top of the other, and the one beneath is pushed down. Magma is squeezed up between the plates. A volcano is a landform that can take many shapes, but we usually envisage the classic cone shape: a kind of mountain that opens downward into a pool of molten rock (magma) below the surface of the earth. As pressure builds up, the magma needs to escape somewhere, so it forces its way up “fissures”, which are narrow cracks in the earth’s crust; the volcano acts like a giant safety valve. Gases, rock and magma erupt through the opening and spill over or fill the air with lava fragments and ash. These eruptions can cause lateral blasts, lava flows, hot ash flows, mudslides, avalanches, falling ash and floods. Volcanic eruptions have been known to knock down entire forests. An erupting volcano can trigger tsunamis, flash floods, earthquakes, mudflows and rockfalls. Once the magma erupts through the surface of the earth we call it lava: flowing lava ranges from 1,300° to 2,200° F (700° to 1,200° C) in temperature and glows red hot to white hot as it flows.
Volcanic ash, which caused all the recent problems in Europe, is made of pulverized rock, and can be harsh, acidic, gritty, glassy and smelly. The ash can cause damage to the lungs of older people, babies, people with respiratory problems and livestock. More than 80 percent of the earth’s surface is volcanic in origin, and gaseous emissions from volcanoes formed the earth’s atmosphere. There are now around 1,510 ‘active’ volcanoes in the world, and 80 or more of them are under the oceans. To be considered active, a volcano has to have erupted in the last 10,000 years and have a reasonable chance of erupting in the future. There are no active volcanoes in the UK, but there are extinct ones: for example, Arthur’s Seat in Edinburgh, the capital of Scotland, is an extinct volcano. In fact Edinburgh is situated on top of a series of extinct volcanoes. In the U.S.A., volcanoes are found mainly in Hawaii, Alaska, California, Oregon and Washington. The greatest chance of eruptions near areas where many people live is in Hawaii and Alaska. The danger area around a volcano covers about a 20-mile radius. On May 18, 1980, Mount St. Helens erupted in Washington state. It killed 58 people and caused more than $1 billion in property damage. Rock debris from a lateral blast of Mount St. Helens travelled at around 250 miles per hour. Some volcanoes are neither active nor extinct; they are dormant. The 1980 eruption of Mount St. Helens, in the Cascade Range of Washington State, happened after more than 100 years of dormancy. The 1991 eruption of Mount Pinatubo in the Philippines caused 342 deaths, and more than 250,000 people had to be evacuated. Myths and Legends There are many myths and legends surrounding volcanoes. The name “volcano” itself has its origin in the name of Vulcan, the god of fire in Roman mythology. The Legend of Pele The Native Hawaiians know all about volcanoes.
This isn’t surprising as Mauna Loa, in Hawaii, is probably the biggest volcano in the world: It rises off of the seafloor to 13,000 feet above sea level or about 29,000 feet above the seafloor. According to legend, volcanic eruptions were caused by Pele, the beautiful but tempestuous Goddess of Volcanoes. Pele had frequent moments of anger, which brought about eruptions. She was both honoured and feared. She could cause earthquakes by stamping her feet or volcanic eruptions and fiery lava by digging with her Pa’oa, her magic stick. Pele had a long and bitter argument with her older sister, Namakaokahai. The fight ended by forming the Hawaiian Islands. First, Pele used her magic stick on Kauai, but she was attacked by her older sister and left for dead. Pele recovered and fled to Oahu, where she dug several “fire pits,” including the crater Diamond Head, in Honolulu. After that, Pele left her mark on the island of Molokai before traveling further southeast to Maui and creating the Haleakala Volcano. By then, Namakaokahai, Pele’s older sister, realized she was still alive and she went to Maui to do battle. After a terrific fight, Namakaokahai again believed that she had killed her younger sister. But Pele was still alive and she was busy working at the Mauna Loa Volcano, on the big island of Hawaii. Finally, Namakaokahai realized that she could never crush her sister’s indomitable spirit and she gave up the struggle. Pele dug her final and eternal fire pit, the Halemaumau Crater, at the summit of the Kilauea Volcano. She is said to live there to this day.
How is it possible that we never learn a single IP address of a site, yet we can browse the Internet without any problems? What magical process transforms simple-to-write domain names into IP addresses and makes it so easy for us humans? It is called DNS resolution! What is DNS resolution? DNS resolution is the process that DNS uses to resolve domain names to their IP addresses. It starts with a simple DNS query from a client for a domain name, which then goes through a DNS recursive resolver and different DNS servers on different levels (Root, TLD, and authoritative servers) and brings back the IP address of the domain in the form of an A or AAAA record. What are the steps of the DNS resolution process? When a client wants to visit a new site (a domain name that it hasn’t visited before), the DNS resolution will have the following steps: - The start of the DNS query. The client will write a domain name in their browser, and this will trigger a DNS lookup process that will start searching for the IP address of the domain name. - The query will go to a DNS recursive resolver server. This kind of server will search for the answer if it doesn’t have it inside its memory cache. It will communicate with the rest of the DNS servers and finally provide the answer to the client. The first check will be with the Root server. - The Root server is the highest level in the DNS hierarchy. It will look at the last extension of the requested domain (the TLD, like .net, .info, .com, etc.) and redirect the query to the right TLD server. - The recursive resolver will ask the TLD server about the domain name, and the TLD server will answer with the correct nameserver for the domain name. - Once more, the recursive resolver needs to perform another lookup, this time to the authoritative nameserver for the domain name. As it is authoritative for the requested domain, it can finally provide the domain name’s IP address. - The recursive DNS server finally has the answer and sends it to the client.
It will save the answer inside its cache memory for later use. - The client gets the answer and saves it in its own cache memory too, so it can now access the site using the provided IP address. The DNS resolution process takes many steps, and the DNS query needs to go through many servers on the way, but what the client experiences is just a brief moment of waiting. Why should we care about it? We should care about DNS resolution for two reasons: - Availability. If you are a site owner and your users want to visit your site, then the nameserver that is responsible for your domain name needs to do its job. If you haven’t chosen an extra DNS service, you are relying on your domain registrar’s nameservers. If they are down, your domain and site won’t be available. - Speed. When a user visits your site, the first step is the DNS resolution. If it is slow, it will take extra time to access the content. If it is extremely slow, many users will simply leave the page. This is why you want the DNS resolution to happen fast.
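You can watch the end result of this whole process from code. Below is a minimal sketch in Python using only the standard library; `socket.getaddrinfo` hands the query to the operating system’s stub resolver, which delegates the recursive walk (Root, then TLD, then authoritative server) and the caching to your configured recursive resolver. The `resolve` helper name is just for illustration, not part of any DNS API.

```python
import socket

def resolve(hostname):
    """Resolve a hostname to its IP addresses (A/AAAA records).

    getaddrinfo passes the query to the system's stub resolver,
    which forwards it to a recursive resolver. The recursive
    resolver performs the multi-step lookup described above,
    unless it already holds a cached answer.
    """
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # sockaddr[0] is the IP address as a string.
    return sorted({info[4][0] for info in infos})

# "localhost" is answered locally, with no network round trip.
print(resolve("localhost"))
```

Running this for a real domain would return whatever A/AAAA records the recursive resolver brings back; repeating the call is typically faster because the answer is served from cache until its TTL expires.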
Scientists at the Canadian Light Source are at the forefront of battery technology, using cheaper materials with higher energy and better recharging rates that make them ideal for electric vehicles (EVs). The switch from conventional internal combustion engines to EVs is well underway. However, the limited mileage of current EVs, due to the constrained energy storage capability of available battery systems, is a major reason why these vehicles are not more common on the road. A group of researchers from the CLS and Western University have made significant strides in addressing the rechargeability and reaction kinetics of sodium-air batteries. They believe that understanding sodium-air battery systems, including their chemical composition and charging behaviour, will contribute to manufacturing more road-worthy batteries for EVs. “Metal-air cells use different chemistry from conventional lithium-ion batteries, making them more suited to compete with gasoline,” said Dr. Xueliang (Andy) Sun, Canada Research Chair from Western’s Department of Mechanical and Materials Engineering. “Development of new rechargeable battery systems with higher energy density will increase the EVs’ mileage and make them more practical for everyday use. “On the other side, higher energy density battery systems will pave the road for renewable energy sources in order to decrease emissions and climate change consequences,” said Sun. During their experiments, researchers looked at different “discharge products” from the sodium-air batteries under various physicochemical conditions. Products such as sodium peroxide and sodium superoxide are produced. Understanding these discharge products is critically important to the charging cycle of the battery cell, since the various oxides exhibit different charging potentials. The experiments were conducted using the powerful X-rays of the CLS VLS-PGM beamline.
“We took advantage of the high brightness and high-energy resolution of the photoemission endstation, using a surface-sensitive technique to identify the different states of the sodium oxides,” said Dr. Xiaoyu Cui, CLS staff scientist. “We could also monitor the change in the chemical composition of the products by changing the kinetic parameters of the cell. The conclusive data from the CLS helped us confirm our hypothesis.” According to the researchers, only a few studies have ever addressed sodium-air battery systems, and the chemistry of the cell remains poorly understood. Their work was published in the journal Energy & Environmental Science, and the authors believe the findings of the study contribute to a better understanding of the chemistry behind sodium-air cells which, in turn, will result in improved recharging rates and energy efficiencies. “Although lots of research has been done to develop rechargeable, high energy metal-air battery cells during the past decade, there is still a long road ahead to achieve a practical high-energy battery system that can meet the demand for our current EVs,” said Sun. “We are working to develop novel materials for different battery systems to increase the energy density and lifecycle. “Metal-air batteries are less expensive compared with other battery systems such as lithium-ion. Specifically, sodium-air batteries are very cost effective since the materials can easily be supplied from natural resources – sodium and oxygen being among the most abundant elements on earth.”
The world has a template for conservation: protected areas. The United States invented the template with its great national parks, protecting pristine wilderness. But what if the template is wrong? What if it is doomed to fail in a crowded world where most species live most of their time outside protected areas? This year is the International Year of Biodiversity. It will be a year in which calls for more parks will grow in order to halt the unprecedented loss of species across the world. The calls will come especially from mainstream environmental groups like WWF, the Nature Conservancy, and Conservation International, for whom protecting “hotspots” of biodiversity on land — which they either own or manage — is a core activity. There are more parks every year. And yet despite this, conservation is failing. Neither the Biodiversity Convention, signed at the Earth Summit in Rio de Janeiro in 1992, nor the promise made a decade later at the World Summit on Sustainable Development in Johannesburg to staunch the loss of species, has done more than prevent the losses from accelerating. Perhaps the whole idea of sealing off wilderness from human activity is fatally flawed — a misreading of our symbiotic relationships with nature. The trouble is that what works in the wide open spaces of the U.S. — in Yellowstone and Yosemite national parks — may not work elsewhere, where there are more people and the demands on land are far greater. The test bed is likely to be Africa, where more of the world’s large mammals survive than anywhere else. It was in Africa that the U.S. parks model for conservation was first tried out on a global stage. It began with the godfather of America’s national parks, President Theodore Roosevelt, and a safari hunt, probably the greatest and most famous safari ever. A hundred years ago this year, the recently retired U.S.
president spent a year with his son Kermit in the African bush, eventually sending home more than 10,000 carcasses, most of them to the Smithsonian Institution. To this day, one of his white rhinos, suitably stuffed, retains a revered place in the mammal room at the Institution’s National Museum of Natural History in Washington, D.C. This orgy of killing in what Roosevelt called “the greatest of the world’s great hunting grounds” was big news back home, and cemented the outside world’s perception of Africa as a primeval landscape teeming with wildebeest and elephants, lions, and zebras. Not many years later, the old hunters became the founders of the great national parks that still cover much of the continent. For them killing and conservation went hand in hand. They believed they were protecting a wild landscape, the world’s last great hunting grounds. Yet this was mythmaking on a grand scale, for much of the “primeval” Africa that Roosevelt saw was less than two decades old. And the great parks of today are, ecologically, as artificial as an English country garden. This misreading of the landscape, and miscasting of conservation, goes to the heart of many of the problems conservationists face today. So what happened in Africa a hundred years ago? Why this misreading? The story began when an Italian expeditionary force arrived in the Horn of Africa in 1887. The small band brought with them livestock from Asia that carried a vicious hitchhiker — a cattle virus that causes a disease called rinderpest. Native to the steppes of central Asia, this close relative of measles and canine distemper had periodically swept through Europe, but was unknown in Africa south of the Sahara. The virus quickly spread to native cattle and traveled from Eritrea, through Ethiopia, and down trails south along the Rift Valley and west across the Sahel.
The British colonial authorities in southern Africa tried to halt the passage of the disease by erecting a 1,000-mile barbed-wire fence and shooting infected cattle. But it was futile. The pandemic was arguably the greatest natural calamity ever to befall Africa. “Never before in the memory of man, or by the voice of tradition, have the cattle died in such vast numbers; never before has the wild game suffered… The enormous extent of the devastation can hardly be exaggerated,” wrote Frederick Lugard, a British army captain who traveled the caravan routes of northern Kenya in 1890. Rinderpest only targets cloven-hoofed animals, but indirectly it devastated the human population, too. Herders had no livestock. Farmers had no oxen to pull their plows or drive the waterwheels that irrigated the fields. Hungry populations fell prey to diseases such as smallpox, cholera, and typhoid. Modern researchers have not estimated how many people died, but Lugard wrote: “Everywhere the people I saw were gaunt and half-starved, and covered with skin diseases. They had no crops of any sort to replace the milk and meat which formed their natural diet.” In places, epidemics coincided with drought. Between 1888 and 1892, roughly a third of the population of Ethiopia, several million people, is thought to have perished. Great pastoral civilizations across the continent were shattered. Central African cattle-rearing tribes like the Tutsi and Karamajong starved, along with Sudanese nations like the Dinka and Bari, West Africans like the Fulani, and southern Africans like the Nama and Herero. The folklore of the Maasai of East Africa tells of the enkidaaroto, the “destruction,” of 1891. They lost most of their cattle, and two-thirds of the Maasai died.
One elder later recalled that the corpses were “so many and so close together that the vultures had forgotten how to fly.” Many of these societies never recovered their numbers, let alone their wealth and power. Rinderpest served up the continent on a plate for European colonialists. In its wake, the Germans and British secured control of Tanzania and Kenya with barely a fight. In southern Africa, the hungry and destitute Zulus migrated to the gold mines of Witwatersrand, helping to create the brutal social divide between black and white from which apartheid sprang. It is an extraordinary story, rarely told. But the ramifications did not stop with the “scramble for Africa.” Paradoxically, this cataclysm for wildlife created the “primeval landscape” discovered by Roosevelt and his hunting chums. How come? First the epidemic killed huge amounts of native wildlife. But it created an ideal landscape for the spread of the tsetse fly, which even today is second only to AIDS as an obstacle to Africa’s development. The tsetse fly lives in lowland tropical bush. It carries trypanosomiasis, a disease that is often endemic among wild animals such as ruminants — likely conveying some immunity — but can cause widespread epidemics among cattle and humans, in whom it is called sleeping sickness. Tsetse flies like lush vegetation, where adults can deposit their larvae. Before rinderpest arrived, the cattle herds kept by pastoralists on the African plains had always checked the spread of tsetse by grazing the bush. But with rinderpest decimating the cattle, the woody vegetation grew fast. So after the epidemic passed, when wild animal populations revived much faster than the cattle, the tsetse flies spread fast through bush they had once been unable to occupy. The flies and the sleeping sickness they carried in turn kept humans and their cattle from returning to graze down the bush. 
In East Africa, highland areas where cattle had until recently roamed free quickly became tsetse-infested bush and woodland. In southern Africa, the fly spread through the Zambezi and Limpopo valleys, creating no-go areas for cattle where once they had thrived. In this way, rinderpest created an ecological revolution against people and cattle and in favor of wildlife. Africa has never fully recovered. Probably half a million people contract sleeping sickness each year, of whom some 100,000 die. The tsetse fly remains a major obstacle to the economic development of whole regions, often thriving in the most fertile lowlands that would otherwise make ideal cattle country. For conservationists, this makes sleeping sickness “the best game warden in Africa.” But it has also warped our perceptions of Africa. European colonists “just assumed that the country they found packed with animals and empty of people was the way that Africa had always been,” says John Reader, author of Africa: A Biography of the Continent. Julian Huxley, head of UNESCO and a founder of the World Wildlife Fund in the early 1960s, described the East African plains as “a surviving sector of the rich natural world as it was before the rise of modern man.” In their ignorance, conservationists created Africa’s great national parks in regions where rinderpest had recently destroyed human society: the Serengeti and Masai Mara, Tsavo and Selous, Kafue, Okavango, Kruger, and the rest. And they decreed that humans and their cattle had henceforth to be excluded at all costs. One of the most famous of these conservationists, the German biologist Bernhard Grzimek, who in 1960 wrote the book Serengeti Shall Not Die and directed the film of the same name, worked indefatigably to keep the Masai out of the Serengeti. “A National Park must remain a piece of primordial wilderness to be effective,” he wrote.
“No men, not even native ones, should live inside its borders. The Serengeti cannot support wild animals and domestic cattle at the same time.” This is the image of conservation that we perpetuate to this day. But it is built on a myth. It is only recently that researchers have realized that, before rinderpest, cattle and wild game coexisted on the plains of Africa. “Pastoralists had herded their cattle in harmony with wildlife for thousands of years,” says Robin Reid, who until last year was an ecologist at the International Livestock Research Institute in Nairobi. By excluding cattle from large areas, colonial ecologists and their successors destroyed that dynamic of coexistence and replaced it with a conservation ideology based on separation — nature on one side of the fence, mankind on the other. But in pursuing this goal, we are trying to put back together something that has not existed for thousands of years. And not just on the plains of Africa. Rainforest researchers from the Amazon and Central America to the jungles of Africa and Borneo have recently been discovering that there are probably no truly pristine rainforests anywhere in the world. Prior to 1492, the Amazon was full of humans. Most of central Africa’s forests have been consumed at least once for iron smelting. In setting pristine nature as the ideal, conservationists are seeking to preserve a version of the wild that has probably not existed in most of the world for thousands of years. Is co-existence possible between man and wildlife? In a few parts of Africa, where rural communities are able to benefit from the profits from wildlife tourism and even trophy hunting, a new accommodation has been found. Perhaps other models can be found in the 21st century. Maybe, a century after Roosevelt toted his guns across the African bush, parks need replacing with a new approach to wildlife conservation: one based not on separation but on coexistence between wildlife and humans.
STRATEGY: Embed STEM (science, technology, engineering, and math), computer science, and workforce connections throughout instruction. Effective school and district leaders embed STEM (science, technology, engineering, and math), computer science, and workforce connections throughout instruction to develop students who take thoughtful risks, engage in experiential learning, persist in problem solving, embrace collaboration, and work through the creative process. Through intentional connections between standards, assessments, and lesson design across science, technology, engineering, and math, STEM instructional models afford students access to an integrated curriculum with multiple opportunities to engage in authentic, challenge-based learning and design thinking to solve real-world problems. STEM competencies are essential for learners to be able to function in the twenty-first-century workplace. STEM stands for “science, technology, engineering, and mathematics,” treated not only as individual academic subjects but also as an interdisciplinary area of study. Through the integration of the four disciplines, STEM education assigns priority to the development of critical thinking, creativity, science literacy, and problem-solving skills to enable the next generation of innovators. To this end, STEM education emphasizes design thinking, investigation, and inquiry and devotes explicit attention to the development and application of these skills in real-world contexts. First Steps to Consider STEM ensures that important scientific, mathematical, technological, and engineering-linked concepts and practices are understood and applied in an interdisciplinary manner. STEM is about developing scientific, technological, and mathematical insights, concepts, and practices and using them to solve complex questions and real-world problems.
For a student to be proficient, STEM disciplines require not only content knowledge but also specific thinking dispositions, frequently referred to as the “5 C’s”: collaboration, communication, creativity, critical thinking, and computational thinking. - Establish a STEM advisory committee to develop a shared vision for STEM education that includes parents and representatives from the school, community, higher education institutions, and industry, and engage the committee in ongoing monitoring of the vision. - Identify potential gaps in student readiness and teacher professional development. - Identify priorities and common goals for STEM education to develop and execute a thoughtful, strategic plan. - Establish a STEM framework for teaching and learning that advances problem-solving learning through the application of STEM concepts and practices. - Sequence how STEM knowledge, skills, and attitudes will be addressed in the curriculum. - Identify budgetary needs required to increase resources for STEM learning. - Increase awareness of the benefits of STEM literacy for all students. - Adopt policies and standards for quality STEM professional learning to support teachers with STEM practices and principles and to share research related to STEM goals. - Promote a school climate and culture that advances an innovative, entrepreneurial, and inquisitive mindset where students and teachers are unafraid to take risks associated with STEM educational expectations. - Leverage professional learning communities and face-to-face time with teachers to develop and promote a consistent understanding of STEM and to support cross-curricular collaboration and professional learning. - Support teachers with the design of STEM experiences that are developmentally appropriate and extend students’ opportunities to solve real-world challenges. - Establish systems built around excellence and equity that afford students universal access to STEM experiences and programs.
- Establish school structures that facilitate the implementation of an interdisciplinary approach to STEM education and personalized learning experiences for students. - Adjust the master schedule as needed to accommodate STEM experiences and programs. - Establish and sustain partnerships with local business, industry, and higher education. - Create, identify, and promote STEM career pathways. - Create, identify, and promote externship programs for students and teachers to increase applied learning and work-based learning experiences for both. - Collaborate with stakeholders to measure the effectiveness of the adopted STEM framework and to inform progressive expectations associated with STEM education. Complexities & Pitfalls To create and promote high-quality STEM education, school leaders must avoid common pitfalls. They must also be prepared to persist through the challenges that will come from shifting from a product-focused to a process-focused learning design. - Failing to promote equitable access to STEM learning experiences or only implementing STEM in specialized schools (e.g., Career and Technology Education Centers). - Recruitment and retention challenges that limit the number of STEM teachers available to advance equitable STEM education and opportunities. - Inadequate access among teachers to the research, resources, knowledge, and expertise needed to support the effective implementation of STEM pedagogy. - Lack of coordination and alignment among schools, higher education, and industry related to STEM objectives, resulting in limited opportunities and outcomes. - Funding limitations. - How will the shared vision and purpose for STEM education be developed and communicated? What are the short-term and long-term goals for the STEM education program? What immediate and future budgetary implications are there for implementing STEM in the school? - How will STEM standards be integrated into the existing curriculum to ensure that students master STEM competencies?
Which standards align with local issues that would allow for real-world application of the STEM competencies? - How will the STEM framework or instructional model inform the implementation of STEM instructional practices? How will the STEM framework or instructional model support student-centered instruction and students as agents of their learning? - How will professional learning support teachers with shifting instructional practices to ensure fidelity of implementation of STEM pedagogy? - How will students experience STEM in the school? What types of extracurricular STEM programs will students experience? Who will sponsor extracurricular STEM programs? - How might school leaders support teachers in the creation and implementation of STEM lessons and activities? - What strategy will be employed to provide opportunities for teachers to plan STEM lessons and activities collaboratively? What grouping configurations will best support the learning outcomes expected? (Grade-level teams? Subject-alike teams? Cross-curricular teams?) - To what degree are STEM experiences and programs being implemented in the school? - What options exist for enabling STEM professionals to become teachers through alternative certification options? - How will the effectiveness of the STEM education program, lessons, and activities be evaluated?
The very foundation of Speech Blubs is based on the science of mirror neurons and video modeling. These are both highly effective, evidence-based teaching models which have shown great success, especially for speech perception. What are Mirror Neurons? Before we delve into mirror neurons, we need to have a quick understanding of neurons in general. Neurons are specialized cells in our brains which communicate in unique ways to store sensory information. They do this by “firing” to send and receive messages to coordinate our overall functioning. “Firing” is just another word for “learning,” and the different neurons in our brains respond to different sensory information. Mirror Neurons are a type of neuron that “fire” when observing an action and when performing an action – learning occurs through a process of observation-imitation. This means that your child’s mirror neurons respond to indirect experiences (observations) and direct experiences (imitations), especially when an action (or actor) is providing useful information. Here’s Why Mirror Neurons are so Important In order to understand the world around us, we need to learn through our senses. We all have mirror neurons, and when we use them to learn through imitation, or “mirroring,” we are able to learn languages, process emotional states, and perform many other important cognitive processes. By tapping into your child’s mirror neurons, we are helping them to learn and perceive speech in various learning contexts and social situations. They’re Clever Too . . . Because we use our mirror neurons all the time, they have become quite good at predicting and distinguishing intention to decide what behaviors, and which individuals, are worthy of imitation. This is focused learning at its best! What About Video Modeling? Video modeling is a wonderful learning medium supported by the science of mirror neurons.
As a teaching method, video modeling is a form of observational learning which allows your child to perceive speech and learn other desirable behaviors by watching video demonstrations and imitating the behaviors within them. What’s great about video modeling is that it’s cost-effective, practical, and an empirically supported intervention strategy. What is Speech Perception? Though it may seem to be a simple concept, speech perception is actually a highly complex skill which refers to your child’s ability to hear speech and then understand it – something that is very important for both speech and language development. Every successful communicator uses speech perception. We use it to understand what is being said and to formulate appropriate responses. We use it whether we are initiating the communication or receiving it. Children who have difficulties with speech perception also have difficulties communicating. That is why early identification and intervention is so important. How Do We Use Speech Perception? When your child perceives speech, they use cognitive, motor, and sensory processes to hear and understand it. Your child’s ability to communicate effectively is determined by their ability to perceive and use speech through: - Physical gestures - Changes in pitch - Different tempos - Regular and irregular rhythms - Varying stress - Alternating intonation - Prosody (pitch, rhythm, tempo, stress, and intonation) We’ve incorporated all these components into our app’s videos through diversity and relentless trial testing. It’s our goal to help your child develop their speech perception capabilities so that they can adapt to changes in their environment and the people that they communicate with. How Speech Blubs Can Help Our app develops your child’s speech perception skills and other desirable behaviors by promoting learning through watching video demonstrations and then imitating the behavior. 
Speech Blubs uses videos of real kids to make your child’s experience very similar to face-to-face video-chatting. It’s a positive use of screen time! Skills that your child gains through targeting mirror neurons and video modeling have a strong chance of being maintained and generalized. By utilizing cueing and prompting methods, our app provides consistency and efficiency, which encourages your child to become independent. Your child can then carry this independence over into a variety of contexts and use it to promote further skills such as:
- Play skills
- Social initiation skills
- Conversational and greeting skills
- Adaptive and functional skills
- Perspective-taking skills
By using video modeling and the science of mirror neurons in our app, your child should make progress with their speech perception abilities, even if at first they are not ready to imitate the kids in our app’s videos and prefer simply to observe. If you’re worried about your child’s speech perception skills, you can use our free screener by downloading our app. We’ll even give you a personalized report and actionable advice with the results. You can download the app from the App Store (https://apple.co/2UJcddR) or Google Play (https://bit.ly/2QyYTKX). You have an ally in Speech Blubs, and our biggest success is seeing your child achieve their greatest potential.
One of the basic building blocks of life is nitrogen. An international consortium was able to detect ammonium salt containing nitrogen on the cometary surface of Chury thanks to a method using analogs for comet material. The method on which the study on the detection of ammonium salt is based was developed at the University of Bern. Comets and asteroids are objects in our solar system that have not developed much since the planets were formed. As a result, they are in a sense the archives of the solar system, and determining their composition could also contribute to a better understanding of the formation of the planets. One way to determine the composition of asteroids and comets is to study the sunlight reflected by them, since the materials on their surface absorb sunlight at certain wavelengths. We talk about a comet’s spectrum, which has certain absorption features. VIRTIS (Visible, InfraRed and Thermal Imaging Spectrometer) on board the European Space Agency’s (ESA) Rosetta space probe mapped the surface of comet 67P/Churyumov-Gerasimenko, known as Chury for short, from August 2014 to May 2015. The data gathered by VIRTIS showed that the cometary surface is uniform almost everywhere in terms of composition: The surface is very dark and slightly red in color, because of a mixture of complex, carbonaceous compounds and opaque minerals. However, the exact nature of the compounds responsible for the measured absorption features on Chury has been difficult to establish until now. Cometary analogue provided the solution to the puzzle To identify which compounds are responsible for the absorption features, researchers led by Olivier Poch from the Institute of Planetology and Astrophysics at the Université de Grenoble Alpes carried out laboratory experiments in which they created cometary analogues and simulated conditions similar to those in space. 
Poch had developed the method together with researchers from Bern when he was still working at the University of Bern Physics Institute. The researchers tested various potential compounds on the cometary analogues and measured their spectra, just as the VIRTIS instrument on board Rosetta had done with Chury’s surface. The experiments showed that ammonium salts explain specific features in the spectrum of Chury. Antoine Pommerol from the University of Bern Physics Institute is one of the co-authors of the study, which is now published in Science. He explains: “While Olivier Poch was working at the University of Bern, we jointly developed methods and procedures to create replicas of the surfaces of cometary nuclei.” The surfaces were altered by sublimating the ice on them under simulated space conditions. “These realistic laboratory simulations allow us to compare laboratory results and data recorded by the instruments on Rosetta or other comet missions. The new study builds on these methods to explain the strongest spectral feature observed by the VIRTIS spectrometer with Chury,” Pommerol continues. Nicolas Thomas, Director of the University of Bern Physics Institute and also co-author of the study, says: “Our laboratory in Bern offers the ideal opportunities to test ideas and theories with experiments that have been formulated on the basis of data gathered by instruments on space missions. This ensures that the interpretations of the data are really plausible.” Vital building block “hides” in ammonium salts The results are identical to those from the Bern mass spectrometer ROSINA, which had also gathered data on Chury on board Rosetta. A study published in Nature Astronomy in February under the leadership of astrophysicist Kathrin Altwegg was the first to detect nitrogen, one of the basic building blocks of life, in the nebulous covering of comets. 
It had “hidden” itself in the nebulous covering of Chury in the form of ammonium salts, the occurrence of which could not be measured until now. Although the exact amount of salt is still difficult to estimate from the available data, it is likely that these ammonium salts contain most of the nitrogen present in the Chury comet. According to the researchers, the results also contribute to a better understanding of the evolution of nitrogen in interstellar space and its role in prebiotic chemistry. More information: Olivier Poch et al. Ammonium salts are a reservoir of nitrogen on a cometary nucleus and possibly on some asteroids. Science (2020). DOI: 10.1126/science.aaw7462 Image: Gas and dust rise from “Chury’s” surface as the comet approaches the point of its orbit closest to the sun.
In this activity, you will investigate the relationship between weight and mass. Mass and weight are different quantities. Mass is a measure of an object's inertia, the extent to which an object resists changes to its state of motion. Weight is a measure of the gravitational interaction between an object and the planet the object is nearest to; usually that planet is the Earth. The weight of an object is related to its mass, and in this activity we will find out what that relationship is.

Materials: 1 kg spring scale, hooked masses (1 × 100 g, 2 × 200 g, 1 × 500 g), ring stand and base, S clamp, crossbar, collar hook, graph paper.
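As a preview of what the data should show, here is a minimal sketch of the expected relationship, assuming the measurements are made near Earth's surface, where the gravitational field strength g is about 9.8 N/kg (an assumed value; determining the actual slope of the weight-versus-mass graph is the point of the experiment):

```python
# Predicted spring-scale readings for the hooked masses, assuming W = m * g
# with g ≈ 9.8 N/kg near Earth's surface (an assumption; the experiment
# determines the actual slope of the weight-vs-mass graph).

G = 9.8  # gravitational field strength, N/kg

def weight_newtons(mass_kg):
    """Predicted spring-scale reading, in newtons, for a given mass."""
    return mass_kg * G

# Hooked masses from the materials list, converted from grams to kilograms
masses_kg = [0.100, 0.200, 0.200, 0.500]
for m in masses_kg:
    print(f"{m:.3f} kg -> {weight_newtons(m):.2f} N")
```

Plotting weight against mass for these points gives a straight line through the origin whose slope is g.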
Screenshot of Civilization IV, a later version of the game that MIT's computer played. What’s the News: Many video gamers scoff at the idea of actually reading the instruction manual for a game. But a manual can not only teach you how to play a game, it can also give you the basics of language—that is, if you’re a machine-learning computer. Researchers at MIT’s Computer Science and Artificial Intelligence Lab have now designed a computer system that can learn the meaning of certain words by playing complex games like Civilization II and comparing on-screen information to the game’s instruction manual. How the Heck: The researchers, led by computer scientist Regina Barzilay, began by giving their machine-learning system very basic knowledge about Civilization II, such as the various actions it can take (moving the cursor, clicking, etc.). The computer also had access to the words and other information that popped up on-screen—though it didn’t understand what the text and objects meant—and it knew when it won or lost a game. Here, the computer’s behavior was mostly random, and it was able to win 46 percent of the time. The researchers then augmented the computer system so that it could use the game’s manual to develop strategies. So, when words like "river" now popped up during game play, the computer searched for those words in the instructions and analyzed the surrounding text. With this information, the computer made assumptions about what actions the words corresponded to, giving greater weight to ideas that consistently produced good results and trashing those associated with poor results. Its winning percentage jumped to 79 percent. What’s the Context: Two years ago, Barzilay conducted a similar experiment in which she had her machine-learning system install software on a Windows PC by using instructions available on Microsoft’s website. The system carried out 80 percent of the steps that a person using the same instructions would.
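The outcome-based weighting described here can be illustrated with a much-simplified sketch; the class name, the hypothesis format, and the update factors below are all illustrative assumptions, not details of the MIT system:

```python
# Toy version of outcome-weighted hypothesis learning: each hypothesis pairs
# a manual word with a candidate game action; weights rise after wins, fall
# after losses, and consistently poor hypotheses are dropped entirely.

from collections import defaultdict

class StrategyLearner:
    def __init__(self, drop_below=0.1):
        self.weights = defaultdict(lambda: 1.0)  # (word, action) -> weight
        self.drop_below = drop_below

    def update(self, hypotheses_used, won):
        factor = 1.2 if won else 0.8
        for h in hypotheses_used:
            self.weights[h] *= factor
        # "trash" hypotheses consistently associated with poor results
        for h in [k for k, w in self.weights.items() if w < self.drop_below]:
            del self.weights[h]

learner = StrategyLearner()
learner.update([("river", "settle nearby")], won=True)
learner.update([("river", "settle nearby")], won=True)
print(learner.weights[("river", "settle nearby")])  # weight has grown above 1.0
```

The real system learned from game state and manual text jointly; this sketch only captures the reinforcement loop of rewarding and discarding hypotheses by results.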
What the Future Holds: For most complex games that allow players to compete against computer opponents, programmers must develop and code various strategies for the computer to follow. The researchers say that programmers will soon be able to use their system to automatically create those algorithms (via MIT News). The team is currently trying to equip robotic systems with their "meaning-inferring algorithms."
Tree pruning is both an art and a science. Pruning is done for several reasons: to achieve a certain effect or look in the landscape (the artistic side), or for health or growth reasons (the scientific side). When done properly, pruning can improve the appearance of a tree and increase its life expectancy. Proper pruning opens the canopy of the tree to permit more air movement and sunlight penetration, for example. However, the reverse is also true. When done improperly, pruning can drastically cut short a tree’s life expectancy, even killing it in some extreme circumstances. To avoid such situations, tree care professionals adhere to an accepted standard of practice when pruning trees, called the American National Standard for tree pruning. This standard, designated ANSI A300, was implemented in 1995 and must be followed for pruning trees in all situations and locations. Several indicators will tell you whether or not your tree is sick and in need of attention. Warning signs of structural instability include cracks in the trunk or major limbs, hollow and/or decayed areas, or the presence of extensive dead wood. Mushrooms growing from the base of the tree or under its canopy could also indicate root decay. It pays to be highly suspicious of any tree that has had construction activities such as trenching, addition or removal of soil, digging or heavy equipment movement anywhere under the spread of its branches. These activities can cause root death, which in turn could lead to the structural instability of the tree. Even a healthy and otherwise safe tree can become hazardous if it is growing close to electric power lines. Someone who touches or climbs a tree while it's resting on a live power line could be electrocuted. Any tree that has limbs within 10 feet of overhead lines should be considered hazardous, and should be left to the professionals.
If you suspect a hazardous condition, it will pay to have your tree evaluated by a professional – you could be held responsible for any damage or personal injury caused by a tree on your property. Most often, trees change color with the seasons. Changing weather conditions will affect the balance of the chemical composition in a tree’s leaves, thus changing the color of the leaves. Color-changing leaves make for a beautiful display in the fall, but early changes in leaf-color can be a sign that your tree is stressed and susceptible to insect and disease attack. Occasionally only one or two limbs of the tree will show premature color changes, and this could in fact be a sign of a disease at work, weakening only the infected limbs. But it’s far more common for the entire tree to show color change, which is usually linked to root-related stress. Trees will respond to that stress by limiting their above-ground growth, evidenced by their premature color change. If the leaves on your trees are changing color before the fall season, consult with a professional arborist, who can identify any problems and offer solutions. Winter weather can be even more harmful to trees and shrubs that are already stressed, so in order to make sure they survive the cold, start by making sure they’re in good health year-round. Proper location for plants is step one to making sure they’ll survive, no matter the weather conditions. Certain areas around a home’s landscape offer different climatic conditions. These areas, known as microclimates, should be understood and used for planting appropriate trees. A professional nursery operator or arborist can help you choose the best tree and the ideal location to plant that tree around your house. In winter, the ground around the root system of the plant or tree freezes, stopping or slowing the circulation of water in the tree. If the root system is frozen, the plant is not able to draw in water. 
Placing mulch around a tree produces a year-round benefit because it not only increases the microbial activity and fertility of the soil underneath it, but it also acts as insulation between the root system and the outside climate. This helps retain moisture in the root system and reduce the change in soil temperature. Trees can also suffer from a kind of sunburn during the winter. When the sun shines brightly on a cold winter day, it may heat up the bark of a tree to a temperature that stimulates cellular activity. But as soon as the sun's rays are blocked, the bark temperature drops quickly, rupturing and killing the active cells, causing “sunscald,” the symptoms of which are elongated, sunken, dried or cracked areas of dead bark, generally on the south side of the tree. To prevent this, wrap the trunk of your trees with a commercial tree wrap, plastic tree guard or light-colored material that reflects the sun and reduces the temperature changes in the bark. The weight of snow and ice on a tree can cause branches and even the entire tree to topple. Ensuring that your trees are properly pruned can make them better able to withstand the extra weight of ice and snow. Salt used for deicing streets and sidewalks is also dangerous to trees, shrubs and grass in the winter. You can avoid injury by using only non-injurious types of deicing salts or avoiding salt applications to sensitive soil areas. Preparing trees for natural disasters is a must and should be done well in advance. The older the tree, the heavier it becomes, increasing the chances of a fall. Larger trees will also affect an increased area should they or their larger limbs fall. This means that power lines, homes and other structures that might not have been threatened a few years ago might suddenly be under threat by a tree that has grown. To help limit the chances of dangerous falls caused by storms, have a professional arborist evaluate your trees to determine potential weaknesses and dangers. 
You should also ask the arborist to look for signs of potential hazards, such as stress cracks, weak branches and other subtle or hidden indicators of trouble. There are other warning signs to watch out for as well. Remember that a tree is a living thing, and its integrity and stability change over time. Don't assume that a tree that has survived 10 severe storms will necessarily survive 11. Proper treatment of any condition, insects included, begins with a proper diagnosis. A professional arborist, nursery operator or state/county extension agent can help you determine what kind of insect is in your tree. Once the bugs are identified, it can be determined whether they’re harmful to the tree, beneficial to the tree or have no effect at all. Some insects are beneficial, because they control populations of harmful insects through predation or parasitism. It is in your best interest to keep them, so you want to avoid any treatments that take out the good bugs with the bad bugs. But if the insect is harmful, you must ask how harmful it is and whether it’s worth treating. Most professional arborists operate on the philosophy of treating only when the environmental/economic risk from the insect has reached a certain threshold. For the most part, the bugs you see in your tree are likely benign. Termites, for example, do pose a threat to trees and should be treated by a professional. On the other hand, carpenter ants don’t harm trees, but actually indicate decayed wood is present, providing a warning to you of a potentially hazardous situation. The time of year to take care of your trees really depends on what you need to have done. Many tree care activities can actually be carried out all year long, but spring and summer allow for the best opportunities to identify tree health problems, since a cursory inspection can tell whether the tree looks healthy compared to previous years or nearby trees of the same species.
However, most pest management activities have a very specific and narrow window of treatment that coincides with when the pest is active on the plant and/or vulnerable to the treatment. Some experts say that in temperate climates, fall and winter are the best times to prune your trees. But pruning generally can be done any time of year, with a few exceptions – pruning an American elm when the beetle that carries Dutch elm disease is busy flying from infected to healthy host trees greatly increases the elm's chances of infection, for instance. The practice known as “topping,” or cutting off large parts of a tree, is the tree care equivalent of amputation. Trees are often topped to reduce their height or control their shape, leaving branch stubs and little or no foliage. This damages the tree by leaving large exposed wounds that it can't readily close. “Lion-tailing” is another practice that severely damages trees. In this case, the inner foliage, branches and limbs of a tree are stripped bare, and the limbs left on the tree are long and bare except for a characteristic "tuft" of foliage at the end, giving the appearance of a lion's tail. This practice also causes serious damage to the tree. Topping should not be confused with proper crown reduction pruning, which will safely reduce a tree's size and redirect its growth, nor should lion-tailing be confused with proper thinning, which is the selective removal of branches to decrease weight and wind resistance. Generally, proper pruning of either type will never remove more than 25% of the tree's foliage. While trees add significantly to the value and beauty of a neighborhood, they are also responsible for costly property damage as well as dangerous power outages. Tree failure is by far the leading cause of outages nationwide. Trees that grow into electrical conductors present a potential hazard to an entire community if they become energized or knock out a power line.
This is why utility line clearance contractors trim trees in neighborhoods across the country. However, some residents do feel that these trees are needlessly damaged. While it is critical for utilities to trim trees, sometimes severely, it is nevertheless important for them to follow tree care standards of practice. When evaluating the quality of line clearance tree trimming, it's important to consider that the utility's primary objective is to prevent outages as well as electrical hazards. Minimally, the tree should be left in a healthy state, with at least some aesthetic value. If this cannot be accomplished, the utility may opt to remove the tree rather than create an eyesore and a future problem. Scientific research has shown that it is better from the standpoint of tree health if the trimming crew removes whole limbs with a relatively small number of large cuts rather than making numerous small cuts and leaving stubbed-off branches. Finally, utilities have found that removal of entire limbs helps to train the future growth of the tree away from the wires, keeping maintenance costs to a minimum while helping to ensure that the tree won't need the same drastic pruning in the future.
Virgil Aeneid: Latin Text. Virgil (Publius Vergilius Maro; traditional dates October 15, 70 BC – September 21, 19 BC), born near Mantua, was a Latin poet who flourished in Rome in the first century BC during the reign of the Emperor Augustus. He was regarded by the Romans as their greatest poet, an estimation that subsequent generations have upheld. He is best known for his national epic, the Aeneid, a twelve-book Latin poem written between 29 and 19 BC and left unfinished at his death. It tells the legendary story of Aeneas, a Trojan who travelled to Italy and became the ancestor of the Romans, linking the birth of Rome to the Trojan War and thus to the Homeric Cycle. English and Latin text translated by J.W. MacKail. From Aeneid Book VI, translated by H.R. Fairclough: "Thus he cries weeping, and gives his fleet the reins, and at last glides up to the shores of Euboean Cumae." The Latin text is available by book: Aeneid I, II, III, IV, V, VI, VII, VIII.
- An observer is on top of a lighthouse. How far from the foot of the lighthouse is the horizon that the observer can see?
- Two perpendicular lines lie across each other and the end points are joined to form a quadrilateral. Eight ratios are defined; three are given but five need to be found.
- How would you design the tiering of seats in a stadium so that all spectators have a good view?
- Prove that the shaded area of the semicircle is equal to the area of the inner circle.
- How can you represent the curvature of a cylinder on a flat piece of paper?
- The first of three articles on the History of Trigonometry. This takes us from the Egyptians to early work on trigonometry in China.
- The second of three articles on the History of Trigonometry.
- The third of three articles on the History of Trigonometry.
- If you were to set the X weight to 2, what do you think the angle might be?
- The length AM can be calculated using trigonometry in two different ways. Create this pair of equivalent calculations for different peg boards, notice a general result, and account for it.
- On a nine-point pegboard a band is stretched over 4 pegs in a "figure of 8" arrangement. How many different "figure of 8" arrangements can be made?
- Logo helps us to understand gradients of lines and why Muggles Magic is not magic but mathematics. See the problem Muggles Magic.
- The area of a square inscribed in a circle with a unit radius is, satisfyingly, 2. What is the area of a regular hexagon inscribed in a circle with a unit radius?
- It is obvious that we can fit four circles of diameter 1 unit in a square of side 2 without overlapping. What is the smallest square into which we can fit 3 circles of diameter 1 unit?
- Describe how to construct three circles which have areas in the ratio 1:2:3.
- Follow the instructions and you can take a rectangle, cut it into 4 pieces, discard two small triangles, put together the remaining two pieces and end up with a rectangle the same size. Try it!
- Three squares are drawn on the sides of a triangle ABC. Their areas are respectively 18 000, 20 000 and 26 000 square centimetres. If the outer vertices of the squares are joined, three more…
- The sides of a triangle are 25, 39 and 40 units of length. Find the diameter of the circumscribed circle.
- A dot starts at the point (1,0) and turns anticlockwise. Can you estimate the height of the dot after it has turned through 45 degrees? Can you calculate its height?
- Follow instructions to fold sheets of A4 paper into pentagons and assemble them to form a dodecahedron. Calculate the error in the angle of the not perfectly regular pentagons you make.
- The area of a regular pentagon looks about twice as big as the pentangle star drawn within it. Is it?
- Can you explain what is happening and account for the values being displayed?
- Six circular discs are packed in different-shaped boxes so that the discs touch their neighbours and the sides of the box. Can you put the boxes in order according to the areas of their bases?
- The sine of an angle is equal to the cosine of its complement. Can you explain why, and does this rule extend beyond angles of 90 degrees?
- The Earth is further from the Sun than Venus, but how much further? Twice as far? Ten times?
- Straight lines are drawn from each corner of a square to the mid-points of the opposite sides. Express the area of the octagon that is formed at the centre as a fraction of the area of the square.
- How far should the roof overhang to shade windows from the mid-day sun?
- Three points A, B and C lie in this order on a line, and P is any point in the plane. Use the Cosine Rule to prove the following statement.
- From the measurements and the clue given, find the area of the square that is not covered by the triangle and the circle.
- This problem in geometry has been solved in no less than EIGHT ways by a pair of students. How would you solve it? How many of their solutions can you follow? How are they the same or different?
- The coke machine in college takes 50 pence pieces. It also takes a certain foreign coin of traditional design…
- A moveable screen slides along a mirrored corridor towards a centrally placed light source. A ray of light from that source is directed towards a wall of the corridor, which it strikes at 45 degrees…
- An environment that simulates a protractor carrying a right-angled triangle of unit hypotenuse.
- In this problem we are faced with an apparently easy area problem, but it has gone horribly wrong! What happened?
- What angle is needed for a ball to do a circuit of the billiard table and then pass through its original position?
- You can use a clinometer to measure the height of tall things that you can't possibly reach the top of. Make a clinometer and use it to help you estimate the heights of tall objects.
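The clinometer problem above boils down to one line of trigonometry: height = distance × tan(angle) + the height of your eye above the ground. A rough sketch (the function name and the example numbers here are made up for illustration):

```python
import math

def estimate_height(distance_m, angle_deg, eye_height_m=1.5):
    """Estimate an object's height from a clinometer angle read at eye level."""
    return distance_m * math.tan(math.radians(angle_deg)) + eye_height_m

# Standing 20 m from a tree and sighting its top at 40 degrees:
print(round(estimate_height(20.0, 40.0), 1))  # about 18.3 m
```

The sketch assumes level ground and that the angle is measured from the horizontal; on a slope, the distance and eye height would need adjusting.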
Subspace or Hyperspace

Hi, I'm Dr. Daniel Jackson. Now, you've heard the term "hyperspace" for years in sci-fi movies and television shows, but what does it really mean?

'Subspace' and 'hyperspace' are terms used in science fiction to describe certain forms of space that can do things impossible in regular space (see also Green Rocks).

Subspace

Subspace was popularized by Star Trek and is a trope for a form of space that has different physical properties from normal space and allows the Enterprise crew (and the writers) to do all sorts of things that have some degree of scientific "consistency" but can't actually happen in the real world. For example, generating a subspace field can alter the apparent mass of an object, allowing it to be moved more easily. It's also the basis of FTL Radio, which makes communications possible in ships that are moving faster than light (since real-life radio transmissions can only travel at light speed). It was used on Star Trek: The Next Generation and Star Trek: Deep Space Nine with regularity, often just to fill the Applied Phlebotinum slot for the episode. Star Trek: Voyager took this to silly extremes: at least one episode referenced hypersubspace. Your guess as to what that means is as good as ours. Before there was Star Trek, Golden Age science fiction would sometimes include references to "sub-etheric" communications or waves. The idea of the ether had already been disproved, but the term was useful for "waves that behave kinda like light, only different." Not to be confused with the other subspace. Or with the other other subspace. Or with Sub Space.

Hyperspace

In the real world, hyperspace refers to mathematical concepts involving more than 3 spatial dimensions. Hyperspace or hyperdrive is often used to describe Faster-Than-Light Travel, other dimensions, or other sci-fi concepts, and as such has been part of the SF lexicon at least since the pulp magazines of the 1930s.
For example, in Star Wars, it is how starships achieve faster-than-light travel. Likewise, Babylon 5 uses hyperspace, but with a far different set of rules and base technologies behind it. If both terms are used in a story, then typically subspace will only allow data transmissions, but will carry them almost instantaneously. Hyperspace will be slower, but will at least temporarily allow matter (spaceships) to travel through it. The name subspace seems to be taken from the subobjects of various mathematical spaces, such as vector spaces in linear algebra or metric and topological spaces in topology. The name hyperspace was originally used to refer to vector spaces with more than three dimensions. Thus the origin of the hyperspace concept is probably linked to Another Dimension. Subtrope of Another Dimension. Hyperspace may or may not be a scary place. See also Hyperspace Index.

Anime and Manga
- In Uchuu Senkan Yamato this is where space submarines go when they "submerge". And yes, they do have periscopes to peek back into normal space with. If Space Is an Ocean, subspace is what's under the surface.
- In Crest of the Stars, it's called "Planar Space", and it is literally two-dimensional, making it rather the opposite of hyperspace. Spaceships have to generate a "space-time bubble" around themselves to avoid ceasing to exist as spaceships and getting irreversibly turned into exotic particles. Ships can only enter or exit Planar Space at naturally-occurring gates (called "Sords"), and distances and locations in Planar Space do not correspond in any meaningful way to distances or locations in normal space, though it is for the most part a great deal shorter and all those explored so far have exited into the Milky Way galaxy.
- Scott Pilgrim: Ramona Flowers uses Subspace to get around quicker for her job as a delivery girl, and owns a Subspace handbag. Ramona specifies that it's not the same kind of subspace featured in Super Mario Bros.
2, where it was essentially a Dark World, and not really this trope.

Comic Books
- The Authority has a ship that exists in and travels through "The Bleed", a seemingly endless expanse of red void that lies between (and connects?) each and every dimension for DC and Image Comics, and possibly even Marvel and Dark Horse Comics.
- Most Marvel Comics teleporters use some form of subspace to accomplish their teleportation - they jaunt to subspace, move a short distance, then come back out having covered vast distances. It is also stated in the original "Official Handbook of the Marvel Universe" and its subsequent variants that most characters with some form of growth draw the extra mass from there, while those who shrink store their shed mass there (until they reach a certain size limit, when they suddenly 'slip' into a different universe).
- Grant Morrison's JLA reveals that hyperspace, known to the White Martians as the Still Zone, to Prometheus as the Ghost Zone, and to the Queen Bee as the Honeycomb, is also the Phantom Zone used as a prison on Krypton and Limbo (as in, the place you go when you die if you weren't good or evil enough to go directly to Heaven or Hell).

Literature
- David Brin's Uplift novels have five different "levels" of hyperspace, each one seemingly more bizarre than the last and host to its own strange forms of life. In the meme level of hyperspace, bizarre biological transformations and even Ret Gone are common hazards.
- Hyperspace comes in a variety of "bands" in Weber's Empire from the Ashes, though the only difference between them seems to be the speed limit. Ships must maintain stasis fields during travel; if the field is broached, the ship is destroyed without a trace. Ships in normal space can detect ships traveling in hyperspace but not the other way around, allowing the creation of mines, undetectable to their targets, that warp into hyperspace to disrupt the stasis fields of ships passing over them.
Achuultani ships use the slower hyperbands, but their missiles cover all of the bands, making them much harder to block.
- In the Honor Harrington series, starships can enter Hyper and travel at effective FTL speeds, as distances in Hyper are shorter than in realspace. The "higher" the Hyper band, the greater the speed multiplication. Dangers include Gravity Waves and "walls" between different levels of Hyper.
- Neal Asher's Human Polity series has Null Space, exposure to which drives an unprotected human quite mad.
- John Meaney's sci-fi books (such as the Nulapeiron sequence) feature Mu Space, a fractal continuum (because 4 dimensions are dull compared to an infinite number of dimensions). Once again, exposure to mu-space tends to leave normal humans screaming and insane, though both cybernetic and genetic pilots can navigate it without problems. The former are blind, another common theme...
- Alan Dean Foster's Humanx Commonwealth stories feature both in the form of space-plus and space-minus. Space-plus is what ships travel through, while space-minus is what communications travel through. Space-minus travel is faster than space-plus travel, but any objects sent by space-minus get turned into soup. Later novels reveal that the Precursors figured out how to travel through space-minus and even more exotic things.
- Foster's version of how "subspace" works bled over into Star Trek due to his writing the Star Trek Log series of animated series novelizations in the Seventies.
- In the Star Wars Expanded Universe, Subspace is sometimes used for communications and sensors, but has shorter range and is slower than Hyperwave, making this a subversion: both Subspace and Hyperspace can be used for sensors, communications, and travel, but Hyperspace is invariably faster.
- In the Animorphs series, "Zero-space" or "Z-space" is supposed to be anti-space, which can be used for faster-than-light travel.
However, it also shifts, so how fast it can take you to a given location may vary. (This is part of the reason why Earth isn't getting much help from the Andalites for most of the series.) This is also where Shapeshifter Baggage goes and where new matter comes from. Don't worry; the chances of a stray spaceship hitting your spare mass when you're in morph are infinitesimally small. Yes, of course it still almost kills our protagonists at one point.
- Iain M. Banks's Culture novels feature two types of Hyperspace: Ultraspace and Infraspace. This is a result of the description of the nature of the Universe in those novels - as the Universe expands, other Universes are expanding "inside" it (in a multi-dimensional analogue of a kind of expanding onion, with the individual layers of the onion representing Universes). Hyperspace is found "in between" the Universes, with Ultraspace defined as the Hyperspace between a Universe and the one "above" it, and Infraspace as the one between a Universe and the one "below" it. The current method of travel among the Culture is to alternate between the two, accelerating first in one then switching to the other over and over until max speed is reached, then sticking with whichever is desired. Interestingly, these spaces appear to have a plasticity: a ship that is accelerating or braking hard is described as creating churning waves in whichever space it's in.
- Then there's the Excession, which shocks every AI Mind that sees it, as it is somehow connected to Infraspace and Ultraspace simultaneously, acting as a bridge between three universes at once.
- John E. Stith's Redshift Rendezvous features several levels of Hyperspace in which the universe is progressively smaller and the speed of light decreases with each step. The story is largely concerned with a murder mystery on a spaceship traveling in the level where the speed of light is 10 m/s and relativistic effects are a part of everyday life.
- In 50 Great Short Short Science Fiction Stories, one of the short stories deals with breaking through to hyperspace only to discover that it is slower than light speed and thus useless.
- All FTL travel in The History of the Galaxy is done through a dimension/anomaly called "hypersphere". Unlike the mathematical term, which simply means a sphere in more than 3 dimensions, this hypersphere is more like your typical sci-fi hyperspace. It exists alongside normal space/time and appears to have a spherical shape (with our galaxy surrounding it). Like any sphere, it has a center, and several later (timeline-wise) novels deal with what's located there and the impact it has on interstellar travel. The properties of hypersphere are stated in most novels, with those dealing with the nature of hypersphere going into more detail. Of note is the fact that humans are one of the few known races to have developed hyperdrives (although the discovery of hypersphere itself was a complete, and tragic, accident, involving the disappearance of the first extrasolar colony ship). Most other races have learned to use the "horizontal" force-lines in hypersphere (they connect large stellar bodies such as stars or planets) as tunnels of sorts, creating a Portal Network. While they don't need ships to travel from planet to planet, they are limited to the network until their ships, traveling at sublight speeds, can set up a gate in a new system. When humanity first encounters them, the aliens quickly adapt human hyperdrives for their own ships. Hyperdrives are made up of two generators: one to "submerge" a ship into hypersphere and one to "surface" it back to normal space. They are designed to be infallible and almost never break down.
- Hypersphere is also split into ten layers based on the energy levels required to reach them. However, the deeper a ship submerges, the stronger the "energy pressure" gets, until a ship without strong enough shields is crushed like an empty eggshell.
Most civilian ships tend to travel in the first layer, which is the slowest (or, rather, the distance between the points corresponding to normal-space locations is not as short as in other layers). Warships tend to have stronger shields and travel in the third or fourth layer. Only unmanned probes with exceptional shielding can hope to survive traveling as deep as the sixth layer. Later novels reveal what happens when one reaches the tenth layer. If one manages to do so, the traveler reaches the very center of hypersphere, which contains a miniature projection of the galaxy formed by all the vertical "tension lines" converging at that location from every star in the galaxy. There are also several planets orbiting the projection, but they are not native to this location, being the result of an experiment by alien races millions of years ago to put a permanent base of operations in hypersphere to serve as a hub to the rest of the galaxy. By traveling along a vertical "tension line" to the center of hypersphere, one is then able to pick another vertical and "surface" in normal space at that star system. Basically, one could get anywhere in the galaxy in a matter of hours, eliminating the need for normal hypersphere travel. There is a slight caveat in that the intense energy pressure at the center of hypersphere prevents electrical devices from functioning, while also imbuing some people with magic-like abilities. Thus, all ships wishing to use this method of travel must use a mix of electronic and photonic systems. The former are used until one reaches the center, while the latter are used for navigating in this area.
- All spacefaring races in Mikhail Akhmanov's Arrivals from the Dark series use the so-called "contour drive" for instantaneous jumps across great distances. The drive sends the ship into a parallel dimension known as Limbo before making another transition into normal space at different coordinates.
The power requirements are actually quite small compared to examples in other sci-fi works. The biggest problem is the difficulty of calculating precise exit coordinates, which is further increased by the distance of the jump and any gravity fields at both the start and end locations. Thus, jumps typically take place at the outskirts of star systems, although short-range in-system jumps are also possible to escape danger. Most ships tend to use the "short hop" system to reach far-off destinations (e.g. jump to a nearby system, recalculate, jump again, etc.). When fleets jump, they do so individually and frequently end up scattered throughout the destination system. Small contour drives typically burn themselves out after a single use and are thus only used in message drones. However, it's possible for a relatively small ship to be FTL-capable, as demonstrated by the three-man patrol ships (called "beyri") used by the humans hired as Defenders by the Lo'ona Aeo (on the other hand, Lo'ona Aeo technology is way more advanced than that of all other known races).

Live Action TV
- In Stargate SG-1, subspace and hyperspace are shown as separate things, frequently both used and fairly consistent in their application. Subspace is used to allow near-instantaneous communications, while hyperspace is reserved for transportation of physical objects and depicted as the slower of the two. Travel via Stargate relies on using artificial wormholes to travel through subspace, dematerialising an object and sending it through a matter stream before rematerialising it, allowing individuals to travel across the galaxy (or galaxies) in seconds, without all that tedious mucking around in hyperspace.
- Averted in Stargate Universe, where the means by which Destiny travels at FTL is unknown, other than that it doesn't rely on hyperspace.
- In The Tomorrow People, John theorizes that their form of teleportation involves travel through hyperspace.
They later learn that Tomorrow People who do not successfully "break out" (i.e. come into their powers) get lost in hyperspace and eventually lose bodily cohesion. Elizabeth is saved from such a fate in her introductory episode. John later adjusts their Jaunting belts to "change the angle" at which they enter hyperspace, as justification for a special-effect change. In the Big Finish series, one of the villains is the insane, disembodied consciousness of a Tomorrow Person who had become stuck in hyperspace.

Tabletop Games
- Space 1889 completely averts this. While they can use ether to travel at very high speeds indeed, they do so in perfectly regular space.
- The Warhammer 40,000 verse has an immaterial, psychic, parallel dimension known technically as the "Immaterium", colloquially (and classically) as the "Empyrean", also known as "Warpspace", the "Sea of Souls", and a number of other names, but most commonly called the "Warp". The Warp fits this trope as a hyperspace: it is filled with daemons, the occasional deity, and space hulks, and is the source of all psychic and sorcerous power. Direct exposure to the Warp or its gradual influence causes mutation, insanity, and a high risk of Demonic Possession, and (for humans at least) the people able to navigate it are blind to normal space.
- Since you go to the Warp when you die, hyperspace in Warhammer 40k really is flying through Hell. Humans don't have a strong enough psyche to hold themselves together, so their souls will either just dissolve back into the Warp or "lose sense", intuitively becoming animalistic or comatose, assuming that they don't get eaten by Daemons before either of those can occur. The Eldar, on the other hand, can survive and remain cognizant in the Warp, and will do almost anything to avoid it, because they accidentally created an Eldar-eating Chaos God(dess?). The Warp used to be a safe place for them, but no longer. That's why they use soulstones: to prevent their souls from ending up in the Warp.
- While the Warp jump remains the primary means of FTL travel for many races, the Webway is a hyperspace utilized almost exclusively by the Eldar. It is made of a series of tunnels somewhere between the Warp and realspace, connecting portals at millions of locations in realspace. It's limited in that the Webway portals are fixed, so a new location must have a portal built there before it can be accessed, and in that, since the cataclysmic fall of the Eldar, many parts of the Webway have been destroyed, lost, or inhabited by Daemons and other strange creatures and dangerous entities. Yet despite all of this, Webway travel is much quicker and safer than relying on Warp jump technology.
- Yet other species have different means of FTL travel: the Necrons, with their unimaginably advanced technology, go through subspace using inertialess drives, accomplishing the setting's only actual FTL travel.
- Tyranids use a subspace means of FTL via a living creature that harnesses a neighboring planetary system's gravity well.
- The Tau skirt the border between hyperspace and subspace, having not mastered the technology for a full translation into the Warp, "skimming" the border between space and Warpspace instead. This means that while they're safe from the stuff lurking in the Warp, their FTL is something like five times slower than Imperial Warp travel.
- The Tau can't really go into the Warp because they have no Psykers - they have no means to truly access it beyond "skimming" the divide between the galaxy and the Immaterium (whatever that means), and they wouldn't be able to navigate through the Warp even if they got there. And it turns out, even for the unpsychic Tau, being lost in Hell is a bad thing.
- Orks, on the other hand, love Warp travel, as they see the unimaginable dangers and ravenous monstrosities as welcome distractions from the monotony of getting to yet another planet to loot and burn.
- In BattleTech, travel via Jumpship is explicitly described as using a hyperspace field to translocate any jump-drive-equipped vessel up to thirty light years from its previous location. Notable for making realspace gravity a very real threat when emerging from a jump, requiring Jumpships to be well away from stars and planets when jumping into a system. Similarly, the setting's Hyperpulse Generators are a kind of Subspace Ansible explicitly stated to generate hyperspace fields to transmit messages across fifty light years of distance.
- In Traveller, a jump always takes approximately one week, but covers from one to six parsecs in that week depending on the power of the engines in a given ship.
- In Dungeons & Dragons, the Astral Plane works just like Hyperspace in many sci-fi settings. Everything that is Bigger on the Inside works by creating a pocket of real space inside the Astral Plane, and all spells that teleport people to other points on the same plane, or to other planes, tunnel through the Astral Plane.

Video Games
- The classic computer game Elite called it Witch Space.
- In the Halo series it's called slipspace, but works exactly like any other hyperspace concept.
- Maybe not exactly. Slipspace is described as a series of eleven dimensions of spacetime through which a ship can take wormhole-style "shortcuts" to reach its destination (it's been described as taking the "flat sheet used to represent space" idea with two points on opposite edges and crumpling that sheet into a ball, creating new extra spaces that allow less travel time if you have the tech to move your ships into and out of those spaces). It can also be used to transmit data and communications almost instantly. Its other uses range from stasis to miniaturizing Dyson spheres. It is hinted that there are actually several other dimensions, although they have not yet been fully explored.
- Slipspace also has an efficiency factor.
Entry into slipspace with human and Covenant drives is compared to a butcher knife and a scalpel, respectively, with war-era UNSC drives going a little over 2 1/2 light-years per day and being horribly inaccurate both space- and time-wise for "exits", while Covenant drives are accurate on an atomic level and can reach over 900 light-years per day; the Forerunners, from whom the Covenant (poorly) reverse-engineered their tech, could travel much faster, reaching points clear outside the Milky Way Galaxy in just a few days.
- The Homeworld video game series treats hyperspace as a sort of time-delayed quantum teleportation requiring massive amounts of energy. Sometimes it goes badly.
- Star Control II has Hyperspace, the standard method of interstellar travel, and Quasi-Space, which, after obtaining a certain item, you can travel through for a cost of 10 fuel. Once in Quasi-Space you can move your ship without consuming fuel and then go through a portal back to Hyperspace; it's the most efficient method for long trips in the game.
- The FreeSpace series uses a kind of hyperspace drive (called a "subspace drive") for short-range jumps inside a system. For long-range jumps between star systems, the same drive is used with a network of stable wormholes (called "Jump Nodes") that are otherwise completely invisible in normal space. As a result, new star systems can only be reached if a wormhole leading to them is discovered. The last level in the first game even takes place in one of these wormholes, while the sequel culminates in a desperate plan to deliberately induce the unintended side effect of blowing up a capital ship inside subspace (collapsing the Jump Node it is in) to cut off the new Shivan incursion from the remainder of GTVA space.
- Sword of the Stars has both. Subspace, also called node space, is used by humans and the Zuul to travel between stars (humans use natural fixed tunnels, while the Zuul "dig" their own, which are unstable and collapse over time).
The Tarka use hyperdrives to generate localized hyperspace bubbles around their ships to propel themselves to FTL speeds. It is possible that the Hivers' Portal Network also uses hyperspace for instant teleportation between gates. While the method used by the Morrigi is known only as either the Void Cutter Drive or the Flock Drive, it can be assumed to also use hyperspace.
- The Hiver gates are described as sending objects and messages through the "skin" of the universe. The novel Deacon's Tale claims that it's only safe for Hivers. When Cai Rui (a human) travels through a gate, he feels like he's being turned inside out.
- This is the reason that faster-than-light travel works in the Star Ocean series. However, by the events of the third game in the series, Subspace warp has largely been displaced by the even-faster Gravitic Warp technology.
- The Twisting Nether in the Warcraft series is hyperspace in all but name. It's a series of hyperspace-like extra dimensions that connects all worlds together. Portal magic uses this to work. Also, the former world of Draenor (called Outland after its destruction) was entirely shifted into the Twisting Nether when it was destroyed by a portal storm, so it has a REALLY interesting sky.
Introduction to mathematical logic

The first chapter is on the (relatively simple) propositional calculus. The second deals with quantification theory; I would recommend looking at the start of chapter 3 first, so as to have a concrete example. The later parts of chapter 3 get on to more complicated mathematical logic, such as Gödel's incompleteness theorem. Chapter 4 is on set theory. Now you may have come across set theory at school, but I have to tell you that the axiomatic version is a totally different ball game. But with the introduction to axiomatics from the previous chapters, this book would be a good place to start studying it. The fifth chapter is on computability. I wouldn't suggest this book as a starting point for this - it's much easier to approach it via real computers. However, it could be useful for seeing the links with other forms of logic.
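To give a flavor of the propositional calculus covered in that first chapter, here are two classic results checked in the Lean proof assistant. This is my own illustration, not an excerpt from the book, and the book itself works with Hilbert-style axiomatic proofs rather than a proof assistant:

```lean
-- Modus ponens: from P and P → Q, conclude Q.
example (P Q : Prop) (hp : P) (hpq : P → Q) : Q := hpq hp

-- Commutativity of conjunction: P ∧ Q implies Q ∧ P.
example (P Q : Prop) : P ∧ Q → Q ∧ P :=
  fun h => ⟨h.right, h.left⟩
```

The point of the chapter is that such tautologies can be derived mechanically from a small set of axioms and inference rules, which is exactly what a tool like Lean automates.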
Magnesium is element atomic number 12. (Mark Fergus, CSIRO)

Magnesium is the element with atomic number 12 on the periodic table. It is one of the alkaline earth metals, with a silver appearance in pure form. Here are some interesting facts about this element:

Element Atomic Number 12 Facts
- The symbol for atomic number 12 is Mg.
- The name magnesium comes from the Greek region called Magnesia, a source of magnesium-bearing compounds.
- Magnesium is highly reactive and readily forms a positively-charged cation, Mg2+.
- The atomic weight of element number 12 is 24.3050.
- The pure metal is lightweight and quickly oxidizes in air to form magnesium oxide.
- Most magnesium compounds are white crystalline solids.
- The density of the metal is only about 2/3 that of aluminum.

Magnesium is used for pencil sharpeners and firestarters. (Firetwister)

- Magnesium is obtained commercially via electrolysis of magnesium chloride extracted from seawater.
- Magnesium is not found as a pure element on Earth, but occurs in numerous compounds. It makes up around 2% of the Earth's crust.
- Atomic number 12 is the 9th most abundant element in the universe.
- It is the third most abundant element in seawater.
- It is the 11th most abundant element in the human body, by weight.
- Ancient man used magnesium compounds, such as Epsom salts, as laxatives, blood purifiers, and therapeutic agents.
- Much more magnesium is believed to be found below the Earth's crust. If you took out all of the magnesium from the Earth, you'd have enough of the element to make a planet the size of Mars, and have metal left over.
- Sir Humphry Davy first prepared pure magnesium in 1808 by passing electricity through magnesium oxide.
- Magnesium is made in stars by fusion of the elements helium and neon.
- The element is used to make car bodies, cans, and flares.
- Magnesium is the 4th most abundant mineral in the human body. The average adult person has about 24 grams of element 12 in the body. It is essential for human life and participates in over 300 biochemical reactions.
- Over half the magnesium in the human body (50-60%) is stored in the teeth and bones. Approximately 39% of the body's magnesium stores are in muscles. Only about 1% of the element in the body is extracellular.
- Magnesium deficiency in humans increases the risk of osteoporosis, diabetes, heart disease, metabolic problems, and other health conditions.
- Magnesium is a key element in chlorophyll, the green pigment plants use for photosynthesis.
- Magnesium burns readily. Attempting to put out a magnesium fire with water will cause the fire to flare up, not put it out.
- Because magnesium is light and strong, one of its uses was to make Mag wheels for vehicles. However, Mag wheels containing the element are uncommon now because the magnesium ignited too readily, presenting a fire hazard. The wheels still bear the name, even though their composition has changed.
- Over three dozen prescription medications interfere with magnesium absorption, including allergy medications, asthma drugs, diuretics, antibiotics, and some chemotherapeutic agents.
- China produces about 80% of the global supply of magnesium.
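The commercial electrolytic route mentioned above can be summarized by the standard half-reactions for molten magnesium chloride (general chemistry, not specific to this article):

```latex
\text{cathode:}\quad \mathrm{Mg^{2+}} + 2e^- \longrightarrow \mathrm{Mg}(l) \\
\text{anode:}\quad 2\,\mathrm{Cl^-} \longrightarrow \mathrm{Cl_2}(g) + 2e^- \\
\text{overall:}\quad \mathrm{MgCl_2}(l) \longrightarrow \mathrm{Mg}(l) + \mathrm{Cl_2}(g)
```

Magnesium ions are reduced to the metal at the cathode, while chloride ions are oxidized to chlorine gas at the anode.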
Figure 2.3 from the NCA: Observed global average temperature changes (black line), model simulations using only changes in natural factors (solar and volcanic) in light gray/blue, and model simulations with the addition of human-induced emissions (darker blue). Climate changes since 1950 cannot be explained by natural factors or variability, and can only be explained by human factors. (Figure source: adapted from Huber and Knutti, 2012). 2. Some extreme weather and climate events have increased in recent decades, and new and stronger evidence confirms that some of these increases are related to human activities. Changes in extreme weather events are the primary way that most people experience climate change. Human-induced climate change has already increased the number and strength of some of these extreme events. Over the last 50 years, much of the United States has seen an increase in prolonged periods of excessively high temperatures, more heavy downpours, and in some regions, more severe droughts. See page 24. 3. Human-induced climate change is projected to continue, and it will accelerate significantly if global emissions of heat-trapping gases continue to increase. Heat-trapping gases already in the atmosphere have committed us to a hotter future with more climate-related impacts over the next few decades. The magnitude of climate change beyond the next few decades depends primarily on the amount of heat-trapping gases that human activities emit globally, now and in the future. See page 28. 4. Impacts related to climate change are already evident in many sectors and are expected to become increasingly disruptive across the nation throughout this century and beyond. Climate change is already affecting societies and the natural world. Climate change interacts with other environmental and societal factors in ways that can either moderate or intensify these impacts. The types and magnitudes of impacts vary across the nation and through time. 
Children, the elderly, the sick, and the poor are especially vulnerable. There is mounting evidence that harm to the nation will increase substantially in the future unless global emissions of heat-trapping gases are greatly reduced. See page 32. 5. Climate change threatens human health and well-being in many ways, including through more extreme weather events and wildfire, decreased air quality, and diseases transmitted by insects, food, and water. Climate change is increasing the risks of heat stress, respiratory stress from poor air quality, and the spread of waterborne diseases. Extreme weather events often lead to fatalities and a variety of health impacts on vulnerable populations, including impacts on mental health, such as anxiety and post-traumatic stress disorder. Large-scale changes in the environment due to climate change and extreme weather events are increasing the risk of the emergence or reemergence of health threats that are currently uncommon in the United States, such as dengue fever. See page 34. 6. Infrastructure is being damaged by sea level rise, heavy downpours, and extreme heat; damages are projected to increase with continued climate change. Sea level rise, storm surge, and heavy downpours, in combination with the pattern of continued development in coastal areas, are increasing damage to U.S. infrastructure including roads, buildings, and industrial facilities, and are also increasing risks to ports and coastal military installations. Flooding along rivers, lakes, and in cities following heavy downpours, prolonged rains, and rapid melting of snowpack is exceeding the limits of flood protection infrastructure designed for historical conditions. Extreme heat is damaging transportation infrastructure such as roads, rail lines, and airport runways. See page 38. 7. Water quality and water supply reliability are jeopardized by climate change in a variety of ways that affect ecosystems and livelihoods. 
Surface and groundwater supplies in some regions are already stressed by increasing demand for water as well as declining runoff and groundwater recharge. In some regions, particularly the southern part of the country and the Caribbean and Pacific Islands, climate change is increasing the likelihood of water shortages and competition for water among its many uses. Water quality is diminishing in many areas, particularly due to increasing sediment and contaminant concentrations after heavy downpours. See page 42. 8. Climate disruptions to agriculture have been increasing and are projected to become more severe over this century. Some areas are already experiencing climate-related disruptions, particularly due to extreme weather events. While some U.S. regions and some types of agricultural production will be relatively resilient to climate change over the next 25 years or so, others will increasingly suffer from stresses due to extreme heat, drought, disease, and heavy downpours. From mid-century on, climate change is projected to have more negative impacts on crops and livestock across the country – a trend that could diminish the security of our food supply. See page 46. 9. Climate change poses particular threats to Indigenous Peoples’ health, well-being, and ways of life. Chronic stresses such as extreme poverty are being exacerbated by climate change impacts such as reduced access to traditional foods, decreased water quality, and increasing exposure to health and safety hazards. In parts of Alaska, Louisiana, the Pacific Islands, and other coastal locations, climate change impacts (through erosion and inundation) are so severe that some communities are already relocating from historical homelands to which their traditions and cultural identities are tied. Particularly in Alaska, the rapid pace of temperature rise, ice and snow melt, and permafrost thaw are significantly affecting critical infrastructure and traditional livelihoods. See page 48. 10.
Ecosystems and the benefits they provide to society are being affected by climate change. The capacity of ecosystems to buffer the impacts of extreme events like fires, floods, and severe storms is being overwhelmed. Climate change impacts on biodiversity are already being observed in alteration of the timing of critical biological events such as spring bud burst and substantial range shifts of many species. In the longer term, there is an increased risk of species extinction. These changes have social, cultural, and economic effects. Events such as droughts, floods, wildfires, and pest outbreaks associated with climate change (for example, bark beetles in the West) are already disrupting ecosystems. These changes limit the capacity of ecosystems, such as forests, barrier beaches, and wetlands, to continue to play important roles in reducing the impacts of these extreme events on infrastructure, human communities, and other valued resources. See page 50. 11. Ocean waters are becoming warmer and more acidic, broadly affecting ocean circulation, chemistry, ecosystems, and marine life. More acidic waters inhibit the formation of shells, skeletons, and coral reefs. Warmer waters harm coral reefs and alter the distribution, abundance, and productivity of many marine species. The rising temperature and changing chemistry of ocean water combine with other stresses, such as overfishing and coastal and marine pollution, to alter marine-based food production and harm fishing communities. See page 58. 12. Planning for adaptation (to address and prepare for impacts) and mitigation (to reduce future climate change, for example by cutting emissions) is becoming more widespread, but current implementation efforts are insufficient to avoid increasingly negative social, environmental, and economic consequences. 
Actions to reduce emissions, increase carbon uptake, adapt to a changing climate, and increase resilience to impacts that are unavoidable can improve public health, economic development, ecosystem protection, and quality of life. See page 62. Full Reference to the NCA Melillo, Jerry M., Terese (T.C.) Richmond, and Gary W. Yohe, Eds., 2014: Climate Change Impacts in the United States: The Third National Climate Assessment. U.S. Global Change Research Program, 841 pp. doi:10.7930/J0Z31WJ2.
The human central nervous system is a very advanced and extremely complex system. This system is also very vulnerable, due to its poor capacity for regeneration in adults. For example, a significantly damaged spinal cord does not recover. “This applies to all mammals. It may be that evolution has led to this due to the stability required by a complex nervous system. Were neurons to constantly grow and establish new connections, the results could be chaotic. However, this stability becomes a problem when the central nervous system is injured,” Professor Heikki Rauvala explains. Currently, there is not a single drug available to trigger regeneration in the central nervous system. “Discovering such a drug is the eventual objective of our work,” he adds. “We want to find a way to fix damaged spinal cords.” A drug that would bring about regeneration in the central nervous system would be a vast advancement for spinal cord injury patients, but in addition to that, there would be demand for such a drug in treating many disorders that degenerate the nervous system. “Many diseases of the central nervous system, such as neurodegenerative diseases, traumatic brain injury and MS, destroy cells and their connections in the nervous system,” Rauvala points out. “Bad guys” may turn out to be important allies Rauvala has focused his investigations particularly on the saccharide structures found in the intercellular material and on the cell surfaces of neural tissue, which bind, among other molecules, growth factors and the proteins of the intercellular material and cell surfaces. Chondroitin sulphates, which belong to the glycosaminoglycans, are generally considered the cause of the non-regeneration of the central nervous system, since they inhibit both the development of neuron precursors into neurons and the establishment of connections between neurons.
However, Rauvala has observed that the role of chondroitin sulphates in the growth of the central nervous system is more complex than previously thought. Years ago, he isolated a protein that activates neuron growth and is expressed in the intercellular material of the central nervous system. Rauvala named his discovery HB-GAM (heparin-binding growth-associated molecule), which has later also become known as pleiotrophin. Investigating the protein more closely, Rauvala found that, in addition to heparin, it binds tightly with chondroitin sulphates. “We noticed that HB-GAM is expressed in particularly large quantities in the intercellular material of the central nervous system at the time when the system is developing and still quite plastic. We started considering the importance of this phenomenon. Could HB-GAM, as it were, ‘override’ the inhibitory effect that chondroitin sulphates have on the growth of cells in the central nervous system?” Rauvala explains. The researchers observed how brain cells behaved in cell cultures containing chondroitin sulphates, but no HB-GAM. The result was clear: no cell growth. Next, HB-GAM was added to the culture. “The brain cells started animatedly developing and growing a network of neurons!” To their surprise, the researchers found that HB-GAM was not the only factor underlying this lively growth, but it was a matter of cooperation between it and chondroitin sulphate: if chondroitin sulphates were removed from the culture, leaving only the HB-GAM protein, growth was much weaker compared to both being present. “Chondroitin sulphates are not, after all, solely the bad guys in this process.” Motor functions in mice improved by the method The results gained from cell cultures were inspiring, but would this method also work in a living organism? Disease models were the next step. “We use a number of disease models to investigate the recovery of various types of spinal cord injury,” says Natalia Kulesskaya, a postdoctoral researcher. 
The first findings have been promising: drug therapy has improved the performance of mice in tests requiring movement. The research group has also been investigating the best method for administering the drug. Dosage into the bloodstream does not work, since the blood-brain barrier and the blood-cerebrospinal fluid barrier restrict the passage of many pharmacological substances into the central nervous system. One option is to inject the drug directly into the area of injury. This method is practical and efficacious in situations where the injured area must in any case be operated on, making it unnecessary to perform a separate surgical procedure to administer the drug. However, cases vary, which necessitates other methods of drug administration. Another alternative is lumbar puncture, the method used when collecting samples of cerebrospinal fluid. “It turned out that this method works – we were able to prove that the drug injected into cerebrospinal fluid ended up in the injured area,” describes Kulesskaya. “We know this works, now we just have to prove how it works.” According to Rauvala, his group is the only one in the world taking this approach to repairing central nervous system damage. “Methods based on the elimination of chondroitin sulphates or preventing their activity are already being investigated and developed in various locations, but we have a different approach: we are attempting to harness them for utilisation. We believe their activity can be steered to the desired direction. To this end, we are not only using HB-GAM, but also studying similar molecules that can achieve a similar effect.” It was perhaps the unique approach of Rauvala’s group that impressed the assessors and decision-makers of the Wings for Life foundation, which funds research focused on curing spinal cord injury. “Ours is admittedly an audacious project,” Rauvala and Kulesskaya concede. “But we know that the approach we have developed is working!
We have already demonstrated its feasibility in cell cultures and disease models. Now we still have to select the optimal drug candidate from among HB-GAM-type molecules and establish through immunohistochemical research methods that structural changes do actually occur in neural tissue. This is what we are currently doing. In two years, we hope to have data on this area as well to present to our funders.”
Research activities in the MRSEC Soft Materials Research Center (SMRC), which includes the molecular simulation group of Prof. Bedrov, have led to the discovery of an elusive phase of matter, first proposed more than 100 years ago and sought after ever since. The “ferroelectric nematic” phase of liquid crystal has been described in a recent study published in the Proceedings of the National Academy of Sciences (PNAS 2020 117, 14021-14031; https://doi.org/10.1073/pnas.2002290117). The discovery opens a door to a new universe of materials. Nematic liquid crystals have been a hot topic in materials research since the 1970s. These materials exhibit a curious mix of fluid- and solid-like behaviors, which allow them to control light and have been extensively used in liquid crystal displays (LCDs) in many laptops, TVs and cellphones. A nematic liquid crystal is like a handful of pins dropped on a table. The pins in this case are rod-shaped molecules that are “polar”—with heads that carry, say, a positive charge and tails that are negatively charged. In a traditional nematic liquid crystal, half of the pins point up and the other half point down, with the direction chosen at random. In a ferroelectric nematic liquid crystal phase, however, patches or “domains” form in the sample in which the molecules all point in the same direction, either up or down, thereby creating a material with polar ordering. Debye and Born first suggested in the 1910s that, if you designed a liquid crystal correctly, its molecules could spontaneously fall into a polar ordered state. In the decades since, however, scientists struggled to find a liquid crystal phase that behaved in the same way. That is, until MRSEC researchers began examining RM734, an organic molecule created by a group of British scientists several years ago. That same British group, plus a second team of Slovenian scientists, reported that RM734 exhibited a conventional nematic liquid crystal phase at higher temperatures.
At lower temperatures, another unusual phase appeared. When the MRSEC team tried to observe that strange phase under the microscope, they noticed something new. Under a weak electric field, this phase of RM734 was 100 to 1,000 times more responsive to electric fields than the usual nematic liquid crystals, and the molecules were nearly all pointing in the same direction. Experimentally, however, it is hard to zoom down to the molecular scale and understand why and how these RM734 molecules achieve such collective behavior. This is where atomistic molecular dynamics simulations conducted by Dengpan Dong and Xiaoyu Wei from Prof. Bedrov’s group allowed the team to gain an atomic-scale understanding. First, the simulations were able to confirm that aligning all RM734 molecules in the same direction is energetically more favorable than the conventional random alignment of molecular dipoles. Second, detailed analysis of the structural and orientational correlations obtained from the simulations identified key groups and intermolecular interactions that stabilize the ferroelectric nematic phase. Using these tools, Bedrov’s group is currently exploring other chemical structures that can lead to similar behavior. The discovery of this new liquid crystal material starts a new chapter in condensed-matter physics and could open up a wealth of technological innovations—from new types of display screens to reimagined computer memory. Within a couple of days of publication, the manuscript attracted worldwide attention and was picked up by more than 25 news outlets around the world.
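The energetic preference for aligned dipoles can be illustrated with a far simpler model than the atomistic simulations described above. The sketch below is a toy point-dipole calculation in Python; the dipole moment (roughly 3 debye) and the 1 nm head-to-tail spacing are generic illustrative values, not RM734 parameters. For two dipoles stacked head-to-tail along a line, the parallel ("ferroelectric-like") orientation comes out lower in energy than the antiparallel one.

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def dipole_energy(p1, p2, r_vec):
    """Interaction energy of two point dipoles (SI units):
    U = [p1.p2 - 3 (p1.r_hat)(p2.r_hat)] / (4 pi eps0 r^3)."""
    r = math.sqrt(dot(r_vec, r_vec))
    r_hat = tuple(x / r for x in r_vec)
    return (dot(p1, p2) - 3 * dot(p1, r_hat) * dot(p2, r_hat)) / (
        4 * math.pi * EPS0 * r**3
    )

# Illustrative values only: ~3 debye dipole, 1 nm head-to-tail separation
p = (0.0, 0.0, 1e-29)        # C·m, pointing along z
r = (0.0, 0.0, 1e-9)         # separation vector along the same axis

parallel = dipole_energy(p, p, r)
antiparallel = dipole_energy(p, (0.0, 0.0, -1e-29), r)
print(parallel < antiparallel)  # parallel alignment is lower in energy
```

This is only a two-molecule toy; the real stabilization of the ferroelectric nematic phase involves the many-body structural correlations the MD simulations were needed to resolve.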
Huntington’s disease (HD) is an inherited disorder that causes brain cells, called neurons, to die in various areas of the brain, including those that help to control voluntary (intentional) movement. Symptoms of the disease, which gets progressively worse, include uncontrolled movements (called chorea), abnormal body postures, and changes in behavior, emotion, judgment, and cognition. People with HD also develop impaired coordination, slurred speech, and difficulty feeding and swallowing. HD typically begins between ages 30 and 50. An earlier onset form called juvenile HD occurs under age 20. Its symptoms differ somewhat from adult onset HD and include rigidity, slowness, difficulty at school, rapid involuntary muscle jerks called myoclonus, and seizures. More than 30,000 Americans have HD. Huntington’s disease is caused by a mutation in the gene for a protein called huntingtin. The defect causes the cytosine, adenine, and guanine (CAG) building blocks of DNA to repeat many more times than is normal. Each child of a parent with HD has a 50-50 chance of inheriting the HD gene. A child who does not inherit the HD gene will not develop the disease and generally cannot pass it to subsequent generations. A person who inherits the HD gene will eventually develop the disease. HD is generally diagnosed based on a genetic test, medical history, brain imaging, and neurological and laboratory tests. Huntington’s disease causes disability that gets worse over time. Currently no treatment is available to slow, stop, or reverse the course of HD. People with HD usually die within 10 to 30 years following diagnosis, most commonly from infections (most often pneumonia) and injuries related to falls. Tetrabenazine and deutetrabenazine can treat chorea associated with HD. Antipsychotic drugs may ease chorea and help to control hallucinations, delusions, and violent outbursts.
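Since the mutation is an expansion of CAG triplet repeats, the counting itself is simple to illustrate. The Python sketch below is a hypothetical illustration: the sequence and the helper name `max_cag_repeats` are invented here, and real repeat sizing is done with genotyping assays, not string matching on raw sequence text.

```python
import re

def max_cag_repeats(seq: str) -> int:
    """Return the length of the longest uninterrupted run of CAG triplets.
    Illustrative helper, not a diagnostic tool."""
    runs = re.findall(r"(?:CAG)+", seq.upper())
    return max((len(r) // 3 for r in runs), default=0)

# Made-up example sequence. Unaffected alleles typically carry fewer than
# ~27 CAG repeats; expanded alleles (roughly 40 or more) cause HD.
example = "ATG" + "CAG" * 42 + "TTC"
print(max_cag_repeats(example))  # → 42
```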
Drugs may be prescribed to treat depression and anxiety. Side effects of drugs used to treat the symptoms of HD may include fatigue, sedation, decreased concentration, restlessness, or hyperexcitability, and should be only used when symptoms create problems for the individual.
Reporting on the #SciWri16 #3Dgenome session. If you have taken any genetics courses in high school or college, this top image might be familiar to you. Inside a cell, there is the nucleus, which houses the chromosomes, and the chromosomes are made of DNA. DNA is an astounding molecule. If we were to stretch out a DNA molecule, it would be about 2 meters long (around 6.5 feet). The nucleus that houses the DNA is only 6 micrometers in size, which is about 100 times smaller than a grain of salt. That is like trying to stuff a thread that stretches from Philadelphia to Washington, D.C. (140 miles) in a check-in luggage bag (27 inches in width). And, to make this even more complex, the DNA cannot simply get haphazardly packed into the nucleus. It has to remain functional so the cell can produce functional proteins and survive — meaning that how it is packed matters. So how does the DNA fit into the nucleus? To answer that question, Baylor College of Medicine geneticist Erez Lieberman Aiden and his team are investigating how the genome remains functional in spite of being packed into such a tight space. “Usually when we are packing things in backpacks or car trunks, I can take it all out later,” Lieberman Aiden said. “That’s not the case for the genome. The genome has to stay in the nucleus forever. The cell doesn’t ever have space to expand. The problem of how you package a genome into the cell has layers of complexity that most packing problems we encounter don’t have. Most packing problems you encounter are about packing in minimized space. The genome has to be packed in minimized space and stay incredibly functional.” Contact Map Reveals 3D Genome Structure At the ScienceWriters2016 conference, Lieberman Aiden gave a talk called The Human Genome’s 3D Code to explain the way he and his colleagues study how DNA is organized and packed in the nucleus. See below for a Storify archive of this #3Dgenome session’s tweets [Roca 2016].
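The numbers in the thread-in-luggage analogy can be checked with a quick back-of-envelope calculation. This is a hedged sketch using only the approximate figures quoted above; it shows that the two length-to-container ratios land within a few percent of each other, both around 330,000 to 1.

```python
# Back-of-envelope comparison of DNA packing vs. the thread-in-luggage
# analogy. All inputs are the approximate values quoted in the text.

def length_ratio(length_m: float, container_m: float) -> float:
    """Ratio of an object's length to its container's size."""
    return length_m / container_m

# DNA: ~2 m of DNA in a ~6 micrometer nucleus
dna_ratio = length_ratio(2.0, 6e-6)

# Analogy: a 140-mile thread in a 27-inch check-in bag
MILE_M = 1609.344
INCH_M = 0.0254
thread_ratio = length_ratio(140 * MILE_M, 27 * INCH_M)

print(f"DNA-to-nucleus ratio:    {dna_ratio:,.0f} : 1")
print(f"Thread-to-luggage ratio: {thread_ratio:,.0f} : 1")
```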
Lieberman Aiden described a tool that he helped develop as a graduate student called Hi-C — a play on the names of similar research tools (3C, 4C, and 5C) and the name of a popular juice [Lieberman-Aiden 2009]. The tool is designed to reveal which parts of the genome are close to each other. Using Hi-C, Lieberman Aiden and his colleagues were able to develop a “contact map,” which shows the physical distance between two regions of the genome. To illustrate how Hi-C works, Lieberman Aiden made a contact map for characters from the television show The Simpsons. The contact map shows how many times two characters are in a picture together, which indicates how close the two characters are. In the example, Homer has many more pictures with Marge than with other characters, suggesting Homer and Marge have a close relationship and interact a lot. The same goes for two regions of the genome that show a close physical distance in the Hi-C contact map. The closer the two regions are, the more they interact. The two regions might be close to each other and interact often because they have a functional relationship, like Homer and Marge. And based on the contact map, researchers can get a glimpse into how the genome is packed into the tiny nucleus. 3D Structure Shows Link between Structure and Function After analyzing the contact map, the researchers came to two main conclusions. First, all parts of the human genome roughly fall into one of two compartments that are physically located in different places in the nucleus. Lieberman Aiden described the compartments as “social clubs,” in which a particular region of the DNA can “hang out” with other DNA sequences in either club; but, it is more likely to hang out in one more than the other. One compartment comprises the “active” genome, which expresses genes to produce proteins. The other compartment is the “inactive” genome, where genes are not expressed.
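The Simpsons analogy lends itself to a tiny worked example. The Python sketch below counts pairwise co-occurrences the same way a Hi-C contact map counts how often two genomic regions are found near each other; the character sets and "pictures" are invented for illustration, not data from the talk.

```python
# Toy "contact map": count how often pairs of items co-occur, analogous
# to Hi-C counting how often two genomic regions are found close together.

from collections import Counter
from itertools import combinations

# Invented example data: each set is one "picture"
pictures = [
    {"Homer", "Marge"},
    {"Homer", "Marge", "Bart"},
    {"Homer", "Marge"},
    {"Bart", "Lisa"},
    {"Homer", "Bart"},
]

contacts = Counter()
for pic in pictures:
    for a, b in combinations(sorted(pic), 2):
        contacts[(a, b)] += 1

# The pairs that co-occur most are inferred to be the "closest"
for pair, n in contacts.most_common():
    print(pair, n)
```

In this made-up data, (Homer, Marge) tops the map, exactly the inference Hi-C draws for genomic regions with high contact counts.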
Based on these observations, it was clear to the researchers that the genome structure and its functions are closely tied. Because of this link, Lieberman Aiden said that the folding of the DNA is a dynamic process, in which structural changes accompany functional changes. He compared the dynamic changes of the genome structure to a person’s re-folding of a newspaper, emphasizing that people fold newspapers differently depending on what stories they want to read. “How do you read a newspaper? Newspapers actually have a lot of information; but, you can constantly change how you fold the information in order to access what you want. Usually when you are reading the newspaper, you don’t throw out the sections. You have all the sections. The question is how do I make the bins that I’m using accessible? The way you fold [the newspaper] has a huge impact on the prominence of different pieces of information.” 3D Structure Illustrates Our Genome’s Complexity Human genomes consist of large loops, which Lieberman Aiden illustrated using blue cupcakes in his presentation. This conclusion is significant because it changes how we think about our genome. Human (and other mammalian) genomes are not linear and simple like those of bacteria, where there are no complex 3D genome structures. Loops allow parts of the genome to influence distant parts of the genome — parts of the genome, like enhancers, that are important for a gene’s function no longer have to be right next to each other. According to Lieberman Aiden, this conclusion from the Hi-C experiment illustrates how complex our genome is. This is a concept that still remains a curiosity for many researchers, himself included. What could be the purpose of such complexity? Short answer — no one knows. But researchers are certain about one thing — the loops and other complex structures are clearly important. According to Lieberman Aiden, they are very well preserved across different mammalian species.
If the loops are not important, there is no reason for different species to still have them. Still, Lieberman Aiden said the human genome seems “inexplicably messy” compared to less-complex organisms like bacteria. “We don’t really know why the human genome is designed this way,” Lieberman Aiden said. “There’s a joke — if you are on the committee of designing the smaller genomes, you would gladly take credit for it. But if someone accused you of being on the committee of designing the human genome, you would be embarrassed.” Investigating Human Genome While Being One’s Strongest Advocate Constructing the contact map and revealing the genome’s 3D structure would not be possible without tireless work from a team of scientists. Neva Durand, a staff scientist at Lieberman Aiden’s lab, helped to create two software programs for analyzing Hi-C data: Juicebox, which helps researchers visualize the data, and Juicer, which transforms raw Hi-C data into files that can be visualized using Juicebox. Durand said she initially began her studies in computer science; but, she decided to work with Lieberman Aiden in order to pursue interesting questions and prevent herself from feeling constrained in her previous field of study. “Erez has lots of very cool ideas; and, he likes looking at a diverse array of problems in many different areas,” Durand said. “This aligns really well with me. I love learning new things and didn’t want to be stuck on one tiny little branch of research. It’s a very exciting, fast-moving field, and I learn more and more every day. The work is intense but a lot of fun.” Having started her studies in computer science, Durand said she is familiar with feeling outnumbered as a female researcher. She said it is imperative for female and minority researchers to find a voice and be one’s own strongest advocate. Although some in technical disciplines might not appreciate these qualities in women, such advocacy is necessary for female career advancement. 
“I think it’s always challenging and some people face more challenges than others,” Durand said. “And it’s completely unfair. However, given that the system is biased the way it is, it’s important to believe in yourself and advocate for yourself. I think especially members of minority groups have trouble believing in themselves, even when by any objective measure they are succeeding.” Durand also stressed the importance of finding a supportive mentor who could provide the best environment possible for one to move forward. “And it’s equally important to find good mentors that believe in you and will give you encouragement and that extra push it takes to succeed,” Durand said. “I’ve been very lucky to always find excellent mentors — both male and female — at every stage. They were very supportive at every stage of my career. This has continued with Erez, who is a fantastic mentor.” (top) Depiction of eukaryotic cell DNA. (Credit: Wikimedia) (middle) Contact Map of Simpsons characters. (Credit: Ian Street) (bottom) Neva Durand. (Credit: N. Durand) I. Park (2016) Bringing DNA Structure and Function to the 3D Realm. DiverseScholar 7:5 Diverse Scholar is now publishing original written works. Submit article ideas by contacting us at [email protected]. This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 Unported License. Originally published 31-Dec-2016.
Y-H Percival Zhang, chief science officer of Cell-Free BioInnovations and an associate professor of biological systems engineering at Virginia Tech, contributed this article to Live Science's Expert Voices: Op-Ed & Insights. It might seem strange to use an ingredient found in cupcakes and cookies as an energy source, but most living cells break down sugar to produce energy. And, interestingly, the energy density of sugar is significantly higher than that of current lithium-ion batteries. Recently, my colleagues and I successfully demonstrated the concept of a sugar biobattery that can completely convert the chemical energy in sugar substrates into electricity. Working under a Small Business Innovation Research (SBIR) grant from the U.S. National Science Foundation, we reported the findings in the January 2014 issue of Nature Communications. This breakthrough sugar-powered biobattery can achieve an energy-storage density of about 596 ampere-hours per kilogram (A-h/kg) — an order of magnitude higher than the 42 A-h/kg energy density of a typical lithium-ion battery. A sugar biobattery with such a high energy density could last at least ten times longer than existing lithium-ion batteries of the same weight. This nature-inspired biobattery is a type of enzymatic fuel cell (EFC) — an electrobiochemical device that converts chemical energy from fuels such as starch and glycogen into electricity. While EFCs operate under the same general principles as traditional fuel cells, they use enzymes instead of noble-metal catalysts to oxidize their fuel. Enzymes allow for the use of more-complex fuels (such as glucose), and these more-complex fuels are what give EFCs their superior energy density. For example, a hexose sugar such as glucose can — upon complete oxidation — release 24 electrons per molecule, whereas hydrogen (a fuel used in traditional fuel cells) releases only two electrons.
Until now, however, EFCs have been limited to releasing just two to four electrons per glucose molecule. As my colleague Zhiguang Zhu, a senior scientist at Cell-Free BioInnovations, has said, our team is not the first to propose using sugar as the fuel in a biobattery. However, we are the first to demonstrate complete oxidation of the biobattery's sugar, achieving a near-theoretical energy conversion yield that no one else has reported. For our battery, we constructed a synthetic catabolic pathway (a series of metabolic reactions that break down complex organic molecules) containing 13 enzymes to completely oxidize the glucose units of maltodextrin, yielding nearly 24 electrons per glucose molecule. We put specific thermostable enzymes into one vessel to constitute a synthetic enzymatic pathway that can perform a cascade of biological reactions to completely "burn" the sugar, converting it into carbon dioxide, water and electricity. Unlike natural catabolic pathways for the oxidation of glucose in cells, the designed synthetic pathway does not require costly and unstable cofactors, such as adenosine triphosphate (ATP, critical for energy processes in human cells), coenzyme A, or a cellular membrane. Instead, we used two redox enzymes that generate reduced nicotinamide adenine dinucleotide (NADH) from sugar metabolites. NADH, a reducing agent involved in redox reactions, is a natural electron mediator that carries electrons from one molecule to another. We also used ten other enzymes responsible for sustaining metabolic cycles and an additional enzyme that transfers electrons from NADH to the system's electrode. This new synthetic pathway enables the biobattery to extract the entire theoretical number of electrons per glucose unit and thereby use all the chemical energy in the sugar. This is a significant breakthrough.
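A rough Faraday's-law calculation shows why extracting 24 electrons per glucose instead of two matters so much. This is a hedged back-of-envelope sketch for the fuel alone: it gives the theoretical charge per kilogram of pure glucose, not the reported 596 A-h/kg figure, which reflects the complete battery and fuel formulation.

```python
# Back-of-envelope: theoretical charge per kilogram of glucose as a
# function of electrons extracted per molecule. Upper bound on the fuel
# alone, not a prediction of device-level energy density.

FARADAY = 96485.0            # C per mole of electrons
GLUCOSE_MOLAR_MASS = 180.16  # g/mol

def theoretical_ah_per_kg(electrons_per_molecule: int) -> float:
    coulombs_per_kg = electrons_per_molecule * FARADAY * (1000.0 / GLUCOSE_MOLAR_MASS)
    return coulombs_per_kg / 3600.0  # convert coulombs to ampere-hours

full = theoretical_ah_per_kg(24)    # complete oxidation (this work)
partial = theoretical_ah_per_kg(2)  # typical earlier EFCs

print(f"24 e-/glucose: {full:,.0f} A-h/kg theoretical")
print(f" 2 e-/glucose: {partial:,.0f} A-h/kg theoretical")
print(f"ratio: {full / partial:.0f}x")
```

The twelve-fold gap between the two theoretical figures is the headroom that complete oxidation unlocks.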
In addition to its superior energy density, the sugar biobattery is also less costly than the lithium-ion battery, refillable, environmentally friendly, and nonflammable. While we continue to work on extending the lifetime, increasing the power density, and reducing the cost of electrode materials for such a battery, we hope that the rapidly growing appetite for powering portable electronic devices could well be met with this energy-dense sugar biobattery in the future. This technology was funded through the NSF Small Business Innovation Research Program. This article was prepared by the National Science Foundation in partnership with CEP. Follow all of the Expert Voices issues and debates — and become part of the discussion — on Facebook, Twitter and Google +. The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on Live Science.
Innovation has been pinpointed as a crucial strategy to shift towards a sustainable food system in the EAT-Lancet Report released in January. Although agricultural innovation should not be limited to technology, technology has without doubt had substantial impacts on our current food systems. Technology, however, is a double-edged sword, and should be properly evaluated prior to its application. How do you imagine our food system will change in the following decades? In a visualized food system of 2040, automation and mechanisation are expected to expand from food processing upstream into food production. Robotic harvesters and meat-processing robotics are already on the way to reshaping the human-nature relationship through agriculture. Unmanned aerial vehicles (UAVs), commonly known as drones, have also been widely used to collect spatial data for precision farming. To address food production in an urban context, advanced technologies have been employed to develop vertical farming, which minimises land use by growing crops in an upright setting. These technologies, along with buzzwords such as the Internet of Things (IoT) and blockchain, have been developed to digitise our existing food systems. Before crowning agricultural digitisation with the title of the fourth agricultural revolution, it is worth examining the last revolution, the Green Revolution of the 1960s. While the Green Revolution was well-praised for multiplying the yields of grains, especially in Latin America and Asia, its shortcomings are not often raised. The newly developed seeds, promoted for their high-yielding characteristics, did not perform as expected when they were first grown in the Philippines. On the contrary, the situation of hunger and malnutrition was exacerbated in the regions cultivating those varieties.
Studies by leading research institutes later revealed that these breeds would only reach their potential when planted with extra inputs like irrigation, mechanisation, fertilizers, pesticides and herbicides. Access to the capital needed to purchase these additional applications then yielded unequal outcomes for richer and poorer farmers. Not only did this lead to socio-economic division, but the growth of more limited varieties also resulted in the loss of genetic diversity. Since the Green Revolution, at least 300 out of 3500 traditional rice varieties have gone extinct, and more than 95% of rice paddies were predominantly cultivated with the high-yielding breeds by 1986. The unintended consequences of agricultural technology could be averted if technological innovation is handled with more care. In the face of climate change, agricultural practices have to be more sustainable, in terms of both resource use and resilience to extreme weather. To efficiently assist such a transition in areas highly vulnerable to climate change, technology is integrated into the comprehensive approach of farming in climate-smart villages. Examples of these villages in West Africa use information and communication technology (ICT) to provide real-time weather forecasts and advisories on agricultural practices. Climate-smart agriculture hence makes the villages more adaptive to uncertain conditions. Preventing the mistakes of the past requires thorough consideration and cautious application when it comes to technology, which is itself essentially a neutral tool. “Ultimately, it’s the way human beings, with our vast stores of ingenuity, deploy the power of the technology and tools that makes the biggest difference.” – Bill Gates, Co-chair, Bill and Melinda Gates Foundation
In computing, multitasking is the concurrent execution of multiple tasks (also known as processes) over a certain period of time. New tasks start and interrupt already started ones before they have reached completion, instead of executing the tasks sequentially, where each started task would need to run to completion before a new one begins. As a result, a computer executes segments of multiple tasks in an interleaved manner, while the tasks share common processing resources such as central processing units (CPUs) and main memory. Multitasking does not necessarily mean that multiple tasks are executing at exactly the same time (simultaneously). In other words, multitasking does not imply parallel execution, but it does mean that more than one task can be part-way through execution at the same time, and that more than one task is advancing over a given period of time. Even on multiprocessor or multicore computers, which have multiple CPUs/cores so more than one task can be executed at once (physically, one per CPU or core), multitasking allows many more tasks to be run than there are CPUs. In the case of a computer with a single CPU, only one task is said to be running at any point in time, meaning that the CPU is actively executing instructions for that task. Multitasking solves the problem by scheduling which task may be the one running at any given time, and when another waiting task gets a turn. The act of reassigning a CPU from one task to another one is called a context switch; the illusion of parallelism is achieved when context switches occur frequently enough.
Operating systems may adopt one of many different scheduling strategies, which generally fall into the following categories: - In multiprogramming systems, the running task keeps running until it performs an operation that requires waiting for an external event (e.g. reading from a tape) or until the computer's scheduler forcibly swaps the running task out of the CPU. Multiprogramming systems are designed to maximize CPU usage. - In time-sharing systems, the running task is required to relinquish the CPU, either voluntarily or by an external event such as a hardware interrupt. Time sharing systems are designed to allow several programs to execute apparently simultaneously. - In real-time systems, some waiting tasks are guaranteed to be given the CPU when an external event occurs. Real time systems are designed to control mechanical devices such as industrial robots, which require timely processing. The term "multitasking" has become an international term, as the same word is used in many other languages such as German, Italian, Dutch, Danish and Norwegian. In the early days of computing, CPU time was expensive, and peripherals were very slow. When the computer ran a program that needed access to a peripheral, the central processing unit (CPU) would have to stop executing program instructions while the peripheral processed the data. This was usually very inefficient. The first computer using a multiprogramming system was the British Leo III owned by J. Lyons and Co. During batch processing, several different programs were loaded in the computer memory, and the first one began to run. When the first program reached an instruction waiting for a peripheral, the context of this program was stored away, and the second program in memory was given a chance to run. The process continued until all programs finished running. 
The use of multiprogramming was enhanced by the arrival of virtual memory and virtual machine technology, which enabled individual programs to make use of memory and operating system resources as if other concurrently running programs were, for all practical purposes, non-existent and invisible to them. Multiprogramming doesn't give any guarantee that a program will run in a timely manner. Indeed, the very first program may very well run for hours without needing access to a peripheral. As there were no users waiting at an interactive terminal, this was no problem: users handed in a deck of punched cards to an operator, and came back a few hours later for printed results. Multiprogramming greatly reduced wait times when multiple batches were being processed. The expression "time sharing" usually designated computers shared by interactive users at terminals, such as IBM's TSO, and VM/CMS. The term "time-sharing" is no longer commonly used, having been replaced by "multitasking", following the advent of personal computers and workstations rather than shared interactive systems. Early multitasking systems used applications that voluntarily ceded time to one another. This approach, which was eventually supported by many computer operating systems, is known today as cooperative multitasking. Although it is now rarely used in larger systems except for specific applications such as CICS or the JES2 subsystem, cooperative multitasking was once the scheduling scheme employed by Microsoft Windows (prior to Windows 95 and Windows NT) and Classic Mac OS (prior to Mac OS X) in order to enable multiple applications to be run simultaneously. Windows 9x also used cooperative multitasking, but only for 16-bit legacy applications, much the same way as pre-Leopard PowerPC versions of Mac OS X used it for Classic applications. The network operating system NetWare used cooperative multitasking up to NetWare 6.5. Cooperative multitasking is still used today on RISC OS systems. 
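Cooperative multitasking survives in a modern form in Python's asyncio event loop, where each task runs until it voluntarily yields control at an await. A minimal sketch (the worker names and counts are illustrative):

```python
import asyncio

# Each worker runs until it voluntarily yields control back to the
# event loop at "await" -- the cooperative model: no one is preempted.
order = []

async def worker(name, steps):
    for i in range(steps):
        order.append((name, i))
        await asyncio.sleep(0)   # voluntary yield point

async def main():
    await asyncio.gather(worker("A", 2), worker("B", 2))

asyncio.run(main())
# order alternates between the workers: A, B, A, B
```

A worker that never awaits would monopolize the loop, which is precisely the hazard that pushed larger systems toward preemption.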
As a cooperatively multitasked system relies on each process regularly giving up time to other processes on the system, one poorly designed program can consume all of the CPU time for itself, either by performing extensive calculations or by busy waiting; both would cause the whole system to hang. In a server environment, this is a hazard that makes the entire environment unacceptably fragile. Preemptive multitasking allows the computer system to more reliably guarantee to each process a regular "slice" of operating time. It also allows the system to deal rapidly with important external events like incoming data, which might require the immediate attention of one or another process. Operating systems were developed to take advantage of these hardware capabilities and run multiple processes preemptively. Preemptive multitasking was supported on DEC's PDP-8 computers, and implemented in OS/360 MFT in 1967, in MULTICS (1964), and Unix (1969); it is a core feature of all Unix-like operating systems, such as Linux, Solaris and BSD with its derivatives. At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing the CPU ("CPU bound"). In primitive systems, the software would often "poll", or "busywait" while waiting for requested input (such as disk, keyboard or network input). During this time, the system was not performing useful work. With the advent of interrupts and preemptive multitasking, I/O bound processes could be "blocked", or put on hold, pending the arrival of the necessary data, allowing other processes to utilize the CPU. As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution. The earliest preemptive multitasking OS available to home users was Sinclair QDOS on the Sinclair QL, released in 1984, but very few people bought the machine. 
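The contrast between CPU-bound and I/O-bound (blocked) work can be seen with Python threads, which the OS schedules preemptively. Note that CPython's global interpreter lock prevents true parallelism for pure-Python code, but preemptive switching between the threads still occurs; the function names here are illustrative.

```python
import threading
import time

# Two threads scheduled preemptively by the OS: one is CPU bound, the
# other blocks on (simulated) I/O, freeing the CPU while it waits.
results = []

def cpu_bound():
    total = 0
    for i in range(1_000_000):   # keeps the CPU busy
        total += i
    results.append(("cpu", total))

def io_bound():
    time.sleep(0.01)             # blocked: other threads may run
    results.append(("io", "done"))

threads = [threading.Thread(target=cpu_bound),
           threading.Thread(target=io_bound)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Both threads complete without either having to yield explicitly.
```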
Commodore's powerful Amiga, released the following year, was the first commercially successful home computer to use the technology, and its multimedia abilities make it a clear ancestor of contemporary multitasking personal computers. Microsoft made preemptive multitasking a core feature of their flagship operating system in the early 1990s when developing Windows NT 3.1 and then Windows 95. It was later adopted on the Apple Macintosh by Mac OS X, which, as a Unix-like operating system, uses preemptive multitasking for all native applications. A similar model is used in Windows 9x and the Windows NT family, where native 32-bit applications are multitasked preemptively, and legacy 16-bit Windows 3.x programs are multitasked cooperatively within a single process, although in the NT family it is possible to force a 16-bit application to run as a separate preemptively multitasked process. 64-bit editions of Windows, both for the x86-64 and Itanium architectures, no longer provide support for legacy 16-bit applications, and thus provide preemptive multitasking for all supported applications. Another motivation for multitasking was the design of real-time computing systems, where a number of possibly unrelated external activities need to be controlled by a single processor system. In such systems a hierarchical interrupt system is coupled with process prioritization to ensure that key activities are given a greater share of available process time. As multitasking greatly improved the throughput of computers, programmers started to implement applications as sets of cooperating processes (e.g., one process gathering input data, one process processing input data, one process writing out results on disk). This, however, required some tools to allow processes to efficiently exchange data. Threads were born from the idea that the most efficient way for cooperating processes to exchange data would be to share their entire memory space.
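The cooperating-task structure described above (one task gathering input, another processing it) is the classic producer-consumer pipeline. A minimal sketch using threads and a queue for the data exchange (the sentinel value and the doubling step are illustrative):

```python
import queue
import threading

# Producer-consumer pipeline: one task produces data, another consumes
# it, and a thread-safe queue handles the exchange between them.
q = queue.Queue()
consumed = []

def producer():
    for i in range(5):
        q.put(i)
    q.put(None)                     # sentinel: no more data is coming

def consumer():
    while True:
        item = q.get()
        if item is None:
            break
        consumed.append(item * 2)   # "process" the input

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
# consumed holds the processed items in order
```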
Thus, threads are effectively processes that run in the same memory context and share other resources with their parent processes, such as open files. Threads are described as lightweight processes because switching between threads does not involve changing the memory context. While threads are scheduled preemptively, some operating systems provide a variant to threads, named fibers, that are scheduled cooperatively. On operating systems that do not provide fibers, an application may implement its own fibers using repeated calls to worker functions. Fibers are even more lightweight than threads, and somewhat easier to program with, although they tend to lose some or all of the benefits of threads on machines with multiple processors. Some systems directly support multithreading in hardware. Essential to any multitasking system is to safely and effectively share access to system resources. Access to memory must be strictly managed to ensure that no process can inadvertently or deliberately read or write to memory locations outside of the process's address space. This is done for the purpose of general system stability and data integrity, as well as data security. In general, memory access management is a responsibility of the operating system kernel, in combination with hardware mechanisms that provide supporting functionalities, such as a memory management unit (MMU). If a process attempts to access a memory location outside of its memory space, the MMU denies the request and signals the kernel to take appropriate actions; this usually results in forcibly terminating the offending process. Depending on the software and kernel design and the specific error in question, the user may receive an access violation error message such as "segmentation fault". In a well designed and correctly implemented multitasking system, a given process can never directly access memory that belongs to another process. 
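Because threads share their parent process's memory, two threads can update the same variable, and that is precisely where synchronization becomes necessary. A minimal sketch with a lock protecting a shared counter (the thread and iteration counts are arbitrary):

```python
import threading

# Threads share the process's memory, so all of them see the same
# counter. The lock makes each read-modify-write increment atomic.
counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:        # without the lock, increments could be lost
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter == 4 * 10_000: no updates were lost
```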
An exception to this rule is in the case of shared memory; for example, in the System V inter-process communication mechanism the kernel allocates memory to be mutually shared by multiple processes. Such features are often used by database management software such as PostgreSQL. Inadequate memory protection mechanisms, either due to flaws in their design or poor implementations, allow for security vulnerabilities that may be potentially exploited by malicious software. Use of a swap file or swap partition is a way for the operating system to provide more memory than is physically available by keeping portions of the primary memory in secondary storage. While multitasking and memory swapping are two completely unrelated techniques, they are very often used together, as swapping memory allows more tasks to be loaded at the same time. Typically, a multitasking system allows another process to run when the running process hits a point where it has to wait for some portion of memory to be reloaded from secondary storage. Processes that are entirely independent are not much trouble to program in a multitasking environment. Most of the complexity in multitasking systems comes from the need to share computer resources between tasks and to synchronize the operation of co-operating tasks. Over the years, multitasking systems have been refined. Modern operating systems generally include detailed mechanisms for prioritizing processes, while symmetric multiprocessing has introduced new complexities and capabilities.
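An explicitly shared region between separate processes, analogous to the System V mechanism mentioned above, can be sketched with Python's multiprocessing module: multiprocessing.Value allocates an integer in shared memory. The process and iteration counts are illustrative, and this sketch assumes a platform where child processes are forked, as on Linux.

```python
from multiprocessing import Process, Value

# Separate processes normally cannot see each other's memory; a
# multiprocessing.Value lives in an explicitly shared region that
# both processes may read and write.
def increment(shared, n):
    for _ in range(n):
        with shared.get_lock():   # the Value carries its own lock
            shared.value += 1

shared = Value("i", 0)            # a C int in shared memory, initially 0
procs = [Process(target=increment, args=(shared, 1_000)) for _ in range(2)]
for p in procs:
    p.start()
for p in procs:
    p.join()
# shared.value reflects increments from both processes
```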
Lymph nodes are small, oval-shaped organs that contain immune cells to attack and kill foreign invaders, such as viruses. They’re an important part of the body’s immune system. Lymph nodes are also known as lymph glands. Lymph nodes are found in various parts of the body, including the neck, armpits, and groin. They’re linked by lymphatic vessels, which carry lymph throughout the body. Lymph is a clear fluid containing white blood cells (WBCs) and dead and diseased tissue for disposal. The primary function of lymph nodes is to harbor the body’s disease-fighting cells and to filter lymph before it reenters circulation. When you’re sick and your lymph nodes send out disease-fighting cells and compounds, they may become inflamed or painful. The condition of having inflamed lymph nodes is referred to as lymphadenitis. Lymph node inflammation can occur for a variety of reasons. Any infection or virus, including the common cold, can cause your lymph nodes to swell. Cancer can also cause lymph node inflammation. This includes blood cancers, such as leukemia and lymphoma. Lymph node inflammation can cause a variety of symptoms, which depend on the cause of the swelling and the location of the swollen lymph nodes. Common symptoms accompanying lymph node inflammation include swelling, tenderness or pain, warmth or redness of the overlying skin, and sometimes fever. A doctor typically diagnoses lymph node inflammation through a physical examination. The doctor will feel around the location of various lymph nodes to check for swelling or sensitivity. They may also ask you about any associated symptoms, such as those listed above. Because a wide range of conditions can cause lymph node inflammation, your doctor may request a biopsy. A lymph node biopsy is a short procedure in which the doctor removes a sample of lymph tissue. A pathologist will test this sample. This type of doctor examines tissue samples and interprets lab results. A biopsy is often the most reliable way to determine why lymph node inflammation has occurred.
Treatment for lymph node inflammation depends on its cause. In some cases, treatment may not be necessary. For example, treatment is unlikely to be recommended for: - healthy adults whose bodies are already conquering the infection - children, whose active immune systems can result in frequent swelling If treatment is required, it can vary from self-treatment to surgery and other therapies. In many cases, a course of antibiotics is used to help the body fight the infection that’s causing lymph node swelling. If a lymph node itself becomes infected, an abscess may form. Swelling will usually go down quickly once the abscess is drained. To do this, your doctor will first numb the area. Then they’ll make a small cut that allows the infected pus to escape. The area may be packed with gauze to ensure healing. If your lymph node swelling is due to a cancerous tumor, there are a number of treatment options, including surgery to remove the tumor, chemotherapy, and radiation. Your doctor will discuss each of these options, including their pros and cons, before starting your treatment.
NIH Curriculum Supplement Series Teacher’s Guides & Lesson Plans The National Institutes of Health (NIH) Curriculum Supplement Series are teacher’s guides with two weeks of lessons on the science behind seventeen health topics. Supplements for elementary school, middle school, and high school students cover topics as diverse as the mouth, energy balance, and bioethics, with print and video information. NIH has provided demos to give you an idea of what to expect. Each supplement includes a web version accessible to all users. Free print versions are available and can be requested only by educators in the U.S. Teachers can download a PDF guide and support material. Individual state standards can be viewed by selecting from a pull-down menu or downloaded as a PDF. The web version also has a section of student activities, many of which include multimedia resources. Each supplement’s homepage lists the modules, objectives, number of lessons, and time required for each lesson; see, for example, Human Genetic Variation. The Teacher’s Guide for Human Genetic Variation provides suggestions for implementation, a manual for student activities, masters, additional resources, PDF files of lesson plans and masters for printing, and updates. NIH and the National Science Teachers Association (NSTA) offer an archive of five professional development webinars at the NSTA Learning Center. Programs are approximately 60-90 minutes long. Accessing the programs requires downloading the Elluminate Live! software. Teachers can attend web seminars by registering at the NSTA Learning Center. Registration is free and provides access to more than 3,000 different resources and opportunities, in addition to NSTA Web Seminars.
When you imagine cold, icy Pluto, orbiting in the distant regions of the Solar System, you imagine a snowy white ball. But images of Pluto captured by the Hubble Space Telescope have shown that Pluto’s surface isn’t just pure ice. Instead, it has a dirty yellow color, with darker and brighter regions across its surface. Hubble studied the entire surface of Pluto as it rotated through its 6.4-day period. The images revealed almost a dozen distinctive features never before seen by astronomers. These included a “ragged” northern polar cap cut in half by a dark strip, a bright spot seen to rotate around the dwarf planet, and a cluster of dark spots. The images also confirmed the presence of icy-bright polar cap features. Some of the variations seen on Pluto’s surface could be topographic features, like basins and fresh impact craters. But most of them are probably caused by the complex distribution of frosts that move across Pluto’s surface during its orbital and seasonal cycles. The surface area of Pluto is 1.795 × 10^7 square kilometers, about 3.5% of the surface area of Earth. When Pluto is furthest from the Sun, gases like nitrogen, carbon monoxide and methane partially freeze onto its surface. All will be revealed when NASA’s New Horizons spacecraft arrives at Pluto in 2015, finally capturing close-up pictures of Pluto and its moon Charon.
During the 1930s, the combination of the Great Depression and the memory of tragic losses in World War I contributed to pushing American public opinion and policy toward isolationism. Thomas Paine’s Common Sense presents numerous arguments for shunning alliances. The second half of the 20th century saw a massive increase in American military campaigns of all sizes, ranging from declared wars to covert operations. Most Americans opposed any actual declaration of war on the Axis countries, but everything abruptly changed when Japanese naval forces launched a surprise attack on Pearl Harbor on December 7, 1941. It produced greater oppression at least as often as greater liberty. Americans finally realized that the Atlantic Ocean would not protect them from Germany in the age of modern warfare, and that they must actively protect their country. The insulting description could mean a complete cut-off from the rest of the world, like Tokugawa Japan. American Isolationism in the 1930s. Although the United States took measures to avoid political and military conflicts across the oceans, it continued to expand economically and protect its interests in Latin America. At that time, however, Americans were still not prepared to risk their lives and livelihoods for peace abroad. U.S. interests in that conflict did not justify the number of American casualties. The United States’ occupation of the Philippines during the Spanish-American War thrust U.S. interests into the far western Pacific Ocean, Imperial Japan’s sphere of interest. Some members of Congress opposed membership in the League out of concern that it would draw the United States into European conflicts, although ultimately the collective security clause sank the possibility of U.S. membership.
Interventionism gradually became less attractive. During World War I, however, President Woodrow Wilson made a case for U.S. intervention in the conflict. In 1940, Roosevelt boldly transferred fifty World War I destroyers to Britain in exchange for eight valuable defense bases stretching from Newfoundland to South America. Thomas Paine crystallized isolationist notions in his work Common Sense.
Jared Diamond describes the global origins of writing, and the uses to which writing was put in its first few thousand years. [T]here have been only a few occasions in history when people invented writing entirely on their own. The two indisputably independent inventions of writing were achieved by the Sumerians of Mesopotamia somewhat before 3000 B.C. and by Mexican Indians before 600 B.C.; Egyptian writing of 3000 B.C. and Chinese writing (by 1300 B.C.) may also have arisen independently. Probably all other peoples who have developed writing since then have borrowed, adapted, or at least been inspired by existing systems. The independent invention that we can trace in greatest detail is history’s oldest writing system, Sumerian cuneiform. For thousands of years before it jelled, people in some farming villages of the Fertile Crescent had been using clay tokens of various simple shapes for accounting purposes, such as recording numbers of sheep and amounts of grain. In the last centuries before 3000 B.C., developments in accounting technology, format, and signs rapidly led to the first system of writing. One such technological innovation was the use of flat clay tablets as a convenient writing surface. Initially, the clay was scratched with pointed tools, which gradually yielded to reed styluses for neatly pressing a mark into the tablet. Developments in format included the gradual adoption of conventions whose necessity is now universally accepted: that writing should be organized into ruled rows or columns (horizontal rows for the Sumerians, as for modern Europeans); that the lines should be read in a constant direction (left to right for Sumerians, as for modern Europeans); and that the lines should be read from top to bottom of the tablet rather than vice versa.
… Early stages in the development of Sumerian writing have been detected especially in thousands of clay tablets excavated from the ruins of the former Sumerian city of Uruk, on the Euphrates River about 200 miles southeast of modern Baghdad. The first Sumerian writing signs were recognizable pictures of the object referred to (for instance, a picture of a fish or a bird). Naturally, those pictorial signs consisted mainly of numerals plus nouns for visible objects; the resulting texts were merely accounting reports in a telegraphic shorthand devoid of grammatical elements. Gradually, the forms of the signs became more abstract, especially when the pointed writing tools were replaced by reed styluses. New signs were created by combining old signs to produce new meanings: for example, the sign for head was combined with the sign for bread in order to produce a sign signifying eat. … Besides Sumerian cuneiform, the other certain instance of independent origins of writing in human history comes from Native American societies of Mesoamerica, probably southern Mexico. Mesoamerican writing is believed to have arisen independently of Old World writing, because there is no convincing evidence for pre-Norse contact of New World societies with Old World societies possessing writing. In addition, the forms of Mesoamerican writing signs were entirely different from those of any Old World script. About a dozen Mesoamerican scripts are known, all or most of them apparently related to each other (for example, in their numerical and calendrical systems), and most of them still only partially deciphered. At the moment, the earliest preserved Mesoamerican script is from the Zapotec area of southern Mexico around 600 B.C., but by far the best understood one is that of the Lowland Maya region, where the oldest known written date corresponds to A.D. 292.
… While Sumerian and Mesoamerican languages bear no special relation to each other among the world’s languages, both raised similar basic issues in reducing them to writing. The solutions that Sumerians invented before 3000 B.C. were reinvented, halfway around the world, by early Mesoamerican Indians before 600 B.C. With the possible exceptions of the Egyptian, Chinese, and Easter Island writing to be considered later, all other writing systems devised anywhere in the world, at any time, appear to have been descendants of systems modified from or at least inspired by Sumerian or early Mesoamerican writing. One reason why there were so few independent origins of writing is the great difficulty of inventing it, as we have already discussed. The other reason is that other opportunities for the independent invention of writing were preempted by Sumerian or early Mesoamerican writing and their derivatives. We know that the development of Sumerian writing took at least hundreds, possibly thousands, of years. As we shall see, the prerequisites for those developments consisted of several features of human society that determined whether a society would find writing useful, and whether the society could support the necessary specialist scribes. Many other human societies besides those of the Sumerians and early Mexicans—such as those of ancient India, Crete, and Ethiopia—evolved these prerequisites. However, the Sumerians and early Mexicans happened to have been the first to evolve them in the Old World and the New World, respectively. Once the Sumerians and early Mexicans had invented writing, the details or principles of their writing spread rapidly to other societies, before they could go through the necessary centuries or millennia of independent experimentation with writing themselves. Thus, that potential for other, independent experiments was preempted or aborted. 
The spread of writing has occurred by either of two contrasting methods, which find parallels throughout the history of technology and ideas. Someone invents something and puts it to use. How do you, another would-be user, then design something similar for your own use, knowing that other people have already got their own model built and working? … As for Chinese writing, first attested around 1300 B.C. but with possible earlier precursors, it too has unique local signs and some unique principles, and most scholars assume that it evolved independently. Writing had developed before 3000 B.C. in Sumer, 4,000 miles west of early Chinese urban centers, and appeared by 2200 B.C. in the Indus Valley, 2,600 miles west, but no early writing systems are known from the whole area between the Indus Valley and China. Thus, there is no evidence that the earliest Chinese scribes could have had knowledge of any other writing system to inspire them. Egyptian hieroglyphics, the most famous of all ancient writing systems, are also usually assumed to be the product of independent invention, but the alternative interpretation of idea diffusion is more feasible than in the case of Chinese writing. Hieroglyphic writing appeared rather suddenly, in nearly full-blown form, around 3000 B.C. Egypt lay only 800 miles west of Sumer, with which Egypt had trade contacts. I find it suspicious that no evidence of a gradual development of hieroglyphs has come down to us, even though Egypt’s dry climate would have been favorable for preserving earlier experiments in writing, and though the similarly dry climate of Sumer has yielded abundant evidence of the development of Sumerian cuneiform for at least several centuries before 3000 B.C.
Equally suspicious is the appearance of several other, apparently independently designed, writing systems in Iran, Crete, and Turkey (so-called proto-Elamite writing, Cretan pictographs, and Hieroglyphic Hittite, respectively), after the rise of Sumerian and Egyptian writing. Although each of those systems used distinctive sets of signs not borrowed from Egypt or Sumer, the peoples involved could hardly have been unaware of the writing of their neighboring trade partners. … [F]ew people ever learned to write these early scripts. Knowledge of writing was confined to professional scribes in the employ of the king or temple. For instance, there is no hint that Linear B was used or understood by any Mycenaean Greek beyond small cadres of palace bureaucrats. Since individual Linear B scribes can be distinguished by their handwriting on preserved documents, we can say that all preserved Linear B documents from the palaces of Knossos and Pylos are the work of a mere 75 and 40 scribes, respectively. The uses of these telegraphic, clumsy, ambiguous early scripts were as restricted as the number of their users. Anyone hoping to discover how Sumerians of 3000 B.C. thought and felt is in for a disappointment. Instead, the first Sumerian texts are emotionless accounts of palace and temple bureaucrats. About 90 percent of the tablets in the earliest known Sumerian archives, from the city of Uruk, are clerical records of goods paid in, workers given rations, and agricultural products distributed. Only later, as Sumerians progressed beyond logograms to phonetic writing, did they begin to write prose narratives, such as propaganda and myths. Mycenaean Greeks never even reached that propaganda-and-myths stage. One-third of all Linear B tablets from the palace of Knossos are accountants’ records of sheep and wool, while an inordinate proportion of writing at the palace of Pylos consists of records of flax. 
Linear B was inherently so ambiguous that it remained restricted to palace accounts, whose context and limited word choices made the interpretation clear. Not a trace of its use for literature has survived. The Iliad and Odyssey were composed and transmitted by nonliterate bards for nonliterate listeners, and not committed to writing until the development of the Greek alphabet hundreds of years later. Similarly restricted uses characterize early Egyptian, Mesoamerican, and Chinese writing. Early Egyptian hieroglyphs recorded religious and state propaganda and bureaucratic accounts. Preserved Maya writing was similarly devoted to propaganda, births and accessions and victories of kings, and astronomical observations of priests. The oldest preserved Chinese writing of the late Shang Dynasty consists of religious divination about dynastic affairs, incised into so-called oracle bones. A sample Shang text: “The king, reading the meaning of the crack [in a bone cracked by heating], said: ‘If the child is born on a keng day, it will be extremely auspicious.’” To us today, it is tempting to ask why societies with early writing systems accepted the ambiguities that restricted writing to a few functions and a few scribes. But even to pose that question is to illustrate the gap between ancient perspectives and our own expectations of mass literacy. The intended restricted uses of early writing provided a positive disincentive for devising less ambiguous writing systems. The kings and priests of ancient Sumer wanted writing to be used by professional scribes to record numbers of sheep owed in taxes, not by the masses to write poetry and hatch plots.
How Much and How Many belong to count and noncount nouns.

What is a noun? Nouns are names of persons, places, things, animals, and events. Examples: Trish, New York, cat, balloons, house, glass, park, and more.

Count nouns are nouns that can be counted. Noncount nouns are nouns that cannot be counted; they are also called mass nouns.

Let us now check how "how many" and "how much" are used in a sentence. Examples:
1. How many apples do you have? (Count noun: apples can be counted, so use "how many.")
2. How much sugar do you want? (Noncount noun: sugar cannot be counted, so use "how much.")

However, noncount or mass nouns can be counted by using measurement words. Examples:
1. How many packs of sugar do you want? (Count: the measurement word "pack of sugar" makes it countable, so use "how many.")
2. How many glasses of water do you drink every day? (Count: the measurement word "glass of water" makes it countable.)

Note: count nouns can be made plural; noncount or mass nouns cannot be made plural.

Count noun examples (singular - plural): 1. apple - apples 2. egg - eggs 3. mango - mangoes 4. book - books 5. banana - bananas
Noncount noun examples (no plural form): 1. sugar 2. milk 3. cheese 4. salt 5. ice cream

A partitive is a phrase made up of a count noun followed by "of" and a count or mass noun. Examples: 1. Two piles of cards ("piles of" - count noun; "cards" - count noun) 2. A glass of lemonade ("glass of" - count noun; "lemonade" - noncount or mass noun)
Carefully crafted light pulses control neuron activity Specially tailored, ultrafast pulses of light can trigger neurons to fire and could one day help patients with light-sensitive circadian or mood problems, according to a new study in mice at the University of Illinois. Chemists use carefully crafted light beams, called coherent control, to regulate chemical reactions, but this study is the first demonstration of using such beams to control function in a living cell. The study used optogenetic mouse neurons - or, cells that had a gene added to make them respond to light. However, researchers say the same technique could be used on cells that are naturally responsive to light, such as those in the retina. "The saying, 'The eye is the window to the soul' has some merit, because our bodies respond to light. Photoreceptors in our retinas connect to different parts in the brain that control mood, metabolic rhythms and circadian rhythms," explains Dr. Stephen Boppart, leader of the study published in the journal Nature Physics. Boppart is an Illinois professor of electrical and computer engineering and of bioengineering, and also is a medical doctor. The research uses light to excite a light-sensitive channel in the membrane of neurons. When the channels are excited, they allow ions to pass through, causing neurons to fire. While most biological systems in nature are accustomed to the continuous light from the sun, Boppart's team used a flurry of very short light pulses - less than 100 femtoseconds. This delivers a lot of energy in a short period of time, exciting the molecules to different energy states. Along with controlling the length of the light pulses, Boppart's team controls the order of wavelengths in each light pulse. "When you have an ultrashort or ultrafast pulse of light, there's many colors in that pulse. 
We can control which colors come first and how bright each color will be," Boppart said. "For example, blue wavelengths are much higher energy than red wavelengths. If we choose which color comes first, we can control what energy the molecule sees at what time, to drive the excitement higher or back down to the baseline. If we create a pulse where the red comes before the blue, it's very different than if the blue comes before the red." The research demonstrates that, using tailored light pulses, neurons can be made to fire in different patterns. Boppart says coherent control could give optogenetics studies more flexibility, since changing the properties of the light gives researchers more options than engineering mice with new genes every time they want a different neuron behavior. Outside of optogenetics, the researchers are working to test their coherent control technique with naturally light-responsive cells and processes, such as retinal cells and photosynthesis. "What we're doing for the very first time is using light and coherent control to regulate biological function. This is fundamentally more universal than optogenetics - that's just the first example we used," Boppart said. "Ultimately, this could be a gene-free, drug-free way of regulating cell and tissue function. We think there could be 'opto-ceuticals,' methods of treating patients with light." Retinal-based opsins are light-sensitive proteins. The photoisomerization reaction of these proteins has been studied outside cellular environments using ultrashort tailored light pulses. However, how living cell functions can be modulated via opsins by modifying fundamental nonlinear optical properties of light interacting with the retinal chromophore has remained largely unexplored.
We report the use of chirped ultrashort near-infrared pulses to modulate light-evoked ionic current from Channelrhodopsin-2 (ChR2) in brain tissue, and consequently the firing pattern of neurons, by manipulating the phase of the spectral components of the light. These results confirm that quantum coherence of the retinal-based protein system, even in a living neuron, can influence its current output, and open up the possibility of using designer-tailored pulses to control the molecular dynamics of opsins in living tissue, selectively enhancing or suppressing neuronal function for adaptive feedback-loop applications in the future. Authors: Kush Paul, Parijat Sengupta, Eugene D. Ark, Haohua Tu, Youbo Zhao & Stephen A. Boppart. The paper "Coherent control of an opsin in living brain tissue" is available online. DOI: 10.1038/nphys4257 Illinois researchers used ultrafast pulses of tailored light to make neurons fire in different patterns, the first example of coherent control in a living cell. Image: Stephen Boppart, University of Illinois.
Economics is the science that concerns itself with economies, from how societies produce goods and services to how they consume them. It has influenced world finance at many important junctures throughout history and is a vital part of our everyday lives. The assumptions that guide the study of economics have changed dramatically throughout history. In this article, we'll look at how economic thought has changed over time and at the major participants in its development. Tutorial: Economics Basics The Father of Economics Adam Smith is widely credited with creating the field of economics; however, he was inspired by French writers who shared his hatred of mercantilism. In fact, the first methodical study of how economies work was undertaken by these French physiocrats. Smith took many of their ideas and expanded them into a thesis about how economies should work, as opposed to how they do work. Smith believed that competition was self-regulating and that governments should take no part in business through tariffs, taxes or any other means, unless it was to protect free-market competition. Many economic theories today are, at least in part, a reaction to Smith's pivotal work in the field. (For more on this influential economist, see Adam Smith: The Father Of Economics.) The Dismal Science of Marx and Malthus Karl Marx and Thomas Malthus had decidedly poor reactions to Smith's treatise. Malthus predicted that growing populations would outstrip the food supply. He was proven wrong, however, because he didn't foresee technological innovations that would allow production to keep pace with a growing population. Nonetheless, his work shifted the focus of economics to the scarcity of things, rather than the demand for them. (For related reading, see Economics Basics: Demand and Supply.) This increased focus on scarcity led Karl Marx to declare that the means of production were the most important components in any economy.
Marx took his ideas further and became convinced that a class war was going to be initiated by the inherent instabilities he saw in capitalism. However, Marx underestimated the flexibility of capitalism. Instead of creating a distinct owner class and worker class, investing created a mixed class in which owners and workers hold the interests of both classes in balance. Despite his overly rigid theory, Marx accurately predicted one trend: businesses grew larger and more powerful, in accordance with the degree of free-market capitalism allowed. (For more insight, see History Of Capitalism.) Speaking in Numbers Leon Walras, a French economist, gave economics a new language in his book "Elements of Pure Economics." Walras went to the roots of economic theory and made models and theories that reflected what he found there. General equilibrium theory came from his work, as did the tendency to express economic concepts statistically and mathematically rather than just in prose. Alfred Marshall took the mathematical modeling of economies to new heights, introducing many concepts that are still not fully understood, such as economies of scale, marginal utility and the real-cost paradigm. Because it is nearly impossible to expose an economy to experimental rigor, economics sits on the edge of science. Through mathematical modeling, however, some economic theory has been rendered testable. (For more, read What Are Economies Of Scale? and Economics Basics: Utility.) John Maynard Keynes' mixed economy was a response to charges levied long ago by Marx that capitalist societies aren't self-correcting. Marx saw this as a fatal flaw, whereas Keynes saw it as a chance for government to justify its existence. Keynesian economics is the code of action that the Federal Reserve follows to keep the economy running smoothly. (To learn about how the Fed does this, see The Federal Reserve.)
Back to the Beginning: Milton Friedman The economic policies of the last two decades all bear the marks of Milton Friedman's work. As the U.S. economy matured, Friedman argued that the government had to begin removing the redundant controls it had imposed upon the market, such as antitrust legislation. Rather than letting government grow along with rising gross domestic product (GDP), Friedman thought that governments should focus on consuming less of an economy's capital, so that more remained in the system. With more capital in the system, it would be possible for the economy to operate without any government interference. (For more on Friedman and his work, see Free Market Maven: Milton Friedman.) The Bottom Line Economic thought has diverged into two streams: theoretical and practical. Theoretical economics uses the language of mathematics, statistics and computational modeling to test pure concepts that, in turn, help economists understand the truths of practical economics and shape them into governmental policy. The business cycle, boom-and-bust cycles, and anti-inflation measures are outgrowths of economics; understanding them helps the market and government adjust for these variables.
Unit 1: Exploring Data

Main Concepts
• In this course, we'll see two approaches to data analysis. In one, we ask questions and then gather data to answer them. In the other, we already have data, and then ask appropriate questions. In this unit, we'll focus on the second approach.
• Data are numbers in context.
• To answer questions about a research study, data must first be organized. Organization includes both numerical and graphical summaries.
• Summaries (both numerical and graphical) have dual roles: they help us explore and discover possibly unknown features of the data, and they are also used to communicate features of the data to others.
• Numerical and graphical summaries should capture features of the distribution of the data. Important features include the center, the spread, and the shape. Any unusual features in the data should also be acknowledged.
• The focus should be on interpretation of organized data, not on calculation (of numerical summaries) or construction (of graphs).
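The numerical summaries the unit describes (center, spread, and extremes of a distribution) can be computed in a few lines. The sketch below is illustrative rather than part of the course materials; the function name and the particular set of statistics chosen are assumptions:

```python
import statistics

def summarize(data):
    """Numerical summaries of a distribution: center, spread, extremes.

    A minimal sketch; the name and output format are illustrative.
    """
    data = sorted(data)
    q1, q2, q3 = statistics.quantiles(data, n=4)  # quartile cut points
    return {
        "n": len(data),
        "mean": statistics.mean(data),    # center (sensitive to outliers)
        "median": q2,                     # center (robust)
        "stdev": statistics.stdev(data),  # spread about the mean
        "iqr": q3 - q1,                   # spread of the middle 50%
        "min": data[0],
        "max": data[-1],
    }

# A mean noticeably below the median hints at a left-skewed shape
scores = [55, 78, 82, 85, 88, 90, 91, 93, 95, 97]
print(summarize(scores))
```

Comparing the mean with the median is one quick numeric check on shape; a graphical summary such as a histogram would show skewness directly.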
It is hard to imagine a world without an abundance of clean, fresh water. To many young people, water is a hot shower or something you put in a squirt gun for rip-roaring fun. At the end of the day, however, water means so much more. Safe, accessible fresh water is essential to a healthy life. The goal of the Water Environment Federation (WEF) is to ensure clean water for all. Since 1928, WEF has been committed to providing water quality professionals with access to the best science, engineering, and technical practices. The veteran organization also understands the importance of engaging teachers, students, and citizens in protecting and preserving local and global water resources. YES! recommends the Water Environment Federation for forging partnerships with grassroots and educational associations, such as Project WET and the National Science Teachers Association, to develop a variety of water-related classroom resources. These materials offer your students brilliant opportunities to learn about and engage in the science, environmental, and social aspects of water quality. World Water Monitoring Day: September 18 See website: World Water Monitoring Day Together with the International Water Association, WEF coordinates World Water Monitoring Day to raise awareness and engage local citizens in protecting their local waterways. This simple but significant activity gathers people across the globe to monitor conditions of nearby rivers, streams, estuaries, and other bodies of water. Though the official day is September 18, your students can test their local waters anytime from March 22 through December 31. Last year, over 120,000 people in 81 countries participated. How to register your water site and purchase a monitoring kit. VISIT: Getting Involved! Key questions to guide your students' observations, and, ultimately, protect their local stream, river, or lake.
VISIT: Observation Guidelines VISIT: Stockholm Junior Water Prize The Stockholm Junior Water Prize is perhaps the most prestigious youth award for a water-related science project. The international contest for high school students spotlights their research papers on innovative solutions to today's water challenges. Projects focus on local, regional, national, or global issues. It is essential that all projects use a research-oriented approach, which means they must use scientifically accepted methodologies for experimentation, monitoring, and reporting, including statistical analysis. State winners and their science teachers are flown to the national competition. The U.S. winner competes with national winners from 30 other countries for the international honors during World Water Week, September 5-11, in Stockholm. Criteria and Guidelines Unlike other science competitions, the Stockholm Junior Water Prize weighs the quality of the scientific research paper much more heavily than how it is visually presented. VISIT: Criteria and Guidelines Deliberation and deadline details vary by state for this spring competition. VISIT: Eligibility and Entry See website: WEFTeach Each year, WEF presents a workshop at the National Science Teachers Association's annual national conference. WEFTeach, a "train the teachers" program, makes water education accessible to thousands of teachers and students each year. Its materials are free and downloadable online for teachers across the country. This year's workshop, Stream Assessment: An active, integrated approach to science learning, features hands-on experiments and easy-to-follow lesson plans on the chemical, biological, and geophysical assessment of stream water quality. The curriculum is designed for middle and high school students. More Water Environment Federation resources:
Australia is the world's smallest continent and sixth-largest country, with an area of 7,686,850 km². Relatively speaking, it has more desert land than any other continent and, with about 22.5 million inhabitants, a low population density. Australia's isolation accounts for its unique varieties of vegetation and animal life and its distinct Aboriginal culture. Although it is a highly developed country, vast areas of the interior, known as the Outback, remain all but uninhabited. The Outback has changed little in centuries and could therefore be called the Real Australia. In Outback Australia, both Aboriginal culture and western pioneer culture live side by side in a vast and varied land of great contrast, harsh and beautiful, arid one part of the year and flooded the other, sometimes blazing hot, while frost may occur in the Centre as well. It is a land of deserts and waterfalls, balmy tropical nights and cyclones. There are droughts and floods, sometimes within the space of one year. It's unique. For at least 40,000 years before Europeans settled here, Australia was inhabited by indigenous Australians, who belonged to one or more of the roughly 250 language groups. These Aboriginal groups probably migrated from south-east Asia, using land bridges and short sea-crossings; they could have come via what is now Papua New Guinea, which was once attached to northern Australia. They could have crossed by land into Tasmania before rising sea levels made it an island. Most Indigenous Australians were hunter-gatherers, with a complex oral culture and spiritual values based on reverence for the land. The Torres Strait Islanders, ethnically Melanesian and related to people from mainland Papua New Guinea, were horticulturalists and hunter-gatherers, quite distinct from the Aboriginal groups living on the mainland.
The first recorded European sighting of the Australian mainland and the first recorded European landfall on the Australian continent are attributed to the Dutch navigator Willem Janszoon, who apparently landed near what is today the town of Weipa, on western Cape York, on 26 February 1606. The Dutch charted the western and northern coastlines of what became known as "New Holland" during the 17th century but were not interested in settling, as they were more preoccupied with trade in the East Indies; and the Australian indigenous peoples had nothing to trade! The English buccaneer William Dampier landed on the northwest coast of Australia in 1688 and 1699 but apparently was not impressed with what he found, describing a land of "Sun, sand, sin and sore eyes". In 1770, James Cook sailed along and mapped the east coast of Australia, naming it "New South Wales" and claiming it for Great Britain - something that seemed normal in those days, with no need to consult the people living there. Cook's discoveries prepared the way for the establishment of a new penal colony: as the American colonies had become independent, prisoners could no longer be sent there. On 26 January 1788, the First Fleet, under the command of Captain Arthur Phillip, landed at Sydney Cove, Port Jackson; this date eventually became Australia Day, the national day. The British Crown Colony of New South Wales was proclaimed. Van Diemen's Land, now known as Tasmania, was settled in 1803 and became a separate colony in 1825. The western part of Australia was formally claimed in 1828, South Australia became a colony in 1836, Victoria in 1851, and Queensland in 1859. The Northern Territory was founded in 1911 when it was excised from South Australia. On 1 January 1901, federation of the colonies was achieved and the Commonwealth of Australia was established.
The Federal Capital Territory (later renamed the Australian Capital Territory) was formed in 1911 as the location for the federal capital of Canberra, situated about halfway between the cities of Sydney and Melbourne. In many parts of Australia the Palaeolithic culture of the Aborigines did not survive European settlement of the early 1800s. Living from fishing, hunting, and gathering, the Aborigines developed kinship systems and rich and complex mythologies. It is estimated there were about 500 tribes and as many languages among the 350,000 Aborigines living in Australia when the Europeans arrived. The indigenous population declined steeply for 150 years following settlement, mainly due to infectious disease; dispossession of their land and culture and the removal of Aboriginal children from their families (the "Stolen Generations") also contributed to the decline in the Indigenous population. Today their numbers have increased again and Aboriginal and Torres Strait Islander society is in transition, yet traditional culture lives on, adapting to the modern world.
Adolescence is such a weird period in everybody's life, and most of us simply want to get through it as fast as possible and become adults. There are so many changes in a child's life during this time, from hormones going wild and getting more responsibilities, to changes in social dynamics and, finally, gaining more freedom. Teens become more independent through their choice of after-school activities, driving, and getting a part-time job to learn about fiscal responsibility. They also learn about critical thinking and peer pressure, and this period is essential for the further development of a child. Sleeping well is essential for adolescents, as they are still growing and developing, and they should preferably get 9 hours of sleep each night. Unfortunately, with a lot of responsibilities and different activities, teens often choose to sacrifice sleep to get everything else done. Missing necessary rest on a daily basis leads to sleep deprivation, which is terrible for their health. It results in a weakened immune system, impaired memory, and decreased learning ability and attention, which ultimately leads to worse academic performance, a harder time controlling emotions, and an increased risk of mental disorders such as anxiety and depression. To make things even worse, these mental disorders have an additional impact on sleep, which leads to even shorter rest time per night. Because of all of this, adolescents need to pay special attention to their sleep habits, and researchers think that the best way to do this is to shift school start times a little later in the morning. Sleep Deprivation and Adolescence As much as 60% of teens report feeling fatigued during the day, and 15% have even fallen asleep during school. A growing body of evidence tells us that the reason for this is the early morning start, which is something we can change. Academic researchers agree that moving the school start to 8.30 am or later can bring many benefits.
Unfortunately, 83% of middle schools and 93% of high schools start before 8.30 am. Because of work, school, extracurricular activities, and other responsibilities, 90% of teenagers don't get the recommended 9 hours of sleep - no wonder they feel so exhausted. They further compromise their rest by using electronics late at night. Screens emit blue light that suppresses the production of melatonin, a hormone that is essential for sleep, and basically tells our brain that it's time to be active. Because of that, it is harder to fall asleep, and there is a higher risk of sleep deprivation. Sleep deprivation is no joke, as it affects many aspects of our lives, including our cognitive performance. When we lack sleep, our ability to concentrate is impaired, it is harder to obtain and retain new information, and our problem-solving skills are far worse. All of these are much needed for excellent academic performance. Our emotional well-being is affected by lack of sleep as well. Sleep-deprived people are more likely to act irrationally, make poor judgments, and have a harder time regulating their mood and temper. Mix that with the hormones going wild in teenagers, and the effects only get worse. It may cause them to have a hard time coping with the stresses of everyday life and school, and they might turn to alcohol, drug, and nicotine abuse. Poor decision making can also make them think that it's okay to drive when they are under the influence of alcohol or when they are feeling too tired. Car accidents are the number one cause of death among teenagers. Besides affecting us mentally and emotionally, sleep deprivation also has physical consequences. When we don't get enough sleep, our body's production of ghrelin and leptin, two hormones responsible for our appetite, is affected. That makes us crave more sugary and fatty foods, and that brings us one step closer to weight gain and obesity.
Chronic lack of sleep also increases our chances of type 2 diabetes, high blood pressure, heart disease, and even certain types of cancer. Why Don't They Go to Bed Earlier? The logical step if you are constantly sleep deprived is to go to bed earlier. Unfortunately, it is not that easy. Teenagers need 9 hours of sleep, compared to the 7 to 9 hours recommended for adults. Also, right around adolescence, there is a natural shift in the body's circadian rhythms. The production of melatonin starts later in the night compared to childhood, and it also stops later in the morning. Because of that, teenagers tend to go to sleep later and to sleep longer in the morning. This shift is also observed in other animal species during adolescence, so it is entirely normal behavior. Unfortunately, an early school start makes them miss the needed sleep, and they just can't go to bed earlier. School dictates everything. Teens need to find time to squeeze in extracurricular activities, jobs, socializing with friends, family obligations, hobbies, and basic needs like eating and bathing. They also need to contribute to the household by doing chores, and even though they spend a big part of the day at school, they still have homework and extra assignments to do back at home. This brings a lot of stress, and they often willingly choose to compromise their sleep so that they can have time for all these activities. To complicate things even more, most teens are not aware of good sleep hygiene, and they often engage in behaviors that damage their sleep. The basics of sleeping well are:
- Stick to a regular schedule by going to sleep and waking up at the same time every day, even on weekends.
- Get the right amount of sleep, which is 7 to 9 hours for adults, 9 hours for teenagers, and even more for children; toddlers sleep for 16 hours a day.
- Create a relaxing bedtime routine that helps you unwind before bedtime.
- Sleep in a cool, quiet, dark bedroom, free of any distractions.
- Do not use electronics for at least 30 minutes before bed.
- Avoid caffeine, alcohol, and nicotine, and avoid eating large meals before bed.

If you have ever come in contact with a teenager, you know that these are directions they simply don't follow. They stay up late playing games, watching videos on YouTube, and chatting with their friends. Prolonged exposure to bright screens tricks their brains into thinking that it is daytime, making it harder to fall asleep when they decide to lie down. They also drink a lot of energy drinks to keep up with their responsibilities. Energy drinks and sodas are full of caffeine, which is proven to disrupt sleep, especially if taken too close to bedtime. Benefits of Later School Start Times Current research shows that it can be very beneficial to move the school start time to 8.30 am or later. Generally, students spend that extra time sleeping, and it is significant for their well-being. Some of the benefits are:
- Longer sleep duration
- Increased daytime alertness
- Fewer chances of falling asleep in class
- Better attendance due to fewer sick days and less fatigue
- Decreased risk of depression and anxiety
- Fewer car accidents due to drowsy driving
- Better academic performance (better scores on tests, including GPA and college admission test scores)
- Faster reaction times
- Fewer disciplinary actions
- Better relationships with family and friends
- Mood improvement

Unfortunately, even with all of these proven benefits, parents don't seem to understand the needs of their children. Only around 50% of parents are in favor of moving school start times. Scientific Research Supporting Later School Starts One study from 2018 looked at 375 students in Singapore and how a delayed school start affected them. Academic success is extremely important in East Asian countries, so researchers were interested in how socially acceptable this delay would be and how the students would behave.
The school agreed to move its start time 45 minutes later to determine the short- and long-term impact on students. The findings showed that after one month, even though students went to sleep a little later, on average they spent 23.2 more minutes asleep. Nine months later, the effect was a bit smaller, but there was still a 10-minute increase in sleeping time. Students also reported lower levels of daytime sleepiness and higher levels of emotional well-being at both points. The majority of students (89.1%), parents (75.6%), and teachers (67.6%) agreed that the later start times were better for students. This means that it is feasible to delay the school start, even in a culture that often chooses to sacrifice sleep to study more and achieve better academic performance. A 2017 study was conducted to see how delayed start times later than 8.30 am would affect student attendance and graduation rates. The researchers monitored over 30,000 students from 29 different high schools located across seven different states. Interestingly, both attendance and graduation rates significantly improved, giving more reason to delay school starts. A comprehensive assessment of school starting times in Canada was done in 2016. Researchers wanted to see how this parameter correlated with the amount of sleep the students were getting. They collected data from 362 schools in Canada and surveyed nearly 30,000 students aged 10 to 18. They found that the average starting time was 8.43 am. And even though students slept for over 8 and a half hours on average on a school night, 60% still felt fatigued in the morning. For every 10-minute delay in starting time, students got 3.2 more minutes of sleep; they were 1.6% more likely to get sufficient sleep, and 2.1% less likely to feel tired in the morning.
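Taken at face value, the per-10-minute figures from the Canadian survey can be extrapolated to longer delays. The sketch below is a back-of-envelope illustration only: the linear extrapolation is my assumption, not a claim made by the study, and the function name is invented.

```python
def delay_effects(delay_minutes):
    """Project the 2016 Canadian survey's per-10-minute rates onto a
    longer delay: +3.2 min of sleep, +1.6 percentage points likelihood
    of sufficient sleep, and -2.1 points likelihood of morning
    tiredness per 10-minute delay. Linear extrapolation is an
    assumption for illustration only.
    """
    steps = delay_minutes / 10
    return {
        "extra_sleep_min": round(3.2 * steps, 1),
        "sufficient_sleep_pct_points": round(1.6 * steps, 1),
        "morning_tiredness_pct_points": round(-2.1 * steps, 1),
    }

# A 45-minute delay, as in the Singapore trial, projects to about
# 14.4 extra minutes of sleep -- in the same ballpark as the 23.2-
# and 10-minute gains that study actually measured.
print(delay_effects(45))
```

That the naive projection lands between the Singapore trial's one-month and nine-month measurements is a rough consistency check, not evidence that the relationship really is linear.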
As the students who attended school later reported getting more sleep and feeling well-rested in the morning, this only builds a larger case for why we should quit torturing our kids with early morning wake-ups. A study done in 2014 by the University of Minnesota followed over 9,000 students from 8 different public schools. The goal was to see how later start times correlated with academic performance, overall health, and the well-being of students. The results were not surprising: the later start times enabled 60% of students to get at least 8 hours of sleep, which is a bare minimum for teenagers. A start of 8.35 am or later meant significantly improved academic performance. Students had higher grades in core subjects such as maths, science, English and social studies. They also performed better on state and national tests. Students' attendance improved, as there were fewer sick days thanks to better sleep quality, while their daytime fatigue decreased. Students who slept less than 8 hours per night reported significantly more symptoms of depression, anxiety, and caffeine and substance use. Their grades and overall performance were also much lower. Another key finding is that when a school changed its starting time from 7.35 to 8.55 am, there was a massive 70% decrease in teenage car crashes. Sleep-deprived kids were also observed to be more sedentary and prone to junk food, as exercise, eating healthily and sleeping well are all tied together. A study of nearly 10,000 students from 2008 showed consistent results. Researchers analyzed the effects of a one-hour delay on students and car crash accidents. They found that the total sleep time of students increased by 12 to 36 minutes depending on the grade. The percentage of students getting 8 or more hours of rest rose from 37.5% to 50%, as did the number of kids getting at least 9 hours (6.3% to 10.8%). Car crash rates were lower by 16.5%. Is Changing School Start Times Too Complicated?
Getting sufficient sleep, better academic performance, feeling well-rested during the day, mood improvement, fewer signs of mental health disorders, lower car crash rates - there are just too many benefits of delaying school starts. So why aren't we doing it? The main concern that officials have is the cost. They say it would simply take too much money, with the most significant chunk going to the adaptation of bus schedules. Current schedules are fitted to high school and elementary school needs, so changing them would probably mean employing more drivers and renting more buses, which costs a lot. However, if it benefits our children so much, is the cost really an obstacle? A few researchers have gone as far as predicting that we would see substantial economic benefits from a school delay. There would be far fewer car crashes, and the improved academic performance and better education would mean more economic gain. Not to mention that obesity, suicide, mental disorder, and other health issue rates would drop, which is also beneficial to the economy. Let's cut out all the excuses and do what's right for our children according to scientific research, and that is delaying the school start to 8.30 am or later.
1. Diagramming all the steps in a production process (flow chart) so every expected happening is understood; 2. Measuring the results of the production process (cause-and-effect diagram); and 3. Implementing the Plan-Do-Study-Act (PDSA) cycle to refine processes and improve results. The primary diagramming tools are the flow chart and the cause-and-effect diagram. Useful measurement tools, some of which may be unfamiliar, include the check sheet, the Pareto chart, the histogram, scatter diagrams, run charts and control charts. The use of these tools to measure production processes, and the application of statistical analysis to the measurements, is known as Statistical Process Control (SPC). Tools, Techniques Explained To illustrate, let's say we've identified a reproduction problem that is being expressed as a low farrowing rate. Artificial insemination is used. Flow charts, the pictorial representations of a process, involve three steps: 1. Accurately draw all the steps that actually occur in a process (Figure 1); 2. Draw a flow chart of the steps the process should follow if everything is being done correctly; and 3. Compare the two charts to find where they differ. Cause-and-effect diagrams, also called "fishbone" diagrams, are used to help identify all potential causes of a specific problem (Figure 2). The effect or problem is listed on the right side of the chart; the major influences or causes are listed on the left. The causes are usually categorized as: people, machine, method or material. When constructing the cause-and-effect diagram, causes must be well defined so that the most likely causes can be selected for further analysis. Since the usual tendency is to attribute causation much too easily, other tools are necessary to distinguish causation from association. Check sheets are the simplest data-gathering forms and are the logical starting point in many problem-solving situations (Figure 3).
They begin the process of translating opinions into facts, identifying prevalence. Pareto charts are used to display the relative importance of problems (Figure 4). They deal only with characteristics of a product or service. They are a special form of vertical bar graph that ranks problems by frequency of occurrence. It is important to remember, however, that the most frequent problems are not necessarily the most costly; re-ranking by cost rather than by frequency of occurrence may be more appropriate in many cases. Histograms go a step further than Pareto charts by displaying the distribution of measurement data over time (Figure 5). This tool can begin to show the variation inherent in any process. Histograms help visualize variability (the range of outcomes) and skewness (whether outcomes are weighted to one side of the mean or the other). Scatter diagrams are used to analyze two variables to determine their relatedness (Figure 6). Scatter diagrams show possible cause-and-effect relationships and the strength of those relationships. Statistical tests can be applied to scatter diagrams to determine exact levels of correlation. Run charts are the simplest way to chart a process over time (Figure 7). They are useful for displaying potential trends and observing long-range averages. A danger in using run charts is the tendency to see minor or normal variation as significant. Some simple rules that don't require sophisticated statistical analysis can be used with run charts. When nine consecutive points run on one side of the average, a potentially significant event has occurred or the average has changed. When six consecutive points are either increasing or decreasing with no reversal, regardless of where they fall in relation to the average, it is unlikely that the change is due to chance. Finally, if 14 points in a row alternate up and down, the pattern is not likely due to chance. 
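The three run-chart rules of thumb are simple enough to automate. Below is a minimal sketch in Python; the function name and the thresholds (9, 6 and 14 points) come straight from the rules described here, not from any SPC software package:

```python
from statistics import mean

def run_chart_signals(points):
    """Check a run chart against the three rules of thumb.
    Returns the names of any triggered signals."""
    signals = []
    avg = mean(points)

    # Rule 1: nine consecutive points on one side of the average
    # suggest a shift in the process average.
    run, prev_side = 0, 0
    for p in points:
        side = 1 if p > avg else (-1 if p < avg else 0)
        run = run + 1 if side != 0 and side == prev_side else (1 if side != 0 else 0)
        prev_side = side
        if run >= 9:
            signals.append("shift")
            break

    # Rule 2: six consecutive points increasing (or decreasing)
    # with no reversal suggest a trend.
    up = down = 1
    for a, b in zip(points, points[1:]):
        up = up + 1 if b > a else 1
        down = down + 1 if b < a else 1
        if up >= 6 or down >= 6:
            signals.append("trend")
            break

    # Rule 3: fourteen points in a row alternating up and down
    # suggest systematic oscillation rather than chance.
    alt, prev_dir = 1, 0
    for a, b in zip(points, points[1:]):
        d = 1 if b > a else (-1 if b < a else 0)
        alt = alt + 1 if d != 0 and d == -prev_dir else (2 if d != 0 else 1)
        prev_dir = d
        if alt >= 14:
            signals.append("oscillation")
            break
    return signals
```

For example, a weekly farrowing-rate series that sits below its average for nine straight weeks would return "shift", flagging it for cause-and-effect analysis rather than being dismissed as normal variation.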
Control charts are run charts with statistically determined upper control limits (UCL) and lower control limits (LCL), typically set three standard deviations above and below the average, plotted on the chart (Figure 8). Control charts help determine how much of the variability in a process is due to random variation and how much is due to unique events or individual actions. Control charts can be refined by calculating "zones" based on 1, 2 or 3 standard deviations. With control charts it is possible to begin distinguishing chance occurrences from real trends, whether a system is "in control" or "out of control," and whether "common causes" (system causes) or "special causes" (human error or unique events) are responsible for suboptimal performance.
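The arithmetic behind the limits can be sketched in a few lines. Note that this is a simplification: textbook Shewhart charts usually estimate the standard deviation from moving ranges or subgroup ranges, whereas this sketch uses the plain standard deviation of the series for illustration only:

```python
from statistics import mean, pstdev

def control_limits(samples, sigmas=3):
    """Center line and control limits for a simple individuals chart.
    Simplification: sigma is the plain population standard deviation,
    not the moving-range estimate used by textbook Shewhart charts."""
    center = mean(samples)
    spread = sigmas * pstdev(samples)
    return {"center": center, "UCL": center + spread, "LCL": center - spread}

def special_causes(samples):
    """Points outside the limits hint at special (assignable) causes
    rather than common-cause (system) variation."""
    lim = control_limits(samples)
    return [x for x in samples if not lim["LCL"] <= x <= lim["UCL"]]
```

A farrowing rate that falls outside the limits in one week points to a special cause (a sick boar, a handling error), while points wandering inside the limits reflect the common-cause variation built into the system itself.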
- Passivity and passiveness are nouns derived from the adjective passive.
- Both nouns mean the same thing and are often listed as synonyms.
Passive is an adjective: a word we use to describe someone or something that is not active. Someone who is lethargic can be described as passive. When we’re allowing something to happen and not doing it ourselves, we’re being passive. You probably know that you can turn an adjective into a noun by adding the right suffix. In the case of the adjective passive, you can use two different suffixes to create two different nouns with the same meaning.
Meaning of Passivity and Passiveness
The two nouns in question are passivity and passiveness. If you look them up in dictionaries, you’ll find that they’re often listed simply as nouns derived from the adjective passive. If you manage to find them defined individually, you can expect the dictionary to tell you that passivity is the state of being passive and that passiveness is the condition of being passive. In most cases, the words are listed as each other’s synonyms. The suffix in the word passivity, -ity, is generally used with words of Latin origin. And yes, passive is a word of Latin origin. On the other hand, the -ness in passiveness is a suffix native to the English language, and as such can be used on any word, regardless of its origin. That’s why it’s okay to say either passivity or passiveness, though passivity appears to be the more common of the two.
Passivity and Passiveness: Examples
Passivity and passiveness are nouns that denote the state or condition of someone or something. You can use them when you want to say that someone is not taking an active role in things going on around them, that someone is letting things go by without reacting to them, or that someone is letting something be done to them without resisting:
from Grammarly Blog
The Economy of Upper Peru
Spain immediately recognized the enormous economic potential of Upper Peru. The highlands were rich in minerals, and Potosí had the Western world's largest concentration of silver. The area was heavily populated and hence could supply workers for the silver mines. In addition, Upper Peru could provide food for the miners on the Altiplano. Despite these conditions, silver production fluctuated dramatically during the colonial period. After an initial fifteen-year surge in production, output began to fall in 1560 as a result of a severe labor shortage caused by the Indian population's inability to resist European diseases. Around the same time, Potosí's rich surface deposits became depleted, which meant that even more labor would be required to extract silver. The labor shortage was addressed by Francisco de Toledo, the energetic viceroy (the king's personal representative) of Peru, during a visit to Upper Peru in the 1570s. Toledo used the pre-Columbian mita to extract forced labor for the mines at Potosí from some sixteen districts in the highlands, which were designated as supplying mita. Adult males could be required to spend every sixth year working in the mines. Henceforth, Potosí mining depended on the mita as well as on a labor system in which relatively free men worked alongside those who were coerced. Toledo also regulated the mining laws, established a mint at Potosí, and introduced the mercury amalgam process. Adoption of the amalgam process was particularly important, according to Herbert S. Klein, in that it eliminated Indian control over refining. The second problem, the exhaustion of the high-content surface ores, required technological innovations. Hydraulic power took on increased importance with the construction of large refining centers. By 1621 a system of artificial lakes with a storage capacity of several million tons provided a steady supply of water for refineries. 
With the labor and technological problems resolved, silver mining flourished. By the middle of the seventeenth century, silver mining at Potosí had become so important that the city had the largest population in the Western Hemisphere, approximately 160,000 inhabitants. This seventeenth-century boom, however, was followed by a major decline in the mining industry. The exhaustion of the first rich veins required deeper and more expensive shafts. The rapid decrease of the Indian population as a result of disease and exploitation by the mita also contributed to the reduction in silver output. After 1700 only small amounts of bullion from Upper Peru were shipped to Spain. Kings from the Bourbon Dynasty in Spain tried to reform the colonial economy in the mid-eighteenth century by reviving mining. The Spanish crown provided the financial support necessary to develop deeper shafts, and in 1736 it agreed to lower the tax rate from 20 to 10 percent of the total output. The crown also helped create a minerals purchasing bank, the Banco de San Carlos, in 1751 and subsidized the price of mercury to local mines. The foundation of an academy of metallurgy in Potosí indicated the crown's concern with technical improvements in silver production. The attempts to revive the mining sector in Upper Peru were only partially successful, however, and could not halt the economic collapse of Potosí at the beginning of the nineteenth century. Nevertheless, mining remained critical to the economy of Upper Peru, because the demand for food from mining centers on the Altiplano shaped agricultural production in the valleys. Farming at first took place on encomiendas. The crown granted a small number of conquistadors the right to the labor and produce of Indians living on the encomienda, and by the 1650s there were some eighty-two encomiendas in Upper Peru. 
Encomenderos tended to monopolize agricultural production, control the cheap Indian labor, and collect the tribute that the Indians had to pay to the crown. Because encomenderos were difficult to control and abused their laborers, however, the crown tried repeatedly to bring Indians under its direct jurisdiction and control. In the second half of the sixteenth century, agricultural production shifted from encomiendas to large estates, on which Indians worked in exchange for the use of land. Cochabamba became a major producer of corn and wheat, and the valleys produced coca leaves in increasing amounts during colonial rule. In addition to mining and agricultural production, Indian tribute (alcabala) became an increasingly important source of income for the crown despite Indian migration to avoid payment. An early effort to collect tribute from Indians by moving them into villages or indigenous communities (comunidades indígenas) was unsuccessful because of resistance from both encomenderos and Indians. But by the late eighteenth century, an increase in the Indian population, the extension of tribute payments to all Indian males (including those who owned land), and a relative decline in income from the mines combined to make alcabala the second largest source of income in Upper Peru. Tribute payments also increased because Spanish absolutism made no concessions to human misfortune, such as natural disasters. The Indian tribute was increased by 1 million pesos annually.
Mali to Mecca, Mansa Musa Makes the Hajj is a problem-based learning unit based on Internet research and integrating history-social science, English-language arts and visual and performing arts. While this unit is designed for Internet access, it can be modified for one-computer classrooms and no-Internet situations. As a problem-based learning experience, this unit uses the historical event of Mansa Musa, Ruler of Mali, making the Hajj or holy pilgrimage to Mecca in 1324 AD. Legend has it that Mansa Musa's entourage included over 50,000 members of his court, including 500 men who preceded him, each carrying a six-pound solid gold staff. Behind this awesome spectacle came over eighty camels, each bearing hundreds of pounds of gold dust. When Mansa Musa passed through the Egyptian city of Cairo, he was so lavish with his gifts of gold that the price of gold fell and the Egyptian economy was affected for more than two decades! Students are challenged to become councilors in Mansa Musa's court and are charged with preparing a report on the short- and long-term effects of this journey on the future of the Empire of Mali. Through research and group collaboration, students will become knowledgeable about the glories and achievements of the African Islamic Empire of Mali and the Muslim Empire centered in the Arabian Peninsula. Sequencing with the Textbook This unit of study correlates with "Across the Centuries" by Houghton Mifflin Publishers. It serves as a major research project and bridge between Unit 2 The Growth of Islam and Unit 3 Sub-Saharan Africa. While this unit is developed for presentation through web pages or multimedia presentations such as HyperStudio or PowerPoint, low-tech alternatives such as posters, oral reports, short skits, interviews and written reports can be substituted. These authentic assessment alternatives can be very appropriate for resource, inclusion and ESL students. 
The web page and multimedia presentation products assume that your students have an appropriate level of word-processing and web page design skills. It is not the intent of this unit to include direct instruction and time to teach the basics of web page design and multimedia presentations to students. To keep students focused and on-track, it is advisable to have a series of periodic due dates for certain stages of the unit. This will allow the teacher to check on student progress and redirect a student if needed. This approach breaks down the total assignment into smaller, less overwhelming steps and simulates real-world deadlines. This day-by-day sequence is based on 50-minute periods which meet daily. The activities can be adjusted for block or flexible schedules. The time sequence can be shortened if students do the Internet and library research as homework. - Introduce the challenge of Mansa Musa deciding to make the Hajj to Mecca. Have students read the royal decree that outlines the general tasks for the councilors. - Students will randomly draw council positions resulting in four or more members in each of the five council groups. Record the student names for each council. - Students will read the specific duties for their specific council. - Have council members meet and go over their specific duties as a group. Students will brainstorm a list of research questions, which will be submitted to Mansa Musa (the teacher) at the end of the class. This work should be checked for focus and understanding of the assigned task. - Lead all students in a class discussion of what they know and what they need to know to start researching. - Use the vocabulary list as a starting point for formulating research strategies. The vocabulary list can also be assigned as an upcoming quiz. - Members of each council should meet to refine their research list from the class discussion of the previous day. 
- Members will take responsibility for researching a specific topic from the task and write a research plan of action which includes vocabulary and research questions. Initial Research - five periods - Research will be done on an individual basis, using Internet sources, print materials, CD-ROMs and other sources of information. Research can be accomplished through a combination of classroom reading time, library research time and research at home. - Before each structured research period, have students fill out a research plan-of-action to help keep them focused and to record collected information. These forms should be periodically collected and graded by the teacher. This will provide periodic check-points of progress for both the students and the teacher. - As students are researching, instruct them to collect 'artifacts', maps and images and save them to their disk. - Members of each council will meet to compare collected research and share information on valuable web sites to visit. - Members should go over their task again to be sure that they are collecting relevant information. - Council members can now decide where they need to concentrate their research efforts to cover all the areas of their assigned task. - Each member will fill out a Research Plan-of-Action form to help focus them. Integrated Council Meetings - Council groups will meet together to share information that is useful for each group. - Councilors of Transportation will meet with Councilors of Commerce and the Councilors of Knowledge to select trade goods and technologies which will impress the Arab and Western worlds. - Councilors of Knowledge will meet with the Councilors of Transportation and the Councilors of Commerce, to discuss how to use the intellectual and technological achievements as products of trade throughout the Arab and Western worlds. 
- Councilors of Religious Faith will need to confer with Councilors of Arts and the Councilors of Knowledge to discuss possible mosque designs. Refined Research - three periods - Students will use the next three periods to fill in the 'holes' in their collected research within their specific topics. Days 14-20 - 7 periods Council Meeting - Presentation Preparation - A review of the assigned task is a good idea to refresh everyone's memory and make sure they are focused on all the elements of the task. - Members of each council meet to again share information and select the most relevant parts for their presentation to Mansa Musa. Each of the topics should again be assigned to each member. - Council members will start 'story-boarding' their assigned area into one to three web pages or power point slides. Use the story-boarding form. Presentations - 3 class periods Student presentations can include a combination of the following: - Web pages or power point presentations - Maps, charts and posters - Images, drawings, 'building plans' or models - Oral presentations - Bibliography with sources cited. (Must be included)
GENERAL FAMILIES / NOVEL BEETLES The second part of the collection includes examples of the many other families that belong to the Order Coleoptera. Examples of these other families are: Click beetles are generally elongated and are named for their ability to flick themselves into the air and right themselves when they fall on their backs. They achieve this by a mechanism between their prothorax and mesothorax. Stag beetles get their name from the large antler-like mandibles on the male. Sexual dimorphism is pronounced: females have much smaller mandibles. These beetles are becoming increasingly rare. Cardinal beetles live under bark and are predaceous, eating other insects. These are the largest beetles in the collection. Their size and beauty have made them popular with collectors; they are now considered rare.
BBC Plant Science — Plants can communicate the onset of an attack from aphids by making use of an underground network of fungi, researchers have found. Instances of plant communication through the air have been documented, in which chemicals emitted by a damaged plant can be picked up by a neighbour. But below ground, most land plants are connected by fungi called mycorrhizae. The new study, published in Ecology Letters, demonstrates clearly that these fungi also aid in communication. It joins an established body of literature, recently reviewed in the Journal of Chemical Ecology and in Trends in Plant Science, which has suggested that the mycorrhizae can act as a kind of information network among plants. Researchers from the University of Aberdeen, the James Hutton Institute and Rothamsted Research, all in the UK, devised a clever experiment to isolate the effects of these thread-like networks. The team concerned themselves with aphids, tiny insects that feed on and damage plants. Many plants have a chemical armoury that they deploy when aphids attack, with chemicals that both repel the aphids and attract parasitic wasps that are aphids’ natural predators. The team grew sets of five broad bean plants, allowing three in each group to develop mycorrhizal networks, and preventing the networks’ growth in the other two. To prevent any through-the-air chemical communication, the plants were covered with bags. As the researchers allowed single plants in the sets to be infested with aphids, they found that if the infested plant was connected to another by the mycorrhizae, the un-infested plant began to mount its chemical defence.
[Image: electron micrograph of an aphid. Some strains of wheat have been genetically modified specifically to resist the aphid threat.]
Those unconnected by the networks appeared not to receive the signal of attack, and showed no chemical response. 
“Mycorrhizal fungi need to get [products of photosynthesis] from the plant, and they have to do something for the plant,” explained John Pickett of Rothamsted Research. “In the past, we thought of them making nutrients available from the [roots and soil], but now we see another evolutionary role for them in which they pay the plant back by transmitting the signal efficiently,” he told BBC News. Prof Pickett expressed his “abject surprise that it was just so powerful – just such a fantastic signalling system”. (06/24/2013)
Looking at pictures of the Moon, even from the historic “giant leap” photograph, it is easy to understand why scientists used to think of it as a big dust ball. However, “conventional wisdom” has been changing over the years. This is largely due to the information garnered from missions such as NASA’s 2009 Lunar Crater Observation and Sensing Satellite (LCROSS) lunar-impact probe, as well as new scanning technologies and more precise measurements, which have been facilitated by enhanced instrumentation and improved analytical detection limits, on samples returned to Earth following the Apollo missions. In a paper published in the Feb. 17 issue of Nature Geoscience, researchers Hejiu Hui, postdoctoral research associate of civil and environmental engineering and earth sciences at the University of Notre Dame; Anne H. Peslier, scientist at Jacobs Technology and manager of the electron microprobe at the Astromaterials Research and Exploration Science Division at Johnson Space Center; Youxue Zhang, the James R. O’Neil Collegiate Professor of Earth and Environmental Sciences at the University of Michigan; and Clive R. Neal, professor of civil and environmental engineering and earth sciences at Notre Dame, show that they have detected significant amounts of water in the samples of the lunar highland upper crust obtained during the Apollo missions. The lunar highlands are thought to represent the original crust, crystallized from a mostly molten early Moon that is called the lunar magma ocean. Their findings indicate that the early Moon was not only wet, but also that the water there was not substantially lost during the Moon’s formation. This new evidence seems to contradict the predominant lunar formation theory — that the Moon was formed from debris generated during a giant impact between Earth and another planetary body, approximately the size of Mars. 
According to Hui, “the presence of water in the early Moon needs to be reconciled with the favored formation scenario that had been supported by the volatile elements and isotopes in the samples, such as zinc.” As little as five years ago, no one had detected water in the samples returned from the Moon. The advancement of instrumentation, such as secondary ion mass spectrometry and Fourier transform infrared spectroscopy, has made it possible to detect tiny, but measurable, amounts of water in the mineral grains from Apollo samples. “It’s not ‘liquid’ water that was measured during these studies but hydroxyl groups (developed from water that did exist in the lunar magma ocean) that were distributed within mineral grains,” says Hui. “We are able to detect those hydroxyl groups in the crystalline structure of the Apollo samples.” The hydroxyl groups the team detected are evidence that the lunar interior contained significant water during the Moon’s early molten state, before the crust solidified, and that they may have played a key role in the development of lunar basalts. “The presence of water,” says Hui, “could imply a more prolonged solidification of the lunar magma ocean than the once popular anhydrous moon scenario suggests.” Contact: Clive R. Neal, 574-631-8328, [email protected] Originally published by newsinfo.nd.edu on February 18, 2013.
This image is reprinted with permission from University of Michigan Congenital Heart Center. Some congenital (present at birth) heart defects cause oxygen-rich blood and oxygen-poor blood to mix in your circulation. This can result in your body not receiving the amount of oxygen it needs for healthy functioning. For children with single ventricles that support the entire circulation, staged surgical procedures are necessary to separate the venous and arterial circulations. It is said that these children will proceed down a "single ventricle/Fontan pathway." The Fontan procedure is a surgical technique used to separate oxygen-rich and oxygen-poor blood, but the surgery does not create a normal circulatory pattern. For example, in a normal heart, the two lower heart chambers (ventricles) act as pumps, with one ventricle pumping blood to the body and the other pumping blood to the lungs. After the Fontan procedure, only one chamber pumps blood, and it must be strong enough to pull blood that is passively moving through the lungs into the heart and then pump it out to the body. Keep reading for descriptions of the Fontan procedure during childhood and continuing support needed as an adult. The Fontan Procedure in Childhood In the Fontan procedure, surgery is performed to connect the vein (inferior vena cava) carrying oxygen-depleted blood from the lower body directly to the pulmonary arteries, which carry blood to the lungs to pick up oxygen. Prior to the Fontan procedure, typically the venous blood from the upper body is directly connected to the pulmonary arteries by surgery (bidirectional Glenn or hemi-Fontan). The inferior vena cava is connected to the pulmonary arteries using a tube that bypasses the heart or by creating a baffle within the heart to direct the blood upward to the lungs. The surgical methods used to establish these connections have developed over the years so there are several types of Fontan procedures that have been used in the past. 
Some use reconstructed pathways within the atrium (intracardiac Fontan), while others use tubes to connect the inferior vena cava to the pulmonary arteries outside of the heart (extracardiac Fontan). Sometimes because of the inefficiency in circulation related to some types of Fontan surgery, new connections (Fontan revision) need to be made. After completion of the Fontan procedure, the blue (oxygen-poor) and red (oxygen-rich) blood circulations are fully separated. However, there is now no pumping chamber in the heart to propel blood through the lungs. Blood passively moves through the lungs and returns to the heart via the pulmonary veins. By virtue of not having a pumping chamber on the venous side of the circulation, the pressure within the Fontan circuit must always be higher than the chamber that the blood from the lungs passes into. This results in the pressure within the veins being higher than normal. Babies who undergo the Fontan procedure have only one functioning ventricle to pump blood to the body. In order for the procedure to succeed, the blue blood from the body must be able to pass through the lungs easily to pick up oxygen since there is no pumping chamber that pushes blood directly through the lungs. Additionally, the ventricle that pumps to the body needs to be reasonably healthy and strong enough to “suck” blood from the lung (pulmonary) circulation and propel it forward into the body. If it is not strong enough, the procedure will fail. The valves related to the main ventricle also need to function correctly. Leakage or narrowing of these valves will place additional strain on the ventricle. Also, after the intracardiac Fontan procedure, there is a risk that the heart’s right upper chamber (atrium) will become stretched, although this occurs less frequently with today’s more refined surgical techniques. If the atrium is stretched, an abnormal heart rhythm may result. 
Abnormal heart rhythms will interfere with the function of the heart in pumping enough blood to the body, resulting in fatigue, fainting and heart failure. Because the pressure in the veins and lung vessels needs to be higher to allow blood to passively pass through the lungs to return to the heart, this places additional stress on the other organs of the body, including the liver, gastrointestinal (GI) tract and kidneys. The resultant congestion of venous blood in these organs can cause problems, including fibrosis/cirrhosis of the liver, kidney dysfunction and problems with absorption of nutrients from the GI tract. Sometimes these problems become more of an issue than the pumping function of the heart itself. It is becoming increasingly clear that although the Fontan operation can restore separation of “blue” blood from “red” blood into their respective circulations, long-term success of this circulation is not expected. It is possible that heart transplantation may eventually have to be performed, but the relatively scarce supply of these organs means that many will not be able to receive them. Furthermore, the multiple exposures of these patients to blood transfusions may affect the ability to find a heart that “matches” and will not be rejected quickly. The Fontan Procedure in Adulthood Most Fontan procedures are performed during childhood, but the procedure is sometimes performed in adults. After a Fontan procedure, close follow-up with a cardiologist will be required across the lifespan. There are many potential complications in patients who have undergone a Fontan procedure. Since there is no pump to push blood from the inferior vena cava (IVC) to the pulmonary arteries, the pressure in the IVC is higher than normal. This can lead to liver failure. It is important for patients not to consume alcohol and to limit medications that can damage the liver. 
The body may form extra blood vessels that reroute blue blood away from the lungs and back to the body’s circulation (venovenous collaterals). This can cause cyanosis and make the patient appear blue, and it can lead to progressive fatigue. Some of these vessels can be blocked off (occluded) in a hospital’s catheterization lab. Patients can also retain fluid and have swelling, especially of the feet, ankles or abdomen. This can result from heart failure, obstruction of blood flow to the lungs or a condition called protein-losing enteropathy (PLE). In PLE, the body is unable to absorb proteins from the digestive tract. Other symptoms include abdominal pain and diarrhea. There are currently no standard treatments to cure PLE, but your cardiologist may try different medications to improve symptoms. Another rare complication of the Fontan procedure is plastic bronchitis. Patients can have difficulty breathing and begin coughing up thick protein casts. They are called casts because they look like the casts of the bronchial tree. These casts can be very difficult to clear and may be life-threatening if they cause airway obstruction. Some patients require admission to the hospital for more aggressive treatments to remove these secretions. It is important to consult your cardiologist about the kind and level of activity that is appropriate for you, given your condition, and to check with him or her before trying new activities.
Life can be scary for endangered loggerhead sea turtles immediately after they hatch. After climbing out of their underground nest, the baby turtles must quickly traverse a variety of terrains for several hundred feet to reach the ocean. While these turtles' limbs are adapted for a life at sea, their flippers enable excellent mobility over dune grass, rigid obstacles and sand of varying compaction and moisture content. A new field study conducted by researchers at the Georgia Institute of Technology is the first to show how these hatchlings use their limbs to move quickly on loose sand and hard ground to reach the ocean. This research may help engineers build robots that can travel across complex environments. "Locomotion on sand is challenging because sand surfaces can flow during limb interaction and slipping can result, causing both instability and decreased locomotor performance, but these turtles are able to adapt," said Daniel Goldman, an assistant professor in the Georgia Tech School of Physics. "On hard-packed sand at the water's edge, these turtles push forward by digging a claw on their flipper into the ground so that they don't slip, and on loose sand they advance by pushing off against a solid region of sand that forms behind their flippers." Details of the study were published online on February 10, 2010 in the journal Biology Letters. This research was supported by the Burroughs Wellcome Fund, National Science Foundation, and the Army Research Laboratory. In collaboration with the Georgia Sea Turtle Center, biology graduate student Nicole Mazouchova studied the movement of sea turtle hatchlings of the species Caretta caretta at Jekyll Island on the coast of Georgia. She and research technician Andrei Savu worked from a mobile laboratory that contained a nearly three-foot-long trackway filled with dry Jekyll Island sand. The trackway contained tiny holes in the bottom through which air could be blown. 
The air pulses elevated the granules and caused them to settle into a loosely packed solid state, allowing the researchers to closely control the density of the sand. In addition to challenging hatchlings to traverse loosely packed sand in the trackway, the researchers also studied the turtles' movement on hard surfaces -- a sandpaper-covered board placed on top of the sand. Two high-speed cameras recorded the movements of the hatchlings along the trackway, and showed how the turtles altered their locomotion to move on different surfaces. "We assumed that the turtles would perform best on rigid ground because it would not give way under their flippers, but our experiments showed that while the turtles' average speed on sand was reduced by 28 percent relative to hard ground, their maximal speeds were the same for both surfaces," noted Goldman. The researchers' investigations showed that on the rigid sandpaper surface, the turtles anchored a claw located on their wrists into the sandpaper and propelled themselves forward. During the thrusting process, one of the turtle's shoulders rotated toward its body and its wrist did not bend, keeping the limb fully extended. In contrast, on loosely packed sand, pressure from the thin edge of one of the turtle's flippers caused the limb to penetrate into the sand. The turtle's shoulder then rotated as the flipper penetrated until the flipper was perpendicular to the surface and the turtle's body lifted from the surface. "The turtles dug into the loosely packed sand, lifted their bellies off the ground, lurched forward, stopped, and did it again," explained Goldman. To extend their biological observations, Goldman and physics graduate student Nick Gravish designed an artificial flipper system in the laboratory. The flipper consisted of a thin aluminum plate that was inserted into and dragged along the trackway filled with Jekyll Island sand. 
Calibrated strain gauges mounted on the flipper provided force measurements during the dragging procedure. "Our model revealed that a major challenge for rapid locomotion of hatchling sea turtles on sand is the balance between high speed, which requires large inertial forces, and the potential for failure through fluidization of the sand," explained Goldman. "We believe that the turtles modulate the amount of force they use to push into the sand so that it remains below the force required for the ground to break apart and become fluidlike." Goldman and his team plan to conduct further field studies and laboratory experiments to determine if and how the turtles control their limb movements on granular media to avoid sand fluidization. They are also developing robots that move along granular media like the sea turtle hatchlings. "These research results are valuable for roboticists who want to know the minimum number of appendage features necessary to move effectively on land and whether they can just design a robot with a flat mitt and a claw like these turtles have," noted Goldman.
- Nicole Mazouchova, Nick Gravish, Andrei Savu, and Daniel I. Goldman. Utilization of granular solidification during terrestrial locomotion of hatchling sea turtles. Biology Letters, 2010; DOI: 10.1098/rsbl.2009.1041
Children Who Just Watch While many young children, when given the opportunity, will immediately engage in play with others, families and early childhood teachers often encounter children who want only to watch from the side. These children will watch others playing around them - constructing a towering building; reenacting a battle of dinosaurs in the sandbox; putting on a puppet show - without actually getting involved. Family members and teachers may be anxious when preschoolers do not engage in play with other children, but this "onlooker stage of play" can be an important step in the social development of young children. It is an opportunity for young children to learn and mentally practice interacting with others. With adult guidance, they'll benefit from this thoughtful time. In the onlooker stage, children don't physically interact, but their minds and feelings are fully engaged in the play of others. You can see it in their faces and body language. Their eyes may open wide as they see a block building growing taller, then they may dart quickly to another corner to determine the location of the growling dinosaur sounds. Their faces may break into smiles at the antics of other children pretending to be monkeys and gorillas. Each type of play has value: in solitary play, children acquire self-knowledge; other kinds of play help them build confidence, practice interacting, and learn how to cooperate with other children. Children who go through an onlooker (or "watcher") stage get to be mentally engaged without the potential intimidation of actually being in the thick of things. This engagement offers children opportunities to mentally manipulate what they see and hear, organizing and integrating information and storing it away for future use. The children may actually be mentally placing themselves into a situation they are observing, and testing how they might respond if they were involved. 
As "watchers," children have opportunities to manipulate their cognitive experience of the behaviors of others, gaining information which will later be used within the context of their physical, verbal, emotional, and social behaviors. The use of this information is not just imitation, but a true understanding of the causes, actions, and consequences of particular behaviors - similar to the way preschoolers might use self-talk or private speech to review what they have learned about words and language. The onlooker stage offers an opportunity to watch and learn before stepping into the action. All young children do some watching; some young children do it a lot. We now know that this is a valuable experience for children. As family members and as early childhood teachers, we are often anxious when preschoolers are not willing to engage overtly in play with other children. Perhaps we should allow them more time to watch and learn. When the time is right, they will be more comfortable and successful moving into the world of full social interaction. Excerpted from "He's Watching! The Importance of the Onlooker Stage of Play" by Sarah Jane Anderson - an article in the NAEYC journal, Young Children. Early Years Are Learning Years™ is a regular series from NAEYC providing tips to help parents and early childhood educators give young children a great start on learning. Reprinted with the permission of the National Association for the Education of Young Children. © 2008 NAEYC
The sensitive educator should realize that kids go to school for a living. School is their job, their livelihood, and their identity. Thus, the crucial role that teachers play in the youngster's social development and self-concept should not be underestimated. Even if a youngster is enjoying "academic success," her attitude about school will be determined by the degree of "social success" she experiences. There is much that the educator can do to promote social development in the Aspergers child. Kids tend to fall into four basic social categories in the school environment:
- Children who, although not openly rejected, are ignored by peers and are uninvolved in the social aspects of school.
- Children who have successfully established positive relationships within a variety of social settings.
- Children who "fit in" with a peer group based on common interests, but seldom move beyond that group.
- Children who are consistently rejected, bullied and harassed by peers.
Many children with AS and HFA find themselves in the rejected/bullied subgroup. Their reputations as being rather "odd" plague them over the years. It is important for the educator to assist the Aspergers youngster's peers in changing their view of this boy or girl. Discipline is a rather ineffective method of correcting bullying or rejecting behavior. For example, if the teacher disciplines Michael for insulting Ronnie, she only increases Michael's resentment of Ronnie. But the teacher can increase Michael's level of acceptance in several ways. Here's how:
1. Assign the Aspergers youngster to work in pairs with a "socially skilled" youngster who will be accepting and supportive. Cooperative activities can be especially effective in the effort to include the rejected youngster in class. These activities enable the youngster to use her academic strengths while simultaneously developing her social skills.
2. Assign the rejected youngster to a leadership position in class wherein his peers become dependent on him (e.g., line leader). This can serve to increase his status and acceptance. However, understand that this may be an unfamiliar role for the Aspergers student, and he may require some guidance from the teacher in order to ensure success.
3. Attempt to determine specific interests, hobbies or strengths of the rejected youngster. This can be accomplished through discussions, interviews or surveys. Once the teacher has identified the youngster's strengths, celebrate them in a very public manner. For example, if the child has a particular interest in Indian wood carvings, find a 'read-aloud' adventure story in which an Indian plays an important role in the plot. Encourage the youngster to bring a couple of his Indian wood carvings to class and show how they were made. By playing the expert role, a rejected youngster can greatly increase his status.
4. Board and card games can be used to foster social development in class. These activities require children to utilize a variety of social skills (e.g., voice modulation, taking turns, sportsmanship, dealing with competition, etc.). These activities can also be used to promote academic skills. Since games are often motivating for children, this activity can be used as positive reinforcement.
5. Educators at the high school level must be particularly aware of the teen who is being rejected by peers. During the teenage years, it is very important that the Aspergers youngster be accepted by his peers. The rejection suffered by teens with social skill deficits often places them at risk for emotional problems.
6. The child with social skill deficits invariably experiences rejection in any activity that requires children to select classmates for teams or groups. This selection process generally finds the rejected youngster in the awkward position of being the "last one picked."
Avoid these humiliating situations by pre-selecting the teams or drawing names from a hat. 7. The educator can assist the Aspergers youngster by making him aware of the traits that are widely-accepted and admired by his peers (e.g., when a particular child converses, extends invitations, gives compliments, greets others, laughs, shares, smiles, tells jokes, etc.). 8. The educator needs to recognize the critical role that the youngster's mom and dad – and even siblings – can play in the development of social competency. Ask the youngster’s mother or father to visit school for a conference to discuss the child’s social status and needs. School and home must work in concert to ensure that target skills are reinforced and monitored. Social goals should be listed and prioritized. Focus on a small set of social skills (e.g., making eye contact, sharing, and taking turns) rather than trying to deal simultaneously with the entire inventory of social skills. 9. The educator should demonstrate acceptance of - and affection for - the rejected youngster. This conveys the constant message that this youngster is worthy of attention. The educator can use her status as a leader to increase the status of the Aspergers youngster. 10. The socially incompetent youngster often experiences isolation and rejection in his neighborhood, on the school bus, and in peer-group activities. The educator can provide this child with a learning environment wherein he can feel comfortable, accepted and welcome. Coming to school every day can become a helpless event for some Aspergers kids – unless they succeed at what they do. Educators are shields against that helplessness. Teaching Social Skills and Emotion Management
The Parent/Teacher Guide has complete instructions on how to teach this math program. The Student Workbook has worksheets and practice pages. Both are needed for a complete program.
Multiplication: Booklet 1 - Basic Concepts - Grade 3
Multiplication: Booklet 2 - Beginning Long Multiplication and Basics of Distribution - Grade 4
Expanded Tables Flip Book
Multiplication: Booklet 3 - Properties and Factoring - Grade 4
Multiplication: Booklet 4 - Working with Large Numbers and Decimals - Grade 5
In multiplication, the Distributive Property is the guiding rule for breaking up large numbers to multiply. These books use colorful drawings and blocks to teach this very important property. Understanding the Distributive Property of Multiplication is fundamental to algebra; not understanding it is one of the major reasons students fail in algebra. By using our math program, students will not only be able to solve large multiplication problems, they will understand how the general idea works. They will then be able to apply it to the general algebraic form later. Click on covers below to view or order.
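As a quick illustration of the idea described above (a sketch of our own, not material from the booklets themselves), the Distributive Property lets a student split a hard multiplication such as 47 × 6 into easy pieces: (40 + 7) × 6 = 40 × 6 + 7 × 6 = 240 + 42 = 282. The short Python sketch below, with a hypothetical helper name `distributive_multiply`, does exactly that splitting by place value:

```python
def distributive_multiply(n, m):
    """Multiply n by m by splitting n into its place-value parts,
    e.g. 47 x 6 = (40 + 7) x 6 = 40 x 6 + 7 x 6."""
    parts = []
    place = 1
    while n > 0:
        digit = n % 10          # lowest remaining digit of n
        if digit:
            parts.append(digit * place * m)  # one partial product
        n //= 10
        place *= 10
    return parts, sum(parts)

parts, total = distributive_multiply(47, 6)
print(parts, total)  # → [42, 240] 282
```

The same splitting is what students later meet in algebra as a(b + c) = ab + ac.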
Synopses & Reviews
Super Science Matter and Materials Experiments is bursting with 10 great experiments for little scientists:
- Photographic step-by-step guide to each experiment.
- Key scientific concepts explained and put to the test.
- Notes for parents, teachers and helpers on support and safety.
With 10 fantastic experiments covering matter and materials, children can get to grips with scientific theories with ease.
- A detailed list of the everyday materials needed is given at the beginning of the book, alongside a helpful guidance section for parents, helpers or teachers.
- The clear step-by-step instructions for each experiment are accompanied by fantastic photographs.
- These fun experiments give kids a hands-on experience of learning, and they can even record their findings, just like a real scientist.
- This book is ideal for kids aged 7+. It is one of the science experiment books that every kid should have to understand how to put scientific ideas into practice.
Fun experiments featured in Super Science Matter and Materials Experiments:
- Can you mix it? Using two different materials, can you mix them together or do they repel each other?
- Colour separation: Discover how colour pigments move at different distances up wet paper.
- Electric bubble maker: Send electricity through two pencils using a battery to create a bubble maker.
(1) The earliest known bivalves have been reported from the Basal Cambrian.
(2) There was a rapid radiation in the Early Ordovician resulting in the development of all the major sub-classes by the Middle Ordovician. Dysodont, heterodont and taxodont dentitions were established during this time.
(3) During the Late Ordovician/Early Silurian, gills began to perform a filter-feeding function as well as a respiratory one.
(4) During the remainder of the Palaeozoic things calmed down from the boom of the Ordovician and bivalve diversity stabilized. During this time certain species developed extensive siphons, thus opening up a whole new range of modes of life. The development of these siphons, along with a muscular foot, gave bivalves the first opportunity to bury themselves deep within the sediment. This gave a distinct advantage over the previously dominant surface-dwelling forms.
(5) Post-Palaeozoic radiation of this group has been attributed to the effects of increased predation pressures.
The end-Permian extinction event saw almost 95% of all species become extinct. Prior to this mass extinction, bivalves had slowly risen in diversity but had failed to come to dominance over the brachiopods. Both bivalves and brachiopods were hard hit by the mass extinction, but the bivalves recovered their diversity levels during the Triassic, while the brachiopods did not. Why?
Photon-recoil bilocation experiment at Heidelberg
Erwin Schrödinger, who devised the Schrödinger equation that governs quantum behavior, also demonstrated the preposterousness of his own equation by showing that under certain special conditions quantum theory seemed to allow a cat (Schrödinger's Cat) to be alive and dead at the same time. Humans can't yet do this to cats, but clever physicists are discovering how to put larger and larger systems into a "quantum superposition" in which a single entity can comfortably dwell in two distinct (and seemingly contradictory) states of existence. The Heidelberg experiment with Argon atoms (explained popularly here, in the physics arXiv here and published in Nature here) dramatically demonstrates two important features of quantum reality: 1) if it is experimentally impossible to tell whether a process went one way or the other, then, in reality, IT WENT BOTH WAYS AT ONCE (like a Schrödinger Cat); 2) quantum systems behave like waves when not looked at--and like particles when you look. The Heidelberg physicists looked at laser-excited Argon atoms which shed their excitation by emitting a single photon of light. The photon goes off in a random direction and the Argon atom recoils in the opposite direction. Ordinary physics so far. But Tomkovic and pals modified this experiment by placing a gold mirror behind the excited Argon atom. Now (if the mirror is close enough to the atom) it is impossible for anyone to tell whether the emitted photon was emitted directly or bounced off the mirror. According to the rules of quantum mechanics then, the Argon atom must be imagined to recoil IN BOTH DIRECTIONS AT ONCE--both towards and away from the mirror. But this paradoxical situation is present only if we don't look.
Like Schrödinger's Cat, who will be either alive or dead (if we look) but not both, the bilocal Argon atom (if we look) will always be found to be recoiling in only one direction--towards the mirror (M) or away from the mirror (A) but never both at the same time. To prove that the Argon atom was really in the bilocal superposition state we have to devise an experiment that involves both motions (M and A) at once. (Same to verify the Cat--we need to devise a measurement that looks at both LIVE and DEAD cat at the same time.) To measure both recoil states at once, the Heidelberg guys set up a laser standing wave by shining a laser directly into a mirror and scattered the bilocal Argon atom off the peaks and troughs of this optical standing wave. Just as a wave of light is diffracted off the regular peaks and troughs of a matter-made CD disk, so a wave of matter (Argon atoms) can be diffracted from a regular pattern of light (a laser shining into a mirror). When an Argon atom encounters the regular lattice of laser light, it is split (because of its wave nature) into a transmitted (T) and a diffracted (D) wave. The intensity of the laser is adjusted so that the relative proportion of these two waves is approximately 50/50. In its encounter with the laser lattice, each state (M and A) of the bilocated Argon atom is split into two parts (T and D), so now THE SAME ARGON ATOM is traveling in four directions at once (MT, MD, AT, AD). Furthermore (as long as we don't look) these four distinct parts act like waves. This means they can constructively and destructively interfere depending on their "phase difference". The two waves MT and AD are mixed and the result sent to particle detector #1. The two waves AT and MD are mixed and sent to particle detector #2. For each atom only one count is recorded--one particle in, one particle out. 
But the PATTERN OF PARTICLES in each detector will depend on the details of the four-fold experience each wavelet has encountered on its way to a particle detector. This hidden wave-like experience is altered by moving the laser mirror L which shifts the position of the peaks of the optical diffraction grating. In quantum theory, the amplitude of a matter wave is related to the probability that it will trigger a count in a particle detector. Even though the unlooked-at Argon atom is split into four partial waves, the looked-at Argon particle can only trigger one detector. The outcome of the Heidelberg experiment consists of counting the number of atoms detected in counters #1 and #2 as a function of the laser mirror position L. The results of this experiment show that, while it was unobserved, a single Argon atom was 1) in two places at once because of the mirror's ambiguisation of photon recoil, then 2) four places at once after encountering the laser diffraction grating, 3) then at last, only one place at a time when it is finally observed by either atom counter #1 or atom counter #2. The term "Schrödinger Cat state" has come to mean ANY MACROSCOPIC SYSTEM that can be placed in a quantum superposition. Does an Argon atom qualify as a Schrödinger Cat? Argon is made up of 40 nucleons, each consisting of 3 quarks. Furthermore each Argon atom is surrounded by 18 electrons for a total of 138 elementary particles--each "doing its own thing" while the atom as a whole exists in four separate places at the same time. Now a cat surely has more parts than a single Argon atom, but the Heidelberg experiment demonstrates that, with a little ingenuity, a quite complicated system can be coaxed into quantum superposition. Today's physics students are lucky. When I was learning quantum physics in the 60s, much of the quantum weirdness existed only as mere theoretical formalism. Now in 2011, many of these theoretical possibilities have become solid experimental fact. 
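The counting described above can be captured in a toy two-path interference model. The sketch below is my own illustration, not code from the Heidelberg group; the exact 50/50 splitting and the single phase variable `phi` standing in for the laser mirror position L are simplifying assumptions. It shows how the probabilities at the two detectors oscillate with the relative phase between the partial waves while always summing to one atom per trial:

```python
import cmath
import math

def detector_probabilities(phi):
    """Toy model: two equal-amplitude partial waves reach each detector,
    differing by a relative phase phi (set by the laser mirror position L).
    Detector probabilities are squared magnitudes of the summed amplitudes."""
    a = 1 / math.sqrt(2)  # 50/50 splitting at the laser lattice
    amp1 = a * (1 + cmath.exp(1j * phi)) / math.sqrt(2)  # constructive port (detector #1)
    amp2 = a * (1 - cmath.exp(1j * phi)) / math.sqrt(2)  # destructive port (detector #2)
    return abs(amp1) ** 2, abs(amp2) ** 2

# Sweep the phase: counts slosh between the two detectors.
for phi in (0.0, math.pi / 2, math.pi):
    p1, p2 = detector_probabilities(phi)
    print(f"phi={phi:.2f}  P1={p1:.2f}  P2={p2:.2f}")  # P1 + P2 is always 1
```

Each individual atom still fires only one detector; it is only the accumulated pattern of counts, plotted against the mirror position, that reveals the hidden wave-like superposition.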
This marvelous Heidelberg quadralocated Argon atom joins the growing list of barely believable experimental hints from Nature Herself about how She routinely cooks up the bizarre quantum realities that underlie the commonplace facts of ordinary life.