White lions are extinct in the wild, and only a small number exist on a South African nature reserve and in various zoos around the world. On December 9, 2008, two white lion cubs were born at Serbia's Belgrade Zoo, making a total of five white lions at the zoo. Although traditionally considered divine due to their colouring, white lions are facing extinction, with only about 200 in the entire world. White lions, also known as blond lions, are not albinos; rather, they are an extremely rare type of African lion. The white coat results from a recessive gene, which means that for a lion to be white, both parents must carry the gene. As a result, few are born, and existing white lions are targeted by game hunters because they fetch a high price on the black market.

Few White Lions Remain

Although white lions originated in the Greater Timbavati and southern Kruger Park region of South Africa, Europeans began removing them from their natural habitat and putting them into captive breeding operations and zoos in the 1970s. Lion culling and trophy hunting during that era also depleted their numbers. Disease and habitat destruction further reduced the number of white lions, and to make matters worse, lions are unpopular with locals because of their tendency to attack cattle when migratory wild game is scarce. Although these problems affect all lion populations, white lions are already extremely rare, so their risk of extinction is even greater. While the white lion is not currently defined as a subspecies of the tawny lion (Panthera leo), the Global White Lion Protection Trust is attempting to win subspecies classification for it, based on the precedent set by the classification of the white Kermode (Spirit) Bear in British Columbia as a subspecies of the Black Bear. Winning such a classification would enable the white lion to be listed as endangered, and thus receive certain protections.

Some Believe That White Lions Are at a Disadvantage in the Wild

Many have speculated that because white lions lack the natural camouflage of their tawny counterparts, they are highly visible, particularly when young. As a result, white lion cubs may be more likely to fall victim to predators such as hyenas, and hiding would be more difficult for them. Also, although lionesses do most of the hunting, young male lions are forced out of the pride when they are old enough to threaten the dominant older males but are still relatively inexperienced. After their expulsion, they must hunt for themselves unless they can take over another pride. A young male white lion could therefore be at a particular disadvantage, having to hunt alone while being exceptionally visible to his prey. However, the Greater Timbavati region where white lions originated has pale sandy riverbeds and sun-bleached grasses. Also, most animals that lions prey on are colour-blind, and thus unlikely to perceive a significant colour difference between white and tawny lions. It has been noted that in the wild, white lions are frequently the dominant members of their prides, leading hunting expeditions and raising litters successfully, which suggests that, contrary to popular belief, their colour does not present a problem in their natural habitat.

White Lion Conservation Efforts

The white lion population continues to be at risk, and captive breeding programs seek to keep them from disappearing completely.
However, due to the small number of white lions available, there is a risk of inbreeding among close relatives, which can lead to genetic problems. Conservationists have attempted to mitigate this risk by breeding the offspring of captive white lions with regular tawny lions. The Global White Lion Protection Trust is working to re-establish white lions in their natural Greater Timbavati habitat, and to protect the small population that exists on the reserve. For information on conservation efforts and how to help white lions, visit the Global White Lion Protection Trust website. To view video footage of the Belgrade Zoo's white lions, see BBC News.

More on Endangered Big Cats
- Anabell, Maxine. (2001). "White Lions." Lairweb.org.nz – Tiger Territory.
- Bryner, Jeanna. (17 October 2008). "Rare White Lions Get Wild." LiveScience.com.
- CBC.ca. (11 July 2008). "White Lions Settle in Quebec Zoo to Breed."
- The Global White Lion Protection Trust. (n.d.). WhiteLions.org.
- Wray, James. (11 December 2008). "In Photos: Belgrade Zoo White Lion Cub." MonstersandCritics.com/science.
Naturally occurring hydrocarbon systems found in petroleum reservoirs are mixtures of organic compounds that exhibit multiphase behavior over wide ranges of pressures and temperatures. These hydrocarbon accumulations may occur in the gaseous state, the liquid state, the solid state, or in various combinations of gas, liquid, and solid. They are produced from the earth in either liquid or gaseous form, and these materials are known as crude oil or natural gas depending upon their composition. Crude oil is the most desired product of the petroleum industry, but natural gas is commonly produced along with crude oil. Chemically, petroleum consists of about 10 to 15 wt% hydrogen and 85 to 90 wt% carbon. Oxygen, sulfur, nitrogen, and helium may be found in lesser amounts. Since petroleum is mostly composed of hydrocarbons, the molecular constitution of crude oils varies widely across a wide range of hydrocarbon series. Of these series, the most commonly encountered are the paraffins, the olefins, the polymethylenes, the acetylenes, the terpenes, and the benzenes. Natural gas is predominantly composed of lower-molecular-weight hydrocarbons of the paraffin series.
Chap. 2: How does electrical energy get around?
- Circuit: a closed loop path of conduction through which an electric current flows
- Open circuit: an incomplete path that will not permit an electric current to flow
- Closed circuit: a complete path for electric current
- Current: the flow of electrons through a material; the rate of electron flow in a circuit
- Voltage: the difference in potential energy between charges in two different locations
- Volt: the SI unit of electric potential
- Potential difference: the difference of potential between two points; voltage
- Resistance: any opposition that slows down or prevents the movement of electrons through a conductor; opposition to the flow of electricity
- Ohm: the unit for measuring electric resistance
- Ohm's law: a scientific law stating that the strength of a current is equal to the voltage divided by the resistance of the circuit (amps = volts / ohms)
- Ampere: the unit used to measure the amount of current in a circuit
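Ohm's law is easy to check numerically. Below is a minimal sketch in Java; the battery voltage and resistance values are illustrative, not taken from the study set.

```java
public class OhmsLaw {
    public static void main(String[] args) {
        // Ohm's law: current (amps) = voltage (volts) / resistance (ohms)
        double volts = 9.0;    // illustrative value: a 9 V battery
        double ohms = 450.0;   // illustrative value: total circuit resistance
        double amps = volts / ohms;
        System.out.println("Current = " + amps + " A");  // prints 0.02 A (20 mA)
    }
}
```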
The central processing unit, or the CPU as it is better known, is a microchip attached to the motherboard. The CPU is commonly referred to as the brain of the computer. The CPU receives input data from the user, then sends data out to the other parts of the computer, such as the hard drive and the RAM, where it is stored either temporarily or permanently. This is similar to how the human brain receives information from the senses, then analyzes and stores it. The CPU is the most important part of any piece of technology that does any computing; without a CPU, a computer is essentially useless. The first CPU was introduced on November 14, 1971. With the help of Ted Hoff, Intel developed and released the Intel 4004, which had 2,300 transistors. This first CPU was capable of performing 60,000 operations per second and cost only $200. The development of CPUs has grown tremendously since 1971. Certain supercomputers, such as the Tianhe-1A, have CPUs capable of executing over 2.5 quadrillion operations per second, while average CPUs can do about 7 billion calculations per second. Though we have come a long way in only the past 46 years, in the past couple of years CPU development has slowed down substantially due to the limitations of the materials used to construct CPUs. The CPU is made up of several components that allow it to operate, but there are really only two essential parts: the arithmetic logic unit and the control unit. The job of the arithmetic logic unit is to perform arithmetic and logic operations, which it does using binary. The control unit carries out four main tasks: it fetches instructions from the computer's memory, decodes them, executes them, and writes back the results. A toy sketch of this cycle appears below.
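To make the fetch-decode-execute cycle concrete, here is a toy simulator in Java. The four-instruction machine, its opcodes, and the little program are all invented for this sketch; real instruction sets are far richer.

```java
// A toy illustration of the fetch-decode-execute cycle described above.
public class ToyCpu {
    public static void main(String[] args) {
        // "Memory": each instruction is {opcode, operand}.
        // Hypothetical opcodes: 0 = LOAD value, 1 = ADD value, 2 = PRINT, 3 = HALT
        int[][] memory = { {0, 5}, {1, 7}, {2, 0}, {3, 0} };
        int accumulator = 0;  // register the ALU operates on
        int pc = 0;           // program counter: address of the next instruction

        while (true) {
            int[] instruction = memory[pc];   // FETCH the instruction at pc
            int opcode = instruction[0];      // DECODE opcode and operand
            int operand = instruction[1];
            pc++;
            switch (opcode) {                 // EXECUTE
                case 0: accumulator = operand; break;            // LOAD
                case 1: accumulator += operand; break;           // ADD (ALU arithmetic)
                case 2: System.out.println(accumulator); break;  // write back / output
                case 3: return;                                  // HALT
            }
        }
    }
}
// Running this prints 12 (5 loaded, 7 added, result written out).
```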
Statement of the Problem:
- Forsythe and Blackwelder (1998) identified 144 crater basins in the Noachian highlands that have inflowing channel systems but no outflow channels.
- These basins are thought to represent the groundwater table at the time of their infilling.
- The channels are assumed to have formed by groundwater sapping.
- The craters are believed to be a source of water vapor for the atmosphere through evaporation and sublimation.
- The lowering of the local water table in the craters would then drive groundwater flow into the basins.
- This could have large impacts on the hydrologic cycle.

Questions:
- Do these crater basins actually represent the groundwater table?
- Were all the channels formed by groundwater processes? Does surface runoff appear to be a possible mechanism for the origin of the channels?
- After the impacts that formed the craters, large amounts of heat and possibly impact melt would result. Could this alone be the cause of the channels?
- If the basins were a source of water vapor for the atmosphere, we would expect more precipitation at about this time. Is there a progression from older groundwater-induced channels to younger runoff channels as precipitation possibly increased?
- What can MOLA data tell us? Can we delineate drainage divides, relief, topography, terraces, deltas, etc.?

Approach:
- Choose a few crater basins to analyze based on Forsythe and Blackwelder's work.
- Obtain MOLA data for the areas.
- Examine the topography around and within the crater basins.
- Study characteristics such as topography, relief, drainage density, etc.
- Look for evidence of terraces, shorelines, evaporite deposits, deltas, impact melts, etc.
The vanishing giant

Some popular beliefs are so strongly, if subconsciously, held that they need to be refuted over and over. One is the notion that fossils form when a creature is slowly buried by the 'sands of time.' Somehow, most people see fossilization as the long-term result of an average death. This is why, for instance, people often assume that if there were kangaroos in the Middle East (we know from the history in the Bible that there were, even if only for a short time), then we should find their fossils there. Or at least in places between Mt Ararat and their current Australian home. The answer is that fossilization is a rare, special event in today's world. In the normal course of events, animals that die do not form fossils. Millions of kangaroos are killed on Australia's roads each year, but they are decidedly not in the first stages of fossil formation. They decompose. This elephant carcass provides a dramatic illustration. The inset above [refer to Creation Magazine Volume 24:4 for photos] shows it one day after death; the inset photo above right was taken 7–8 days later.1 Biological processes (mostly insect activity) have so ravaged its structure that it is clear that, shortly, all that will be left will be a few scattered bones. These will also most likely succumb to the forces of erosion and destruction, unless they are buried by a local flood in sediment which then soon hardens to prevent further decay through oxygen and bacterial action. Of course, not all decomposition is as spectacularly rapid as this example. But even allowing several more months for the process, the point is that under normal conditions today, virtually any specimen will decompose rather than fossilize. So, when someone finds a relatively intact dinosaur skeleton, for instance, consider that it had to be buried quickly to form in the first place. Consider also that most such fossils are found in huge graveyards, often within layers of rock (such as the Dakota Sandstone in the USA) that cover hundreds of thousands of square km. Then ask yourself whether we see things like that happening on the Earth today. The Bible's account of a massive global hydraulic cataclysm is a much more logical explanation for the existence of 'billions of dead things, laid down by water, all over the Earth.'

1. Natural Environment Research Council, p. 4, Spring 2002.
Global Hand Washing Day

Why so much fuss about hand washing with soap? Between 5 and 10% of all children under the age of five living in poor countries develop pneumonia. In Pakistan alone, more than 250,000 children die of diarrhoea-related diseases every year. According to the Ministry of Environment, Government of Pakistan, the total health and economic costs of work days lost to treating water-borne diseases exceed Rs. 100 billion annually, and over 40% of hospital beds are occupied by patients suffering from water-borne diseases. Pakistan's population is projected to grow by 40% over the next 10 years, from the current 180 million to 250 million. The demand for safe drinking water, improved hygiene and adequate sanitation facilities will therefore increase dramatically, overburdening the frail health infrastructure and the meagre budget allocated to health.

There is a vital and close link between water-borne diseases and the lack of safe drinking water, hygiene and sanitation; these are three vital pillars of good health that need an integrated strategy. Not washing hands at critical times, particularly before eating and after using the toilet, can lead to diarrhoea-related infections, typhoid, cholera, gastroenteritis, and also to Hepatitis A and E. Regular hand washing with soap reduces acute respiratory infections, pneumonia, and diarrhoea-related diseases in children under five years of age by over 50%, according to a 2004 CMC study conducted in Karachi and published in JAMA. 1.1 billion people worldwide lack access to safe drinking water and 2.6 billion people lack adequate sanitation, according to the Fresh Water Action Network, South Asia. The practice of hand washing with soap is abysmally low globally, ranging between zero and 34% at critical times (before eating and after using the toilet). Pakistan is tragically on the lower end of the scale, despite some of the best soap brands in the world being manufactured locally.

Global Hand Washing Day (GHD) is dedicated to raising awareness of hand washing with soap as a key approach to disease prevention, one that can reduce child morbidity and mortality by more than 50% and thereby save billions in health and economic costs each year. GHD was first celebrated on October 15, 2008, when the United Nations General Assembly designated 2008 as the International Year of Sanitation. The theme of hand washing with soap focuses on the message to schoolchildren to wash their hands with soap regularly. More than 200 million children, parents, teachers and NGO and government workers in over 70 countries will celebrate the fourth Global Hand Washing Day on October 15, 2011. Initiated by the Global Public-Private Partnership for Hand Washing with Soap, GHD is endorsed by a wide array of governments, international institutions and civil society organizations, such as the CDC, UNICEF, USAID-HIP, AED, World Bank-WSP, P&G, Unilever, Colgate-Palmolive and other private sector companies and NGOs around the globe.

The practice of hand washing with soap in Pakistan is alarmingly low in rural areas. In a district-based multiple cluster survey of South Punjab done in collaboration with UNICEF, it was found that in Bhakkar District more than 56% of the population did not wash their hands with soap. In Mianwali District, over 42% of the population did not wash their hands with soap at critical times.
In Pakpattan District, more than one-third of the population did not practice hand washing with soap. It is more a question of social behaviour than the price of soap. It is interesting to note that there is a strong positive correlation between not washing hands with soap and the incidence of diarrhoea. For example, in Dera Ghazi Khan, where more than 36% of the population did not wash hands with soap, it was found that more than 52% of children under five were suffering from diarrhoeal infections. In rural Punjab, the same study found that about 24% of children under the age of five were suffering from different episodes of diarrhoea. The story of urban Punjab is not pleasing either: more than 21% of children under five were found to be suffering from diarrhoeal infections. In Sindh, the number of children under five suffering from gastroenteritis and other diarrhoeal infections was found to be significantly higher in flood disaster areas where drinking water was highly contaminated. An earlier study found that more than 19% of children in rural Sindh were suffering from diarrhoeal infections attributable to the lack of hand washing with soap and to contaminated water.

Global Hand Washing Day 2011 will revolve around activities in playgrounds, classrooms, community spaces and public places to drive the hand washing with soap campaign and trigger behaviour change in children on a massive scale. In Pakistan, only a few multinational companies, such as Unilever and P&G, have adopted the social marketing approach, running short but effective animated cartoon commercials and school-based campaigns to communicate the message of hand washing with soap to schoolchildren. Unfortunately, the majority of soap manufacturers are still focused on beauty and skincare as the unique selling proposition in their marketing strategy, and thus remain strategically blind to a much bigger and lucrative rural market segment that does not wash hands with soap.
New research by over 40 scientists from almost 30 institutions, led by BirdLife International, has found that only half of these important areas are currently protected. The researchers discovered this trend by analysing the overlap between protected areas and two worldwide networks of important sites for wildlife: Important Bird Areas, which comprise more than 10,000 globally significant sites for conserving birds, and Alliance for Zero Extinction sites, which include 600 sites holding the last remaining populations of highly threatened vertebrates and plants.

"Shockingly, half of the most important sites for nature conservation have not yet been protected", said Dr Stuart Butchart, BirdLife's Global Research and Indicators Coordinator. "And only one-third to one-fifth of sites are completely protected; the remainder are only partially covered by protected areas. While coverage of important sites by protected areas has increased over time, the proportion of area covering important sites, as opposed to less important land for conservation, has declined annually since 1950."

"This is despite the fact that we found evidence that protection of important sites may slow the rate at which species are driven towards extinction: by 50 percent for birds with protection of at least half of the Important Bird Areas at which they occur, and by 30 percent for birds, mammals and amphibians restricted to protected areas compared with those restricted to unprotected or partially protected sites. By using the IUCN Red List Index to measure changes in the status of species, and linking this to the degree of protection for important conservation sites, we found good evidence that protected areas may play an important role in slowing the loss of biodiversity."

With governments having committed to halt the extinction of threatened species and to expand protected areas in both number and extent, they could achieve both of these aims and benefit local communities by focusing new protected areas on the networks of sites considered to be the most important places for wildlife. For example, establishment of a protected area on the Liben Plain in Ethiopia would help to safeguard the future of the Critically Endangered Liben Lark, which is found nowhere else. Similarly, the designation of a proposed biosphere reserve in the Massif de la Hotte in Haiti would protect 15 highly threatened frog species that are restricted to just this single site. In both cases, appropriate management would ensure that local communities also benefit from enhanced protection of these sites.

There are probably several reasons why recently designated protected areas have tended not to protect the most biologically important areas. For example, some sites may be chosen for their remoteness and lower suitability for agriculture, rather than because they can best mitigate the rapid and extensive land-use change that threatens most species. Other protected areas may have been targeted primarily at locations for recreation, tourism, hunting, scenery or cultural interest.

In addition to designating a comprehensive network of protected areas, governments need to ensure that these reserves are adequately managed. The team estimated that this would cost roughly US$23 billion per year: more than four times the current expenditure. However, in countries with low or moderately low incomes, increased management funding would require less than one-tenth of this sum, just double what is currently spent.
"Such sums may seem large, but they are tiny by comparison to the value of the benefits that people obtain from biodiversity. These 'ecosystem services', such as pollination of crops, water purification and climate regulation, have been estimated to be worth trillions of dollars each year", said Butchart. Important Bird Areas and Alliance for Zero Extinction sites represent existing, systematically identified global networks of significant sites for nature conservation. Adequately protecting and managing them would help to prevent extinctions, safeguard the benefits that people derive from these sites, and contribute towards countries meeting their international commitments on protected areas. Dr. Frank Larsen, scientist with Conservation International who contributed to the study, said: "Since world leaders have agreed to increase the current protected areas from 13 percent to 17 percent of Earth's land by 2020, those four percentage points really needs to be focused on the unprotected sites that are the most important for nature. With the global population projected to skyrocket over the next 30 years, so will our demand for natural resources. Protecting those remaining pockets of nature will be crucial if we want to have food, water and a host of other vital benefits that that will allow us to survive and prosper." "Some countries are already leading the way, with governments using Important Bird Area and Alliance for Zero Extinction site inventories to inform designation of protected areas, for example in Madagascar, Nicaragua, the Philippines, and in the European Union. We encourage other governments to follow these examples as they expand their protected area networks, thereby maximising the effectiveness of nature protection", concluded Butchart. Provided by Conservation International This Phys.org Science News Wire page contains a press release issued by an organization mentioned above and is provided to you “as is” with little or no review from Phys.Org staff.
- By Vivien Williams

Mayo Clinic Minute: Why kids need to play

In a recent report, the American Academy of Pediatrics stresses the importance of letting children play. The academy says unstructured play allows for proper development and relieves toxic stress. Dr. Angela Mattke, a Mayo Clinic pediatrician, agrees that this type of playtime is important for good health.

"We want them to be learning through play," says Dr. Mattke. "Unstructured play is the best way that they can learn their developmental skills. They can learn social and emotional regulation. They can learn how to relate and problem-solve with other children."

Good old-fashioned playtime not only helps children develop social skills, but it also helps with language skills and stress relief. And in a world where screens are everywhere, she says it's important to make sure to turn them off.

"There's a lot of different areas that too much screen time can affect the health of children. So the first one would be sleep. We see it from young children all the way up to teenagers," says Dr. Mattke.

Too much screen time is associated with being sedentary, and moving is important for good health. So turn off the TV, put down the screens, and let the children play.
How Effectively Can We Protect Astronauts From Cosmic Radiation On Mars?

On a day-to-day basis, we hardly realize how the earth protects us from outer objects and space radiation. With the advent of the space exploration age and our intense fascination with astronomy, long space journeys appear very much on the horizon. Deep space travel, however, comes with many expected as well as unknown risks. The cosmic radiation coming from the sun and from distant galaxies can pose a great risk to space travel, and scientists have spent a great deal of time and effort trying to understand its effects. For quite some time now, astronauts have been traveling from the earth to the International Space Station (ISS). But they are also gearing up for longer journeys much deeper into our solar system in the coming years, which will increase their risk of exposure to this harmful space radiation. The immediate danger posed to astronauts after leaving the earth's influence comes in the form of solar flares from the sun's surface. These solar flares are associated with coronal mass ejections, accelerated waves of extremely highly energized particles that can damage the circuitry of satellites and cause cancer in humans. In addition to these particles, astronauts are constantly bombarded by cosmic radiation coming from deep space, believed to originate from supernovas.

A Trip to Mars

For a long time, Mars has been an object of great curiosity for scientists and astronomers alike. We have so far sent dozens of probes to study its atmosphere and landscape. In many ways Mars is somewhat similar to the earth: it has a 24-hour day, polar ice caps, an axis tilted with respect to its orbit, and the kind of landscape that is formed by flowing water. Elon Musk, CEO of SpaceX, a widely known private space agency, has revealed his plans to carry out multi-planetary space journeys by sending crews to Mars and beyond. However, the Mars mission will not be a cakewalk. The recently released Hollywood film "The Martian" underlines some of the potential risks humans can face on Mars, in line with the risks evaluated by NASA: gravity fields, isolation, a hostile environment, space radiation, and distance from the earth.

Protecting Astronauts from Space Radiation

According to NASA's 2001 Mars Odyssey spacecraft, astronauts on the Martian surface would receive radiation levels 2.5 times higher than on the ISS: 22 millirads per day, or about 8,000 millirads per year. If unchecked, this radiation would put them at an increased risk of cancer, potential damage to the central nervous system, degenerative tissue diseases and radiation sickness. Radiation penetrates the human body in many ways that ultimately pave the way for the above-mentioned risks. NASA is coming up with unique ways to monitor and measure how radiation affects humans living in space, and to identify biological countermeasures. It is even developing methods to optimize shielding in order to protect astronauts on a journey to Mars. The scientists of NASA and other premier space agencies around the world are studying the sun's modulation in order to make better decisions for future Mars missions.
This field of study and research is known as heliophysics, which helps us better understand how and when solar eruptions occur, as well as their overall impact on the space radiation environment. The second source of radiation, galactic cosmic rays (GCRs), is more difficult to shield against. These are also highly energized particles, accelerated to near the speed of light, and they include even heavier elements that can knock atoms out of materials such as astronauts' space suits and spacecraft walls.

When it comes to protecting astronauts, today's technology depends on passive as well as active shielding. Passive protection involves creating a barrier between astronauts and the incoming radiation. Active shielding, on the other hand, involves the use of electric and magnetic fields to divert harmful radiation. For lengthier missions such as one to Mars, passive shielding will likely not work for extended periods, and it is too expensive a proposition as well: according to the European Union's Space Radiation Superconducting Shield (SR2S) project, adding an extra 2.2 pounds of weight to a spacecraft adds $15,000 to the overall cost of the mission. In this regard, a safer and economically more feasible technology is underway in the form of magnetic cables. In another similar attempt, NASA is developing a material that can act as an effective shield as well as the primary structure of a spacecraft. This material, known as hydrogenated boron nitride nanotubes, or hydrogenated BNNTs, is made up of carbon, boron, and nitrogen, with hydrogen in the empty spaces between the tubes. It provides a strong physical structure along with the capability of excellently absorbing secondary neutrons. To create an effective force field for the protection of astronauts, scientists are considering magnesium diboride (MgB2) as a powerful compound; the Italian company Columbus Superconductors has already used MgB2 for medical applications as well as magnetic levitation. The theory suggests that MgB2 cables can create a superconducting magnetic force field that could go a long way towards protecting astronauts on their journey to Mars.

With the arrival of an aggressive space age, deep space journeys will begin sooner or later, and astronauts are the ones who will have to bear the brunt of radiation when they travel into deep space. Even if we assume that a mission to Mars will not necessarily be fatal, there are certain health risks for astronauts that need to be addressed. Space agencies and scientists are coming up with different methods to protect astronauts from harmful radiation, including induced barriers, artificial force fields and superconducting magnetic shields.
Volcanic eruptions are the result of convection currents, which are repetitive movements that occur underground. As pressure builds beneath the surface of the Earth, it pushes rock sediments upwards, releasing molten rock. Otherwise known as lava, this released material can reach temperatures as high as 1093 degrees Celsius.

Volcanic Convection Currents

Volcanic convection currents are reactions to the heat energy within the Earth's core, causing a repeated rise and fall of material. Imagine a glass cylinder that uses a candle as its heat source: the molecules at the bottom of the cylinder heat first, rising to the top, where they cool and fall back down to the bottom. The movement of the molecules from top to bottom is the convection current. The constant heat causes the same cycle to happen over and over, as convection currents push the liquid material in the volcano's tube toward the Earth's surface.

Tectonic Plate Shifts

Earth has three major layers: the core, the mantle and the crust. As convection currents reach the mantle, the heat causes a collision between the continental plate and the oceanic plate under water. The collision causes the two plates to converge, which means that the ocean plate slides downward at an angle of 45 degrees or less. The convection current continues to push the heated magma past the mantle, reaching the crust at the Earth's surface and producing a lava spout. Convection current movements create a push and pull effect, creating volcanic trenches, which are formed when two plates collide. The friction between the plates causes one to melt, forcing the other to move downward and leaving a gap. If the magma continues to rise towards the surface of the gap, another volcano may form. The entire transformation is so complicated that it takes centuries to develop and complete, which is the reason volcanoes do not just pop up.

Hawaiian volcanoes are shield types: flat, dome-like shapes characterized by calm eruptions. This is because the extruded lava is a steady cascade, producing fluid lava flows in stark contrast to the explosive release of lava by other volcanoes. The texture and consistency of the lava allows it to travel over long distances. Volcanologists continue to study volcanoes in great detail, especially shield types, which have risen from the ocean floor and continue to expand physical land boundaries.
Definition - What does Photochemical Oxidants mean?

Photochemical oxidants are the products of chemical reactions that occur between nitrogen oxides (NOx) and any of a host of different volatile organic compounds (VOCs). Common or well-known photochemical oxidants include ozone (O3), hydrogen peroxide (H2O2), and peroxyacetyl nitrate (PAN). These photochemical oxidants are cause for concern, as they can have negative effects on human, plant and animal health.

Safeopedia explains Photochemical Oxidants

Organic compounds are chemicals that contain carbon and are found in all living things. Volatile organic compounds (VOCs) are organic compounds that easily turn to vapour and become highly flammable. They are released from burning fuels such as gasoline, wood or coal, from solvents, from natural gas, etc. When combined with nitrogen oxides, they become photochemical oxidants like ozone and significantly impact climate change.
Java is a programming language originally developed by James Gosling at Sun Microsystems (which has since merged into Oracle Corporation) and released in 1995 as a core component of Sun Microsystems' Java platform. The original and reference implementation Java compilers, virtual machines, and class libraries were developed by Sun from 1991 and first released in 1995.

- Round off Double value to 2 decimal places in Java
- How to convert byte to blob in Java
- How to convert Java Object to / from JSON (Using Gson)
- Sorting user-defined objects using Java Comparator
- How to use JNDI to get database connection or data source

Date and Time
- How to get days in a month
- How to get current date and time
- How to compare dates
- How to find difference between dates
- How to get Current Timestamp
- Convert String to Date
- Date Formatting using SimpleDateFormat
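As a taste of two of the recipes listed above, here is a minimal, self-contained sketch; the values and the format pattern are illustrative, not taken from the linked tutorials.

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class JavaRecipesDemo {
    public static void main(String[] args) {
        // Round off a double value to 2 decimal places (one common approach)
        double value = 3.14159;
        double rounded = Math.round(value * 100.0) / 100.0;
        System.out.println(rounded);  // prints 3.14

        // Date formatting using SimpleDateFormat
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        System.out.println(fmt.format(new Date()));  // e.g. 2011-10-15 09:30:00
    }
}
```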
6th Grade Mathematics Overview

Ratios and Proportional Relationships
- Understand ratio concepts and use ratio reasoning to solve problems.

The Number System
- Apply and extend previous understandings of multiplication and division to divide fractions by fractions.
- Compute fluently with multi-digit numbers and find common factors and multiples.
- Apply and extend previous understandings of numbers to the system of rational numbers.

Expressions and Equations
- Apply and extend previous understandings of arithmetic to algebraic expressions.
- Reason about and solve one-variable equations and inequalities.
- Represent and analyze quantitative relationships between dependent and independent variables.

Geometry
- Solve real-world and mathematical problems involving area, surface area, and volume.

Statistics and Probability
- Develop understanding of statistical variability.
- Summarize and describe distributions.

Mathematical Practices
- Make sense of problems and persevere in solving them.
- Reason abstractly and quantitatively.
- Construct viable arguments and critique the reasoning of others.
- Model with mathematics.
- Use appropriate tools strategically.
- Attend to precision.
- Look for and make use of structure.
- Look for and express regularity in repeated reasoning.
The limited benefit of fluoride exposure has been found only in connection with topical exposure to pharmaceutical-grade sodium fluoride, NOT systemic exposure through ingestion. Several studies have shown that fluoride provides little to no dental protection, particularly with systemic use. Furthermore, evidence indicates that fluoride exposure can only delay, not prevent, tooth caries. Fluoride exposure can, however, increase the risk of dental fluorosis, a medical and cosmetic problem, and other more serious health hazards. (See the dangers of fluoride.)

The following are common sources of fluoride:
- Fluoridated drinking water: while all of the sources mentioned here can contribute to fluoride exposure, the greatest exposure to fluoride in children arises from fluoridated water (please see the section on fluoride in water for more information).
- Toothpaste, gels, mouthwashes, pills, and other dental applications: swallowing fluoride tablets causes dental fluorosis in 64 percent of young children, according to Dr. Peebles' 1974 study. Numerous medical professionals have noted that children can ingest greater amounts of fluoride from dental products alone than is recommended by health experts.
- Processed cereals and other foods: the act of processing foods can increase the concentrations of fluoride found in these products.
- Mechanically de-boned chicken: a 1999 Journal of Agricultural and Food Chemistry study found that the fluoride in mechanically separated chicken contributes to the risk of dental fluorosis in children under the age of eight.
- Infant formula: bear in mind that the dangers of fluoride poisoning are particularly high in young, developing children.
- Fish and seafood: particularly canned fish and shellfish.
- Foods cooked in Teflon pans.
- Beer and wine.
- Tea: certain decaffeinated teas, iced teas, and instant teas appear to be most harmful.
- Fluoridated salt: the US and Canada do not have fluoridated salt programs, though dozens of nations have implemented these health-hazardous programs.
- Anesthetics: enflurane, isoflurane, and sevoflurane.
Juneteenth is the official annual celebration of June 19, 1865, the day Union Army soldiers came to Texas to inform unaware slaves that they were free under the Emancipation Proclamation. Holding significant meaning for African-Americans, the holiday represents a turning point in American history. Children can commemorate Juneteenth through a list of activities with the 19th in mind.

Teachers can help students commemorate Juneteenth by going over the African-American folktales, songs and hymns slaves shared. Students can sing along to the songs and learn about their significance in getting slaves through tough times, and their further significance to African-American heritage. Another focus can be placed on the folktales and the messages they conveyed, such as respect for elders and survival.

Children can also create African crafts at home or at school. One activity in particular involves creating a string of African flags using construction paper in various colors, crayons, scissors, a ruler and a long piece of green string. Children can draw relatively simple African flags on the construction paper. Afterward, each flag is attached to the long string using tape, glue or staples.

As Africans were first brought to the Americas as slaves, thousands of different African dance styles merged with European styles to create new cultural traditions. Enslaved African-Americans would carry on these dances as a link to their African ancestry. Teachers or parents who wish to help their children celebrate Juneteenth can teach them these dances, such as the Juba and the Ring Dance. They are suitable for helping children understand the ways of life of slaves.

Lastly, children can celebrate Juneteenth by simply doing activities deeply rooted in African-American tradition. Parents can take their children to African-American history museums or see if there are any local Juneteenth celebrations in town. Many of these celebrations feature children's activities. Consider taking your children to any public places that specialize in promoting African-American culture. Choose whatever you feel your children would enjoy doing.
What Is Asthma?

Asthma is a chronic condition affecting the airways of the lungs. The hallmark symptoms of asthma are wheezing and difficulty breathing, but intermittent cough or chest tightness may be the only symptom. These respiratory symptoms usually come in episodes set off by various environmental or situational "triggers." Triggers include -- but aren't limited to -- chemicals, pollution, seasonal allergens like pollen and ragweed, animal dander, exercise, smoke, anxiety, and upper respiratory viruses like a cold. Most people with asthma have only mild and infrequent episodes. For them, the condition is an occasional inconvenience. For others, episodes can be frequent, serious, and even life-threatening if not properly treated, and they may need emergency medical treatment. If you have asthma, have regular checkups by a doctor. An asthma exacerbation (asthma attack) may pass quickly or last more than a day. Sometimes symptoms recur suddenly and with surprising intensity. This "second wave" attack can be more severe and dangerous than the initial episode and may last days or even weeks. Asthma affects more than 23 million Americans of all ages, including more than 7 million children, and is the leading cause of school absenteeism and pediatric hospital admission. Although asthma is seldom fatal, it is quite serious. If you have asthma, there are excellent (safe and effective) prescription medications to control it, so you should seek the help of a doctor before trying alternative therapies.

Asthma Myths and Facts

Myth: People with asthma shouldn't exercise.
Fact: Exercise is as important for people with asthma as it is for anyone else. With care or pretreatment, people with asthma can exercise normally and often vigorously. People with asthma generally do better exercising in environments with relatively high humidity, since exercise-induced airway narrowing (bronchospasm) can be caused by drying of the airways. Slow warm-up and cool-down periods with exercise also help to prevent exercise-induced bronchospasm (EIB).

Myth: You'll outgrow asthma.
Fact: This is both true and false. About half of the people who had asthma between ages 2 and 10 seem to "outgrow" the disease as they grow taller and notice a marked decrease in asthma symptoms. But in many cases, symptoms recur when they hit their 30s, start smoking, get a respiratory virus, or experience a large inhalant exposure. It's also common to develop asthma as an adult even if you did not have it as a child.

What Causes Asthma?

Asthma is usually not a problem with breathing in, but with breathing out. Asthma is a chronic illness with three main features:
- Inflammation of the airways of the lungs
- Reversible constriction (bronchospasm, or narrowing) of the airways (bronchioles) due to contraction of the muscles that surround them
- Extreme sensitivity of the airways to certain asthma triggers, which cause them to quickly constrict, slowly become swollen, and secrete more mucus

Asthma and allergies are much more common in people with a family history of asthma or allergies. The factors that worsen asthma vary from individual to individual. Each person with asthma should seek to determine exactly which factors cause their asthma to worsen.
Common asthma triggers include:
- Allergies, such as allergies to house dust mites, cockroaches, cats, dogs, molds, mice, and grass, weed, and tree pollens
- Infections, colds, influenza, and other respiratory viruses
- Irritants, such as strong odors from perfumes or cleaning solutions, air pollution, and especially smoke from tobacco, incense, candles, or fires
- Exercise, especially in dry or cold environments
- Cold or dry weather and changes in temperature and/or humidity, such as with thunderstorms
- Strong emotions, such as anxiety, laughter, or crying (which can cause heavy breathing)
- Reflux of acid from the stomach (GERD)
- Pain medications, such as aspirin or NSAIDs (10% of those with asthma are aspirin- and NSAID-sensitive)
Bumblebees play, according to new research from Queen Mary University of London published in Animal Behaviour. It is the first time object-play behaviour has been demonstrated in an insect, adding to growing evidence that bees can experience positive "feelings." The research team set up numerous experiments to test their hypothesis, which showed that bumblebees go out of their way to roll wooden balls repeatedly despite there being no apparent incentive to do so.

The study also found that younger bees rolled more balls than older bees, mirroring the human pattern in which young children and other juvenile mammals and birds are the most playful, and that male bees rolled balls for longer than their female counterparts. The study followed 45 bumblebees in an arena and gave them the option of walking through a clear path to reach a feeding area or deviating from that path into areas with wooden balls. Individual bees rolled balls between 1 and an impressive 117 times during the experiment. The repeated behaviour suggested that ball rolling was rewarding. This was supported by another experiment in which 42 other bees were given access to two coloured chambers, one always containing moving balls and the other without any objects. When later tested and given a choice between the two chambers, neither containing balls, the bees showed a preference for the colour of the chamber previously associated with the wooden balls. The set-up of the experiments removed any idea that the bees were moving the balls for a purpose greater than play: rolling the balls did not contribute to survival strategies, such as gaining food, clearing clutter, or mating, and was performed under stress-free conditions.

The research builds on previous experiments from the same lab at Queen Mary, which showed that bumblebees can learn to score a goal, by rolling a ball towards a target, in exchange for a sweet food reward. In the previous experiment, the team observed bumblebees rolling balls outside of the experiment, without getting a food reward. The new research showed that bees rolled balls repeatedly without being trained and without being given food to do so: it was voluntary and spontaneous, and therefore similar to play behaviour seen in other animals.

First author of the study, Samadi Galpayage, Ph.D. student at Queen Mary University of London, says that "it is certainly breathtaking, sometimes amusing, to watch the bumblebees show something like play. They approach and manipulate these 'toys' again and again. It shows, once again, that despite their small size and small brains, they are more than just little robotic beings. They can actually experience some sort of positive emotional states, even if they're rudimentary, like other larger or less fluffy animals. This kind of finding has implications for our understanding of the sentience and well-being of insects and will hopefully encourage an ever greater respect and protection of life on Earth."

Professor Lars Chittka, professor of sensory and behavioural ecology at Queen Mary University of London, director of the laboratory and author of the recent book The Mind of a Bee, says that "this research provides a strong indication that the minds of insects are much more sophisticated than one might imagine. There are many animals that just play for fun, but most examples are from young mammals and birds."
“We are producing ever-increasing amounts of evidence supporting the need to do everything we can to protect insects that are millions of miles away from the mindless, insensitive creatures they are traditionally thought to be.” Sick bee queens have shrunken ovaries, putting their colonies at risk Hiruni Samadi Galpayage – Where do the bumblebees play?, animal behavior (2022). DOI: 10.1016/j.anbehav.2022.08.013 Provided by Queen Mary, University of London Quote: First ever study shows bumblebees ‘play’ (2022, October 27) Retrieved October 28, 2022 from https://phys.org/news/2022-10-first-ever-bumble-bees.html This document is subject to copyright. Except for fair use for purposes of private study or research, no part may be reproduced without written permission. The content is provided for information only. #FirstEver #Study #Shows #Bumblebees #Play
The history of Vijayawada is believed to stretch back to the mythological period. According to legend, Arjuna, one of the Pandava brothers of the Mahabharata, prayed on the Indrakila Hill to seek the blessings of Lord Shiva. The Lord then appeared before him as a hunter and gave him a weapon called Pasupatastra. To commemorate his victory, Arjuna installed Vijayeswara here, and the region was later called Vijayavata, or Vijayawada. Archaeological evidence also suggests that the town has existed since the Stone Age, and various remains have been found across the River Krishna to prove the same. The historical period of Vijayawada can be traced back to the reign of the Chalukyas of Kalyani in India. During this time, Krishnadev Rai was the ruler of the Chalukya kingdom, and he designated the town as the religious and cultural capital. In 639 AD, the famous Chinese traveller Hiuen Tsang also visited this cultural town. In 1900, British rule was established in the region; during that time the city flourished, and significant changes were made to its infrastructure and major facilities. The well-known Prakasam Barrage project and a railway bridge over the River Krishna were the result of efforts made during British rule in Vijayawada.
Miniature antibodies called nanobodies, derived from llamas, have demonstrated therapeutic potential in the fight against Covid and its variants, according to a study. Amid the growing threat of Omicron – the new and potentially more dangerous SARS-CoV-2 variant, scientists are ramping up the quest for Covid treatments. Rockefeller scientists Michael P. Rout and Brian T. Chait and their colleagues at the Seattle Children’s Research Institute selected a repertoire of over one hundred nanobodies based on their potency and ability to target different parts of the SARS-CoV-2 spike protein. Produced by immunised llamas, the nanobodies were shown to neutralise the original coronavirus and several of its variants, including Delta, with high efficacy in lab tests. Studies to assess their potency against the new Omicron variant are underway. The researchers hope that a nanobody combination could be developed into a Covid drug effective against both current and future variants. “Based on the way our nanobodies bind to the virus, we are hopeful that many will remain effective, perhaps even against Omicron,” Rout said. “We should have those results soon.” The findings are published in the journal eLife. A human antibody is a chunky formation of two protein chains. But the corresponding nanobody molecule produced by llamas, camels, and other species of the Camelidae family consists of only one protein. To obtain the nanobodies, the researchers took blood samples from llamas who had received small doses of coronavirus proteins similar to a vaccine. They then sequenced the DNA corresponding to diverse nanobodies produced by the llamas’ immune system and expressed these genes in bacteria to produce large amounts of the nanobodies for lab analysis. Nanobodies that showed desired properties were then selected and further tested to identify those most capable of neutralising the virus. The small size of nanobodies allows them to access hard-to-reach spots on the SARS-CoV-2 virus that larger antibodies may be unable to access. It also allows researchers to combine nanobodies capable of hitting different parts of the virus, minimising its chances of escape. “One of the most amazing things we observed with the nanobodies is that they show extraordinary synergy,” Chait said. “The combined effect is much greater than the sum of its parts.” The researchers next plan to test the safety and efficacy of the nanobodies in animal studies. Besides being small and nimble, nanobodies are also inexpensive to mass-produce in yeast or bacteria. Moreover, they are remarkably stable. The ability of these molecules to withstand high temperatures and long storage times means that they could be developed into a drug accessible in various settings worldwide.
Number boards are a wonderful way to show students the many different ways to represent a number. Here is everything you need to make number boards 21-30 for your classroom! Not only will they help your students learn their numbers, but they will really get a deeper understanding of each number! These number boards are interactive and allow students to come up and help fill in the different ways to represent the number you are learning.

The Number Boards Include:
- Large Numbers 21-30
- Large Number Words 21-30
- Lines to model the number
- Three Ten Frames
- Number Line 20-30
- Drawing to represent the number
- Base ten blocks to represent the number
- Addition equations to go with the number (2 options)

Check out my blog: A Spoonful of Learning
New research has uncovered the secret side-gig of a 3.2 billion-year-old class of enzymes called nitrogenases. Scientists long believed that the primary function of these enzymes was to convert nitrogen into ammonia, an essential process that makes Earth habitable. But the new research describes a use for this enzyme that could help create more eco-friendly plastic production. The enzymes are present in large quantities in a type of soil-dwelling bacteria called Rhodospirillum rubrum. When exposed to the oxygen-depleted environments in which nitrogenase thrives, these bacteria reveal a previously unknown biological pathway, allowing them to transform sulfur into ethylene. Ethylene is a natural gas used in the production of everyday plastics, like disposable grocery bags. Together, the new observations may offer a safer way to create plastics without fossil fuels. They could also help farmers better understand how an abundance of ethylene may be harming their crops.

"We thought, well, that's weird"

Organisms burping out ethylene is something scientists have been actively studying for some time, but Justin North, a research scientist at Ohio State and first author of the new study, says that these previously identified biological pathways to ethylene had a bit of an explosive problem. "For about a decade, researchers have studied the biological production of ethylene through a different mechanism that occurs in oxygenated environments," North tells Inverse. "There is a technical hurdle to scaling up that process, as ethylene and oxygen mixed at industrial scales could be explosive." Because this newly discovered pathway in bacteria is anaerobic, meaning it doesn't require oxygen, North says it may be possible to scale something like this without the risk of explosions. The study was published Thursday in the journal Science.

How does it work?

Unaware of their scientific import, these bacteria are simply generating the proteins they need for survival. In drier, more oxygenated soil conditions there may be more sulfur available to produce methionine, an amino acid necessary for building proteins. But in waterlogged soil, the availability of sulfur becomes much lower. The bacteria appear to solve this problem by turning to a handy survival skill: scavenging sulfur from the surrounding cellular waste instead. As they do so, the bacteria appear to burp out high levels of methane and ethylene as a byproduct. "We know these bacteria are producing hydrogen and consuming carbon dioxide," North says. "But, lo and behold, they were making copious amounts of ethylene gas. And we thought, well, that's weird."

Using mass spectrometry, a technique that can look at the mass, structure, and composition of molecules, North and his colleagues discovered the ethylene-producing bacteria samples were chock-full of the ancient enzyme nitrogenase. These enzymes date back over three billion years, and originally thrived at a time before needing oxygen for biological processes was all the rage. As the enzyme's name would suggest, researchers previously believed this class of enzymes' sole purpose was to transform nitrogen into ammonia (a necessary process for supporting life on Earth), but Robert Hettich, one of the study's co-authors, says that sometimes long-held thinking can be misleading. "Sometimes the naming or annotation of a gene or gene family can be misleading," Hettich tells Inverse.
"In fact, the gene might have a secondary function, a night job so to speak, or it might actually be doing something completely different... [T]he data are the data." Future impact — In addition to potentially offering a new path for scientists to harness natural ethylene gas for the production of plastics instead of relying on fossil fuels, the scientists say the discovery may also help farmers better manage their crops and understand how excess ethylene could be damaging them. While ethylene in moderate amounts can be good for plant growth, excess ethylene (e.g. which this pathway might cause during flooding) can be damaging. More research still has to be done to learn more about this pathway and how scalable it is, but this study opens an exciting new path of inquiry, the researchers say. Abstract: Bacterial production of gaseous hydrocarbons such as ethylene and methane affects soil environments and atmospheric climate. We demonstrate that biogenic methane and ethylene from terrestrial and freshwater bacteria are directly produced by a previously unknown methionine biosynthesis pathway. This pathway, present in numerous species, uses a nitrogenase-like reductase that is distinct from known nitrogenases and nitrogenase-like reductases and specifically functions in C–S bond breakage to reduce ubiquitous and appreciable volatile organic sulfur compounds such as dimethyl sulfide and (2-methylthio)ethanol. Liberated methanethiol serves as the immediate precursor to methionine, while ethylene or methane is released into the environment. Anaerobic ethylene production by this pathway apparently explains the long-standing observation of ethylene accumulation in oxygen-depleted soils. Methane production reveals an additional bacterial pathway distinct from archaeal methanogenesis.
All computer data (images, numbers, letters, and so on) is represented in binary. Binary is a number system that uses only 1s and 0s (on/off). A single binary digit is called a bit, and bits can be grouped together in sets of 8 to form a byte. Your task: read my code and use the draw tool to colour the correct squares! Make sure to double-check at the end!
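To see how one byte encodes a character, here is a short, illustrative Python sketch. The letter "A" is just an example; any character works the same way, and the square-drawing loop mirrors the classroom activity of colouring a square for every 1.

```python
# Show how a letter becomes one byte (8 bits) of binary.
letter = "A"
code = ord(letter)              # the character's numeric code (65 for "A")
bits = format(code, "08b")      # the same number written as 8 binary digits
print(f"{letter} -> {code} -> {bits}")   # A -> 65 -> 01000001

# A "colour the squares" exercise is the same idea:
# colour a square for every 1, leave it blank for every 0.
for bit in bits:
    print("#" if bit == "1" else ".", end="")
print()
```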
Aurora polaris (ger: Polarlicht; fr: aurore polaire)

Aurora is Latin for 'dawn'. The Northern Lights and the aurora borealis are two names for the same thing. The term aurora borealis was introduced by Galileo Galilei in 1619, who wrongly thought that the aurora was caused by sunlight reflecting from the high atmosphere. Nevertheless, the name stuck and was used for the Northern Lights from then on. The proper name for the aurora of the southern hemisphere is the aurora australis. Together the aurora australis and the aurora borealis are known as the aurora polaris. Nowadays the simple name aurora is mostly used, as is the name Northern Lights.

In fact, the aurora is a luminous phenomenon occurring in the upper atmosphere at altitudes between about 100 km and 1,000 km. Auroras are best seen from the auroral zone, the region about 15-30 degrees from each magnetic pole with the greatest frequency of aurorae. The region within which auroral activity occurs is the auroral oval, and its full extent is only visible from space. The center of the auroral oval lies at the geomagnetic poles (not to be confused with the geographic poles). Infrequently, however, aurorae may be seen as far down as the equator.

Like giant curtains in the sky that slowly wave as if a gentle breeze were blowing, aurorae fill the entire sky with changing colours and motion, and no two aurorae are ever alike. An auroral display may begin with a faint glow, an overall veil, or weak patches in the sky towards the pole. Eventually it will form an arc and may develop rays. Its intensity may vary, and multiple bands are likely to form, flickering and flaming in surges of brightness across the sky.

Is Earth the only planet with aurorae? Clearly not! Every rotating planet with a magnetosphere (a magnetic field) should have aurorae. The proof was given by the famous Hubble telescope in 1996, when it took breathtaking images of a Jupiter aurora. For its lack of a magnetic field, our Moon does not display aurorae; what a pity!
Skimming Learning Content

Most students, from high school to college, know the difference between reading and skimming learning content. Skimming, or speed-reading, is when your eyes sail along the lines, dragging a fishnet for the important information in the learning content. When you come across something your brain registers as worth noting, your eyes slow down momentarily. At this point, the act ceases to be skimming and becomes actual reading. Your brain gathers the fish from the net and stores it in a bucket for future use. Particularly in college, students are given exorbitant amounts of reading material and are expected to swallow big bites of learning content. Using speed-reading techniques, readers can tackle larger reading assignments and process the essential information while skimming over filler. Coggno.com offers world-class e-learning.
Welcome to The 1/2 Inch Graph Paper with Gray Lines Math Worksheet from the Graph Papers Page at Math-Drills.com. This math worksheet was created on 2015-09-17 and has been viewed 21 times this week and 143 times this month. It may be printed, downloaded, or saved and used in your classroom, home school, or other educational environment to help someone learn math.

Teachers can use math worksheets as tests, practice assignments, or teaching tools (for example, in group work, for scaffolding, or in a learning center). Parents can work with their children to give them extra practice, to help them learn a new math skill, or to keep their skills fresh over school breaks. Students can use math worksheets to master a math skill through practice, in a study group, or for peer tutoring.

Use the buttons below to print, open, or download the PDF version of the 1/2 Inch Graph Paper with Gray Lines math worksheet. The size of the PDF file is 24753 bytes. Preview images of the first page, and of the second page if there is one, are shown. If there are more versions of this worksheet, the other versions will be available below the preview images. For more like this, use the search bar to look for some or all of these keywords: math, graph, grid, paper, U.S., Imperial, Inch.

The Print button initiates your browser's print dialog. The Open button opens the complete PDF file in a new browser tab. The Download button initiates a download of the PDF math worksheet. Teacher versions include both the question page and the answer key. Student versions, if present, include only the question page.
Need help with your history homework? Our expert tutors can provide the education and insights you need to succeed in your history courses, and our hassle-free online system makes it easy to connect with an expert and get started today! The best part? We offer discounted rates that you won't find anywhere else, so there's no reason to look any further for your history homework help! See how we can help today!

The basics of finding good sources

Finding good sources can be hard, but it's important if you want to get an A on your history homework. There are a few things to keep in mind when looking for sources:
1. Ask your teacher which sources they want you to use.
2. Check the source for errors and biases.
3. Look at the publication date, authorship, and credibility of the source. Is it published by an academic press or by a company? Is the author credible? If not, why should we trust them?
4. What is the purpose of the source? Does it support or oppose your argument (remember to make sure that both sides are represented)?
5. How accurate is this information? Are there footnotes that give primary documents as evidence, or just secondary texts that refer back to other texts?
6. Does this information come from primary documents like diaries, newspaper articles, letters written by witnesses who were present at some event (these are often called eyewitness accounts), personal interviews conducted with people who have personal knowledge about something, court records where people have testified under oath, and much more (footnote these)?

How to take notes effectively

Effective note-taking is essential to understanding and retaining information. The following are a few tips for note-taking in the classroom: Write down the main points the teacher makes, rather than everything they say. This will help you organize your thoughts as you write, and it will also make sure you don't miss any important information during class. Write notes in complete sentences. Paragraphs are even better, because they are easier to read and understand when you go back to review your notes later. Write your name on every page of the notebook so that if someone borrows it, they know whose notebook it is.

How to outline your essay

The best way to write your essay is to pick a thesis, then plan the introduction, body paragraphs, and conclusion. The introduction should introduce the topic of your essay and include background information on it. The body paragraphs should present arguments for and against your thesis. The conclusion should restate your argument and provide a final thought on the topic.

The importance of proofreading

History Homework Help is a service offered by our company. EssayForAll offers history homework help to high school and college students in need of assistance. With us, you can get the assistance you need without spending hours on end searching for resources online. Our experts are here to help, so don't hesitate to contact us for assistance today!
Are you searching for 10 Lines on Global Warming in English? We have shared plenty of information for three levels of students (kids, school students, and higher-class students) according to their level of understanding. We have provided all the necessary information about global warming, its causes, its effects, how to reduce it, and how to avoid it, in simple and easy language that will help you easily understand and remember it. Just check it out, and you will get a good idea about global warming.

10 Lines on Global Warming in English for Children and Students

Global warming is the increase in the temperature of the Earth's atmosphere caused by human activities. It affects humans as well as plants and animals, and it is very harmful to all living things. Let's see more about it.

Set 1 – 10 Lines on Global Warming in English for Kids
- Global warming is the increase in the temperature of the Earth's atmosphere.
- People all over the world are facing the problem of global warming.
- As global warming increases, diseases also increase.
- Burning fossil fuels releases greenhouse gases such as CO2, which increase global warming.
- Global warming is mostly caused by greenhouse gases.
- It affects the climate as well as the environment.
- Global warming also causes unexpected forest fires.
- Major changes in climate are a result of global warming.
- Global warming is causing the ice in glaciers to melt quickly.
- To reduce global warming, we have to plant more trees.

Set 2 – 10 Lines on Global Warming in English for School Students
- Global warming is the rise in the temperature of the Earth's atmosphere due to human activities.
- The use of chemical fertilizers and pesticides also increases global warming.
- In the last five decades, the global temperature has risen by about 1.5 degrees Celsius on average.
- Global warming is also a leading cause of the extinction of aquatic species.
- Methane, another strong greenhouse gas, is produced by livestock.
- Today people use a large number of vehicles, which emit carbon dioxide, another contributor to global warming.
- The greenhouse gases mainly responsible for global warming are carbon dioxide, methane, ozone, and nitrous oxide.
- Reducing the use of conventional energy sources and switching to non-conventional sources such as wind and solar energy will be a big help in reducing global warming.
- Planting trees is another way to minimize global warming.
- To reduce global warming, we have to reduce the emission of greenhouse gases.

Set 3 – 10 Lines on Global Warming in English for Higher-Class Students
- Day by day, human activities are increasing the temperature of the Earth's atmosphere; this is known as global warming.
- Greenhouse gases are mainly responsible for global warming.
- The major cause of excessive greenhouse gas production, such as carbon dioxide and methane, is human activity.
- People use vehicles that emit carbon dioxide, one of the greenhouse gases responsible for the increase in global warming.
- Ocean acidification is a result of global warming, causing harm to fisheries and other marine organisms.
- Global warming causes a large amount of evaporation from the oceans, which results in cloud formation in the sky.
- This results in floods and storms, as well as a rise in sea level.
- In the past century, the snow of Mount Kilimanjaro has melted by 80 percent due to global warming.
- We have to reduce global warming by making changes in our activities, such as walking short distances rather than using a vehicle.
- Planting more trees will also help in reducing global warming.

So friends, thanks for reading. I hope you have understood everything mentioned above. These are the 10 Lines on Global Warming in English that we shared for three different levels of students, which will help you briefly understand global warming: its causes, its effects, how to reduce it, and how to avoid it. You can use this for essay writing, project work, homework, speeches, and exam preparation wherever needed.
Justice may be blind, but it has bright eyes that gleam amidst the darkness of the celestial sphere. Constellations are imaginary shapes and forms seen while connecting stars together. Humanity notices in the skies what matters most to them: shapes associated with life, routine, wishes and values. It is no wonder there is a constellation dedicated to the symbol of justice. Libra is not the brightest constellation in the sky; it has no notable stars and can be hard to locate. Regardless, it remains a highly important symbol of the skies because of its position, its deep space objects and, above all, its relevance to humanity's culture.

ABOUT THE LIBRA CONSTELLATION.

Libra, as its Latin name implies, features a set of weighing scales. This makes it the only zodiacal constellation not to represent a living being. Libra is in the southern sky, but it can be seen in both the Northern and Southern hemispheres through most of the year. However, it peaks during June, right in the middle of Northern summer and Southern winter. Rather average, the Scales are the 29th largest constellation and occupy 1.30% of the celestial sphere, making them only slightly bigger than Gemini and Cancer. However, their size is not compensated by brightness: Libra lacks relevant stars to brag about, as none of them has a remarkable magnitude.

Its size and lack of stars would have made Libra relatively inconspicuous for a constellation, but it was blessed with a privileged position: it is a member of the zodiac. This means that from Earth's perspective, the sun and planets go through Libra frequently; in the sun's case, once a year. The downside of being on the ecliptic path is that Libra's visibility is diminished once a year: when the sun goes through it, the brightness of the mother star makes it relatively hard to find.

However, locating the Scales is relatively easy most of the time, even if it has no iconic star to its name. This is because it lies between two of the brightest stars in the sky: Spica from Virgo, and Antares from Scorpio. Finding them should be enough, as Libra is right in the middle. Other than the two fellow zodiacal constellations, Libra also shares borders with Centaurus, Serpens Caput, Ophiuchus, Hydra and Lupus. Likewise, it can be visible at latitudes between +65° and -90°.

MAJOR STARS IN THE LIBRA CONSTELLATION.

As previously mentioned, Libra does not count any of the brightest stars in the sky among its members. Regardless, the stars within its boundaries are not lacking in beauty or fascinating structures, making them worthy jewels in the dark. Out of all the important stars outlining Libra's shape, three stand above the rest: the three brightest ones. They form a distinctive triangle, the most visible element in Libra and the structure that, in people's imagination, holds both sides of the scale.

Zubeneschamali.
- Also known as Beta Librae or Lanx Borealis, Zubeneschamali is the brightest star in the Libra constellation, forming one point of the iconic triangle.
- It is calculated to be approximately 170 light-years away from the Solar System, and it is 130 times brighter than the Sun, five times as big, and twice as hot. While it hasn't been confirmed, some speculate it might have a companion.
- Zubeneschamali officially has a blue-white glow, but some scientists have reported it to have a green hue, which would make it the only green-tinted star in our sky.
- The long name of this star, Zubeneschamali, has roots in the Arabic phrase "al-zuban al-šamāliyya", meaning "The Northern Claw".
- Its alternative name, Lanx Borealis, means "The Northern Scale".

Zubenelgenubi.
- Also designated Alpha Librae or Lanx Australis, Zubenelgenubi is the second brightest star in the Libra constellation. It has been described as a multiple star system, with the brightest members being a binary star that outshines the rest.
- It is located 77 light-years away, and it has been considered a member of the "Castor Moving Group": a group of stars that move at similar velocities and may even share the same origins. As the name implies, other members of this group besides Zubenelgenubi are Castor from the Gemini constellation, Vega from Lyra, and Fomalhaut from Piscis Austrinus.
- Zubenelgenubi, etymologically, means "The Southern Claw", and its alternative Latin name means "The Southern Scale".

Brachium.
- This binary star also goes by the designation Sigma Librae, and it is the third brightest star in Libra and the final point of its iconic triangle. It is a distant star, approximately 288 light-years from Earth.
- Historically, this star was part of neighboring Scorpio; it was named Gamma Scorpii until the 19th century, and in 1930 its Libra designation was confirmed.
- The primary star of the binary is actually a red giant, while its companion is not visible.

HD 140283 ("Methuselah").
- Despite being barely visible, HD 140283 is one of the most important stars in our celestial sphere. Nicknamed "Methuselah", this star is one of the oldest ever known in the entire universe, with an estimated age of 14.46 billion years. That estimate is nominally older than the universe itself, which is possible only because the measurement carries an uncertainty of several hundred million years.
- Scientists estimate Methuselah was formed soon after the Big Bang.

LIBRA CONSTELLATION FACTS.

Constellations are far more than just stars: they are also useful ways to divide the sky into organized areas, thus helping the localization and recognition of multiple deep space objects. Libra has its fair share of fascinating elements and subsequent facts, all contained within its celestial frontiers.
- Libra didn't always get to be its own constellation: for a certain time, it used to be a part of the neighboring constellation Scorpio, specifically the claws. This remnant is noticeable in the names of the stars Zubeneschamali and Zubenelgenubi, considered the "claws" of the scorpion.
- Within Libra lies Gliese 581, a star about a third of the sun's size, merely 20 light-years away. The importance of Gliese 581 lies in its planetary system: some studies have concluded that the planets Gliese 581d and Gliese 581g might be habitable.
- Long ago, Libra used to be the location of the September equinox, when the Sun seemingly crosses the Equator downwards. However, the precession of the equinoxes has caused a change: now, the September equinox happens within Virgo.
- The sun crosses the Libra constellation once a year. During 2019, the event will take place between October 31 and November 23.
- As a zodiacal constellation, Libra is considered one of the twelve signs of the zodiac. According to astrology, those born between September 23 and October 22 fall under the sign of Libra. Even though those dates do not overlap with the sun's actual crossing of the constellation, astrology is a field independent from astronomy.

LIBRA CONSTELLATION MYTH AND HISTORY.

Libra is an anomaly as far as zodiacal constellations go: it is not associated with a particular myth. Babylonians considered Libra to be the scales of Shamash, god of Justice. Romans saw them as the scales held by the goddess of justice Astraea, also assumed to be the maiden in Virgo.
Libra is one of the 48 ancient constellations recognized by the Greco-Roman astronomer Ptolemy in his work, the Almagest, and it was eventually accepted as one of the official 88 constellations by the International Astronomical Union. In astrology, it is also one of the twelve signs said to influence, in multiple ways, the life and behavior of humans on Earth, depending on the time of birth. According to astrologers, Libra is an air sign ruled by Venus, and individuals born under Libra are diplomatic, fair, social, considerate and kind, while also being indecisive, fearful and prone to self-pity.

As the symbol of justice in the sky, Libra is truly a constellation of contrasts: despite its lack of bright stars, it remains a fascinating object of study thanks to its many enthralling elements. A balance that certainly matches the Heavenly Scales.

SEE MORE: ZODIAC CONSTELLATIONS

Sources:
- How to Find the Libra Constellation in the Night Sky, by Carolyn Collins Petersen, at ThoughtCo.
- Libra Constellation: Facts About the Scales, by Kim Ann Zimmermann, at Space.com.
- Libra Constellation, at Constellation Guide.
- Libra? Here's your constellation, by Bruce McClure, at EarthSky.
- Libra Zodiac Profile, at Horoscope.com.
- Strange 'Methuselah' Star Looks Older Than The Universe, by Mike Wall, at Space.com.
What can cause devastation four months after a volcanic eruption? Four months after Mt. Merapi in Indonesia erupted, there is more devastation. Rain continually washes ash downslope in massive ash flows. Residents are unable to live well under these conditions.

Landforms and Gravity

Gravity is responsible for erosion by flowing water and glaciers, because gravity pulls water and ice downhill. These are ways gravity causes erosion indirectly. But gravity also causes erosion directly: gravity can pull soil, mud, and rocks down cliffs and hillsides. This type of erosion and deposition is called mass wasting. It may happen suddenly, or it may occur very slowly, over many years.

Landslides are the most dramatic, sudden, and dangerous types of mass wasting. Landslides are sudden falls of rock; by contrast, avalanches are sudden falls of snow. Weathered material may fall away from a cliff because there is nothing to keep it in place. Rocks that fall to the base of a cliff make a talus slope. Sometimes as one rock falls, it hits another rock, which hits another rock, and begins a landslide.

A landslide can be very destructive (Figure below). It may bury or carry away entire villages. Air trapped under the falling rocks acts as a cushion that keeps the rock from slowing down. Landslides can move as fast as 200 to 300 km/hour. [Figure: This landslide in California in 2008 blocked Highway 140.]

Landslides often occur on steep slopes in dry or semi-arid climates. The California coastline, with its steep cliffs and years of drought punctuated by seasons of abundant rainfall, is prone to landslides. Wet soil becomes slippery and heavy. Earthquakes often trigger landslides; the shaking ground causes soil and rocks to break loose and start sliding.
- Rapid downslope movement of material is seen in this video: http://faculty.gg.uwyo.edu/heller/SedMovs/Sed%20Movie%20files/dflows.mov

Mudflows and Lahars

A mudflow is the sudden flow of mud down a slope because of gravity. Mudflows occur where the soil is mostly clay. Like landslides, mudflows usually occur when the soil is wet: wet clay forms very slippery mud that slides easily. Mudflows follow river channels, washing out bridges, trees, and homes that are in their path.
- A debris flow is seen in this video: http://faculty.gg.uwyo.edu/heller/SedMovs/Sed%20Movie%20files/Moscardo.mov

A lahar is a mudflow that flows down a composite volcano (Figure below). Ash mixes with snow and ice melted by the eruption to produce hot, fast-moving flows. The lahar caused by the eruption of Nevado del Ruiz in Colombia in 1985 killed more than 23,000 people. [Figure: A lahar is a mudflow that forms from volcanic ash and debris. This lahar comes off of Mount Saint Helens in Washington state.]

Slump and Creep

Less dramatic types of mass wasting move Earth materials slowly down a hillside. Slump is the sudden movement of large blocks of rock and soil down a slope (Figure below). All the material moves together in big chunks. Slumps may happen when a layer of slippery, wet clay is underneath the rock and soil on a hillside, or when a river (or road) undercuts a slope. Slump leaves behind crescent-shaped scars on the hillside. [Figure: Slump material moves as a whole unit, leaving behind a crescent-shaped scar.]

Creep is the very slow movement of rock and soil down a hillside. Creep occurs so slowly that you can't see it happening; you can only see the effects of creep after years of movement (Figure below).
The slowly moving ground causes trees, fence posts, and other structures on the surface to tilt downhill. As the hillside moves downslope, a tree tries to stand up straight; the tree ends up with a bent trunk.

Prevention and Awareness

Landslides cause $1 billion to $2 billion in damage in the United States each year. Mass wasting is responsible for traumatic and sudden loss of life and homes in many areas of the world. Some communities have developed landslide warning systems. Around San Francisco Bay, the National Weather Service and the U.S. Geological Survey use rain gauges to monitor soil moisture. If soil becomes saturated, the weather service issues a warning (a minimal sketch of such a threshold check appears after the review questions below). Earthquakes, which may occur on California's abundant faults, can also trigger landslides.

To be safe from landslides:
- Be aware of your surroundings. Notice changes in the natural world.
- Look for cracks or bulges in hillsides, tilting of decks or patios, or leaning poles or fences when rainfall is heavy. Sticking windows and doors can indicate ground movement, because soil pushes slowly against a house and knocks windows and doors out of alignment.
- Look for landslide scars. Landslides are most likely to happen where they have occurred before.
- Plant vegetation and trees on the hillside around your home. This helps hold the soil in place.
- Help to keep a slope stable by building retaining walls. Installing good drainage in a hillside may keep the soil from getting saturated.

Vocabulary
- creep: Exceptionally slow movement of soil downhill.
- lahar: Volcanic mudflow of pyroclastic material and water.
- landslide: Rapid movement downslope of rock and debris under the influence of gravity.
- mass wasting: Downslope movement of material due to gravity.
- mudflow: Saturated soil that flows down river channels.
- slump: Downslope slipping of a mass of soil or rock, generally along a curved surface.
- talus slope: Pile of angular rock fragments formed at the base of a cliff or mountain.

Summary
- Landslides are sudden and massive falls of rock down a slope. Landslides may be very destructive or even deadly. Slump and creep are slower types of mass wasting.
- Mudflows, and lahars (volcanic mudflows), are mass movements that contain a lot of water.
- Mass wasting is more likely to occur on slopes that are wet, have weak rock, or are undercut. An earthquake or other ground shaking can trigger a landslide.
- To avoid being in a landslide, be aware of signs in a hillside, such as cracks or bulges and old landslide scars. To keep a slope stable, install good drainage or build retaining walls.

Practice
Use the resources below to answer the questions that follow.
- Landslides at http://video.nationalgeographic.com/video/environment/environment-natural-disasters/landslides-and-more/landslides/
  - Where do landslides occur?
  - How many people are killed by landslides each year?
  - What can cause landslides to become more frequent?
- Creep at http://www.youtube.com/watch?v=l1jqDLiQXbs (1:18)
  - What is creep?
  - How do trees compensate for creep?

Review
- What factors make it more likely that a place will have a landslide?
- Pretend that you are going to buy a house on a hill. How do you know if the house might slide or creep down the hill?
- What can be done to prevent landslides?
- How is a landslide different from a mudflow? How is a lahar different from a mudflow?
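To make the rain-gauge idea above concrete, here is a minimal Python sketch of a saturation-based warning check. The threshold and gauge readings are invented for illustration; real systems like the San Francisco Bay pilot combine rainfall intensity, duration, and soil data in far more sophisticated ways.

```python
# Minimal sketch: issue a landslide advisory when soil-moisture
# readings from rain gauges exceed a saturation threshold.
# The threshold and readings below are illustrative, not real data.

SATURATION_THRESHOLD = 0.45  # hypothetical volumetric water content (45%)

def check_gauges(readings):
    """Return the gauge sites whose soil-moisture reading indicates saturation."""
    return [site for site, moisture in readings.items()
            if moisture >= SATURATION_THRESHOLD]

gauges = {"hillside_A": 0.31, "hillside_B": 0.47, "canyon_road": 0.52}
saturated = check_gauges(gauges)
if saturated:
    print("Landslide advisory for:", ", ".join(saturated))
else:
    print("Soil moisture below saturation at all gauges.")
```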
Ocean pollution is caused by numerous industrial and domestic activities, which include oil spills, garbage dumping, sewage and factory-waste disposal and the use of toxic pesticides. These activities pollute the oceans through drains, rivers and direct dumping.

The effects of ocean pollution include an interruption in the reproductive cycle of animals; injury, illness and death in marine life; disruption of photosynthesis; and illnesses in humans. The ocean's adaptation methods, such as its mechanism to absorb carbon dioxide from the atmosphere, are insufficient.

The Mediterranean Sea is the most polluted ocean in the world because it is almost entirely surrounded by land. The sea is heavily trafficked and fished by the 19 nations that rely on it for food and income.

Articles written about ocean pollution are intended to raise awareness about this issue and to promote ecological conservation efforts to reverse pollution trends. According to the activist group The Ocean Project, increased awareness of ocean pollution has been shown to change public attitudes.

Americans produced 250 million tons of garbage in the form of grass clippings, bottles, furniture, batteries and other items in 2010, according to Live Science. Landfills comprise 54 percent of all waste. Around 34 percent of waste is recycled or turned into compost, and 12 percent is used for combustion.

Pollutants are high concentrations of toxic chemicals found in the environment. They are generally introduced into the ecosystem through the air, water or soil, and they have the ability to cause great harm to the environment and people's health.

Oceans cover the majority of the Earth's surface, which is why the Earth appears blue from outer space. Oceans cover over 129 million square miles of Earth, which is about 65.7 percent of the Earth's surface. The deepest part of the oceans can reach a depth of over 6 miles, and the ocean contains 97 percent of the Earth's water.

As a system of interconnected bodies of salt water, oceans cover 70 percent of the earth's surface and hold 97 percent of the planet's water supply. The five major oceans of the world are the Pacific Ocean, the Atlantic Ocean, the Indian Ocean, the Southern Ocean and the Arctic Ocean.

Oceans lay claim to about 70 percent of the surface of the earth, and ocean life accounts for 94 percent of living things. Despite this, scientists have yet to explore most of the ocean, as its average depth surpasses 12,400 feet and it is mostly cloaked in darkness.

Only about 30 percent of the Earth's surface is dry land; oceans make up the remaining 70 percent. Earth's oceans are home to hundreds of thousands of marine life forms, but because most of the ocean depths remain unexplored, hundreds of thousands or even millions of unclassified life forms may exist.
DENVER—Most magnets shrug off tiny temperature tweaks. But now physicists have created a new nanomaterial that dramatically changes how easily it flips its magnetic orientation when heated or cooled only slightly. The effect, never before seen in any material, could eventually lead to new types of computer memory.

Materials become magnetized when their internal magnetic grains, which usually point in different directions, align in a strong enough magnetic field. How much a material's grains resist aligning is known as its coercivity. A familiar bar magnet, for example, has high coercivity, with its typically constant north-south poles. Other substances, such as iron and nickel, have low coercivity, meaning they can change their orientations more easily.

Coercivity isn't just about a magnet's composition: it also depends on its temperature. Usually, a magnet's coercivity changes gradually as its temperature rises or falls. But the new nanomaterial shows this isn't always true.

To make the material, a team led by physicist Ivan Schuller at the University of California, San Diego, deposited an ultrathin 10-nanometer layer of nickel onto a 100-nanometer-thick wafer of a substance called vanadium oxide. The scientists then cooled the mixture and ramped up a magnetic field until the nickel's grains started to flip. This process allowed the scientists to measure the material's coercivity at temperatures down to negative 153°C.

After most temperature changes, the material's coercivity budged only slightly. But between negative 88°C and negative 108°C, its coercivity jumped fivefold, making it much more resistant to changing its magnetic orientation. Its coercivity then plummeted to half its maximum value as the scientists further lowered the temperature to negative 123°C, meaning the material's grains again became easier to flip.

The dramatic spike in coercivity—far larger than that seen in any other material over a similar temperature range—excited the researchers, who reported it at this week's American Physical Society meeting in Denver. The work also appears in Applied Physics Letters. "This is what we physicists like to do—look at things that are huge effects, that are fantastic effects," Schuller says.

Even though nickel has the flippable magnetic grains, Schuller thinks a change in vanadium oxide's internal structure is what causes the combined material's coercivity spike. Vanadium oxide's atoms take on one arrangement above negative 88°C and another below negative 123°C. Between the two temperatures, however, the material contains blocks with both arrangements. That mixed structure makes it harder for the overlying nickel's grains to flip en masse, Schuller says.

While potential applications are still a ways off, Schuller thinks his team's finding could someday lead to a new kind of temperature-controlled computer memory. Computers encode information in tiny magnetic components, and to be stable these components must not realign easily. But the magnets must also be able to flip quickly under certain conditions, so that memory can be rewritten. Schuller envisions that a hard drive based on his finding would keep its memory elements at a high-coercivity temperature most of the time, and heat them slightly for rewriting. This would be a huge improvement over current heat-assisted magnetic recording devices, whose elements must be heated hundreds of degrees by laser.
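As a toy illustration of the measurement the team reported, the Python sketch below scans a series of (temperature, coercivity) pairs for the window where coercivity rises well above its baseline. The numbers are invented to mirror the qualitative behavior described above (a roughly fivefold spike near negative 108°C to negative 88°C, falling to half the maximum by negative 123°C); they are not the paper's actual data.

```python
# Toy illustration: locate a coercivity spike in temperature-series data.
# Values are invented to mimic the qualitative behavior described in the
# article; they are NOT the published measurements.

data = [  # (temperature in °C, coercivity in arbitrary units)
    (-143, 1.0), (-133, 1.1), (-123, 2.5), (-113, 4.8),
    (-108, 5.0), (-98, 5.2), (-88, 5.1), (-78, 1.2), (-68, 1.0),
]

baseline = min(hc for _, hc in data)
# Flag points whose coercivity is at least triple the baseline value.
spike = [(t, hc) for t, hc in data if hc >= 3 * baseline]

if spike:
    t_low, t_high = spike[0][0], spike[-1][0]
    peak = max(hc for _, hc in spike)
    print(f"Coercivity spike between {t_low}°C and {t_high}°C, "
          f"peaking at {peak / baseline:.1f}x the baseline.")
```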
“I was a bit surprised” at the large coercivity spike the team found, says Dan Dahlberg, a physicist at the University of Minnesota, Twin Cities. “It’s not what you would expect.” But Dahlberg is not excited enough about the material to start studying it himself. He notes that any viable heat-assisted memory technology would need to operate near room temperature, not at the frigid temperature where the coercivity of Schuller’s material spikes. (Schuller says his team has produced a material with a coercivity spike closer to room temperature and is planning to publish this result.) Still, Dahlberg says, it can be hard to predict whether a new discovery will lead to applications down the road. “To say something will never end up in technology … is a very dangerous thing.”
THE DESCENDANTS OF DINOSAURS LIVE ON.

We have known for a long time that thousands upon thousands of dinosaurs still roam the Earth: birds. While creatures like the pterodactyl are often pictured ruling prehistoric skies (though pterosaurs were flying reptiles, not dinosaurs), the real avian dinosaurs are the birds flying in our skies today, and there is another impact of dinosaurs you can find in your backyard. Researchers just discovered that the colors and decorations of bird eggs are inherited directly from dinosaurs. Surprisingly, however, it wasn't the avian dinosaurs that came up with this evolutionarily ingenious idea of camouflaging and identifying your brood. In fact, scientists discovered that it was a branch of non-avian (non-birdlike and non-flying) dinosaurs that began this practice: the oviraptorids.

What is an oviraptorid? Oviraptorids were a branch of feathered theropod dinosaurs, relatives of (though distinct from) the raptors made popular by Jurassic Park, such as the velociraptor. It turns out that oviraptorids were among the first to build partially open nests instead of burying their eggs in the ground. While burying your eggs, as present-day crocodiles do, can give protection, it also leads to a problem: real estate. By creating partially open, or fully open, nests, these oviraptorids gave themselves more places to start a family.

The downside of an open nest. While an open or semi-open nest gave these dinosaurs more places to nest, it also presented a major problem: predators. Eggs are rich in proteins and fat. They are the perfect meal for a hungry predator. In an open or semi-open nest, those eggs are like a neon sign flashing "free meal today" to other hungry dinosaurs.

The start of egg camouflage. To prevent predators from preying on their eggs, oviraptorids evolved their eggs. Where common crocodile eggs, and other buried eggs, are white, theirs were not: they contained a variety of colors and patterns. Scientists saw these colors and patterns by examining dinosaur egg fossils with a very special type of microscope. They found that the pigments are present at different depths of the eggshell and were often used for camouflage and egg identification. After all, just like humans, dinosaurs likely cared which babies were their genetic brood.

Dinosaur eggs in your backyard. You can find evidence of this camouflage and spotting in your own backyard or at the grocery store. Brown chicken eggs contain the same pigments that those oviraptorid eggs did, as do the robin eggs you might find in your backyard. In fact, it is believed that all egg colors and patterns diverged from the oviraptorids. So the next time you see any sort of colorful egg, you can thank those scary dinosaurs.
This page uses content from Wikipedia and is licensed under CC BY-SA.

Native to: Hungary and areas of eastern Austria, Croatia, Poland, Romania, northern Serbia, Slovakia, Slovenia and western Ukraine
Native speakers: 13 million (2002–2012)
Writing system: Latin (Hungarian alphabet), Hungarian Braille, Old Hungarian script
Official language in: Hungary, Vojvodina, the European Union
Regulated by: Research Institute for Linguistics of the Hungarian Academy of Sciences

Hungarian (magyar nyelv) is a Finno-Ugric language spoken in Hungary and parts of several neighbouring countries. It is the official language of Hungary and one of the 24 official languages of the European Union. Outside Hungary it is also spoken by communities of Hungarians in the countries that today make up Slovakia, western Ukraine (Subcarpathia), central and western Romania (Transylvania), northern Serbia (Vojvodina), northern Croatia and northern Slovenia (Mur region). It is also spoken by Hungarian diaspora communities worldwide, especially in North America (particularly the United States and Canada) and Israel. Like Finnish and Estonian, Hungarian belongs to the Uralic language family, and with 13 million speakers it is the family's largest member.

Hungarian is a member of the Uralic language family. Linguistic connections between Hungarian and other Uralic languages were noticed in the 1670s, and the family itself (then called Finno-Ugric) was established in 1717. Hungarian has traditionally been assigned to the Ugric branch within the Finno-Ugric group, along with the Mansi and Khanty languages of western Siberia (the Khanty–Mansia region), but it is no longer clear that Ugric is a valid group. When the Samoyed languages were determined to be part of the family, it was thought at first that Finnic and Ugric (Finno-Ugric) were closer to each other than to the Samoyed branch of the family, but that is now frequently questioned.

The name of Hungary could be a result of regular sound changes of Ungrian/Ugrian, and the fact that the Eastern Slavs referred to Hungarians as Ǫgry/Ǫgrove (sg. Ǫgrinŭ) seemed to confirm that. Current literature favors the hypothesis that it comes from the name of the Turkic tribe Onogur (which means "ten arrows" or "ten tribes").

There are numerous regular sound correspondences between Hungarian and the other Ugric languages. For example, Hungarian /aː/ corresponds to Khanty /o/ in certain positions, and Hungarian /h/ corresponds to Khanty /x/, while Hungarian final /z/ corresponds to Khanty final /t/. For example, Hungarian ház [haːz] "house" vs. Khanty xot [xot] "house", and Hungarian száz [saːz] "hundred" vs. Khanty sot [sot] "hundred" (a toy demonstration of these correspondences appears just after this passage). The distance between the Ugric and Finnic languages is greater, but the correspondences are also regular.

The traditional view holds that the Hungarian language diverged from its Ugric relatives in the first half of the 1st millennium BC, in western Siberia east of the southern Urals. The Hungarians gradually changed their lifestyle from settled hunters to nomadic pastoralists, probably as a result of early contacts with Iranian nomads (Scythians and Sarmatians). In Hungarian, Iranian loanwords date back to the time immediately following the breakup of Ugric and probably span well over a millennium. These include tehén 'cow' (cf. Avestan dhaénu); tíz 'ten' (cf. Avestan dasa); tej 'milk' (cf. Persian dáje 'wet nurse'); and nád 'reed' (from late Middle Iranian; cf. Middle Persian nāy).
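To illustrate what "regular correspondence" means in practice, here is a small Python sketch that applies the three correspondences just cited as mechanical rewrite rules to the two cited words. It is a toy: real historical phonology works on reconstructed forms with many more rules and conditions, and the spelling substitution for sz merely reflects that Hungarian ⟨sz⟩ spells /s/.

```python
# Toy demo of the regular sound correspondences cited above:
# Hungarian á ~ Khanty o, Hungarian h ~ Khanty x, final z ~ final t.

def khanty_cognate_guess(word):
    word = word.replace("sz", "s")   # orthography: Hungarian sz = /s/
    word = word.replace("h", "x")    # Hungarian /h/ ~ Khanty /x/
    word = word.replace("á", "o")    # Hungarian /aː/ ~ Khanty /o/
    if word.endswith("z"):
        word = word[:-1] + "t"       # final /z/ ~ final /t/
    return word

for hu, attested in [("ház", "xot"), ("száz", "sot")]:
    print(f"{hu} -> {khanty_cognate_guess(hu)} (attested: {attested})")
```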
Archaeological evidence from present-day southern Bashkortostan confirms the existence of Hungarian settlements between the Volga River and the Ural Mountains. The Onogurs (and Bulgars) later had a great influence on the language, especially between the 5th and 9th centuries. This layer of Turkic loans is large and varied (e.g. szó "word", from Turkic; and daru "crane", from the related Permic languages), and includes words borrowed from Oghur Turkic, e.g. borjú "calf" (cf. Chuvash păru, părăv vs. Turkish buzağı) and dél 'noon; south' (cf. Chuvash tĕl vs. Turkish dialectal düš). Many words related to agriculture, state administration and even family relationships show evidence of such backgrounds. Hungarian syntax and grammar were not influenced in a similarly dramatic way over these three centuries.

After the arrival of the Hungarians in the Carpathian Basin, the language came into contact with a variety of speech communities, among them Slavic, Turkic, and German. Turkic loans from this period come mainly from the Pechenegs and Cumanians, who settled in Hungary during the 12th and 13th centuries: e.g. koboz "cobza" (cf. Turkish kopuz 'lute') and komondor "mop dog" (< *kumandur < Cuman). Hungarian borrowed many words from neighbouring Slavic languages, e.g. tégla 'brick', mák 'poppy' and karácsony 'Christmas'. These languages in turn borrowed words from Hungarian, e.g. Serbo-Croatian ašov from Hungarian ásó 'spade'. About 1.6 percent of the Romanian lexicon is of Hungarian origin.

There have been attempts to show that Hungarian is related to other languages, such as Hebrew, Hunnic, Sumerian, Egyptian, Etruscan, Basque, Persian, Pelasgian, Greek, Chinese, Sanskrit, English, Tibetan, Magar, Quechua, Armenian, Japanese, and at least 40 other languages. Mainstream linguists dismiss these attempts as pseudoscientific comparisons with no merit.

The classification of Hungarian as a Uralic/Finno-Ugric rather than a Turkic language continued to be a matter of impassioned political controversy throughout the 18th and into the 19th centuries. During the latter half of the 19th century, a competing hypothesis proposed a Turkic affinity of Hungarian or, alternatively, that both the Uralic and the Turkic families formed part of a superfamily of Ural–Altaic languages. Following an academic debate known as Az ugor-török háború ("the Ugric-Turkic war"), the Finno-Ugric hypothesis was concluded to be the sounder of the two, mainly based on work by the German linguist Josef Budenz. Hungarians did in fact absorb some Turkic influences during several centuries of cohabitation. For example, the Hungarians appear to have learned animal husbandry techniques from the Turkic Chuvash people, as a high proportion of words specific to agriculture and livestock are of Chuvash origin. A strong Chuvash influence was also apparent in Hungarian burial customs.

The first written accounts of Hungarian, mostly personal names and place names, date to the 10th century. No significant texts written in Old Hungarian script have survived, because wood, the medium of writing in use at the time, was perishable. The Kingdom of Hungary was founded in 1000 by Stephen I. The country became a Western-styled Christian (Roman Catholic) state, with Latin script replacing Hungarian runes. The earliest remaining fragments of the language are found in the establishing charter of the abbey of Tihany from 1055, intermingled with Latin text. The first extant text fully written in Hungarian is the Funeral Sermon and Prayer, which dates to the 1190s.
Although the orthography of these early texts differed considerably from that used today, contemporary Hungarians can still understand a great deal of the reconstructed spoken language, despite changes in grammar and vocabulary. A more extensive body of Hungarian literature arose after 1300. The earliest known example of Hungarian religious poetry is the 14th-century Lamentations of Mary. The first Bible translation was the Hussite Bible in the 1430s. The standard language lost its diphthongs, and several postpositions transformed into suffixes, including reá "onto" (the phrase utu rea "onto the way" found in the 1055 text would later become útra). There were also changes in the system of vowel harmony. At one time, Hungarian used six verb tenses, while today only two or three are used.

In 1533, the Kraków printer Benedek Komjáti published the first Hungarian-language book set in movable type, a translation of the letters of Saint Paul entitled Az zenth Paal leueley magyar nyeluen (modern orthography: Az Szent Pál levelei magyar nyelven). By the 17th century, the language already closely resembled its present-day form, although two of the past tenses remained in use. German, Italian and French loans also began to appear. Further Turkish words were borrowed during the period of Ottoman rule (1541 to 1699).

In the 18th century a group of writers, most notably Ferenc Kazinczy, spearheaded a process of nyelvújítás (language revitalization). Some words were shortened (győzedelem > győzelem, 'triumph' or 'victory'); a number of dialectal words spread nationally (e.g. cselleng 'dawdle'); extinct words were reintroduced (dísz, 'décor'); a wide range of expressions were coined using the various derivative suffixes; and some other, less frequently used methods of expanding the language were utilized. This movement produced more than ten thousand words, most of which are used actively today.

Hungarian speakers by country (selected census figures):
- Romania (mainly Transylvania): 1,268,444 (2011)
- Serbia (mainly Vojvodina): 241,164 (2011)
- Ukraine (mainly Zakarpattia): 149,400 (2001)
- Austria (mainly Burgenland): 22,000
- Slovenia (mainly Prekmurje): 9,240

Hungarian has about 13 million native speakers, of whom more than 9.8 million live in Hungary. According to the 2011 Hungarian census, 9,896,333 people (99.6% of the total population) speak Hungarian, of whom 9,827,875 (98.9%) speak it as a first language, while 68,458 (0.7%) speak it as a second language. About 2.2 million speakers live in other areas that were part of the Kingdom of Hungary before the Treaty of Trianon (1920). Of these, the largest group lives in Transylvania, the western half of present-day Romania, where there are approximately 1.25 million Hungarians. There are also large Hungarian communities in Slovakia, Serbia and Ukraine, and Hungarians can also be found in Austria, Croatia, and Slovenia, as well as about a million additional people scattered in other parts of the world. For example, there are more than one hundred thousand Hungarian speakers in the Hungarian American community and 1.5 million people with Hungarian ancestry in the United States.

Hungarian is the official language of Hungary, and thus an official language of the European Union. Hungarian is also one of the official languages of Vojvodina and an official language of three municipalities in Slovenia: Hodoš, Dobrovnik and Lendava, along with Slovene.
Hungarian is officially recognized as a minority or regional language in Austria, Croatia, Romania, Zakarpattia in Ukraine, and Slovakia. In Romania it is a recognized minority language used at the local level in communes, towns and municipalities where the ethnic Hungarian population exceeds 20%.

The dialects of Hungarian identified by Ethnologue are: Alföld, West Danube, Danube-Tisza, King's Pass Hungarian, Northeast Hungarian, Northwest Hungarian, Székely and West Hungarian. These dialects are, for the most part, mutually intelligible. The Hungarian Csángó dialect, which is mentioned but not listed separately by Ethnologue, is spoken primarily in Bacău County in eastern Romania. The Csángó Hungarian group has been largely isolated from other Hungarian people, and has therefore preserved features that closely resemble earlier forms of Hungarian.

Hungarian has 14 vowel phonemes and 25 consonant phonemes. The vowel phonemes can be grouped as pairs of short and long vowels, such as o and ó. Most of the pairs have a similar pronunciation and vary significantly only in their duration. However, the pairs a/á and e/é differ both in closedness and length.

The voiced palatal plosive /ɟ/, written ⟨gy⟩, sounds similar to 'd' in British English 'duty'. It occurs in the name of the country, "Magyarország" (Hungary), pronounced /ˈmɒɟɒrorsaːɡ/. It is one of several palatalised consonants, the others being ⟨ty⟩, ⟨ny⟩, and (historically) ⟨ly⟩.

Primary stress is always on the first syllable of a word, as in Finnish and the neighbouring Slovak and Czech. There is a secondary stress on other syllables in compounds: viszontlátásra ("goodbye") is pronounced /ˈvisontˌlaːtaːʃrɒ/. Elongated vowels in non-initial syllables may seem stressed to an English speaker, since length and stress correlate in English.

Hungarian uses vowel harmony to attach suffixes to words. That means that most suffixes have two or three different forms, and the choice between them depends on the vowels of the head word (a simplified sketch of this selection appears below). There are some minor and unpredictable exceptions to the rule.

Nouns have 18 cases, which are formed regularly with suffixes. The nominative case is unmarked (az alma 'the apple') and, for example, the accusative is marked with the suffix -t (az almát '[I eat] the apple'). Half of the cases express a combination of the source-location-target and surface-inside-proximity ternary distinctions (three times three cases); there is, for instance, a separate case ending -ból/-ből meaning a combination of source and insideness: 'from inside of'.

Possession is expressed by a possessive suffix on the possessed object, rather than on the possessor as in English (Peter's apple becomes Péter almája, literally 'Peter apple-his'). Noun plurals are formed with -k (az almák 'the apples'), but after a numeral the singular is used (két alma 'two apples', literally 'two apple'; not *két almák).

Unlike English, Hungarian uses case suffixes and nearly always postpositions instead of prepositions. There are two types of articles in Hungarian, definite and indefinite, which roughly correspond to their English equivalents.

Adjectives precede nouns (a piros alma 'the red apple') and have three degrees: positive (piros 'red'), comparative (pirosabb 'redder') and superlative (a legpirosabb 'the reddest'). If the noun takes the plural or a case, an attributive adjective is invariable: a piros almák 'the red apples'. However, a predicative adjective agrees with the noun: az almák pirosak 'the apples are red'.
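To make the suffix-selection idea concrete, here is a deliberately simplified Python sketch of two-form vowel harmony, using the inessive pair -ban/-ben ('in'). Real Hungarian harmony also involves rounding, neutral vowels, and lexical exceptions, so treat this as a toy model rather than a complete rule.

```python
# Toy model of Hungarian two-form vowel harmony for the pair -ban/-ben.
# Simplification: a word takes the back form -ban if it contains any
# back vowel, otherwise the front form -ben. Neutral-vowel words and
# exceptions are ignored here.

BACK_VOWELS = set("aáoóuú")
FRONT_VOWELS = set("eéiíöőüű")

def inessive(word):
    """Attach the 'in' suffix: -ban after back-vowel words, -ben otherwise."""
    if any(ch in BACK_VOWELS for ch in word.lower()):
        return word + "ban"
    return word + "ben"

print(inessive("ház"))     # házban    'in a house'  (back vowel á)
print(inessive("Szeged"))  # Szegedben 'in Szeged'   (front vowels only)
```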
Adjectives by themselves can behave as nouns (and so can take case suffixes): Melyik almát kéred? – A pirosat. 'Which apple would you like? – The red one'.

Verbs are conjugated according to two tenses (past and present), three moods (indicative, conditional and imperative-subjunctive), two numbers (singular or plural), three persons (first, second and third) and definiteness. The last feature is the most characteristic: the definite conjugation is used with a transitive verb whose (direct) object is definite (Péter eszi az almát. "Peter eats the apple."), while the indefinite conjugation is used either for a verb with an indefinite direct object (Péter eszik egy almát. "Peter eats an apple.") or for a verb without an object (Péter eszik. "Peter eats."). Since the conjugation expresses person and number, personal pronouns are usually omitted except for emphasis.

The present tense is unmarked, and the past is formed using the suffix -t or -tt: hall 'hears'; hallott 'heard'. The future may be expressed with the present tense (usually with a word defining the time of the event: holnap 'tomorrow') or by using the auxiliary verb fog (similar to the English 'will') followed by the infinitive.

The indicative mood and the conditional mood are used in both the present and the past tenses. The conditional past is expressed using the conjugated past form and the auxiliary word volna (hallott volna 'would have heard'). The imperative mood is used only in the present tense.

Verbs have verbal prefixes, also known as coverbs. Most of them define direction of movement: lemegy "goes down", felmegy "goes up". Some verbal prefixes give an aspect to the verb, such as the prefix meg-, which generally marks telicity.

Vowel harmony also plays a major role in verb conjugation. All Hungarian verb conjugations (as well as postpositions and possessive suffixes, for that matter) can be thought of as 'templates' into which vowels are inserted. Based on the nature of a verb's infinitive, which always ends in -ni, one can create a generic 'template' consisting mostly of consonants. The vowels are then inserted into the template according to the rules of vowel harmony, based on the categorization of the vowels in the verb root (front, back, rounded, unrounded).

The neutral word order is subject–verb–object (SVO). However, Hungarian is a topic-prominent language, and so has a word order that depends not only on syntax but also on the topic-comment structure of the sentence (for example, what aspect is assumed to be known and what is emphasized).

A Hungarian sentence generally has the following order: topic, comment (or focus), verb and the rest. The topic shows that the proposition is only for that particular thing or aspect, and it implies that the proposition is not true for some others. For example, in "Az almát János látja." ('It is John who sees the apple.' Literally 'The apple John sees.'), the apple is in the topic position, implying that other objects may be seen not by him but by other people (the pear may be seen by Peter). The topic part may be empty.

The focus shows the new information for the listeners, information that may not have been known to them or that corrects their existing knowledge. For example, in "Én vagyok az apád." ('I am your father.' Literally, 'It is I who am your father.'), from the movie The Empire Strikes Back, the pronoun I (én) is in the focus and implies that this is new information: the listener thought that someone else was his father.
Although Hungarian is sometimes described as having free word order, different word orders are generally not interchangeable, and the neutral order is not always correct to use. The intonation is also different with different topic-comment structures: the topic usually has a rising intonation, the focus a falling intonation.

Hungarian has a four-tiered system for expressing levels of politeness. From highest to lowest:
- Ön (önözés): the most formal, official level of address.
- Maga (magázódás): formal but more personal; felt by many speakers to be old-fashioned.
- Néni/bácsi (tetszikezés): a polite form addressed mainly to older people.
- Te (tegeződés): the informal, familiar form.

The four-tiered system has been somewhat eroded due to the recent expansion of tegeződés. Some anomalies emerged with the arrival of multinational companies, who have addressed their customers in the te (least polite) form right from the beginning of their presence in Hungary. A typical example is the Swedish furniture shop IKEA, whose website and other publications address customers in the te form. When a news site asked IKEA (using the te form) why they address their customers this way, IKEA's PR manager explained in his answer (using the ön form) that their way of communication reflects IKEA's open-mindedness and Swedish culture. However, IKEA in France uses the polite (vous) form. Another example is the communication of Telenor (a mobile network operator) towards its customers: Telenor chose to communicate with business customers in the polite ön form, while all other customers are addressed in the less polite te form.

A word-bush built on the root ad ('give'):
- adó: tax, or transmitter
- adózik: to pay tax
- adakozik: to give (practise charity)
With verbal prefixes:
- átad: to hand over
- bead: to hand in
- felad: to give up, to mail
- hozzáad: to augment, to add to
- kiad: to rent out, to publish, to extradite
- lead: to lose weight, to deposit (an object)
- megad: to repay (debt), to call (poker), to grant (permission)
- összead: to add (to do mathematical addition)

During the first early phase of the Hungarian language reforms (late 18th and early 19th centuries) more than ten thousand words were coined, several thousand of which are still actively used today (see also Ferenc Kazinczy, the leading figure of the reforms). Kazinczy's chief goal was to replace existing words of German and Latin origin with newly created Hungarian words. As a result, Kazinczy and his later followers (the reformers) significantly reduced the formerly high ratio of words of Latin and German origin in the Hungarian language, in fields such as the social sciences, the natural sciences, politics and economics, institutional names, fashion, and so on.

Giving an accurate estimate of the total word count is difficult, since it is hard to define what counts as "a word" in agglutinating languages, due to the existence of affixed words and compound words. To obtain a meaningful definition of compound words, we have to exclude compounds whose meaning is the mere sum of their elements. The largest dictionaries giving translations from Hungarian to another language contain 120,000 words and phrases (though this may include redundant phrases as well, because of translation issues). The new desk lexicon of the Hungarian language contains 75,000 words, and the Comprehensive Dictionary of the Hungarian Language (to be published in 18 volumes over the next twenty years) is planned to contain 110,000 words. The default Hungarian lexicon is usually estimated to comprise 60,000 to 100,000 words.
(Independently of the specific language, speakers actively use at most 10,000 to 20,000 words, with an average intellectual using 25,000 to 30,000.) However, all the Hungarian lexemes collected from technical texts, dialects and so on would total up to 1,000,000 words. Parts of the lexicon can be organized using word-bushes (see the ad- word bush above): the words in a bush share a common root, are related through inflection, derivation and compounding, and are usually broadly related in meaning.

The basic vocabulary shares several hundred word roots with other Uralic languages such as Finnish, Estonian, Mansi and Khanty. Examples are the verb él 'live' (Finnish elää) and the numbers kettő (2), három (3), négy (4) (cf. Mansi китыг kitig, хурум khurum, нила nila; Finnish kaksi, kolme, neljä; Estonian kaks, kolm, neli), as well as víz 'water', kéz 'hand', vér 'blood', fej 'head' (cf. Finnish and Estonian vesi, käsi, veri; Finnish pää, Estonian pea or pää). Words for elementary kinship and nature are more Ugric, less r-Turkic and less Slavic. Agricultural words are about 50% r-Turkic and 50% Slavic; pastoral terms are more r-Turkic, less Ugric and less Slavic. Finally, Christian and state terminology is more Slavic and less r-Turkic. The Slavic component most probably derives from proto-Slovak and/or proto-Slovene. This pattern is readily understood within the Uralic paradigm: the proto-Magyars were at first similar to the Ob-Ugric peoples, who were mainly hunters, fishers and gatherers but also kept some horses. They then acculturated to the Bulgar r-Turks, so the older layer of agricultural words (wine, beer, wheat, barley, etc.) is purely r-Turkic, as are many terms of statecraft and religion. Except for a few Latin and Greek loan-words, these differences go unnoticed even by native speakers; the words have been entirely adopted into the Hungarian lexicon. There is an increasing number of English loan-words, especially in technical fields. Another source differs in holding that loanwords constitute about 45% of the bases in the language. Although the lexical percentage of native words is 55%, their use accounts for 88.4% of all words used (the percentage of loanwords in use being just 11.6%). The history of Hungarian has therefore come, especially since the 19th century, to favor neologisms coined from native bases, while still having developed many terms from neighboring languages.

Words can be compounds or derived. Most derivation is with suffixes, but there is a small set of derivational prefixes as well. Compounds have been present in the language since the Proto-Uralic era, and numerous ancient compounds were transformed into base words over the centuries. Today, compounds play an important role in the vocabulary; a good example is the word arc 'face', which began as the compound orca (orr 'nose' + szá(j) 'mouth'). Compounds are made up of two base words: the first is the front member (the prefix) and the second the final member (the suffix). A compound can be subordinative, in which the prefix stands in a logical relation to the suffix. If the prefix is the subject of the suffix, the compound is generally classified as a subjective one; there are objective, determinative, and adjunctive compounds as well.
According to current orthographic rules, a subordinative compound word must be written as a single word, without spaces; however, if a compound of three or more words (not counting one-syllable verbal prefixes) is seven or more syllables long (not counting case suffixes), a hyphen must be inserted at the appropriate boundary to help the reader determine word boundaries. Other compound words are coordinative, with no concrete relation between the prefix and the suffix. Subcategories include reduplication (to emphasise the meaning: olykor-olykor 'really occasionally'); twin words, in which a base word and a distorted form of it make up a compound (gizgaz, where the suffix gaz means 'weed' and the prefix giz is its distorted form; the compound itself means 'inconsiderable weed'); and compounds that have a meaning even though neither their prefix nor their suffix makes sense on its own (for example, hercehurca 'complex, obsolete procedures'). A compound can also be made up of more than two base words: in this case at least one of its elements, or even both the prefix and the suffix, is itself a compound.

There are two basic words for "red" in Hungarian: piros and vörös (variant: veres; compare Estonian verev and Finnish punainen). (They are basic in the sense that one is not a sub-type of the other, as the English "scarlet" is of "red".) The word vörös is related to vér 'blood' (Finnish and Estonian veri). When they refer to an actual difference in colour (as on a colour chart), vörös usually refers to the deeper (darker and/or more red and less orange) hue of red; similar differences exist in English between "scarlet" and "red". Although many languages have multiple names for this colour, Hungarian scholars often assume Hungarian is unique in recognizing two shades of red as separate and distinct "folk colours". The two words are also used independently of the above in collocations. Piros is learned by children first, as it is generally used to describe inanimate, artificial things, or things seen as cheerful or neutral, while vörös typically refers to animate or natural things (biological, geological, physical and astronomical objects), as well as to serious or emotionally charged subjects. When the rules outlined above conflict, typical collocations usually prevail. In some cases where a typical collocation does not exist, the use of either word may be equally adequate.

The Hungarian words for brothers and sisters are differentiated by relative age. There is also a general word for "sibling": testvér, from test 'body' and vér 'blood', i.e. originating from the same body and blood. (There used to be a separate word for "elder sister", néne, but it has become obsolete (except in some dialects, where it means "aunt") and has been replaced by the generic word for "sister".) In addition, there are separate prefixes for several generations of ancestors and descendants; the descendant series runs: gyerek 'child', unoka 'grandchild', dédunoka 'great-grandchild', ükunoka 'great-great-grandchild', szépunoka (or ük-ükunoka), óunoka (or ük-ük-ükunoka). The words for "boy" and "girl" take possessive suffixes, and the resulting terms are differentiated by declension or by distinct lexemes: the possessive 'his/her son' is the irregular fia. Fia is used only in this irregular possessive form; it has no nominative of its own (see inalienable possession).
However, the word fiú 'boy' can also take the regular suffix, in which case the resulting word (fiúja) refers to a lover or partner (boyfriend) rather than a male offspring. The word fiú is also often cited as an extreme example of the language's ability to add suffixes to a word: by adding vowel-form suffixes alone one can form fiaiéi, still a reasonably frequent word:
- fiáé – his/her son's (singular object)
- fiáéi – his/her son's (plural object)
- fiaié – his/her sons' (singular object)
- fiaiéi – his/her sons' (plural object)

The morphemes of megszentségteleníthetetlenségeskedéseitekért ('for your [plural] repeated behaving as though you could not be desecrated'), often considered the longest word in Hungarian, are as follows (the suffix -ség occurs twice in the word):
- meg- – verb prefix; in this case, it means "completed"
- szent – holy (the word root)
- -ség – like English "-ness", as in "holiness"
- -t(e)len – variant of "-tlen", noun suffix expressing the lack of something; like English "-less", as in "useless"
- -ít – forms a transitive verb from an adjective
- -het – expresses possibility; somewhat similar to the English modal verbs "may" or "can"
- -(e)tlen – another variant of "-tlen"
- -es – forms an adjective from a noun; like English "-y" as in "witty"
- -ked – attached to an adjective (e.g. "strong"), produces the verb "to pretend to be (strong)"
- -és – forms a noun from a verb; English does this in various ways, e.g. with "-ance" in "acceptance"
- -eitek – plural possessive suffix, second-person plural (e.g. "apple" → "your apples", where "your" refers to multiple people)
- -ért – approximately translates to "because of", or in this case simply "for"

This word is often considered the longest in Hungarian, although even longer words can be coined. Words of such length are not used in practice, but when spoken they are easily understood by natives. They were invented to show, in a somewhat facetious way, the language's ability to form long words (see agglutinative language). They are not compound words: they are formed by adding a series of one- and two-syllable suffixes (and a few prefixes) to a simple root (szent, 'saint' or 'holy'). There is virtually no limit on the length of words, but when too many suffixes are added, the meaning becomes less clear and the word becomes hard to understand, working like a riddle even for native speakers.

The English word best known as being of Hungarian origin is probably paprika, from Serbo-Croatian papar 'pepper' and the Hungarian diminutive -ka. The most common, however, is coach, from kocsi, originally kocsi szekér 'car from/in the style of Kocs'. Others include goulash (from gulyás) and hussar (from huszár). The Hungarian language was originally written in right-to-left Old Hungarian runes, superficially similar in appearance to the better-known futhark runes but unrelated to them. When Stephen I of Hungary established the Kingdom of Hungary in the year 1000, the old system was gradually discarded in favour of the Latin alphabet and left-to-right order. Although the old script is no longer used in everyday life, it is still known and practised by some enthusiasts. Modern Hungarian is written using an expanded Latin alphabet and has a phonemic orthography, i.e. pronunciation can generally be predicted from the written language. In addition to the standard letters of the Latin alphabet, Hungarian uses several modified Latin characters to represent the additional vowel sounds of the language. These include letters with acute accents (á, é, í, ó, ú) to represent long vowels, and umlauted letters (ö and ü) and their long counterparts (ő and ű) to represent front rounded vowels. Sometimes (usually as the result of a technical glitch on a computer) ⟨ô⟩ or ⟨õ⟩ appears in place of ⟨ő⟩, and ⟨û⟩ in place of ⟨ű⟩.
This is often due to the limitations of the Latin-1 / ISO-8859-1 code page. These letters are not part of the Hungarian language and are considered misprints. Hungarian can be properly represented with the Latin-2 / ISO-8859-2 code page, but this code page is not always available. (Hungarian is the only language using both ⟨ő⟩ and ⟨ű⟩.) Unicode includes them, so they can be used on the Internet. Additionally, the letter pairs ⟨ny⟩, ⟨ty⟩ and ⟨gy⟩ represent the palatal consonants /ɲ/, /c/ and /ɟ/ (a little like the "d+y" sound in British "duke" or American "would you"), somewhat like saying "d" with the tongue pointing to the palate. Hungarian uses ⟨s⟩ for /ʃ/ and ⟨sz⟩ for /s/, the reverse of Polish usage. The letter ⟨zs⟩ is /ʒ/ and ⟨cs⟩ is /t͡ʃ/. These digraphs are considered single letters in the alphabet. The letter ⟨ly⟩ is also a "single-letter digraph", but it is pronounced like /j/ (English ⟨y⟩) and appears mostly in old words. The letters ⟨dz⟩ and ⟨dzs⟩ /d͡ʒ/ are exotic remnants and are hard to find even in longer texts. Some examples still in common use are madzag ("string"), edzeni ("to train (athletically)") and dzsungel ("jungle"). Sometimes additional information is required to partition words containing digraphs: házszám ("street number") = ház ("house") + szám ("number"), not an unintelligible házs + zám. Hungarian distinguishes between long and short vowels, with long vowels written with acutes. It also distinguishes between long and short consonants, with long consonants being doubled: for example, lenni ("to be") and hozzászólás ("comment"). The digraphs, when doubled, become trigraphs: ⟨sz⟩ + ⟨sz⟩ = ⟨ssz⟩, as in művésszel ("with an artist"). But when the digraph occurs at the end of a line, all of the letters are written out: busszal ("with a bus") is hyphenated at a line break as busz-szal. When the first lexeme of a compound ends in a digraph and the second lexeme starts with the same digraph, both digraphs are written out: jegy + gyűrű = jegygyűrű ("engagement/wedding ring"; jegy means "sign" or "mark", and the term jegyben lenni/járni means "to be engaged", while gyűrű means "ring"). Usually a trigraph is a double digraph, but there are a few exceptions: tizennyolc ("eighteen") is a concatenation of tizen + nyolc. There are doubling minimal pairs: tol ("push") vs. toll ("feather" or "pen"). While these conventions may seem unusual to English speakers at first, once the orthography and pronunciation are learned, written Hungarian is almost completely phonemic (except for etymological spellings and the fact that "ly" and "j" both represent /j/). The word order is basically from general to specific; this typically analytical approach is used generally in Hungarian. The Hungarian language uses the so-called eastern name order, in which the surname (general, deriving from the family) comes first and the given name comes last; if a second given name is used, it follows the first given name. For clarity, Hungarian names in foreign languages are usually represented in the western name order. Sometimes, however, especially in the countries neighbouring Hungary, where there is a significant Hungarian population, the Hungarian name order is retained, as it causes less confusion there. For an example of foreign use: the Hungarian-born physicist known as the "father of the hydrogen bomb" was born Teller Ede, but he immigrated to the United States in the 1930s and thus became known as Edward Teller. Prior to the mid-20th century, given names were usually translated along with the name order; this is no longer as common.
For example, the pianist uses András Schiff when abroad, not Andrew Schiff (in Hungarian, Schiff András). If a second given name is present, it becomes a middle name and is usually written out in full rather than truncated to an initial. In modern usage, foreign names retain their order when used in Hungarian. Before the 20th century, not only was it common to reverse the name order of foreign personalities, their names were also "Hungarianised": Goethe János Farkas (originally Johann Wolfgang Goethe). This usage sounds odd today, when only a few well-known personalities are still referred to by their Hungarianised names, including Verne Gyula (Jules Verne), Marx Károly (Karl Marx) and Kolumbusz Kristóf (Christopher Columbus; note that the last of these is also translated in English from the original Italian or possibly Ligurian). Some native speakers disapprove of this usage; the names of certain historical religious personalities (including popes), however, are always Hungarianised by practically all speakers, such as Luther Márton (Martin Luther), Husz János (Jan Hus) and Kálvin János (John Calvin), just like the names of monarchs: for example, the king of Spain Juan Carlos I is referred to as I. János Károly, and the queen of the UK Elizabeth II as II. Erzsébet.

The Hungarian convention for date and time goes from the generic to the specific: 1. year, 2. month, 3. day, 4. hour, 5. minute, (6. second). The year and day are always written in Arabic numerals, followed by a full stop. The month can be written out in full, abbreviated, or denoted by Roman or Arabic numerals; except in the first case (the month written out in full), the month is also followed by a full stop. Usually, when the month is written in letters, there is no leading zero before the day; when the month is written in Arabic numerals, a leading zero is common but not obligatory. Except at the beginning of a sentence, the name of the month always begins with a lower-case letter. Hours, minutes, and seconds are separated by colons (H:m:s), and fractions of a second are separated from the rest of the time by a full stop. Hungary generally uses the 24-hour clock, but in verbal (and written) communication the 12-hour clock can also be used; see below for usage examples. Date and time may be separated by a comma or simply written one after the other. Dates separated by hyphens are also spreading, especially on datestamps; here, just as in the version separated by full stops, leading zeros are used. When only hours and minutes are written in a sentence (so not merely "displaying" the time), these parts can be separated by a full stop (e.g. "Találkozzunk 10.35-kor." – "Let's meet at 10.35."), or the hours may be written in normal size with the minutes in superscript, optionally underlined (e.g. "A találkozó 10³⁵-kor kezdődik." – "The meeting begins at 10.35."). In verbal and written communication it is also common to use délelőtt (literally "before noon") and délután ("after noon"), abbreviated de. and du. respectively; these are said or written before the time, e.g. "Délután 4 óra van." – "It's 4 p.m." However, expressions such as
"délelőtt 5 óra" (should mean "5 a.m.") or "délután 10 óra" (should mean "10 p.m.") are never used, because at these times the sun is not up, instead "hajnal" ("dawn"), "reggel" ("morning"), "este" ("evening") and "éjjel" ("night") is used, however there are no exact rules for the use of these, as everybody uses them according to their habits (e.g. somebody may have woken up at 5 a.m. so he/she says "Reggel 6-kor ettem." – "I had food at *morning 6.", and somebody woke up at 11 a.m. so he/she says "Hajnali 6-kor még aludtam." – "I was still sleeping at *dawn 6."). Roughly, these expressions mean these times: |Délelőtt (de.)||9 a.m. – 12 p.m.| |Dél*||=12 p.m. (="noon")| |Délután (du.)||12–6 p.m.| |Éjjel||11 p.m. – 4 a.m.| |Éjfél*||=12 a.m. (="midnight")| Although address formatting is increasingly being influenced by standard European conventions, the traditional Hungarian style is: Budapest, Deák Ferenc tér 1. 1052 So the order is: 1) settlement (most general), 2) street/square/etc. (more specific), 3) house number (most specific) 4)(HU-)postcode. The house number may be followed by the storey and door numbers. The HU- part before the postcode is only for incoming postal traffic from foreign countries. Addresses on envelopes and postal parcels should be formatted and placed on the right side as follows: Name of the recipient Street address (up to door number if necessary) Note: The stress is always placed on the first syllable of each word. The remaining syllables all receive an equal, lesser stress. All syllables are pronounced clearly and evenly, even at the end of a sentence, unlike in English. |two thousand||kétezer (kettőezer)||/ˈkeːtɛzɛr/ (/ˈkettøːɛzɛr/)| |two thousand (and) nineteen (2019)||kétezertizenkilenc (kettőezertizenkilenc)||/ˈkeːtɛzɛrtizɛŋkilɛnt͡s/ (/ˈkettøːɛzɛrtizɛŋkilɛnt͡s/)| |Wikibooks has a book on the topic of: Hungarian| |Hungarian edition of Wikipedia, the free encyclopedia| |For a list of words relating to Hungarian language, see the Hungarian language category of words in Wiktionary, the free dictionary.| |Wikimedia Commons has media related to Hungarian language.| |Wikivoyage has a phrasebook for Hungarian.|
David Hilbert (German: [ˈdaːvɪt ˈhɪlbɐt]; 23 January 1862 – 14 February 1943) was a German mathematician and one of the most influential and universal mathematicians of the 19th and early 20th centuries. Hilbert discovered and developed a broad range of fundamental ideas in many areas, including invariant theory, the calculus of variations, commutative algebra, algebraic number theory, the foundations of geometry, the spectral theory of operators and its application to integral equations, mathematical physics, and the foundations of mathematics (particularly proof theory). Hilbert adopted and warmly defended Georg Cantor's set theory and transfinite numbers. A famous example of his leadership in mathematics is his 1900 presentation of a collection of problems that set the course for much of the mathematical research of the 20th century. Hilbert and his students contributed significantly to establishing rigor and developed important tools used in modern mathematical physics. Hilbert is known as one of the founders of proof theory and mathematical logic.

Early life and education

Hilbert, the first of two children and only son of Otto and Maria Therese (Erdtmann) Hilbert, was born in the Province of Prussia, Kingdom of Prussia, either in Königsberg (according to Hilbert's own statement) or in Wehlau (known since 1946 as Znamensk) near Königsberg, where his father worked at the time of his birth. In late 1872, Hilbert entered the Friedrichskolleg Gymnasium (Collegium fridericianum, the same school that Immanuel Kant had attended 140 years before); but, after an unhappy period, he transferred in late 1879 to the more science-oriented Wilhelm Gymnasium, from which he graduated in early 1880. Upon graduation, in autumn 1880, Hilbert enrolled at the University of Königsberg, the "Albertina". In early 1882, Hermann Minkowski, who was two years younger than Hilbert and also a native of Königsberg but had gone to Berlin for three semesters, returned to Königsberg and entered the university. Hilbert developed a lifelong friendship with the shy, gifted Minkowski. In 1884, Adolf Hurwitz arrived from Göttingen as an Extraordinarius (i.e., an associate professor). An intense and fruitful scientific exchange among the three began, and Minkowski and Hilbert especially would exercise a reciprocal influence over each other at various times in their scientific careers. Hilbert obtained his doctorate in 1885, with a dissertation, written under Ferdinand von Lindemann, titled Über invariante Eigenschaften spezieller binärer Formen, insbesondere der Kugelfunktionen ("On the invariant properties of special binary forms, in particular the spherical harmonic functions"). Hilbert remained at the University of Königsberg as a Privatdozent (senior lecturer) from 1886 to 1895. In 1895, as a result of intervention on his behalf by Felix Klein, he obtained the position of Professor of Mathematics at the University of Göttingen. During the Klein and Hilbert years, Göttingen became the preeminent institution in the mathematical world, and Hilbert remained there for the rest of his life. Among Hilbert's students were Hermann Weyl, chess champion Emanuel Lasker, Ernst Zermelo, and Carl Gustav Hempel. John von Neumann was his assistant. At the University of Göttingen, Hilbert was surrounded by a social circle of some of the most important mathematicians of the 20th century, such as Emmy Noether and Alonzo Church. Among his 69 Ph.D.
students in Göttingen were many who later became famous mathematicians, including (with the date of their thesis): Otto Blumenthal (1898), Felix Bernstein (1901), Hermann Weyl (1908), Richard Courant (1910), Erich Hecke (1910), Hugo Steinhaus (1911), and Wilhelm Ackermann (1925). Between 1902 and 1939 Hilbert was editor of the Mathematische Annalen, the leading mathematical journal of the time.

"Good, he did not have enough imagination to become a mathematician." – Hilbert's response upon hearing that one of his students had dropped out to study poetry.

Around 1925, Hilbert developed pernicious anemia, a then-untreatable vitamin deficiency whose primary symptom is exhaustion; his assistant Eugene Wigner described him as subject to "enormous fatigue", noted that he "seemed quite old", and said that even after eventually being diagnosed and treated, he "was hardly a scientist after 1925, and certainly not a Hilbert." Hilbert lived to see the Nazis purge many of the prominent faculty members at the University of Göttingen in 1933. Those forced out included Hermann Weyl (who had taken Hilbert's chair when he retired in 1930), Emmy Noether and Edmund Landau. One who had to leave Germany, Paul Bernays, had collaborated with Hilbert in mathematical logic and co-authored with him the important book Grundlagen der Mathematik (which eventually appeared in two volumes, in 1934 and 1939), a sequel to the Hilbert–Ackermann book Principles of Mathematical Logic from 1928. Hermann Weyl's successor was Helmut Hasse. About a year later, Hilbert attended a banquet and was seated next to the new Minister of Education, Bernhard Rust. Rust asked whether "the Mathematical Institute really suffered so much because of the departure of the Jews". Hilbert replied, "Suffered? It doesn't exist any longer, does it!" By the time Hilbert died in 1943, the Nazis had nearly completely restaffed the university, as many of the former faculty had either been Jewish or married to Jews. Hilbert's funeral was attended by fewer than a dozen people, only two of whom were fellow academics, among them Arnold Sommerfeld, a theoretical physicist and also a native of Königsberg. News of his death became known to the wider world only six months after he died. The epitaph on his tombstone in Göttingen consists of the famous lines he spoke at the conclusion of his retirement address to the Society of German Scientists and Physicians on 8 September 1930. The words were given in response to the Latin maxim "Ignoramus et ignorabimus" ("We do not know, we shall not know"):
- Wir müssen wissen.
- Wir werden wissen.
- We must know.
- We shall know.
The day before Hilbert pronounced these phrases at the 1930 annual meeting of the Society of German Scientists and Physicians, Kurt Gödel, in a round-table discussion during the Conference on Epistemology held jointly with the Society meetings, tentatively announced the first expression of his incompleteness theorem. Gödel's incompleteness theorems show that even elementary axiomatic systems such as Peano arithmetic are either self-contradictory or contain logical propositions that are impossible to prove or disprove within the system itself. In 1892, Hilbert married Käthe Jerosch (1864–1945), who came from a German Jewish family and was "the daughter of a Königsberg merchant, an outspoken young lady with an independence of mind that matched his own". While at Königsberg they had their one child, Franz Hilbert (1893–1969). Hilbert's son Franz suffered throughout his life from an undiagnosed mental illness.
His inferior intellect was a terrible disappointment to his father, and this misfortune was a matter of distress to the mathematicians and students at Göttingen. Hilbert was baptized and raised a Calvinist in the Prussian Evangelical Church. He later left the Church and became an agnostic. He also argued that mathematical truth was independent of the existence of God or other a priori assumptions. When Galileo Galilei was criticized for failing to stand up for his convictions on the heliocentric theory, Hilbert objected: "But [Galileo] was not an idiot. Only an idiot could believe that scientific truth needs martyrdom; that may be necessary in religion, but scientific results prove themselves in due time."

Hilbert solves Gordan's Problem

Hilbert's first work on invariant functions led him in 1888 to the demonstration of his famous finiteness theorem. Twenty years earlier, Paul Gordan had demonstrated the theorem of the finiteness of generators for binary forms using a complex computational approach, but attempts to generalize his method to functions with more than two variables failed because of the enormous difficulty of the calculations involved. To solve what had become known in some circles as Gordan's Problem, Hilbert realized that it was necessary to take a completely different path. As a result, he demonstrated Hilbert's basis theorem, showing the existence of a finite set of generators for the invariants of quantics in any number of variables, but in an abstract form. That is, while demonstrating the existence of such a set, his proof was not constructive: it did not display "an object", but was an existence proof relying on the law of excluded middle over an infinite extension. Hilbert sent his results to the Mathematische Annalen. Gordan, the house expert on the theory of invariants for the Mathematische Annalen, could not appreciate the revolutionary nature of Hilbert's theorem and rejected the article, criticizing the exposition as insufficiently comprehensive. His comment was:
- Das ist nicht Mathematik. Das ist Theologie. (This is not Mathematics. This is Theology.)
Klein, on the other hand, recognized the importance of the work and guaranteed that it would be published without any alterations. Encouraged by Klein, Hilbert extended his method in a second article, providing estimations on the maximum degree of the minimum set of generators, and sent it once more to the Annalen. After having read the manuscript, Klein wrote to him, saying:
- Without doubt this is the most important work on general algebra that the Annalen has ever published.
Later, after the usefulness of Hilbert's method was universally recognized, Gordan himself would say:
- I have convinced myself that even theology has its merits.
For all his successes, the nature of his proof created more trouble than Hilbert could have imagined. Although Kronecker had conceded, Hilbert would later respond to others' similar criticisms that "many different constructions are subsumed under one fundamental idea"; in other words (to quote Reid): "Through a proof of existence, Hilbert had been able to obtain a construction"; "the proof" (i.e. the symbols on the page) was "the object". Not all were convinced. While Kronecker would die soon afterwards, his constructivist philosophy would continue with the young Brouwer and his developing intuitionist "school", much to Hilbert's torment in his later years.
Indeed, Hilbert would lose his "gifted pupil" Weyl to intuitionism: "Hilbert was disturbed by his former student's fascination with the ideas of Brouwer, which aroused in Hilbert the memory of Kronecker". Brouwer the intuitionist in particular opposed the use of the law of excluded middle over infinite sets (as Hilbert had used it). Hilbert responded:
- Taking the Principle of the Excluded Middle from the mathematician ... is the same as ... prohibiting the boxer the use of his fists.

Axiomatization of geometry

The text Grundlagen der Geometrie (tr.: Foundations of Geometry), published by Hilbert in 1899, proposes a formal set of axioms, called Hilbert's axioms, substituting for the traditional axioms of Euclid. They avoid weaknesses identified in those of Euclid, whose works at the time were still used textbook-fashion. It is difficult to specify the axioms used by Hilbert without referring to the publication history of the Grundlagen, since Hilbert changed and modified them several times. The original monograph was quickly followed by a French translation, in which Hilbert added V.2, the Completeness Axiom. An English translation, authorized by Hilbert, was made by E.J. Townsend and copyrighted in 1902. This translation incorporated the changes made in the French translation and so is considered to be a translation of the 2nd edition. Hilbert continued to make changes to the text, and several editions appeared in German. The 7th edition was the last to appear in Hilbert's lifetime; new editions followed it, but the main text was essentially no longer revised. Hilbert's approach signaled the shift to the modern axiomatic method, in which he was anticipated by Moritz Pasch's work from 1882. Axioms are not taken as self-evident truths: geometry may treat things about which we have powerful intuitions, but it is not necessary to assign any explicit meaning to the undefined concepts. The elements, such as point, line, plane, and others, could be substituted, as Hilbert is reported to have said to Schoenflies and Kötter, by tables, chairs, glasses of beer and other such objects; it is their defined relationships that are discussed. Hilbert first enumerates the undefined concepts: point, line, plane, lying on (a relation between points and lines, points and planes, and lines and planes), betweenness, congruence of pairs of points (line segments), and congruence of angles. The axioms unify both the plane geometry and the solid geometry of Euclid in a single system.

The 23 problems

Hilbert put forth a most influential list of 23 unsolved problems at the International Congress of Mathematicians in Paris in 1900. This is generally reckoned the most successful and deeply considered compilation of open problems ever produced by an individual mathematician. After re-working the foundations of classical geometry, Hilbert could have extrapolated to the rest of mathematics. His approach differed, however, from the later "foundationalist" Russell–Whitehead or "encyclopedist" Nicolas Bourbaki, and from his contemporary Giuseppe Peano. The mathematical community as a whole could enlist in the problems, which he had identified as crucial aspects of the areas of mathematics he took to be key. The problem set was launched as a talk, "The Problems of Mathematics", presented during the Second International Congress of Mathematicians held in Paris.
The introduction to the speech that Hilbert gave reads:
- Who among us would not be happy to lift the veil behind which is hidden the future; to gaze at the coming developments of our science and at the secrets of its development in the centuries to come? What will be the ends toward which the spirit of future generations of mathematicians will tend? What methods, what new facts will the new century reveal in the vast and rich field of mathematical thought?
He presented fewer than half of the problems at the Congress, and these were published in the acts of the Congress. In a subsequent publication, he extended the panorama and arrived at the formulation of the now-canonical 23 Problems of Hilbert (see also Hilbert's twenty-fourth problem). The full text is important, since the exegesis of the questions can still be a matter of debate whenever it is asked how many have been solved. Some of these were solved within a short time. Others have been discussed throughout the 20th century, with a few now taken to be unsuitably open-ended to come to closure. Some even continue to this day to remain a challenge for mathematicians. In an account that had become standard by mid-century, Hilbert's problem set was also a kind of manifesto that opened the way for the development of the formalist school, one of the three major schools of mathematics of the 20th century. According to the formalist, mathematics is the manipulation of symbols according to agreed-upon formal rules; it is therefore an autonomous activity of thought. There is, however, room to doubt whether Hilbert's own views were simplistically formalist in this sense. In 1920 he explicitly proposed a research project (in metamathematics, as it was then termed) that became known as Hilbert's program. He wanted mathematics to be formulated on a solid and complete logical foundation. He believed that in principle this could be done by showing that:
- all of mathematics follows from a correctly chosen finite system of axioms; and
- some such axiom system is provably consistent through some means such as the epsilon calculus.
He seems to have had both technical and philosophical reasons for formulating this proposal. It affirmed his dislike of what had become known as the ignorabimus, still an active issue in his time in German thought, traced back in that formulation to Emil du Bois-Reymond. This program is still recognizable in the most popular philosophy of mathematics, where it is usually called formalism. For example, the Bourbaki group adopted a watered-down and selective version of it as adequate to the requirements of their twin projects of (a) writing encyclopedic foundational works and (b) supporting the axiomatic method as a research tool. This approach has been successful and influential in relation to Hilbert's work in algebra and functional analysis, but has failed to engage in the same way with his interests in physics and logic. Hilbert wrote in 1919:
- We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise.
Hilbert published his views on the foundations of mathematics in the two-volume work Grundlagen der Mathematik. Hilbert and the mathematicians who worked with him in this enterprise were committed to the project.
His attempt to support axiomatized mathematics with definitive principles, which could banish theoretical uncertainties, ended in failure. Gödel demonstrated that any non-contradictory formal system comprehensive enough to include at least arithmetic cannot demonstrate its own consistency by way of its own axioms; his incompleteness theorem of 1931 showed that Hilbert's grand plan was impossible as stated. The second point of the program cannot in any reasonable way be combined with the first, as long as the axiom system is genuinely finitary. Nevertheless, the subsequent achievements of proof theory at the very least clarified consistency as it relates to theories of central concern to mathematicians. Hilbert's work had started logic on this course of clarification; the need to understand Gödel's work then led to the development of recursion theory and of mathematical logic as an autonomous discipline in the 1930s. The basis for later theoretical computer science, in the work of Alonzo Church and Alan Turing, also grew directly out of this "debate".

Around 1909, Hilbert dedicated himself to the study of differential and integral equations; his work had direct consequences for important parts of modern functional analysis. In order to carry out these studies, Hilbert introduced the concept of an infinite-dimensional Euclidean space, later called Hilbert space. His work in this part of analysis provided the basis for important contributions to the mathematics of physics in the next two decades, though from an unanticipated direction. Later on, Stefan Banach amplified the concept, defining Banach spaces. Hilbert spaces are an important class of objects in the area of functional analysis, particularly of the spectral theory of self-adjoint linear operators, which grew up around them during the 20th century.

Until 1912, Hilbert was almost exclusively a "pure" mathematician. When his fellow mathematician and friend Hermann Minkowski was planning a visit from Bonn, where he was immersed in studying physics, Minkowski joked that he had to spend 10 days in quarantine before being able to visit Hilbert. In fact, Minkowski seems responsible for most of Hilbert's physics investigations prior to 1912, including their joint seminar on the subject in 1905. In 1912, three years after his friend's death, Hilbert turned his focus to the subject almost exclusively. He arranged to have a "physics tutor" for himself. He started studying kinetic gas theory and moved on to elementary radiation theory and the molecular theory of matter. Even after the war started in 1914, he continued seminars and classes where the works of Albert Einstein and others were followed closely. By 1907, Einstein had framed the fundamentals of the theory of gravity, but then struggled for nearly 8 years with the confounding problem of putting the theory into final form. By early summer 1915, Hilbert's interest in physics had focused on general relativity, and he invited Einstein to Göttingen to deliver a week of lectures on the subject. Einstein received an enthusiastic reception at Göttingen. Over the summer, Einstein learned that Hilbert was also working on the field equations and redoubled his own efforts. During November 1915, Einstein published several papers culminating in "The Field Equations of Gravitation" (see Einstein field equations). Nearly simultaneously, David Hilbert published "The Foundations of Physics", an axiomatic derivation of the field equations (see Einstein–Hilbert action).
Hilbert fully credited Einstein as the originator of the theory, and no public priority dispute concerning the field equations ever arose between the two men during their lives. Additionally, Hilbert's work anticipated and assisted several advances in the mathematical formulation of quantum mechanics. His work was a key aspect of Hermann Weyl's and John von Neumann's work on the mathematical equivalence of Werner Heisenberg's matrix mechanics and Erwin Schrödinger's wave equation, and his namesake Hilbert space plays an important part in quantum theory. In 1926, von Neumann showed that, if quantum states are understood as vectors in Hilbert space, they correspond both to Schrödinger's wave function theory and to Heisenberg's matrices. Throughout this immersion in physics, Hilbert worked on putting rigor into the mathematics of physics. While physics was highly dependent on higher mathematics, physicists tended to be "sloppy" with it; to a "pure" mathematician like Hilbert, this was both "ugly" and difficult to understand. As he began to understand physics and how physicists were using mathematics, he developed a coherent mathematical theory for what he found, most importantly in the area of integral equations. When his colleague Richard Courant wrote the now classic Methoden der mathematischen Physik (Methods of Mathematical Physics), including some of Hilbert's ideas, he added Hilbert's name as author even though Hilbert had not directly contributed to the writing. Hilbert said "Physics is too hard for physicists", implying that the necessary mathematics was generally beyond them; the Courant–Hilbert book made it easier for them.

Hilbert unified the field of algebraic number theory with his 1897 treatise Zahlbericht (literally "report on numbers"). He also resolved a significant number-theory problem formulated by Waring in 1770. As with the finiteness theorem, he used an existence proof showing that solutions to the problem must exist, rather than providing a mechanism to produce the answers. He then had little more to publish on the subject; but the emergence of Hilbert modular forms in the dissertation of a student means his name is further attached to a major area. He made a series of conjectures on class field theory. The concepts were highly influential, and his own contribution lives on in the names of the Hilbert class field and of the Hilbert symbol of local class field theory. Results were mostly proved by 1930, after work by Teiji Takagi. His collected works (Gesammelte Abhandlungen) have been published several times. The original versions of his papers contained "many technical errors of varying degree"; when the collection was first published, the errors were corrected, and it was found that this could be done without major changes in the statements of the theorems, with one exception: a claimed proof of the continuum hypothesis. The errors were nonetheless so numerous and significant that it took Olga Taussky-Todd three years to make the corrections.
- List of things named after David Hilbert
- Foundations of geometry
- Hilbert C*-module
- Hilbert cube
- Hilbert curve
- Hilbert matrix
- Hilbert metric
- Hilbert–Mumford criterion
- Hilbert number
- Hilbert ring
- Hilbert–Poincaré series
- Hilbert series and Hilbert polynomial
- Hilbert spectrum
- Hilbert system
- Hilbert transform
- Hilbert's arithmetic of ends
- Hilbert's paradox of the Grand Hotel
- Hilbert–Schmidt operator
- Hilbert–Smith conjecture
- Hilbert–Burch theorem
- Hilbert's irreducibility theorem
- Hilbert's Nullstellensatz
- Hilbert's theorem (differential geometry)
- Hilbert's Theorem 90
- Hilbert's syzygy theorem
- Hilbert–Speiser theorem

This page is based on the Wikipedia article David Hilbert; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.
A syllable is a unit of organization for a sequence of speech sounds. It is typically made up of a syllable nucleus (most often a vowel) with optional initial and final margins (typically consonants). Syllables are often considered the phonological "building blocks" of words. Speech can usually be divided up into a whole number of syllables: for example, the word ignite is composed of two syllables, ig and nite. Syllabic writing began several hundred years before the first letters; the earliest recorded syllables are on tablets written in the third millennium BC in the Sumerian city of Ur. This shift from pictograms to syllables has been called "the most important advance in the history of writing". Some consonants can themselves serve as a syllable nucleus: the N consonant [n], for example, can be a syllabic consonant, taking over the role of the schwa vowel.
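Because a syllable is built around a vowel nucleus, a crude estimate of a word's syllable count is the number of vowel groups it contains. The sketch below is only a heuristic under that assumption: the silent-final-e correction is a rough hack, and it ignores diphthongs, unusual spellings, and other real-world complications that proper syllabification requires.

```python
# Crude syllable estimate for English spellings: count maximal runs of
# vowel letters, with a rough correction for a silent final 'e'.
# A heuristic only; real syllabification needs phonological rules.
import re

def estimate_syllables(word: str) -> int:
    w = word.lower()
    count = len(re.findall(r"[aeiouy]+", w))
    # Silent final 'e' (as in 'ignite') usually does not add a syllable,
    # but endings like '-le' ('syllable') and '-ee' ('free') do count.
    if w.endswith("e") and not w.endswith(("le", "ee")) and count > 1:
        count -= 1
    return max(count, 1)

print(estimate_syllables("ignite"))    # 2 (ig + nite)
print(estimate_syllables("syllable"))  # 3
```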
It’s crazy to think about, but many of us know more about the eggs in our kitchen than we do about the eggs in our own body. Here are a few egg cell facts to get you started on your educational journey!

Egg cell fact #1: The egg is one of the biggest cells in the body.

Eggs are larger than any other cell in the human body, at about 100 microns (or millionths of a meter) in diameter, about the same as a strand of hair. That means you could, in theory, see an egg cell with the naked eye. The fact is that egg cells are about 4 times the size of skin cells, and about 20 times the size of sperm! Here’s a really cool interactive tool that lets you compare the size of different biological and chemical units.

Egg cell fact #2: Human eggs are full of instructions.

Unlike birds or reptiles, whose eggs are full of food, mammalian embryos don’t develop on their own: they get a cushy uterus and a placenta with plenty of nutrients. So what’s filling up that huge egg cell? Here’s what we do know: human eggs contain lots of RNA, which transfers genetic code out of the nucleus of a cell, sparing the DNA from having to leave the nucleus. The RNA in an egg cell has a few jobs: it helps the egg’s nucleus fuse with a sperm’s during fertilization, it guides the fertilized egg through its initial cell divisions, and it tells the cells inside a developing embryo (which are all the same, at first) how to specialize and what kind of cell they need to become. And the fact is that egg cells need a lot of energy, especially after they’re fertilized and start dividing and developing. So we know, too, that human eggs contain lots of mitochondria, which anyone who paid attention in 8th grade biology should recognize as the powerhouse of the cell: they convert oxygen and nutrients into chemical energy.

Egg cell fact #3: An egg doesn’t live very long after ovulation.

Once released (a process known as ovulation, which usually occurs around 2 weeks after the first day of your period), an egg cell has a pretty short life span. First, it’s pulled in by the finger-like appendages at the end of the fallopian tube, through which it travels down into the uterus over a period of 12–24 hours. In the case of unprotected sex around the time of ovulation, the fallopian tube becomes the venue for fertilization, and the fertilized egg will implant in the uterus. But if no fertilization occurs within that 24-hour window, the egg disintegrates. It’ll later be shed, along with endometrial tissue, vaginal secretions, cervical mucus, and blood, during the menstrual period, as the body gets ready for a new cycle of ovulation. Learn more about ovulation.

Egg cell fact #4: We’re born with all the eggs we’ll ever have.

Most of the cells in our body “regenerate,” or get cleared out and replaced by younger, healthier ones, throughout our lives. The one exception is your eggs. Those (all 1–2 million of them) were born with you, and they’ve been enduring the elements, so to speak, ever since. In fact, egg cells are created in utero, at just nine weeks after conception. That means that the egg that created you was inside your mother when she was inside your grandmother. Whoa. This can be troublesome for your eggs: as egg cells age, they’re more likely to contain genetic abnormalities, mistakes in their DNA that happen during the division process.
Since DNA is like an instruction manual for our cells, any damage to your DNA can prevent that cell from doing what it’s supposed to do—which, in the case of the egg cell, is make a healthy baby. That’s why instances of infertility, miscarriage, and genetic disorders like Down syndrome increase so dramatically with the mother’s age. Learn more about the importance of the age of the egg. There’s good news, though: Egg cell fact #5: Freezing eggs doesn’t affect how likely they are to result in a pregnancy. Egg freezing works because it allows a woman to preserve her eggs while they’re still healthy and plentiful, and use them to attempt a pregnancy later. The fact is that egg cells remain just as likely to result in a pregnancy after they’re frozen and thawed as they were at the time they were frozen—allowing a 40-year-old woman to use her 30-year-old eggs, which are much more likely to result in a healthy pregnancy. This is confirmed by several studies, including a large study published this September, that conclude that frozen and “fresh” eggs result in essentially equal embryo quality, pregnancy rates, and live birth rates. Ready to learn more about egg freezing?
Just as with the design of any website, you should consider your users. People who interact with technology are extraordinarily diverse, with a wide variety of characteristics and contexts. It cannot be assumed that everyone is using a traditional monitor, browser, or keyboard. Accessible technology has been designed in a way that can be accessed by all users.

What is web accessibility?

People who use the web have a growing variety of characteristics. Consider these user characteristics:
- Unable to see. Individuals who are blind use either audible output (products called screen readers that read web content using synthesized speech) or tactile output (a refreshable Braille device).
- Has dyslexia. Individuals with learning disabilities such as dyslexia may also use audible output, along with software that highlights words or phrases as they’re read aloud using synthesized speech.
- Has low vision. Individuals with low vision may use screen magnification software that allows them to zoom into all or a portion of the visual screen. Many others with less-than-perfect eyesight may enlarge the font on websites using standard browser functions, such as Ctrl + in Windows browsers or Command + in Mac browsers.
- Has a physical disability. Individuals with physical disabilities that affect their use of their hands may be unable to use a mouse, and may instead rely exclusively on the keyboard or use assistive technologies such as speech recognition, head pointers, mouth sticks, or eye-gaze tracking systems.
- Unable to hear. Individuals who are deaf or hard of hearing are unable to access audio content, so video needs to be captioned and audio needs to be transcribed.
- Using a mobile device. Individuals who are accessing the web using a compact mobile device such as a phone face accessibility barriers, just like individuals with disabilities do. They’re using a small screen, may need to zoom in or increase the font size, and are likely to be using a touch interface rather than a mouse.
- Limited bandwidth. Individuals may be on slow Internet connections if they’re located in a rural area or lack the financial resources to access high-speed Internet. These users benefit from pages that load quickly and from transcripts for video.

The W3C summarizes web accessibility nicely in its Web Content Accessibility Guidelines 2.0:
- Web content must be perceivable
- Web content must be operable
- Web content must be understandable
- Web content must be robust

The Center for Digital Accessibility & User Experience can help answer questions you may have about creating accessible technology and content, or connect you with the right group.

Credit: Much of the content in this and other guides in the accessibility series was provided by the University of Washington’s terrific Accessible Technology website.
Think of all the faces you know. As you flick through your mental Rolodex, your friends, family, and co-workers probably come first—along with celebrities—followed by the faces of the nameless strangers you encounter during your daily routine. But how many faces can the human Rolodex store? To ballpark the size of the average person’s “facial vocabulary,” researchers gave 25 people 1 hour to list as many faces from their personal lives as possible, and then another hour to do the same with famous faces, like those of actors, politicians, and musicians. If the participants couldn’t remember a person’s name, but could imagine their face, they used a descriptive phrase like “the high school janitor,” or “the actress from Friends with the haircut.” People came up with lots of faces during the first minutes of the test, but the rate of remembrance dropped over the course of the hour. By graphing this relationship and extrapolating it to when most people would run out of faces, the researchers estimated the number of faces an average person can recall from memory. To figure out how many additional faces people recognized but were unable to recall without prompting, researchers showed the participants photographs of 3441 celebrities, including Barack Obama and Tom Cruise. To qualify as “knowing” a face, the participants had to recognize two different photos of each person. By combining these two numbers and canceling out faces that appeared in both sets, the researchers determined the average person knows about 5000 faces, they report today in the Proceedings of the Royal Society B. All 25 participants in the study recognized between 1000 and 10,000 faces. Although the experiment made a number of assumptions about the rate of remembrance—and relied on the honesty of participants—the results provide a baseline number for future facial recognition studies, the research team says. Next, they hope to explore why certain people (including so-called “superrecognizers”) can recall more faces than others. In the meantime, don’t be surprised if, when flipping through your mental Rolodex, there are a lot more faces in there than you realized.
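The extrapolation step described above can be illustrated with a hedged sketch. The saturating-exponential form and the per-participant numbers below are assumptions chosen for demonstration; the researchers' actual fitting procedure and data may differ.

```python
# Illustration of the extrapolation idea: recall counts rise quickly and
# then level off, so fitting a saturating curve N(t) = A * (1 - exp(-t/tau))
# gives an estimate of the asymptote A, the total number of recallable faces.
# Functional form and data are assumptions for demonstration only.
import numpy as np
from scipy.optimize import curve_fit

def saturating(t, A, tau):
    return A * (1.0 - np.exp(-t / tau))

# Hypothetical per-participant data: minutes elapsed vs. faces listed so far.
t = np.array([5, 10, 20, 30, 45, 60], dtype=float)
n = np.array([120, 200, 290, 340, 380, 400], dtype=float)

(A, tau), _ = curve_fit(saturating, t, n, p0=(500.0, 20.0))
print(f"estimated asymptote: about {A:.0f} recallable faces (tau = {tau:.1f} min)")
```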
Kevin J. Ford and Marla Feller

The mammalian retina has long been a model system for studying the development of neural circuits in the CNS, because the adult network is well organized into cell-type-specific layers and the anatomy, physiology and function of many of the retinal cell types are well characterized. A major focus of research in the retina is directed toward understanding how functional circuits arise during development. The development of the retina requires several steps. The first step is to create the right proportion of the seven cell types that comprise the retina. This occurs primarily through genesis of the correct number of each cell type; only ganglion cells have their final number regulated by cell death, which reduces their number by as much as 50% in some species. The second step is for cells to migrate into the correct location. The third step is for neurons to form synaptic connections with other retinal neurons. Finally, for some of these groups of synaptically coupled cells, synaptic refinement is necessary to generate the circuits that comprise the adult retina.

The process of neuronal migration in the retina has been a focus of developmental biologists since the time of Cajal, who used Golgi staining techniques (Fig. 1). Progenitor cells in the neuroepithelium lining the surface of the neural tube later become the ventricular zone of the optic vesicles, optic cup and early retina. Postmitotic cells leave the ventricular zone and migrate to one of the three cell layers of the retina, at first remaining radially attached from one side of the retina to the other. The neural cells come to lie at different levels in the retina and, once in the correct position, lose their anchoring radial connections. The differentiating cells then become polarized, and dendrites and axons grow out appropriately. The ganglion cells are the first to emerge as recognizable neurons, with axons passing to the optic nerve and central brain structures (Fig. 1a and b). Then amacrine cells (Fig. 1c), Muller cells, bipolar cells (Fig. 1d) and horizontal cells form in the correct layers, and finally the photoreceptors remain to line the top layers (Fig. 1e and f).

Before the neural circuits that underlie visual processing emerge, the retina assembles and disassembles a series of intermediate circuits. These transient connections between cells produce the propagating spontaneous activity termed retinal waves (Meister et al., 1991; Penn et al., 1994; Feller et al., 1996). As the retina develops, so do the circuits that underlie retinal waves. The earliest spontaneous retinal waves are propagated via electrical coupling between cells. Then, around birth in mice, waves are produced by a transient network consisting of cholinergic connections between amacrine cells. Finally, just before visual processing begins, they are driven by early glutamatergic signaling. In this chapter we will discuss how these transient circuits of the inner plexiform layer (IPL) assemble to produce specific patterns of neural activity, and how they transition from one circuit into the next. Finally, we will discuss the role of spontaneous activity in shaping the development of the visual system within both the retina and the brain.

Neurotransmitters and Early Retinal Development

Early in development, neurotransmitters can function in the absence of traditional synapses (Redburn and Rowe-Rendleman, 1996).
Ultrastructurally identified conventional synapses within the IPL are first formed a few days after birth in mice (Fisher, 1979). However, there is evidence of neurotransmitter signaling even before then. Below we discuss the role of early neurotransmitters and their receptors prior to the formation of the circuits that mediate vision. Acetylcholine (ACh) signaling plays a key role in the development of the retina. Prior to synapse formation, paracrine action of ACh is essential for regulating early developmental events, such as regulation of the cell cycle (Pearson et al., 2002) and the growth of neurites (Lohmann et al., 2002). In addition, cholinergic synapses are among the earliest to mature and thereby constitute the earliest functional circuits in the retina. ACh in the retina is produced solely by one cell type, the starburst amacrine cell (SAC), a type of amacrine cell named for its radially symmetric processes (Hayden et al., 1980). In the mature retina, released ACh acts on both muscarinic and nicotinic receptors to modulate the response properties of many different types of ganglion cells (Masland and Ames, 1976; Masland et al., 1984; Schmidt et al., 1987; Baldridge, 1996; Strang et al., 2005), but it does not affect the SACs themselves (Zheng et al., 2004). However, during development, cholinergic signaling does occur between SACs (Zheng et al., 2004). During the first week after birth in mice, ACh released from SACs activates nicotinic acetylcholine receptors (nAChRs) on neighboring SACs and thus gives rise to a cholinergic network. A key developmental function of this cholinergic network is the generation of retinal waves. This network appears at birth in mice and mediates the initiation and propagation of waves. Aside from the effect ACh has on ganglion and amacrine cells via nicotinic receptors, it also acts on the muscarinic acetylcholine receptors (mAChRs) of many cells in the neuroblastic layer (Wong, 1995; Syed et al., 2004a) (Fig. 2). Figure 2 shows the effect of acetylcholine on progenitor cells in rabbit retina. A one-day-old rabbit retina is loaded with Fura-2. Images show responses to bath application of 200 μM nicotine (Nic, left), which activates nicotinic acetylcholine receptors, or 100 μM carbachol (CCh, right), which activates muscarinic acetylcholine receptors. Red indicates cells with increases in intracellular calcium, whereas blue indicates cells without (adapted from Wong, 1995). The mAChRs are G-protein-coupled receptors that lead to an increase in intracellular calcium via release of calcium from internal stores, as opposed to influx through ligand- or voltage-gated channels. Interestingly, the ACh released during retinal waves drives correlated mAChR-dependent calcium transients in undifferentiated cells of the ventricular zone (Fig. 2) (Syed et al., 2004a). Hence, it is possible that ACh released by retinal waves induces signaling that is important for early phases of neurogenesis and for cell migration (Martins and Pearson, 2008). GABA is expressed in more cells during development than during adulthood, suggesting that it plays a transient role in circuit formation (for review, see Sandell, 1998). During the first few postnatal days in rabbit, GABA is transiently expressed at high levels in the ganglion cell layer. Moreover, the IPL of P0 ferret exhibits markers for enzymes involved in the synthesis of GABA (Karne et al., 1997). During development, GABA initially serves to depolarize neurons.
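A quick calculation shows why a chloride-permeable receptor can excite immature neurons, as the next paragraph explains mechanistically. The Python sketch below evaluates the Nernst equation for chloride under illustrative immature and mature intracellular concentrations; the specific concentration values are assumptions chosen for demonstration, not measurements from this chapter.

```python
# Hedged sketch: why high intracellular chloride makes GABA depolarizing.
# Computes the chloride reversal potential with the Nernst equation; the
# concentrations below are illustrative, not measured values from the chapter.
import math

R, F = 8.314, 96485.0          # J/(mol*K), C/mol
T = 310.15                     # 37 degrees C in kelvin

def e_cl(cl_in_mM, cl_out_mM):
    """Nernst potential for Cl- (z = -1), in millivolts."""
    return 1000.0 * (R * T / F) * math.log(cl_in_mM / cl_out_mM)

cl_out = 130.0                                 # extracellular Cl- (mM), assumed
for label, cl_in in [("immature (low KCC2)", 30.0), ("mature (high KCC2)", 7.0)]:
    print(f"{label}: E_Cl = {e_cl(cl_in, cl_out):+.0f} mV")
# Relative to a resting potential near -60 mV, about -39 mV is depolarizing
# and about -78 mV is hyperpolarizing: the 'GABA switch'.
```

With these numbers, lowering intracellular chloride shifts the chloride reversal potential from above rest to well below it, which is exactly the excitation-to-inhibition switch described below.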
When activated, the ionotropic GABA receptors GABA-A and GABA-C flux chloride (Fig. 3A). Since the chloride concentration within developing retinal neurons is high, owing to low expression of the potassium-chloride co-transporter KCC2, receptor activation leads to an efflux of negatively charged chloride ions through the open channels and thus depolarizes the cell (Fig. 3A, B left). KCC2 expression gradually increases during the first two weeks after birth in mice (Zhang et al., 2006). Thus there is a 'switch' from excitation to inhibition as the reversal potential for chloride drops below the threshold for firing action potentials (Fig. 3A, B right). In turtle retina, the timing of the GABA switch correlates with a decrease in propagating spontaneous activity (Sernagor et al., 2003), suggesting that GABA depolarization plays a role in this propagation. In mammalian retina, GABA plays a minor role in correlated spontaneous activity during its excitatory period (Feller et al., 1996; Syed et al., 2004b; Wang et al., 2007). However, after the GABA switch, GABA's inhibitory action takes on a prominent role in shaping spontaneous activity: blocking GABA-A receptors greatly increases the frequency of spontaneous retinal waves (Syed et al., 2004b; Blankenship et al., 2009). What underlies the switch of GABA's action from depolarizing to hyperpolarizing? Several environmental factors, including neural activity (Leitch et al., 2005), have been implicated in the timing of this switch in the retina. A recent study, using acutely isolated retinas from knockout mice and pharmacological manipulations in retinal explants, demonstrated that the timing of the GABA switch in retinal ganglion cells is unaffected by blocking specific neurotransmitter receptors or global activity (Barkis et al., 2010) (Fig. 4). Purified retinal ganglion cells remain depolarized by muscimol for at least two weeks in culture (Fig. 4), indicating that the GABA switch is not cell-autonomous. Purified ganglion cells co-cultured with other retinal neurons also remain depolarized by muscimol (Fig. 4, middle). However, culturing purified ganglion cells with dissociated cells from the superior colliculus (Fig. 4, right), or treating purified ganglion cells with media conditioned by superior colliculus cultures (Fig. 4C, red), results in a switch to inhibition by muscimol after two weeks in culture, indicating that a diffusible signal independent of local circuit activity regulates the maturation of GABAergic transmission. Glutamatergic signaling is the last to develop within the IPL. Glutamate is released primarily from bipolar cells and a small subset of amacrine cells (Haverkamp and Wassle, 2004; Johnson et al., 2004). In mice, the axons of bipolar cells first express VGLUT1, the vesicular transporter responsible for packaging glutamate into vesicles, around one week after birth (Johnson et al., 2003). Although ribbon synapses between bipolar cell axons and the dendrites of amacrine and ganglion cells do not form until 11 days after birth (Fisher, 1979), glutamatergic currents can be measured before this (Johnson et al., 2003; Blankenship et al., 2009). State-of-the-art transgenic and imaging techniques have characterized the distribution of glutamatergic synapses onto ganglion cells and between particular subtypes of bipolar and ganglion cells (Morgan et al., 2008; Kerschensteiner et al., 2009).
Before the functional maturation of bipolar cell ribbon synapses, and long before the structural maturation of the ganglion cell dendritic arbor, adult-like patterns of glutamatergic synapses can be seen. However, between some subtypes of bipolar and ganglion cells there appears to be an activity-dependent remodeling of these synapses (Kerschensteiner et al., 2009).

Spontaneously Active Synaptic Circuits in the Inner Plexiform Layer

Prior to photoreceptor maturation and eye opening, retinal ganglion cells periodically fire bursts of action potentials on the order of once per minute. This spontaneous rhythmic activity was first measured in fetal rats and was found to be highly correlated among neighboring ganglion cells (Galli and Maffei, 1988). Extracellular recordings using a multielectrode array (Meister et al., 1991) (Movie 1) and imaging of the calcium transients associated with bursts of action potentials (Movie 2) (Wong et al., 1995; Feller et al., 1996) have revealed that these spontaneous bursts propagate from one cell to the next in a wavelike manner. These retinal waves are an extremely robust phenomenon, observed in a large variety of vertebrate species, including chick, turtle, mouse, rabbit, rat, ferret, and cat (Wong, 1999). Movie 1. Multi-electrode array recording of cholinergic waves. 512-electrode array recording from mouse retina at 37°C. Each dot represents multiunit activity recorded on an electrode at that site. The size of the dot is proportional to the amplitude of the signal, and the color indicates the frequency of the signal. The movie plays at 5× normal speed. From Stafford et al., 2009. Quicktime movie download available here. Movie 2. Calcium imaging of cholinergic waves. P3 mouse retina loaded with Oregon Green BAPTA-1 AM. Playback at 10× normal speed; field of view 800 × 600 μm. Quicktime movie download available here. Retinal waves persist for an extended period throughout development, as shown in Figure 5. However, as the retina matures, the circuitry underlying these retinal waves changes. The wave-generating circuitry thus progresses through three distinct stages, defined by the nature of the connections and the cells that are involved (Fig. 5). Figure 5. Retinal waves occur in three stages. Before birth in mice, waves are mediated by non-synaptic (NS) circuits. From around birth until 10 days after birth, waves are mediated by acetylcholine acting on nicotinic acetylcholine receptors (nAChRs). From 10 days after birth until the end of the second week, waves are mediated by ionotropic glutamate receptors. Modified from Bansal et al., 2000. The earliest waves recorded in mammals are thought to be propagated via gap junctions. Retinal waves before embryonic day 23 in rabbit persist in the presence of antagonists to ionotropic GABA, glycine, ACh, and glutamate receptors, but are completely blocked by 18β-glycyrrhetinic acid (18β-GA), a blocker of gap junction coupling (Syed et al., 2004b). Similarly, mice prior to birth exhibit waves that are insensitive to antagonists of chemical transmission (Bansal et al., 2000). How waves are initiated and propagated during this early stage of development is still not clear. During the first week after birth in ferret and mouse, neurobiotin coupling between ganglion cells of the same subtype and amacrine cells has been observed (Penn et al., 1994; Singer et al., 2001). However, ganglion cell coupling is initially weak and becomes stronger with age, while the correlations generated by waves become weaker with age (Wong et al., 1993).
In addition, tracer coupling is found primarily between retinal ganglion cells of the same subtype, and many other ganglion cells are not gap-junction coupled. Thus, the issue of wave propagation in this early network remains to be explored. Neurotransmitters play a modulatory role during early-stage waves. At the earliest ages studied in chick retina (E8–E11), wave frequency decreases in the presence of an ACh antagonist and increases with an ACh agonist, but waves are not blocked. These waves are unaffected by GABA-A and glutamate receptor antagonists (Catsicas et al., 1998). Similarly, embryonic waves in mouse are reduced by ACh receptor antagonists (Bansal et al., 2000). In addition, in rabbit, early waves are blocked by the activation of GABA-B receptors and are increased in frequency by the inhibition of these receptors (Syed et al., 2004b). Several experimental results show that chemical synaptic transmission is a prerequisite for cholinergic wave propagation. First, simultaneous whole-cell voltage-clamp recordings from ganglion cells demonstrate that the increases in [Ca2+]i correlated across cells are driven by compound synaptic inputs (Feller et al., 1996). Second, the compound postsynaptic currents measured from ganglion cells are blocked by bath application of Cd2+, a blocker of the calcium channels that are associated with transmitter release (Feller et al., 1996). Third, the periodic Ca2+ increases, action potentials, and compound postsynaptic currents associated with waves can all be blocked by a variety of nAChR antagonists (Feller et al., 1996; Penn et al., 1998). Finally, genetic deletion of the β2 subunit of the nAChR (Bansal et al., 2000) or deletion of the enzyme responsible for synthesis of ACh (Stacy et al., 2005) results in the disruption of normal waves during the first week after birth. Figure 6: Cholinergic retinal waves tile the retina. Pictured is a sequence of retinal waves measured using calcium imaging in a P2 ferret retina loaded with the calcium dye fura-2 AM (read from left to right, top row first). This analysis shows that sequential waves create a mosaic that tiles the entire retina. The blue figure in the first frame represents the total spatial extent of a single wave, which is termed a domain. The red domain in each frame corresponds to a new wave arising in the same region of retina monitored with fura-2 imaging. Gray shapes represent the regions of retina that previously supported a wave. Black corresponds to overlapping regions in which more than one wave occurred. The first domain remains blue through all of the subsequent frames to demonstrate that subsequent waves, initiated within tens of seconds of the first, do not significantly invade its territory. The fact that almost the entire region is gray after 90 seconds indicates that the entire retina is tiled but that domain boundaries change constantly over time. The entire sequence corresponds to 90 seconds of recording. The total field of view is 1.2 × 1.4 mm. From Feller et al., 1997. The spatiotemporal properties of cholinergic retinal waves have been well characterized by fluorescence imaging of calcium indicators, which are reliable markers of cell depolarization; Figure 6 schematizes these waves (Feller et al., 1997). Waves initiate in small clusters of coactive neurons and then propagate over spatially restricted areas of the retina.
Initiation sites and wave boundaries are distributed randomly across a given retina, indicating that the global patterns of waves are not determined by fixed structures such as pacemaker cells or by repeated activation of the same clusters of neurons. Instead, the propagation boundaries of waves are determined in part by wave-induced refractory regions that last for 40–50 seconds. These observations have led to the hypothesis that every region of the retina is equally likely to initiate or propagate a wave, and that the global spatial patterns of waves are therefore determined by the local history of retinal activity (Feller et al., 1997). Recent studies in rabbit (Zheng et al., 2004; Zheng et al., 2006) and mouse (Ford et al., 2012) retina have revealed the cellular properties of starburst amacrine cells (SACs), the cell type that gives rise to cholinergic waves (Fig. 7). Identifying SACs in mouse retinas was facilitated by the use of a line of mice in which GFP is expressed in SACs (mGluR2-GFP, Fig. 7A). Using calcium imaging, spontaneous depolarizations of individual SACs were observed in the absence of synaptic excitation (Fig. 7A), indicating that SACs themselves may initiate waves. Paired recordings between SACs revealed reciprocal cholinergic transmission (Fig. 7C), indicating that waves are propagated via these slow, excitatory connections between neighboring SACs. Finally, wave boundaries are thought to arise from a slow after-hyperpolarization in the SACs that recovers over the course of tens of seconds following the depolarization during waves (Fig. 7B). Figure 7. Cellular features of starburst amacrine cells underlie the spatiotemporal properties of retinal waves. (A) Left, fluorescence image of an mGluR2-GFP retina loaded with OGB; right, fluorescence image of the GFP-expressing cells, with regions of interest shown around each SAC. Scale bar, 20 µm. Below, time course of ΔF/F averaged over the somas of three cells (labeled in A) in the absence (CTR, top) and presence (bottom) of the nAChR antagonist DHβE (4 µM) and the GABA-A receptor antagonist gabazine (5 µM). (B) Current-clamp recording from a SAC showing wave-evoked depolarizations followed by sAHPs. (C) Voltage-clamp recordings from pairs of neighboring SACs show both fast and slow cholinergic postsynaptic currents. Cholinergic waves may play a role in terminating early-stage gap-junction-mediated waves. Knockout animals that lack nAChR subunits still exhibit wave-like activity under certain recording conditions, such as elevated temperature (α3−/−: Bansal et al., 2000; β2−/−: Sun et al., 2008; Stafford et al., 2009). This activity is likely mediated by gap junctions (Sun et al., 2008). Moreover, a study using a genetic model that eliminates ChAT in a large portion of the retina found normal cholinergic waves in the spared region, but compensatory waves in the region lacking ChAT (Stacy et al., 2005). These studies suggest a sequential maturation of the retinal circuitry that relies on checkpoints to make transitions from one stage (gap-junction-mediated waves) to the next (cholinergic waves). This sort of checkpoint model of neuronal development (Ben-Ari and Spitzer, 2010) is further supported by the disassembly of the cholinergic network to make way for glutamatergic signaling (Blankenship and Feller, 2010).
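These dynamics lend themselves to a simple illustration. The Python sketch below is a toy excitable-medium model in the spirit of the domain hypothesis: cells on a grid fire spontaneously at a low rate, recruit resting neighbors probabilistically, and then enter a long refractory period. It is an illustrative caricature, not the model from Feller et al. (1997); the grid size, probabilities, and refractory length are arbitrary choices.

```python
# Toy excitable-medium model of wave domains bounded by refractory regions.
# All parameters are illustrative; boundaries here are periodic (np.roll).
import numpy as np

rng = np.random.default_rng(0)
N, STEPS = 60, 200
P_SPONT, P_SPREAD, REFRACT = 1e-4, 0.25, 50   # per-step rates / refractory steps

state = np.zeros((N, N), dtype=int)   # 0 = resting, 1 = active, <0 = refractory
for t in range(STEPS):
    active = state == 1
    # Count active neighbors (4-neighborhood) for every cell.
    nbrs = (np.roll(active, 1, 0) + np.roll(active, -1, 0) +
            np.roll(active, 1, 1) + np.roll(active, -1, 1))
    resting = state == 0
    p_fire = 1.0 - (1.0 - P_SPREAD) ** nbrs   # prob. of recruitment by neighbors
    ignite = resting & ((rng.random((N, N)) < P_SPONT) |
                        (rng.random((N, N)) < p_fire))
    state[state < 0] += 1      # refractory cells creep back toward resting
    state[active] = -REFRACT   # cells that just fired enter a long refractory period
    state[ignite] = 1          # recruited or spontaneously igniting cells fire

print("fraction of cells refractory at end:", float(np.mean(state < 0)))
```

Run over many steps, waves in this toy model nucleate at random sites and stall where recent waves have passed, qualitatively reproducing the tiling of wave domains shown in Figure 6.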
The synaptic circuitry that drives retinal waves changes postnatally. Though retinal waves early in development require cholinergic neurotransmission, studies in older ferret, rabbit, and mouse indicate that waves become insensitive to cholinergic antagonists and can be blocked by glutamate receptor antagonists (Bansal et al., 2000; Wong et al., 2000; Zhou and Zhao, 2000). This switch in the requisite transmitter occurs at the age when bipolar cells make their initial synaptic connections with ganglion cells and conventional synapses between amacrine and ganglion cells become morphologically mature and numerous. This timing suggests that waves may be mediated by a given neurotransmitter only when its synapses are first forming. Glutamatergic waves have distinct spatial and temporal features. Unlike cholinergic waves, which drive correlations in all neighboring ganglion cells regardless of subtype, glutamatergic waves occur more frequently in OFF cells than in ON cells (Wong and Oakley, 1996) (Fig. 8). Waves occur in rapid clusters separated by periods of silence lasting about one minute (Blankenship et al., 2009). Within clusters, there is a distinct pattern of firing in which ON and OFF cells alternate in firing bursts of action potentials (Kerschensteiner and Wong, 2008) (Fig. 8). These spatial and temporal features are significantly shaped by inhibitory circuits. Blocking ionotropic GABA receptors increases wave frequency (Fischer et al., 1998). Blocking glycine receptors does the same but also eliminates the asynchronous firing between ON and OFF cells. Figure 8. Glutamatergic waves occur in clusters of asynchronous ON/OFF bursts. Multi-electrode array recording of glutamatergic waves from a 12-day-old mouse. ON ganglion cells precede OFF ganglion cells during burst clusters. Adapted from Kerschensteiner and Wong, 2008. Similar to the transition from gap-junction to cholinergic waves, the onset of glutamatergic waves may play an active role in the disassembly of the cholinergic circuits (Blankenship and Feller, 2010). Mice lacking the vesicular glutamate transporter VGLUT1 lack glutamate release from bipolar cells. However, these mice still have retinal waves during the time that littermate control animals have glutamatergic waves. Interestingly, these waves in the VGLUT1 knockout mice are unaffected by glutamate receptor antagonists but are blocked by nAChR antagonists (Blankenship et al., 2009). Thus, glutamate signaling from bipolar cells seems necessary for dismantling the cholinergic network. There is some evidence that the transitions between the different circuits that mediate waves are linked. Prior to birth, waves are thought to propagate via gap junctions between ganglion cells (Fig. 9A, left). From postnatal day 1–10, waves are propagated via SAC release of acetylcholine onto other SACs (Fig. 9A, middle, black box). Acetylcholine also depolarizes ganglion cells. During this period of development, gap-junction signaling between ganglion cells is reduced (Fig. 9A, middle, red box). From P10–P15, bipolar cells release glutamate to propagate waves in a mechanism thought to involve spillover of glutamate that excites neighboring bipolar cells (Fig. 9A, right, black box). Cholinergic signaling between SACs is reduced (Fig. 9A, right, red box). Genetic disruption of cholinergic or glutamatergic waves results in an extended period of operation of the previous wave-generating circuit (Fig. 9B). In wild-type mice, gap-junction-mediated waves (gray) are followed by cholinergic waves (blue) starting at P0, then glutamatergic waves (green) at P10.
In mice lacking the β2 subunit of the nicotinic acetylcholine receptor, gap-junction-mediated waves persist until ~P8. In mice lacking the vesicular glutamate transporter VGLUT1, cholinergic waves persist through the second postnatal week.

Gap junctions and waves

Gap junctions are thought to play a role in the generation of embryonic waves, as described above. At later ages in mammals, however, gap junctions play only a minor role in the propagation of retinal waves. Gap junction antagonists reduce or partially block retinal waves after birth in mice (Singer et al., 2001) and at later stages in rabbit (Syed et al., 2004b), when waves depend critically on chemical transmission. However, these antagonists also have several non-specific effects, so these results are inconclusive. A different approach is to study mice in which specific connexins are genetically deleted (Fig. 10). Knockouts of connexins 36 and 45, the two major gap-junction-forming connexins in the IPL, have normal wave propagation (Fig. 10A), but the firing between waves at later stages is increased (Fig. 10B). Glutamatergic waves and the firing between waves are eliminated by application of DNQX and AP5, antagonists of the ionotropic glutamate receptors that mediate bipolar cell input, indicating that bipolar input is important for the generation of waves, as in wild-type retinas (Blankenship et al., 2011). Figure 10: Spatiotemporal properties of retinal waves are similar in control, Cx45, and Cx36/45 double knockout mice. A, Examples of waves recorded in different genotypes. Each grayscale value represents an active area in one frame, with darker shades corresponding to later points in time during the wave. B, Raster plot of ganglion cell action potentials taken from multi-electrode array recordings from different genotypes. Correlated firing during wave events is shown in blue. From Blankenship et al., 2011. In contrast, in the chick retina, gap junctions were found to be involved in wave generation at all ages. Octanol, a significant inhibitor of waves in E8 chick retina, restricts tracer coupling between ganglion cells and other cell types but not between ganglion cells themselves, indicating that the circuitry mediating these waves involves cells other than ganglion cells (Catsicas et al., 1998).

IV. Role of activity in formation of visual circuitry

Spontaneous activity in the developing retina occurs while functional circuits are forming within the retina and projections from the retina are undergoing refinement at their target regions in the brain. What role do retinal waves play in sculpting the circuits that mediate vision?

ON/OFF circuitry within the retina

Two sub-circuits that have been well characterized in the adult retina are the ON and OFF pathways. The classes of bipolar cells that transmit responses to the onset of light (ON responses) are distinct from those that transmit responses to the cessation of light (OFF responses). In these ON and OFF circuits, ganglion cell dendrites, amacrine cell processes, and bipolar cell inputs are physically segregated from each other into what are called the ON and OFF layers of the IPL. The formation of these ON or OFF circuits involves the dendritic maturation of ganglion cell types. The dendrites of most retinal ganglion cells arborize diffusely within the IPL before restricting their dendrites to distinct laminae (Bodnarenko et al., 1999; Bansal et al., 2000; Sernagor et al., 2001; Xu and Tian, 2004; Coombs et al., 2007; Kim et al., 2010) (Fig. 11).
Some studies have suggested that this segregation of ganglion cell dendrites into ON and OFF layers involves bipolar cell activity. First, this segregation is prevented by applying APB to hyperpolarize ON bipolar cells during the period of glutamatergic retinal waves (Fig. 12) (Bodnarenko and Chalupa, 1993; Bodnarenko et al., 1995). Second, mice that lack the MHCI receptor CD3zeta, and thus display altered glutamatergic retinal waves, have ganglion cells with reduced dendritic motility and more diffuse dendrites within the IPL (Xu et al., 2010) (Fig. 11). However, not all manipulations of spontaneous retinal activity during development alter dendritic stratification. Preventing synaptic release of glutamate from ON bipolar cells by expressing tetanus toxin does not prevent the stratification of ganglion cell dendrites, but it does reduce synapse formation onto the ON bipolar cells (Kerschensteiner et al., 2009). Is there a role for cholinergic waves in this ganglion cell stratification process? Several studies suggest that there is. First, blocking nAChRs during the period of cholinergic waves reduces the motility of filopodia on the dendrites of ganglion cells (Wong and Wong, 2001), demonstrating that cholinergic waves can drive structural changes in dendrites. Second, studies in turtle demonstrate that blocking cholinergic waves with nAChR antagonists reduces receptive field sizes (Sernagor and Grzywacz, 1996). Finally, mice lacking the β2 subunit of the nAChR exhibit a delay in, but not an absence of, the fine stratification of ganglion cell dendrites (Bansal et al., 2000). These findings indicate that cholinergic waves do influence the outgrowth of ganglion cell dendrites, but they are not the primary factor dictating their final organization.

Refinement of retinal projections

Retinal axons undergo a period of refinement before vision begins. At birth in mice, axons from both eyes reside in overlapping regions of the dorsal lateral geniculate nucleus of the thalamus. By about two weeks after birth, the axon terminals from the ganglion cells of each eye have separated into non-overlapping regions. Similarly, within the superior colliculus, retinal axons at birth extend over the entire area of the colliculus. However, over the course of about one week, these axons retract to their appropriate retinotopic regions and arborize extensively within a small target zone. These processes of eye-specific segregation and retinotopic map refinement occur during the period of retinal waves. Thus, the hypothesis has emerged that waves might provide cues within their activity pattern to instruct these developmental processes. The refinement of retinal projections to the brain is thought to be driven by the precise initiation, propagation, and termination properties of cholinergic waves (for a review, see Huberman et al., 2008). The periodic initiation of waves induces depolarizations and calcium transients that may be tuned to drive axon guidance (Pfeiffenberger et al., 2006; Nicol et al., 2007) and plasticity mechanisms (Butts et al., 2007; Shah and Crair, 2008). Propagation speed sets the time scale over which neighboring cells are correlated, and thus may be critical for retinotopic map refinement (Chandrasekaran et al., 2007). The spatial extent of wave propagation has been shown to be important for establishing eye-specific segregation of retinal inputs within the thalamus (Xu et al., 2011). Figure 13. Retinal waves drive refinement of central projections. A.
Schematic showing the results of several experiments based on DiI anterograde labeling to determine the retinotopic projection of retinal ganglion cells to the superior colliculus. Results are shown for wild-type (WT) mice, knockout mice lacking the β2 subunit of the nAChR, and transgenic (tg) mice in which β2 is rescued in a subset of RGCs on the β2-nAChR KO background. B. Fluorescence images of retinal ganglion cell axon terminals from ipsilateral (red) and contralateral (green) projections. Axons were labeled with fluorescently tagged cholera toxin. From Xu et al., 2011. Retinal waves determine the final size of the termination zones of retinal projections to the superior colliculus (SC; Figure 13A). Focal DiI labeling of retinal ganglion cells (Retina) gives small target zones in the superior colliculus of WT mice (Fig. 13A, left). In knockout mice lacking normal cholinergic waves (β2-nAChR KO, Fig. 13A, middle), termination zones are less compact. Rescue of the β2-nAChR gene in a subset of retinal cells (Fig. 13A, β2 tg) produces small waves, which are sufficient to rescue normal retinotopic map refinement. Retinal waves also play a role in eye-specific segregation of retinal ganglion cell projections to the lateral geniculate nucleus of the thalamus (Fig. 13B). In wild-type mice, there is little overlap between ipsi- and contralateral projections. Mice lacking cholinergic waves (β2 KO) and mice with small waves (β2 tg) have significantly overlapping projections from the two eyes. The neural circuits within the inner plexiform layer are highly organized, making them an ideal system for the study of circuit formation. Understanding how this organized structure and intricate connectivity arise during development is an important endeavor. It is clear that on the path to forming the circuits that mediate vision, the retina creates a series of intermediate circuits that generate spontaneous activity. These transitory circuits are then dismantled as the retina matures. During their brief existence, these transient networks play an important role in shaping circuits both within the retina and from the retina to the brain. A wealth of new tools will lead the way to a greater understanding of how these neural circuits develop. Several recent studies have identified lines of mice with GFP or Cre recombinase expression that is restricted to specific classes, and even to subsets, of retinal neurons. The ability to identify and alter the activity of specific components of a neural circuit will allow future experimentalists to observe how these circuits form and to ask what role spontaneous activity plays in their formation. Baldridge WH (1996) Optical recordings of the effects of cholinergic ligands on neurons in the ganglion cell layer of mammalian retina. J Neurosci 16:5060-5072. [PubMed] Bansal A, Singer JH, Hwang BJ, Xu W, Beaudet A, Feller MB (2000) Mice lacking specific nicotinic acetylcholine receptor subunits exhibit dramatically altered spontaneous activity patterns and reveal a limited role for retinal waves in forming ON and OFF circuits in the inner retina. J Neurosci 20:7672.
[PubMed] Barkis WB, Ford KJ, Feller MB (2010) Non-cell-autonomous factor induces the transition from excitatory to inhibitory GABA signaling in retina independent of activity. Proc Natl Acad Sci U S A 107:22302-22307. [PubMed] Ben-Ari Y, Spitzer NC (2010) Phenotypic checkpoints regulate neuronal development. Trends Neurosci 33:485-492. [PubMed] Blankenship AG, Feller MB (2010) Mechanisms underlying spontaneous patterned activity in developing neural circuits. Nat Rev Neurosci 11:18-29. [PubMed] Blankenship AG, Ford KJ, Johnson J, Seal RP, Edwards RH, Copenhagen DR, Feller MB (2009) Synaptic and extrasynaptic factors governing glutamatergic retinal waves. Neuron 62:230-241. [PubMed] Blankenship AG, Hamby AM, Firl A, Vyas S, Maxeiner S, Willecke K, Feller MB (2011) The role of neuronal connexins 36 and 45 in shaping spontaneous firing patterns in the developing retina. J Neurosci 31:9998-10008. [PubMed] Bodnarenko SR, Chalupa LM (1993) Stratification of ON and OFF ganglion cell dendrites depends on glutamate-mediated afferent activity in the developing retina. Nature 364:144-146. [PubMed] Bodnarenko SR, Jeyarasasingam G, Chalupa LM (1995) Development and regulation of dendritic stratification in retinal ganglion cells by glutamate-mediated afferent activity. J Neurosci 15:7037-7045. [PubMed] Bodnarenko SR, Yeung G, Thomas L, McCarthy M (1999) The development of retinal ganglion cell dendritic stratification in ferrets. Neuroreport 10:2955-2959. [PubMed] Butts DA, Kanold PO, Shatz CJ (2007) A burst-based "Hebbian" learning rule at retinogeniculate synapses links retinal waves to activity-dependent refinement. PLoS Biol 5:e61. [PubMed] Catsicas M, Bonness V, Becker D, Mobbs P (1998) Spontaneous Ca2+ transients and their transmission in the developing chick retina. Curr Biol 8:283-286. [PubMed] Chandrasekaran AR, Shah RD, Crair MC (2007) Developmental homeostasis of mouse retinocollicular synapses. J Neurosci 27:1746-1755. [PubMed] Coombs JL, Van Der List D, Chalupa LM (2007) Morphological properties of mouse retinal ganglion cells during postnatal development. J Comp Neurol 503:803-814. [PubMed] Feller MB, Wellis DP, Stellwagen D, Werblin FS, Shatz CJ (1996) Requirement for cholinergic synaptic transmission in the propagation of spontaneous retinal waves. Science 272:1182. [PubMed] Feller MB, Butts DA, Aaron HL, Rokhsar DS, Shatz CJ (1997) Dynamic processes shape spatiotemporal properties of retinal waves. Neuron 19:293. [PubMed] Fischer KF, Lukasiewicz PD, Wong RO (1998) Age-dependent and cell class-specific modulation of retinal ganglion cell bursting activity by GABA. J Neurosci 18:3767-3778. [PubMed] Fisher LJ (1979) Development of synaptic arrays in the inner plexiform layer of neonatal mouse retina. J Comp Neurol 187:359-372. [PubMed] Ford KJ, Feller MB (2011) Assembly and disassembly of a retinal cholinergic network. Vis Neurosci:1-11. [PubMed] Ford KJ, Felix AL, Feller MB (2012) Cellular mechanisms underlying spatiotemporal features of cholinergic retinal waves. J Neurosci 32:850-863. [PubMed] Galli L, Maffei L (1988) Spontaneous impulse activity of rat retinal ganglion cells in prenatal life. Science 242:90-91. [PubMed] Haverkamp S, Wassle H (2004) Characterization of an amacrine cell type of the mammalian retina immunoreactive for vesicular glutamate transporter 3. J Comp Neurol 468:251-263. [PubMed] Hayden SA, Mills JW, Masland RM (1980) Acetylcholine synthesis by displaced amacrine cells. Science 210:435-437.
[PubMed] Huberman AD, Feller MB, Chapman B (2008) Mechanisms underlying development of visual maps and receptive fields. Annu Rev Neurosci 31:479-509. [PubMed] Johnson J, Tian N, Caywood MS, Reimer RJ, Edwards RH, Copenhagen DR (2003) Vesicular neurotransmitter transporter expression in developing postnatal rodent retina: GABA and glycine precede glutamate. J Neurosci 23:518-529. [PubMed] Johnson J, Sherry DM, Liu X, Fremeau RT Jr., Seal RP, Edwards RH, Copenhagen DR (2004) Vesicular glutamate transporter 3 expression identifies glutamatergic amacrine cells in the rodent retina. J Comp Neurol 477:386-398. [PubMed] Karne A, Oakley DM, Wong GK, Wong RO (1997) Immunocytochemical localization of GABA, GABAA receptors, and synapse-associated proteins in the developing and adult ferret retina. Vis Neurosci 14:1097-1108. [PubMed] Kerschensteiner D, Wong ROL (2008) A precisely timed asynchronous pattern of ON and OFF retinal ganglion cell activity during propagation of retinal waves. Neuron 58:851-858. [PubMed] Kerschensteiner D, Morgan JL, Parker ED, Lewis RM, Wong RO (2009) Neurotransmission selectively regulates synapse formation in parallel circuits in vivo. Nature 460:1016-1020. [PubMed] Kim I-J, Zhang Y, Meister M, Sanes JR (2010) Laminar restriction of retinal ganglion cell dendrites and axons: subtype-specific developmental patterns revealed with transgenic markers. J Neurosci 30:1452-1462. [PubMed] Leitch E, Coaker J, Young C, Mehta V, Sernagor E (2005) GABA type-A activity controls its own developmental polarity switch in the maturing retina. J Neurosci 25:4801-4805. [PubMed] Lohmann C, Myhr KL, Wong RO (2002) Transmitter-evoked local calcium release stabilizes developing dendrites. Nature 418:177-181. [PubMed] Martins RAP, Pearson RA (2008) Control of cell proliferation by neurotransmitters in the developing vertebrate retina. Brain Res 1192:37-60. [PubMed] Masland RH, Ames A 3rd (1976) Responses to acetylcholine of ganglion cells in an isolated mammalian retina. J Neurophysiol 39:1220-1235. [PubMed] Masland RH, Mills JW, Cassidy C (1984) The functions of acetylcholine in the rabbit retina. Proc R Soc Lond B 223:121-139. [PubMed] Meister M, Wong RO, Baylor DA, Shatz CJ (1991) Synchronous bursts of action potentials in ganglion cells of the developing mammalian retina. Science 252:939-943. [PubMed] Morgan JL, Schubert T, Wong RO (2008) Developmental patterning of glutamatergic synapses onto retinal ganglion cells. Neural Dev 3:8. [PubMed] Nicol X, Voyatzis S, Muzerelle A, Narboux-Neme N, Sudhof TC, Miles R, Gaspar P (2007) cAMP oscillations and retinal activity are permissive for ephrin signaling during the establishment of the retinotopic map. Nat Neurosci 10:340-347. [PubMed] Pearson R, Catsicas M, Becker D, Mobbs P (2002) Purinergic and muscarinic modulation of the cell cycle and calcium signaling in the chick retinal ventricular zone. J Neurosci 22:7569-7579. [PubMed] Penn AA, Wong RO, Shatz CJ (1994) Neuronal coupling in the developing mammalian retina. J Neurosci 14:3805-3815. [PubMed] Penn AA, Riquelme PA, Feller MB, Shatz CJ (1998) Competition in retinogeniculate patterning driven by spontaneous activity. Science 279:2108-2112.
[PubMed] Pfeiffenberger C, Yamada J, Feldheim DA (2006) Ephrin-As and patterned retinal activity act together in the development of topographic maps in the primary visual system. J Neurosci 26:12873-12884. [PubMed] Redburn DA, Rowe-Rendleman C (1996) Developmental neurotransmitters. Signals for shaping neuronal circuitry. Invest Ophthalmol Vis Sci 37:1479-1482. [PubMed] Sandell JH (1998) GABA as a developmental signal in the inner retina and optic nerve. Perspect Dev Neurobiol 5:269-278. [PubMed] Schmidt M, Humphrey MF, Wässle H (1987) Action and localization of acetylcholine in the cat retina. J Neurophysiol 58:997-1015. [PubMed] Sernagor E, Grzywacz NM (1996) Influence of spontaneous activity and visual experience on developing retinal receptive fields. Curr Biol 6:1503-1508. [PubMed] Sernagor E, Eglen SJ, Wong RO (2001) Development of retinal ganglion cell structure and function. Prog Retin Eye Res 20:139-174. [PubMed] Sernagor E, Young C, Eglen SJ (2003) Developmental modulation of retinal wave dynamics: shedding light on the GABA saga. J Neurosci 23:7621-7629. [PubMed] Shah RD, Crair MC (2008) Retinocollicular synapse maturation and plasticity are regulated by correlated retinal waves. J Neurosci 28:292-303. [PubMed] Singer JH, Mirotznik RR, Feller MB (2001) Potentiation of L-type calcium channels reveals nonsynaptic mechanisms that correlate spontaneous activity in the developing mammalian retina. J Neurosci 21:8514-8522. [PubMed] Stacy RC, Demas J, Burgess RW, Sanes JR, Wong ROL (2005) Disruption and recovery of patterned retinal activity in the absence of acetylcholine. J Neurosci 25:9347-9357. [PubMed] Stafford BK, Sher A, Litke AM, Feldheim DA (2009) Spatial-temporal patterns of retinal waves underlying activity-dependent refinement of retinofugal projections. Neuron 64:200-212. [PubMed] Strang CE, Andison ME, Amthor FR, Keyser KT (2005) Rabbit retinal ganglion cells express functional alpha7 nicotinic acetylcholine receptors. Am J Physiol Cell Physiol 289:C644-C655. [PubMed] Sun C, Warland DK, Ballesteros JM, van der List D, Chalupa LM (2008) Retinal waves in mice lacking the β2 subunit of the nicotinic acetylcholine receptor. Proc Natl Acad Sci U S A 105:13638-13643. [PubMed] Syed MM, Lee S, He S, Zhou ZJ (2004a) Spontaneous waves in the ventricular zone of developing mammalian retina. J Neurophysiol 91:1999-2009. [PubMed] Syed MM, Lee S, Zheng J, Zhou ZJ (2004b) Stage-dependent dynamics and modulation of spontaneous waves in the developing rabbit retina. J Physiol 560:533-549. [PubMed] Wang C-T, Blankenship AG, Anishchenko A, Elstrott J, Fikhman M, Nakanishi S, Feller MB (2007) GABAA receptor-mediated signaling alters the structure of spontaneous activity in the developing retina. J Neurosci 27:9130. [PubMed] Wong RO (1995) Cholinergic regulation of [Ca2+]i during cell division and differentiation in the mammalian retina. J Neurosci 15:2696-2706. [PubMed] Wong RO (1999) Retinal waves and visual system development. Annu Rev Neurosci 22:29-47. [PubMed] Wong RO, Oakley DM (1996) Changing patterns of spontaneous bursting activity of on and off retinal ganglion cells during development. Neuron 16:1087-1095. [PubMed] Wong RO, Meister M, Shatz CJ (1993) Transient period of correlated bursting activity during development of the mammalian retina.
Neuron 11:923-938. [PubMed] Wong RO, Chernjavsky A, Smith SJ, Shatz CJ (1995) Early functional neural networks in the developing retina. Nature 374:716-718. [PubMed] Wong WT, Wong RO (2001) Changing specificity of neurotransmitter regulation of rapid dendritic remodeling during synaptogenesis. Nat Neurosci 4:351-352. [PubMed] Wong WT, Myhr KL, Miller ED, Wong RO (2000) Developmental changes in the neurotransmitter regulation of correlated spontaneous retinal activity. J Neurosci 20:351-360. [PubMed] Xu H, Tian N (2004) Pathway-specific maturation, visual deprivation, and development of retinal pathway. Neuroscientist 10:337-346. [PubMed] Xu HP, Chen H, Ding Q, Xie ZH, Chen L, Diao L, Wang P, Gan L, Crair MC, Tian N (2010) The immune protein CD3zeta is required for normal development of neural circuits in the retina. Neuron 65:503-515. [PubMed] Xu HP, Furman M, Mineur YS, Chen H, King SL, Zenisek D, Zhou ZJ, Butts DA, Tian N, Picciotto MR, Crair MC (2011) An instructive role for patterned spontaneous retinal activity in mouse visual map development. Neuron 70:1115-1127. [PubMed] Zhang LL, Pathak HR, Coulter DA, Freed MA, Vardi N (2006) Shift of intracellular chloride concentration in ganglion and amacrine cells of developing mouse retina. J Neurophysiol 95:2404-2416. [PubMed] Zheng J, Lee S, Zhou ZJ (2006) A transient network of intrinsically bursting starburst cells underlies the generation of retinal waves. Nat Neurosci 9:363. [PubMed] Zheng JJ, Lee S, Zhou ZJ (2004) A developmental switch in the excitability and function of the starburst network in the mammalian retina. Neuron 44:851-864. [PubMed] Zhou ZJ, Zhao D (2000) Coordinated transitions in neurotransmitter systems for the initiation and propagation of spontaneous retinal waves. J Neurosci 20:6570-6577. [PubMed]

Last Updated: January 27, 2012.

Dr. Marla Feller received her B.A. and Ph.D. in Physics from the University of California, Berkeley, in 1985 and 1992, respectively. She was a postdoctoral researcher at Bell Laboratories with Dr. David Tank (1992-1994) and then at UC Berkeley with Dr. Carla Shatz (1994-1998). She headed a laboratory at the National Institute of Neurological Disorders and Stroke (1998-2000), was at UC San Diego (2000-2007), and is now an Associate Professor of Neurobiology at UC Berkeley. Her research has focused on elucidating the circuits that mediate retinal waves and on the role retinal waves play in the establishment of retinal projections to the brain. Currently she is investigating the mechanisms underlying the generation of this highly patterned activity and exploring the role it plays in the development and shaping of ganglion cell responses, particularly those generated by a cholinergic amacrine cell network. Dr. Kevin Ford received his B.S. in Biology from UC San Diego in 2005. He did his graduate work with Marla Feller, first at UC San Diego and then at UC Berkeley, receiving his Ph.D. in Molecular and Cell Biology from UC Berkeley in 2011. During his graduate career, he researched the cellular mechanisms underlying retinal waves in the developing mouse retina. He is presently still in Marla's laboratory at Berkeley.
Information for Patients

Like other parts of the body, bones can get infected. The infections are usually bacterial but can also be fungal. They may spread to the bone from nearby skin or muscles, or from another part of the body through the bloodstream. People at risk for bone infections include those with diabetes, poor circulation, or a recent injury to the bone. You may also be at risk if you are on hemodialysis. Symptoms of bone infections include

- Pain in the infected area
- Chills and fever
- Swelling, warmth, and redness

A blood test or an imaging test such as an x-ray can tell if you have a bone infection. Treatment includes antibiotics and often surgery.

Tuberculosis (TB) is a disease caused by bacteria called Mycobacterium tuberculosis. The bacteria usually attack the lungs, but they can also damage other parts of the body. TB spreads through the air when a person with TB of the lungs or throat coughs, sneezes, or talks. If you have been exposed, you should go to your doctor for tests. You are more likely to get TB if you have a weak immune system. Symptoms of TB in the lungs may include

- A bad cough that lasts 3 weeks or longer
- Weight loss
- Loss of appetite
- Coughing up blood or mucus
- Weakness or fatigue
- Night sweats

Skin tests, blood tests, x-rays, and other tests can tell if you have TB. If not treated properly, TB can be deadly. You can usually cure active TB by taking several medicines for a long period of time.
Glagolitic script (10th–11th centuries)

The oldest extant monuments of glagolitsa date from the end of the 10th century. As a rule, its symbols are composed of two elements combined one above the other; a similar construction can be seen in the decoration of kirillitsa. Simple forms are rare, and the elements are connected by straight lines. Some letters (ш, у, м, ч, э) correspond to their modern forms. By letterform there are two types of glagolitsa: Bulgarian glagolitsa, with roundish letters, and Croatian glagolitsa (also called Illyrian or Dalmatian), with angular letters. Neither type had strictly bounded zones of use. Later, glagolitsa borrowed many letters from kirillitsa. West Slavic glagolitsa existed for only a short time and was replaced by Latin writing. But glagolitsa did not perish in modern times: it was used up to the beginning of World War II, even for newspapers, and it is still used in Croatian settlements in Italy.

Cyrillic script – uncial (11th century)

Its origin remains unexplained, and the name appeared later than the alphabet itself. St. Cyril, while travelling across Slavic countries during the 9th century, certainly composed a new Slavonic alphabet; it is not known whether this was the glagolitic script or not. It was necessary to translate religious texts into the Slavonic language. To do so, it was necessary to simplify the intricate and difficult-to-write symbols of glagolitsa, while at the same time introducing the letters lacking for sounds of the spoken Slavonic language. Many sources of the time describe this, but they mention only one Slavonic alphabet even though there were already two. The Cyrillic script has 43 letters; 24 of them were borrowed from the Byzantine parent script, and the other 19 were invented anew, but with graphic decoration similar to the borrowed ones. Not all the borrowed letters kept the sound values they had in Greek; some received new values reflecting Slavonic phonetic features. Bulgarians have preserved the Cyrillic script to a greater extent than other Slavs; nowadays their writing is similar to Russian writing, except for several symbols designating specific phonetic features. The ancient form of the Cyrillic script is called "uncial". Uncial and the glagolitic alphabet are both wholly handwritten scripts. Uncial, like the glagolitic alphabet, has a peculiar trait: clarity and straightness of strokes. Most letters are angular and rather graceless. The exceptions are narrow roundish letters with round curves (О, С, Э, Р and others), which seem out of place among the other letters. Lower elongations of certain letters (Р, У, З) are idiosyncratic to this type of writing; they appear as light decorative elements in the context of the calligraphy. As for the diacritical symbols, their origin is still unclear. Uncial letters are all large and set separately from each other. The old uncial has no spaces between words.

Semi-uncial (14th century)

Semi-uncial was the second type of writing; it developed from the 14th century and later replaced the uncial. This script is lighter and more rounded. Its letters are smaller, and the script has many superscript marks and a whole system of punctuation marks.
Letters are more flexible and wider than in uncial writing, and they have lower and upper elongations. The broad-pen technique used in writing the uncial is seldom applied in the semi-uncial. Semi-uncial was used alongside cursive and ligature in the 14th–18th centuries, together with the other writing styles, because it was simply more comfortable to write. Feudal fragmentation caused the development of distinctive local semi-uncial styles, and even local variants of the written language, in some remote districts. Military tales and chronicles occupy the bulk of these manuscripts, and some manuscripts recount historical events in Russia during that period. During Ivan III's reign, when the integration and consolidation of lands around Moscow was completed, Moscow became not only a national but also a cultural center, and a national Russian state was created under a new autocratic regime. Local Moscow culture thus became an icon of Russian character. With the growing demands of everyday life came the need for a new and simpler script, first in Moscow society and then in Russia at large.

Cursive (15th–17th centuries)

The term "cursive writing" corresponds to the Latin cursiva. At the earliest stage of script development, the ancient Greeks had a widespread cursive writing culture, and some of the south-western Slavs also had their own cursive scripts. Cursive writing emerged as a separate type of writing in Russia in the 15th century. Its partly joined letters and bold flourishes set it apart from the letters of other scripts. But since the letters carried various marks and signs, hooks, and additional symbols, texts were difficult to read. Although cursive writing reflected the semi-uncial, fine connecting lines bound its letters together, a feature that contrasts with the semi-uncial; the script is also more flexible and fluent. Letters of cursive writing were written with elongations; in the beginning, the symbols were composed of elongations much as in uncial and semi-uncial. In the second half of the 16th century, and especially at the beginning of the 17th century, semi-rounded lines became the major strokes. In a broader historical perspective, it is possible to see some elements of the Greek cursive script. By the second half of the 17th century, when many different variants of writing had appeared, cursive script showed more rounded elements and ligature. The rounded contours of letters became more decorative and smooth toward the end of the century. Cursive writing of that time lacks the elements of Greek cursive writing and discards some semi-uncial forms. Later, straight and curved strokes attained balance, and letters became more symmetrical and rounded. In that period this writing was transformed into the civil cursive script.
Lesson Plan: Korea

High schoolers engage in a lesson centered on the controversy surrounding the Korean nuclear program. The viewpoints of various stakeholders from other nations are considered.

See similar resources:

The Spread of Buddhism in East Asia: Korea as a Land Bridge
Young scholars study the spread of Buddhism. In this Buddhism lesson plan, students examine articles explaining the spread of the religion to different areas in East Asia. Young scholars compare and contrast the spread of Buddhism to...
9th - 12th Social Studies & History

Handing a Rogue State: North Korea
Students explore the concept of disarmament. In this North Korea lesson, students apply the steps of conflict resolution to the North Korean nuclear crisis as they create flowcharts designed to establish multilateral talks and resolve...
11th - 12th Social Studies & History

Data Comparison and Interpretation: North Korea, South Korea, and the United States
Ninth graders brainstorm what they know about North Korea and South Korea. They determine the approximate distance from the United States to North and South Korea and create a graph comparing the birth rates, death rates, infant...
9th Social Studies & History

Imaginary Trip to South Korea
Students "visit" South Korea through the use of technology in a fun, stimulating, and detailed project. They arrange travel, make choices, work through a budget, learn history, have exposure to language, and get a sense of what a...
10th - 12th Social Studies & History

Decision Point: Understanding the U.S.'s Dilemma Over North Korea
Simulate the Situation Room and analyze the U.S.'s relationship with North Korea. The plan starts off with a quick review and an examination of an online timeline that updates as the situation continues. Next, the class reads an article and...
7th - 12th English Language Arts CCSS: Designed

Lesson: Dongducheon: A Walk to Remember, A Walk to Envision: Interpreting History, Memory, and Identity
Cultural discourse can start through a variety of venues. Learners begin to think about how our minds, memories, and identities shape our attitudes toward culture and history. They analyze seven pieces from the Dongducheon art exhibit...
9th - 12th Visual & Performing Arts
Making a small wind turbine out of straws can model the larger-scale versions that generate electricity. This turbine project includes miniature airfoil blades and pivots on a central axis. It also illustrates how wind energy is converted into rotational energy by a turbine.

Things You'll Need

- Plastic straws (one bag/box)
- Safety scissors
- Glue (nontoxic craft glue such as Elmer's)
- Cardboard (3-inch square)
- Thumb tack
- Pencil eraser (standard type, from the tip of a pencil)
- Card stock circle the size of a quarter

Select the straws to use for your turbine. Large straws catch more wind but tend to bend more. Bendy straws are inappropriate for this project because of the flexible joint. Cut two 4-inch pieces of straw tubing. Cut down the sides of the tubes to separate them into four half-tubes. Stack the half-tubes to make sure they are all the same size; if not, trim them so they are all the same length and width. Cut the stack so that the straws slope inward on one side and are all uniform. The cut should start at one end (about 1/3 of the way in from the side) and continue down to the far corner of the straws. Keep the large part of each half-tube in the stack you cut; each is now a "blade." Fan the blades out so they overlap at their wide base. Glue the blades in this position with craft glue and wait for the glue to dry. Build the base/tower: take a fresh straw and, using scissors, split the end into four pieces that you will bend outward like flower petals. Glue these petals onto the cardboard so that the shaft of the straw sticks upward. Pull the eraser out of the end of a pencil and slip it into the tip of the straw. Pin the center of the blade wheel to the eraser with the thumb tack, going through the end of the straw into the eraser. Blow gently into the blades of the windmill to make it turn.
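For a sense of scale, the power available to any wind turbine is set by the air moving through its swept area, P = 1/2 · ρ · A · v³. The short Python sketch below plugs in assumed numbers for this straw build; the 5 cm blade radius and 3 m/s breeze are illustrative guesses, not measured values.

```python
# Back-of-the-envelope sketch of the wind power a straw turbine intercepts.
# Blade radius and wind speed are assumptions chosen for illustration.
import math

rho = 1.225            # air density, kg/m^3 (sea level)
radius = 0.05          # blade length, m (assumed for this craft project)
v = 3.0                # wind speed, m/s (a gentle breeze, assumed)

area = math.pi * radius ** 2          # swept area, m^2
p_wind = 0.5 * rho * area * v ** 3    # kinetic power through the disc, W
p_max = 0.593 * p_wind                # Betz limit: best any turbine can extract

print(f"Power in the wind: {p_wind * 1000:.0f} mW; "
      f"Betz-limited maximum: {p_max * 1000:.0f} mW")
```

Even at the theoretical Betz limit, the toy turbine intercepts well under a tenth of a watt, which is why craft models merely spin while utility turbines need rotors tens of meters across.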
During the period of recovery and reassessment after the First World War (1914–1918), artists faced the problem of finding a means of creative expression appropriate for a radically altered society. Discover how Scottish artists and writers expressed a uniquely modern sensibility in the first decades of the twentieth century. Featuring such celebrated figures as Hugh MacDiarmid and JD Fergusson, this display takes a closer look at the creative men and women who championed a progressive national culture and made Scotland’s distinctive voice heard. The Scottish Renaissance Movement is recognised for bringing a Scottish vernacular voice to the universalist concerns of Modernism. The movement is renowned for its contribution to literature, but the visual arts also played a significant role. In particular, the politically radical and nationalist painters William Johnstone, William McCance and John Duncan Fergusson came under the influence of the poet Hugh MacDiarmid, whose project was to create a Scottish culture that was both politically progressive and artistically modern in outlook. The principal aim of the Scottish Renaissance Movement was to align the arts in Scotland with contemporary artistic and intellectual tendencies in Europe. The artists and writers who engaged with the Modern Movement were concerned with contemporary subject matter and new forms of expression. They rejected the nostalgic sentimentality of the ‘Kailyard School’ exemplified by writers like J. M. Barrie and the ‘cabbage patch and cottar’ paintings of Edward Hornel and Sir James Guthrie. By contrast, they wanted to give expression to a unique Scottish identity that could only be fully articulated through an engagement with the internationalist concerns of Modernism. This did not mean a rejection of the past – on the contrary, the importance of Gaelic language and culture was recognised, as was the need to repair the breach in continuity that they felt marked Scotland’s cultural history. This meant there was a strain of patriotic Celticism in the Modern Movement in Scotland, alongside the commitment to the present.
In what may have been the first attempt at an electric car, Scottish inventor Robert Anderson built a “crude electric carriage” in the mid- to late 1830s. It didn't get far. For one thing, its battery wasn't good enough. (Today's green car engineers can sympathize.) It also faced stiff competition from steam-powered cars. When rechargeable batteries started to appear in the mid-1800s, electric vehicles got a fillip. In 1897 the Electric Carriage and Wagon Company in Philadelphia assembled a fleet of electric-powered taxis for New York City. By 1902 the Pope Manufacturing Company in Hartford, Conn., had built around 900 electric vehicles, most of which were used as cabs. That same year Studebaker, which had gotten its start in horse-drawn wagons, entered the car market in Indiana with an electric model. Through the early 1900s electric vehicles ran more smoothly and quietly than their gas-guzzling, internal-combustion-engine-powered rivals.
In this compacted Grade 7 course, students will learn all of the Grade 7 course material as well as half of the Grade 8 content. Please see the "Grade 7" and "Grade 8" descriptions for more information. Visit Concept Corner for more resources.
Scope & Sequence
7.1 Rational Numbers
In this unit, students explore adding and subtracting integers with models, number lines, and arithmetic. They generalize these integer rules and extend them to all rational numbers. Next they use number lines and multiplication patterns to find products and quotients of rational numbers. Properties of addition are reviewed and then used to prove rules for addition, subtraction, multiplication, and division.
7.2 Proportional Relationships
In this unit, students use real-life situations to explore proportional relationships using tables, equations, and graphs. They realize that a proportional relationship is represented on a graph as a straight line that passes through the origin and that there are straight-line graphs that do not represent a proportional relationship. They next look at rates expressed as fractions, finding the unit rate (the constant of proportionality) and then using the constant of proportionality to solve a problem. In the second part of the unit, students work with percentages. First, percentages are tied to proportional relationships, and then students examine percentage situations as formulas, graphs, and tables. Students explore salary increases, see the similarities with sales taxes, and then go on to explore percent decrease.
7.3 Constructions and Angles
In this unit, students define adjacent, supplementary, complementary, and vertical angles and then explore how they are manifested in quadrilaterals. Next, students explore triangles and their properties with certain known and unknown elements. Through exploration, students discover that the sum of the measures of the interior angles of a triangle is 180° and that the sum of the measures of the interior angles of a quadrilateral is 360°. They explore other polygons to find their angle sums and determine if there is a relationship to the angle sum of triangles. This extends to finding the measures of the interior angles of regular polygons and speculating about how this relates to a circle.
7.4 Zooming In On Figures
In this unit, students extend their learning about polygons and circles. They will compare circles with regular polygons and come up with area formulas, learn about three-dimensional figures, and explore the relationship between two-dimensional and three-dimensional shapes. Students will apply this knowledge to design and build model buildings. At the end of the unit, the buildings will be combined to make a model city.
7.5 Algebraic Reasoning
In this unit, students extend their learning about expressions and equations. They will write, evaluate, and simplify expressions that contain positive and negative numbers. Students will write algebraic expressions and equations to represent situations and then solve them using formal algebraic methods and number properties. Students will learn how linear inequalities are different from linear equations, they will use inequalities to represent real-life situations, and they will find solutions to these problems by writing and solving inequalities.
7.6 Samples and Probability
In this unit, students work with line plots, box plots, and measures of center and spread.
They use these tools to compare similar data sets and learn to use random samples to generalize about a population. Students will learn the difference between simple probability, compound events, and experimental versus theoretical probability.
8.2 Roots and Exponents
In this unit, students learn about square roots and cube roots and how to apply properties of exponents to simplify expressions and solve equations. They learn how to solve problems with very large or very small numbers using scientific notation, and then they extend their knowledge to work with rational and irrational numbers.
8.3 Transformations
Every day, you see objects that move through space, spin around, or are mirrored images. These are examples of translations, rotations, and reflections: three basic components of transformations. When objects are shrunken or expanded (think model train or billboard image), this is called a dilation. Transformations can also be done to objects on a coordinate plane. They can be moved, rotated, mirrored, or dilated. Triangles are particularly useful objects to transform, since triangle similarity can be used to find missing measurements of real-world objects.
8.6 Triangles and Beyond
In this unit, students will think about geometry through the art of M. C. Escher. Escher’s art can display creative uses for aspects of geometry, including parallel lines, triangles, and spheres. Then they will look at parallel lines cut by a transversal and the related angles they create, understand and apply the Pythagorean Theorem, expand on these, and then develop the relationships among the volumes of cylinders, cones, and spheres.
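For reference, the volume relationships developed at the end of unit 8.6 can be summarized with the standard formulas below. This summary is added as an illustration and is not taken from the course materials themselves. In LaTeX notation:

$$V_{\text{cylinder}} = \pi r^2 h, \qquad V_{\text{cone}} = \tfrac{1}{3}\pi r^2 h, \qquad V_{\text{sphere}} = \tfrac{4}{3}\pi r^3$$

A cone thus fills exactly one-third of the cylinder with the same base and height, and a sphere of radius r fills two-thirds of its circumscribing cylinder, since with h = 2r the cylinder's volume is $2\pi r^3$.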
Where Earthquakes Occur
Earthquakes occur everywhere - so, at least, it seems. Temblors happen on all continents and beneath the deep oceans. They shake the world's highest mountains, the Himalayas, and the Earth's deepest valley, the Dead Sea. Even from under the ice caps of both polar regions, seismometers regularly record rumblings in the Earth's crust. But a more detailed look reveals that the distribution of earthquake foci in the world is by no means random. Nor are they evenly or regularly spaced. Instead, when plotted on a world map, earthquake locations look like narrow bands winding through the continents and oceans (see map). What are these zones, and why are most earthquake foci concentrated there? Simply put, temblors happen when rock breaks under force. Inside the Earth, the most important of such rock-crushing forces is "tectonic stress." It is exerted on the Earth's crust by the movement of the giant, rigid plates, which float on a subterranean sea of hot and plastic rock called the asthenosphere. There are about twelve huge plates and another dozen smaller ones. Wherever such plates crash into or slide past each other during their respective drifts on the Earth's surface, the collision can break the rock, thus causing earthquakes. In principle, the effects of such plate collisions are similar to a car wreck in which two automobiles hit each other, albeit on a much larger scale. The bands of earthquake foci on the map reflect these collision zones of the tectonic plates. In fact, they very clearly mark the boundaries of the plates. Look, for instance, at North America. The underlying plate is much bigger than the continent itself. It stretches from Iceland in the east all the way to the most far-flung Aleutian islands in the west and reaches from Alaska to the Caribbean and beyond to the Azores, the island archipelago in the middle of the Atlantic Ocean. But earthquakes happen not only where plates collide. They also occur where two plates move away from each other in so-called "spreading zones." One of these zones is the Mid-Atlantic Ridge, where Europe moves away from North America at the rate of about one inch per year. You will find such ridges in every major ocean basin. In fact, there are many more miles of plate boundaries under the oceans than on land. As a consequence, the number of submarine earthquakes is also larger than the number of quakes on land.
army, large armed land force, under regular military control, organization, and discipline. Although armies existed in ancient Egypt, China, India, and Assyria, Greece was the first country known for a disciplined military land force. The Greeks made military service obligatory for citizens and training was rigorous. As a result of Greek military successes, leaders of other nations sought the services of Greek mercenaries. In time, a class of professional soldiers developed. They sold their services to other rulers as well as to wealthy Greeks who chose to avoid required military service (see Xenophon). Like the Greek armies, the Roman army was originally composed of citizen soldiers. As the Roman Empire expanded, a professional standing army came into being; it became increasingly composed of barbarian mercenaries. The Roman army was divided into legions, each of which included heavy and light infantry, cavalry, and a siege train. The army became a political force that often determined who ruled the empire. In Islam, slave soldiers were often trained from youth to be loyal only to their owners. These slave armies often established dynasties of their own (see Mamluks; Janissaries). In medieval Japan and Europe, samurai and knights, respectively, owed military service to a lord. The European system depended on the feudal levy, which required knights and yeomanry to provide a fixed number of days of military service per year to a great lord. Because of this limitation on service and the poorly trained force that it produced, sustained military operations were difficult.
Feudal armies were undermined by the development in England of the longbow, but they were destroyed by the introduction of gunpowder. Armed knights became easy victims of hand-carried firearms, and castle walls could now be breached by cannon.
Professionals and Conscripts
National armies, largely composed of mercenaries, reappeared after the introduction of gunpowder. An example is the Italian condottiere, who hired mercenaries to fight for the prince who was able to pay the most. German and Swiss mercenaries served all over Europe in the 15th and 16th cent. Professional soldiers were also a notable feature of the armies of the Ottoman Turks, who threatened to destroy the forces of Western Europe in the 16th cent. Eventually, as a result of the writings of such political theorists as Niccolò Machiavelli, national or standing armies developed: armies of professional soldiers led mostly by officers from the country's aristocracy. After the Thirty Years War (1618–48), France emerged as the preeminent European military power. Under Louis XIV and his war minister, the marquis de Louvois, that country organized a national standing army that became the pattern for all Europe until the French Revolution. A professional body, set apart from civilian life and ruled under an iron discipline, the standing army reached harsh perfection under Frederick II of Prussia. In the late 18th cent. the American and French revolutions brought about the return of the nonprofessional, citizen army. The introduction of conscription during the French Revolutionary Wars led to mass armies built around a professional nucleus. Officers could be from any class. Conscription also transformed non-European armies, such as that of Egypt during the early 19th cent.
The Modern Army
With the advent of railroads and, later, highway systems it became possible after the mid-19th cent. to move large concentrations of troops, and the nations of the world were able to benefit from enlarging their manpower bases by conscription. Armies changed technologically as well. Trench warfare
resulted from improvements in small arms and prompted the development of various weapons designed to end the stalemates and murderous battles that entrenched forces produced. The growing role of artillery made logistics even more important. From the first, armies had needed soldiers to supply the fighting troops, even when the armies simply lived off the land. No formal distinction was originally made between service troops and combat troops, but with the creation of the great citizen armies after the French Revolution formal specialization proliferated, and quartermasters, ordnance troops, engineers, and medical specialists were organized into separate units. The development of mechanized warfare in the 20th cent. made armies powerful and highly mobile and yet did not always provide them with the capabilities needed to fight so-called asymmetric opponents, such as they face in guerrilla warfare and terrorism. The term army is still applied to all the armed land forces of a nation, but it is also used to designate a self-contained unit with its own service and supply personnel. In many armies today the division (usually about 15,000 persons) is the smallest self-contained unit (having its own service and supply personnel). Two or more divisions generally form a corps, and an army (c.100,000 persons or more) is two or more corps. In World War II, army groups were created, comprising several armies (sometimes from different allied forces). Above the groups is the command of a theater of operations, which in the United States is under the control of the Joint Chiefs of Staff. See Defense, United States Department of; strategy and tactics; warfare. See A.
Vagts, A History of Militarism (1937); L. L. Gordon, Military Origins (1971); J. Keegan and R. Holmes, Soldiers (1986); R. O'Connell, Of Arms and Men (1989).
(1) Land troops (land forces), on a level with a naval fleet. (2) The totality of the armed forces of a state. (3) A large operational unit designated for the conducting of operations.
In the 18th and first half of the 19th centuries the term "army" meant troops united under a single command in one theater of operations, hence the names Rhine Army, Danube Army, and so on. The growth in the numbers of national armed forces, the difficulty of controlling troops located along a broad front and operating in different directions, and the appearance of new factors that influenced the conduct of battles (the railroad, and in the 20th century first automobile and then air transport) made it necessary to create individual armies within a single theater of operations. Instead of one army carrying out a strategic task throughout the whole theater, a number of armies appeared, each under the command of one person (the commander of the army), and each representing a large operational unit of troops intended to carry out individual operational tasks in the theater. The army had a headquarters staff and the necessary logistic agencies; it was usually designated by an ordinal number. Such individual armies in a single military theater appeared in Russia before the Patriotic War of 1812, when all the forces were divided into three armies. In 1812, Napoleon, too, began to organize individual armies (groups); previously he had made all the corps directly subordinate to himself. Later, individual armies appeared in Prussia (1866), Japan (1904–05), and other states. During World War I, Russia had 13 armies (1916), Germany had 15 (1918), and France had ten (1918). During the Civil War (1918–1920) a new type of army unit, the mounted army, appeared in the Soviet Armed Forces. Initially, armies did not have a permanent organization; their composition was determined by the tasks they carried out, the characteristics of the military theater, the strength of the enemy, the existing possibilities for security of troop control, and other conditions. Beginning with the 19th century, armies generally consisted of three to six or more corps, and corps consisted of two to four divisions. In the Civil War the Soviet Army had no corps; it was made up directly of divisions. In addition to corps (divisions), armies included various auxiliary units. The numerical composition of armies was not constant: thus, the Russian First Army numbered 127,000 men in 1812, and the Second Army 40,000 men; the Prussian First Army numbered 140,000 men in 1866, the Second Army, 115,000 men; in 1916 the Russian Eighth Army included 225,000 men, the Ninth Army, 165,000 men. In the Great Patriotic War (1941–45), German fascist armies included 120,000–250,000 men; Soviet armies numbered 60,000–100,000 men. Before the appearance of automobiles, automatic weapons, airplanes, and tanks, the foundation of the army was infantry, cavalry, and artillery. The shock force of the army was based on bayonets, its maneuverability on the mobility of the infantry (25–30 km in a 24-hour period). Armies of World War I and Soviet armies of the Civil War were of a transitional type.
They were distinguished from previous armies primarily by their increased saturation with war matériel (automatic weapons, mortars, and guns) and by the appearance of an air force, armored-vehicle troops, chemical troops, antiaircraft defense troops, motor transport units, road units, and other units. The cavalry did not lose its importance. However, these armies were nonetheless unmounted armies with an inherently low level of maneuverability. The infantry and artillery constituted the basic shock force. Tanks, planes, and motorized transport were not widely employed; they had not been perfected technically and thus could not fundamentally alter combat capabilities. By the start of World War II (1939–45) the engine and the combat vehicle had become prominent in the armies of developed states. Tank armies (in the USSR and Germany), airborne armies, and air armies appeared in World War II along with combined arms armies. The combat strength of combined arms armies had become more varied than in the period of World War I. In addition to infantry (rifle) units, they began to include tank and mechanized (motorized) units. On the Soviet-German front the composition of combined arms armies fluctuated from three to five army corps (ten to 16 infantry divisions) and from two to eight tank and motorized divisions. Soviet combined arms armies had from three to five infantry corps (nine to 14 divisions) and one or two tank (mechanized) corps. The German tank army consisted of two or three tank corps. The Soviet tank army usually consisted of three corps (one or two mechanized and one or two tank corps). Combined arms and tank armies had a great number of different means of reinforcement. The American and British forces had no tank armies. After World War II, the army as a large operational unit of troops developed further as a result of the combat experience that was acquired, the rearmament of troops with new combat technology, the mechanization and motorization of troops, and the appearance of rocket troops in the 1960s.
I. S. LIAPUNOV
Consider 26 different substances, labeled 'A' through 'Z' (quotes for clarity only). Some of these substances can be created from the others by an alchemical reaction. Each alchemical reaction takes at least two different substances as input. Exactly 1 gram of each input substance is combined, causing an explosion. After the dust settles, we are left with just 1 gram of the resulting substance. Alchemists don't like extra work; thus, for any given substance, there's at most one known reaction that results in that substance. You are given a String initial describing the substances that you have initially. Each occurrence of a letter indicates 1 gram, so if a letter appears k times in initial, it means you have k grams of that substance. You are also given a String[] reactions describing all the possible alchemical reactions. Each element of reactions describes a single reaction and is formatted as "ingredients->result" (quotes for clarity only), where ingredients is the list of substances consumed and result is the substance produced. Return the minimal number of reactions required to obtain at least 1 gram of the substance 'X', or -1 if it is impossible.
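This reads like a programming-contest task, so a small solver sketch may help make the statement concrete. The Python below is a minimal illustration under stated assumptions, not a reference solution: it takes reactions as a list of strings, resolves each substance's unique recipe recursively, consumes stock grams greedily in depth-first order, and treats cyclic dependencies as failures. Greedy stock consumption is optimal when the reaction dependencies form no cycles; handling every cyclic configuration optimally would need more machinery. The function name and the example call are illustrative, not part of the original problem.

from collections import Counter

INF = float("inf")

def min_reactions(initial, reactions):
    # Map each result substance to its (unique) ingredient string.
    recipe = {}
    for r in reactions:
        ingredients, result = r.split("->")
        recipe[result] = ingredients

    stock = Counter(initial)  # grams on hand, keyed by substance

    def acquire(s, visiting):
        # Reactions needed for 1 gram of s; consumes stock as a side effect.
        if stock[s] > 0:
            stock[s] -= 1                  # a stored gram costs nothing
            return 0
        if s not in recipe or s in visiting:
            return INF                     # no producing reaction, or a cycle
        snapshot = dict(stock)             # allow rollback if a branch fails
        total = 1                          # the reaction producing s itself
        for ing in recipe[s]:
            cost = acquire(ing, visiting | {s})
            if cost == INF:
                stock.clear()
                stock.update(snapshot)     # roll back consumed stock
                return INF
            total += cost
        return total

    answer = acquire("X", frozenset())
    return -1 if answer == INF else answer

# Example: make D from A and B, then X from D and C: 2 reactions.
print(min_reactions("ABC", ["AB->D", "DC->X"]))  # prints 2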
A very familiar condition known as jaundice is characterized by yellowing of the skin, the whites of the eyes, and the mucous membranes; dark-colored urine; and pale-colored stools. Jaundice occurs due to excessive levels or accumulation of bilirubin in the blood, a condition termed hyperbilirubinemia. Bilirubin is a waste product produced during the normal breakdown of red blood cells. This bilirubin has to be eliminated from the body, but due to an underlying pathology such as biliary duct obstruction or liver inflammation, it accumulates in the blood and imparts a yellow color to the skin and the whites of the eyes. Jaundice is considered a symptom, as it indicates an underlying disease. When red blood cells are broken down, useful materials such as iron are recycled, whereas the waste product, bilirubin, is passed through the bile duct and onward for excretion. Hence, based upon the pathology of the different physiological mechanisms, jaundice has been categorized as pre-hepatic jaundice, that is, any problem that occurs before the liver; hepatic jaundice, any problem that occurs in the liver; and post-hepatic jaundice, any problem that occurs after the liver.
Best Treatment Tips for Jaundice
Treatment and management of jaundice depend upon its underlying cause. The causes of pre-hepatic jaundice include malaria, sickle cell anemia, thalassemia, Gilbert’s syndrome, and Crigler-Najjar syndrome. Hepatic jaundice can be caused by hepatitis A, B, or C; drug misuse; and liver cancer. The causes of post-hepatic jaundice include gallstones, pancreatic cancer, gallbladder cancer, biliary duct cancer, and pancreatitis. Some treatments for jaundice, according to their causes, are given as follows.
Treatment for Genetic Diseases Causing Jaundice
Pre-hepatic jaundice mainly occurs when there is a rapid breakdown of red blood cells leading to the release of excessive amounts of bilirubin. For genetic causes of jaundice such as thalassemia and sickle cell anemia, blood transfusions generally help.
Treatment for Infectious Diseases Causing Jaundice
Malaria is an infectious disease that involves the breakdown of red blood cells and thus causes jaundice; jaundice is rather a serious symptom of this disease. Hence, medications that cure malaria also treat the jaundice it causes. Certain drugs are available to treat malaria. P. falciparum, the parasite responsible for severe forms of malaria, can be treated using quinine drugs. In severe cases, quinine is given intravenously, whereas in mild forms of P. falciparum malaria, quinine tablets may be prescribed along with another drug such as doxycycline for adults or clindamycin for children and pregnant women.
Liver Transplant and Other Treatments
In cases of serious complications such as liver failure, liver transplantation may be carried out. Surgery may also be carried out to clear an obstructed bile duct. If the cause of jaundice is liver cancer, then this is dealt with separately by means of chemotherapy and radiotherapy. Jaundice is also a common condition in newborns. Treatments for jaundice in newborns include phototherapy and blood transfusion. Besides medical treatments, some home remedies can help cure jaundice too. One such remedy is the juice of green radish leaves. Crush the leaves, extract the juice, and have a glass twice every day.
The juice is good for the bowel system but should be taken by adult patients only.
Curd and Turmeric
Add a stick of turmeric to a bowl of curd and have it every day. It is useful in treating the diseased liver. A fresh glass of tomato juice with a pinch of pepper and salt, taken early in the morning, can also be a useful remedy for jaundice. Another useful home remedy for the treatment of jaundice is lemon juice; the jaundice patient may drink lemon juice several times a day.
Caution: Please use home remedies only after proper research and guidance. You accept that you are following any advice at your own risk and will properly research or consult a healthcare professional.
In the early morning darkness on April 15, 1912, as the R.M.S. Titanic was sinking in the freezing Atlantic, survivors witnessed a large number of streaking lights in the sky, which many believed to be the souls of their drowning loved ones passing to heaven. Says Kevin Luhman, what they most likely were seeing was the peak of the Lyrid meteor shower, an annual event occurring in mid-to-late April. Though the folklore of many cultures describes shooting or falling stars as rare events, “they’re hardly rare or even stars,” says Luhman, Penn State assistant professor of astronomy and astrophysics. “From the dawn of civilization people have seen these streaks of light that looked like stars, but were moving quickly across the sky,” he notes. “These ‘shooting stars’ are actually space rocks, meteoroids, made visible by the heat generated when they enter the Earth’s atmosphere at high speeds.” These bits of ice and debris range in size from a speck of sand to a boulder. Larger objects are called asteroids; smaller ones, planetary dust, Luhman explains. Most meteoroids are about the size of a pebble and become visible 40 to 75 miles above the Earth. The largest meteoroids, called “fireballs” or “bolides,” explode into flashes so bright they can be seen during the day, says Luhman. More common, though, are falling meteors too dim to see in daylight. “Meteors are falling all the time,” he adds. “There’s debris all throughout the solar system. Every minute somewhere around the Earth there’s some little piece of rock or ice that’s falling from space.” A dark spot in the northern hemisphere promises the best viewing of falling meteors, notes Luhman. The pre-dawn hours of any clear night are the best time because, as Earth slowly spins around its axis, the side facing into its orbit tends to encounter more space grit. “You’re better off using just the naked eye versus the telescope, which shows just a small patch of sky,” he suggests. “If you look at the entire sky, that gives you a better chance of spotting the meteor.” The very best viewing times, when “you might see one or two meteors per minute,” says Luhman, are the eleven or so meteor showers each year when Earth passes through a debris trail left behind by a comet, a giant ball of ice and grit also orbiting the sun. “They are named after the constellation in the sky out of which the meteors appear to come,” Luhman explains, noting the Leonid and Perseid showers as the most famous and spectacular. The Perseids, appearing every August and named for Perseus, occur as Earth moves through a thousand-year-old cloud of cosmic debris ejected from the comet Swift-Tuttle, last seen in 1992 and, at six miles across, the largest object known to make repeated passes near Earth. Says Luhman, it would have been hard to miss the falling lights of the 1833 Leonid meteor storm, which were so bright and fell so fast, about 100,000 in an hour, that many feared the end of the world. The storm, widely regarded as the birth of modern meteor astronomy, marked the discovery of the Leonids, visible every November 17 and caused by debris from the comet Tempel-Tuttle. While meteors are now well understood, meteorites, fragments of meteoroids and asteroids that survive both the passage through our atmosphere and the ground impact, are helping scientists learn about the solar system’s origins, says Luhman. “These rocks are basically leftovers or raw ingredients from when the solar system was born 4.5 billion years ago.” Some meteorites even tell us about planets.
“If a comet or asteroid hits Mars, it can throw some of the pieces of the crust into space and, after millions of years, some of that material falls down to Earth’s surface,” says Luhman. One of 34 Martian meteorites reached fame in 1996, when NASA scientists announced that it showed signs of primitive life from more than 3.6 billion years ago. Closer study explained the formation as a geologic effect, all but ending the scientific controversy, explains Luhman. Yet shooting stars have hardly fallen out of our everyday conversations. Our movies, songs, and poetry still speak highly of the bright lights as a magical sight, worthy of wishes. As the Disney company’s theme song has taught generations of children, “When you wish upon a star, your dreams come true.”
Source: Lisa Duchene, Penn State
Campbell and Werry (1986) define impulsivity as "erratic and poorly controlled behavior" (p. 120). Teachers who refer to a student as impulsive usually conjure up images of students who rarely stop to think before they act, who attempt tasks before they fully understand the directions, who often demonstrate remorse when their actions have led to errors or mishaps, who call out frequently in class (usually with the wrong answer), and who have difficulty organizing their materials. Kauffman (1989) notes that impulsive behavior is normal in young students, but that as students grow older, most learn alternative responses. Olson and colleagues (Olson, Bates, & Bayles, 1990) point out that 2-year-old students will begin to "inhibit prohibited actions owing to remembered information" (p. 318), but state that "self-regulation does not develop until the 3rd or 4th year of life" (p. 318). Students who manifest impulsive behavior often get into trouble in social situations such as games and play activities (Melloy, 1990). Because they demonstrate poor impulse control, these students are apt to take their turn prematurely or to respond incorrectly to game stimuli (e.g., questions). Some students who have poor impulse control may respond to teasing, for example, by hitting the person who teases them. They are often sorry for their actions and can discuss what they should have done had they taken time to think about their action. Unfortunately, impulsivity places students at higher risk for smoking (Kollins, McClernon, & Fuemmeler, 2005), illegal drug use (Semple, Zians, Grant, & Patterson, 2005), eating disorders (Peake, Limbert, & Whitehead, 2005), and suicide (Swann, Dougherty, Pazzaglia, Pham, Steinberg, & Moeller, 2005). D'Acremont and Van der Linden (2005) identify four dimensions of impulsivity:
- Urgency: The student is in a hurry.
- Lack of premeditation: The student acts before he or she thinks or plans.
- Lack of perseverance: The student gives up on a task.
- Sensation seeking: The student seeks fun without thinking of consequences.
They also found that among impulsive children, boys had higher scores for sensation seeking and girls for urgency. Assessment of impulsivity usually involves the use of behavioral checklists, behavior ratings, mazes, match-to-sample tasks, and behavioral observations (Olson et al., 1990; Shafrir & Pascual-Leone, 1990; Vitiello, Stoft Atkins, & Mahoney, 1990).
Common Causes and Antecedents of Impulsive Behavior
As is the case for so many attention and activity behaviors, no one actually knows what causes impulsivity (Campbell & Werry, 1986; Kauffman, 2005). Impulsivity is most likely related to the same multiple factors discussed in the prior sections on attentiveness and hyperactivity, including childhood temperament, family environment, gender, and parental characteristics (Leve, Kim, & Pears, 2005).
Failure to Self-Monitor
Shafrir and Pascual-Leone (1990) conducted a study with 378 students between 9 and 12 years of age to determine the effect of attention to errors on academic tasks and its relationship to reflective/impulsive behavior. Shafrir and Pascual-Leone administered a number of measures, including mazes and match-to-sample tasks, to determine response behavior, and tests of academic achievement to evaluate arithmetic abilities. They report that students who completed tasks quickly and accurately tended to take time to check their answers.
If an error occurred, they took time to correct it and used the information learned in correcting the error to assist them in completing the rest of the task. This resulted in fewer errors overall and completion of the task in a more timely fashion. They call these students post-failure reflective (p. 385). In comparison, students referred to as post-failure impulsive (Shafrir & Pascual-Leone, 1990, p. 385) were found to complete tasks slowly and inaccurately. These students plodded through the task without checking answers for correctness. They simply went on to the next problem with no reference to previously completed tasks. Shafrir and Pascual-Leone conclude that the lack of post-failure reflection by this group led to more errors because the students did not learn from their previous errors. The implication of these results is that students possess some type of "reflection/impulsivity cognitive style" (p. 386), first proposed by Kagan (see Kagan, Pearson, & Welch, 1966). Also, students who appear to be taking their time (slow thinkers) in actuality make more errors than the students who complete the tasks quickly and check their work (reflective thinkers). Olson and his colleagues (1990) attempted to assess parent-child interactions through behavioral observation to determine if parental interaction style was a predictor of impulsive behavior. According to Olson et al., the purpose of their study was to "identify the relative contributions of different parent-student interaction antecedents to students' later self-regulatory abilities" (p. 320). This longitudinal study involved 79 mother-child dyads. Their findings indicate that "responsive, sensitive, and cognitively enriching mother-child interactions are important precursors of childhood impulse control" (p. 332). Children, especially boys, were more likely to develop impulsivity if their mothers manifested punitive and inconsistent behavior management styles.
Interventions for Impulsive Behavior
Teach Waiting and Self-Control Skills
Impulsivity may be decreased by teaching students appropriate waiting behaviors and by a reinforcement plan for appropriate responding behavior. For example, after an assignment has been given, a teacher may teach a student to place her hands on her desk, establish eye contact with the teacher, and listen for directions. The teacher should praise the student for demonstrating these waiting behaviors. Students who manifest impulsive behavior will benefit from training in social skills such as self-control. At the same time, students may be taught relaxation techniques. Reinforcement will increase the possibility that a student will demonstrate behaviors that are alternatives to impulsivity. The student just described learned social skills through direct instruction and reinforcement for use of the skills to replace impulsive behavior. Schaub (1990) also found that targeting behaviors for intervention that were positive and incompatible with undesirable behaviors was effective with students who demonstrated impulsive behavior. Bornas, Servera, and Llabres (1997) suggest that teachers use computer software to assist students in preventing impulsivity.
The authors describe several software products that are effective in preventing impulsivity through instruction in problem solving and self-regulation.
Give Smaller and Shorter Tasks One at a Time
A student who hurries through an assignment without stopping to read the directions or to check for errors could be given smaller amounts of a task to accomplish at one time, rather than the whole task at once. This would give the student a smaller chunk of the problem to deal with and more opportunities for reinforcement, since the student would be more likely to solve the problem correctly. Sometimes a student considered impulsive can handle solving only one problem at a time. In this case, the student should be allowed to solve the problem and receive feedback immediately. As the student becomes more confident and is able to pace him- or herself more efficiently, he or she may be able to handle larger and larger portions of projects and assignments.
© ______ 2008, Merrill, an imprint of Pearson Education Inc. Used by permission. All rights reserved. The reproduction, duplication, or distribution of this material by any means, including but not limited to email and blogs, is strictly prohibited without the explicit permission of the publisher.
arthritis, inflammation of the joints and its effects. Arthritis is a general term, derived from the Greek words arthro-, meaning “joint,” and -itis, meaning “inflammation.” Arthritis can be a major cause of disability. In the United States, for example, data collected from 2007 to 2009 indicated that 21 million adults were affected by arthritis and experienced limited activity as a result of their condition. Overall, the incidence of arthritis was on the rise in that country, with 67 million adults expected to be diagnosed by 2030. Likewise, each year in the United Kingdom, arthritis and related conditions caused more than 10 million adults to consult their doctors. Although the most common types of arthritis are osteoarthritis and rheumatoid arthritis, a variety of other forms exist, including those secondary to infection and metabolic disturbances.
Osteoarthritis, also known as degenerative joint disease, is the most common form of arthritis, affecting nearly one-third of people over age 65. It is characterized by joint pain and mild inflammation due to deterioration of the articular cartilage that normally cushions joints. Joint pain is gradual in onset, occurring after prolonged activity, and is typically deep and achy in nature. One or multiple joints may be affected, predominantly the knees, hips, spine, and fingers. Approximately 90 percent of individuals experience crepitus (crackling noises) in the affected joint with motion. Muscle weakness and joint laxity or stiffness can occur as people become reluctant to move painful joints. Patients tend to have decreased joint stability and are predisposed to injuries such as meniscal and anterior cruciate ligament tears. Hip arthritis can affect gait, while arthritis of the hands can lead to decreased dexterity. Enlargements of the bony processes surrounding affected joints, called osteophytes (bone spurs), are common.
Joint trauma, increased age, obesity, certain genetic factors and occupations, and hobbies or sports that result in excessive joint stresses can lead to the cartilaginous changes underlying osteoarthritis. Damage begins with the development of small cracks in the cartilage that are perpendicular to the joint. Eventually, cartilage erodes and breaks off, facilitating painful bone-on-bone contact. In due course, pathologic bony changes, such as osteophytes and subchondral bone cysts, develop and further restrict joint movement and integrity.
Osteoarthritis may be divided into two types, primary and secondary. Primary osteoarthritis is age-related, affecting 85 percent of individuals 75–79 years of age. Although the etiology is unknown, primary osteoarthritis is associated with decreased water-retaining capacity in the cartilage, analogous to a dried-up rubber band that can easily fall apart. Secondary osteoarthritis is caused by another condition, such as joint trauma, congenital joint malalignment, obesity, hormonal disorders, and osteonecrosis. Treatment for osteoarthritis is directed toward reducing pain and correcting joint mechanics and may include exercise, weight loss, nonsteroidal anti-inflammatory drugs, steroids, and total joint replacement surgery.
Autoimmune arthritis is characterized by joint inflammation and destruction caused by one’s own immune system. Genetic predisposition and inciting factors, such as an infection or trauma, can trigger the inappropriate immune response.
Rheumatoid arthritis, which is an autoimmune disease, is often associated with elevations in the serum level of an autoantibody called rheumatoid factor, whereas the seronegative arthropathies are not. Rheumatoid arthritis is a progressive inflammatory condition that can lead to decreased mobility and joint deformities. The worldwide prevalence is 0.8 percent, with a 2:1 predilection for women over men. Disease onset, mainly occurring in the third and fourth decades of life, may be acute or slowly progressive with initial symptoms of fatigue, weakness, malaise, weight loss, and mild, diffuse joint pain. Rheumatoid arthritis tends to affect the hips, knees, elbows, ankles, spine, hands, and feet symmetrically. The disease course is characterized by periods of remission, followed by progressive exacerbations in which specific joints become warm, swollen, and painful. Morning stiffness, typically lasting about two hours, is a hallmark feature of rheumatoid arthritis. Patients with rheumatoid arthritis tend to complain of joint pain after prolonged periods of inactivity, whereas osteoarthritis is typically exacerbated with extended activity. Rheumatoid arthritis can be severely debilitating, resulting in a variety of deformities. Some patients experience complete remission, which typically occurs within two years of disease onset. Although the exact cause is unknown, rheumatoid arthritis results from the inflammation of the tissues surrounding the joint space. The thin lining of the joint space becomes thick and inflamed, taking on the form of a mass with fingerlike projections (pannus), which invades the joint space and surrounding bone. Initially, this results in joint laxity. However, with progression, the bones can actually undergo fusion (ankylosis), limiting motion. The effect rheumatoid arthritis has on the hands is a defining characteristic. Clinically, it can be distinguished from osteoarthritis based on the distribution of joints affected in the hands. Rheumatoid arthritis tends to affect the more proximal joints, whereas osteoarthritis tends to affect the more distal joints of the hands and fingers. In severe cases, joint laxity and tendon rupture result in a characteristic deformity of the fingers and wrist. Rheumatoid nodules are thick fibrous nodules that form as a result of excessive tissue inflammation in rheumatoid arthritis. These nodules are typically present over pressure points, such as the elbows, Achilles tendon, and flexor surfaces of the fingers. Destruction of peripheral blood vessels (vasculitis) from the inflammatory process can occur in any organ, leading to renal failure, myocardial infarction (heart attack), and intestinal infarction (death of part of the intestine). In addition, rheumatoid arthritis is also associated with an increased risk of infections, osteoporosis (thinning of bones), and atherosclerosis (hardening of arteries). Diagnosis of rheumatoid arthritis is based on the presence of several clinical features: rheumatoid nodules, elevated levels of rheumatoid factor, and radiographic changes. Although rheumatoid factor is found in 70 to 80 percent of people with rheumatoid arthritis, it cannot be used alone as a diagnostic tool, because multiple conditions can be associated with elevated levels of rheumatoid factor. Since no therapy cures rheumatoid arthritis, treatment is directed toward decreasing symptoms of pain and inflammation. 
Surgical treatment may include total joint replacement, carpal tunnel release (cutting of the carpal ligament), and tendon repair. Hand splints are used to slow the progression of finger and wrist deformations. The overall life span of individuals with rheumatoid arthritis is typically shortened by 5–10 years and is highly dependent on disease severity. Disease severity and the likelihood of extra-articular manifestations are each directly related to serum rheumatoid factor levels. Several rheumatoid arthritis variants exist. In Sjögren syndrome the characteristic symptoms include dry eyes, dry mouth, and rheumatoid arthritis. Felty syndrome is associated with splenomegaly (enlarged spleen), neutropenia (depressed white blood cell levels), and rheumatoid arthritis. Juvenile rheumatoid arthritis is the most common form of childhood arthritis. Disease etiology and clinical course typically differ from that of adult-onset rheumatoid arthritis, and sufferers are prone to the development of other rheumatologic diseases, including rheumatoid arthritis. Ankylosing spondylitis, Reiter syndrome, psoriatic arthritis, and arthritis associated with inflammatory bowel disease are a subset of conditions known as spondyloarthropathies. Typically affected are the sacrum and vertebral column, and back pain is the most common presenting symptom. Enthesitis, inflammation at the insertion of a tendon or ligament into bone, is a characteristic feature of spondyloarthropathy. Unlike rheumatoid arthritis, spondyloarthropathies are not associated with elevated levels of serum rheumatoid factor. Spondyloarthropathies occur most frequently in males and in individuals with a genetic variation known as HLA-B27. Ankylosing spondylitis is the most common type of spondyloarthropathy, affecting 0.1 to 0.2 percent of the population in the United States. In a region of Turkey, prevalence was found to be 0.25 percent, and in the United Kingdom prevalence is estimated to range from 0.1 to 2 percent. In all regions, the condition occurs more frequently in males than in females and typically strikes between ages 15 and 40. Genetic studies have shown that more than 90 percent of all patients with ankylosing spondylitis who are white and of western European descent are HLA-B27 positive. Ankylosing spondylitis is characterized by arthritis of the spine and sacroiliac joints. Extensive inflammation of the spinal column is present, causing a characteristic “bamboo spine” appearance on radiographs. Arthritis first occurs in the sacroiliac joints and gradually progresses up the vertebral column, leading to spinal deformity and immobility. Typical symptoms include back pain, which lessens with activity, and heel pain due to enthesitis of the plantar fascia and Achilles tendon. Hip and shoulder arthritis may occur early in the course of the disease. Reiter syndrome, a type of reactive arthritis, is characterized by the combination of urethritis, conjunctivitis, and arthritis. Patients typically develop acute oligoarthritis (two to four joints affected) of the lower extremities within weeks of gastrointestinal infection or of acquiring a sexually transmitted disease. Reiter arthritis is not considered an infectious arthritis, because the joint space is actually free of bacteria. Instead, an infection outside the joint triggers this form of arthritis. Other symptoms can include fever, weight loss, back pain, enthesitis of the heel, and dactylitis (sausage-shaped swelling of the fingers and toes). 
Most cases resolve within one year; however, 15–30 percent of patients develop chronic, sometimes progressive arthritis. Occurring almost exclusively in men, Reiter syndrome is strongly linked to the HLA-B27 gene variant, which is present in 65 to 96 percent of symptomatic individuals. Psoriasis is an immune-mediated inflammatory skin condition characterized by raised red plaques with an accompanying silvery scale, which can be painful and itchy at times. Though typically seen on the elbows, knees, scalp, and ears, plaques can occur on any surface of the body. About 10 percent of people with psoriasis (possibly as many as 30 percent in some regions of the world) develop a specific type of arthritis known as psoriatic arthritis. Psoriatic arthritis typically occurs after psoriasis has been present for many years. In some cases, however, arthritis may precede psoriasis; less often, the two conditions appear simultaneously. Estimates of the prevalence of psoriatic arthritis vary according to population. However, overall, it is thought to affect nearly 1 percent of the general population, with a peak age of onset between 30 and 55. Usually less destructive than rheumatoid arthritis, psoriatic arthritis tends to be mild and slowly progressive, though certain forms, such as arthritis mutilans, can be quite severe. Occasionally the onset of symptoms associated with psoriatic arthritis is acute, though more often it is insidious, initially presenting as oligoarthritis with enthesitis. Over time, arthritis begins to affect multiple joints (polyarthritis), especially the hands and feet, resulting in dactylitis. Typically, the polyarticular pattern of psoriatic arthritis affects a different subset of finger joints than rheumatoid arthritis. It is not until years after peripheral arthritis has occurred that psoriatic arthritis may affect the axial joints, causing inflammation of the sacroiliac joint (sacroiliitis) and intervertebral joints (spondylitis). Arthritis mutilans is a more severe and much less common pattern (seen in fewer than 5 percent of psoriatic arthritis cases) resulting in bone destruction with characteristic telescoping of the fingers or toes. In addition, individuals with psoriatic arthritis require more aggressive treatment if the onset of the condition occurs before age 20, if there is a family history of psoriatic arthritis, if there is extensive skin involvement, or if the patient has the HLA-DR4 genotype. Crohn disease and ulcerative colitis, two types of inflammatory bowel disease, are complicated by a spondyloarthropathy in as many as 20 percent of patients. Although arthritis associated with inflammatory bowel disease typically occurs in the lower extremities, up to 20 percent of cases demonstrate symptoms identical to ankylosing spondylitis. Arthritis is usually exacerbated in conjunction with inflammatory bowel disease exacerbations and lasts several weeks thereafter. Joint inflammation, destruction, and pain can occur as a result of the precipitation of crystals in the joint space. Gout and pseudogout are the two primary types of crystalloid arthritis, caused by different types of crystalloid precipitates. Gout is an extremely painful form of arthritis that is caused by the deposition of needle-shaped monosodium urate crystals in the joint space (urate is a form of uric acid). Initially, gout tends to occur in one joint only, typically the big toe (podagra), though it can also occur in the knees, fingers, elbows, and wrists.
Pain, frequently beginning at night, can be so intense that patients are sensitive to even the lightest touch. Urate crystal deposition is associated with the buildup of excess serum uric acid (hyperuricemia), a by-product of everyday metabolism that is filtered by the kidneys and excreted in the urine. Causes of excess uric acid production include leukemia or lymphoma, alcohol ingestion, and chemotherapy. Kidney disease and certain medications, such as diuretics, can depress uric acid excretion, leading to hyperuricemia. Although acute gouty attacks are self-limited, when hyperuricemia is left untreated for years such attacks can recur intermittently and involve multiple joints. Chronic tophaceous gout occurs when, after about 10 years, chalky, pasty deposits of monosodium urate crystals begin to accumulate in the soft tissue, tendons, and cartilage, causing the appearance of large round nodules called tophi. At this disease stage, joint pain becomes a persistent symptom. Gout is most frequently seen in men in their 40s, because men tend to have higher baseline levels of serum uric acid. In the early 21st century the prevalence of gout appeared to be on the rise globally, presumably because of increasing longevity, changing dietary and lifestyle factors, and the increasing incidence of insulin-resistant syndromes. Pseudogout is caused by the deposition of rhomboid-shaped calcium pyrophosphate dihydrate (CPPD) crystals into the joint space, which leads to symptoms that closely resemble gout. Typically occurring in one or two joints, such as the knees, ankles, wrists, or shoulders, pseudogout can last between one day and four weeks and is self-limiting. A major predisposing factor is the presence of elevated levels of pyrophosphate in the synovial fluid. Because pyrophosphate excess can result from cellular injury, pseudogout is often precipitated by trauma, surgery, or severe illness. A deficiency in alkaline phosphatase, the enzyme responsible for breaking down pyrophosphate, is another potential cause of pyrophosphate excess. Other disorders associated with synovial CPPD include hyperparathyroidism, hypothyroidism, hemochromatosis, and Wilson disease. Unlike gout, pseudogout affects both men and women, with more than half of those affected age 85 and older. Infectious arthritides are a set of arthritic conditions caused by exposure to certain microorganisms. In some instances the microorganisms infiltrate the joint space and cause destruction, whereas in others an infection stimulates an inappropriate immune response leading to reactive arthritis. Typically caused by bacterial infections, infectious arthritis may also result from fungal and viral infections. Septic arthritis usually affects a single large joint, such as the knee. Although a multitude of organisms may cause arthritis, Staphylococcus aureus is the most common pathogen. Neisseria gonorrhoeae, the bacterium that causes gonorrhea, is a common pathogen affecting sexually active young adults. The most common way by which bacteria enter the joint space is through the circulatory system after a bloodstream infection. Microorganisms may also be introduced into the joint by penetrating trauma or surgery. Factors that increase the risk of septic arthritis include very young or old age (e.g., infants and the elderly), recent surgery or skin infection, preexisting arthritic condition, immunosuppression, chronic renal failure, and the presence of a prosthetic joint. Postinfectious arthritis is seen after a variety of infections.
Certain gastrointestinal infections, urinary tract infections, and upper respiratory tract infections can lead to arthritic symptoms after the infections themselves have resolved. Examples include Reiter syndrome and arthritis associated with rheumatic fever.
For more information about National Park Service air resources, please visit http://www.nature.nps.gov/air/. Air Pollution Impacts Voyageurs National Park Natural and scenic resources in Voyageurs National Park (NP) are susceptible to the harmful effects of air pollution. Mercury (and other air toxics), nitrogen, sulfur, ozone, and fine particles impact natural resources such as wildlife, surface waters, and vegetation, as well as visibility. Toxics, including heavy metals like mercury, accumulate in the tissue of organisms. When mercury converts to methylmercury in the environment and enters the food chain, effects can include reduced reproductive success, impaired growth and development, and decreased survival. Other toxic air contaminants of concern include pesticides (e.g., DDT), industrial by-products like PCBs, and emerging chemicals such as flame retardants for fabrics (PBDEs). Some of these are known or suspected to cause cancer or other serious health effects in humans and wildlife. Effects of Toxics/Mercury at Voyageurs NP include: - Mercury is currently widespread in the park’s aquatic ecosystems, and comes primarily from coal-fired power plant and taconite processing emissions. In addition to fish, elevated concentrations have been documented in water, lake sediments, zooplankton, aquatic plants, benthic organisms, fish-eating birds, bald eagles, and river otters (Kallemeyn et al. 2003 [pdf, 11.3 KB]; Wiener et al. 2006; Swackhamer and Hornbuckle 2004 [pdf, 4.5 MB]); - High mercury concentrations in fish from nearly all of Voyageurs NP’s 30 lakes (Sorensen et al. 2001). The average mercury level in fish exceeds the State of Minnesota fish consumption advisories (MN-DOH 2012), a concern since approximately 70 percent of visitors enjoy fishing in the park (Kallemeyn et al. 2003 [pdf, 11.3 KB]); - Mercury levels in walleye, pike, bass, and other fish from lakes in the park exceed thresholds known to damage fish health (Sandheinrich et al. 2011; NPS 2010 [pdf, 337 KB]), and at levels also known to harm loons that eat those fish (Sorensen et al. 2001); - Concentrations of mercury in loon blood at a level sufficient to reduce reproductive success (Evers et al. 2011a; Evers et al. 1998), and adult loon feather mercury concentrations above a level associated with risk of toxic effects (Scheuhammer and Blancher 1994); - Detectable levels of contaminants including mercury, PCBs, DDE, and dieldrin in feathers of bald eagle nestlings at the park (Pittman 2010 [pdf, 1.1 MB]). Mercury concentrations in nestling feathers have declined over time, but recent samples suggest a gradual increase (Pittman et al. 2011 [pdf, 1.8 MB]); - Elevated concentrations of toxic elements including mercury, cadmium, and chromium in lichens (Bennett 1997); - PFOS, a by-product in the manufacture of fabric protectors, firefighting foams, and other chemicals, detected in water samples at the park (Simcik and Dorweiler 2005). - Resource Brief: Monitoring Persistent Contaminants at Voyageurs (pdf, 337 KB) - Issue Brief: Mercury in National Parks of the Upper Midwest (pdf, 211 MB) - Investigate the extent and effects of mercury pollution in The Great Lakes Region through the Great Lakes Mercury Connections, a binational scientific study. Access a four-page summary (pdf, 3.9 MB) for key results or the full report (pdf, 8.3 MB).
Nitrogen (N) and sulfur (S) compounds deposited from air pollution can harm surface waters, soils, vegetation, and ecosystem biodiversity. The park’s thin, undeveloped soils, underlying granitic rock, and low buffering capacity result in surface waterways and soils at high risk from acidification by atmospheric N and S (Sullivan et al. 2011a; Sullivan et al. 2011b [pdf, 2.3 MB]). Some park resources may be sensitive to nutrient N enrichment from deposition. For example, in N-limited boreal lakes, increased N can affect biodiversity, algal communities, and water clarity (Sullivan et al. 2011c; Sullivan et al. 2011d [pdf, 6.2 MB]; Kallemeyn et al. 2003 [pdf, 11.3 KB]; Wiener et al. 2006; Swackhamer and Hornbuckle 2004 [pdf, 4.5 MB]). Concentrations of ammonium, an N compound and indicator of nearby agricultural activity, have increased in precipitation in the Great Lakes region in recent decades. During that same period, declines in nitrate concentrations have been observed; as a result, total nitrogen deposition remains elevated above natural conditions and relatively unchanged (NPS 2010 [pdf, 337 KB]; Lehmann et al. In Prep; Lehmann and Gay 2011). Sulfur emissions and resultant sulfate concentrations in precipitation have gone down more significantly in recent decades due to air pollution controls (Lehmann and Gay 2011). However, sulfur remains a concern at Voyageurs NP because it plays an essential role in the methylation of mercury, leading to toxic accumulation of methylmercury in fish and wildlife. In addition, sulfur is a strong driver of acidification in poorly buffered lakes and streams, and ecosystems at the park have been identified as being very sensitive to acidification effects (Sullivan et al. 2011a; Sullivan et al. 2011b [pdf, 2.3 MB]). How much nitrogen is too much? Nitrogen is a fertilizer and some nitrogen is necessary for plants to grow. However, in natural ecosystems, too much nitrogen can disrupt the balance of sensitive communities, such as diatoms, allowing weedy species to grow faster. Nitrogen deposited from air pollution may upset the balance of sensitive diatom communities in the boreal lakes of Voyageurs NP. A project is underway to analyze diatom community assemblages from 16 inland lakes at the park (Elias and Damstra 2011 [pdf, 2.1 MB]). Findings may shed light on possible changes in diatom community structure over time, and allow results to be compared to the critical load developed for similar boreal lakes near Voyageurs NP. Above that threshold of nitrogen deposition, communities may begin to change, with sensitive species gradually replaced by pollution-tolerant species. Critical loads for natural resources can be used to establish goals for ecosystem recovery. Naturally occurring ozone in the upper atmosphere absorbs the sun’s harmful ultraviolet rays and helps to protect all life on Earth. However, in the lower atmosphere, ozone is an air pollutant, forming when nitrogen oxides from vehicles, power plants, and other sources combine with volatile organic compounds from gasoline, solvents, and vegetation in the presence of sunlight. In addition to causing respiratory problems in people, ozone can injure plants. Ozone enters leaves through pores (stomata), where it can kill plant tissues, causing visible injury, or reduce photosynthesis, growth, and reproduction. There are a few ozone-sensitive plants in Voyageurs NP, including Apocynum androsaemifolium (spreading dogbane), Asclepias syriaca (common milkweed), and Prunus serotina (black cherry).
Because ozone exposure levels at the park are low, there is a low risk of ozone damage to plants (Kohut 2004 [pdf, 187 KB]). A review of monitoring results found no ozone injury to plants in regions near Voyageurs NP (Swackhamer and Hornbuckle 2004 [pdf, 4.5 MB]). Search the list of ozone-sensitive plant species (pdf, 184 KB) found at each national park. Visitors come to Voyageurs NP to enjoy the spectacular “North Woods,” a wilderness of interconnected waterways, and the distant howls of wolves. Unfortunately, park vistas are sometimes obscured by haze caused by fine particles in the air. Many of the same pollutants that ultimately fall out as nitrogen and sulfur deposition contribute to this haze and visibility impairment. Additionally, organic compounds, soot, and dust reduce visibility. Smoke from nearby forest fires also contributes to particulate matter in the region. Visibility effects at Voyageurs NP include: - Reduced visibility, at times, due to human-caused haze and fine particles of air pollution; - Reduction of the average natural visual range from about 110 miles (without pollution) to about 60 miles because of pollution at the park; - Reduction of the visual range to below 35 miles on very hazy days. (Source: IMPROVE 2010) Explore regional vistas via live webcams located throughout the Midwest. Studies and monitoring help the NPS understand the environmental impacts of air pollution. Access air quality data and see what is happening with Studies and Monitoring at Voyageurs NP.
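The visual-range figures above can be connected using two standard relations from visibility science: Koschmieder’s approximation, which links visual range to the light extinction coefficient, and the deciview haze index used by monitoring networks such as IMPROVE. The sketch below is illustrative only, assuming those textbook relations; it is not an NPS-published calculation, and the 10 inverse-megameter clean-air reference is a conventional choice rather than a Voyageurs-specific value.

```python
import math

# Koschmieder's approximation: visual range V (km) ~ 3.912 / b_ext (km^-1),
# i.e. V (km) ~ 3912 / b_ext when b_ext is in inverse megameters (Mm^-1).
# Haze index in deciviews: dv = 10 * ln(b_ext / 10), with 10 Mm^-1 as a
# conventional clean-air reference. Only the visual ranges quoted above
# come from the source; everything else is a standard approximation.

MILES_TO_KM = 1.609

def extinction_from_range(miles):
    """Extinction coefficient (Mm^-1) implied by a visual range in miles."""
    return 3912.0 / (miles * MILES_TO_KM)

def deciviews(b_ext):
    """Haze index (dv) for an extinction coefficient given in Mm^-1."""
    return 10.0 * math.log(b_ext / 10.0)

for label, miles in [("natural", 110), ("average hazy", 60), ("very hazy", 35)]:
    b_ext = extinction_from_range(miles)
    print(f"{label:>12}: {miles} mi -> b_ext ~ {b_ext:.0f} Mm^-1, ~{deciviews(b_ext):.1f} dv")
```

Run this way, the quoted drop from about 110 miles to about 60 miles corresponds to roughly a 6-deciview increase in haze, which is why even modest pollution levels are noticeable to visitors.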
Language, the principal means used by human beings to communicate with one another. Language is primarily spoken, although it can be transferred to other media, such as writing. If the spoken means of communication is unavailable, as may be the case among the deaf, visual means such as sign language can be used. A prominent characteristic of language is that the relation between a linguistic sign and its meaning is arbitrary: there is no reason other than convention among speakers of Kurdish that a dog should be called seg, and indeed other languages have different names (for example, Spanish perro, Russian sobaka, Japanese inu). Spoken human language is composed of sounds that do not in themselves have meaning, but that can be combined with other sounds to create entities that do have meaning. Thus p, e, and n do not in themselves have any meaning, but the combination pen does have a meaning. Language also is characterized by complex syntax whereby elements, usually words, are combined into more complex constructions, called phrases, and these constructions in turn play a major role in the structures of sentences. Let’s put the Kurdish language into this context in this section of KURDISTANICA.
disk is to cause precession, the axis of spin of the disk in Figs. 26 and 27 turns towards the vertical, bringing the right-hand side (with reference to the thrower) of the disk in Fig. 26 upwards and bringing the left-hand side of the disk in Fig. 27 upwards. Fig. 28 represents a propeller-wheel boomerang. In the following discussion the propeller is supposed to be right-handed, that is to say, if it were set spinning in the direction of the curved arrow S in Fig. 28, it would blow air towards the reader like a desk fan. Fig. 28 represents the boomerang as it leaves the hands of the thrower, who is supposed to be standing at MM, V being the direction in which the boomerang is thrown, and the curved arrow S representing the direction in which the boomerang is set spinning. The upper vane of the boomerang in Fig. 28 is traveling forwards at a greater velocity than the lower vane, because the forward velocity of the upper vane is the velocity of forward motion of the boomerang plus a forward velocity Sr which is due to the spinning motion of the boomerang, whereas the forward velocity of the lower vane is the forward velocity of the boomerang minus Sr. Fig. 29 is a top view of the boomerang as it leaves the hand of the thrower at M, V is the velocity of forward motion of the boomerang, and the arrow S represents the spin of the boomerang. The arrow F represents the force with which the air pushes sidewise against the upper vane because of propeller action. A force pushes sidewise in the same direction on the lower vane because of propeller action, but the sidewise force on the upper vane is the greater because of the greater velocity of the upper vane. The inequality of these forces constitutes a torque upon the boomerang, and this torque is represented by the arrow T in Fig. 29. The effect of
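The vane-velocity argument in this excerpt lends itself to a quick numerical check. The sketch below is a rough illustration rather than a model from the original text: it assumes the sidewise propeller force on each vane grows with the square of the vane’s airspeed (a common first approximation), and every number in it is invented for the example.

```python
# Rough numerical illustration of the vane-velocity asymmetry described
# above. Assumes the sidewise propeller force grows with the square of a
# vane's airspeed (a common first approximation; the original text gives
# no formula). All values below are invented for the example.

def vane_force(airspeed, k=1.0):
    """Sidewise force on a vane, proportional to airspeed squared."""
    return k * airspeed ** 2

V = 20.0   # forward speed of the boomerang
Sr = 6.0   # extra speed at the vane due to spin (spin rate S times radius r)
r = 0.3    # lever arm of each vane about the centre

upper = vane_force(V + Sr)   # upper vane: forward speed plus spin
lower = vane_force(V - Sr)   # lower vane: forward speed minus spin

# The unequal forces form a couple about the centre: the torque T of Fig. 29.
torque = (upper - lower) * r
print(f"upper force {upper:.0f}, lower force {lower:.0f}, torque ~ {torque:.0f}")
```

Even with a modest spin contribution, the squared dependence makes the upper-vane force several times the lower-vane force, which is why the resulting torque, and hence the precession, is so pronounced.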
It is hard to imagine today, but for most of humankind's evolutionary history, multiple humanlike species shared the earth. As recently as 40,000 years ago, Homo sapiens lived alongside several kindred forms, including the Neandertals and tiny Homo floresiensis. For decades scientists have debated exactly how H. sapiens originated and came to be the last human species standing. Thanks in large part to genetic studies in the 1980s, one theory emerged as the clear front-runner. In this view, anatomically modern humans arose in Africa and spread out across the rest of the Old World, completely replacing the existing archaic groups. Exactly how this novel form became the last human species on the earth is mysterious. Perhaps the invaders killed off the natives they encountered, or outcompeted them on their own turf, or simply reproduced at a higher rate. However it happened, the newcomers seemed to have eliminated their competitors without interbreeding with them.
- LA: Identify words and phrases in stories or poems that suggest feelings or appeal to the senses.
- LA: With guidance and support from adults, recall information from experiences or gather information from provided sources to answer a question.
- LA: Describe people, places, things, and events with relevant details, expressing ideas and feelings clearly.
- LA: Demonstrate understanding of word relationships and nuances in word meanings.
- LA: Use words and phrases acquired through conversations, reading and being read to, and responding to texts, including using adjectives and adverbs to describe.
- LA: Participate in collaborative conversations with diverse partners about grade level topics and texts with peers and adults in small and larger groups.
- SCI: Make observations of plants and animals to compare the diversity of life in different habitats.
- VA: Students will investigate, plan and work through materials and ideas to make works of art and design.
The Constitution: Rules for Running a Country A great introduction to the Constitution through which students explore why a constitution is necessary in the efficient and fair running of the country. As each topic is explained, students are asked to express what they think is important. Learn about the Preamble of the Constitution, the Bill of Rights, additional amendments, and the role states play in a federal government. A link to the Census Bureau gives facts about individual states and a brief lesson plan for teachers on how to use the facts. The iCivics site was founded by Justice Sandra Day O’Connor to increase the understanding of civics in the classroom. This is a very accessible activity for late elementary and middle school students. courtesy of Knovation
Why do this problem? This problem will help consolidate children's understanding of, and familiarity with, odd and even numbers. It also provides an opportunity for learners to explain and generalise. Depending on your pupils' past experience, you may want to begin with some models of some numbers made from multilink cubes. You could put the cubes together so that they are paired, for example this would be the model for $9$: Put your multilink models out for the children to see and invite them to talk about the models. You could lead on to ordering the models numerically and pupils could make their own models to fill in some of the missing numbers. Having made, for example, models of the numbers from $1$ to $12$ and placed them in numerical order, you could pose a few questions to focus on odds and evens, if this has not already come up in conversation. For instance, you could ask whether there is anything the same about any of the models. Alternatively, you could group the models into two sets, one of odds and one of evens and invite the group to talk about what you've done. You could then pose the first challenge in the problem and step back for learners to have a go in pairs. You may need to have a conversation about what 'between' means, so that, for example, the whole numbers between $3$ and $11$ would not include $3$ or $11$. Allow them to choose how they tackle it - this will be a great assessment opportunity for you. Once they have had sufficient time, talk together about their methods and conclusions. You can then go on to the other parts of the problem, again leaving it up to the children to decide how they approach it and how they record their solutions. If possible, give the group plenty of time to find lots of examples at each stage. You could leave the final part of the problem to the plenary. Talk together about ways of finding two numbers which have, for example, ten odds between them. You may be able to encourage some children to articulate how they would find a pair with any number of odds between them. How will you find out how many odd numbers there are between $3$ and $11$, $4$ and $11$ etc.? Tell me about what you're doing. How will you remember what you've found so far? You could challenge some children to find good ways of predicting the number of odd numbers between any two numbers. How about investigating the number of even numbers between a pair of numbers? It would be good to have multilink and number lines available along with plenty of blank paper and pencils and small whiteboards. Of course some children may want to use particular equipment which you hadn't thought of, so do allow them to choose whatever suits them best.
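For teachers who want a quick way to check answers, or to let older children test their predictions on a computer, the counting can be expressed in a few lines of code. This is a minimal sketch assuming the exclusive sense of 'between' agreed above; the function name is my own.

```python
def odds_between(a, b):
    """Count the odd whole numbers strictly between a and b.

    'Between' is taken exclusively, matching the convention discussed
    above: the odd numbers between 3 and 11 are 5, 7 and 9.
    """
    lo, hi = min(a, b), max(a, b)
    return sum(1 for n in range(lo + 1, hi) if n % 2 == 1)

print(odds_between(3, 11))   # 3  (5, 7 and 9)
print(odds_between(4, 11))   # 3  (5, 7 and 9)
print(odds_between(4, 12))   # 4  (5, 7, 9 and 11)
```

Children's 'good ways of predicting' amount to noticing that the count is always close to half the gap between the two numbers, with the exact answer depending on whether the endpoints are odd or even.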
The Japanese spacecraft Akatsuki has captured the night side of Venus. The probe's mission is to study the atmosphere of the second planet from the Sun. In the image, the Venusian clouds are shown in infrared light. Since arriving in Venusian orbit in 2015, Akatsuki has tirelessly sent back to Earth remarkable pictures of the cloud world and data about its atmosphere. Earlier, the spacecraft captured on video the rotation of Venus and its clouds. "Akatsuki carries cameras and instruments that will explore the planet's unknowns: whether there are still active volcanoes, whether lightning forms in the dense atmosphere, and why wind speeds are so much greater than the speed of the planet's rotation," NASA commented on the photo. The orange line that separates the day and night sides of Venus looks rather wide. This is due to the interaction of the planet's clouds with sunlight. Akatsuki is not the only orbital craft to have taken a look at Venus. The USA, Europe, and Russia (as the USSR) have also sent probes to the cloud world.
Issue 3: Distribution of Other Language Families • Classification of languages • Distribution of language families – Sino-Tibetan language family – Other East and Southeast Asian language families – Afro-Asiatic language family – Altaic and Uralic language families – African language families Language Families of the World Fig. 5-11: Distribution of the world’s main language families. Languages with more than 100 million speakers are named. Major Language Families: Percentage of World Population Fig. 5-11a: The percentage of world population speaking each of the main language families. Indo-European and Sino-Tibetan together represent almost 75% of the world’s people. Language Family Trees Fig. 5-12: Family trees and estimated numbers of speakers for the main world language families. Sino-Tibetan Family • The Sino-Tibetan family encompasses languages spoken in the People’s Republic of China as well as several smaller countries in Southeast Asia. Sinitic Branch – Chinese Languages • There is no single Chinese language. • Spoken by approximately three-fourths of the Chinese people, Mandarin is by a wide margin the most used language in the world. • Other Sinitic branch languages are spoken by tens of millions of people in China. • The Chinese government is imposing Mandarin countrywide. Structure of Chinese Language • The structure of Chinese languages is quite different (from Indo-European). • They are based on 420 one-syllable words. • This number far exceeds the possible one-syllable sounds that humans can make, so Chinese languages use each sound to denote more than one thing. • The listener must infer the meaning from the context in the sentence and the tone of voice the speaker uses. • In addition, two one-syllable words can be combined. Chinese Ideograms Fig. 5-13: Chinese language ideograms mostly represent concepts rather than sounds. The two basic characters at the top can be built into more complex words. Austro-Thai and Tibeto-Burman In addition to the Chinese languages included in the Sinitic branch, the Sino-Tibetan family includes two smaller branches, Austro-Thai and Tibeto-Burman. Distinctive Language Families - Japanese • Chinese cultural traits have diffused into Japanese society, including the original form of writing the Japanese language. • Japanese is written in part with Chinese ideograms, but it also uses two systems of phonetic symbols. Distinctive Language Families - Korean • Korean is usually classified as a separate language family. • Korean is written not with ideograms but in a system known as hankul. • In this system, each letter represents a sound. Distinctive Language Families - Vietnamese • Austro-Asiatic, spoken by about 1 percent of the world’s population, is based in Southeast Asia. • Vietnamese is the most spoken tongue of the language family. • The Vietnamese alphabet was devised in the seventeenth century by Roman Catholic missionaries. Afro-Asiatic Language Family • The Afro-Asiatic language family, once referred to as Semito-Hamitic, includes Arabic and Hebrew, as well as a number of languages spoken primarily in northern Africa and southwestern Asia. • Arabic is the major Afro-Asiatic language, an official language in two dozen countries of North Africa and southwestern Asia, from Morocco to the Arabian Peninsula. Altaic and Uralic Language Families • The Altaic and Uralic language families were once thought to be linked as one family because the two display similar word formation, grammatical endings, and other structural elements.
• Recent studies, however, point to geographically distinct origins. Altaic Languages Uralic Languages • Every European country is dominated by Indo-European speakers, except for three: Estonia, Finland, and Hungary. • The Estonians, Finns, and Hungarians speak languages that belong to the Uralic family, first used 7,000 years ago by people living in the Ural Mountains north of the Kurgan homeland. Language Families of Africa Fig. 5-14: The 1,000 or more languages of Africa are divided among five main language families, including Austronesian languages in Madagascar. Niger-Congo Language Family • More than 95 percent of the people in sub-Saharan Africa speak languages of the Niger-Congo family, which includes six branches with many hard-to-classify languages. • The remaining 5 percent speak languages of the Khoisan or Nilo-Saharan families. • The largest branch of the Niger-Congo family is the Benue-Congo branch, and its most important language is Swahili. • Its vocabulary has strong Arabic influences. • Swahili is one of the few African languages with an extensive literature. Nilo-Saharan Language Family • Nilo-Saharan languages are spoken by a few million people in north-central Africa, immediately north of the Niger-Congo language region. • The best known of these languages is Maasai, spoken by the tall warrior-herdsmen of east Africa. Khoisan Language Family • The third important language family of sub-Saharan Africa—Khoisan—is concentrated in the southwest. • Khoisan languages use clicking sounds. Austronesian Language Family About 6 percent of the world’s people speak an Austronesian language, once known as the Malay-Polynesian family. The most frequently used Austronesian language is Malay-Indonesian. The people of Madagascar speak Malagasy, which belongs to the Austronesian family, even though the island is separated by 3,000 kilometers (1,900 miles) from any other Austronesian-speaking country. Languages of Nigeria • Africa’s most populous country, Nigeria, displays problems that can arise from the presence of many speakers of many languages. • Groups living in different regions of Nigeria have often battled. • Nigeria reflects the problems that can arise when great cultural diversity—and therefore language diversity—is packed into a relatively small region. Fig. 5-15: More than 200 languages are spoken in Nigeria, the largest country in Africa (by population). English, considered neutral, is the official language.
Quantitative easing (QE) is an unconventional monetary policy used by some central banks to stimulate their economy when conventional monetary policy has become ineffective. The central bank buys government bonds and other financial assets, with new money that the bank creates electronically, in order to increase the money supply and the excess reserves of the banking system. This action also raises the prices of the financial assets bought, which lowers their yield (as long as the yield is above zero). Quantitative easing shifts monetary policy instruments away from interest rates, towards targeting the quantity of money. However, the goals of monetary policy (including inflation targets) remain unchanged. Expansionary monetary policy normally involves a lowering of short-term interest rates by the central bank through the buying of short-term government bonds (termed open market operations). However, when short-term interest rates are either at, or close to, zero, normal monetary policy can no longer function, as the purchase of short-term government bonds will no longer lower interest rates. Quantitative easing may then be used by the monetary authorities to further stimulate the economy, by expanding the excess reserves in the banking system and lowering interest rates further out on the yield curve. Risks include the policy being more effective than intended, or not being effective enough if banks opt simply to sit on the additional cash in order to increase their capital reserves in a climate of increasing defaults in their present loan portfolios. The US Federal Reserve held between $700 billion and $800 billion of Treasury notes on its balance sheet even before the recession. In late November 2008, the Fed started buying $600 billion of mortgage-backed securities (MBS). By March 2009, it held $1.75 trillion of bank debt, MBS, and Treasury notes, and reached a peak of $2.1 trillion in June 2010. Further purchases were halted since the economy had started to improve. Holdings started falling naturally as debt matured. In fact, holdings were projected to fall to $1.7 trillion by 2012. However, in August 2010 the Fed decided to renew quantitative easing because the economy wasn’t growing robustly. Its goal was to keep holdings at the $2.054 trillion level. To maintain that level, the Fed bought $30 billion in 2–10 year Treasury notes a month. In November 2010, the Fed announced it would increase quantitative easing, buying $600 billion of Treasury securities by the end of the second quarter of 2011. Ordinarily, a central bank conducts monetary policy by raising or lowering its interest rate target for the interbank interest rate. The central bank achieves its interest rate target through open market operations – where the central bank buys or sells short-term government bonds in exchange for cash. When the central bank disburses or collects payment for these bonds, it alters the amount of money in the economy, while simultaneously affecting the price (and thereby the yield) of short-term government bonds. This in turn affects interbank interest rates. In some situations, such as with very low inflation, or in the presence of deflation, the central bank can no longer lower the target interest rate, as the interbank interest rates are either at, or close to, zero. In such a situation, referred to as a liquidity trap, quantitative easing may be employed to further boost the amount of money in the financial system. This is often considered a “last resort” to stimulate the economy.
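The inverse relationship between the price a central bank pays for a bond and the yield that bond then offers, noted above, is simple arithmetic for a one-year zero-coupon bond: yield = face value / price − 1. The sketch below is purely illustrative; the prices are invented and do not represent any actual QE operation.

```python
def one_year_yield(price, face=100.0):
    """Yield of a one-year zero-coupon bond bought at `price`."""
    return face / price - 1.0

# Before large-scale purchases: suppose the bond trades at 97.
print(f"{one_year_yield(97.0):.2%}")    # about 3.09%

# Central-bank buying bids the price up to 99: the yield falls.
print(f"{one_year_yield(99.0):.2%}")    # about 1.01%

# At the face value itself the yield reaches zero, which is why the text
# notes that buying lowers yields only "as long as the yield is above zero".
print(f"{one_year_yield(100.0):.2%}")   # 0.00%
```

The same mechanism, applied to bonds further out on the yield curve, is how QE lowers longer-term interest rates once short-term rates are already near zero.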
- The central bank has previously targeted an extremely low rate of interest, near or at zero percent. - The central bank credits its own bank account with money it creates electronically. - The central bank buys government bonds (including long-term government bonds) or other financial assets, from commercial banks or other financial institutions, with the newly created money. According to the IMF, the quantitative easing policies undertaken by the central banks of the major developed countries since the beginning of the late-2000s financial crisis have contributed to the reduction in systemic risks following the bankruptcy of Lehman Brothers. They have also contributed to the recent improvements in market confidence, and the bottoming out of the recession in the G-7 economies. Economist Martin Feldstein argues that QE2 led to a rise in the stock market in the second half of 2010, which in turn contributed to increasing consumption and the strong performance of the US economy in late 2010.
Defining native title: the Mabo decision The High Court determined that Indigenous peoples should be treated equally before the law with regard to their rights over land. The Court rejected any position in law that would discriminate against Indigenous peoples by denying the existence of rights that had been enjoyed freely prior to colonisation and continued to be exercised. In this way, it has been said that the myth of terra nullius, which asserted that the land belonged to no-one, was rejected. The idea that no rights existed in land except those granted by the 'Crown', or the sovereign governments, was also reassessed. The term 'native title' was used in the Mabo judgements to describe the interests and rights of Indigenous inhabitants in law, whether communal, group or individual, possessed under the traditional laws acknowledged, and the traditional customs observed, by the Indigenous inhabitants (Brennan J, p57). It was an important aspect of the decision to recognise that native title predates the assertion of sovereignty by the British. It is not a grant from the Crown like other titles under Australian law. Native title is unique in this sense, when compared with other interests in law. It is inherent to Indigenous peoples by virtue of their status as first peoples and the first owners of this land. Native title does not depend on government for its existence, but it did require recognition through the common law in order to be enforceable in the Australian legal system. Keywords: crown land, cultural preservation, doctrine of tenure, High Court judgement, Mabo judgement, terra nullius Author: Strelein, Lisa
Using language and communicating with other people can be a challenge for many children with autism spectrum disorder (ASD). But with help and understanding, your child can develop communication skills. Communication and autism spectrum disorder: the basics Children with autism spectrum disorder (ASD) can find it hard to relate to and communicate with other people. They might be slower to develop language, have no language at all, or have significant difficulties in understanding or using spoken language. Children with ASD often don’t understand that communication is a two-way process that uses eye contact, facial expressions and gestures as well as words. It’s a good idea to keep this in mind when helping them develop language skills. Some children with ASD develop good speech but can still have trouble knowing how to use language to communicate with other people. They might also communicate mostly to ask for something or protest about something, rather than for social reasons, like getting to know someone. How well a child with ASD communicates is important for other areas of development, like behaviour and learning. Communication is the exchange of thoughts, opinions or information by speech, writing or nonverbal expression. Language is communication using words – written, spoken or signed (as in Auslan). How children with autism spectrum disorder communicate Sometimes children with autism spectrum disorder (ASD) don’t seem to know how to use language, or how to use language in the same ways as typically developing children. Unconventional use of language Many children with ASD use words and verbal strategies to communicate and interact, but they might use language in unusual ways. For example, echolalia is common in children with ASD. This is when children mimic words or phrases without meaning or in an unusual tone of voice. They might repeat someone’s words straight away, or much later on. They might also repeat words they’ve heard on TV, YouTube or videos as well as in real life. Children with ASD also sometimes: - use made-up words, which are called neologisms - say the same word over and over - confuse pronouns and refer to themselves as ‘you’, and the person they’re talking to as ‘I’. These are often attempts to get some communication happening, but they don’t always work because you can’t understand what the child is trying to say. For example, children with echolalia might learn to talk by repeating phrases they associate with situations or emotional states, learning the meanings of these phrases by finding out how they work. A child might say ‘Do you want a lolly?’ when she actually wants one herself. This is because when she’s heard that question before, she’s got a lolly. Over time, many children with ASD can build on these beginnings and learn to use language in ways that more people can understand. Nonverbal communication Children with ASD might also communicate in nonverbal ways. These might include: - physically manipulating a person or object – for example, taking a person’s hand and pushing it towards something the child wants - pointing, showing and shifting gaze – for example, a child looks at or points to something he wants and then shifts his gaze to another person, letting that person know he wants the object - using objects – for example, the child hands an object to another person to communicate. Many children with ASD behave in difficult ways, and this behaviour is often related to communication.
For example, self-harming behaviour, tantrums and aggression towards others might be a child’s way of trying to tell you that she needs something, isn’t happy or is really confused or frightened. If your child behaves in difficult ways, try to look at situations from your child’s perspective to work out the message behind your child’s behaviour. Our article on managing challenging behaviour in children with ASD explains how to work out what’s triggering your child’s behaviour. How and why communication develops in children with autism spectrum disorder Children’s reasons for communicating are fairly simple – they communicate because they want something, because they want attention, or for more social reasons. Typically developing children can usually communicate for all these reasons, and their ability to communicate in all these ways comes at about the same time. But it’s different in children with autism spectrum disorder (ASD), who develop the ability to communicate in these ways over time. First, they use communication to control another person’s behaviour, to ask for something, to protest or to satisfy physical needs. Next comes communication to get or maintain someone’s attention – for example, a child might ask to be comforted, say hello or even show off. Last, and most difficult, are the communication skills children need to direct another person’s attention to an object or an event for social reasons. Your child’s level of communication For children with autism spectrum disorder (ASD), communication develops step by step, so it’s important to work step by step with your child. For example, if crying in the kitchen is the only way your child asks for food, it might be too hard for him if you’re trying to teach him to say ‘food’ or ‘hungry’. Instead, you could try working on skills that are just one step on from where he is right now – for example, reaching towards or pointing to the food that he wants. Once he starts reaching or pointing, you can work on getting eye contact. You can help your child develop these skills by praising her when she looks at you and by labelling items, like ‘bickies’. Is your child communicating to ask for things? Is he asking for comfort or saying hello? Is he showing things to you, like his drawings or a plane in the sky? If you’re looking at strategies and therapies to improve your child’s communication, knowing what level of communication your child is using right now can help you choose the best way to move forward. Making the most of your child’s attempts to communicate You can expect communication from your child with autism spectrum disorder (ASD), even if it’s not the same as the way other children communicate. Here are some ways you can encourage communication with your child: - Use short sentences – for example, ‘Shirt on. Hat on’. - Use less mature language – for example, ‘Playdough is yucky in your mouth’. - Exaggerate your tone of voice – for example, ‘Ouch, that water is VERY hot’. - Encourage and prompt your child to fill the gap when it’s her turn in a conversation – for example, ‘Look at that dog. What colour is the dog?’ - Ask questions that need a reply from your child – for example, ‘Do you want a sausage?’ If you know your child’s answer is yes, you can teach your child to nod his head in reply by modelling this for him. - Make enough time for your child to respond to questions. Eye contact is a key part of nonverbal communication.
It helps other parts of communication, like being able to notice another person’s facial expression and take emotion into account in your communication. Here are some ideas to encourage eye contact from your child: - Hold an object your child wants in front of your eyes so your child looks at your eyes as she looks towards the object. - Hold onto an object your child wants for a few extra seconds before letting your child take it. This encourages your child to look towards your face when he doesn’t get the object immediately.
Question: I’ve heard some trees have leaves that emit a substance that can harm or kill plants. What trees are they and should I treat their leaves differently during fall cleanup? A black walnut tree. Answer: You’re thinking of black walnut trees (Juglans nigra) and butternuts (J. cinerea). They produce a substance called juglone, which is harmful to tomato plants, potatoes, blackberry and blueberry bushes, azaleas, rhododendrons and mountain laurels, apple trees and red pines. The toxin is secreted from the tree’s roots; susceptible plants growing within the root zone (roughly the width of the tree’s canopy) or at its edges sicken and die. Juglone is present in the tree’s leaves, bark and wood, but less so than in its roots. You can add the leaves to your compost pile; within four weeks the juglone will break down. If you want to be doubly sure, you can compost the leaves separately from your main compost pile and test to be sure the finished compost is safe by planting a tomato seedling in a pot of it. If the seedling survives, the juglone is gone. If you’re thinking of shredding the leaves and placing them on your garden as a mulch, be aware that it may take a full two months for the juglone to break down once it gets into the soil as the leaves decompose. This is something to keep in mind if you plan to plant juglone-sensitive plants in that area next spring. Many plants have no problem with juglone. See The Ohio State University’s lists of juglone-tolerant and juglone-sensitive plants.
According to the 1996 U.S. Surgeon General's Report on Physical Activity and Health, people of all ages who are generally inactive can improve their health and well-being by becoming even moderately active on a regular basis. Regular physical activity that is performed on most days of the week reduces the risk for developing or dying from some of the leading causes of illness in the United States, such as heart disease. Regular physical activity can also improve health in the following ways: - Reduces the risk for dying prematurely - Reduces the risk for dying from heart disease - Reduces the risk for developing diabetes - Reduces the risk for developing high blood pressure - Helps reduce blood pressure in people who already have high blood pressure - Reduces the risk for developing colon cancer - Reduces feelings of depression and anxiety - Helps control weight - Helps build and maintain healthy bones, muscles, and joints - Helps older adults become stronger and better able to move about without falling - Promotes psychologic well-being Although research has been limited, evidence so far indicates that aspects of the home, workplace, and community environments influence a person's level of physical activity. For example, the availability and accessibility of attractive stairwells, bicycle paths, walking paths, exercise facilities, and swimming pools, as well as the overall aesthetics and perceived safety of an environment, may play a role in determining the type and amount of physical activity people engage in. According to the U.S. Department of Health and Human Services report Physical Activity Fundamental to Preventing Disease, "Encouraging more activity can be as simple as establishing walking programs at schools, work sites and in the community. Some communities have an existing infrastructure that supports physical activity, such as sidewalks and bicycle trails, and worksites, schools, and shopping areas in close proximity to residential areas. In many other areas, such community amenities need to be developed to foster walking, cycling, and other types of exercise as a regular part of daily activity." Being physically active helps combat problems that can result from a sedentary lifestyle, such as obesity, diabetes, and heart disease. According to results of the 1999-2000 National Health and Nutrition Examination Survey (NHANES), an estimated 64% of U.S. adults aged 20 years and older are classified as overweight or obese. Among U.S. adults, obesity has doubled since 1980, increasing from 15% in 1980 to 31% in 2000, and the percentage of children and adolescents who are defined as overweight has more than doubled since the early 1970s. 
Overweight and obese adults are at increased risk for physical ailments such as-- - Coronary heart disease - Congestive heart failure - High blood pressure - High blood cholesterol (dyslipidemia) - Type 2 noninsulin-dependent diabetes - Obstructive sleep apnea and respiratory problems - Some types of cancer (such as endometrial, breast, prostate, and colon) - Complications of pregnancy (such as gestational diabetes, gestational hypertension, and preeclampsia) as well as complications in operative delivery (i.e., cesarean sections) - Poor female reproductive health (such as menstrual irregularities, infertility, and irregular ovulation) - Psychologic disorders (such as depression, eating disorders, distorted body image, and low self-esteem) An estimated 17 million Americans have diabetes, and about one third of those people affected are unaware of their condition. About one million new cases are diagnosed every year in the United States. Not only is diabetes the seventh leading cause of death among Americans, it also is the leading cause of new cases of blindness, kidney failure, and lower extremity amputations and greatly increases a person's risk for heart attack or stroke. Diabetes accounts for more than $98 billion in direct and indirect medical costs and lost productivity each year. The progression of diabetes can be delayed by-- - Preventing obesity - Focusing on improved nutrition - Engaging in regular physical activity - Controlling blood sugar levels - Improving access to services Research studies in the United States and abroad have found that lifestyle changes, such as consistent moderate intensity physical activity and a healthy diet, may reduce a person's risk for developing type 2 diabetes by 40% to 60%. Heart Disease and Stroke More than 61 million Americans have some form of cardiovascular disease (CVD), including high blood pressure, coronary heart disease, stroke, congestive heart failure, and other conditions. More than 2,600 Americans die each day of CVD. That is an average of 1 death every 33 seconds. CVDs cost the nation an estimated $300 billion annually, including health expenditures and lost productivity. Research conducted in California, Minnesota, and Rhode Island during the 1980s demonstrated how community interventions that improve our environment are particularly effective in reducing heart disease and stroke throughout the entire community. For more information on physical activity, refer to the following resources: Environmental and Policy Approaches to Increase Physical Activity: Community-Scale Urban Design Land Use Policies The CDC-funded Task Force on Community Preventive Services recommends design and land use policies and practices that support physical activity in urban areas of several square miles or more based on sufficient evidence of effectiveness in facilitating an increase in physical activity. The recommendation is based on a systematic review of all available studies, conducted on behalf of the Task Force by a team of specialists in systematic review methods, and in research, practice and policy related to physical activity. 2008 Physical Activity Guidelines for Americans The Federal Government has issued its first-ever Physical Activity Guidelines for Americans. They describe the types and amounts of physical activity that offer substantial health benefits to Americans. Humpel N, Owen N, Leslie E. Environmental factors associated with adults’ participation in physical activity, a review. Am J Prev Med 2002;22(3):188-99. U.S. 
Department of Health and Human Services. Physical activity fundamental to preventing disease 2002 June 20. http://aspe.hhs.gov/health/reports/physicalactivity/ Centers for Disease Control and Prevention, The President's Council on Physical Fitness and Sports. Physical activity and health at-a-glance. A report of the surgeon general 1996. http://www.cdc.gov/nccdphp/sgr/ataglan.htm Active Living Research’s literature database The Active Living Research online literature database features papers which study the relationship between environment and policy with physical activity and obesity. The purpose of the searchable database is to make detailed information on study characteristics and results accessible to all and to improve the use of studies for research and policy purposes. For more information on obesity and weight management, refer to the following resources: Centers for Disease Control and Prevention and National Center for Health Statistics - Prevalence of overweight and obesity among adults: United States, 1999-2002: http://www.cdc.gov/nchs/data/hestat/obese/obese99.htm - Prevalence of overweight among children and adolescents: United States, 1999-2002: http://www.cdc.gov/nchs/data/hestat/overweight/overweight99.htm Centers for Disease Control and Prevention, Division of Nutrition, Physical Activity and Obesity - Overweight and obesity page: http://www.cdc.gov/nccdphp/dnpa/obesity/ - Healthy weight page: http://www.cdc.gov/nccdphp/dnpa/healthyweight/index.htm For more information on diabetes, refer to the following resources: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion - National diabetes fact sheet: general information and national estimates on diabetes in the United States 2003: http://www.cdc.gov/diabetes/pubs/factsheet.htm - Diabetes page: http://www.cdc.gov/nccdphp/publications/aag/ddt.htm For more information on heart disease and stroke, refer to the following resources: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion heart disease and stroke prevention page: http://www.cdc.gov/nccdphp/publications/AAG/dhdsp.htm For more information related to physical activity, refer to the following resources: American Heart Association. Creating Spaces: Changing the Built Environment to Promote Active Living [PDF - 136 KB]. Washington, DC: American Heart Association; 2012. The fact sheet recommends increasing physical activity opportunities and recreational spaces where people live, work, learn and play so that people can become or stay more physically fit. Saelens BE, Sallis JF, Frank LD. Environmental correlates of walking and cycling: findings from the transportation, urban design, and planning literatures. Ann Behav Med, 2003; 25:80-91. Lefebvre RC, Lasater TM, Carleton RA, et al. Theory and delivery of health programming in the community: the Pawtucket Heart Health Program. Prev Med 1987;16:80-95. Additional information on physical activity and related topics can be found in the Additional Resources section. Reference used to develop this article: American Heart Association. 2002 heart and stroke statistical update. Dallas: AHA; 2000.
Wetlands are areas where water covers the soil, or is present either at or near the surface of the soil for at least part of the growing season. The occurrence and flow of water (hydrology) largely determine how the soil develops and the types of plant and animal communities living in and on the soil. Wetlands may support both aquatic and terrestrial species. The prolonged presence of water creates conditions that favor the growth of specially adapted plants (hydrophytes) and promote the development of characteristic wetland (hydric) soils. Wetlands vary widely because of regional and local differences in soils, topography, climate, hydrology, water chemistry, vegetation, and other factors, including human disturbance. Indeed, wetlands are found from the tundra to the tropics and on every continent except Antarctica. Two general categories of wetlands are recognized: tidally influenced wetlands and nontidal (or inland) wetlands. The Massachusetts coastal zone, which covers the land in the coastal plain immediately adjacent to coastal waters, contains both tidal and nontidal wetlands. Coastal wetlands in the United States, as their name suggests, are found along the Atlantic, Pacific, Alaskan, and Gulf coasts. They are often found in estuaries, areas where sea water mixes with fresh water to form an environment of varying salinities. Estuaries are usually somewhat protected from the full impacts of ocean waves and are often shallow. The salt water and the fluctuating water levels (due to tidal action) combine to create a constantly varying and challenging environment for most plants. Consequently, many shallow coastal areas are unvegetated mud flats or sand flats due to frequently shifting sand and sediments. Some plants, however, have successfully adapted to this environment. Certain grasses and grass-like plants (or graminoids, including sedges and rushes) that adapt to the saline conditions form the tidal salt marshes found along the Atlantic, Gulf, and Pacific coasts. Mangrove swamps, with salt-loving shrubs or trees, are common in tropical climates, such as in southern Florida and Puerto Rico. Some tidal freshwater wetlands form just beyond the upper edges of the reach of tidal flow, because salt water is denser and heavier than fresh water. The freshwater flows of rivers and streams that feed into the ocean are therefore lifted above and flow on top of the incoming salty tidal waters. While water levels in these areas rise and fall with the tides, the plants and life along the river banks experience only freshwater conditions. Specially adapted plants, such as wild rice, thrive in these unique areas, which are subject to brackish (partly salty) water only under rare stormy conditions and extreme tides. Marine, or salt marsh, wetlands are located along the marine-influenced shoreline. These wetlands can occur as broad meadows where the topography is relatively flat or as narrow fringes adjacent to steep slopes. They can be associated with these land forms: - Barrier Beaches - Barrier Islands Brackish wetlands are generally located in areas that are influenced both by marine tidal waters and fresh waters. They are typically located at the upper reaches of estuaries but can also be found along the marine-influenced shoreline in areas where there are significant fresh groundwater seeps.
In addition, natural and manmade restrictions to tidal flow, such as shoals or roadway culverts, can lead to the transition from a salt marsh wetland to a brackish marsh wetland. The variability in salt content is a critical factor controlling the variability in plant and animal life in coastal systems due to the sensitivity of living things to salt. Inland wetlands are most common on floodplains along rivers and streams (riparian wetlands), in isolated depressions surrounded by dry land (for example, basins and "potholes"), along the margins of lakes and ponds, and in other low-lying areas where the groundwater intercepts the soil surface or where precipitation sufficiently saturates the soil (vernal pools and bogs). Inland wetlands include marshes and wet meadows dominated by herbaceous plants, swamps dominated by shrubs, and wooded swamps dominated by trees. Certain types of inland wetlands are common to particular regions of the country: bogs and fens of the northeastern and north-central states and Alaska; inland saline and alkaline marshes of the arid and semiarid west; prairie potholes of Iowa, Minnesota, and the Dakotas; playa lakes of the southwest and Great Plains; and bottomland hardwood swamps of the south. Many of these wetlands are seasonal and, particularly in the arid and semiarid West, may be wet only periodically. The quantity of water present and the timing of its presence in part determine the functions of a wetland and its role in the environment. Even wetlands that appear dry at times for significant parts of the year, such as vernal pools, often provide critical habitat for wildlife adapted to breeding exclusively in these areas. The most common nontidal wetlands found in the Massachusetts coastal zone include depressional, riverine, and lacustrine wetlands. Depressional wetlands are surrounded or nearly surrounded by uplands and lack a channelized stream (small order or intermittent streams may enter or exit this type of wetland, but there is no flow-through channel). Riverine wetlands are associated with flowing water systems (such as rivers, creeks, perennial streams, intermittent streams, and similar waterbodies) and contiguous wetlands. These wetland types are often fringing wetlands of small width along river edges or occasionally meadows where slopes flatten out, and can be classified as follows according to the river gradient: - High (rapid water flow) - Mid (fast to moderate) - Low (slow) Dams and other blockages in rivers and streams also contribute to slowing of water flows, which frequently results in broader wetland meadows. Lacustrine wetlands are associated with large standing waterbodies (such as lakes and reservoirs) and contiguous wetlands formed in the lake basin. Lacustrine wetlands can be: - Fringe, which are connected to the surrounding upland and form around the edges of lakes and around islands that sit in the middle of a water body. - Island, which are fringes around the perimeter of an island but not connected to the surrounding upland in the larger landscape. Wetlands are among the most productive ecosystems in the world, comparable to rain forests and coral reefs. An immense variety of species of microbes, plants, insects, amphibians, reptiles, birds, fish, and mammals can be part of a wetland ecosystem. Physical and chemical features such as climate, landscape shape (topography), geology, and the movement and abundance of water help to determine the plants and animals that inhabit each wetland.
The complex, dynamic relationships among the organisms inhabiting the wetland environment are referred to as food webs. Wetlands provide great volumes of food that attract many animal species, which use wetlands for part or all of their life cycle. Dead plant leaves and stems break down in the water to form small particles of organic material called "detritus." This enriched material feeds many small aquatic insects, shellfish, and small fish that are food for larger predatory fish, reptiles, amphibians, birds, and mammals.

The biological, chemical, and physical operations and attributes of a wetland are known as wetland functions. Some typical wetland functions include wildlife habitat and food chain support, surface water retention or detention, groundwater recharge, and nutrient transformation. Distinct from these intrinsic natural functions are human uses of and interaction with wetlands. Society's utilization and appraisal of wetland resources is referred to as wetland values, which include support for commercially valuable fish and wildlife, flood control, supply of drinking water, enhancement of water quality, and recreational opportunities.

A watershed is a geographic area in which water, sediments, and dissolved materials drain from higher elevations to a common low-lying outlet, basin, or point on a larger stream, lake, underlying aquifer, or estuary. Wetlands play an integral role in the ecology and hydrology of the watershed. The combination of shallow water, high levels of nutrients, and high primary productivity is ideal for the growth of organisms that form the base of the food web and feed many species of fish, amphibians, shellfish, and insects. Many species of birds and mammals rely on wetlands for food, water, and shelter, especially during migration and breeding.

Wetlands' microbes, plants, and wildlife are part of global cycles for water, nitrogen, and sulfur. Furthermore, scientists are beginning to realize that atmospheric maintenance may be an additional wetlands function. Wetlands store carbon within their plant communities and soil instead of releasing it to the atmosphere as carbon dioxide, and thus help to moderate global climate conditions. The specific benefits of wetlands for water quality, flood protection, shoreline erosion control, and fish and wildlife habitat are discussed below.

Wetlands have important filtering capabilities, intercepting surface water runoff from higher dry land before the runoff reaches open water. As the runoff water passes through, the wetlands retain excess nutrients and some pollutants and reduce sediment that would clog waterways and affect fish and amphibian egg development. In addition to improving water quality through filtering, some wetlands maintain stream flow during dry periods, and many replenish groundwater.

Wetlands function as natural sponges that trap and slowly release surface water, rain, snowmelt, groundwater, and flood waters. Trees, root mats, and other wetland vegetation also slow the speed of flood waters and distribute them more slowly over the floodplain. This combined water storage and braking action lowers downstream flood heights and reduces erosion. Wetlands within and downstream of urban areas are particularly valuable, counteracting the greatly increased rate and volume of surface water runoff from pavement and buildings. The holding capacity of wetlands helps control floods.
Preserving and restoring wetlands can often provide a less costly and more permanent level of flood control than expensive dredge operations and levees. The ability of wetlands to control erosion is so valuable that some states are restoring wetlands in coastal areas to buffer the storm surges from hurricanes and tropical storms. Wetlands at the margins of lakes, rivers, bays, and the ocean protect shorelines and stream banks against erosion. Wetland plants hold the soil in place with their roots, absorb the energy of waves, and slow the flow of stream or river currents along the shore.

Fish and Wildlife Habitat

More than one-third of the United States' threatened and endangered species live only in wetlands, and nearly half require wetlands at some point in their lives. Many other animals and plants depend on wetlands for survival. Estuarine and marine fish and shellfish, various birds, and certain mammals must have coastal wetlands to survive. Most commercial and game fish breed and raise their young in coastal marshes and estuaries. Menhaden, flounder, sea trout, spot, croaker, and striped bass are among the more familiar fish that depend on coastal wetlands. Shrimp, oysters, clams, and blue crabs likewise need these wetlands for food, shelter, and breeding grounds. For many animals and plants, like wood ducks, muskrat, cattails, and swamp rose, inland wetlands are the only places they can live. Beaver may actually create their own wetlands. For others, such as striped bass, peregrine falcon, otter, black bear, raccoon, and deer, wetlands provide important food, water, or shelter. Many of the U.S. breeding bird populations (including ducks, geese, woodpeckers, hawks, wading birds, and many songbirds) feed, nest, and raise their young in wetlands. Migratory waterfowl use coastal and inland wetlands as resting, feeding, breeding, or nesting grounds for at least part of the year. (Some of the information provided in this section is adapted from the U.S. Environmental Protection Agency (EPA) document, America's Wetlands: Our Vital Link Between Land and Water.)

In the 1600s, more than 220 million acres of wetlands are thought to have existed in the lower 48 states. Since then, extensive losses have occurred, and over half of these wetlands have been drained and converted to other uses. The years from the mid-1950s to the mid-1970s were a time of major wetland loss, but since then the rate of loss has decreased significantly. In addition to these losses, many other wetlands have been degraded, although assessing the magnitude of the degradation is difficult. These losses and degradation have greatly diminished U.S. wetlands resources, significantly reducing the benefits they once provided. Recent increases in flood damages, drought damages, declining water quality, and decreasing fish and bird populations are, in part, the result of wetlands degradation and destruction.

Wetland biology has also been degraded in ways that are not as obvious as direct physical destruction or alteration. Threats include chemical contamination, increased nutrient inputs and eutrophication (nutrient over-enrichment), hydrologic modification, and sediment deposition from air- and water-borne sources. Global climate change may be affecting wetlands through increased air temperature; shifts in precipitation distribution and quantity; increased frequency of storms, droughts, and floods; increased atmospheric carbon dioxide concentration; and sea level rise.
All of these impacts can affect species composition and wetland functions. This section discusses development and landscape alteration of wetlands, as well as impacts to water quality and hydrology.

Development and Landscape Alteration

Human alterations to the natural landscape have the potential to exert significant direct and indirect influence on wetland development and processes. Changes to natural hydrological, chemical, and physical regimes have been documented to affect the production and succession of a wetland's ecosystem, and therefore its natural biological functions and values for human populations. During urbanization or development, pervious areas (i.e., those that permit the infiltration of precipitation through the ground), including vegetated and forested land, are lost to pavement and structures. These natural areas are converted to land uses that increase the amount of impervious surfaces, such as roads, parking lots, and buildings. Impervious surfaces transform watershed hydrology by changing the flow rate and volume of runoff and by altering natural drainage features, including groundwater levels, which may no longer be recharged by infiltrating rainwater. The lowering of groundwater levels, in turn, alters wetland hydrology and may cause aquatic and riparian wetland habitat to dry up. Population pressures from urbanization also result in corresponding increases in pollutant loadings generated from a wide array of human activities, including, most frequently, nutrients and chemicals from lawn care and garden activities, pathogens from pet waste, and chemical pollutants from transportation and mechanical sources.

Impacts to Water Quality

Both nationally and in Massachusetts, urban runoff and discharges from stormwater outfalls are among the largest sources responsible for the non-attainment of water quality standards. The individual pollutants typically found in urban stormwater are: pathogens/bacteria, nutrients, sediments (total suspended solids), road salts, biological and chemical oxygen-demanding substances, thermal pollution, metals, synthetic chemicals, and polycyclic aromatic hydrocarbons (PAHs). The principal sources of runoff pollutants are: construction sites; street and parking lot pavement; motor vehicles; dry atmospheric deposition; grass clippings, leaves, and other vegetative waste; domestic animals and wildlife; human wastes (failing septic systems, illegal connections); spills; litter; lawn fertilizers; pesticides/herbicides; and salt, sand, and de-icing chemicals.

Impacts to Hydrology

Urban development of the natural landscape changes both the form and function of the natural downstream drainage system. Data from a host of sources demonstrate that the shift from undeveloped to developed areas results in substantial increases in runoff volume, thereby reducing the amount of rainfall available for groundwater recharge. Increases in peak runoff rates and volumes to stream channels intensify streambank erosion and alter natural deposition of sediment and organic material. Physical, chemical, and biological data from King County, Washington, demonstrate that consistent thresholds exist for aquatic ecosystem impacts from urbanization. Approximately 10 to 15 percent impervious area in a watershed has been found to be the threshold that typically leads to demonstrable loss of aquatic system functioning, as measured by changes in channel morphology, fish and amphibian populations, vegetation succession, and water chemistry.
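To make the runoff shift concrete, the sketch below applies the rational method (Q = CiA), a standard first-pass estimate of peak runoff; the runoff coefficients, storm intensity, and drainage area are illustrative textbook-style values, not figures from the King County data.

```python
# Rational method: Q = C * i * A, with Q in cubic feet per second (cfs),
# C a dimensionless runoff coefficient, i rainfall intensity in inches/hour,
# and A drainage area in acres (the unit conversion factor is ~1.008, ~1).
def peak_runoff_cfs(c, intensity_in_per_hr, area_acres):
    return c * intensity_in_per_hr * area_acres

area = 50.0   # acres
storm = 1.0   # inches/hour design storm
undeveloped = peak_runoff_cfs(0.20, storm, area)  # forest/meadow-like cover
developed = peak_runoff_cfs(0.70, storm, area)    # largely impervious cover
print(undeveloped, developed)  # ~10 cfs vs ~35 cfs: peak flow more than triples
```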
The following is a list of the major causes of wetland loss and degradation:

Human Actions
- Dredging and stream channelization
- Deposition of fill material
- Diking and damming
- Discharge of pollutants
- Tilling for crop production
- Air and water pollutants
- Changing nutrient levels
- Grazing by domestic animals

Natural Events
- Sea level rise
- Hurricanes and other storms
- Ice scour
Emotional intelligence

Emotional intelligence (EI) is the ability to identify, assess, and control the emotions of oneself, of others, and of groups. It can be divided into ability EI and trait EI. The first use of the term "emotional intelligence" is usually attributed to Wayne Payne's doctoral thesis, A Study of Emotion: Developing Emotional Intelligence, from 1985. The first published use of 'EQ' (Emotional Quotient) seems to be by Keith Beasley in 1987 in an article in the British Mensa magazine. However, prior to this, the term "emotional intelligence" had appeared in Beldoch (1964) and Leuner (1966). Stanley Greenspan (1989) also put forward an EI model, followed by Peter Salovey and John Mayer (1989). The distinction between trait emotional intelligence and ability emotional intelligence was introduced in 2000.

The term became widely known with the publication of Goleman's Emotional Intelligence: Why It Can Matter More Than IQ (1995), and it is to this book's best-selling status that the term can attribute its popularity. Goleman has followed up with several further popular publications on a similar theme that reinforce use of the term. Goleman's publications are self-help books that are non-academic in nature. To date, tests measuring EI have not replaced IQ tests as a standard metric of intelligence in any field, such as education, criminology, neurology, military admissions, or intelligence research.

Substantial disagreement exists regarding the definition of EI, with respect to both terminology and operationalizations. Currently, there are three main models of EI: the ability model, the mixed model, and the trait model. Different models of EI have led to the development of various instruments for the assessment of the construct. While some of these measures may overlap, most researchers agree that they tap different constructs.

Salovey and Mayer's conception of EI strives to define EI within the confines of the standard criteria for a new intelligence. Following their continuing research, their initial definition of EI was revised to "The ability to perceive emotion, integrate emotion to facilitate thought, understand emotions and to regulate emotions to promote personal growth." The ability-based model views emotions as useful sources of information that help one to make sense of and navigate the social environment. The model proposes that individuals vary in their ability to process information of an emotional nature and in their ability to relate emotional processing to a wider cognition. This ability is seen to manifest itself in certain adaptive behaviors. The model claims that EI includes four types of abilities: perceiving emotions, using emotions to facilitate thought, understanding emotions, and managing emotions.

The ability EI model has been criticized in the research for lacking face and predictive validity in the workplace. The current measure of Mayer and Salovey's model of EI, the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT), is based on a series of emotion-based problem-solving items. Consistent with the model's claim of EI as a type of intelligence, the test is modeled on ability-based IQ tests. By testing a person's abilities on each of the four branches of emotional intelligence, it generates scores for each of the branches as well as a total score. Central to the four-branch model is the idea that EI requires attunement to social norms.
Therefore, the MSCEIT is scored in a consensus fashion, with higher scores indicating higher overlap between an individual's answers and those provided by a worldwide sample of respondents. The MSCEIT can also be expert-scored, so that the amount of overlap is calculated between an individual's answers and those provided by a group of 21 emotion researchers. Although promoted as an ability test, the MSCEIT is unlike standard IQ tests in that its items do not have objectively correct responses. Among other challenges, the consensus scoring criterion means that it is impossible to create items (questions) that only a minority of respondents can solve, because, by definition, responses are deemed emotionally "intelligent" only if the majority of the sample has endorsed them. This and other similar problems have led some cognitive ability experts to question the definition of EI as a genuine intelligence.

In a study by Føllesdal, the MSCEIT test results of 111 business leaders were compared with how their employees described their leader. It was found that there were no correlations between a leader's test results and how he or she was rated by the employees with regard to empathy, ability to motivate, and leader effectiveness. Føllesdal also criticized the Canadian company Multi-Health Systems, which administers the MSCEIT test. The test contains 141 items, but it was found after the test was published that 19 of these did not give the expected answers. This has led Multi-Health Systems to remove answers to these 19 questions before scoring, but without stating this officially.

The model introduced by Daniel Goleman focuses on EI as a wide array of competencies and skills that drive leadership performance. Goleman's model outlines five main EI constructs: self-awareness, self-regulation, social skill, empathy, and motivation (for more details see "What Makes A Leader" by Daniel Goleman, best of Harvard Business Review 1998). Goleman includes a set of emotional competencies within each construct of EI. Emotional competencies are not innate talents, but rather learned capabilities that must be worked on and can be developed to achieve outstanding performance. Goleman posits that individuals are born with a general emotional intelligence that determines their potential for learning emotional competencies. Goleman's model of EI has been criticized in the research literature as mere "pop psychology" (Mayer, Roberts, & Barsade, 2008). Two measurement tools are based on the Goleman model: the Emotional Competency Inventory (ECI; later revised as the Emotional and Social Competency Inventory, ESCI) and the Emotional Intelligence Appraisal.

Soviet-born British psychologist Konstantin Vasily Petrides ("K. V. Petrides") proposed a conceptual distinction between the ability-based model and a trait-based model of EI and has been developing the latter over many years in numerous scientific publications. Trait EI is "a constellation of emotional self-perceptions located at the lower levels of personality." In lay terms, trait EI refers to an individual's self-perceptions of their emotional abilities. This definition of EI encompasses behavioral dispositions and self-perceived abilities and is measured by self-report, as opposed to the ability-based model, which refers to actual abilities, which have proven highly resistant to scientific measurement. Trait EI should be investigated within a personality framework. An alternative label for the same construct is trait emotional self-efficacy. The trait EI model is general and subsumes the Goleman model discussed above. The conceptualization of EI as a personality trait leads to a construct that lies outside the taxonomy of human cognitive ability.
This is an important distinction inasmuch as it bears directly on the operationalization of the construct and the theories and hypotheses that are formulated about it. There are many self-report measures of EI, including the EQ-i, the Swinburne University Emotional Intelligence Test (SUEIT), and the Schutte EI scale. None of these assess intelligence, abilities, or skills (as their authors often claim), but rather, they are limited measures of trait emotional intelligence.

One of the more comprehensive and widely researched measures of this construct is the Trait Emotional Intelligence Questionnaire (TEIQue), which was specifically designed to measure the construct comprehensively and is available in many languages. The TEIQue provides an operationalization for the model of Petrides and colleagues, which conceptualizes EI in terms of personality. The test encompasses 15 subscales organized under four factors: Well-Being, Self-Control, Emotionality, and Sociability. The psychometric properties of the TEIQue were investigated in a study on a French-speaking population, where it was reported that TEIQue scores were globally normally distributed and reliable. The researchers also found TEIQue scores were unrelated to nonverbal reasoning (Raven's matrices), which they interpreted as support for the personality trait view of EI (as opposed to a form of intelligence). As expected, TEIQue scores were positively related to some of the Big Five personality traits (extraversion, agreeableness, openness, conscientiousness) and inversely related to neuroticism, as well as to alexithymia. A number of quantitative genetic studies have been carried out within the trait EI model, which have revealed significant genetic effects and heritabilities for all trait EI scores. Two recent studies (one a meta-analysis) involving direct comparisons of multiple EI tests yielded very favorable results for the TEIQue.

Goleman's early work has been criticized for assuming from the beginning that EI is a type of intelligence. Eysenck (2000) writes that Goleman's description of EI contains unsubstantiated assumptions about intelligence in general, and that it even runs contrary to what researchers have come to expect when studying types of intelligence: "[Goleman] exemplifies more clearly than most the fundamental absurdity of the tendency to class almost any type of behaviour as an 'intelligence'... If these five 'abilities' define 'emotional intelligence', we would expect some evidence that they are highly correlated; Goleman admits that they might be quite uncorrelated, and in any case if we cannot measure them, how do we know they are related? So the whole theory is built on quicksand: there is no sound scientific basis."

Similarly, Locke (2005) claims that the concept of EI is in itself a misinterpretation of the intelligence construct, and he offers an alternative interpretation: it is not another form or type of intelligence, but intelligence (the ability to grasp abstractions) applied to a particular life domain: emotions. He suggests the concept should be re-labeled and referred to as a skill. The essence of this criticism is that scientific inquiry depends on valid and consistent construct utilization, and that before the introduction of the term EI, psychologists had established theoretical distinctions between factors such as abilities and achievements, skills and habits, attitudes and values, and personality traits and emotional states.
Thus, some scholars believe that the term EI merges and conflates such accepted concepts and definitions. Landy (2005) claimed that the few incremental validity studies conducted on EI have shown that it adds little or nothing to the explanation or prediction of some common outcomes (most notably academic and work success). Landy suggested that the reason some studies have found a small increase in predictive validity is a methodological fallacy, namely, that alternative explanations have not been completely considered: "EI is compared and contrasted with a measure of abstract intelligence but not with a personality measure, or with a personality measure but not with a measure of academic intelligence" (Landy, 2005).

Similarly, other researchers have raised concerns about the extent to which self-report EI measures correlate with established personality dimensions. Generally, self-report EI measures and personality measures have been said to converge because they both purport to measure personality traits. Specifically, there appear to be two dimensions of the Big Five that stand out as most related to self-report EI: neuroticism and extraversion. In particular, neuroticism has been said to relate to negative emotionality and anxiety. Intuitively, individuals scoring high on neuroticism are likely to score low on self-report EI measures. The interpretations of the correlations between EI questionnaires and personality have been varied. The prominent view in the scientific literature is the trait EI view, which re-interprets EI as a collection of personality traits.

One criticism of the works of Mayer and Salovey comes from a study by Roberts et al. (2001), which suggests that EI, as measured by the MSCEIT, may only be measuring conformity. This argument is rooted in the MSCEIT's use of consensus-based assessment, and in the fact that scores on the MSCEIT are negatively skewed (meaning that its scores differentiate between people with low EI better than between people with high EI). Further criticism has been leveled by Brody (2004), who claimed that unlike tests of cognitive ability, the MSCEIT "tests knowledge of emotions but not necessarily the ability to perform tasks that are related to the knowledge that is assessed". The main argument is that even though someone knows how he should behave in an emotionally laden situation, it doesn't necessarily follow that the person could actually carry out the reported behavior.

New research is surfacing that suggests that ability EI measures might be measuring personality in addition to general intelligence. These studies examined the multivariate effects of personality and intelligence on EI and also corrected estimates for measurement error (which is often not done in some validation studies). For example, a study by Schulte, Ree, and Carretta (2004) showed that general intelligence (measured with the Wonderlic Personnel Test), agreeableness (measured by the NEO-PI), and gender had a multiple R of .81 with the MSCEIT. This result has been replicated by Fiori and Antonakis (2011); they found a multiple R of .76 using Cattell's "Culture Fair" intelligence test and the Big Five Inventory (BFI); significant covariates were intelligence (standardized beta = .39), agreeableness (standardized beta = .54), and openness (standardized beta = .46).
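For readers unfamiliar with the statistics reported above, the sketch below shows how standardized betas and a multiple R are computed from z-scored variables; the data are synthetic stand-ins generated for illustration, not the actual study datasets.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Synthetic stand-ins for the predictors discussed above (not real study data)
iq = rng.normal(size=n)
agreeableness = rng.normal(size=n)
openness = rng.normal(size=n)
ei = 0.4 * iq + 0.5 * agreeableness + 0.3 * openness + rng.normal(scale=0.6, size=n)

def z(x):
    return (x - x.mean()) / x.std()

# OLS on z-scored variables yields standardized betas
X = np.column_stack([z(iq), z(agreeableness), z(openness)])
y = z(ei)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)

# Multiple R is the correlation between observed and predicted scores
multiple_r = np.corrcoef(y, X @ betas)[0, 1]
print("standardized betas:", betas.round(2), "multiple R:", round(multiple_r, 2))
```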
Antonakis and Dietz (2011a), who investigated the Ability Emotional Intelligence Measure, found similar results (multiple R = .69), with significant predictors being intelligence, standardized beta = .69 (using the Swaps Test and a Wechsler scales subtest, the 40-item General Knowledge Task), and empathy, standardized beta = .26 (using the Questionnaire Measure of Empathic Tendency). See also Antonakis and Dietz (2011b), who show how including or excluding important control variables can fundamentally change results; it is therefore important to always include controls such as personality and intelligence when examining the predictive validity of ability and trait EI models.

More formally termed socially desirable responding (SDR), faking good is defined as a response pattern in which test-takers systematically represent themselves with an excessive positive bias (Paulhus, 2002). This bias has long been known to contaminate responses on personality inventories (Holtgraves, 2004; McFarland & Ryan, 2000; Peebles & Moore, 1998; Nichols & Greene, 1997; Zerbe & Paulhus, 1987), acting as a mediator of the relationships between self-report measures (Nichols & Greene, 1997; Ganster et al., 1983). It has been suggested that responding in a desirable way is a response set, a situational and temporary response pattern (Pauls & Crost, 2004; Paulhus, 1991). This is contrasted with a response style, which is a more long-term, trait-like quality. Considering the contexts in which some self-report EI inventories are used (e.g., employment settings), the problems of response sets in high-stakes scenarios become clear (Paulhus & Reid, 2001). There are a few methods to prevent socially desirable responding on behavior inventories. Some researchers believe it is necessary to warn test-takers not to fake good before taking a personality test (e.g., McFarland, 2003). Some inventories use validity scales in order to determine the likelihood or consistency of the responses across all items.

Landy distinguishes between the "commercial wing" and the "academic wing" of the EI movement, basing this distinction on the alleged predictive power of EI as seen by the two currents. According to Landy, the former makes expansive claims on the applied value of EI, while the latter is trying to warn users against these claims. As an example, Goleman (1998) asserts that "the most effective leaders are alike in one crucial way: they all have a high degree of what has come to be known as emotional intelligence. ...emotional intelligence is the sine qua non of leadership". In contrast, Mayer (1999) cautions "the popular literature's implication—that highly emotionally intelligent people possess an unqualified advantage in life—appears overly enthusiastic at present and unsubstantiated by reasonable scientific standards." Landy further reinforces this argument by noting that the data upon which these claims are based are held in "proprietary databases", which means they are unavailable to independent researchers for reanalysis, replication, or verification. Thus, the credibility of the findings cannot be substantiated in a scientific way, unless those datasets are made public and available for independent analysis.
In an academic exchange, Antonakis and Ashkanasy/Dasborough mostly agreed that researchers testing whether EI matters for leadership have not done so using robust research designs; therefore, there is currently no strong evidence showing that EI predicts leadership outcomes when accounting for personality and IQ. Antonakis argued that EI might not be needed for leadership effectiveness (he referred to this as the "curse of emotion" phenomenon, because leaders who are too sensitive to their own and others' emotional states might have difficulty making decisions that would result in emotional labor for the leader or followers). A recently published meta-analysis seems to support the Antonakis position: Harms and Credé found that overall (and using data free from problems of common source and common methods), EI measures correlated only ρ = 0.11 with measures of transformational leadership. Ability measures of EI fared worst (ρ = 0.04); the WLEIS (Wong-Law measure) did a bit better (ρ = 0.08), and the Bar-On measure better still (ρ = 0.18). However, these validity estimates do not account for the effects of IQ or the Big Five personality traits, which correlate with both EI measures and leadership. In a subsequent paper analyzing the impact of EI on both job performance and leadership, Harms and Credé found that the meta-analytic validity estimates for EI dropped to zero when Big Five traits and IQ were controlled for. Joseph and Newman meta-analytically showed the same result for ability EI, but further demonstrated that self-reported and trait EI measures retain a small amount of predictive validity for job performance after controlling for Big Five traits and IQ. Newman, Joseph, and MacCann contend that the greater predictive validity of trait EI measures is due to their inclusion of content related to achievement motivation, self-efficacy, and self-rated performance.

The National Institute of Child Health and Human Development has recognized that the divide on the topic of emotional intelligence highlights the need for the mental health community to agree on guidelines describing good mental health and positive mental living conditions. In their section "Positive Psychology and the Concept of Health," they explain: "Currently there are six competing models of positive health, which are based on concepts such as being above normal, character strengths and core virtues, developmental maturity, social-emotional intelligence, subjective well-being, and resilience. But these concepts define health in philosophical rather than empirical terms. Dr. [Lawrence] Becker suggested the need for a consensus on the concept of positive psychological health..."

Research on EI and job performance shows mixed results: a positive relation has been found in some studies, while in others there was no relation or an inconsistent one. This led researchers Cote and Miners (2006) to offer a compensatory model between EI and IQ, which posits that the association between EI and job performance becomes more positive as cognitive intelligence decreases, an idea first proposed in the context of academic performance (Petrides, Frederickson, & Furnham, 2004). The results of their study supported the compensatory model: among employees with low IQ, higher EI predicted higher task performance and more organizational citizenship behavior directed at the organization.
A meta-analytic review by Joseph and Newman also revealed that both ability EI and trait EI tend to predict job performance much better in jobs that require a high degree of emotional labor (where "emotional labor" was defined as requiring the effective display of positive emotion). In contrast, EI shows little relationship to job performance in jobs that do not require emotional labor. In other words, emotional intelligence tends to predict job performance for emotional jobs only.

A more recent study suggests that EI is not necessarily a universally positive trait. Its authors found a negative correlation between EI and managerial work demands; under low levels of managerial work demands, they also found a negative relationship between EI and teamwork effectiveness. One suggested explanation involves gender differences in EI, as women tend to score higher than men. This furthers the idea that job context plays a role in the relationships between EI, teamwork effectiveness, and job performance.

Another notable finding comes from a study that assessed a possible link between EI and entrepreneurial behaviors and success. Consistent with much of the other research on EI and job performance, the study found that levels of EI predicted only a small amount of entrepreneurial behavior.

A 2012 study examined emotional intelligence, self-esteem, and marijuana dependence. In a sample of 200 participants, 100 of whom were dependent on cannabis and 100 of whom were emotionally healthy, the dependent group scored markedly lower on EI than the control group, and also scored lower on self-esteem. Another study, in 2010, examined whether low levels of EI were related to the degree of drug and alcohol addiction. Assessing 103 residents of a drug rehabilitation center, the researchers measured EI along with other psychosocial factors over a one-month interval of treatment, and found that participants' EI scores improved as their levels of addiction lessened.

Growing interest in perceived emotional intelligence (PEI) has also led researchers to examine the implications PEI may have for adolescent anxiety disorder symptomology. Diaz-Castela et al. (2013) found that Emotional Repair, one of the three scales of the TMMS-24, appears to be involved in adolescent social anxiety symptomology. The authors have argued that the use of EI in the treatment of adolescent SAD might give adolescents greater self-awareness of their emotions, help them identify the emotional experiences of others, help them regulate their emotional states, and enable them to use their emotions to facilitate thinking. This would help to reduce the anxiety experienced by adolescents with SAD, help them overcome stressful situations, and enable them to improve their relationships and grow emotionally.
Suspected Asteroid Collision Leaves Odd X-Pattern of Trailing Debris

Credit: NASA, ESA, and D. Jewitt (University of California, Los Angeles). Photo No. STScI-2010-07

NASA's Hubble Space Telescope has observed a mysterious X-shaped debris pattern and trailing streamers of dust that suggest a head-on collision between two asteroids. Astronomers have long thought the asteroid belt is being ground down through collisions, but such a smashup has never been seen before. Asteroid collisions are energetic, with an average impact speed of more than 11,000 miles per hour, or five times faster than a rifle bullet. The comet-like object imaged by Hubble, called P/2010 A2, was first discovered by the Lincoln Near-Earth Asteroid Research, or LINEAR, program sky survey on Jan. 6. New Hubble images taken on Jan. 25 and 29 show a complex X-pattern of filamentary structures near the nucleus.

"This is quite different from the smooth dust envelopes of normal comets," said principal investigator David Jewitt of the University of California at Los Angeles. "The filaments are made of dust and gravel, presumably recently thrown out of the nucleus. Some are swept back by radiation pressure from sunlight to create straight dust streaks. Embedded in the filaments are co-moving blobs of dust that likely originated from tiny unseen parent bodies."

Hubble shows the main nucleus of P/2010 A2 lies outside its own halo of dust. This has never been seen before in a comet-like object. The nucleus is estimated to be 460 feet in diameter. Normal comets fall into the inner regions of the solar system from icy reservoirs in the Kuiper belt and Oort cloud. As comets near the sun and warm up, ice near the surface vaporizes and ejects material from the solid comet nucleus via jets. But P/2010 A2 may have a different origin. It orbits in the warm, inner regions of the asteroid belt where its nearest neighbors are dry rocky bodies lacking volatile materials. This leaves open the possibility that the complex debris tail is the result of an impact between two bodies, rather than ice simply melting from a parent body.

"If this interpretation is correct, two small and previously unknown asteroids recently collided, creating a shower of debris that is being swept back into a tail from the collision site by the pressure of sunlight," Jewitt said. The main nucleus of P/2010 A2 would be the surviving remnant of this so-called hypervelocity collision. "The filamentary appearance of P/2010 A2 is different from anything seen in Hubble images of normal comets, consistent with the action of a different process," Jewitt said. An impact origin also would be consistent with the absence of gas in spectra recorded using ground-based telescopes.

The asteroid belt contains abundant evidence of ancient collisions that have shattered precursor bodies into fragments. The orbit of P/2010 A2 is consistent with membership in the Flora asteroid family, produced by collisional shattering more than 100 million years ago. One fragment of that ancient smashup may have struck Earth 65 million years ago, triggering a mass extinction that wiped out the dinosaurs. But, until now, no such asteroid-asteroid collision has been caught "in the act." At the time of the Hubble observations, the object was approximately 180 million miles from the sun and 90 million miles from Earth. The Hubble images were recorded with the new Wide Field Camera 3 (WFC3).
The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA's Goddard Space Flight Center manages the telescope. The Space Telescope Science Institute conducts Hubble science operations. The institute is operated for NASA by the Association of Universities for Research in Astronomy, Inc. in Washington, D.C.
As well as being a great way to learn to recognize various sound combinations, and to learn about rhyming, word families are also a great way to think up and then write creative sentences, stories, or poems, and to practice spelling of similar words. And word families are not just for beginning-level reading and writing; as you’ll discover, they can even be used for more complex words of several syllables.

Use word families for creative writing: Children often love to make up funny sentences, or even a short story. For example: “A fat rat with a hat sat on a cat, who poked him with a bat.” Illustrating the sentence adds to the memory. Some children also enjoy reading a short poem that uses rhyming words from a particular word family, and then creating a new stanza of their own for the poem.

Extend simple word families to more complex words: Once a child has learned the basic sound/letter combination for a particular word family using simple rhyming words, you can extend it to more complex words. For example: bar, car, far, jar, mar, par, tar … then: Bart, cart, dark, farm, hard, lard, mark, part, apart, star, start, spar, tart, tartan … and even toss in some “hard” words like “variety.”

Use nursery rhymes to practice word families: A good way to practice word families is to take simple poetry books (such as nursery rhymes) and put your finger over one of the rhyming words, have the child read the previous line(s) and guess the covered-up word, then spell it out. (“Jack be nimble, Jack be quick, Jack jump over the candle ____” [stick].)

Word family flowers: One way to help your child with a particular “word family” is to draw a circle in the middle of a sheet of paper, and put the “family chunk” in the circle (for example “at”). Then around the circle draw petals (like a daisy). Each petal will have an initial sound or two, with blanks for the “family chunk” (for example: p_ _; fl _ _). Your child can fill in the blanks, read the words aloud, and draw little sketches to illustrate each word, if she likes.

Make and match words and pictures: Another way to practice word families is to write a list of words from a “family” down one half of a piece of paper, and draw little sketches scattered here and there on the other half of the paper; then your child can draw lines to match the words and pictures. You don’t have to draw sketches for all the words; in fact, it is a bit more of a challenge if some words don’t have pictures! Make sure that your child reads all the words aloud, including the ones which don’t have a picture hint. They can use the picture-hint ones to “get” the family sound, then sound out the remaining ones. Your child may also enjoy drawing pictures for those words.

Word families, decoding units, and syllables: One author, Dr. Jerome Rosner, has taken the “word family” concept to another level by using “decoding units” (such as “ag” or “ill”) to help children build word recognition skills. Each set of word lists includes four levels of increasing difficulty. The first level is single-syllable words, usually just consonant-vowel-consonant. The second level is also single syllable, but adds blends to the start of each word. The third level is two-syllable words. And the fourth level is words of three or more syllables. Thus, not only does the child learn to immediately recognize certain letter combinations (as in word families), but this is also a great way to develop skills in reading multi-syllabic words.
When you present the words, you can draw a slash between each syllable to help the child practice syllabication skills. The author also gives detailed suggestions on how to use these lists, and I have found this method very useful indeed for children who are struggling with learning to read. It is also a great way to increase vocabulary and build spelling skills, and children also enjoy picking out “compound words” from levels 3 and 4. This is also a good way to “build words” and to improve blending and chunking skills. And of course the lists also include quite a number of rhyming words, and can be used to create poems or rhyming sentences.

Example: For the unit/word family “an”:
- Level one: ban, can, Dan, fan, Jan, man, Nana, pan, ran, tan, van.
- Level two: ranch, scan, bland, ant, hand, chant, stand, span, strand, land, brand, plant.
- Level three: began, manner, demand, cannot, handle, candle, banner, spaniel, Spanish, standard, dandy, landing, lantern, mantle, vanish, vandal, scandal.
- Level four: Santa Claus, fantasy, fantastic, understand, Canada, outlandish, ancestor, animal, anniversary, grandstand, antelope, bandanna, advantage, manufacture, manager, chimpanzee, reprimand.

You can find Dr. Rosner’s word unit lists in his book Helping Children Overcome Learning Difficulties (3rd ed.) (New York: Walker and Co., 2009). This is such a useful, practical book for working with children who have learning disabilities, dyslexia, ADHD, and other learning challenges. I’ve found it particularly useful for helping children with perceptual difficulties, pronunciation difficulties, and more. The book gives clear instructions and detailed exercises for using geoboards, auditory exercises, the word lists mentioned above, and more. This book is packed full of really practical ways to help your child develop literacy skills, whether he or she has learning challenges or not.

How do YOU use word families to help your child read, write, and spell? Share your ideas in the comments below. Thank you!

Looking for other useful tips on helping your child read and write? Check out the list of topics in the second half of our Tutoring Tips page.
Can gestures reveal more than words? Examining memory in preschoolers

Across early childhood, verbal recall provides a more limited account of memory retrieval than behavioural recall (e.g., Simcock & Hayne, 2002). Older children's physical gestures have also been shown to reveal additional information which has been stored but which children are not yet able to communicate verbally or access explicitly (Goldin-Meadow, 2000). The aim of the present study was to determine whether preschool children spontaneously produce gestures that reflect their memory for an event, and if so, whether their gestures contain any additional information that they do not express verbally. To do this, we re-analysed the verbal recall sessions of 112 participants aged 3-5 years who had previously watched a video demonstration of a unique magic box event. Verbal recall was coded for the presence of iconic representational gestures, which are gestures that refer to a graphical representation of a phenomenon (Arzarello & Robutti, 2004). Thirty-one children produced representational gestures during their verbal recall (M = 2.6 gestures per gesturing child). The production of representational gestures was positively correlated with verbal recall, performance on a verbal comprehension test, and behavioural recall for the target event, irrespective of age. Gestures reflected the content of memory, but also included details that were not reflected in verbal report, such as the specific motions of actions, spatial information, and the size and shape of items. These findings suggest that attending to gestures in preschool children can provide unique access to memory details that would otherwise be missed in verbal recall.
Pharynx & Esophagus Food is forced into the pharynx by the tongue. When food reaches the opening, sensory receptors around the fauces respond and initiate an involuntary swallowing reflex. This reflex action has several parts. The uvula is elevated to prevent food from entering the nasopharynx. The epiglottis drops downward to prevent food from entering the larynx and trachea in order to direct the food into the esophagus. Peristaltic movements propel the food from the pharynx into the esophagus. The esophagus is a collapsible muscular tube that serves as a passageway between the pharynx and stomach. As it descends, it is posterior to the trachea and anterior to the vertebral column. It passes through an opening in the diaphragm, called the esophageal hiatus, and then empties into the stomach. The mucosa has glands that secrete mucus to keep the lining moist and well lubricated to ease the passage of food. Upper and lower esophageal sphincters control the movement of food into and out of the esophagus. The lower esophageal sphincter is sometimes called the cardiac sphincter and resides at the esophagogastric junction.
Democracy in America

ARCHAEOLOGY rarely rises to the level of excitement of bullwhip-cracking Indiana Jones. But a new technique to determine the geological origin of artefacts has plenty of researchers aflutter. A team led by Ellery Frahm from the University of Sheffield has found a way to pinpoint the source of artefacts made of obsidian, a glassy rock formed after volcanic lava hardens, on the spot, in ten seconds.

Given the painstaking nature of archaeological work, and consequently its leisurely pace, the technique may sound like overkill. But given that archaeologists can retrieve as many as 80 artefacts an hour, speeding the process up matters. Especially as such information can be an archaeological pot of gold: it may indicate where a site's inhabitants arrived from or whom they had contact with, for instance. Knowing that, in turn, may alter what excavators look out for in a dig. Sourcing analysis, however, often requires shipping items to distant laboratories, which is both costly and time-consuming, slowing excavation down.

Dr Frahm's method, described in the Journal of Archaeological Science, uses portable X-ray fluorescence (pXRF). Like old-fashioned XRF, the portable version uses a spectrometer the size of a cordless drill to zap artefacts with X-rays. These high-energy photons knock electrons in the atoms out of their orbits, creating charged ions. This causes the remaining electrons to shift around, which in turn releases X-ray photons of a particular wavelength. Elements can be identified based on the unique pattern of photons they emit. Obsidian contains many different elements, and the exact composition varies depending on geographical provenance, yielding a unique chemical “fingerprint”. But whereas pXRF is commonly used for all manner of environmental field tests (of hazardous substances, for instance), archaeologists have been reluctant to embrace it, preferring the controlled environment of the lab. Even on the rare occasions when they have, analysing a sample would typically take 2-6 minutes. With hundreds of samples to get through in a day, even that is a bit too long, which may explain researchers' scepticism.

Dr Frahm hopes to change that. He trained the device to recognise fingerprints of obsidian samples from known locations in Armenia. Test samples, the origin of which had previously been ascertained using conventional techniques, were then analysed to determine the accuracy of the method. The results were encouraging. The software correctly identified 606 out of 613 samples. When the authors tested samples from outside Armenia, the technique did less well: it wrongly identified 14 out of 26 samples as Armenian. However, Dr Frahm believes, this is nothing that more training shouldn't be able to fix. Nor is the method restricted to obsidian; other minerals could be analysed, too. And although a pXRF device costs around $50,000 a pop, shipping the artefacts to labs and analysing them there gets very expensive very quickly. A bullwhip it isn't, but a cracking advance nonetheless.
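In spirit, the matching step works like a nearest-neighbour comparison between a fresh spectrum and a library of reference fingerprints. Below is a minimal sketch of that idea; the source names, element choices, and reference values are invented for illustration and are not Dr Frahm's actual calibration data or algorithm.

```python
import numpy as np

# Hypothetical trace-element fingerprints (e.g., ppm of Rb, Sr, Zr) for
# three known obsidian sources; the numbers are made up for illustration.
sources = {
    "Source A": np.array([120.0, 45.0, 210.0]),
    "Source B": np.array([95.0, 70.0, 180.0]),
    "Source C": np.array([140.0, 30.0, 250.0]),
}

def classify(sample):
    # Nearest-centroid match: assign the artefact to the source whose
    # reference fingerprint is closest in Euclidean distance.
    return min(sources, key=lambda name: np.linalg.norm(sample - sources[name]))

artefact = np.array([118.0, 48.0, 205.0])  # a fresh pXRF reading
print(classify(artefact))                  # -> "Source A"
```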
The technological challenges of wind turbine blades

A new kind of air-flow technology may soon be introduced to help large wind turbines work better. Among other aspects, it will focus on the efficiency of the blades used in wind turbines, helping to increase turbine efficiency under various wind conditions. This is a significant development for renewable energy, coming after new wind-turbine generating capacity was added alongside new coal-fired generating capacity in 2008.

Testing new systems to optimize the efficiency of wind turbines and their blades

Syracuse University researchers Guannan Wang, Basman El Hadidi, Jakub Walczak, Mark Glauser and Hiroshi Higuchi are testing new intelligent-system-based active control methods with support from the U.S. Department of Energy through the University of Minnesota Wind Energy Consortium. Surface measurements give a rough picture of the flow conditions over the blade surfaces; these data feed an intelligent controller, which then applies real-time actuation on the blades. In this way, the airflow can be managed and the efficiency of the wind turbine system increased.

Advantages of new systems to optimize the efficiency of wind turbines and their blades
- They reduce noise.
- They reduce vibration.

New developments that are being worked out to make wind turbines and blades more efficient
- The overall working scope of the wind turbine can be enlarged by using flow control on the outboard side of the blade, beyond the half radius. Attempts are being made to increase the rated output power without increasing the operating range.
- An anechoic chamber is being set up to measure and define the effects of flow control on the noise spectrum of the wind turbine.
- To determine the airfoil lift and drag characteristics with suitable flow control while exposed to large-scale flow unsteadiness, efforts are being made to characterize the airfoil in an anechoic wind tunnel facility at Syracuse University.
- Scientists are also trying to attain greater efficiency by placing blades at various angles, using wind tunnel tests of 2.5-megawatt turbine airfoil surfaces and computer simulations.

Drawbacks of wind energy turbines and their blades

The blades face a great deal of resistance, or drag, as they beat through the air. Scientists at the University of Minnesota are trying to reduce this drag by placing small grooves, or triangular riblets, scored into a coating on the surface of the turbine blade. The grooves, between 40 and 225 microns in size, leave the blade looking smooth. Riblets were very successful when used on aircraft: applied to wings, whose basic structure turbine blades share, they reduced drag by 6 percent. But because turbine blades have a thick cross section close to the hub, and because air flow near the ground is chaotic, the technology has not carried over directly to wind turbines. Anything that runs on wind energy, including wind turbines, needs a steady wind flow to function properly, and turbine blades wear out quickly when confronted with extreme conditions. Wind turbine design is still considered imperfect, even though the cost of generating power with turbines has fallen. Though wind energy turbines, their blades, and the riblets may have some drawbacks, wind turbines can still be considered a very efficient and reliable source of energy.
Keeping this in mind, a meeting has been arranged in Long Beach, CA by the American Physical Society's Division of Fluid Dynamics to assess ways to enable the best use of wind turbines. Also, a project to use riblets to increase wind turbine efficiency by 3 percent is being worked on by Roger Arndt, Leonardo P. Chamorro and Fotis Sotiropoulos from the University of Minnesota.
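The 3 percent figure above is easiest to interpret against the theoretical power available in the wind, P = ½ρAv³Cp, where Cp is the fraction of wind power the rotor captures (bounded by the Betz limit of about 0.593). The sketch below applies this standard relation; the rotor size, wind speed, and Cp values are illustrative assumptions, not figures from the Syracuse or Minnesota work.

```python
import math

def wind_power_watts(rotor_diameter_m, wind_speed_ms, cp, air_density=1.225):
    # P = 0.5 * rho * A * v^3 * Cp; Cp cannot exceed the Betz limit (16/27)
    area = math.pi * (rotor_diameter_m / 2.0) ** 2
    return 0.5 * air_density * area * wind_speed_ms ** 3 * cp

# Illustrative 100 m rotor in a 12 m/s wind, before and after a
# hypothetical 3 percent efficiency gain of the sort riblets might offer:
baseline = wind_power_watts(100.0, 12.0, cp=0.45)
improved = wind_power_watts(100.0, 12.0, cp=0.45 * 1.03)
print(f"{baseline / 1e6:.2f} MW -> {improved / 1e6:.2f} MW")
```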
Tuesday, May 31, 2011

Much of the impact melt on the floor of Anaxagoras is smooth, but there are some places with cracks or negative-relief features. These cracks and pit-like features probably formed during cooling of the melt as the material fractured, similar to the way scientists think the natural bridge in the King crater melt sheet formed. We simply do not know for certain. Additionally, there are several hills and bulges that are covered with clusters of boulders. There are no impacts in the melt sheet that might account for the boulder clusters; thus, a possible explanation is that the boulders are eroding out of the impact melt that covers these hills. These boulders look similar to boulder clusters eroding out of wrinkle ridges in the mare; are they the result of a similar process? This observation suggests that perhaps erosion proceeds preferentially on the steeper slopes of bulges in the melt sheet and ridge crests of wrinkle ridges to produce boulder clusters. But why are boulders only eroding from these bulges? There are other steep slopes nearby, but these slopes do not have boulders. There may be a simple explanation for the presence or absence of boulders in the Anaxagoras melt sheet and along wrinkle ridge crests, but further observations and analyses are required. Certainly, this is another mystery waiting for future lunar explorers! Discover the beauty of the Anaxagoras impact melt sheet for yourself in the full LROC NAC image!

Related Posts: Anaxagoras A at Sunrise
Dr. Don Stierman

Resistivity is measured by planting 2 sets of electrodes into the earth. Via one set, a measured electrical current is transmitted into the ground. A second set is used to monitor changes in the potential induced by this current. The general form of the equation relating resistivity to current and induced voltage is

(2-1) ρ = G (V/I)

where "G" is determined by the geometry of the array (array: spatial deployment of electrodes). V and I are measured using a voltmeter (or potentiometer) and ammeter respectively. The arrays we most frequently use are collinear - that is, the 4 electrodes lie in a straight line. There are configurations that are not collinear but I use those only when the situation so demands.

Wenner array: the Wenner array geometry is depicted in Figure 1. Throughout this discussion, electrodes A and B refer to current electrodes (usually metal stakes) and electrodes M and N refer to potential electrodes (usually porous pots).

Figure 1: Wenner array.

In the Wenner array, distances AM = MN = NB = "a-spacing". Suppose the earth were a uniform, homogeneous, isotropic half-space (from this point, all "uniform" structures are homogeneous and isotropic). It is not difficult to compute the potential field set up by this array, relating the resistivity of the earth to the array geometry:

(2-2) ρa = 2πa (V/I)

Consider next an earth that is not uniform, but is instead composed of a single uniform layer over a uniform half-space. If the aperture (electrode spacing) of the array is small compared to the thickness of the layer, the array does not "see" the underlying half-space and measures the true resistivity of the layer. If, on the other hand, the array is very large with respect to the thickness of the surficial layer, the array samples the half-space rather than the layer and measures the true resistivity of the half-space. If the array dimension is about the same as the layer thickness, an intermediate value is measured. Equation 2-2, however, is calculated based on the response of a uniform earth. Hence, we do not call this value the resistivity of the earth but, rather, the APPARENT RESISTIVITY (ρa). Resistivity is calculated as if the earth were a homogeneous, isotropic half-space, using Equation 2-2.

We interrogate the earth by increasing the a-spacing and measuring apparent resistivity at various spacings. Data are plotted on LOG-LOG graph paper, apparent resistivity on the vertical axis as a function of electrode spacing. For the Wenner array, electrode a-spacing is usually used as the independent (horizontal, or X-axis) variable. These data define a SOUNDING CURVE (Figure 2). An electrical sounding is a survey in which we interrogate more and more deeply by measuring apparent resistivity ρa as a function of electrode spacing AB/2 or "a".

Figure 2: Sounding curves for Wenner and Schlumberger arrays. Note use of log-log graph paper. Interpretation will be discussed shortly.

Edwards (1977) developed the concept of EFFECTIVE DEPTH, ZE, the interval within the subsurface of a homogeneous earth that contributes 50% of the signal. For the Wenner array, the center of this effective depth is given by (Edwards, 1977)

(2-3) ZE = 0.519 a

where "a" is the "a-spacing" of the electrodes. In other words, if the a-spacing for a given measurement is 10. meters, 50% of the signal is controlled by a zone centered about 5.2 m below the surface, and the effective depth zone extends from 0.5 ZE to 1.6 ZE (from about 2.6 m to about 8.3 m).
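As a quick numerical companion to Equations 2-2 and 2-3, the sketch below computes Wenner apparent resistivity and the effective-depth window; the voltage and current readings are invented field values for illustration.

```python
import math

def wenner_apparent_resistivity(a_m, v_volts, i_amps):
    # Equation 2-2: rho_a = 2 * pi * a * (V / I), in ohm-meters
    return 2.0 * math.pi * a_m * (v_volts / i_amps)

def wenner_effective_depth(a_m):
    # Equation 2-3 (Edwards, 1977): z_e = 0.519 * a, with the 50%-signal
    # zone spanning 0.5 * z_e to 1.6 * z_e
    ze = 0.519 * a_m
    return ze, (0.5 * ze, 1.6 * ze)

# Hypothetical field reading: a = 10 m, V = 0.25 V, I = 0.10 A
rho_a = wenner_apparent_resistivity(10.0, 0.25, 0.10)
ze, (top, bottom) = wenner_effective_depth(10.0)
print(f"rho_a = {rho_a:.0f} ohm-m, z_e = {ze:.1f} m ({top:.1f}-{bottom:.1f} m)")
```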
In order to probe the earth, we begin with short a-spacings, make the measurements of I and V, and expand the array by increasing "a" in a systematic manner. Data are plotted on the log-log graph (ρa as a function of "a"), generating the sounding curve. In the field, a typical a-spacing for the first measurement is 2.0 meters. This means each potential electrode is placed 1.0 m from the sounding point and each current electrode is placed 3.0 m from the sounding point. It is generally a good idea to obtain 5 evenly spaced data points per log cycle. This means a = 2.0, 3.2, 5.0, 7.9, 13, 20, 32, 50, 79, 130, and so on until sufficient sampling depth has been achieved (or until you run out of wire). Note that these values represent 10^0.3, 10^0.5, 10^0.7, 10^0.9, 10^1.1, and so on. For the Wenner array, the distance from the sounding point to each potential electrode is a/2 and the distance to each current electrode is 1.5*a. The decision as to "How large an 'a' is enough?" (how far should we go?) depends on the specific geologic problem. This will be discussed below. The Schlumberger array (Figure 3) resembles the Wenner array. The main difference in terms of deployment geometry is that the distance between the potential electrodes (MN) is not held to one-third of the distance between the current electrodes (AB), as it is in the Wenner array. The apparent resistivity calculation is slightly more complex:

ρa = π [(AB/2)² - (MN/2)²] / MN × (V/I)    (2-4)

and the data plotted are apparent resistivity as a function of AB/2. We are using a symmetric Schlumberger array, one in which the electrodes are symmetrical about the sounding point. There are variations (see Parasnis, pp. 189-190 for an example) on the Schlumberger array that can be used for PROFILING (seeking lateral rather than vertical variations in electrical properties). Figure 3: Schlumberger array with current electrodes A and B, and potential electrodes M and N. One restriction on use of the Schlumberger array: electrode separation AB must be at least 5 times separation MN,

AB ≥ 5 * MN    (2-5)

for Equation 2-4 to be valid (again, from potential field theory: "It can be shown that - - ", but you will have to accept this ex cathedra rather than have me go through the mathematical proof). Effective depth midrange ZE is about

ZE/L = 0.190    (2-6)    (Edwards, 1977)

where "L" is distance AB, and the 50% sample zone extends from 0.5 to 1.6 ZE. Note that with the Wenner array I defined ZE in terms of dimension "a", which is 0.66 * AB/2, or one-third of AB (AB = L). Thus, effective depth for a given AB in the Wenner case is 0.519/3, or 0.173*L. The Schlumberger thus provides, for a given spacing of the current electrodes, about 10% greater interrogation depth than that provided by the Wenner array. As with the Wenner array, we begin a sounding with a short AB/2 and expand in "log" steps. I begin with AB/2 = 2.5 m and MN/2 = 0.5 m (remember Equation 2-5). At 5 data points per log cycle, the array expands as follows: 2.5, 4.0, 6.3, 10, 16, 25, 40, etc. (note that these are 10^0.4, 10^0.6, 10^0.8, 10^1.0, 10^1.2, and so on). Potential electrodes MN are moved only when potential drops become too small to measure with sufficient precision. In a typical survey, it may not be necessary to increase the MN/2 distance until AB/2 is 10. meters. At this point, we measure (V/I) for both the old MN/2 value (0.5 m) and for the new MN/2 (10/5, or 2 m). This procedure permits us to detect near-surface heterogeneities, something not available to us with the Wenner array.
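The spacing schedule and the Schlumberger geometric factor can likewise be scripted. A small sketch, again with made-up meter readings; note that field spacings such as 16 and 40 are the log-spaced values rounded to convenient tape distances.

```python
import math

def log_spacings(start_m, n_points, per_decade=5):
    """Five evenly spaced points per log cycle, as recommended above."""
    return [start_m * 10 ** (k / per_decade) for k in range(n_points)]

def schlumberger_apparent_resistivity(ab2, mn2, v_volts, i_amps):
    """Equation 2-4: rho_a = pi*((AB/2)^2 - (MN/2)^2)/MN * (V/I)."""
    assert ab2 >= 5.0 * mn2, "restriction (2-5): AB must be at least 5*MN"
    return math.pi * (ab2**2 - mn2**2) / (2.0 * mn2) * (v_volts / i_amps)

print([round(s, 1) for s in log_spacings(2.5, 7)])
# [2.5, 4.0, 6.3, 10.0, 15.8, 25.0, 39.6] -> rounded in the field to 16, 40, ...

# Hypothetical reading at AB/2 = 10 m, MN/2 = 0.5 m: V = 20 mV at I = 50 mA.
print(f"{schlumberger_apparent_resistivity(10.0, 0.5, 0.020, 0.050):.0f} ohm-m")  # ~125
```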
There are three significant advantages to using the Schlumberger rather than the Wenner array. First, the Schlumberger array has a slightly greater interrogation depth. Second, because the resistivity is being sampled between points M and N, the lateral resolution is better for the Schlumberger array. (IMPORTANT: I have encountered engineers who think the AB distance determines lateral resolution; they are wrong. I have not found the source of this widespread error but would appreciate your informing me if you find it in print somewhere. A bit of analysis along the lines of "voltage drops across resistors in series" should allow you to understand why my statement regarding MN is true.) Third, because MN and AB can be changed independently, lateral variations between the MN electrodes are detected when the Schlumberger array is used. Because both AB and MN must be moved simultaneously in making a Wenner sounding, a Wenner user cannot determine whether details of a curve are controlled by variations as a function of depth or by lateral variations in electrical properties. Some investigators like to do "profiling" - that is, selecting a specific array spread (AB, MN constant) and moving the array from point to point. This method is dangerous, and is one reason (I think) the USEPA does not like electrical resistivity as much as electromagnetic (EM) methods when prospecting for contaminated ground water. Shallow resistivity variations not linked with conditions at the depths of interest can obscure the signals sought. Investigators who profile without due respect for shallow resistivity variations are likely to fail. Lateral variations in resistivity are best detected by performing soundings along a profile. INTERPRETATION OF SOUNDING CURVES Figure 4 shows Schlumberger sounding curves for a single layer over a half-space: one curve for a layer 10 times more resistive than the half-space, the other for a layer 10 times less resistive than the half-space. Note that the layer thickness is 1 m in both cases. These curves begin to depart from the horizontal "homogeneous" line just a bit to the left of AB/2 = 1.0 m. Note that these curves do not "level out" until AB/2 is 10 to 100 times the layer thickness for this resistivity contrast (note the comments on profiling, previous page). Often we cannot extend a line far enough for the curve to level off; however, we can still determine the resistivity contrast (provided it is not extreme) by noting the slope of the curve. Figure 5 shows a set of "2-layer" (actually, layer over half-space) MASTER CURVES. By comparing these curves with a set of field data, we can determine the resistivity of the layer (it is equal to the horizontal line value prior to the point where the curve turns up or down) and the contrast between that layer and the underlying material - from which we then calculate the resistivity of the half-space. Figure 4: Sounding curves for a layer over a half-space. Figure 5: Master curves for layer-over-half-space, Schlumberger array. Example: suppose the resistivity of the top layer is 100 ohm-meters and that we note, by comparing slopes with the master curves, that the underlying material is of lower resistivity and that the slope matches the 0.2 curve. This means the ratio ρ2/ρ1 = 0.2; that is, the underlying material has a resistivity of 20 ohm-meters. IMPORTANT: TO USE MASTER CURVES, YOUR FIELD DATA MUST BE PLOTTED ON A GRAPH WITH PRECISELY THE SAME SCALES AS THE MASTER CURVES - ELSE THE DEPTH INDEX WILL YIELD INCORRECT RESULTS !!
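Curves like those in Figures 4 and 5 can be computed directly. The sketch below is not Program DCSCHLUM or the forward-modelling spreadsheet mentioned later; it uses the classic image-series solution from potential theory for an ideal Schlumberger array over a single layer on a half-space. Running it with ρ1 = 100 ohm-m and ρ2/ρ1 = 0.2 reproduces the behaviour described above.

```python
def schlumberger_two_layer(s, rho1, rho2, h, n_terms=200):
    """Apparent resistivity (ideal Schlumberger array, AB/2 = s) over a
    uniform layer (rho1, thickness h) resting on a uniform half-space
    (rho2). Classic image-series solution:
        rho_a = rho1 * (1 + 2*s**3 * sum_{n>=1} k**n / ((2*n*h)**2 + s**2)**1.5)
    with reflection coefficient k = (rho2 - rho1) / (rho2 + rho1).
    For strong contrasts (|k| near 1), increase n_terms."""
    k = (rho2 - rho1) / (rho2 + rho1)
    total = sum(k**n * s**3 / ((2*n*h)**2 + s**2)**1.5
                for n in range(1, n_terms + 1))
    return rho1 * (1 + 2 * total)

# The example from the text: rho1 = 100 ohm-m, rho2/rho1 = 0.2, h = 1 m.
for s in (0.5, 1, 3, 10, 30, 100):
    rho_a = schlumberger_two_layer(s, 100.0, 20.0, 1.0)
    print(f"AB/2 = {s:6.1f} m   rho_a = {rho_a:6.1f} ohm-m")
# The curve sits near 100 at small AB/2, bends down just left of
# AB/2 = 1 m, and settles toward 20 only at AB/2 many times h.
```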
On curve matching: the point where layer thickness h = AB/2 is called the DEPTH INDEX. I use a set of these master curves on a transparent sheet - once I find a match, I can read the thickness of the layer by noting where this (h = 1) line lies with respect to my data. This value, read from the plot of my field data, gives me the thickness of the top layer. MORE COMPLEX MODELS Few earth structures can be modeled as a single layer over a half-space. One more typical situation occurs when the unsaturated zone overlies a porous unconfined aquifer, which in turn overlies low-porosity bedrock. In this case, the top layer has a higher resistivity than the middle layer, which has a lower resistivity than the bedrock. Figure 6 shows the H-type curve representing this condition. An A-type curve shows two layers over the half-space with resistivity increasing with depth, and the K-type curve shows a high-resistivity layer between a low-resistivity layer and half-space. The Q-type curve shows two layers over a half-space with resistivity decreasing with depth. You will not be expected to recall from memory A-, H-, Q- or K-type curves. You should, however, be able to inspect a curve and draw conclusions regarding the minimum number of electrical layers and the general trend of high/low relative resistivities. Figure 7 shows further complexity: 3 layers over a half-space (4 electrical units). Later, we will use computer programs to model the electrical response of a layered earth (Program DCSCHLUM) and to interpret by inversion a Schlumberger sounding curve (Program ATO). I have a spreadsheet that does forward models: you specify a model consisting of horizontal layers and the spreadsheet calculates the curve. You enter the data in the appropriate cells and can see both the model curve and the data on the same chart. Figure 6: 2 layers over a half-space: 4 possible permutations. Figure 7: 3 layers over a half-space, 4 of the 9 possible permutations. Just as the depth index offers a clue to the thickness of the top layer, inflection points offer clues to the depths of contacts between electrical units further down. For example, in Figure 7, note how the K- and H-type curves change slope at AB/2 distances about equal to the depths at which resistivity changes occur. We have 1 license for a sounding inversion program titled IX1D. The standard cost of this license is about $1000. Running the program requires a hardware key. Interpretations for publications (including your reports) based on resistivity soundings will utilize this resource. Figure 8: students conducting an electrical sounding. One final note: the slope at any point on the sounding curve is a function of just two layers. You can thus evaluate your data by determining ρ2 from the slope in the first part of the curve, then deduce ρ3 by noting the contrast with ρ2, and so on. THE DIPOLE-DIPOLE ARRAY This array is useful for mapping shallow variations, both lateral and vertical, simultaneously. Select a dipole dimension ("a" - Figure 9) appropriate for the depth of interest. According to Edwards (1977), the effective depth for the dipole-dipole array is about 0.2 times the overall array length, which grows with the dipole separation n. In other words, when using 10-meter dipoles, I can image the upper 12 meters or so when I make measurements between dipole separations n = 1 through n = 4. Figure 9: Dipole-dipole array. "n" must be an integer (1, 2, 3, etc.).
The apparent resistivity for the dipole-dipole array is

ρa = π a n(n+1)(n+2) (V/I)

and the value for each dipole-dipole combination is plotted on a pseudosection, which resembles a cross section of the region under the dipole-dipole profile. The traditional pseudosection plots apparent resistivity at the point where lines drawn downward at 45-degree angles from the center of each dipole intersect. This traditional pseudosection exaggerates the depth of anomalous materials. Edwards (1977) tells us how to modify the pseudosection so that it better matches true depths. I have had good success using the coefficients Edwards (1977) published at Talgua and down-gradient from the Stringfellow site. Disadvantages of the dipole-dipole array are, first, that high electrical currents are required to interrogate deeply into the earth and, second, that true rock resistivities are not easily calculated. Software to invert dipole-dipole resistivity profile data into a "true" cross-section showing rock resistivities (rather than apparent resistivities) is expensive. We have one license, which cost more than the IX1D inversion program. Figure 10: results of a dipole-dipole profile run in search of the edge of carbonate bedrock (high resistivity - red) at Liberty Crater. The top pseudosection is the observed data, the bottom is the geological model, and the center is the pseudosection calculated for the geological model. ONE FINAL WARNING: potential fields are not unique. That is, there are numerous electrical resistivity structures that could yield any specific sounding curve (IX1D performs an 'equivalence' function to reveal a range of models). The best solution is (usually) the least complex model sufficient to explain all observations (Occam's Razor: do not introduce complexity that is not mandated by the data). Thus, upon offering an interpretation for a sounding curve, you state that your interpretation is consistent with the data, not that it is THE model representing reality. On the other hand, I'd wager a case of beer that, if we drilled 2 boreholes 10 meters apart based on my interpretation of Figure 10, one would hit carbonate bedrock within 10 meters of the surface and the other would not hit carbonate, even if we drilled to 30 meters.
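For completeness, the dipole-dipole bookkeeping in code: the standard geometric factor π a n(n+1)(n+2), and the traditional 45-degree plotting point, which, as noted above, exaggerates depth (Edwards' published coefficients would replace that depth estimate).

```python
import math

def dipole_dipole_apparent_resistivity(a, n, v_volts, i_amps):
    """Standard dipole-dipole geometric factor:
    rho_a = pi * a * n * (n+1) * (n+2) * (V/I)."""
    return math.pi * a * n * (n + 1) * (n + 2) * (v_volts / i_amps)

def traditional_plot_point(x_tx_center, a, n):
    """Traditional pseudosection location: midway between the dipole
    centers, at the depth where the two 45-degree lines intersect.
    (This convention exaggerates depth relative to Edwards' values.)"""
    x_rx_center = x_tx_center + (n + 1) * a   # dipole centers are (n+1)*a apart
    x_plot = 0.5 * (x_tx_center + x_rx_center)
    depth = 0.5 * (x_rx_center - x_tx_center)
    return x_plot, depth

# 10-m dipoles, n = 1..4, with a made-up constant V/I for illustration.
for n in range(1, 5):
    rho = dipole_dipole_apparent_resistivity(10.0, n, 0.010, 0.5)
    x, z = traditional_plot_point(0.0, 10.0, n)
    print(f"n={n}: rho_a={rho:7.2f} ohm-m, plotted at x={x:5.1f} m, depth={z:5.1f} m")
```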
Canning is a method of preserving food by first sealing it in air-tight jars, cans, or pouches, and then heating it to a temperature that destroys contaminating microorganisms of either health or spoilage concern. The heat treatment is necessary because of the danger posed by several spore-forming, heat-resistant microorganisms, such as Clostridium botulinum (the causative agent of botulism). Spores of C. botulinum (at a concentration of 10^4 per ml) can resist boiling at 100°C (212°F) for more than 300 minutes; however, as temperature increases the required time decreases exponentially, so at 121°C (250°F) just 2.8 minutes are required for the same concentration. From a public safety point of view, foods with low acidity (i.e., pH above 4.6) need sterilization by canning under conditions of both high temperature (116-130°C) and pressure. Foods that must be pressure canned include most vegetables, meats, seafood, poultry, and dairy products. The only foods that may be safely canned in a boiling water bath (without high pressure) are highly acidic foods with a pH below 4.6, such as fruits, pickled vegetables, or other foods to which acid has been added. During the French Revolutionary Wars, the French government offered a hefty cash award of 12,000 francs to any inventor who could come up with a cheap and effective method of preserving large amounts of food. The massive armies of the period required regular supplies of quality food, and so preservation became a necessity. In 1809, the French confectioner Nicolas François Appert observed that food cooked inside a jar did not spoil unless the seals leaked, and he thus developed a method of sealing food inside glass jars. The reason food did not spoil was unknown at the time, since it would take another 50 years before Louis Pasteur confirmed the existence of microbes. However, glass containers presented many challenges for transportation. Following the work of Peter Durand (1810), glass jars were replaced with cylindrical tin or wrought-iron canisters (later shortened to "cans"), which were both cheaper and quicker to make and much more resilient than fragile glass jars. Tin-openers were not to be invented for another 30 years - at first, soldiers had to cut the cans open with bayonets or smash them open with rocks. The French Army began experimenting with issuing tinned foods to its soldiers, but the slow process of tinning foods and the even slower development and transport stages prevented the army from shipping large amounts around the French Empire, and the war ended before the process could be perfected. Unfortunately for Appert, the factory which he had built with his prize money was burned down in 1814 by Allied soldiers invading France. Following the end of the Napoleonic Wars, the canning process was gradually put into practice in other European countries and in the United States. Based on Appert's methods of food preservation, Peter Durand patented a process in the United Kingdom in 1810, packaging food in sealed, airtight wrought-iron cans. Initially, the canning process was slow and labor-intensive, as each can had to be hand-made and took up to six hours to cook properly, making tinned food too expensive for ordinary people to buy. In 1824, meats and stews produced by the Appert method were carried by Sir William Edward Parry on his voyage in search of the Northwest Passage.
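A brief aside on the numbers quoted above: the time-temperature trade-off follows the standard z-value model used in thermal processing, in which the required time drops tenfold for every z degrees of temperature rise. Below is a sketch using only the two data points given in the text; the result of roughly 10°C matches the classic z-value for C. botulinum spores.

```python
import math

# Thermal death data for C. botulinum spores quoted above:
# about 300 min at 100 degC versus 2.8 min at 121 degC.
t1, T1 = 300.0, 100.0
t2, T2 = 2.8, 121.0

# z-value: the temperature rise that cuts the required time tenfold.
z = (T2 - T1) / math.log10(t1 / t2)
print(f"z = {z:.1f} degC")  # ~10.3 degC, the classic value for botulinum spores

def required_time_min(temp_c):
    """Required process time at temp_c, scaling the 121 degC reference:
    t = t_ref * 10**((T_ref - temp_c) / z)."""
    return t2 * 10 ** ((T2 - temp_c) / z)

print(f"{required_time_min(116):.1f} min at 116 degC")  # ~8.5 min, the low end of the pressure-canning range
```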
Throughout the mid-nineteenth century, tinned food became a status symbol amongst middle-class households in Europe, becoming something of a frivolous novelty. Early methods of manufacture employed poisonous lead solder for sealing the tins, which had disastrous consequences for the 1845 Franklin expedition to the Arctic Ocean. Increasing mechanization of the canning process, coupled with a huge increase in urban populations across Europe, resulted in a rising demand for tinned food. A number of inventions and improvements followed, and by the 1860s the time to cook food in sealed cans had been reduced from around six hours to only 30 minutes. Canned food also began to spread beyond Europe - Thomas Kensett established the first American canning factory in New York City in 1812, using improved tin-plated wrought-iron cans for preserving oysters, meats, fruits and vegetables. Demand for tinned food greatly increased during wars. Large-scale wars in the nineteenth century, such as the Crimean War, American Civil War, and Franco-Prussian War, introduced increasing numbers of working-class men to tinned food, and allowed canning companies to expand their businesses to meet military demands for non-perishable food. This let companies manufacture in bulk and sell to wider civilian markets after the wars ended. Urban populations in Victorian-era Britain demanded ever-increasing quantities of cheap, varied, good-quality food that they could keep on the shelves at home without having to go to the shops every day for fresh produce. In response, companies such as Nestlé, Heinz, and others emerged to provide shops with good-quality tinned food for sale to ordinary working-class city-dwellers. The late nineteenth century saw the range of tinned food available to urban populations greatly increase, as rival canning companies competed with each other using novel foodstuffs, highly decorated printed labels, and lower prices. Demand for tinned food skyrocketed during World War I, as military commanders sought vast quantities of cheap, high-calorie food to feed their millions of soldiers; food which could be transported safely, would survive trench conditions, and would not spoil between the factory and the front lines. Throughout the war, soldiers generally subsisted on very low-quality tinned foodstuffs, such as the British "Bully Beef" (cheap corned beef), pork and beans, and Maconochie's Irish stew, but by 1916 widespread boredom with cheap tinned food amongst soldiers resulted in militaries purchasing better-quality food in order to improve low morale, and the first complete meals in a tin began to appear. In 1917 the French Army began issuing tinned French cuisine, such as coq au vin, whilst the Italian Army experimented with tinned ravioli and spaghetti bolognese. Shortages of tinned food in the British Army in 1917 led to the government issuing cigarettes and even amphetamines to soldiers to suppress their appetites. After the war, companies that had supplied tinned food to national militaries improved the quality of their goods for sale on the civilian market. Today, tin-coated steel is the material most commonly used. Laminate vacuum pouches are also now used for canning, such as those found in an MRE. Modern double seams provide an airtight seal to the tin can. This airtight nature is crucial to keeping bacteria out of the can and keeping its contents sealed inside. Thus, double-seamed cans are also known as Sanitary Cans.
Developed in 1900 in Europe, this sort of can was made of the traditional cylindrical body made with tin plate; however, the two ends (lids) were attached using what is now called a double seam. A can thus sealed is impervious to the outside world, thanks to two tight, continuous folds between the can's cylindrical body and the lid at each end. This eliminated the need for solder and allowed improvements in the speed of manufacturing, thereby lowering the cost. Double seams make extensive use of rollers in shaping the can, the lid, and the final double seam. To make a sanitary can and lid suitable for double seaming, manufacture begins with a sheet of coated tin plate. To create the can body, rectangles are cut and curled around a die and welded together, creating a cylinder with a side seam. Rollers are then used to flare out one or both ends of the cylinder to create a quarter-circle flange around the circumference. Great care and precision are required to ensure that the welded sides are perfectly aligned, as any misalignment will mean that the shape of the flange is inconsistent, compromising its integrity. A circle is then cut from the sheet using a die cutter. The circle is shaped in a stamping press to create a downward countersink that fits snugly into the can body. The result can be compared to an upside-down and very flat top hat. The outer edge is then curled down and around approximately 130 degrees using rollers, creating the end curl. The final result is a steel tube with a flanged edge, and a countersunk steel disc with a curled edge. A rubber compound is put inside the curl. The body and end are brought together in a seamer and held in place by the base plate and chuck, respectively. The base plate provides a sure footing for the can body during the seaming operation, and the chuck fits snugly into the end (lid). The result is that the countersink of the end sits inside the top of the can body, just below the flange. The end curl protrudes slightly beyond the flange. Once brought together in the seamer, the seaming head presses a special first operation roller against the end curl. The end curl is pressed against the flange, curling it in toward the body and under the flange. The flange is also bent downward, and the end and body are now loosely joined together. The first operation roller is then retracted. At this point during manufacture, five thicknesses of steel exist in the seam. From the outside in they are: a) End, b) Flange, c) End Curl, d) Body, e) Countersink. This is the first seam. All the parts of the seam are now aligned and ready for the final stage. The seaming head then engages the second operation roller against the partly formed seam. The second operation presses all five steel components together tightly to form the final seal. The five layers in the final seam are then called: a) End, b) Body Hook, c) Cover Hook, d) Body, e) Countersink. All sanitary cans require a filling medium within the seam, because bare metal-to-metal contact would not maintain a hermetic seal for very long. In most cases a rubberized sealing compound is placed inside the end curl radius, forming the actual critical contact point between the end and the body. Probably the most important innovation since the introduction of double seams is the welded side seam. Prior to the welded side seam, the can body was folded and/or soldered together, leaving a relatively thick side seam.
The thick side seam meant that at the side-seam end juncture the end curl had more metal to curl around before closing in behind the body hook or flange, leaving a greater opportunity for error.
The Art courses in this key stage are based on the National Curriculum programme of study and include: · Exploring and developing ideas · Investigating and making art, craft and design · Evaluating and developing work · Developing knowledge and understanding. Emphasis is placed on the development of basic skills related to two- and three-dimensional work. Drawing is regarded highly, and pupils are set regular homework tasks to develop this skill. During the three years of this key stage, pupils will experience a variety of materials and approaches to art within drawing, painting, printing, sculpture and ICT-based projects. To view the Key Stage 3 Assessment Criteria for Art - please click here Key Stage 3 Overview Understanding the historical context for art, craft, design and architecture: Students are given regular opportunities to analyse and evaluate art, craft, design and architecture in relation to context (social and cultural), and are provided with opportunities for personal response through independent research and collaborative class discussion. During the course students will have opportunities to apply knowledge gained from analysing the work of artists, craftsmen and designers to develop and refine ideas (style/concept etc.). Creativity: Students explore a range of techniques to record observations. They are introduced to processes that develop creative thinking through images, sketches and annotations of intention/interest. They develop the skill to refine an idea through a process of experimentation - evaluation - improvement, leading to a final resolution. Personalised responses are encouraged and links identified to new learning (both conceptual and technical). Skills focus: Students are taught a range of techniques through a range of media, and core skills are repeated in order to increase proficiency. Drawing is fundamental to proficiency within the visual arts for notating observations and ideas in visual forms. Students build understanding of materials and quality of application through regular practice, experimentation and reflection on progress. Year by year, the work is predominantly focused on: observational drawing, colour/pastel techniques and a design-based project; then painting and fine art techniques leading to a fine-art-based project; then personalised project development, experimentation with media techniques and printmaking. To view the Department of Education's curriculum guidelines for Art & Design - please click here
River flows drop as carbon dioxide creates thirstier plants Rising carbon dioxide concentrations are causing vegetation across large parts of Australia to grow more quickly, in turn consuming more water and reducing flows into river basins. Our research, published today in Nature Climate Change, shows that river flows have decreased by 24-28% in a large part of Australia due to increasing CO₂ levels, which have risen by 14% since the early 1980s. This could exacerbate water scarcity in several populated and agriculturally important regions. It was previously unclear whether the increase of CO₂ in the atmosphere had led to detectable changes in streamflow in Australian rivers. This is partly because increasing CO₂ can have two opposing effects on water resources. CO₂ is the key ingredient for photosynthesis, and higher concentrations allow plants to grow more vigorously. This fertilisation effect could be expected to lead to denser vegetation that needs more water to grow, in turn reducing the amount of rainwater that can run off into rivers. Acting directly against this is the fact that increased CO₂ concentrations allow plants to use water more sparingly. Small pores called stomata on the surface of leaves allow plants to regulate their uptake of CO₂ for photosynthesis and their water loss to the atmosphere. At higher CO₂ concentrations, plants can partially close these pores, maintaining the same influx of CO₂ while also reducing water loss through transpiration. This could be expected to leave more rainwater available to become river runoff. The net effect of these two counteracting processes has so far been highly uncertain. In our study, we used a new method that combines satellite measurements of vegetation cover with river flow data collected over more than 30 years. Using statistical methods, we factored out other influences that affect river flows, such as variations in rainfall. Our results suggest that the net effect of increased CO₂ has been declining runoff across the subhumid and semi-arid parts of Australia, and that this can be attributed to the increased vegetation. (Image: Anna Ukkola, Author provided) The good news is that increasing CO₂ might also make plants better able to survive in these drying landscapes. By using water more efficiently, plants can grow more vigorously in arid regions and should better withstand droughts, such as those commonly associated with El Niño events. In areas with an average annual rainfall below about 700 mm, we found that the amount of vegetation cover that can be sustained has increased by about 35% since the early 1980s. This is good news for dryland cropping and grazing, which are likely to enjoy increased yields as a consequence. Despite these positive effects, in less dry parts of Australia the reduction of river flow adds yet more pressure to water resources. As natural vegetation greens and consumes more water, local rivers and dams receive less. At the same time, rainfall patterns are changing. With the exception of northern Australia, many of the affected areas are already experiencing declining rainfall, and this trend is projected to continue into the future with increasing global temperature. Elsewhere around the world, vegetation increases have also been observed in other dry regions such as southern and western Africa and the Mediterranean. It is certainly possible that these regions are also facing declining streamflow as a result.
The increase in vegetation helps to draw CO₂ from the atmosphere, but the effect is not enough to significantly slow the rise in atmospheric CO₂ and the resulting long-term climate change. Despite the observed greening, most of Australia’s vegetation continues to be very sensitive to rainfall changes. If rainfall continues to decline as projected, the greening trend may end or even be reversed, releasing the stored carbon back into the atmosphere. Original article posted on The Conversation, 20 October 2015 (Link)
The difference between sex and gender with s3xtheorywithdemi So what is the difference between sex and gender? Sex refers to the biological attributes of humans and animals: the physical and physiological features, which include chromosomes, gene expression, hormones and reproductive anatomy. Sex is assigned at birth by a doctor, based purely on the external genitalia of the child; this is where the terms AFAB and AMAB (assigned female/male at birth) come from. The sex of the infant is usually categorised as either male or female, based on whether the infant has a penis or a vulva. However, sex isn't binary ('binary' meaning either male or female). Infants may also be born intersex. Intersex people are born with sex characteristics, including genitals, gonads and chromosome patterns, that do not "fit" typical binary clusters of male or female bodies. It is said that intersex individuals make up about the same percentage of the world's population as red-headed people; however, many of these individuals go through life with no knowledge of this, because doctors will typically assign the sex that fits the infant best based on their genitalia, or perform "corrective" surgery. Intersex individuals account for around 1 in 1,500 births, and yet there is a huge lack of education around this. Sex isn't binary in the same way gender isn't binary; however, there is much more resistance to accepting this fact in society, due to the lack of education to correct the myth of XX and XY. For instance, some men are born with two or three X chromosomes, just as some women are born with a Y chromosome. In 1964, Robert Stoller coined the term 'gender identity', referring to an individual's personal concept of their gender and how they feel inside. Gender cannot be determined by any physical attributes; it refers to the socially constructed roles, expressions and behaviours of girls, boys, men, women and gender diverse* individuals. Much like sex, gender too is neither binary nor static: gender identity exists on a continuum which can change over time. One person's gender identity, even if labelled the same, will exist on a slightly different point of the continuum than someone else's. Two nonbinary individuals may plot themselves in different places on the continuum between male and female, yet they still identify with the same label. There are MANY gender identities, and this number continues to grow each day. There are also umbrella identities that group various similar identities together; for example, nonbinary encompasses many gender identities that don't fit into the male-female binary - gender-diverse, genderqueer, gender non-conforming and two-spirit all exist under this umbrella term. A great tool for explaining the difference between sex and gender, and how they can coexist on very different continuums, is the Genderbread Person. This tool is used by many RSHE educators to easily demonstrate gender, sex, expression and attraction to students in secondary schools. The model states that gender is found in the brain, sex is between the legs, expression (how we present ourselves to the world) is demonstrated by our outer body, and attraction (sexuality) is found in the heart. This model helps to separate the various parts of our identity, to show how they can all exist on different continuums whilst also existing in the same person. Your gender and sex do not have to be the same, and both are valid as their own separate indicators of your identity.
Similarly, your attraction and gender do not need to align, nor do your sex and expression. There are hundreds of gender identity 'recipes' that you could create for your genderbread person, and no two people will be exactly the same. image via genderbread.org I'd definitely recommend looking up the Genderbread Person and writing out your gender, sex, expression and attraction on a piece of paper to help you understand your identity - even if you are cisgender*, give it a go. Not only will it help you understand yourself, it will help you to grasp how the various parts of your identity exist separately. I also want to add that it is completely OK for your gender and sex to be the same; however, it is crucial to open your mind to how others identify, and to the fact that their identity may not align as yours does. Gender diverse - a person who identifies outside of the binary; an umbrella term. Cisgender - a person whose sense of personal identity and gender corresponds with their birth sex.
This article is part of a series celebrating the 20th birthday of the Isaac Newton Institute in Cambridge. The Institute is a place where leading mathematicians from around the world can come together for weeks or months at a time to indulge in what they like doing best: thinking about maths and exchanging ideas without the distractions and duties that come with their normal working lives. And as you'll see in our articles, what starts out as abstract mathematics scribbled on the back of a napkin can have a major impact in the real world. In 1997 the Isaac Newton Institute hosted a programme on neural networks and machine learning (NNM). Organised by Christopher M. Bishop (currently Distinguished Scientist at Microsoft Research, Cambridge), the programme attracted over 180 participants and was the largest international gathering of its kind at the time. It has since been hailed as a landmark event. Mimicking the brain Artificial neural networks grew out of researchers' attempts to mimic the human brain. The neural networks and machine learning programme took place at a time when the field found itself at a crucial juncture. Since the mid-1980s, researchers' efforts to build intelligent machines had focused on trying to mimic the brain's vastly complex network of billions of individual neurons. In the much smaller artificial neural networks, the "neurons" are mini processing units that can receive information and transform it according to a set of mathematical rules. A set of input data, say an image of some hand-written text on a page, is broken up and coded into mathematically digestible pieces, which then flow through the network, being transformed on their paths from neuron to neuron, and eventually emerge as an output, for example a transcription of the hand-written text into ASCII characters. A crucial feature of artificial neural networks is their ability to learn. When presented with an example set of input-output data, say a set of hand-written pages and their correct transcriptions, the artificial network can compare its own outputs with the desired ones. If its own outputs are not good enough, it can adjust the parameters that govern its mathematical transformations until it achieves satisfactory results. Using this automated learning-by-example process, artificial neural networks can learn to recognise and classify patterns they have never seen before. Vast amounts of training data are essential for the learning process, but large data sets tend to come with quite a lot of "noise": errors and variability. In the 1990s it became clear that the future of neural networks hinged not so much on neurobiology, but on their ability to make the most of noisy data sets, for example by recognising statistical patterns and quantifying uncertainty using probabilities. The NNM programme grew out of the recognition that the probabilistic aspects of neural networks needed to be put on a sound mathematical footing. The organisers recognised the strong inter-disciplinary aspect of the field, bringing together experts from computer science and statistics, as well as other fields with an interest in the area, such as physics and dynamical systems. The programme afforded these experts the time and space to exchange ideas.
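To make the learning-by-example loop described above concrete, here is a toy illustration (not code from the programme): a tiny network that repeatedly compares its outputs with the desired ones and adjusts its parameters by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input-output examples: the XOR pattern stands in for
# "hand-written pages and their correct transcriptions".
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# A tiny network: 2 inputs -> 4 tanh "neurons" -> 1 sigmoid output.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)          # information flows through the network...
    out = sigmoid(h @ W2 + b2)        # ...and emerges as an output
    err = out - y                     # compare with the desired outputs
    g_out = err * out * (1 - out)     # then adjust the parameters
    g_h = (g_out @ W2.T) * (1 - h**2)
    W2 -= 0.5 * (h.T @ g_out); b2 -= 0.5 * g_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ g_h);   b1 -= 0.5 * g_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```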
Probabilistic graphical models One particularly fruitful, though rather unexpected, outcome of the programme was the convergence of neural network theory and what is called graphical modelling. In a graphical model the elements of a data set, say all the people on a social networking site, are represented by nodes in a network, with the links between nodes representing relationships between them, gained from statistical information extracted from the data. Thus, a graphical model provides a way of representing additional structural information about a data set. Researchers at the NNM programme took the crucial first steps towards incorporating into neural networks the extra structural information that comes with graphical models. This approach has enabled neural networks to tackle much richer and harder problems than was previously possible. "The benefit of bringing the two communities together has been to provide us with a new paradigm for machine learning," says the programme organiser Christopher Bishop. "This is having a major impact, including a major commercial impact." TrueSkill and adPredict Following on from the NNM programme, two practical, and commercially extremely powerful, applications have been developed at Microsoft Research. Both applications learn in real time from large data networks comprising millions, or even billions, of nodes. TrueSkill is a system for the Xbox Live internet gaming environment. It takes the results from the large network of players competing online and uses this information to estimate players' skills and to match up players with similar skill levels for the next round of games. adPredict is a mechanism for pricing advertisements that appear in the Microsoft search engine Bing. Analysing users' "click behaviour", it estimates the probability that users click on an advertisement, which directly influences advertising revenue. "TrueSkill is to the best of my knowledge the first planet-scale application of Bayesian models," says Bishop. "AdPredict resulted from an internal competition within Microsoft, in which the Bing search engine team provided a training data set and invited teams to compete to produce the best predictor of the probability that a user would click on a particular advertisement. The adPredict system from Microsoft Research Cambridge was the joint winner in terms of accuracy of prediction. However, the adPredict system was chosen for use in the product, as it was simpler and more scalable. It is now used on 100% of US traffic and is being rolled out world-wide." By bringing together two separate scientific communities to work on a common theme (a "deliberate act of social engineering", as described by Bishop), the NNM programme not only paved the way for the applications developed at Microsoft, but also impacted on other areas, including the statistical analysis of DNA sequences, face recognition technologies and computer vision. But the NNM programme did not just benefit the machine learning community. One participant who was able to put ideas from neural network theory back into statistical analysis is David Spiegelhalter, Winton Professor of the Public Understanding of Risk and Senior Scientist at the Biostatistics Unit in the University of Cambridge. After extensive interaction with other programme participants, Spiegelhalter experienced the crucial "Eureka moment" he needed to perfect a statistical software package called BUGS. The package is widely used to fit probabilistic graphical models to real-world data.
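Stepping back to TrueSkill for a moment: at its core is a Bayesian update of each player's skill belief after a match. Below is a heavily simplified sketch of the published two-player, no-draws update (the production system also handles draws, teams and a dynamics factor); the numbers used are TrueSkill's conventional defaults.

```python
from math import sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal, for pdf and cdf

def trueskill_update(winner, loser, beta=25/6):
    """Simplified two-player, no-draws skill update in the spirit of
    TrueSkill. Each player is (mu, sigma): a Gaussian belief about skill."""
    (mu_w, s_w), (mu_l, s_l) = winner, loser
    c = sqrt(2 * beta**2 + s_w**2 + s_l**2)
    t = (mu_w - mu_l) / c
    v = N.pdf(t) / N.cdf(t)      # how surprising the win was
    w = v * (v + t)              # how much uncertainty it removes
    new_w = (mu_w + s_w**2 / c * v, s_w * sqrt(1 - s_w**2 / c**2 * w))
    new_l = (mu_l - s_l**2 / c * v, s_l * sqrt(1 - s_l**2 / c**2 * w))
    return new_w, new_l

# Two fresh players (mu = 25, sigma = 25/3): the winner's mean rises to
# about 29.2, the loser's drops to about 20.8, and both sigmas shrink.
print(trueskill_update((25.0, 25/3), (25.0, 25/3)))
```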
BUGS has a wide range of applications, from the modelling of animal populations to appraisals of new medical interventions. "The programme was an enormously stimulating time," says Spiegelhalter. "The free time and space, and the opportunity to talk to people, provided a great atmosphere for ideas." Spiegelhalter's work eventually resulted in the paper Bayesian measures of model complexity and fit (with discussion) by DJ Spiegelhalter, NG Best, BP Carlin, and A van der Linde (JRSS, Series B, 64:583-640, 2002). By October 2010 this paper had over 2000 citations on Google Scholar and 1269 on Web of Science. According to Essential Science Indicators from Thomson Reuters, it has become the third highest cited paper in all the mathematical sciences over the ten years ending in October 2009. As Spiegelhalter says, "All this is due to the Isaac Newton Institute providing a retreat for inspiration and concentration."
New studies show that the similarly smooth, almost hairless skin of whales and hippos evolved independently. This work suggests that their last common ancestor was probably a land-dwelling mammal, and it uproots the idea that skin fine-tuned for underwater life was inherited from a shared amphibious ancestor. The study, published today in the journal Current Biology, was led by researchers at the American Museum of Natural History; the University of California, Irvine; the University of California, Riverside; the Max Planck Institute for Molecular Cell Biology and Genetics; and the LOEWE-Centre for Translational Biodiversity Genomics (Germany). "How mammals left terra firma and became completely aquatic is one of the most fascinating evolutionary stories, comparable only to the way animals first exchanged water for land, or to the evolution of flight," said the study's corresponding author, from the American Museum of Natural History's Department of Vertebrate Zoology. "Our latest findings contradict the current doctrine in this area: that amphibious hippo relatives may have been part of the transition when mammals re-entered underwater life." In contrast to their appearance, completely aquatic cetaceans (a group that includes whales, dolphins, and porpoises) and semi-aquatic hippos are each other's closest living relatives, sharing a common ancestor that lived about 55 million years ago. They also share many features that are unusual for mammals: they give birth and lactate in water, and they lack scrotal testes, sebaceous glands (which secrete oily sebum) and most of their hair. Since these traits are rarely found in other mammals, it had been speculated that they were already present in the common ancestor of hippos and whales. However, when and how cetacean ancestors became completely aquatic remains a subject of intense debate. Paleontological studies of transitional extinct cetaceans suggest that the invasion of the water was a gradual process involving an amphibious phase. So did hippos and whales independently develop adaptations to aquatic lifestyles? Or was their common ancestor already amphibious, with the cetaceans diverging from it to become completely aquatic? "The simplest hypothesis is that the whale and hippo ancestor was already amphibious, but evolution is not always the shortest distance between two points," said Mark Springer, a professor of biology at the University of California, Riverside and lead author of the study. To solve this problem, the researchers turned to animal skin, which undergoes major evolutionary change in aquatic organisms. "When a group of animals becomes aquatic, the skin becomes much more streamlined and uniform overall," said Maksim Plikus, co-author and skin biologist at the University of California, Irvine. "Complex appendages such as hair, nails, and sweat glands are no longer needed - in fact, they can interfere with life in the water - so they are lost, along with the barrier function performed by the outer layers of the skin, which in terrestrial mammals is essential to prevent water from evaporating out of the body and to keep pathogens from invading." The researchers compared the anatomical structure of hippo and whale skin based on histology, and used genomic screening to compile a comprehensive list of "skin genes" inactivated in both hippos and whales. This was supported by the first examination of the genome of the pygmy hippopotamus (Choeropsis liberiensis), one of the two living hippo species.
"Looking at the molecular signatures, we have an impressive and clear answer," said Michael Hiller, co-author, of the Max Planck Institute of Molecular Cell Biology and Genetics and the LOEWE-Centre for Translational Biodiversity Genomics in Germany. "Our results strongly support the idea that the 'aquatic' skin traits found in both hippos and whales evolved independently. Not only that, it turns out that the gene losses in the hippo lineage occurred much later than those in the whale lineage." These genetic results are consistent with examination of the skin itself. Unlike whales, hippos have a very special type of sweat gland that produces "blood sweat," an orange substance presumed to have natural antibacterial and sunscreen properties. And while whales retain only a few whiskers, hippos keep a sparse coat of hair, most noticeable on the tips of the ears and tail. The tail hair is used when the hippo defecates: the hippo spins its tail rapidly, and the brush-like hair helps scatter the dung as a way of marking territory. In addition, whale skin is much thicker than hippo skin, and only hippos have hooves. "These differences are in perfect agreement with the evolutionary history written in the genomes, which shows independent knockouts of skin genes in the cetacean and hippo lineages," Springer said. "None of the inactivating mutations are shared between the two lineages, as would be expected if they had a common aquatic ancestor." Current Biology (2021). DOI: 10.1016/j.cub.2021.02.057. Courtesy of the American Museum of Natural History.
Despite vast scientific efforts over many decades, prediction of earthquakes remains highly challenging. Now, a collaboration between universities and other institutions in Spain has adopted a new tool for monitoring seismic activity. Fibre optic cables installed on the seabed for communications purposes are not only able to transmit large amounts of data, but can at the same time serve as seismic networks for earthquake detection. Researchers from the Institut de Ciències del Mar (ICM) in Barcelona, from the Optics Institute (IO-CSIC), from the Alcalá University (UAH) and the Spanish national research and education network RedIRIS have demonstrated the feasibility of the method for submarine communication cables that connect the islands of Tenerife and Gran Canaria. The area has a high level of seismic activity. The method takes advantage of the fact that even the best fibre optic cables have tiny internal fibre flaws. These imperfections become reference points. At the end of the cable, researchers install a device which sends laser pulses through the optical fibre. External disturbances, such as ground vibrations, modify the properties of the backscattered light. In this way, a single cable connected to a monitoring device can be turned into a network of thousands of sensors, since each microscopic fibre flaw serves as a sensor. Importantly, the method does not cause any disturbance to the normal functioning of the fibre optic communication cables. "Despite the increase in the number of seismic stations in the Canary Islands in recent years, these are located on land, so the underwater areas are not well monitored. The availability of seismic data in this area will make it possible to characterise, with higher resolution, the seismically active structures between Tenerife and Gran Canaria," says Arantza Ugalde, from the Barcelona Centre for Subsurface Imaging, ICM. The method may also allow the study of other types of signals commonly recorded by marine seismic networks, possibly caused by processes related to gases or deep ocean currents. Finally, fibre optic cabling will be used to analyse non-seismic signals such as those emitted by some marine animals. "The distributed acoustic sensing technology that we have developed makes it possible to easily transform a fibre optic cable into an array of highly sensitive deformation seismometers. This technology will revolutionise data collection in seismology, particularly in the submarine field, where the installation of seismic sensors implies a great technical and economic challenge," says Miguel González Herráez, from the UAH. From RedIRIS, Esther Robles explains that "facilitating underwater fibre optics for this seismic activity measurement experiment has made us go one step further in line with our vocation, which is to enhance and facilitate the work of researchers with unique network solutions."
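The bookkeeping that turns one cable into thousands of sensors is conceptually simple: each backscatter echo is mapped to a position along the fibre from its round-trip travel time. A minimal sketch, with an assumed refractive index for silica fibre:

```python
C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_FIBER = 1.468            # assumed refractive index of silica fibre

def scatter_position_m(round_trip_s):
    """Map a backscatter echo to a position along the cable: light
    travels down and back, so distance = (speed in fibre * time) / 2."""
    v = C_VACUUM / N_FIBER
    return v * round_trip_s / 2

# An echo arriving 1 ms after the pulse left the interrogator comes
# from a flaw roughly 102 km down the cable.
print(f"{scatter_position_m(1e-3) / 1000:.0f} km")
```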
A. no=S - S come, go along
B. no=A=O - A send for, summon O
C. auxiliary V-no - come and V
D. imperative no! - V-phrase, come!

No is a single-syllable verb meaning 'come'. In the rare cases when it is not followed by an auxiliary nor has a rational animate subject, the form of no is nodu. Arguments for no are the same as with da. The main difference between da and no is deictic. No implies motion towards the speaker or observer. It is also used for motion along a path parallel to something else. no can be used as an auxiliary to mean 'come and V'. Like da, it is not used with the verbs of stance (sede, tene, degi) nor with any verbs denoting mental activity (dullo, callo, canno). It is also not used with da or with itself. Kuno-no, rather than meaning 'come and get', is used to mean 'come with' or 'bring'. Imperative no! is a single-syllable word, which is allowed as it is considered an interjection. It can be appended to a verb phrase to make it imperative. For example: Kuno=di=nu, no! 'You get the thing, do!' or 'Get the thing!'. This is more polite than using da! It might be used by a parent towards a child, for example, or an elder person towards a much younger person. It has the urgency of da! but is tempered by affection. Tomorrow: nolo and nota.
What Is MRSA? MRSA is a type of staph bacteria. MRSA (say: MUR-suh) stands for methicillin-resistant Staphylococcus aureus. It causes infections that can be hard to treat. Many people have staph bacteria living harmlessly on their skin or in their noses. Staph bacteria that enter the body through a cut, scrape, or rash can cause minor skin infections. Most of these heal on their own if the wound is kept clean and bandaged, but sometimes antibiotics are needed. MRSA differs from other staph bacteria because it doesn't respond well to most of the antibiotics used to treat staph infections. Bacteria that are hard to kill are called "resistant." They become resistant by changing in some way that prevents the antibiotic from doing its job. Methicillin is an antibiotic normally used to treat staph, so these bacteria are called "methicillin-resistant." What Are the Signs & Symptoms of MRSA? MRSA infections look like other skin infections. They often develop around open sores, but also happen on intact skin. There can be red, swollen, painful areas or bumps on the affected skin. They sometimes ooze fluid or pus (an infected area with pus is an abscess). Some kids also have a fever. In more serious cases, the infection can spread to the blood, lungs, bones, joints, or other parts of the body. Is MRSA Contagious? MRSA is contagious. Like all other staph bacteria, it can spread: - when someone touches a contaminated surface - from person to person, especially in places where large groups of people are close together (like schools, camps, or college dorms). Often this happens when people with skin infections share personal things like razors, bed linens, towels, or clothing. - from one area of their body to another, by dirty hands or fingernails In the past, MRSA mostly affected people in nursing homes or hospitals. It was more likely to be seen in people with weak immune systems. It was also more common in people who had a surgical wound. But now some otherwise healthy people outside of those settings are getting the infection. Sometimes, people can be "carriers" of MRSA. This means that the bacteria stay on or in their bodies for days, weeks, or even years without causing symptoms. But they can spread it to others. That's why washing hands well and often is so important. How Is MRSA Diagnosed? A doctor will examine the affected skin, and sometimes will take a sample of pus or blood. This goes to a lab for testing to find out which bacteria are causing the infection. How Is MRSA Treated? Treatment depends on what the infection looks like: - If there is an abscess, the doctor might make a small cut in the skin over it to let the pus drain out. - The doctor may prescribe an antibiotic, either to put on the skin or to be taken by mouth (some antibiotics still work for MRSA). - Someone with a more severe infection might get intravenous (IV) antibiotics in a hospital. Can MRSA Be Prevented? These simple steps can help prevent MRSA infections: - Adults and kids should wash their hands well and often with soap and warm water for at least 20 seconds. Alcohol-based hand sanitizers or wipes are OK if soap and water aren't handy. - Do not touch or pick at infected areas. Cuts or broken skin should be cleaned and covered with a bandage. - Don't share razors, towels, uniforms, or other items that come into contact with bare skin. - If sports equipment must be shared, cover it with a barrier (clothing or a towel) to prevent skin from touching it. 
The equipment also should be cleaned before each use with a disinfectant that works against MRSA. How Can Parents Help? Call the doctor if: - Your child has a skin area that is red, painful, swollen, and/or filled with pus, especially if he or she has fever or feels sick. - Skin infections seem to be passing from one family member to another (or among students at school), or if two or more family members have skin infections at the same time. What Else Should I Know? Bacteria become resistant to antibiotics when they are not used properly. This includes: - taking antibiotics for things they can't cure, like illnesses caused by viruses - not taking all the medicine prescribed - taking medicine that was prescribed for someone else Taking antibiotics exactly as prescribed can help stop bacteria from becoming resistant to them. Take these precautions: - Never give your child someone else's prescription. - Don't save antibiotics for "next time." - Always give antibiotics as directed until the prescription is done (unless a doctor says it's OK to stop early).
This is how our TEFL graduates feel they have gained from their course, and how they plan to put into action what they learned:

Unit 2 - Parts of Speech

In this unit I learned about the importance of the eight major parts of speech in English grammar: nouns, adjectives, articles, verbs, adverbs, gerunds, pronouns, and prepositions/conjunctions. Every word in the English language belongs to one of the parts of speech; knowing the different parts of speech is important in understanding how words can and should be joined together to make sentences that are grammatically correct. As a teacher, you should have a good understanding of this important subject, so that you can identify each category and explain it to your students as well. It is important that students recognize word order and sentence structure. Just because a student knows a word doesn't mean he or she knows how to use it in a sentence. The parts of speech are the building blocks of sentences. Each part of speech describes not what a word is, but how the word is used.
How is humidity measured?

Humidity is measured using a device called a psychrometer, or wet-and-dry-bulb thermometer. Evaporation cools the wet bulb, so the drier the air, the larger the temperature difference between the two thermometers; from that difference, the amount of water vapor in the air (the humidity) can be determined. The dew point is the temperature at which the air reaches 100% relative humidity, and it strongly influences the "feels like" temperature.
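As a rough illustration of how dew point ties together temperature and humidity, the sketch below uses the Magnus approximation, a standard empirical formula. The particular constants (one common parameterization) and the function name are choices made here for illustration, not anything prescribed by the answer above.

```python
import math

# Magnus approximation constants (one common parameterization);
# valid roughly from -40 C to +50 C over water. These values are
# an assumption for illustration, not taken from the answer above.
A = 17.625
B = 243.04  # degrees Celsius

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point (C) from air temperature (C) and relative humidity (%)."""
    gamma = math.log(rel_humidity_pct / 100.0) + (A * temp_c) / (B + temp_c)
    return (B * gamma) / (A - gamma)

# Example: 25 C air at 60% relative humidity
print(round(dew_point_c(25.0, 60.0), 1))  # ~16.7 C
```

At 100% relative humidity the logarithm term vanishes and the function returns the air temperature itself, which matches the definition of dew point given above.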
3.A.1: Use procedures to transform algebraic expressions.
3.A.1.1: Students are able to explain the relationship between repeated addition and multiplication.
3.A.2: Use a variety of algebraic concepts and methods to solve equations and inequalities.
3.A.2.2: Students are able to solve problems involving addition and subtraction of whole numbers.
3.A.2.2.b: Represent given problem situations using diagrams, models, and symbolic expressions.
3.A.3: Interpret and develop mathematical models.
3.A.3.1: Students are able to use the relationship between multiplication and division to compute and check results.
3.M.1: Apply measurement concepts in practical applications.
3.M.1.3: Students are able to identify U.S. Customary units of length (feet), weight (pounds), and capacity (gallons).
3.M.1.4: Students are able to select appropriate units to measure length (inch, foot, mile, yard); weight (ounces, pounds, tons); and capacity (cups, pints, quarts, gallons).
3.M.1.5: Students are able to measure length to the nearest 1/2 inch.
3.M.1.5.a: Measure length to the nearest centimeter.
3.N.1: Analyze the structural characteristics of the real number system and its various subsystems. Analyze the concept of value, magnitude, and relative magnitude of real numbers.
3.N.1.1: Students are able to place in order and compare whole numbers less than 10,000, using appropriate words and symbols.
3.N.1.3: Students are able to name and write fractions from visual representations.
3.N.1.3.a: Recognize that fractions and decimals are parts of a whole.
3.N.1.3.b: Compare numerical value of fractions having like denominators.
3.N.1.3.c: Compare decimals expressed as tenths and hundredths.
3.N.2: Apply number operations with real numbers and other number systems.
3.N.2.1: Students are able to add and subtract whole numbers up to three digits and multiply two digits by one digit.
3.N.2.1.a: Recall multiplication facts through the tens.
3.N.3: Develop conjectures, predictions, or estimations to solve problems and verify or justify the results.
3.N.3.1: Students are able to round two-digit whole numbers to the nearest tens, and three-digit whole numbers to the nearest hundreds.
3.S.1.1: Students are able to ask and answer questions from data represented in bar graphs, pictographs and tally charts.
3.S.1.2: Students are able to gather data and use the information to complete a scaled and labeled graph.
3.S.2.1: Students are able to describe events as certain or impossible.
Correlation last revised: 5/24/2018
New research using ancient DNA finds that a population split that occurred soon after people first arrived in North America was maintained for millennia before the groups mixed again before or during the expansion of humans into the southern continent.

Recent research has suggested that the first people to enter the Americas split into two ancestral branches, the northern and the southern, and that the "southern branch" gave rise to all populations in Central and South America. Now, a study shows for the first time that, deep in their genetic history, the majority, if not all, of the Indigenous peoples of the southern continent retain at least some DNA from the "northern branch": the direct ancestors of many Native communities living today in eastern Canada.

The latest findings, published today in the journal Science, reveal that, while these two populations may have remained separate for millennia, long enough for distinct genetic ancestries to emerge, they came back together before or during the expansion of people into South America.

The new analyses of 91 ancient genomes from sites in California and Canada also provide further evidence that the first peoples separated into two populations between 18,000 and 15,000 years ago. This would have been during or after their migration across the now-submerged land bridge from Siberia along the coast.

Ancient genomes from sites in southwestern Ontario show that, after the split, Indigenous ancestors representing the northern branch migrated eastwards to the Great Lakes region. This population may have followed the retreating glacial edges as the Ice Age began to thaw, say researchers.

The study also adds evidence that the prehistoric people associated with the Clovis culture, named for 13,000-year-old stone tools found near Clovis, New Mexico, and once believed to be ancestral to all Native Americans, originated from ancient peoples representing the southern branch. This southern population likely continued down the Pacific coast, inhabiting islands along the way. Ancient DNA from the Californian Channel Islands shows that the initial populations there were closely related to the Clovis people.

Yet contemporary Central and South American genomes reveal a "reconvergence" of these two branches deep in time. The scientific team, led by the universities of Cambridge, UK, and Illinois Urbana-Champaign, US, say there must have been one or more "admixture" events between the two populations around 13,000 years ago. They say that the blending of lineages occurred either in North America, prior to the expansion south, or as people migrated ever deeper into the southern continent, most likely following the western coast down.

"It was previously thought that South Americans, and indeed most Native Americans, derived from one ancestry related to the Clovis people," said Dr Toomas Kivisild, co-senior author of the study from Cambridge's Department of Archaeology and the University of Tartu, Estonia. "We now find that all native populations in North, Central and South America also draw genetic ancestry from a northern branch most closely related to Indigenous peoples of eastern Canada. This cannot be explained by activity in the last few thousand years. It is something altogether more ancient," he said.

Dr Ripan Malhi, co-senior author from Illinois Urbana-Champaign, said: "Working in partnership with Indigenous communities, we can now learn more about the intricacies of ancestral histories in the Americas through advances in paleogenomic technologies.
We are starting to see that previous models of ancient populations were unrealistically simple."

Present-day Central and South American populations analysed in the study were found to have a genetic contribution from the northern branch ranging from 42% to as high as 71% of the genome. Surprisingly, the highest proportion of northern-branch ancestry in South America was found far down in southern Chile, in the same area as the Monte Verde archaeological site, one of the oldest known human settlements in the Americas (over 14,500 years old).

"It's certainly an intriguing finding, although currently circumstantial – we don't have ancient DNA to corroborate how early this northern ancestral branch arrived," said Dr Christiana Scheib, first author of the study from the University of Cambridge, now at the University of Tartu, Estonia. "It could be evidence for a vanguard population from the northern branch deep in the southern continent that became isolated for a long time – preserving a genetic continuity.

"Prior to 13,000 years ago, expansion into the tip of South America would have been difficult due to massive ice sheets blocking the way. However, the area in Chile where the Monte Verde site is located was not covered in ice at this time," she said. "In populations living today across both continents we see much higher genetic proportions of the southern, Clovis-related branch. Perhaps they had some technology or cultural practice that allowed for faster expansion, pushing the northern branch to the edges of the landmass as well as promoting admixture."

While consultation efforts varied in this study from community-based partnerships to more limited engagement, the researchers argue that more must be done to include Indigenous communities in ancient DNA studies in the Americas. The researchers say that genomic analysis of ancient people can have adverse consequences for linked Indigenous communities. Engagement work can help avoid unintended harm and ensure that Indigenous peoples have a voice in research.

"The lab-based science should only be a part of the research. We need to work with Indigenous communities in a more holistic way," added Scheib. "From the analysis of a single tooth, paleogenomics research can now offer information on ancient diet and disease as well as migration. By developing partnerships that incorporate ideas from Native communities, we can potentially generate results that are of direct interest and use to the Indigenous peoples involved," she said.
What is a sonic boom and how is it produced?
Asked by: Lockyear

Answer: A sonic boom is the common name for the loud noise created by the shock wave produced by an airplane traveling faster than sound (the speed of sound is approximately 332 m/s, or 1195 km/h, or about 743 miles per hour). Such speeds are called supersonic, so this phenomenon is sometimes called a supersonic boom.

Normally, for a plane flying at subsonic speeds (slower than sound), the sound of the plane radiates in all directions. However, the individual sound wavelets are compressed ahead of the plane and spread farther apart behind it because of the plane's forward motion. This is the Doppler effect, and it accounts for the change in the pitch of the plane's sound as it passes us: when the plane is approaching, its sound has a higher pitch than when it is moving away.

Now, if the plane is traveling at supersonic speed, it outruns its own sound. As a result, a pressure wave (sound is a variation in pressure) forms in the shape of a cone whose vertex is at the nose of the plane and whose base trails behind it. The opening angle of the cone depends on the plane's actual speed: the cone's half-angle θ (the Mach angle) satisfies sin θ = 1/M, where M is the Mach number, so faster planes produce narrower cones. All of the sound pressure is concentrated on this cone.

So imagine this plane in level flight. Before it passes you, you can see it but hear nothing, because the pressure cone is trailing behind the plane. Only once your ears intersect the edge of this cone will you hear a very loud sound: the sonic boom. You therefore hear the boom when the cone sweeps past you, not at the moment the plane breaks the sound barrier (as is commonly misunderstood).

Sonic booms can be quite loud. For a commercial supersonic transport plane (SST), the boom can reach about 136 decibels, or 120 Pa (in units of pressure).

Answered by: Anton Skorucak, M.S. Physics, PhysLink.com Creator
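The Mach-angle relation above is easy to put into numbers. The short sketch below is purely illustrative (it is not part of the original answer); it computes the cone's half-angle for a few supersonic speeds.

```python
import math

def mach_angle_deg(mach: float) -> float:
    """Half-angle of the shock cone, in degrees, for a given Mach number (> 1).

    Follows sin(theta) = 1 / M: the cone narrows as the plane flies faster.
    """
    if mach <= 1.0:
        raise ValueError("A shock cone only forms at supersonic speeds (Mach > 1)")
    return math.degrees(math.asin(1.0 / mach))

# The faster the plane, the narrower the cone:
for m in (1.1, 1.5, 2.0, 3.0):
    print(f"Mach {m}: cone half-angle = {mach_angle_deg(m):.1f} degrees")
# Mach 1.1 -> 65.4, Mach 1.5 -> 41.8, Mach 2.0 -> 30.0, Mach 3.0 -> 19.5
```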
The human microbiota refers to the ensemble of microorganisms living in our body: trillions of bacteria, viruses, and fungi. Contrary to popular belief, microbiota are not always bad organisms that harm our health. Instead, they are essential in supporting a healthy body system. And scientists have just discovered the mechanism by which microbiota transmit signals in our body and regulate hematopoiesis (the formation of blood cells) in the bone marrow.

The mysterious microbiota

The first reaction from most people to the microbiota is "Ewwwwwww!!!" They associate microbiota with something dirty and disgusting. Some may know that microbiota play a critical role in maintaining a healthy gut and digestive system, but microbiota have more health benefits than that. It is true that the biggest populations of microbiota reside in our gut, but they can affect our entire body, including the liver, lungs, brain, bone marrow and other organs. Until now, however, no one had figured out how microbiota affect the entire body. What are their mechanisms? How do they transmit a signal?

Now, researchers have finally found one of the mechanisms: microbiota control the formation of blood cells in the bone marrow, and thus improve the immune system, through "signal cells" called CX3CR1+ cells.

The role of CX3CR1+ cells

In the process, the microbiota first send a signal to the bone marrow. The CX3CR1+ cells recognize this signal and, in response, produce signalling substances called cytokines. These substances then stimulate the body's defense system and speed up the formation of blood cells.

"For the first time, our research describes the mechanism that had not been explained how microbiota regulate not only digestive tracts but also entire body response. It might be possible to apply this study to control immune response in other parts of a body or to treat cancer and inflammatory disease via microbiota signal pathway," said Prof. Seung-Woo Lee.

Microbiota, though tiny, can affect us to a great extent. Scientists are now doing their best to lift the veil and show us more secrets of the microbiota.
The term "pardon" has been defined as an act of mercy by which the offender is absolved from the penalty imposed on him. In other words, the grant of pardon wipes off the guilt of the accused and restores him to his original position of innocence, as if he had never committed the alleged offence. The grant of pardon may, however, be absolute or conditional. Under a conditional pardon, the offender is let off subject to certain conditions, the breach of which will result in the revival of his sentence, and he shall be subjected to the unexhausted portion of the sentence.

Pardon as a mode of mitigating the sentence of the accused has long been a controversial issue. Some authorities consider its retention in the penal system essential, as it may substantially help in saving an innocent person from being punished due to the possibility of a miscarriage of justice, or in a case of doubtful conviction. Moreover, the hope of being pardoned itself serves as an incentive for the convict to behave well in the prison institution, and thus helps considerably in solving the problem of prison discipline. During the medieval period, pardon was extensively used as a method of reducing overcrowding in prisons during war, political upheaval and revolt.

Those who reject pardon as an effective measure of mitigating sentences argue that the power to pardon is often misused by the executive. There is a possibility that the convict may secure his release from prison by exerting undue influence on the executive authority. Another evil that ensues from 'pardon' as a measure of undoing the guilt of the convict is that it has an adverse effect on prisoners, because they invariably try to secure a 'pardon' rather than reform themselves. Despite these shortcomings, the greatest advantage of the pardoning power of the executive lies in the fact that it is always preferable to grant liberty to a guilty offender rather than to sentence an innocent person.

The power to grant pardon or commute a sentence pronounced by a court of law is no longer something that only Emperors with divine right enjoyed, as in earlier times. Modern democracies with judicial systems that are above reproach have vested in their executive head the power to grant pardon or clemency. For instance, the American Constitution gives the President the power to grant reprieves or pardons for offences against the U.S., except in cases of impeachment. However, this power is available only in cases of violation of federal law; a pardon for violation of a State law has to come from the Governor of the State concerned. In Britain, the constitutional monarch can pardon or show mercy to a convicted person on ministerial advice. In Germany, Italy, Russia and France, the power to grant pardon and commute sentences rests with the President. In Canada, pardons are considered by the National Parole Board under the Criminal Records Act.

In most of these countries, there is provision for judicial review of a pardon in the event of the grounds for it being found unsatisfactory. In the U.S.A., a pardon may be held void if it appears that the pardoning power was exercised on the basis of wrong information. Thus, "a pardon procured by false and fraudulent representations or by intentional suppression of the truth is void, even though the person pardoned had no part in perpetrating the fraud."

The modern practice of pardoning convicts is said to derive from the British system, in which it was a Royal prerogative of the King to forgive.
It also finds mention in the Code of Hammurabi, a series of edicts developed in Babylon nearly 4,000 years ago.

Explaining the law relating to pardon in the U.S.A., Chief Justice Taft in Ex parte Phillip Grossman observed: "Executive clemency exists to afford relief from undue harshness or evident mistake in the operation or the enforcement of the criminal law. The administration of justice by the Courts is not necessarily always wise or certainly considerate of circumstances which may properly mitigate guilt. To afford a remedy it has always been thought essential in popular governments, as well as monarchies, to vest power to ameliorate or avoid particular criminal judgments."

In India, the power to grant pardon is conferred on the President of India and the Governors of States under Articles 72 and 161 of the Constitution of India. Article 72 empowers the President to grant pardons etc. and to suspend, remit or commute sentences in certain cases. The Article reads as follows:

72. (1) The President shall have the power to grant pardons, reprieves, respites or remissions of punishment or to suspend, remit or commute the sentence of any person convicted of any offence—
(a) in all cases where the punishment or sentence is by a Court Martial;
(b) in all cases where the punishment or sentence is for an offence against any law relating to a matter to which the executive power of the Union extends;
(c) in all cases where the sentence is a sentence of death.

Article 161 empowers the Governors of States to grant pardons, reprieves, respites or remissions of punishment, or to suspend, remit or commute the sentence of any person convicted of an offence against a law relating to a matter to which the executive power of the State extends.

In Maru Ram v. Union of India, the Constitution Bench of the Supreme Court held that the power under Article 72 is to be exercised on the advice of the Central Government and not by the President on his own, and that the advice of the Government binds the head of the Republic. In Dhananjoy Chatterjee alias Dhana v. State of West Bengal, the Supreme Court reiterated its earlier stand in Maru Ram's case and observed as follows:

"The power under Articles 72 and 161 of the Constitution can be exercised by the Central and State Governments, not by the President or Governor on their own. The advice of the appropriate Government binds the Head of the State. No separate order for each individual case is necessary but any general order made must be clear enough to identify the group of cases and indicate the application of mind to the whole group."

In the instant case, the Deputy Secretary, Judicial Department, Government of West Bengal informed the Court that the State Government, after examining and considering the convict's prayer, had rejected it; thereafter, it was transmitted to the Governor only because it was addressed to him, and the Governor, in his turn, rejected the prayer, which was duly communicated to the convict. Later, the convict's special leave petitions having been dismissed by the Supreme Court, he filed a mercy appeal to the Hon'ble President of India under Article 72 of the Constitution, but that too was rejected by the President vide his order dated 4th August 2004. The appellant then applied to the Supreme Court for review of the President's rejection of his appeal, which the Court declined on August 12, 2004. Consequently, the convict Dhananjoy was hanged on 14th August 2004 in the Central Jail, Alipore, West Bengal.
The Supreme Court, in the Ranga-Billa case, was called upon to decide the nature and ambit of the pardoning power of the President of India under Article 72 of the Constitution. In this case, the death sentence of one of the appellants was confirmed by the Supreme Court, and his mercy petition was rejected by the President. Thereupon, the appellant filed a writ petition in the Supreme Court challenging the discretion of the President to grant pardon, on the ground that no reasons were given for the rejection of his mercy petition. The Supreme Court dismissed the petition and observed that the term "pardon" itself signifies that it is entirely a discretionary remedy, and the grant or rejection of it need not be reasoned.

The Supreme Court was once again called upon to decide the justiciability of the President's power to grant pardon, reprieve or remission, or to suspend, remit or commute a sentence of death passed against a condemned prisoner, under Article 72 of the Constitution in Kehar Singh v. Union of India. Reiterating its earlier stand, the Apex Court held that the grant of pardon by the President is an act of grace and therefore cannot be claimed as a matter of right. The power exercisable by the President, being exclusively of an administrative nature, is not justiciable. The President can scrutinise the evidence on record and may come to a different conclusion from that of the Court regarding the guilt or sentence of the accused, but his decision in this regard cannot modify the Court's judicial record. Again, the condemned prisoner is not entitled to an oral hearing from the President, as the matter is entirely within the President's discretion under Article 72 of the Constitution. In the instant case, the mercy appeal of the accused Kehar Singh was rejected by the President of India. Quoting the observations of Justice Holmes, the Apex Court held:

"A pardon in modern times is not a private act of grace from an individual happening to possess power; it is a part of the constitutional scheme. When granted, it is the determination of the ultimate authority that public welfare will be better served by inflicting less than what the judgment has fixed. This constitutional pardon is given to those upon whom the punishment inflicted would cause greater harm to society than their release."

Experience has shown that pardon is usually administered to persons who are punished for disregard of political or religious affiliations. The psychological and emotional condition of the criminal is taken into consideration before granting him pardon, and he is admitted to this clemency only if his institutional record shows that there are good chances of his reformation after release. Commenting on this point, J. L. Gillin observed: "If pardons are administered with care and solely to correct injustices, they certainly do not diminish respect for law. They, on the other hand, will infuse confidence in the machinery of justice."

In K.M. Nanavati v. State of Maharashtra, the accused killed his wife's paramour in 1960. The Bombay High Court sentenced him to life imprisonment, and he appealed against his sentence to the Supreme Court. Meanwhile, the Governor granted suspension of his sentence. This power of the Governor to suspend a life sentence was challenged before the Supreme Court on the ground that, under Article 161, Governors do not have the power to do so while the matter is pending before the Supreme Court.
The Apex Court clarified that the power of the Governor to suspend a life sentence is subject to the rules framed by the Supreme Court under Article 145 of the Constitution, which provide that once an appeal is filed before the Supreme Court, it is mandatory to keep the accused in police custody. The order of the Governor was therefore liable to be quashed. The Governor can use his power to suspend or remit a sentence only so long as the matter is not sub judice before the Supreme Court. In other words, the Governor may exercise his power under Article 161 so long as an appeal has not been filed before the Supreme Court, and not thereafter.

In the Purulia arms-drop case (1995), a British national, Peter Bleach, was sentenced to life imprisonment for his involvement in the notorious arms drop over Purulia in West Bengal from an AN-26 aircraft. The then NDA government came under diplomatic pressure and invoked "public interest", advising the President of India to grant him pardon. The United Kingdom, for its part, clarified that the pardon was granted more on compassionate grounds than on merits.

It must be stated that the system of parole, which is nothing but a modified form of conditional pardon, has mitigated the risks involved in pardoning the offender outright. It is, however, suggested that a pardon pre-conditioned by a system of parole appears to be an ideal policy, best suited to both law-abiders and law-breakers. It would further be wise to relieve the executive authority of the arduous task of administering pardons and to assign this function to a Parole Board, as has already been done in some of the American States.

In Swaran Singh v. State of U.P., the Governor of U.P. had granted remission of the life sentence awarded to a member of the State Legislative Assembly upon his conviction for the offence of murder. The Supreme Court, however, interdicted the Governor's order and observed that, while it has no power to touch an order passed by the Governor under Article 161, if such power has been exercised arbitrarily, mala fide or in absolute disregard of the "finer canons of constitutionalism", such an order cannot get the approval of law, and in such cases the "judicial hand must be stretched to it". The Supreme Court held that the order of the Governor was arbitrary and hence needed to be interdicted.

In Gentela Vijayvardhanrao v. State of Andhra Pradesh, the two appellants were Dalit boys who set fire to a bus for the purpose of robbery. This resulted in the death of 23 passengers and serious burns to a number of other passengers. Taking into consideration the barbarity of the crime, the depravity in the manner of its execution, the number of victims, and greed as aggravating factors, they were sentenced to death, and the sentence was confirmed by the High Court. Even while their mercy petitions were pending, human rights groups campaigned against the death sentence awarded to the two boys. Attempts were made to bring the issue back to the Supreme Court by way of writ petitions, but without success. The President of India, however, deemed it a fit case for pardon and commuted the death sentence of both boys to imprisonment for life. It must be stated that, in the absence of a requirement to give reasons for such a decision, it is difficult to know what exactly weighed with the President in commuting the sentence.
If such decisions were made public, it would help people know the factors that led the President to commute the sentence, and would provide guidance for the future. Otherwise, the exercise of the power of clemency will give rise to the reasonable apprehension that it is capable of being used arbitrarily, more so because the President, in exercising this power, acts on the advice of the Cabinet, so the possibility of political considerations weighing on the decision cannot be ruled out.

This issue came up for consideration before the Supreme Court in the case of the Parliament attack accused Mohammad Afzal, whose supporters sought clemency on the ground that the scheduled day of his execution, 20th October, fell within the month of Ramzan. In fact, the judgment in his case was also contested on the ground that he did not get a fair trial. Significantly, Afzal's death sentence was upheld by three courts, including the Supreme Court, which had let off one co-accused and reduced the sentence of another. The near relatives and kin of the victims of the attack on the Indian Parliament on 13th December, 2001 filed petitions opposing the move to secure clemency for Afzal Guru. Disposing of the petition of the widow of one of the victims opposing a Presidential reprieve for Afzal, the Apex Court ruled that "undue considerations of caste, religion and political loyalty are prohibited from being grounds for grant of clemency." The Court observed that the President of India and the Governors of States undoubtedly have the constitutional right to grant clemency, but this power should be exercised in the interests of public welfare.

A Bench comprising Justices Arijit Pasayat and S.H. Kapadia, while quashing the Andhra Pradesh Governor's decision of 2005 to reduce Gowru Venkata's prison term by seven years, held that it is a well-settled principle that a limited judicial review of the exercise of clemency powers is available to the Supreme Court and the High Courts. Specifying the grounds for granting clemency, the Bench ruled that orders passed by the President or the Governor, as the case may be, granting clemency can be challenged on the following grounds:

1. That the order has been passed without application of mind;
2. That the order is mala fide;
3. That the order has been passed on extraneous or wholly irrelevant considerations;
4. That relevant materials have been kept out of consideration; and
5. That the order suffers from arbitrariness.

In Gowru Venkata's case, his wife was elected as an M.L.A. on the Congress ticket, and two days after her election she made a plea for parole for her husband, who was undergoing imprisonment on a murder charge. Parole was granted by the then Reddy government five days later, i.e., on May 19, 1994. The parole period was extended four times. On October 10, 1994, the wife of Gowru Venkata made a representation to the then Andhra Pradesh Governor, Shri Sushil Kumar Shinde, seeking pardon for her husband. On August 11, 2005, the Governor, exercising his power under Article 161 of the Constitution, granted remission of the sentence.

The Supreme Court, in setting aside the remission of sentence, viewed favorably the submissions made by amicus curiae and former Attorney-General Shri Soli Sorabjee, who said it is desirable that the President or a Governor, while granting pardon or remission of sentence, should give reasons to indicate that relevant materials were considered in the exercise of the constitutional power.
The Bench held that the process of consideration by the then Governor was faulty, and also expressed its surprise that in the clemency plea the convict had the audacity to mention that he was a "good Congress worker" and that he had been falsely implicated in the murder of an activist belonging to the rival TDP. Obviously, the question of his being a 'good Congress worker' has no relevance to the objects sought to be achieved. The Bench criticised the State bureaucracy for giving favourable reports to the Governor to facilitate relief to the ruling party's activist. The Supreme Court brushed aside the plea, emphasising that the matter had been heard by three courts, which had unanimously come to a conclusion about Gowru Venkata's guilt. In separate but concurring judgments, the Bench observed: "The power of executive clemency is not for the benefit of the convict only. While exercising such a power the President or the Governor, as the case may be, has to keep in mind the effect of his decision on the family of the victims, the society as a whole and the precedent it sets for future."

The order passed in Gowru Venkata's case is seen as potentially having a direct bearing on Afzal's case. Those supporting the grant of pardon to Afzal, notably the Left parties and various outfits in the J&K Valley, have argued that Afzal's execution would give a fillip to militancy.

It may be stated that more than twenty-nine mercy petitions were pending before the President (as on October 16, 2010), including those filed by two accused in the assassination case of former Prime Minister Rajiv Gandhi, and a petition from 71-year-old Shobhit Chamar, who had killed an upper-caste adversary in Bihar. Earlier, the plea for mercy filed by Dhananjoy Chatterjee was rejected by three Presidents in succession, and he was finally hanged to death on 14th August, 2004 in the Alipore Central Jail.

Amnesty International, in its Report of 2009, stated the year-wise number of persons sentenced to death in India during the period 2001 to 2007. (Table: "Year / No. of Persons Sentenced to Death"; the figures do not survive in this text.) However, no official figures are indicated as to the actual execution of death sentences during this period. The UN General Assembly passed a resolution for the abolition of the death sentence by member nations in December 2007, but India voted against it and refused to drop capital punishment from its statute book.

More recently, the prime accused in the Bombay Taj Hotel attack case of 26-11-2008, Ajmal Kasab, was sentenced to death by the Special Court, Bombay, on May 6, 2010, and his death sentence was confirmed by the Bombay High Court on February 21, 2011. He may now prefer an appeal against this sentence before the Supreme Court.
Global agriculture must cut its reliance on fossil fuels if it is to feed a growing global population, says a recent report from the United Nations Food and Agriculture Organization (FAO).

Global agriculture contributes a sizeable chunk of the world's carbon footprint. According to the FAO report, entitled "'Energy-Smart' Food for People and Climate", the food system (from production to consumption) accounts for about 30% of global energy use. This includes crop irrigation, housing livestock, and transporting food stocks. Unfortunately, the greenhouse gas emissions attributed to agriculture undermine the sustainability of the global food system and call into question the ability of the world to feed itself in 2050.

In addition, agricultural land is being degraded at unprecedented rates, with 25% of all farmland now considered "highly degraded." Another 8% is moderately degraded, while only 10% is marked as "improving." The FAO attributes such high levels of degradation to intensive agricultural practices, which have depleted soil nutrients, polluted aquifers, and eroded soil.

The FAO calls for a "sustainable intensification" of agricultural practices in order to increase food production by 70% to feed the Earth's 9 billion inhabitants in 40 years. Part of the 'Energy-Smart' approach the FAO advances includes the adoption of more fuel-efficient engines, the use of compost and precision fertilizers, targeted water delivery and monitoring systems, and the use of less input-intensive crops. Furthermore, the use of local renewable energy sources could improve energy access, reduce fossil fuel use, and lower the pollution attributed to farming practices.

However, the FAO emphasizes that the transition to "energy-smart" agriculture needs to start now. If significant action is not taken soon to improve the sustainability of the global food system, we could be facing some very severe food shortages in 2050.
A new study shows that the gravitational fields of Venus and Jupiter affect Earth's climate cycle. A research group at Columbia University's Lamont-Doherty Earth Observatory and Rutgers University released the study on May 7, 2018. Jupiter is the largest planet in the solar system, and Venus is our closest planetary neighbor. Together they have a significant influence on the Earth's climate. Dennis Kent, who led the study, said, "The climate cycles are directly related to how the Earth orbits the sun and slight variations in sunlight reaching Earth lead to climate and ecological changes." The study shows that there is a repeating cycle, which the researchers calculate takes 405,000 years, that causes wobbles in the Earth's orbit, leading to climate extremes. Studies like this not only help us understand the past; they also aid our understanding of current global conditions such as climate change.

The number of things that have to be just what they are for life to exist on Earth continues to grow. In 1961, American astronomer Frank Drake, a founder of the SETI program, presented an equation that attempted to calculate the number of "earths" that might exist in our galaxy. Drake's equation took the variables that must be right for a planet like ours to support life and multiplied them together to estimate the probability of another planet like ours. Dr. Drake had only seven variables in his calculation; today that number exceeds 50. We list 47 of them on our doesgodexist.org website, but even that list is far from complete.

Now that we know that the gravitational fields of Venus and Jupiter affect Earth's climate cycle, we have one more factor to add to the list. Our planet is a delicate place, with an incredible number of factors all contributing to an environment where we can survive, and where humans have survived for a very long time. The more we know about the creation, the more evidence we see for a Creator.

–John N. Clayton © 2018
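For readers curious what "multiplying the variables together" looks like in practice, here is a minimal sketch of the classic seven-variable Drake formulation. The numeric inputs below are purely illustrative assumptions, not measured values, and the function name is ours.

```python
def drake_estimate(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Drake's approach: multiply a star-formation rate by a chain of
    fractions and by the average lifetime of a detectable civilization."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Illustrative values only; every one of these inputs is an assumption:
n = drake_estimate(
    r_star=1.0,    # new stars formed per year in the galaxy
    f_p=0.5,       # fraction of stars with planets
    n_e=2.0,       # potentially habitable planets per star with planets
    f_l=0.1,       # fraction of those on which life arises
    f_i=0.01,      # fraction of those developing intelligence
    f_c=0.1,       # fraction releasing detectable signals
    lifetime=1e4,  # years a civilization remains detectable
)
print(n)  # 1.0 with these inputs
```

The structure makes the article's point concrete: each added requirement is another factor in the product, so the more conditions that must be "just right," the smaller the final estimate becomes.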
Methanol, also known as methyl alcohol among other names, is a chemical with the formula CH3OH (often abbreviated MeOH). Methanol acquired the name "wood alcohol" because it was once produced chiefly as a byproduct of the destructive distillation of wood. Today, industrial methanol is produced in a catalytic process directly from carbon monoxide, carbon dioxide, and hydrogen.

Methanol is the simplest alcohol, consisting of a methyl group linked to a hydroxyl group. It is a light, volatile, colorless, flammable liquid with a distinctive odor very similar to that of ethanol (drinking alcohol). However, unlike ethanol, methanol is highly toxic and unfit for consumption. At room temperature, it is a polar liquid, and it is used as an antifreeze, solvent, fuel, and denaturant for ethanol. It is also used to produce biodiesel via a transesterification reaction.

Methanol is produced naturally in the anaerobic metabolism of many varieties of bacteria and is commonly present in small amounts in the environment. As a result, the atmosphere contains a small amount of methanol vapor, but within a few days atmospheric methanol is oxidized by sunlight to produce carbon dioxide and water. Methanol is also found in abundant quantities in star-forming regions of space, where it serves astronomers as a marker for such regions; it is detected through its spectral emission lines.

When ingested, methanol is metabolized first to formaldehyde and then to formic acid or formate salts. These are poisonous to the central nervous system and may result in blindness, coma, and death. Because of these toxic properties, methanol is frequently used as a denaturant additive for ethanol manufactured for industrial uses. This addition of methanol exempts industrial ethanol (commonly known as "denatured alcohol" or "methylated spirit") from liquor excise taxation in the US and some other countries.
A dumpy level, builder's auto level, leveling instrument, or automatic level is an optical instrument used in surveying and building to transfer, measure, or set horizontal levels.

The level instrument is set up on a tripod and, depending on the type, either roughly or accurately set to a leveled condition using footscrews (levelling screws). The operator looks through the eyepiece of the telescope while an assistant holds a tape measure or graduated staff vertical at the point under measurement. The instrument and staff are used to gather and/or transfer elevations (levels) during site surveys or building construction. Measurement generally starts from a benchmark with a known height determined by a previous survey, or from an arbitrary point with an assumed height.

A dumpy level is an older-style instrument that requires skilled use to set accurately. The instrument must be set level (see spirit level) in each quadrant to ensure it is accurate through a full 360° traverse. Some dumpy levels have a bubble level to help ensure an accurate setup.

A variation on the dumpy level, often used by surveyors where greater accuracy and error checking were required, is the tilting level. This instrument allows the telescope to be effectively flipped through 180° without rotating the head. The telescope is hinged to one side of the instrument's axis; flipping it involves lifting it to the other side of the central axis (thereby inverting the telescope). This action effectively cancels out any errors introduced by poor setup procedure or by errors in the instrument's adjustment. A similar effect can be had with a standard builder's level by rotating it through 180° and comparing the difference between spirit-level bubble positions.
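The elevation transfer described above reduces to simple bookkeeping, commonly called the height-of-instrument method: the instrument height is the benchmark elevation plus the staff reading taken on the benchmark (the backsight), and each unknown point's elevation is the instrument height minus the staff reading taken there (the foresight). The sketch below is a minimal illustration; the function name and the staff readings are hypothetical.

```python
def reduce_levels(benchmark_elev, backsight, foresights):
    """Height-of-instrument method: instrument height = benchmark elevation
    plus the backsight reading; each unknown point's elevation = instrument
    height minus the foresight reading on the staff at that point."""
    hi = benchmark_elev + backsight
    return [hi - fs for fs in foresights]

# Hypothetical readings in metres: benchmark at 100.000 m, staff reads
# 1.250 m on the benchmark and 0.850 / 1.600 m on two new points.
print(reduce_levels(100.000, 1.250, [0.850, 1.600]))  # [100.4, 99.65]
```

Note the sign convention: a smaller staff reading means the ground at that point is higher, which is why the foresight is subtracted.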
Preterm births are a serious health issue around the globe. Any birth that occurs before 37 weeks of gestation is considered a preterm birth. Approximately one in 10 babies worldwide is born preterm, which means that there are 15 million preterm births each year, and rates of preterm birth appear to be slowly increasing.

Health Problems with Preterm Birth

Babies need at least 37 weeks for their bodies to develop normally. Birth complications from preterm birth occur because preterm infants' organs are not yet adapted for life outside the protective environment of the womb. The earlier the birth occurs before the due date, the greater the health impact1. The main health problems just after birth for infants born preterm include breathing difficulties due to immature lungs, feeding difficulties, problems maintaining the right body temperature, and an increased risk of infections2,3.

Long term, infants born prematurely can suffer from a range of ailments, from mild to severe. Infants born close to the 37-week mark can typically expect to live a normal life, but infants born much earlier (generally at less than 28 weeks gestation) may experience severe developmental delays and have chronic health conditions such as cerebral palsy and permanent damage to the lungs, eyes, heart and ears1,4. Preventing preterm birth, and lessening its consequences, is vital to protect the millions of preterm babies born each year.

Preterm Birth Risk Factors

While in many cases it is not possible to find an exact cause of preterm birth, a number of risk factors have been identified. A combination of genetics and environmental factors is believed to contribute to the risk of giving birth early1. Mothers who are very young or very old, or who have a short interval between births, are more likely to have a preterm birth. Multiple births are more likely to arrive early. Certain infections can increase the chances of preterm birth, and lifestyle factors such as intense physical work, psychological stress and cigarette smoking contribute as well1. Two of these factors, infection and lifestyle, are related to the process of inflammation, a normal response to infection and stress that can nonetheless have negative health consequences.

Omega-3, Omega-6, and Preterm Birth: Mothers and Babies

The omega-3 and omega-6 fatty acids have several important roles in the body, and one of the most important is that they are building blocks for messenger molecules. One type of these messenger molecules, the prostaglandins, is an important part of childbirth because prostaglandins prime the muscles in the uterus for labor5. The long-chain omega-6 fatty acid arachidonic acid is used by the body to make prostaglandins that have a strong biological effect5, while biologically weaker prostaglandins are made from long-chain omega-3 fatty acids. While it seems good to have plenty of omega-6 to support the processes behind active labor, high levels may actually be excessive, possibly increasing the risk of early labor7,8. Some researchers believe that omega-3 fatty acids may balance omega-6 fatty acids by producing weaker prostaglandins that are less likely to lead to preterm labor8. Preterm infants may also need to be provided with additional long-chain omega-3 fatty acids, because they have missed the critical period in the womb when the main transfer of omega-3s occurs, and their immature metabolism means that they cannot yet make the right types of omega-3s themselves9.
Long-chain omega-3 fatty acids, when given to women during pregnancy, have been shown to reduce the risk of preterm birth in several studies10,11. Researchers estimate that gestational length is increased by two days8 to two weeks11 after omega-3 supplementation, and that preterm birth rates are reduced overall. Providing adequate omega-3 fatty acids to pregnant women is common sense, particularly if it may reduce the risk of preterm birth.

It is vital that preterm infants receive enough long-chain fatty acids (both omega-3 and omega-6) after birth to support normal cognitive and physical development12,13. During their first week of life, preterm infants' long-chain fatty acid supply diminishes, and they must be provided with additional fatty acids to make sure that they do not become deficient13. Some experts suggest that omega-3 and omega-6 fatty acid requirements for preterm infants may be two to three times higher than what is currently provided in infant nutrition products and medical nutrition products14.
Updated as of January 2020.

Curricula are not standards. Standards are not curricula. In the debate over the Common Core State Standards, definitions of key terms such as "standards" and "curricula" vary considerably. For some, standards and curricula are the same. For others, standards are a framework by which curricula are developed. Although there is no universally accepted definition, most education experts agree it is important to make a clear distinction between the two concepts.

In general, standards are broad goals, or, in the words of the North Carolina Department of Public Instruction, "standards define what students know and should be able to do." Curricula include specific course content either developed by the teacher or obtained from an external source. Teachers may use different curricula so long as the curricula are aligned to the standards established for that subject and grade.

Arguably, curricula matter more than standards. In an early assessment of Common Core adoption and implementation, Tom Loveless, an educational researcher at the Brookings Institution, found no apparent relationship between the quality and rigor of state standards and National Assessment of Educational Progress (NAEP) scores. These findings suggest that the content teachers teach and students learn likely has a much greater bearing on student achievement than standards alone. Simply put, standards reform is not enough to boost student performance. Standards are successful only when they are buttressed by content-rich curricula delivered by well-trained educators, preferably using direct instruction.

- State education officials mandate that all subject-area teachers follow the Standard Course of Study, which defines "appropriate content standards for each grade level and each high school course to provide a uniform set of learning standards for every public school in North Carolina." State standards are reviewed and updated periodically.
- The Common Core State Standards were developed by three Washington, D.C.-based organizations (the National Governors Association, the Council of Chief State School Officers, and Achieve, Inc.) and were championed by the U.S. Department of Education. In 2010, the North Carolina State Board of Education adopted Common Core mathematics and English language arts standards for students in kindergarten through 12th grade. In 2018, English language arts and mathematics teachers began using a revised version of the Common Core State Standards.
- State-authored standards included in the Standard Course of Study cover arts education, career and technical education, English as a second language, guidance, healthful living, information and technology skills, science, social studies, and world languages.
- Currently, the North Carolina Department of Public Instruction provides curriculum resources to teachers without mandating that they adopt any of them.
- North Carolina state law prescribes the teaching of curricular content in certain grades and course areas. For example, state law prescribes the inclusion of a civic literacy curriculum during a high school social studies course. Health education, character education, and financial literacy are other content requirements outlined in statute. The requirements to teach multiplication tables and cursive writing are two notable curriculum mandates passed into law.
- Legislators should create two permanent commissions charged with raising the quality and rigor of state English language arts and mathematics standards, as well as curricula and assessments. The goals of the commissions would be to 1) modify substantially or replace the Common Core State Standards; 2) specify content that aligns with the standards; 3) recommend a valid, reliable, and cost-effective testing program; and 4) provide ongoing review of the standards, curriculum, and tests throughout implementation.
- The commissions should develop a rigorous state-developed curriculum or adopt a rigorous, independently developed curriculum, such as the Core Knowledge Sequence. Prescribing baseline curricular content would provide a more equitable education environment, ensuring that all students, regardless of socioeconomic circumstances, are exposed to the same essential content. It would also allow the state to compensate for knowledge and skill deficiencies identified by institutions of higher education, private- and public-sector employers, and other stakeholders.