Get creative with Hamilton's hands-on art and design topic sessions for key stage 1. The national curriculum for art and design sets out aims for all pupils, and cross-curricular topics are a fantastic way to cover them, allowing children to engage creatively with different areas of the curriculum at the same time. Music can complement painting. History can be enhanced by model making. Sculpture can elaborate geographical concepts. Here are a few of our favourite art and design opportunities in KS1 topics: Our Carnival of the Animals topic is organised around the popular work of the composer Saint-Saëns. Each block focuses on a different area, including mammals, birds, fish, reptiles, amphibians and fossils around the world. Within each block there are plenty of brilliant art and design opportunities. Explore Carnival of the Animals for KS1. Changes within Living Memory gets children to explore the ways in which life has changed over the time of our parents, grandparents and great-grandparents. The topic provides plenty of opportunity to engage with art and design. Explore Changes within Living Memory for KS1. Using our exciting Oceans and Seas topic, children build their knowledge of oceans and seas around the world. There are lots of opportunities to use art and design to enhance and reflect pupils' understanding of the qualities and life of the seas. Explore Oceans and Seas for KS1. In the popular Castles block of our We Are Britain topic, children discover castles around the British Isles. Capture the range of castle architecture using different art and craft techniques. Examine the painting Castle and Sun by Paul Klee, compare the image to real castles in the UK, and then recreate the image using crayons and watercolours. Explore We Are Britain - Castles for KS1. In our block about St Paul's Cathedral in the Great Fire of London topic, charcoal drawing and potato printing are used to develop artistic ideas inspired by St Paul's. Children also have the opportunity to design, make and decorate a model cathedral. Our Artists block in Famous for More than Five Minutes is dedicated to learning about the life, times and paintings of Vincent Van Gogh and L.S. Lowry. Children learn about the nineteenth-century Van Gogh and the twentieth-century Lowry, discuss their art and then create their own paintings 'in the style of...'.
Microbes in Human Welfare | Class 12, Biology in Human Welfare

What are Microbes? Microbes are diverse: protozoans, bacteria, fungi, microscopic plants, viruses, viroids and even prions, which are proteinaceous infectious agents.

Microbes in Household Products
Microorganisms such as Lactobacillus and others, commonly called lactic acid bacteria (LAB), grow in milk and convert it into curd. During growth, LAB produce acids that coagulate and partially digest milk proteins. LAB also improve the nutritional quality of curd by increasing vitamin B12, and play a beneficial role in checking disease-causing microbes in the stomach. Different varieties of cheese are known by their characteristic texture, flavour and taste. The large holes in Swiss cheese are due to the production of CO2 by a bacterium named Propionibacterium shermanii. Roquefort cheese is ripened by growing specific fungi on it, which gives it a particular flavour. Toddy, a traditional drink of some southern parts of India, is made by fermenting sap from palms. Baker's yeast is Saccharomyces cerevisiae.

Microbes in Industrial Products
Microbes, especially yeasts, have been used from time immemorial for the production of beverages like wine, beer, whisky, brandy and rum. Saccharomyces cerevisiae, used for bread making, is also used for fermenting malted cereals and fruit juices to produce ethanol, and in this role is known as brewer's yeast. Wine and beer are produced without distillation; whisky, brandy and rum are produced by distillation. The term 'antibiotic' was coined by Selman Waksman in 1942. Antibiotics produced by microbes are regarded as one of the most significant discoveries of the twentieth century and have contributed greatly to the welfare of human society. 'Anti' is a Greek word meaning 'against': antibiotics are 'against life' in the context of disease-causing organisms, whereas in reference to humans they are 'pro-life'. Penicillin was the first antibiotic, discovered by Alexander Fleming, who observed it while working on Staphylococci bacteria. The full potential of the antibiotic was established much later by Ernst Chain and Howard Florey. Penicillin was extensively used to treat American soldiers wounded in World War II, and Fleming, Chain and Florey were awarded the Nobel Prize in 1945.

Microbes in Sewage Treatment
Sewage: municipal wastewater is called sewage.
Primary Wastewater Treatment: Primary treatment of wastewater involves sedimentation of solid waste within the water, carried out after filtering out larger contaminants. Wastewater is passed through several tanks and filters that separate water from contaminants. The resulting 'sludge' is then fed into a digester, in which further processing takes place. This primary sludge contains nearly 50% of the suspended solids in the wastewater.
Secondary Wastewater Treatment: Secondary treatment is the removal of biodegradable organic matter (in solution or suspension) from sewage or similar kinds of wastewater. The aim is to achieve a degree of effluent quality suitable for the intended disposal or reuse option.

Yamuna and Ganga Action Plans
The Ministry of Environment and Forests initiated these plans to save the major rivers of our country from pollution.

Microbes in Production of Biogas
Biogas is a mixture of gases produced by microorganisms. It is a renewable source of energy. Methane is the predominant gas in the biogas mixture.
Certain bacteria grow under anaerobic conditions and produce a large amount of methane along with carbon dioxide and hydrogen. These bacteria are collectively known as methanogens; Methanobacterium is one such methanogen. Methanobacterium is present inside the rumen of cattle and in the sludge produced during sewage treatment. In the rumen, these bacteria digest the cellulose present in the cattle's food. The dung produced by cattle therefore contains these methanogens and can be used for the production of biogas, also known as gobar gas. The technology of biogas production was developed by the Indian Agricultural Research Institute (IARI) and the Khadi and Village Industries Commission (KVIC).

Microbes as Biocontrol Agents
Biocontrol refers to the use of biological methods for controlling plant diseases and pests. Chemical insecticides and pesticides are extremely harmful to human beings and pollute our environment. The use of biocontrol measures greatly reduces our dependence on toxic chemicals and pesticides. Biocontrol agents useful in controlling plant diseases and pests include the following. The ladybird, a beetle with red and black markings, and dragonflies are useful for getting rid of aphids and mosquitoes respectively. The bacterium Bacillus thuringiensis (Bt) is used to control butterfly caterpillars: dried spores of Bacillus thuringiensis are mixed with water and sprayed onto vulnerable plants such as brassicas and fruit trees; the spores are eaten by the insect larvae, the toxin is released in the gut of the larvae, and the larvae are killed. Trichoderma species are free-living fungi found in the root ecosystem; they are effective biocontrol agents against several plant pathogens. Baculoviruses are pathogens that attack insects and other arthropods; the majority of baculoviruses used as biological control agents are in the genus Nucleopolyhedrovirus.

Microbes as Biofertilisers
Biofertilisers are formulations of living microorganisms that enrich the nutrient quality of the soil. The main sources of biofertilisers are bacteria, fungi and cyanobacteria. Rhizobium is a classical example of a symbiotic nitrogen-fixing bacterium; it infects the root nodules of leguminous plants and fixes atmospheric nitrogen into organic forms. Azospirillum and Azotobacter are free-living bacteria that fix atmospheric nitrogen and enrich the nitrogen content of the soil. A symbiotic association between a fungus and the roots of a plant is called a mycorrhiza; the fungal symbiont in these associations absorbs phosphorus from the soil and transfers it to the plant. Cyanobacteria, or blue-green algae (BGA), are prokaryotic free-living organisms that can fix nitrogen. Oscillatoria, Nostoc, Anabaena and Tolypothrix are well-known nitrogen-fixing cyanobacteria. Their importance is realised in waterlogged paddy fields, where cyanobacteria multiply and fix molecular nitrogen.
To evolutionists, radiation is like manna from heaven. It feeds the engine of Darwinian evolution — random mutation — providing variations that evolution's Tinkerer, natural selection, can use to build new watches blindfolded. Well, the Chernobyl disaster of 1986 gave evolutionary biologists an unexpected natural lab to test their view, and this experiment has been going on for two years longer than Richard Lenski's Long-Term Evolution Experiment with E. coli. The recent HBO miniseries Chernobyl brought back memories of the event that seems synonymous with "disaster." Experts had predicted a high death toll on all life as a result of the radiation bath. People were quickly evacuated from a 3,500-km² area, and the cities closest to the nuclear plant quickly became ghost towns (see the video "Postcards from Pripyat"). A 30-km Chernobyl Exclusion Zone (CEZ) was enforced. To everyone's surprise, though, life in the CEZ is thriving 33 years later. Therein is a story worth investigating: which view of biology scored, Darwin or intelligent design? Analyzing the situation requires some knowledge about nuclear radiation. Even though the CEZ will remain contaminated to some degree for thousands of years, not all the "hot" isotopes will last that long, and not all are equally dangerous. Toxicity depends on the particles emitted (alpha, beta, or gamma rays), the ratios emitted, and their respective energies. One of the most toxic radioisotopes of all, polonium-210, which was used to kill the former Russian spy Alexander Litvinenko in London in 2006, is only deadly when ingested; it is safe to hold in the hand. It has a relatively short half-life, and its alpha particles penetrate so poorly that they can be blocked by a sheet of paper. Inside the body, however, they make cells undergo apoptosis (cell suicide) as the hot particles are transported through the blood, tissues, and organs (Medical News Today). The Chernobyl reactor released many radioisotopes into the atmosphere, some with relatively short half-lives. One of the biggest risks for humans from Chernobyl was radioactive iodine, which concentrates in the thyroid gland and can cause thyroid cancer. Its half-life is on the order of eight days, however, so within four years after the disaster, levels had dropped enough to make dairy products safe again for consumption. Cesium-137 and strontium-90 have half-lives of around thirty years, so they will remain a concern, but some of this material can be leached into the soil by rain or transported by wind, and thus dissipate sooner. A United Nations report twenty years after the disaster says, "Although plutonium isotopes and americium 241 [half-life 432 years] will persist perhaps for thousands of years, their contribution to human exposure is low." One other consideration is that the biosphere is bombarded with ionizing radiation all the time, from radon in the soil, carbon-14 in the air, gamma rays from space, and other sources. It's the increment above what experts consider safe levels, therefore, that determines the risk, and that diminishes with distance from the source. We should not think of the CEZ as glowing hot for 20,000 years, therefore. But without doubt, the area received a highly dangerous dose of radiation at first. A few dozen people died in the immediate aftermath of the explosion. Experts estimate that about 4,000 people "could" die from cancer, but as years go by, it's increasingly hard to attribute the cause to Chernobyl as radiation levels decrease.
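The half-life arithmetic behind these statements is simple enough to sketch in a few lines of code (an illustration only, not from the article; the half-life values are standard published figures):

```python
def remaining_fraction(elapsed_days: float, half_life_days: float) -> float:
    """Radioactive decay: N(t)/N0 = (1/2) ** (t / T_half)."""
    return 0.5 ** (elapsed_days / half_life_days)

# Iodine-131 (half-life ~8 days) versus cesium-137 (~30 years)
for name, half_life in [("iodine-131", 8.0), ("cesium-137", 30 * 365.25)]:
    after_one_year = remaining_fraction(365.25, half_life)
    print(f"{name}: {after_one_year:.2e} of original activity left after 1 year")

# iodine-131 is effectively gone within months (~1e-14 remains after a year),
# while cesium-137 still retains ~98% — which is why it lingers for decades.
```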
Many more owe their lives to the heroes who died to entomb the reactor shortly after the accident. Pine trees died, and animals within the hot zone died — but not all of them. And now, to the experts' surprise, the area is doing remarkably well. Stuart Thompson, a plant biochemist, writes for The Conversation: Life is now thriving around Chernobyl. Populations of many plant and animal species are actually greater than they were before the disaster. Given the tragic loss and shortening of human lives associated with Chernobyl, this resurgence of nature may surprise you. Radiation does have demonstrably harmful effects on plant life, and may shorten the lives of individual plants and animals. But if life-sustaining resources are in abundant enough supply and burdens are not fatal, then life will flourish. [Emphasis added.]

Why Life Is Resilient
The subject of his article is, "Why plants don't die from cancer." Unlike animals, he explains, plants can work around damaged tissue. They can also grow most tissues they need anywhere. "This is why a gardener can grow new plants from cuttings, with roots sprouting from what was once a stem or leaf." Additionally, plant cell walls act as a barrier to metastasis, should tumors arise. Even though dying trees near the accident created a "Red Forest," the local ecology did not collapse. Thompson retreats into Darwinism briefly, but he points out reasons why plants proved so resilient to the Chernobyl disaster. Are these not better explained by intelligent design? Interestingly, in addition to this innate resilience to radiation, some plants in the Chernobyl exclusion zone seem to be using extra mechanisms to protect their DNA, changing its chemistry to make it more resistant to damage, and turning on systems to repair it if this doesn't work. Levels of natural radiation on the Earth's surface were much higher in the distant past when early plants were evolving, so plants in the exclusion zone may be drawing upon adaptations dating back to this time in order to survive. Where did those extra mechanisms come from? Where did the "systems to repair" come from? Radiation has no power to bring forth complex systems. This is like saying a hail of bullets generates armor! No; if the systems were not already present, they could do nothing.

A Thriving Ecosystem
With plants rebounding (which presupposes the presence of worms, fungi and other ecological partners), mammals and birds quickly returned in force. Wolves, boars, and bears are now back in larger numbers than ever, and birds can be seen flying in and out of the sarcophagus built over the reactor, and even nesting in its cracks. Thompson shares another surprise: with the humans mostly gone, Chernobyl has become a thriving wildlife refuge! Crucially, the burden brought by radiation at Chernobyl is less severe than the benefits reaped from humans leaving the area. Now essentially one of Europe's largest nature preserves, the ecosystem supports more life than before, even if each individual cycle of that life lasts a little less. Another surprise is that the people who refused to evacuate appear to be doing better than those who left. Forced resettlement wore evacuees down with anxiety, fear, and personal conflicts. The U.N. report says, "Surveys show that those who remained or returned to their homes coped better with the aftermath than those who were resettled." For more astonishment, read "What Bikini Atoll Looks Like Today," at Stanford Magazine.
The spot where a hydrogen bomb exploded 62 years ago is once again a tropical paradise, complete with "big healthy coral communities" in the surrounding waters, and schools of fish swimming through the hulks of sunken warships. Despite 23 atomic bomb tests at the atoll, "Ironically, Bikini reefs look better than those in many places she's dived," writes Sam Scott about scuba diver Elora Lopez. "It didn't look like this nightmare-scape that you might expect," she says. "And that's still something that's weird to process." The lesson from Chernobyl is this: radiation kills, but life comes prepared to defend itself. No newly evolved organisms emerged at Chernobyl. Billions of mutations were not naturally selected to originate new species. The same organisms rebounded because DNA repair systems, involving exquisite machinery, were prepared to find mutations and fix them. The systems might be overwhelmed temporarily, but will rebound as soon as the threat diminishes. Machines do not make themselves in the presence of threats. They have to be prepared in advance. Think of it: the DNA code includes instructions on how to build machines that can repair DNA! The resilience of some life forms is truly remarkable. Common "water bears," aka tardigrades, are some of the most durable animals known. These nearly microscopic animals, close relatives of the arthropods, might be found in your garden as well as in polar ice. They can survive the vacuum of space with no oxygen for days, endure temperatures from near absolute zero to boiling water, and survive radiation a thousand times stronger than levels at the surface of the earth. Some have been revived after a century in a dehydrated state! It wasn't the conditions that produced these abilities; tardigrades had to already have these robust systems before the conditions arrived. Tardigrades never had to "evolve" in space; how did they pass that test? The answer is design. Even some one-celled organisms are fantastically durable. A preprint at bioRxiv speaks of "Extreme tolerance of Paramecium to acute injury induced by γ rays," due to "DNA protection and repair" genes. Some archaea and bacteria (thought to be the simplest life forms) can survive water above the boiling point in Yellowstone hot springs. Another ubiquitous microbe named Deinococcus radiodurans, "the world's toughest bacterium," is amazing. According to Genome News Network, "The microbe can survive drought conditions, lack of nutrients, and, most important, a thousand times more radiation than a person can." It was discovered doing just fine in ground meat that had been irradiated for sterilization. How does it do it? An efficient system for repairing DNA is what makes the microbe so tough. High doses of radiation shatter the D. radiodurans genome, but the organism stitches the fragments back together, sometimes in just a few hours. The repaired genome appears to be as good as new. "The organism can put its genome back together with absolute fidelity," says Claire M. Fraser, of The Institute for Genomic Research (TIGR) in Rockville, Maryland. She was the leader of the TIGR team that sequenced D. radiodurans in 1999. The fantastic resilience of life to threats, whether from ionizing radiation, temperature, or deprivation, shouts design. As stated in a recent post about homeostasis, only intelligence builds machines that can maintain the state of other machines. The recovery of Chernobyl's ecosystem offers powerful evidence for life's pre-programmed resilience.
Social and Emotional Learning in the Performing Arts Classroom
By NAfME member Wendy Hart Higdon

An important topic in education is Social and Emotional Learning (SEL). SEL encompasses the "soft skills" that students must learn and apply in order to successfully navigate school and life. For the past year and a half, Social and Emotional Learning has been a focus in my school district. As I have participated in professional development on the topic, I have discovered that the music classroom lends itself well to developing these skills, and in fact, this is probably something you are already doing!

What is Social and Emotional Learning?
According to the Collaborative for Academic, Social and Emotional Learning (CASEL), "Social and emotional learning is the process through which children and adults acquire and effectively apply the knowledge, attitudes, and skills necessary to understand and manage emotions, set and achieve positive goals, feel and show empathy for others, establish and maintain positive relationships, and make responsible decisions." As much as we would love for all of our students to come to us competent in the various skills of SEL, we all have young people in our classrooms who struggle in many of these areas. Their struggles directly impact their success at school and in life. As music teachers, we are often blessed to work with the same students over the course of several years and in a variety of situations. We pride ourselves on the fact that students in our ensembles learn "life skills" that help to transform them into more productive citizens. We often speak of the relationships and sense of family that students build as members of our ensembles. And we hope that our students are there for one another should misfortune strike a member of our group. As such, we, as directors of music ensembles, are already promoting healthy SEL. The next step is perhaps to be more intentional and deliberate in embedding Social and Emotional Learning in our instruction.

The Five Competencies of SEL
The five competencies that make up SEL are self-awareness, self-management, social awareness, relationship management and responsible decision making. Self-awareness and self-management go hand in hand and deal with students recognizing, understanding, monitoring and managing their own emotions, including stress. Good social awareness means that students understand social cues, listen to and respect others, and anticipate how others might react in certain situations. Competency in relationship management includes effective communication and leadership skills, as well as conflict resolution. The final competency, responsible decision making, while fairly self-explanatory, also includes the ability to negotiate and compromise.

In the Classroom or Rehearsal Hall
Recently my beginning band students had their very first concert. As part of our preparation for this important event, we spent some time talking about the excitement or nervousness that students might feel on concert night. We talked about the physical reactions that often accompany those feelings, such as butterflies in the stomach, hyperactivity, or sweating. We talked about the rush of adrenaline that students might feel and discussed how, once students get worked up, it usually takes several minutes for those feelings to subside. And we brainstormed strategies to calm ourselves down, such as taking a deep breath, closing our eyes, focusing our thoughts and sitting still.
Helping students to understand what to expect ahead of time can help them to manage those feelings, which ultimately helps them to have a better performance. I have prepared my students for their first concert in this way for years. I thought it was just part of the job, but as I have learned, this is a wonderful example of being deliberate about social and emotional learning by embedding it as a part of my curriculum. Let's take a look at some other ways that SEL might be embedded in the music classroom.
- Helping students to understand and manage stage fright.
- Teaching our students ways to cope with feelings of disappointment when a performance doesn't live up to expectations.
- Guiding students toward behavior that is gracious and humble after a successful audition, especially when peers may be feeling disappointed about their own results.
- Using the music that we perform as a vehicle to celebrate diversity, learn about other cultures and be accepting of those who may be different from us.
- Getting to know our students on a more personal level so that they feel connected, supported and valued.
- Working with students to develop leadership skills that they can then practice as part of your ensemble (drum majors, section leaders, etc.).
With a little thought, I bet each of us can come up with many ways that we are already impacting our students' social and emotional learning within our ensembles. The next step for us is to be more intentional in our approach. Your students will benefit greatly!

About the author: NAfME Member Wendy Higdon is the Director of Bands and Unified Arts Department Chair at Creekside Middle School in Carmel, IN. She began her career as Director of Bands at Lebanon Middle School (IN) in 1991 and came to Carmel Clay Schools in 1999, where she taught band at Carmel Middle School until the opening of Creekside in 2004. Under her leadership, the performing arts programs at Creekside have grown from 400 students in 2004 to nearly 900 students this year.

The National Association for Music Education (NAfME) provides a number of forums for the sharing of information and opinion, including blogs and postings on our website, articles and columns in our magazines and journals, and postings to our Amplify member portal. Unless specifically noted, the views expressed in these media do not necessarily represent the policy or views of the Association, its officers, or its employees.
"Someone's sitting in the shade today because someone planted a tree a long time ago." – Warren Buffett

Any successful individual will highlight the importance of foresight and planning in their lives! Financial modeling is one of the most valued but thinly understood skills in finance. A financial modeler combines accounting, finance, and business metrics to create an abstract representation of a company in Excel. Financial models are built to analyze the current financial standing of an organization and to build a picture of its future. Financial modeling teaches one to work with historical data of companies, competitors and the industry, and to use this data to analyze company performance on the basis of relevant financial parameters. This data lets the board of directors analyze the performance of the company vis-à-vis the competition and understand whether it has performed well and is in line for robust growth. It is on the basis of these models that a management/CEO/MD is able to predict and announce future financial performance. This gives investors an estimate of the valuation of companies and lets them decide whether they should invest in or stay away from the stock.

What is Financial Modeling?
Financial modeling is the process by which a firm constructs a financial representation of some or all aspects of the firm or an underlying asset. The model is built by performing calculations on input data and makes recommendations based on the computed results. The model can also summarize particular events for the end user, such as investment management returns, and can help in understanding market direction (e.g., the Fed model). The main aim for an analyst is to provide an accurate forecast of the price or future earnings performance of a particular company. Financial analysts test numerous valuation and forecast theories by recreating business events in an interactive calculator called a financial model. A financial model tries to capture all the variables in a particular event. It then quantifies the variables and creates formulas around them. The main software tool used for this is the Excel spreadsheet. Spreadsheet language permits the financial modeler to reconstruct almost any cash flow or revenue system.

Uses of Financial Modeling
Financial models are used for different reasons. The most common are:
– business valuation,
– cost of capital calculations for corporate finance projects,
– scenario preparation for strategic planning,
– capital budgeting decisions, and
– the allocation of corporate resources.
They are also used in the creation of trends and projections of forecasts, and for many other purposes relating to industry comparisons, ratio analysis and common-size financial statements.

Example of Financial Modeling
There are countless variables at play while building a forecast or a valuation. Analysts can isolate the most sensitive of these variables by creating models and then testing the models with different inputs. The inputs are used to create a set of results that determine the impact of a change in one variable or another. The best financial models are very simple and provide their users with a set of basic assumptions. Let's take a case study of a company, Illinois Tool Works, which is an industrial manufacturing company. We're given a set of assumptions and have to calculate the sales growth of the company. Sales growth is a function of current sales and the prior period's sales.
These are the only two inputs the model needs to calculate sales growth. The financial modeler creates one input cell for the prior period's sales, cell X, and one input cell for the current period's sales, cell Y. In a third cell, cell Z, the analyst creates a formula that subtracts cell X from cell Y and divides the result by cell X. This is the growth formula. Cell Z is hard-coded into the model; cells X and Y are input cells for the user. The main purpose of the model is to automate the calculation of sales growth for a particular company, as sketched below. Financial models are generally built in Excel, as it is very easy to use. Personal preference and needs dictate the complexity of the spreadsheet. The key is to understand whatever data you decide to include, so that you are able to gain insight from it. BSE Institute Limited provides a short-term online course, Introduction to Financial Modeling using Excel, on bsevarsity.com. This short course is best for working professionals who wish to learn about modeling in a short period of time while they work. As the course is based primarily on MS Excel, professionals can easily implement their learning on the job immediately.
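For readers who prefer code to spreadsheets, here is a minimal Python sketch of the same cell-Z growth formula (the function name and the sample sales figures are our own, purely illustrative; they are not course material or real Illinois Tool Works numbers):

```python
def sales_growth(prior_sales: float, current_sales: float) -> float:
    """The growth formula: cell Z = (Y - X) / X."""
    if prior_sales == 0:
        raise ValueError("Prior-period sales (cell X) must be non-zero")
    return (current_sales - prior_sales) / prior_sales

# Hypothetical input cells X and Y (made-up figures, in millions)
prior = 3_600.0    # cell X: prior period's sales
current = 3_780.0  # cell Y: current period's sales

print(f"Sales growth: {sales_growth(prior, current):.1%}")  # -> 5.0%
```

Changing only the two input cells and re-reading cell Z is exactly the "test the model with different inputs" workflow described above.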
By Kristen French For many years, scientists have speculated that seeding the ocean with iron might help to stave off climate change. Iron in seawater promotes the growth of phytoplankton, which in turn devours carbon dioxide from the atmosphere through photosynthesis. Iron basically allows the ocean to soak up carbon. But only dissolved iron, not its undissolved particulate forms, was thought to stimulate phytoplankton growth, despite iron's low solubility in seawater and the abundance of particulate iron in the ocean. Further, the quantity of iron rather than its chemical signature was thought to determine the rate of phytoplankton growth. Now an interdisciplinary team of scientists led by Elizabeth M. Shoenfelt and Benjamin Bostick of Columbia University's Lamont-Doherty Earth Observatory has discovered that particulate iron does stimulate phytoplankton growth, and that the chemical form that particulate iron takes is critical to ocean photosynthesis—not just the quantity of iron available. The team found that the iron in dust and sediment that comes from glaciers is better at promoting phytoplankton growth and photosynthesis than iron found in dust from other sources. This means that glaciers may play a larger role in the carbon cycle than had been thought. "It's not that soluble iron doesn't matter, but particulates, which are the biggest components of the iron in the ocean, can do quite a bit," said Bostick. The findings, published in the June 23 edition of the journal Science Advances, show that in lab culture, a well-studied coastal diatom grows just as well on particulate iron as on soluble iron, and grows up to 2.5 times faster, with greater photosynthetic efficiency, when fed a form of particulate iron produced by the grinding of glaciers against rock. The authors estimate that the carbon uptake rates of the diatoms consuming glacier-produced iron would be five times higher than those consuming non-glacier iron when the enhanced growth and photosynthesis rates are combined. Earlier research had shown that during glacial periods, ocean concentrations of iron tend to rise. Glaciers grind up iron-rich bedrock that lies beneath the ice when they extend and recede through seasonal cycles, and the resulting iron dust is carried on the wind out to sea. But no one had connected the chemical forms of iron found in glacier-produced dust, versus other forms, to phytoplankton photosynthesis. "Basically glaciers make fertilizer for the ocean," said Bostick. "We show that it's not just how much dust the glaciers make, but the fact that the glaciers grind up certain kinds of rocks that makes a big difference." The research team took the so-called glaciogenic dust they used in lab culture from South America's Patagonia region, but they said that the mineralogy of glaciogenic dust is similar around the world. The water they used came from the Southern Ocean. The team's results set up a number of avenues for future research. These include studying the geological record to identify changes in the chemical forms of iron available in the ocean over time, and matching those to glacial fluctuations, said Bostick. He said further study could use genetics to study how diatoms use iron. "We'd like to know mechanistically how it's happening," said Bostick. "This allows you to understand how the system can be manipulated, so we can know how the environment would respond."
There are different reasons why animals may be killed. Many animals are killed as a food source for humans, and this is the basis of many agricultural enterprises. In other instances, animals used in agriculture may be killed because of culling, disease control measures, illness, injury or old age, or when they reach the end of their productive life. It is essential that teachers discuss with students the ethics and responsibilities that humans have in both the life and death of animals. Any animals that are slaughtered for food and then sold must be slaughtered and processed by an approved facility. Schools may not slaughter animals and then sell them as food. This includes poultry: the Meat Industry Act 1978 defines birds as abattoir animals. The meat from animals that have been slaughtered in a registered processing facility may be returned to the school packaged and ready for sale. If justified by the school's curriculum, a few chickens may be slaughtered by the teacher (or other suitably trained person) and dressed to demonstrate livestock processing for food. The teacher, or other suitably trained person, may kill the chickens without the presence of students, prior to the commencement of the dressing activity. Another example where it may be justified to kill an animal at school may arise because it is cruel for it to be kept alive. If that situation arises, killing may be performed only by persons competent in a recognised and approved method. There are preferred ways of killing particular species, and the most acceptable means should be used. If there is no person with the appropriate skill, then a veterinarian should be called. The killing of any animal in a school must only be demonstrated to students in the following situations:
- To achieve an educational outcome as specified in the relevant curriculum or competency requirement, provided the teacher has written approval from the SACEC, or
- As part of veterinary clinical management of an animal, under the direction of a veterinarian.
Whatever the circumstances, the procedure used for killing an animal must avoid distress, be reliable, and produce rapid loss of consciousness without pain until death occurs. The procedure should minimise any impact on non-target animals. If possible, the animal must be unaware of danger before being killed. There must be no disposal of carcasses until death is established. The means of disposal of the carcass will depend on the species of animal and local government regulations. When fertilised eggs are used, the method of disposal must ensure the death of the embryo. The holding of fertilised eggs over 10 days old with the intention of disposing of them prior to hatching is not permitted. Except for the above, animals should not be killed in schools.
An Asiatic elephant mother in Chitwan National Park, Nepal. The Asian or Asiatic elephant (Elephas maximus) is the only living species of the genus Elephas and is distributed in South and Southeast Asia, from India in the west to Borneo in the east. Three subspecies are recognized: Elephas maximus maximus from Sri Lanka, the Indian elephant or E. m. indicus from mainland Asia, and E. m. sumatranus from the island of Sumatra. Asian elephants are the largest living land animals in Asia. Since 1986, E. maximus has been listed as endangered by the IUCN, as the population has declined by at least 50% over the last three generations, estimated to be 60–75 years. The species is primarily threatened by habitat loss, degradation and fragmentation. In 2003, the wild population was estimated at between 41,410 and 52,345 individuals. Female captive elephants have lived beyond 60 years when kept in semi-natural situations, such as forest camps. In zoos, elephants die at a much younger age, and captive populations are declining due to a low birth rate and a high death rate.
The Gila chub once benefited from the engineering feats of beavers living in the upper Gila River basin, whose dams created deep, slow-moving pools the chub loved. So in the late 1800s, when beavers were extirpated from much of the basin, the Gila chub was hurt too. Now, with added pressure from nonnative fish and bullfrog predation, habitat destruction, and water diversion, the chub struggles to survive in a small fraction of its old range. In the early to mid-1800s, the human population increased significantly in southern Arizona, New Mexico, and northern Sonora, Mexico. This growth spurt brought mass livestock production and intensive agriculture; by the late 1800s, many southern Arizona watersheds were in poor condition due to livestock grazing, mining, hay and timber harvesting, and water diversion. Though the initial changes took place nearly a century ago, many of the fragile desert ecosystems haven't fully recovered, and some areas may never recover. The Gila chub has suffered in these degraded desert watersheds. In 1998, the Center and Sky Island Watch petitioned the U.S. Fish and Wildlife Service to list the Gila chub as an endangered species. After two lawsuits and a negotiated agreement to protect 29 species, including the Gila chub, the Fish and Wildlife Service proposed listing the chub as endangered. In 2005, the Gila chub was finally afforded protection and critical habitat under the Endangered Species Act. However, the 2005 critical habitat designation — an arbitrary and capricious decision made under former disgraced Interior Department official Julie MacDonald — reduced the protected area from the proposed 33,280 acres to an insufficient 25,600 acres. The Center filed a notice of intent to sue over this and 54 other politically tainted decisions in 2007.
Wednesday, January 30. Winter tends to bring depression, especially because of the cold and dark weather, as people stay in their homes and reduce their social and physical activity. Depression has long been linked to physical activity, but the nature of this association has, until now, left a question on the table: can physical activity be used to reduce depression risk in the first place? A new study from Massachusetts General Hospital addresses that question. "Using genetic data, we have demonstrated that higher levels of physical activity reduce the risk of depression," says lead author Karmel Choi. This means that exercising regularly really does reduce the risk of depression. The research is based on datasets analysing the association between physical activity and depression at the genetic level. There were two sets of physical activity data: one in which participants self-reported their exercise levels, and one containing data from motion sensors. The movement-tracking sensors showed that physical activity can prevent depression, but the self-reported dataset did not find that relationship. The study suggests that people failed to report incidental physical activity, such as work or housecleaning; according to the study, however, any kind of physical activity helps to prevent depression. Still, Choi cautions that "knowing that physical activity is useful for preventing depression is one thing, and actually getting people to be active is another. More work is needed to find out how to adjust the recommendations for different categories of people with different health risks." Researchers are currently investigating the benefits of different kinds of exercise and ways of promoting physical activity to the general public, since people at genetic risk of depression may need a different approach from those whose depression stems from stress.
2 Double Object Pronouns
Double object pronouns are the use of an indirect object pronoun and a direct object pronoun in the same sentence.

3 Direct Object Pronouns - Let's Review
A direct object pronoun answers the question Who? or What? It is the noun or pronoun that receives the action of the verb. The direct object pronouns are:
lo - masculine, singular form (him, you formal, it)
la - feminine, singular form (her, you formal, it)
los - masculine, plural form (them, you all)
las - feminine, plural form (them, you all)

4 Direct Object Pronouns - Let's Practice
He bought the pens. Él compró las plumas.
He bought them. Él las compró.

5 Indirect Object Pronouns - Let's Review Further
The indirect object pronoun answers the question To Whom? or For Whom? It is the person to or for whom the action of the verb is completed. The indirect object pronouns are:
me - to me
te - to you (singular, familiar)
le - to him, to her, to you (singular, formal)
nos - to us
os - to you (plural, familiar)
les - to them, to you (plural, formal)

6 Indirect Object Pronouns - Let's Practice More
He brought flowers to Sara. Él trajo las flores a Sara. Él le trajo las flores.

7 Placement of the Double Object Pronouns
When there is only one conjugated verb: place the direct and indirect object pronouns in front of the verb. The indirect object pronoun must always precede the direct object pronoun.
He brought flowers to me. Él trajo las flores a mí. Él me las trajo.

8 Placement of the Double Object Pronouns
When there are two verbs, a conjugated verb and an infinitive: place the direct and indirect object pronouns in front of the conjugated verb OR attach them to the infinitive, if you have one. The indirect object pronoun must still come before the direct object pronoun.

9 Placement of the Double Object Pronouns
He is going to bring flowers to me. Él me las va a traer. Él va a traérmelas.

10 The Third Person Object Pronouns
When both the indirect and direct object pronouns are in the third person, singular or plural, the indirect object pronoun still precedes the direct object pronoun, but it is written as "se" rather than "le" or "les".
He bought flowers for her. Él se las compró.

11 Important Notes to Remember
Indirect before direct, before the conjugated verb (or attached to the infinitive if you have one). You can't say "le lo"; you must say "se lo", "se la", "se los", or "se las".

12 Now It's Your Turn
He speaks Spanish to me. Él me lo habla.
We are going to wash the dishes for her. Nosotros se los vamos a lavar. Nosotros vamos a lavárselos.
Neptune's biggest moon, Triton, has a diameter of 2,700 km, making it the seventh biggest moon in the solar system, as well as the solar system's sixteenth biggest object overall. Triton also contains more than 99.5% of all the mass known to orbit Neptune, and has more mass than all of the solar system's known smaller planetary satellites combined. Below are some more interesting facts about Neptune's huge natural satellite, which was discovered by William Lassell on October 10th, 1846:

– Triton is named after a sea god
Somewhat fittingly, the moon was named after Triton, the son of the Greek sea god Poseidon, whose Roman equivalent was Neptune. However, the name Triton was only officially adopted many years after its discovery; the moon was known simply as "the satellite of Neptune" until the discovery of Neptune's second moon, Nereid, in 1949.

– Triton has a retrograde orbit
While many moons in the solar system have retrograde orbits, Triton is unique in this regard because it is the largest moon to orbit its parent body in the direction opposite to the planet's rotation. By way of comparison, some of Jupiter's and Saturn's moons have retrograde orbits, but they are situated much further from their planets and are also much smaller; the largest of them, Phoebe (a moon of Saturn), is only 8% as big as Triton and 0.03% as massive.

– Triton will crash into Neptune
Since Triton already orbits Neptune at a distance smaller than the Earth-Moon distance, it is almost certain that tidal forces will cause the moon's orbit to decay further, and at an increasing rate as the orbit decays. Computer modelling suggests that in about 3.6 billion years, Triton will cross Neptune's Roche limit, which is the distance at which an object orbiting a massive body will break apart. In practical terms, the Roche limit is reached when tidal forces overcome the gravitational forces that hold an orbiting body together. It is expected that when Triton reaches this point, it will either break up, forming a complex ring system, or fall into Neptune's atmosphere.

– Triton is likely a captured Kuiper Belt object
Since moons with retrograde orbits cannot have formed in the same region as their primaries, the only explanation for Triton's orbit is that it was captured from the Kuiper belt, a ring-shaped reservoir of small icy and/or rocky objects left over from the formation of the solar system. The Kuiper belt extends from roughly the orbit of Neptune to a distance of about 50 astronomical units from the Sun, and since Triton is very similar in composition, size, mass, and temperature to Pluto (a known Kuiper belt object), it is almost certain that Triton was captured by Neptune in the distant past.

– Triton has few impact craters
Only 179 impact craters have been positively identified on the 40% of Triton's surface that has been mapped, which suggests that the surface of the moon is continually being modified. In fact, studies have shown that on cosmic time scales Triton's surface is almost "brand new", with an estimated age of between 6 million and 50 million years. Moreover, Triton's surface is almost as smooth as a billiard table, with the highest known elevation being only about 1,000 metres.

– Triton has ice volcanoes
Although Triton has a crust of ice, the processes that produce the observed ice volcanoes are almost identical to those that produce hot, lava volcanoes on Earth.
Triton's entire known surface is crisscrossed by rift valleys and pressure ridges, which suggests an ongoing process of volcanic and tectonic activity; but instead of hot lava, the volcanoes on Triton spew water ice and ammonia, the result of endogenic geological processes rather than of fractures caused by violent impacts.

– Triton also has nitrogen geysers
Apart from ice volcanoes, Triton also features geysers of sublimated nitrogen, somewhat like the geysers on Earth that spout hot water. While there are not many nitrogen geysers on Triton, all are located close to the subsolar point, which suggests that sunlight is heating reservoirs of subsurface frozen nitrogen to the point where the nitrogen sublimates and bursts through the solid ice sheet that overlies each reservoir. One such geyser was observed to squirt nitrogen gas and entrained dust to a height of 8,000 metres, and based upon this, it is estimated that each eruption could last for up to one Earth year, releasing the gas sublimated from about 100 million cubic metres of frozen nitrogen.

– Triton features unique cantaloupe terrain
Known as "cantaloupe terrain" after its resemblance to the skin of a cantaloupe melon, this feature consists of closely linked depressions that can be as much as 30–40 km in diameter. While the formation consists primarily of water ice, its origin and process of formation remain uncertain, but it is certain that this area (which probably covers much of Triton's western hemisphere) is not the result of impacts, because the depressions show very little variation in size and depth. One theory holds that the ridges are the result of "lumps" of dense, hard subsurface material that have somehow pushed through a softer, less dense top layer.

– Triton has a nearly perfectly circular orbit
Triton's orbit around Neptune is almost perfectly circular, with negligible eccentricity. While there is some uncertainty about how this came about in the relatively short history of the solar system, it is thought that factors such as gas drag from a substantial debris disc may have contributed significantly to the circularising of Triton's orbit.

– Triton's surface is completely frozen over
Although only about 40% of the moon's surface has been mapped, the portion that is known is completely covered by a layer of frozen nitrogen, and it is extremely unlikely that the remaining 60% of Triton's surface is any different. The ice layer consists primarily of nitrogen ice, with water ice and frozen carbon dioxide completing the mix. The ice layer gives Triton a high albedo: it reflects between 60% and 95% of the light that falls on it, while our own Moon reflects only about 11–12%.
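As a back-of-the-envelope check on the Roche-limit claim above, the standard fluid-body approximation can be used (a textbook formula, not one given in the article; the radius and density figures below are approximate published values for Neptune and Triton):

$$
d_{\mathrm{Roche}} \approx 2.44\,R_{N}\left(\frac{\rho_{N}}{\rho_{T}}\right)^{1/3}
\approx 2.44 \times 24{,}622\ \mathrm{km} \times \left(\frac{1.64}{2.06}\right)^{1/3}
\approx 5.6 \times 10^{4}\ \mathrm{km}
$$

Since Triton currently orbits at roughly 355,000 km, the orbit has a long way to decay before breakup begins, which is consistent with the 3.6-billion-year timescale quoted above.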
Explaining racism to children and young people with autism

Although race, identity and privilege affect people every day, they can seem like difficult topics to talk about with a child or young person. Our advice can support conversations about racism with all children and young people.

Talk about it honestly
No one is too young to learn about racial inequality, and it's incredibly important that you don't avoid the topic. Autistic people can process and communicate in different ways compared with non-autistic people, and tone and phrases can sometimes be easily misunderstood. So it's important you talk about these issues clearly, so that young people can understand what racism is and the impact it has. Helping young people to understand the topic means they can process it and communicate about it. You can bring diversity into lots of different conversations, and you can support the conversations with resources and videos of people talking about their experiences of racism in Britain; this will help bring context for your autistic young person.

Be prepared for the difficult questions
There are lots of resources that can support parents to understand the different elements of racism, identity and privilege. Any child or young person can ask difficult questions, so it's better to be prepared for them. Some issues are very complex, so explaining them in a way that's most appropriate for your autistic child is vital. You can use examples or talk about something they might have experienced or seen. For larger topics, lots of small conversations, or spending more time over a longer period, might be beneficial to understanding.

Don't be afraid if your child says the wrong thing
Autistic people often have a very literal way of thinking. If you're in public and your child or young person says the wrong thing, don't hush them and silence their misunderstanding. Take the opportunity to talk about race. Teach them about their misconception and why what they said is inappropriate or offensive.

Think consciously about bringing in an awareness of diversity
Recognising difference and using opportunities to bring in more diversity awareness will help your child to understand race, identity and privilege. You could watch films with all-black casts or suggest books by non-white authors. Many people are non-racist; go one step further and be anti-racist. This means being active in your response to racism and speaking out against racial injustice. Set an example for your child or young person of how to appropriately stand up against racism.

Provide a safe space for your child to share how they feel
Discussing these issues can cause worry and anxiety for everyone. Some people will feel very confident sharing how they feel, whereas others may want a safe space to discuss their feelings and experiences openly. You can build their confidence by listening closely and supporting them to understand why they may feel that way. To end the discussion, reassure them of their own safety with you. Our signposting resource has more information about understanding race and racism for everyone in the family.
This is a program that can plot graphs of quadratic functions. The formula of a quadratic function is f(x) = ax² + bx + c, where a, b and c are constants. The program employs a picture box as the plot area and three text boxes to obtain the values of the coefficients a, b and c of the quadratic equation from the user. We also need to modify the scale factor in the properties window of the picture box; we are using a scale of 0.5 cm to represent 1 unit. Besides, we need to make some transformations, as the coordinates in VB start from the top left but we want them to start from the middle. We can use the Pset method to draw the graph using a very small increment. Pset is a method that draws a dot on the screen; the syntax is PSet (x, y), color, where (x, y) is the coordinate of the dot and color is the color of the dot. Using the For...Next loop together with Pset, we can draw a line on the screen, as sketched below.
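The article's own code is classic Visual Basic; for readers who want to experiment outside VB6, here is a rough Python equivalent of the same draw-a-dot-per-increment idea (an illustrative sketch using matplotlib instead of a picture box; the function name, plotting range and step size are our own choices, not from the original program):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_quadratic(a: float, b: float, c: float) -> None:
    """Plot f(x) = ax^2 + bx + c point by point, mimicking VB's PSet loop."""
    # A very small increment, like the tiny step used with PSet
    x = np.arange(-10, 10, 0.001)
    y = a * x**2 + b * x + c

    plt.plot(x, y, ".", markersize=1)  # one dot per increment

    # Move the axes to the middle, like the origin shift described above
    ax = plt.gca()
    ax.spines["left"].set_position("zero")
    ax.spines["bottom"].set_position("zero")
    ax.spines["right"].set_visible(False)
    ax.spines["top"].set_visible(False)

    plt.title(f"f(x) = {a}x² + {b}x + {c}")
    plt.show()

plot_quadratic(1, -2, -3)  # example coefficients a, b, c
```

The three arguments play the role of the three text boxes, and the origin shift on the axes stands in for the coordinate transformation the article describes.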
Nitrogen is one of a handful of elements that have been suggested as alternatives to carbon as the basis of life elsewhere in the universe. Principally this is because it can form long chains at low temperatures in a liquid solvent such as ammonia (NH3) or hydrogen cyanide (HCN). On the other hand, a major drawback of nitrogen as a backbone for large molecular structures is that the energy of the triple bond in N2 is much greater than that of three single bonds, so chains of nitrogen-nitrogen single bonds tend to revert to elemental nitrogen. Nitrogen can, however, form longer molecules with some other elements, including carbon, phosphorus, sulfur, and boron. Nitrogen can also form hydrides, such as hydrazine.
Cavitation means different things to different people. It has been described as:
- A reduction in pump capacity.
- A reduction in the head of the pump.
- The formation of bubbles in a low pressure area of the pump volute.
- A noise that can be heard when the pump is running.
- Damage that can be seen on the pump impeller and volute.
Just what, then, is this thing called cavitation? Actually it is all of the above. In another section of this site I described the several types of cavitation, so in this paper I want to talk about another side of cavitation and try to explain why the above happens. Cavitation implies cavities or holes in the fluid we are pumping. These holes can also be described as bubbles, so cavitation is really about the formation of bubbles and their collapse. Bubbles form whenever liquid boils. Be careful not to associate boiling with hot to the touch: liquid oxygen will boil, and no one would ever call that hot. Fluids boil when the temperature of the fluid gets too hot or the pressure on the fluid gets too low. At an ambient sea-level pressure of 14.7 psia (one bar), water will boil at 212°F (100°C). If you lower the pressure on the water it will boil at a much lower temperature, and conversely, if you raise the pressure the water will not boil until it gets to a higher temperature. There are charts available that give the exact vapor pressure for any temperature of water; if the pressure falls below this vapor pressure, the water will boil. Please note that I am using absolute, not gauge, pressure. It is common when discussing the suction side of a pump to keep everything in absolute numbers to avoid the use of minus signs. So instead of calling atmospheric pressure zero, we say one atmosphere is 14.7 psia at sea level; in the metric system the term commonly used is one bar, or 100 kPa if you are more comfortable with those units. Now we'll go back to the first paragraph and see if we can clear up some of the confusion:
The capacity of the pump is reduced
- This happens because bubbles take up space and you cannot have bubbles and liquid in the same place at the same time.
- If the bubble gets big enough at the eye of the impeller, the pump will lose its suction and will require priming.
The head is often reduced
- Bubbles, unlike liquid, are compressible. It is this compression that can change the head. The bubbles form in a lower pressure area because they cannot form in a high pressure area.
- You should keep in mind that as the velocity of a fluid increases, its pressure decreases. This means that high-velocity liquid is by definition a lower pressure area. This can be a problem any time a liquid flows through a restriction in the piping or volute, or changes direction suddenly. The fluid will accelerate as it changes direction. The same acceleration takes place as the fluid flows in the small area between the tip of the impeller and the volute cutwater.
- Any time a fluid moves faster than the speed of sound in the medium you are pumping, a sonic boom will be heard. The speed of sound in water is 4,800 feet per second (1,480 metres/sec), or 3,273 miles per hour (5,267 kilometres per hour).
Pump parts show damage
- The bubble tries to collapse on itself. This is called imploding, the opposite of exploding.
The bubble is trying to collapse from all sides, but if the bubble is lying against a piece of metal such as the impeller or volute, it cannot collapse from that side. The fluid therefore rushes in from the opposite side at high velocity, preceded by a shock wave that can cause all kinds of damage. There is a very characteristic round shape to the liquid as it bangs against the metal, creating the impression that the metal was hit with a "ball peen hammer".
- This damage would normally occur at right angles to the metal, but experience shows that the high-velocity liquid seems to come at the metal from a variety of angles. This can be explained by the fact that dirt particles get stuck on the surface of the bubble and are held there by the surface tension of the fluid. Because a dirt particle weakens the surface tension of the bubble, it becomes the weakest part of the surface and the section where the collapse will probably begin.

The higher the capacity of the pump, the more likely cavitation will occur. Some plants inject air into the suction of the pump to reduce this capacity and lower the possibility of cavitation. They choose this solution because they fear that throttling the discharge of a high-temperature application will heat the fluid in the pump and almost guarantee cavitation. In this case air injection is the lesser of two evils. High specific speed pumps have a different impeller shape that allows them to run at high capacity with less power and less chance of cavitating. You normally find this impeller in a pipe-shaped casing rather than the volute type of casing that you commonly see.
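The vapor-pressure rule above lends itself to a quick numerical check. The sketch below is a minimal illustration, not a pump-sizing calculation (a real analysis would use the full NPSH method): it estimates water's vapor pressure with the Antoine equation, using the commonly tabulated coefficients for water between roughly 1 and 100°C, and flags when a local absolute pressure has dropped below that vapor pressure.

```python
# Minimal sketch: flag cavitation risk by comparing the local absolute
# pressure with water's vapor pressure at the pumping temperature.
import math

def water_vapor_pressure_psia(temp_c: float) -> float:
    """Vapor pressure of water via the Antoine equation (result in psia)."""
    a, b, c = 8.07131, 1730.63, 233.426       # Antoine constants for water, P in mmHg
    p_mmhg = 10 ** (a - b / (c + temp_c))
    return p_mmhg / 51.715                    # 760 mmHg = 14.696 psia

def will_boil(local_pressure_psia: float, temp_c: float) -> bool:
    """True if the fluid pressure has fallen below the vapor pressure."""
    return local_pressure_psia < water_vapor_pressure_psia(temp_c)

# Water at 90 C boils well below one atmosphere, so a modest pressure
# drop at the impeller eye is enough to form vapor bubbles.
print(round(water_vapor_pressure_psia(90.0), 2))  # ~10.2 psia
print(will_boil(8.0, 90.0))                       # True: bubbles will form
```

The function names and the example pressures are invented for the illustration; only the Antoine correlation itself is standard.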
In the penultimate measure of the first movement of Clementi's Sonatina Op. 36, there is a short cascade of notes. This sonatina is often used as a teaching piece, because it's a great introduction for the early intermediate pianist to the techniques required in more complicated piano pieces. This little cascade is a good example of why.

It's short, only eight notes long. In the numbering system every beginner learns, your thumbs are ones; your pinkies, fives. The G and A keys are right next to each other on the keyboard, and one might expect the prescribed fingering for two adjacent notes to call for two adjacent fingers. Perhaps, because the sequence continues down the keys, it would use the fourth and fifth fingers, so that the other fingers are positioned to reach the notes that follow. But that's not what happens. The G is struck with the thumb, and the A with the fourth finger. To do this, one must curl the edges of the palm toward each other like a taco. Then, the second finger crosses over to reach the D, the third follows to strike the E, and then the sequence repeats: 1, 4, 2, 3.
Anoxic Brain Injury

The brain requires oxygen in order to function normally. When the brain suffers a partial lack of oxygen, the episode is referred to as a hypoxic event. When the brain is completely deprived of oxygen, it is known as an anoxic event, which can cause an anoxic brain injury. The brain consumes approximately one-fifth (20%) of the body's total oxygen. If the brain is deprived of oxygen, a domino effect of problems ensues. Oxygen is necessary to metabolize glucose, which provides energy for all living cells. Since 90% of the brain's total energy is used to send electrochemical impulses and maintain the neurons' ability to send these impulses, a deprivation of oxygen may produce profound impairments in thinking, movement, and emotion.

The most common forms of anoxia are (i) anemic anoxia, (ii) ischemic anoxia, and (iii) anoxic anoxia. Anemic anoxia occurs when not enough blood or hemoglobin is reaching the brain. Hemoglobin is the chemical in red blood cells responsible for carrying oxygen throughout the body. This may occur when someone is hemorrhaging from a gunshot wound. Ischemic anoxia occurs when there is not enough cerebral blood flow to the brain, such as when a person suffers an ischemic stroke. Anoxic anoxia occurs when not enough oxygen is present in the air to be absorbed by the body, as happens with high-altitude sickness.

The most frequent causes of ischemic anoxia include:
- Anesthesia accidents: 32%
- Cardiovascular disease: 29%
- Asphyxia, such as drowning: 16%
- Chest trauma: 10%
- Severe bronchial asthma: 3%
- Barbiturate poisoning: 3%

Symptoms of hypoxic-ischemic injury include cognitive deficits (thinking problems), weakness in all four extremities, abnormal movements, incoordination, visual disturbances, and the inability to follow a sequence of commands. Direct treatment of anoxia is limited. The general consensus is to maintain the body's general status, although some studies have suggested that the use of barbiturates may be helpful in the first 2-3 days after brain injury onset. Recovery may take months to years depending on the level of injury. Rehabilitation may include consulting professionals such as a physical therapist, a speech therapist and a neuropsychologist.
To find the shortest round trip to 50 chosen cities in Europe a mathematician would usually recruit a massive computer, a complex program and set aside plenty of time. Researchers at BT, however, found the solution in record time, with a workstation and a collection of 'software ants' – autonomous programs a few hundred lines long which, together, can solve enormously difficult problems by dealing with their own simple ones.

BT, which has developed the programs in the past year, says its method could be applied to many problems where a complex series of decisions is needed to achieve the best use of resources. Examples include searching for information on a number of databases, designing circuits on microchips, advising fighter pilots under multiple attack, or sending out telephone engineers to fix faults. The ants will also help to make software 'agents' designed to explore the information superhighways. Peter Cochrane, head of core technologies research at BT, says that with the amount of information that can be accessed from the superhighway doubling every three years, the system will have immense value because it will allow people to find information more quickly.

The task of finding the shortest distance between points is known as the travelling salesman problem. The number of possible solutions increases factorially: for four points there are 24 feasible routes, but for 30 there are 2.65 × 10³². The previous world best for a proven solution was 3,000 points, which took a Cray supercomputer 18 months to work out. BT's method has solved a problem involving 30,000 points on a workstation in 44 hours, and is accurate to within 4 per cent.

BT's solution to the 50-city problem is like a massive computer game played out on a computer representation of a map of Europe. Each of the ants is assigned a position on the map, and from then on has to obey simple rules. Ants cannot, for example, stray more than a certain distance from their nearest neighbours: they act as if they are all tied together on a long loop of string. If an ant finds a city it may breed, making a copy of itself. And the closer it is to a city, the stronger the attraction to it. If, however, an ant spends too long away from a city, it dies. The computer then generates a new one at some other point between the deceased's nearest neighbours.

To start solving the problem, the computer generates 50 ants joined in a loop and sets them loose to search for cities according to their simple instructions. Eventually they find cities, for which they are rewarded and survive, or they die. Evolution does the rest.

The scientists who devised this method, Shara Amin and Jose-Luis Fernandez of BT's systems research division at Martlesham Heath near Ipswich, believe they can reach a solution 10 times faster still by making the ants more self-sufficient. At present, the governing program has to inform the ants when they are near a city. Amin and Fernandez want the ants to be able to judge their own positions. The work – minus the algorithm – will be published in the July issue of Neural Computing Applications.
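The article gives only a qualitative description of the ants' rules, and BT's actual algorithm is unpublished. As a rough illustration of the idea, the sketch below implements a toy "elastic loop" heuristic in the same spirit: points on a closed loop are pulled toward nearby cities (more strongly the closer they are) and held near their neighbours on the loop, and the loop gradually settles onto a short tour. The breeding and dying rules are omitted, the ant count stays fixed, and every name and parameter is invented for the example.

```python
# Toy elastic-loop heuristic for the travelling salesman problem,
# loosely inspired by the software-ant description above. An invented
# reconstruction for illustration only, not BT's algorithm.
import math
import random

def elastic_loop_tsp(cities, n_ants=None, steps=4000,
                     city_pull=0.9, neighbour_pull=0.4, reach=0.2):
    n = n_ants or 3 * len(cities)
    cx = sum(x for x, _ in cities) / len(cities)
    cy = sum(y for _, y in cities) / len(cities)
    # Ants start on a small loop around the centroid, as if tied on string.
    ants = [[cx + 0.1 * math.cos(2 * math.pi * k / n),
             cy + 0.1 * math.sin(2 * math.pi * k / n)] for k in range(n)]
    for _ in range(steps):
        for k, (x, y) in enumerate(ants):
            tx, ty = min(cities, key=lambda c: math.dist(c, (x, y)))
            w = math.exp(-math.dist((x, y), (tx, ty)) / reach)  # closer = stronger
            lx, ly = ants[k - 1]                  # loop neighbours (wraps at 0)
            rx, ry = ants[(k + 1) % n]
            ants[k][0] = x + city_pull * w * (tx - x) + neighbour_pull * ((lx + rx) / 2 - x)
            ants[k][1] = y + city_pull * w * (ty - y) + neighbour_pull * ((ly + ry) / 2 - y)
    # Read the tour off the loop: visit cities in the order of their nearest ant.
    return sorted(cities, key=lambda c: min(range(n),
                                            key=lambda k: math.dist(c, ants[k])))

random.seed(42)
cities = [(random.random(), random.random()) for _ in range(30)]
tour = elastic_loop_tsp(cities)
length = sum(math.dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))
print(f"loop settled on a tour of length {length:.2f}")
```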
New Edition Basic Survival Teacher's Guide

- English Type: American English
- Published Date: 30 September 2003

The Teacher's Guide provides step-by-step notes to support teachers through the various stages of the lessons and to expand the lessons into further activities, conversations and discussions. A wealth of photocopiable material is included, such as answer templates, answer keys, and mid-course and end-of-course tests.
Writing a Definition Essay

You've selected a term or two, gathered denotations and connotations and other details, and created a working thesis statement. You're ready to draft your definition essay. The following activities will help you build a strong beginning, develop middle paragraphs, and create an ending that effectively wraps up your definition.

Writing the Beginning Paragraph

The first sentence or two of your definition essay needs to grab your reader's interest. You can experiment with a number of different strategies to write an effective lead.

Write a lead sentence. Experiment with leads for your essay using each strategy below. Read the examples for ideas. Then choose your favorite lead to start your essay. Make a copy of this Google doc or download a Word template.
- Start with an interesting quotation. "I was such a nerd, a complete geek, but then I was lucky enough to have a fancy career, where I can be like 'See, I'm not a nerd. Look, I'm in Vogue.' "
- Ask an engaging question. So, just how did the nerds come to rule the universe?
- Provide an anecdote. Fifty years ago, the only people who sat in front of computers were nerds. Now, all kinds of people carry computers in their pockets and read them in their palms and hold them in front of their faces and smile.
- Make a shocking statement. One way to beat a bully is to turn an insult into a badge of honor.

Write your beginning paragraph. Start with your lead, and then provide background information and develop a paragraph leading to your thesis statement.

Writing the Middle Paragraphs

Develop middle paragraphs that fully explore the meaning of your term(s). Start each paragraph with a topic sentence that names the main point. Thoroughly support each topic sentence using denotations, connotations, etymology, synonyms, antonyms, and other details. Make sure to select words that show your enthusiasm for the topic and connect to your reader.

Write your middle paragraphs. Develop a paragraph of support for each main point that backs up your thesis statement. Allow students to develop these paragraphs first if they wish; some students prefer to work from the details up to the thesis statement rather than in the reverse direction.

Writing the Ending Paragraph

Your ending paragraph draws your definition essay to an effective close. You can develop this paragraph using a number of different ending strategies.

Try ending strategies. Write a sentence for each ending strategy. Read the examples for ideas. Then consider using some or all of these sentences in your ending paragraph. Make a copy of this Google doc or download a Word template.
- Reflect on how the term has changed over time. Geeks began as unskilled carnival workers and ended up as the richest people in the world.
- Connect to popular culture. Heroes come in all forms, from the Hulk with his erupting rage and giant green fists to Doctor Who with a police call box and a sonic screwdriver.
- Use another powerful quotation. Felicia Day once noted, "The substance of what it means to be a geek is essentially someone who's brave enough to love something against judgment. The heart of being a geek is a little bit of rejection."
- Leave the reader with a strong final thought. We live at a time when sports fans proudly call themselves "baseball nerds" and geek out over the World Cup and organize fantasy football leagues.

Write your ending paragraph. Use some or all of the strategies you tried above as you build an ending paragraph for your definition essay.
Reading a Definition Essay

Read a student sample. As you read this draft, notice how the writer puts the parts together.

Sample Definition Essay

Get Your Nerd On

Beginning Paragraph In the days of traveling circuses, the lion tamer, trapeze artist, and tightrope walker were royalty. The bearded lady and wolf boy themselves had a certain cachet. Etymology But lowest of the low was the wild man whose only skill was biting the heads off chickens. That was the geek. Savage. Lowbrow. Grotesque. Thesis Statement In the last fifty years, the terms geek and nerd went from insults for social rejects to badges of honor for some of the most successful people in the world. How?

Middle Paragraphs While geek began its career as a name for an outcast sideshow carney, nerd got its start in Dr. Seuss's 1950 book If I Ran a Zoo:

And then, just to show them, I'll sail to Ka-Troo
And bring back an It-Kutch, a Preep, and a Proo,
A Nerkle, a Nerd, and a Seersucker too.

The nerd in the zoo was just like the geek in the carnival—an oddity that people would pay money to gawk at. Quotation In 1951, Newsweek gave the first definition of nerd for people: "In Detroit, someone who once would be called a drip or a square is now, regrettably, a nerd." This sense of being a social outcast persists in the modern definition of each term. Denotation Merriam-Webster defines a nerd as "an unstylish, unattractive, or socially inept person; especially one slavishly devoted to intellectual or academic pursuits," and defines a geek as "a person often of an intellectual bent who is disliked." The nerd is a socially awkward bookworm. The geek is a know-it-all.

Topic Sentence These negative definitions predominated in the middle of the last century. In 1965, Milton Bradley released the Mystery Date game, in which girls opened a little plastic door hoping for a handsome, athletic suitor rather than the bookish, socially awkward "dud." In 1975, National Lampoon published a poster that asked "Are You a Nerd?" and showed a gangly young man with black plastic glasses taped in the middle, a white button-down shirt with black tie and pocket protector, Farah Slacks that did not cover his white socks, and a pair of wingtips. The text below began, "Let's hope not. . . ." In 1984, the film Revenge of the Nerds presented the hilarious concept that a group of socially inept bookworms could somehow win out over the jock fraternity Alpha Beta and its cheerleader sorority, Pi Delta Pi.

Somehow, though, that's exactly what happened. Nerds and geeks became prominent in the computer revolution of the 1970s, but they didn't become socially acceptable until the Internet revolution of the 1990s, and didn't become masters of the universe until the social-media revolution of the 2000s. Fifty years ago, only losers spent any time sitting in front of computers. Now, everyone carries a pocket computer and checks it every few minutes and lifts it to smile and share and be popular. Bill Gates and his generation of nerds began by building computers in their garages and ended by becoming the richest people in the world. Even Hollywood has been taken over by nerds and geeks, with comic books and fantasy novels inspiring the biggest blockbusters. We've all lived through the Revenge of the Nerds, but this time it wasn't for laughs.

Connotation So, now that the outcasts have risen to the top of the heap, we should think a little bit about the shades of difference between the words nerd and geek.
Merriam-Webster offers them as synonyms for each other, but each has a slightly different focus. A nerd is "slavishly devoted to intellectual or academic pursuits," which connotes a serious, laborious, scholarly interest. A geek is "an enthusiast or expert especially in a technological field or activity," which connotes a joyful, active, techie interest. Nerds are obsessed introverts. Geeks are obsessed extroverts. Examples Now that both terms have shucked many of their negative connotations, they are used by people about all kinds of non-academic, non-techie subjects. We live at a time when sports fans proudly call themselves "baseball nerds" and geek out over the World Cup and organize fantasy football leagues. In addition to wine connoisseurs, we have beer nerds; in addition to gourmands, we have food geeks. We have election nerds and gardening geeks. Anyone with a serious, scholarly fascination can proudly wear the nerd badge, and anyone with an exuberant, wild-man interest can fly the geek banner. Ending Paragraph These onetime insults have undergone re-appropriation, a process by which a marginalized group takes on a slur and turns it into a source of pride. Though nerd and geek have lost most of their negative connotations, a sense of social isolation still exists within them. The actress Felicia Day reflected, "The substance of what it means to be a geek is essentially someone who's brave enough to love something against judgment. The heart of being a geek is a little bit of rejection." Though the bookish nerds have progressed from being the "duds" of Mystery Date to the CEOs of Amazon, Google, and Microsoft, they still sometimes feel that their intense interests leave them just outside of the circle of "popular kids." Show students how strong word choice creates interest: cachet, wild man, savage, lowbrow, grotesque, carney—and those appear just in the first paragraph. By selecting evocative nouns, precise verbs, and colorful modifiers, students can engage readers and show their investment in the topic.
Burning mouth syndrome is the medical term for ongoing (chronic) or recurrent burning in the mouth without an obvious cause. This discomfort may affect the tongue, gums, lips, inside of your cheeks, roof of your mouth (palate) or widespread areas of your whole mouth. The burning sensation can be severe, as if you scalded your mouth.

Burning mouth syndrome usually appears suddenly, but it can develop gradually over time. Unfortunately, the specific cause often can't be determined. Although that makes treatment more challenging, working closely with your health care team can help you reduce symptoms.

Symptoms of burning mouth syndrome may include:
- A burning or scalding sensation that most commonly affects your tongue, but may also affect your lips, gums, palate, throat or whole mouth
- A sensation of dry mouth with increased thirst
- Taste changes in your mouth, such as a bitter or metallic taste
- Loss of taste
- Tingling, stinging or numbness in your mouth

The discomfort from burning mouth syndrome typically has several different patterns. It may:
- Occur every day, with little discomfort when you wake, but become worse as the day progresses
- Start as soon as you wake up and last all day
- Come and go

Whatever pattern of mouth discomfort you have, burning mouth syndrome may last for months to years. In rare cases, symptoms may suddenly go away on their own or become less frequent. Some sensations may be temporarily relieved during eating or drinking. Burning mouth syndrome usually doesn't cause any noticeable physical changes to your tongue or mouth.

When to see a doctor

If you have discomfort, burning or soreness of your tongue, lips, gums or other areas of your mouth, see your doctor or dentist. They may need to work together to help pinpoint a cause and develop an effective treatment plan.

The cause of burning mouth syndrome can be classified as either primary or secondary.

Primary burning mouth syndrome

When no clinical or lab abnormalities can be identified, the condition is called primary or idiopathic burning mouth syndrome. Some research suggests that primary burning mouth syndrome is related to problems with taste and sensory nerves of the peripheral or central nervous system.

Secondary burning mouth syndrome

Sometimes burning mouth syndrome is caused by an underlying medical condition. In these cases, it's called secondary burning mouth syndrome.
Underlying problems that may be linked to secondary burning mouth syndrome include: - Dry mouth (xerostomia), which can be caused by various medications, health problems, problems with salivary gland function or the side effects of cancer treatment - Other oral conditions, such as a fungal infection of the mouth (oral thrush), an inflammatory condition called oral lichen planus or a condition called geographic tongue that gives the tongue a maplike appearance - Nutritional deficiencies, such as a lack of iron, zinc, folate (vitamin B-9), thiamin (vitamin B-1), riboflavin (vitamin B-2), pyridoxine (vitamin B-6) and cobalamin (vitamin B-12) - Allergies or reactions to foods, food flavorings, other food additives, fragrances, dyes or dental-work substances - Reflux of stomach acid (gastroesophageal reflux disease, or GERD) that enters your mouth from your stomach - Certain medications, particularly high blood pressure medications - Oral habits, such as tongue thrusting, biting the tip of the tongue and teeth grinding (bruxism) - Endocrine disorders, such as diabetes or underactive thyroid (hypothyroidism) - Excessive mouth irritation, which may result from overbrushing your tongue, using abrasive toothpastes, overusing mouthwashes or having too many acidic drinks - Psychological factors, such as anxiety, depression or stress Wearing dentures, even if they don't fit well and cause irritation, doesn't generally cause burning mouth syndrome, but dentures can make symptoms worse. Burning mouth syndrome is uncommon. However, your risk may be greater if: - You're a woman - You're perimenopausal or postmenopausal - You're over the age of 50 Burning mouth syndrome usually begins spontaneously, with no known triggering factor. However, certain factors may increase your risk of developing burning mouth syndrome, including: - Recent illness - Some chronic medical disorders such as fibromyalgia, Parkinson's disease, autoimmune disorders and neuropathy - Previous dental procedures - Allergic reactions to food - Traumatic life events Complications that burning mouth syndrome may cause or be associated with are mainly related to discomfort. They include, for example: - Difficulty falling asleep - Difficulty eating There's no known way to prevent burning mouth syndrome. But by avoiding tobacco, acidic foods, spicy foods and carbonated beverages, and excessive stress, you may be able to reduce the discomfort from burning mouth syndrome or prevent your discomfort from feeling worse. There's no one test that can determine if you have burning mouth syndrome. Instead, your doctor will try to rule out other problems before diagnosing burning mouth syndrome. Your doctor or dentist likely will: - Review your medical history and medications - Examine your mouth - Ask you to describe your symptoms, oral habits and oral care routine In addition, your doctor will likely perform a general medical exam, looking for signs of other conditions. You may have some of the following tests: - Blood tests. Blood tests can check your complete blood count, glucose level, thyroid function, nutritional factors and immune functioning, all of which may provide clues about the source of your mouth discomfort. - Oral cultures or biopsies. Taking and analyzing samples from your mouth can determine whether you have a fungal, bacterial or viral infection. - Allergy tests. Your doctor may suggest allergy testing to see if you may be allergic to certain foods, additives or even substances in dental work. - Salivary measurements. 
With burning mouth syndrome, your mouth may feel dry. Salivary tests can confirm whether you have a reduced salivary flow.
- Gastric reflux tests. These tests can determine if you have GERD.
- Imaging. Your doctor may recommend an MRI scan, a CT scan or other imaging tests to check for other health problems.
- Medication adjustment. If you take a medication that may contribute to mouth discomfort, your doctor may change the dose, switch to a different medication, or temporarily stop the medication, if possible, to see if your discomfort goes away. Don't try this on your own, because it can be dangerous to stop some medications.
- Psychological questionnaires. You may be asked to fill out questionnaires that can help determine if you have symptoms of depression, anxiety or other mental health conditions.

Treatment depends on whether you have primary or secondary burning mouth syndrome.

Secondary burning mouth syndrome

For secondary burning mouth syndrome, treatment depends on any underlying conditions that may be causing your mouth discomfort. For example, treating an oral infection or taking supplements for a vitamin deficiency may relieve your discomfort. That's why it's important to try to pinpoint the cause. Once any underlying causes are treated, your burning mouth syndrome symptoms should get better.

Primary burning mouth syndrome

There's no known cure for primary burning mouth syndrome and there's no one sure way to treat it. Solid research on the most effective methods is lacking. Treatment depends on your particular symptoms and is aimed at controlling them. You may need to try several treatment methods before finding one or a combination that helps reduce your mouth discomfort. And it may take time for treatments to help manage symptoms.

Treatment options may include:
- Saliva replacement products
- Specific oral rinses or lidocaine
- Capsaicin, a pain reliever that comes from chili peppers
- An anticonvulsant medication called clonazepam (Klonopin)
- Certain antidepressants
- Medications that block nerve pain
- Cognitive behavioral therapy to develop strategies to address anxiety and depression and cope with chronic pain

In addition to medical treatment and prescription medications, these self-help measures may reduce your symptoms and your mouth discomfort:
- Drink plenty of fluids to help ease the feeling of dry mouth, or suck on ice chips.
- Avoid acidic foods and liquids, such as tomatoes, orange juice, carbonated beverages and coffee.
- Avoid alcohol and products with alcohol, as they may irritate the lining of your mouth.
- Don't use tobacco products.
- Avoid spicy-hot foods.
- Avoid products with cinnamon or mint.
- Try different mild or flavor-free toothpastes, such as one for sensitive teeth or one without mint or cinnamon.
- Take steps to reduce stress.

Burning mouth syndrome can be uncomfortable and frustrating. It can reduce your quality of life if you don't take steps to stay positive and hopeful. Consider some of these techniques to help you cope with the chronic discomfort of burning mouth syndrome:
- Practice relaxation exercises, such as yoga.
- Engage in pleasurable activities, such as physical activities or hobbies, especially when you feel anxious.
- Try to stay socially active by connecting with understanding family and friends.
- Join a chronic pain support group.
- Talk to a mental health professional for strategies that can help you cope.

You're likely to start by seeing your family doctor or dentist for mouth discomfort.
Because burning mouth syndrome is associated with such a wide variety of other medical conditions, your doctor or dentist may refer you to another specialist, such as a skin doctor (dermatologist), an ear, nose and throat (ENT) doctor, or another type of doctor.

What you can do

Here's some information to help you get ready for your appointment:
- Ask if there's anything you need to do before the appointment, such as restrict your diet.
- Make a list of your symptoms, including any that may seem unrelated to your mouth discomfort.
- Make a list of key personal information, including any major stresses or recent life changes.
- Make a list of all medications, vitamins, herbs or other supplements that you're taking and the dosages.
- Bring a copy of all previous consultations and tests you've had about this problem.
- Take a family member or friend with you, if possible, for support and to help you remember everything.
- Prepare questions ahead of time to ask your doctor.

Questions to ask your doctor may include:
- What's likely causing my symptoms or condition?
- Other than the most likely cause, what are other possible causes?
- What kinds of tests do I need?
- Is my mouth discomfort likely temporary or chronic?
- What's the best course of action?
- What are the alternatives to the primary approach that you're suggesting?
- I have these other health conditions. How can I best manage them together?
- Are there any restrictions that I need to follow?
- Should I see a specialist?
- Is there a generic alternative to the medicine you're prescribing?
- Are there any printed materials that I can have? What websites do you recommend?

Don't hesitate to ask additional questions during your appointment.

What to expect from your doctor

Your doctor is likely to ask you a number of questions, such as:
- When did you begin experiencing symptoms?
- Have your symptoms been continuous or occasional?
- How severe are your symptoms?
- What, if anything, seems to improve your symptoms?
- What, if anything, seems to worsen your symptoms?
- Do you use tobacco or drink alcohol?
- Do you frequently eat acidic or spicy foods?
- Do you wear dentures?

Your doctor will ask additional questions based on your responses, symptoms and needs. Preparing and anticipating questions will help you make the most of your time.
A jury is a group of ordinary citizens who make the final decision, called the verdict, in some court cases. Such cases are called jury trials. Jury trials are a fundamental part of the system of justice in the United States. The Constitution says that "The Trial of all Crimes, except in Cases of Impeachment, shall be by Jury" and the Sixth Amendment amplifies this:
- In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial, by an impartial jury of the State and district wherein the crime shall have been committed, which district shall have been previously ascertained by law, and to be informed of the nature and cause of the accusation; to be confronted with the witnesses against him; to have compulsory process for obtaining witnesses in his favor, and to have the Assistance of Counsel for his defense.

Most trials, of any kind, take place in state courts and are governed by state law. Although the Constitution guarantees the right to a jury trial in criminal cases, the details are left to the states and vary from state to state. A jury often, but not always, consists of twelve persons. In civil cases—where no crime is involved—parties do not necessarily have the right to a trial by jury.

Role of judge and jury

In a jury trial, the general principle is that the judge rules on issues involving the law, but the jury decides on the facts. For example, suppose a car hits a pedestrian. Usually that's the driver's fault. But suppose this particular accident actually seems to have been caused by the pedestrian. The judge would study the particular law in that particular state for that particular kind of situation. Maybe that state's law says a pedestrian who is hit by a car is always entitled to damages. Maybe that state's law says a pedestrian who is hit by a car is not entitled to damages if it's completely the pedestrian's fault. Maybe the law says it depends on whether the pedestrian is at least half at fault. Most likely there are many different laws that apply. Most likely the law is complicated and requires a legal expert, like a judge, to figure out how it applies to the particular case. The judge will be listening to the lawyers on both sides. The lawyers on each side will do their best to point out any detail in the law that helps their client. The judge makes the final call on what the law means. But it is the jury, not the judge, that listens to the driver's and pedestrian's stories and to what other witnesses and the police said, and decides who is at fault, and by how much.

Juries are supposed to be impartial and unbiased. They swear a solemn oath, and most jurors take that oath very seriously. The process of jury selection tries to guarantee that juries will be unbiased. Of course, the lawyers on both sides are trying their best to make sure that the jury selection is biased—to favor their own client. The process varies from state to state and locale to locale. It consists of several steps. Typically citizens are chosen at random, perhaps from the voting rolls. They are mailed a "summons" to report for jury duty. Jury duty can take days, weeks, or more. A juror must take time off from work. Sometimes jurors are paid a small amount, but it is usually much less than they would have earned at their regular job. Busy people may want to avoid serving. People in some professions, such as doctors and policemen, may be excused from serving, and there can be other reasons to exempt people from serving.
The people who arrive at the courthouse to serve are typically assigned to a jury pool, containing more jurors than the number needed for the trial. They are told something about the case. In a process called the voir dire, the jurors are questioned to see whether there are any reasons why they could not be impartial. For example, the judge might ask the jurors to raise their hands if they are friends or acquaintances of any of the people involved in the case. Those jurors would be dismissed, because a juror who was a friend of a witness, for example, would be inclined to believe that witness. The lawyers for each side also get a chance to ask questions. If a juror gives an answer that the judge agrees shows a lack of impartiality, the juror can be dismissed "for cause." For example, a juror who says he is against the death penalty could not be an impartial juror in a death penalty case. Each lawyer is usually also allowed a certain limited number of "peremptory challenges." That is, the lawyer can ask for a juror to be dismissed without giving a reason.

Juries are not used in all countries

Although Americans think of jury trials as fundamental to justice, they are by no means universal. They are characteristic of the British and French systems of justice. Jury trials were introduced in Russia in 1864, abolished by the Russian revolution in 1917, and reintroduced in 1993. Many Europeans find it baffling and inappropriate to hand over such an important step in the administration of justice to a panel of non-expert citizens.
On Your Mark, Get Set, READ!

Children read slowly when they first begin to read, and they usually have difficulty comprehending the text when they read slowly. In order to read faster and more smoothly, the child must learn to read fluently. A child will enjoy reading when he can decode words automatically and effortlessly. For children to read fluently, they need to read and reread connected texts. On Your Mark, Get Set, READ will help children read faster and more smoothly.

Materials:
* Sentence strips ("My mom sat at the park", "We had fun at the ball park")
* 2 students for every sentence strip
* Variety of books for children to read
* Record chart for every student
* Football field runner, monkey with bananas

1. Okay, let's talk about the importance of reading smoothly and faster. When we read smoothly, we comprehend the text much better. Stopping all of the time throws us off track because we are too busy worrying about one difficult word. Listen closely as I read a sentence as a beginning reader would read it. "M-y-m-o-o-m-s-s-a-t-a-a-t-t-t-t-th-the-p-p-pa-par-park." Sounds funny, doesn't it? Now I will read the same sentence the way a fluent reader would read it. "My mom sat at the park." Did you notice the difference? It was better the second time, right?

2. Now I am going to have you get into groups of two. I will give each group a sentence to work with. Pay attention to the way it sounds the first time you read your sentence. Then I want you to read it to yourself five times through. I know it seems like a lot, but your fluency will improve by repeating the sentence several times. Next I want you to read it aloud to your partner again. Did you think it sounded better the second time? Way to go!

3. You have all done so well. Now we will try it with one of your books! One of you read the book while the other one times you. You will read as much as you can for one minute. Do not panic if you come to a word you do not know; use cover-up. Read on if the word is still too difficult. Ask your partner for help if you still can't figure it out. We will do this several times so your reading becomes smoother and more fluent. I will be walking around to help you if you need it. After you read, count your words, move your object up the chart, and write the number of words beside it. Okay, switch readers! Afterwards, your partner will start over. If you read more words the second time, move your object up the chart. If you read fewer words the second time, move your object down the chart. You will be surprised by your improvement! Want to try? On your Mark, Get Set, READ!

I will assess the students by observing their beginning number and ending number on their record chart.

Reference: Leah Steiner, Ready, Set, Read.
New approaches to forestry and agriculture

Forestry and agriculture are important sources of carbon dioxide, methane, and nitrous oxide. Forests contain vast quantities of carbon. Some forests act as "sinks" by absorbing carbon from the air, while forests whose carbon flows are in balance act as "reservoirs". Deforestation and changes in land use make the world's forests a net source of carbon dioxide. As for agriculture, it accounts for over 20% of the human-enhanced greenhouse effect. Intensive agricultural practices such as livestock rearing, wet rice cultivation, and fertilizer use emit 58% of human-related methane and much of our nitrous oxide. Fortunately, measures and technologies that are currently available could significantly reduce net emissions from both forests and agriculture – and in many cases cut production costs, increase yields, or offer other socio-economic benefits.

Forests will need better protection and management if their carbon dioxide emissions are to be reduced. While legally protected preserves have a role, deforestation should also be tackled through policies that lessen the economic pressures on forest lands. A great deal of forest destruction and degradation is caused by the expansion of farming and grazing. Other forces are the market demand for wood as a commodity and the local demand for fuel-wood and other forest resources for subsistence living. These pressures may be eased by boosting agricultural productivity, slowing the rate of population growth, involving local people in sustainable forest management and wood-harvesting practices, adopting policies to ensure that commercial timber is harvested sustainably, and addressing the underlying socio-economic and political forces that spur migration into forest areas.

The carbon stored in trees, vegetation, soils, and durable wood products can be maximized through "storage management". When secondary forests and degraded lands are protected or sustainably managed, they usually regenerate naturally and start to absorb significant amounts of carbon. Their soils can hold additional carbon if they are deliberately enriched, for example with fertilizers, and new trees can be planted. The amount of carbon stored in wood products can be increased by designing products for the longest possible lifetimes, perhaps even longer than the lifetime of living wood.

Sustainable forest management can generate forest biomass as a renewable resource. Some of this biomass can be substituted for fossil fuels; this approach has a greater long-term potential for reducing net emissions than does growing trees to store carbon. Establishing forests on degraded or non-forested lands adds to the amount of carbon stored in trees and soils. In addition, the use of sustainably-grown fuel-wood in place of coal or oil can help to preserve the carbon reservoir contained in fossil fuels left unneeded underground.

Agricultural soils are a net source of carbon dioxide - but they could be made into a net sink. Improved management practices designed to increase agricultural productivity could enable agricultural soils to absorb and hold more carbon. Low-tech strategies include the use of crop residues and low- or no-tillage practices, since carbon is more easily liberated from soil that is turned over or left bare. In the tropics, soil carbon can be increased by returning more crop residues to the soil, introducing perennial (year-round) cropping practices, and reducing periods when fallow fields lie bare.
In semi-arid areas, the need for summer fallow could be reduced through better water management or by the introduction of perennial forage crops (which would also eliminate the need for tillage). In temperate regions, soil carbon could be increased by the more efficient use of animal manure. Methane emissions from livestock could be cut with new feed mixtures. Cattle and buffalo account for an estimated 80% of annual global methane emissions from domestic livestock. Additives can increase the efficiency of animal feed and boost animals' growth rates, leading to a net decrease in methane emissions per unit of beef produced. In rural development projects in India and Kenya, adding vitamin and mineral supplements to the feed mixture of local dairy cows has significantly increased milk production and decreased methane emissions. Methane from wet rice cultivation can be reduced significantly through changes in irrigation and fertilizer use. Some 50% of the total cropland used to grow rice is irrigated. Today's rice farmers can only control flooding and drainage in about one third of the world's rice paddies, and methane emissions are higher in continually flooded systems. Recent experiments suggest that draining a field at specific times during the crop cycle can reduce methane emissions dramatically without decreasing rice yields. Additional technical options for reducing methane emissions are to add sodium sulfate or coated calcium carbide to the urea-based fertilizers now in common use, or to replace urea altogether with ammonium sulfate as a source of nitrogen for rice crops. Nitrous oxide emissions from agriculture can be minimized with new fertilizers and fertilization practices. Fertilizing soils with mineral nitrogen and with animal manure releases N2O into the atmosphere. By increasing the efficiency with which crops use nitrogen, it is possible to reduce the amount of nitrogen needed to produce a given quantity of food. Other strategies aim to reduce the amount of nitrous oxide produced as a result of fertilizer use and the amount of N2O that then leaks from the agricultural system into the atmosphere. One approach, for example, is to match the timing and amount of nitrogen supply to a crop's specific demands. A fertilizer's interactions with local soil and climate conditions can also be influenced by optimizing tillage, irrigation, and drainage systems. Storing carbon in agricultural soils can also serve other environmental and socio-economic goals. Often, it improves soil productivity. In addition, practices such as reduced tillage, increased vegetative cover, and greater use of perennial crops prevent erosion, thus improving water and air quality. As a result of these benefits, carbon storage practices are often justified above and beyond their contribution to minimizing climate change. Care must be taken, however, to ensure that carbon storage does not lead to higher nitrous oxide levels as a result of increased soil moisture or fertilizer use.
The topic I chose is calculating the area under a curve in calculus. The prerequisites for this topic are algebra and some calculus. Prerequisites are the topics you need to understand before tackling a new topic, because the new topic uses them as a foundation from which to build. In order to learn how to calculate the area under a curve, students need to have a deep understanding of functions, know how to manipulate functions algebraically, and understand the concept of limits. It can be really difficult for students to learn calculus without a strong background in algebra, since calculus frequently requires skills such as combining like terms, simplifying, and factoring. If students get tripped up at those steps, their attention goes to the algebra rather than the calculus. Additionally, a deep understanding of limits is essential to understanding calculus.

Technology can remove these prerequisites by giving visual demonstrations of how to find the area under a curve. Students don't need to know all the specifics of algebra to be able to understand the visual of a series of rectangles under a curve as an approximation of area. Technology can also remove the prerequisite of limits, since a graph could be animated in such a way that it demonstrates the concept by starting with a few rectangles, then adding more and more. This shows how the more rectangles you use, the closer your approximation is, and with this visual, students could probably make the mental leap to the idea of shrinking the rectangles' widths down to nothing.

In that same vein, technology could also be used to demonstrate the concept of limits using Archimedes' method of exhaustion to approximate pi. Students could use a circle with inscribed and circumscribed polygons to understand that one underestimates the circumference and one overestimates it. Technology would allow students to watch as the number of sides increases, getting a very good visual demonstration of how the true value lies between the two perimeters, and of how the more sides the polygons have, the closer they come to the true value.

I like the idea of younger students getting some exposure to calculus before completing the required sequence of courses, because there are a lot of results and applications that would make the material more interesting and relevant for them. I think that a whole curriculum without prerequisites might work for students who don't intend to pursue math education. A lot of the standard math curriculum is designed to prepare students for calculus, and a lot of students will never take calculus. I think there could be some interesting and meaningful math classes where students learn about a variety of topics without necessarily getting the traditional prerequisites for each. However, I wouldn't want to remove all prerequisites from calculus, since those skills are essential for students who intend to pursue advanced mathematics education.
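Both demonstrations described above are easy to mock up in a few lines of code. The sketch below is my own illustration (the function names and example curve are invented): the first part approximates the area under a curve with left-endpoint rectangles and shows the estimate converging as the rectangles shrink; the second squeezes pi between inscribed and circumscribed regular polygons, doubling the number of sides at each step.

```python
# Sketch: area under a curve via Riemann sums, plus Archimedes-style
# polygon bounds on pi. Illustrations only, not a full lesson plan.
import math

def riemann_sum(f, a, b, n):
    """Approximate the area under f on [a, b] with n left-endpoint rectangles."""
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

# Area under y = x^2 on [0, 1] is exactly 1/3; watch the estimate converge.
for n in (4, 16, 64, 256, 1024):
    print(n, riemann_sum(lambda x: x * x, 0.0, 1.0, n))

def archimedes_pi(doublings=6):
    """Squeeze pi between circumscribed (upper) and inscribed (lower)
    regular-polygon semiperimeters, starting from hexagons on a unit circle."""
    upper, lower = 2 * math.sqrt(3), 3.0      # hexagon semiperimeters
    for _ in range(doublings):
        upper = 2 * upper * lower / (upper + lower)  # harmonic mean
        lower = math.sqrt(upper * lower)             # geometric mean
    return lower, upper

lo, hi = archimedes_pi()
print(lo, "< pi <", hi)   # the bounds tighten each time the sides double
```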
Many children struggle to learn how to spell words properly. Trouble with spelling can be compounded by dyslexia, a learning disorder that stems from the brain's inability to process symbols such as letters properly, according to MedlinePlus. Dyslexic children need different techniques and methods than other students to learn to recognize words and spell them properly. Patience is key when teaching a dyslexic child to spell, as it may take a few years to bring a child up to grade level, according to Bright Solutions for Dyslexia.

Determine what type of learning style your child has. This will help you decide which methods will work best. Some children are auditory learners, according to the brochure "Helping Your Dyslexic Child Learn to Spell" from Staffordshire County Council. Auditory learners may struggle with writing, but they do well talking and spelling words aloud. Your child may be a kinesthetic learner if he likes to tackle projects with his hands or processes information better if he is moving or doing something.

Sing the spelling of the word with your child if he is an auditory learner. For instance, make up a song that includes the spellings of all his vocabulary words for the week. Sing the song over and over so that he gets the spellings down.

Use physical objects, such as a pile of salt, to help your kinesthetic learner to spell. Have her trace the letters of the word into the salt so that she gets a feel for the word and its spelling. You may also want to try forming the letters of the words with clay.

Try a multi-sensory approach to spelling, such as the Orton-Gillingham method, which was developed in the 1930s. The method first teaches students to recognize the phonemes, or sounds, of the English language, and then how to recognize the written versions of those sounds. Using this method, you can combine an auditory style of learning, such as singing the letters of a word, with a kinesthetic style, such as tracing the letters in salt.

Let the child use a spell checker on a word processor after he has written a paper or as he is working through his homework. Some dyslexic students know the words are spelled wrong but are unsure of how to fix them, so a spell checker can help the child learn correct spelling, according to the Dyslexia Parents Resource.
Principle of microscopic reversibility, principle formulated about 1924 by the American scientist Richard C. Tolman that provides a dynamic description of an equilibrium condition. Equilibrium is a state in which no net change in some given property of a physical system is observable; e.g., in a chemical reaction, no change takes place in the concentrations of reactants and products, even though, as the Dutch chemist J.H. van't Hoff had already recognized, this condition results from the equality of the forward and backward rates of a reversible reaction. According to the principle of microscopic reversibility, at equilibrium there is continuous activity on a microscopic (i.e., atomic or molecular) level, although on a macroscopic (observable) scale the system may be considered as standing still. There is no net change favouring any one direction, because whatever is being done is being undone at the same rate. Thus, for a chemical reaction at equilibrium, the amount of reactants being converted to products per unit time is exactly matched by the amount being converted to reactants (from products) per unit time.

The principle of microscopic reversibility, when applied to a chemical reaction that proceeds in several steps, is known as the principle of detailed balancing. Basically, it states that at equilibrium each individual reaction occurs in such a way that the forward and reverse rates are equal.
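As a concrete illustration (the notation below is mine, not the source's), detailed balancing for the simplest reversible step, A ⇌ B, equates the forward and reverse rates at equilibrium, which in turn ties the equilibrium constant to the two rate constants:

```latex
% Detailed balance for the elementary step A <-> B at equilibrium:
\begin{align}
  k_{f}\,[\mathrm{A}]_{\mathrm{eq}} &= k_{b}\,[\mathrm{B}]_{\mathrm{eq}}
    && \text{(forward rate equals backward rate)} \\
  K_{\mathrm{eq}} &= \frac{[\mathrm{B}]_{\mathrm{eq}}}{[\mathrm{A}]_{\mathrm{eq}}}
    = \frac{k_{f}}{k_{b}}
    && \text{(equilibrium constant from the rate constants)}
\end{align}
% For a multi-step mechanism, detailed balancing requires this equality
% to hold separately for every elementary step, not merely overall.
```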
Climate change, land-use change, pollution and exploitation are among the main drivers of species' population trends; however, their relative importance is much debated. We used a unique collection of over 1,000 local population time series in 22 communities across terrestrial, freshwater and marine realms within central Europe to compare the impacts of long-term temperature change and other environmental drivers from 1980 onwards. To disentangle different drivers, we related species' population trends to species- and driver-specific attributes, such as temperature and habitat preference or pollution tolerance. We found a consistent impact of temperature change on the local abundances of terrestrial species. Populations of warm-dwelling species increased more than those of cold-dwelling species. In contrast, impacts of temperature change on aquatic species' abundances were variable. Effects of temperature preference were more consistent in terrestrial communities than effects of habitat preference, suggesting that the impacts of temperature change have become widespread for recent changes in abundance within many terrestrial communities of central Europe.

Analyses of long-term trends in species' populations, such as the Living Planet Index, show global declines in abundances1,2. Understanding the cause of changes in species' abundances is crucial to assess consequences for ecosystem functioning3, range shifts4 and extinction risk, and for making conservation decisions5. Much research has focused on the possible future impacts6 of climate change, but climate change has already affected species in multiple ways, with range shifts detected in diverse taxa7,8. Species' abundances are potentially more sensitive to climate change than range boundaries, which capture only a binary presence/absence change in abundance9,10. However, the effects of climate change that have already occurred on species' abundances are much less recognized. Population abundances are affected by many environmental drivers, including habitat loss and degradation, along with pollution, invasive species and exploitation1,2,11. Until now, the impact of climate change on population trends and how it compares with other large-scale drivers has not been assessed across major taxonomic groups and environmental realms.

Temporal changes in the abundances of organisms have been used to infer the impact of particular environmental drivers on communities. For instance, the effect of nitrogen pollution on a particular lichen species depends on its species-specific nitrogen tolerance. Consequently, declines in the abundance of nitrogen-sensitive lichens have been used as a bioindicator of pollution12. Thus, given sufficiently detailed species-level knowledge, differential population trends of species according to their particular attributes (that is, specific characteristics of the species) can be used as a bioindicator of the impacts of environmental change. Such attribute-based approaches have a number of advantages. First, they integrate the effects of the components of environmental change that are most relevant to the organism, where environmental data are often either unavailable or complex to summarize. For example, declines of farmland birds have highlighted the negative impacts of agricultural intensification, mediated by various changes, including seasonal land-use practices, and fertilizer and pesticide usage13,14.
Second, observed species' responses integrate the effects of environmental change at the spatial and temporal scales that matter to the organism, for instance if effects act within particular time windows15 or spatial scales16.

We used a species attribute-based approach to test for signals of long-term temperature change on the abundances of species within terrestrial, freshwater and marine communities. In broad terms, we first aimed to detect population trends and then to attribute these trends to long-term temperature change17. If temperature change had affected abundances, we expected that some species had increased or that others had decreased. Changes in abundances can be driven by many factors, but long-term trends of abundance are most probably due to deterministic factors such as the persistent effects of a long-term change in the environment. Although such trends may correlate with temperature trends, they may also correlate with trends in other long-term drivers of biodiversity change. To attribute the population trends to temperature change, we related the variation in population trends within each community to species' temperature preferences. Because the impact of temperature change on a species can be predicted to depend on its temperature preference, more positive trends of warm-dwelling species over cold-dwelling species within each community imply a signal of climate change. Thus, we used the strength of the relationship between species' temperature preferences and long-term population trends within each community as an indicator of climate change (Fig. 1).

We applied our approach to 22 long-term local or regional community datasets within central Europe, including abundance data for taxa from 40 classes (from algae to mammals). This represents, to our knowledge, the most taxonomically diverse analysis on population trends in Europe to date. Each dataset comprised 9–130 species for which population data were collected over a 12–34 year time span (1980 onwards) (Fig. 2a and Supplementary Tables 1 and 2). The datasets cover a broad range of habitats (forest, agricultural land, grassland, sand dunes, wetland, heathland, lakes, rivers, sea), but we cannot assume they are truly representative—long-term sampling is rarely done in highly disturbed environments. Our study profits from the inclusion of groups that are rarely studied in climate change assessments, such as soil invertebrates, which might show different responses from commonly studied mobile taxa, such as birds.

For each species, we calculated its long-term population trend and its temperature preference using European distribution data and average temperature maps. For each community dataset, we built regression models that related population trends to species attributes affecting sensitivity to particular environmental drivers (see Table 1; temperature preference for temperature change, habitat preference for land-use change, pollution tolerance (for example, nitrogen tolerance) for pollution). The regression models also included attributes that might further modify species' responses (such as habitat breadth and dispersal ability, affecting the adaptive capacity of individuals, and life span or age at maturity, affecting population resilience)18 (Supplementary Fig. 1 shows an outline of the methods and Supplementary Table 3 shows the attributes tested for each dataset).
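In its simplest form, the attribute-based step described here reduces to a per-community regression of population trends on species attributes. The sketch below is a stripped-down illustration with made-up data, not the authors' code: each species' trend is estimated as the slope of log abundance over years and then correlated with that species' temperature preference, so that a positive within-community relationship is read as a climate change signal.

```python
# Illustrative sketch (invented data, not the study's code): estimate each
# species' long-term trend as the slope of log10(abundance) against year,
# then ask whether trends are more positive for warm-dwelling species.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1980, 2010)

# Hypothetical community: 40 species with known temperature preferences.
temp_pref = rng.uniform(5, 20, size=40)          # degrees C, assumed attribute
true_effect = 0.002 * (temp_pref - temp_pref.mean())
abund = 10 ** (2 + np.outer(true_effect, years - years[0])
               + rng.normal(0, 0.05, size=(40, len(years))))

# Step 1: per-species population trend (slope of log abundance vs. year).
trends = np.array([stats.linregress(years, np.log10(a)).slope for a in abund])

# Step 2: within-community signal = correlation of trend with preference.
r, p = stats.pearsonr(temp_pref, trends)
print(f"climate-change signal: r = {r:.2f}, p = {p:.3g}")
# r > 0 means warm-dwelling species increased relative to cold-dwelling ones.
```

The real analysis additionally controls for other attributes (habitat preference, pollution tolerance, life-history traits) and combines the per-community statistics by meta-analysis, as described next.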
Regression statistics from each dataset were combined by meta-analysis, allowing control of dataset-level effects such as number of species and sampling sites, start year, time span and temperature trend over the study period (Supplementary Table 4). From this combined analysis, we tested (1) whether the temperature preferences of species are generally positively associated with their population trends, as a signal of the impact of climate change in terrestrial, freshwater and marine realms, and (2) the relative strengths of these climate change signals compared with those of land-use change, pollution and exploitation.

Average annual temperatures in the study areas had increased (mean ± s.e.m., 0.33 ± 0.07 °C per decade; Supplementary Fig. 2 and Supplementary Table 5) and this trend did not significantly differ among realms. Local temperature trends for each dataset were not always significant over the time period of data collection, but they pointed towards positive trends when analysed from the 1980s onwards (Supplementary Table 5). Overall, almost half of the species' populations showed a significant abundance trend (47%, 552/1,167; Supplementary Fig. 3). The percentage of populations with significant trends was 61% (132/216) in the marine realm, 48% (323/680) in the terrestrial realm and 35% (97/271) in the freshwater realm. Positive trends, that is, increases of abundance, were more common in the marine and terrestrial realms (62% and 60% of the significant trends, respectively), while negative trends were more frequent (60%) in the freshwater realm.

Averaging across all datasets, there was a significant relationship between species' temperature preferences and population trends (correlation coefficient (r) = 0.164, 95% confidence interval (CI) = 0.095, 0.234). Although the difference among realms was not statistically significant, only the effect in the terrestrial communities had a CI that did not overlap zero (Fig. 2b; r = 0.165, 95% CI = 0.046, 0.280; predicted at average start year, number of species and sampling sites). Thus, population trends were positively related with temperature preferences in terrestrial communities; that is, populations of species with warmer temperature preferences increased more than those of species with colder temperature preferences. We found the strongest evidence of impacts for the bird, butterfly, ground beetle, springtail and lichen datasets (Supplementary Fig. 4). In contrast, average effects were not significant in the freshwater and marine communities, although we detected a signal in the marine fish dataset (Fig. 2). Such differences among realms might partly exist because some of the time series from the freshwater and marine communities were shorter, having begun more recently, reflecting the lesser extent of aquatic long-term monitoring. However, average realm effects were robust and independent of dataset characteristics (start year, number of species and sampling sites) as well as of different data weightings or subsampling (Supplementary Figs 5 and 6). Pooling the freshwater and marine data to achieve similar numbers of datasets (terrestrial, n = 10; aquatic, n = 12) still gave a non-significant average effect across the aquatic communities, although it tended to be positive (aquatic effect size: 0.08, 95% CI = −0.01, 0.18; predicted at average start year, number of species and sampling sites).
To examine whether the relationship between temperature preference and population trend was mostly driven by increases of warm-dwelling species or decreases of cold-dwelling species, we tested whether species in the upper and lower temperature preference quartiles had positive and negative trends, respectively. Increases of warm-dwelling species were found for birds, butterflies, springtails and lichens as well as marine fish, while decreases of cold-dwelling species were only seen in birds and ground beetles (Supplementary Fig. 7). On average across terrestrial species, warm-dwelling species had increased (difference of trends from zero, z-score (z) = 2.26, P = 0.02), while aquatic warm-dwelling species had not (z = −0.27, P = 0.78).

Although habitat preferences were significant for some taxa, such as farmland birds (Supplementary Fig. 8), the average effect across all ten terrestrial communities did not reach statistical significance (z = 1.54, P = 0.12; Fig. 3). There was an effect of pollution tolerance in lichen communities (z = 4.21, P < 0.01), with increases of nitrophilous species 19, but not in the plant community; this was not tested for the other eight datasets because of a lack of information on nitrogen/nutrient preferences. In contrast, in freshwater communities, species preferring low-nutrient environments had more positive population trends (z = −2.37, P = 0.02; Fig. 3). Effects of exploitation were detected for marine fish (z = −3.99, P < 0.01), but not for freshwater fish (z = −1.19, P = 0.24). Commercially exploited marine fish had less positive population trends than non-commercial fish (Fig. 3).

We tested for climate change signals on population trends across the broadest range of taxa in Europe to date. The long-term increases and decreases of species' abundances provided evidence for a long-term driver affecting these communities. Based on the relationship between species' temperature preferences and population trends, we interpret our results as showing an average effect of temperature change in the terrestrial communities and more variable effects in the aquatic communities. Although other routes through which climate change might affect communities, such as biotic interactions, are increasingly debated 20, our findings suggest that direct effects of warming are widely important in the terrestrial realm.

Habitat loss, fragmentation and degradation are among the leading causes of biodiversity loss in the past century. However, land conversion to cropland peaked in the 1950s 21. Although past land-use change is still of great importance for spatial patterns of species' abundances, it may be less so for recent temporal changes of abundance within the remaining local communities of central Europe. Our terrestrial datasets may be biased towards areas where land-use change has been low, but recent effects of land-use change might now be limited to specific localities, where change is still occurring, and to particular taxa, such as farmland birds 22 and grassland butterflies 23, that are affected by such change. Indeed, recent changes in the human footprint, based on human population size, land use and infrastructure, suggest an improvement (using data between 1993 and 2009) in many parts of Europe 24. In contrast, communities in most localities are experiencing some temperature change, suggesting that the impacts of climate change are now more geographically widespread than those of land-use change.
The higher heat capacity of water may buffer aquatic communities from rapid temperature changes. However, this would not prevent long-term changes and, like others 25, we did find a climate change signal in marine fish. Patterns from local freshwater fish and benthic invertebrate communities in France 26,27 have also suggested community shifts towards warm-dwelling or thermally tolerant species, which we did not observe in our freshwater datasets. Impacts on aquatic groups might be locally variable, depending on the landscape context. Other long-term environmental drivers, especially changes in external nutrient load, may have overridden any effects of temperature change on long-term population trends in the communities in our analysis. This driver was suggested by the effect of pollution-related attributes on the population trends of freshwater species and is consistent with recent declines in nutrient loads of lakes and rivers in Europe 28 (an outcome of improved wastewater treatment). As information on pollution-related attributes was missing for many freshwater species, this community shift should be re-assessed as additional data become available. Exposure to weaker temperature change in the marine and freshwater communities would also explain the less consistent climate change signal in these communities. Although this interpretation was not supported by annual time series of average daily temperatures from the sites, this summary variable might not capture the temperature change relevant for aquatic organisms. Our analysis also does not exclude climate change impacts in aquatic systems being mediated by alternative routes, for instance, by changes in river discharge 27 and patterns of thermal stratification 29.

Our cross-dataset assessment suggests that effects of temperature change may differ between terrestrial and aquatic communities. Temperature preference was the most consistent predictor of recent population trends in the terrestrial realm, indicating that temperature change is important for different kinds of organisms in different localities. The Community Temperature Index, similar in philosophy, has also been used to show increases in the proportion of warm-dwelling species over cold-dwelling ones, especially for birds and butterflies 10,30, as an indicator of climate change impacts. However, our multiple regression approach simultaneously accounted for the effects of other species attributes (see Supplementary Table 3) on population abundance before interpreting the effect of species' temperature preference. This provides more confidence that any estimated effects are due to temperature change rather than some other driver 31.

The simplicity of our approach meant it was practical enough to be applied across a broad range of species. However, there are many challenges to cross-taxa analysis. As much as possible, we have corrected for effects of variation in dataset attributes on our findings, but continued sampling, especially in freshwater and marine communities, which have been less sampled, is essential. Inferring species' temperature preferences from coarse distribution data is complicated by differences between species' fundamental and realized niches 32 and microclimatic variation 33. In particular, estimating the thermal tolerances of freshwater organisms is hindered by the lack of large-scale freshwater temperature maps.
Including physiological measurements of species' thermal tolerances would strengthen the conclusions that could be made from our approach, but such data are limited to few species. Unfortunately, the data available (on populations, distributions and species attributes) for different taxa still vary in quality; they are most probably of the highest quality for birds. Although trait databases are now being developed for organisms such as beetles 34 and soil organisms 35, there is still less information, of more variable quality, available for invertebrates. Because we were able to estimate temperature preference of organisms on a finer scale than habitat preference, this might have increased our ability to detect temperature effects over habitat effects. However, even coarsening the temperature preference data (comparing species in the upper tertile versus those in the lower tertile of temperature preferences; Supplementary Fig. 9) still suggested that warm-dwelling species had more positive population trends than cold-dwelling species in the terrestrial communities. Finally, it is important to emphasize that we focused on the effects of temperature change on recent population trends. An absence of an effect on population trends does not rule out species responding to climate change in some other way, such as phenology 36.

Although vital to inform assessments of the Convention on Biological Diversity targets and for conservation decision-making, long-term datasets on population abundances remain scarce. Clearly, land-use change was the predominant factor affecting terrestrial communities during the twentieth century. Our conclusions are restricted to changes in local communities over the past two or three decades and concern which drivers have been more widespread. Land-use change has the potential to strongly affect local communities, but its impacts are spatially variable. Our results suggest that many communities have been less exposed to and less affected by land-use change over this time period than previously. In contrast, climate change is a widespread driver and thus has the potential to affect populations over a large scale. We find stronger evidence that climate change has affected the recent abundance changes within many central European terrestrial communities, compared with aquatic communities, particularly leading to increases of species with warm temperature preferences.

Methods

We compiled long-term datasets with at least four census years since 1980 (average number of census years = 19) within a geographical extent of central Europe and the southern part of the North Sea. The majority of the data were from standardized scientific surveys, but in a few cases they were sourced from citizen science or government agency monitoring programmes (see Supplementary Table 1).

Rationale of approach

Supplementary Fig. 1 shows an outline of the methods. We analysed each dataset in as similar a way as possible, to determine the signals of long-term temperature change and other environmental drivers that could be detected. It was not possible to analyse the individual datasets in exactly the same way throughout because some datasets had additional issues; for example, variation in sampling effort or within-year sampling. In addition, we wanted to ensure that our patterns were not driven by a few common species. The most important steps of our analysis were fitting a population trend for each species in each dataset (Supplementary Fig. 1, step c), estimating the effect of species attributes on population trends within each community using regression (Supplementary Fig. 1, step d) and bringing the individual dataset regression results together by meta-analysis (Supplementary Fig. 1, step f).
We took this stepwise approach so that we could (1) modify the fitting of population trends to account for details of each dataset (for example, addition of a sampling effort offset term or a month-of-sampling fixed effect, when appropriate) and (2) examine patterns at the species level and test the effect of weighting species data points by the confidence of the trends, so that we could ensure that patterns were not driven by a few common species within each dataset. Before analysis, we restricted the data to 1980 onwards and species seen in at least 25% of census years (Supplementary Table 2). The analysis was also repeated using a higher threshold for species occurrence, which yielded similar results (Supplementary Figs 6 and 10).

We calculated the population trend of each species as its average annual population growth. In the standard analysis, these trends were estimated using a generalized linear model with Poisson errors including year (a continuous variable) and site (a factor) as predictor variables, as well as an autoregressive term to account for residual autocorrelation of counts as a function of time between censuses and an additional observation-level error term to account for any overdispersion, which was fitted by Bayesian inference using R-INLA (http://www.r-inla.org/) 37. Because we were interested in each species' long-term trend, we only considered the linear trend over time. An 'effort' offset term was included in the model when appropriate. A significant population trend was identified when the trend estimate was significantly different from zero (except in one case (birds), when it was inferred from consistent direction of change between each decadal census). See Supplementary Table 1 for deviations from this standard analysis.
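To make this model specification concrete, the sketch below shows how such a trend model could be fitted in R with R-INLA. It is a minimal illustration only, not the authors' exact code: the data frame d and its columns (count, year, site, effort) are hypothetical, and details such as the structure of the autoregressive term would differ between datasets.

```r
# Minimal sketch of the per-species trend model described above (hypothetical data
# frame 'd': one row per site x census year, with columns count, year, site, effort).
library(INLA)  # available from http://www.r-inla.org/

d$site    <- factor(d$site)
d$year_c  <- d$year - mean(d$year)     # centred year: its coefficient is the linear trend
d$time_id <- d$year - min(d$year) + 1  # integer index for the AR1 term
d$obs_id  <- seq_len(nrow(d))          # observation-level term for overdispersion

m <- inla(count ~ year_c + site +
            f(time_id, model = "ar1") +  # residual temporal autocorrelation
            f(obs_id, model = "iid"),    # observation-level error (overdispersion)
          family = "poisson",
          E      = d$effort,             # sampling-effort offset, when appropriate
          data   = d)

# Posterior mean and 95% credible interval of the annual trend (log scale)
m$summary.fixed["year_c", c("mean", "0.025quant", "0.975quant")]
```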
Species temperature preference

We approximated each species' temperature preference using distribution data (see Supplementary Table 3 for the distribution data sources used for each taxonomic group). As much as possible we aimed to get range maps (that is, polygons); when this was not possible, we used point occurrence records from the Global Biodiversity Information Facility (GBIF), the Ocean Biogeographic Information System (OBIS) or country checklist data. Our aim with the calculation of temperature preference was to create a variable that reflected the rank and relative differences of species towards warmer and cooler temperatures, and not necessarily species' optimal performance temperatures. Thus, using restricted and coarse distribution data should be sufficient for this purpose. Using temperature maps restricted to Europe, we extracted the grid temperatures from locations intersecting with the distribution of each species. We restricted calculation to a European temperature map because, for most species, the best distribution data available were restricted primarily to Europe. For terrestrial and freshwater datasets, we used temperature maps from the E-Obs gridded dataset 38 of average temperature between 1961 and 1990, projected onto a 25 km equal area grid. Although ideally we would have used water temperature for the freshwater datasets, such European-wide freshwater temperature data are not readily available and air temperature data are commonly used. In addition, air and water temperature are highly correlated 39. For the marine datasets, we used sea surface (for plankton) and sea bottom (for benthic invertebrates and fish) temperature maps from Aquamaps on a 50 km equal area grid (according to availability: 1982–1999 for sea surface temperature; 1990–1999 for bottom temperature) 40. For dragonflies, data were already available 41 on a 50 km grid, so we used this resolution for them. For butterflies, temperature preference data were extracted from a database. Because we only wished to assess the mean temperature over each species' range, the coarse grid size of 25–50 km was adequate, given that the maps are based on a European extent and the distribution data are coarse. For the bird dataset, which included migratory species, we calculated temperature preference as the breeding temperature preference, using average temperature data for April, May and June and the range maps restricted to breeding and/or resident areas.

Temperature preference was summarized for all species as the mean temperature across the range (mean of all occupied cells, weighted by grid cell coverage for range maps and removing duplicate records within the same cell for point occurrence data). We also considered a more complex approach, fitting unimodal species response curves to identify species' optimum temperatures. This led to temperature values that were correlated with the mean temperatures across species' ranges; however, since it also led to extreme estimates in a few cases (Supplementary Fig. 11), we decided to continue with our original simpler approach, which made fewer assumptions about the shape of species responses. We also calculated species' temperature ranges as the difference between the maximum and minimum temperature preference (mean of the five occupied grid cells with the warmest and coolest average temperature, respectively). Range size was estimated as the number of climatic grid cells intersecting with each species' distribution (because this was usually correlated with temperature range, we focused on temperature range instead, except for marine organisms, where we considered it as a proxy of habitat breadth). Because of the limited freely available occurrence data for freshwater plankton, temperature preference was approximated using the seasonal, rather than spatial, pattern of species occurrences within the population dataset, applying a similar approach with daily water temperature data.
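The extraction step just described (taking the mean grid temperature across a species' range, weighted by cell coverage, or across deduplicated occurrence cells) could be sketched in R as below. This is an illustration under assumptions: temp_map, range_poly and occ_xy are hypothetical objects, and the paper does not state which spatial packages were used.

```r
# Sketch of temperature preference extraction (hypothetical inputs: 'temp_map', a
# raster of 1961-1990 mean temperatures; 'range_poly', a species range polygon;
# 'occ_xy', a two-column matrix of occurrence coordinates).
library(raster)

# Range maps: mean of intersected cells, weighted by grid-cell coverage
vals <- extract(temp_map, range_poly, weights = TRUE, normalizeWeights = TRUE)[[1]]
temp_pref_range <- sum(vals[, "value"] * vals[, "weight"], na.rm = TRUE)

# Point records: drop duplicate records within the same cell, then average
cells <- unique(cellFromXY(temp_map, occ_xy))
temp_pref_points <- mean(temp_map[cells], na.rm = TRUE)
```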
Additional species attributes

Attributes such as habitat preference, dispersal ability and age at maturity were obtained from the literature or databases in most cases (see Supplementary Table 3 for resources). For attribute data that had been fuzzy coded (for example, species given affinities to different levels associated with the attribute), we produced one attribute value by taking a weighted average of the affinities to different classes of the attribute when the underlying attribute was continuous (for example, size), or instead used cluster analysis to allocate each species to a single group. Habitat preferences for springtails and myriapods were inferred from occurrence records that included information on habitat for each occurrence. Habitat breadth was calculated as the coefficient of variation of species' affinities to different habitat categories 42. In some cases, expert assessment was used to compile species attribute data (these are annotated in Supplementary Table 3). When species attribute data were ordinal, but represented a continuous variable, data were treated as continuous if there were at least five categories and graphical exploration suggested a linear relationship was reasonable. The few species that were not listed in the main attribute database were excluded from the analysis. Remaining missing attribute data were imputed using a random forest model, including all the variables of the subsequent regression models and the first eigenvector of the decomposed phylogenetic/taxonomic tree as predictors 43. The amount of missing data was less than 10% in most cases. However, for freshwater benthic invertebrates, only genus-level data were available for many attributes and even then up to 25% of data were missing for some attributes. The variables with the most missing data were the pollution-related attributes (water-quality flexibility was available for only 50% of fish in one dataset).

Local temperature data at the study sites

Mean monthly temperature data were extracted for the study areas of all datasets. We used high-resolution data (in contrast to the large-scale coarse temperature data used for the species temperature preference calculation, see 'Species temperature preference') to retrieve temperature data at the specific sites where population data were sampled. Air temperatures for the terrestrial datasets were sourced from national weather service agencies (Deutscher Wetterdienst for Germany, www.dwd.de; Royal Meteorological Institute of Belgium, www.meteo.be; and the European Climate Assessment and Dataset, http://www.ecad.eu, and local weather stations, http://www.weerstation-eelde.nl, for the Netherlands). For all but one of the marine realm datasets, water temperature data were sourced from the International Council for the Exploration of the Sea (ICES); for the remaining dataset, temperature data had been collected locally by the population dataset owner. Missing data were imputed using a generalized additive model. For the freshwater datasets, we used air temperature data when water temperature had not been collected (for the freshwater river fish and benthic invertebrates). These data were used to calculate annual averages of daily mean temperatures. We smoothed the time series as a three-point lagged moving average and fitted a generalized least-squares model to estimate the trend.

Regression of species attributes on population trends

For each dataset, multiple regression models were built to predict species' population trends using species attributes as predictors (that is, explanatory variables). We checked whether predictor variables were correlated before model fitting and also examined variance inflation factors of the fitted models to check for multicollinearity. We combined as many attributes as possible but always allowed for at least five species per model parameter. Because analysis was more often limited by the number of species than by the number of attributes, we first identified the attributes that would probably be most important based on simple regressions and only included the most important (that is, lowest P value) in the maximal multiple regression model. From this model, we excluded variables that were not significant in a stepwise manner. Coefficients of interest (temperature, habitat and pollution/nutrient preference/tolerance) that were not retained after model simplification were tested separately by adding them to the final simplified regression model.
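A compact sketch of this community-level regression step follows; the precision weighting and robust fitting it uses are described in the following paragraph. The data frame sp and its column names are hypothetical, and MASS::rlm is one standard robust regression routine, not necessarily the implementation used in the paper.

```r
# Sketch of the per-dataset regression of population trends on species attributes
# (hypothetical data frame 'sp': one row per species, with the fitted trend, its
# standard error, and attribute columns).
library(MASS)

fit <- rlm(trend ~ temp_pref + habitat_pref + habitat_breadth + dispersal,
           data    = sp,
           weights = 1 / sp$trend_se^2,  # inverse-variance (precision) weights
           maxit   = 100)                # robust fitting down-weights high-leverage species

summary(fit)  # coefficient t-values feed the effect-size calculation described below
```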
Rather than making a binary distinction between a significant and a non-significant population trend, we used data from all species but weighted them by the precision (that is, inverse variance) of their trend estimate in the regression. This means that data points (that is, species) with a small standard error (that is, with more confidence, whether in a negative, stable or positive trend) had a greater weight in the model. Because highly fluctuating species (with low precision of the trend estimate) are more likely to be rare species, we additionally tested the results at two different thresholds of species inclusion, as well as with and without these weights (see Supplementary Figs 6 and 10). In addition, we used robust regression for our analysis to down-weight any influential species with high leverage 44. As a sensitivity analysis of the accuracy of our temperature preference estimation, we also condensed our continuous temperature preference variable into a three-level factor (cold, average and warm temperature preference) and reran the analysis (see Supplementary Fig. 9). The conclusions were unaffected by this sensitivity analysis. Because our estimates of the temperature preferences of freshwater organisms were based on air temperature data, we also used stream zonation preferences as a measure of species' temperature preferences 45; the results did not change (Supplementary Table 3). Analysis of species' population trends fitted at the site level within each dataset did not reveal any further habitat effects (see Supplementary Fig. 12).

As species within each dataset do not necessarily provide independent data points due to shared ancestry, we tested whether phylogeny or taxonomy explained any variation in the residuals of our models and accounted for this in the few cases it did (see Supplementary Methods). Phylogenies with branch lengths were obtained for the birds 46 (using Beast 47 to produce a maximum clade credibility tree from the tree distribution), bats 48 and plants 49. For the butterflies, we used an undated phylogeny from a molecular phylogenetic maximum likelihood analysis of the genes cytochrome c oxidase I and elongation factor 1 alpha (M. Wiemers and O. Schweiger, unpublished observations). For the rest, we obtained the species' taxonomic classification (mostly from the Catalogue of Life, except for the springtails 50 and phytoplankton 51) and used the taxonomy to create a tree, setting branch lengths to one for each taxonomic rank. To check whether there was a phylogenetic signal in the residuals of the multiple regression models of population trends, we used Abouheif's C mean test 52 using the R package adephylo 53. In most cases, there was no evidence that phylogeny or taxonomy explained any residual variation in the final simplified multiple regression models. In cases when it did (marine fish, springtails, beetles and bats), we specified a corPagel correlation structure using the R package ape 54 and reran the analysis as a generalized least-squares model.

Effect size calculation

The t-statistics of the model coefficients from the regression for each dataset were converted into correlation coefficients 55, which we used as a comparable effect size across all taxa and different species attributes. For categorical variables with multiple levels, we used the t-statistic of whichever pair-wise comparison was the largest.
The t-statistics and their associated degrees of freedom (df) were converted into r, the correlation coefficient, using the following formulas 55. For continuous variables, we calculated r as:

r = t / √(t² + df)

For categorical data, we initially used the t-statistic to calculate Cohen's d as:

d = t(n1 + n2) / (√(n1 n2) √df)

where n1 and n2 are the numbers of species in each group being compared. In cases when the categorical variable had multiple levels, we used the pair-wise contrast with the largest difference. For categorical variables that did not have any natural direction of effect (for example, habitat preference for birds, coded as forest, urban, farmland and wetland), the direction of effect was assigned according to predictions relating to the associated environmental driver (for example, farmland birds were predicted to have the lowest trends, due to agricultural intensification). For comparability with other effect sizes, Cohen's d was subsequently converted to r as:

r = d / √(d² + 4)

For meta-analysis, r was z-transformed (Zr = ½ ln[(1 + r)/(1 − r)]) and its standard error (s.e.Zr) calculated as:

s.e.Zr = 1 / √(n − 3)

where n is the number of species in the dataset. Effect sizes (z-transformed correlation coefficients) from each dataset were combined using a random-effects meta-analysis 56 and the resulting pooled estimate and confidence intervals were back-transformed from Zr to r for presentation. Statistical significance was assessed by whether the 95% confidence intervals of the effect sizes overlapped zero. Because there was some variation in the datasets, variables such as the start year of data collection, sampling sites and species number were centred and tested in the meta-analysis (Supplementary Table 4). The corrected average temperature preference effects for each realm were produced by predicting the coefficients for each realm at the average value of all dataset-level variables across all datasets. Because there was overlap (taxonomic/spatial) among some of the datasets, we tested whether additional random terms that reflected dataset grouping could explain any variation; since they did not, they were removed. We also tested whether species in the upper and lower quartiles of temperature preference had average population trends that differed from zero, using the t-statistic of the intercept term from a robust regression of the trends for each quartile and dataset. We then averaged the trends for each quartile and realm using a random-effects meta-analysis (sample sizes for each quartile and dataset are found in Supplementary Fig. 7). All analyses were conducted with R v3.0.2 57.
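As an illustration of this effect-size pipeline, the sketch below converts a coefficient t-statistic to r, applies the Fisher z-transform, and pools datasets with a random-effects meta-analysis. The metafor package is one standard choice for the pooling step; the paper cites a random-effects meta-analysis without naming its implementation, and the data frame es and its columns are hypothetical.

```r
# Sketch of the effect-size conversion and meta-analysis steps (hypothetical data
# frame 'es': one row per dataset, with the t-statistic, its degrees of freedom
# and the number of species n).
library(metafor)

t_to_r  <- function(t, df) t / sqrt(t^2 + df)        # continuous predictors
r_to_zr <- function(r) 0.5 * log((1 + r) / (1 - r))  # Fisher z-transform

es$zr <- r_to_zr(t_to_r(es$t, es$df))
es$se <- 1 / sqrt(es$n - 3)                          # standard error of Zr

pooled <- rma(yi = zr, sei = se, data = es, method = "REML")  # random-effects model
predict(pooled, transf = transf.ztor)  # back-transform the pooled Zr to r
```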
As much as possible, references that include data owner contacts for each population dataset are given in Supplementary Table 1. Further information and data on species' local population trends are available from the corresponding author.

How to cite this article: Bowler, D. E. et al. Cross-realm assessment of climate change impacts on species' abundance trends. Nat. Ecol. Evol. 1, 0067 (2017).

We thank Bayerisches Landesamt für Umwelt, Sächsisches Landesamt für Umwelt, Landwirtschaft und Geologie, Landesanstalt für Umwelt, Messungen und Naturschutz Baden-Württemberg, Landesamt für Natur, Umwelt und Verbraucherschutz Nordrhein-Westfalen, Hessisches Landesamt für Umwelt und Geologie and the Trilateral Monitoring and Assessment Program (TMAP) for sharing and providing permission to use their data for our project. Additionally, we appreciate the open access marine data provided by the International Council for the Exploration of the Sea. We thank the following scientists for taxonomic or technical advice: C. Brendel, T. Caprano, R. Claus, K. Desender, A. Flakus, P. R. Flakus, S. Fritz, E.-M. Gerstner, J.-P. Maelfait, E.-L. Neuschulz, S. Pauls, C. Printzen, I. Schmitt and H. Turin, and I. Bartomeus for comments on a previous version of the manuscript. R.A. was supported by the EU-project LIMNOTIP funded under the seventh European Commission Framework Programme (FP7) ERA-Net Scheme (Biodiversa, 01LC1207A) and the long-term ecological research program at the Leibniz-Institute of Freshwater Ecology and Inland Fisheries (IGB). R.W.B. was supported by the Scottish Government Rural and Environment Science and Analytical Services Division (RESAS) through Theme 3 of their Strategic Research Programme. S.D. acknowledges support of the German Research Foundation DFG (grant DO 1880/1-1). S.S. acknowledges the support from the FP7 project EU BON (grant no. 308454). S.K., I.Kü. and O.S. acknowledge funding through the Helmholtz Association's Programme Oriented Funding, Topic 'Land use, biodiversity, and ecosystem services: Sustaining human livelihoods'. O.S. also acknowledges the support from FP7 via the Integrated Project STEP (grant no. 244090). D.E.B. was funded by a Landes–Offensive zur Entwicklung Wissenschaftlich–ökonomischer Exzellenz (LOEWE) excellence initiative of the Hessian Ministry for Science and the Arts and the German Research Foundation (DFG: grant no. BO 1221/23-1).

Supplementary Figures 1–12; Supplementary Tables 1–5
Herto skulls (Homo sapiens idaltu)

New fossils from Herto in Ethiopia are the oldest known modern human fossils, at about 160,000 years old. The discoverers have assigned them to a new subspecies, Homo sapiens idaltu, and say that they are anatomically and chronologically intermediate between older archaic humans and more recent fully modern humans. Their age and anatomy are cited as strong evidence for the emergence of modern humans from Africa, and against the multiregional theory which argues that modern humans evolved in many places around the world.

Three skulls were found:
- BOU-VP-16/1 is an almost complete adult cranium. It is large and robust, with a cranial capacity estimated at 1450 cubic centimetres, larger than most modern humans. The skull is long and high in lateral view, and White et al. (2003) list a number of features in which it is near or beyond the limit of modern humans (the occipital angle, mastoid height, palate breadth). Viewed from above, its length exceeds any from a sample of over 3000 modern humans, but one width measurement is below the modern human average. The brow ridge is not prominent and is within the modern human range.
- BOU-VP-16/2 consists of portions of another adult cranium which appears to have been even larger than the previous specimen.
- BOU-VP-16/5 consists of most of a skullcase from a child, probably about 6 or 7 years of age judging by its teeth.

The conclusion of the authors is that the Herto skulls "sample a population that is on the verge of anatomical modernity but not yet fully modern". They therefore assigned them to a new subspecies, idaltu ('elder' in the local Afar language): "Because the Herto hominids are morphologically just beyond the range of variation seen in AMHS [anatomically modern Homo sapiens], and because they differ from all other known fossil hominids, we recognize them here as Homo sapiens idaltu, a new palaeosubspecies of Homo sapiens." Stringer (2003), however, in a commentary article, suggests that the skulls may not be distinctive enough to warrant a new subspecies name.

Both anatomically and chronologically, the Herto skulls seem intermediate between earlier and more primitive skulls such as Bodo and Kabwe ('Homo rhodesiensis') and the first completely modern human skulls, which are found from about 115,000 years ago. The authors' final conclusion is that "When considered with the evidence from other sites, this shows that modern human morphology emerged in Africa long before the Neanderthals vanished from Eurasia." Because of this, these finds have been generally seen as a setback for the multiregional model of human evolution (which argues that modern humans evolved in geographically widespread areas of the world) and strong support for the competing Out of Africa model (which argues that modern humans evolved in Africa and spread out from there, displacing any preexisting populations).

Answers in Genesis (AIG) argues, quite reasonably, that these fossils are so similar to modern humans that they don't constitute any problem for creationists - or, at least, for their own position. Reasons To Believe (RTB), an old-earth creationist ministry founded by Hugh Ross, takes the more surprising position that these fossils are of soulless animals that merely look like humans, and has accused AIG of "factual errors and distortions", to which AIG has responded energetically. RTB's position seems untenable to me: it's hard to see how anyone can credibly claim that fossils so remarkably similar to modern humans are animals.
RTB appears to have a strategy that by definition excludes any possibility of transitional fossils: if scientists put a fossil in anything other than Homo sapiens sapiens, it is "not a modern human" and hence is an animal (no matter how trivial the differences); if they do put it in H. sapiens sapiens, of course, it's also not evidence for human evolution. Heads I win, tails you lose.

References:
Clark J.D., Beyene Y., WoldeGabriel G., Hart W., Renne P., Gilbert H. et al. (2003): Stratigraphic, chronological and behavioural contexts of Pleistocene Homo sapiens from Middle Awash, Ethiopia. Nature, 423:747-52.
Stringer C.B. (2003): Out of Ethiopia. Nature, 423:692-4.
White T.D., Asfaw B., DeGusta D., Gilbert H., Richards G.D., Suwa G. et al. (2003): Pleistocene Homo sapiens from Middle Awash, Ethiopia. Nature, 423:742-7.

News coverage: Oldest human skulls found (Jonathan Amos, BBC); oldest modern humans (Richard Stenger, CNN).

This page is part of the Fossil Hominids FAQ at the talk.origins Archive.
A memory chip is an integrated circuit made of millions of transistors and capacitors. In the most common form of computer memory, dynamic random access memory (DRAM), a transistor and a capacitor are paired to create one memory cell, which represents a single bit of data. The capacitor holds the bit of information, either a 0 or a 1. The transistor acts as a switch that lets the control circuitry on the memory chip read the capacitor or change its state. Because each bit stored in a chip is controlled by one transistor, memory capacities tend to expand at the same pace as the number of transistors per chip - which still follows Moore's Law and therefore currently doubles every 18 months. The problem is that the capacitor - consisting of two charged layers separated by an insulator - can shrink only so far. The thinner insulators get, the more they allow charges to tunnel through. Tunneling increases the leakage current, and therefore the standby power consumption. Eventually the insulator will break down.

Researchers have been trying to develop electromechanically driven switches that can be made small enough to be an alternative to transistor-switched silicon-based memory. Electromechanical devices are suitable for memory applications because of their excellent ON-OFF ratios and fast switching characteristics. With a mechanical switch there is physical separation of the switch from the capacitor, which makes the data leakage problem much less severe. Unfortunately, they involve larger cells and more complex fabrication processes than silicon-based arrangements and have therefore not so far been an alternative to scaling down beyond semiconductor transistors. Researchers have now reported a novel nanoelectromechanical (NEM) switched capacitor structure based on vertically aligned multiwalled carbon nanotubes (CNTs) in which the mechanical movement of a nanotube relative to a carbon nanotube based capacitor defines ON and OFF states.

Continuing miniaturization has moved the semiconductor industry into the nano realm, with leading chip manufacturers well on their way to CPUs using 32 nm process technology (expected by 2009). There are some real challenges ahead for chip designers, particularly in moving deeper and deeper into the nanoscale, where at some point in the near future they will reach the physical limits of the traditional logic MOSFET (metal-oxide-semiconductor field-effect transistor) structure. In addition to physical barriers, semiconductor companies will also reach economic barriers, where profitability will be squeezed hard in view of the exorbitant costs of building the necessary manufacturing capabilities if present-day technologies are extrapolated. Quantum and coherence effects, high electric fields creating avalanche dielectric breakdowns, heat dissipation problems in closely packed structures and the relevance of single-atom defects are all roadblocks along the current road of miniaturization. Enter nanoelectronics (note that microelectronics, even if the gate size of the transistor is below 100 nm, is not an implementation of nanoelectronics, as no new qualitative physical properties related to the reduction in size are being exploited). Its goal is to process, transmit and store information by taking advantage of properties of matter that are distinctly different from macroscopic properties. Understanding nanoscale transport and being able to model and simulate nanodevices requires an entirely new generation of simulation tools and techniques.
The semiconductor industry is on its way to 32 nm process technology, expected to be commercialized around 2009, and the day might be near when transistors reach the limits of miniaturization at atomic levels and put an end to the currently used fabrication technologies. Apart from the issues of interconnect density and heat dissipation, which some researchers hope to address with carbon nanotube-based applications, there is the fundamental issue of quantum mechanics that will kick in once chip design gets down to around 4 nm. At that point, semiconductor dimensions will have become so small that quantum effects dominate the circuit behavior. Computer designers usually regard this as a bad thing because it might allow electrons to leak to places where they are not wanted. In particular, the tunneling of electrons and holes - so-called quantum tunneling - will become too great for the transistor to perform reliable operations. The result would be that the two states of the switch could become indistinguishable. Quantum effects can, however, also be beneficial. A group of researchers has now shown that a single bit of data might be stored on, and again retrieved from, a single atom. Just don't expect this in your computer anytime soon, though.

Over the next few years, semiconductor fabrication will move from the current state-of-the-art generation of 90 nm processes to the next 65 nm and 45 nm generations. Intel is already working on 32 nm processor technology, code-named "Westmere", that is expected to hit the market sometime around 2009. The result of these efforts will be billion-transistor processors, where a billion or more transistor-based circuits are integrated into a single chip.

One of the increasingly difficult problems that chip designers are facing is that the high density of components packed on a chip makes interconnections increasingly difficult. In order to continue the trend predicted by Moore's Law, at least for a few more years, researchers are now turning to alternative materials for transistors and interconnects, and one of the prime candidates for this job is the single-walled carbon nanotube (SWCNT). However, one of the biggest limitations of conventional carbon nanotube device fabrication techniques today is the inability to scale up the processes to fabricate a large number of devices on a single chip. Researchers in Germany have now demonstrated the directed and precise assembly of single-nanotube devices with an integration density of several million devices per square centimeter, using a novel aspect of nanotube dielectrophoresis. This development is a big step towards the commercial realization of CNT-based electronic devices and their integration into existing silicon-based processor technologies.

For computer chips, 'smaller and faster' just isn't good enough anymore. Power and heat have become the biggest issues for chip manufacturers and for companies integrating these chips in everyday devices such as cell phones and laptops. The computing power of today's computer chips is provided mostly by operations switching at ever higher frequency, and this physically induced power dissipation represents the limiting factor to a further increase in the capability of integrated circuits. Heat dissipation of the latest Intel processors has become a widely discussed issue. By the end of the decade, touching a chip might feel more like touching a rocket nozzle.
And soon after 2010, computer chips could feel like the bubbly hot surface of the sun itself. As the electronics industry continues to churn out smaller and slimmer portable devices, manufacturers have been challenged to find new ways to combat the persistent problem of thermal management. New research suggests that the integration of carbon nanotubes (CNTs) as heat sinks into electronic devices might provide a solution to this problem.

Non-volatile random access memory (NVRAM) is the general name used to describe any type of random access memory which does not lose its information when power is turned off. This is in contrast to the most common forms of random access memory today, DRAM and SRAM, which both require continual power in order to maintain their data. NVRAM is a subgroup of the more general class of non-volatile memory types, the difference being that NVRAM devices offer random access, as opposed to sequential access like hard disks. The best-known form of NVRAM today is flash memory, which is found in a wide variety of consumer electronics, including memory cards, digital music players, digital cameras and cell phones. One problem with flash memory is its relatively low speed. Also, as chip designers and engineers reach size barriers in downscaling such chips, the research focus shifts towards new types of nanomemory. Molecular-scale memory promises to be low-power and high-frequency: imagine a computer that boots up immediately on powering up and that writes data directly onto its hard drive, making saving a thing of the past. Researchers are designing the building blocks for this type of memory device using telescoping carbon nanotubes as high-speed, low-power microswitches. The design would allow these binary or three-stage switches to become part of molecular-scale computers.

As the semiconductor industry continues to miniaturize in following Moore's Law, there are some real challenges ahead, particularly in moving deeper and deeper into the nano length scale. In particular, sustaining the traditional logic MOSFET (metal-oxide-semiconductor field-effect transistor) structure, design, and materials composition will be especially difficult, particularly beyond the 22 nm node. Nanocables, consisting of a range of materials, offer potential solutions to these problems and may even be an alternative to today's MOSFET. A group of researchers from several European countries now reports the synthesis of a magnetically tunable nanocable array, combining separate hard and soft magnetic materials in a single nanocable structure. The combination of two or more magnetic materials in such a radial structure is seen as a very powerful tool for the future fabrication of magnetoresistive, spin-valve and ultrafast spin-injection devices with nonplanar geometries.
There are plenty of explanations for teenage turmoil. The newest theory is that uneven brain development may be responsible for the changeable moods and unsettling behavior of adolescence, reports the July issue of the Harvard Mental Health Letter. Although many teens have fairly advanced intellectual and reasoning ability, recent research has shown that human brain circuitry is not mature until the early 20s. Among the last connections to be fully established are the links between the prefrontal cortex — the seat of judgment and problem-solving — and the emotional centers of the brain. These links are crucial to emotional learning and high-level self-regulation, explains the Harvard Mental Health Letter. Another circuit still under construction in adolescence links the prefrontal cortex to the midbrain reward system, where addictive drugs and romantic love exert their powers. Brain scans hint at why most addictions get their start in adolescence. Teenagers and adults process reward stimuli differently; adolescent brains react intensely to novel experiences, making those experiences more enticing. Hormonal changes are at work, too. The adolescent brain pours out stress hormones, sex hormones, and growth hormone, which in turn influence brain development. Teenagers’ problems have many causes — social and individual, genetic and environmental. Since the brain is still forming, things can go wrong in many ways, and some of them involve the onset of psychiatric disorders. Scientists are looking at typical adolescent brain development to provide clues to the ways in which things go wrong. “The more we know about how psychiatric disorders and adolescent problems develop, the easier it will be for us to develop better treatments,” says Dr. Michael Miller, editor in chief of the Harvard Mental Health Letter.
The Civil War was the bloodiest conflict in American history. In the space of a few short years, more than 600,000 people were killed and millions left injured. One of the most notable and influential battles in the war took place in and around Gettysburg, Pennsylvania between July 1 and 3, 1863. The battle was largely accidental, yet it has become steeped in American mythology. At issue in this battle, and indeed in this war, was the future of the United States – still a young country coming to terms with its revolutionary origins and with the meaning of the ideas and ideals upon which it had been founded, particularly the belief that "all men are created equal" as held by the Union and disputed by the Confederacy. The subject of slavery and what it meant to be human became a heated topic that ran like a wildfire through the halls of Washington, the taverns of the young republic, supper tables, and church halls. No one was untouched by slavery.

Today, more than 150 years after this historic and devastating battle, the name Gettysburg is associated almost inextricably in the minds of Americans with the battlefields now turned into sacred ground, as well as with Abraham Lincoln's famous address and with the key achievements of his presidency, including the Emancipation Proclamation (January 1, 1863), his call for "a new birth of freedom" and his assertion that what was at stake in this conflict was the very survival of democracy in the republic. Gettysburg, then, was a battlefield on which the direction and nature of the nation was shaped and determined in a fundamental way. This was a battlefield of transformation in the most profound sense. Lincoln's call to arms, as manifested through his "Gettysburg Address," echoes through time, demanding that later generations never forget those who died on the bloody fields of Gettysburg during those hot days in early July.

This conference calls us to consider the battlefield of Gettysburg and to reflect on its transformational impact in the broadest terms. Weaponry and warfare were forever changed in the course of the Civil War, but so were the words and arguments made by our leaders. Moreover, some of the innovative technologies that the war helped engender – in communications, medicine, photography, transportation, and so on – likewise changed the social and political fabric of the nation and helped to alter the way in which the United States came to define itself in art, citizenship, literature, poetry, and religion. The Civil War generally, and Gettysburg in particular, changed everything utterly and irrevocably.

This conference likewise calls us to consider other battlefields that have transformed the world in which we live – be they physical battlefields like Gettysburg or other fields of battle, where ideas and ideals have come into conflict and where the outcomes have been consequential in the deepest and most genuine sense. Whether it is the battlefields on which the ethical problems of stem cell research are fought, or the debates over illegal immigration, the war on terror, civil liberties, censorship, or race – which, in the wake of Ferguson, is clearly still being fought in the twenty-first century – the battles continue. Therefore, we invite you to explore pivotal occasions of change and transformation and to reflect on the art, business, culture, politics, science or technology that have or may yet prompt change – as well as to reflect on what the implications of such change have been or may be in the future.
Let the conversation begin!
Zhejiang is one of the birthplaces of ancient Chinese civilization. A great storehouse of historical relics, Zhejiang has long been called The State of Historical Relics. Archeologists report that 50,000 years ago the hominid named "Jiande" was living in the western mountainous area of Zhejiang. Large numbers of historical relics have been unearthed in the Hemudu Ruins in Yuyao City. The relics include production tools and food utensils made of animal bones, wood and stones. All of these archeological relics show that the Chinese ancestors of 7,000 years ago had a well-developed prehistoric civilization.

In the feudal period, Zhejiang was a favorite place for emperors to build their seats of power, and it has witnessed the rise and fall of many dynasties. Throughout history, Zhejiang has been given many names; however, the shape of Zhejiang Province was fixed in its present form in the years from 1661 to 1722.

The economy of Zhejiang developed relatively early. As early as the Han Dynasty (206 BC - 220 AD) there were advanced hydraulic engineering projects, a salt manufacturing industry and a porcelain making industry. After the Third Century AD, the economy in Zhejiang achieved further development, and business gradually became more prosperous. During the dynasties of Sui (581-618) and Tang (618-907), the economy in Zhejiang developed at a fast rate, and the productivity of agriculture also improved. The area around Hangzhou City and Jiaxing City became an important production area for crops. The textile, porcelain making and paper making industries were improved. Mingzhou (now Ningbo City) became an important trade port in Southeastern China. After the Tenth Century, Zhejiang entered a prosperous period of feudal economy and became one of the richest areas in China. After the mid-Nineteenth Century, because of the invasion of Western capitalism, Zhejiang became an important bridge between the economic hub of Shanghai and wealthy southern China. Despite stagnation during the Cultural Revolution, Deng Xiaoping's economic reforms, combined with the entrepreneurial spirit of the Zhejiang people, enabled this province to once again become one of China's richest, a major center for exports and commerce.
Vitamin A is a fat-soluble vitamin that is stored in the liver. There are two types of vitamin A that are found in the diet.
- Preformed vitamin A is found in animal products such as meat, fish, poultry, and dairy foods.
- Provitamin A is found in plant-based foods such as fruits and vegetables. The most common type of provitamin A is beta-carotene.

Vitamin A is also available in dietary supplements. It most often comes in the form of retinyl acetate or retinyl palmitate (preformed vitamin A), beta-carotene (provitamin A), or a combination of preformed and provitamin A.

Vitamin A helps form and maintain healthy teeth, skeletal and soft tissue, mucous membranes, and skin. It is also known as retinol because it produces the pigments in the retina of the eye. Vitamin A promotes good vision, especially in low light. It may also be needed for reproduction and breastfeeding.

Retinol is an active form of vitamin A. It is found in animal liver, whole milk, and some fortified foods.

Carotenoids are dark-colored dyes (pigments) found in plant foods that can turn into a form of vitamin A. There are more than 500 known carotenoids. One such carotenoid is beta-carotene.
- Beta-carotene is an antioxidant. Antioxidants protect cells from damage caused by substances called free radicals. Free radicals are believed to contribute to certain chronic diseases and play a role in the aging process.
- Food sources of carotenoids such as beta-carotene may reduce the risk for cancer.
- Beta-carotene supplements do not seem to reduce cancer risk.

Vitamin A comes from animal sources, such as eggs, meat, fortified milk, cheese, cream, liver, kidney, cod, and halibut fish oil. However, many of these sources, except for skim milk that has been fortified with vitamin A, are high in saturated fat and cholesterol. The best sources of vitamin A are:
- Cod liver oil
- Fortified breakfast cereals
- Fortified skim milk
- Orange and yellow vegetables and fruits
- Other sources of beta-carotene such as broccoli, spinach, and most dark green, leafy vegetables

The more intense the color of a fruit or vegetable, the higher the beta-carotene content. Vegetable sources of beta-carotene are fat- and cholesterol-free, though their absorption is improved when they are consumed with a fat.

If you do not get enough vitamin A, you are at increased risk for eye problems. These include reversible night blindness and then non-reversible corneal damage known as xerophthalmia. Lack of vitamin A can lead to hyperkeratosis, or dry, scaly skin.

If you get too much vitamin A, you can become sick. Large doses of vitamin A can also cause birth defects. Acute vitamin A poisoning most often occurs when an adult takes several hundred thousand IU of vitamin A. Symptoms of chronic vitamin A poisoning may occur in adults who regularly take more than 25,000 IU a day. Babies and children are more sensitive to vitamin A, and can become sick after taking smaller doses of vitamin A or vitamin A-containing products such as retinol (found in skin creams). Large amounts of beta-carotene will not make you sick. However, increased amounts of beta-carotene can turn the skin yellow or orange. The skin color will return to normal once you reduce your intake of beta-carotene.

The best way to get the daily requirement of essential vitamins is to eat a wide variety of fruits, vegetables, fortified dairy foods, legumes (dried beans), lentils, and whole grains.
The Food and Nutrition Board of the Institute of Medicine gives the following Dietary Reference Intakes (DRIs) for vitamin A:

Infants (average intake)
- 0 to 6 months: 400 micrograms per day (mcg/day)
- 7 to 12 months: 500 mcg/day

The Recommended Dietary Allowance (RDA) for vitamins is how much of each vitamin most people should get each day. The RDA for vitamins may be used as a goal for each person.

Children (RDA)
- 1 to 3 years: 300 mcg/day
- 4 to 8 years: 400 mcg/day
- 9 to 13 years: 600 mcg/day

Adolescents and adults (RDA)
- Males age 14 and older: 900 mcg/day
- Females age 14 and older: 700 mcg/day (770 mcg during pregnancy and 1,300 mcg during lactation)

How much of each vitamin you need depends on your age and gender. Other factors, such as pregnancy and your health, are also important. Ask your health care provider what dose is best for you.

Alternative names: Retinol; Retinal; Retinoic acid; Carotenoids

Review Date 1/7/2017. Updated by: Emily Wax, RD, The Brooklyn Hospital Center, Brooklyn, NY. Also reviewed by David Zieve, MD, MHA, Medical Director, Brenda Conaway, Editorial Director, and the A.D.A.M. Editorial team.
Florida Panther: Puma concolor coryi
Genus/Species: Puma concolor
Subspecies: Puma concolor coryi
Common Name: Florida panther
Federal Status: Endangered
FL Status: Federally-designated Endangered
FNAI Ranks: G5T1/S1 (Globally: Demonstrably Secure, Subspecies: Critically Imperiled / State: Critically Imperiled)
IUCN Status: Not ranked

The Florida panther is one of the smaller cougar subspecies in the Western Hemisphere. There are currently only 100-160 Florida panthers left in the wild. Adult males can reach a length of seven feet (2.1 meters), with a shoulder height between 24-28 inches (60-70 centimeters) and an average weight of 116 pounds (52.6 kilograms). Females are smaller, reaching a length of up to six feet (1.8 meters) and a weight of 75 pounds (34 kilograms) (Roelke 1990). Adult Florida panthers have a reddish-brown back, dark tan sides, and a pale gray belly. Kittens have a gray body with black or brown spots and five stripes that ring the tail. Panthers are never black in coloration (melanistic) (U.S. Fish & Wildlife Service 2008). Some Florida panthers have a crook at the end of the tail, which is thought to result from inbreeding. In males, a single descended testicle is also thought to result from inbreeding. This subspecies has a dorsal cowlick and white flecks on its fur, which are thought to be caused by tick bites.

Florida panthers are carnivores (they feed only on meat), and their diet consists primarily of deer, raccoons, wild hogs, armadillos, and rabbits. Large carnivores require large areas to roam: Florida panther home ranges average 75 and 150 square miles (194.25 and 388.5 square kilometers) for females and males, respectively. There is some overlap among home ranges, particularly for females, but males are typically intolerant of other males. Florida panthers are solitary in nature, except for females with kittens, and they do not form pair bonds with mates. Females signal their sexual receptiveness by the scent of their urine and through vocalizations. Total gestation time is 92-96 days, with one to four kittens born per litter. Births occur throughout the year but mainly in late spring. Dens are usually created in a palmetto thicket. Females do not breed again until their young are 1.5-2 years old. Females reach sexual maturity at 1.5 to 2.5 years old, while males reach sexual maturity around three years old. Female panthers have a higher survival rate and therefore tend to live longer than males: ages at death average 7.5 years for females and just over five years for males. The oldest known wild panthers were 20 (female) and 14 (male) years old at death.

Habitat and Distribution
Florida panthers inhabit large forested communities and wetlands (Florida Natural Areas Inventory 2001). They can be found in South Florida and parts of Central Florida, although male panthers have been documented as far north as central Georgia. During the 1800s and early 1900s, habitat loss and hunting led to the panther's near extinction. Panthers were hunted and killed by settlers for sport or to protect livestock, and amid the anti-predator sentiment of the time they were also killed out of fear. By the mid-1980s, only 20-30 panthers could be found in the wild, and this small population was found to be highly inbred. A plan to restore the genetic health of Florida panthers was implemented in 1995.
Genetic restoration involved the release of eight female pumas (Puma concolor stanleyana) from Texas in 1995 into available panther habitat in South Florida. The Texas subspecies was selected for this project because it represented the puma population closest to Florida; historically, the Florida panther's range bordered that of the Texas population, and interbreeding occurred naturally between them. Exchange of genetic material between the two subspecies ceased as habitat in the southeastern U.S. became fragmented in the late 1800s and throughout the 20th century. Five of the eight Texas females reproduced successfully, resulting in a minimum of 20 kittens. By 2003, the last three surviving Texas females had been removed from the wild Florida population; no Texas pumas remain in the wild in Florida today.

Conservation and Management
The Florida panther is protected as an Endangered species by the Federal Endangered Species Act and as a Federally-designated Endangered species by Florida's Endangered and Threatened Species Rule.

Federal Recovery Plan

Other Informative Links
FWC Species Profile
Florida Natural Areas Inventory
Florida Panther Society
National Wildlife Federation
U.S. Fish and Wildlife Service Species Information

References
Florida Natural Areas Inventory. 2001. Field guide to the rare animals of Florida. http://www.fnai.org/FieldGuide/pdf/Puma_concolor_coryi.PDF
Roelke, M. E. 1990. Florida panther biomedical investigation. Final Performance Report 7506. Florida Game and Fresh Water Fish Commission, Tallahassee, FL.
U.S. Fish and Wildlife Service. 2008. Florida Panther Recovery Plan (Puma concolor coryi), Third Revision. U.S. Fish and Wildlife Service, Atlanta, Georgia. 217 pp.
This content is licensed under Creative Commons Attribution/Share-Alike License 3.0 (Unported). That means you may freely redistribute or modify this content under the same license conditions and must attribute the original author by placing a hyperlink from your site to this work https://planetcalc.com/6729/. Also, please do not modify any references to the original work (if any) contained in this content.

Since ancient times, people have tried to imagine how our world works. Aristarchus of Samos was an ancient Greek philosopher. In the early third century BC, he suggested a heliocentric model of the world (with the Sun in the center). He also tried to calculate the sizes of the Earth and the Sun and the distance between them using the position of the Moon. Many opponents stood by the geocentric system (with the Earth in the center), so the heliocentric idea did not get much follow-up. It took almost two thousand years to recover it. Nicolaus Copernicus, the Polish astronomer, reformulated the model of the universe with the Sun in the center. The work of his whole life was published in 1543, the year of his death. Copernicus's heliocentric model and tables of planetary positions reflected the observed state more accurately. A half-century later, in 1609, Johannes Kepler, a German mathematician, published his laws of planetary motion, which improved the accuracy of the Copernican model. Kepler arrived at his laws through the analysis of the large amount of data carefully collected by Tycho Brahe, a Danish astronomer. The end of the 17th century was marked by the discoveries of the great English scientist Isaac Newton. Newton's laws of motion and law of universal gravitation provided the theoretical basis for, and extended, Kepler's formulas. Finally, in 1915, Albert Einstein published the general theory of relativity (GTR), which describes gravitational phenomena and the mechanics of planetary motion with the highest accuracy. In most cases, relativistic effects can be ignored, and Newton's classical laws give a fairly accurate description of planetary motion. So thanks to Newton and his predecessors, we can still calculate:
- the circular orbital velocity of a satellite object (first space velocity)
- the speed an object needs to escape a planet's gravitation (second space velocity)
- the speed an object needs to escape a planetary system's gravitation (third space velocity)
using very simple formulas.

Circular orbital velocity
Circular orbital velocity is the speed required to keep an object in circular motion at a specified altitude above the planet. The equation is:
\[ v_1 = \sqrt{\frac{GM}{R}} \]
where R = r + h is the orbit radius, combining the planet radius r and the altitude above the planet h; M is the planet mass; and G is the gravitational constant, \(6.67408(31) \times 10^{-11}\ \mathrm{m^3/(s^2 \cdot kg)}\).

The formula can be easily derived by equating Newton's gravitational force with the centripetal force required for circular motion:
\[ \frac{GMm}{R^2} = \frac{m v_1^2}{R} \]
where m is the object mass (it cancels out when evaluating \(v_1\)).

Two and a half centuries after Newton's discoveries, in 1957, the USSR launched the Earth's first artificial satellite. The R-7 carrier rocket overcame the atmosphere's resistance and the Earth's gravity to deliver Sputnik-1 to its 577-km orbit.

Escape velocity
Escape velocity (second space velocity) is the speed required to escape the gravitational influence of the planet or star. The formula is:
\[ v_2 = \sqrt{\frac{2GM}{R}} \]
It correlates with \(v_1\) as follows:
\[ v_2 = \sqrt{2}\, v_1 \]
The formula can be derived by equating the kinetic energy with the mechanical work done to overcome gravity in moving the object from the initial altitude to infinity:
\[ \frac{m v_2^2}{2} = \frac{GMm}{R} \]

In 1959 the USSR launched Luna-1, the automatic interplanetary station, which overcame the influence of the Earth's gravity and became the first artificial satellite of the Sun.
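As a quick numerical check of these formulas, here is a minimal sketch (not part of the original calculator page) that evaluates \(v_1\) and \(v_2\) at the Earth's surface. The constants are rounded published values, and the class name is ours.

```java
// Evaluates the first and second space velocities for Earth, using
// v1 = sqrt(GM/R) and v2 = sqrt(2) * v1 from the formulas above.
public class SpaceVelocities {
    static final double G = 6.67408e-11;    // gravitational constant, m^3/(s^2*kg)
    static final double M = 5.972e24;       // Earth mass, kg
    static final double R_PLANET = 6.371e6; // mean Earth radius, m

    public static void main(String[] args) {
        double h = 0;                        // altitude above the surface, m
        double R = R_PLANET + h;             // orbit radius R = r + h
        double v1 = Math.sqrt(G * M / R);    // circular orbital velocity
        double v2 = Math.sqrt(2) * v1;       // escape velocity
        System.out.printf("v1 = %.2f km/s%n", v1 / 1000.0); // ~7.91 km/s
        System.out.printf("v2 = %.2f km/s%n", v2 / 1000.0); // ~11.18 km/s
    }
}
```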
Planetary system escape velocity
Planetary system escape velocity is the minimal speed required to overcome the gravity of the whole planetary system, including both the planet's and the star's gravity. In the simplest case, launching in the direction of the planet's orbital motion:
\[ v_3 = \sqrt{\left( \left( \sqrt{2} - 1 \right) v \right)^2 + v_2^2} \]
where v is the planet's orbital speed and \(v_2\) is the planet's escape speed.

According to calculations, a craft launched from the Earth must have a velocity of 16.6 km/s to leave the Solar system. The closest speed yet achieved (16.26 km/s) belongs to the "New Horizons" probe, launched in 2006 by the USA to study Pluto and its moon Charon. To date, the probe has finished imaging Pluto and is heading toward the Kuiper belt. The first spacecraft to achieve Solar-system escape speed was "Voyager-1", launched by the United States in 1977. Voyager's initial speed was lower than that of "New Horizons", but thanks to a series of gravitational maneuvers around the Solar system's planets, especially Jupiter, its speed rose to 17 km/s. "Voyager-1" has now left the Solar system. It still collects and sends data to the Earth. The craft carries a 12-inch gold-plated copper disc containing audio-visual records from the Earth, intended for intelligent life forms from other planetary systems.
Arrowsmith's Approach to Dyslexia

Dyslexia is defined as a reading disorder that is neurological in origin. The Arrowsmith Program offers a deeper analysis: we investigate the issues that lie beneath dyslexia. Reading difficulties like dyslexia arise when the brain networks, or cognitive functions, responsible for reading are underperforming. People with dyslexia experience weaknesses in some combination of these functions, and others may also contribute. Full list of learning dysfunctions.

How can the Arrowsmith Program help?
Arrowsmith School has been called the Most Innovative Special Education School by Sharp Brains. Why? Because Arrowsmith is not an academic program but a capacity-based one: it enables individuals to transform their capacity to learn and fundamentally address issues like dyslexia.
- Fact: Our brains can change. The science of neuroplasticity has revealed this. Arrowsmith relies on this premise to design exercises that target and strengthen the very cognitive functions that are at the core of dyslexia.
- Fact: Everyone's brain is different. To address each person's dyslexia, we must first understand their cognitive profile: the combination and degree of the weak cognitive functions. This enables a truly individualized experience. One size does not fit all. In fact, many students at Arrowsmith engage in several different programs, each one targeted to a challenge or difficulty within their profile.

What does this mean for people with dyslexia?
- It means their brains can change and, in turn, struggles related to dyslexia can disappear. Research indicates that with stronger cognitive capacities in key networks, the brain doesn't have to work as hard; it learns, understands, remembers and applies information efficiently. In addition to strengthening the functions underlying dyslexia, Arrowsmith has programs that target issues related to dyscalculia, dysgraphia, attention, executive function, nonverbal learning, and social and emotional intelligence.
- Children with dyslexia do not need to suffer trying to make sense of the written word, spend years in remedial programming, develop avoidance tactics in school, or face limited academic or professional futures. The solution is to understand their unique cognitive profile and then transform it.
In our introduction to how threads work, we introduced the thread scheduler, the part of the OS (usually) that is responsible for sharing the available CPUs out between the various threads. How exactly the scheduler works depends on the individual platform, but various modern operating systems (notably Windows and Linux) use largely similar techniques that we'll describe here. We'll also mention some key variations between the platforms.

Note that we'll continue to talk about a single thread scheduler. On multiprocessor systems, there is generally some kind of scheduler per processor, and these then need to be coordinated in some way. (On some systems, switching on different processors is staggered to avoid contention on shared scheduling tables.) Unless otherwise specified, we'll use the term thread scheduler to refer to this overall system of coordinated per-CPU schedulers.

Across platforms, thread scheduling[1] tends to be based on at least the following:
- a priority, or in fact usually multiple "priority" settings that we'll discuss below;
- a quantum, or number of allocated timeslices of CPU, which essentially determines the amount of CPU time a thread is allotted before it is forced to yield the CPU to another thread of the same or lower priority (the system will keep track of the remaining quantum at any given time, plus its default quantum, which could depend on thread type and/or system configuration);
- a state, notably "runnable" vs "waiting";
- metrics about the behaviour of threads, such as recent CPU usage, the time since a thread last ran (i.e. last had a share of CPU), or the fact that it has "just received an event it was waiting for".

Most systems use what we might dub priority-based round-robin scheduling to some extent. The general principles are:
- a thread of higher priority (which is a function of base and local priorities) will preempt a thread of lower priority;
- otherwise, threads of equal priority will essentially take turns at getting an allocated slice or quantum of CPU;
- there are a few extra "tweaks" to make things work.

Depending on the system, there are various states that a thread can be in. Probably the two most interesting are:
- runnable, which essentially means "ready to consume CPU"; being runnable is generally the minimum requirement for a thread to actually be scheduled on to a CPU;
- waiting, meaning that the thread currently cannot continue as it is waiting for a resource such as a lock or I/O, for memory to be paged in, for a signal from another thread, or simply for a period of time to elapse.

Other states include terminated, which means the thread's code has finished running but not all of the thread's resources have been cleared up, and new, in which the thread has been created but not all resources necessary for it to be runnable have been created. Internally, the OS may distinguish between various different types of wait states[2] (for example "waiting for a signal" vs "waiting for the stack to be paged in"), but this level of granularity is generally not available or so important to Java programs. (On the other hand, Java generally exposes to the programmer things the JVM can reasonably know about: for example, whether a thread is waiting to acquire the lock on a Java object, roughly speaking, "entering a synchronized block".)

Next: quanta and switching
On the next page, we continue our description of thread scheduling with a look at thread quanta and switching, and discuss typical thread scheduling algorithms.
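As a concrete illustration of the Java-visible side of all this, here is a minimal sketch (not part of the article's original material) showing the two facets a Java program can actually touch: the priority hint passed to the OS scheduler, and the states reported by Thread.getState(), which roughly mirror the runnable/waiting/terminated states described above.

```java
// Demonstrates a priority hint and observable thread states using only java.lang.
public class SchedulingDemo {
    public static void main(String[] args) throws InterruptedException {
        final Object lock = new Object();
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                try {
                    lock.wait();                     // releases the lock and waits for a signal
                } catch (InterruptedException ignored) { }
            }
        });
        waiter.setPriority(Thread.MIN_PRIORITY);     // a hint to the scheduler (1..10 in Java)
        waiter.start();
        Thread.sleep(100);                           // give the waiter time to block
        System.out.println(waiter.getState());      // WAITING: parked in lock.wait()
        synchronized (lock) {
            lock.notify();                           // the "event it was waiting for"
        }
        waiter.join();
        System.out.println(waiter.getState());      // TERMINATED: its code has finished
    }
}
```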
1. Note that in this description, we're going to assume that the OS performs specifically thread scheduling: that is, the unit that the scheduler "juggles" is the individual user thread rather than a process (or a kernel-level thread that sits halfway between a process and a user thread). Solaris 9 and Linux 2.6 onwards use this threading model, but earlier versions don't. To my knowledge, all versions of Windows use thread-level scheduling.

2. At a very low level, this can lead to some complex interactions, such as a thread that was scheduled on to the processor getting preempted "at the last minute" while waiting to have its stack paged in. In other words, the notion that a thread goes from, say, "waiting" to "runnable" in an atomic step is actually a simplification. But it's usually a sufficient simplification from the point of view of the programmer.

Editorial page content written by Neil Coffey. Copyright © Javamex UK 2021. All rights reserved.
Hearing loss is one of the most common forms of sensory impairment, affecting individuals from all walks of life, but particularly those of advancing years. Research increasingly points to a link between living in a muted world, the result of unmanaged hearing loss, and cognitive decline, even the progression of dementia.

Researchers from the Johns Hopkins School of Medicine and other institutions conducted a study of 639 adults aged between 36 and 90 years old. Between 1990 and 1994, the participants underwent a series of tests to determine their cognitive and aural health, and they were monitored until the end of May 2008. The researchers tracked any development of Alzheimer's disease or dementia.

Of the participants, 125 had mild hearing loss (25 to 40 decibels), 53 had moderate hearing loss (41 to 70 decibels) and six had severe hearing loss (more than 70 decibels). During a median (midpoint) follow-up of 11.9 years, 58 individuals were diagnosed with dementia, including 37 who had Alzheimer's disease. The risk of dementia was increased among those with hearing loss of greater than 25 decibels, with further increases in risk observed among those with moderate or severe hearing loss as compared with mild hearing loss. For participants age 60 and older, more than one-third (36.4 percent) of the risk of dementia was associated with hearing loss. The risk of developing Alzheimer's disease specifically also increased with hearing loss: for every 10 decibels of hearing loss, the extra risk increased by 20 percent. There was no association between self-reported use of hearing aids and a reduction in dementia or Alzheimer's disease risk.

How does Hearing Diminish Late in Life?
The process involved in 'hearing' is a complex one, but at a basic level it charts the capture and transfer of sound vibration, its conversion into electrical impulses, and their subsequent interpretation by the brain. Capture of sound vibration is achieved by means of tiny hair-like receptors within the cochlea of the inner ear. As the body matures, these receptors die or decrease in quality. When a sufficient number of hair cells deteriorate, the effective transfer of sound information is compromised and hearing difficulties ensue. A cochlear loss is permanent in nature, largely due to the inability of the hair cells to regrow or regenerate. It is perhaps unsurprising, then, that many look towards some form of hearing assistance in order to mitigate the effect and, indeed, maintain effective communication.

What are the Signs of Age Related Hearing Loss?
Losses attributed to age (presbycusis) and noise trauma (noise-induced hearing loss) both damage the hair cells in the cochlea of the inner ear, although the rate of deterioration may be accelerated in the latter, depending on the length and intensity of exposure. Age-related hearing loss is largely gradual and may progress, albeit almost imperceptibly, for a number of years before someone recognizes any degree of hearing difficulty. The rate of decline varies between individuals, but common signs indicative of a loss include:
- Hearing loss which is often observed in both ears
- The perception that people are mumbling or speaking indistinctly
- The increasing need for people to repeat themselves
- Marked difficulty understanding conversation in high levels of background noise
- Awareness that the volume of the television/radio is too loud for others
- Possible difficulty hearing the telephone ringing or the incoming voice

How to Manage Hearing Loss?
For suspected hearing loss it is important to obtain a full diagnostic assessment, typically via referral from your doctor, through the local hospital audiology department or, alternatively, through a local hearing centre. The general recommendation, particularly for initial consultations, is to be accompanied, primarily to ensure the results are understood (and, of course, heard) but also to help decide on the appropriate course of intervention. For most age-related losses, some form of targeted amplification can help to redress the associated deficits in hearing and, moreover, maintain access to communication in general:

Hearing Aids – Perhaps the most publicized option, hearing aids, whilst not able to restore 'normal' hearing, may provide sound amplification to mitigate a loss in a certain frequency range. The device sits inside, over or behind the ear and is designed to stimulate those remaining hair cells responsible for the transfer of sound via the auditory nerve to the brain.

Pros of Hearing Aids – There are different access routes offering a diversity of fitting prescriptions and a range of types. Hearing aids have known (and measurable) benefits and can offer support in most acoustic environments. The scope of processing capability, alongside the continued pursuit of miniaturised options, means that the demands of most people can be met both acoustically and cosmetically.

Cons of Hearing Aids – Most devices are susceptible to water ingress, and use in 'extreme' conditions may be contraindicated. For the majority of people, wearing the system at night is also not practicable. In spite of every move towards refining the cosmetic design, for those who are self-conscious about wearing a hearing system, Behind-the-Ear options in particular may still be seen as both cumbersome and too 'apparent'.

Amplified Phones – As a long-distance communication medium, landline and mobile telephones are still the most popular choice. Standard designs, however, are often incompatible with a hearing impairment on two counts: insufficient ringer volume and poor volume/fidelity of the incoming voice. Amplified phones are designed to overcome these limitations by offering amplification levels of up to 60 dB (against 8 dB for normal phones), enhanced ringer volume control and, oftentimes, a corresponding visual alert to signal incoming calls.

Pros of Amplified Phones – Devices often include visual enhancements such as large buttons and backlit keypads, which may be used to address dual sensory needs, both visual and aural. Cordless or duo options (corded and cordless) increase accessibility and allow for flexibility in moving between locations. Many designs are very simple to use, with a 'plug-and-play' set-up.

Cons of Amplified Phones – Certain models require their own mains power source and will not work in a power cut. The level of choice may make the selection process seem overly complex and discourage many from exploring options within this product range. These devices, whether corded, cordless or mobile, are only available from the private sector.

Alerting Devices – Often standalone products designed to 'alert' or capture the user's attention.
They may be used to forewarn against potential hazards, as with amplified smoke detectors, or to draw attention to communication alerts within the home, such as the telephone or doorbell ringing.

Pros of Alerting Devices – Certain models are portable and can be moved around the home as well as taken on the road. Devices often include a second sensory trigger in the form of a vibration function, whether on the device itself or as an added vibration pad.

Cons of Alerting Devices – May also wake or draw the attention of other household members or, in extreme cases, one's next-door neighbor. Certain devices are battery-powered and therefore will not function when the battery is depleted.

The above is a sample of the various technologies which may help to minimize the isolating effects of hearing impairment. The use of hearing aid amplification, alongside Assistive Listening Devices, cannot 'cure' hearing loss, but it can certainly help to safeguard against (or at least inhibit) the isolation, frustration and cognitive decline often seen in 'unmanaged' cases.
Unit 10 Final Project
Kaplan University
Denver Martin
Designing a Network
IT273 Networking Concepts
By Anthony Outlaw
Feb 13, 2010

Part A: Double-spaced paragraphs discussing the hardware used in your design

Hubs are connection devices and operate at the Physical layer of the OSI model. A hub is a central point of connection for cable segments in a physical star topology. Hubs can provide different services depending on their sophistication; examples include managed hubs, switched hubs, and intelligent hubs. Hubs pass on all data no matter what device it is addressed to, which can add congestion to the network.

Routers are more sophisticated than hubs and operate at the Network layer of the OSI model. Routers connect network segments even if the segments are of different types. Routers also change packet size, format, and addressing to fit the type of destination network on which the packet is being sent. Furthermore, routers limit collision domains, determine the best path for a packet to reach another network, and filter or block broadcasts.

Switches are multiport bridges that function at the Data Link layer of the OSI model. Each port makes a decision about whether to forward data packets to the attached network. Switches keep track of the MAC addresses of all attached nodes and the port to which each node is connected. Switches can also help filter traffic and eliminate unwanted congestion.

Modems are devices that enable computers to communicate over dial-up telephone lines. Placed in the OSI model, modems operate at the Physical layer. I used a cable modem in my diagram to provide Internet connectivity for client computers.

Part B: Research on Protocols

In an effort to prepare a thoroughly researched document on the following protocols, a good place to start is defining protocols. A protocol is the rules and standards that define network communication. A protocol stack is the set of protocol software components running on a computer. And a protocol suite is a set of related protocols that support network communication at the Network and higher layers of the OSI model. The OSI, which is short for Open Systems Interconnection, model is important because it is the framework that defines the way in which information passes up and down between physical hardware devices and the applications running on user desktops. The OSI model has seven layers, in which different protocols are executed: the Application layer, Presentation layer, Session layer, Transport layer, Network layer, Data Link layer, and Physical layer. Here I will only cover three layers and the protocols that are executed in those layers: the Application layer, the Transport layer, and the Network layer.

The Application layer is the end user's access to the network, providing a set of utilities for application programs. Protocols functioning at the Application layer work with the applications you use to communicate over the network. Application layer protocols include: Simple Mail Transfer Protocol (SMTP), Hypertext Transfer Protocol (HTTP), Domain Name System (DNS), and Dynamic Host Configuration Protocol (DHCP).
Simple Mail Transfer Protocol (SMTP) is a protocol in the TCP/IP protocol suite that is used to send email, using Transmission Control Protocol (TCP) as its delivery protocol. Hypertext Transfer Protocol (HTTP) is a protocol used to access web pages while surfing the internet; without this protocol the user would not be able to surf the web. Domain Name System (DNS) is a TCP/IP protocol used for mapping IP addresses to host names. DNS translates host names and domain names to IP addresses by means of a standardized lookup table. Dynamic Host Configuration Protocol (DHCP) is a protocol and service used to provide IP addresses and TCP/IP configuration parameters. The primary reason for using DHCP is to centralize the management of IP addresses: DHCP holds pools of IP addresses that are assigned automatically to clients on an as-needed basis.

The Transport layer of the OSI model is responsible for handling end-to-end communication issues and establishing, maintaining, and terminating connections between computers. The Transport layer is also responsible for breaking down packets, ensuring all packets have been received, and flow control, which ensures that no computer is overwhelmed by the number of packets it receives. The Transport layer protocols are Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). Transmission Control Protocol (TCP) provides a connection-oriented packet delivery service that includes error checking and sequence numbering, with the destination device sending back an acknowledgement (ACK) that the packet was received. TCP is known as a reliable transport method because of the ACK and the error checking. User Datagram Protocol (UDP) provides a connectionless packet delivery service that sends packets without any type of error checking, sequence numbering, or guarantee of delivery. UDP, also known as connectionless transmission, does not have the receiver send an acknowledgement like TCP. For this reason connectionless transmission (UDP) is less reliable; however, UDP is faster than TCP.

The Network layer of the OSI model is responsible for network logical addresses and routing control. Packet delivery is also part of the Network layer's responsibility. Routing is important because a router finds the best path to transfer packets from a computer, over the network, to the desired destination. The Network layer protocols include: Internet Protocol (IP), Address Resolution Protocol (ARP), Reverse Address Resolution Protocol (RARP), and Internet Control Message Protocol (ICMP). Internet Protocol (IP) provides network identification through addressing and connectionless delivery of packets. IP moves the data from point A to point B and is known as best-effort transmission because it does not exchange information to establish an end-to-end connection before starting a transmission. Address Resolution Protocol (ARP) is a TCP/IP protocol that provides a device's MAC address, based on its IP address. Anytime a computer communicates with another computer it needs to know that computer's MAC address, which is hard-coded on the network adapter. TCP/IP uses Address Resolution Protocol to find a computer's MAC address when the IP address is known. Reverse Address Resolution Protocol (RARP) does the exact opposite of Address Resolution Protocol (ARP): RARP finds the IP address when the MAC address is known.
Internet Control Message Protocol (ICMP) is a management and troubleshooting protocol that provides support through error and control messages.
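To ground one of these protocols in code, the sketch below (an illustration added here, not part of the original assignment) performs the DNS lookup described above using Java's standard library, which delegates the query to the operating system's resolver. The host name "example.com" is just a placeholder.

```java
// Resolves a host name to an IP address: the mapping service that DNS provides.
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsLookup {
    public static void main(String[] args) {
        try {
            InetAddress address = InetAddress.getByName("example.com"); // DNS query
            System.out.println(address.getHostAddress());               // mapped IP address
        } catch (UnknownHostException e) {
            System.err.println("Lookup failed: " + e.getMessage());
        }
    }
}
```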
Photo 7/24/04, Cook County, MN

These are carnivorous plants, inhabiting fens, bogs and peatlands in the northern latitudes. Pretty as they may be, these peatlands are killing fields. Insects are attracted to a sweet mucilage secreted by the glandular hairs on the sundew leaf. The hairs are thigmonastic; that is, they move in response to touch or vibration. The hairs converge on the struggling insect. The mucilage contains enzymes: chitinase, esterase, peroxidase, phosphatase, protease. The mucilage secretion is stimulated by specific molecules. A quote from Matusikova 2005: "The reaction of sundew leaves depends on the molecular nature of the inducer applied." And from Gallie 1997, regarding pitcher plant (Sarracenia purpurea), another carnivorous plant found in fens and bogs: "Hydrolase expression is induced upon perception of the appropriate chemical signal." This is to say, the glistening tentacles of sundew respond to touch and to the chemistry of the insect that lands on them, inducing the plant to produce enzymes and to curl the tentacles around the insect. The purpose of all this is to convert the insect into digestible material for the plant, enabling it to live in a nutrient-poor habitat such as a bog. Yes, like a cold fog, death creeps across the sodden moor.

That is not all. These northern bogs have two other carnivorous plants. Pitcher plant traps insects in a leafy vase filled with rainwater and enzymes. The vase is lined with sharp spikes, like concertina wire, preventing any escape. Bladderwort (Utricularia) has a submerged bladder under negative pressure, with a trap door and a lever. A water flea or mosquito larva that touches the lever opens the trap door and is sucked into the bladder within tenths of a second. Once again, it is bathed in digestive enzymes. The insect dies within 15 minutes, and it is safe to say that these are the most horrifying 15 minutes of its brief, hapless life, as the powerful enzymes attack its defenseless, softening body and reduce it to a soupy meal. We are pleased that insects do not scream, at least in a range heard by humans.
As Winter Sets In, Tiny Shrews Shrink Their Skulls

Characteristics. All shrews are tiny, most no larger than a mouse. The largest species is the Asian house shrew (Suncus murinus) of tropical Asia, which is about 15 cm (6 in) long and weighs around 100 g (4 oz). Several are very small, notably the Etruscan shrew (Suncus etruscus), which at about 3.5 cm (1.4 in) and 1.8 g (0.063 oz) is the smallest known living terrestrial mammal.

Shrews are small mammals with cylindrical bodies, short and slender limbs, and clawed digits. Their eyes are small but usually visible in the fur, and the ears are rounded and moderately large, except in short-tailed shrews and water shrews. Tail length varies among species, some being much shorter than the body and others appreciably longer.

Shrews bear a close resemblance to mice, with the exception of their long, pointed snout. Shrews are primarily outdoor dwellers, although they're not shy about entering homes when seeking food or shelter.

Common shrews are found throughout the woodlands, grasslands, and hedgelands of Britain, Scandinavia, and Eastern Europe. Each shrew establishes a home range of 370 to 630 m² (440 to 750 yd²). Males often extend the boundaries during the breeding season to find females.

A shrew is a small mammal that is similar in appearance to a mole, and shrews are also quite closely related to moles and hedgehogs. People refer to many different animals as "shrews," but researchers recognize only members of the family Soricidae as true shrews. Scientists recognize over 385 different species of these animals across the world!

Shrew Control and Treatments for the Home, Yard and Garden

While it may appear small and gray, the shrew is one of the most voracious mammalian predators on the planet. Shrews are abundant and widespread, found on five continents in a variety of habitats. In the United Kingdom, there are an estimated 50 shrews per hectare in woodlands, with a country-wide population of more than 40 million shrews.

Shrews are almost like rats or mice in appearance. Depending on where you live in the US, these small mammals can really disrupt the ecosystem in your garden. They're often confused with mice, although they don't have the long tails that rats do, and they're often found in the garden eating up slugs, snails and other bugs.

Shrews are small, mouse-sized mammals with long snouts, small eyes and five clawed toes on each foot. A shrew's head is much narrower than a rodent's, and shrews often have dark-tipped teeth. This is a mineral pigmentation which serves to protect against tooth wear. Unlike rodents, shrews' teeth do not grow continuously, so they cannot afford to wear down.
Obscure impacts demystified: Ionizing radiation

Today's topic in our obscure impact series: ionizing radiation. We have all heard of it, but what is it really? What can we do about it? And how should we account for it in our life cycle assessments? With this series, PRé contributes to the goal of the LCA community to cover everything in their assessments, including impacts that are not directly top of mind for most people.

Warning: ionizing radiation!
Ionizing radiation is probably the impact category with the best-known symbol. You might have seen it in a laboratory, a hospital or a post-apocalyptic blockbuster. Regardless of where you learned about it, you immediately know what the trefoil (the black circle surrounded by three blades) portrayed on a bright yellow background means: danger. But why is ionizing radiation so dangerous? And is it really that dangerous? How do we measure ionizing radiation, and how do we stop it?

What is ionizing radiation?
First, let's take a look at how ionizing radiation is emitted. It is emitted by radioactive materials called radionuclides: elements (atoms) that have excess nuclear energy. This makes them unstable, with a chance of disintegrating into a different element. During this process, the excess energy is released as ionizing radiation, emitted as a particle or an electromagnetic wave.

As the name implies, ionizing radiation can 'ionize' an atom or molecule: it carries sufficient energy to detach electrons from the atoms or molecules it encounters. A neutral atom or molecule has an equal number of positive charges (protons) and negative charges (electrons). Detachment of an electron due to ionizing radiation leaves the atom or molecule as a positively charged ion.

Not all forms of radiation are ionizing. Non-ionizing radiation is very low in energy and therefore does not have the ability to detach electrons from other molecules. Examples of non-ionizing radiation are radio waves, microwaves and visible light.

Under normal circumstances, humans are not able to detect ionizing radiation with their senses, so its presence can only be indicated and measured with specialized detection instruments such as Geiger counters. Only extremely high doses of ionizing radiation can be sensed, as a burning feeling on the skin. Detection instruments are critically important, because exposure to too much ionizing radiation presents significant health hazards.

Why is it a problem?
Ionizing radiation has the potential to interact with and change molecules, and to damage or even kill cells. The amount of radiation absorbed by a person's body is called the radiation dose. Depending on this dose, serious health problems can occur because of the cell damage. At high doses, large numbers of cells can be damaged by the radiation, possibly resulting in impaired organ function, skin burns or even death. Direct death from exposure to ionizing radiation is very uncommon and seen only in extreme cases. Examples include the atom bombs that concluded World War II and the accidents in Chernobyl and Fukushima, although the latter registered only one direct death from radiation. More common are exposures at low doses, where fewer cells are damaged. The damaged cells can often repair themselves without any consequences. However, sometimes the affected cells are not repaired correctly, which increases the risk of long-term health problems such as cancer.
Young people face increased health risks from ionizing radiation, because they have more cells that are dividing rapidly and their longer life span gives cancers more time to develop. Ionizing radiation can also be dangerous to animals and plants. However, they mostly have much higher radioresistance, meaning that they can better withstand ionizing radiation thanks to cellular radioprotection mechanisms. Examples are higher levels of protective proteins, increased gene expression and altered DNA repair. Because there is no naturally occurring selection pressure that advantages organisms with better resistance to high doses of radiation, radioresistance is the result of evolutionary adaptation to other environmental extremes.

What causes ionizing radiation?
There is always a certain level of background radiation coming from natural sources. In our soils, water and the air we breathe, over 60 naturally occurring radioactive materials can be found. This means that we inhale and ingest radionuclides on an everyday basis. This is usually not harmful, because the annual background dose is generally low. Ionizing radiation also arises from human-made sources in a wide range of fields, such as nuclear power generation, medical devices for diagnosis, research, manufacturing and construction.

Measuring ionizing radiation
The most common life cycle impact assessment (LCIA) methods include ionizing radiation as one of the impact categories: for instance, the ReCiPe 2016 Midpoint, ILCD 2011 Midpoint+ and Environmental Footprint 3.0 methods. All methods include a wide variety of radionuclides. Some of them are well known and often associated with radiation, like uranium and plutonium. Others, like americium and ruthenium, are less familiar. For all ionizing radiative elements, both waterborne and airborne emissions are considered in LCIA methods.

How can we stop ionizing radiation?
Let's start with the bad news: there is no way to stop it. Ionizing radiation is found everywhere: traces of radionuclides in your granite countertop, radioactive potassium-40 in bananas, and direct radiation from the sun, especially at high altitudes in planes. But the good news is, these are only low doses, and, as long as you do not eat too many bananas or spend too much time in a plane, our bodies are able to recover from those smaller doses. High-dose applications, like X-ray scanners and particularly nuclear energy, are much more important. Since the end of World War II, radiation has often been criticized and protested. As a consequence, high-dose applications are operated with caution and thus cause limited exposure. Nevertheless, the extraction, processing and disposal of uranium for nuclear energy production is a major source of ionizing radiation. Therefore, it is common to see high ionizing-radiation impacts in studies where nuclear power is a major electricity source, as for example in the French electricity mix. Using less nuclear energy would be an effective way to lower the ionizing-radiation impact; however, this comes at a price. After all, nuclear energy is also a very low-carbon energy source. As so often in sustainability, it is all about trade-offs. Luckily, we have LCA to support such decision making!

We hope you enjoyed this article! Please let us know which other LCA indicators you'd like to read about and spread the word on Twitter or LinkedIn using the hashtag #ObscureImpacts.
The magnitude of the induced magnetic moment is very small, and its direction is opposite to that of the applied field. These materials are repelled by a magnetic field and do not retain any magnetic properties when the external field is removed. Earnshaw's theorem bears on the phenomenon: a magnetic field focused one way must be less focused in another direction, so static arrangements of ordinary magnets cannot levitate stably, while diamagnets can. Small animals, such as frogs, can be levitated in this way, as has been demonstrated by experiments in small tubes.

diamagnetism – a phenomenon exhibited by materials, like copper or bismuth, that become magnetized in a magnetic field with a polarity opposite to the magnetic force; unlike iron, they are …

Whenever two electrons are paired together in an orbital, or their total spin is 0, they are diamagnetic electrons; a paramagnetic electron is an unpaired electron. Any time two electrons share the same orbital, their spin quantum numbers have to be different. Atoms with all diamagnetic electrons are called diamagnetic atoms, and the magnetism of an element can be read from its electronic configuration. This is mainly because paramagnetic materials have unpaired electrons, whereas diamagnetic materials have none of their electrons unpaired. A diamagnetic substance is one whose atoms have no permanent magnetic dipole moment; the resultant magnetic moment in an atom of a diamagnetic material is zero. Diamagnetism is observable in substances with a symmetric electronic structure (such as ionic crystals and rare gases) and no permanent magnetic moment. Examples of diamagnetic materials include water, wood, and ammonia: NH3 is diamagnetic because all the electrons in NH3 are paired, and the sodium ion is likewise diamagnetic. Diamagnetism is also seen in most organic molecules, copper, gold, bismuth, and superconductors; graphite and bismuth are the strongest diamagnetic materials. Many common materials, such as water, wood, plants, animals, diamonds and fingers, are usually considered to be non-magnetic, but in fact they are very weakly diamagnetic; most living organisms are essentially diamagnetic.

Diamagnetic, Paramagnetic, and Ferromagnetic Materials
When a material is placed within a magnetic field, the magnetic forces of the material's electrons will be affected. Diamagnetism is a quantum mechanical effect that is found in all materials, but for a substance to be termed "diamagnetic" it must be the only contribution to the material's magnetic behavior. Because all atoms possess electrons, all materials are diamagnetic to some degree. Essentially, diamagnetic behavior is the change in orbital angular momentum induced by an external magnetic field: when an external magnetic field is applied, dipoles are induced in the diamagnetic material in such a way that the induced dipoles oppose the external field. The effect is created by a change in the orbit of electrons, which generates small currents that oppose magnetism from external sources. In other words, apply a magnetic field to a diamagnetic substance and it slightly resists it. A diamagnetic material has a permeability less than that of a vacuum, and a diamagnetic compass will point across the magnetic field. This is just the opposite of the behavior exhibited by paramagnetic materials: the term paramagnetic refers to the attraction of a material to an external magnetic field, while the term diamagnetic refers to the repulsion of a material from an external magnetic field. In paramagnetic and ferromagnetic substances, the weak diamagnetic force is overcome by the much stronger paramagnetic or ferromagnetic response. Theories related to diamagnetic materials include the Bohr-van Leeuwen theorem, which states that a purely classical system in thermal equilibrium can have no net magnetization. Usually, diamagnetism is so weak it can only be detected by special instruments. Bismuth and antimony are classic examples of diamagnets.

Two usage examples from the literature: "For example, placing diamagnetic metals such as aluminium or zinc at the centre of the phthalocyanines improves the photosensitization of the compound for use in PDT." And: "In the iron diamagnetic form, magnetic anisotropy arises from the heme, aromatic moieties, and elements of secondary structure."

A worked example: the O atom has 2s²2p⁴ as its valence electron configuration. The four 2p electrons occupy three orbitals, so two of them must remain unpaired; therefore O has 2 unpaired electrons and is paramagnetic. As an exercise, indicate whether boron atoms and F⁻ ions are paramagnetic or diamagnetic.
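To make the electron-counting rule concrete, here is a minimal sketch (ours, not from the sources quoted above) that applies Hund's rule within a single subshell: electrons occupy the orbitals singly before any pairing occurs. It also answers the exercise: boron (2p¹) is paramagnetic, while F⁻ (2p⁶) is diamagnetic.

```java
// Counts unpaired electrons in one subshell: s=1, p=3, d=5, f=7 orbitals.
// By Hund's rule, electrons fill the orbitals singly before pairing up.
public class UnpairedElectrons {
    static int unpaired(int electrons, int orbitals) {
        // Up to 'orbitals' electrons sit alone; each further electron pairs one of them.
        return electrons <= orbitals ? electrons : 2 * orbitals - electrons;
    }

    public static void main(String[] args) {
        System.out.println(unpaired(4, 3)); // O 2p^4  -> 2 unpaired: paramagnetic
        System.out.println(unpaired(1, 3)); // B 2p^1  -> 1 unpaired: paramagnetic
        System.out.println(unpaired(6, 3)); // F- 2p^6 -> 0 unpaired: diamagnetic
    }
}
```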
Diamagnetism (Repelled by Magnetic Field)
As shown in the video, molecular oxygen (\(O_2\)) is paramagnetic and is attracted to the magnet. In contrast, molecular nitrogen, \(N_2\), has no unpaired electrons and is diamagnetic; it is therefore unaffected by the magnet. Sebald Justinus Brugmans first observed diamagnetism in 1778, noting that antimony and bismuth were repelled by magnets.

The best diamagnets are superconductors, which expel a magnetic field while transforming into the superconducting state, as explained by the Meissner effect. Diamagnetism is strong enough in superconductors to be readily apparent. Paramagnetism, by contrast, is a form of magnetism whereby certain materials are weakly attracted by an externally applied magnetic field and form internal, induced magnetic fields in the direction of the applied field.

Diamagnetic Levitation
When an external magnetic field is applied to a diamagnetic substance such as bismuth or silver, a weak magnetic dipole moment is induced in the direction opposite the applied field, so diamagnetic materials are weakly repelled by magnetic fields; a freely suspended diamagnetic sample will orient east-west, across the field. The underlying induced currents are related to Faraday's law of magnetic induction. Stable equilibrium results in objects floating in free space at the point where the overall magnetic field strength is at a minimum. Another demonstration of diamagnetism may be seen using water and a super magnet (such as a rare-earth magnet): if a powerful magnet is covered with a layer of water that is thinner than the diameter of the magnet, the magnetic field repels the water. Small samples of water can be levitated, and magnetic objects have been suspended for hours in vacuum environments without added power. The molecules in living things, including water and proteins, are diamagnetic, with only gravity as the resisting force when diamagnetism is at work. Millions of times weaker than an ordinary magnetic force, diamagnetism can nonetheless cause levitation under the right circumstances.
The world's largest review to date has recently established just how important urban green space is for staving off premature death.

Some 63% of people in the United States live in cities. Some cities are greener than others (Philadelphia, for example, has a long history of urban greening and is even looking to expand beyond its 20% of green space), and northern cities tend to have less green space than southern ones. Now, the World Health Organization (WHO) is looking to highlight the importance of green space for well-being and public health.

Urban green spaces such as parks, sports fields, woods, lakesides, and gardens give people space for physical activity, relaxation, peace, and an escape from heat. Green spaces are also associated with better air quality, reduced traffic noise, cooler temperatures, and greater biodiversity. However, many of the studies in this area have only looked at a particular point in time and have varied in how they measured people's use of green space. Now, the most comprehensive review to date has analyzed nine longitudinal studies spanning seven countries, 8 million people, and several years of follow-up. The Barcelona Institute for Global Health (ISGlobal) in Spain conducted this review in collaboration with Colorado State University in Fort Collins.

"The study shows that green space in cities reduces premature mortality," explained Dr. Mark Nieuwenhuijsen, director of the Urban Planning, Environment, and Health Initiative at ISGlobal. "Cities often don't have much green space," he added. "Green space is also good for climate mitigation through reducing heat island effects in cities and reducing air pollution effects."

"Green space is also good for carbon sequestration. So there are multiple beneficial effects. And increasing green space can, therefore, reduce a significant number of premature deaths in cities."
Dr. Mark Nieuwenhuijsen

This study, set apart by its magnitude, was prompted by the WHO's need to develop a health impact assessment tool for green interventions in cities, Dr. Nieuwenhuijsen explained for Medical News Today. More specifically, the WHO needed a robust picture of the link between green space and premature mortality in order to design a tool for green interventions.

"We systematically searched for and included all the cohort studies we could find on NDVI, an [easily] obtainable green space measure, and premature mortality, and conducted a meta-analysis," said Dr. Nieuwenhuijsen.

The research team, using the available evidence from studies that had followed the same groups of individuals over a number of years, analyzed the availability of green space (from satellite images) and premature death due to all causes. The studies they reviewed covered more than 8 million people across the U.S., Canada, China, Italy, Spain, Switzerland, and Australia.

The researchers found that for every 0.1 increment in vegetative score (NDVI) within 500 meters of a person's home, there was a 4% reduction in premature mortality. These results show just how important green space is when strategizing public health.

"Many cities are already greening, but this study provides further support that they should continue greening. Also, cities that do not have much green space should increase it: new parks, trees [on] roads, more grasslands, [etc.]," said Dr. Nieuwenhuijsen.

The researchers are now using their results to estimate how many premature deaths cities around the world could prevent if they were to reach their green space goals.
On what might come next, Dr. Nieuwenhuijsen told us: “The green space measure we used (NDVI) is a bit crude, although it works well. But the next stage is to find if some green spaces work better than others and exactly how the benefits occur to improve further.” Beyond being key to public health and preventing premature death, researchers cite the increase in biodiversity and mitigation of climate change as compelling reasons to bump up green spaces and make cities more sustainable and livable.
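The headline effect size lends itself to a quick back-of-envelope calculation. The sketch below is our illustration, not part of the study: it treats the reported 4% reduction per 0.1 NDVI increment as a hazard ratio of 0.96 and compounds it multiplicatively for larger increments; whether the effect really compounds this way is an assumption.

```python
# Back-of-envelope sketch of the reported effect size: a 4% drop in
# premature mortality per 0.1 NDVI increment, compounded multiplicatively.
# The compounding assumption is ours, not the study's.
def mortality_reduction(ndvi_increase, hr_per_step=0.96, step=0.1):
    """Fraction of premature mortality avoided for a given NDVI increase."""
    return 1 - hr_per_step ** (ndvi_increase / step)

for delta in (0.1, 0.2, 0.3):
    print(f"NDVI +{delta}: ~{mortality_reduction(delta):.1%} reduction")
# NDVI +0.1: ~4.0%, +0.2: ~7.8%, +0.3: ~11.5%
```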
Learn Cursive Step-By-Step Lesson # 1 This is the first in a series of short videos designed to introduce cursive forms and demonstrate how to practice the movements using the Peterson movement-based approach. The video shows the 4 Peterson Basic Strokes and how they relate to the step-by-step concept, using the "loop top" shape to produce two cursive letters by changing size. The lesson shows how to practice moving with the voice to write sets of each target letter. If you are not familiar with the Peterson method, the videos in this series provide a quick illustration. The Peterson approach to cursive is the only one that lets students begin to apply it step-by-step. Learn Cursive Step-By-Step Lesson # 2 This lesson introduces the "Sharp Top" basic shape and two more one-step letters made with tall and small sharp tops. The addition of the letters "i" and "t" allows a few simple words to be added to the goals for student practice. The step-by-step concept is reinforced by applying the letters to words. There is a short review of lesson one for reinforcement, but sequential use of the lessons is recommended. Learn Cursive Step-By-Step Lesson # 3 The series continues by introducing another one-step letter and the first two-step letter. The additional letters enable more words, and we hope you will guide the student to find some that are not illustrated. The step-by-step approach lets a student practice the steps until he or she can put all of the steps together easily. The individual letter rhythms add together, enabling internalization of a rhythmic pattern for the whole word. Learn Cursive Step-By-Step Lesson # 4 Lesson four introduces two of the more difficult lowercase letters. The "r" and the "c" typically require more practice because the extra moves needed to shape the letter tops introduce odd rhythms into the typical up-and-down beat of the majority of letters. The video leads air writing using the action words to get the student started on internalizing the rhythmic movement patterns. Click to Download: Learn about the movement-based strategy that lets you teach, and your student practice, fluent movement. Supporting research is referenced in the strategy paper. The Teaching Task Click Here to Watch Web Presentation. The link above will take your browser to a presentation that will run automatically in your browser. The program runs just under twenty minutes. You will need speakers or a headset for the sound. The presentation provides an overview of the teaching task, specifics on each step in the method for instruction, and tips on correlation that will help with transfer of learning into applied work. Why Trace and Copy Activities Fail Click Here to Watch Web Presentation. The link above will take your browser to a new and powerful presentation illustrating WHY TRACE AND COPY ACTIVITIES FAIL SO MANY CHILDREN. If your child labors to put thoughts on paper, this 12-minute slide show will show you one probable source of the problem. Learn why Peterson does not recommend tracing of models with a pencil or crayon, and find out how we use our unique color/rhythm models to teach better movement. The Muscle Memory Story Click here to download. Written for a grade four student, the story translates recent motor science revelations about handwriting movement into lesson objectives a student can understand. Learn about the real goal for practice activities. Do they ask why practice is necessary - moan and groan when you suggest it? This story should make a difference, and it describes a way to make practice fun.
Why Choose Peterson? The most important reason to choose this program is simple: we offer a unique strategy. That means the teaching and learning activities are different from the program you have been using. We provide a developmental curriculum and simple materials for teaching fluent handwriting. This is NOT the typical "Trace & Copy" strategy used by virtually every other program out there. This strategy is movement-based. That means we lead you to teach your student "how to move" using a planned series of directed movement exercises. This page provides a bit of history and explains why Peterson Directed Handwriting is different from other programs. The long, successful history is another good reason to choose the Peterson Directed Handwriting strategy. We thank you for your interest and urge you to contact us when questions arise. Peterson Directed Handwriting was founded in 1908 by Dr. P. O. Peterson. While training in the Spencerian and Palmer methods, Dr. Peterson recognized a connection between rhythm and fluency. He developed a unique curriculum for teaching the American Standard Alphabet, which included learning how to move with smooth rhythm. He changed the way letters were taught to enhance rhythmicity. Initially, he operated a school training adults for the business workplace. The success of his methods soon led school directors to hire Peterson to train teachers in his methods. The Peterson curriculum has been in continuous use in schools and homes ever since.
Bats are nocturnal animals and navigate mainly by echolocation – they "see" by using the echoes from their own high-pitched squeaks, just as dolphins do. Bats roost in a range of places, including hollow trees, open loft spaces, and underground caves and tunnels. All bat species in the UK are thought to be declining because of a combination of factors, including loss of habitat and decreased numbers of insects (their food) caused by pesticides. The rangers are currently carrying out survey work in the Park to discover the species which live here. Those recorded so far include pipistrelles, brown long-eared bats and noctules. For more information on bats, visit the Bat Conservation Trust website – there are information leaflets to download and activities for schools.
The Importance of Play By Dr. Jenn Berman A report issued by The Alliance for Childhood revealed that kindergartners' play time has greatly diminished. The study found that children spend four to six times as long being instructed and tested as they do in free play. In the hopes of creating a smart child, many parents discount the importance of play. But play is crucial to developing minds. Studies show that play promotes problem solving, creativity, learning, attention span, language development, self-regulation and social skills; increases IQ; and even helps children work through difficult life events. Play is the "work" of children. Here are eight reasons parents need to fight for play in the schools and make sure their children have free play at home. Play develops problem-solving abilities. Researchers put a desirable toy in a clear box and told four- and five-year-olds to get the toy out of the box without moving out of their chairs or leaning towards it. One group of kids was allowed to play with sticks and toys, a second group was shown a solution to the problem but was not allowed to play, and a third group got neither opportunity. The children who were allowed to play did much better than either of the other two groups. They worked more eagerly and persistently and demonstrated better problem-solving abilities. Children get to experiment with being in charge. Throughout their day, kids are told what to do. During play, children get to experience what it feels like to be in charge and gain a sense of mastery. Play with other children helps social development. Play helps children learn important social skills like taking turns, collaboration, following rules, empathy, self-regulation, and impulse control. Play helps children assimilate emotional experiences. Pretend play, in particular, helps children integrate emotional experiences they need to work through. It allows them to express things that they may not be sophisticated enough to talk about with adults. It improves concentration. Attention and concentration are learned skills. Play is one of the most natural, enjoyable ways for a child to begin developing these skills. We have all seen a child so lost in play that they don't even hear a parent calling their name. This focus is the same skill that one needs years later to write a term paper, listen to a lecture or perform a piano concerto. It helps develop mathematical thinking. Because play teaches children about the relationships between things, it actually helps develop the type of reasoning that aids mathematical performance. According to Professor Ranald Jerrell, an expert in the development of mathematical thinking, "Experimental research on play shows a strong relationship between play, the growth of mathematical understanding, and improved mathematical performance." Play promotes language development. Play, especially dramatic play, requires children to use and be exposed to language. In a study of four-year-olds who frequently engaged in socio-dramatic play, researchers found that, compared to a non-socio-dramatic play group, these children exhibited an increase in the total number of words used, the length of their utterances, and the complexity of their speech. The repetition of play creates neural pathways. Each time a child performs a play activity, like stacking blocks, the synapses between brain cells are activated, and over time the amount of chemical signaling needed to make that connection becomes less and less, making it easier to perform the task.
In a rush to give our children academic and intellectual advantages, misguided schools and parents are pushing children to focus on reading, writing and testing. This, however, comes to the detriment of play, which is so crucial to their development. In a study of academic preschools and traditional play-focused preschools, researcher Kathy Hirsh-Pasek found that there were no differences in the intellectual skills of the children in either group. She did, however, find that the children in the academic group were more anxious and less creative than the children in the other group. Dr. Jenn Berman is a Marriage, Family and Child Therapist in private practice in Los Angeles. She is the author of the LA Times best-selling books SuperBaby: 12 Ways to Give Your Child a Head Start in the First 3 Years and The A to Z Guide to Raising Happy Confident Kids.
Theories and Principles Unit 4 DTLLS What is the definition of theory? To me, a theory is a system of ideas intended to explain something, based on a general overview. Principles, to me, are the positions I hold on a chosen topic. Geoff Petty (2009) states that "every teacher and every learner has a theory about learning." To be able to form my own theories and principles on planning and enabling learning, I first need to understand what is accepted by others. Research I understand there are many different theories relating to teaching and learning. Those that I have looked at contain Behaviourist, Cognitive and Humanist elements. These are not new concepts: although some of these theorists are deceased, their work is still used in practice. Behaviourism is primarily associated with Pavlov (classical conditioning) in Russia, and with Thorndike, Watson and particularly Skinner in the United States (operant conditioning). In educational settings, behaviourism implies the dominance of the teacher, as in behaviour modification programmes. It can, however, be applied to an understanding of unintended learning. Classical conditioning, in its simplest form, is a type of conditioning that associates an external stimulus (in Pavlov's original experiment, a bell) with the arrival of a second stimulus (the food); this eventually produced a response to the bell that previously only the food would have produced. B.F. Skinner's work was influenced by Pavlov's experiments and the ideas of John Watson, the father of behaviourism. Bibliography: Skinner, B.F. (reprint 2003). The Technology of Teaching. Cambridge, MA: B.F. Skinner Foundation. Petty, G. (reprint 2009). Teaching Today: A Practical Guide, 4th edition. Nelson Thornes. Holt, J.C. (1923–1985) (revised edition). Classics in Child Development. Knowles, M. and Swanson, R.A. The Adult Learner: The Definitive Classic in Adult Education and Human Resource Development. Harkin, J., Turner, G. and Dawn, T. (2001). Teaching Young Adults. London: Routledge. Rogers, C. and Freiberg, H.J. (3rd edition). Freedom to Learn.
The short answer: No The long answer: The Sun only has an analemma (the figure-8 shape traced on the sky as a result of imaging the Sun at the same time (good luck avoiding clouds) each day) because of two factors. - The Earth is tilted on its axis, and that axis remains pointed in the same direction in space (toward Polaris). (A more precise explanation is that the axis does precess, or "wobble" like a spinning top, but in a very long cycle that takes nearly 26,000 years to complete. On the scale of a single human lifespan it doesn't appear to move by any noticeable amount, and very precise measurements are required to detect the motion.) - The Earth's orbit is an ellipse, not a circle. While nearly circular, it isn't perfect. The point along an orbit at which an object is nearest to the barycenter of the orbit (in our case, to the Sun) is called the periapsis. Since this periapsis is for an object orbiting the Sun, it gets a special name: the "perihelion". The point of the orbit located farthest from the barycenter is called the apoapsis; again, it gets a special name because it's an orbit around the Sun, the "aphelion". The consequence of the Earth getting nearer to the Sun during half of its orbit, then farther from the Sun during the other half, is that Earth's velocity through space changes. As we get nearer to the Sun it is as if we are "falling", so we speed up (our velocity through space is faster); as we get farther from the Sun we slow down (our velocity through space is slower). Earth needs roughly 365 days (365.24) to complete an orbit, and there are 360° in a circle. Each ordinary day on Earth (formally known as a "solar day") is 24 hours, and in those 24 hours we move forward in our orbit around the Sun by just slightly less than 1°. It turns out that the amount of time needed for the Earth to spin 360° on its axis (one "sidereal day") is just slightly less than a solar day: roughly 23 hours and 56 minutes (roughly 4 minutes short of a solar day; that's a rounded value). So why the difference between a "solar" day and a "sidereal" day? Side note: sidereal means "of the stars"; it comes from the Latin sidus (genitive sideris), meaning "star". It is pronounced "sy-DEER-ee-al", not "side-real". A sidereal day is therefore the "day of the stars". In other words, if you note the time when a star passes through the meridian (the imaginary north/south line that separates the "west" side of the sky from the "east" side of the sky) and then wait to see when that same star passes through the meridian on the following day, the amount of time that has passed will be roughly 23 hours and 56 minutes, not 24 hours. Once the Earth has completed one 360° revolution on its axis (one sidereal day), it will also have moved forward in its orbit by nearly 1°. This means that while the distant stars appear at the same point in the sky, the Sun will not; it will be slightly behind the meridian.
We will need to let the Earth spin for roughly another 4 minutes until the Sun returns to the same position. But even this has exceptions. Recall that the Earth speeds up and slows down as it travels through space between its perihelion and aphelion points. This means the Sun won't be at precisely the same position in the sky; it will be fractionally ahead or behind, depending on whether the Earth is speeding up or slowing down. This accounts for the left/right variation in the Sun's position in the analemma. The Earth's axial tilt accounts for the up/down variation in the Sun's position. And when you trace out a whole year, you get a shape that resembles a figure 8. Stars, on the other hand, are much farther away. If you image a star once every 24 hours (assuming no clouds block your view), then each night the star will shift forward by roughly 1°; after enough days the star will be below the horizon and you will no longer be able to image it. This means the shape you'll get is a long arc, but after a few months the star will no longer be visible at the same time of night (you would have to observe it at a different time of day, and/or it may be lost in the daytime sky, depending on the location of Earth in our annual orbit around the Sun).
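The solar-versus-sidereal arithmetic above is easy to check. Here is a small sketch (ours, for illustration) that recovers the roughly 23 h 56 m sidereal day from the fact that Earth completes about 365.24 solar days per orbit while making one extra rotation relative to the stars:

```python
# Rough check of the solar-vs-sidereal day arithmetic: Earth makes
# ~365.24 solar days per year but ~366.24 rotations relative to the
# stars, because one rotation per year is "used up" by the orbit.
SOLAR_DAY_S = 86400.0
SOLAR_DAYS_PER_YEAR = 365.2422  # approximate

sidereal_day_s = SOLAR_DAY_S * SOLAR_DAYS_PER_YEAR / (SOLAR_DAYS_PER_YEAR + 1)
hours, rem = divmod(sidereal_day_s, 3600)
minutes, seconds = divmod(rem, 60)
print(f"Sidereal day: {int(hours)}h {int(minutes)}m {seconds:.0f}s")              # ~23h 56m 4s
print(f"Shortfall vs solar day: {(SOLAR_DAY_S - sidereal_day_s) / 60:.1f} min")   # ~3.9 min
```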
Spanish lessons videos Once they've gotten a clear idea of what to do, making their own videos (and watching their own videos) will help their Spanish in many different ways. Still unsure? Have a look at these major reasons why videos work for language teaching: - Videos develop their writing and speaking skills. Of course, it's not just about getting in front of a camera. It's about putting together a script, rehearsing, memorizing lines and practicing pronunciation, all with the ability to go back and film a scene over again, or add content if they think they can make their final product better. - Videos encourage reflection and offer great tools for peer assessment. The fantastic thing about videos is that they can be watched over and over again. I used to make students perform little role plays in class. Now, I ask them to film themselves, and we watch their work together. The first reflection process takes place before the submission date, as I know the majority of my students will watch what they've recorded and re-film specific scenes after checking how they can improve their spoken Spanish for the video. The second moment of reflection takes place in class, where we watch the content together and pause to discuss key questions, provide feedback to one another and focus on specific criteria or target concepts. - Videos provide plenty of opportunities to link content and culture. Like every other language teacher, I love to make culture part of my lessons, but this isn't always an easy task. Videos are a great way to do it. As you'll see when you have a look at the list of video activity ideas, this resource enables students to practice their Spanish while they cook typical food, research different countries or explore Spanish music. - Videos promote student-led lessons and are a great support for revision sessions. Working in a mixed-ability class has made me realize that a homemade movie can be a great tool to generate student interaction. Get them to be the teachers both in and out of lessons! A video gets students to explain things to each other. For the ones that understand certain topics featured in videos, video-related activities build confidence and deepen their thinking. Plus, they provide an opportunity to reflect on what they know and how to explain this knowledge to others. For the ones that don't understand the grammar point well, having a fellow student explain the content usually means that they will feel more comfortable asking questions. In addition to this, pupils usually share a common way of expressing themselves. This probably means they understand each other better than they understand us! - Videos engage students and break them out of the routine. Students love to have the opportunity to do something different. They go crazy when they're given the chance to take their phones out and actually use them during lessons. We're very lucky in this regard, as not every subject can introduce video as easily as we do in language classrooms. This probably means that we're going to provide a different learning environment compared to what they're used to. You'll be amazed by the impact this has on your lessons. Students will arrive at Spanish class excited for a change of pace and some fun learning time! All of these are strong reasons. Now it's time to turn them into strong ideas to make videos relevant to your Spanish language teaching. Lucky for you, we've come up with a list of the very best!
You can basically include video in anything you do, although some topics just seem to provide perfect opportunities for audiovisual content. Something we must keep in mind at all times is the impact this tool is going to have on their learning, so we can ensure videos are about more than just the fun. Generally, I ask students to create a script prior to filming. I only check this script if I have specific concerns after watching their work. For example, if a student cannot conjugate a single verb correctly, I would ask for the script to get a deeper understanding of where the problem lies. My biggest tip is to spend a lesson watching the videos and getting students to assess their peers' work. To help them with the feedback process, you can provide students with a success criteria rubric or a list of things to look out for. If you haven't got a whole lesson to dedicate to videos, you can play two or three short ones at the beginning of class as a starter activity. Students will eventually have a portfolio of recordings that can count towards their speaking grades. You'll see that they'll develop their speaking skills much more than they did without videos. Have a look at the ideas below and get inspired! It's not just about applying these activities directly, but about adapting them to what suits you and your classroom best. Some of the ideas have already been used by many teachers, so make sure you check out the different links to YouTube to see how much fun they can be. Other ideas have resulted from a long-term brainstorming process between my students and myself, and all of them have proven very successful. 1. Start a Video Blog Are you the kind of teacher that sets a written assignment on a regular basis? I was, and it gave me so much marking and grading to work on at home. I used to hate myself for creating this laborious situation.
Scientists have rules for naming everything from mushrooms to ice floes on Pluto. And that includes features on the bottom of the ocean: volcanoes, canyons, reefs, and many others. For most of human history, naming features on the sea floor wasn’t a problem. With a few exceptions, the features were hidden from view. And that didn’t change in a major way until the invention of sonar in the last century, allowing scientists to discover thousands of features. Individual scientists named many of the newly found features. But that created some confusion. Several features might share a name, for example, or one feature might have several names. So in 1975, scientists created an international committee to assign names to features found outside the waters of individual countries. The committee develops guidelines for assigning names. And it works to align its names with those assigned by other groups over the years. The guidelines say that names should be short, simple, and easy to use. They can honor scientists or others involved with marine exploration, or ships used to discover features. Groups of features can be given a collective name, with individual features following the theme. A chain of underwater volcanoes, for example, is known as Electricians Seamounts. Individual summits in the chain are named for scientists who did important work in the field of electricity, such as Volta and Ampere. These guidelines help bring a little order to the amazing landscape at the bottom of the ocean.
Simple definition of accretion:
- a gradual process in which layers of a material are formed as small amounts are added over time
- something that has grown or accumulated slowly
- a product or result of gradual growth
Full definition of accretion:
1: the process of growth or enlargement by a gradual buildup: as a: increase by external addition or accumulation (as by adhesion of external parts or particles) b: the increase of land by the action of natural forces
2: a product of accretion; especially: an extraneous addition (accretions of grime)
Examples of accretion in a sentence: rocks formed by the slow accretion of limestone; there was an accretion of ice on the car's windshield.
Origin of accretion: Latin accretion-, accretio, from accrescere — more at accrue. First known use: 1615.
Legal definition of accretion:
1: the process or a result of growth or enlargement: as a: the increase or extension of the boundaries of land, or the consequent acquisition of land accruing to the owner, by the gradual or imperceptible action of natural forces (as by the washing up of sand or soil from the sea or a river, or by a gradual recession of the water from the usual watermark); also: accession in which the boundaries of land are enlarged by this process — compare avulsion, reliction b: increase in the amount or extent of any kind of property or in the value of any property (accretions to a trust fund resulting from the increase in value of…securities in which its corpus is invested — In re Estate of Gartenlaub, 244 P. 348 (1926)) Editor's note: Accretion in value of the principal of a trust is generally not considered income. c: enlargement of a bargaining unit by the addition of new employees
2, in the civil law of Louisiana: the passing to an heir or conjoint legatee of the right to accept a portion of a succession, resulting from the failure of a coheir or colegatee to take his or her own share
Date: March 2019 The Curiosity rover on Mars has discovered key ingredients that are important for the establishment and sustainability of life as we know it: nitrites (NO2⁻) and nitrates (NO3⁻). Curiosity found them in soil and rock samples taken as it traversed Gale Crater, the site of ancient lakes and groundwater systems on Mars. To understand how fixed nitrogen may have been deposited in the crater, researchers have recreated the early Martian atmosphere here on Earth. A warm climate, liquid water on the surface and the production of nitrates together supply key elements necessary for life. Read more at NASA-JPL.
Cercis is a genus comprising 6-10 species of flowering plants in the family Fabaceae, native to warm temperate regions. The small deciduous shrubs or trees in this genus are grown for their attractive heart-shaped leaves, which take on good autumn colors, and pinkish-red flowers that appear in early spring on leafless shoots. Cercis species are also food plants for the larvae of some Lepidoptera (butterfly and moth) species, including the Mouse Moth. Cercis siliquastrum (Judas Tree), 10 m high and across, is native to southern Europe and southwest Asia. It is a remarkable low tree with a flat, spreading head, bearing a profusion of magenta-pink flowers in early spring, which appear before the leaves. The magenta-pink flowers are produced on year-old or older growth, including directly on the trunk, into late spring. The flowers are hermaphrodite (having both male and female reproductive organs) and are pollinated by bees. The tree also produces flat pods that hang vertically. Its leaves are an attractive glaucous green, turning rich yellow in autumn. It is a fine feature in the garden. Cercis canadensis (Eastern Redbud), 10 m high and across, is a large shrub or small tree native to eastern North America. It generally has a short, often twisted trunk and spreading branches. The heart-shaped leaves are tinged with bronze when they first emerge and turn yellow before they fall in autumn. Its showy flowers are light to dark magenta pink in color. They appear in clusters from March to May, on bare stems before the leaves. The flowers are pollinated by long-tongued bees such as blueberry bees and carpenter bees. The fruit is a flattened, dry, brown, pea-like pod, 5-10 cm long. The pod contains 10-12 seeds about 6 mm long: flat, elliptical and chestnut-brown, maturing from August to October. Cercis is a drought-tolerant tree and grows well in full sun or partial shade, in deep, fertile, well-drained soil. Propagation is by seed in autumn or semi-ripe cuttings in summer. Pests and diseases include Cacopsylla pulchella, leafhoppers, scale insects, canker, coral spot and verticillium wilt. Cercis canadensis 'Forest Pansy' Cercis occidentalis (Western Redbud)
What's in this article? What is ulcerative colitis? Ulcerative colitis is a disease that causes inflammation and sores (ulcers) in the lining of the large intestine (colon). It usually affects the lower section (the sigmoid colon) and the rectum, but it can affect the entire colon. In general, the more of the colon that's affected, the worse the symptoms will be. The disease can affect people of any age, but most people who have it are diagnosed before the age of 30. Symptoms of ulcerative colitis Ulcerative colitis symptoms can vary, depending on the severity of inflammation and where it occurs. Therefore, doctors often classify ulcerative colitis according to its location. You may have the following signs and symptoms, depending on which part of the colon is inflamed: - Diarrhea, often with blood or pus - Abdominal pain and cramping - Rectal pain - Rectal bleeding (passing small amounts of blood with stool) - Urgency to defecate - Inability to defecate despite urgency - Weight loss - In children, failure to grow Most people with ulcerative colitis have mild to moderate symptoms. The course of ulcerative colitis may vary, with some people having long periods of remission. Causes of ulcerative colitis The exact cause of ulcerative colitis is unknown, although it's thought to be the result of a problem with the immune system. The immune system is the body's defence against infection. Many experts believe ulcerative colitis is an autoimmune condition (when the immune system mistakenly attacks healthy tissue). The immune system normally fights off infections by releasing white blood cells into the blood to destroy the cause of the infection. This results in inflammation (swelling and redness) of body tissue in the infected area. In ulcerative colitis, a leading theory is that the immune system mistakes the "friendly bacteria" in the colon, which aid digestion, for a harmful infection, leading to the colon and rectum becoming inflamed. Alternatively, some researchers believe a viral or bacterial infection triggers the immune system, but for some reason it doesn't "turn off" once the infection has passed and continues to cause inflammation. It's also been suggested that no infection is involved and the immune system may just malfunction by itself, or that there's an imbalance between good and bad bacteria within the bowel. It also seems inherited genes are a factor in the development of ulcerative colitis. Studies have found that more than one in four people with ulcerative colitis has a family history of the condition. Levels of ulcerative colitis are also a lot higher in certain ethnic groups, further suggesting that genetics are a factor. Researchers have identified several genes that seem to make people more likely to develop ulcerative colitis, and it's believed that many of these genes play a role in the immune system. Where and how you live also seems to affect your chances of developing ulcerative colitis, which suggests environmental factors are important. For example, the condition is more common in urban areas of the northern parts of Western Europe and America. Various environmental factors that may be linked to ulcerative colitis have been studied, including air pollution, medication and certain diets. Although no single factor has so far been identified, countries with improved sanitation seem to have a higher proportion of people with the condition. This suggests that reduced exposure to bacteria may be an important factor. Who is at risk for ulcerative colitis?
Most people with ulcerative colitis don't have a family history of the condition. However, you're more likely to develop it if a parent or sibling also has the condition. Ulcerative colitis can develop in a person of any race, but it's more common in Caucasians. According to the Mayo Clinic, people of Ashkenazi Jewish descent have a greater chance of developing the condition than people in most other groups. Some studies show a possible link between ulcerative colitis and the use of the drug isotretinoin (Accutane, Amnesteem, Claravis, or Sotret), which treats cystic acne. How is ulcerative colitis diagnosed? A health care provider diagnoses ulcerative colitis with the following: - medical and family history - physical exam - lab tests - endoscopies of the large intestine The health care provider may perform a series of medical tests to rule out other bowel disorders, such as irritable bowel syndrome, Crohn's disease, or celiac disease, that may cause symptoms similar to those of ulcerative colitis. Read more about these conditions on the Health A-Z list. Treatment for ulcerative colitis Treatment for ulcerative colitis depends mainly on how severe the disease is. It usually includes medicines and changes in diet. A few people have symptoms that are long-lasting and severe, in some cases requiring more medicines or surgery. You may also need treatment for other problems, such as anemia or infection. Treatment in children and teens may include nutritional supplements to restore normal growth and sexual development. If you don't have any symptoms or if your disease is not active (in remission), you may not need treatment, but your doctor may suggest that you take medicines to keep the disease in remission. If you do have symptoms, they usually can be managed with medicines to put the disease in remission. It is often easier to keep the disease in remission than to treat a flare-up. When to see a doctor See your doctor if you experience a persistent change in your bowel habits or if you have signs and symptoms such as: - Abdominal pain - Blood in your stool - Ongoing diarrhea that doesn't respond to over-the-counter medications - Diarrhea that awakens you from sleep - An unexplained fever lasting more than a day or two Although ulcerative colitis usually isn't fatal, it's a serious disease that, in some cases, may cause life-threatening complications.
The Effect of Shorter Day Length on Winter Production by Lynn Byczynski The two primary environmental factors that affect plant growth are temperature and day length. Temperature is easy enough to understand: every plant species has a temperature range in which it will grow, and optimum temperatures in which it will thrive. Day length is a little more complicated, especially in combination with temperature. Understanding the relationship between the two can lead to more successful season extension and variety selection. The first thing to know is that the term day length is a misnomer in this respect: scientific research has confirmed that it's the length of the dark periods, not the length of the daylight periods, that predominantly controls plant growth. This fact was discovered long after day length became a widely used term in horticulture, and the term has stuck. Understanding the importance of dark periods can come in handy for the grower, though, because it can be used to modify plants' response to day length, leading them to bloom outside their normal season. More on that later. Many plant species have day length triggers that determine when they grow vegetatively and when they bloom. They may be long-day plants or short-day plants. Some plants do not react to day length; they are called day-neutral. Figuring Day Length at Your Latitude Day length is a function of latitude; all locations at the same latitude have the same amount of daylight on any given day. Day is equal to night, 12 hours each, at the two equinoxes, which mark the beginning of spring and the beginning of fall. The winter solstice marks the shortest day (longest night), and the summer solstice marks the longest day (shortest night) of the year. In the winter, days are longer the closer you are to the equator; in the summer, days are longer the farther you are from the equator. Compare day length in Maine and Florida on the summer and winter solstices:

| Location | Day Length at Summer Solstice | Day Length at Winter Solstice |
| --- | --- | --- |
| Portland, Maine | 15 hr, 26 min | 8 hr, 55 min |
| Miami, Florida | 13 hr, 45 min | 10 hr, 32 min |

How Day Length Affects Crop Production Most plants do not grow when day length is less than 10 hours. Even if the temperature is kept within the optimum range — for example, in a climate-controlled greenhouse — most plants will just sit dormant until the magic 10 hours of light per day arrives. Compare the dates when day length is less than 10 hours in several US cities:

| Location | Dates When Day Length Is Under 10 Hr |
| --- | --- |
| Atlanta, Georgia | December 8 — January 4 |
| Washington, DC | November 19 — January 26 |
| New York, New York | November 14 — January 30 |
| Portland, Maine | November 8 — February 4 |

Because most plants won't grow when they receive less than 10 hours of daylight, winter greenhouse production requires supplemental lighting and, oftentimes, supplemental heat. Lights can be turned on shortly before sunset to extend the length of the day. Or they can be turned on in the middle of the night for a short period of time, a procedure called night-interruption lighting. This brings us back to the fact that it's the dark period that generally has a stronger regulatory effect on a plant's life cycle than the light period: the short days of winter have long nights.
If a grower breaks up those long nights by turning on lights in the middle of the night, some plants will act as though the night is short (and, therefore, the day is long), and behave just as they would in the middle of summer. Many bedding plant and cut-flower greenhouse producers use night-interruption lighting to force flowers to bloom in winter. Even for growers who don't use supplemental lighting, the facts about day length are pertinent. Here are some examples of why day length may be a factor in gardening success: - Cauliflower starts to develop a head when days get shorter. That happens sooner in northern than in southern regions, which means cauliflower will produce a crop earlier in Maine than in Virginia. Cooler temperatures during head development also lead to better flavor. So cauliflower is an easier crop in the north than in the south. - Basil does not grow during the short days of winter, even in a tropical greenhouse. Supplemental lighting and heat are required to get good winter yields, and in most places the energy costs must be weighed carefully against the income from a basil crop. - Flower growers are especially subject to the rules of day length, because many of the most popular cut flowers have day length triggers. Rudbeckia, for example, grows vegetatively during short days and flowers when days are long. If you plant Rudbeckia in early spring, you can grow big, healthy plants that send up bountiful long stems as the days get longer in summer. If you plant them in late summer, however, hoping for a fall crop, you will get few flowers on very short stems, because the day length is too short to trigger blooming. Many other aspects of food and flower crop production are affected by day length, temperature, or a combination of the two. If you have ever wondered why certain crops do not grow as well for you as they do in other parts of the country, day length may be a contributing factor. If you are not attuned to the day length in your location, get a sunrise/sunset calculator or app, and mark your calendar with the dates when you have 10 hours of daylight, 11 hours, 12 hours, and so on. Over time, you will begin to notice correlations between day length and your garden's activity.
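If you would rather compute day length than look it up, the standard approximation is the sunset hour-angle formula: cos(w0) = −tan(latitude) × tan(declination), with day length equal to 2 × w0 expressed in hours. The sketch below is a minimal illustration (ours, not the author's); the declination formula is a common approximation, and the result ignores atmospheric refraction and the Sun's finite disk, so it runs a few minutes short of almanac values such as those in the tables above.

```python
import math

def day_length_hours(latitude_deg, day_of_year):
    """Approximate hours of daylight from the sunset hour-angle formula."""
    # Solar declination, approximated to within about a degree.
    decl = math.radians(-23.44) * math.cos(2 * math.pi * (day_of_year + 10) / 365)
    lat = math.radians(latitude_deg)
    x = -math.tan(lat) * math.tan(decl)
    x = max(-1.0, min(1.0, x))  # clamp to handle polar day and polar night
    # The Sun moves through 15 degrees of hour angle per hour.
    return 2 * math.degrees(math.acos(x)) / 15

# Winter solstice (~day 355) at two of the latitudes discussed above.
for name, lat in (("Portland, Maine", 43.7), ("Miami, Florida", 25.8)):
    print(f"{name}: {day_length_hours(lat, 355):.1f} h")  # ~8.7 h and ~10.4 h
```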
Introduction to depressions Mid-latitude depressions, extra-tropical storms, cyclones (at mid-latitudes) and low pressure systems are all different names for the same thing. They all refer to the storms which bring the UK most of its weather, particularly in the autumn and winter. But what are they, how do they form in the mid-latitudes, and why do they cause weather? These storms typically last several days, are a few hundred kilometres in size and travel approximately eastwards across the North Atlantic. They are characterised by a swathe of cloud on the same scale as the UK and, when shown on a weather map (also known as a synoptic chart), are distinguished by low pressure in the middle and distinctive fronts. They bring us both our 'normal' and our 'extreme' weather. Figure 1: A weather map or synoptic chart showing a typical mid-latitude depression approaching the UK. © Crown Copyright, Met Office Before we explore how these storms are formed, let's start with a bit of history. The first scientific study and classification of these weather systems was performed by a group of Norwegian meteorologists: the Bergen School. The location was no coincidence; Bergen experiences a lot of weather systems. Working without the benefit of satellite and radar pictures, the tools that tell us about the atmosphere today, this group did a remarkable job of describing the nature of weather systems. Using only ground-based observations and information from weather balloons, they realised that depressions occur as a consequence of the pole-to-equator temperature gradient. The pole-to-equator temperature gradient Because we live on a spherical planet, as you can see in Figure 2, somewhere on the Earth's surface will always be at right angles to the Sun's rays, so that the Sun's light there is fairly concentrated (a). In contrast, in other areas the Earth's surface slopes away from the Sun's rays, and the same light is spread over a larger area (b). As the Earth's surface is warmed by the Sun, the more concentrated light falling on (a) will warm the surface more than in area (b). This is why the Tropics are always warmer than the poles. Figure 2: The pole-to-equator temperature gradient © Crown Copyright, Met Office You can see this in further detail in Figure 3. In this graph, latitude is recorded along the bottom (the x axis) and the amount of energy on the left (the y axis). The blue line shows that, over the course of the year, the Tropics receive more energy than the North Pole and South Pole. The red line shows the amount of energy the Earth is losing to space. As you can see, the Tropics lose more energy than the Polar regions, but the two lines are not the same: the Tropics are absorbing more energy than they are losing, whilst the North and South Poles are losing more energy than they are absorbing. If energy absorption and loss were the only factors at play, the Tropics would always be warming and the Polar regions always cooling. However, this isn't the case; something else must be transporting heat from the Tropics to the poles. This is where storms, such as mid-latitude depressions (and hurricanes in the Tropics), come into play. Figure 3: A graph showing how the amount of energy the Earth receives from the Sun (blue line) and the amount of energy the Earth loses to space (red line) varies with latitude. If the blue line is above the red line, the Earth is getting more energy than it is losing at that latitude.
If the red line is higher, the Earth is losing more energy than it is getting. The map of the world in Figure 4 shows changes in temperature from the Polar regions to the Tropics, using different colour bands. The temperature cools most rapidly where the colour bands on the map are narrowest, in the mid-latitudes. As you can see, the rate at which the Earth's temperature falls is not steady as you travel from the Tropics to the Polar regions. Figure 4: A map of average December temperature of the atmosphere about 1.5 km above the ground. © NOAA-ESRL Physical Sciences Division, Boulder, Colorado. Data source: NCEP/NCAR Reanalysis 1. Summary The Bergen meteorologists realised this temperature gradient was a key part of the formation of weather systems. They saw the mid-latitudes as being where the warm, tropical air met and 'fought' the cold, Polar air. Working at a time when (military) fronts were very much part of people's vocabulary, they referred to the temperature gradient as 'the Polar front': a boundary separating competing warm tropical air from colder air near the pole. We'll discover in the next article why depressions form on this polar front. © University of Reading and Royal Meteorological Society
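The cos(latitude) geometry sketched in Figure 2 is easy to quantify. Here is a toy illustration (our own, not from the article): at an equinox, the noon Sun's rays meet the ground at an angle set by latitude, so the flux through a square metre of surface scales with the cosine of latitude. S0 below is the approximate top-of-atmosphere solar constant, and the numbers ignore the atmosphere entirely.

```python
import math

S0 = 1361  # W/m^2, approximate top-of-atmosphere solar constant

# Noon at an equinox: flux per unit ground area falls off as cos(latitude).
for lat in (0, 30, 52, 70, 90):
    flux = S0 * math.cos(math.radians(lat))
    print(f"latitude {lat:2d} deg: {flux:6.0f} W/m^2")
```

At the UK's latitude (about 52 degrees), the ground receives only just over 60% of the equatorial figure, which is the gradient the Bergen School identified as the driver of depressions.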
A vesicle is a bubble of liquid within a cell. More technically, a vesicle is a small, intracellular, membrane-enclosed sac that stores or transports substances within a cell. Vesicles form naturally because of the properties of lipid membranes. Vesicles can fuse with the plasma membrane and release their contents outside the cell. Vesicles can also fuse with other organelles within the cell. A vesicle is sometimes formed when the cell carries out endocytosis, a process in which the cell's membrane takes in a particle from the outside and brings it inside the cell with a vesicle around it. The membrane enclosing a vesicle is a lipid bilayer very similar to the cell membrane itself.
There is no doubt that plants are a fascinating and integral part of our environment. Aside from playing the essential role of balancing nature and life, they also beautify our surroundings. Think about it: there is just something amazingly extraordinary and serene about standing in a dense forest and taking in all the natural, pleasant smells and dizzying heights. Plants have intrigued the scientific community for centuries due to their therapeutic potential and the scientific breakthroughs they offer. One of these fascinating yet highly controversial plants is Mitragyna speciosa, better known as Kratom. While commonly known as Kratom, Kratum or Ketum, its scientific name is Mitragyna speciosa. It belongs to the genus Mitragyna and is a member of the Rubiaceae family. Apart from South East Asia, it is also found in some African regions. While the Asian species are typically predominant in the rain forests, the African species, which are sometimes placed in a separate genus, are mostly observed in swampy areas. The majority of these species are arborescent (treelike in growth or appearance), with some reaching a majestic height of as much as 30 meters. The Kratom tree has an average height of fifteen meters, while its spread covers around four and a half meters. It has a robust stem that stands firmly and branches out. The flowers are yellow, while the leaves are dark green and feature an ovate, acuminate shape that is hard to miss. Since this plant is evergreen rather than deciduous, the leaves are frequently shed and then replaced. The changing environment is what causes the quasi-seasonal leaf shedding: shedding becomes more rampant and abundant during the dry season, while plenty of new growth happens during the rainy season. If this plant is grown outside tropical conditions, shedding of leaves will take place when the temperature drops to around four degrees Celsius. The Kratom plant can naturally be grown from seed, provided the seeds are fresh. The rate of germination typically hovers around twenty to thirty percent, and once the seeds have germinated, the seedlings grow to a height of fifteen to around twenty feet. As mentioned earlier, the leaves are typically dark green and glossy with an ovate, acuminate shape. Although it is relatively easy to find kratom for sale, finding seeds can be difficult, and growing the plant is not recommended due to its sensitive nature. Additionally, the leaves feature an opposite growth pattern, with twelve to seventeen pairs of veins, and can grow to over twenty centimeters long and fifteen centimeters wide when fully open. The yellow flowers grow in clusters of three at the far ends of the branches. The calyx-tube is two millimeters long and has five lobes, while the corolla tube is around three millimeters. This tree prefers a moist environment with nitrogen-rich soils in a protected position. An extremely heavy feeder, the growing plant needs consistently fertile soil. The tree is also extremely sensitive to drought and frost. This being said, buying kratom powder is recommended instead of growing the plant yourself. One of the earliest formal descriptions of Mitragyna speciosa, introducing it to the Western world, was made by the Dutch colonial botanist Pieter Korthals in 1839. Korthals thought the shape of this plant resembled a bishop's mitre, and so named the genus Mitragyna.
Although his original work is extremely hard to find, it is still available in a few libraries around the world. What fascinated him most about Kratom was its popularity among the locals, as well as the tradition surrounding it. The fundamental chemistry of the Kratom plant is built on a tryptamine nucleus, the core structure of molecules active in the serotonin and adrenergic systems. Keep in mind that the pharmacokinetics in human beings have not been studied extensively, leaving very little data on the subject. Important factors such as half-life, protein binding, metabolism, and elimination are yet to be well studied. The chemistry discussed in this article mostly concerns what is found in the leaves of the Kratom plant. Reliable chemical studies of the structure of this indigenous plant have found that it distantly resembles the chemical structure of psilocybin, although there is no evidence of any psychedelic activity in Mitragyna speciosa. Research has also established that the plant contains structurally related alkaloids as well as terpenoid saponins, flavonoids, polyphenols and several glycosides. The compounds found in its structure include Ajmalicine, Corynantheidin, Corynoxein, 3-Dehydromitragynin, 3-Isopaynanthein, Isomitraphyllin, Isospecionoxein, Mitraciliatin, Mitragynalin, Mitraphylline, Mitraspecin, Speciofolin, Speciogynin, Stipulatin, 7-acetoxymitragynine, Corinoxin, Epicatechin, Isospeciofolin, Mitrafolin, Mitraversin, Speciociliatin, Specionoxein, Paynanthein, and 3-Isocorynantheidi. It is important to note that the chemical composition varies, since it largely depends on the age, environment and even the time of harvest. The total alkaloid concentration in the dried leaves ranges from around 0.5 to 1.5%. The most researched alkaloids, and those of highest significance, are 7-Hydroxymitragynine, Mitragynine and Epicatechin. 7-Hydroxymitragynine is the most potent chemical ingredient present in Kratom tinctures and has been found to have opioid agonistic activity. According to research it is almost 30-fold more potent than Mitragynine, which makes it the most active chemical of all and puts Mitragynine, the previously assumed main active chemical, in second position. Although 7-Hydroxymitragynine interacts with the three major opioid sites, Mu, Delta, and Kappa, it preferentially binds to Mu receptors, which are responsible for the effects Kratom is known for. Its molecular formula is C23H30N2O5, and its molecular weight is 414.50 g/mol. Mitragynine is contained in Kratom in extremely high percentages. It is an indole alkaloid first isolated in 1907 by D. Hooper. It has been shown to be adrenergic at lower doses, while at very high doses it acts on the delta and mu opiate receptors. Its molecular formula is C23H30N2O4, and its molecular weight is 398.50 g/mol. On its own, Mitragynine primarily acts through opioid receptors and produces an oxidation product known as mitragynine-pseudoindoxyl, a major component of Kratom that has been stored or aged for an extended period. Although the structure of Mitragynine is similar to that of psilocybin, it does not induce any psychedelic effects in living organisms.
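To put the quoted 0.5-1.5% total alkaloid range into concrete terms, here is a small worked computation (ours, purely illustrative). The mitragynine share used below, about two-thirds of total alkaloid content, is the figure quoted later in this text for Thai material; treat both numbers as rough assumptions rather than specifications.

```python
# Illustrative arithmetic only, using the concentration figures quoted
# in the text; both inputs vary widely with strain, age and environment.
def mg_per_gram_of_leaf(total_alkaloid_frac, mitragynine_share):
    """Estimated mg of mitragynine per gram of dried leaf."""
    return 1000 * total_alkaloid_frac * mitragynine_share

low = mg_per_gram_of_leaf(0.005, 0.66)   # 0.5% alkaloids -> ~3.3 mg/g
high = mg_per_gram_of_leaf(0.015, 0.66)  # 1.5% alkaloids -> ~9.9 mg/g
print(f"Roughly {low:.1f} to {high:.1f} mg of mitragynine per gram of dried leaf")
```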
It is important to keep in mind that the amount of Mitragynine varies according to the tree's location, age, and strain. Trees grown in South East Asia tend to contain higher amounts, which can be attributed to the specific climatic conditions of that region. In Thai varieties, Mitragynine makes up as much as 66% of the total alkaloid content, while 7-hydroxymitragynine occurs at lower concentrations of around 12%. Mitragynine is not soluble in water, but it dissolves in conventional organic solvents, including acetic acid, acetone, alcohols, and diethyl ether, giving fluorescent solutions. It distils at a temperature of 200-240 degrees centigrade at 5 mmHg. Epicatechin is another component of Kratom that is still being extensively researched. This particular component is a real all-rounder that studies suggest has the potential to reduce the effects of free radicals. Although Epicatechin is found in relatively high concentrations compared to the other chemicals, research on it is still very limited. Scientists and botanists agree that more research needs to be carried out to provide more accurate details about its structure. Most of the alkaloid varieties present in this plant's structure still call for further experimentation and study to investigate their specific activity, effects, and potential applications. Although Kratom advocates and many medical professionals around the globe consider Kratom to be a tremendously useful asset due to its potential medicinal effects, some countries such as the United States have prohibited human consumption of this plant, and with very good reason. Although you can buy kratom legally, policy makers have put these laws in place with the intention of ensuring people do not face detrimental side effects that would put their health in jeopardy. While there are 196 countries around the world, four have explicitly banned this indigenous plant by declaring it completely illegal for all purposes. A further ten countries impose extreme regulations, meaning Kratom is criminalized and often labeled a scheduled drug. United States Laws Despite heavy pushes by the DEA and FDA to make this plant illegal in the United States, it remains legal in the majority of states. Different states have different regulations when it comes to this highly controversial plant, and in each state the bill is worded differently. While some ban it outright, others only ban specific alkaloids contained in the plant. States with bills that declare Kratom illegal include Wisconsin, Vermont, Indiana, Tennessee and Arkansas. In the remaining states Kratom is still legal, though states such as Iowa, New York, New Hampshire, Michigan, and Georgia are seeking amendments to make it illegal. The DEA and FDA have never formally criminalized Kratom; however, in August of 2016, the DEA issued a notice of intent to temporarily classify the two psychoactive chemicals contained in this plant, Mitragynine and 7-Hydroxymitragynine, as Schedule 1 drugs. What is a Schedule 1 Drug? A Schedule 1 drug is one that has a high potential for abuse and addiction, and the potential to create severe psychological or physical dependence. This proposal was met with hostility by Kratom advocates, who are convinced that this plant could be useful.
This notice of intent was temporarily withdrawn by the DEA, citing numerous comments received from members of the public as well as the need for a detailed and extensive medical and scientific evaluation by the FDA. Apparently, the FDA will be actively soliciting comments from the general public; in fact, an official document announcing the solicitation and withdrawal is available at the Federal Register. For now, the FDA still strictly prohibits the human consumption of Kratom. Although many people argue against the banning of Kratom in the US, there are six main reasons why the DEA and the FDA are skeptical about declaring this plant safe for human consumption.

- According to Forbes, in July 2016 the CDC (Centers for Disease Control and Prevention) reported that calls to its poison centers about Kratom exposure had increased tenfold, from just 26 in 2010 to 263 in 2015. Over a third of these calls reported that issues mostly occurred when this plant was combined with other commonly used substances such as narcotics, benzodiazepines and a range of other drugs.

- Kratom purchased in the United States, according to a report by the DEA, is sometimes adulterated with synthetic compounds such as hydrocodone and oxycodone. Although most people are under the impression that they are buying pure Kratom, they may be purchasing an adulterated form of it that could produce detrimental symptoms; this is according to Kavita Babu, a respected toxicologist at UMass Memorial Medical Center. Even renowned advocates for the formal legalization of this plant agree that this adulteration is a big limitation.

- Inadequate research on the chemical structure of Kratom has also made it impossible to legalize. The Food and Drug Administration and the Drug Enforcement Administration only approve plants for human use when they comprehensively understand the chemical structure, the effects on human health and the potential for physical and psychological addiction. Since very little research has been done, it makes logical sense that this plant has not been legalized. This is why Congress has asked the FDA and DEA to investigate before declaring it outright illegal.

- The DEA is citing poison control data as a reason to potentially ban the plant. According to DEA officials, the fact that research is still inadequate means the plant can be classified as a poison that should not be consumed by human beings. The FDA agrees with this decision to classify the substance as a poison because more needs to be ascertained about its effects on the general health of an individual.

- The spokesman for the DEA, Russ Baer, insists that the main reason the agency is committed to placing the Kratom plant on the Schedule 1 drug list is the psychoactive ingredients it possesses. According to Baer, this will help protect the general public against abuse and misuse.

- Independently of the DEA, the Food and Drug Administration has also issued numerous public health warnings because of legitimate concerns that Kratom could represent a health risk. In fact, the FDA says that Kratom has been on its radar for a long time.

The DEA's and FDA's decision to potentially ban the human consumption of Kratom has met aggressive resistance. The big question is why? Are there any logical and rational reasons why so many are against criminalizing the use of this plant? The resistance against criminalizing Kratom is a massive movement.
There is even an American Kratom Association, a consumer group spearheading efforts to ensure a real ban does not take effect. This association is led by its founder, Susan Ash, and its Executive Director is Paul Pelosi Junior, son of former House Speaker Nancy Pelosi. In fact, a White House petition calling for reconsideration has attracted over 30,000 signatures. The main argument made by these advocates is that Kratom is a naturally occurring plant and hence contains no artificial ingredients that could cause detrimental side effects. Additionally, it has been used by many communities around the world since the early 1900s, and there is no proof that it has caused any long-term permanent effects. These advocates argue that instead of banning the plant, more research should be done on how to increase its safety and to ascertain which drug interactions with Kratom could be harmful or poisonous. According to advocates for legalization, it would be a great mistake to place this plant in the Schedule 1 category: unlike cocaine, ecstasy, and heroin, which have a high potential for harm and abuse, studies suggest that this plant may not necessarily be addictive or detrimental to mental and physical health. Dr. Chris McCurdy, a Kratom researcher at the University of Mississippi, says that he is not opposed to regulation; what he is specifically against is declaring Kratom a Schedule 1 drug. The DEA's and Food and Drug Administration's decision to potentially ban the human use of Kratom in the United States has begun to draw critical attention from United States lawmakers such as Representative Mark Pocan (D-Wis.), who is committed to asking Congress to make changes. Most lawmakers who support the legalization of Kratom deem regulation necessary; what they are against is the harsh penalties that would be imposed on users if it were considered a Schedule 1 drug.

Rest of the World

In most of the world Kratom is legal; however, some countries have made it completely illegal or have imposed harsh regulations making sale and distribution extremely difficult. Countries that have made Kratom completely illegal are Malaysia, Burma, Australia, and Thailand. Possession of Kratom in these countries can lead to imprisonment, and in the case of Malaysia, Burma, and Thailand, even the death penalty can be given to persons found guilty. Justifiable or not, there is something to be said about killing a human being in the name of harm reduction; it is logically extreme and counter-intuitive. Other countries have not outright banned the use of this plant but have imposed extremely harsh rules labeling it a Schedule 1 drug, making sale and distribution almost impossible; these include New Zealand, Germany, Romania, Denmark, and Finland. Some countries place stricter regulations than others, and due to this fact it is crucial to study the laws of the local region extensively so as to have a comprehensive understanding.
Colaizzi's method of data analysis is an approach to interpreting qualitative research data, often in medicine and the social sciences, to identify meaningful information and organize it into themes or categories. The approach follows seven data analysis steps. Qualitative research uses surveys with open-ended questions or loosely structured interviews to uncover how people think and feel in certain situations, such as becoming a first-time father or undergoing chemotherapy. A researcher can apply Colaizzi's seven steps to analyze the data collected in a survey or interview. In the first step, the researcher reads a description of each person participating in the study to gain a sense of the participants. Next, the researcher extracts statements with significance to the research question, such as descriptions of how a first-time father feels about caring for a newborn. To reflect the research data accurately, the significant statements should be direct quotations from the participants. To analyze the significant statements, the researcher begins to articulate what the statements mean and creates themes from the meanings. The researcher groups similar themes together and organizes them into categories. Finally, the researcher integrates the results into a comprehensive description of the topic and returns to each participant to verify the results.
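Because the seven steps form a repeatable pipeline from raw transcripts to themes, the bookkeeping can be sketched in code. The sketch below is purely illustrative: the participant quotes and the THEME_KEYWORDS mapping are hypothetical, and in real qualitative work the extraction and theming are interpretive judgments made by the researcher, not keyword matches.

```python
# Illustrative sketch of the bookkeeping in Colaizzi's steps 2-5.
# Keyword-based theming stands in for the researcher's interpretive
# judgment; it is not part of Colaizzi's actual procedure.
from collections import defaultdict

# Hypothetical significant statements, quoted verbatim (step 2).
statements = [
    ("P1", "I was terrified the first night we brought her home."),
    ("P2", "Holding him made the exhaustion feel worth it."),
    ("P3", "I kept checking that she was still breathing."),
]

# Hypothetical mapping from formulated meanings to themes (steps 3-4).
THEME_KEYWORDS = {
    "anxiety about the newborn": ["terrified", "checking", "breathing"],
    "reward despite fatigue": ["worth it", "exhaustion"],
}

themes = defaultdict(list)
for participant, quote in statements:
    for theme, keywords in THEME_KEYWORDS.items():
        if any(k in quote.lower() for k in keywords):
            themes[theme].append((participant, quote))

# Step 5: cluster themes toward an exhaustive description of the topic.
for theme, quotes in themes.items():
    print(theme)
    for participant, quote in quotes:
        print(f"  {participant}: {quote}")
```

The final steps, integrating the themes into a comprehensive description and verifying it with each participant, remain entirely human work.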
Inductive reasoning (as opposed to deductive reasoning or abductive reasoning) is reasoning in which the premises are viewed as supplying strong evidence for the truth of the conclusion. While the conclusion of a deductive argument is certain, the truth of the conclusion of an inductive argument is probable, based upon the evidence given. Many dictionaries define inductive reasoning as the derivation of general principles from specific observations, though some sources disagree with this usage. The philosophical definition of inductive reasoning is more nuanced than simple progression from particular/individual instances to broader generalizations. Rather, the premises of an inductive logical argument indicate some degree of support (inductive probability) for the conclusion but do not entail it; that is, they suggest truth but do not ensure it. In this manner, there is the possibility of moving from general statements to individual instances (for example, statistical syllogisms, discussed below). Inductive reasoning is inherently uncertain. It only deals in degrees to which, given the premises, the conclusion is credible according to some theory of evidence. Examples include a many-valued logic, Dempster–Shafer theory, or probability theory with rules for inference such as Bayes' rule. Unlike deductive reasoning, it does not rely on universals holding over a closed domain of discourse to draw conclusions, so it can be applicable even in cases of epistemic uncertainty (technical issues with this may arise however; for example, the second axiom of probability is a closed-world assumption). An example of an inductive argument: All biological life forms that we know of depend on liquid water to exist. Therefore, if we discover a new biological life form it will probably depend on liquid water to exist. This argument could have been made every time a new biological life form was found, and would have been correct every time; however, it is still possible that in the future a biological life form not requiring liquid water could be discovered. As a result, the argument may be stated less formally as: All biological life forms that we know of depend on liquid water to exist. All biological life probably depends on liquid water to exist. Unlike deductive arguments, inductive reasoning allows for the possibility that the conclusion is false, even if all of the premises are true. Instead of being valid or invalid, inductive arguments are either strong or weak, which describes how probable it is that the conclusion is true. Another crucial difference is that deductive certainty is impossible in non-axiomatic systems, such as reality, leaving inductive reasoning as the primary route to (probabilistic) knowledge of such systems. Given that "if A is true then that would cause B, C, and D to be true", an example of deduction would be "A is true therefore we can deduce that B, C, and D are true". An example of induction would be "B, C, and D are observed to be true therefore A might be true". A is a reasonable explanation for B, C, and D being true. A large enough asteroid impact would create a very large crater and cause a severe impact winter that could drive the non-avian dinosaurs to extinction. We observe that there is a very large crater in the Gulf of Mexico dating to very near the time of the extinction of the non-avian dinosaurs. Therefore it is possible that this impact could explain why the non-avian dinosaurs became extinct. Note, however, that this is not necessarily the case.
Other events also coincide with the extinction of the non-avian dinosaurs, for example the Deccan Traps in India. A classical example of an incorrect inductive argument was presented by John Vickers: All of the swans we have seen are white. Therefore, all swans are white. (Or more precisely, "We expect that all swans are white.") The definition of inductive reasoning described in this article excludes mathematical induction, which is a form of deductive reasoning that is used to strictly prove properties of recursively defined sets. See main article: Problem of induction. Although the use of inductive reasoning demonstrates considerable success, the justification for its application has been questioned. Recognizing this, Hume highlighted the fact that our mind draws uncertain conclusions from relatively limited experiences. In deduction, the truth value of the conclusion is based on the truth of the premise. In induction, however, the dependence on the premise is always uncertain. As an example, let's assume "all ravens are black." The fact that there are numerous black ravens supports the assumption. However, the assumption becomes inconsistent with the fact that there are white ravens. Therefore, the general rule of "all ravens are black" is inconsistent with the existence of the white raven. Hume further argued that it is impossible to justify inductive reasoning: specifically, that it cannot be justified deductively, so our only option is to justify it inductively. Since this is circular, he concluded, with the help of Hume's Fork, that our use of induction is unjustifiable. However, Hume then stated that even if induction were proved unreliable, we would still have to rely on it. So instead of a position of severe skepticism, Hume advocated a practical skepticism based on common sense, where the inevitability of induction is accepted. Bertrand Russell illustrated this skepticism in a story about a turkey, fed every morning without fail, who, following the laws of induction, concludes that this will continue, but then his throat is cut on Thanksgiving Day. Inductive reasoning is also known as hypothesis construction because any conclusions made are based on current knowledge and predictions. As with deductive arguments, biases can distort the proper application of inductive argument, thereby preventing the reasoner from forming the most logical conclusion based on the clues. Examples of these biases include the availability heuristic, confirmation bias, and the predictable-world bias. The availability heuristic causes the reasoner to depend primarily upon information that is readily available to him/her. People have a tendency to rely on information that is easily accessible in the world around them. For example, in surveys, when people are asked to estimate the percentage of people who died from various causes, most respondents would choose the causes that have been most prevalent in the media, such as terrorism, murders, and airplane accidents, rather than causes such as disease and traffic accidents, which have been technically "less accessible" to the individual since they are not emphasized as heavily in the world around him/her. The confirmation bias is based on the natural tendency to confirm rather than to deny a current hypothesis. Research has demonstrated that people are inclined to seek solutions to problems that are more consistent with known hypotheses rather than attempt to refute those hypotheses.
Often, in experiments, subjects will ask questions that seek answers that fit established hypotheses, thus confirming these hypotheses. For example, if it is hypothesized that Sally is a sociable individual, subjects will naturally seek to confirm the premise by asking questions that would produce answers confirming that Sally is in fact a sociable individual. The predictable-world bias revolves around the inclination to perceive order where it has not been proved to exist, either at all or at a particular level of abstraction. Gambling, for example, is one of the most popular examples of predictable-world bias. Gamblers often begin to think that they see simple and obvious patterns in the outcomes and, therefore, believe that they are able to predict outcomes based upon what they have witnessed. In reality, however, the outcomes of these games are difficult to predict and highly complex in nature. In general, people tend to seek some type of simplistic order to explain or justify their beliefs and experiences, and it is often difficult for them to realise that their perceptions of order may be entirely different from the truth.

A generalization proceeds from a premise about a sample to a conclusion about the population:

The proportion Q of the sample has attribute A.
Therefore, the proportion Q of the population has attribute A.

How much the premises support the conclusion depends upon (a) the number in the sample group, (b) the number in the population, and (c) the degree to which the sample represents the population (which may be achieved by taking a random sample); a worked example follows below. The hasty generalization and the biased sample are generalization fallacies.

See main article: Statistical syllogism. A statistical syllogism proceeds from a generalization to a conclusion about an individual:

A proportion Q of population P has attribute A.
An individual X is a member of P.
Therefore, there is a probability corresponding to Q that X has A.

Simple induction proceeds from a premise about a sample group to a conclusion about another individual:

Proportion Q of the known instances of population P has attribute A.
Individual I is another member of P.
Therefore, there is a probability corresponding to Q that I has A.

This is a combination of a generalization and a statistical syllogism, where the conclusion of the generalization is also the first premise of the statistical syllogism.

See main article: Argument from analogy. The process of analogical inference involves noting the shared properties of two or more things, and from this basis inferring that they also share some further property:

P and Q are similar in respect to properties a, b, and c.
Object P has been observed to have further property x.
Therefore, Q probably has property x also.

Analogical reasoning is very frequent in common sense, science, philosophy and the humanities, but sometimes it is accepted only as an auxiliary method. A refined approach is case-based reasoning. A causal inference draws a conclusion about a causal connection based on the conditions of the occurrence of an effect. Premises about the correlation of two things can indicate a causal relationship between them, but additional factors must be confirmed to establish the exact form of the causal relationship. A prediction draws a conclusion about a future individual from a past sample:

Proportion Q of observed members of group G have had attribute A.
Therefore, there is a probability corresponding to Q that other members of group G will have attribute A when next observed.
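Point (a) above, the size of the sample, can be made concrete with a little arithmetic. The sketch below is a hedged illustration: the 80% sample proportion is made up, and the normal-approximation margin of error is just one conventional way of quantifying inductive strength, not part of the logical schemas themselves.

```python
# Toy generalization: estimate a population proportion from a sample and
# attach a rough 95% margin of error, showing why sample size matters.
# All numbers here are hypothetical.
import math

def estimate_with_margin(successes: int, n: int):
    """Sample proportion and approximate 95% margin of error (normal approx.)."""
    p = successes / n
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, margin

for n in (10, 1000):
    p, m = estimate_with_margin(int(0.8 * n), n)
    print(f"n={n}: Q = {p:.2f} +/- {m:.2f}")
# n=10:   Q = 0.80 +/- 0.25  -- weak support (hasty generalization risk)
# n=1000: Q = 0.80 +/- 0.02  -- much stronger support
```

The same observed proportion supports the conclusion far more strongly when the sample is larger, which is exactly what distinguishes a strong inductive argument from a hasty generalization.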
As a logic of induction rather than a theory of belief, Bayesian inference does not determine which beliefs are a priori rational, but rather determines how we should rationally change the beliefs we have when presented with evidence. We begin by committing to a prior probability for a hypothesis based on logic or previous experience, and when faced with evidence, we adjust the strength of our belief in that hypothesis in a precise manner using Bayesian logic. Around 1960, Ray Solomonoff founded the theory of universal inductive inference, the theory of prediction based on observations; for example, predicting the next symbol based upon a given series of symbols. This is a formal inductive framework that combines algorithmic information theory with the Bayesian framework. Universal inductive inference is based on solid philosophical foundations, and can be considered as a mathematically formalized Occam's razor. Fundamental ingredients of the theory are the concepts of algorithmic probability and Kolmogorov complexity.
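To make a single Bayesian update concrete, here is a minimal sketch of one application of Bayes' rule. The hypothesis, evidence and numbers are hypothetical; as noted above, real applications commit to priors based on logic or previous experience.

```python
# Minimal sketch of one Bayesian update: P(H|E) = P(E|H) P(H) / P(E).
# The prior and likelihoods below are hypothetical numbers for illustration.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior probability of hypothesis H after observing evidence E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)  # total probability
    return p_e_given_h * prior / p_e

# H: "this coin is biased towards heads"; E: "it came up heads".
posterior = bayes_update(prior=0.5, p_e_given_h=0.8, p_e_given_not_h=0.5)
print(round(posterior, 3))  # 0.615 -- belief strengthened, not proved
```

The update strengthens belief in the hypothesis without ever making it certain, which is the inductive character of Bayesian inference described above.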
White blood cells (WBCs) help protect the body from infection. Neutrophils are a type of white blood cell. Their main job is to help the body fight bacterial and fungal infections. Neutropenia occurs when there are fewer neutrophils in the blood than normal. It can range from mild to severe, depending on the number of neutrophils in the blood. Severe neutropenia puts a person at higher risk for having more infections. Bacterial and fungal infections are most common. Your doctor can tell you more about your condition and whether it needs to be treated.

What causes neutropenia?

There are 2 main types of neutropenia: congenital and acquired. Each type has many causes:

Congenital neutropenia. These are the types that are present at birth. They are caused by certain rare genetic conditions, such as Kostmann's syndrome. Most often the neutropenia is mild and normal for certain ethnic groups.

Acquired neutropenia. This type is not present at birth. Causes include:
- Certain medicines, such as antibiotics and chemotherapy medicines
- Certain autoimmune conditions
- Certain viral, bacterial, or parasitic infections
- Too little folate or vitamin B-12 in the diet
- Underlying bone marrow problem, such as leukemia or myelodysplastic syndrome (MDS)

How is neutropenia diagnosed?

Your healthcare provider may check for neutropenia if you have frequent infections. Your provider may also check for neutropenia if you're having certain treatments, such as chemotherapy, which is known to cause a lower neutrophil count. Tests will be done to confirm the problem. These may include:
- A complete blood cell count (CBC). This test measures the amounts of the different types of cells in your blood, including the WBCs. The WBC count can be broken down further to find the number of neutrophils and immature neutrophils (bands) in your blood. This is called an absolute neutrophil count (ANC).
- A blood smear. This test checks for the different types of blood cells in your blood and how they appear. A sample of your blood is spread on a glass slide and viewed under a microscope. A stain is used so the blood cells can be seen.
- A bone marrow aspiration and biopsy. This test checks for problems with how your bone marrow makes blood cells. A needle is used to remove a sample of the bone marrow in your hipbone. The sample is then sent to a lab to be tested for problems.

How is neutropenia treated?

If there is a clear cause of neutropenia, it is addressed. For instance, if a medicine is the cause, it may be stopped or changed. For mild cases, often no treatment is needed. For moderate to severe cases, treatment is likely needed. This may include:
- G-CSF (granulocyte colony-stimulating factor). This is a special type of protein that helps promote the growth and activity of neutrophils. G-CSF is given by injection.
- Bone marrow transplant. This treatment replaces diseased bone marrow cells with healthy cells from a matched donor. It is only done in specific severe cases.

What is the long-term outcome of neutropenia?

The outcome of neutropenia varies for each person. For some people, neutropenia may resolve after a few weeks or months. For others, it may be long-lasting; in these cases, ongoing care and treatment are needed. Your healthcare provider will talk to you more about what to expect from your condition.

When to call your healthcare provider

Call your healthcare provider right away if you have any of the following:
- Fever of 100.4°F (38°C) or higher. Call 911 or 24-hour urgent care.
This is especially important if you have severe neutropenia, which puts you at higher risk for life-threatening infection. Other symptoms to watch for include:
- Cold sweat or chills
- Chest pain or trouble breathing
- Extreme tiredness or fatigue
- Nausea and vomiting
- Redness, warmth, or drainage from any open cuts or wounds
- Pain or burning with urination; frequent urination
- Pain, burning, or bleeding in the rectum
- Severe constipation or diarrhea
- Bloody stool or urine

How can I prevent infections?

With neutropenia, you need to take extra care to protect yourself from infection. Be sure to:
- Wash your hands often, especially before eating and after using the bathroom. Use warm water and soap, or use a hand gel that contains at least 60% alcohol.
- Avoid close contact with others who may be ill.
- Clean items you use often, such as phones and computer keyboards, with disinfectant wipes.
- Avoid touching your eyes, nose, and mouth, especially if your hands are not clean.
- Practice good oral hygiene. Use a soft toothbrush, and brush and floss your teeth gently.
- Always wipe from front to back after a bowel movement.
- Keep cuts and scrapes clean and covered until they heal.
- Avoid sharing items such as towels, toothbrushes, razors, clothing, and sports equipment.
- Store and handle foods safely to prevent food-borne illness.
- Ask your healthcare provider if you need to take antibiotics before and after having any dental or medical procedures.
- Ask your healthcare provider if you need to wear a special mask near construction sites or farm areas.
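The absolute neutrophil count (ANC) mentioned in the diagnosis section is computed with a standard textbook formula from the CBC results. The sketch below uses common teaching cut-offs for severity; actual thresholds vary by lab and institution, so treat both the numbers and the classification as illustrative, not diagnostic.

```python
# Rough sketch of the textbook ANC formula:
# ANC = WBC x (% neutrophils + % bands) / 100, with WBC in cells/microliter.
# Severity cut-offs below are common teaching values (mild < 1500,
# moderate < 1000, severe < 500) and vary in practice.

def absolute_neutrophil_count(wbc_per_ul: float, pct_neutrophils: float,
                              pct_bands: float) -> float:
    return wbc_per_ul * (pct_neutrophils + pct_bands) / 100

def classify(anc: float) -> str:
    if anc < 500:
        return "severe neutropenia"
    if anc < 1000:
        return "moderate neutropenia"
    if anc < 1500:
        return "mild neutropenia"
    return "normal"

anc = absolute_neutrophil_count(wbc_per_ul=4000, pct_neutrophils=20, pct_bands=2)
print(anc, classify(anc))  # 880.0 moderate neutropenia
```

Note how the bands (immature neutrophils) are included alongside mature neutrophils, exactly as described in the CBC section above.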
That long scroll down to reach this text echoes how 'deep' the history of the planet is. Representing 4600 million years (4.6 Ga – giga) to scale is not easy. You will find this same scale in the Voyager app when you touch the clock. In the app you can view planetary plate configurations from the deep past along with the occurrence of impacts from space, major volcanic events, extinction events and atmospheric oxygen and carbon dioxide levels through time. On this scale, most of the geology we encounter today involves rocks formed, and life forms evolved, in just over the last month of that year! A lot of Earth history for the first '11 months' is represented by small exposures of ancient rocks in places like NW Scotland, Greenland, Canada and Australia. On the scale used for this diagram we cannot correctly show human history; even the last 8000 years would need a line only about one hundredth of a pixel thick. The time since the start of the Industrial Revolution (when most natural resources began to undergo significant consumption) would need a line five ten-thousandths of a pixel thick! What does the dark red line at the top of the scale (above the Neogene) represent? This is the Quaternary period, which spans the last 2.58 million years. The diagram below shows how the Quaternary period is further subdivided into Epochs and Ages. This subdivision also applies to the other periods, like the Jurassic. The boundaries between all these time intervals are set by the International Commission on Stratigraphy. The time boundaries are decided on the basis of a significant event which is well represented in the rock record at some location. For example, the start of the Greenlandian is based on an ice core sample from the North Greenland Ice Core Project. The core marks the end of the Younger Dryas – a brief period of return to near-glacial conditions. The current age, the Meghalayan, is based on a formation preserved in a cave in Meghalaya in northeast India. Many would like the International Commission on Stratigraphy to officially adopt the Anthropocene as an age, or maybe a new epoch or period, marking the time when humanity became a significant force in shaping the planet. Some have suggested 1945 as a time horizon, as that was the year of the first nuclear bombs, which will have left a signature of radioisotopes in the rocks forming at that time. Some of the maps used in Voyages contain the words 'Monsters of the Anthropocene' to signify dangers from traffic, vehicles being a threat to life in this age, perhaps just as much as the larger lifeforms of the past.
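The pixel figures quoted above follow from simple proportion. Here is a minimal sketch assuming a hypothetical 5000-pixel-tall to-scale column; the real value depends on the rendered diagram, so expect the same order of magnitude rather than an exact match with the figures in the text.

```python
# Back-of-envelope check of the pixel arithmetic above.
# DIAGRAM_HEIGHT_PX is an assumption, not taken from the actual diagram.

EARTH_AGE_YEARS = 4.6e9
DIAGRAM_HEIGHT_PX = 5000  # assumed height of the full to-scale column

def line_thickness_px(span_years: float) -> float:
    """Thickness a time span would occupy on the scaled column."""
    return span_years / EARTH_AGE_YEARS * DIAGRAM_HEIGHT_PX

print(f"{line_thickness_px(8000):.4f} px")    # ~0.0087: about 1/100 of a pixel
print(f"{line_thickness_px(250):.5f} px")     # ~0.00027: a few ten-thousandths
print(f"{line_thickness_px(2.58e6):.1f} px")  # ~2.8: the thin Quaternary band
```

Even the entire Quaternary, the dark red line discussed above, occupies only a few pixels on such a column, which is why human history cannot be drawn at all.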
Communication is fundamental. It is not only the vehicle by which we convey our thinking to others, but also the way in which we process information. As we organize our thoughts to communicate, we learn content. Understanding how to communicate that content must be considered a basic skill in our schools today. Communicating in math and science is not only critical, but it also gives students the opportunity to apply their communication skills in diverse content areas. In this section of our website, we have collected material that will help teachers enhance their students' communication skills.

National Communication Standards

The authors of the national mathematics and science standards recognize the importance of communication, and have included it as a standard to be met. Exemplars has also included these relevant communication standards in our math and science rubrics.

Communication Criteria in Exemplars Rubrics

From the beginning, Exemplars math and science rubrics have included criteria focused on communication. Students' work must concentrate on communication, or it will not meet the (Exemplars) standard. Below are the relevant communication performance standards at the Exemplars Practitioner Level.

Elements from the Math Rubric
- Reasoning and Proof
- Arguments are constructed with adequate mathematical basis
- A sense of audience or purpose is communicated
- ...a methodically organized, coherent, sequenced and labeled response
- Formal math language is used throughout the solution to share and clarify ideas
- Appropriate and accurate mathematical representation

Elements from the Science Rubric
- Scientific Communications/Using Data
- A clear explanation was presented
- Effectively used scientific representations and notations
- Scientific Concepts and Related Content
- Appropriately used scientific terminology
- Provided evidence of understanding of relevant scientific concepts, principles or theories (big ideas)
- Evidence of understanding observable characteristics and properties of objects, organisms and/or materials used
Defining Literature and Texts Relevant to an EFL Classroom

October 13, 2008

Traditionally, literature (with a large L) is defined as the 'best' writing produced in a given language or society, that which is considered a literary canon for all time. This normally includes 'classical' writers belonging to the past, and often excludes contemporary writing. However, in the post-modern, deconstructionist age, the definition of literature took on a new shape to include texts such as advertising copy, graffiti and public notices which use literary devices like parallelism, rhyme, rhythm and metaphor (Maley, 2001). These are thought to be appropriate and relevant in the classroom because of their use of literary devices. They are considered to be worth interpretation, and more relevant than the canonical texts, which sometimes pose difficulty for students because of the nature of the language used. Therefore, literature now encompasses popular fiction, advertising and film, making the whole teaching/learning process more attractive and interesting. According to Scholes: What students need from us...is the kind of knowledge and skill that will enable them to make sense of their worlds, to determine their own interests, ... to see through the manipulations of all sorts of texts in all sorts of media, and to express their own views in some appropriate manner (Scholes 1985:15-16).

A Definition of 'Literature'

However, the definition of 'literature' is not a homogeneous one. There remain problems in defining the term, especially once socio-historical and cultural factors are considered. As pointed out by Williams (1976:183): 'Literature is a difficult word, in part because its conventional contemporary meaning appears, at first sight, so simple.' By the late twentieth century, 'literature' as a concept and as a term had become problematic, either as an ideological symbol of the high-culture 'Canon' or, conversely, through demystification by radical critical theory. Therefore, as Eagleton (1976:166) puts it, 'Literature must indeed be re-situated within the field of general cultural production; but each mode of such production demands a semiology of its own, which is not conflatable with some universal "cultural" discourse'. The word 'literature' in itself can be used in a number of ways. As observed by Widdowson (1999), however, in normal usage a distinction tends to be drawn and signalled by the fact that when reference is made to critical, theoretical or promotional literature, there is a tendency to put the definite article in front of the word, whereas, in referring to 'literary' writings, the definite article is left out. Again, 'Literature' with an upper-case 'L' and within inverted commas signifies the idea of that global body of literary writing recognised in Matthew Arnold's famous utterance, as quoted in Widdowson (ibid: 4): 'the best that has been known and said in the world'. The modern Western concept of literature became securely established at the same time as the appearance of the modern research university, which is commonly identified with the founding of the University of Berlin around 1810 (Miller, 2002). The sense of literature was strongly shaped by university-trained writers, who sought to shape citizens by giving them knowledge of the best that is known and thought in the world.
Literature has thus been credited as the highest achievement of aesthetic and moral merit, and has acquired the status of a universal resource of form and ethical modes for humankind. There are also collocations of such authors and texts as constituting 'The Classics', 'The (Great) Tradition', 'The Canon', and the standard 'Set Authors/Books' on all secondary and tertiary education syllabuses. On the other hand, 'literature' with a small 'l' and no inverted commas is used either in a neutral discursive capacity, or to represent the writings which are 'literary' in the sense that they identify themselves quite self-consciously as belonging to the artificial discursive realm of 'creative' or 'imaginative' writing as opposed to the other, more quotidian forms of written communication (Widdowson, 1999). Apparently there is not much difficulty in phrases such as 'English literature' or 'contemporary literature', until questions are raised about whether all books and writing are 'literature' and what criteria govern the selection. Widdowson (1999:8) elaborates the problematic areas in definitions of 'literature', definitions that have made entries in pioneering encyclopaedias and references. For example, the entry on 'literature' in The New Encyclopaedia Britannica: Micropaedia reads: 'a body of written works. The name is often applied to those imaginative works of poetry and prose distinguished by the intentions of their authors and the excellence of their execution'. The distinction between 'literature' and 'drama' also poses problems, apparently because drama is a form primarily written for spoken performance. The above definition introduces the notion of 'imagination' as the defining characteristic of 'literary' writing and discriminates in favour of those writings 'distinguished by the intentions of their authors'. The argument does not make it clear how an author's intention 'distinguishes' a work as literature. Widdowson adds that although it may seem natural for one to think that some works may be better than others, the problem, however, is that the 'canonising process is cognate with the discourse of evaluation: the criteria are imprecise, unexplained, tacitly assumed, and thoroughly naturalised' (ibid.:8). Moreover, the reasons given for the received canon rely on notions of 'beauty of form', 'emotional effect', 'artistic merit', and on the judgement of those who can 'recognise' these qualities when they see them. Once again, the criteria for identifying a canon are self-selecting, taken as given - whereas, in reality, the canon is historically constructed on behalf of some powerful and resolute ideological imperatives. The questions regarding who constructed the canon, when and for whom, on what criteria and to what ends, readily destabilise the notions of 'Literature', 'canon', and 'literary value' (ibid: 13). This is supported by Eagleton (1983:11), who points out that a literary work or tradition cannot be valuable 'in itself', because the term 'value' is itself a transitive term, equating to whatever is valued by certain people in specific situations, according to particular criteria and in the light of given purposes. It is thus quite possible that, through a transformation of history, there may in the future emerge a society which will get nothing at all out of Shakespeare. His works, full of particular styles of thought and feeling, may simply seem desperately unfamiliar, limited or irrelevant to that society, making him no more valuable than much present-day graffiti.
From a historical perspective, as noted by Williams (1976), the term 'literature' came into English in the 14th century, in the sense of polite learning through reading. A man of 'literature' equated to a man of wide reading. Literature corresponded mainly to the modern meaning of literacy, meaning both an ability to read and a condition of being well-read. The general sense of 'polite learning', steadily attached to the idea of printed books, was laying the basis for the later specialization. Colet, as cited in Williams (ibid), in the sixteenth century distinguished between 'literature' and what he called 'blotterature', referring to books which were below the standards of polite learning. However, Miller (2002) adds that the word comes from a Latin stem and cannot be detached from its Roman-Christian-European roots. Literature in the modern sense, however, appeared in the European West beginning in the late seventeenth century at the earliest. Even a definition of 'literature' as including memoirs, history, collections of letters, learned treatises, etc., as well as poems, printed plays, and novels, comes after the time of Samuel Johnson's dictionary (1755). The restricted sense of literature as just poems, plays, and novels is even more recent. From the eighteenth century, the term 'literary' was extended beyond its equivalence to 'literate': probably first in the general sense of well-read, but from the mid-eighteenth century to refer to the practice and profession of writing: 'literary merit' (Goldsmith in Williams, 1976); 'literary reputation' (Johnson in Williams, 1976). This appears to be closely connected with the heightened self-consciousness of the profession of authorship in the period of transition from patronage to the bookselling market. Yet 'literature' and 'literary', in these new senses, still referred to the whole body of books and writing; or, if a distinction was made, it was in terms of falling below the level of polite learning rather than of particular kinds of writing. All works within the scope of polite learning came to be described as 'literature' and all such interests and practices as 'literary'. The idea of a 'Nationallitteratur' developed in Germany from the 1770s. The sense of 'a nation' having 'a literature' is a crucial social and cultural, probably also political, development (Williams, 1976:185). As noted by Miller (2002), literature is associated with the gradual rise of almost universal literacy in the West. Literacy, furthermore, is associated with the gradual appearance from the seventeenth century onward of Western-style democracies that allowed citizens more or less free access to printed materials and to the means of printing new ones, although this freedom has never been complete, given its various forms of censorship. However, according to Miller, literature as a Western cultural institution is a special, historically conditioned form of a universal aptitude for words or other signs to be taken as literature. The attempt to trace a class of writing to be specialised as 'literature', however, has proved difficult just because it is incomplete. In relation to the past, 'literature' is still a relatively general word, with a steady distinction and separation of other kinds of writing - philosophy, essays, history, and so on - which may or may not possess 'literary merit' or be of 'literary interest'. Although they may be 'well-written', they still may not normally be described as 'literature'.
As pointed out by Williams (1976), the teaching of literature usually includes poems, plays and novels; other kinds of 'serious' writing are described as 'general' or 'discursive'. There is also 'literary criticism' - judgement of how a ('creative' or 'imaginative') work is written - as distinct, often, from discussion of 'ideas' or 'history' or 'general subject-matter'. However, most poems and plays and novels are not seen as 'literature', as they fall below the old distinctive standard of literature, that of 'polite learning'; they are not substantial or important enough to be called 'works of literature'. Nevertheless, the major shift represented by the modern complex of 'literature', 'art', 'aesthetic', 'creative' and 'imaginative' is a matter of social and cultural history. 'Literature' itself must be seen as a late medieval and Renaissance isolation of the skills of reading and of the qualities of the book; this was much emphasised by the development of printing. Then 'literature' was specialized towards 'imaginative writing', within the basic assumptions of Romanticism. It is interesting to note that it was, primarily, poetry, defined in 1586 as 'the arte of making...to express the very faculty of speaking or writing Poetically' (in Williams, 1976:187). The specialization of 'poetry' to metrical composition is evident from the mid-seventeenth century, and this specialization of 'poetry' to verse, together with the increasing importance of prose forms such as the novel, made 'literature' the most available general word. It had behind it the Renaissance sense of 'litterae humanae', mainly to distinguish secular from religious writing. 'Poetry' had been the high skill of writing and speaking in the special context of high imagination. 'Literature', in its nineteenth-century sense, repeated this, though excluding speaking. However, it still remains problematic, not only because of the further specialization to 'imaginative' and 'creative' subject-matter (as distinct from 'imaginative' and 'creative' writing) but also because of the new importance of many forms of writing for speech - for example, books and writings meant for broadcasting, which the specialization to books seemed by definition to exclude. However, in recent years the terms 'literature' and 'literary' have been increasingly challenged, on what is conventionally their own ground, by concepts of 'writing' and 'communication'. Moreover, in relation to this reaction, 'literary' has acquired two unfavourable senses: as belonging to the printed book or to past literature rather than to active contemporary writing and speech; or as (unreliable) evidence from books rather than 'factual enquiry'. This latter quality touches the whole difficult complex of the relations between 'literature' (poetry, fiction, imaginative writing) and 'real' or actual experience. The term 'literary' has also been a term of criticism in discussion of certain other arts, notably painting and music, where the work in its own medium is seen as inadequately autonomous and as dependent on 'external' meanings of a 'literary' kind. However, in an attempt to 'demystify' literature, McRae (1991:2-3) differentiates between 'referential' language, which communicates on the informative level only, and 'representational' language, which engages the imagination of the reader. He defines a literary text as any imaginative material that stimulates a response in the reader, including songs, cartoons, idioms and proverbs.
According to Halliday (1985), the text as an expression of experience may be the closest definition of 'literature': if readers can identify with events or characters and project themselves into them imaginatively, then a certain truth to experience can be created. Carter and Long (1991) suggest that the imaginative and truthful re-creation of experience is often taken to be a distinguishing characteristic of established literary texts, and for Halliday (1985:98), 'Learning is essentially a process of constructing meanings...involving cognition and interpretation'. Literature, as a reflection of reality and of life, can thus have plural interpretations, like individual experiences, and contribute to one's language development.

Literary Language: Does It Exist?

Although many teachers and literary critics hold the view that literature is read in a different way from non-literary writing, evidence suggests that a 'linguistic' distinction between literary and other kinds of text is difficult to preserve. This leads to the idea that there are no particular linguistic features found in literature that are not found in other kinds of texts. To some extent, literary and language aptitude cannot easily be separated, for one will always be dependent on the other. According to Short (1983), writers of literary works use language creatively, whereas Brumfit and Carter (1986:6) suggest that there is no such thing as 'literary language' and that it is 'impossible to isolate any single or special property of language which is exclusive to a literary work'. However, for Carter and Long (1991:108) the creative use of language is important in determining literary merit, but with reservations, as the nature of creativity is not clearly defined, particularly when it comes to classifying a piece as 'literature' or as 'good literature'. Short and Candlin (1986) also find similarities between literary and other kinds of discourse. Carter (1988) determines from earlier works in stylistics that style is not an exclusively literary phenomenon. A comparison between the language of poetry and the language of advertising reveals that they are more or less similar in their use of linguistic features such as rhyme, metre, ambiguity, metaphor, parallelism, linguistic deviation, fascinating examples of play between words, and formal patterns, which can be creative and entertaining. In Eagleton's view (1983:9), there is no 'essence' of literature whatsoever: any text can be read pragmatically or poetically, be it literary or non-literary. However, one of the reasons for using literature, given by Collie and Slater (1987), is that literary language is concise, aesthetically satisfying, and therefore memorable, echoing a traditional reason for teaching literature, as a model of good writing and rhetoric (Gilroy and Parkinson, 1996). Kramsch (1993), advocating the use of literary texts in foreign language teaching, refers to Bakhtin's (1895-1975) distinction between single-voiced and double-voiced discourse, with the literary text as the epitome of the latter. In her view, literary and non-literary discourse differ in degree, but not in kind, with the newspaper article, the essay and the short story on a continuum from single-voiced to double-voiced discourse. Pulverness (1996:26) also draws on Bakhtin's idea that 'all fictional writing manifests the quality of dramatic discourse' to analyse voices in prose fiction.
However, Carter and Long (1991) suggest that, in the language and literature classroom, it is necessary to approach such issues with an open mind.

Texts: What to Include?

The necessity of introducing literary texts in the language classroom is an established fact. However, the appropriateness of the texts selected for a particular class remains a crucial factor in the success of the teaching approach followed in that class. Texts chosen should not be too long or too complex linguistically, nor too far removed from the world knowledge of the students. However, linguistically simple texts are not always 'simple'. As Carter (1988) points out, contemporary literature is not always the most accessible. This is especially true for non-native, non-European students, because of its reflection of a world usually based on an impersonal industrialised scenario where spiritual values are non-existent. In addition, experimental usage of the language can in many cases give rise to complexity in understanding, whereas writings from the past may prove quite understandable in terms of their comparatively simple setting, theme and use of language, producing more personal responses and more direct involvement. The argument that the language of a literary work, especially one from the past, is not typical of the language of daily life can be overcome by using a proper methodology. Drawing parallel examples or converting older English to modern English, for example from Shakespearean English to present-day English, would instil interest in the learner. One main reason for using literature is to encourage students' creativity. However, when choosing a text, language difficulty has to be considered, so that access is not restricted and the learners can attain a basic level of comprehension. McKay (1986), however, cautions against simplification of texts, since this may result in diluting information and reducing cohesion and readability. Students also need to be able to identify with the experience, thoughts and situations depicted in the text, in order 'to make connection to personal or social significance outside the text' (Brumfit, 1985: 108). Therefore, as McRae (1991:126) suggests, a good choice would be any text that encourages or invites interaction with the world of ideas, a text that 'affirms, confirms and expands the indispensable human capacity to read the world'. Texts should also provide good potential for a variety of classroom activities, in order to give students more chance to gain true familiarity with any work as a whole. Most importantly, the texts should have the capacity to engage the interest of the student. For example, as noted by Collie and Slater (1987), short stories offer greater variety than longer texts, giving a greater chance of finding something to appeal to each individual's tastes and interests, whilst poems offer a rich, varied range and are a source of much enjoyment. McRae and Vethamani (1999) observe that the growth of strong local literatures in English has triggered a corresponding interest in incorporating such texts into language teaching materials. Vethamani (1996) argues that new literatures are unjustly overlooked in many teaching contexts, and that their inclusion in the classroom can broaden students' perception of the use of English in wider cultural contexts, which will continue to fuel interest in using literary texts for cross-cultural exploration.
As such, literature lends itself well to investigating similarities and differences between self and others, and to an awareness and understanding of 'the other' (Kramsch 1993). However, in many EFL contexts there are constraints on the teacher's part in terms of the availability of books or the set curriculum they must follow. If the texts are imposed and used year after year, it becomes more and more difficult to maintain one's originality and enthusiasm, for both teachers and students. Goodwyn and Findlay (1999) point out that teachers teach best when they are enthused about the text or topic they are teaching. Nevertheless, it has already been observed that the available texts and materials can be successfully used to achieve objectives if used properly and systematically. The above discussion has shown that literary texts can prove very useful in the English language classroom. They stimulate the imagination, offer learners specimens of real language use, allow for group discussion and individual exploration, and are intrinsically more dialogic. They can enhance reading skills, focus attention on combinations of words, create a feeling for language, and help draw attention to different types of language usage and levels of discourse. Moreover, a literary passage can be used not only for creative reasons but also for informational, grammar, and vocabulary development purposes. Literary texts can thus be exploited for language learning activities, allowing teachers to set questions that focus on much more than the retrieval of information and mechanical exercises.

Brumfit, C.J. (1985) Language and Literature Teaching: From Practice to Principle. Oxford: Pergamon.
Brumfit, C.J. and Carter, R. A. (eds.) (1986) Literature and Language Teaching. Oxford: Oxford University Press: Introduction.
Carter, R. (1988) 'Directions in the Teaching and Study of English Stylistics'. In Short, M. (ed.). Reading, Analysing and Teaching Literature. London: Longman: 10-21.
Carter, R. and Long, M. N. (1991) Teaching Literature. New York: Longman.
Collie, J. and Slater, S. (1987) Literature in the Language Classroom: A Resource Book of Ideas and Activities. Cambridge: Cambridge University Press.
Eagleton, T. (1983) Literary Theory. Oxford: Basil Blackwell.
Eagleton, T. (1976) Criticism and Ideology. London: Verso Editions.
Gilroy, M. and Parkinson, B. (1996) 'State of the Art Articles - Teaching Literature in a Foreign Language'. Language Teaching 29/4: 213-225.
Goodwyn, A. and Findlay, K. (1999) 'The Cox Models Revisited: English Teachers' Views of their Subjects and of the National Curriculum'. English in Education. Vol. 33, No. 2, Summer, NATE Sheffield: 19-31.
Halliday, M.A.K. (1985) Spoken and Written Language. Oxford: Oxford University Press.
Kramsch, C. (1993) Context and Culture in Language Teaching. Oxford: Oxford University Press.
Maley, A. (2001) 'Literature in the Language Classroom'. In Carter, R. and Nunan, D. (eds.). The Cambridge Guide to Teaching English to Speakers of Other Languages. Cambridge: Cambridge University Press: Chapter 26: 180-193.
McKay, S. (1986) 'Literature in the ESL Classroom'. In Brumfit, C.J. and Carter, R. A. (eds.). Literature and Language Teaching. Oxford: Oxford University Press: 191-198.
McRae, J. (1991) Literature with a Small 'l'. London: McMillan Publishers Limited.
McRae, J. and Vethamani, E. M. (1999) Now Read On. London: Routledge: xi-xvi.
Miller, J. H. (2002) On Literature. London: Routledge.
Pulverness, A. (1996) 'Outside Looking In: Teaching Literature as Dialogue'. In Hill, D. A. (ed.). Papers on Teaching Literature from The British Council Conferences in Bologna 1994 and Milan 1995: 25-32.
Scholes, R. E. (1985) Textual Power: Literary Theory and the Teaching of English. New Haven, CT: Yale University Press: 15-16.
Short, M. (1983) 'Stylistics and the Teaching of Literature'. In C.J. Brumfit (ed.). Teaching Literature Overseas: Language-Based Approaches. ELT Documents: 115. Oxford: Pergamon Press and The British Council: 67-79.
Short, M. and Candlin, C.N. (1986) 'Teaching Study Skills for English Literature'. In Brumfit, C.J. and Carter, R.A. (eds.). Literature and Language Teaching. Oxford: Oxford University Press: 89-109.
Vethamani, M. E. (1996) 'Common Ground: Incorporating New Literatures in English in Language and Literature Teaching'. In Carter, R. and McRae, J. (eds.). Language, Literature and the Learner: Creative Classroom Practice. New York: Addison Wesley Longman.
Widdowson, P. (1999) Literature. London: Routledge.
Williams, R. (1976) Keywords: A Vocabulary of Culture and Society. London: Fontana Press/Harper Collins Publishers.
Name: _________________________    Period: ___________________

This quiz consists of 5 multiple choice and 5 short answer questions through Carmen Jones: The Dark Is Light Enough.

Multiple Choice Questions

1. Why is the language and dialect in the film Carmen Jones considered an insult to black people?
(a) Because the characters speak like stereotypical Negroes.
(b) Because the characters speak like northern-born Negroes.
(c) Because the characters speak like enslaved Africans.
(d) Because the characters speak like antebellum Negroes adopting the language patterns of their masters.

2. What two stereotypical figures does the author portray as portraits of the Negro in America with laudatory and heinous qualities?
(a) Aunt Jemima and Carmen Jones.
(b) Aunt Jemima and Othello.
(c) Aunt Jemima and Uncle Tom.
(d) Carmen Jones and Uncle Tom.

3. According to Baldwin, what color are the robes of the saved?

4. What novel does Baldwin compare Uncle Tom's Cabin to?
(a) Black Boy.
(b) Little Women.
(d) Native Son.

5. What stereotypical trait do the immoral and evil characters share in the film Carmen Jones?
(b) Brown eyes.
(c) Dark colored skin.
(d) Coarse hair.

Short Answer Questions

1. Although it is known that African Americans are not biologically or mentally inferior, what occurred during World War II?
2. According to Baldwin, what color is associated with evil?
3. Which of the following is not a characteristic given to history by Baldwin?
4. According to Baldwin, who are the three most important characters in Uncle Tom's Cabin?
5. According to Baldwin, if black faces cannot be made white, then in what other way can they be changed in order to be less offensive to white society?
The leading cause of vision loss and blindness in Americans over the age of 65 is a disease called macular degeneration. The symptoms of the disease appear slowly and painlessly, but can be devastating to vision if left unchecked. There is currently no known cure for macular degeneration, though there are myriad treatment options available to prevent its slow progression, and even improve vision once the disease takes hold.

What Is Macular Degeneration?

Macular degeneration is defined as a disease that gradually destroys the central area of the retina, known as the macula. The macula transforms light waves from directly in front of the eye into nerve signals that the brain processes into discernible images. When the macula becomes damaged, crisp central vision is compromised. Macular degeneration, also known as age-related macular degeneration (AMD), slowly destroys central vision and, if left untreated, can seriously impair vision. Because macular degeneration affects only straight-ahead vision, it cannot lead to total blindness. It can, however, severely impair the ability of sufferers to easily perform normal activities such as reading and driving.

Causes of Macular Degeneration

Macular degeneration is a common eye disease in individuals over the age of 60. The causes of macular degeneration are unknown; however, environmental factors and genetics may contribute to this disease. The causes of macular degeneration have been associated with several risk factors, including:
- Age: People over the age of 50 are at a greater risk of age-related macular degeneration (AMD). While macular degeneration can develop in middle-aged people, the chances of developing AMD rise drastically with advanced age. In fact, age is the most telling risk factor for developing macular degeneration.
- Gender: Women are at a greater risk of developing macular degeneration.
- Race: Caucasians are more likely to develop AMD than African-Americans.
- Smoking: Smokers are at greater risk of developing AMD than non-smokers.
- Genetic history: Having family members who have suffered from AMD increases the risk of developing macular degeneration.
- Cholesterol levels: Higher-than-normal levels of cholesterol may be correlated with a higher risk of developing the wet type of age-related macular degeneration.

The causes of the wet form of macular degeneration include aging, but severe nearsightedness and intraocular infections are also risk factors for this type of the disease.

Symptoms and Diagnosis

There are several different tests by which a qualified eye care physician can reach a macular degeneration diagnosis, each more or less useful for detecting different stages of the disease. Pupil dilation, the Amsler grid test, and fluorescein angiograms are currently the most effective ways to diagnose the disease. Read more about each type of macular degeneration test below.

Macular Degeneration Diagnostic Test

There are several procedures often used for macular degeneration diagnosis, each suited to test for different stages and forms of the disease. It is recommended that you see a specialist for a thorough diagnostic macular degeneration test and eye exam if you are over the age of 55 or are noticing any symptoms of macular degeneration. During a standard eye exam, your eye care specialist may dilate your pupils to get a fuller view of the retina and a closer examination of any possible damage or debris. Your eyes will remain blurry for several hours after the test.
A visual examination assisted by pupil dilation is one of the best ways to detect the early, or dry, form of macular degeneration. While detection of debris and decayed tissue in the eye does not necessarily mean that the patient will develop macular degeneration, the test is useful for determining whether preventative measures should be taken to defend against it. The earlier a diagnosis is made, the sooner you can begin treatment for macular degeneration.

Amsler Grid Test

One of the easiest methods for detecting macular degeneration is the Amsler grid test. The Amsler test is simply a square grid with black lines running parallel to each other horizontally and vertically, and a black dot in the center for the patient to focus on. A person with normal vision will see the grid as it appears on the page; however, a person with wet macular degeneration will see distortion in the lines, as if the grid has been twisted or has a hole in the middle of it. Early macular degeneration diagnosis may facilitate prevention of further vision loss, or even restore vision that has been lost.

Fluorescein Angiogram

If an eye care specialist suspects a patient is suffering from wet macular degeneration, he or she may order a fluorescein angiogram test. During the procedure, a special dye is injected into the bloodstream through the arm. Within seconds, the dye travels through the body to the eye. A special camera is then used to highlight the dye, allowing the eye care professional to see if there are leaks or problems in the eye — and, if so, where the problems are. While there are currently no treatments available to completely repair the eye after the onset of macular degeneration, catching it early enough may allow medications to prevent further damage or even restore some lost vision.

Progression and Types

Macular degeneration begins in the dry form, as the eye tissue begins to deteriorate and decay. As the name suggests, the disease advances as the tissue degenerates, progressively decaying over time. The wet, or neovascular, form of macular degeneration occurs when the body responds to this decay and attempts to replenish lost nourishment in the eye by creating new blood vessels. These blood vessels leak blood and fluid, causing permanent damage to light-sensitive retinal cells and creating blind spots. Approximately ten percent of dry AMD cases progress to the wet form. As the disease progresses, patients may notice several changes in their vision.

Patients with the dry form of the disease may show no outward symptoms of macular degeneration. Deposits of drusen - formed from deteriorating tissue in the macula and other areas - can begin to accumulate in the eye without affecting a patient's vision. A qualified specialist can recognize these early symptoms of dry macular degeneration and suggest steps that may slow or halt its progression.

Blurry or Distorted Central Vision

As more and more tissue deteriorates in the macula and the rest of the retina, the patient may begin to experience blurry or shadowy vision, though the vision loss is not nearly as severe in the dry stages of the disease as in the later, wet stages. Patients experiencing macular degeneration progression should seek medical attention immediately, as there is still a chance that vision loss may be slowed, stopped, or even reversed slightly.

Complete Loss of Central Vision

The last (and most severe) phase of macular degeneration progression is the wet stage.
During this phase, the body attempts to compensate for the deterioration of the macula by growing new blood vessels in the area. These blood vessels, and the blood and fluid that can leak from them, cause irreparable damage to the macula, resulting in a permanent loss of central vision.

Types of Macular Degeneration

There are a few different types of macular degeneration.

Age-Related Macular Degeneration

For people over the age of 60, age-related macular degeneration is the most common cause of severe vision loss. This disease affects an individual's central vision, which makes it difficult to drive, read, and complete many daily activities. Age-related macular degeneration occurs when the center of the retina (the macula) degenerates, causing central vision to deteriorate. Rather than seeing the whole picture, an individual with age-related macular degeneration sees a dark or blind area in the central field of vision. Eventually, complete loss of central vision can occur. Age-related macular degeneration is a scary possibility for many seniors, as it could lead to greater dependence on others. However, there are several treatments available for age-related macular degeneration, including medication and possible surgery. Many people can and have lived with this disease, and educating yourself on the causes, symptoms, and treatments will better prepare you if vision loss occurs due to age-related macular degeneration.

Dry Macular Degeneration

Approximately 90 percent of age-related macular degeneration sufferers have dry macular degeneration, an early stage of the disease. Central vision loss can occur with dry macular degeneration; however, it is not nearly as severe as in the wet form. Though scientists are not sure what causes dry macular degeneration, they speculate that part of the retina becomes diseased, leading to the destruction of the light-sensing cells in the macula. Aging and thinning of macular tissues can also lead to dry macular degeneration.

Advanced Wet Macular Degeneration

About 10 percent of people with age-related macular degeneration are affected by an advanced type known as wet macular degeneration. This form of the disease is more advanced and damaging than dry macular degeneration because it leads to the formation of new blood vessels within the eye that leak fluid and blood under the macula. This leakage damages the macula and leads to vision loss in a short amount of time. Wet forms of macular degeneration can be divided into two groups:
- Classic wet macular degeneration: Associated with more severe vision loss. Occurs when the growth of the blood vessels has clear outlines that can be seen beneath the retina.
- Occult wet macular degeneration: In this type of advanced macular degeneration, leakage and growth behind the retina are not as evident, producing less severe vision loss.

Wet macular degeneration accounts for 90 percent of all blindness in age-related macular degeneration cases. Treatments such as surgery and medication are options for individuals who suffer from this advanced form of macular degeneration.

Treatment and Recovery

Macular degeneration treatment options vary greatly, with just as much variety in the type of recovery period one can expect following the procedure. Most treatments available now revolve around changes in diet and nutrition combined with drug therapy, making recovery relatively simple. Some cases call for laser surgery, which is performed as an outpatient procedure with minimal recovery time.
Common Milkweed (Asclepias syriaca) is the best known of the 100 or so milkweed species native to North America. The name “common” fits the plant well because when not in bloom, it goes pretty much unnoticed, growing humbly along roadsides, in fields, and in wastelands. Beneath its dull, gray-green exterior, milkweed is full of uncommon surprises.
- Inside the plant is a sticky white sap that contains a mild poison; its bitter taste warns away many of the animals and insects that try to eat its tender leaves.
- Certain insects, including monarch butterfly larvae, are immune to the toxin. By feeding almost exclusively on milkweed leaves, they are able to accumulate enough of the poison in their bodies to make them distasteful to predators.
- Native Americans taught early European settlers how to properly cook milkweed so that it could be safely eaten. (See note below.)
- The milky white sap was applied topically to remove warts, and the roots were chewed to cure dysentery. Infusions of the roots and leaves were taken to suppress coughs and used to treat typhus fever and asthma.

Note: Today, experienced foragers enjoy eating milkweed, provided it is properly identified (there are poisonous lookalikes, such as dogbane) and prepared (boiled). Some common milkweed plants (A. syriaca) are mild tasting while others are bitter (in which case, avoid them or boil them in several changes of water). If you are new to foraging, have an expert help you identify, gather, and prepare the plant properly before eating. As with any herb, take a small amount at first to be sure that you don’t have a reaction.

Caution: Do not get milkweed sap in your eyes (for example, by rubbing your eyes after touching the sap); wash your hands thoroughly after handling the plant. Also, some people may develop an allergic reaction when the sap touches the skin.

- The stem’s tough, stringy fibers were twisted into strong twine and rope, or woven into coarse fabric.
- In milkweed’s rough pods was another wonderful surprise. The fluffy white floss, attached to milkweed’s flat brown seeds, could be used to stuff pillows, mattresses, and quilts, and was carried as tinder to start fires.
- During World War II, the regular material used to stuff life jackets was in short supply, so milkweed floss was called for as a substitute—it is about six times more buoyant than cork!
- Over the years, researchers have investigated growing milkweed for papermaking, textiles, and lubricants, and as a substitute for fossil fuels and rubber. Although these experiments were found economically unfeasible at the time, perhaps they should be revisited, given the rising costs of fuel and other materials.
- In current research, a chemical extracted from the seed is being tested as a pesticide for nematodes.

We doubt that this surprisingly useful plant will run out of surprises anytime soon.
Definition of dog whistle in English:

1 A high-pitched whistle used to train dogs, typically having a sound inaudible to humans.
- X rays are light emitted at much higher frequencies than humans can see, in the same way as a dog whistle blows at a frequency that is beyond the sensitivity of the human ear.
- Blow into it and the sound produced is not unlike that of a dog whistle.
- He kept blowing a dog whistle in a fruitless attempt to coax Molly out.

1.1 [usually as modifier] A subtly aimed political message which is intended for, and can only be understood by, a particular group: dog-whistle issues such as immigration and crime
- She nailed the point of why the Government was holding such an Inquiry, describing it as "dog whistle politics to men's groups aggrieved by the Family Court".
- This dog whistle may have been missed by his audience, and was certainly neglected by the press, but it resonated in Conservative headquarters.
- Mr Norris added that modernisers who felt uneasy about the party's focus on "dog whistle" issues such as asylum may struggle to criticise the leadership.
Current Approaches to Prevention

Historically, efforts to prevent intimate partner violence, sexual violence, and dating violence targeted potential victims and taught this audience how to identify and protect themselves from abusers. Examples include early education programs teaching children how to say “no” and campus education classes teaching women how to reduce their risk of victimization. Few efforts targeted men, and those that did tended to approach the audience as potential abusers. Experts and practitioners in the field have since criticized these earlier programs for placing the onus of preventing violence on victims, failing to address abusers' responsibility for the violence, and alienating males by addressing them only as potential perpetrators of violence. We have since learned that males are important allies in the field of violence prevention, and that both males and females, both children and parents, and both students and teachers – everyone – would benefit from learning about their role as a bystander to intimate partner violence, sexual violence, and dating violence. This creates a broader community context for prevention that not only includes everyone, but also takes into account the various risk and protective factors that occur at each level of the social ecological model. Two promising approaches to intimate partner violence, sexual violence, and dating violence prevention are the social norms approach and the bystander engagement approach.

Social Norms Approach

Social norms theory suggests that people misperceive others’ attitudes and behaviors towards intimate partner violence, sexual violence, and dating violence as being more supportive than they actually are. For example, college males may believe it is fairly common on campus for men to try to get their dates intoxicated and pressure them into having sex. Research has shown that one consequence of these misperceptions is that people change their behavior to better reflect what they believe to be the norm.[i] In other words, these misperceptions encourage unhealthy or problem attitudes and behaviors and inhibit healthier attitudes and behaviors. The social norms approach can be applied at all three levels of prevention. Examples include: universal social norms marketing campaigns to correct misperceptions and encourage healthier attitudes and behaviors; more selective interventions, such as interactive workshops, classes, or discussions among members of a particular group; and indicated interventions for individuals who have already engaged in problematic or unhealthy behavior, typically using motivational interviewing and stages of change theory to provide feedback to individuals.

Bystander Engagement Approach

Another consequence of people’s misperceptions of others’ attitudes and behaviors towards intimate partner violence, sexual violence, and dating violence is that these misperceptions can inhibit their willingness to intervene in violent situations. Our willingness to step up and speak out against intimate partner violence, sexual violence, and dating violence is critical to eliminating these types of violence. Silence only reinforces and encourages perpetrators' violent behavior and leaves victims feeling isolated or at fault. Researchers have studied bystander engagement theory for years.
Their work has found that whether or not someone decides to actively intervene in a violent situation is affected by a number of situational factors (e.g., the presence of other witnesses or the level of urgency or danger), individual characteristics (e.g., the demographics and relationship of those involved, or a person’s level of skill to safely intervene in violent situations), the person’s feelings and attitudes about violence, and their perceived cost of intervening.[ii] The bystander engagement approach can address some of these factors in ways that make participants more likely to intervene: providing information on how widespread intimate partner violence, sexual violence, and dating violence are; education on the impact these types of violence have on victims; tips on how to respond to and support someone who discloses abuse; and skills to step up and speak out against behavior that contributes to violence. Essentially, the bystander engagement approach creates a broader community context for violence prevention that includes not just victims or perpetrators of violence but their friends, family, peers, teachers, coworkers, and other community members.

i. Berkowitz, A. D. (2004). The social norms approach: Theory, research, and annotated bibliography. Retrieved from www.alanberkowitz.com/articles/social_norms.pdf.
ii. Foubert, J. D., Tabachnick, J., & Schewe, P. A. (2010). Encouraging bystander intervention for sexual violence prevention. In K. L. Kaufman (Ed.), The prevention of sexual violence: A practitioner’s sourcebook (pp. 121-134). Holyoke, MA: NEARI Press.
Vomit, also called puke, barf, and a range of other names, is regurgitated liquids and solids from a person’s stomach. When a person eats and drinks, the food he consumes usually travels through his esophagus to his stomach and then on to the intestine as it goes through the process of digestion. The parts of the food that the body cannot use exit the body through the anus. Sometimes, however, an illness, bodily upset, or gag reflex causes the food to back up from the digestive tract and exit through the mouth in the form of vomit.

The digestive process usually works exactly as people expect it to, and consumed food moves through the digestive system. The material that is left over leaves the body in the form of a bowel movement. Sometimes, however, something upsets this natural course, and a person vomits instead. There are many things that may cause vomiting. Often, it is the result of a virus or bacterium that causes an illness. For example, a person may consume a food that has been contaminated by bacteria and vomit because of it. Sometimes the same thing may happen when a person fails to wash his hands before eating or preparing food. In such a case, a virus or bacterium that was on his hands may contaminate his food and cause him to become ill. A person may even catch a virus that causes vomiting from another person. This is often referred to as the stomach flu. The stomach flu is not related to influenza, which is a respiratory illness. It is possible for a person to vomit when he has a respiratory illness such as influenza, however.

Besides viruses and bacteria, there are many other conditions and situations in which a person may begin to vomit. For example, a person may vomit after spinning around a great deal or riding an amusement park ride; some women also experience vomiting in the early months of pregnancy. An individual may vomit when he has an ulcer, a range of chronic conditions, or a food intolerance. In some cases, a person may even vomit when he sees or smells something that makes him feel sick. For instance, some people puke when they see others vomiting.

In most cases, vomiting ends after a short period of time, and people begin to feel better without medical intervention. If a person vomits repeatedly, for more than a couple of days, or has other worrisome symptoms, however, he may do well to consult a doctor. Likewise, a person may do well to speak to a doctor if he appears to be vomiting blood or bile, which is a digestive fluid the liver makes. Additionally, a stiff neck or severe abdominal pain may also warrant a phone call to a doctor.
(Article written by Kerri Watson, Upper School Environmental Science Teacher) Senior Environmental Science students visited the kindergarten classes in the science lab to teach them about primary and secondary colors. The giants taught them that primary colors (red, yellow, and blue) are the “first” colors and that when we mix them, we can make secondary colors (orange, green, and purple), the “next” colors. Kindergarteners learned new vocabulary and encountered a surprise reaction in the process. Beforehand, baking soda solutions and vinegar solutions of red, blue, and yellow colored water were made. Kindergarteners hypothesized what “next” color would be made by mixing two “first” colors. Because of the baking soda and vinegar, the resulting reaction created secondary-colored foam. We made a mess, but it was fun. We made a surprise discovery, too. When we mixed the two solutions in a graduated cylinder, the differing densities created a color layer in the cylinder. That was pretty cool to observe and discuss. Yet the kindergarteners didn’t want to stop there! They wanted to investigate what might happen if we mixed all the colors. Hypotheses for the resulting color were black, brown, and maybe even a rainbow! We tried to make a rainbow-colored density stack in the cylinder, but it didn’t quite work out. So a white bucket was used to pour in the remaining color solutions, which resulted in a bubbly dark brown that we all agreed looked like Coca-Cola. After a round of questions and answers from the kindergarteners to the seniors, the lab concluded with collecting data on everyone’s favorite color. The kindergarteners will take that data and use it for graphing analysis and discussion back in their classrooms.
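For readers curious about the chemistry behind the foam, the fizzing the kindergarteners saw is the familiar acid-base reaction between baking soda (sodium bicarbonate) and the acetic acid in vinegar; the carbon dioxide gas it releases is what lifts the colored foam:

$$\mathrm{NaHCO_3 + CH_3COOH \longrightarrow CH_3COONa + H_2O + CO_2\uparrow}$$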
While a healthy diet is important for everyone, it is indispensable to a growing child. It is extremely disturbing that a large percentage of children now face nutrition-related health problems that were once confined to adulthood - namely, type 2 diabetes, high cholesterol levels, and high blood pressure. Diseases such as cancer, coronary artery disease, and osteoporosis can stem from the eating habits we form in childhood. Even ADHD can be affected by diet. Since children and adolescents eat up to two meals and from one to three snacks during the school day, it is imperative that schools offer only healthy foods from which to choose. Not only do we guide students in establishing healthy eating habits that help prevent serious health problems, but we also improve behavioral and intellectual performance through increased attentiveness. With this comes better test scores and, as a consequence, higher self-esteem. It's important to note, too, that we send mixed signals if we teach the value of good eating habits through the curriculum while serving food that is far from what most nutritional scientists consider healthy. We must learn to walk the talk. The children expect it and deserve it. Typically, foods found in schools are high in added sodium, sugar, and fat - but they don't have to be. Many schools around the country are improving school lunches and bringing healthy vending to their students. The school garden is gaining popularity as both a source of school food and an open-air classroom. Remember: Children cannot attain their full potential unless they are well nourished and healthy. It is important that all food choices at school nurture student health and academic performance. The following links will assist anyone who wants to take steps to improve the quality of foods served to children in schools.
Precision agriculture combines GPS, remote sensing, and GIS to capture large amounts of georeferenced data on spatial variations in soil types, moisture content, nutrient availability, and crop yields and then create and follow prescription maps. Until recently, it was difficult for growers to correlate soil and crop information with production techniques, so they generally treated their fields uniformly. Precision agriculture, also known as site-specific farming, enables them to micromanage their fields and apply water, pesticides, herbicides, and fertilizers at a variable rate, which cuts expenses, increases yields, and reduces environmental harm from farming. Navigation equipment on tractors, combines, sprayers, and other farm machinery displays field maps and prescription maps and automatically guides the machinery—this is most commonly referred to as auto-steer. Additionally, growers use GPS and GIS to scout crops, plan farm operations, and work when visibility is low, such as in rain or dust storms, in the fog, and at night. Growers and crop advisers also use GPS receivers to map roads, field boundaries, irrigation systems, weeds, pest infestations, diseased plants, distances, and the acreage of field areas, as well as to navigate accurately back to specific field locations to monitor crop conditions and collect soil samples. GPS-equipped crop dusters are able to fly accurate swaths over fields without endangering human flaggers and while minimizing the amounts of chemicals sprayed and their drift.
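To make the idea of a prescription map more concrete, here is a minimal sketch of the variable-rate logic described above. All of the numbers, the grid, and the target value are illustrative assumptions, not agronomic recommendations; real systems work from interpolated, georeferenced soil samples and yield maps rather than a hard-coded array.

```python
# Minimal sketch of building a variable-rate fertilizer prescription map.
# Every number, field name, and threshold here is an illustrative assumption.

import numpy as np

# Hypothetical 4 x 4 grid of soil nitrate samples (kg/ha), one per management
# zone, as might be interpolated from georeferenced soil cores.
soil_nitrate = np.array([
    [12.0,  9.5, 14.2, 11.0],
    [ 8.0,  7.5, 10.1, 13.4],
    [15.0, 12.2,  9.0,  8.8],
    [11.5, 10.0, 12.8, 14.5],
])

TARGET_N = 25.0  # assumed crop nitrogen requirement, kg/ha

# Variable-rate prescription: apply only the deficit in each cell,
# instead of a single uniform rate across the whole field.
prescription = np.clip(TARGET_N - soil_nitrate, 0.0, None)

# Worst-case uniform rate needed to bring even the poorest cell up to target.
uniform_rate = TARGET_N - soil_nitrate.min()

print("Variable-rate map (kg/ha):")
print(prescription.round(1))
print(f"Mean variable rate: {prescription.mean():.1f} kg/ha")
print(f"Uniform rate needed to cover every cell: {uniform_rate:.1f} kg/ha")
```

The gap between the mean variable rate and the uniform rate is exactly the input saving, and the reduced environmental load, that site-specific farming is after.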
Defective or Abnormal Eggs in Poultry

Most “ridged,” “sunburst,” “slab-sided,” soft-shelled, or double-shelled eggs are the result of eggs colliding in the shell gland region of the oviduct when an ovum (yolk) is released too soon after the previous one. Necropsy examinations have demonstrated that two full-sized eggs can be found in the shell gland pouch. As the second egg comes in contact with the first, pressure is exerted, disrupting the pattern of mineralization. The first egg acquires a white band and chalky appearance, while the second egg is flattened on its contiguous surface (i.e., slab-sided). Pimpled or rough eggs may have been retained too long in the shell gland. Blood spots result when a follicle vessel along the stigma ruptures as the ovum is being released. Meat spots occur when a piece of follicle membrane or residual albumen from the previous day is incorporated into the developing egg. Many abnormalities appear to have no specific cause, but the incidence is much higher in hens subjected to stressful management conditions, rough handling, or vaccination during production. A significant increase in the number of soft-shelled eggs is also common as a result of viral diseases such as infectious bronchitis, egg drop syndrome, and Newcastle disease. Small eggs with no yolk form around a nidus of material (residual albumen) in the magnum of the oviduct. Small eggs with reduced albumen and eggs with defective shells may be the result of damage to the epithelium of the magnum or shell gland. Very rarely, foreign material that enters the oviduct through the vagina (e.g., a roundworm) may be incorporated into an egg.
After reading a little bit about the history of the Silk Road, I was curious about Cyrus the Great. Cyrus the Great was the founder of the Persian Empire, beginning in 550 BCE after conquering surrounding lands, including Babylonia. His empire spanned from Egypt to Turkey to Mesopotamia, and he is still considered the father of Iran and one of the greatest leaders of all time. When he conquered a land, he was a merciful leader. The Persian Empire standardized the administration of many things, including laws, road networks, a mail system, and the belief in “one god.” It also had a provincial system (dividing the empire into 20 states) and implemented a 20% tax system, except that Persians paid no taxes. Cyrus the Great heavily influenced Alexander the Great 200 years later.

Cyrus the Great and the Persian Empire, 550 BC(E)
Hello! I’m William and I’ll be explaining what on earth geology has to do with water! Do you know where your drinking water comes from? Most rain or snow that falls either evaporates, is soaked up by the soil and plants, or runs into rivers and lakes. Some water sinks down, through tiny holes and cracks, into rocks which act as underground storage 'reservoirs'. These huge rock 'sponges' are called 'aquifers' and the water stored in them is called 'groundwater'. The green coloured rocks in the centre are made of chalk. Chalk is a good aquifer, a store for water. This example is from northern France. As well as this groundwater, water is also stored frozen as ice at the north and south poles. But 97% of all the water on our planet is held in the oceans. Geological maps can show ice as well as rocks, like this map of Antarctica. In an aquifer, the level that the water reaches is called the water table. Water, as you can see in your bath, will find its own level. This happens underground too. The water table moves up and down according to how much water is available to fill the given space… just like your bath water. We can tap into groundwater by using wells that reach down to the water table. A rock that has holes and holds water is called 'porous'. Rocks that let water soak through them are called 'permeable'. These are rocks like limestone, chalk and sandstone. Rocks that will not let water pass through them are called 'impermeable', like mudstone and granite. An aquifer is usually made of sandstone or limestone. Water in aquifers can become polluted, so we must be careful about chemicals or leakages into the ground. Water can be held in these rocks, called aquifers, for many years. Let’s see what else we can find out about water. Try searching the internet for:
- ice sheet
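To get a feel for how much water one of these rock 'sponges' can hold, here is a tiny worked example. The area, thickness, and porosity figures are made-up, illustrative values (real aquifers vary enormously); the point is only the formula: stored water = rock volume x porosity.

```python
# A minimal sketch, with illustrative numbers only, of how much water a
# porous rock 'sponge' can hold: stored water = rock volume x porosity.

area_km2 = 10.0      # assumed aquifer area
thickness_m = 50.0   # assumed saturated thickness below the water table
porosity = 0.25      # assumed fraction of the rock's volume that is holes

rock_volume_m3 = area_km2 * 1_000_000 * thickness_m  # km^2 -> m^2, then x depth
stored_water_m3 = rock_volume_m3 * porosity

print(f"Rock volume: {rock_volume_m3:,.0f} m^3")
print(f"Stored groundwater: {stored_water_m3:,.0f} m^3")
# About 125 million m^3 of water from a fairly modest block of rock --
# which is why aquifers matter so much as drinking-water reservoirs.
```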
All electrical circuits, no matter how complex, can be broken down into simple components. In a simple direct current, or DC, circuit, a battery supplies power, wires deliver the power, a switch permits or stops power flow, and a load uses the power. While a professional electrician will always use special components when installing or repairing an electrical system, such as the lights in a house, you can make an electrical circuit with paper clips and other common household items.

Things you need
- Wire strippers
- 2 lengths of 18 to 22 gauge wire, 6 inches long
- Electrical tape
- D-cell battery
- Cardboard square, 3 inches by 5 inches
- 2 metal thumbtacks or nails
- 2 paper clips
- 1.5-volt light bulb

Strip half an inch of insulation from each end of the two wires. Tape one stripped wire end to the positive or "+" terminal of the battery. Wrap the other end of this wire around one end of the first paper clip. Press one thumbtack or nail into the cardboard, through the end of the paper clip with the wire wrapping. Ensure that the paper clip can pivot freely on the thumbtack or nail. This paper clip is the switch. Straighten the other paper clip. Press one end into the cardboard near the other paper clip, but not through it. This is the contact point for the switch. When the switch touches the contact point, electricity will flow. Wrap the other end of the straight paper clip around the positive or "+" contact of the light bulb. Tape it in place if necessary. Tape the stripped end of the second wire to the negative or "-" contact on the light bulb. Tape the other end of this wire to the "-" terminal of the battery. Tape the battery and light bulb to the cardboard to hold them in place. Move the paper clip switch to touch the contact point. The light bulb should illuminate.

Tips and warnings
- Some light bulbs have a metal dot at the bottom as one contact and screw threads around the base as the other contact. Other light bulbs have two blades of metal coming out of the bottom as contacts. Either type of bulb will work for this project.
- Do not attempt to take the battery apart.
- Do not dispose of the battery in a fire. Follow all precautions on the battery packaging.
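If you want to put rough numbers on this circuit, Ohm's law (current = voltage / resistance) is all you need. The sketch below assumes a hot-filament bulb resistance of 5 ohms purely for illustration; a real bulb's resistance depends on its design and temperature.

```python
# Minimal sketch of the numbers behind the paper-clip circuit, using
# Ohm's law: I = V / R. The bulb resistance is an assumed, illustrative
# value, not a measured one.

voltage = 1.5          # D-cell battery, volts
bulb_resistance = 5.0  # assumed hot-filament resistance, ohms

current = voltage / bulb_resistance  # Ohm's law
power = voltage * current            # power dissipated in the bulb

print(f"Current through the circuit: {current:.2f} A")  # 0.30 A
print(f"Power delivered to the bulb: {power:.2f} W")    # 0.45 W
# Swinging the paper-clip switch away from the contact point makes the
# resistance effectively infinite, so the current drops to zero and the
# bulb goes out.
```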
Cavities are holes in your teeth resulting from tooth decay. Cavities typically occur in older adults, teenagers, and children, although everyone is prone to them regardless of age. They are a common cause of tooth loss in young individuals. They develop due to a combination of factors, including oral bacteria, frequent snacking, and sugary foods and beverages. If untreated, they become larger and affect the teeth's deeper layers. They can also lead to infection and severe toothache. Regular visits to your dentist and good flossing and brushing habits help prevent cavities.

Types of Cavities

Root cavities commonly affect adults and children. As you age, your gums recede, leaving parts of your tooth root bare. As a result, the exposed areas decay easily because you no longer have enamel to cover your tooth roots.

Decay can also develop around existing crowns and fillings. This occurs because these areas tend to accumulate plaque, which can ultimately result in decay.

The symptoms of cavities differ depending on the location and extent of the decay. When a cavity is in its initial stages, you may not experience any symptoms. However, your dentist may inform you that decay has begun and will recommend steps to prevent it from worsening. Cavities develop below your tooth's surface where you cannot see them, so the best way to spot and treat them is through regular dental visits. As the decay increases in size, it may produce symptoms such as:
- Tooth pain, particularly after having hot, sweet, or cold beverages and foods
- Visible holes or pits in your teeth. Cavities on your front teeth are the easiest to spot and will appear as black or brown spots. Cavities in other parts of your mouth are frequently not noticeable without an X-ray
- Tooth sensitivity
- Pain when biting down
- Pus around your tooth, particularly when you press on your gums

Treatment of Cavities

Cavity treatment depends on the severity of the decay and your situation. Various treatment options exist, including:

Fillings

Dentists install fillings by removing the decayed areas and replacing them with various materials such as silver and porcelain. Many dentists consider gold and silver alloy to be more durable than resin or porcelain. However, these materials are rather visible; hence, dentists normally place them on back teeth.

Caps or Crowns

Dentists use this option if the decay is extensive, leaving a weakened tooth. With this option, your dentist will remove the weakened or decayed area and fit a crown over the remaining tooth. Crowns are typically made of porcelain, gold, or porcelain fused to metal.
In recent years, the Brontosaurus has ceased to exist. It is not that the dinosaur disappeared, but rather that it never actually existed as a separate animal. The Brontosaurus as it was known to paleontologists was actually first discovered and classified as the Apatosaurus, and under the rules of the International Code of Zoological Nomenclature, the name documented first takes precedence. As a result, approximately thirty years ago the Brontosaurus name began to fade out and the once grand and well-recognized dinosaur received a name change. Despite the name change, however, this humongous beast of a dinosaur still holds a place in dinosaur lovers’ hearts as a “favorite.”

Brontosaurus: One of the Largest Creatures to Ever Walk on Our Planet

The Apatosaurus (Brontosaurus) first appeared on the dinosaur scene during the late Jurassic period, in the age of the giant Sauropods. One hundred and fifty million years ago, when the Apatosaurus (Brontosaurus) roamed the Earth, it weighed in at 23 metric tons and measured about 75 feet long; as such, it remains one of the largest creatures ever to walk the planet. The Apatosaurus (Brontosaurus) stood 15 feet tall at its hips and is known particularly for its long neck and its whip-like tail, which measured approximately 50 feet long.

How Did Brontosaurus Eat?

Due to the length of its neck, this giant herbivore is thought to have held its neck parallel to the ground rather than up above its body. Paleontologists hold a couple of theories as to how the Apatosaurus (Brontosaurus) used its long neck to feed. Contrary to popular misconception, the mammoth herbivore did not reach its neck up to tall trees; rather, it is thought to have either used its long neck in mowing motions to feed on vegetation or reached out into marshy ground to feed on lush swampy vegetation. Both of these theories hold ground because the Apatosaurus (Brontosaurus) was so large that it could not efficiently move through forests to feed on masses of vegetation, nor could it stand in swampy ground without sinking and thereby meeting a slow death.

Brontosaurus: Vegetarian Dinosaur (Herbivore)

Due to its sheer size, the Apatosaurus (Brontosaurus) is believed to have spent most of its life grazing and traveling. The vast amount of vegetation required to keep a 23-metric-ton dinosaur (think of approximately four elephants) full cannot be harvested from one area. From analysis of the Apatosaurus (Brontosaurus)'s teeth, it is believed that the large herbivore raked vegetation with its teeth to strip the leaves from the branches. Rather than chewing up the material it collected, the Apatosaurus (Brontosaurus) had stomach stones which helped to break up and digest the large foliage and tough plants that were swallowed whole.

What Did Brontosaurus Eat?

The dominant flora during the late Jurassic period consisted of many conifers, which most likely made up a large portion of the Apatosaurus (Brontosaurus)'s diet. Also thriving during the Apatosaurus (Brontosaurus)'s time were Caytoniaceous seed ferns, Bennettitales, and Cheirolepidiaceae, which grew on lower vegetation lines. Throughout forests, ginkgos and Dicksoniaceous tree ferns thrived and may well have been browsed by the Apatosaurus (Brontosaurus) as it passed the edges of the densely wooded areas.
Brontosaurus: Distinguishing Characteristics

Unlike the similarly constructed Diplodocus, the Apatosaurus (Brontosaurus) had less elongated cervical vertebrae and was heavier. Another distinguishing difference between these two large Sauropods is that the Apatosaurus (Brontosaurus) had both longer and more robust legs than the Diplodocus, as observed from the leg bones of both animals. It is also believed that the Apatosaurus (Brontosaurus) used its tail as a fifth leg to support itself when grazing taller vegetation; however, this theory is debated. It was popularly believed that the Apatosaurus (Brontosaurus) spent its early years partially submerged in water to support its weight, as was also believed of other massive Sauropods; however, recent research suggests that this would not have been possible, as it would have led to suffocation of the animal. As with the Diplodocus, paleontologists have come to believe that the Apatosaurus (Brontosaurus) held its neck at a 45-degree angle with its head pointing downward. Unlike the case for other giant Sauropods, a set of juvenile Apatosaurus (Brontosaurus) footprints discovered in 2006 by Matthew Mossbrucker in Morrison, Colorado suggests that juvenile Apatosaurus (Brontosaurus) were capable of running on their hind legs.

How Long Did Brontosaurus Live? 100 Years?

It is speculated by some dinosaur lovers that the Apatosaurus (Brontosaurus), as well as other Sauropods, could have lived as long as one hundred years. While this large number gives the Apatosaurus (Brontosaurus) an air of importance over other dinosaurs, it is not a number that can be proved or disproved at this time due to a lack of fossil evidence. There are currently only six partial specimens of Apatosaurus (Brontosaurus) in the hands of paleontologists, and none of these confirm such a long life span. Such a long lifespan may seem astronomical for such a large animal; however, the life spans of large creatures are not necessarily short. It is often assumed that large animals die sooner due to the strain on their hearts of pumping blood through such large bodies, but this is simply not true. A dinosaur like the Apatosaurus (Brontosaurus) was incredibly large, but it also had an incredibly large heart pumping blood around its body. Aside from this fact, it is not particularly bizarre for large creatures to have long life spans; take, for example, the elephant or, to a larger extreme, the Bowhead Whale. As a considerably large land mammal, the elephant can live well into its eighties. As a particularly large marine mammal, the Bowhead Whale has been known to live as long as two hundred years – although this comparison is slightly less applicable to the Apatosaurus (Brontosaurus) because marine mammals often live longer lives than land mammals.

How Did Brontosaurus Move and Live?

When considering the physiological structure of such a large creature, paleontologists must recreate these giant beasts to attempt to learn more about their movements and basic living functions. One of the most puzzling factors that paleontologists have yet to completely figure out in regard to the Apatosaurus (Brontosaurus) is the mechanics of breathing. Such a large creature could never have survived submerged in water, and based on the sheer amount of air that an animal of this size would have to take in, the reconstruction of the respiratory system becomes a particularly puzzling task.
While recreations of the Apatosaurus (Brontosaurus) respiratory system have been less fruitful than physiologists have hoped, computer animations replicating the size and weight of the Apatosaurus (Brontosaurus)'s tail have been more bountiful. An article in Discover Magazine in November 1997 reviewed computer animations by Nathan Myhrvold and found that the Apatosaurus (Brontosaurus) was more than likely capable of using its tail like a bullwhip, creating a sound measuring over 200 decibels. Because it was so very large, the Apatosaurus (Brontosaurus) had to make some adaptations that smaller dinosaurs of the time did not face. One such adaptation was the high blood pressure required to pump blood to all of the extremities of the Apatosaurus (Brontosaurus), including the extremely long neck. It is believed that the blood pressure of the Apatosaurus (Brontosaurus) was three to four times that of ours, and its heart was massive in order to keep up with the demands of a 23-metric-ton body.

Giant Heart, Tiny Brain

Unfortunately for the Apatosaurus (Brontosaurus), one of the things that did not adapt to accommodate its large size was its brain; this huge Sauropod had one of the smallest brains among all dinosaurs. On a more positive note, the massive size of the Apatosaurus (Brontosaurus) alone was one of the reasons that this herbivore escaped many predators. With a head that stood above the largest of carnivores during the late Jurassic period, the Apatosaurus (Brontosaurus) was able to protect its head and neck from attacks from predators. The huge bullwhip-like tail discussed previously also served as an efficient weapon for defending itself against predators. This is not to say that the Apatosaurus (Brontosaurus) was not susceptible to attacks, as can be seen from an Apatosaurus (Brontosaurus) vertebra which showed evidence of attack from an Allosaurus.

Where Did Brontosaurus Live?

The Apatosaurus (Brontosaurus) is believed to have lived in western North America, its fossils having been found in Colorado, Wyoming, Utah, and Oklahoma. The Apatosaurus was first named in 1877 by a Yale University paleontology professor, Othniel Charles Marsh. The specimen he named as Apatosaurus was a juvenile, and in 1879, when Marsh discovered a larger specimen, he incorrectly labeled it a second species, Brontosaurus. As the biggest land animal ever seen, the fully grown Brontosaurus specimen was mounted in the Peabody Museum of Natural History in 1905, using bones from similar dinosaurs to fill in the missing pieces. As it turns out, the head of the “Brontosaurus” display was actually a mock-up of the head of a Camarasaurus. Since this display was labeled as the Brontosaurus, the name stuck in the minds of the public, making it even more difficult to erase once it was discovered that the two specimens were the same dinosaur at two different points in time! Because Marsh named the first of the two specimens Apatosaurus, it was declared that this name should stand and the later term “Brontosaurus” should be used only as a synonym.

Brontosaurus Fossils: What We’ve Learned

Paleontologists have determined almost all of what they know about the Apatosaurus (Brontosaurus) from examining six partial skeletons that have been discovered throughout the western United States. Of all the Apatosaurus (Brontosaurus) fossils discovered, the most complete was found in the Morrison Formation in Colorado by Earl Douglass.
Through the discovery of Apatosaurus (Brontosaurus) fossils, four species of the giant herbivore have been identified: Apatosaurus ajax, found in Colorado; A. excelsus, found in Oklahoma, Utah, and Wyoming; A. louisae, found in Colorado; and A. yahnahpin, found in Wyoming. Despite there being only six partial skeletal remains of the Apatosaurus (Brontosaurus) in existence, much can be determined about this mammoth dinosaur from other giant Sauropods which lived in the same era and showed similar physiological characteristics. Some seemingly simple facts about the Apatosaurus (Brontosaurus) have been determined from tiny physiological features. One example is the fact that the vertebrae of the Apatosaurus (Brontosaurus) neck would not have allowed the dinosaur to lift its head above the top of its back. Due to the setting of the neck vertebrae, the neck could not have been lifted above the level of the back without the vertebrae pushing against each other and the back being arched – a position which would have been exceedingly uncomfortable for this giant herbivore! Other factors, such as the build and length of the Apatosaurus (Brontosaurus) tail, tell paleontologists much about the humongous dinosaur's ability to move as well as defend itself. Additionally, damage to the six partial skeletons assists paleontologists in determining what types of predators attempted to feast on the Apatosaurus (Brontosaurus). While there is little evidence of the life of the Apatosaurus (Brontosaurus) in comparison to other dinosaurs whose fossils are much more numerous, the large beast holds a particular fascination for paleontologists and animal lovers alike. What is it about this huge beast that draws us all to it? Aside from being seen as a gentle herbivore, the Apatosaurus (Brontosaurus) was such a truly giant creature that it seems too fantastical to be true. Without the proof of skeletal remains, it would never be believed that such a giant creature could exist, let alone live the lifespan that some paleontologists assume – one hundred years!
In ancient Greece, a mutation arose that resulted in some sheep having golden fleece rather than the regular white fleece. However, it was discovered that true-breeding golden-fleeced sheep were impossible to obtain.
- What is meant by true breeding? (2 marks)
- What is required for a line to become true breeding? (2 marks)
- Propose a genetic reason (hypothesis) why true-breeding golden-fleeced sheep may not be possible. Include in your answer symbols to indicate how the phenotypes are related to genotypes. (4 marks)
- What phenotypic ratio would you expect in the progeny if you crossed two sheep both having golden fleece, using your hypothesis? (4 marks)
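One classic hypothesis that fits this pattern, offered here purely as an illustration and not as an answer key, is a dominant golden allele that is lethal when homozygous. The sketch below enumerates a golden x golden cross under that assumption; the symbols G/g and the lethality rule are assumptions of the sketch, not facts stated in the question.

```python
# A minimal sketch of one classic hypothesis for why golden x golden never
# breeds true: golden (G) is assumed dominant over white (g), but GG embryos
# are assumed to die, so no true-breeding (GG) golden line can exist.

from collections import Counter
from itertools import product

def cross(parent1, parent2):
    """Enumerate the offspring genotypes of a monohybrid cross."""
    return [''.join(sorted(a + b)) for a, b in product(parent1, parent2)]

offspring = cross("Gg", "Gg")                     # golden x golden
survivors = [g for g in offspring if g != "GG"]   # GG assumed lethal

phenotypes = Counter("golden" if "G" in g else "white" for g in survivors)
print(phenotypes)  # Counter({'golden': 2, 'white': 1}) -> a 2:1 ratio
```

Under that assumption, the surviving progeny of a golden x golden cross show a 2:1 golden-to-white ratio instead of the familiar 3:1, which is the usual signature of a homozygous-lethal allele.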
Church and State Separation: The Challenge and Debate

Grade level: 10-12
Subjects: U.S. Government, Civics

The United States Constitution's First Amendment prohibits the government from favoring a specific religion or passing legislation to establish an official, national religion. This clause is known as the separation of church and state. Because of the clause's vague language, there is an interpretive element that has resulted in myriad legal battles. Some of the most recent center on issues such as abortion, school prayer, reciting the Pledge of Allegiance, same-sex marriage, and the right to die. These challenge the Supreme Court to make sometimes controversial decisions as it deciphers the clause in order to protect individuals' freedom of religion rights. These issues are likely to arise during the 2004 presidential election, as well.

Objectives:
- Describe the basic elements of the U.S. Constitution's First Amendment as it relates to the separation of church and state and freedom of religion;
- Speculate on the probable constitutional issues and debates associated with the separation of church and state clause;
- Test their knowledge of religious freedom regarding separation of church and state;
- Review and debate arguments regarding the legal conflicts centered on current church and state separation.

Materials:
Computers with Internet access
Chalkboard and chalk
Chart paper and markers
Video of Flashpoints USA God and Country (recommended but not required)
NOW with Bill Moyers: Freedom of Religion Quiz http://www.pbs.org/now/quiz/quiz2.html

Copy of the First Amendment (United States Constitution, Bill of Rights):
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.

Copy of First Amendment: An Overview:
The First Amendment of the United States Constitution protects the right to freedom of religion and freedom of expression from government interference. Freedom of expression consists of the rights to freedom of speech, press, assembly and to petition the government for a redress of grievances, and the implied rights of association and belief. The Supreme Court interprets the extent of the protection afforded to these rights. The First Amendment has been interpreted by the Court as applying to the entire federal government even though it is only expressly applicable to Congress. Furthermore, the Court has interpreted the due process clause of the Fourteenth Amendment as protecting the rights in the First Amendment from interference by state governments. Two clauses in the First Amendment guarantee freedom of religion. The establishment clause prohibits the government from passing legislation to establish an official religion or preferring one religion over another. It enforces the "separation of church and state." Some governmental activity related to religion has been declared constitutional by the Supreme Court. For example, providing bus transportation for parochial school students and the enforcement of "blue laws" is not prohibited. The free exercise clause prohibits the government, in most instances, from interfering with a person's practice of their religion. (Excerpted from: Cornell Law School: Legal Information Institute)

Recent Supreme Court cases/issues centered on separation of church and state (refer to Web resources).
4-5 classroom periods (with students conducting some research beyond the classroom)

Teaching Strategies and Activities

1) Write SEPARATION OF CHURCH AND STATE on the chalkboard. Ask students to consider what this term means; if they are already familiar with it, ask them to provide its origins and explain its purpose. Give them five to ten minutes to jot down their thoughts. Invite students to share their notes.

2) Explain to students that this clause has its basis in the United States Constitution's First Amendment. Distribute the handout First Amendment: An Overview to the students. Have them read it silently and then, based on the clause's description, ask students to speculate on what issues and challenges the clause poses, particularly from a legal perspective.

3) Have students take the "Freedom of Religion Quiz" to test their basic understanding of separation of church and state (and/or "What Do You Know About the Separation of State and Church?" http://www.ffrf.org/quiz.html. Conduct either as a class or individual student activity.)

4) Once students have checked their responses against the correct answers, have them consider the difficulties courts have when making decisions regarding separation of church and state conflicts, particularly when debating the essence of freedom of religion. What might be the major pro and con arguments? How does the government decide what impedes religious freedom and when religion should be restricted? How can a state Supreme Court rule on an issue, and where does the U.S. Supreme Court figure in when religious freedom/church and state laws are put in place?

5) Invite students to further examine the challenges associated with the clause by looking at Zelman v. Simmons-Harris, in which the Supreme Court ruled that tuition vouchers are constitutional, a landmark church-state decision. Research sites include:

6) Divide students into small groups. Assign each pair of groups a different current issue regarding church and state separation, particularly the issues discussed in Flashpoints USA. Some topics to consider are (students can research others):
- Reciting the Pledge of Allegiance in school (because of the reference to God)
- Displaying the Ten Commandments in public places
- Same-sex marriage
- Abortion/right to life

Ask students to review the issues/cases associated with the topics in response to the following questions (these may be distributed as a handout).
- What is the issue?
- What is its relationship to the church and state separation clause and freedom of religion?
- What are the principal arguments?
- What is the stance of those who support the issue?
- What is the stance of those against the issue?
- What does the Supreme Court need to consider in order to make a "constitutional" ruling? If it has already made a decision, how did it rule and what was its rationale?
- What, if any, is the role of state Supreme Courts in the issue? Has there been a law passed that could be overruled by the U.S. Supreme Court?

7) After students have completed their research, instruct the two groups to take on pro and con perspectives regarding their assigned issue, and to debate the issue in the mock role of 2004 presidential candidates.

Web Resources
History of Religious Liberty in America
What do the words of the First Amendment mean?
First Amendment Center
NOW with Bill Moyers: God and Government
Religious Freedom Amendment
Church and State Separation Issues
Hot Religious Topics
NOW with Bill Moyers: Faith Based Initiatives
Not first church and state dispute
Religion in The Public Schools: A Joint Statement of Current Law
Supreme Court Cases
Recently in the Courts
Religious Freedom/Separation of Church and State
Quiz: What Do You Know About the Separation of State and Church?
NOW with Bill Moyers: Freedom of Religion Quiz

Standards
Mid-continent Research for Education and Learning (McREL)

Civics
2: Understands the essential characteristics of limited and unlimited governments
4: Understands the concept of a constitution, the various purposes that constitutions serve, and the conditions that contribute to the establishment and maintenance of constitutional government
8: Understands the central ideas of American constitutional government and how this form of government has shaped the character of American society
9: Understands the importance of Americans sharing and supporting certain values, beliefs, and principles of American constitutional democracy
13: Understands the character of American political and social conflict and factors that tend to prevent or lower its intensity
15: Understands how the United States Constitution grants and distributes power and responsibilities to national and state government and how it seeks to prevent the abuse of power
17: Understands issues concerning the relationship between state and local governments and the national government and issues pertaining to representation at all three levels of government

United States History
8: Understands the institutions and practices of government created during the Revolution and how these elements were revised between 1787 and 1815 to create the foundation of the American political system based on the U.S. Constitution and the Bill of Rights.
31: Understands economic, social, and cultural developments in the contemporary United States

About the Author
From classroom instructor to an executive director, Michele Israel has been an educator for nearly 20 years. She has developed and managed innovative educational initiatives, taught in nontraditional settings in the U.S. and overseas, developed curricula and educational materials, and designed and facilitated professional development for classroom and community educators. Currently operating Educational Consulting Group, Israel is involved with diverse projects, including strategic planning and product development.
Discipline & Guidance

Developing Empathy: Raising Children Who Care

What is Empathy?

Empathy is the ability to understand the feelings of others, feel what they feel, and respond in helpful, compassionate ways. Children who are able to identify with and comfort others make friends more easily, generally perform better academically, and demonstrate a higher level of moral and emotional development.

Are children born empathetic?

Unlike appearance or intelligence, which depend largely on genetics, empathy is a skill that children learn. We are born with the capacity for empathetic behavior, but whether or not we mature into caring, understanding adults is principally determined by what we are taught.

How do we teach empathy?

- Infants (birth to 1 yr.): Babies learn about empathy by the way parents treat them when they are cranky, fussy, or frightened. The foundation for empathetic behavior begins with the trust and attachment that is established when a parent consistently, promptly, and lovingly responds to their baby's needs.
- Toddlers (1-2 yrs.): Toddlers have strong feelings, but they are not always capable of identifying or managing those feelings. Parents can help children name what they feel and show them how their actions are tied to those strong emotions. In this way, parents can lay the groundwork necessary for the child to connect his feelings and actions with those of others as he matures.
- Pre-schoolers (3-5 yrs.): During this developmental stage, learning how to share is a great tool for nurturing empathy. Help small children learn to share by "taking turns". Use a kitchen timer if necessary to help children remember their friend deserves a chance to enjoy a toy. Before friends come over, ask your child to pick out toys he thinks his friend would like to play with. Allow him to put away a few of his very favorite toys that might cause problems. Don't forget to express your appreciation when your child behaves in a caring manner. The use of puppets can be a wonderful way to help your child learn to think of others' needs and feelings. By the time your child reaches 4 years old, his cognitive (thinking) development has progressed enough that he is able to associate his emotions with the feelings of others. Before this point, he assumed everyone else saw and felt the very same things he did. Help him continue to progress by pointing out facial expressions you observe while shopping, etc., and asking him what he thinks those people are feeling. Explain in plain, simple terms the effects his behavior has on others. Point out the impact of his actions and ask him to think about how he would feel if the roles were reversed.
- Ages 5 and up: Continue to talk with your child regularly about his feelings and those he recognizes in others. Help him to see how people are very similar in regard to emotions, no matter their age, ethnicity, or gender. He can also learn about empathy by talking about hypothetical problems: "Tell me how you would feel if your friend called you a name. How would he feel if you did the same?" As he gets older, you can teach him that although two people may experience very similar situations, they may not both react or feel as strongly as the other.
- Model empathy: Above all, remember that parents are their children's first and most influential teachers. If we expect our children to grow into caring, empathetic adults, we must model these behaviors.
Let your children see your kind and thoughtful actions, hear you express your concern for the feelings of others, and demonstrate empathetic parenting. Listen carefully to your children and ask questions that help them clarify their thoughts and feelings. As their empathy grows because of your modeling, they'll be more able to relate deeply to others. They will also grow in their ability to practice good listening skills, help others, and show generosity.
It is always good at the beginning of a process to start by reflecting on your own experience. Perhaps you already have experience of designing and delivering training courses? Try to answer the following questions:
- What does curriculum development mean to you?
- What experiences have you had with curriculum development?
- What have you, personally, learned from these experiences?
- What have others who were involved learned from these experiences?

Perhaps, when you see the word 'curriculum', you think of:
- a formal setting, a product like a book, or a document?
- some inputs, like a small group of people sitting in an office making a document that will be sent out to many teachers or trainers all over the country?
- the resources that are needed for curriculum development to take place?
- all of these, and more?

It is difficult to give a definition of curriculum development, because it will always be affected very strongly by the context in which it takes place. Looking back in history, the word curriculum originally came from a Latin word meaning a racetrack that horses ran around. Today we might call it a racecourse, and so we see that the words curriculum and course are closely related. There is a suggestion that something continuous is happening, maybe over a long time, although the idea is equally valid for short courses. We can think of curriculum development as a continuous process, which is relevant to the situation where it takes place, and flexible, so you can adapt it over time. As in a race, there may be a finishing point, but if you work in curriculum development, you will probably find that the work does not end at a particular moment. This is what makes it very interesting and exciting!

The following description of curriculum development, rather than a definition, provides a basis for the approach taken in this Toolkit: Curriculum development describes all the ways in which a training or teaching organisation plans and guides learning. This learning can take place in groups or with individual learners. It can take place inside or outside a classroom. It can take place in an institutional setting like a school, college or training centre, or in a village or a field. It is central to the teaching and learning process (Rogers and Taylor 1998).

From this description, you will see that curriculum development can take place in many settings, and may involve many people. Typically, curriculum development involves four main elements:
1. Identify what learning is needed and decide on the type of training you need to provide to meet these learning needs.
2. Plan the training carefully, so that learning is most likely to take place.
3. Deliver the training so that learning does take place.
4. Evaluate the training so that there is evidence that learning has taken place.

These elements can be addressed in different ways. It is important that the approach you use leads to effective training and teaching. This Toolkit strongly recommends that you follow a participatory approach to curriculum development, since this will bring about the best results and lead to real learning. Why this recommendation? The fact is that a lot of training and teaching is not effective. Many traditional approaches to curriculum development, and the resulting curricula, do not provide the guidance to learning that is needed by both trainers and participants.
In addition, curriculum development rarely involves the different groups or individuals who will gain from, or have something to offer to, the training. This Toolkit therefore treats the curriculum as a guide for learning which integrates the philosophy and orientation of a training programme, expected learning outcomes, key content, methodology and evaluation for the teaching and learning process.
While your child pushes her peas around her plate, you could be boosting her academic skills. Numerous studies show that children who regularly eat meals with their families have a larger vocabulary and score higher on academic achievement tests. Dinnertime is not just about sharing good-for-you food, however — it’s about what happens at the table. For years psychologists, teachers, counselors, and dieticians have touted the benefits of family meals. When families take the time to eat together, they generally consume healthier foods and engage in conversations that strengthen their bonds. Kids pick up social skills, while parents and siblings learn what’s going on in each other’s lives. And the benefits of family dinners extend to academic skills too. What the studies say Researchers at Harvard University and Washington University, as part of the Home-School Study of Language and Literacy Development, gathered and analyzed data over a number of years to see what effects eating together as a family has on children’s communication and academic skills. Diane Beals and Patton Tabors found that 3- and 4-year-olds whose family members exposed them to “rare” words — such as boxer, wriggling, and tackle — scored higher on the Peabody Picture Vocabulary Test at age 5 than those who weren’t exposed to such words. And children who were exposed to rich vocabulary at mealtimes at ages 3, 4, and 5 were more likely to have better verbal skills up through sixth grade. In 2000, researchers at the University of Illinois found that children ages 7 to 11 who did well on school achievement tests were the ones who ate meals and snacks with their families. In a 1994 Louis Harris and Associates survey of 2,000 high school seniors, those who ate dinner with their families four or more times a week scored better on tests than those who had family dinners three or fewer times a week. In a Harvard study that followed 65 children over eight years, researchers looked at a host of activities — play, story time, family events, and family dinners — to see which fostered healthy child development the most. Family dinners came out ahead. Why do dinners make a difference? Family meals provide that rare opportunity to have longer conversations. And longer conversations, researchers say, allow children to hear words they may not be familiar with and to enhance their linguistic development. Children are more likely to learn new vocabulary by figuring out how someone is using words in context rather than receiving direct instruction or dictionary definitions. In her research, Beals looked at the effect of families having conversations with what she calls “rich content”: “It could be discussing a trip to the zoo or seeing an orchestra perform ‘Peter and the Wolf,’ bringing in words like trombone and violin, giving children the opportunity to make connections between words and real-life events,” she says. Children also need context to stretch their vocabulary. For example, if a mother tells her young child not to sing at the table because it’s rude, the child begins to understand what rude means. With older kids, discussing current events or even homework assignments can expose them to new phrases. Extended conversations aren’t just for the dinner table. Driving in the car, playing at the park, and getting ready for bed are also prime opportunities to connect with your child. The key is to be willing to engage in these conversations whenever and wherever they may occur. 
Children thrive on rituals, which makes family dinners especially appealing to them. When families eat together, parents can serve as role models by demonstrating social skills, eliminating distractions (like the computer, video games, and TV), and promoting healthy eating habits (your child is more likely to eat veggies if you eat them too) and good table manners. Make it happen With soccer practice, homework, and parents working long hours, it can be difficult to schedule regular family meals. Start with small steps — designate one night a week as family dinner night. The meal doesn’t have to be fancy, and it can be at home or a restaurant. If you’re eating at home, make it a group effort by having everyone help with the preparation and cleanup. If you keep it simple, it will be easier to focus on spending time together. How to get the conversation flowing If the typical answer to “What did you learn in school today?” is “Nothing” in your household, you may be wondering how to spark discussions at the dinner table. Here are a few suggestions: - Make a game of it. Conversation starters are a breeze with The Family Dinner Box of Questions, in which players answer questions ranging from “If you could have a wild animal from anywhere in the world as a pet, what animal would you choose?” to “What special talent do you wish you possessed?” Says company cofounder CeCe Feiler, “Even teens who don’t normally want to talk will engage in conversation because it’s a nonthreatening game.” - Plan an activity. Ask family members for vacation or field trip ideas, then spend the dinner hour discussing the logistics, costs, and pros and cons of the activities suggested. Get everyone to agree on an outing and mark it on your family calendar. - Spotlight a family member with a special plate. Create your own or buy a pre-labeled one at the Red Plate Store Online. Each family member gets to have the special plate on a designated night. Focus the conversation on that person’s best qualities — or let him or her pick the menu that night.
A dam is built to hold back the flow of water into a reservoir, lake, or recreation area. Dams are often also used to supply water to hydroelectric plants. Most dams have a section called a floodgate (spillway) that releases unwanted water gradually or continuously. A check dam is a structure built to slow the rate of water flow and to maintain or raise the water level. Dams are usually used for agricultural irrigation, power generation, recreation, and fisheries. Many dams were built in Java because the area is mountainous, floods are frequent, agricultural areas are extensive, and for other reasons. A reservoir is a large pool used to collect water for various purposes; it may be natural or artificial. In West Java there are quite a lot of small dams, and the people of West Java call a reservoir of relatively small size a situ. A dam and a weir are actually different structures. A weir is a low-head dam structure that serves to raise the water level, and is usually found in a river. The raised river water overflows through the peak (crest) of the weir. A weir can be used to measure the speed of water flow in a canal or river, and it served as a driving force for traditional mills in European countries. In countries with sizeable, fast-flowing rivers, a series of dams can be operated to form a water transport system. In Indonesia, many dams are used for irrigation by raising the water level when the river water level is lower than the surface of the land to be watered. Figure 27: A large dam on a river.
Paleontologists have discovered the oldest known plant root stem cells. They were found in a fossil about 320 million years old, in the root tip of an ancient plant. These cells gave rise to the ancient plant's roots. A new trap-jawed ant species was discovered in a piece of Burmese amber about 99 million years old. The new species is called Ceratomyrmex ellenbergeri. It belongs to the earliest ant lineage, the haidomyrmecine tribe. Ceratomyrmex ellenbergeri lived 99 million years ago, during the Cretaceous period. The discovery of over 6,000 Antarctic marine fossils indicates that the mass extinction event that killed the dinosaurs was sudden and equally hazardous to life at the poles. Creatures at the southern pole were previously thought to have been in a less hazardous position during the mass extinction event. The mitochondrial genome of a fossil about 35,000 years old was retrieved in Pestera Muierii in Romania. The fossil is part of the first population of the human species that inhabited Europe. The lineage she (the fossil) belongs to supports the theory of a back migration to Africa during the Paleolithic period. Fossils of snail-eating marsupials have been discovered by paleontologists in Australia. The marsupials are now known to have lived in Australia between 10 and 15 million years ago. The new family has been named Malleodectidae; these animals used their massive premolar teeth to break into snails, their favorite meal. Evidence has also been found of predation in an ancient microbial ecosystem. An electron microscope was used to view minute fossils dating back up to 740 million years. Circular holes were found that are thought to have been formed by an ancient amoeba of the family Vampyrellidae, which reached inside to consume the cell contents of the microbes.
Biomass, a renewable energy source, is biological material derived from living, or recently living organisms such as wood, waste, and alcohol fuels. Biomass is commonly plant matter grown to generate electricity or produce heat. For example, forest residues (such as dead trees, branches and tree stumps), yard clippings and wood chips may be used as biomass. However, biomass also includes plant or animal matter used for production of fibers or chemicals. Biomass may also include biodegradable wastes that can be burnt as fuel. It excludes organic material such as fossil fuel which has been transformed by geological processes into substances such as coal or petroleum. Industrial biomass can be grown from numerous types of plant, including miscanthus, switchgrass, hemp, corn, poplar, willow, sorghum, sugarcane, and a variety of tree species, ranging from eucalyptus to oil palm (palm oil). The particular plant used is usually not important to the end products, but it does affect the processing of the raw material. Although fossil fuels have their origin in ancient biomass, they are not considered biomass by the generally accepted definition because they contain carbon that has been "out" of the carbon cycle for a very long time. Their combustion therefore disturbs the carbon dioxide content in the atmosphere.
Water: Articles and Activities for Middle School Students
Activities: "Streams in the City"
You will need Adobe Reader to view some of the files on this page. See EPA's PDF page to learn more.
For Grades 6 - 8
Streams in the City (Full Article) (PDF) (2 pp, 1.1MB)
Streams in the City (Activity Sheets) (PDF) (15 pp, 3MB)
These exercises are designed to guide a student to an understanding of how rainfall and storm events result in runoff over the surface of the earth. Runoff is influenced by the nature of the earth's surface; streamflow is particularly influenced by urbanization, the paving over of permeable surfaces with impermeable ones. In light of this, students are encouraged to think about design elements that incorporate more permeable surfaces into their own environments, including their school parking lots and neighborhoods. Individual exercises are designed to take approximately 45 minutes to an hour. The exercises are also ordered progressively: each builds on concepts introduced in the previous one.
Curricular Standards and Skills
- water cycle
- cause and effect
- volume calculations
- differences in urban and rural areas
- challenges of urbanization
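To make the runoff idea concrete, here is a minimal illustrative sketch in Python (not part of the EPA materials). It estimates storm runoff volume as runoff coefficient x rainfall depth x area; the coefficients are typical textbook values assumed purely for illustration, showing how paving increases runoff.

# Illustrative runoff-volume estimate for a one-hectare site.
# Runoff volume ~ C * rainfall depth * area, where C is a runoff
# coefficient (assumed textbook values, not from the EPA exercises).
rainfall_m = 0.025   # a 25 mm (about 1 inch) storm
area_m2 = 10_000     # one hectare

surfaces = [
    ("forest/meadow", 0.15),
    ("suburban lawn", 0.35),
    ("paved parking lot", 0.90),
]
for surface, coeff in surfaces:
    runoff_m3 = coeff * rainfall_m * area_m2
    print(f"{surface}: about {runoff_m3:.0f} cubic meters of runoff")

Running the sketch shows the paved lot shedding roughly six times the runoff of the forested site from the same storm, which is exactly the effect of urbanization the exercises explore.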
Walk-Through Trap to Control Horn Flies on Cattle
Robert D. Hall
Department of Entomology
The horn fly, Haematobia irritans (Linnaeus), was introduced into the United States more than a century ago. Since then, it has become one of the most important fly pests of pasture and range cattle. Although most cattle can tolerate up to 200 horn flies without showing economic losses, larger numbers of these flies decrease cattle weight gain and milk output. Male and female horn flies feed on blood by using rigid mouthparts underneath their heads. This feeding activity is painful and annoying to cattle. Horn flies congregate on the backs of cattle, often clustering on the midline and spreading down the sides. Sometimes, horn flies settle around the bases of the horns, and when the weather is hot, they may move onto the belly. After the flies feed and mate, the female is ready to deposit eggs. She moves to the rear of the cattle host, flies to the ground as the animal defecates, and becomes covered by dung. This activity occurs frequently during early morning. Horn fly eggs are small, reddish-brown, and generally laid in clumps on grass and other vegetation covered by the cow pat. After a time, which varies depending on temperature, the eggs hatch, and maggots develop in the dung. When mature, the maggots pupate in or below the pat, and later, emerging adults disperse to seek cattle hosts. As summer progresses, more horn fly eggs develop but mature only to the pupal stage. Rather than emerging as adults, they overwinter (diapause). Diapausing pupae produce adult horn flies the following spring. Horn flies have been controlled with insecticides since the early 1950s. Sprays, rub-ons, and pour-ons were effective, but many insecticides, such as DDT and toxaphene, are no longer permitted. Now, self-application devices, such as cable backrubbers and dust bags, are popular for horn fly control because they require little human labor. When these systems are charged with effective insecticide and properly situated, the cattle treat themselves. Although many systemic insecticides show an effect against horn flies, most (such as ivermectin) are too costly for the frequent reapplications required. Insecticide-impregnated ear tags are popular for controlling horn flies on cattle. These tags generally employ pyrethroid insecticides and, until recently, gave almost complete control of horn flies for a summer season. Resistance to the pyrethroids developed in southern horn flies, however, and spread northward. By 1986 resistance was reported as far north as the Canadian border. Cattle producers dissatisfied with the performance of insecticidal ear tags requested information about alternative control methods. In the 1930s, U.S. Department of Agriculture entomologist Willis Bruce became interested in designing a walk-through trap for horn flies. He experimented with several designs, perfected one, and deposited schematic drawings and an explanation of his trap with the National Agricultural Library in Maryland. Soon after Bruce completed his experiments, however, the United States became involved in World War II, and few people were thinking about horn fly control. After the war's end, DDT and other pesticides captured the attention of the public and the entomological and agricultural communities. Bruce spent the rest of his career working with pesticides and studying their use for horn fly control.
Researchers paid little attention to walk-through fly traps, however, until Clyde Morgan and Gus Thomas of the USDA noticed Bruce's paper during the 1970s while they compiled a complete bibliography on the horn fly. By 1986 the problem of horn fly resistance to pyrethroids was widespread enough in Missouri to warrant re-evaluation of Bruce's trap design. Bruce's trap is a walk-through system. Cattle enter through either end, pass through the 10-foot trap, and contact a series of strips made of canvas or old carpet. These strips dislodge most of the horn flies on the animals' backs and sides. Because flies are attracted to light, they travel toward the screened sides of the trap. The animal exits the trap with fewer flies on it, and the trapped flies cannot escape. Repeated use of the trap can reduce the overall local population of horn flies. Three trapping elements along each side of the trap catch flies by the "inverted cone" principle, similar to a minnow trap, upon which many insect traps depend (Figure 1). The trap forces insects to crawl from a large opening through a small one. A zigzag screen in the trapping elements functions as the cone (Figure 2). The triangular shape formed by the bending of the screen makes alternate large ends that face the interior of the trap and small ends that face the outside of the trap. The trapping elements, as viewed from the side, are held in the framing with screw hooks and eyes and can be removed for maintenance. An interior view of the trap shows the screen placement in a trapping element. The screen is stapled to 3/4-inch by 3/4-inch wood strips that are nailed to the sides of each element. Holes punched through the small end of the triangle permit the flies to crawl toward the outside of the trap (Figure 3). There, the screening on the exterior of the trapping element interrupts the flies' progress. Flies are trapped between the exterior screen on one side and the zigzag screen on the other. Because the cone now faces the wrong way for the insects — that is, the small end faces them — it is unlikely that many will find the holes and escape. In this close-up view, notice the holes (1/4 inch by 1/2 inch) punched through the apex of the screen inside the trapping elements. The flies crawl through these holes and cannot find their way back out. When building a trap, remember that lumber that is pressure-treated to resist rot will outlast untreated wood and does not require painting. Use standard-sized lumber, screening, and other items where possible (Figure 6 shows a cutaway view of the trap). You can get most of the materials easily from lumberyards, farm supply stores, or hardware stores, and a PDF of blueprints for the trap is available. Making the frame of the trap from welded square steel produces the strongest trap. The Bruce trap is best employed in a forced-use situation where cattle must pass through it on a regular basis (Figure 4). In this respect, placement is like that for dust bags and backrubbers. The best site for the trap is between the pasture and the source of water. In this completed horn fly trap, heavy wire panels funnel cattle through the trap. The trap itself is placed in a fence line with a watering pond on one side. The canvas curtains inside the trap have been removed in this photograph. You can often use a fence line to route cattle through the trap, and electric fencing is easy to move. As shown in Figure 4, you can use wire cattle panels to funnel cattle into the trap.
If you cannot easily fence off the water source, you can encourage use of the trap with mineral blocks or a mineral station. Molasses blocks, or the addition of loose molasses to mineral, also increase use. Putting mineral in an enclosure that can only be entered through the trap is an effective technique. If cattle are reluctant to enter the trap, it may be necessary to remove most or all of the canvas strips, and then to replace them one at a time after the cattle overcome their shyness. Corn or other feed can be used as bait to draw cattle into the trap. After cattle are accustomed to the trap, they often pass through it several times a day and seem to realize its benefit. Although the trap design incorporates clean-out doors at the bottom of each trapping element, scavenger insects soon inhabit the trapping elements and eat the dead flies (Figure 5). It may be necessary to wash the element screens clear with water if they become clogged with mud and debris. The only other maintenance required is periodic inspection and repair of the trap as needed. The elements can be cleaned while on the trap or can be removed for convenience. Scavenger insects make frequent cleaning of the elements unnecessary. Be careful of wasps that may construct nests between the elements. A cutaway view of the horn fly trap shows how the trapping elements are placed along its side. The frame is made from standard-sized CCA-treated lumber, and the end framing is extended laterally to provide stability. The top is made from plywood. The canvas strips have been removed for clarity. Field studies conducted in central Missouri during 1986 indicated the trap produced roughly 50 percent control of horn flies when averaged over the season. This level of control was less than that afforded by insecticidal ear tags and some other treatments, but it kept horn flies below the injury level of about 200 flies per animal. Similar results were obtained by David L. Lindell, regional extension specialist, during field trials in 1994-95; trials with two horn fly traps during the latter season averaged 40 percent control. Reports of these tests can be obtained from him at Courthouse, Room 16, 100 W. Franklin, Clinton, Mo. 64735.
This tutorial aims to explain the concepts of Scratch and to work through some simple projects using the Raspberry Pi.

What is Scratch? Scratch is a programming language and visual programming tool that lets you create your own interactive animations, stories, games, and art with a drag-and-drop user interface. It is designed for beginners in programming: it relies on programming techniques and logic without actually having to write code. It's a great tool to get started in programming with the Raspberry Pi.

The Scratch interface divides the screen into several parts: the project development area, with the block palette and scripts area, and the stage, where the project runs. The stage is shown at the top of the image on the right. By default, the Scratch Cat will be on the stage. The Scratch Cat is simply one of many sprites, and a sprite can be programmed to perform anything you wish! Scratch allows users to be creative in making their projects.

Before going further you have to install Scratch on your Raspberry Pi, if it's not already installed. To do that, open a terminal (for example via the PuTTY software) and type:
$ sudo apt-get update
$ sudo apt-get install nuscratch
Then upgrade all your installed packages to their latest versions with the command:
$ sudo apt-get dist-upgrade

First of all, open Scratch from the Raspberry Pi graphical user interface (GUI). To do so, go to menu > programming > scratch. Once opened, you will see the Scratch window.
Note: if you don't have a screen, you can open the Raspberry Pi GUI from your laptop; click here and follow the steps.

Now you will learn how to program using Scratch, starting with a simple project: making the cat move.
Step 1: Select the blue block called "move 10 steps" and drag it to the right.
Note: Make sure the block is placed in the darker-grey area, called the scripts area.
Step 2: Click on "control", drag a "when green flag clicked" block, and connect it to "move 10 steps".
Step 3: Now click the green flag icon at the top right-hand side of the screen, or click on the "when green flag clicked" block, and notice what happens to the cat. It moves 10 steps!
That's it! Now check out the other block categories and test what each one does.

Now, what about going further and doing a more complex project? It's still easy but needs more steps than the first project. Let's make a flashing LED using the Raspberry Pi.

What will you learn from this project? By creating a flashing LED with your Raspberry Pi you will learn:
- How to use Scratch to control GPIO pins
- How to program an LED to turn on and off
- How to add sounds in a Scratch program
- How to use basic programming constructs to create simple programs
- How to use basic materials and tools to create project prototypes

You will need some hardware:
- Raspberry Pi
- Light-Emitting Diode (LED)
- 220-ohm resistor
- Jumper leads

Now you can write a program to make the LED flash. Follow these steps:
Step 1: Connect the circuit shown below.
Step 2: Connect the power cable to the Raspberry Pi and wait for it to boot.
Step 3: Open Scratch on your Raspberry Pi.
Step 4: Click on Edit and Start GPIO server.
Step 5: Delete the Scratch cat by right-clicking on it and clicking delete.
Step 6: Then click on the button for a new sprite and choose robot3 from the fantasy folder.
Step 7: Click on control. Drag the "when green flag clicked" block onto the scripts area, then connect a broadcast block to it.
Click on the drop-down menu on the broadcast block and select new.
Note: In the message name box type config17output. This instruction tells the Raspberry Pi that pin 17 will be an output. This is because you are telling the pin to turn an LED, which is an output component, on and off.
Step 8: Drag the "when space key pressed" block onto the scripts area. Then click on Sound, drag the "play sound" block onto the scripts area, and connect it to the control block.
Step 9: Click on control in the block palette and drag a broadcast block to your scripts area, attaching it to the "play sound" block. Click on the drop-down menu on the broadcast block, select new, and type gpio17on. This instruction tells the Raspberry Pi to light the LED.
Step 10: Drag a "wait 1 secs" block and connect it to the broadcast block.
Step 11: Drag another broadcast block, connect it to the "wait 1 secs" block, and type gpio17off in it.
Step 12: Add another "wait 1 secs" block.
Step 13: Save your work, then click on the "when space key pressed" block (or press the space key); you'll see the LED turn on for 1 second and then turn off.
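If you later want to move beyond Scratch, the same blink can be written in Python. The sketch below is a minimal illustrative example, assuming the RPi.GPIO library that ships with Raspbian; BCM pin 17 matches the config17output broadcast used above, and gpio17on/gpio17off correspond to the HIGH/LOW writes.

# Minimal Python equivalent of the Scratch LED program (a sketch,
# assuming the RPi.GPIO library available on Raspbian).
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)       # use Broadcom pin numbering
GPIO.setup(17, GPIO.OUT)     # pin 17 drives the LED (like config17output)

try:
    GPIO.output(17, GPIO.HIGH)   # like broadcasting gpio17on
    time.sleep(1)                # like the "wait 1 secs" block
    GPIO.output(17, GPIO.LOW)    # like broadcasting gpio17off
    time.sleep(1)
finally:
    GPIO.cleanup()               # release the pin

Note that on older Raspbian images, GPIO access may require running the script with sudo.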
By Roberta Attanasio
Greenhouse gases (carbon dioxide, methane, nitrous oxide, fluorinated gases, and ozone) work like the glass walls of a greenhouse and are responsible for the greenhouse effect. What is the greenhouse effect? It's a process in which greenhouse gases let radiation from the sun reach the Earth's surface while trapping the heat that reflects back up into the atmosphere. The greenhouse effect keeps our planet at an average 59 degrees Fahrenheit (15 degrees Celsius). However, if the greenhouse effect is too strong, our planet gets warmer and warmer. This is what is happening now: the greenhouse effect is becoming stronger because of the increased release of greenhouse gases into the atmosphere. The result of a stronger greenhouse effect is climate change. Carbon dioxide (CO2) is the primary greenhouse gas emitted through human activities. CO2 enters the atmosphere through the burning of fossil fuels (coal, natural gas, and oil), solid waste, and trees and wood products, and also as a result of certain chemical reactions (e.g., the manufacture of cement). Carbon dioxide is removed from the atmosphere when it is absorbed by plants as part of the biological carbon cycle. Using energy from the sun, plants transform carbon dioxide and water into glucose and oxygen through the process of photosynthesis. The CO2 is said to be removed, captured, or sequestered (in this context, these three words have the same meaning). To mitigate climate change, a group of German scientists has now come up with an environmentally friendly method to capture CO2 — in other words, a method to remove CO2 from the atmosphere. The method, dubbed carbon farming, consists of planting trees in arid regions to capture CO2. The team of investigators, in a paper published in the scientific journal Earth System Dynamics on July 31, 2013, shows that Jatropha curcas does a great job of sequestering CO2 from the atmosphere. Jatropha curcas is a small tree that is very resistant to aridity. Therefore, it can be planted in hot, dry land with soil unsuitable for food production. Because the plant needs water to grow, coastal areas where desalinated seawater can be made available are ideal. The new Earth System Dynamics study shows that one hectare of Jatropha curcas could capture up to 25 tonnes of atmospheric carbon dioxide per year, over a 20-year period. A plantation taking up only about 3% of the Arabian Desert, for example, could absorb in a couple of decades all the CO2 produced by motor vehicles in Germany over the same period. With about one billion hectares suitable for carbon farming, the method could sequester a significant portion of the CO2 added to the atmosphere since the industrial revolution. The main limitations to implementing this method are a lack of funding and limited knowledge of the effects large-scale plantations could have on the regional climate, which can include increased cloud coverage and rainfall. The Earth System Dynamics paper presents results of simulations looking into these aspects, but there is still a lack of experimental data on the effects of greening arid regions. In addition, potential detrimental effects need to be evaluated carefully — an example is the accumulation of salt in desert soils. The team hopes the results from their study will get enough people informed about carbon farming to establish an experimental pilot project.
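As a rough plausibility check of the plantation-scale claim, the arithmetic can be sketched in a few lines of Python. The 25 tonnes per hectare per year figure comes from the article; the desert area and the German vehicle-emissions figure are assumed round numbers used purely for illustration, not values from the study.

# Back-of-the-envelope check of the plantation-scale claim.
capture_per_ha = 25                  # tonnes CO2/ha/yr (figure from the study)
arabian_desert_ha = 233_000_000      # ~2.33 million km2, an assumed round value
plantation_ha = 0.03 * arabian_desert_ha   # "about 3% of the Arabian Desert"

annual_capture = plantation_ha * capture_per_ha        # tonnes CO2 per year
print(f"{annual_capture / 1e6:.0f} million tonnes CO2 captured per year")
# Prints roughly 175 million tonnes per year, the same order of magnitude
# as the CO2 emitted annually by motor vehicles in Germany (assumed here
# to be on the order of 150 million tonnes).

The sketch shows why the article's "couple of decades" framing is plausible: the assumed plantation's annual capture is comparable to the assumed annual emissions, so cumulative capture tracks cumulative emissions over the same period.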
By definition, carnivorous plants or insectivorous plants are plants that trap and kill other animals in order to derive some or most of their nutrients, using their own enzymes or bacteria to digest the prey. While brambles or blackberry bushes have some of these characteristics, the question is: do brambles have all of the necessary characteristics to be labeled a carnivorous plant?

Brambles or blackberry bushes cannot be considered carnivorous plants because they do not have all of the necessary characteristics of a carnivorous plant; in particular, they lack the ability to exude enzymes or bacteria in order to derive nutrients from their prey. Although brambles do not have all of the necessary characteristics of a carnivorous plant, there is something in their character that makes them a curious case. While they are not exactly carnivorous plants, we can call them protocarnivorous plants. Keep on reading to find out what a protocarnivorous plant is and what makes brambles or blackberry bushes protocarnivorous.

Are brambles and blackberries the same thing? Brambles and blackberries come from the same genus, Rubus: the bramble is Rubus vulgaris, while the blackberry is Rubus fruticosus, meaning that brambles and blackberries are essentially the same plant. If they are one and the same, what is the difference between the two? Blackberries are an edible fruit produced by the genus Rubus, from the family Rosaceae. Brambles are described as rough, tangled, prickly shrubs that grow blackberries, raspberries, or dewberries; basically, bramble describes the plant on which the fruit of the blackberry grows. This means that whether you decide to call them brambles or blackberries, you cannot really go wrong, but if you can, stick to the right terminology. To repeat once more: when referring to the bush that produces the fruit, bramble is the right term. When referring to the aggregate fruit, which is composed of small drupelets, blackberry is the right term. Also keep in mind that, although called berries, in a botanical sense blackberries are not considered berries.

Are brambles (blackberry bushes) carnivorous? A carnivorous plant is any plant that has evolved carnivory and adapted to capturing and digesting small animals or insects in order to derive nutrients from them. The emphasis, however, is on the plant's own ability to exude enzymes or bacteria that decompose the prey for easier digestion. To be classified as truly carnivorous, a plant has to have all of the following characteristics.

1. Be able to attract prey
Being able to attract prey is the first vital characteristic, the one that sets everything else in motion. Although plants could simply wait for prey to come, that leaves them with no guarantee of constant nutrition, so true carnivorous plants have evolved mechanisms to attract prey. For example, pitfall plants attract prey in order to lure them into a pit from which they can no longer escape.
They attract prey with nectar secreted at their openings, or with their bright flowers and flower-like patterning. The openings where the plants attract the prey are typically covered in a wax-like coating that causes insects to slip into the pit. Once prey is trapped within the pit, the plant starts exuding digestive enzymes that help dissolve it into a form the plant can easily absorb. This is only one of the ways carnivorous plants attract and kill prey for digestion.

2. Have a trapping mechanism
Carnivorous plants use five basic trapping mechanisms. Pitfall traps, or pitcher plants, trap prey by luring them into a pit from which they cannot escape; the prey is then digested with the help of the plant's digestive enzymes and bacteria. Flypaper traps catch prey with a sticky mucilage, a glue-like substance exuded through secreting glands, which can be short, long, or mobile; the prey sticks to the surface of the secretion and is then digested with the help of the plant's digestive enzymes and bacteria. Snap traps catch prey with a rapid movement that closes two trap lobes around it, triggered by sensitive hairs on the leaf lobes. Currently there are two species with snap traps: the Venus flytrap and the waterwheel plant. Digestion of the prey occurs over a period of one to two weeks. Lobster-pot traps force prey to move toward the plant's digestive system using inward-pointing hairs. The inward-pointing hairs make the chamber easy to enter but difficult to exit, so the prey is forced to move in a particular direction and is eventually trapped within the digestive system. Bladder traps catch prey by generating a partial vacuum inside the plant's bladder, which sucks in the prey and makes escape impossible; as with other carnivorous plants, the prey is digested with the help of enzymes and bacteria.

3. Be able to kill prey and derive nutrients by itself
The main characteristic of carnivorous plants is their ability to kill their prey and derive nutrients from the carcass through a process of chemical breakdown. The nutrients are then absorbed by the plant to support its survival. Carnivory has evolved in plants nine times, in five different orders of flowering plants. There are close to 600 species classified as having the characteristics of attracting, trapping, and killing prey, all in an effort to absorb any available nutrients from the prey.

As brambles or blackberry bushes do not have the ability to decompose prey on their own, they cannot be considered carnivorous plants. Although they are not fully carnivorous, brambles share a few of the characteristics of carnivorous plants, which makes them protocarnivorous plants. There are over 300 protocarnivorous plant species showing some of the characteristics of carnivorous plants. Protocarnivorous plants, also commonly referred to as paracarnivorous, subcarnivorous, or borderline carnivores, have some of the carnivorous characteristics. Some are able to trap and kill prey but lack the ability to derive and digest nutrients from the prey by themselves, as a true carnivorous plant always can.
To be a fully carnivorous plant, a plant must exhibit all of the characteristics mentioned beforehand, whereas to be a protocarnivorous plant, it need exhibit only one of the characteristics: attraction, trapping, or absorption of the prey. What brambles or blackberry bushes have are thorns that can help capture animals such as sheep. However, those thorns are not considered a trapping mechanism for killing or incapacitating prey in order to utilize its nutrients; rather, they are considered a defense mechanism meant to encourage herbivores to avoid the plant. It gets a little more complicated when you take a look at the thorns of blackberry bushes. Unlike other plants with the same defensive mechanism, blackberry bushes do not have upright or straight thorns but rather thorns that look like backward-facing hooks. Backward-facing thorns draw the prey toward the middle of the bush, where it only gets more tangled up; as it struggles, it gets caught ever more firmly in the thorns. When the prey struggles to the point of absolute exhaustion, it eventually dies, and the bramble or blackberry bush does get its nutrients. Despite this, brambles lack digestive enzymes for breaking down the nutrients and instead rely on symbiotic relationships with bacteria or insects for absorbing the prey. It is a very important point that blackberry bushes are not themselves responsible for the decomposition of the prey. To conclude, let's review the characteristics of a bramble against those of a carnivorous plant.

Characteristics of bramble and a carnivorous plant:
1. Being able to attract prey
Brambles or blackberry bushes produce the fruit of blackberries, which can attract prey, but the question remains whether brambles evolved so that the blackberries were produced for the purpose of attracting prey.
2. Have a trapping mechanism
As we said before, it is debatable whether blackberry bushes evolved their backward-facing hooked thorns in order to trap prey after attracting it, or whether the thorns are just a defense mechanism to ward off herbivores. Also, brambles would have a hard time trapping any animal that does not have thick wool or fur in which to get tangled up, which leads to the conclusion that the thorns are rather a defensive mechanism.
3. Being able to kill prey
As they use help from their surroundings to derive nutrients, brambles are not necessarily responsible for killing the prey, and they are certainly not responsible for decomposing it. Lacking this characteristic of absorbing the prey, brambles cannot be considered carnivorous plants.
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills. This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
Influenza (flu) and COVID-19 are both contagious respiratory illnesses, but they are caused by different viruses. COVID-19 is caused by infection with a new coronavirus (called SARS-CoV-2), and flu is caused by infection with influenza viruses. COVID-19 seems to spread more easily than flu and causes more serious illness in some people. It can also take longer before people show symptoms, and people can be contagious for longer. More information about differences between flu and COVID-19 is available in the sections below. Because some of the symptoms of flu and COVID-19 are similar, it may be hard to tell the difference between them based on symptoms alone, and testing may be needed to help confirm a diagnosis. This page compares COVID-19 and flu, given the best available information to date. Find a vaccine near you: check with your health department. Coronavirus disease (COVID-19) is an infectious disease caused by a newly discovered coronavirus. Most people infected with the COVID-19 virus will experience mild to moderate respiratory illness and recover without requiring special treatment. Older people, and those with underlying medical problems like cardiovascular disease, diabetes, chronic respiratory disease, and cancer, are more likely to develop serious illness. The best way to prevent and slow down transmission is to be well informed about the COVID-19 virus, the disease it causes, and how it spreads. Protect yourself and others from infection by washing your hands or using an alcohol-based rub frequently and not touching your face. Countries, areas or territories with cases. The Johnson & Johnson COVID-19 vaccine now carries a warning label, but the benefits are judged to outweigh the risks in a pandemic setting: the pause is lifted, and the vaccine is once again recommended for adults, according to the CDC's Advisory Committee on Immunization Practices (ACIP). Coronavirus disease 2019 (COVID-19) is defined as illness caused by a novel coronavirus now called severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2; formerly called 2019-nCoV), which was first identified amid an outbreak of respiratory illness cases in Wuhan City, Hubei Province, China. It was initially reported to the WHO on December 31, 2019. On January 30, 2020, the WHO declared the COVID-19 outbreak a global health emergency. [2, 3] On March 11, 2020, the WHO declared COVID-19 a global pandemic, its first such designation since declaring H1N1 influenza a pandemic in 2009.
It is often very difficult to determine the cause of a whale stranding; however, there are several plausible explanations for at least some of the whale strandings that occur:
- Faulty navigation: many of the whales that are found stranded normally live in the open ocean and are not experienced at navigating in the shallower waters of the coastline. Ocean-dwelling whales often use sonar to navigate, but they may receive confusing signals near the coastline, especially if they encounter gently sloping beaches. There is also some evidence that naval sonar may interfere with whale navigation.
- Disease: many stranded whales have been found to be suffering from severe or terminal disease.
- Pursuit of prey: whales that pursue their prey close to the shoreline may become trapped in shallow waters by receding tides.
- Assisting other animals in the group: many whale species live in complex social groups. It is thought that some mass strandings occur when one member of the group has become stranded and the others have responded to its distress signals.
Writing has become a gateway for employment and promotion, especially in salaried positions (National Commission on Writing, 2004). More than 90 percent of mid-career professionals recently cited the "need to write effectively" as a skill "of great importance" in their day-to-day work (National Commission on Writing, 2005). In most jobs, writing emails has surpassed the use of the telephone as the primary means of communication. Employees in business and government alike are expected to produce written documentation: memos, technical reports, visual presentations, and the like. Of the 32% of high school students who are college-bound, 78% will struggle in writing (Cavanagh, 2004b). Writing has occupied too narrow a place in school compared to the enormous role that it plays in children's cultural development (Vygotsky). A cornerstone of the Common Core Standards for Writing is students' ability to write logical arguments based on substantive claims, sound reasoning, and relevant evidence. According to the Standards, each year in their writing, students should demonstrate increasing sophistication in all aspects of language use, from vocabulary and syntax to the development and organization of ideas, and they should address increasingly demanding content and sources. Their topics should be developed with relevant, well-chosen facts, definitions, concrete details, quotations, or other information and examples. The Standards for Writing are meant to help prepare students for real-life writing experiences in college and in 21st-century careers. So how best do we teach students to write? Graham and Perin (2007) conducted the most recent meta-analysis of the research literature on writing intervention. They reviewed 582 research studies, of which 123 were deemed of sufficient quality to be included in the meta-analysis. Below, ranked in order of effectiveness, are the conclusions of their research.

Meta-Analysis Conducted by Graham and Perin (2007) Reveals Effective Writing Instruction (ranked in order of effectiveness, with effect size in parentheses):
- Strategy instruction (.82): The primary goal of strategy instruction is to teach students specific skills, knowledge, or processes that they can use independently once instruction has ended. The goal of instruction is to place the targeted declarative and/or procedural knowledge directly under the writer's control as soon as possible.
- Summarization (.82): This instruction involves explicitly and systematically teaching students how to summarize texts. This can include teaching strategies for summarizing text or instructional activities designed to improve students' text summarization skills.
- Peer assistance (.75): This involves students working together to plan, draft, and/or revise their compositions.
- Setting product goals (.70): This involves assigning students specific goals for the written product they are to complete.
- Word processing (.55): This involves students using word processing computer programs to compose their compositions.
- Sentence combining (.50): This instruction involves teaching students to construct more complex and sophisticated sentences through exercises in which two or more basic sentences are combined into a single sentence.
- Inquiry (.32): This involves engaging students in activities that help them develop ideas and content for a particular writing task by analyzing immediate and concrete data (e.g., comparing and contrasting cases or collecting and evaluating evidence).
- Prewriting activities (.32): This involves students engaging in activities (such as using a semantic web or brainstorming ideas) designed to help them generate or organize ideas for their compositions.
- Process writing approach (.32): This approach to teaching writing involves extended opportunities for writing; writing for real audiences; engaging in cycles of planning, translating, and reviewing; personal responsibility and ownership of writing projects; high levels of student interaction and creation of a supportive writing environment; self-reflection and evaluation; personalized individual assistance and instruction; and, in some instances, more systematic instruction.
- Study of models (.25): This involves students examining examples of one or more specific types of text and attempting to emulate the patterns or forms in these examples in their own writing.
- Grammar instruction (-.32): This instruction involves the explicit and systematic teaching of grammar (e.g., the study of parts of speech and sentences).

Graham and Perin (2007) elaborate on effective strategy instruction, specifically Self-Regulated Strategy Development (SRSD): "In a previous meta-analysis by Graham (2006a), SRSD yielded larger effect sizes than the other methods of strategy instruction combined. The SRSD model includes six stages of instruction: (a) Develop background knowledge (students are taught any background knowledge needed to use the strategy successfully), (b) describe it (the strategy as well as its purpose and benefits are described and discussed; a mnemonic for remembering the steps of the strategy may be introduced too), (c) model it (the teacher models how to use the strategy), (d) memorize it (the student memorizes the steps of the strategy and any accompanying mnemonic), (e) support it (the teacher supports or scaffolds student mastery of the strategy), and (f) independent use (students use the strategy with little or no support). SRSD instruction is also characterized by explicit teaching, individualized instruction, and criterion-based versus time-based learning. Students are treated as active collaborators in the learning process. Furthermore, they are taught a number of self-regulation skills (including goal setting, self-monitoring, self-instructions, and self-reinforcement) designed to help them manage writing strategies, the writing process, and their writing behavior."

Research clearly points to a strategy-based approach to teaching writing. The most effective strategies follow the Self-Regulated Strategy Development model. The ACES writing strategy is one such approach.

- Cavanagh, S. (2004a, Jan. 21). Barriers to college: Lack of preparation vs. financial need. Education Week, 23(19), 1.
- Cavanagh, S. (2004b, Oct. 20). Students ill-prepared for college, ACT warns. Education Week, 24(8), 5.
- Graham, S., & Perin, D. (2007). A meta-analysis of writing instruction for adolescent students. Journal of Educational Psychology, 99(3), 445-476.
- Kameenui, E., & Carnine, D. (1998). Effective teaching strategies that accommodate diverse learners. Upper Saddle River, NJ: Merrill.
- National Commission on Writing. (2004, September). Writing: A ticket to work…or a ticket out.
Retrieved from www.writingcommission.org/prod_downloads/writingcom/writing-ticket-to-work.pdf - Vygotsky, L. (1989). Thought and language. Cambridge, MA: Harvard University Press. © 2013 Rogowsky
A fraction is written in terms of two parts: the number on top is called the numerator and the number on the bottom is called the denominator. We can use the division method to convert the mixed number 2 4/5 to a decimal. First rewrite it as the improper fraction 14/5, then divide the numerator 14 by the denominator 5: 14 (numerator) ÷ 5 (denominator) = 2.8. As a result, you get 2.8 as your answer when you convert 2 4/5 (or 14/5) to a decimal.
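The same conversion can be checked programmatically. Here is a minimal Python sketch (illustrative only, not part of the original lesson) that rebuilds the improper fraction from the mixed number and then divides:

# Convert the mixed number 2 4/5 to a decimal.
from fractions import Fraction

whole, numerator, denominator = 2, 4, 5
improper = Fraction(whole * denominator + numerator, denominator)  # 14/5
print(improper)         # prints 14/5
print(float(improper))  # prints 2.8

Using Fraction keeps the intermediate value exact; the final float() call performs the numerator-by-denominator division described above.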