Each person in the United States generates five or more pounds (about 2.3 kilograms) of waste a day: roughly the weight of a medium bag of sugar. More than half of that garbage is buried and stored in landfills. Increasingly, however, cities are promoting recycling programs, often getting schools involved so students can learn about recycling and follow these practices at home. A person in a Scandinavian country (such as Sweden, Denmark, or Norway) generates about the same amount of waste as an American. People in developing countries generate less waste than Americans or Europeans; for example, a person in India generates about three-fourths of a pound (0.34 kilograms) per day. Still, every country must find a way to process the garbage that each of its residents generates every day, month, and year. In this photo series, students will consider what happens to items that they and everyone else on the planet throw away. They will think like engineers, defining a problem by categorizing and quantifying the components of trash and considering different solutions to the problem of dealing with rubbish. The photos will give students a starting point for weighing the pros and cons of recycling, composting, landfills, and other current ways of getting rid of garbage. (For reusing garbage, see Reusing Garbage: Art Projects in References and Further Reading.) As you analyze the photographs in this collection, keep in mind—and prompt your students to think about—the source, context, scale, the photographer's vantage point, color, and texture. (See the Glossary for more about scale, vantage point, and texture.) Note: Trash, garbage, waste, and rubbish are used interchangeably here. Experts usually use “trash” to mean discarded dry items, “garbage” to mean wet items, and “waste” or “rubbish” as general, inclusive terms for all discards.
Flocks of Carnaby's Black Cockatoo are an iconic sight for the people of Perth, the Swan River region and the forests of the South West. But a comparison of two population surveys, in 2010 and 2011, showed a 37 per cent decline in numbers across the Swan River region in a single year. According to statistical modelling based on the 2011 Great Cocky Count, the population of Carnaby’s cockatoos (Calyptorhynchus latirostris) in the Swan Region was between 5,200 and 8,600 birds. A year earlier the population was estimated at 8,000 to 10,000. Related: Scientific American - Endangered Australian Cockatoo Loses One Third of Population in Just 1 Year | Biodiversity crisis: Habitat loss and climate change causing 6th mass extinction. Found across south-western WA from Geraldton to Esperance, the species has declined across a third of its historic range: it no longer occurs in the central wheatbelt region. The numbers of Carnaby’s cockatoos have declined by at least 50 per cent over the past 45 years. The Swan Region is considered core feeding habitat for northern and western populations of over-wintering cockatoos. The species is listed as Endangered on the International Union for Conservation of Nature (IUCN) Red List, Endangered under the federal Environment Protection and Biodiversity Conservation Act 1999, and as rare or likely to become extinct under the WA State Wildlife Conservation Act 1950. Dr Ron Johnstone, the WA Museum’s ornithology curator and Adjunct Professor at Murdoch University, said in May 2011 that the bird's existence is likely to become more precarious as it shifts its breeding grounds from the Wheatbelt to around Perth. This shift in breeding location has been occurring since World War II as a result of large-scale land clearing in the wheatbelt, and has accelerated in the last three decades through climate change and through competition for food and nesting hollows with galahs, other cockatoos and bees.
Their decline has many contributing causes, including:
- habitat loss caused by human development and the clearing of woodland and heathland for agriculture and forestry;
- extreme weather events such as severe heatwaves and hailstorms, which are becoming more frequent with climate change;
- fatal collisions with vehicles;
- severe drought and bushfires, which reduce available food and destroy habitat.
BirdLife Australia’s WA Program Manager Cheryl Gole warned in an 8 March media release (PDF) of the threat of further human development and habitat destruction in the Perth region. "WA's Swan Region is the core feeding habitat during winter for northern and western populations of Carnaby’s Cockatoos. Increasing habitat clearance and fragmentation is the biggest threat to this cockatoo. Recent forecasts for the Perth and Peel region indicate that there will be a larger human population than previously predicted by 2026. This is going to further increase the need for housing and land and increase the pressure on cockatoo habitat. BirdLife Australia believes all habitat used by the cockatoos that remains in the Perth and Peel region is needed for the survival of the birds. We need to strike a balance, and act decisively and quickly to conserve what we have left." Conservation Council of WA spokesman John McCarten said in a media release the same day: “To lose more than a third of an endangered species in just one year is a devastating result and shows that current conservation measures are failing. In the Perth metro area alone we lost 34% of our Carnaby’s – that’s an awful figure for the many people who appreciate their contribution to the unique feel of our city." "Carnaby’s Black Cockatoos are a federally protected endangered species and a true WA icon. There are so few places in the world where a critically endangered species lives side-by-side with a million people, but in Perth we have that with the Carnaby’s. 
We can’t control the weather, so it is vital we do all we can to preserve the remaining cockatoo habitat and halt this alarming decline in numbers. This means regulating development, controlling pests, exercising caution when approving prescribed burns and ending native forest logging." As was to be expected, a spokesperson for the Institute of Foresters of Australia (WA Division) took umbrage at the conservationists' call for an end to native forest logging. "Conservationists have been blinded by their own ideology," said John Clarke, chairman of the WA Division of the Institute of Foresters of Australia. "The Great Cocky Count 2011 report states that the likely reasons for the decline in numbers include clearing of woodland habitat, loss of habitat through Phytophthora dieback, the drought of 2010/11 and the massive hailstorm of March 2010." “And, as graphically portrayed in a television documentary this week, shooting and vehicle hits continue to take their toll.” “Carnaby’s cockatoo is a bird that lives predominantly in the woodlands of the wheatbelt and the coastal plain. It is not threatened by sustainable timber harvesting in south west forests,” said Mr Clarke. “By misdirecting attention away from the real causes of Carnaby’s cockatoo decline, the Conservation Council is deluding the public, which would be mistaken in thinking that changes in timber harvesting would have any effect.” “I am very concerned that they are blinded by their own ideology,” said Mr Clarke. “Help for Carnaby’s cockatoos needs to be based on science, not prejudice.” Clarke thus seized on McCarten's concluding statement about "ending native forest logging" as incorrect in relation to Carnaby's Black Cockatoo, and used it to justify the logging of native forests.
Carnaby's Cockatoo is an occasional visitor to the dense Marri, Karri and Jarrah forests, but these same forests are the prime habitat for two other endangered species of cockatoo: the Forest Black Cockatoo, also called Baudin’s Cockatoo (Calyptorhynchus baudinii), and the Forest Red-tailed Black Cockatoo (Calyptorhynchus banksii naso). Forests such as the Warrup, currently being logged, also contain other endangered species such as the Numbat, which is on the IUCN Red List with an estimated total population of about 1,000. Timber harvesting has been implicated in the population decline of both these bird species, as noted in the 2008 Recovery Plan: "Habitat loss for agriculture, timber harvesting, wood chipping and mining appears to be the principal cause of the historical decline of Baudin’s Cockatoo and the Forest Red-tailed Black Cockatoo (Johnstone 1997; Mawson and Johnstone 1997). The long-term effects of this habitat loss may not yet have been fully realised because of the long life-span (Brouwer et al. 2000) of the cockatoos." So what does the science say about Carnaby's Cockatoo? I checked the latest survey report, the 2011 Great Cocky Count: Population estimates and identification of roost sites for the Carnaby’s Cockatoo (Calyptorhynchus latirostris) (PDF). Here is what this report says under habitat: In the remaining habitat, selective removal of Marri for timber, mining, wood chipping and agriculture has resulted in further declines (Garnett and Crowley 2000, personal communication P. Mawson). The impacts of previous forest management practices for timber and wood chipping on Forest Black Cockatoo populations have not yet been quantified. However, forestry practices such as clear felling and 80-year cut rotations may restrict the availability of nest hollows (Saunders and Ingram 1995). Carnaby’s Cockatoo occurs in native woodlands, typically dominated by Salmon Gum (Eucalyptus salmonophloia) and Wandoo E. 
wandoo, and in shrubland or heathland dominated by Hakea, Banksia (including Dryandra) and Grevillea species (Cale 2003). They are frequently reported in remnant patches of native vegetation on land otherwise cleared for agriculture (Saunders 1979b, 1982, 1986) and seasonally inhabit pine plantations (Davies 1966; Saunders 1974a; Sedgwick 1968, 1973), and forests containing Marri, Jarrah or Karri (Nichols and Nichols 1984; Saunders 1980). Carnaby’s Cockatoo is occasionally recorded in casuarina woodlands or mallee (Carnaby 1933; Nichols and Nichols 1984), and is often recorded in towns, on roadside verges and in gardens around Perth that contain both native and exotic plants (Sedgwick 1973; Saunders 1980). Dr Denis Saunders highlighted the problem in his 1990 scientific paper published in Biological Conservation, Problems of survival in an extensively cultivated landscape: the case of Carnaby's cockatoo Calyptorhynchus funereus latirostris (abstract). The habitat of Carnaby’s Cockatoo became severely fragmented during the mid-twentieth century due to the clearing of native forest, woodlands, shrublands and heathlands for agricultural and suburban development. Today, much of the remaining native habitat occurs in isolated remnant patches (Saunders 1990; Saunders and Ingram 1998). The south-west of Western Australia has undergone recent and extensive clearing of its native vegetation to develop agricultural enterprises. In some areas, over 90% of the original vegetation has been removed and the remainder is scattered in numerous patches of varying size, shape, degree of isolation and degradation. There have been marked changes in the distribution and abundance of the avifauna of this area, with some species disappearing from parts of their former range and others expanding their ranges to take advantage of the altered landscape. 
Saunders identified that "extensive removal of native vegetation, patchy distribution of food and interactions with species like the galah Cacatua roseicapilla and man are contributory factors to its decline". It is clear that the south west forests are part of the bird's habitat: the clearing of mature trees in the wheatbelt has pushed many birds to compete for tree hollows in established forests. That includes native forests being logged, particularly given the report's note that the birds "seasonally inhabit... forests containing Marri, Jarrah or Karri", which are sought-after native hardwood timbers. Timber harvesting may not be the most important factor in this species' decline, but neither can it be said that logging native forests has no impact, as the bird utilizes a range of habitats for nesting, roosting and foraging, including mature native forests.
Caption: Carnaby's Black Cockatoo indicative distribution as of 2009, showing the breeding range in dark brown and the wider foraging and non-breeding range in light brown.
Changing Climate and the Impact of Extreme Events
In recent years extreme weather events have caused fatalities in the Carnaby’s Black Cockatoo population. The South West of Western Australia shows a pronounced drying trend, with hotter summers, more heatwaves and more intense storms due to climate change. This has affected Carnaby's Cockatoo. A scientific paper by Denis Saunders, Peter Mawson and Rick Dawson in Pacific Conservation Biology, published in June 2011, The Impact of Two Extreme Weather Events and Other Causes of Death on Carnaby's Black Cockatoo: A Promise of Things to Come for a Threatened Species? (abstract), detailed and discussed the implications of climate change and severe weather events for this species. The abstract said: Carnaby's Black Cockatoo is an endangered species which has undergone a dramatic decline in range and abundance in southwestern Australia. 
Between October 2009 and March 2010 the species was subjected to a possible outbreak of disease in one of its major breeding areas and exposed to an extremely hot day and a severe localized hail storm. In addition, collisions with motor vehicles are becoming an increasing threat to the species. All of these stochastic events resulted in many fatalities. Species such as Carnaby's Black Cockatoo which form large flocks are particularly susceptible to localized events such as hail storms, contagious disease and collisions with motor vehicles. Extreme temperatures may have major impacts on both flocking and non-flocking species. Predictions of climate change in the southwest of Western Australia are that there will be an increased frequency of extreme weather events such as heat waves and severe hail storms. The implications of more events of this nature on Carnaby's Black Cockatoo are discussed. Dr Saunders commented on the devastating impact of the severe thunderstorms of 2010: “We know that 81 were affected by the hail – 57 were killed and 24 were so badly injured with soft tissue and skeletal damage they would have to go into rehab,” he said in an article on Science Network WA. Most of the dead birds were found in Kings Park and Subiaco, where the storm hit hardest. Hot weather and heatwaves also impair the birds' ability to forage and feed. “Their foraging time gets reduced because they can only forage early in the morning or late in the afternoon,” he says. In an incident near Hopetoun in January 2010, 145 dead Carnaby's Black Cockatoos were found after an extreme heatwave. "The problem is that while they were shaded from the sun they were exposed to hot winds of over 50°C," Dr Saunders said. "The only way they can cool down at that stage is to spread their feathers and try to shed heat—but what effectively happens is that they cook." The species, with its limited geographical range, is also susceptible to disease outbreaks. 
An unknown disease outbreak killed as many as 23 female Carnaby’s Black Cockatoos at Koobabbie, a property near Coorow, late in 2009. Normally there are 30 nesting pairs on this property, producing as many as 19 fledglings per year. The scientific research also showed that fatalities arising from collisions with motor vehicles are increasing. We have cleared much land for development and agriculture, but have left remnant vegetation along roadsides, where cockatoos are likely to feed. “The birds tend to fly out into clear areas to get airborne,” said Dr Saunders. The future for Carnaby's black cockatoo and related cockatoo species looks bleak. Dr Ron Johnstone, the WA Museum’s ornithology curator and Adjunct Professor at Murdoch University, said in January 2012 that the species could be extinct within 50 years. “They are iconic large forest cockatoos that were once widespread and common in huge numbers on the Swan Coastal Plain,” he said in a Science WA article. “It’s been death by a thousand cuts as the vegetation has been reduced.” The other affected species, also on the IUCN Red List, are Baudin’s Cockatoo (Calyptorhynchus baudinii) and the Forest Red-tailed Black Cockatoo (Calyptorhynchus banksii naso), the latter listed as Vulnerable under the EPBC Act. All three species nest in tree hollows, and move south and west after the nesting season to feed on nuts, nectar and wood-boring grubs and insects, including nuts from pine plantations (Pinus radiata). As the banksia heathlands have been destroyed by the development of Perth's suburbs, pine plantations have provided a partial food replacement. As pine plantations mature and are harvested, the cockatoos are left with dwindling foraging opportunities. “Even where the plantations are removed we should try and keep at least a fringe of pines that will provide food which will allow the birds to adjust a bit,” said Professor Johnstone. 
He has also advocated a proactive urban planning policy that includes planting suitable trees and shrubs and maintaining mature trees as part of urban developments, to provide food and roosting for cockatoos while new trees grow to maturity. Dr Denis Saunders advocated 20 years ago for corridors of native vegetation, which would significantly improve the survival of native wildlife. He wrote in his 1990 scientific paper: "It is pointed out that some local disappearances of this species may have been avoided if corridors of native vegetation had been left across the landscape to link remnant patches. These could have channelled Carnaby's cockatoo to areas of native vegetation which provide its food. Not only is it important to retain linkages between remnants of native vegetation but there is a need to re-establish corridors of native vegetation in extensively cleared agricultural areas such as those in the wheatbelt of Western Australia." So where does all this sit with the politicians in WA? Environment Minister Bill Marmion has said there is no scientific evidence to suggest that logging activities in native forests are putting the black cockatoos at risk. The Foresters clearly agree. But this is disputed by the Conservation Council of WA. We are not talking primarily about Carnaby's black cockatoo here, but about the equally threatened Baudin’s Cockatoo and Forest Red-tailed Black Cockatoo. “The evidence that was brought up in the Government’s 2008 cockatoo recovery plan says conservation of feeding and breeding habitats of forest black cockatoos relies on the protection of marri, karri and jarrah habitats,” Conservation Council of WA spokesman Mr McCarten said in a Perth Now report. That 2008 recovery plan refers to Baudin’s Cockatoo and the Forest Red-tailed Black Cockatoo, both of which inhabit the humid and sub-humid dense forests of the south-west of Western Australia, not Carnaby's Cockatoo, which appears to be only an occasional visitor to these forests. 
Both of these species are dependent on forest habitats, primarily of Marri, Karri and Jarrah, and have suffered substantial population declines due to timber harvesting and clearing for agricultural land use over the last century. "If we're clearing marri and jarrah habitat, which we necessarily are by logging, then we're definitely going to be affecting cockatoo numbers. We're coming out of a very big drought; the fires have devastated the cockatoos' habitat, and they're not doing well," said McCarten. So Foresters spokesperson John Clarke and Environment Minister Bill Marmion are mostly right that logging native forests is not the main culprit driving the decline in the population of Carnaby's Black Cockatoo: human land use, agricultural clearing, vegetation change and urban development are probably the main drivers. But logging of native forests will have a major impact on the other two species of black cockatoo, both of which are also endangered and suffering population declines due to habitat loss (including logging), illegal shooting, a shortage of and competition for nest hollows, and severe weather and climate change impacting biodiversity and ecosystem function.
- BirdLife Australia - Report finds Carnaby’s declining
- BirdLife Australia, media release, March 8, 2012 - Great Cocky Count report released: Carnaby’s Cockatoo numbers down (PDF)
- WA Division of the Institute of Foresters of Australia, media release, March 15, 2012 - Green Ideology a risk to cockatoos
- Science Network Western Australia, March 14, 2012 - Carnaby’s Black Cockatoo population suffering blow after blow
- Science Network Western Australia, January 4, 2012 - Perth slowly devouring its black cockatoo species
- Science Network Western Australia, May 12, 2011 - Are Black Cockatoos really protected?
- Saunders, Denis A.; Mawson, Peter; and Dawson, Rick. The Impact of Two Extreme Weather Events and Other Causes of Death on Carnaby's Black Cockatoo: A Promise of Things to Come for a Threatened Species? (abstract). Pacific Conservation Biology, Vol. 17, No. 2, June 2011: 141-148.
- Saunders, Denis (1990). Problems of survival in an extensively cultivated landscape: the case of Carnaby's cockatoo Calyptorhynchus funereus latirostris (abstract). Biological Conservation.
- Reports and background information from the WA Department of Environment and Conservation - Saving Carnaby’s cockatoo
- Perth Now, March 5, 2012 - Environment Minister Bill Marmion says logging won't affect black cockatoos
- Federal Dept of Sustainability and Environment (2008) - Forest Black Cockatoo (Baudin’s Cockatoo Calyptorhynchus baudinii and Forest Red-tailed Black Cockatoo Calyptorhynchus banksii naso) Recovery Plan
- Image: Carnaby's black cockatoo (male) by Ralph Green / Flickr, under Creative Commons licence (CC BY-NC-ND 2.0)
The occurrence is not that unusual. It even has a name: sungrazer. What makes this sighting unique is that, up to this point, no one has actually seen the end of a comet’s journey. NASA’s Solar Dynamics Observatory (SDO) spacecraft recorded a 20-minute movie of the comet flying directly in front of the sun on July 5. Scientists are excited, not only because it’s a first, but also because they’re looking forward to analyzing the data to learn more about the fate of the comet. Since the sun cranks out such incredible heat and radiation, it’s likely the comet simply evaporated completely.
Drug Addiction Related to Primal Desire for Salt
Scientists in the U.S. and Australia have found evidence that drug addiction may be related to our powerful primal appetite for salt. A study conducted by Duke University in Durham, North Carolina, and the University of Melbourne in Australia finds that addictive drugs affect the same nerve cells and connections in the brain that tap an ancient instinctual desire for salt, a much-needed element in our diets. The minerals found in salt help maintain a number of body functions and are necessary not only for staying healthy, but for simply staying alive. The scientists found that the gene patterns activated by stimulating an instinctive behavior such as our hunger for salt were the same as those regulated by cocaine or opiate addiction. The study’s co-lead author Wolfgang Liedtke, an assistant professor of Medicine and Neurobiology at Duke University, says the group’s findings have profound and far-reaching medical implications, from providing a basis for understanding drug addiction to explaining the detrimental consequences when obesity-generating foods are overloaded with sodium. The study was published online in the early edition of the Proceedings of the National Academy of Sciences on July 11. 
Chain of Underwater Volcanoes Discovered Near Antarctica
British scientists have found 12 previously unknown volcanoes under the ocean waters around the South Sandwich Islands, located about halfway between South America and Antarctica. They also found craters five kilometers in diameter, left by collapsing volcanoes. Seven active volcanoes are visible above the sea as a chain of islands. The research reveals that the sub-sea landscape surrounding the underwater volcanoes, with waters warmed by volcanic activity, has become a rich habitat for many species of wildlife, which could lead to valuable new insights about life on Earth. The researchers say their findings will help us understand what happens when volcanoes erupt or collapse underwater, as well as their potential for creating dangerous phenomena such as tsunamis.
Real-Life Computer War Games
This may sound like the plot of the 1980s movie “WarGames,” in which a young computer whiz hacks into a military computer that learns to launch weapons of mass destruction by playing games. What if a computer could not only read, but actually understand the meaning of a sentence written in virtually any language? Researchers at MIT’s Computer Science and Artificial Intelligence Lab are designing machine-learning systems that could do just that. They’re trying to get a computer to analyze and follow a set of instructions for an unfamiliar task and, so far, they say they’ve had success. As part of their experimentation in developing this technology, the research team is teaching a computer to play “Civilization,” a complex computer game in which the player guides the development of a city into an empire across centuries of human history. When the researchers programmed the computer system to use a player’s manual to help it develop game-playing strategy, its rate of victory jumped from 46 percent to 79 percent. 
“Games are used as a test bed for artificial-intelligence techniques simply because of their complexity,” says S. R. K. Branavan, a graduate student at MIT and a member of this research team. “Every action that you take in the game doesn’t have a predetermined outcome, because the game or the opponent can randomly react to what you do. So you need a technique that can handle very complex scenarios that react in potentially random ways.” The researchers hope to demonstrate that computer systems that learn the meanings of words through exploratory interaction with their environments are a promising subject for further research. Work is also under way to use the algorithms they’ve already developed with robotic systems, too.
The famous story of two star-cross'd lovers is rediscovered for students and teachers through Insight Shakespeare Plays – Romeo and Juliet. This comprehensive and easy-to-use guide contains the complete play, plus analysis of key themes and historical context such as Elizabethan parties and the Roman Church, as well as a breakdown of vocabulary and language techniques. Group tasks and individual exercises stimulate discussion and encourage students to really engage with the play. Teachers are able to tailor their lessons to students' interests and abilities, whilst addressing key elements of the Australian Curriculum for English.
"pertaining to the theories or work of French botanist and zoologist Jean-Baptiste Pierre Antoine de Monet de Lamarck" (1744-1829). Originally (1825) in reference to his biological classification system. He had the insight, before Darwin, that all plants and animals are descended from a common primitive life-form. But in his view the process of evolution included the inheritance of characteristics acquired by the organism by habit, effort, or environment. The word typically refers to this aspect of his theory, which was long maintained in some quarters but has since been rejected.
This video features an eighth-grade High Tech Middle Chula Vista project that shows how Common Core math standards can be addressed in work that is engaging and compelling for students and connects to real life. The project asks students to imagine their lives 20 years in the future. There is a literacy component (written sections), but the math work—the focus of this video—involves projecting future finances such as income, loans, expenses, and taxes. This video examines how student work illuminates—and is illuminated by—the following standard: Math.Content.7.EE.3.
THE ILLUMINATING STANDARDS PROJECT
In the last two decades of the ‘standards movement’ in American public education, many educators have concluded that ‘teaching to the standards’ and project-based learning are incompatible. Ron Berger (Expeditionary Learning) and Steve Seidel (Harvard Graduate School of Education), co-directors of The Illuminating Standards Project, wondered whether this conclusion is true. Indeed, they speculated that long-term, interdisciplinary, arts-infused, community-connected projects may well be one of the best ways to see what state standards look like when fully realized in the things students make in school—to make the standards visible. Three questions frame the work of The Illuminating Standards Project:
- What does it look like when state standards are met with integrity, depth, and imagination?
- How can we use standards to open up and enrich curriculum, rather than narrow and constrain it?
- How can we use student work to raise the level of our understanding of standards and our dialogue about them? 
THE VIDEOS AND HOW TO USE THEM Collaborating with Berger and Seidel on The Illuminating Standards Project, over 30 students at the Harvard Graduate School of Education have explored these questions by choosing projects from the student work in Models of Excellence and considering the ways in which those projects did—and didn’t—meet specific state standards. Further, they examined how the student work illuminated the standards—and vice versa. Many of those students created short films and 12 of those films are presented here. We invite you to watch these films, and we encourage you to use them as the catalyst for discussions with your colleagues about the relationship between your commitment to meet demanding state standards and approaches to designing powerful learning experiences for our students. See a suggested protocol for viewing linked below, along with selected videos from the series. (The complete list of videos in the series can be found here.)
April 2 is International Children’s Book Day and the anniversary of the birth of one of the genre's most famous contributors, Hans Christian Andersen. But when Andersen wrote his works, children’s literature was not the established field we recognise today. Adults have been writing for children (a broad definition of what we might call children’s literature) in many forms for centuries. Little of it looks much fun to us now. Works aimed at children were primarily concerned with their moral and spiritual progress. Medieval children were taught to read on parchment-covered wooden tablets containing the alphabet and a basic prayer, usually the Pater Noster. Later versions are known as “hornbooks”, because they were covered by a protective sheet of transparent horn. Spiritually improving books aimed specifically at children were published in the 17th century. The Puritan minister John Cotton wrote a catechism for children, titled Milk for Babes, in 1646 (republished in New England as Spiritual Milk for Boston Babes in 1656). It contained 64 questions and answers relating to religious doctrine, beliefs, morals and manners. James Janeway (also a Puritan minister) collected stories of the virtuous lives and deaths of pious children in A Token for Children (1671), and told parents, nurses and teachers to let their charges read the work “over a hundred times”. These stories of children on their deathbeds may not hold much appeal for modern readers, but they were important tales about how to achieve salvation, and they put children in the leading role. Medieval legends about young Christian martyrs, like St Catherine and St Pelagius, did the same. Other works were about manners and laid out how children should behave. 
Desiderius Erasmus famously produced a book of etiquette in Latin, On Civility in Children (1530), which gave much useful advice, including “don’t wipe your nose on your sleeve” and “To fidget around in your seat, and to settle first on one buttock and then the next, gives the impression that you are repeatedly farting, or trying to fart. So make sure your body remains upright and evenly balanced.” This advice shows how physical comportment was seen to reflect moral virtue. Erasmus’s work was translated into English (by Robert Whittington in 1532) as A lytyll booke of good manners for children, where it joined a body of conduct literature aimed at wealthy adolescents. In a society where reading aloud was common practice, children were also likely to have been among the audiences who listened to romances and secular poetry. Some medieval manuscripts, such as Bodleian Library Ashmole 61, included courtesy poems explicitly directed at “children yong”, alongside popular Middle English romances, saints’ lives and legends, and short moral and comic tales. Do children have a history? A lot of scholarly ink has been spilled in the debate over whether children in the past were understood to have distinct needs. Medievalist Philippe Ariès suggested in Centuries of Childhood that children were regarded as miniature adults because they were dressed to look like little adults and because their routines and learning were geared towards training them for their future roles. But there is plenty of evidence that children’s social and emotional (as well as spiritual) development were the subject of adult attention in times past. The regulations of late medieval and early modern schools, for example, certainly indicate that children were understood to need time for play and imagination. Archaeologists working on the sites of schools in The Netherlands have uncovered evidence of children’s games that they played without input from adults and without trying to emulate adult behaviour. 
Some writers on education suggested that learning needed to appeal to children. This “progressive” view of children’s development is often attributed to John Locke but it has a longer history if we look at theories about education from the 16th century and earlier. Some of the most imaginative genres that we now associate with children did not start off that way. In Paris in the 1690s, the salon of Marie-Catherine Le Jumel de Barneville, Baroness d’Aulnoy, brought together intellectuals and members of the nobility. There, d’Aulnoy told “fairy tales”, which were satires about the royal court of France with a fair bit of commentary on the way society worked (or didn’t) for women at the time. These short stories blended folklore, current events, popular plays, contemporary novels and time-honoured tales of romance. These were a way to present subversive ideas, but the claim that they were fiction protected their authors. A series of 19th-century novels that we now associate with children were also pointed commentaries about contemporary political and intellectual issues. One of the better known examples is Reverend Charles Kingsley’s The Water Babies: A Fairy Tale for a Land Baby (1863), a satire against child labour and a critique of contemporary science. The moral of the story By the 18th century, children’s literature had become a commercially-viable aspect of London printing. The market was fuelled especially by London publisher John Newbery, the “father” of children’s literature. As literacy rates improved, there was continued demand for instructional works. It also became easier to print pictures that would attract young readers. More and more texts for children were printed in the 19th century, and moralistic elements remained a strong focus. Katy’s development in patience and neatness in the “School of Pain” is key, for example, in Susan Coolidge’s enormously popular What Katy Did (1872), and feisty, outspoken Judy (spoiler alert!) 
is killed off in Ethel Turner's Seven Little Australians (1894). Some authors managed to bridge the comic with important life lessons. Heinrich Hoffmann's memorable 1845 classic Struwwelpeter reads now like a kids' version of Dumb Ways to Die. By the turn of the 20th century, we see the emergence of a "kids' first" literature, in which children take on serious matters with (or often without) the help of adults, and often within a fantasy context. The works of Lewis Carroll, Robert Louis Stevenson, Mark Twain, Frances Hodgson Burnett, Edith Nesbit, JM Barrie, L Frank Baum, Astrid Lindgren, Enid Blyton, CS Lewis, Roald Dahl and JK Rowling operate in this vein. Children's books still contain moral lessons – they continue to acculturate the next generation to society's beliefs and values. That's not to say that we want our children to be wizards, but we do want them to be brave, to stand up for each other and to develop a particular set of values. We tend to see children's literature as providing imaginative spaces for children, but are often short-sighted about the long and didactic history of the genre. And as historians, we continue to seek out more about the autonomy and agency of pre-modern children in order to understand how they might also have found spaces in which to exercise their imagination beyond books that taught them how to pray. Susan Broomhall, Director, Centre for Medieval and Early Modern Studies, University of Western Australia; Joanne McEwan, Researcher, University of Western Australia, and Stephanie Tarbin, Lecturer in medieval and early modern history, University of Western Australia
Parts of Circulatory System in Humans Talking strictly in the sense of the cardiovascular system, the major parts are blood, blood vessels, and the heart, but if the lymphatic system is considered a component, lymph, lymph nodes and lymphatic vessels have to be included. Forming about 7% of human body weight, blood is a special body fluid and one of the most important parts of the circulatory system. It serves as a medium for the transportation of respiratory and metabolic substances to every living cell in the body and brings back any waste materials produced there, including carbon dioxide and toxins. The circulatory fluid is composed of plasma, which contributes about 55% of blood volume, and formed elements. Red blood cells, white blood cells and platelets constitute most of the remaining part of blood, while oxygen, carbon dioxide, nitrogen, chemicals, nutrients and metabolic wastes are always present in varying concentrations. Red blood cells contain a globular protein, called hemoglobin, that can bind oxygen and release it wherever it is needed. About one third of each red blood cell is composed of this oxygen-binding protein, the deficiency or destruction of which causes serious diseases, often leading to death. White blood cells act as the body's defense force against any attack from foreign substances and disease-causing germs. Whenever a harmful microbe or substance enters the body, the armed forces of leukocytes immediately surround and kill or neutralize it. Platelets (also called thrombocytes) are nucleus-less bodies that not only cause blood clotting but also serve as a natural factor for growth and contribute to the maintenance of homeostasis. Chemical substances in the blood include hormones, amino acids, vitamins and electrolytes, which perform various assigned tasks in the body, like growth, biological catalysis, synthesis of new cells and products, and so on.
Now, let's have a look at the structure and functioning of the other parts of the circulatory system. Vessels are those parts of the circulatory system in humans through which the blood flows. These tubes form three types of circulatory circuits: the systemic, coronary and pulmonary systems. The vessels that carry blood from the heart and deliver it to various parts of the body are termed arteries, while an extensive system of veins collects blood from all cells of the body and carries it back to the heart. Owing to the relative concentration of oxygen in the blood, the two kinds of vessels have different colors and can easily be distinguished from each other. Arteries contain oxygenated blood, which makes them appear reddish, while veins are bluish in appearance because of the deoxygenated blood flowing in them. Arteries and veins are largest near the heart and gradually narrow as they move away from it, finally branching into extremely narrow tubes, called capillaries, which are only a few micrometers in diameter. The capillary beds are where the arterial and venous sides of the circulation meet end-to-end: arteries deliver oxygenated blood, while veins carry the oxygen-depleted blood back to the heart. Containing clear body fluid, the lymph vessels similarly form a system of lymphatic canals and act as a complement to the overall performance of the circulatory system. After division and subdivision, they are reduced to the size of lymphatic capillaries, which collect the interstitial fluid and transport it to lymph nodes where, after processing, it is returned to the blood. In most cases, oxygenated blood travels in arteries and deoxygenated blood flows through veins, but there are some exceptions: the pulmonary artery and pulmonary vein contain deoxygenated and oxygenated blood, respectively. The pulmonary artery transports oxygen-poor blood from the heart to the lungs, while the pulmonary vein brings oxygen-rich blood back to the heart.
Among the different parts of the circulatory system, the heart is considered the chief organ. Beating roughly 2.5 billion times over the average human lifespan, it never gets tired or stops working for a moment until death. This vital organ is made up of cardiac muscle, which is specialized to perform the continuous activity of repeated contraction and relaxation. Its hollow chambers are used to collect and pump blood to all parts of the body. The blood pumped by the heart moves through two separate circuits: in one, blood is pumped to the lungs and received back; in the other, blood is pumped to and recollected from every organ of the body. In this process each cell of the body is supplied with respiratory gases, nutrients and other metabolites, and disposes of its metabolic waste products. Structure and Function The pericardium is a double-membrane structure that encloses the heart, giving it flexibility and a sac-like shape. Two atria and two ventricles form the four inner cavities of the heart. The upper two chambers are the atria and the lower ones are the ventricles, which form the receiving and discharging pouches, respectively. The right atrium is connected with the right ventricle through a valve, called the tricuspid valve, which maintains the flow of blood in one direction, preventing backflow. Likewise, the other AV (atrioventricular) valve regulates the flow of blood between the left atrium and left ventricle. Two other valves, the pulmonary and aortic valves, are also involved in the double circulation system. Deoxygenated blood is collected from each and every cell of the body by an extensive network of veins and is then transported to the lungs for oxygenation. The blood received back from the lungs is rich in oxygen; it is then pumped and supplied to all parts of the body through another branching system of vessels, called arteries.
|MadSci Network: Neuroscience| Question: How does our brain read nerve messages? Is the nerve message read like DNA? From: Yan Ho Good question! The brain doesn't read nerve messages the same way DNA is read. Where DNA is a code made of basic building blocks that can be read to make proteins, brain cells (called neurons) don't have a similar code that they can piece together. First, the fact that neurons can communicate with each other is key. Second, they have to rely on huge networks of many interconnecting cells to make sense of the world. To understand how the brain as a whole interprets messages, we need to first look at the individual cells. (See Figure 1) Neurons are specially designed to transmit information. The branched ends, called dendrites, receive information. The long end, called the axon, sends information – it can be up to 1 meter long! The ball in the middle, the cell body, sums up many inputs from the dendrites and decides whether or not to send any info down the axon. (A note: a nerve is a bunch of axons from a few different neurons – like a bundle of sticks all stuck together.) Cells are interconnected, with the axon of one cell contacting the dendrites of others. Information is passed from the dendrite to the cell body to the axon. The point at which an axon touches a dendrite is called a synapse. (On Figure 1, they are the colored balls on the dendrites and at the end of the axon. I'll explain the colors in a bit.) At this synapse, the axon can send a chemical message called a neurotransmitter to the dendrite. When the dendrite receives this message, it recognizes it and responds accordingly. This is the first important way that the brain interprets signals. There is some more written about this process here: http://www.madsci.org/posts/archives/dec2000/977272802.Ns.r.html An important point is that one axon can contact many dendrites belonging to many other cells.
Also, the dendrites of one cell are contacted by the axons from many different cells. You can see that this becomes a really complicated network of cells (See Figure 2). It's similar to the telephone system – it lets you talk to lots of different people and lets lots of people talk to you. But it's like having 1000 people on the line at once! So we know that axons and dendrites can transmit and exchange information. But what is the information that is being passed down the axon and dendrites? What is really important to note is that axons can only send one type of signal. Let's call it an "OK" signal (in technical terms, it's called an action potential). At the end of the axon, this OK signal is converted into a "YES" or "NO" signal by releasing a neurotransmitter. (On Figure 1, YES is in green, and NO is red.) Whether it is YES or NO depends on what kind of cell it is – there are cells that have either YES or NO neurotransmitters at the end of their axons. Dendrites, on the other hand, take both YES and NO signals (remember – they receive contacts from many different cells!). All of these signals are summed up at the cell body, and if there are more YES signals than NOs, then the axon sends an OK signal. And you can see that this will give a YES or NO signal at the end of the axon of this cell – and this YES or NO will help or prevent the next cells from sending an OK signal. But if there are more NOs, then there is no OK signal, and no neurotransmitter is released from the end of the axon. It's not a NO signal, but the absence of a signal. The really important point is that this big network of cells is what allows the brain to interpret signals from your nerves (like in Figure 3). Large, specialized networks of neurons in your brain are specially connected to perceive touch, others sight, and others are wired up to let you move your muscles as you please.
They are organized appropriately – the touch receptors from your arm aren't directly connected to the parts of your brain that try to interpret vision. Certain parts of nerves will activate particular networks which respond to basic properties of a sensation, and this information will be passed on to more complex networks. For instance, first you will independently establish that something is heavy, round, or smooth. Then this information will be passed into more networks, which tell you that it's a ball. Then that info will be passed on through yet more networks, and you finally realize it's a bowling ball! These networks are critical in forming representations of the world in your brain. But how your nerves respond to the world is also encoded – for instance, a heavy weight on your arm may send A LOT of OK signals in quick succession (a burst of "OK"s). So the network of cells in your brain recognizes that there is something on your arm. But since there is such a huge number of signals in such a short time, your brain realizes that this weight is really heavy. In this example, a feather would barely elicit any OK signals – just enough to let you know that something is there. There is some more written about that here: http://www.madsci.org/posts/archives/dec99/945634578.Ns.r.html I apologize if this is a bit confusing – there is a lot of complexity in the system. We barely understand how the brain does what it does! Each level of complexity builds on the lower one. I've drawn 5 cells connected together in Figure 3. But the brain has 10 - 100 BILLION of these cells, each with up to 1000 connections – all contained in the top of your head! So, in summary, to read a nerve signal, the brain utilizes not only how often the nerve is sending a signal, but also how that signal affects the complicated networks of neurons in the brain!
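If it helps to see it written out, the YES/NO summation at the cell body can be sketched as a toy model in a few lines of Python. This is only an illustration of the description above – real neurons integrate signals over time and with varying strengths, and the +1/-1 encoding and simple threshold are simplifying assumptions:

```python
# Toy model of the YES/NO summation described above.
# YES signals are encoded as +1, NO signals as -1 (an assumption for
# illustration, not how real neurotransmitters work).

def cell_body(inputs):
    """Sum the YES (+1) and NO (-1) signals arriving at the dendrites.
    The cell fires an 'OK' (action potential) down its axon only if
    YES signals outnumber NO signals."""
    return sum(inputs) > 0

# Three YES signals and one NO: the cell fires an OK down its axon.
print(cell_body([+1, +1, +1, -1]))  # True

# Equal YES and NO signals: no OK signal is sent.
# Note this is the ABSENCE of a signal, not a NO signal.
print(cell_body([+1, -1]))  # False
```

The point of the sketch is the asymmetry the answer emphasizes: the output is either "fire" or nothing at all, and the YES/NO character of the message only appears at the next synapse, via the neurotransmitter that cell releases.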
A really good website that deals with some of the nuts and bolts of neuroscience can be found at: It also includes lots of relevant links! Try the links in the MadSci Library for more information on Neuroscience.
- Subject(s):English Literature - Author(s):Anthony Partington, Richard Spencer, Peter Thomas - Available from: May 2015 A new series of bespoke, full-coverage resources developed for the 2015 GCSE English qualifications. Approved for the AQA 2015 GCSE English Literature specification, this print Student Book is designed to help students develop whole text understanding and written response skills for their closed-book exam. The resource provides act-by-act coverage of Shakespeare’s play as well as a synoptic overview of the text and its themes. Short, memorable quotations and striking images throughout the book aid learning, while in-depth exam preparation includes practice questions and sample responses. See also our Macbeth print and digital pack, which comprises the print Student Book, the enhanced digital edition and a free Teacher’s Resource. Written specifically for the 2015 AQA GCSE English Literature specification, covering the Shakespeare aspect of the specification. Class, group and individual activities will support students in building the skills they need to write personal, creative and critical responses to texts. A progressive learning sequence incorporates differentiated support, providing opportunities to stretch the more able and support those who need it. Detailed activities based around characters, plot and the writer’s craft encourage students to develop whole-text understanding. Draws and builds on prior learning through Assessment for Learning activities, allowing students to develop confidence and skills in responding to literature. Provides exam preparation through practice tasks, advice for marking linked with mark scheme descriptors, annotated sample answers to demonstrate a range of achievement and revision suggestions. - Introducing Macbeth - Part 1. Exploring the play: Unit 1. Act 1: Fair and foul - Unit 2. Act 2: Alarms and escapes - Unit 3. Act 3: Illusions and delusions - Unit 4.
Act 4: Resistance and revenge - Unit 5. Act 5: Endings and beginnings - Part 2. The play as a whole: Unit 6. Plot and structure - Unit 7. Context, setting and stagecraft - Unit 8. Character and characterisation - Unit 9. Ideas, perspectives and themes - Unit 10. Language - PREPARING FOR YOUR EXAM Anthony is a Head Teacher and Executive Leader in the Cambridge Meridian Academies Trust. He was previously Director of English and Media at a federation of schools in Cambridge, and has been involved in research with the Institute of Education, London. Anthony is Editor of two Cambridge School Shakespeare plays. Richard is Vice Principal of Impington Village College in Cambridge and was previously Head of English in two schools. He was previously the Eastern Region coordinator for NATE (National Association for Teachers of English) and is Editor of two Cambridge School Shakespeare plays.
A recent BBC News Health article raises the question, ‘Can we trust BMI to measure obesity?’ BMI stands for Body Mass Index. It uses height and weight to work out whether a person is a healthy weight, and it has been in use for over 100 years. But it has come in for criticism of late. This follows revelations by UCLA psychologists, who found that 54 million Americans were being labelled ‘unhealthy’ by BMI when in fact they were not. Can BMI be trusted? One study that sheds doubt on body mass index as a measurement of health or obesity is the UCLA College study. Its findings include: - More than 30% of those with BMIs in the “normal” range are actually unhealthy based on their other health data. - More than 2 million people considered “very obese” because of a BMI of 35 or higher are actually healthy. What BMI is obese? By convention, anyone with a BMI of 30 or more is classed as obese, and obesity significantly increases the risk of diseases, such as diabetes, and of premature death. Yet there are studies demonstrating that some individuals who are obese have a lower cardiovascular risk and an improved metabolic profile. In addition, studies demonstrate there to be a subset of people with a ‘normal’ body mass index who are metabolically unhealthy and have increased mortality risk. Why BMI is flawed In 2013, Nick Trefethen of Oxford University’s Mathematical Institute pointed out that BMI leads to confusion and misinformation. In fact, he says that it exaggerates thinness in short people and fatness in tall people. “Because of that height² term, the BMI divides the weight by too large a number for short people and too small a number for tall people. So short people are misled into thinking they are thinner than they are, and tall people are misled into thinking they are fatter than they are.” The BBC News Health article also mentions bodybuilders and rugby players, for example, who carry a lot of muscle.
And because muscle is denser than fat, their BMI would class them as obese. Yet they have very little fat and may be very healthy. Are body mass index and body fat the same? BMI is a calculation of weight in kilograms divided by height in metres squared. It doesn’t differentiate between fat and muscle mass. It also doesn’t measure how fat is distributed throughout the body. And that matters – it is another reason why BMI has come into question in recent years. Research shows that people who carry a lot of fat around their waist are at high risk of health problems, because the fat is stored around key organs in the abdomen. In fact, researchers for one study suggest that waist size is linked to risk of Type 2 Diabetes regardless of BMI. So it seems that waist size might be a better monitor of health and obesity than BMI. What body composition test is most accurate? A more accurate measurement of health is body composition measurement, because it can monitor changes in fat, lean mass and fluid. Consequently, it provides a better understanding of what’s happening inside the body. Bodystat is an inexpensive, quick and non-invasive way of carrying out body composition measurement. It uses BIA (bio-electrical impedance) to measure the amount of fat, lean muscle and water in the body. It does this by using electrodes, which pass a small, safe current through any segment of the body. By using different frequencies, the current penetrates different tissues, and algorithms calculate how much of each tissue is present in the body (by analysing the impedance to the flow of current at that frequency). This video from Bodystat explains how bioelectrical impedance works, and how it measures body fat, lean mass and water inside the body.
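To make the formula concrete, here is a minimal sketch of the BMI calculation described above. The example weight and height are invented for illustration; the category cut-offs are the standard WHO-style thresholds referred to in the text:

```python
# Minimal sketch of the BMI formula: weight (kg) divided by height (m) squared.
# Category thresholds follow the standard cut-offs mentioned in the text
# (30+ is classed as obese).

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def category(value: float) -> str:
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

# A muscular 105 kg athlete at 1.85 m: BMI alone classes them as obese,
# even though much of that weight may be lean muscle rather than fat.
b = bmi(105, 1.85)
print(round(b, 1), category(b))  # 30.7 obese
```

The example illustrates the muscle-versus-fat limitation discussed above: the formula sees only total weight and height, so it cannot tell a rugby player from someone carrying the same weight as fat.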
For accurately measuring body fat, lean muscle mass and hydration in the gym, the pocket-sized Bodystat 1500 Touchscreen is the latest model, based on the most popular device. However, the best choice for dieticians and nutritionists is the Bodystat 1500MDD. This is more advanced than the 1500, with added functions such as cellular health and nutritional status monitoring, as well as overall fitness and health.
Some nights are impossible to forget—like the night of October 8, 1871, when women snatched their children from their beds, men formed ad hoc fire brigades, and the terrified residents of Peshtigo, Wisconsin, fled what would become the deadliest wildfire in American history. So why has the Peshtigo wildfire faded from national memory? The story starts in a booming logging town surrounded by dense forests. The seemingly endless trees within close range of Lake Michigan sparked a brisk trade in logging that attracted immigrants from all over Europe, beginning in the 1780s. Thanks to its prime location near Chicago—the world’s largest lumber trade market at the time—Peshtigo prospered, felling trees for a rapidly expanding country that needed timber for its houses and new cities. But Peshtigo’s trees proved to be its downfall. The confluence of events that led to the devastating blaze began with “a low rumbling noise, like the distant approach of a train,” as witnesses to the chaos later recalled. Soon, it became clear the town itself was being consumed by flames. Before townspeople had a chance to react, it was already too late. Survivors described a cyclone-like firestorm—a whirlwind that consumed everything around it. The conditions were so extreme that people wondered whether they had been caused by a comet (a theory that has never been proven). A staggering 1.2 million acres—an area the size of the state of Connecticut—burned that night. Building after building ignited, and many people burned before anyone could find their way out. Those who did make it to the river watched helplessly as their entire town burned to the ground. Cows and horses rushed into the river, too, creating a scene of anguish and chaos. Some who ran to the river drowned or died of hypothermia.
Those who made it to the next morning found only “a bleak, desolate prairie, the very location of the streets almost a matter of doubt.” A newspaper reporter wrote that “no vestige of human habitation remained, and the steaming, freezing, wretched group, crazed by their unutterable terror and despair…could but vaguely recognize one another in the murky light of day.” That summer, in 1871, was one of the driest on record. A 20th-century reconstruction conducted by the National Weather Service showed that after a long period of higher-than-usual temperatures and drought, a low-pressure front with cooler temperatures produced winds across the region. This whipped smaller fires into a giant conflagration. Hundred-mile-per-hour winds stoked the fire even more, with cool air fanning the flames and causing a gigantic column of hot air to rise. This produced even more wind—a vicious cycle that turned a routine wildfire into an inferno. Peshtigo’s logging industry was partially to blame for the disaster. In an era before responsible forest management practices, loggers simply stripped the land without any regard for the potential fire hazards they created. They dumped refuse from logging operations in large piles of tinder that were the perfect fuel for the October 8 fire. And railroad operations cleared land using small fires, leaving piles of leftover wood behind them, without recognizing they were a serious fire hazard. The town itself was a tinderbox waiting to ignite. Most of its structures were made of wood, as were its sidewalks. Even the streets were paved in wood chips. Weather was the match that turned those dangerous conditions into an unprecedented fire. Smaller wildfires had raged in the area for days, but on the night of the 8th, winds whipped up and the flames reached Peshtigo. Between 500 and 800 people died in Peshtigo—half the town’s population—and at least 1,200 people died in the overall area. 
However, since the records of most of the communities ravaged by fire burned, too, it will never be possible to identify all the victims. But something else happened the night of October 8—another fire, fueled by the same conditions, in nearby Chicago. The Great Chicago Fire left 100,000 people homeless, destroyed over 17,000 wooden structures and killed around 300 people. Though it wasn’t as severe as the Peshtigo fire, it dominated headlines and history books, overshadowing the much worse blaze to the north. But though the Wisconsin fire has faded from the public’s memory, it is still studied by forest managers and firefighters, who use it as an example of bad forestry practices and the power of unpredictable wildfires. Another group hasn’t forgotten the fire, either: the residents of Peshtigo. The town was rebuilt after the fire, and the remains of over 300 of its residents—many too charred to identify as men or women—were placed in a mass grave.
- Influenza (flu) is an acute, highly contagious viral respiratory infection caused by one of three types of myxovirus influenzae. Influenza occurs all over the world and is more common during winter months. - The incubation period is 24 to 48 hours. Symptoms appear approximately 72 hours after contact with the virus, and the infected person remains contagious for 3 days. Influenza is usually a self-limited disease that lasts from 2 to 7 days. - The disease also spreads rapidly through populations, creating epidemics and pandemics. Annual estimates are that approximately 20,000 deaths occur in the United States as a result of the influenza virus, and 250,000 to 500,000 deaths occur worldwide each year. - Complications of influenza include pneumonia, myositis, exacerbation of chronic obstructive pulmonary disease (COPD), and Reye’s syndrome. - In rare cases, influenza can lead to encephalitis, transverse myelitis, myocarditis, or pericarditis. - Fever* or feeling feverish/chills - Sore throat - Runny or stuffy nose - Muscle or body aches - Fatigue (very tired) - Some people may have vomiting and diarrhea, though this is more common in children than adults. *It’s important to note that not everyone with flu will have a fever. - Rapid diagnostic tests for influenza can help in the diagnosis and management of patients who present with signs and symptoms compatible with influenza. They are also useful for helping to determine whether outbreaks of respiratory disease, such as in nursing homes and other settings, might be due to influenza. - No specific diagnostic tests are otherwise used, because diagnosis is made from the history of symptoms and onset. If the patient has symptoms of a bacterial infection complicating influenza, cultures and sensitivities may be required. Infection related to the presence of virus in mucus secretions INTERVENTIONS.
Infection control; Infection protection; Surveillance; Fluid/electrolyte management; Medication management; Temperature regulation Complications of flu can include bacterial pneumonia, ear infections, sinus infections, dehydration, and worsening of chronic medical conditions, such as congestive heart failure, asthma, or diabetes. Influenza virus shedding (the time during which a person might be infectious to another person) begins the day before symptoms appear, and virus is then released for 5 to 7 days, although some people may shed virus for longer periods. People who contract influenza are most infective between the second and third days after infection. The amount of virus shed appears to correlate with fever, with higher amounts of virus shed when temperatures are highest. Children are much more infectious than adults and shed virus from just before they develop symptoms until two weeks after infection. Influenza virus may be transmitted among humans in three ways: - by direct contact with infected individuals; - by contact with contaminated objects (called fomites, such as toys and doorknobs); and - by inhalation of virus-laden aerosols. The contribution of each mode to overall transmission of influenza is not known. However, CDC recommendations to control influenza virus transmission in health care settings include measures that minimize spread by aerosol and fomite mechanisms. - People with the flu are advised to get plenty of rest, drink plenty of liquids, avoid using alcohol and tobacco and, if necessary, take medications such as acetaminophen (paracetamol) to relieve the fever and muscle aches associated with the flu. - Children and teenagers with flu symptoms (particularly fever) should avoid taking aspirin during an influenza infection (especially influenza type B), because doing so can lead to Reye’s syndrome, a rare but potentially fatal disease of the liver.
- Since influenza is caused by a virus, antibiotics have no effect on the infection unless they are prescribed for secondary infections such as bacterial pneumonia.
- Antiviral medication may be effective, but some strains of influenza can show resistance to the standard antiviral drugs, and there is concern about the quality of the research.
- Phenylephrine and antitussive agents such as terpin hydrate with codeine are often prescribed to relieve nasal congestion and coughing. In patients with influenza that is complicated by pneumonia, antibiotics may be administered to treat a bacterial superinfection.

| Medication or Drug Class | Dosage | Description | Rationale |
| --- | --- | --- | --- |
| Antipyretics | Varies with drug | Aspirin, acetaminophen | Control fever and discomfort; generally aspirin is avoided to reduce the risk of Reye’s syndrome |
| Amantadine | 100–200 mg PO qd, bid for several days | Antiviral anti-infective | Provides antiviral action against influenza (prophylaxis and symptomatic); usually prescribed for outbreaks of influenza A within a closed population, such as a nursing home |

- Other Drugs: Neuraminidase inhibitors (oseltamivir and zanamivir) for use in treatment and prophylaxis of influenza A and B; rimantadine for treatment and prophylaxis of influenza A only; antiviral treatment should be initiated within 48 hours of the onset of symptoms to be effective.
- Administer analgesics, antipyretics, and decongestants, as ordered.
- Follow droplet and standard precautions.
- Provide cool, humidified air, but change the water daily to prevent Pseudomonas superinfection.
- Encourage the patient to rest in bed and drink plenty of fluids.
- Administer I.V. fluids as ordered.
- Administer oxygen therapy if warranted.
- Regularly monitor the patient’s vital signs, including temperature.
- Monitor the patient’s fluid intake and output for signs of dehydration.
- Watch for signs and symptoms of developing pneumonia.
- Advise the patient to use mouthwash or warm saline gargles to ease sore throat.
- Teach the patient the importance of increasing fluid intake to prevent dehydration.
- Suggest a warm bath or heating pad to relieve myalgia.
- Review prevention of future influenza episodes with the patient and the community.

The influenza vaccine is recommended by the World Health Organization and the United States Centers for Disease Control and Prevention for high-risk groups, such as children, the elderly, health care workers, and people who have chronic illnesses such as asthma, diabetes, or heart disease, or who are immunocompromised, among others. There are two types of vaccines:
- The “flu shot” is an inactivated vaccine (containing killed virus) that is given with a needle. It can be given in the muscle or just under the skin. The flu shot that is given in the muscle is approved for use in people older than 6 months, including healthy people and people with chronic medical conditions. The flu shot that is given below the skin is for those 18-64 years of age.
- The nasal-spray flu vaccine (sometimes called LAIV, for “Live Attenuated Influenza Vaccine”) is made with live, weakened flu viruses that do not cause the flu. LAIV is approved for use in healthy people 2 to 49 years of age who are not pregnant.

- Reasonably effective ways to reduce the transmission of influenza include good personal health and hygiene habits such as: not touching your eyes, nose, or mouth; frequent hand washing (with soap and water, or with alcohol-based hand rubs); covering coughs and sneezes; avoiding close contact with sick people; and staying home yourself if you are sick.
- Although face masks might help prevent transmission when caring for the sick, there is mixed evidence on beneficial effects in the community.
- Smoking raises the risk of contracting influenza, as well as producing more severe disease symptoms.
- Since influenza spreads through both aerosols and contact with contaminated surfaces, surface sanitizing may help prevent some infections.
- Alcohol is an effective sanitizer against influenza viruses, while quaternary ammonium compounds can be used with alcohol so that the sanitizing effect lasts longer.
- In hospitals, quaternary ammonium compounds and bleach are used to sanitize rooms or equipment that have been occupied by patients with influenza symptoms. At home, this can be done effectively with diluted chlorine bleach.
Your lungs are a pair of pyramid-shaped organs inside your chest that allow your body to take in oxygen from the air. They have a spongy texture and are pinkish-gray in color. The lungs bring oxygen into the body when breathing in and send carbon dioxide out of the body when breathing out. Carbon dioxide is a waste gas produced by the cells of the body. The process of breathing in is called inhalation. The process of breathing out is called exhalation. Breathing is a vital function of life.

The lungs add oxygen to the blood and remove carbon dioxide in a process called gas exchange. In addition to the lungs, your respiratory system includes airways, muscles, blood vessels, and tissues that help make breathing possible. Your brain controls your breathing based on your body’s need for oxygen.

Your lungs lie on each side of your breastbone and fill the inside of your chest cavity. The right lung is divided into three main sections called lobes, and the left lung has two lobes to allow space for the heart. Your left lung is slightly smaller than your right lung.

The airways are pipes that carry oxygen-rich air to your lungs. They also carry carbon dioxide, a waste gas, out of your lungs. The airways include your:
- Nose and linked air passages called the nasal cavity and sinuses
- Larynx, or voice box
- Trachea, or windpipe
- Tubes called bronchial tubes, or bronchi, and their branches
- Small tubes called bronchioles that branch off of the bronchial tubes

Air first enters your body through your nose or mouth, which wets and warms the air. Cold, dry air can irritate your lungs. The air then travels past your voice box and down your windpipe. The windpipe splits into two bronchial tubes that enter your lungs. A tough tissue called cartilage helps the bronchial tubes stay open. Within the lungs, your bronchial tubes branch into thousands of smaller, thinner tubes called bronchioles. The walls of the bronchioles differ from those of the bronchial tubes.
The bronchioles do not have cartilage to help them stay open, so the walls can widen or narrow to allow more or less airflow through the tubes. The thousands of bronchioles end in clusters of tiny round air sacs called alveoli. Your lungs have about 150 million alveoli. Normally, your alveoli are elastic, meaning that their size and shape can change easily. Surfactant coats the inside of the sacs, or alveoli, and helps the air sacs stay open.

Each of these alveoli is covered in a mesh of tiny blood vessels called capillaries. The space where the alveoli come into contact with the capillaries is called the lung interstitium. The capillaries connect to a network of arteries and veins that move blood through your body. The pulmonary artery and its branches deliver blood rich in carbon dioxide and lacking in oxygen to the capillaries that surround the air sacs. Carbon dioxide moves from the blood into the air inside the alveoli. At the same time, oxygen moves from the air into the blood in the capillaries.

The pleura and the muscles used for breathing

The lungs are enclosed by the pleura, a membrane that has two layers. The space between these two layers is called the pleural cavity. The membrane’s cells create pleural fluid, which acts as a lubricant to reduce friction during breathing.

The lungs are like sponges; they cannot move on their own. Muscles in your chest and abdomen contract, or tighten, to create space in your lungs for air to flow in. The muscles then relax, causing the space in the chest to get smaller and squeeze the air back out. These muscles include the:
- Diaphragm, which is a dome-shaped muscle below your lungs. It separates the chest cavity from the abdominal cavity. The diaphragm is the main muscle used for breathing.
- Intercostal muscles, which are located between your ribs. They also play a major role in helping you breathe.
- Abdominal muscles. They help you breathe out when you are breathing fast, such as during physical activity.
- Muscles of the face, mouth, and pharynx. The pharynx is the part of the throat right behind the mouth. These muscles control the lips, tongue, soft palate, and other structures to help with breathing. Problems with these muscles can cause sleep apnea.
- Muscles in the neck and collarbone area. These muscles help you breathe in when other muscles involved in breathing are not working well or when lung disease impairs your breathing.

Damage to the nerves in the upper spinal cord can interfere with the movement of your diaphragm and other muscles in your chest, neck, and abdomen. This can happen due to a spinal cord injury, a stroke, or a degenerative disease such as muscular dystrophy. The damage can cause respiratory failure. Ventilator support or oxygen therapy may be necessary to maintain oxygen levels in the body and protect the organs from damage.
1. The problem statement, all variables and given/known data

A particle of mass m is dropped into a hole drilled straight through the center of the Earth. Neglecting rotational effects and friction, show that the particle’s motion is simple harmonic if it is assumed that the Earth has a uniform mass density. Obtain an expression for the period of oscillation.

2. Relevant equations

The answer is that K = (4/3)Gmπρ and so T = sqrt(3π/(Gρ)), right?

K = force constant
G = gravitational constant
ρ = density of earth
m = mass of object dropped
T = period

3. The attempt at a solution

I want to know why the constant K is not actually GmM/r³ and the force is not GmMx/r³, where x is the displacement of the object from equilibrium at any point within the earth (I defined equilibrium to be at the center of the earth). I think my issue is not understanding what it means to solve a problem using uniform density. I thought that uniform density meant that the mass per unit volume was the same everywhere in the earth, so the constant M is okay to leave in the equation, since its density is not changing.

I want to understand this because there is a problem in my homework set which asks to find the electrostatic potential energy of a sphere of uniform charge density with total charge Q and radius R. My mind tells me this is simply Q²/(4πε₀R)... but that can't be correct because it's too easy and that is the same as a point charge. What am I not getting?
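Worth noting: the two forms of the force constant agree, provided r in GmM/r³ is the fixed surface radius R rather than the particle's position. By the shell theorem only the mass inside radius x attracts, so F = -GmM(x)/x² with M(x) = (4/3)πρx³, which gives K = (4/3)πGρm = GmM/R³ since M = (4/3)πρR³. A short numerical check of the resulting period, sketched in Python (the constants are illustrative textbook values, not from the thread):

```python
import math

# Inside a uniform-density Earth, the shell theorem gives
#   F(x) = -G*m*M(x)/x**2 = -(4/3)*pi*G*rho*m*x = -(G*m*M/R**3)*x
# i.e. Hooke's law with K = (4/3)*pi*G*rho*m, so
#   T = 2*pi*sqrt(m/K) = sqrt(3*pi/(G*rho)).

G = 6.674e-11      # gravitational constant (m^3 kg^-1 s^-2)
rho = 5510.0       # mean density of Earth (kg/m^3), illustrative value
R = 6.371e6        # Earth's radius (m)

T_analytic = math.sqrt(3.0 * math.pi / (G * rho))

# Numerical check: integrate x'' = -(4/3)*pi*G*rho*x with leapfrog,
# starting from rest at the surface; the first time the velocity
# turns from negative back to non-negative is half a period.
k_over_m = (4.0 / 3.0) * math.pi * G * rho
x, v, t, dt = R, 0.0, 0.0, 0.5
a = -k_over_m * x
while True:
    v_half = v + 0.5 * dt * a
    x += dt * v_half
    a = -k_over_m * x
    v_new = v_half + 0.5 * dt * a
    t += dt
    if v < 0.0 and v_new >= 0.0:
        break
    v = v_new
T_numeric = 2.0 * t

print(T_analytic / 60.0, "minutes")  # roughly 84 minutes either way
```

The same shell-theorem logic answers the electrostatics question: inside a uniformly charged sphere only the charge within radius x contributes, so the potential energy integral is not simply that of a point charge Q at distance R.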
The Samuel Scroll

I. The Samuel scroll: originally one long scroll
  A. Split into two parts at the time of the Greek translation
    1. Remained as one scroll in Hebrew until the 15th century
    2. Now I Samuel ends with Saul's death
    3. Samuel is completely absent from the second part
    4. Originally, one scroll to tell the remembered history of the early monarchy
  B. The Masoretic Text (MT): 500-900 C.E.
    1. Basis for the Hebrew Bible
    2. The MT of Samuel is particularly poor: missing letters, words, sentences
    3. Shorter than the Greek version (LXX), ca. 250-75 B.C.E.
    4. Extant versions of the LXX are all Christian
    5. Rabbis rejected it as "too Hellenistic"
    6. Example: 10:1, statement about kingship
  C. The Dead Sea Scrolls
    1. 3 Samuel scrolls (all fragmentary), dating from 50-25 B.C.E., mid 3rd c. B.C.E., and 1st c. B.C.E.
    2. All three differ from the MT but also differ from the LXX
  D. Thus, at least three different texts of Samuel
    1. Search for an "authentic" text, or accept fluidity?
    2. Written versions reflect the diversity of early traditions
    3. One possibility: think of Samuel as a midrash of the primal Torah
Psychoacoustic Effect and Directional Sound

Sound-Confining Ability

Our directional speakers use patented technology to confine sound to a specific listening region. The sound produced in the listening region is always five times louder than just a few feet outside this area. This effect is very apparent in an environment in which constant background noise is present. However, it is less apparent in a quieter environment. To explain this difference requires a basic understanding of how the human brain perceives sound.

People perceive volume in a relative, as opposed to an absolute, way. Background noise provides a reference against which other sounds are heard and compared. A listener adjusts his/her perception of a sound based on its loudness relative to the reference sound. In a quieter environment, a person’s perception of loudness is actually “turned up” for softer sounds and “turned down” for louder sounds.

In a quiet environment, a great way to appreciate the sound-confining ability of directional audio is to listen remotely through a phone or microphone while another person moves the receiver in and out of the listening region. This works because the listener remains stationary, and thus remains in a constant ambient environment, while being exposed to the sounds inside and outside of the listening region.

The rule of thumb for maximum sound-confining effect in any environment is to adjust the volume of the directional speaker to a comfortable level, just above the level of ambient sound.
Sound Frequency and Amplitude

Sound is caused by vibration. When an object vibrates, it can cause the air around it to also vibrate. These vibrations can reach our ear, where they cause our ear drum to vibrate. Our ear drum is a small membrane that is connected by tiny bones to our cochlea. The cochlea translates the vibration to nerve impulses which travel to our brain, which then "hears" the sound.

Many things can cause an object to vibrate, including being struck like a drum, bowed like a violin, or plucked like a guitar. In general, a large object like a cello, a bass drum, or a truck will vibrate more slowly than a small object like a violin, cow bell, or a bicycle. We perceive slower vibrations to be lower pitches and faster vibrations to be higher pitches. The slowest vibrations that we can hear are about 40 wiggles per second, or 40 Hertz. The fastest vibrations we can hear are about 20,000 Hertz (Hz).

How does JSyn make sound?

In JSyn, oscillators output a stream of floating point numbers that go up and down. These numbers are converted to a voltage by your computer's sound card. This voltage is amplified and then passed to your speakers, where it causes magnets to vibrate. These magnets are connected to a cardboard diaphragm that causes the air to vibrate. You know what happens after that.

JSyn oscillators have a "frequency" port that allows us to change the number of times the output goes up and down per second. Let's set the frequency of our sineOsc to 500 vibrations per second:

    sineOsc.frequency.set( 500.0 );

Oscillators also have a port that controls how far up or down the numbers go (ultimately, how far the speaker diaphragm gets pushed in and out). This "amplitude" port controls how loud the sound is. Let's tell our oscillator to output numbers in the range of -0.4 to 0.4:

    sineOsc.amplitude.set( 0.4 );

Oscillators typically output a signal that is centered at zero and can go as low as -1.0 and as high as +1.0.
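The two ports above just scale the sine math the oscillator computes for every sample. JSyn itself is Java, but the underlying number stream can be sketched in a few lines of Python (the function name and defaults here are illustrative, not part of the JSyn API):

```python
import math

# Sketch of what a sine oscillator's "frequency" and "amplitude" settings
# do to the stream of output numbers (assumed helper, not JSyn code).
def sine_samples(frequency, amplitude, sample_rate=44100.0, seconds=0.01):
    n = int(sample_rate * seconds)
    # Each sample i corresponds to time i / sample_rate; the output goes
    # up and down `frequency` times per second, scaled by `amplitude`.
    return [amplitude * math.sin(2.0 * math.pi * frequency * i / sample_rate)
            for i in range(n)]

# 500 vibrations per second, confined to the range -0.4..+0.4, matching
# sineOsc.frequency.set(500.0) and sineOsc.amplitude.set(0.4) above:
samples = sine_samples(500.0, 0.4)
print(len(samples), max(samples), min(samples))
```

Doubling the frequency argument doubles how many up-down cycles fit in the same stretch of samples (a higher pitch); the amplitude argument only stretches the same shape vertically (a louder sound).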
mica, any of a group of hydrous potassium, aluminum silicate minerals. It is a type of phyllosilicate, exhibiting a two-dimensional sheet or layer structure. Among the principal rock-forming minerals, micas are found in all three major rock varieties—igneous, sedimentary, and metamorphic. Of the 28 known species of the mica group, only 6 are common rock-forming minerals. Muscovite, the common light-coloured mica, and biotite, which is typically black or nearly so, are the most abundant. Phlogopite, typically brown, and paragonite, which is macroscopically indistinguishable from muscovite, also are fairly common. Lepidolite, generally pinkish to lilac in colour, occurs in lithium-bearing pegmatites. Glauconite, a green species that does not have the same general macroscopic characteristics as the other micas, occurs sporadically in many marine sedimentary sequences. All of these micas except glauconite exhibit easily observable perfect cleavage into flexible sheets. Glauconite, which most often occurs as pelletlike grains, has no apparent cleavage. The names of the rock-forming micas constitute a good example of the diverse bases used in naming minerals: Biotite was named for a person—Jean-Baptiste Biot, a 19th-century French physicist who studied the optical properties of micas; muscovite was named, albeit indirectly, for a place—it was originally called “Muscovy glass” because it came from the Muscovy province of Russia; glauconite, although typically green, was named for the Greek word for blue; lepidolite, from the Greek word meaning “scale,” was based on the appearance of the mineral’s cleavage plates; phlogopite, from the Greek word for firelike, was chosen because of the reddish glow (colour and lustre) of some specimens; paragonite, from the Greek “to mislead,” was so named because it was originally mistaken for another mineral, talc. 
The general formula for minerals of the mica group is XY₂–₃Z₄O₁₀(OH, F)₂, with X = K, Na, Ba, Ca, Cs, (H₃O), (NH₄); Y = Al, Mg, Fe²⁺, Li, Cr, Mn, V, Zn; and Z = Si, Al, Fe³⁺, Be, Ti. Compositions of the common rock-forming micas are given in the table. Few natural micas have end-member compositions. For example, most muscovites contain sodium substituting for some potassium, and diverse varieties have chromium or vanadium or a combination of both replacing part of the aluminum; furthermore, the Si:Al ratio may range from the indicated 3:1 up to about 7:1. Similar variations in composition are known for the other micas. Thus, as in some of the other groups of minerals (e.g., the garnets), different individual pieces of naturally occurring mica specimens consist of different proportions of ideal end-member compositions. There are, however, no complete series of solid solutions between any dioctahedral mica and any trioctahedral mica.

Micas have sheet structures whose basic units consist of two polymerized sheets of silica (SiO₄) tetrahedrons. Two such sheets are juxtaposed with the vertices of their tetrahedrons pointing toward each other; the sheets are cross-linked with cations (for example, aluminum in muscovite), and hydroxyl pairs complete the coordination of these cations (figure from L.G. Berry, B. Mason, and R.V. Dietrich, Mineralogy: Concepts, Descriptions, Determinations, 2nd ed., copyright © 1983 by W.H. Freeman and Co., used with permission). Thus, the cross-linked double layer is bound firmly, has the bases of silica tetrahedrons on both of its outer sides, and has a negative charge. The charge is balanced by singly charged large cations (for example, potassium in muscovite) that join the cross-linked double layers to form the complete structure. The differences among mica species depend upon differences in the X and Y cations.
Although the micas are generally considered to be monoclinic (pseudohexagonal), there also are hexagonal, orthorhombic, and triclinic forms generally referred to as polytypes. The polytypes are based on the sequences and number of layers of the basic structure in the unit cell and the symmetry thus produced. Most biotites are 1M and most muscovites are 2M; however, more than one polytype is commonly present in individual specimens. This feature cannot, however, be determined macroscopically; polytypes are distinguished by relatively sophisticated techniques such as those employing X-rays.

[Table: polytype designation, crystal system, and number of layers per unit cell; one polytype, marked with an asterisk, has not been recorded for natural micas.]

The micas other than glauconite tend to crystallize as short pseudohexagonal prisms. The side faces of these prisms are typically rough, some appearing striated and dull, whereas the flat ends tend to be smooth and shiny. The end faces are parallel to the perfect cleavage that characterizes the group.

The rock-forming micas (other than glauconite) can be divided into two groups: those that are light-coloured (muscovite, paragonite, and lepidolite) and those that are dark-coloured (biotite and phlogopite). Most of the properties of the mica group of minerals, other than those of glauconite, can be described together; here they are described as pertaining simply to micas, meaning the micas other than glauconite. Properties of the latter are described separately later in the discussion.

The perfect cleavage into thin elastic sheets is probably the most widely recognized characteristic of the micas. The cleavage is a manifestation of the sheet structure described above. (The elasticity of the thin sheets distinguishes the micas from similarly appearing thin sheets of chlorite and talc.) The rock-forming micas exhibit certain characteristic colours. Muscovites range from colourless, greenish to blue-green to emerald-green, pinkish, and brownish to cinnamon-tan.
Paragonites are colourless to white; biotites may be black, brown, red to red-brown, greenish brown, and blue-green. Phlogopites resemble biotites but are honey brown. Lepidolites are nearly colourless, pink, lavender, or tan.

Biotites and phlogopites also exhibit the property termed pleochroism (or, more properly for these minerals, dichroism): when viewed along different crystallographic directions, especially using transmitted polarized light, they exhibit different colours or different absorption of light or both. The lustre of the micas is usually described as splendent, but some cleavage faces appear pearly. The minutely crystalline variety consisting of muscovite or paragonite (or both), generally referred to as sericite, is silky.

Mohs hardness of the micas is approximately 2½ on cleavage flakes and 4 across cleavage. Consequently, micas can be scratched in either direction with a knife blade or geologic pick. Hardness is used to distinguish micas from chloritoid, which also occurs rather commonly as platy masses in some metamorphic rocks; chloritoid, with a Mohs hardness of 6½, cannot be scratched with a knife blade or geologic pick.

Glauconite occurs most commonly as earthy to dull, subtranslucent, green to nearly black granules generally referred to as pellets. It is attacked readily by hydrochloric acid. The colour and occurrence of this mineral in sediments and sedimentary rocks formed from those sediments generally are sufficient for identification.

Micas may originate as the result of diverse processes under several different conditions. Their occurrences, listed below, include crystallization from consolidating magmas, deposition by fluids derived from or directly associated with magmatic activities, deposition by fluids circulating during both contact and regional metamorphism, and formation as the result of alteration processes (perhaps even those caused by weathering) that involve minerals such as feldspars.
The stability ranges of micas have been investigated in the laboratory, and in some associations their presence (as opposed to absence) or some aspect of their chemical composition may serve as a geothermometer or geobarometer. Distinct crystals of the micas occur in a few rocks, e.g., in certain igneous rocks and in pegmatites. Micas occurring as large crystals are often called books; these may measure up to several metres across. In most rocks, micas occur as irregular tabular masses or thin plates (flakes), which in some instances appear bent. Although some mica grains are extremely small, all except those constituting sericitic masses have characteristic shiny cleavage surfaces.

Glauconite is formed in marine environments. It can be found on seafloors where clastic sedimentation, which results from the relocation of minerals and organic matter to sites other than their places of origin, is lacking or nearly so. Although some glauconite has been interpreted to have been formed from preexisting layered silicates (e.g., detrital biotite), most of it appears to have crystallized from aluminosilicate gels, perhaps under the influence of biochemical activities that produce reducing environments.

The common rock-forming micas are distributed widely. The more important occurrences follow: Biotite occurs in many igneous rocks (e.g., granites and granodiorites), is common in many pegmatite masses, and constitutes one of the chief components of many metamorphic rocks (e.g., gneisses, schists, and hornfelses). It alters rather easily during chemical weathering and thus is rare in sediments and sedimentary rocks. One stage in the weathering of biotite has resulted in some confusion. During chemical weathering, biotite tends to lose its elasticity and become decolorized to silvery gray flakes. In a fairly common intermediate stage, weathered biotite is golden yellow, has a bronzy lustre, and may be mistaken by inexperienced observers for flakes of gold.
Phlogopite is rare in igneous rocks; it does, however, occur in some ultramafic (silica-poor) rocks. For example, it occurs in some peridotites, especially those called kimberlites, which are the rocks in which diamonds occur. Phlogopite also is a rare constituent of some magnesium-rich pegmatites. Its most common occurrence, however, is in impure limestones that have undergone contact metasomatism, a process through which the chemical composition of rocks is changed. Muscovite is particularly common in metamorphic gneisses, schists, and phyllites. In fine-grained foliated rocks, such as phyllites, the muscovite occurs as microscopic grains (sericite) that give these rocks their silky lustres. It also occurs in some granitic rocks and is common in complex granitic pegmatites and within miarolitic druses, which are late-magmatic, crystal-lined cavities in igneous rocks. Much of the muscovite in igneous rocks is thought to have been formed late during, or immediately after, consolidation of the parent magma. Muscovite is relatively resistant to weathering and thus occurs in many soils developed over muscovite-bearing rocks and also in the clastic sediments and sedimentary rocks derived from them. Paragonite is known definitely to occur in only a few gneisses, schists, and phyllites, in which it appears to play essentially the same role as muscovite. It may, however, be much more common than generally thought. Until fairly recently nearly all light-coloured micas in rocks were automatically called muscovite without checking their potassium:sodium ratios, so some paragonites may have been incorrectly identified as muscovites. Its weathering is essentially the same as that of muscovite. Lepidolite occurs almost exclusively in complex lithium-bearing pegmatites but has also been recorded as a component of a few granites. Glauconite, as noted above, is forming in some present-day marine environments. 
It also is a relatively common constituent of sedimentary rocks, the precursor sediments of which were apparently deposited on the deeper parts of ancient continental shelves. The name greensand is widely applied to glauconite-rich sediments. Most glauconite occurs as granules, which are frequently referred to as pellets. It also exists as pigment, typically as films that coat such diverse substrates as fossils, fecal pellets, and clastic fragments. Because of their perfect cleavage, flexibility and elasticity, infusibility, low thermal and electrical conductivity, and high dielectric strength, muscovite and phlogopite have found widespread application. Most “sheet mica” with these compositions has been used as electrical condensers, as insulation sheets between commutator segments, or in heating elements. Sheets of muscovite of precise thicknesses are utilized in optical instruments. Ground mica is used in many ways such as a dusting medium to prevent, for example, asphalt tiles from sticking to each other and also as a filler, absorbent, and lubricant. It is also used in the manufacture of wallpaper to give it a shiny lustre. Lepidolite has been mined as an ore of lithium, with rubidium generally recovered as a by-product. It is used in the manufacture of heat-resistant glass. Glauconite-rich greensands have found use within the United States as fertilizer—e.g., on the coastal plain of New Jersey—and some glauconite has been employed as a water softener because it has a high base-exchange capacity and tends to regenerate rather rapidly.
Tuesday, December 3, 2013
Monday, December 2, 2013

4th & 5th Grade TCAP Study Guide - ELA

- Plural Rules (Nouns that are more than 1)
- Most words add –s
- Words that end in ch, sh, x, or s add –es
- Words that end in y, change the y to an i then add es
- Some words change completely. Ex: man-men and woman-women
- Some words don’t change at all. Ex: fish, deer, sheep
- Possessive Rules (When something belongs to the noun)
- If the noun is singular (one), add ’s (Example: woman’s)
- If the noun is plural (two) AND ends in s, just add ’ (Example: travelers’)
- If the noun is plural (two) and doesn’t end in s, add ’s (Example: men’s)
- ***Sometimes you will have to figure out if the noun is plural or singular to make your choice.***
- Must match who they are referring to.
- Ex: He walked out the door and tripped on her own feet. *Her is wrong. It should be his.*
- Ex: You will enjoy the park because there are lots of things for them to do. *Them is wrong. It should be you.*
- The verb must make sense and match the rest of the sentence.
- TIP: If there is a helping verb (had, have, has), choose the verb with the “n” on the end. Ex: We had not seen that in years.
- TIP: Read each sentence and slash the ones that you know are incorrect. The best way to choose the correct verb is to see which one makes sense.
- Adjectives describe nouns.
- Add –er to compare two things. (Add more if the word is long!)
- Example: She was taller than him.
- Example: She was more beautiful than her friend.
- Add –est to compare more than two things. (Add most if the word is long!)
- Example: She was the tallest.
- Example: She was the most beautiful of all her friends.
- Adverbs describe verbs.
- Most end in –ly.
- Ex: He was really brave when the monster came at him.
- You cannot have two negatives in a sentence. Negatives include: neither, never, not, nothing, or any contraction with not.
- Two, Too, To
- Two- #2
- Too- Too much or Also
- To- Tells direction
- There, they’re, their
- There- a place
- They’re- they are
- Their- shows ownership
- Your, you’re
- Your- shows ownership
- You’re- you are
- Dates: January 12, 2011
- Items in a Series: I want bacon, eggs, and cheese.
- Before Conjunctions: I want to go to the store, but my mom will not take me.
- After Introductory Words: Even though we were tired, we still played for hours.
- Punctuation goes inside the quotation marks.
- Use commas to separate the quote from the speaker.
- Use a capital letter at the beginning of what is said.
- Example: Mom said, “Are you sure you want to go?”
- Example: “I am sure,” I said.
- Example: “Okay,” Mom said, “we will go.”
- Sentences can be combined the following ways:
- By combining the sentences with a conjunction
- By moving one part of the sentence to the beginning and making it an introductory phrase.

Fixing Run-On Sentences
- Fix run-on sentences by:
- Writing two new sentences.
- Combining the ideas correctly with a conjunction.
- Making one of the ideas an introductory phrase at the beginning.
- Common Abbreviations
- Avenue = Ave.
- Doctor = Dr.
- Street = St.
- Drive = Dr.
- Words that are made up of two small words. Each word helps determine the meaning of the compound word. Ex: mailbox, horseback
- To Entertain- telling a story with characters, a setting, and a plot.
- To Inform- gives the reader true information and facts.
- To Persuade- is trying to get the reader to do or buy something. Speeches are usually trying to persuade.
- Sentences that match the main idea and the rest of the paragraph.
- Step 1: Find the main idea of the paragraph.
- Step 2: Choose the sentence that matches the main idea.
- Step 1: Read the entire sentence looking for clues to what the word means.
- Step 2: Plug all the choices in and see which one makes sense.
- The person or people reading or listening to your writing.
- Step 1: What is the writing about?
- Step 2: Who needs to know this? or Who cares about this?
- Summary
- A 1-3 sentence recap of a passage or paragraph.
- Step 1: Find the main idea.
- Step 2: Underline important details.
- Step 3: Include the main idea and details in a complete sentence.
- Remember: Do this for each paragraph of a passage if the question asks for a summary of the entire passage.
- Fact and Opinion
- Fact: a statement that can be proven to be true.
- Opinion: a statement that cannot be proven and often includes how someone feels.
- Visuals and Graphics
- Sometimes people include visuals or graphics (pictures) with their writing.
- Step 1: The picture must match the main idea.
- Step 2: Choose the one that would help the writing the best.
- Main Idea
- What a story or passage is mostly about.
- Step 1: Underline the key words. (Only the important ones!)
- Step 2: Write one sentence with the key words.
- Step 3: Find the choice that best matches your sentence.
- Irrelevant Sentences
- Irrelevant means that it does not match or go along with the rest of the story.
- Step 1: Find the main idea.
- Step 2: Find the sentence that does not match the main idea.
- TIP: The irrelevant sentence may seem like it fits, but you have to be very careful.
- Concluding Sentences
- Concluding means the ending or very last sentence.
- Must match the main idea of the story.
- Never adds new details to the story.
- Transition Words
- Used at the end of a story or paragraph: therefore, finally, in conclusion, as a result
- Used when adding details to a story: in addition to, furthermore, similarly, however
- Remember: Plug all the choices into the sentence and see which one makes sense in the sentence AND in the story.
- Sequencing
- Step 1: Read all the steps or sentences.
- Step 2: Draw your dashes ____ ____ ____ ____
- Step 3: Read the sentences or paragraphs and put the number in the correct dash.
- Step 4: Find the answer that matches your dashes.
- Reliable Sources
- Reliable means it is good information to use and you can count on it being correct.
- Read all the choices and choose the best and most reliable source for the question.
- Atlas- book of maps
- Encyclopedia- gives information on people, places, history, and things.
- Newspaper- gives information on current and local events.
- Websites- websites that end in .gov, .org, or .edu are the most reliable
Moods and Feelings from Pictures
- Pictures can often make the viewer feel a certain way or put them in a certain mood.
- Look at the picture. Write down a few words that come to mind.
- Look at the choices. Mark off any that are completely wrong.
- Choose the best one by comparing it to what you thought about the picture and the picture itself.
- Synonyms- words that mean the same or nearly the same. Ex: sad and depressed
- Antonyms- words that mean the opposite. Ex: ancient and new
Sequence (Before and After)
- If a question asks you what happens before or after an event, follow these steps.
- Step 1: Find the event mentioned in the question. Underline it.
- Step 2: If the question asks what happened before, look in front of the underlined event.
- Step 3: If the question asks what happened after, look after the underlined event.
- Poetry
- Has lines and stanzas
- May rhyme
- Can tell a story
- Figurative Language
- Onomatopoeia- sound words. Ex: Crash, boom, howl
- Alliteration- the same beginning letter or sound. Ex: Sam strutted and Dan danced.
- Repetition- repeating words. Ex: Down, down, down the spider went.
- Simile- comparing two unlike things using like or as. Ex: She was as mad as a bull.
- Metaphor- comparing two unlike things without using like or as. Ex: When she is angry, she is a bull.
- Personification- giving human qualities to nonhuman things. Ex: The wind was crying my name.
- Hyperbole- an extreme exaggeration. Ex: This test is going to ruin me!
Point of View
- 1st person- I, me, or we. The character is telling you his thoughts and feelings.
- 2nd person- You
- 3rd person- They, them, someone’s name.
- Hint: A story may have I in it and someone’s name. The I wins (it is 1st person).
- Analogies
- Step 1: Find the relationship between the first two words.
- Are they the same?
Opposite?
- Are they part of a group?
- Step 2: Find the word that is needed to match the relationship.
- Example: person: house:: dog: _______
- A person lives in a house. So where does a dog live? A kennel or doghouse. Either one of those choices will complete the analogy.
Setting, Characters, Plot
- Setting- Where and when the story takes place. Ex: Outside at night.
- Characters- The people or animals in a story.
- Plot- The events in a story. The plot includes the problem (conflict) and solution (resolution).
- Conflict = problem
- Resolution = solution
- Theme
- The lesson that the story is trying to teach you. Example: It is better to give than to receive.
- Themes can be stated or implied.
- Stated- the theme is said in the story. You can find it and underline it.
- Implied- the theme is not said in the story. You have to use clues to figure it out.
- Citations
- Writers use citations to show where they learned information.
- Citations should include:
- Author’s Name
- Book Title
- Place where the book was published
- Publication Date
- Published means it was turned into a book and sold in stores.
Fiction vs Nonfiction
- Fiction: stories for fun. They have characters, settings, and plots.
- Nonfiction: stories that give you new facts and information.
Great Intervention/Test Prep Ideas!
http://teachinginroom6.blogspot.com/2012/02/test-prep-180-test-prep-stations.html
http://teachinginroom6.blogspot.com/2013/01/test-prep-180-comprehension-vs-writing.html
http://teachinginroom6.blogspot.com/2012/01/test-prep-180-comprehension-strategies.html
On the horizon
News from the natural world.
Tree plantations have often been touted as a tool for scrubbing carbon dioxide out of the atmosphere - at least for a while - to combat global warming. But a new study suggests that new tree plantations also could degrade soil and deplete groundwater, depending on their location.
Using trees as air cleaners is one approach countries can take to meet CO2 emissions targets under the 1997 Kyoto Protocol. Industrial countries party to the pact must reduce their CO2 emissions by an average of slightly more than 5 percent between 2008 and 2012.
Using field measurements and modeling studies, an international research team led by Duke University biologist Robert Jackson found that replacing grassland or farmland with evergreen or eucalyptus forests in relatively dry locations can lead to more acidic or saltier soils and to dried-up stream beds. Any water vapor the trees give off would not be enough to generate rain-bearing clouds capable of offsetting lost groundwater.
But plantations could be an environmental boon in some spots, such as southwestern Australia, where removal of trees led to saltier soils, or the US farm belt, where converting some cropland to forest would reduce pollution from pesticide and fertilizer runoff. The research appears in the current issue of the journal Science.
Astronomers received a new set of rings for Christmas. Using the Hubble Space Telescope, a team of scientists has discovered previously undetected rings and moons orbiting Uranus. The new discoveries lie outside the planet's better-known set of rings but inside the orbits of its major moons. The observations also show that the planet's innermost moons have changed their orbits in significant ways over the past decade.
When added to the 11 other rings that scientists have spotted around Uranus since 1977, the latest discoveries suggest that the planet has a very young set of rings and moons whose orbits are very unstable. Indeed, the team, led by Mark Showalter of the SETI Institute in Mountain View, Calif., calculates that the system has undergone substantial changes at least since the time of the dinosaurs and perhaps since the time of the Roman Empire. The discoveries suggest that Uranus hosts a ring-moon system "rivaling the other known ring-moon systems in its subtlety and complexity," the team reports. The research was published recently on Science Express, an online edition of the journal Science.
A common cushion moss known as Bridel may be about to shed its "common" image. Biologists are finding that the hardy species could become an effective tool to monitor climate change and other environmental shifts.
Researchers at the University of California at Santa Cruz and at Berkeley have conducted genetic studies of Bridel specimens taken from around the state and found that the Bridel population consists of two species with an overlapping geographical range. That range varies from the Mediterranean-like region of Southern California to the southern Cascade Mountains in northern California. Like their Bridel relatives found on every other continent except Antarctica, both species are extremely drought-tolerant and can survive high temperatures and high exposure to ultraviolet light. And the moss can live on the same patch of rock for decades.
The team notes that other mosses have been used as monitoring tools to track trends in heavy-metal pollution. But such mosses are unique to a specific region, such as northern Europe or the northeastern US. With further research, Bridel could yield information on global pollution patterns.
If researchers find that different species of Bridel in different parts of the world respond in the same way to changing environmental conditions, the moss becomes a useful global sensor. If the species respond differently, they might become an even richer source of information on regional changes, the team suggests. Their work appears in the current issue of the Proceedings of the National Academy of Sciences.
Hindsight bias: the tendency, upon hearing about research findings, to think that they knew it all along. In contrast, the goal of scientific research is to predict what will happen in advance.
Different methods are used to test the effects of each method.
Applied research: has a clear, practical application.
Basic research: explores questions that are of interest to psychologists but are not intended to have immediate, real-world applications.
Hypotheses: express a relationship between two variables (variables are things that can vary among the participants in the research). Hypotheses grow out of theories.
Independent variable: will produce a change in the dependent variable.
Dependent variable: depends on the independent variable.
Theory: aims to explain some phenomenon and allows researchers to generate testable hypotheses with the hope of collecting data that support the theory.
Operational definitions: definitions of the variables; an explanation of how things would be or are measured.
Validity: when research measures what the researcher set out to measure; it is accurate.
Reliability: when research can be replicated; it is consistent.
Sampling: the process by which the participants are selected.
Sample: the participants or group of people selected.
Population: the group from which the sample will be selected; includes anyone or anything that could possibly be selected to be in a sample.
Representative sample: when a sample can represent the larger group from which it has been selected.
Random selection: every member of the population has an equal chance of being selected; it increases the likelihood that the sample represents the population and that one can generalize the findings to the larger population.
Stratified sampling: a process that allows a researcher to ensure that the sample represents the population.
Experiment-laboratory and field
Laboratory experiments: are conducted in a lab, a highly controlled environment; the control this allows is the main advantage.
Field experiments: are conducted out in the world; they are more realistic.
Confounding variables-participant and situation relevant
Confounding variable: any difference between the experimental and control conditions, except for the independent variable, that might affect the dependent variable.
Random assignment: the process by which participants are put into a group, experimental or control; each participant has an equal chance of being placed into any group. It limits participant-relevant confounding variables: using random assignment diminishes the chance that the participants in the two groups differ in any meaningful way.
Making the environment similar for the groups limits situation-relevant confounding variables and helps make the groups as equal as possible for the experiment.
Experimenter bias: the unconscious tendency for researchers to treat members of the experimental and control groups differently to increase the chance of confirming their hypothesis. NOTE: purposely distorting data is FRAUD, not an unconscious tendency.
Double-blind procedure: eliminates experimenter bias; occurs when neither the participants nor the researcher knows to which group the participants have been assigned, so neither is able to affect the outcome of the research.
Single-blind procedure: minimizes demand characteristics; occurs when only the participants do not know to which group they have been assigned.
Response or participant bias: when the participants have the tendency to behave a certain way due to the experiment. Demand characteristics are cues about the purpose of the study.
Social desirability bias: a kind of response bias; the tendency to give politically correct answers.
Hawthorne effect: merely selecting a group of people on whom to experiment has been determined to affect the performance of that group, regardless of what is done to those individuals.
Placebo: giving a group an inert but similar/identical substance or treatment; participants may still show a psychological effect merely from the thought of having received the real treatment.
Correlations-Positive and negative
Correlation: expresses a relationship between two variables without ascribing cause.
Positive correlation: means that the presence of one thing predicts the presence of the other.
Negative correlation: means that the presence of one thing predicts the absence of the other.
Survey method: involves asking people to fill out surveys. One can no longer control for participant-relevant confounding variables, and obtaining a random sample when one sends out a survey is difficult because relatively few people will actually send it back, and these people are unlikely to make up a representative sample.
Naturalistic observation: researchers opt to observe their participants in their natural habitats without interacting with them at all. The goal is to get a realistic and rich picture of the participants' behavior; control is sacrificed.
Case study method: used to get a full, detailed picture of one participant or a small group of participants. Findings cannot be generalized to a larger population.
Descriptive statistics: describe a set of data.
Frequency distribution: a summarization of data that tells you how many.
Measures of central tendency-mean, median, mode: a group of statistical measures that attempt to mark the center of a distribution.
Extreme scores or outliers: an exaggerated measurement that changes the outcome of the data. An outlier distorts the mean, so the median is the best measure to use.
Positive versus negative skew: a positive skew means a few unusually high scores pull the mean above the median; a negative skew means a few unusually low scores pull the mean below the median.
Measures of variability-range, standard deviation, variance: descriptive statistical measures that attempt to depict the diversity of the distribution.
Z scores: measure the distance of a score from the mean in units of standard deviation. Below the mean = negative z scores; above the mean = positive z scores.
Normal curve: a theoretical bell-shaped curve for which the area under the curve lying between any two z scores has been predetermined.
Correlation coefficient: measures the relationship between two variables. Positive: presence of one predicts presence of the other. Negative: presence of one predicts absence of the other.
Scatter plot: a graph that pairs values.
Line of best fit/regression line: the line drawn through the scatter plot
that minimizes the distance of all the points from the line.
Inferential statistics: determine whether or not findings can be applied to the larger population from which the sample was selected.
Sampling error: the extent to which the sample differs from the population.
P value: helps determine how significant a result is. Lower than .05 = a statistically significant difference; higher than .05 = not a significant difference. The p value can never be equal to zero. The p value helps determine the significance of a difference in the result of the experiment.
Institutional Review Board (IRB): any type of academic research must first be proposed to this ethics board. It helps experiments run in the most ethical manner and the safest way for the test subjects.
Participation must be voluntary.
Participants must know that they are involved in research and give their consent.
Participants' privacy must be protected; when anonymity cannot be granted, the researcher will not identify the source of any of the data.
Debriefing: after the study, participants must be told the purpose of the study and provided with ways to contact the researchers about the results.
Counterbalancing: prevents order effects; switching the order of conditions so the experiment does not have flaws from sequence.
American Psychological Association (APA): has set up guidelines to follow when wanting to run experiments.
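The notes above on measures of central tendency, outliers, and z scores can be illustrated with a short Python sketch. The scores are invented for illustration; the point is that one extreme score drags the mean but barely moves the median, and that a z score is just distance from the mean in standard-deviation units:

```python
import statistics

# Invented example scores; one extreme score (outlier) is appended to show
# how the mean is distorted while the median barely moves.
scores = [70, 72, 75, 78, 80]
with_outlier = scores + [200]

print(statistics.mean(scores), statistics.median(scores))              # 75 75
print(statistics.mean(with_outlier), statistics.median(with_outlier))  # mean jumps to ~95.8, median only 76.5

# z score: distance of a score from the mean in standard-deviation units.
mean = statistics.mean(scores)
sd = statistics.pstdev(scores)   # population standard deviation
z = (80 - mean) / sd             # positive, because 80 is above the mean
print(round(z, 2))               # about 1.36
```

This is why, as the notes say, the median "is the best way to go" when a distribution contains outliers.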
The Constitution of the U.S. is what made us a country. Without it, there would be no U.S. The Constitution is important for several reasons: it established the three branches of government, it affects our country, it created a balance of power, and it is worth asking what life would be like if there were no Constitution.
The Constitution made three different branches of government: Executive, Legislative, and Judicial. The Executive branch is the president and the vice president, who serve for four years. The Legislative branch makes the laws, and the Judicial branch is the judges.
The Constitution makes us a more organized country. It tells us that the states share resources such as money and the military, and that we all measure and weigh the same. The Constitution makes sure that all the states are one.
The balance of power is what makes sure that we will never be ruled by a king. It also gives us the right to vote. The balance of power gives citizens of the US the same amount of power as one another, so no more slaves.
If there were no Constitution, each state would be its own country, and we would have a very dysfunctional system. We would not have the right to vote, and the law would be decided by a king. We would have a king.
The Constitution is important for several reasons: it stopped Shays' Rebellion, it made all the states one, it gave us the right to vote, and it made the three branches of government. The Constitution is a big part of our history. We would be in a very different place without it. There would be no branches of government, no balance of power, and no precedents.
As a welder, you will use your expertise to fabricate all types of metal objects, repair metal items, and resurface worn machinery parts. You must know the two basic types of metal and be able to provide initial identification. While you will primarily work with the ferrous metals iron and steel, you also need to be able to identify and become familiar with the nonferrous metals coming into more use each day.
This course presents an introductory explanation of the basic types of metal and provides initial instruction on using simple tests to establish their identity. When you have completed this course, you will be able to:
Metals can initially be divided into two general classifications, and Steelworkers work with both: ferrous and nonferrous metals. Ferrous metals are those composed primarily of iron (atomic symbol Fe) and iron alloys. Nonferrous metals are those composed primarily of some element or elements other than iron, although nonferrous metals or alloys sometimes contain a small amount of iron as an alloying element or as an impurity.
Ferrous metals include all forms of iron and iron-base alloys, with small percentages of carbon (steel, for example), and/or other elements added to achieve desirable properties. Wrought iron, cast iron, carbon steels, alloy steels, and tool steels are just a few examples. Ferrous metals are typically magnetic.
Iron ores are rocks and minerals from which metallic iron can be economically extracted. The ores are usually rich in iron oxides and vary in color from dark grey, bright yellow, and deep purple to rusty red. Iron ore is the raw material used to make pig iron, which is one of the main raw materials used to make steel. Ninety-eight percent of the mined iron ore is used to make steel. Iron is produced by converting iron ore to pig iron using a blast furnace. Pig iron is the intermediate product of smelting iron ore with coke, usually with limestone as a flux.
Pig iron has very high carbon content, typically 3.5-4.5%, which makes it very brittle and not useful directly as a material except for limited applications. From pig iron, many other types of iron and steel are produced by the addition or deletion of carbon and alloys. The following briefly presents different types of iron and steel made from iron. Steelworker Advanced will present additional information about their properties.
Of all the different metals and materials that Steelworkers use, steel and steel alloys are by far the most used and therefore the most important to study. The development of the economical Bessemer process for manufacturing steel revolutionized the American iron industry. Figure 1 shows the container vessel used for the process. With economical steel came skyscrapers, stronger and longer bridges, and railroad tracks that did not collapse.
Steel is manufactured from pig iron by decreasing the amount of carbon and other impurities and adding specific and controlled amounts of alloying elements during the molten stage to produce the desired composition.
Figure 1 Example of a Bessemer Converter.
The composition of a particular steel is determined by its application and the specifications developed by the following:
Carbon steel is a term applied to a broad range of steel that falls between the commercially pure ingot iron and the cast irons. This range of carbon steel may be classified into four groups:
High-strength steels are covered by American Society for Testing and Materials (ASTM) specifications.
Stainless steels are classified by the American Iron and Steel Institute (AISI) and fall into two general series:
Alloy steels derive their properties primarily from the presence of some alloying element other than carbon, but alloy steels always contain traces of other elements as well. One or more of these elements may be added to the steel during the manufacturing process to produce the desired characteristics.
Alloy steels may be produced in structural sections, sheets, plates, and bars for use in the as-rolled condition, and these steels can obtain better physical properties than are possible with hot-rolled carbon steels. These alloys are used in structures where the strength of material is especially important, for example in bridge members, railroad cars, dump bodies, dozer blades, and crane booms. The following list describes some of the common alloy steels:
Nonferrous metals contain either no iron or only insignificant amounts used as an alloy, and are nonmagnetic. The following list will introduce you to some of the common nonferrous metals that Steelworkers may encounter and/or work with. Additional information about their properties and usage is available in Steelworker Advanced.
When working with lead, take proper precautions! Lead dust, fumes, or vapors are highly poisonous!
When you are selecting a metal to use in fabrication, to perform a mechanical repair, or even to determine if the metal is weldable, you must be able to identify its basic type. A number of field identification methods can be used to identify a piece of metal. Some common methods are surface appearance, spark test, chip test, magnet test, and occasionally a hardness test.
Sometimes you can identify a metal simply by its surface appearance. Table 1 indicates the surface colors of some of the more common metals.
Table 1 Surface Appearance of Some Common Metals
|Metal||Color (unfinished, unbroken surface)||Color (freshly filed surface)||Color and Structure (newly fractured surface)|
|Aluminum||Light gray||White||White: finely crystalline|
|Brass and Bronze||Reddish-yellow, yellow-green, or brown||Reddish-yellow to yellowish-white||Red to yellow|
|Copper||Reddish-brown to green||Bright copper color||Bright red|
|Iron, Cast-gray||Dull gray||Light silvery gray||Dark gray: crystalline|
|Iron, Cast-white||Dull gray||Silvery white||Silvery white: crystalline|
|Iron, Malleable||Dull gray||Light silvery gray||Dark gray: finely crystalline|
|Iron, Wrought||Light gray||Light silvery gray||Bright gray|
|Lead||White to gray||White||Light gray: crystalline|
|Monel metals||Dark gray||Light gray||Light gray|
|Nickel||Dark gray||Bright silvery white||Off-white|
|Steel, Cast and Steel, Low-carbon||Dark gray||Bright silvery gray||Bright gray|
|Steel, High-carbon||Dark gray||Bright silvery gray||Light gray|
|Steel, Stainless||Dark gray||Bright silvery gray||Medium gray|
As you can see by studying the table, a metal's surface appearance can help you identify it, and if you are unsure, you can obtain further information by studying a fresh filing or a fresh fracture. If a surface examination does not provide you with enough information for a positive identification, it should give you enough information to place the metal into a class.
In addition to the color of the metal, distinctive marks left from manufacturing also help in determining the identity of the metal. Inspecting the surface texture by feel may also provide another clue to its identity.
When visual clues from surface appearance, filings, fractures, manufacturing marks, or textural clues from the feel of the surfaces do not give enough information to allow positive identification, other tests become necessary. Some are complicated and require equipment Seabees do not usually have.
However, the following are a few additional simple tests, which are reliable when done by a skilled person: spark test, chip test, magnetic tests, hardness test.
You perform the spark test by holding a sample of the unidentified material against an abrasive wheel and visually inspecting the spark stream. This test is fast, economical, convenient, easily accomplished, and requires no special equipment. As you become a more experienced Steelworker, you will be able to identify the sample metals with considerable accuracy. You can use this test to identify scrap-salvaged metal, which is particularly important when you are selecting material for cast iron or cast steel heat treatment.
When you hold a piece of iron or steel (ferrous metals) in contact with a high-speed abrasive wheel, small particles of the metal are torn loose so rapidly that they become red-hot. These small particles of metal fly away from the wheel and glow as they follow a trajectory called the carrier line, which is easily followed with the eye, especially when observed against a dark background. The sparks (or lack of sparks) given off can help you identify the metal. Features you should look for include:
Refer to Figure 2 through Figure 8 for illustrations of the various terms used in referring to the basic spark forms produced during spark testing.
Figure 2 Example of spark testing term: STREAM.
Figure 3 Example of spark testing term: SHAFT.
Figure 4 Example of spark testing term: FORK.
Figure 5 Example of spark testing term: SPRIGS.
Figure 6 Example of spark testing term: DASHES.
Figure 7 Example of spark testing term: APPENDAGES.
Steels that have the same carbon content but include different alloying elements are difficult to identify; the alloys have an effect on the carrier lines, the bursts themselves, or the forms of the characteristic bursts in the spark picture. The alloying element may slow or accelerate the carbon spark, or make the carrier line lighter or darker in color.
For example:
You can perform spark testing with either a portable or a stationary grinder, but in either case, the outer rim speed of the wheel should be not less than 4,500 feet per minute, with a clean, very hard, rather coarse abrasive wheel. Each point is necessary to produce a true spark.
When you conduct a spark test, hold the metal on the abrasive wheel in a position that will allow the carrier line to cross your line of vision. By trial and error, you will soon find what pressure you need in order to get a stream of the proper length without reducing the speed of the grinder. In addition to reducing the grinder's speed, excessive pressure against the wheel can increase the temperature of the spark stream, which in turn increases the temperature of the burst and gives the appearance of a higher carbon content than is actually present.
Use the following technique when making the test:
An abrasive wheel on a grinder traveling at high speed requires respect, and you need to review some of the safety precautions associated with this tool (Figure 9).
Figure 9 Example of a grinder's OSHA-designated safety points.
Vibration can cause the wheel to shatter, and when an abrasive wheel shatters, it can be disastrous for personnel standing in line with the wheel. Grinding wheels require frequent reconditioning. Dressing is the term you use to describe the cleaning of the working face of an abrasive wheel. Proper dressing breaks away dull abrasive grains, smoothes the surface, and removes grooves. The wheel dresser shown in Figure 10 is used for dressing grinding wheels on bench and pedestal grinders.
Figure 10 Typical wheel dresser.
Refer now to Figure 11 through Figure 16 for examples of spark testing results for specific identified material.
Low-carbon and cast steel
Figure 11 Example of low-carbon and cast steel spark stream.
High-carbon steel
Figure 12 Example of high-carbon steel spark stream.
Gray cast iron
Figure 13 Example of gray cast iron spark stream.
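The 4,500 feet-per-minute minimum rim speed quoted above can be converted into a minimum spindle speed for a given wheel diameter. A quick sketch of the arithmetic (the wheel diameters are illustrative examples, not values from this handbook):

```python
import math

def min_rpm(wheel_diameter_in, surface_fpm=4500):
    """Minimum spindle RPM needed to reach the required rim (surface) speed.

    Rim speed per revolution is the circumference: pi * diameter.
    With diameter in inches, circumference in feet is pi * d / 12,
    so RPM = surface speed (ft/min) / circumference (ft).
    """
    circumference_ft = math.pi * wheel_diameter_in / 12
    return surface_fpm / circumference_ft

# A larger wheel covers more rim distance per turn, so it needs fewer RPM.
print(round(min_rpm(8)))   # 8-inch wheel: about 2149 RPM
print(round(min_rpm(6)))   # 6-inch wheel: about 2865 RPM
```

The same relationship explains the caution in the text: pressing hard enough to slow the grinder drops the rim speed below the minimum and distorts the spark picture.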
Because of their similar spark pictures, you must use some other method to distinguish monel from nickel.
Figure 14 Example of monel and nickel spark streams.
Figure 15 Example of stainless steel spark stream.
Figure 16 Example of wrought iron spark stream.
One way to become proficient in identifying ferrous metals by spark testing is to practice by testing yourself in the blind. Gather an assortment of known metals for testing. Make individual samples so similar that size and shape will not reveal their identities. Number each sample and prepare a master list of correct names with corresponding numbers. Then, without looking at the number on the sample, spark test it and call out its name to someone assigned to check your identification against the names and numbers on the list. Repeating this self-testing practice will give you some of the experience you need to become proficient in identifying individual samples.
Another simple field test you can use to identify an unknown piece of metal is the chip test. You perform the chip test by removing a small amount of material from the test piece with a sharp, cold chisel. The material you remove can vary from small, broken fragments to a continuous strip. The chip may have smooth, sharp edges, may be coarse-grained or fine-grained, or may have saw-like edges. The size of the chip is important in identifying the metal, as well as the ease with which you can accomplish the chipping. Refer to Table 2 for information to help you identify various metals by the chip test.
Table 2 Metal Identification by Chip Test
|Aluminum and Aluminum Alloys||Smooth with saw tooth edges. A chip can be cut as a continuous strip.|
|Brass and Bronze||Smooth with saw tooth edges. These metals are easily cut, but chips are more brittle than chips of copper. Continuous strip is not easily cut.|
|Copper||Smooth with saw tooth edges where cut. Metal is easily cut as a continuous strip.|
|Iron, Cast-white||Small brittle fragments.
Chipped surfaces are not smooth.|
|Iron, Cast-gray||About 1/8 inch in length. Metal is not easily chipped; therefore, chips break off and prevent smooth cut.|
|Iron, Malleable||Vary from 1/4 to 3/8 inch in length (larger than chips from cast iron). Metal is tough and hard to chip.|
|Iron, Wrought||Smooth edges. Metal is easily cut or chipped, and a chip can be made as a continuous strip.|
|Lead||Any shape may be obtained because the metal is so soft that it can be cut with a knife.|
|Monel||Smooth edges. Continuous strips can be cut. Metal chips easily.|
|Nickel||Smooth edges. Continuous strips can be cut. Metal chips easily.|
|Steel, Cast and Steel, Low-carbon||Smooth edges. Metal is easily cut or chipped, and a chip can be taken off as a continuous strip.|
|Steel, High-carbon||Show a fine-grain structure. Edges of chips are lighter in color than chips of low-carbon steel. Metal is hard, but can be chipped in a continuous strip.|
A magnet test is another method you can use to aid in a metal's general identification. Remember: ferrous metals are iron-based alloys and normally magnetic; nonferrous metals are nonmagnetic. This test is not 100 percent accurate because some stainless steels are nonmagnetic, but it can aid in the first differentiation of most metals. When dealing with stainless steel, there is no substitute for experience.
Hardness is the property of a material to resist permanent indentation. One simple way to check for hardness in a piece of metal is to file a small portion of it. If it is soft enough to be machined with regular tooling, the file will cut it. If it is too hard to machine, the file will not cut it. This method will indicate whether the material being tested is softer or harder than the file, but it will not tell exactly how soft or hard it is. The file can also be used to determine the harder of two pieces of metal; the file will cut the softer metal faster and easier.
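Taken together, the magnet test and the file test amount to a rough first-pass decision procedure. The sketch below only illustrates that logic; the category labels are illustrative, not the handbook's, and it inherits the caveat that some stainless steels are nonmagnetic:

```python
def rough_classification(is_magnetic, file_cuts_easily):
    """Very coarse first-pass metal classification from two field tests.

    Magnetic usually means ferrous, but this is only a first differentiation:
    some stainless steels are nonmagnetic, so a nonmagnetic result does not
    rule out stainless. The file test then separates softer stock (the file
    cuts it) from hardened stock (the file skates over it).
    """
    if is_magnetic:
        return "ferrous, machinable" if file_cuts_easily else "ferrous, hardened"
    return "nonferrous (or nonmagnetic stainless)"

print(rough_classification(True, True))    # ferrous, machinable
print(rough_classification(False, True))   # nonferrous (or nonmagnetic stainless)
```

In practice these two tests only narrow the field; surface appearance, spark testing, and chip testing are still needed for a positive identification.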
The file method should be used only in situations when the exact hardness is not required. This test has the added advantage of needing very little in the way of time, equipment, and experience. Because there are several methods of measuring exact hardness, the hardness of a material is always specified in terms of the particular test used to measure this property. Rockwell, Vickers, and Brinell are some of the methods of testing. Of these tests, Rockwell is the one most frequently used, and requires a Rockwell hardness testing machine. The basic principle used in the Rockwell test is that a hard material can penetrate a softer one, and the amount of penetration is measured and compared to a scale. For ferrous metals, usually harder than nonferrous metals, a diamond tip is used for depth penetration measurement and the hardness is indicated by a Rockwell C number. On nonferrous metals, which are softer, a metal ball is used for surface indentation measurement and the hardness is indicated by a Rockwell B number. Consider lead and steel for an idea of the property of hardness. Lead can be scratched with a pointed wooden stick, but steel cannot because it is harder than lead. You can get a more complete explanation of the various methods used to determine the hardness of a material from commercial books or books located in your base library. This handbook has introduced you to the basics of the different types of metals and the simple field and shop methods you can use to identify them. From here, you can begin to build on your experiences to become a seasoned steelworker considered a resident expert on metals.

1. What term is used to describe the equivalent of the Steelworker rating in civilian construction?
2. A material must be primarily composed of _____ to be considered a ferrous metal.
3. Ferrous metals are typically _____.
4. Which type of iron is one of the main raw materials used to make steel?
5.
What characteristic of pig iron limits its use?
6. What material do Steelworkers use the most?
7. Cast iron is any iron containing greater than _____ alloy.
8. What process is used to produce malleability in cast iron?
9. What group of steel is best suited for the manufacture of crane hooks and axles?
10. What group's specifications cover high-strength steels?
11. What group's specifications cover stainless steels?
12. What stainless steel is normally nonmagnetic?
13. What common alloy steel is used to make high-quality hand tools?
14. Which of the following metals is nonferrous?
15. What combination of elements in proper proportion make bronze?
16. What action does the letter T signify when used in conjunction with a numbering system that classifies different aluminum alloys?
17. What manufacturing marks can you look for when a metal's color does not provide positive identification?
18. When applying the spark test to a metal, you notice the spark stream has white shafts and forks only. What does this condition indicate about the metal under test?
19. What metal produces a spark stream about 25 inches long with small and repeating sparklers of small volume that are initially red in color?
20. Which of the following metals produces the shortest length spark stream?
21. You perform the chip test by removing a small amount of material from the test piece with a _____.
22. You can depend on a magnetic test for 100% accuracy to determine a ferrous metal.

Copyright © David L. All Rights Reserved
Students will get a better understanding of air pressure by seeing it at work.
- 2 rubber bands for each jar
- strong plastic bags
- Ask the students to fill their bags with air by blowing them up or by pulling them through the air.
- Tie the air-filled bags, upside down, to each jar with its mouth over the opening of the jar. Wind a string very tightly around the bag and jar a few times without crossing ridges of the jar, and tie it (see #1 below).
- Ask the students to try to press down on the bags, lean on them, and rest objects on top of them. What happens? What are some explanations for this? (The air in the bag, combined with the air in the jar, pushes back against the pressure.) What other things act like this (air mattresses, tires...)?
- Have students untie the bags and put them down inside the jars with the mouth of each bag folded over the mouth of the jar and again tie them on tightly (see #2 below).
- When all are ready, ask them (at the same time) to hold the jars and pull out the bags. What happens? (It should be difficult to pull the bag out.) What are some explanations for this? (Air pressure is holding the bag back.)
Each year in the month of February, America commemorates Black History Month. Black History Month is a celebration of the achievements of important figures and events in African American history; the month is celebrated in schools, communities and workplaces. With issues of racial prejudice and discrimination still sadly prevalent in the world we live in today, educating our young people on the importance of African American history is crucial. One organisation that made sure it highlighted black history throughout February is the learning platform Flocabulary. Flocabulary is an online platform that uses educational hip-hop music to engage students with learning on topics across the curriculum. Throughout Black History Month, Flocabulary featured a series of music videos for parents who wanted to engage their children on the importance of black history; these included videos on civil rights, Martin Luther King Jr., the Voting Rights Act and many more. Educating our children on these important topics in an engaging and thought-provoking way is crucial for teaching them about past and present issues in the lives of African American people. Other events that took place throughout Black History Month in America included webcasts, concerts and book talks across the country. This website – linked to the National Museum of African American History and Culture – offers detailed information on the history of Black History Month and the events that they run. In addition to this, they also provide links with helpful information for teachers who want to educate their students on black history, and videos and audio clips that cover all areas of black history. In the UK, Black History Month is celebrated every October. This website provides information on black history in the UK and all of the events that are taking place across the country throughout the year to celebrate black history and culture.
“I am so pleased to support Black History Month which recognizes, rewards and celebrates the contribution made to our society over many years by the African and Caribbean communities” – Theresa May

Whilst having one month dedicated to black history is very important, the lessons and values to be learnt from Black History Month should not be confined to one month alone. The struggles and achievements of figures in black history, and the teachings of equality and acceptance of one another, should be remembered and continued by all of us, every day, as we carry out our day-to-day lives. You can read more about black history and culture at www.blackhistorymonth.org.uk.
The Roman economy collapsed in the third century. Barbarian incursions and civil wars disrupted trade and agriculture. Emperors worsened the crisis by debasing the currency. The disruption caused inflation and stagnation. Diocletian attempted to remedy the ailing economy in 301. The emperor reformed taxation, reformed the currency, and imposed price controls. The Edict of Maximum Prices set wages and prices throughout the empire. However, market forces quickly overwhelmed the edict and the economy failed to recover its former glory. Romans flouted the law despite the penalty of death, because compliance stifled commerce. Diocletian worked diligently to restore Rome's sagging economy. His predecessors had minted their own coins and reduced their value to pay their bills. By 284, Rome's currency was worth dramatically less than a century before. In some parts of the empire, people abandoned currency in favor of barter. The government even took tax payments in kind as opposed to currency. The emperor tried to restore the value of the currency using silver. However, the treasury did not produce the new coins in sufficient numbers to impact the economy. Currency problems led to hyperinflation. In response, Diocletian issued the Edict of Maximum Prices. The first portion of the edict doubled the value of Roman coins and established penalties for speculators. Anyone caught profiteering on Roman currency faced the death penalty. The emperor also banned merchants from passing on the costs of doing business to their customers. The emperor moved from coinage and business costs to product pricing. He imposed price controls on over 1,000 goods and products subdivided into 32 categories. These ranged from food to clothes to travel charges to wages. It even included slaves and wild beasts such as lions. Unfortunately, the emperor set the values much too low. Diocletian had no concept of what goods cost.
Additionally, he did not perceive regional pricing differences. Predictably, the edict hurt commerce. Merchants ignored the law whenever possible because they could not turn a profit. Some turned to barter while others created a black market. Compliant citizens could not afford to produce or sell goods. Workers saw their earnings collapse under the weight of hyperinflation and wage controls. The Edict of Maximum Prices failed miserably. It depressed the economy further, created a black market, increased inflation and speculation, and led to social instability. There are some reports of violence breaking out in response to the economic pressure. On top of this, it appears many regions ignored the law entirely. As a result, imperial enforcement proved uneven and disproportionate. Wage and price controls rarely succeed. Unless the population supports the provisions, evasion and inflation result. Diocletian learned this lesson in the early fourth century. People openly ignored the Edict of Maximum Prices even under penalty of death. In areas of compliance, the edict further retarded the economy and fed inflation. In the end, Diocletian's efforts to restore the Roman economy failed.
Country Case Studies and Links by Patricia Dinsmore

THE GERMAN WELFARE SYSTEM

The Germans have traditionally regarded their model as a "Sonderweg", that is, a middle-of-the-road approach between free-market liberalism and state-centered socialism. The welfare system is an integrated part of Germany's "social market economy." Particularly significant is the fact that in Germany, more than in most countries, welfare policies have been mechanisms of economic governance. That is, welfare policies are designed to enhance employment effects by withdrawing surplus labor from the economy. In short, early retirement schemes or long university programs serve to constrain the supply of labor when unemployment rates are high. This has prompted critics to charge that Germany has the oldest students, youngest retirees and longest-vacationing workers in the world.
- The German Labor Market
- Philosophical Background
- Features of the German Welfare System

The German Welfare Model has been a pioneer in welfare policy since the Bismarck era. In the 1880s, Bismarck created a social insurance system that was stratified along occupational and class lines. This pay-as-you-go system is based on contributions through which an employee acquires claims to later benefits. The modern welfare system underwent periods of expansion, such as under the Nazi regime (pension benefits), and especially after World War II under successive Christian Democratic-led and Social Democratic governments. Both major German parties have been committed to expansive social protection and to politics promoting job security and codetermination (labor's participation in company decision making) in enterprise. The Christian conservative parties (CDU/CSU) contain a powerful free-market-oriented wing that also supports an individualistic, democratic culture rooted in political and economic liberalism.
These "neo-liberal" elements have been important in constraining the more egalitarian trends, especially through their insistence that social intervention should be "market conforming". It was between the 1950s and the 1970s that these influences on West Germany's social state took place. German social policies have been characterized by their distinctive social priorities and not by high levels of expenditure. What has been important about German social policy is the unusual policy profile that placed a greater emphasis on social security transfer payments and less emphasis on the role of directly provided public social services. Instead of setting national minimum standards for all of its citizens, the social state consisted of a wide range of work-oriented social insurance schemes that contained strong elements of compulsory self-help. The purpose of this type of system was to provide the majority of the workforce with a high degree of security and predictability by securing an individual's position in the income hierarchy or their social status acquired through work. There were two goals of the German welfare state: security for all of its citizens and predictability in economic development outcomes. A compromise had to be made between Labor and Capital to achieve these two goals. They agreed to limit the movement of capital across national borders, to create institutions/frameworks to work out differences between Labor and Capital in order to prevent a breakdown in their relationship, and to practice Institutional Self-Regulation. That meant the government would let the Unions/Labor and the Employers/Capital work things out amongst themselves and would not intervene. The government agreed to support the decisions made by Labor and Employers with legislation. This is the only area of government involvement.
The government felt this was the best solution because unions and employers have similar economic needs. This Labor/Capital Model has been envied throughout the world. Finally, the government promised to keep the supply of unskilled labor low and to create high wages. The government planned on achieving this by investing heavily in education.

The German Labor Market

A social partnership developed between Unions and Employers. The Unions promised wage restraint and labor peace, while the Employers, in return, committed themselves to sharing productivity gains in the form of added employment and wage increases. This system was very effective, raising the general standard of living as well as the economic security of the workforce. Migrant guest workers ("Gastarbeiter") were brought in, notably from Yugoslavia and Turkey, to overcome labor shortages that tended to appear in sectors with low wages and monotonous jobs that were unskilled and dirty. These were the jobs the Germans were increasingly unwilling to take. It is interesting to note in this context that the Christian-conservative government at the time thought it preferable to import foreign workers to overcome labor shortages than to encourage women to enter the workforce – a marked difference from the countries of northern Europe. As a result, a two-tier labor market began to emerge in Germany, consisting of a well-organized, socially well-protected, highly skilled and highly paid market of (typically male) German workers and a more marginalized low-skills sector of foreign or female workers. There were three traditions that influenced the German Welfare System. The first was Catholic social philosophy, which stressed the importance of self-help as well as the important role of the family. This gave precedence to voluntary organizations over state agencies.
Because of this practice, charitable organizations, including the "social arm" of the major churches and the Labor Movement Workers Welfare Association, came to play a more important role in the provision of social services than in most other welfare states. The second tradition was that of conservative state-paternalism, under which the state took responsibility for keeping its people healthy. Thirdly, there is also a more liberal tradition promoting a market economy and a free enterprise system. The German social and economic model, including the welfare system, is what we call "corporatist," which is defined typically by interest group cooperation rather than competition as in the pluralist Anglo-American systems. Corporatist systems tend to be highly stratified, delivering specific benefits to targeted groups. The overall goal is to achieve a balance in society by avoiding social competition that would lead to groups of winners and losers and thus threaten the stability of the state (as happened in Germany's Weimar Republic in the 1920s). Ideologically, corporatism has been influenced by both religious and state traditions emphasizing social harmony, order and stability. Economic actors (e.g., craftsmen, employers, professionals, workers, etc.) are typically members of centralized associations, which engage in collective bargaining with each other to (a) set standards governing the industry, (b) influence legislation, (c) determine social provisions, and (d) pass regulatory rules. In short, not the state, as in France or the US, but rather the autonomous associations in a respective segment of the economy determine the qualifications necessary for a given profession and the minimum wage to be paid in that sector.

Features of the German Welfare System

There are four features to the German Welfare Model.
These are
- the Social solidarity insurance model, where different funds support one another (cross-subsidization);
- economic governance intended to reduce labor costs, provide for a high-skills labor force, and absorb surplus workers (early retirement, longer vacations, shorter work weeks);
- a low-skilled workforce in occupations Germans are reluctant to pursue (usually serviced by migrant workers);
- under-developed social assistance schemes for those who fall through the cracks of the social insurance model.

In a contribution-based system, benefits are supplied in one of two ways: service in-kind transfers and cash transfers. An example of service in-kind transfers is the housing system. The qualifications for receiving housing include meeting a certain social profile (e.g., number of children, age, marital status) and income level. Social housing is either provided by municipal governments (usually in social democratically governed cities), or through voluntaristic housing associations. An example of cash transfers can be found in the German health care system. Public (association-run) health insurance providers supply cash vouchers for the services of a doctor in the private sector. Cash transfers are also paid out through the associations in the form of pensions, retirement and unemployment benefits. There are several structural problems with the German welfare system. These include the fact that the system has recently shown too little innovation in industry. Germany has a highly skilled workforce producing high-quality items, yet its labor force is also the most expensive, with the highest labor costs in the world, making German products increasingly less competitive. Capital flight is also a problem as German companies increasingly invest abroad. Germany's cash-transfer/benefit social system has the added disadvantage that German recipients can consume these benefits abroad (e.g., retired Germans living in Spain).
Germany's chief problem is that its reliance on contributions, especially for pensions and unemployment, coupled with unfavorable demographics (an aging population) and persistent high unemployment, makes it increasingly difficult to finance the system. Constrained by European Union rules and fiscal prudence, the government can no longer as easily service these social security deficits through government transfers from the budget as in the past. As a result, social security taxes (contributions in the form of payroll taxes) have steadily increased, making German labor increasingly expensive and pricing low-skill/low-income workers out of the market. This in turn has added to the unemployment problem, creating a vicious circle of sorts. Finally, the system has difficulties coping with an economy that demands flexibilization and change. As a result, an increasing number of German workers are no longer employed in well-protected, life-long stable occupations but in short-term jobs, or atypical employment. To the extent that the contribution system rewards stable employment and long-term contributions, an increasing number of workers tends to fall through the cracks of the system. As a result, a large but shrinking segment of highly paid and well-protected, typically older male workers in traditional industries (the in-group) is confronted with an increasing number of people (out-groups) in less-regulated, non-traditional, often low-skilled service occupations (typically young people, women, and migrant workers). Many of the aforementioned problems have been exacerbated by German reunification after the fall of the Berlin Wall in 1989. The steady reduction of public expenditures and efforts to reduce taxation in the 1970s and 1980s have been reversed as massive subsidies have been directed towards the imploding East German economy. Personal dependency on state support has grown markedly, especially in the East, where the standards of work skills were not nearly as high as in the West.
This has led to a surge of unemployment among East Germans, compounded by an increase in job losses in the West during the recession in the mid-1990s. Germany is in a state of transition between its post-war social and economic mold and a more free-market-oriented system. As economic conditions have improved in recent years, the new Social Democratic government under Chancellor Schröder has embarked on a series of (unpopular) reforms of the pension and tax system. Reforms in Germany's multi-level governance and highly consensus-oriented system (state level, federal level) are notoriously difficult. As of yet, it remains to be seen how successful the SPD-Green government will be. Some say "only Nixon could go to China"; perhaps only a leftist-Green government can get away with market-oriented reforms. Chapter 2 in Vic George and Peter Taylor-Gooby (1996). European Welfare Policy: Squaring the Welfare Circle. St. Martin's: New York. See also: MISSOC Country Tables
What is binding? Binding is the act of holding a group of pages together in a booklet or book. Many methods of binding exist; the most common use gluing, sewing, or stitching with staples, or a combination of these techniques. Binding increases the durability, value and look of the book.

When to use each type of binding? Different types of binding make sense based on three factors: 1. your budget, 2. the quantity of books to be produced, 3. the thickness of the book. First, if the project requires a modest budget, the binding options that make most sense include pad binding, saddle stitch, and wiro binding. If the project has a mid-range budget, consider inter-screw, saddle sewn and oversewn. Higher-end binding techniques include perfect binding, lay flat and case binding. Second, if the quantity of books needed is very small, consider methods that involve manual labour or have a low setup cost: pad binding, saddle sewn, oversewn, saddle stitch, inter-screw and wiro. With medium quantities, saddle stitch remains an inexpensive option, and PUR or perfect binding becomes more affordable. Finally, the thickness of the book, or number of pages, often helps determine what type of book binding to select. For example, saddle stitching is typically for thin books of 48pp or less. Mid-size books between 5mm and 15mm thick work well with sewing, wiro binding, perfect binding and case-bound books. Thick books are often wiro bound, case bound, or inter-screw bound.
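The budget and thickness rules of thumb above can be sketched as a small selection routine. This is an illustrative sketch only: the function name, thresholds, and category labels are hypothetical, chosen to mirror the guidance in the text rather than any industry standard.

```python
# Illustrative sketch of the binding-selection rules of thumb described above.
# The thresholds and labels are hypothetical, not industry standards.

def suggest_bindings(budget: str, thickness_mm: float) -> list[str]:
    """Suggest binding methods for a given budget tier and book thickness."""
    by_budget = {
        "modest": ["pad binding", "saddle stitch", "wiro"],
        "mid": ["inter-screw", "saddle sewn", "oversewn"],
        "high": ["perfect binding", "lay flat", "case binding"],
    }
    if thickness_mm < 5:          # thin booklets (roughly 48pp or less)
        by_thickness = {"saddle stitch", "pad binding", "wiro"}
    elif thickness_mm <= 15:      # mid-size books
        by_thickness = {"saddle sewn", "oversewn", "wiro",
                        "perfect binding", "case binding"}
    else:                         # thick books
        by_thickness = {"wiro", "case binding", "inter-screw"}
    # Keep only budget-appropriate methods that also suit the thickness.
    return [m for m in by_budget[budget] if m in by_thickness]

print(suggest_bindings("modest", 3))   # thin, low-budget booklet
print(suggest_bindings("high", 10))    # mid-size, higher-end book
```

In practice a printer would also weigh quantity, as the text notes; the sketch keeps to the two factors that map cleanly onto fixed categories.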
As Richard Feynman described it, energy is the currency of the universe. If you want to speed it up, slow it down, change its position, make it hotter or colder, bend it, break it, whatever, you’ll have to pay for it (or be paid to do it). This is the first of a series of experiments in which you will investigate the role of energy in changes in a system. If you grasp one end of a rubber band and pull on the other, you realize that the stretched rubber band differs from the same band when it is relaxed. Similarly, if you compress the spring in a toy dart gun by exerting a force on it, you know that the state of the compressed spring is different from that of the relaxed spring. By exerting a force on the object through some distance you have changed the energy state of the object. We say that the stretched rubber band or compressed spring stores elastic energy – the energy account used to describe how an object stores energy when it undergoes a reversible deformation. This energy can be transferred to another object to produce a change – for example, when the spring is released, it can launch a dart. It seems reasonable that the more the spring is compressed, the greater the change in speed it can impart to the toy dart. If we want to quantify the amount of energy stored by a spring when it is deformed, we must first study the relationship between the force applied and the extent to which the length of the spring is changed. Determine the relationship between the applied force and the deformation of an elastic object (spring or rubber band). Determine an expression for the elastic energy stored in spring or rubber band that has been compressed or stretched. Sensors and Equipment This experiment features the following Vernier sensors and equipment.
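For an ideal (Hookean) spring, the applied force is proportional to the deformation, F = kx, and the stored elastic energy is the area under the force-deformation graph, E = ½kx². The analysis this experiment leads to can be sketched as follows; the data values here are invented for illustration, standing in for readings from the force sensor.

```python
# Fit F = k*x to (deformation, force) data and compute stored elastic energy.
# The data points below are invented for illustration; real values would come
# from the sensors used in the experiment.

deformations = [0.00, 0.02, 0.04, 0.06, 0.08]  # meters
forces       = [0.00, 0.50, 1.00, 1.50, 2.00]  # newtons

# Least-squares slope through the origin: k = sum(F*x) / sum(x*x)
k = (sum(f * x for f, x in zip(forces, deformations))
     / sum(x * x for x in deformations))

def elastic_energy(k: float, x: float) -> float:
    """Energy stored in a Hookean spring stretched (or compressed) by x."""
    return 0.5 * k * x ** 2

print(f"spring constant k = {k:.1f} N/m")
print(f"energy at x = 0.08 m: {elastic_energy(k, 0.08):.3f} J")
```

A rubber band will generally not fit this linear model as well as a spring, which is part of what the experiment is designed to reveal.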
NASA Hubble Space Telescope's crisp view has allowed an international team of astronomers to apply a previously unproven technique (astrometry) for making a precise measurement of the mass of a planet outside our solar system. The Hubble results place the planet at 1.89 to 2.4 times the mass of Jupiter, our solar system's largest world. Previous estimates, about which there are some uncertainties, place the planet's mass between 1.9 and 100 times that of Jupiter's. A Hubble set of instruments called Fine Guidance Sensors (FGSs), which are also used to point and stabilize the free-flying observatory, measured a small "side-to-side" wobble of the red dwarf star Gliese 876. This is due to the tug of an unseen companion object, designated Gliese 876b (Gl 876b) and first discovered in 1998 with ground-based telescopes. Gl 876b is only the second extrasolar planet (after HD 209458) for which a precise mass has been determined, and it is the first whose mass has been confirmed by using the astrometry technique. Now that this technique has been proven viable for space-based observatory planet confirmations, it will be used in the future to nail down uncertainties in the masses of dozens of extrasolar planets discovered so far. The observations were made by George F. Benedict and Barbara McArthur (University of Texas at Austin), members of the international observing team led by Thierry Forveille (Canada-France-Hawaii Telescope Corporation, Hawaii and Grenoble Observatory, France). The results are being published in the December 20 issue of Astrophysical Journal Letters. Benedict had to observe the star's yo-yo motion for over two years, using a total of 27 orbits worth of Hubble Space Telescope observations. "Making these kinds of measurements of a star's movement on the sky is quite difficult," Benedict emphasizes. "We're measuring angles (0.5 milliarcsecond) equivalent to the size of a quarter seen from 3,000 miles away."
The target planet, Gl 876b, is the more distant of two planets orbiting Gliese 876. It was originally discovered by two groups, led by Xavier Delfosse (Geneva/Grenoble Observatory) and Geoffrey Marcy (U.C. Berkeley and San Francisco State University). Marcy's group discovered a smaller planet closer to Gliese 876 a year later, in 1999. These initial discoveries were made by measuring the star's subtle "to-and-fro" speed. This is called the radial velocity technique. Benedict and McArthur combined the astrometric information with the radial velocity measurements (made in the planet's discovery) to determine the planet's mass by deducing its orbital inclination. If astronomers don't know how the planet's orbit is tilted with respect to Earth, they can only estimate a minimum mass for the planet. But without knowing more, the mass could be significantly larger if the orbit was tilted to a nearly face-on orientation to Earth. The star would still move towards and away from us slightly, even though it had a massive companion. "You can't hide massive companions from the Hubble Space Telescope," says McArthur. "The planet's orbit turns out to be tilted nearly edge-on to Earth. This verifies it is a low-mass object." "There are a few more stars where we can do this kind of research with Hubble," Benedict says. "Most candidate stars are too distant. Astronomers can look forward to doing these kinds of studies on literally hundreds of stars with the planned NASA Space Interferometry Mission, called SIM, which will be far more precise than Hubble. "Knowing the mass of extrasolar planets accurately is going to help theorists answer lots of questions about how planets form," Benedict adds. "When we get hundreds of these mass determinations for planets around all types of stars, we're going to see what types of stars form certain types of planets. Do big stars form big planets and small stars form small planets?" 
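The geometry described above is simple to state quantitatively: radial velocities yield only the minimum mass m sin i, so once astrometry pins down the inclination i, the true mass follows by division. A minimal sketch; the numerical values below are illustrative, not the team's published fit.

```python
import math

# Radial-velocity surveys measure only m*sin(i), the minimum planet mass.
# Astrometry supplies the orbital inclination i, converting the minimum
# mass into a true mass. Values below are illustrative only.

def true_mass(m_sin_i: float, inclination_deg: float) -> float:
    """True companion mass (same units as m_sin_i) given the inclination."""
    return m_sin_i / math.sin(math.radians(inclination_deg))

m_min = 1.89  # Jupiter masses, a minimum mass consistent with the text

# A nearly edge-on orbit: the true mass barely exceeds the minimum.
print(true_mass(m_min, 84.0))

# A nearly face-on orbit would have implied a far more massive companion,
# which the Hubble astrometry ruled out:
print(true_mass(m_min, 5.0))
```

This is why the quoted mass range (1.89 to 2.4 Jupiter masses) sits so close to the radial-velocity minimum: the measured orbit is nearly edge-on.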
Measuring stellar wobbles on the sky has been used to search for planets for decades. But extremely high precision and telescope optical stability are required. The Hubble FGSs are the first astrometric tool to accomplish this ultra-precise kind of measurement for an extrasolar planet. The gas giant planet orbiting the sunlike star HD 209458 is the very first planet to have its mass verified by using transit and radial velocity data. This was only possible because the planet was discovered to be passing in front of the star every four days, slightly dimming the star's light. This is proof the orbit is edge-on, yielding a mass that agrees with the lower-limit estimate of 0.7 Jupiter masses.
The Markov property is about independence, specifically about the future development of a process being independent of anything that has happened in the past, given its present value. A random process X possesses the Markov property, and is called a Markov chain, if
|P(Xn+1 = j | Xn = i, Hn) = P(Xn+1 = j | Xn = i),|
where Hn denotes the history of the process up to time n. For the simple random walk,
|P(Xn+1 = i+1 | Xn = i, Xn-r = k)|
is just equal to
|P(Jn+1 = 1 | Xn = i, Xn-r = k) = p,|
since Jn+1 is independent of anything that happened previously. The Markov property also holds for SRW with barriers. But there are many more examples.

Example: Weather. Sunny or Cloudy. Today's weather affects tomorrow's, but yesterday's is already irrelevant to tomorrow's forecast.

The probability P(Xn+1 = j | Xn = i) is called a transition probability and we use the notation pij. In most of the examples we consider, the pij do not depend on n; such processes are termed time-homogeneous. Any random process has a fixed set of values which it can take. There could be finitely many or infinitely many; they could be numerical or descriptive (e.g. Sunny). The collection of possible values is called the state space, S, and the possible values are termed states. The transition probability pij is defined for all i and j in S, though may be 0 in many cases. We can assemble the pij into a matrix P, called the transition matrix of X.

Example: the transition matrix of a SSRW with a reflecting boundary at 0 and an absorbing one at 4 is

The elements of a transition matrix must lie between 0 and 1. In addition, since each transition must take the chain somewhere, the row sums must be equal to 1. Knowing today's weather, the transition probabilities give information about tomorrow's. But what of the day after tomorrow? If today is sunny: We can say, then, that
|P(X2 = S | X0 = S) = ∑j=S,C pSj pjS.|
In general, the Law of Total Probability shows that the same holds for more than two states:
|P(X2 = j | X0 = i) = ∑k pik pkj.|
This is the (i,j)th entry of the matrix P2.
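The multi-step probabilities can be checked numerically. A minimal sketch in plain Python (no libraries), using a two-state weather chain with pSS = 0.6 and pCC = 0.7 for concreteness:

```python
# Two-state weather chain: states S (sunny) and C (cloudy).
# Using pSS = 0.6 and pCC = 0.7 for concreteness; each row sums to 1.
P = [[0.6, 0.4],
     [0.3, 0.7]]

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# The n-step transition matrix is the n-th matrix power of P.
P2 = matmul(P, P)
P3 = matmul(P2, P)

# Entry (0, 0) is P(X2 = S | X0 = S) = pSS*pSS + pSC*pCS
#                                    = 0.6*0.6 + 0.4*0.3 = 0.48.
print(P2[0][0])
print(P3[0][0])
```

Printing higher powers shows the entries within each column drawing together, the convergence the notes describe.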
P(X2 = j | X0 = i) is a two-step transition probability; we denote it by pij(2). We see that the two-step transition matrix P(2) is equal to P2. The Chapman-Kolmogorov equations state that
|P(m+n) = P(m) P(n),|
from which we deduce that
|P(Xn = j | X0 = i) = (Pn)ij.|
In principle we only need to find the powers of P to evaluate the distribution of Xn for all n. Example: in the weather model suppose pSS = 0.6 and pCC = 0.7. Then the 1-step, 2-step and 3-step transition matrices are
|P = ( 0.6 0.4 ; 0.3 0.7 ), P2 = ( 0.48 0.52 ; 0.39 0.61 ), P3 = ( 0.444 0.556 ; 0.417 0.583 ),|
with rows and columns ordered S, C and semicolons separating rows. Within each column the values are getting closer. Given a k × k square matrix P there exist k pairs (λ, v), where λ is a number and v a vector, such that
|Pv = λv.|
We term λ an eigenvalue, v an eigenvector of P. Any vector can be expressed as a linear combination of eigenvectors of P. The eigenvalues of a transition matrix may be real or complex, but will all lie in or on the unit circle. Find the eigenvalues of P by solving the equation
|det (P - λI) = 0,|
where I is the k × k identity matrix. Then, for each solution λ, solve linear equations to find the corresponding v.
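The weather example can be checked numerically; a sketch with rows and columns ordered Sunny, Cloudy:

```python
import numpy as np

# Weather chain with pSS = 0.6 and pCC = 0.7 (rows/columns ordered S, C).
P = np.array([[0.6, 0.4],
              [0.3, 0.7]])

# Chapman-Kolmogorov: the n-step transition matrix is the nth power of P.
P2 = np.linalg.matrix_power(P, 2)
P3 = np.linalg.matrix_power(P, 3)
print(P2)   # approximately [[0.48, 0.52], [0.39, 0.61]]
print(P3)   # approximately [[0.444, 0.556], [0.417, 0.583]]

# Eigenvalues of a transition matrix lie in or on the unit circle,
# and one of them is always exactly 1.
eigvals = np.linalg.eigvals(P)
print(eigvals)   # one eigenvalue is 1, the other is 0.3
```

Within each column the entries of P2 and P3 are indeed drawing together, as the notes observe.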
Notice that the eigenvectors v are only unique up to a scalar multiple, since cv is also an eigenvector for any scalar c. Eigenvectors represent directions rather than points. When an eigenvector of P is subjected to the linear transformation which P represents, the resulting vector is in the same direction as the original. Construct a matrix Λ which has the eigenvalues of P down the major diagonal and zeroes everywhere else. (Such matrices are called diagonal matrices.) Construct also a matrix V by writing the eigenvectors in the columns, in the same order as the corresponding eigenvalues in Λ. These definitions can be assembled as PV = VΛ, so that P = VΛV-1 and hence Pn = VΛnV-1. Then Λn is diagonal, with entries λjn. A transition matrix always has one eigenvalue equal to 1, with corresponding eigenvector (1, 1, . . . , 1)T. Usually (see later for conditions) all the other eigenvalues will lie strictly inside the unit circle. If |λj| < 1 then |λj|n → 0 as n → ∞. For most transition matrices the limit of Λn is a matrix with a single 1 on the diagonal, making the limit of Pn easy to find. Luckily we only need the matrix algebra (a) to show that a limit exists, and (b) to find P(Xn = j | X0 = i) for finite n. If all we need to do is find the limiting probabilities, there is a shortcut. Using the Law of Total Probability,
|P(Xn+1 = j) = Σi P(Xn+1 = j | Xn = i) P(Xn = i).|
Assuming we know that P(Xn = j) converges to some limit or other (call it πj) as n → ∞, we have
|πj = Σi πi pij.|
In matrix notation,
|πT = πT P.|
This is usually easy to solve. The solution π is called the equilibrium probability vector of X. Example: for the weather model with pSS = 0.6, pCC = 0.7, we have
|πS = πS pSS + πC pCS = 0.6πS + 0.3πC|
|πC = πC pCC + πS pSC = 0.4πS + 0.7πC|
As may be seen, the solution is
|πS = 3/7, πC = 4/7.|
Note that the vector π is always scaled so that Σi πi = 1, as it is a probability vector. Exercise: A rat runs around a maze as shown. Each time it changes room it picks an exit from its current room entirely at random, independently of where it has been before.
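A sketch of finding the equilibrium vector numerically, using the fact that an equilibrium probability vector is a left eigenvector of P for eigenvalue 1, i.e. an ordinary (right) eigenvector of the transpose of P:

```python
import numpy as np

# Weather chain (rows/columns ordered S, C).
P = np.array([[0.6, 0.4],
              [0.3, 0.7]])

# A left eigenvector of P is a right eigenvector of P transpose.
eigvals, eigvecs = np.linalg.eig(P.T)
v = eigvecs[:, np.argmin(np.abs(eigvals - 1.0))].real

# Eigenvectors are only defined up to a scalar multiple;
# rescale so the entries sum to 1, making it a probability vector.
pi = v / v.sum()
print(pi)   # approximately [3/7, 4/7] = [0.4286, 0.5714]
```

Dividing by the (possibly negative) sum also fixes the sign that the eigenvalue routine happens to return.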
Let Xn be the room it occupies after the nth room change. Find the transition matrix of X and the limit of P(Xn = A). Example: take a simple random walk with absorbing barriers at 10 and 0. The walk is sooner or later absorbed, so P(Xn = i) → 0 unless i is one of the absorbing states. If X0 is close to 10, it is more likely that absorption takes place at 10 than at 0, whereas if X0 is close to 0 the reverse is true. The limiting value of P(Xn = 0) therefore depends on the value of X0, even for large n. A Markov chain is called irreducible if, starting from any one of the states, it is possible to get to any other state (not necessarily in one jump). If there are any absorbing states, the chain is not irreducible. (The converse is not true: a chain can fail to be irreducible without having any absorbing states.) Consider a simple random walk with reflecting barriers above and below. If the walk starts in state 0 it will always be in an even state at even times and an odd state at odd times. In other words, P(Xn = j) will not converge; the effect of the starting point does not die away as n → ∞. A state i has period d if, given that X0 = i, we can only have Xn = i when n is a multiple of d. In the simple random walk example all states have period 2. We call i periodic if it has some period > 1. If X is irreducible, either all states are periodic or none are. The transition matrices of periodic Markov chains have eigenvalues on the unit circle. Example: a gambler plays roulette, staking £1 each turn. Each turn either the £1 is lost or £35 is won. Xn is the amount of money the gambler has after the nth spin. Then any state (amount of money) is periodic with period 36, as can be checked by considering Xn (mod 36). Suppose X is a Markov chain with only finitely many possible states. If it is irreducible and aperiodic, then X is ergodic.
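The period-2 behaviour can be seen directly from powers of the transition matrix; a sketch using a made-up three-state walk with reflecting barriers at 0 and 2, in the spirit of the example above:

```python
import numpy as np

# SSRW on {0, 1, 2} with reflecting barriers at both ends:
# from 0 the walk must go to 1, from 2 it must go to 1,
# and from 1 it goes to 0 or 2 with probability 1/2 each.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])

# Starting in state 0, the chain is in an even state at even times and
# an odd state at odd times, so P(Xn = 0) oscillates and never converges.
for n in range(1, 7):
    Pn = np.linalg.matrix_power(P, n)
    print(n, Pn[0, 0])   # alternates 0.0 (odd n) and 0.5 (even n)

# Periodic chains have eigenvalues on the unit circle besides 1 itself:
print(np.linalg.eigvals(P))   # contains -1
```

The eigenvalue -1 is exactly what stops the powers of P from settling down.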
We have: Theorem (Ergodic Theorem). If the discrete-time Markov chain X is ergodic, with transition matrix P, then there is exactly one probability vector π which satisfies
|πT P = πT,|
and moreover
|P(Xn = j | X0 = i) → πj.|
(The proof is too advanced for this course.) The fact that πT P = πT means that π is a stationary distribution for X: if X0 is not fixed but random, and P(X0 = i) = πi for each i, then P(X1 = i) = πi for all i, and similarly for X2, X3, etc. If we can find any probability vector π which satisfies the equation, we know it is the right answer. For example, if we can find π such that
|πi pij = πj pji for all i and j|
(the detailed balance condition), then summing over i gives Σi πi pij = πj Σi pji = πj, so π satisfies the equation. Example: frog on lilypads. The states of any Markov chain can be grouped together. We say that any two states i and j belong to the same communicating class if it is possible, starting from i, to get to j and, starting from j, to get back to i. A communicating class is classified as recurrent if, having entered the class, the Markov chain can never leave it, and transient otherwise. (Note: this definition is only valid when there are finitely many states. If there are infinitely many the situation can be more complicated.) Each absorbing state forms a communicating class on its own. When the state space is finite the chain must eventually enter a recurrent communicating class, which it can then never leave, so we can apply the Ergodic Theorem to obtain limiting probabilities. Example: children's playground. We need to decide in advance whether the situation being modelled is suited to a Markov model and whether the model should be time-homogeneous. But when new entrants keep the age profile stable we can aggregate individuals so that age-dependent rates are smoothed out. Once we accept that a Markov chain model is appropriate, it is easy to fit. Having observed xt, t = 1, 2, ..., n, we define ni as the number of times the chain was in state i, and nij as the observed number of transitions from i to j, then use nij/ni as the estimate of pij. Testing goodness of fit: both assumptions (the Markov property and time-homogeneity) are hard to test objectively.
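A sketch of the nij/ni estimator on a made-up observed sequence for the weather chain:

```python
from collections import Counter

# Observed sequence x1, ..., xn (S = sunny, C = cloudy); toy data.
xs = list("SSCCCSCCSS")

# n_i counts visits to state i, excluding the final observation,
# which starts no transition; n_ij counts observed i -> j transitions.
n_i = Counter(xs[:-1])
n_ij = Counter(zip(xs[:-1], xs[1:]))

# Estimate p_ij = n_ij / n_i.
p_hat = {(i, j): n_ij[(i, j)] / n_i[i] for (i, j) in n_ij}
print(p_hat)
```

For this sequence the estimates are pSS = 2/4 = 0.5 and pCC = 3/5 = 0.6, and each estimated row sums to 1, as a transition matrix must.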
Quite easy in Excel using lookup tables. See second lab sheet.
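The lab sheet works in Excel; as an alternative sketch, the same lookup-table idea for simulating the weather chain in Python:

```python
import random

random.seed(42)

# Transition probabilities for the weather chain (pSS = 0.6, pCC = 0.7).
P = {"S": {"S": 0.6, "C": 0.4},
     "C": {"C": 0.7, "S": 0.3}}

def step(state):
    # Sample the next state by inverting the cumulative distribution,
    # which is what an Excel lookup table does.
    u = random.random()
    total = 0.0
    for nxt, prob in P[state].items():
        total += prob
        if u < total:
            return nxt
    return nxt  # guard against floating-point round-off

# Simulate 10,000 days; the fraction of sunny days should approach
# the equilibrium probability piS = 3/7, roughly 0.43.
state, sunny = "S", 0
for _ in range(10_000):
    state = step(state)
    sunny += state == "S"
print(sunny / 10_000)
```

The long-run frequency of each state in a single simulated path matching the equilibrium vector is exactly what the Ergodic Theorem promises.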
If you're trying to get actual images of exoplanets, it helps to look at M-dwarfs, particularly young ones. These stars, from a class that makes up perhaps 75 percent of all the stars in the galaxy, are low in mass and much dimmer than their heavier cousins, meaning the contrast between the star's light and that of orbiting planets is sharply reduced. Young M-dwarfs are particularly helpful, especially when they are close to Earth, because their planets will have formed recently, making them warmer and brighter than planets in older systems. The trick, then, is to identify young M-dwarfs, and it's not always easy. Such a star produces a higher proportion of X-rays and ultraviolet light than older stars, but even X-ray surveys have found it difficult to detect the less energetic M-dwarfs, and in any case, X-ray surveys have studied only a small portion of the sky. Astronomers at UCLA now have hopes of using a comparative approach, working with the Galaxy Evolution Explorer satellite, which has scanned a large part of the sky in ultraviolet light. These data are compared to optical and infrared observations to identify young stars that fit the bill for possible exoplanet detection. So far the results are good. Of the 24 candidates identified with these methods, 17 turn out to show signs of stellar immaturity. The stars may be too young and low in mass to show up in X-ray surveys, but the Galaxy Evolution Explorer data seem effective at finding M-dwarfs less than 100 million years old. We can hope to add, then, to the tiny number of exoplanets that have been directly imaged, a useful adjunct to existing observing methods. Direct imaging can help us with large planets in the outer reaches of their solar systems, planets that would thus far have eluded Doppler methods. That helps us flesh out our view of complete planetary systems. 
And yes, WISE (Wide-field Infrared Survey Explorer) is in the hunt here as well, helping us identify candidates from the M-dwarf category that would make good imaging targets. WISE can find young, nearby stars that are still surrounded by planetary debris disks, a fertile hunting ground for new planetary imaging. Putting the tools together across the spectrum should make it possible to find close young planets whose properties should help us in our studies of solar system formation. And we can expect release of the first 105 days of WISE data later this month. As to the Galaxy Evolution Explorer, it was launched back in 2003 with a mission to observe distant galaxies in ultraviolet light. Now operating in extended mission mode, GALEX has been conducting an ultraviolet all-sky survey intended to produce a map of galaxies in formation, helping us see how our own galaxy evolved. Turning its ultraviolet capabilities to the study of exoplanets in young solar systems gives us a new technique for finding imaging targets. The paper is Rodriguez et al., "A New Method to Identify Nearby, Young, Low-Mass Stars," Astrophysical Journal Vol. 727, No. 2 (2011), p. 62 (abstract). [Top image: Galaxy Evolution Explorer looks at Andromeda. The wisps of blue making up the galaxy's spiral arms are neighborhoods that harbor hot, young, massive stars. Meanwhile, the central orange-white ball reveals a congregation of cooler, old stars that formed long ago. Now scientists are using GALEX data to hunt for young, planet-bearing red dwarfs near the Sun. Credit: NASA/JPL-Caltech.] This post originally appeared on Centauri Dreams.
If you have an arrhythmia that causes your heart to beat too fast or too slow, you may feel lightheaded or dizzy. This happens because your heart cannot pump blood effectively during excessively fast or slow heart rates. The ineffective pumping action decreases your blood pressure, reducing the amount of blood that reaches your brain. The sensation of lightheadedness is a result of this lack of blood flow to the brain. If your blood pressure drops too low, you may feel that you are about to pass out. This sensation is called presyncope. Syncope is the medical term for a temporary loss of consciousness (passing out). When is lightheadedness not caused by an arrhythmia? Dizziness can be caused by conditions other than arrhythmia. For this reason, your doctor will try to find out whether your dizziness is caused by a heart condition, medicine, or other things. Other causes of lightheadedness include hyperventilation, panic or anxiety attacks, prolonged standing, and excessive fluid loss caused by problems such as vomiting or diarrhea. Many of the medicines used to treat heart conditions, such as beta-blockers, calcium channel blockers, angiotensin-converting enzyme (ACE) inhibitors, and diuretics, can lower the blood pressure excessively and result in lightheadedness. In general, medicine-induced lightheadedness frequently occurs soon after you stand up because of a drop in blood pressure that happens when you stand (orthostatic hypotension). In contrast, lightheadedness due to an arrhythmia can occur even when you are sitting or reclining. Syncope (say "SING-kuh-pee") refers to a sudden loss of consciousness that doesn't last long. Syncope may be the first sign that you have an arrhythmia. And it is a very worrisome symptom for several reasons: - Fainting can result in a serious injury (for example, if you faint while climbing stairs or driving). 
- You faint because your brain did not get enough oxygen to function, which may be a warning sign that you have a serious medical condition. An arrhythmia can cause syncope in the same way that it causes lightheadedness (presyncope). Your heart cannot pump blood effectively during excessively fast or slow heart rates, reducing the amount of blood that reaches your brain. With syncope, though, the arrhythmia causes such a dramatic drop in the blood pressure that the brain doesn't receive enough blood to keep you awake. So you lose consciousness. For an arrhythmia to cause syncope, your heart rate must be extremely fast or extremely slow, or you must also have some other heart condition.
Chapter 13 Artificial Intelligence 2 Thinking Machines A computer can do some things better --and certainly faster--than a human can: Adding a thousand four-digit numbers Counting the distribution of letters in a book Searching a list of 1,000,000 numbers for duplicates Matching fingerprints 3 Thinking Machines BUT a computer would have difficulty pointing out the cat in this picture, which is easy for a human. Artificial intelligence (AI) The study of computer systems that attempt to model and apply the intelligence of the human mind. Figure 13.1 A computer might have trouble identifying the cat in this picture. 4 Thinking Machines Artificial: humanly contrived, often on a natural model Intelligence: the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria Clearly, intelligence is an internal characteristic. How can it be identified? 5 In the beginning… In 1950 Alan Turing wrote a paper titled Computing Machinery and Intelligence, in which he proposed to consider the question "Can machines think?" But the question is loaded, so he proposed to replace it with what has since become known as the Turing Test. 6 The Turing Test The important question for Turing wasn't "Can machines think?" but "How will we know if they can?" The Turing test is used to empirically determine whether a computer has achieved intelligence. 7 The Imitation Game The 'imitation game' is played with three people: a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman.
8 The Imitation Game The interrogator is allowed to put questions to A and B. It is A's object in the game to try and cause C to make the wrong identification. The object of the game for the third player (B) is to help the interrogator. 9 The Imitation Game We now ask the question: 'What will happen when a machine takes the part of B in this game?' Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? 10 The Turing Test Figure 13.2 In a Turing test, the interrogator must determine which respondent is the computer and which is the human. 11 The Turing Test There are authors who question the validity of the Turing test. The objections tend to be of two types. The first is an attempt to distinguish degrees, or types, of equivalence. 12 The Turing Test Weak equivalence: two systems (human and computer) are equivalent in results (output), but they do not arrive at those results in the same way. Strong equivalence: two systems (human and computer) use the same internal processes to produce results. 13 The Turing Test The Turing Test, they argue, can demonstrate weak equivalence, but not strong. So even if a computer passes the test we won't be able to say that it thinks like a human. Of course, neither they nor anyone else can explain how humans think! So strong equivalence is a nice theoretical construction, but impossible to establish in reality. 14 The Turing Test The other objection is that a computer might seem to be behaving in an intelligent manner, while it's really just imitating behaviour. This might be true, but notice that when a parrot talks, or a horse counts, or a pet obeys our instructions, or a child imitates its parents, we take all of these things to be signs of intelligence.
If a parrot mimicking human sounds can be considered intelligent (at least to some small degree), then why wouldn't a computer be considered intelligent (at least to some small degree) for imitating other human behaviour? 15 Turing's View "I believe that in about fifty years' time it will be possible to programme computers with a storage capacity of about 10^9 to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning." 16 Can Machines Think? No machine has passed the Turing Test, yet. Loebner Prize, established in 1990: $100,000 and a gold medal will be awarded to the first computer whose responses are indistinguishable from a human's. Let's look at one of the more difficult tasks for AI… 17 Aspects of AI Knowledge Representation Semantic Networks Search Trees Expert Systems Neural Networks Natural Language Processing Robotics 18 Knowledge Representation The knowledge needed to represent an object or event depends on the situation. There are many ways to represent knowledge. One is natural language. Even though natural language is very descriptive, it doesn't lend itself to efficient processing. 19 Semantic Networks Semantic network: a knowledge representation technique that focuses on the relationships between objects. A directed graph is used to represent a semantic network (net). 20 Semantic Networks Figure 13.3 A semantic network 21 Semantic Networks The relationships that we represent are completely our choice, based on the information we need to answer the kinds of questions that we will face. The types of relationships represented determine which questions are easily answered, which are more difficult to answer, and which cannot be answered. eos 22 Search Trees Search tree: a structure that represents all possible moves in a game, for both you and your opponent.
The paths down a search tree represent a series of decisions made by the players. 23 Search Trees Figure 13.4 A search tree for a simplified version of Nim 24 Search Trees Search tree analysis can be applied nicely to other, more complicated games such as chess. Because chess trees are so large, only a fraction of the tree can be analyzed in a reasonable time limit, even with modern computing power. 25 Search Trees Techniques for searching trees Depth-first: a technique that involves the analysis of selected paths all the way down the tree. Breadth-first: a technique that involves the analysis of all possible paths but only for a short distance down the tree. Breadth-first tends to yield the best results. 26 Search Trees Figure 13.5 Depth-first and breadth-first searches 27 Search Trees Even though the breadth-first approach tends to yield the best results, we can see that a depth-first search will get to the goal sooner – IF we choose the right branch. Heuristics are guidelines that suggest taking one path rather than another one. eos 28 Expert Systems Knowledge-based system: a software system that embodies and uses a specific set of information from which it extracts and processes particular pieces. Expert system: a software system based on the knowledge of experts in a specialized field. An expert system uses a set of rules to guide its processing. The inference engine is the part of the software that determines how the rules are followed. 29 Expert Systems Example: What type of treatment should I put on my lawn?
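The two search orders can be sketched on a small made-up tree:

```python
from collections import deque

# A small tree as an adjacency map; leaves have no entries.
tree = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F", "G"],
}

def depth_first(node, order=None):
    # Follow one path all the way down before backing up.
    if order is None:
        order = []
    order.append(node)
    for child in tree.get(node, []):
        depth_first(child, order)
    return order

def breadth_first(root):
    # Visit every node at a given depth before going any deeper.
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(tree.get(node, []))
    return order

print(depth_first("A"))    # ['A', 'B', 'D', 'E', 'C', 'F', 'G']
print(breadth_first("A"))  # ['A', 'B', 'C', 'D', 'E', 'F', 'G']
```

The only structural difference is the container: a stack (here, the call stack) gives depth-first order, a queue gives breadth-first order.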
NONE: apply no treatment at this time
TURF: apply a turf-building treatment
WEED: apply a weed-killing treatment
BUG: apply a bug-killing treatment
FEED: apply a basic fertilizer treatment
WEEDFEED: apply a weed-killing and fertilizer combination treatment
30 Expert Systems Boolean variables
BARE: the lawn has large, bare areas
SPARSE: the lawn is generally thin
WEEDS: the lawn contains many weeds
BUGS: the lawn shows evidence of bugs
31 Expert Systems Some rules
if (CURRENT – LAST < 30) then NONE
if (SEASON = winter) then not BUGS
if (BARE) then TURF
if (SPARSE and not WEEDS) then FEED
if (BUGS and not SPARSE) then BUG
if (WEEDS and not SPARSE) then WEED
if (WEEDS and SPARSE) then WEEDFEED
32 Expert Systems An execution of our inference engine
System: Does the lawn have large, bare areas? User: No
System: Does the lawn show evidence of bugs? User: No
System: Is the lawn generally thin? User: Yes
System: Does the lawn contain significant weeds? User: Yes
System: You should apply a weed-killing and fertilizer combination treatment.
eos 33 Artificial Neural Networks Attempt to mimic the actions of the neural networks of the human body. Let's first look at how a biological neural network works: A neuron is a single cell that conducts a chemically-based electronic signal. At any point in time a neuron is in either an excited or inhibited state. 34 Artificial Neural Networks A series of connected neurons forms a pathway. A series of excited neurons creates a strong pathway. A biological neuron has multiple input tentacles called dendrites and one primary output tentacle called an axon. The gap between an axon and a dendrite is called a synapse. 35 Artificial Neural Networks Figure 13.6 A biological neuron 36 Artificial Neural Networks A neuron accepts multiple input signals and then controls the contribution of each signal based on the importance the corresponding synapse gives to it. The pathways along the neural nets are in a constant state of flux.
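The lawn-treatment rules above can be sketched as a tiny rule-based system. The rule ordering and the default argument values here are assumptions, since the slides do not spell them out:

```python
def lawn_treatment(bare, sparse, weeds, bugs, days_since_last=60, winter=False):
    # The rules from the slides, checked in order; the first match wins.
    if days_since_last < 30:      # if (CURRENT - LAST < 30) then NONE
        return "NONE"
    if winter:                    # if (SEASON = winter) then not BUGS
        bugs = False
    if bare:                      # if (BARE) then TURF
        return "TURF"
    if sparse and not weeds:      # if (SPARSE and not WEEDS) then FEED
        return "FEED"
    if bugs and not sparse:       # if (BUGS and not SPARSE) then BUG
        return "BUG"
    if weeds and not sparse:      # if (WEEDS and not SPARSE) then WEED
        return "WEED"
    if weeds and sparse:          # if (WEEDS and SPARSE) then WEEDFEED
        return "WEEDFEED"
    return "NONE"

# The sample dialogue: not bare, no bugs, thin lawn, many weeds.
print(lawn_treatment(bare=False, sparse=True, weeds=True, bugs=False))
# -> WEEDFEED, matching the inference engine's conclusion
```

A real inference engine would also generate the questions from the rules rather than take the answers as arguments; that part is omitted here.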
As we learn new things, new strong neural pathways are formed. 37 Artificial Neural Networks Each processing element in an artificial neural net is analogous to a biological neuron. An element accepts a certain number of input values and produces a single output value of either 0 or 1. Associated with each input value is a numeric weight. 38 Sample Neuron Artificial neurons can be represented as elements. Inputs are labelled v1, v2. Weights are labelled w1, w2. The threshold value is represented by T. O is the output. 39 Artificial Neural Networks The effective weight of the element is defined to be the sum of the weights multiplied by their respective input values: v1*w1 + v2*w2. If the effective weight meets the threshold, the unit produces an output value of 1. If it does not meet the threshold, it produces an output value of 0. 40 Artificial Neural Networks The process of adjusting the weights and threshold values in a neural net is called training. A neural net can be trained to produce whatever results are required. 41 Sample Neuron If the input weights and the threshold are set to the values below, how does the neuron act? Try a truth table… 42 Sample Neuron With w1 = .5, w2 = .5, T = 1:
v1 v2 v1*w1 v2*w2 O
0  0  0.0   0.0   0
0  1  0.0   0.5   0
1  0  0.5   0.0   0
1  1  0.5   0.5   1
With the weights set to .5 this neuron behaves like an AND gate. 43 Sample Neuron How about now? 44 Sample Neuron With w1 = 1, w2 = 1, T = 1:
v1 v2 v1*w1 v2*w2 O
0  0  0     0     0
0  1  0     1     1
1  0  1     0     1
1  1  1     1     1
With the weights set to 1 this neuron behaves like an OR gate. eos 45 Natural Language Processing There are three basic types of processing going on during human/computer voice interaction: Voice recognition: recognizing human words. Natural language comprehension: interpreting human communication. Voice synthesis: recreating human speech. Common to all of these problems is the fact that we are using a natural language, which can be any language that humans use to communicate.
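The sample neuron and its two truth tables can be reproduced in a few lines:

```python
def neuron(v1, v2, w1, w2, threshold):
    # Output 1 if the effective weight v1*w1 + v2*w2 meets the threshold.
    return 1 if v1 * w1 + v2 * w2 >= threshold else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Weights .5 and threshold 1: behaves like an AND gate.
print([neuron(v1, v2, 0.5, 0.5, 1) for v1, v2 in inputs])  # [0, 0, 0, 1]

# Weights 1 and threshold 1: behaves like an OR gate.
print([neuron(v1, v2, 1, 1, 1) for v1, v2 in inputs])      # [0, 1, 1, 1]
```

Training would mean searching for weights and a threshold that reproduce a desired truth table; here the values are simply fixed by hand.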
46 Voice Synthesis There are two basic approaches to the solution: dynamic voice generation and recorded speech. Dynamic voice generation: a computer examines the letters that make up a word and produces the sequence of sounds that correspond to those letters in an attempt to vocalize the word. Phonemes: the sound units into which human speech has been categorized. 47 Voice Synthesis Figure 13.7 Phonemes for American English 48 Voice Synthesis Recorded speech: a large collection of words is recorded digitally and individual words are selected to make up a message. Telephone voice mail systems often use this approach: "Press 1 to leave a message for Nell Dale; press 2 to leave a message for John Lewis." 49 Voice Synthesis Each word or phrase needed must be recorded separately. Furthermore, since words are pronounced differently in different contexts, some words may have to be recorded multiple times. For example, a word at the end of a question rises in pitch compared to its use in the middle of a sentence. eos 50 Voice Recognition The sounds that each person makes when speaking are unique. We each have a unique shape to our mouth, tongue, throat, and nasal cavities that affects the pitch and resonance of our spoken voice. Speech impediments, mumbling, volume, regional accents, and the health of the speaker further complicate this problem. 51 Voice Recognition Furthermore, humans speak in a continuous, flowing manner. Words are strung together into sentences. Sometimes it's difficult to distinguish between phrases like "ice cream" and "I scream", or between homonyms such as "I" and "eye" or "see" and "sea". Humans can often clarify these situations by the context of the sentence, but that processing requires another level of comprehension. Modern voice-recognition systems still do not do well with continuous, conversational speech.
eos 52 Natural Language Comprehension Even if a computer recognizes the words that are spoken, it is another task entirely to understand the meaning of those words. Natural language is inherently ambiguous, meaning that the same syntactic structure could have multiple valid interpretations. 53 Natural Language Comprehension (syntax) Syntax is the study of the rules whereby words or other elements of sentence structure are combined to form grammatical sentences. So a syntactical analysis identifies the various parts of speech in which a word can serve, and which combinations of these can be assembled into sensible sentences. 54 Natural Language Comprehension (syntax) A single word can represent multiple parts of speech. Consider an example: "Time flies like an arrow." To determine what it means we first parse the sentence into its parts. 55 Natural Language Comprehension (syntax) "time" can be used as a noun, a verb, or an adjective. "flies" can be used as a noun or a verb. "like" can be used as a noun, a verb, a preposition, an adjective, or an adverb. "an" can be used only as an indefinite article. "arrow" can be used as a noun or a verb. 56 Natural Language Comprehension (syntax) A table of the words (time, flies, like, an, arrow) against the parts of speech (adjective, adverb, article, noun, preposition, verb) summarises these possibilities. The a priori total number of syntactical interpretations is: 3 * 2 * 5 * 1 * 2 = 60. 57 Natural Language Comprehension (syntax) This number can be reduced by applying syntactical rules. For example, articles are used to indicate nouns and to specify their application, so "arrow" must be a noun, not a verb. This cuts the number of possible sentences in half. 58 Natural Language Comprehension (syntax) Other grammar rules will reduce this number further, but how far? Rather than eliminate combinations, it may be easier to build a list of reasonable possibilities.
Consider the ways in which "time" can be used: adjective, noun, or intransitive verb. 59 Natural Language Comprehension (syntax) An adjective is the part of speech that modifies a noun, so when "time" is an adjective, "flies" must be a noun. A sentence needs a verb, so if "flies" is a noun, "like" must be the verb. If "time" is an adjective there is only ONE possible meaning. 60 Natural Language Comprehension (syntax) When "time" is a noun there appear to be 10 possibilities, but there can only be one verb, so there are only 5. 61 Natural Language Comprehension (syntax) A sentence can have only one verb, so when "time" is a verb there are only 4 possible combinations of the other parts. The syntactical analysis reveals a total of 10 possible ways these words can form a sentence. 62 Natural Language Comprehension (semantics) A syntactical analysis provides a structure; a semantic analysis adds meaning. Consider the first case in the syntactical analysis. What would the words mean under this syntactical interpretation?
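The counting argument (60 a priori interpretations, reduced to 10 by the syntactic rules) can be checked by brute-force enumeration. The three rules encoded below ("arrow" follows an article so must be a noun, a sentence has exactly one verb, an adjective must be followed by a noun) are the ones the slides apply:

```python
from itertools import product

# Possible parts of speech for each word, as listed in the slides.
options = {
    "time":  ["adjective", "noun", "verb"],
    "flies": ["noun", "verb"],
    "like":  ["noun", "verb", "preposition", "adjective", "adverb"],
    "an":    ["article"],
    "arrow": ["noun", "verb"],
}
words = ["time", "flies", "like", "an", "arrow"]

all_combos = list(product(*(options[w] for w in words)))
print(len(all_combos))   # 3 * 2 * 5 * 1 * 2 = 60

def ok(tags):
    time, flies, like, an, arrow = tags
    if arrow != "noun":                          # an article indicates a noun
        return False
    if [time, flies, like].count("verb") != 1:   # exactly one verb
        return False
    if time == "adjective" and flies != "noun":  # an adjective modifies a noun
        return False
    return True

print(len([t for t in all_combos if ok(t)]))     # 10
```

Counting the survivors by the part of speech assigned to "time" reproduces the slides' breakdown: 1 (adjective) + 5 (noun) + 4 (verb) = 10.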
63 Natural Language Comprehension (semantics)
time (adjective): of, relating to, or measuring time
flies (noun): two-winged insect
like (verb): find pleasant or attractive
an (article)
arrow (noun): missile having a straight thin shaft with a pointed head at one end and often flight-stabilizing vanes at the other
Under this interpretation: time flies (not house flies) enjoy an arrow (watching it, chasing it, eating it?).
68 Natural Language Comprehension (semantics) This interpretation sounds absurd to us, but as our analysis showed, it's a perfectly logical interpretation of the words. This problem is referred to as a lexical ambiguity. It arises because words have multiple syntactical and semantic associations. In this case there are many syntactically and semantically valid interpretations. 69 Natural Language Comprehension (semantics) Consider the cases in which "time" is a verb. The Free Dictionary lists 5 meanings: 1. To set the time for (an event or occasion). 2. To adjust to keep accurate time. 3. To adjust so that a force is applied or an action occurs at the desired time: timed his swing so as to hit the ball squarely. 4. To record the speed or duration of: time a runner. 5. To set or maintain the tempo, speed, or duration of: time a manufacturing process. 70 Natural Language Comprehension (semantics) Not only are there 4 syntactical interpretations of the sentence when "time" is a verb, each of those must be analysed under 5 semantic interpretations. Let's explore a few: 4. To record the speed or duration of: time a runner. As a transitive verb, "time" requires an object, so "flies" is a noun. We've seen one meaning; here's another of the 12 listed in The Free Dictionary: 5. Baseball: a fly ball. 71 Natural Language Comprehension (semantics) So, when you measure the length of time it takes a fly ball to travel its route, use the same techniques that you use when you time an arrow. This interpretation isn't as odd as it might seem. We know that the paths taken by fly balls and arrows are all parabolic arcs, so it makes sense to time them the same way. 72 Natural Language Comprehension (semantics) Consider the interpretations in which "time" is a noun and "flies" is a verb. (These are the parts of speech we would usually map onto them in this context.) The Free Dictionary lists 29 distinct meanings for the noun "time".
Consider the first one: 1. a. A nonspatial continuum in which events occur in apparently irreversible succession from the past through the present to the future. This is a normal sense of what we mean by time, but which meaning of the verb flies should be used? 73 Natural Language Comprehension (semantics) The Free Dictionary lists only one definition: Third person singular present tense of fly. In this context, fly can only be an intransitive verb, for which there are 14 different nuances. The most commonly used of these is To engage in flight. But can that be the meaning intended in our saying? How does a computer program decide? It's easy to see why NLC is difficult for computers. 74 Natural Language Comprehension A natural language sentence can also have a syntactic ambiguity because phrases can be put together in various ways. I saw the Grand Canyon flying to New York. Is it possible that the Grand Canyon flew to New York? 75 Natural Language Comprehension Referential ambiguity can occur with the use of pronouns. The brick fell on the computer but it is not broken. Since it is the subject of the second clause, its referent is the subject of the first clause – the brick! Are you happy knowing that no harm came to the brick? 76 Robotics Mobile robotics: The study of robots that move relative to their environment, while exhibiting a degree of autonomy. In the sense-plan-act (SPA) paradigm the world of the robot is represented in a complex semantic net in which the sensors on the robot are used to capture the data to build up the net. Figure 13.8 The sense-plan-act (SPA) paradigm 77 Subsumption Architecture Rather than trying to model the entire world all the time, the robot is given a simple set of behaviours, each associated with the part of the world necessary for that behaviour. Figure 13.9 The new control paradigm 78 Subsumption Architecture Figure: Asimov's laws of robotics are ordered.
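The combinatorial blow-up behind the "time flies like an arrow" slides can be sketched directly: give each word its candidate parts of speech and enumerate every assignment. This is a minimal illustration with a hypothetical toy lexicon, not a real dictionary:

```python
from itertools import product

# Toy lexicon: candidate parts of speech for each word of the saying.
# (A hypothetical subset; real dictionaries list many more senses.)
lexicon = {
    "time":  ["adjective", "noun", "verb"],
    "flies": ["noun", "verb"],
    "like":  ["preposition", "verb"],
    "an":    ["article"],
    "arrow": ["noun"],
}

sentence = ["time", "flies", "like", "an", "arrow"]

# Every combination of one tag per word is a candidate interpretation.
readings = list(product(*(lexicon[w] for w in sentence)))
print(len(readings))  # 3 * 2 * 2 * 1 * 1 = 12 tag sequences to analyse
```

Each of the 12 tag sequences must then be checked against the grammar, and every surviving parse still multiplies by the word-sense counts (5 verb senses of time, 29 noun senses, and so on), which is exactly why disambiguation is hard for a program.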
The power of songs & steps for teaching phonics. Video: Super Simple Songs. Volumen I. Temario de oposiciones Inglés-Primaria. Description of the English phonological system. Learning models and techniques. Perception, discrimination, and production of sounds, stress, rhythm and intonation. Phonetic correction. Prior to the analysis of the English phonological system, we should introduce the concept of phonological competence as an important component of the linguistic competence described in the Common European Framework of Reference for Languages (CEFRL, 2001). Broadly speaking, phonological competence refers to the knowledge of, and skill in, the perception and production of sounds and prosodic features such as stress, rhythm and intonation. In more practical terms, taking these concepts to the primary FL classroom, our students should be able to understand basic phonological aspects and to produce language above the level we consider intelligible. According to Celce-Murcia and Goodwin (1996), there is a threshold level of pronunciation in English such that if a given non-native speaker’s pronunciation falls below this level, he or she will not be able to communicate orally, no matter how good his or her control of English grammar and vocabulary might be. Thus, the goal of teaching pronunciation to our students is not to make them sound like native speakers of English. Let us now move on to an analysis of the English phonological system. Volumen III (Extracts) The eight kinds of intelligences established by Gardner are: Bodily-kinaesthetic: using one’s body to solve problems and express ideas and feelings. Associated strategies can be: Total Physical Response (TPR), role-plays, etc. Interpersonal: perceiving the moods, feelings, and needs of others. Peer sharing and cooperative groups are valid strategies. Intrapersonal: turning inward with a well-developed self-knowledge and using it successfully to navigate oneself through the world. 
Adequate strategies can be: provide students with reflection periods or attach personal connections to lessons regularly. Linguistic: using words, either orally or in writing, in an effective manner. This intelligence is associated with storytellers, politicians, comedians, and writers. Appropriate activities may be: storytelling, brainstorming, writing, etc. Logical-mathematical: understanding and using numbers effectively, as well as having a special ability to reason well. Musical: related in a wide range of ways to music. This can take many forms: performer, composer, critic, and music-lover. Some examples may be: classroom rhythms, songs, and chants, etc. Naturalist intelligence: excellence at recognising and classifying both the animal and plant kingdoms, as well as showing understanding of natural phenomena. Spatial: perceiving the visual-spatial world in an accurate way, so as to be able to work on it effectively. Effective strategies are: using visual aids, graphics, etc. Extra-curricular activities These clubs provide children with a fun alternative to how they learn in the FL class. They can also be ideal for fostering children's relationships by helping them find classmates with similar interests. When children join a club that piques their interest, the impact on their motivation is immediate. The English club can be an opportunity for children to have fun with English just by entering the FL blog and enjoying the challenges. There may be different clubs to cater for all interests in the group: The music club, in which children with stronger musical intelligence can have access to songs they like with adequate support. “Another week, another song” may be the pre-requisite to be in the club. This means that students can make their suggestions regarding the songs (videos) they want to listen to, making the club interactive. In the music club we can help children discover classics like The Beatles or insert their suggestions (i.e. Ed Sheeran). The games club. 
In this club we may include a massive amount of content which can be easily uploaded to the FL blog, like flash games dealing with different aspects of the language (i.e. lexis, hidden syntactic construction, communicative functions, etc.) Engage children in “hidden practice” of the FL by challenging them to listen to or read something the teacher has uploaded. Some examples may be listening to a song while reading the lyrics to get familiar with the language, as the next day's challenge consists of writing alternative lyrics; reading the rules of the next day's playground English games; or listening to a YouTube story video to come up with the values and emotions they identify, to report back to their mates the next session. Phonics / Rhyming words / Reading / Vocabulary Handwriting Wizard (learn to write letters, numbers & words). Phonics with Phonzy (pronounce words phonetically). Similar apps: Hooked on Phonics; Just Jumble; Oz Phonics; Rhyming Word Center; Spelling Bee List 1000+ Spelling Tests Grade 1-12.
By Teachers, For Teachers One of the six pillars of character education is to be responsible. This desired character trait means that children need to be self-disciplined, use self-control, be accountable for their actions, and be trustworthy. Responsibility is an important skill that all children need to learn. Oftentimes, students lack this skill because it’s quite easy for their parents (or teachers) to do everything for them. Fourth grade is the perfect year to use teaching strategies to get students ready to be dependable, punctual, organized, and accountable for their words and actions, but you can start even younger than that. Here are 10 teaching strategies to teach this essential skill to your students. As a teacher, you are a role model. The best way to teach responsibility is to be responsible yourself. Keep your classroom neat and organized, pick up after yourself, and be punctual and dependable. By building your own practice of responsibility within your classroom, you are showing your students how it should be done. Help students define the term "responsibility." Have them identify and name responsibilities that they have both at home and in school (keeping their room clean, handing in their homework on time). Then have students think of a new responsibility that they can take on at home and in school. Every classroom has a list a mile long of things that need to get done every day. Take some of the workload off of yourself and give your students the responsibility of a classroom job. Assign students the job of washing the desks, collecting the papers, filing your paperwork, sharpening pencils, or organizing your classroom library. Without a set of clear expectations, your classroom will be filled with a bunch of irresponsible students who do not know what to do. Make sure that you provide a structured classroom where all students are clear about how you want things to go. 
Even with a set of direct classroom expectations, you will still have a few students who lack responsibility. For these students you will need to provide a clear set of consequences. For example, if a student forgets her homework, the consequence would be no “Fun Friday” for younger students, or detention for older students. If a student’s desk wasn’t cleaned out when you said it must be done by Friday, then there must be a consequence for that lack of responsibility. When you see that your students are rising to the occasion, praise them for being responsible. Make it known in your classroom that when you see a student doing the right thing, they will get praise for it, and everyone will know it. This will encourage students who lack responsibility to earn the same recognition. By establishing cooperative learning groups in your classroom, you are able to teach students to take responsibility for their own work. For example, in the Jigsaw cooperative learning method, students work together to achieve a common goal, but they cannot achieve that goal unless each group member has done his part in the task. This is a strategy where the groups will be responsible for learning the material on their own. It is a simple yet effective way to reinforce the importance of responsibility. Children love a good challenge. Make responsibility a game by focusing on one thing that you want students to be responsible for. For instance, let’s say that the majority of your students were having a hard time remembering to hand in their homework. You would challenge all students to be responsible for handing in their homework on time. When they do, they would get acknowledged for it. You can challenge them to see who has the cleanest desk, or who remembered their library books each week without being reminded. Post signs about responsibility in your classroom, play games about it, and talk about it daily. By keeping it in the limelight, students will realize how important it is. 
If you can’t seem to talk about it daily, then try to revisit this concept at least once a week. A great way to teach responsibility to your students, while showing them how important it is to be responsible, is to assign daily tasks. Group students together into small groups and assign each group a specific task to get done for that day. At the end of the week, have them chart their own progress and compare it to the week before. Educators have a responsibility to teach their students how to be responsible. This is a character trait that is of value, and is essential in the learning environment. Any of the ten ways listed above will help elementary students see the importance of doing their part, while maintaining good character. How do you teach responsibility in your classroom? Please share how you work on this important skill with your students. We would love to hear your ideas in the comment section below. Janelle Cox is an education writer who uses her experience and knowledge to provide creative and original writing in the field of education. Janelle holds a Master of Science in Education from the State University of New York College at Buffalo. She is also the Elementary Education Expert for About.com, as well as a contributing writer to TeachHUB.com and TeachHUB Magazine. You can follow her at Twitter @Empoweringk6ed, or on Facebook at Empowering K6 Educators.
The scientific and intellectual developments of the 17th cent.—the discoveries of Isaac Newton, the rationalism of René Descartes, the skepticism of Pierre Bayle, the pantheism of Benedict de Spinoza, and the empiricism of Francis Bacon and John Locke—fostered the belief in natural law and universal order and the confidence in human reason that spread to influence all of 18th-century society. Currents of thought were many and varied, but certain ideas may be characterized as pervading and dominant. A rational and scientific approach to religious, social, political, and economic issues promoted a secular view of the world and a general sense of progress and perfectibility. The major champions of these concepts were the philosophes, who popularized and promulgated the new ideas for the general reading public. These proponents of the Enlightenment shared certain basic attitudes. With supreme faith in rationality, they sought to discover and to act upon universally valid principles governing humanity, nature, and society. They variously attacked spiritual and scientific authority, dogmatism, intolerance, censorship, and economic and social restraints. They considered the state the proper and rational instrument of progress. The extreme rationalism and skepticism of the age led naturally to deism; the same qualities played a part in bringing the later reaction of romanticism. The Encyclopédie of Denis Diderot epitomized the spirit of the Age of Enlightenment, or Age of Reason, as it is also called. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
In the earthquake-prone central Andes Mountains, there are archaeological sites with monumental adobe and stone-block structures still standing that were built by ancient people hundreds and even thousands of years ago. Clearly, the ancient builders planned to have their important structures last, and they had the knowledge to build appropriately for their environment. Buildings that promise to last a long time are also being constructed today. The dramatic landscape of Yellowstone National Park, including exploding geysers, bubbling mud volcanoes, and rainbow-colored lakes, provides hints of the dynamic geologic history of the region. Yellowstone is one of the Earth’s largest volcanic systems, and earthquakes, ground surface movements, and hydrothermal activity in the region are all indications of this volcanism.
09/28/23 - NEW PLANT SPECIES UNIQUE TO HAWAI’I DISCOVERED IN REMOTE FORESTS OF WEST MAUI Posted on Sep 28, 2023 in Forestry & Wildlife, Main, Media, News Releases JOSH GREEN, M.D. FOR IMMEDIATE RELEASE September 28, 2023 (KAHULUI) — A unique plant first seen in the high forests of West Maui in 2020 has now been officially recognized as a new Hawaiian species. The plant, now named Clermontia hanaulaensis, was found during routine surveys by botanist Hank Oppenheimer of the Plant Extinction Prevention Program (PEPP), a partnership between DLNR and the University of Hawai‘i. The species is found only in Hawai‘i and is likely unique to the mountains of West Maui. “I decided to just turn a different way and look over a ridge I hadn’t explored before and there they were,” said Oppenheimer. “They looked very different from other Clermontia.” Botanists across the state studied the plant’s flower and leaf structure, comparing it to herbarium specimens and photos, to confirm that it is a previously undiscovered species. The botanists also ruled out the possibility of the plant being a hybrid of other Clermontia species. The verdict is in, and it has been given its own name. While new to the world of modern recognition, it exists only as a small population with a limited range, so it’s already being proposed for critically endangered status. The patch of this rare plant is currently the only known population, numbering just under 80 adults and 20 seedlings spread out in an area about the size of 10 football fields. Although they are not growing on protected state lands, the private landowner has been a longtime conservation partner. Key threats to rare plants across Hawai‘i are introduced plants, slugs, pigs, and rats, which eat seeds and fruit. On Maui, Axis deer pose additional threats. 
Clermontia are usually pollinated by native forest birds, which are absent at this population’s elevation due to mosquito-spread avian malaria. They usually grow as mid-canopy plants, under larger trees. A hurricane knocking down larger trees or a single fire could wipe out this newly discovered species. Clermontia is a genus of plants that evolved in Hawai‘i and is found nowhere else in the world. They grow as small shrub-like trees on the six largest islands from about 600 to 6,000 feet in elevation, in cloud forests, wet and mesic forests, bogs and shrublands. Their long, paddle-shaped leaves grow atop branches that fork like a candelabra. Urban gardeners might compare the growth architecture to non-native plumeria, but Clermontia flowers are long, spreading tubes sheltered by their leaves above. This species’ flower is lavender and white. Hawai‘i has 423 plant species listed as threatened and endangered. Because there are only about 100 of this rare species in the wild, PEPP has collected seeds and will continue to monitor the population to ensure its survival. PEPP is celebrating its 20th anniversary. More information can be found at: http://www.pepphi.org/ # # # (Image courtesy: DLNR) Hawai‘i Department of Land and Natural Resources (808) 587-0396 (Communications Office)
Art is a medium through which human emotions, thoughts, and experiences can be expressed and shared with others. It is a form of self-expression that has been used throughout history to convey ideas and messages that words alone cannot fully capture. Art can take many forms, including painting, sculpture, photography, music, and dance. Each medium has its own set of techniques and tools that can be used to create a unique and personal work of art. For example, painting uses a combination of pigments and brushes to create visual images on a surface, sculpture uses materials such as clay, metal, and stone to create three-dimensional works of art, photography captures moments in time and records them forever, while music and dance use sound and movement to evoke emotions and tell stories. One of the most important aspects of art is its ability to convey emotions and ideas in a way that can transcend language and cultural barriers. Art has the power to communicate to a universal audience, regardless of their background or location. A painting can convey the same emotion to a viewer in New York as it does to a viewer in Tokyo. Art also plays an important role in shaping and reflecting culture. It can provide insight into the beliefs, values, and customs of a particular society. For example, the art of ancient Egypt, Greece, and Rome gives us an understanding of the religious beliefs, social structures, and cultural practices of those societies. Similarly, contemporary art can provide insight into the issues and perspectives of our own culture. Art can also have a profound impact on the individual. It can be used as a form of therapy, helping people to process and cope with their emotions. It can also be used to inspire and uplift, providing a sense of beauty and wonder in a world that can often be chaotic and stressful. 
In conclusion, art is a powerful medium that can be used to express and communicate a wide range of human emotions, thoughts, and experiences. It has the ability to connect us with others and provide a sense of beauty and wonder.
This Graphing Calculator Investigation activity is from Chapter 8: Polynomials, Lesson 8-4: Polynomials. Students will use polynomial functions to explore real world problems. Before the Activity Review the activity PDF file. Make copies of the activity PDF file for your class. During the Activity Use the TABLE function to evaluate functions for multiple values Distribute the student activity sheets Follow the procedures as outlined After the Activity As a class, review student answers, discussing questions that appeared to be more challenging, and re-teaching as needed.
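The TABLE feature's job — evaluating a function at many inputs at once — can be mirrored in a few lines of code. A minimal sketch, using a made-up example polynomial (not the one from the activity sheet):

```python
# Mimic a graphing calculator's TABLE view: evaluate a polynomial
# for a run of x-values. The polynomial below is a hypothetical
# example, not taken from the activity.

def f(x):
    return 2 * x**2 - 3 * x + 1   # f(x) = 2x^2 - 3x + 1

for x in range(-2, 3):
    print(f"x = {x:>2}   f(x) = {f(x)}")
```

Students can check their calculator tables against a hand computation like this; factoring gives f(x) = (2x - 1)(x - 1), which is why the table shows a zero at x = 1.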
Typography as an art form has a rich and varied history, dating back to the invention of movable type in the 15th century. In the early days of typography, printers and typesetters focused primarily on creating legible and functional typefaces that could be used for printing books and other documents. However, as the field of typography evolved, designers began to experiment with different styles and techniques, using typography as a means of artistic expression. During the Art Nouveau movement of the late 19th and early 20th centuries, typography became a popular form of artistic expression. Designers such as Alphonse Mucha, Jules Chéret, and Henri de Toulouse-Lautrec used typography to create highly decorative posters and advertisements that combined text with intricate illustrations and designs. In the early 20th century, the Bauhaus movement, led by designers such as Herbert Bayer and Jan Tschichold, focused on creating clean, modernist typography that emphasized simplicity and functionality. This approach had a profound impact on the field of graphic design and typography, and many of the principles developed by the Bauhaus designers continue to influence typography and design today. During the mid-20th century, typography continued to evolve, with designers such as Paul Rand, Saul Bass, and Milton Glaser using typography to create bold, eye-catching designs for advertising, posters, and other media. With the rise of digital technology in the late 20th century, typography underwent a revolution, as designers began to use computers and software to create and manipulate typefaces in ways that were previously impossible. This has led to an explosion of new and innovative typography styles and techniques, with designers using typography to create everything from complex infographics to immersive digital experiences. 
Today, typography continues to be a vital and dynamic art form, with designers around the world using typography to create powerful and impactful designs that engage and inspire audiences. Whether it's in print or on the web, typography remains an essential component of modern design, and its influence can be seen everywhere from advertising and branding to film and television. Here are some typography terms: Serif - A serif is a small, decorative line or stroke that is added to the ends of letterforms. Serifs are a distinctive feature of certain typefaces, such as Times New Roman or Garamond, and are often used in print media. The term "serif" comes from the Dutch word "schreef," which means "line" or "stroke." Sans Serif - A sans-serif typeface is a font that does not have serifs. Sans-serif typefaces are often used in digital media and are considered to have a more modern, streamlined appearance. The term "sans-serif" comes from the French word "sans," which means "without." Typeface - A typeface is a set of letters, numbers, and other characters that share a consistent design and style. Typefaces can include variations in weight, width, and other attributes that allow designers to create a range of visual effects. The term "typeface" is a more modern term that evolved from the older term "font," which originally referred to a specific size and style of a typeface. Font - A font is a specific style and size of a typeface. The term "font" comes from the Middle French word "fonte," which means "something that has been melted or cast." Kerning - Kerning is the process of adjusting the spacing between letters in a font to improve legibility and visual appeal. Kerning can be used to create a more even and consistent appearance to the text, as well as to adjust the spacing between pairs of letters that may appear awkward or unbalanced. 
The term "kerning" comes from the fact that early typesetters would physically adjust the spacing between metal type by adding a small piece of metal, called a "kern," between the letters. Tracking - Tracking refers to the overall spacing between letters in a font, and can be used to adjust the overall density and appearance of a block of text. Tracking can be used to create a more open or condensed appearance to the text, depending on the desired effect. The term "tracking" comes from the fact that it refers to the distance between each track on a printing press. Leading - Leading refers to the vertical spacing between lines of text. The term comes from the strips of lead that were used to separate lines of type in a printing press. Point - A point is a unit of measurement used in typography to indicate the size of a typeface. It is equal to 1/72nd of an inch; the name comes from the fact that early typesetters used metal points to measure the size of type. Ligature - A ligature is a combination of two or more letters that are joined together to form a single glyph. Ligatures are used to improve legibility and visual appeal, and can be found in certain typefaces, particularly those used in print media. The term "ligature" comes from the Latin word "ligare," which means "to bind." Ascender - An ascender is the part of a lowercase letter that extends above the x-height. The term comes from the fact that the ascender "ascends" above the main body of the letter. Descender - A descender is the part of a lowercase letter that extends below the baseline. The term comes from the fact that the descender "descends" below the main body of the letter. Baseline - The baseline is the imaginary line upon which most letters in a font sit. It is the foundation for the letters and determines the spacing between lines. The term "baseline" comes from the fact that it is the line on which the letters "base" themselves. 
X-Height - The X-height is the height of the main body of lowercase letters, excluding ascenders and descenders. It is a fundamental aspect of the design of a font and affects the overall legibility of the text. The term "X-height" comes from the fact that the height of the letter "x" is used as a reference point. Bowl - A bowl refers to the curved, enclosed part of a letter that is entirely or partially closed, such as in the letters "d," "b," "p," and "q." The term is derived from the visual similarity of the curved shape to that of a bowl or cup. In the design of typefaces, the size, shape, and placement of bowls can greatly impact the legibility and overall aesthetic of the text. Therefore, it is an important consideration for typeface designers and typographers. Ear - An ear is a small stroke or flourish that extends from the upper right side of the bowl of the lowercase letter "g" or "q". It is also sometimes referred to as a tag or tail. The term "ear" comes from the history of typography when letters were carved into metal or wood to create printing plates. The ear was originally a practical addition to the design of the letterform, as it helped to keep the letter anchored to the plate during the carving process. Over time, the ear became more stylized and decorative, and it remains a common element in modern typography. Bracket - A bracket refers to a curved or angled stroke that is used to connect two or more elements in a design, such as a letterform and a serif. Brackets can also be used to connect elements within a sentence or paragraph, such as grouping words or phrases together. The name "bracket" comes from the resemblance of the shape to a right-angled bracket symbol (>). Brackets can be found in a wide range of typefaces, and their design can vary greatly depending on the style and aesthetic of the font. They are an important aspect of typography as they help to enhance legibility and organization within a text. 
Type Family - A type family is a collection of typefaces that share a similar design and aesthetic. Type families can include variations in weight, width, and other attributes that allow designers to create visual hierarchy and add emphasis to certain elements. The term "type family" reflects the idea that the various typefaces within the collection are related, like members of a family. Counter - The counter is the enclosed or partially enclosed space within a letter. It can be found in letters such as "O," "B," and "D," and can affect the overall readability and legibility of a font. The term "counter" reflects the idea that it is the space that is "counted" or measured within the letter. Point Size - Point size refers to the size of a typeface in points, with one point being equal to 1/72nd of an inch. The use of point size as a unit of measurement for typefaces dates back to the early days of typography when type was measured in points or picas. The term "point" comes from the fact that early typesetters used metal points to measure the size of type. Display Typeface - A display typeface is a font that is specifically designed for use in headlines, titles, or other large-format text. Display typefaces are often decorative and highly stylized, with exaggerated letterforms and intricate details. The term "display" reflects the fact that these typefaces are intended to be used in large, attention-grabbing displays. Text Typeface - A text typeface is a font that is designed for use in body text. Text typefaces are typically more understated and less decorative than display typefaces, with a focus on legibility and readability. The term "text" reflects the fact that these typefaces are intended to be used for long blocks of text. Dingbat - A dingbat is a small decorative element used to add visual interest to a piece of text. Dingbats can include symbols, icons, and other graphical elements. 
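The point-to-inch relationship described above lends itself to one-line arithmetic. A small sketch, assuming the modern desktop-publishing convention of exactly 72 points per inch (the 96-dpi screen figure is an illustrative assumption, not a universal standard):

```python
# Convert typographic points to inches and screen pixels, using the
# desktop-publishing convention of exactly 72 points per inch.

POINTS_PER_INCH = 72

def points_to_inches(pt):
    return pt / POINTS_PER_INCH

def points_to_pixels(pt, dpi=96):
    # On a hypothetical 96-dpi display, 12 pt type renders 16 px tall.
    return pt * dpi / POINTS_PER_INCH

print(points_to_inches(36))   # 0.5
print(points_to_pixels(12))   # 16.0
```

This is why a 36-point headline is half an inch tall on paper, regardless of the typeface used.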
The term "dingbat" comes from the Scottish word "ding," which means "to knock." Orphan - An orphan is a short line or word that appears at the beginning of a paragraph, separated from the rest of the text. Orphans can be visually distracting and can affect the overall flow and appearance of the text. The term "orphan" comes from the printing industry and refers to a paragraph's opening line that is left stranded at the bottom of a page. Widow - A widow is a short line or word that appears at the end of a paragraph, separated from the rest of the text. Widows can create visual gaps in the text and affect the overall appearance and legibility. The term "widow" comes from the printing industry and refers to a paragraph's closing line that is left stranded at the top of a page. Type Specimen - A type specimen is a sample of a typeface that is used to showcase its design and features. Type specimens can include printed examples of the typeface in use, as well as detailed information about its size, weight, and other attributes. The term "type specimen" comes from the fact that it is used to demonstrate the qualities of a particular typeface, like a specimen of a biological organism. Pointillism - Pointillism is a technique in which small, distinct dots of color are applied in a pattern to create a larger image. In typography, pointillism refers to the use of small, individual characters or symbols to create a larger image or pattern. The term "pointillism" comes from the art world, where it refers to the technique of using small, distinct dots of color to create a larger image. Drop Cap - A drop cap is a large capital letter that is used at the beginning of a paragraph or section of text to create visual interest and highlight the start of a new section. Drop caps are often decorative and can be found in a variety of typefaces. The term "drop cap" comes from the fact that the large letter "drops" down into the text. 
Ligature Set - A ligature set is a collection of ligatures that are included with a typeface. Ligature sets can include variations in design and style, and are used to add visual interest and improve legibility. The term "ligature set" reflects the fact that the collection of ligatures within a typeface is seen as a set of related elements.

Counterform - A counterform is the negative space surrounding a letterform, including the space inside the counters. Counterforms can affect the overall legibility and appearance of a font and can be used to create visual interest and add emphasis to certain elements. The term "counterform" comes from the fact that it refers to the space that is "counter" to the letterform.
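The point-size definition above (one point = 1/72 of an inch) makes unit conversions simple arithmetic. The sketch below illustrates this in Python; the 96 dpi default is an assumption (a common screen reference resolution), not part of the definition itself:

```python
# Point-size conversions, assuming the modern DTP convention:
# 1 point = 1/72 inch. Illustrative helper functions only.

POINTS_PER_INCH = 72
MM_PER_INCH = 25.4

def points_to_inches(pt: float) -> float:
    """Physical size in inches of a measurement given in points."""
    return pt / POINTS_PER_INCH

def points_to_mm(pt: float) -> float:
    """Physical size in millimetres."""
    return points_to_inches(pt) * MM_PER_INCH

def points_to_pixels(pt: float, dpi: float = 96) -> float:
    """Rendered size in pixels at a given resolution (dots per inch)."""
    return points_to_inches(pt) * dpi

print(points_to_inches(36))  # a 36 pt headline is half an inch tall
print(points_to_pixels(12))  # 12 pt body text at an assumed 96 dpi
```

The same arithmetic underlies why the same point size looks different on screens of different pixel density: the physical size is fixed, but the pixel count scales with dpi.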
How do scientists determine the chemical compositions of the planets and stars? The most common method astronomers use to determine the composition of stars, planets, and other objects is spectroscopy. Today, this process uses instruments with a grating that spreads out the light from an object by wavelength. This spread-out light is called a spectrum. Every element, and every combination of elements, has a unique fingerprint that astronomers can look for in the spectrum of a given object. Identifying those fingerprints allows researchers to determine what the object is made of. That fingerprint often appears as the absorption of light. Every atom has electrons, and these electrons like to stay in their lowest-energy configuration. But when photons carrying energy hit an electron, they can boost it to higher energy levels. This is absorption, and each element's electrons absorb light at specific wavelengths (i.e., energies) related to the difference between energy levels in that atom. But the electrons want to return to their original levels, so they don't hold onto the energy for long. When they emit the energy, they release photons with exactly the same wavelengths of light that were absorbed in the first place. An electron can release this light in any direction, so most of the light is emitted in directions away from our line of sight. Therefore, a dark line appears in the spectrum at that particular wavelength. Because the wavelengths at which absorption lines occur are unique for each element, astronomers can measure the position of the lines to determine which elements are present in a target. The amount of light that is absorbed can also provide information about how much of each element is present. The more elements an object contains, the more complicated its spectrum can become. Other factors, such as motion, can affect the positions of spectral lines, though not the characteristic pattern of lines from a given element.
Fortunately, computer modeling allows researchers to tell many different elements and compounds apart even in a crowded spectrum, and to identify lines that appear shifted due to motion.
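The link between energy levels and absorption wavelengths can be made concrete with hydrogen, the simplest atom. This sketch uses the Rydberg formula for the Balmer series (transitions to and from the n = 2 level), whose lines fall in visible light. It illustrates the principle only; real spectral pipelines compare observed spectra against laboratory-measured line lists for many elements:

```python
# Wavelengths of hydrogen's Balmer lines from the Rydberg formula:
#   1/lambda = R_H * (1/n1^2 - 1/n2^2)
# For the Balmer series, n1 = 2. Each wavelength is where a dark
# absorption line appears in a star's spectrum if hydrogen is present.

R_H = 1.0968e7  # Rydberg constant for hydrogen, per metre

def balmer_wavelength_nm(n2: int) -> float:
    """Wavelength (nm) of the transition between level n2 and n1 = 2."""
    inv_wavelength = R_H * (1 / 2**2 - 1 / n2**2)  # per metre
    return 1e9 / inv_wavelength                    # metres -> nanometres

for n2 in range(3, 7):
    print(f"n = {n2} <-> 2: {balmer_wavelength_nm(n2):.1f} nm")
```

The n = 3 transition comes out near 656 nm, the well-known H-alpha line; measuring how far an observed line sits from this rest wavelength is also how Doppler shifts are determined.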
Activities and downloadable resources based on the topic of colour.

- Run and touch
Once the children can recognize these English words when they are spoken and know their meaning, this is a simple game to let them practise the vocabulary and their listening skills, but it needs to be set up with safety in mind, or with only a few of the students taking part at a time. The teacher says a colour and students run or walk to something in the classroom that is this colour and point to it. They can be encouraged to say the colour and the name of the object (in English, of course).

- Colours song
Young children love singing and chanting in the classroom. The repetition helps memory and it taps into their 'musical intelligence'.
World Population Day 2021

The United Nations' (UN) World Population Day is observed annually on July 11 to reaffirm the human right to plan for a family. It encourages activities, events, and information campaigns to help make this right a reality throughout the world. The World Population Day theme for 2021 is "Rights and choices are the answer: Whether baby boom or bust, the solution to shifting fertility rates lies in prioritizing the reproductive health and rights of all people."

World Population Day was instituted in 1989 as an outgrowth of the Day of Five Billion, marked on July 11, 1987. The UN authorized the event as a vehicle to build awareness of population issues and the impact they have on development and the environment. The day highlights the consequences that an unmanageably large population can have on human life and the surrounding environment: the natural resources that support existence are limited, yet their consumption keeps increasing every year, and if this continues the earth may one day be unable to sustain human life. World Population Day aims to increase people's awareness of population issues such as the importance of family planning, gender equality, poverty, maternal health, and human rights. The day is marked worldwide by business groups, community organizations, and individuals in many ways.
Types of relations and the systemic character of language

Language is a system of signs (meaningful units of language). These signs are closely interconnected and interdependent. Various subtypes of language form different microsystems within the framework of the global macrosystem. Each system is a structural set of elements with a common function: to give expression to human thoughts. The systemic nature of grammar is more evident than in any other sphere of language, for the grammatical system is responsible for the very organization of the informative content of utterances. Language in the narrow sense of the word is a system of means of expression. The system of language includes material units (sounds, morphemes, words, word-groups) and the rules of regularity which govern the use of those units. The sign as a meaningful unit in the system of language has only a potential meaning, and this potential meaning is actualized in speech as part of the grammatically organized text.

All lingual signs stand to one another in two fundamental types of relations: paradigmatic and syntagmatic. Syntagmatic relations are linear relations between lingual units in a segmental sequence (string). If a sentence is syntagmatically connected, its constituent parts are grammatically organized. Morphemes within words are connected syntagmatically, and phonemes are connected syntagmatically within morphemes. The other type of relations, opposed to syntagmatic, is called paradigmatic. Paradigmatic relations exist between elements of the system outside the strings in which they co-occur. In the sphere of phonology such series of units are built up by the correlations of phonemes. Paradigmatic relations coexist with syntagmatic relations in such a way that some syntagmatic connections are necessary for the realization of any paradigmatic series. The minimal paradigm consists of two forms.
Units of language are divided into segmental and suprasegmental. Segmental units consist of phonemes and form phonemic strings. Suprasegmental units are realized together with segmental units and express different meanings; to the suprasegmental units belong intonation contours, accents, and patterns of word order. All segmental units of language form a hierarchy of levels. This hierarchy means that units of any higher level are analyzable into units of the lower level: morphemes are decomposed into phonemes, words into morphemes. But the hierarchical relations of language cannot be reduced to the mechanical composition of larger units from smaller ones; each level is characterized by its own specific functional features, which are responsible for the recognition of the corresponding level.

The lowest level of the lingual system is the phonemic level. It is formed by phonemes, which are the material elements of the higher-level segments. The phoneme has no meaning and is not a sign; it differentiates morphemes and words. Phonemes are combined into syllables. The syllable is not a sign either; it should be considered an element which has some properties of morphemes. Phonemes may be represented by letters in writing. Units of the higher levels of language are meaningful and may be called signemes. The level located above the phonemic one is the morphemic level. The morpheme is an elementary meaningful part of a word, built up by phonemes; the shortest morphemes include one phoneme only (as in one-root words). The morpheme expresses abstract significative meanings, which are used as constituents for the formation of more concrete (nominative) meanings of words. The third level is the level of words (the lexemic level). The word is a naming (nominative) unit of language. Words are built up of morphemes; the shortest word consists of one explicit morpheme. The next higher level is the level of phrases (word groups), the phrasemic level.
Above the phrasemic level lies the level of sentences (the proposemic level). The character of the sentence as a signemic unit of language consists in the fact that every sentence expresses predication: it shows the relation of the denoted event to reality, that is, whether this event is real or unreal. But the sentence is not the highest unit of language in the hierarchy. Above the proposemic level is the level of sentence groups (the supra-proposemic level). A supra-proposemic unit is a combination of separate sentences which form a textual unity. Such combinations have regular patterns consisting of syntactic elements, and there are different syntactic processes by which sentences are connected into textual unities.
The types of ecosystems which are predominantly found on land are called terrestrial ecosystems. Terrestrial ecosystems cover approximately 140 to 150 million km², which is about 25 to 30 percent of the total earth surface area.
- The interrelations between organisms and the environment on land constitute "Terrestrial Ecology".
- The most important limiting factors of terrestrial ecosystems are moisture and temperature.
- There are different types of terrestrial ecosystems, widely distributed around the geographical zones. They include:
- Tundra Ecosystem
- Forest Ecosystem
- Grassland Ecosystem
- Desert Ecosystem

1. Tundra Ecosystem
The word tundra means "barren land", since tundra occurs where environmental conditions are very severe. There are two types of tundra: Arctic tundra and Alpine tundra.

Distribution of Arctic & Alpine Tundra
Arctic tundra extends as a continuous belt below the polar ice cap and above the tree line in the northern hemisphere. It occupies the northern fringe of Canada, Alaska, European Russia, Siberia, and the island groups of the Arctic Ocean. Around the South Pole, tundra is very limited, since most of the region is covered by ocean. Alpine tundra occurs on high mountains above the tree line. Since mountains are found at all latitudes, alpine tundra shows marked day and night temperature variations.

Flora and Fauna of Arctic & Alpine Tundra
- Typical vegetation of arctic tundra is cotton grass, sedges, dwarf heath, willows, birches, and lichens. Animals of the tundra are the reindeer, musk ox, arctic hare, caribou, lemmings, and squirrel.
- Plants are protected from the cold by a thick cuticle and epidermal hairs. Mammals of the tundra region have large body sizes and small tails and ears to avoid heat loss from the surface. The body is covered with fur for insulation.

2. Forest Ecosystem
A forest ecosystem is a functional unit or system which comprises soil, trees, insects, animals, birds, and man as its interacting units.
A forest is a large and complex ecosystem and hence has greater species diversity.
- It includes a complex assemblage of different kinds of biotic communities. Optimum conditions such as temperature and ground moisture are responsible for the establishment of forest communities.
- Forests may be evergreen or deciduous. On the basis of leaf type, temperate forests are distinguished as broad-leafed or needle-leafed (coniferous). Forests are classified into three major categories:
(i) Coniferous forest
(ii) Temperate forest
(iii) Tropical forest

Types & Characteristics of Forests

Coniferous Forest (Boreal Forest)
- Found in cold regions with high rainfall and strongly seasonal climates, with long winters and short summers. Characterized by evergreen plant species such as spruce, fir, and pine, and by animals such as the lynx, wolf, bear, red fox, porcupine, and squirrel, and amphibians like Hyla and Rana. Conifers are a group of trees and shrubs that produce cones.
- Boreal forest soils are characterized by thin podzols and are rather poor, both because the weathering of rocks proceeds slowly in cold environments and because the litter derived from conifer needles decomposes very slowly and is not rich in nutrients.
- These soils are acidic and mineral-deficient. Because a large amount of water moves through the soil without a significant counter-upward movement of evaporation, essential soluble nutrients like calcium, nitrogen, and potassium are sometimes leached beyond the reach of roots.
- This process leaves no alkaline cations to counter the organic acids of the accumulating litter. The productivity and community stability of a boreal forest are lower than those of any other forest ecosystem.
Temperate Deciduous Forest
- The temperate forests are characterized by a moderate climate and broad-leafed deciduous trees, which shed their leaves in fall, are bare over winter, and grow new foliage in the spring.
- Precipitation is fairly uniform throughout the year. Soils of temperate forests are podzolic and fairly deep.

Temperate Evergreen Forest
- Found in parts of the world with a Mediterranean type of climate, characterized by warm, dry summers and cool, moist winters.
- Vegetation consists of low, broad-leafed evergreen trees. Fire is an important hazard in this ecosystem, and the adaptations of the plants enable them to regenerate quickly after being burnt.

Temperate Rain Forests
- Show seasonality with regard to temperature and rainfall.
- Rainfall is high, and fog may be very heavy; fog is an important source of water, sometimes more so than rainfall itself.
- The biotic diversity of temperate rain forests is high compared to other temperate forests, though the diversity of plants and animals is much lower than that of the tropical rainforest.

Tropical Rain Forests
- Occur near the equator.
- Among the most diverse and rich communities on the earth. Both temperature and humidity remain high and more or less uniform. The annual rainfall exceeds 200 cm and is generally distributed throughout the year.
- The flora is highly diversified. The extremely dense vegetation of the tropical rain forests remains vertically stratified, with tall trees often covered with vines, creepers, lianas, epiphytic orchids, and bromeliads.
- The lowest layer is an understory of trees, shrubs, and herbs such as ferns and palms. The soils of tropical rainforests are red latosols, and they are very thick.

Tropical Seasonal Forests
- Also known as monsoon forests; occur in regions where total annual rainfall is very high but segregated into pronounced wet and dry periods.
- This kind of forest is found in South East Asia, Central and South America, northern Australia, western Africa, and tropical islands of the Pacific, as well as in India.

Subtropical Rain Forests
- Broad-leaved evergreen subtropical rain forests are found in regions of fairly high rainfall but smaller temperature differences between winter and summer.
- Epiphytes are common here.
- Animal life of the subtropical forest is very similar to that of tropical rainforests.

Indian Forest Types
Forest types in India are classified by Champion and Seth into sixteen types.

(a) Tropical Wet Evergreen Forests
- Found along the Western Ghats, the Nicobar and Andaman Islands, and all along the northeastern region.
- Characterized by tall, straight evergreen trees. The trees in this forest form a tiered pattern. Beautiful ferns of various colours and different varieties of orchids grow on the trunks of the trees.

(b) Tropical Semi-Evergreen Forests
- Found in the Western Ghats, the Andaman and Nicobar Islands, and the Eastern Himalayas. Such forests have a mixture of wet evergreen trees and moist deciduous trees. The forest is dense.

(c) Tropical Moist Deciduous Forests
- Found throughout India except in the western and northwestern regions.
- The trees are tall, with broad, branching trunks and roots that hold them firmly to the ground. These forests are dominated by sal and teak, along with mango, bamboo, and rosewood.

(d) Littoral and Swamp Forests
- Found along the Andaman and Nicobar Islands and the delta area of the Ganga and the Brahmaputra.
- The plants have roots that consist of soft tissue so that they can breathe in the water.
(e) Tropical Dry Deciduous Forest
- Found in the northern part of the country except the North-East, and also in Madhya Pradesh, Gujarat, Andhra Pradesh, Karnataka, and Tamil Nadu. The canopy of the trees does not normally exceed 25 metres.
- The common trees are the sal, a variety of acacia, and bamboo.

(f) Tropical Thorn Forests
- This type is found in areas with black soil: North, West, Central, and South India. The trees do not grow beyond 10 metres. Spurge, caper, and cactus are typical of this region.

(g) Tropical Dry Evergreen Forest
- Dry evergreens are found along the Tamil Nadu, Andhra Pradesh, and Karnataka coasts. The vegetation is mainly hard-leaved evergreen trees with fragrant flowers, along with a few deciduous trees.

(h) Sub-tropical Broad-leaved Forests
- Broad-leaved forests are found in the Eastern Himalayas and in the Western Ghats along the Silent Valley. There is a marked difference in the form of vegetation in the two areas.
- In the Silent Valley, the poonspar, cinnamon, rhododendron, and fragrant grass are predominant. In the Eastern Himalayas, the flora has been badly affected by shifting cultivation and forest fires. There are oak, alder, chestnut, birch, and cherry trees, and a large variety of orchids, bamboo, and creepers.

(i) Sub-tropical Pine Forests
- Found in the Shivalik Hills, the Western and Central Himalayas, and the Khasi, Naga, and Manipur Hills.
- The trees predominantly found in these areas are the chir, oak, rhododendron, and pine; sal, amla, and laburnum are found in the lower regions.

(j) Sub-tropical Dry Evergreen Forests
- Occur in areas with a hot and dry season and a cold winter.
- This type generally has evergreen trees with shining leaves that have a varnished look. It is found in the Shivalik Hills and the foothills of the Himalayas up to a height of 1000 metres.

(k) Montane Wet Temperate Forests
- In the North, found in the region east of Nepal into Arunachal Pradesh, receiving a minimum rainfall of 2000 mm. In the North there are three layers of forest: the highest layer is mainly coniferous, the middle layer has deciduous trees such as the oak, and the lowest layer is covered by rhododendron and champa.
- In the South, found in parts of the Nilgiri Hills and the higher reaches of Kerala.
- The forests in the northern region are denser than in the South. Rhododendrons and a variety of ground flora can be found here.

(l) Himalayan Moist Temperate Forest
- This type spreads from the Western Himalayas to the Eastern Himalayas. The trees found in the western section are broad-leaved oak, brown oak, walnut, and rhododendron.
- In the Eastern Himalayas, the rainfall is much heavier and the vegetation is therefore more lush and dense. There are a large variety of broad-leaved trees, ferns, and bamboo.

(m) Himalayan Dry Temperate Forest
- This type is found in Lahul, Kinnaur, Sikkim, and other parts of the Himalayas.
- There are predominantly coniferous trees, along with broad-leaved trees such as the oak, maple, and ash. At higher elevations, fir, juniper, deodar, and chilgoza are found.

(n) Sub-Alpine Forest
- Subalpine forests extend from Kashmir to Arunachal Pradesh between 2900 and 3500 metres. In the Western Himalayas, the vegetation consists mainly of juniper, rhododendron, willow, and black currant.
- In the eastern parts, red fir, black juniper, birch, and larch are the common trees. Due to heavy rainfall and high humidity, the timberline in this part is higher than that in the West. Rhododendrons of many species cover the hills in these parts.
(o) Moist Alpine Scrub
- Moist alpine scrub is found all along the Himalayas and on the higher hills near the Myanmar border. It is a low, dense evergreen scrub, consisting mainly of rhododendron and birch. Mosses and ferns cover the ground in patches. This region receives heavy snowfall.

(p) Dry Alpine Scrub
- Dry alpine scrub is found from about 3000 metres to about 4900 metres. Dwarf plants predominate, mainly the black juniper, the drooping juniper, honeysuckle, and willow.

3. Grassland Ecosystem
A grassland ecosystem is an area where the vegetation is dominated by grasses and other herbaceous (non-woody) plants.
- Found where rainfall is about 25-75 cm per year: not enough to support a forest, but more than that of a true desert. These vegetation formations are generally found in temperate climates.
- In India, they are found mainly in the high Himalayas. The rest of India's grasslands are mainly composed of steppes and savannas. Steppe formations occupy large areas of sandy and saline soil in western Rajasthan, where the climate is semi-arid.
- The major difference between steppes and savannas is that all the forage in the steppe is provided only during the brief wet season, whereas in the savannas forage comes largely from grasses that grow during the wet season, plus a smaller amount of regrowth in the dry season.

Types of Grasslands
(i) Semi-arid zone (the Sehima-Dichanthium type)
- It covers the northern portion of Gujarat, Rajasthan (excluding the Aravallis), western Uttar Pradesh, Delhi, and Punjab.
- The topography is broken up by hill spurs and sand dunes. Acacia senegal, Calotropis gigantea, Cassia auriculata, Prosopis cineraria, Salvadora oleoides, and Ziziphus nummularia make the savanna rangeland look like scrub.

(ii) Dry subhumid zone (the Dichanthium-Cenchrus-Lasiurus type)
- It covers the whole of peninsular India (except the Nilgiris).
- The thorny bushes are Acacia catechu, Mimosa, and Zizyphus (ber), and sometimes fleshy Euphorbia, along with low trees of Anogeissus latifolia, Soymida febrifuga, and other deciduous species.
- Sehima (a grass) is more prevalent on gravel, where the cover may be 27%. Dichanthium (a grass) flourishes on level soils and may cover 80% of the ground.

(iii) Moist subhumid zone (the Phragmites-Saccharum-Imperata type)
- It covers the Ganga alluvial plain in Northern India.
- The topography is level, low-lying, and ill-drained.
- Bothriochloa pertusa, Cynodon dactylon, and Dichanthium annulatum are found in transition zones.
- The common trees and shrubs are Acacia arabica, Anogeissus latifolia, Butea monosperma, Phoenix sylvestris, and Ziziphus nummularia. Some of these are replaced by Borassus sp. in the palm savannas, especially near the Sunderbans.

(iv) The Themeda-Arundinella type
- This extends to the humid montane regions and moist sub-humid areas of Assam, Manipur, West Bengal, Uttar Pradesh, Punjab, Himachal Pradesh, and Jammu and Kashmir.
- The savanna is derived from the humid forests on account of shifting cultivation and sheep grazing.

The main grassland research institutes are the Indian Grasslands and Fodder Research Institute, Jhansi, and the Central Arid Zone Research Institute, Jodhpur.

Role of Fire
- Fire plays an important role in the management of grasslands.
- Under moist conditions, fire favors grass over trees, whereas in dry conditions fire is often necessary to maintain grasslands against the invasion of desert shrubs.
- Burning increases forage yields. Example: Cynodon dactylon.

4. Desert Ecosystem
A desert is a barren area of landscape where little precipitation occurs and, consequently, living conditions are hostile for plant and animal life.
- Deserts are formed in regions with less than 25 cm of annual rainfall, or sometimes in hot regions where rainfall is higher but unevenly distributed in the annual cycle.
- Lack of rain in the mid-latitudes is often due to stable high-pressure zones; deserts in temperate regions often lie in "rain shadows", where high mountains block off moisture from the seas.
- The climate of these biomes is modified by altitude and latitude: at high altitudes and at greater distances from the equator the deserts are cold, and near the equator and tropics they are hot.
- As large volumes of water pass through irrigation systems, salts may be left behind that gradually accumulate over the years until they become limiting, unless means of avoiding this difficulty are devised.

Desert plants conserve water by the following methods:
- They are mostly shrubs.
- Leaves are absent or reduced in size.
- Leaves and stems are succulent and water-storing.
- In some plants, even the stem contains chlorophyll for photosynthesis.
- The root system is well developed and spread over a large area.
The annuals, wherever present, germinate, bloom, and reproduce only during the short rainy season, not in summer and winter.

The animals are physiologically and behaviorally adapted to desert conditions:
- They are fast runners, and nocturnal in habit to avoid the sun's heat during the daytime.
- They conserve water by excreting concentrated urine.
- Animals and birds usually have long legs to keep the body away from the hot ground.
- Lizards are mostly insectivorous and can live without drinking water for several days. Herbivorous animals get sufficient water from the seeds which they eat.
Mammals as a group are poorly adapted to deserts.

Indian Desert: Thar Desert (Hot)
- The climate of this region is characterized by excessive drought, the rainfall being scanty and irregular. The winter rains of northern India rarely penetrate the region.
The desert plants proper may be divided into two main groups:
- those depending directly upon rain, and
- those depending on the presence of subterranean water.

1. The first group consists of two types:
(i) the ephemerals
(ii) the rain perennials
- The ephemerals are delicate annuals, apparently free from any xerophilous adaptations, with slender stems and root systems and often large flowers.
- They appear almost immediately after rain, develop flowers and fruits in an incredibly short time, and die as soon as the surface layer of the soil dries up.
- The rain perennials are visible above the ground only during the rainy season but have a perennial underground stem.

2. The second group depends on the presence of subterranean water. By far the largest number of indigenous plants are capable of absorbing water from deep below the surface of the ground by means of a well-developed root system, the main part of which generally consists of a slender, woody taproot of extraordinary length.
- Various other xerophilous adaptations are generally resorted to, such as reduced leaves, thick hairy growth, succulence, coatings of wax, a thick cuticle, and protected stomata, all aimed at reducing transpiration.

- The Thar is home to some of India's most magnificent grasslands and is a sanctuary for a charismatic bird, the Great Indian Bustard. Among the fauna, the blackbuck, wild ass, chinkara, caracal, sandgrouse, and desert fox inhabit the open plains, grasslands, and saline depressions.
- The nesting ground of flamingoes and the only known population of the Asiatic wild ass lie in the remote parts of the Great Rann, Gujarat. The region is on the migration flyway used by cranes and flamingos.
- Some endemic flora species of the Thar Desert include Calligonum polygonoides, Prosopis cineraria, Tecomella undulata, Cenchrus biflorus, and Suaeda fruticosa.
Cold Desert / Temperate Desert
- The cold desert of India includes areas of Ladakh, Leh, and Kargil in Kashmir, the Spiti valley of Himachal Pradesh, and some parts of northern Uttaranchal and Sikkim. It lies in the rain shadow of the Himalayas.
- Oak, pine, deodar, birch, and rhododendron are the important trees and bushes found there. Major animals include yaks, dwarf cows, and goats.
- Severe arid conditions: dry atmosphere, mean annual rainfall less than 400 mm.
» Soil type - sandy to sandy loam.
» Soil pH - neutral to slightly alkaline.
» Soil nutrients - poor organic matter content, low water-retention capacity.
- The cold desert is the home of highly adaptive, rare and endangered fauna, such as the Asiatic Ibex, Tibetan Argali, Ladakh Urial, Bharal, Tibetan Antelope (chiru), Tibetan Gazelle, Wild Yak, Snow Leopard, Brown Bear, Tibetan Wolf, Wild Dog, Tibetan Wild Ass ('kiang', a close relative of the Indian wild ass), Woolly Hare, and Black-necked Crane.
- India is a signatory to the United Nations Convention to Combat Desertification (UNCCD).
If you look at a Foxglove very carefully you can see how the process of evolution has resulted in a number of adaptations so that it can attract insects such as Bumble Bees. These adaptations or modifications mean that the plant can attract the right type of insect so that the flowers can be pollinated. Pollination is essential if plants are to produce seed: if the flowers are not pollinated, the ovary contained in each flower is not fertilized, and seeds cannot be produced unless fertilization has taken place.

Of course, before the Bumble Bee lands on the flower, it has to find it. This is made easy for the insect, as the Foxglove produces many flowers on a tall stalk. Often many Foxgloves grow near each other, and this results in a highly visible display of colour. It is no accident that the flowers are purple, as Bumble Bees, which are the main pollinators, are particularly attracted to this colour. Insects visit the flowers for rewards; in most cases, the reward is nectar produced in the base of the flower.

The actual flower of the Foxglove has a number of physical modifications. These ensure that once the Bumble Bee has found a Foxglove in flower, it is able to land and get the nectar. In so doing, the plant has evolved a fail-safe method of transferring the pollen onto the insect. When the next flower is visited, the pollen rubs off on the female part of the Foxglove flower.

Let's take a closer look at some of the adaptations. To make it easier for a Bumble Bee to land on the flower, the Foxglove has evolved a wide open mouth and a large lip to aid landing. A number of spots are present on the landing stage which help the insect to identify where to land; these spots then lead through the main part of the flower to the nectar. The mouth is bell-shaped, and this helps to close the bee's wings as it enters the flower. The main part of the flower is tunnel-shaped, and the reproductive parts of the flower are located in the roof of the tunnel.
A large insect like a Bumble Bee can only just squeeze along the tunnel. This means that as it travels along, the pollen which is on the stamen is brushed against the back of the bee. If the bee has already visited another foxglove, then the pollen from that flower is brushed off onto the stigma, pollinating it. Of course, other insects are attracted to the flower and some of them are much smaller than the Bumble Bee. In order to deter them, the landing area is covered with fine hairs called guard hairs which act as a physical deterrent. Smaller insects are undesirable because they would be able to take the nectar without brushing against the flower's reproductive parts. Sometimes the weather is cold and wet during the flowering period. This means that insects are far less active. To ensure that the plant can reproduce successfully, it produces many flowers on its long stalk. These do not open at the same time. The flowers at the bottom open first. It will be some weeks before all the flowers, all the way to the top of the stalk, have opened. This means that flowering takes place over a long time and therefore some flowers will be open during the right weather conditions. This adaptation is therefore security against cold or wet periods when insects are less abundant. There is another advantage to flowering from the bottom of the stalk to the top. The topmost flowers will still be visible above other vegetation which has grown up over the summer period. As Foxgloves are pioneer species, that is, they readily colonise open areas, this is important, as grasses and ultimately brambles will colonise the area. Plants which cannot grow above these species will not survive. Once the Bumble Bee has unwittingly transferred the pollen to a stigma, a pollen tube germinates. It then grows down the style of the flower and into the ovary. Fertilization takes place and seed production starts.
When the seeds have grown to maturity, the ovary dries and opens, allowing the small, light seeds to be carried in the wind. The plant's tall, thin, flexible stalk is easily blown about in the wind and this also helps to launch the seeds.
By Marcela Lepore (10/2023) Artificial Intelligence (AI) isn't a recent invention; it has been part of our lives for decades. We encounter it daily through our smartphones, in the algorithms that power social networks, predictive text, voice assistants, and GPS navigation systems. While it has made our lives more convenient, we might not have fully grasped its potential until now. The rapid evolution of AI has shown us that it can perform tasks we once thought were exclusive to humans, making some tasks easier but also revealing its inherent risks. The benefits of AI appear boundless, as it is used by many companies for data analysis, virtual customer service assistants, algorithm manipulation to boost brands, and task automation alongside human teams. However, it also raises questions about potential job displacement, hidden biases, and the unforeseeable dangers of its implementation. ChatGPT, along with its new version, GPT-4, possesses multiple functions that enable precise task execution and complex question answering. This prompts us to question the ethics of its use, as it relates to the erosion of trust and transparency in today's globalized business and news landscape. In light of these concerns, society must apply ethics to make decisions based on shared values and the common good. Companies, in particular, bear a responsibility to engage in this discussion. AI technology offers them numerous benefits, but it also necessitates the establishment of ethical boundaries. They must foster critical thinking and demonstrate that they can provide more value than AI alone. To achieve this, professionals capable of reasoning and interpreting AI's impact are essential, allowing AI to be a tool that enhances our lives and work without manipulating us. So, what are the ethics in AI? When considering the dilemmas we face, we must address some ethical challenges: 1. Lack of Transparency in AI Tools: AI decisions aren't always understandable to humans. 2. 
Non-Neutrality of AI: AI-based decisions may lead to inaccuracies, discriminatory outcomes, and embedded biases. 3. Surveillance Practices for Data Collection and User Privacy. 4. New Concerns Regarding Fairness and Risks to Human Rights and Fundamental Values. As a first step, it is crucial to establish an ethical framework of moral principles and techniques aimed at guiding the development and responsible use of artificial intelligence technology.
By Kaitlynn Bayne In a laboratory at Duke University, scientists have grown muscle that responds to external stimuli much as human muscle does. Although animal muscle has been created in labs before, this is the first lab to successfully make human muscle. They made the muscle from myogenic precursor cells, which are stem cells at a stage between the early stem cell and muscle tissue stages. After a year of making this tissue, they tested it with external stimuli and found that it responded the same way human muscle would. The muscle also responded to electrical signals, making it the first lab-grown muscle to do so. This is significant because it allows nerves to activate the muscle. The goal of the scientists in this lab is to develop a way to create medications tailored to individual patients. In the lab, the muscle responded to drugs the same way muscle responds in the human body. Therefore, if they could obtain myogenic precursor cells from a patient, they would be able to test different drugs on the lab-created muscle to determine which ones would be best for that individual. This discovery has the potential to change the pharmaceutical and health care industries greatly. The ability to get medication made specifically for your own body would allow patients to skip the trial-and-error of testing many medications before finding one that works. Additionally, since the medication would be tailored specifically to the individual, it could be much more effective than medications are today.
Students learned about different landforms and then chose a landform to research further. They used different resources (books, anchor charts, language dives) to come up with a riddle containing at least three clues to their landform. They also selected a photo of their landform to base a drawing on. Through peer critique they improved their drawings, using colored pencils and black markers for the final product. All pieces of the riddle were assembled into the following format: the cover is the riddle; the reader then opens it to find the answer, the resources used, and the illustration inside. How This Project Can Be Useful - Encourages students to think creatively about science and the natural world. - Shows a range of skill in student work.
How will we know they have learned it? Our teachers are working hard to use assessment as a way to know what students know and to design lessons based on where the next learning point is for students. 8-12 Assessment & Scoring The following explanations and definitions will assist students and parents in understanding the report of progress system and beliefs for WCSD students in grades 8-12. Student achievement is individual progress toward goals, centered on developing academic, emotional, and social skills, which results in the confidence to take risks for ongoing growth.
- Differentiation of instruction and assessment is necessary for students to grow and progress.
- Multiple data points are used to determine the summative grade.
- Course grades accurately communicate only academic achievement of the standards.
- Independent practice is meaningful, purposeful, and tied to standards.
- Students are given multiple opportunities to show proficiency through ongoing assessment.
Each indicator shows progress toward the specific learning targets for each course.
EE (100%): Exceeds – Student demonstrates above grade level understanding of the targeted skill or concept.
SC (97%): Secure – Student can apply the skill or concept correctly and independently.
DV (80%): Developing – Student shows some understanding. Reminders, hints, and suggestions are needed to promote understanding.
BG (60%): Beginning – Student shows little understanding of the concept. Additional teacher support is needed.
Proficiency (Secure & Exceeds): Students have shown this when they demonstrate a thorough understanding as evidenced by doing something substantive with the content beyond merely echoing it. Anyone can repeat information; it's the proficient student who can break content into its component pieces, explain it through alternative perspectives and use it purposefully in new situations.
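The indicator scale above amounts to a simple lookup from code to percentage, with proficiency covering the Secure and Exceeds levels. The sketch below is a hypothetical illustration of that mapping only; the function and variable names are not part of any district system.

```python
# Illustrative mapping of the WCSD progress indicators to the
# percentages stated above.  Names here are hypothetical.
INDICATOR_PERCENT = {
    "EE": 100,  # Exceeds
    "SC": 97,   # Secure
    "DV": 80,   # Developing
    "BG": 60,   # Beginning
}

def is_proficient(indicator):
    """Per the text, proficiency covers the Secure and Exceeds levels."""
    return indicator in ("SC", "EE")

print(INDICATOR_PERCENT["SC"], is_proficient("DV"))  # 97 False
```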
It’s estimated that five to thirteen million tons of plastic enter our oceans annually, where much of it can linger for hundreds of years. According to a report by the World Economic Forum and the Ellen MacArthur Foundation, scientists estimate that there are 165 million tons of plastic swirling about in the oceans right now. And we are on pace to have more plastic than fish (by weight) in the world’s oceans by 2050. That’s some scary stuff. Filoviruses have devastating effects on people and primates, as evidenced by the 2014 Ebola outbreak in West Africa. For nearly 40 years, preventing spillovers has been hampered by an inability to pinpoint which wildlife species harbor and spread the viruses. One of the crowning achievements for wildlife protection in the US was the establishment of the National Wildlife Refuge system in the 1930s, when the populations of waterfowl were perilously low. Refuges provided breeding and migratory habitat that has allowed a remarkable recovery of many species of ducks and geese.
The theater program not only nurtures the talent of those who aspire to become performers or theater professionals, but also gives students the confidence to stand in front of and perform before groups of any size. In the 6th grade, students learn the basics and history of theater performance. They experiment with various theater techniques to create original work and explore various ways to self-express and tell stories. Seventh grade theater majors study the foundations of theater and use their theatrical skills to produce original performance work. There are myriad opportunities to delve into theatrical elements that include writing scripts, improvisational techniques, pantomime, character development, and stage movement. At the end of some units, students participate in evening showcases where they practice the skills they have learned by performing for parents and their fellow thespians. At the end of the year, students begin to prepare for high school auditions (that take place at the start of the 8th grade) by developing and practicing a monologue. In the 8th grade, students fortify the theatrical skills they have developed the year before. They begin the year by preparing for their high school auditions; this involves reviewing pantomime and improvisation, and continuing monologue work begun in June of the prior year. Students read and analyze various texts to expand their understanding of character development and the design process. It’s in this final year when students are immersed in acting to practice and hone skills, and to produce a final scene using the scene study process. Built into this intensive year is the study of proper vocal practices and singing on stage. As in the 7th grade, 8th grade theater majors present in evening showcases. Sometimes, 7th and 8th grade theater majors have the opportunity to see various, limited, full-scale theatrical productions via a unique partnership with the New Victory Theater in Manhattan.
The lesson has the students read about Julius Caesar. Students will work in groups of four. Students are required to take notes on the readings and organize them in a list, chart/table, or web diagram. Using the facts they have gathered, two members of the group must write a speech to persuade the audience that Caesar should be saved, and the other two must write a speech to persuade the audience that Caesar should be put to death. Each pair of students will read their speech to the class. By the end of the WebQuest, students will have a better understanding of who Julius Caesar was and the controversy that surrounded him. They will read about Julius Caesar from four different sources, each of which gives its own impression of what Caesar was like as a person. Students will be able to formulate their own opinions, using facts to either support or oppose Caesar. Students will gain experience in written composition and oral speaking by delivering their speech to the class.
Wainwright Middle School courses in mathematics go a bit beyond the TSC Middle School Mathematics Standards, which include the Indiana State Standards and a few other supplementary topics. Major topics in sixth grade include performing operations on rational numbers, relating ratios and proportions, building the foundations of geometry, and solving one- and two-step equations. Major topics in seventh grade include performing operations on rational numbers and integers, calculating percent, solving equations and inequalities, and calculating surface area and volume. Major topics in eighth grade include manipulating exponents and roots, solving multi-step equations and inequalities, graphing linear equations, and analyzing functions. In addition to practice on those and other topics, a concentrated effort is made to develop Indiana's Process Standards for Mathematics not only in math classes, but throughout the entire school day. Students who excel in mathematics are invited to advanced courses at each grade level; each advanced course focuses on the standards of the next grade level, set in a challenge atmosphere with a higher degree of rigor than would be found in a regular course. Students who advance beyond eighth-grade courses to complete Algebra I or Geometry at Wainwright Middle School are awarded high school credit.
The Great Pyramid of Giza is a huge pyramid in the desert near Cairo in Egypt. It is also called the Pyramid of Khufu. It is the only one of the Seven Wonders of the Ancient World that still exists! There are two other large pyramids at the same site, called the Pyramid of Khafre and the Pyramid of Menkaure, as well as three much smaller pyramids, monuments and the Great Sphinx – a limestone statue of a lion with a human head. The Great Pyramid took over 20 years to build and was completed in around 2560 B.C. It was unique because the sides of the pyramid were smooth, rather than having steps. Great Pyramid of Giza Facts for Kids - The pyramid lines up perfectly with the points of a compass – the north side of the pyramid faces exactly north, the west side faces exactly west, etc. The sides of the pyramid are also almost completely symmetrical and equal. - For over 3,800 years, the Great Pyramid of Giza was the tallest manmade structure in the world. It lost this title when Lincoln Cathedral was built in England in 1311 A.D. - The pyramid is around 480 feet tall and each side of its base is around 750 feet long. It is made of around two million pieces of stone. Both granite and limestone were used. When it was built, the pyramid would have sparkled, because the outside of it was covered in polished white limestone. Although this sparkly limestone is no longer to be seen, the pyramid remains an extremely impressive sight. - No one is entirely sure what the pyramid was built for, but the most common thought is that it was made to be a tomb for King Khufu, the pharaoh that ruled from around 2589 B.C. until 2566 B.C. - Building the pyramid would have been a huge challenge. They had to move huge and heavy bricks and pieces of stone around, but cranes hadn’t been invented then! The stone must have been either lifted, dragged or rolled into place – but no one knows for sure! The entire pyramid probably weighs around 5.9 million tons.
- Tens of thousands of workers from all over the country helped to build the pyramid. It is believed they were paid for their work. Some of their graves have been found near the pyramid. It probably took around twenty years for these workers to build the pyramid! - Inside there are three rooms and two passages. The rooms, or burial chambers, are known as the Lower Chamber, the Queen’s Chamber and the King’s Chamber. In the King’s Chamber is an empty stone coffin called a sarcophagus. The sarcophagus was open when the pyramid was discovered and the body of King Khufu has never been found. - The inside of the pyramid is always the same temperature – 68 degrees Fahrenheit. Interestingly, this is also the average temperature of the earth! This consistent temperature might be due to a system of air vents that lets outside air into the chambers, like a kind of early air conditioning system. - Visitors to the Pyramids of Giza can ride around them on a camel or even go into them to explore further. - The Great Pyramid, like all pyramids in Egypt, is built on the west bank of the River Nile. This is where the sun sets each evening, so it is associated with the land of the dead. Question: How many pyramids are there in Egypt? Answer: There are between 118 and 138 pyramids in Egypt. Question: Which of the pyramids at Giza was built first? Answer: The Great Pyramid / The Pyramid of Khufu was built first.
1. ‘The growth of the nation-state, first in Western Europe and then elsewhere, has long been viewed as the key political development of this era [i.e. the sixteenth century].’ (Merry E. Wiesner-Hanks) Discuss with reference to at least two of the following: England, France, Spain. This essay examines how the growth of the nation-state was a key political development during this period. It was a hugely important process and a stepping stone towards the systems we have in place today. Although many of the aspects of state-building which will be addressed in this essay were already taking place before the sixteenth century, it is during this era that they truly developed and nation-states became extremely important in the political world of the time. One of the reasons that the nation-state experienced growth during this era is the military revolution also taking place at the time. The way wars were fought changed: there was more emphasis on hand-held weapons than on nobles or cavalrymen, and there was a need for larger permanent armies. As a result, states needed more money and larger bureaucracies to fund these exploits. This essentially kicked off the growth of the nation-state. States began to exercise a lot more power, issuing more laws and generally claiming more powers. The power of the clergy and nobility was also challenged. Some may argue that the ‘nation’ wasn’t as important at that time; however, if this were the case the people wouldn’t have allowed this state-building to happen without causing huge problems. They appeared to be happy to be brought into a ‘nation’ and this is why the growth of the nation-state can clearly be seen as a key political development at this time. It would eventually spread across Europe, but during this period it was visible in England, France and Spain in particular, with the dynasties in those countries developing the growth of a state.
This essay will discuss this development in some of these nations during the sixteenth century. In England, the power of the monarch had already been limited by the Magna Carta in 1215: “Demands for taxes to fight the Crusades and war with France led the highest level nobility to force the king to agree to a settlement limiting his power”. This gave the nobility some say in tax rates and led to the creation of Parliament, which began to exert some control over the approval of taxes as well. Following the end of the Hundred Years War (1337-1453), there was a civil war in England between the Yorkists and the Lancastrians. This eventually led to Henry Tudor coming to power as Henry VII (ruled 1485-1509) and beginning the Tudor dynasty in England. He turned out to be quite a good king: “Thoughtful, calculating and cautious, Henry piloted the kingdom through a period of reconstruction and reconciliation with surprising assurance”. Henry managed to do this through effective state-building measures. There was growing financial security during his reign as he managed to avoid wars, obtained land from dead nobles, and was also very miserly. There was also increasing bureaucratisation during his reign, as he set up more state offices such as the Court of Star Chamber. Lastly, another of Henry VII’s state-building tactics was to create good marriage alliances. During his reign, he arranged the marriage of his daughter to the king of Scotland, and that of his son and heir Arthur to Catherine of Aragon. However, Arthur died unexpectedly and, rather than lose the marriage alliance, Henry arranged that she marry his other son, Henry VIII: “Henry wangled a papal dispensation to allow Catherine marry his second son”. When Henry VIII (ruled 1509-47) took over from his father, he was a completely different king. He didn’t follow the same ideas as his father, and war and finance were to dominate his reign.
However, because of his lifestyle and constant desire for an heir, Henry VIII also contributed to the growth of the nation-state in England. Henry was unable to have a son with his first wife and wanted an annulment; the Pope (Clement VII, ruled 1523-34) refused to give him one. As a result, Henry VIII broke away from the church in Rome, and by 1533 the Archbishop of Canterbury had the power to annul the marriage. This was followed by the Act of Supremacy (1534), which made Henry Supreme Head of the Church of England. This is another example of the growth in power of the state, as Henry transferred power from Rome to his own state. This example in England shows just how key a development the growth of the state was. Further evidence of state-building and its importance during this era was also visible in Spain. It was united as a nation during this period using methods of state-building like those in England. Firstly, it was unified through marriage. Isabella, the heiress of Castile, married Prince Ferdinand of Aragon, thus uniting two of the main parts of Spain. This growth was further enhanced when Ferdinand and Isabella invaded Granada in the south, enlarging their own state. This, along with the marriage of their children to various nobles across Europe, meant that Spain had grown into a major power with influence all over Europe. This shows just how key the development of the nation-state was. The monarchs continued to strengthen their power by undermining the power of the upper nobles: “They reorganized the main royal council, making it larger, stronger, and more professional, and filling it with lower-level nobility and educated non-nobles...members and officials appointed by the monarch, not inherited by virtue of a noble title”. This served once again to drastically increase the power of the state. This power was carried on throughout the sixteenth century.
Ferdinand and Isabella were succeeded by Charles I (ruled 1516-1556), who was also a member of the influential Hapsburg family, making him Holy Roman Emperor (ruled 1519-1556) as well. This again increased the power held by the Spanish and their ever-growing state. Their influence was spreading right across Europe, again illustrating the growth of nation-states, the key development of the time. Charles ruled over a vast state and was then succeeded by Philip II (ruled 1556-1598). Philip II inherited Spain, the Netherlands, the Spanish colonies in the Americas and parts of Italy. This indicates just how much the Spanish state had grown during this period. The growth of the nation-state also took place in France during this period. The Valois were the ruling family in France at the time, and the French state was to become a real world power. The territory of France was expanding as the French monarchy took control of more areas and asserted more power. It was the strongest single European state of the time. Under Francis I (ruled 1515-1547) in particular, France used many of the state-building techniques visible in other parts of Europe. As with many of the other expanding nations, France also employed tactics of force, marriage and subsequent inheritance to initially build its state. The French also used a legal system to build their state; their insistence on one law for all the people aided the process of state-building. The monarch also encouraged the idea of one language, one state. This too unified France as a nation, marking a move away from Latin, as all legal documents were now in French, which the monarch believed should be the language of the state. Even the belief that there should be one language for all the people showed how present the idea of state-building was. Also, as in other countries, war and conflict were a significant part of the growth of France as a nation.
Francis I became involved in the Hapsburg-Valois Wars (1494-1559) which began before his reign and outlived him. The Hapsburgs were France’s main rivals of the time and the French had suffered some defeats at their hands. However, they did make some gains in Northern Italy under Francis I. The state’s power was increased further under Francis when he brokered a deal with the Pope (Leo X, ruled 1513-1521). The Concordat of Bologna (1516) allowed French kings to appoint French bishops. This gave further power to the French state, again showing its growth as a nation. So, it is clear that the growth of nation-states in Europe was the key political development of this era. States such as England, Spain and France were beginning to form the nations we are familiar with today. This process was not an immediate success though; it did have its limitations as regions, nobles and clergy still held significant power. It was a gradual process that would eventually spread across the whole of Europe giving us the landscape we see today. It didn’t happen instantly in the sixteenth century but there were huge advances made in the growth of the state’s power during this period. During this era, we began to see more and more of the features of government and power that we are familiar with today. The increase in the role of parliaments and the decline in power of nobility were significant developments in shaping the political future of not only Europe, but the rest of the world. Countries like England, France and Spain had created a model during this period for other countries to follow. As a result, it is clear to see that the growth of the nation-state was the key political development of this era, having a huge bearing on the future of politics. 
Bibliography * Gunn, Steven, ‘War, Religion and the State’ in Euan Cameron (ed.), Early Modern Europe, An Oxford History (New York, 2001) * Kümin, Beat (ed.), The European World 1500-1800: An Introduction to Early Modern History (USA, 2009) * Merriman, John, A History of Modern Europe: Volume One, From the Renaissance to the Age of Napoleon (London, 1996) * Pettegree, Andrew, Europe in the Sixteenth Century (Oxford, 2002) * Wiesner-Hanks, Merry E., Early Modern Europe, 1450-1789 (New York, 2006) -------------------------------------------- [ 1 ]. Merry E. Wiesner-Hanks, Early Modern Europe, 1450-1789 (New York, 2006) p.91. [ 2 ]. Andrew Pettegree, Europe in the Sixteenth Century (Oxford, 2002) p.35. [ 3 ]. Merry E. Wiesner-Hanks, Early Modern Europe, 1450-1789 (New York, 2006) p.92. [ 4 ]. Merry E. Wiesner-Hanks, Early Modern Europe, 1450-1789 (New York, 2006) p.99. [ 5 ]. John Merriman, A History of Modern Europe: Volume One, From the Renaissance to the Age of Napoleon (London, 1996) p.193. [ 6 ]. Steven Gunn, ‘War, Religion and the State’ in Euan Cameron (ed.), Early Modern Europe, An Oxford History (New York, 2001) p.106.
Hydrometeorology and Land Surface Processes Introduction: what is hydrometeorology? What are land surface processes? The Earth's surface is defined as the interface between the atmosphere and the underlying ocean or land. The latter includes all existing land cover types: vegetation, bare soil, water bodies, and impervious surfaces such as roads and urbanized areas. Hydrometeorology is the part of meteorology which focuses on the exchanges of water and energy between the Earth's surface and its atmosphere. The water cycle involves precipitation, evapotranspiration, and water stored in the soil, rivers and water bodies. The condensation of atmospheric water forms precipitation and releases energy into the atmosphere. The evapotranspiration process combines both the evaporation of water at the Earth's surface and the transpiration of vegetation. It consumes energy, the so-called latent heat, which is itself in balance with the other energy and heat fluxes (namely, the net surface radiative flux, the sensible heat flux and the heat conduction flux exchanged between the atmosphere and the ground). Transpiration of the vegetation is also related to other physiological processes, such as respiration and photosynthesis, involved in the carbon cycle. Other exchanges of mass (e.g. dust, trace gases) also take place between the surface and the atmosphere. Finally, human activity can also contribute to these exchanges (e.g. the addition of nutrients to the soil, crop irrigation). The term 'land surface processes' gathers the different physical phenomena occurring at the continental Earth surface or, by extension, in its immediate neighbourhood: in the lower part of the atmospheric boundary layer, in the vegetation layer and in the upper layers of the soil or water bodies.
Together with air advection, they directly impact weather conditions (air temperature and humidity, wind speed) in the lower part of the atmospheric boundary layer, as well as temperature and moisture conditions at the surface and beneath it (in the soil, water bodies and canopy layer). Simplified scheme of land surface processes: the latent heat flux is the energy flux associated with the evapotranspiration process. It links the energy balance with the water and carbon cycles. Why carry out research in the field of hydrometeorology and land surface processes? Land surface processes are determinant for weather and climate, which in turn affect them. They affect the human living environment and activities. They directly and indirectly condition society: the quality of habitat, the resources available for life such as food and drinking water, and the possibilities for developing agricultural and other human activities depend directly on phenomena occurring at the land surface.
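The balance of fluxes described above, with latent heat offsetting the net surface radiative flux, the sensible heat flux and the heat conduction flux, can be sketched numerically. This is a minimal illustration only; the sign convention (net radiation positive toward the surface, the other fluxes positive away from it) and the example flux values are assumptions, not measurements.

```python
# Surface energy balance sketch: net radiation Rn is partitioned into
# latent heat LE (evapotranspiration), sensible heat H, and ground
# heat conduction G.  Here LE is computed as the balance residual.
# All values in W/m^2; the numbers below are illustrative only.

def latent_heat_flux(rn, h, g):
    """Return LE from the balance Rn = LE + H + G."""
    return rn - h - g

# Hypothetical midday values over a vegetated surface
rn, h, g = 500.0, 150.0, 50.0
le = latent_heat_flux(rn, h, g)
print(le)  # 300.0
```

Computing the latent heat flux as a residual in this way mirrors how the text presents it: one term of a closed budget, tied to the other heat fluxes rather than measured in isolation.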
Beacon Lesson Plan Library Ser y Estar Miami-Dade County Schools Students know the difference between using ser and estar correctly when they are able to describe physical characteristics of animals or people, and then describe feelings or state of mind using the correct verb. The student restates and rephrases simple information from materials presented orally, visually, and graphically in class. -White paper for each student -Crayons for each student -Chart that shows main adjectives in Spanish -Quiz on verbs ser and estar (See Weblinks) Students can complete the quiz online or you can print the quiz and hand it out to them. -Pictures of animals or people that show emotion or feeling -Chart that describes different emotions in Spanish -Worksheet for further practice or reinforcement (See Associated File) 1. Have paper accessible for students. 2. Obtain a chart or a poster that shows main adjectives in Spanish. 3. Obtain a chart or a poster that describes different emotions in Spanish. 4. Collect pictures of people or animals that are showing emotions or feelings. 5. Familiarize yourself with the Weblinks so that you can guide students when they are ready to take the quiz. 6. If students will not be taking the quiz online, print it and have copies ready for students. 7. Make copies of the associated file document in case students need further reinforcement. 1. Ask for a volunteer to come to the front of the class. First describe the student like this: This is Jason. Jason is tall, skinny and strong. Now ask the student to follow these directions by changing his expressions: Jason is sad, angry and now happy. Write the sentences on the board. Ask the students what verb you used in each sentence. (to be = is) 2. Now do the same in Spanish: Este es Jason. Jason es alto, delgado y fuerte. Now ask the student to do the actions as you talk about how he feels: Jason está triste. Jason está bravo. Ahora Jason está feliz.
As you say these sentences, write them on the board. Ask students what verbs you used in Spanish. (es in the first part, está in the second part) Further explain to students that you are using the third person singular of the verb to be, which is es and está in Spanish. 3. Tell students that today we will learn why in English you can use the same verb to describe a person and to explain how he or she feels, while in Spanish you need to use two different verbs to do this. 4. Ask students about the words that you used to describe Jason. Help students infer that the words alto, delgado and fuerte are adjectives. Ask about these qualities. Is being alto (tall) something that Jason can change within a moment's notice? Is being delgado (skinny) something that he can change immediately? Is being fuerte (strong) something that he could change right now? Help students infer that we were describing things about Jason that could not easily be changed. 5. Now ask the students to look at the second part of the exercise. What happened to Jason? Was he able to change from being triste (sad) to bravo (angry) to feliz (happy)? Why was he able to do this? Are these qualities that can change in a person? 6. Point out to students that in English we use the same verb, the verb to be, to describe Jason's physical characteristics as well as his emotions. In Spanish, on the other hand, the verb changes according to the type of description that you are giving. Ask students to think about this exercise and to tell you when they would need to use the verb es and when they would need to use the verb está. Help students infer that we use es to describe qualities that cannot be easily changed, while we use está to describe the state of mind that a person is in. 7. Provide more examples for students to practice the use of these two verbs by showing them the picture of a person or animal with a certain expression. Ask students to describe what they see.
As an additional practice, students can complete a quiz on these two verbs online (See Weblinks) or as a printed worksheet. This will provide an opportunity for the teacher and for the student to assess their understanding. The quiz can be self-corrected. 8. Hand out a paper and ask students to fold it in three parts. Ask the students to draw the same person in each of the three sections. Tell students to be sure to draw this person with the same physical characteristics in each picture, but with an expression showing a different emotion or feeling in each section. For example, in the first picture draw the person feeling happy, in the second being surprised, and in the last one, being nervous. 9. After students are done with their pictures, ask students to exchange their pictures with the person sitting next to them. The partner has to interpret the drawings, write sentences about the physical characteristics of the person in the drawing, and also describe how the person is feeling in each section. Explain to the students that they need to write at least three sentences with the verb ser and three others with the verb estar. Students can use a poster or a chart that describes the different emotions in Spanish. 10. Provide an opportunity for the students to share their writings and their partner's picture with the other students in the class. Ask students to provide feedback to their classmates on whether they have used the verbs ser and estar correctly. Use this activity to formatively assess students by using the rubric mentioned in the assessment. Once students have finished describing their partner's drawings, observe how they constructed their sentences. Formatively assess their understanding of the correct usage of the verbs ser and estar by using a rubric. The evidence is the written assignment where the students have to describe the pictures, and the criteria for assessment are noted in the rubric.
The rubric follows: Excellent: Students have written three or more correct sentences using the verb ser. These sentences are related to the pictures, and they express a description of the physical condition of the person in the drawing. The students have also written three correct sentences using the verb estar, expressing the condition or state of mind that the person in the picture is in. Satisfactory: Students have written at least two correct sentences with the verb ser and two correct sentences with the verb estar. Unacceptable: Students seem confused and are not able to distinguish between the two verbs, or they use the verbs incorrectly when writing their descriptions. They might have written one sentence correctly with each verb. These students will require further practice and reinforcement. Students can further practice the concepts by completing the worksheets available. (See Associated File) The Learn Spanish site can be used as a reference for you or your students. The Study Spanish online quiz can be used as a formative assessment for your students. (You can also print the test.)
Depleting fossil fuel reserves and growing climate threats urge us towards a sustainable society. Moreover, we should preferably not rely solely on fossil fuels for our primary energy needs, since part of our fossil fuel supply is imported from politically unstable regions. We should therefore think of new ways to ensure that our energy needs are met in the near future. Most likely, a mixture of different sources will be used. These sources are preferably renewable in nature, e.g. solar, biomass, wind, water and geothermal, and can typically be used for stationary applications. For mobile applications, however, an on-board energy storage system is indispensable. Especially for the latter, hydrogen is expected to play a dominant role. One important aspect of hydrogen is that only environmentally friendly products are emitted in the exothermic reaction of hydrogen with oxygen in a fuel cell. However, the feasibility of hydrogen production, storage and, finally, use in fuel cells is still under debate. In prototype applications, such as fuel cell-driven automobiles, hydrogen is generally stored in high-pressure cylinders. New lightweight composite cylinders have been developed that are capable of withstanding pressures of up to 800 bar. Even though hydrogen cylinders are expected to withstand even higher pressures in the near future, their large volumes and the energy required to compress hydrogen will limit their practical applicability. As opposed to storing molecular hydrogen, it can also be stored atomically in a metal hydride (MH), which reduces the volume significantly. In addition, MHs provide relatively safe storage, as they can be handled without extensive safety precautions unlike, for example, compressed hydrogen gas. Currently, the foremost problem of solid-state hydrogen storage is to find a metal-hydrogen system with a gravimetric capacity that exceeds 6 wt.% H and that absorbs/desorbs hydrogen at atmospheric pressure and ambient temperature.
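As a quick consistency check on the gravimetric targets quoted above (and on the 7.7 wt.% figure for magnesium discussed next), the hydrogen weight fraction of a simple hydride can be computed from molar masses alone. This sketch is ours, not from the thesis:

```python
# Hydrogen weight fraction of a metal hydride: m_H / (m_metal + m_H).
# Molar masses taken from the periodic table (g/mol).
M_H = 1.008
M_MG = 24.305

def wt_percent_H(metal_molar_mass, h_atoms_per_formula_unit):
    """Gravimetric hydrogen capacity of a hydride M H_x, in wt.% H."""
    m_h = h_atoms_per_formula_unit * M_H
    return 100.0 * m_h / (metal_molar_mass + m_h)

wt_mgh2 = wt_percent_H(M_MG, 2)  # MgH2
print(round(wt_mgh2, 1))  # ~7.7 wt.% H, matching the figure quoted for Mg
```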
One of the most promising elements that can reversibly absorb and desorb a significant amount of hydrogen is magnesium, which has an intrinsic gravimetric storage capacity of 7.7 wt.% H. In spite of its excellent gravimetric storage capacity, the high desorption temperature (279 °C) and extremely slow hydrogen (de)sorption kinetics prevent Mg from being employed commercially. Mg is, however, often a large constituent of new hydrogen storage materials, as it lowers the weight of the material and therefore increases the gravimetric capacity, which is necessary to fulfill the weight restrictions. In this thesis the hydrogen storage characteristics of Mg alloyed with other metals are addressed. The primary aim is to achieve high absorption and desorption rates while limiting the weight of the alloys. Chapter 2 describes the experimental settings of the thin-film preparation methods and characterization techniques. The thin films were prepared by means of electron-beam deposition and magnetron co-sputtering and subsequently investigated by means of Rutherford Backscattering Spectroscopy to accurately determine the film thickness and composition. Electrochemistry was used as the main tool to investigate the hydrogen storage properties of the films in detail. One of the advantages of using electrochemistry is that the electrochemical equilibrium potential can be used to calculate the equivalent hydrogen partial pressure, which gives information about the thermodynamics of the metal-hydrogen system. The electrochemical setup is not straightforward, as it requires a special three-electrode setup to obtain reliable experimental data. The experimental pitfalls and solutions needed to avoid incorrect electrochemical analyses, such as the need for an oxygen scrubber, are described in detail. By applying a fixed current, which is equivalent to a fixed (de)hydrogenation rate, the possibility of rapidly inserting or extracting hydrogen from the hydrogen-absorbing medium can be addressed.
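The conversion from equilibrium potential to equivalent hydrogen partial pressure mentioned above follows from the Nernst equation. The sketch below is a hedged illustration: the reference value E_ref0 = -0.931 V (the MH electrode potential vs. Hg/HgO in strong KOH at 1 bar H2) is a commonly quoted number we assume here, not one taken from this thesis:

```python
import math

# Hedged sketch: converting a measured equilibrium electrode potential into
# an equivalent hydrogen partial pressure via the Nernst equation.
# E_REF0 = -0.931 V is an assumed literature value, not from this thesis.
R = 8.314      # J/(mol K)
F = 96485.0    # C/mol
T = 298.15     # K
E_REF0 = -0.931  # V vs Hg/HgO at P_H2 = 1 bar (assumption)

def equivalent_h2_pressure(e_eq):
    """Equivalent H2 pressure (bar) for equilibrium potential e_eq (V vs Hg/HgO)."""
    # E_eq = E_REF0 - (RT / 2F) * ln(P_H2)  =>  P_H2 = exp(2F * (E_REF0 - E_eq) / RT)
    return math.exp(2 * F * (E_REF0 - e_eq) / (R * T))

p_1bar = equivalent_h2_pressure(-0.931)
p_10bar = equivalent_h2_pressure(-0.931 - 0.0296)  # ~29.6 mV per decade at 25 C
print(p_1bar, p_10bar)  # ~1 bar and ~10 bar
```

The ~29.6 mV-per-decade slope at room temperature is what makes small potential shifts so informative about the plateau pressures of a metal-hydrogen system.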
Electrochemical control also offers the possibility to calculate and tune the hydrogen content in the films with high precision. The former was used to determine whether the materials are interesting from a gravimetric point of view, while the latter was adopted in combination with other characterization techniques, such as X-ray diffraction, which provides new insights into the effects of the hydrogen content on the host material. The theoretical background and experimental settings of several electrochemical techniques, e.g. amperometry, cyclic voltammetry, the Galvanostatic Intermittent Titration Technique and impedance spectroscopy, are discussed. X-ray diffraction was used throughout the thesis to resolve the crystallography of the phases in the as-prepared samples. To acquire crystallographic data as a function of the hydrogen content, custom-made in situ X-ray diffraction setups were used. The theoretical background of X-ray diffraction and a detailed description of the experimental setups and settings are given. A Pd topcoat is often applied to hydride-forming thin-film materials to protect them from oxidation and to catalyze the dissociation of H2 or electrocatalyze the reduction of H2O. As a 10 nm Pd caplayer was applied to all Mg-based alloys described in this thesis, it is useful to determine its thermodynamic and electrocatalytic properties separately, which is presented in Chapter 3. A lattice gas model was presented recently and successfully applied to simulate the absorption/desorption isotherms of various hydride-forming materials. The simulation results are expressed by parameters corresponding to several energy contributions, e.g. interaction energies. The use of a model system is indispensable in order to show the strength of these simulations. The palladium-hydrogen system is one of the most thoroughly described metal hydrides found in the literature and is therefore ideal for this purpose.
The effects of decreasing the Pd thickness on the pressure-composition isotherms were monitored experimentally and subsequently simulated. An excellent fit of the lattice gas model to the experimental data was obtained, and the corresponding parameters were used to describe several thermodynamic properties. It was found that the contribution of H-H interaction energies to the total energy and the influence of the host lattice energy change significantly and systematically as a function of Pd thickness. In conclusion, it was verified that the lattice gas model is a useful tool to analyze the thermodynamic properties of hydrogen storage materials. Also, the electrocatalytic properties of a 10 nm thick Pd film were determined by means of electrochemical impedance spectroscopy, which revealed that the best electrocatalytic properties are found for β-phase Pd hydride. Determining the properties of a single-layer 10 nm thick Pd film was valuable, as it was used to determine its influence on the Pd-coated Mg-based thin-film alloys that were the topic of investigation for the remainder of the thesis. Recently, a thin-film approach revealed that new lightweight alloys of Mg with Ti, V or Cr can be prepared that cannot be synthesized via standard alloying techniques, because the alloys are thermodynamically unstable. Electrochemical measurements showed that especially the Mg-Ti system possesses the ability to reversibly store a considerable amount of hydrogen, which can be absorbed and desorbed at relatively high rates compared to pure Mg. The systematic investigation of the hydrogen storage properties of the binary MgyTi1-y alloy composition is described in Chapter 4. It is shown from X-ray diffraction (XRD) measurements that as-prepared electron-beam deposited and sputtered MgyTi1-y thin films with y ranging from 0.50 to 1.00 are crystalline and single-phase.
Galvanostatic (de)hydrogenation measurements were performed to unveil the effects of the Mg-to-Ti ratio on the hydrogen absorption and desorption rates. Increasing the Ti content up to 15 at.% does not change these rates much, and hydrogen can only be desorbed at a relatively low rate. Beyond 15 at.% Ti, however, the hydrogen desorption rate increases substantially. A superior reversible hydrogen storage capacity that exceeds 6 wt.% H, along with excellent hydrogen absorption and desorption rates, was found for the Mg0.80Ti0.20 alloy. The close analogy between the electrochemical behavior of MgyTi1-y and MgySc1-y alloys points to a face-centered cubic-structured hydride for the alloys showing fast hydrogen uptake and release rates, whereas the hydrides of alloys rich in Mg (>80 at.%), which show a slow desorption rate, probably crystallize in the common body-centered tetragonal MgH2 structure. The cycling stability of electron-beam deposited and sputtered thin-film Mg0.80Ti0.20 alloys was found to be constant over the first 10 cycles; thereafter it decreased sharply, caused by delamination of the film from the substrate. The intrinsic cycling stability is therefore expected to be higher. Isotherms of MgyTi1-y thin films showed that the desorption plateau pressure is not strongly affected by the Mg-to-Ti ratio and is almost equal to the equilibrium pressure of the magnesium-hydrogen system. Impedance analyses showed that the surface kinetics can be fully attributed to the Pd topcoat. The impedance, when the MgyTi1-y thin-film electrodes are in their hydrogen-depleted state, was found to be dominated by the transfer of hydrogen across the Pd/MgyTi1-y interface. In Chapter 4 it was argued that the symmetry of the crystal lattice of the host material probably strongly affects the hydrogen uptake and release rates. The largest difference in the (de)hydrogenation rates was found for MgyTi1-y alloys containing 70 to 90 at.% Mg.
Therefore, the crystallography of these alloy compositions was resolved by in situ XRD, and the results are presented in Chapter 5. Firstly, in situ gas-phase XRD measurements were performed to identify the crystal structures of as-deposited and hydrogenated MgyTi1-y thin-film alloys. The preferred crystallographic orientation of the films in both the as-prepared and hydrogenated state made it difficult to unambiguously identify the crystal structure, and therefore the identification of the symmetry of the unit cells was achieved by recording in situ XRD patterns at various tilt angles. The results reveal a hexagonal close-packed structure for all alloys in the as-deposited state. Hydrogenating the layers under 1 bar H2 transforms the unit cell into face-centered cubic for the Mg0.70Ti0.30 and Mg0.80Ti0.20 compounds, whereas the unit cell of hydrogenated Mg0.90Ti0.10 has a body-centered tetragonal symmetry. The (de)hydrogenation kinetics changes along with the crystal structure of the hydrides, from rapid for face-centered cubic-structured hydrides to sluggish for hydrides with body-centered tetragonal symmetry, emphasizing the influence of the symmetry of the crystal lattice on the hydrogen transport properties. Qualification: Doctor of Philosophy. Date of award: 31 March 2009. Place of publication: Eindhoven. Status: Published - 2009.
The Mach Effect was hypothesised by James F. Woodward, who proposed that energy-storing ions experience transient mass fluctuations when accelerated. Unlike conventional technologies, drives based on the Mach Effect do not need to release matter in order to generate thrust. Woodward explains that these transient mass fluctuations are caused by relativistic effects. These fluctuations can then be used in what are known as "impulse engines", which do not contain any moving components. Concept of Operation Woodward made the following assumptions in his derivation of the Mach Effect: - A mass experiences inertia while being accelerated - Inertial reaction forces in objects subjected to accelerations are produced purely by the interaction of the accelerated objects with a field - Any acceptable physical theory must be locally Lorentz invariant; that is, in sufficiently small regions of spacetime, the special relativity theory (SRT) must hold Woodward tried to support this theory by stating that a capacitor's mass changes with its charge. He substantiated this by explaining that the underlying cause of inertia is the gravitational force of attraction of all masses. As such, if we were to oscillate an object along a path and in the process vary its mass (e.g. the mass is higher in one direction of oscillation and lower in the opposite direction), then there exists a net force in one direction. This is because the inertia of the object changes as its mass changes. Feasibility—Overview of Studies/Experiments Woodward conducted an experiment which uses the Mach Effect to produce a "pulsed thrust". Woodward claimed that it is possible to "produce a measurable stationary effect" if we were to couple the mass fluctuation to a "synchronised pulsed thrust". The figure on the right illustrates the set-up Woodward used for his experiment. The mass fluctuation required in the capacitor array is produced using an AC voltage.
The piezoelectric force transducer then reacts to this and hence causes the capacitor array to oscillate in a synchronous manner. The reaction force FR on the piezoelectric force transducer and the external casing then follows from Newton's 2nd Law of Motion: FR = MC × AC, where MC is the instantaneous mass of the capacitor array, and AC is the acceleration of the capacitor array due to the piezoelectric force transducer. [Graph illustrating mass fluctuations] If the fluctuation in mass and the acceleration of the capacitor array are sinusoidal and have a constant phase relation, then FR has a non-zero time average, i.e. it is a stationary effect. To measure FR, the set-up is placed on a shaft with a vertical position sensor which allows measurements of the instantaneous mass of the capacitor array. From the graph on the right, we can see that during the period of time when the set-up is activated (7–12 seconds), there is an obvious mass fluctuation. Woodward also found that this result is not produced when the capacitor array and piezoelectric force transducer are not working together. Thus, Woodward's experiment does present a strong case for the Mach Effect to be possibly used to produce thrust, and subsequently in rocket drives. We now evaluate the performance of this technology using results from Paul March's Mach-2MHz experimental set-up. Firstly, the set-up lasted only a few minutes, which is the first stumbling block to a feasible rocket drive. Secondly, however, the results from the experiment showed a very high specific impulse of IESP = 13.62×10^12 s. This is superior to that of the Space Shuttle's main engine (SSME), which has a specific impulse of 454 s. Thirdly, the thrust-to-weight ratio of the set-up was a mere 7.44×10^-4, compared to the SSME's ratio of 73.12. Finally, we seek to evaluate the possible trajectory of the technology.
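The claim that a sinusoidal mass fluctuation with a constant phase relation to a sinusoidal acceleration yields a stationary (non-zero time-averaged) FR can be checked numerically. The sketch below uses made-up amplitudes, not Woodward's measured values:

```python
import math

# Toy check of the stationary-force argument: F_R(t) = M_C(t) * A_C(t),
# with M_C = M0 + dM*sin(wt) and A_C = A0*sin(wt + phi).
# All numbers below are illustrative assumptions, not experimental values.
M0 = 0.1        # rest mass of capacitor array (kg)
dM = 1e-6       # mass fluctuation amplitude (kg)
A0 = 50.0       # acceleration amplitude (m/s^2)
f = 1000.0      # drive frequency (Hz)
phi = 0.0       # constant phase between mass fluctuation and acceleration

samples_per_period = 1000
periods = 100
N = samples_per_period * periods
w = 2 * math.pi * f
dt = 1.0 / (f * samples_per_period)

# Time-average F_R over a whole number of drive periods.
avg = sum((M0 + dM * math.sin(w * i * dt)) * (A0 * math.sin(w * i * dt + phi))
          for i in range(N)) / N

# The large M0*A0 term averages to zero over whole periods; the surviving
# cross term is 0.5 * dM * A0 * cos(phi) -- a small but stationary net force.
expected = 0.5 * dM * A0 * math.cos(phi)
print(avg, expected)
```

The cross term vanishes if the phase relation drifts to 90° or if there is no mass fluctuation at all, consistent with Woodward's observation that no effect appears when the capacitor array and transducer are not working together.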
Results are unconfirmed as yet, but it is proposed that at a constant acceleration of 1 g, a Mach Effect based rocket drive would take about 4 hours to travel from geosynchronous orbit to the Moon. Newer Designs: Unidirectional Force Generator Versus Mach Lorentz Thruster The Unidirectional Force Generator (UFG) is the term used for Woodward's method of using a piezoelectric transducer to oscillate capacitors in phase with their changing mass. However, it has the following problems: a) The frequency of oscillation is limited to the kHz range (considered slow). b) The UFG is known to have an acoustic destructive wave interference problem. Thus, Paul March has proposed a newer design, termed the Mach Lorentz Thruster (MLT). It aims to solve the interference problem and does not have any moving parts. Here, the role of the piezoelectric transducer is replaced by the electromagnetic response of a magnetic field on a moving charge (the Lorentz force). This force is synchronised with the fluctuations of the capacitor voltage, consequently allowing a consistent phase relation between the mechanical forces and the energy to be maintained. However, results are not impressive, mainly because March's MLT design did not produce acceleration for the whole set-up. Research and development on MLTs continues. Wormholes from the Mach Effect If the apparent mass of the oscillating capacitors becomes negative, their direction of inertia reverses compared to normal gravitational matter. This could subsequently be manipulated to open up a wormhole or even an Alcubierre spacetime warp bubble, which could allow space travel faster than the speed of light. Problems and Evaluation The first problem is that the Mach Effect appears to disobey the Law of Conservation of Momentum.
Woodward rebuts this claim by explaining that since inertia is due to the mutual gravitational force of attraction between any masses, any system that allows variations in mass to change inertia and hence produce acceleration is using the "mass of the Universe as the reaction mass". From this point of view, the Law of Conservation of Momentum is obeyed. In theory, the Mach Effect does introduce new possibilities in rocket drive technology. Much research and development therefore has to be done, and rightly so, since the theorised benefits are potentially significant. Yet from March's experimental results, the performance of existing Mach Effect set-ups is far inferior to current rocket technology. Thus the results are still inconclusive on whether these theorised capabilities are actually achievable. - "Mach Effect: Interview with Paul March and an Update on the Work of James Woodward". Next Big Future. 4 Sep 2009. <http://nextbigfuture.com/2009/09/mach-effect-interview-with-paul-march.html> - Woodward, J. F., Foundations of Physics Letters, 4:407-423 (1991) - Woodward, J. F., Foundations of Physics Letters, 5:425-442 (1992) - Woodward, J. F., Foundations of Physics Letters, 9:247-293 (1996) - Woodward, J. F. "Mach's Principle and Impulse Engines: Toward a Viable Physics of Star Trek?". The NASA Breakthrough Propulsion Physics Workshop. 12-14 Aug 1997. <http://physics.fullerton.edu/~jimw/nasa-pap/> - Woodward, J. F., "Method for Transiently Altering the Mass of an Object to Facilitate Their Transport or Change their Stationary Apparent Weights". US Patent # 5,280,864. 25 Jan 1994 - "Mach Effect Part II". Next Big Future. 9 Sep 2009. <http://nextbigfuture.com/2009/09/mach-effect-part-ii.html> - March, Paul. "Stair Steps to Stars". May 2002. <http://www.cphonx.net/weffect/Stair-Steps-to-Stars-5-1.ppt> - "Space Shuttle Main Engines". Wikipedia. <http://en.wikipedia.org/wiki/Space_shuttle_main_engines> - Ventura, Tim. "Mach's Principle Evolves". American Antigravity.
13 Dec 2005. <http://www.americanantigravity.com/articles/machs-principle-evolves.html>
As is the case with most evolution-inspired ideas, the more we learn about the natural world, the more it becomes obvious that there is very little “junk DNA” in nature. A recently-published study of gender in mice highlights this fact. In the study, an international collaboration of scientists examined the development of sexual characteristics in mice. As you probably already know, in mammals there is a pair of chromosomes referred to as sex chromosomes. If an individual has an X chromosome and a Y chromosome in that pair, he is a male. If the individual has two X chromosomes, she is a female. But the development of the proper characteristics associated with each sex depends on what happens during embryonic development. For example, as a mammal embryo develops, it starts out producing ovaries. However, there is a gene on the Y chromosome called Sry. It produces a protein that controls the production of another protein, called SOX9. The SOX9 protein turns developing ovaries into testes. A male develops testes, then, because of the action of a gene on the Y chromosome. But as this latest study shows, there is more to it than that. The scientists removed a small section of DNA from genetically-male mice. This section is found in what the authors refer to as a “gene desert,” a section of DNA that is devoid of genes. Nevertheless, when that small section of DNA was deleted, the genetically-male mice developed ovaries and female genitalia. Now please understand that the genes involved in the production and regulation of the SOX9 protein were not removed; only a small portion of what many would call “junk DNA” was removed. Nevertheless, without that section of DNA, the genetically-male mice did not produce enough SOX9 protein, so the ovaries continued to develop into ovaries, which then caused the production of female genitalia. 
As a result, the authors refer to this small section of DNA as a SOX9 “enhancer.” It enhances the production of SOX9 at just the right time, so the males develop the correct gender characteristics. While the results of this study are fascinating, they are not surprising. After all, it has become more and more clear that the concept of “junk DNA” is a myth. As a result, it makes sense that even small sections of DNA have important functions, at least in certain stages of development or under certain conditions. The reason I am blogging about the study is because of something the lead author said in an article that was published on his institution’s website: Our study also highlights the important role of what some still refer to as ‘junk’ DNA, which makes up 98% of our genome. If a single enhancer can have this impact on sex determination, other non-coding regions might have similarly drastic effects. For decades, researchers have looked for genes that cause disorders of sex development but we haven’t been able to find the genetic cause for over half of them. Our latest study suggests that many answers could lie in the non-coding regions, which we will now investigate further. …the failure to recognise the implications of the non-coding DNA will go down as the biggest mistake in the history of molecular biology.
What is Esophageal Cancer? Esophageal cancer is the uncontrolled growth of abnormal cells in the esophagus, which is a flexible tube connecting the throat to the stomach. Generally between 10 and 13 inches long, the esophagus contracts when one swallows, to push food down into the stomach. Mucus helps move this process along. Ninety percent of esophageal cancers are one of two types: squamous cell or adenocarcinoma. Squamous cell refers to cancers that originate in the cells that line the esophagus; adenocarcinoma begins in the part of the esophagus that joins the stomach. Symptoms of Esophageal Cancer Some people do not notice any symptoms until late in the disease. However, symptoms may include: - Difficulty swallowing - Hoarseness or long-lasting cough - Regurgitating blood - Weight loss with unknown cause - Pain in the throat or back Causes of Esophageal Cancer The causes are not fully understood, but scientists have discovered several likely contributing factors. These include: - Advancing age. People over age 60 are more likely to develop the disease. - Gender. This cancer is more common in men than women. - Tobacco use. Smoking cigarettes, cigars, pipes, or using snuff or chewing tobacco greatly increases risk. For those who both smoke and drink, the risk is highest. - Acid reflux. When stomach acids flow back into the esophagus, irritation occurs. Over time, this irritation can lead to problems, including a condition called Barrett's esophagus, where cell changes often lead to cancer. - Previous history of head or neck cancers. - An unhealthy lifestyle, which means being overweight or eating a diet low in fruits, vegetables, and whole grains.
A brief lesson in physics is needed. Don't be scared. No math. Think of an antenna as the launching point for radio waves. If the antenna is a single element, those radio waves will depart at equal levels in all directions. A car's AM/FM whip antenna is an example of this. Remember the days of rooftop TV antennas? They had many elements. That wasn't just to make them more unsightly! Antennas may also include reflective or directive elements or surfaces not connected to the transmitter or receiver, such as parasitic elements, parabolic reflectors or horns, which serve to direct the radio waves into a beam or other desired radiation pattern. – Wikipedia That's exactly what's going on at the cell site. Why bother sending signals where they're wasted? With a directional antenna you can redirect power to where it's needed. In most cases that means concentrating nearly all of a cell tower's power parallel to, or aimed slightly down toward, the ground. Pointing down slightly is necessary because of the Earth's natural curve. The extra gain in the antenna's beams, used to increase signal strength to us on the ground, is taken from what would otherwise radiate upward!
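To put a rough number on "slightly down", the required downtilt can be estimated with simple geometry: aim at the ground a given distance away, plus a small extra tip because the Earth's surface curves away by about d/(2R) radians over a distance d. The tower height and coverage distance below are made-up illustrative values:

```python
import math

# Hedged geometry sketch: how much downtilt a cell antenna needs so its
# main beam meets the ground at a given distance. Illustrative numbers only.
EARTH_RADIUS = 6371000.0  # m

def downtilt_deg(tower_height_m, target_distance_m):
    # Flat-earth term: aim the beam down at the target point on the ground.
    flat = math.atan2(tower_height_m, target_distance_m)
    # Earth-curvature term: the ground "falls away" by ~d/(2R) radians,
    # so the beam must tip down slightly further, as the text notes.
    curvature = target_distance_m / (2 * EARTH_RADIUS)
    return math.degrees(flat + curvature)

tilt = downtilt_deg(50.0, 5000.0)  # a 50 m tower serving ~5 km (assumed)
print(round(tilt, 3))
```

Even at 5 km the total tilt is well under one degree, which is why very small tilt adjustments noticeably move a cell's coverage edge.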
Salt Marsh Carbon May Play Role in Slowing Climate Warming ScienceDaily (Sep. 26, 2012) — A warming climate and rising seas will enable salt marshes to more rapidly capture and remove carbon dioxide from the atmosphere, possibly playing a role in slowing the rate of climate change, according to a new study led by a University of Virginia environmental scientist and published in the Sept. 27 issue of the journal Nature. Carbon dioxide is the predominant so-called "greenhouse gas" that acts as a sort of atmospheric blanket, trapping Earth's heat. Over time, an abundance of carbon dioxide can change the global climate, according to generally accepted scientific theory. A warmer climate melts polar ice, causing sea levels to rise. [Figure: Aerial view of a salt marsh at Virginia's Eastern Shore. (Credit: Fariss Samarrai)] A large portion of the carbon dioxide in the atmosphere is produced by human activities, primarily the burning of fossil fuels to energize a rapidly growing world human population. "We predict that marshes will absorb some of that carbon dioxide, and if other coastal ecosystems -- such as seagrasses and mangroves -- respond similarly, there might be a little less warming," said the study's lead author, Matt Kirwan, a research assistant professor of environmental sciences in the College of Arts & Sciences. Salt marshes, made up primarily of grasses, are important coastal ecosystems, helping to protect shorelines from storms and providing habitat for a diverse range of wildlife, from birds to mammals, shell- and fin-fishes and mollusks. They also build up coastal elevations by trapping sediment during floods, and produce new soil from roots and decaying organic matter. "One of the cool things about salt marshes is that they are perhaps the best example of an ecosystem that actually depends on carbon accumulation to survive climate change: The accumulation of roots in the soil builds their elevation, keeping the plants above the water," Kirwan said.
Salt marshes store enormous quantities of carbon, essential to plant productivity, by, in essence, breathing in the atmospheric carbon and then using it to grow, flourish and increase the height of the soil. Even as the grasses die, the carbon remains trapped in the sediment. The researchers' model predicts that under faster sea-level rise rates, salt marshes could bury up to four times as much carbon as they do now. "Our work indicates that the value of these ecosystems in capturing atmospheric carbon might become much more important in the future, as the climate warms," Kirwan said. But the study also shows that marshes can survive only moderate rates of sea level rise. If seas rise too quickly, the marshes could not increase their elevations at a rate rapid enough to stay above the rising water. And if marshes were to be overcome by fast-rising seas, they no longer could provide the carbon storage capacity that otherwise would help slow climate warming and the resulting rising water. "At fast levels of sea level rise, no realistic amount of carbon accumulation will help them survive," Kirwan noted. Kirwan and his co-author, Simon Mudd, a geosciences researcher at the University of Edinburgh in Scotland, used computer models to predict salt marsh growth rates under different climate change and sea-level scenarios. The United States Geological Survey's Global Change Research Program supported the research. Reference: Matthew L. Kirwan, Simon M. Mudd. Response of salt-marsh carbon accumulation to climate change. Nature, 2012; 489 (7417): 550. DOI: 10.1038/nature11440
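The threshold behaviour the article describes (marshes keep pace with moderate sea-level rise but drown under fast rise) can be caricatured in a few lines. This is only a toy illustration with made-up parameters, not the Kirwan-Mudd model:

```python
# Toy illustration of the marsh-drowning threshold -- NOT the Kirwan-Mudd
# model. Accretion increases with flooding depth but saturates; if sea-level
# rise outruns the maximum accretion rate, the marsh drowns. All numbers
# are assumptions chosen only to show the qualitative behaviour.
MAX_ACCRETION = 10.0  # mm/yr: fastest the marsh can build elevation (assumed)

def marsh_survives(slr_mm_per_yr, years=200):
    """True if the marsh surface stays within reach of mean high water."""
    depth = 100.0  # mm below mean high water at the start (assumed)
    for _ in range(years):
        accretion = min(MAX_ACCRETION, 0.05 * depth)  # depth-dependent growth
        depth += slr_mm_per_yr - accretion
        if depth > 1000.0:  # surface left far below water: marsh drowns
            return False
    return True

survives_moderate = marsh_survives(5.0)   # rise below the accretion ceiling
survives_fast = marsh_survives(20.0)      # rise above the accretion ceiling
print(survives_moderate, survives_fast)
```

Below the accretion ceiling the marsh finds a stable equilibrium depth; above it, no amount of accretion keeps up, which is the qualitative point Kirwan makes about fast sea-level rise.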
Another aspect in which the French influence is evident is spelling, where new conventions for various sounds created new spellings. For example, for the 's' sound, the French-influenced spelling 'fisshe' would eventually yield the English word 'fish'. The French also contributed affixes to English over this period: most of the prefixes and suffixes common today became so after the Norman-French invasion of England. For example, the prefixes 're-' and 'de-' and suffixes like '-ment' and '-ful' were borrowed from the French language. The use of prepositions to show possession was one of the previously non-existent syntactic constructions introduced by the French, modeled on the French particle 'de' (Tokar, Alexander). Another construction that emerged during this period was the perfect infinitive, with its influence coming from both Latin and French. The modern second-person pronoun also emerged during this period, with its major influence coming from the French: 'you' can be used to address one person or several. Because the newly borrowed French words were unfamiliar, their stems were simplified. The roots and affixes formed a weak unit which could be broken easily; 'baron' and 'lieutenant' illustrate this. The word 'lieutenant' was formed by joining two separate roots: the first, 'lieu', means 'place' and is of Latin origin; the second, 'tenant', is formed from 'tenir', which also has Latin origins and means 'to keep'. This word shows how stems can be simplified to identify the roots and consequently recognize borrowed words.

Conclusion

The Norman Conquest, led by Duke William II, had a huge effect on the English language. As noted above, the invasion greatly influenced the grammar of the natives.
The presence of the French and their language in daily conversation with the natives increased the use of words the English people borrowed from French. Developments in morphology, combined with syntax, expanded the pool from which words could be derived and sentences constructed, ensuring effective communication between the rulers and the natives. These borrowed words became generally accepted in society and were used a great deal during this period. However, even though the syntax borrowed from French helped in constructing sensible sentences, it was largely confined to official use, for instance in courts, offices and other official settings. Even though the use of the French language was not universal, its impact on the English language is immeasurable given its magnitude. Despite some authors' claims that the Norman Conquest was of little consequence, there is ample evidence that the invasion influenced the English language.
What is contact tracing? How can it help to control the coronavirus outbreak?

Contact tracing in the news: The coronavirus pandemic has already claimed hundreds of lives in India. The Government of India is trying all possible measures to stop the outbreak; however, new cases are still appearing every day. Now, authorities in various parts of the country are relying on contact tracing to break the chain of COVID-19. Contact tracing was used on a large scale during the Ebola virus outbreak in Africa, and Singapore has developed an app to identify and isolate people who test positive for coronavirus. Contact tracing can make it easier for the authorities to locate positive cases.

What is contact tracing? If a person is in close contact with someone infected with a virus such as coronavirus or Ebola, that person can also get infected. The process of closely monitoring these contacts is called contact tracing. It can be broken down into three basic steps:

- Contact identification: The first step is to identify the contacts of a confirmed patient. These contacts are identified by asking about the person's activities since the onset of illness. Contacts can be anyone who has been near the infected person: family members, work colleagues, friends, or health care providers.
- Contact listing: Next, a list is made of all the people who have been in contact with the infected person. Efforts should be made to reach every listed contact and to inform them of their contact status, the actions that will follow, and the importance of receiving early care if they develop symptoms. Contacts should also be provided with information about prevention of the disease. In some cases, quarantine or isolation is required for high-risk contacts, either at home or in hospital.
- Contact follow-up: Regular follow-up should be conducted with all contacts to monitor for symptoms and test for signs of infection.

How can contact tracing help control the coronavirus pandemic?
According to experts, contact tracing is most useful when there are only a few cases; once there are many cases, everyone needs to be checked. Many countries have imposed lockdowns, but in countries where a lockdown has not yet been imposed, contact tracing can be especially helpful.
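The three steps above amount to a traversal of a contact graph: start from a confirmed case, list everyone they have been near, and follow up out to a chosen number of links. The sketch below is a hypothetical illustration only (real programs rely on interviews, apps, and health databases, not a dictionary); all names and the graph itself are invented.

```python
from collections import deque

# Hypothetical contact graph: person -> people they have recently been near.
contacts = {
    "patient0": ["alice", "bob"],
    "alice": ["carol"],
    "bob": [],
    "carol": ["dave"],
    "dave": [],
}

def trace_contacts(index_case, contacts, max_depth=2):
    """List everyone within `max_depth` contact links of a confirmed case."""
    seen = {index_case}
    queue = deque([(index_case, 0)])
    to_monitor = []                      # the "contact list" to notify and follow up
    while queue:
        person, depth = queue.popleft()
        if depth == max_depth:
            continue                     # stop expanding beyond the chosen depth
        for other in contacts.get(person, []):
            if other not in seen:
                seen.add(other)
                to_monitor.append(other)
                queue.append((other, depth + 1))
    return to_monitor

monitored = trace_contacts("patient0", contacts)
```

With `max_depth=2`, the traversal finds the direct contacts and their contacts, which mirrors the identification and listing steps; the follow-up step would then operate on the returned list.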
High above the clouds, satellites have measured the solar intensity to be 1.366 kW per square meter. As the sunlight filters through the atmosphere, some of the energy is reflected back into space while some is absorbed by the air. The solar energy that reaches the surface of the earth averages about 1 kW per square meter. This immense amount of energy powers the wind, weather and water cycles that influence the entire world. Solar panels can tap into this energy, but their output is greatly affected by the solar intensity (the strength of the sunlight). Here are some factors that reduce the solar intensity. At high noon, the angle at which the sun's rays strike the earth's surface is most direct and the sunlight feels hot; solar panels produce their peak amount of electricity at this time of day. At sunrise and sunset, the angle is less direct and the sunlight feels weak; less power is generated when the sun strikes the panels at a glancing angle. Because the earth's axis is tilted as it orbits the sun, the sun's height above the horizon changes with the seasons: solar panels produce more power in summer, when the sun is high above the horizon, and less power in winter, when the sun is low. In regions near the equator, the sunlight is most direct; the further you move from the equator, the more seasonal variation you will see in solar power production. Atmospheric conditions also reduce the intensity of the sunlight before it reaches the surface of the earth: air pollution may absorb up to 5% of the solar energy, while clouds may absorb up to 30%. Shadows from trees and other tall obstacles can reduce the output of solar panels by up to 70%, even if only a portion of the solar array is blocked. Debris such as leaves and dust can also cut the solar cells' electricity production by up to 10%. You don't need complex equipment to measure solar intensity.
Before installing a solar array and storage system, a small, inexpensive solar panel and data collector can be used to give you a general idea of how much solar energy is available at that location. It will record voltage, current and the time of day, and will show how electricity production changes from day to day and season to season. Place it in an open, unobstructed location, in the same area where you plan to install the larger solar panels. Solar intensity is not the same as air temperature: even on a cold winter day, if the solar panels face directly towards the sun, the solar cells will still produce electricity. An automatic mechanical system with sensors, called a solar tracker, can be used to keep the panels at the best possible angle, but the extra components may be cost-prohibitive for most household applications. A fixed-mount position for solar panels is the easiest and most affordable option for residential use.
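The angle effect described above can be roughly sketched with a cosine model: the light delivered to a flat panel scales with the cosine of the angle between the sun's rays and the panel's normal. The function name, the simple loss term, and all example numbers below are illustrative assumptions, not a real performance model.

```python
import math

SURFACE_INTENSITY = 1.0  # kW per square meter at the surface, from the text

def panel_power(area_m2, efficiency, incidence_deg, cloud_loss=0.0):
    """Approximate electrical output (kW) of a flat solar panel.

    incidence_deg : angle between the sun's rays and the panel's normal
                    (0 = sun directly facing the panel, 90 = grazing)
    cloud_loss    : fraction of light absorbed by clouds/pollution (0..1)
    """
    if not 0 <= incidence_deg <= 90:
        return 0.0  # sun behind the panel
    irradiance = SURFACE_INTENSITY * math.cos(math.radians(incidence_deg))
    return area_m2 * efficiency * irradiance * (1 - cloud_loss)

noon = panel_power(10, 0.20, 0)      # 10 m^2 of 20%-efficient panels, sun head-on
evening = panel_power(10, 0.20, 75)  # low sun: output falls off sharply
```

This is why a solar tracker helps: it keeps the incidence angle near zero for more of the day, at the cost of the extra hardware mentioned above.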
History of millions of Poles

The history of departures from the Polish lands stretches back hundreds of years. People traveled to different parts of the world for sustenance, in search of freedom, or for a different life. After Poland regained its independence, this situation remained unchanged. The journey was made on foot, by rail, aboard ships or – later – airplanes. After Poland joined the European Union, emigration became the experience of a generation of millions of young Poles. Today, almost everyone knows someone who chose emigration, and there are more than 20 million people of Polish descent living outside the country. What do we know about one of the most important phenomena in Polish history? Can we save from oblivion the memory of millions of people who instilled in their children and grandchildren the remembrance of Poland? Can we feel what other Poles felt as they were leaving their homes at the end of the 18th century? Can we understand what it meant to emigrate at the beginning of the 21st century? And what does emigration mean in the era of air travel?

The only such place in Poland

Gdynia is witnessing the birth of the first museum in the country dedicated to the history of Polish emigration. On the initiative of the city's authorities, the historic edifice of the Marine Station – which witnessed the departures of Polish ocean liners for decades – is becoming home to an institution that will recount the migrations and fates of Poles around the world, in close connection with the present day. The history of emigration is being written every day, and its multiple dimensions will be presented through our permanent exhibition. The mission of the Emigration Museum is to recount the fates of millions of people, both anonymous and famous – people whose names emerge in the context of great achievements in science, sports, business, and the arts.
It is the ambition of this institution to make them known to Poles at home, but it is also to encourage our compatriots living at home and abroad to get to know each other. Through educational and cultural projects, the museum hopes to become a place of encounter and discussion. We feel we fulfill a particular duty in achieving this end at the best possible address – Polska Street No. 1.
Description

When the War of 1812 once again made clear the need for coastal defense, Fort Pulaski (named for the U.S. colonial army officer Kazimierz Pulaski) was built (1829–47). Following its completion, the fort remained ungarrisoned until it was seized by Confederate troops in January 1861, just before the outbreak of the American Civil War. It was bombarded and captured by Union troops in 1862...

Fortification design

...were also capable of firing explosive shells. They did to the early modern fortress what cast-bronze cannon had done to the medieval curtain wall. In 1862 the reduction by rifled Union artillery of Fort Pulaski, a supposedly impregnable Confederate fortification defending Savannah, Ga., marked the beginning of a new chapter in the design of permanent fortifications.
The surface Web (also known as the visible Web or indexable Web) is that portion of the World Wide Web that is indexable by conventional search engines. The part of the Web that is not reachable this way is called the Deep Web. Search engines construct a database of the Web by using programs called spiders or Web crawlers that begin with a list of known Web pages. The spider gets a copy of each page and indexes it, storing useful information that will let the page be quickly retrieved again later. Any hyperlinks to new pages are added to the list of pages to be crawled. Eventually all reachable pages are indexed, unless the spider runs out of time or disk space. The collection of reachable pages defines the Surface Web.
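The crawling process described above can be sketched in a few lines. This is schematic only: real crawlers must handle politeness rules, robots.txt, deduplication, and enormous scale, and the tiny in-memory "web" below is a stand-in for actual HTTP fetches.

```python
from collections import deque

def crawl(seed_pages, fetch_links, limit=1000):
    """Index every page reachable from the seeds, up to `limit` pages.

    fetch_links(url) -> list of hyperlinked URLs found on that page
    """
    index = {}
    frontier = deque(seed_pages)        # list of pages still to be crawled
    while frontier and len(index) < limit:
        url = frontier.popleft()
        if url in index:
            continue                    # already indexed
        links = fetch_links(url)        # "download" and parse the page
        index[url] = links              # store for later retrieval
        frontier.extend(links)          # schedule newly discovered pages
    return index                        # the reachable ("surface") set

# A tiny in-memory web standing in for real pages:
web = {"a": ["b", "c"], "b": ["a"], "c": [], "hidden": []}
reachable = crawl(["a"], lambda url: web.get(url, []))
```

Note that `"hidden"` is never linked from any crawled page, so it stays outside the index: that is precisely the distinction between the surface Web and the Deep Web drawn above.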
Today is the anniversary of the ratification of the first written constitution in American history, the Fundamental Orders of Connecticut, which took place on January 14, 1639. The Fundamental Orders outlined the form of government that would be established over the Connecticut River Towns, enumerating its powers and describing the duties of citizens active in government. A fascinating document and a historic landmark for the development of constitutionalism in America, the Fundamental Orders, it should be said, did not include a bill of rights. We often take it for granted that individual rights must be spelled out in a written document if we want to be sure that they will be protected. The enumeration of rights has a long pedigree in Anglo-American law, stretching back at least to the sealing of Magna Charta in 1215 AD. So deeply engrained was this idea that even before the American Revolution, the English people, with their many charters, their Petition of Right and Bill of Rights, considered themselves to be the freest people on earth. Yet the meaning of a document like Magna Charta was never static, and the idea that lists of personal liberties like the Great Charter are an unambiguous good was never universal. Here are a few opinions from the age of the American Founding. Take for example the following piece of eloquence from James Wilson (1742-1798), speaking here as a representative from Philadelphia at Pennsylvania’s ratification convention in 1788. Wilson supported ratification of the federal constitution and opposed appending a bill of rights to it: “I confess I feel a kind of pride in considering the striking differences between the foundation on which the liberties of this country are declared to stand in this Constitution, and the footing on which the liberties of England are said to be placed. The Magna Charta of England is said to be an instrument of high value to the people of that country. But, Mr.
President, from what source does that instrument derive the liberties of the inhabitants of that kingdom? Let it speak for itself. The king says, “We have given and granted to all archbishops, bishops, abbots, priors, earls, barons, and to all the freemen of this our realm, these liberties following, to be kept in our kingdom of England, forever.” When this was assumed as the leading principle of that government, it was no wonder that the people were anxious to obtain bills of rights, and to take every opportunity of enlarging and securing their liberties. But here, Sir, the fee-simple remains in the people at large, and by this Constitution they do not part with it…. A bill of rights annexed to a constitution is an enumeration of the powers reserved. If we attempt an enumeration, everything that is not enumerated is presumed given. The consequence is, that an imperfect enumeration of the powers would throw all implied power into the scale of government, and the rights of the people would be rendered incomplete.” (Elliot’s Debates, pp. 435-436) This is to say that Wilson opposed emulating Magna Charta in the creation of our constitution. After all, Magna Charta presumes that the king has the power to grant, as much as to take away, the liberties of the citizen. The federal constitution, by contrast, derives its power from the people, and the enumeration of the powers the people have given it ought to be considered complete in the constitution as it stands. Therefore, since there is no presumption that people have surrendered all of their rights to a sovereign, there is no need to enumerate the rights they retain. Alexander Hamilton made the same argument in The Federalist No. 84: “It has been several times truly remarked that bills of rights are, in their origin, stipulations between kings and their subjects, abridgements of prerogative in favor of privilege, reservations of rights not surrendered to the prince. Such was MAGNA CHARTA, obtained by the barons, sword in hand, from King John.
Such were the subsequent confirmations of that charter by succeeding princes. Such was the PETITION OF RIGHT assented to by Charles I., in the beginning of his reign. Such, also, was the Declaration of Right presented by the Lords and Commons to the Prince of Orange in 1688, and afterwards thrown into the form of an act of parliament called the Bill of Rights. It is evident, therefore, that, according to their primitive signification, they have no application to constitutions professedly founded upon the power of the people, and executed by their immediate representatives and servants. Here, in strictness, the people surrender nothing; and as they retain every thing they have no need of particular reservations.” John Adams took a different attitude toward the Great Charter. Adams was not an ardent believer in democracy. One might have a monarchy or a democracy and it was all the same to him; for Adams, the question of a regime’s legitimacy depended on whether the government was government of law or government of men. By government of law, Adams meant steady adherence to a legal order that operated with an eye to the public good on the one hand, and that was not subjected to the caprice of some private interest on the other. In his 1787 Defense of the Constitutions of the United States he elaborates on this point: “If in England there has ever been such a thing as a government of laws, was it not magna charta? And have not our kings broken magna charta thirty times? Did the law govern when the law was broken? Or was that a government of men? On the contrary hath not magna charta been as often repaired by the people? And, the law being so restored, was it not a government of laws, and not of men?” (pg. 126) To supply the answers to the rhetorical questions Adams poses: a government of law is what we ought to desire, and therefore, Magna Charta, as opposed to its capricious abrogation at the hands of lawless English kings, is the model for our constitutionalism. 
The people did right by this principle when they insisted on restoring Magna Charta, but the fact that the law was secured by the power of the people was no particular moral trump. Democracies, like monarchies, can and do turn tyrannical. The so-called Anti-Federalists, on the other hand, tended to see Magna Charta as the most important model for safeguarding liberties against the possibility of a new federal government. An important lawyer of the founding generation, John Francis Mercer, voiced this outlook when he wrote the following in a letter to the members of the conventions of New York and Virginia in April or May of 1788: “The most blind admirer of this Constitution must in his heart confess that it is as far inferior to the British Constitution, of which it is an imperfect imitation [,] as darkness is to light – In the British Constitution, the rights of Men, the primary objects of the social Compact – are fixed on an immoveable foundation and clearly defined and ascertained by their Magna Charta, their Petition of Rights and Bill of Rights[;] and their Effective administration by ostensible Ministers, secures Responsibility – In this new Constitution – a complicated System sets responsibility at defiance and the Rights of Men, neglected and undefined are left at the mercy of events.” (Storing, v. 5, pg. 105) Governments, no matter how they claim to derive their legitimate powers, have a tendency to expand beyond their proper bounds at the expense of the people’s individual rights. Without a written pledge in the form of a Great Charter or a Bill of Rights, there is no clearly defined set of rights that are unassailable. And therefore, over time, no individual rights will be left unabridged. Philadelphensis, a pseudonymous Anti-Federalist, sees the lack of a bill of rights as the opening to tyranny of the worst sort.
Note that the outrages which he lists are specifically those prohibited by Magna Charta, namely, unlawful seizure of property, unlawful imprisonment, unlawful execution and the denial of trial by a jury of one’s peers: “To such lengths have these bold conspirators carried their scheme of despotism, that your most sacred rights and privileges are surrendered at discretion. When government thinks proper, under the pretense of writing a libel, &c. it may imprison, inflict the most cruel and unusual punishment seize property, carry on prosecutions, &c. and the unfortunate citizen has no magna charta, no bill of rights, to protect him; nay, the prosecution may be carried on in such a manner that even a jury will not be allowed him. Where is that base slave who would not appeal to the ultima ratio, before he submits to this government?” (Philadelphensis, no. 9) Magna Charta will celebrate its 800th birthday in 2015. Look for more news and blog posts on the heritage of English Liberties and Anglo-American Constitutionalism here on In Custodia Legis in the coming months.
If we use mathematical instruments or geometrical software to construct, we have to use similar triangles to calculate the length of SW first, which is more troublesome. They cover congruent triangles, similar triangles, circles and angles, circles and lines, basic facts and techniques in geometry, and geometry problems in competitions. Mathematical ideas like angle bisection, perpendicular bisectors, congruence of shapes and segments, properties of right triangles, similar triangles, reflection, and rotation become more tangible and vivid in the context of paper folding. In order to prove this result, we will use the similar triangles shown in the following figure. For example, all instances of collinear points and all instances of similar triangles are grouped together. For example, Cavanagh (2008) encouraged students to use ratio and the principle of similar triangles to measure the height of the school flagpole. The cases that do not appear in the list either cannot occur or lead to similar triangles. Wikipedia's entry about the history of trigonometry is there for all to read (on the website, several reliable sources are referenced at the bottom): "Pre-Hellenic societies such as the ancient Egyptians and Babylonians lacked the concept of an angle measure, but they studied the ratios of the sides of similar triangles and discovered some properties of these ratios." In particular, they focused on identifying similar triangles to determine the length of the height (figure 13). For example, teachers explain that light travels in a line or that the shadow cast by a person is related by similar triangles to that cast by a flagpole. Basic concepts in traditional geometry include similar triangles and the Pythagorean theorem. By now a solution idea was bubbling in my brain, based on a memory of working out heights of tall things from shadows and known smaller things, using the idea of similar triangles.
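The flagpole measurement mentioned above is a direct application of similar triangles: a person and a flagpole, together with their shadows, form triangles with the same angles, so the height-to-shadow ratio is equal for both. The numbers in the example below are invented for illustration.

```python
def height_from_shadow(ref_height, ref_shadow, target_shadow):
    """Similar triangles: height / shadow length is the same for both
    objects, so  target_height = ref_height * target_shadow / ref_shadow."""
    return ref_height * target_shadow / ref_shadow

# A 1.8 m person casts a 1.2 m shadow at the same moment the
# flagpole casts an 8.0 m shadow:
flagpole_height = height_from_shadow(1.8, 1.2, 8.0)  # 12.0 m
```

The same proportion underlies the pre-Hellenic side-ratio observations quoted above: equal angles force equal side ratios, whether or not one has a notion of angle measure.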
As books were to us, films are to our students. Like any text, a film can be read if you know the language and the signifiers that are used to make meaning. It is not enough to use film as an entertaining addition to our other texts. We need to approach it as a medium in itself with its own language, signifiers and methods. This way our students can still learn the important critical skills that they need while getting a solid understanding of visual literacy. Teaching visual literacy and film requires a whole new vocabulary. Often this is referred to as a meta-language. Whatever we call it, our students need to know it in order to speak and write analytically about film as text. This A4 poster covers the basic essentials.
Planck's large telescope collected the light from the Cosmic Microwave Background and focused it onto the focal plane of the scientific instruments on board. The primary mirror, 1.9 x 1.5 m across, weighed only about 28 kg; the effective telescope aperture was 1.5 m. It was designed to be robust enough to withstand the 'shake-and-bake' stresses of launch, and the temperature difference between launch, when it was at an ambient temperature of about 300 K, and operation, at about 40 K. It was made of carbon-fiber-reinforced plastic and coated with a thin reflective layer (reflectivity >99.5%) of aluminium - so smooth that any bumps in the coating are less than 5 microns in size. The telescope was surrounded by a large baffle that minimised stray light interference from the Earth, Sun and Moon, and that cooled the telescope by radiating heat into space. Seen in the microwave range, the CMB is only 1% as luminous as Earth, so stray light is a particular concern for any space-based telescope observing the CMB at microwave wavelengths. This is why Planck was strategically positioned at L2, where it was also better sheltered from the heat emitted by the Earth, the Moon and the Sun.
Wind power systems convert the kinetic energy of the wind directly into electrical energy. In windy locations, these systems make a significant contribution to energy production. Besides the construction shape, the efficiency of such a wind power system depends in particular on its size. Current systems are built for outputs of up to 10 megawatts and with blade profile lengths of up to 90 meters. The technology required for manufacturing and constructing a wind power system is similar in many ways to that of aircraft construction. The cross section and the mechanical stability, as well as the air flow around the blade profile, are based on the design of aircraft wings. The circumferential speeds, which are especially high at the blade tips, create extreme material stresses - as do air turbulence in the ultrasonic range and the accumulation of ice - just as in aviation. Almost the only way these requirements can be fulfilled is by using GRP (glass-fiber-reinforced plastic) and CRP (carbon-fiber-reinforced plastic). As in the aircraft industry, atmospheric plasma treatment supplies especially effective process solutions here.
Rain forest and Coastal

The Rain Forest - Abiotic factors in the rain forest are soil, water and rocks; these are abiotic because they are not living things. Biotic factors in the rain forest are trees, monkeys and birds; these are biotic factors because they are living. Limiting factors of the rain forest are sunlight, predators and water. Sunlight is a limiting factor because the tree coverage is so dense that most plants have to survive on a small amount of sunlight. Predators are a limiting factor in every ecosystem because predators limit the population. Water is also a limiting factor because during the season when there is little rain, the plants and animals have to make do with the little water they get. One animal adaptation in the rain forest concerns birds: birds develop stronger beaks to break nuts so they will have food. Another is the three-toed sloth, which grows fungus on its back so that it is camouflaged; it also moves so slowly that its predators cannot see it. Plant adaptations include trees that have thin bark because of the humidity, and plants that have drip-tips on their leaves. Drip-tips are small points on leaves that allow water from rainfall to drip off of them. Another animal adaptation is the chameleon, which camouflages itself to match its surroundings. Symbiotic relationships in the rain forest are mutualism, parasitism, and commensalism. Mutualism is when each organism benefits; for example, leaf-cutter ants and fungus: the ants protect the fungus from pests and mold and feed it with small pieces of leaves, while the fungus shelters and feeds the ants' larvae. Then there is parasitism, when one organism is harmed by the other; an example of this is the strangler fig. The strangler fig grows either toward the ground or toward the sky, and to get nutrients it grows around a tree; eventually it takes up the root space, killing the tree.
The last symbiotic relationship is commensalism, where one organism benefits without harming the other. An example of this is bromeliads, which grow on the high branches of trees to get enough light; this does no damage to the tree itself, but it allows the bromeliad to survive. The Coastal Zone - The abiotic factors for the coastal zone are temperature, sunlight, and macronutrients; they're abiotic because they're not living. The biotic factors are fish, birds, and plants; they're biotic because they're living. The limiting factors are salinity, predation, and nutrients. Salinity has to do with the salt in the water. Predation is a limiting factor in every ecosystem. The salt water limits the available nutrients. Giant kelp have bladders that float toward the surface for sun and nutrients. The sea otter's nostrils and ears close to keep water out of them. The lobster has claws to fight and protect itself from others. Symbiotic Relationships - Commensalism: barnacles grow on larger organisms and clean them. Parasitism: many kinds of worms burrow into the scales and gills of fish.
The Truck and Ladder According to Newton's first law, an object in motion continues in motion with the same speed and in the same direction unless acted upon by an unbalanced force. It is the natural tendency of objects to keep on doing what they are doing. All objects resist changes in their state of motion. In the absence of an unbalanced force, an object in motion will maintain its state of motion. This is often called the law of inertia. The law of inertia is most commonly experienced when riding in cars and trucks. In fact, the tendency of moving objects to continue in motion is a common cause of a variety of transportation accidents - of both small and large magnitudes. Consider for instance a ladder strapped to the top of a painting truck. As the truck moves down the road, the ladder moves with it. Being strapped tightly to the truck, the ladder shares the same state of motion as the truck. As the truck accelerates, the ladder accelerates with it; as the truck decelerates, the ladder decelerates with it; and as the truck maintains a constant speed, the ladder maintains a constant speed as well. But what would happen if the ladder was negligently strapped to the truck in such a way that it was free to slide along the top of the truck? Or what would happen if the straps deteriorated over time and ultimately broke, thus allowing the ladder to slide along the top of the truck? Supposing either one of these scenarios were to occur, the ladder may no longer share the same state of motion as the truck. With the strap present, the forces exerted upon the truck are also exerted upon the ladder. The ladder undergoes the same accelerated and decelerated motion that the truck experiences. Yet, once the strap is no longer present, the ladder is more likely to maintain its state of motion. If the truck were to abruptly stop and the straps were no longer functioning, then the ladder in motion would continue in motion.
Assuming a negligible amount of friction between the truck and the ladder, the ladder would slide off the top of the truck and be hurled into the air. Once it leaves the roof of the truck, it becomes a projectile and continues in projectile-like motion. For more information on physical descriptions of motion, visit The Physics Classroom Tutorial. Detailed information is available there on the following topics:
- Newton's First Law of Motion
- State of Motion
- Balanced vs. Unbalanced Forces
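Once the ladder leaves the roof, the physics above reduces to projectile motion: inertia keeps the horizontal velocity constant (no horizontal force acts), while gravity alone accelerates it downward. A minimal sketch, where the truck speed and roof height are invented example values:

```python
import math

G = 9.8  # gravitational acceleration, m/s^2

def ladder_flight(truck_speed_ms, roof_height_m):
    """Time aloft and horizontal distance for a ladder sliding off a roof.

    Horizontal velocity stays constant (Newton's first law: no horizontal
    force after leaving the roof); only gravity acts, vertically.
    """
    t = math.sqrt(2 * roof_height_m / G)    # time to fall through roof_height
    return t, truck_speed_ms * t            # horizontal range = v * t

# Ladder leaves a 3 m high roof while moving at the truck's 20 m/s:
t, distance = ladder_flight(20.0, 3.0)
```

Note that the ladder lands well ahead of where the truck stopped: its horizontal motion continued unchanged during the fall, which is exactly the inertia argument made above.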
Sources of air pollution Many sources of air pollutants and greenhouse gases can be found in our current patterns of energy production and consumption, as well as in our manufacturing industries and in the products we produce and use. Pollution sources include: - Electricity Generation - Aluminium and Alumina - Iron and Steel - Base Metals Smelters and Refineries and Zinc Plants - Transboundary air movements - Consumer and Commercial Products - Residential and Individuals Some of the substances classified as air pollutants are naturally occurring, and come from sources such as conifer forests, forest fires, soil erosion, volcanoes, dust storms, and sea spray. Life as we know it may not have been possible without the presence of these substances on Earth. However, the addition of air pollutants from human sources can significantly change or impact the earth's natural life processes. We can most effectively reduce pollution that comes from our own behaviours and activities. Canadian governments, along with industry, non-government organizations, and individuals are all taking action and doing their part to reduce emissions of harmful air pollutants from human sources. The challenge is to balance the needs of Canadians for transport, energy, and goods with environmental protection goals. Significant advancements have been made through the government's 10-year Clean Air Agenda. In addition, the federal government requires many industrial sources of pollution to prepare pollution prevention (P2) plans which outline ways to modify production processes, reformulate and redesign products, introduce substitute materials, improve management and training, install new and cleaner technologies, and increase energy conservation. 
Many companies within industries such as petroleum and fossil-fuel based electricity generation, among others, have shifted their portfolios to include a broader range of energy sources (e.g., solar, wind, water, earth, biomass, and waste) as a way to participate in the rapid growth of the renewable energy and renewable low-impact electricity industries. Their efforts recognize and facilitate the many environmental, economic and employment opportunities offered by renewable energy sources. The expected result is that renewable energy will be able to meet a greater share of our energy needs. To learn more about what the federal government is doing regarding pollution prevention and energy, visit:
- Carrots with their normal diet.
- A component of carrots (an amount equal to that present in the carrot group) with their normal diet.
- Normal diet, less carrots.
"Dietary treatments with carrot and [carrot component] delayed or retarded the development of tumors" of the colon by a third compared to the carrotless group. The carrot component thought to be responsible for this is falcarinol. Here's the neat thing ... "Falcarinol is a natural pesticide found in carrots and red ginseng (Panax ginseng), which protects them from fungal diseases, such as liquorice rot that causes black spots on the roots during storage." 2 This is one reason why organic produce can be more nutritious. Plants are stressed by organisms. Herbicides and pesticides kill off these organisms in conventional crops. Organically raised plants will manufacture more of their own defensive chemicals ... in this case the natural pesticide falcarinol. The authors used freeze-dried carrots, ordinary orange ones, which they say would be equivalent to raw carrots. They don't know if cooked carrots or carrot juice would produce the same effect. One of the researchers, Dr. Kirsten Brandt, extrapolated these findings into the recommendation: "consumers should eat one small carrot every day."
2 Wikipedia: Falcarinol
About half the population in the United States relies to some extent on groundwater as a source of drinking water, and still more use it to supply their factories with process water or their farms with irrigation water. However, if all water uses such as irrigation and power production are included, only about 25 percent of the water used nationally is derived from groundwater. Still, for those who rely on it, it is critical that their groundwater be unpolluted and relatively free of undesirable contaminants. A groundwater pollutant is any substance that, when it reaches an aquifer, makes the water unclean or otherwise unsuitable for a particular purpose. Sometimes the substance is a manufactured chemical, but just as often it might be microbial contamination. Contamination also can occur from naturally occurring mineral and metallic deposits in rock and soil. For many years, people believed that the soil and sediment layers deposited above an aquifer acted as a natural filter that kept many unnatural pollutants from the surface from infiltrating down to groundwater. By the 1970s, however, it became widely understood that those soil layers often did not adequately protect aquifers. Despite this realization, a significant amount of contamination already had been released to the nation's soil and groundwater. Scientists have since realized that once an aquifer becomes polluted, it may become unusable for decades, and is often impossible to clean up quickly and inexpensively. Groundwater pollution caused by human activities usually falls into one of two categories: point-source pollution and nonpoint-source pollution. Because nonpoint-source substances are used over large areas, they collectively can have a larger impact on the general quality of water in an aquifer than do point sources, particularly when these chemicals are used in areas that overlie aquifers that are vulnerable to pollution.
If impacts from individual pollution sources such as septic system drain fields occur over large enough areas, they are often collectively treated as a nonpoint source of pollution. Some groundwater pollution occurs naturally. The toxic metal arsenic, for instance, is commonly found in the sediments or rock of the western United States, and can be present in groundwater at concentrations that exceed safe levels for drinking water. Radon gas is a radioactive product of the decay of naturally occurring uranium in the Earth's crust. Groundwater entering a house through a home water-supply system might release radon indoors where it could be breathed. One of the best known classes of groundwater contaminants includes petroleum-based fuels such as gasoline and diesel. Nationally, the U.S. Environmental Protection Agency (EPA) has recorded over 400,000 confirmed releases of petroleum-based fuels from leaking underground storage tanks. Gasoline consists of a mixture of various hydrocarbons (chemicals made up of carbon and hydrogen atoms) that evaporate easily, dissolve to some extent in water, and often are toxic. Benzene, a common component of gasoline, is considered to cause cancer in humans, whereas other gasoline components, such as toluene, ethylbenzene, and xylene, are not believed to cause cancer. Another common class of groundwater contaminants includes chemicals known as chlorinated solvents. One example of a chlorinated solvent is dry-cleaning fluid, also known as perchloroethylene. These chemicals are similar to petroleum hydrocarbons in that they are made up of carbon and hydrogen atoms, but the molecules also have chlorine atoms in their structure. As a general rule, the chlorine present in chlorinated solvents makes this class of compounds more toxic than fuels. Unlike petroleum-based fuels, solvents are usually heavier than water, and thus tend to sink to the bottoms of aquifers.
This makes solvent-contaminated aquifers much more difficult to clean up than those contaminated by fuels. Groundwater typically becomes polluted when rainfall soaks into the ground, comes in contact with buried waste or other sources of contamination, picks up chemicals, and carries them into groundwater. Sometimes the volume of a spill or leak is large enough that the chemical itself can reach groundwater without the help of infiltrating water. Groundwater tends to move very slowly and with little turbulence, dilution, or mixing. Therefore, once contaminants reach groundwater, they tend to form a concentrated plume that flows along with groundwater. Despite the slow movement of contamination through an aquifer, groundwater pollution often goes undetected for years, and as a result can spread over a large area. One chlorinated solvent plume in Arizona, for instance, is 0.8 kilometers (0.5 miles) wide and several kilometers long! Several federal laws focus on either preventing or remediating groundwater contamination, often caused by industrial, commercial, or petroleum pollutants. While these federal laws have provided an overall framework for these activities, the regulatory implementation of these laws is usually carried out by states in cooperation with local governments. Often, federal laws are adopted by the states largely unchanged. The two major federal laws that focus on remediating groundwater contamination include the Resource Conservation and Recovery Act (RCRA) and the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), also known as Superfund. RCRA regulates storage, transportation, treatment, and disposal of solid and hazardous wastes, and emphasizes prevention of releases through management standards in addition to other waste management activities. CERCLA regulates the cleanup of abandoned waste sites or operating facilities that have contaminated soil or groundwater. 
CERCLA was amended in 1986 to include provisions authorizing citizens to sue violators of the law. Several steps normally are taken to clean up a site once contamination has been discovered. Initially a remedial investigation is conducted to determine the nature and extent of the contamination. In the risk assessment phase, scientists evaluate if site contaminants might harm human health or the environment. If the risks are high, then all the various ways the site might be cleaned up are evaluated during the feasibility study. The record of decision is a public document that explains which of the alternatives presented in the feasibility study will be used to clean up a site. Usually, the most protective, lowest cost, and most feasible cleanup alternative is chosen as the preferred cleanup method. The selected cleanup method is designed and constructed during the remedial design/remedial action phase. The operations and maintenance phase then follows. Periodically the remedial action is evaluated to see if it is meeting expectations outlined in the record of decision. The various ways to respond to site contamination can be grouped into the following categories:
- Containing the contaminants to prevent them from migrating from their source;
- Removing the contaminants from the aquifer;
- Remediating the aquifer by either immobilizing or detoxifying the contaminants while they are still in the aquifer;
- Treating the groundwater at its point of use; and
- Abandoning the use of the aquifer and finding an alternative source of water.
Several ways are available to contain groundwater contamination: physically, by using an underground barrier of clay, cement, or steel; hydraulically, by pumping wells to keep contaminants from moving past the wells; or chemically, by using a reactive substance to either immobilize or detoxify the contaminant.
When buried in an aquifer, zero-valent iron (iron metal filings) can be used to turn chlorinated solvents into harmless carbon dioxide and water. The most common way of removing a full range of contaminants (including metals, volatile organic chemicals, and pesticides) from an aquifer is by capturing the pollution with groundwater extraction wells. After it has been removed from the aquifer, the contaminated water is treated above ground, and the resulting clean water is discharged back into the ground or to a river. Pump-and-treat, as this cleanup technology is known, can take a long time, but can be successful at removing the majority of contamination from an aquifer. Another way of removing volatile chemicals from groundwater is by using a process known as air sparging. Small-diameter wells are used to pump air into the aquifer. As the air moves through the aquifer, it evaporates the volatile chemicals. The contaminated air that rises to the top of the aquifer is then collected using vapor extraction wells. Bioremediation is a treatment process that uses naturally occurring microorganisms to break down some forms of contamination into less toxic or non-toxic substances. By adding nutrients or oxygen, this process can be enhanced and used to effectively clean up a contaminated aquifer. Because bioremediation relies mostly on nature, involves minimal construction or disturbance, and is comparatively inexpensive, it is becoming an increasingly popular cleanup option. Some of the newest cleanup technologies use surfactants (similar to dishwashing detergent), oxidizing solutions, steam, or hot water to remove contaminants from aquifers. These technologies have been researched for a number of years, and are just now coming into widespread use. These and other innovative technologies are most often used to increase the effectiveness of a pump-and-treat cleanup. 
Depending on the complexity of the aquifer and the types of contamination, some groundwater cannot be restored to a safe drinking quality. Under these circumstances, the only way to regain use of the aquifer is to treat the water at its point of use. For large water providers, this may mean installing costly treatment units consisting of special filters or evaporative towers called air strippers. Domestic well owners may need to install an expensive whole-house carbon filter or a reverse osmosis filter, depending on the type of contaminant.
SEE ALSO Attenuation of Pollutants; Chemicals from Agriculture; Groundwater; Landfills: Impact on Groundwater; Legislation: Federal Water; Modeling Groundwater Flow and Transport; Pollution of Groundwater: Vulnerability; Septic System Impacts.
William R. Mason
Boulding, J. Russell. Practical Handbook of Soil, Vadose Zone, and Ground-water Contamination: Assessment, Prevention, and Remediation. Boca Raton, FL: Lewis Publishers, 1995.
Wiedemeier, Todd H. et al. Natural Attenuation of Fuels and Chlorinated Solvents in the Subsurface. New York: John Wiley & Sons, 1999.
Johnson, Robert et al. "MTBE: To What Extent Will Past Releases Contaminate Community Supply Wells?" Environmental Science & Technology 34, no. 9 (2000): 210A. <http://pubs.acs.org/hotartcl/est/2000/research/0666-00may_pankow.pdf>.
"Methyl Tertiary Butyl Ether (MTBE)." U.S. Environmental Protection Agency. <http://www.epa.gov/mtbe/>.
Swain, Walter. "Methyl Tertiary-Butyl Ether (MTBE)." <http://ca.water.usgs.gov/mtbe/>.
"Water Pollutants." Recommended EPA Web pages. U.S. Environmental Protection Agency. <http://www.epa.gov/ebtpages/watewaterpollutants.html>.
Methyl tert-butyl ether (MTBE) is used almost exclusively as a gasoline additive to help reduce harmful tailpipe emissions from motor vehicles. MTBE has been credited with improving air quality by significantly reducing carbon monoxide and ozone levels in areas where the additive has been used.
Unfortunately, this is a case where the United States may have "robbed Peter to pay Paul": a growing number of studies have found that MTBE has contaminated groundwater and surface water in those same additive-use areas. As a part of their National Water Quality Assessment, the U.S. Geological Survey (USGS) found MTBE in 21 percent of 480 wells located in specific areas of the United States that use MTBE in gasoline to abate air pollution. In the rest of the United States, MTBE detection frequency in groundwater was only about 2 percent. Furthermore, after controlling for factors such as population density, commercial and industrial land use, and the presence of gasoline stations, the USGS found that the use of MTBE in gasoline increases the probability of detecting MTBE in groundwater by a factor of about 4 to 6. MTBE readily dissolves in water and can move rapidly through soils and aquifers. Because it is resistant to microbial degradation, it migrates faster and farther in the ground than other gasoline components, thus making it more likely to contaminate public water-supply systems. According to the USGS, the vulnerability of aquifers to MTBE contamination appears to be most dependent on the chemical's use, the population density, and the presence of industry, commerce, and gasoline stations in the vicinity of sampled wells. Hydrogeologic factors such as well depth, groundwater level, and presence of roads seem to be less important. There is widespread concern about MTBE in drinking-water sources because of potential human-health effects and its offensive taste and odor. The U.S. Environmental Protection Agency has tentatively classified MTBE as a possible human carcinogen, but has not yet established a drinking-water regulation. The agency, however, has issued a drinking-water advisory of 20 to 40 micrograms per liter (20 to 40 parts per billion) on the basis of taste and odor thresholds.
Although water can be treated using existing technologies such as air stripping or granular activated carbon (GAC), such treatment is difficult and time-consuming because of MTBE's physical and chemical properties. Air stripping is a process in which contaminated water is passed through a large column filled with loose packing material while upward-flowing air evaporates volatile chemicals from the water. MTBE does not readily separate from water into the vapor phase, often requiring high air-to-water ratios. The GAC treatment technique pumps contaminated water through a bed of activated carbon to remove organic compounds. Since MTBE does not adsorb well to carbon, high volumes of the contaminated water must pass repeatedly through a GAC system before MTBE is effectively removed. Based on what is now known about MTBE, scientists and regulators have recommended significantly reducing or eliminating the use of MTBE in gasoline to protect drinking water. They are also recommending that safer alternatives to MTBE, such as ethanol, be used in gasoline to guarantee that clean-air benefits are preserved.
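To see why poor adsorption forces "high volumes ... repeatedly" through a GAC bed, here is an illustrative back-of-the-envelope calculation. The starting concentration and per-pass removal fraction below are hypothetical assumptions; only the 20 micrograms-per-liter advisory level comes from the text. If each pass removes a fraction f, the remaining concentration falls geometrically:

```python
import math

c0 = 500.0     # ug/L, assumed starting MTBE concentration (hypothetical)
target = 20.0  # ug/L, lower bound of the EPA taste-and-odor advisory range
f = 0.15       # fraction removed per pass (hypothetical, reflecting poor adsorption)

# After n passes the concentration is c0 * (1 - f)**n; find the smallest n
# that brings it down to the target level.
n = math.ceil(math.log(target / c0) / math.log(1 - f))
```

Under these assumptions it takes 20 passes to reach the advisory level, which is the sense in which GAC treatment of MTBE is slow and volume-intensive.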
C. the DIFFerence Handwashing Can Make
- C the DIFFerence Handwashing Can Make! is also available in Portable Document Format (PDF, 220KB, 2pg.)
Clostridium difficile (C. diff) is a bacterium that can cause watery diarrhea, stomach cramps, and fever. If not treated, it can lead to more serious problems. The number of C. diff cases has been growing. The main reason is the increasingly common overuse of antibiotics. Antibiotics kill helpful germs in the intestine. These germs would normally keep C. diff from growing and making you sick. Other medications, such as cancer drugs, steroids and indigestion or reflux medication, may also allow C. diff to grow in the intestine. The elderly and individuals with serious illnesses are also at increased risk for C. diff infection. C. diff is spread by touching a contaminated surface. It is a very tough germ to kill. To survive passage through the acids in the stomach, C. diff forms a tough shell (spore). The best way to get rid of C. diff is to wash your hands with soap and water. The friction from rubbing removes the spore. To avoid spreading C. diff infections, use antibiotics only when needed. You may also become a C. diff carrier and not realize you can spread the disease. If you develop watery diarrhea that lasts longer than three days, see your doctor. Your doctor may choose to test you for C. diff and will know which treatment is best. Treatment may include prescribing medication, depending on how serious the illness is. If you are being treated, it is important to take all the medication, even if you are feeling better. If you have been diagnosed with C. diff, do not take over-the-counter medicines for diarrhea, as they can worsen your illness.
Like many other critters, bats can be identified by the sounds they make. Now a group of researchers has put together a website where you can upload a bat's echolocation noises, and it'll tell you what sort of bat it is. Unfortunately, it's not quite as simple as throwing up an MP3 and letting the numbers crunch. As described in the Journal of Applied Ecology, there are some caveats. First, it's designed for European bats only; and second, it requires some pretty specific software. The website is dubbed iBatsID, and you have to upload the sound information as a text file output by the program SonoBat. From this information, iBatsID takes the twelve most useful characteristics, and from these can identify 34 different species. Lead author Charlotte Walters says in a release: iBatsID can identify 83-98% of calls from pipistrelle species correctly, but some species such as those in the Myotis genus are really hard to tell apart and even with iBatsID we can still only identify 49-81% of Myotis calls correctly. With this tool, it's potentially much easier for researchers and conservationists to identify which bats have been nearby. To correctly spot the species that have been hanging around, they just need the right sound file. How cool is it that we can use computers to identify species by sound? Here's hoping it'll spread to other continents and animals as well.
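As a toy illustration of the idea (this is not iBatsID's actual classifier, and the species centroids and feature values below are invented for the example), assigning a call summarized as a vector of 12 acoustic measurements to a species can be as simple as nearest-centroid matching:

```python
import math

# Hypothetical "average call" feature vectors (12 numbers per species).
# Real acoustic features would include things like peak frequency and call
# duration; here only the first three entries carry made-up values.
centroids = {
    "Pipistrellus pipistrellus": [46.0, 5.8, 0.9] + [0.0] * 9,
    "Myotis daubentonii":        [45.0, 3.2, 0.4] + [0.0] * 9,
}

def classify(call):
    """Return the species whose centroid is closest in Euclidean distance."""
    def dist(centroid):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(call, centroid)))
    return min(centroids, key=lambda species: dist(centroids[species]))

call = [45.8, 5.5, 0.85] + [0.0] * 9  # 12 measurements from one recorded call
species = classify(call)
```

Real tools replace the centroid lookup with a trained statistical model, which is why their accuracy varies by genus, as the quoted pipistrelle vs. Myotis figures show.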
By Ashley Keller
Old Slavery v. Modern Day Slavery Part I
Enslavement of individuals predates recorded history. It has been around since the beginning of man. However, it was not until sometime in the 15th century that slavery focused on a certain group of people: Africans and their descendants (Mintz 2007). When I speak of “old slavery” I am referring to the Transatlantic Slave Trade. There are some ingrained similarities between old slavery and modern slavery. For instance, there is a loss of control and free will on the victim’s part, and it continues to be exploitation for profit. The enslaved are broken down into a sort of commodity to be traded, bought and sold. Their humanness is ripped away and replaced with a monetary value. However, modern day slavery, also known as human trafficking, is not the slavery from our history books. Old slavery was hyper-focused on a specific group of people, whereas modern day slavery “cuts across nationality, race, ethnicity, gender, age, class, education-level, and other demographic features” (National, 2010; Polaris 2009). People are easier and cheaper to buy than ever before. It is estimated that the slaves of history were ten times more expensive than modern day slaves (Polaris 2009). The ease and low cost of acquiring modern day slaves create an issue of “disposability” because of the inexpensiveness of the “investment” (Bales, 2004). This “disposability” poses yet another threat to the countless, nameless, voiceless individuals caught in this hell. Because slaves are so cheap, there is much less motivation for traffickers to take care of their “investments”; there are plenty more when needed. There are many reasons that individuals may be trafficked. Some of the reasons are: debt bondage, sexual exploitation, and forced labor/service such as domestic labor, agricultural labor, sweatshops, begging, hard labor, soldiering, hospitality industries and many more.
The UN Office on Drugs and Crime reports that 161 countries have been identified as being affected by human trafficking (Trafficking in Persons: Global Patterns).
Human Trafficking and Disabilities
The International Labor Organization estimates that 2.4 million people were trafficked between 1995 and 2005. The 2010 Trafficking in Persons Report reports that 12.3 million adults and children were trafficked in 2009, at a rate of 1.8 people per 1,000 worldwide. In 2007, the Trafficking in Persons Report stated that 800,000 people are trafficked across borders every year, of whom about 80% are women and girls and up to 50% are children. In the U.S. State Department’s “The Facts About Child Sex Tourism: 2005” it is reported that approximately 1 million children are sexually exploited every year throughout the world. This statistic, like most if not all others, is broken down by age and gender, but there is no specific information as to how many of these individuals have a disability. As defined by the Americans with Disabilities Act, a disability is: “a physical or mental impairment that substantially limits one or more major life activities; a record of such an impairment; or a person regarded as having such an impairment”. Human trafficking and disabilities is a severely under-addressed topic in the discussion of human slavery. There are very few reports on its incidence. In 2009, Stop Violence Against Women wrote an article called “Violence Against Women with Disabilities”. They report that children in orphanages are at a higher risk for violence. Human Trafficking & Modern-day Slavery – Belgium reports that gangs throughout Belgium’s major cities organize begging rings using children and individuals with disabilities, typically from Romania (Patt, 2010). Due to a lack of understanding, financial means and cultural stigmas, discussed further below, children with disabilities are a source of shame to their families.
Research indicates that violence against children with disabilities occurs at a rate at least 1.7 times higher annually than for their peers without disabilities (disabledworld). There are many reasons why these families give up their children, such as not having the knowledge or financial resources to care for them. Other reasons are extensions of cultural beliefs. UNICEF reports, “[s]ocial beliefs about disability include the fear that disability is associated with evil, witchcraft or infidelity, which serve to entrench the marginalisation of disabled people” (2008). As a result, these children wind up in orphanages where they are much more susceptible to violence. Women and girls with disabilities are especially vulnerable to physical and sexual violence, which puts them in danger of unplanned pregnancies due to sexual exploitation. A child who requires assistance with washing, dressing and other intimate care activities may be particularly vulnerable to sexual abuse. Perpetrators can include caretakers, attendants, family members, peers or anyone who enjoys a position of trust and power (UNICEF, 2007). People with disabilities are not seen as individuals who deserve dignity and respect. Even if a pregnancy occurs in a normal situation unrelated to sexual exploitation, disabled women often do not have a choice in whether they can keep their children, and abortions are forced upon them. Disabled women are also forcibly sterilized so that pregnancy will not become a recurring issue (UNICEF, 2007). Not only are disabled children dumped into the system and stripped of their inalienable human rights, but as they grow up they are blacklisted from employment. The factors thought to make an individual most vulnerable to being trafficked are poverty and lack of knowledge or education; others add that being female or a member of a minority group exacerbates the risk (UNIAP, 2007).
However, the United Nations Inter-Agency Project on Human Trafficking and the Strategic Information Response Network (SIREN) warn against over-generalizing the vulnerabilities faced by different cultures and areas. They suggest that it is naïve to enter an area assuming that the issue is the same as elsewhere. They argue that it is important to know the people, the culture and the problems before implementing a program in order to provide assistance. Many groups go to help, but treat generalizations as fact and set up information and funding programs to fix ignorance and poverty in order to combat those specific vulnerabilities. However, those may not actually be the issue (UNIAP, 2007). Cornell University’s 2007 Disability Status Report shows that the employment gap between individuals with and without disabilities is 42.8% in the United States alone (Baker, 2008). This enormous gap in employment exacerbates the vulnerability of poverty that these individuals experience by denying them access to a self-sustaining life with gainful employment.
Continued Monday, September 13th, 2010.
Ashley received her B.A. in Psychology from Immaculata University this past semester. She has worked with individuals with autism for about 10 years and is currently working as an ABA therapist doing Early Intense Behavioral Intervention. This coming semester she will be student teaching to receive her Elementary/Special Education teaching certifications. She also plans to pursue graduate level programs in order to continue her work and understanding of individuals with autism.
Approximate the volume of the solid generated by revolving the region formed by the curve y = x^2, the x-axis and the line x = 2.
Volume approximated by concentric shells
a) Sketch the region bounded by y = x^2, the x-axis and the line x = 2.
b) We'll approximate the volume by revolving the region about the y-axis.
c) Partition the interval [0, 2] in x, so the width of each sub-interval is Δx = 2/n.
d) On your sketch in part a, sketch a typical rectangle. What is the base of the rectangle? What is the height of the rectangle? (Since the base of the rectangle is Δx, the height must be a function of x.)
e) Sketch the solid you get by revolving this rectangle about the y-axis. What is the radius of this shell? What is the height of the shell? What is the width (thickness) of the shell?
f) Write a formula for the volume of the typical shell.
g) Write the Riemann sum that corresponds to adding up the volumes of n such shells.
h) Write the definite integral that is the limit of the Riemann sums as the number of shells increases without bound.
i) Evaluate the integral to calculate the exact volume of the solid.
First the region is sketched and the shell height is defined; the Riemann sum is then constructed, and the volume of the solid of revolution is found by the shell method. The expert sketches the region of a function bounded by the x-axis and a line.
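As a numerical check on parts (g) through (i): each shell has volume 2πx · f(x) · Δx with f(x) = x², and the Riemann sum should converge to the exact integral V = ∫₀² 2πx · x² dx = 8π. A short sketch (Python is just a convenient calculator here, not part of the assignment):

```python
import math

def shell_volume(n=100000):
    """Midpoint Riemann sum of the shell volumes 2*pi*x*f(x)*dx for
    f(x) = x**2 on [0, 2], revolved about the y-axis."""
    a, b = 0.0, 2.0
    dx = (b - a) / n              # width (thickness) of each shell
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx    # shell radius: midpoint of the subinterval
        height = x ** 2           # shell height is f(x)
        total += 2 * math.pi * x * height * dx
    return total

approx = shell_volume()
exact = 8 * math.pi  # from evaluating the integral: 2*pi * (2**4 / 4)
```

Increasing n shrinks the gap between the sum and 8π ≈ 25.13, which is exactly the limit process described in part (h).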
Born with rights
HUMAN rights are the rights possessed by all persons, by virtue of their common humanity, to live a life of freedom and dignity. CHILDREN'S rights are human rights for every child and adolescent up to the age of 18, regardless of where they are born, their race or ethnicity, whether they are a boy or girl, rich or poor, able or disabled, HIV-negative or HIV-positive. Children have separate rights because childhood is a special time in our lives. It is a time when children, because of their vulnerability, need special consideration—care, nurturing and protection—to ensure they survive, thrive, and realise their full potential as productive members of society. The United Nations Convention on the Rights of the Child (CRC) upholds these rights for all children, everywhere, so they may grow to realise their full potential and contribute towards society and nation-building.
MRSA (methicillin-resistant Staphylococcus aureus) causes a serious, antibiotic-resistant infection. MRSA stands for methicillin-resistant Staphylococcus aureus. It causes a staph infection (pronounced “staff”) that is resistant to several common antibiotics. There are two types of infection. Hospital-associated MRSA happens to people in healthcare settings. Community-associated MRSA happens to people who have close skin-to-skin contact with others, such as athletes involved in football and wrestling. Targeted infection control is key to stopping MRSA in hospitals and other healthcare facilities. To prevent community-associated MRSA: - Practice good hygiene - Keep cuts and scrapes clean and covered with a bandage until healed - Avoid contact with other people’s wounds or bandages - Avoid sharing personal items, such as towels, washcloths, razors or clothes - Wash soiled sheets, towels and clothes in hot water with bleach and dry in a hot dryer If a wound appears to be infected, see a healthcare provider. Treatments may include draining the infection and treating with antibiotics. Source: National Institutes of Health (NIH): National Institute of Allergy and Infectious Diseases
Co-written with Kathy Hirsh-Pasek and Vinaya Rajan. During story time, Emily had trouble paying attention to the teacher and squirmed and wiggled in her seat. She noticed a blue jay sitting on a nearby tree branch and spent most of her time looking out the window. During a group activity, Emily spoke out of turn many times and barely allowed the other children in her group a chance to share their work with the class. Later in the day, when her teacher asked all the students to help clean up after art class, Emily put away some of the painting supplies but then quickly moved on to play with a puzzle and never finished cleaning the rest of her work station. When the teacher asked Emily to stop playing with the puzzle, she immediately started crying and fell to the floor in frustration. Emily's story is one that many kindergarten teachers (and parents!) across the country can certainly relate to. When it comes to success in the school environment, what are the important skills children need to master? Most adults think the focus should be on academic skills, such as counting or knowing the letters of the alphabet. However, it is just as important to teach children to regulate their emotions, thoughts and behavior. Self-regulation is an important skill for children to develop. Kids with good self-regulation can pay attention to classroom activities and ignore distractions, remember the teacher's directions long enough to carry out a task and resist impulses. All of these skills may give them an advantage in school. In fact, kindergarten teachers rank self-regulation as one of the most important skills for school readiness. Unfortunately, these teachers also report that many of their students struggle with low levels of self-regulation once they enter school. The more kids like Emily a teacher has, the harder the classroom is to manage. Self-regulation comes in different forms.
Emotional self-regulation is important for helping children manage how they express and experience emotions. In Emily's example, the problems she experiences managing her frustration may make it hard for her to concentrate on school-related activities. The next time Emily is placed in a frustrating situation, it may be useful to teach her how to walk away and cool down rather than having an emotional outburst. Behavioral self-regulation helps children demonstrate control over their actions. Simple games, like Simon Says, have been shown to help children control their impulses. Behavioral self-regulation will help Emily learn to resist the desire to shout out the answer to a problem when it is someone else's turn to speak. Cognitive self-regulation helps children follow rules and plan out the appropriate response (such as listening during story time). For some kids, school may be the first time they practice these skills and learn to regulate themselves. When do these self-regulation skills develop? It starts in infancy when babies learn to self-soothe. Something scary happens and 12-month-old Lilah pops two fingers into her mouth. Parents help their kids develop self-regulation by explaining why they have to wait for something or why they have to take turns. During the preschool years children make remarkable gains in self-regulation. In fact, high levels of self-regulation in preschool predict kindergarten reading and math achievement. This association of self-regulation and positive academic outcomes continues into the elementary and middle school years. In fact, these skills may be even more important than measures of general intelligence. Teaching children how to regulate their own behaviors may be just as important as teaching academic skills. A child like Emily could use some help but all is not lost. Researchers have found that self-regulation skills can develop with practice and can be taught in the classroom. 
Take a second and think about how these skills relate to your own life. Are you able to resist distractions while at work? Can you exert control and inhibit the impulse to reach for that triple layer chocolate cake since you are trying to cut back on sweets? Can you skip an immediate reward (such as heading out to a dinner party) in order to stay in and study so you can eventually earn an A by the end of the semester? Just from these few examples, it is easy to see how regulating your thoughts, emotions and behaviors is crucial for success in school, work and life in general. Follow Roberta Michnick Golinkoff on Twitter: www.twitter.com/KathyandRo1
Photo: Thomas Hawk (Flickr) Soap has been used for cleaning for thousands of years, but it was not until modern chemists began to understand its molecular structure that anyone knew how soap worked its magic.
How Soap Works
The long soap molecule has one end that is attracted to fats and oils; the other end is attracted to water. When soap is added to the wash water, one end of its molecule attaches to the oily dirt and pulls it away from the fabric or your skin. The other end stays attached to the water, and when the water is washed down the drain, the dirt attached to the other end of the molecule follows. The problem with soap is that it doesn’t work well in hard water. Hard water contains a lot of calcium, and before soap begins to clean you or your clothes, it separates the calcium from the water. This is what makes the scum of the bathtub ring. Only after the soap has removed all the calcium in the water does it start to clean. That’s why it takes more soap to clean in hard water: the first soap gets rid of the calcium, then more is needed to get rid of the oily dirt. After World War II washing machines became very popular, resulting in a large demand for soap. However, the public wasn’t satisfied with the grungy film left on clothes. When chemists began working on a cleaner that wouldn’t leave a film, they knew they needed to keep the basic structure of soap: a molecule with one end attracted to oil and the other attracted to water.
Detergents Were Born
To eliminate the film, they developed a substance whose water-attracting end would not have an affinity for calcium. These detergents did not separate out the calcium that formed the ring but left it in the water to be washed away with the dirt. And these are the detergents we use today.
By the end of this section, you will be able to: - Evaluate John Rawls’s answer to utilitarianism - Analyze the problem of redistribution - Apply justice theory in a business context This chapter began with an image of Justice holding aloft scales as a symbol of equilibrium and fairness. It ends with an American political philosopher for whom the equal distribution of resources was a primary concern. John Rawls (1921–2002) wanted to change the debate that had prevailed throughout the 1960s and 1970s in the West about how to maximize wealth for everyone. He sought not to maximize wealth, which was a utilitarian goal, but to establish justice as the criterion by which goods and services were distributed among the populace. Justice, for Rawls, had to do with fairness—in fact, he frequently used the expression justice as fairness—and his concept of fairness was a political one that relied on the state to take care of the most disadvantaged. In his justice theory, offered as an alternative to the dominant utilitarianism of the times, the idea of fairness applied beyond the individual to include the community as well as analysis of social injustice with remedies to correct it. Rawls developed a theory of justice based on the Enlightenment ideas of thinkers like John Locke (1632–1704) and Jean-Jacques Rousseau (1712–1778), who advocated social contract theory. Social contract theory held that the natural state of human beings was freedom, but that human beings will rationally submit to some restrictions on their freedom to secure their mutual safety and benefit, not subjugation to a monarch, no matter how benign or well intentioned. This idea parallels that of Thomas Hobbes (1588–1679), who interpreted human nature to be selfish and brutish to the degree that, absent the strong hand of a ruler, chaos would result. So people willingly consent to transfer their autonomy to the control of a sovereign so their very lives and property will be secured. 
Rousseau rejected that view, as did Rawls, who expanded social contract theory to include justice as fairness. In A Theory of Justice (1971), Rawls introduced a universal system of fairness and a set of procedures for achieving it. He advocated a practical, empirically verifiable system of governance that would be political, social, and economic in its effects. Rawls’s justice theory contains three principles and five procedural steps for achieving fairness. The principles are (1) an “original position,” (2) a “veil of ignorance,” and (3) unanimity of acceptance of the original position.61 By original position, Rawls meant something akin to Hobbes’ understanding of the state of nature, a hypothetical situation in which rational people can arrive at a contractual agreement about how resources are to be distributed in accordance with the principles of justice as fairness. This agreement was intended to reflect not present reality but a desired state of affairs among people in the community. The veil of ignorance (Figure 2.10) is a condition in which people arrive at the original position imagining they have no identity regarding age, sex, ethnicity, education, income, physical attractiveness, or other characteristics. In this way, they reduce their bias and self-interest. Last, unanimity of acceptance is the requirement that all agree to the contract before it goes into effect. Rawls hoped this justice theory would provide a minimum guarantee of rights and liberties for everyone, because no one would know, until the veil was lifted, whether they were male, female, rich, poor, tall, short, intelligent, a minority, Roman Catholic, disabled, a veteran, and so on. 
The five procedural steps, or “conjectures,” are (1) entering into the contract, (2) agreeing unanimously to the contract, (3) including basic conditions in the contract such as freedom of speech, (4) maximizing the welfare of the most disadvantaged persons, and (5) ensuring the stability of the contract.62 These steps create a system of justice that Rawls believed gave fairness its proper place above utility and the bottom line. The steps also supported his belief in people’s instinctual drive for fairness and equitable treatment. Perhaps this is best seen in an educational setting, for example, the university. By matriculating, students enter into a contract that includes basic freedoms such as assembly and speech. Students at a disadvantage (e.g., those burdened with loans, jobs, or other financial constraints) are accommodated as well as possible. The contract between the university and students has proven to be stable over time, from generation to generation. This same procedure applies on a micro level to the experience in the classroom between an individual teacher and students. Over the past several decades—for better or worse—the course syllabus has assumed the role of a written contract expressing this relationship. Rawls gave an example of what he called “pure procedural justice” in which a cake is shared among several people.63 By what agreement shall the cake be divided? Rawls determined that the best way to divide the cake is to have the person slicing the cake take the last piece. This will ensure that everyone gets an equal amount. What is important is an independent standard to determine what is just and a procedure for implementing it.64 The Problem of Redistribution Part of Rawls’s critique of utilitarianism is that its utility calculus can lead to tyranny. If we define pleasure as that which is popular, the minority can suffer in terrible ways and the majority become mere numbers. 
This became clear in Mill’s attempt to humanize Bentham’s calculus. But Mill’s harm principle had just as bad an effect, for the opposite reason. It did not require anyone to give up anything if it had to be done through coercion or force. To extend Rawls’s cake example, if one person owned a bakery and another were starving, like Jean Valjean’s sister in Les Misérables, utilitarianism would force the baker to give up what he had to satisfy the starving person without taking into account whether the baker had greater debts, a sick spouse requiring medical treatment, or a child with educational loans; in other words, the context of the situation matters, as opposed to just the consequences. However, Mill’s utilitarianism, adhering to the harm principle, would leave the starving person to his or her own devices. At least he or she would have one slice of cake. This was the problem of distribution and redistribution that Rawls hoped to solve, not by calculating pleasure and pain, profit and loss, but by applying fairness as a normative value that would benefit individuals and society.65 The problem with this approach is that justice theory is a radical, egalitarian form of liberalism in which redistribution of material goods and services occurs without regard for historical context or the presumption many share that it inherently is wrong to take the property legally acquired by one and distribute it to another. Rawls has been criticized for promoting the same kind of coercion that can exist in utilitarianism but on the basis of justice rather than pleasure. Justice on a societal level would guarantee housing, education, medical treatment, food, and the basic necessities of life for everyone. Yet, as recent political campaigns have shown, the question of who will pay for these guaranteed goods and services through taxes is a contentious one.
These are not merely fiscal and political issues; they are philosophical ones requiring us to answer questions of logic and, especially in the case of justice theory, fairness. And, naturally, we must ask, what is fair? Rawls’s principles and steps assume that the way in which the redistribution of goods and services occurs would be agreed upon by people in the community to avoid any fairness issues. But questions remain. For one, Rawls’s justice, like the iconic depiction, is blind and cannot see the circumstances in which goods and services are distributed. Second, we may question whether a notion of fairness is really innate. Third, despite the claim that justice theory is not consequentialist (meaning outcomes are not the only thing that matters), there is a coercive aspect to Rawls’s justice once the contract is in force, replacing utility with mandated fairness. Fourth, is this the kind of system in which people thrive and prosper, or, by focusing on the worst off, are initiative, innovation, and creativity dampened on the part of everyone else? Perhaps the most compelling critic of Rawls in this regard was his colleague at Harvard University, Robert Nozick (1938–2002), who set out his entitlement theory in Anarchy, State, and Utopia (1974) as a direct rebuttal of Rawlsian justice theory.66 Nozick argued that the power of the state may never ethically be used to deprive someone of property he or she has legally obtained or inherited in order to distribute it to others who are in need of it. Still, one of the advantages of justice theory over the other ethical systems presented in this chapter is its emphasis on method as opposed to content. The system runs on a methodology or process for arriving at truth through the underlying value of fairness. Again, in this sense it is similar to utilitarianism, but, by requiring unanimity, it avoids the extremes of Bentham’s and Mill’s versions.
As a method in ethics, it can be applied in a variety of ways and in multiple disciplines, because it can be adapted to just about any value-laden content. Of course, this raises the question of content versus method in ethics, especially because ethics has been defined as a set of cultural norms based on agreed-upon values. Method may be most effective in determining what those underlying values are, rather than how they are implemented. Justice in Business Although no ethical framework is perfect or fits a particular era completely, Rawls’s justice theory has distinct advantages when applied to business in the twenty-first century. First, as businesses become interdependent and globalized, they must pay more attention to quality control, human resources, and leadership in diverse settings. What will give greater legitimacy to an organization in these areas than fairness? Fairness is a value that is cross-cultural, embraced by different social groups, and understood by nearly everyone. However, what is considered fair depends on a variety of factors, including underlying values and individual characteristics like personality. For instance, not everyone agrees on whether or how diversity ought to be achieved. Neither is there consensus about affirmative action or the redistribution of resources or income. What is fair to some may be supremely unfair to others. This presents an opportunity for engaged debate and participation among the members of Rawls’s community. Second, as we saw earlier, justice theory provides a method for attaining fairness, which could make it a practical and valuable part of training at all levels of a company. The fact that its content—justice and fairness—is more accessible to contemporary people than Confucian virtue ethics and more flexible than Kant’s categorical imperative makes it an effective way of dealing with stakeholders and organizational culture. 
Justice theory may also provide a seamless way of engaging in corporate social responsibility outwardly and employee development inwardly. Fairness as a corporate doctrine can be applied to all stakeholders and define a culture of trust and openness, with all the corresponding benefits, in marketing, advertising, board development, client relations, and so on. It is also an effective way of integrating business ethics into the organization so ethics is no longer seen as the responsibility solely of the compliance department or legal team. Site leaders and middle managers understand fairness; employees probably even more so, because they are more directly affected by the lack of it. Fairness, then, is as much part of the job as it is an ongoing process of an ethics system. It no doubt makes for a happier and more productive workforce. An organization dedicated to it can also play a greater role in civic life and the political process, which, in turn, helps everyone. John Rawls’s Thought Experiment John Rawls’s original position represents a community in which you have no idea what kind of person you will end up being. In this sense, it is like life itself. After all, you have no idea what your future will be like. You could end up rich, poor, married, single, living in Manhattan or Peru. You might be a surgeon or fishing for sturgeon. Yet, there is one community you will most likely be a part of at some point: the aged. Given that you know this but are not sure of the details, which conditions would you agree to now so that senior citizens are provided for? Remember that you most likely will join them and experience the effects of what you decide now. You are living behind not a spatial veil of ignorance but a temporal one. - What are you willing to give up so that seniors—whoever they might be—are afforded care and security in their later years? - Should you have to pay into a system that provides medical coverage to other people less health conscious than you? 
Why or why not?
What if you want to refer to a range of cells in multiple worksheets in your workbook? For example, how can you get a cell to return the sum of each cell A1 on the first three worksheets in your workbook? If the worksheets are named Alan1, Alan2 and Alan3, then you would create the following formula:

=SUM(Alan1:Alan3!A1)

To calculate the sum of all cells in the range A1 through C5 on each worksheet, you would use the following formula:

=SUM(Alan1:Alan3!A1:C5)

This notation can be hard to remember. With the mouse, it is easy to build this formula with the following steps:
- Click on the cell where you want to enter your formula.
- Then enter an = sign and the first part of the function, followed by an opening parenthesis. For the examples given above, you would enter =SUM(.
- Click on the sheet tab of the first sheet in the range.
- Next hold down the Shift key as you click on the sheet tab of the last sheet in the range.
- Use the mouse to select all the cells in the range on the visible worksheet.
- Press Enter.
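The same “sum a cell across a span of sheets” idea can be sketched outside Excel. The snippet below is a minimal Python model, not Excel itself: a workbook is represented as an ordered dict of sheets, each mapping cell names like "A1" to values, and the `sum_3d` helper, sheet names, and sample values are all illustrative assumptions echoing the article's example.

```python
def sum_3d(workbook, first_sheet, last_sheet, cells):
    """Mimic a 3-D reference like =SUM(Alan1:Alan3!A1) by summing
    the given cells on every sheet from first_sheet through last_sheet,
    relying on the dict preserving sheet (tab) order."""
    names = list(workbook)
    span = names[names.index(first_sheet): names.index(last_sheet) + 1]
    # Missing cells count as 0, as an empty cell would in Excel's SUM.
    return sum(workbook[s].get(c, 0) for s in span for c in cells)

workbook = {
    "Alan1": {"A1": 10},
    "Alan2": {"A1": 20},
    "Alan3": {"A1": 30},
}

print(sum_3d(workbook, "Alan1", "Alan3", ["A1"]))  # → 60
```

Summing a rectangular range such as A1:C5 would simply mean passing every cell name in that range as the `cells` list.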
Hydroelectricity and Hydroelectric Power
Hydroelectricity is the general term used to describe electricity that has been generated using water, hence the term hydro. However, when we see the term “hydroelectricity” we tend to think of electricity generated by the massive hydroelectric power stations, dams and reservoirs high up in the mountains. These huge vertical concrete walls (up to 200m in height) hold back the potential energy contained within a large body of water; once the water is released downhill, a hydro turbine extracts the energy from its movement and converts it into mechanical energy. When stationary, the water within the reservoir is regarded as stored, or potential, energy. Once released and flowing under the influence of gravity, this stored energy is converted into kinetic energy by the movement of the water, and large hydro turbines convert this movement into electrical energy that can then be distributed along power cables to thousands, and even millions, of homes. Using the power of moving water to generate electricity in this way is commonly referred to as hydroelectricity. But hydroelectricity generation can also be on a much smaller scale, and today people are finding new ways to create locally sourced hydroelectric power, with small scale hydro power projects designed to run in-river so that they do not affect the flow of the river significantly. Small scale hydroelectric power schemes can replace costly diesel generators for rural electrification, serving whole rural communities with both light and power. Traditionally, small scale hydro power generation was achieved by redirecting the flow of water through waterwheels and other such mechanical devices to spin electricity generating turbines.
As a result, over the years there have been many improvements both to the waterwheels that convert the falling water into a rotational movement and to the electrical generators themselves, which range in size from a few hundred watts up to hundreds of kilowatts. Also, small dams and weirs have been created and used to store the water at higher elevations in an attempt to increase the power potential of a site, enabling even more production of electricity using the seasonal variations in rainfall. The three kinds of hydroelectric power generation systems are:
- Run-of-river Hydroelectricity uses the kinetic energy contained within the natural flow of a river or large creek to generate electricity. Generally, run-of-river power generation systems do not have any form of water storage capacity, as they operate directly within the river, which means they cannot always generate enough electricity to match consumer demand. They generate more power when seasonal rainfall and river flows are higher, and less during the drier summer months.
- High-head Hydroelectricity dams the water and stores it in a reservoir behind a high concrete dam to be used when required. Electrical power is produced through the gravitational force of the falling or flowing water passing through the hydro turbines. Reservoir-type power stations are easily able to meet peak load demands and as such are the most widely used form of renewable energy.
- Pumped Storage Hydroelectricity is another type of reservoir power generation in which two reservoirs are used to store the energy in the water. During low-cost off-peak hours, reversible turbine/generator assemblies that can act as both pump and turbine pump water from a lower elevation reservoir up to a higher elevation reservoir for storage. During the day, or periods of high electrical demand, the stored water in the upper reservoir is released through the turbines to the lower reservoir.
Pumped hydro storage thus converts cheaper night-time surplus electricity, or even wind turbine produced electricity, into peak load electricity. Hydroelectricity generation is the most widely used form of renewable energy today, and for many people the construction of dams and reservoirs to produce cheap and clean hydroelectricity has been a blessing, as hydroelectricity is a non-polluting source of electricity: no harmful emissions are released into the atmosphere, since no fossil fuels are burned during the production of electrical energy. But some people have been affected by the loss of land, of whole villages, and of old traditional ways of life. Some even argue that hydroelectricity production is not sustainable in the long term for environmental and aesthetic reasons, as what was once a beautiful valley is now covered with water from the dam. They also argue that many hydroelectric dams prevent fish from returning to their native spawning grounds, or that fish may be killed in the turbines when the water is released.
Typical Hydroelectric Dam Construction
Even so, large hydroelectric projects can be found in many developing countries around the world. China, for example, has built the massive and controversial Three Gorges Dam Project on the Yangtze River. This dam and reservoir is reported to be the largest hydroelectric project in the world, and more than one million people had to be relocated before the dam’s reservoirs could be fully flooded. Today renewable energy technologies form a major part of both our lives and our energy requirements. But most forms of renewable energy are unreliable: the sun does not shine or the wind does not blow all of the time, making other renewable and alternative energies both intermittent and unpredictable. However, hydroelectricity is fairly uniform in its power generating capability and can be easily scaled up by making the dam higher.
For example, all things being equal, doubling the height of a dam increases the water storage by eight times and the electrical power potential by sixteen times. Also, hydroelectric energy is stored energy, ready to be used when the water is released, whereas the cost of storing other forms of renewable energy can be significant. Hydroelectric power is a well established technology capable of producing power 24 hours a day. One of the major advantages of hydroelectricity and of hydroelectric power is that, like the power provided by the old style waterwheels, no fossil fuels are burned to release greenhouse gases (such as carbon dioxide and sulphur dioxide) into the atmosphere, where they can produce smog and contribute to global warming and acid rain. Also, no chemicals are used in the production of hydroelectricity that would otherwise be disposed of in the environment. Hydroelectricity generation is also free in the sense that the fuel it uses to generate the electricity, the water, does not have to be produced, dug up or purchased, as mother nature provides it in the form of rain and snow. Naturally, money has to be spent building and maintaining the dam and hydroelectric power plant. There are also secondary benefits to hydroelectric dams. They can provide flood control for rivers, protecting towns and fields downstream. Plus, the large expanse of water within the reservoirs allows for recreational activities such as swimming and boating. However, the future is never certain and hydroelectric power still faces many obstacles. The main disadvantage of hydroelectricity generation is that while there are many good sites for solar panels, there are fewer good wind turbine sites, and very few good sites for hydroelectricity production. But hydroelectric power generation will continue to grow, especially with small scale hydro schemes, as long as there is water flowing in the rivers.
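The “doubling the height” claim can be checked with a toy model. Assuming a dam across a V-shaped valley, the reservoir's width and length both grow with the dam height h, so stored volume scales as h cubed, and the potential energy of that water scales as volume times head, i.e. h to the fourth power. The functions below are an illustrative sketch under exactly those assumptions, not an engineering formula:

```python
# Toy scaling model for a dam in a V-shaped valley (assumed geometry):
# stored volume grows as h**3, and gravitational potential energy of the
# stored water grows as volume * head, i.e. h**4.

def storage(h):
    """Relative stored water volume for dam height h."""
    return h ** 3

def energy(h):
    """Relative potential energy of the stored water: volume times head."""
    return storage(h) * h

h = 1.0
print(storage(2 * h) / storage(h))  # 8.0  -> eight times the storage
print(energy(2 * h) / energy(h))    # 16.0 -> sixteen times the power potential
```

Real reservoirs rarely match this idealised geometry, so actual gains from raising a dam will differ, but the model shows why the stated multiples are at least plausible.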
To be honest, when we first heard the term “beard tax,” we hoped that it would be some new measure to control this odd “hipster beard” phenomenon. However, despite our hopes, the “beard tax” actually goes back to the 16th century in England. In 1535, King Henry VIII, who sported a beard himself, introduced a tax on beards. It was a graduated tax depending on the beard wearer’s social position; the higher the wearer’s standing in society, the more he had to pay, making facial hair a symbol of status. After the beard tax was dropped, Henry’s daughter, Queen Elizabeth I of England, re-introduced it and began taxing every beard of more than two weeks’ growth. In 1698, Emperor Peter I of Russia, as part of his efforts to modernize Russian society along European models, instituted a beard tax for everyone who sported a beard or mustache. In Russia, bearded men who paid the beard tax were required to carry a “beard token.” The beard token was usually a silver or copper coin, embossed with a Russian eagle on one side and, on the other, the lower part of a face with nose, mouth, whiskers, and beard. The token was inscribed with two phrases: “the beard tax has been taken” (lit. “money taken”) and “the beard is a superfluous burden.” The “rebels” who resisted paying the tax on their facial hair were punished by being forcibly and publicly shaved. The beard tax was eventually abolished in 1772.
“Personality” refers to the pattern of feelings, behaviors, and thoughts that makes each of us the individuals we are. We do not always behave, feel and think in precisely the same way; it depends on the circumstances we are in, the people around us, and other things that can trigger us. But mostly we do tend to behave in rather predictable ways, and so we can be described as confident or selfish, lively or reserved, and so on. This set of patterns makes up our personality. In general, personality doesn’t change much, but it does develop as we go through different phases of life and as our circumstances change. So, as we grow up, our behavior and thinking change with us. We are typically flexible enough to learn from previous experiences and to alter our behavior to cope with life more effectively. Personality disorders are a set of psychological illnesses. They involve a long-term pattern of thoughts and behaviors that is harmful and inflexible. It is very hard for people with personality disorders to change their behavior or adapt to different situations. They may have difficulty holding a job or forming healthy relationships with others. These behaviors cause serious problems with relationships as well as work. People with personality disorders have trouble dealing with everyday pressures and problems, and they frequently have turbulent relationships with other people. There are numerous different types of personality disorders. Some people may appear withdrawn, some odd or eccentric, and others dramatic. The one thing they all have in common is that the signs and symptoms are severe enough to affect many different areas of life. People often develop early symptoms of a personality disorder in their teenage years. People who suffer from personality disorders also have elevated rates of co-occurring mental health conditions such as drug abuse and depression.
It can be difficult for people to recognize they have a problem or to seek help due to the nature of these personality disorders. Treatment is available for people with personality disorders, and psychotherapy can help them develop insight into their condition, manage symptoms and build healthy relationships with others. The first step in seeking help is to make an appointment with a doctor or mental health professional for a mental health assessment. It is thought that personality disorders may arise from a complex interaction of negative early life experiences and genetic factors. Disruptions to the attachment between parents and toddlers can occur through psychological or physical illness or drug abuse in the parent, or through extended separations between parents and infants. The absence of positive caregiving in early childhood can have a harmful impact on personality development. Research suggests that personality disorders fall into three groups, according to their emotional character:
- Cluster A: Odd or Eccentric
- Cluster B: Dramatic, Erratic or Emotional
- Cluster C: Fearful and Anxious
Psychotherapy is the most effective long-term treatment option for personality disorders. Psychotherapy helps people develop insight into their feelings, thoughts, and motivations through a therapeutic relationship with a mental health professional, for example a psychologist or psychiatrist. These insights can help people manage their signs and symptoms, build satisfying relationships and make healthy behavior changes. Common approaches include:
- Cognitive Behavior Therapy (CBT)
- Psychodynamic Psychotherapy
- Dialectical Behavior Therapy
Looking for a fun, hands-on activity for students to learn about and review their knowledge of the different countries of Asia? These sorting mats will challenge students to recall the information they have studied and use their critical thinking and/or research skills to fill in the information they don’t know. By using both hands as well as their language, visual and critical thinking skills, they will engage their whole brain during this activity. This set contains sorting mats for the fifteen most populous countries in Asia, divided and color-coded into sets of five. Students will match the map, flag, capital and major cities, population and size, languages spoken, traditional foods, historical, cultural and geographical facts, currency, type of government and religions observed to each country. An answer key is included for students to self-check their work. Extension activities are also suggested. Print and prep once and then use for years. Makes a great addition to any study of world geography. Countries included: China, India, Indonesia, Pakistan, Bangladesh, Japan, Philippines, Vietnam, Iran, Turkey, Thailand, Myanmar, South Korea, Iraq, Afghanistan 3rd, 4th, 5th, 6th, 7th, 8th, 9th grades
Learning from our thought process: brain beats computer. Take a look at a picture of a cat. It doesn’t matter if you’re a human or a supercomputer: both recognise this as a cat. ‘But’, says Jasper van der Velde, the science coordinator for new research centre Cognigron, ‘[the computer] needs hundreds of thousands of watts to do so. A human brain only needs about twenty watts.’ Twenty watts could power a light bulb. In other words: when you compare computers to brains, brains win in many respects. For one, brains are much better at pattern recognition than computers, because brains process information differently. That’s mainly because of the relationship between memory and information processing. ‘In traditional computers, those are divided across two separate components’, Van der Velde explains. ‘Before a computer can recognise an image as a cat, it needs to send information back and forth between its memory and its processor. That’s where the information is processed.’ But in our brains, the processor and memory are part of the same complex network of neurons and synapses. That means they can send signals to various locations in the brain simultaneously. ‘Because of that, our brains are much faster at comparing the image our eyes see to images of cats and dogs in our memory. Our brains also do this on a really small scale: a human skull contains thousands of millions of neuron connections that make this possible.’ The scientists at Cognigron want to try and learn from this. ‘It would be great if we could have computers with human characteristics’, says Van der Velde. But to do so, they need special materials that can process information, store it and then form networks. ‘We’re trying to create these materials – on a nano scale – that have some of our human characteristics.
We hope to be able to combine these into neuromorphic chips: chips that work sort of like brains.’ The chips don’t have to mimic brains exactly. ‘They have to have the characteristics that make brains better than computers.’ Computers like this would be really useful to society. ‘We’re already collecting heaps of data’, Van der Velde explains. ‘But we use very little of it. Computers simply aren’t efficient enough to sift through everything. But if we can make computers that are better at pattern recognition and more efficient at processing data, that could be extremely useful.’ Take medical data, for example. ‘Computers like that could find similarities between groups of patients. It would enable doctors to make personalised treatment plans based on people’s personal characteristics’, says Van der Velde. Computers that are better at pattern recognition will also be better at recognising images. ‘This could then be used to improve national security or help with making safer self-driving cars.’ The RUG is not the only institution that wants to create more efficient computers like this. ‘Some large corporations, like IBM and Intel, are working on it as well’, says Van der Velde. They even have working platforms based on brain power. The difference is that they’re still partially using classic technology. But it turns out this makes computers more efficient, too. ‘That shows that our idea to use different, potentially even more efficient materials, has potential.’ Other universities are working on similar projects. Twente started the project BRAINS with pretty much the same objective. ‘But Cognigron’s strength is that we’re purposefully being multidisciplinary. It’s what makes us unique.’ The Zernike Institute is involved in the project, bringing with it knowledge of materials science, physics, and chemistry, while the Bernoulli Institute contributes maths, computer sciences, and artificial intelligence.
‘In artificial intelligence, they’ve been working with neural networks for longer: computers that mimic the parallel data processing that brains do’, says Van der Velde. ‘They know what’s needed to improve those networks.’ The materials scientists can then look for materials with the necessary characteristics, while the mathematicians can substantiate the techniques. ‘The scientists at Cognigron are also really good at developing different kinds of materials that can learn tasks in hardware.’ Cognigron’s ambitions are grand. Thanks to a donation, they’ve been able to hire twelve professors and a number of PhD candidates. They currently have enough funding to continue their work for the next six years. ‘We want to take that time to demonstrate that new materials can in fact implement (on a nano scale) some of these great characteristics our brains have. And to prove that we can build complex systems to form the basis of a new type of computer.’
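The pattern-recognition idea Van der Velde describes, comparing a new image to memories of cats and dogs, can be sketched in a few lines of code. The sketch below is purely illustrative (it is not Cognigron code, and the feature vectors are made up): each "image" is reduced to a small feature vector, and recognition means finding the nearest stored prototype.

```python
# Minimal nearest-prototype pattern recognition: compare an input
# "image" (a feature vector) against stored memories and pick the
# closest one. Real neuromorphic hardware aims to do this kind of
# comparison in the material itself, with memory and processing
# in the same physical network instead of separate components.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def recognise(image, prototypes):
    """Return the label of the stored prototype closest to `image`."""
    return min(prototypes, key=lambda label: distance(image, prototypes[label]))

# Toy "memories": hypothetical, hand-picked feature vectors.
prototypes = {
    "cat": [0.9, 0.2, 0.8],
    "dog": [0.3, 0.9, 0.4],
}

print(recognise([0.8, 0.3, 0.7], prototypes))  # prints "cat"
```

On a conventional computer, every call to `distance` shuttles the vectors between memory and processor; a brain-like system would evaluate all stored prototypes in parallel, in place.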
Have you ever heard about Raynaud’s disease? Not really? This disease causes some areas of your body, like the fingers and toes, to feel numb and cold in response to cold temperatures or stress. The skin first turns white, then blue. In Raynaud’s disease, the smaller arteries which supply blood to your skin become narrow, limiting blood circulation to affected areas (vasospasm). What is Raynaud’s Disease? Raynaud’s disease is an occasional disorder of the blood vessels that mostly affects the fingers and toes. During an attack, blood can’t get to the surface of the skin and the affected areas turn white and blue. Women are more likely than men to have Raynaud’s disease, which is also known as Raynaud’s phenomenon or Raynaud’s syndrome. It is more common in people who live in colder climates. Raynaud’s Disease Types: There are two types of Raynaud’s disease: 1. Primary Raynaud’s: This is the more common form, and it affects people who do not have an associated medical condition. 2. Secondary Raynaud’s: This results from an underlying medical issue. It is less common and tends to be more serious. Raynaud’s Disease Symptoms: Usually, this disease affects areas of your skin: the skin first turns white, then gradually turns blue, and you might feel cold and numb. When you warm up and blood circulation improves, the affected areas may turn red, throb, tingle or swell. Although Raynaud’s most commonly affects the fingers and toes, it can also affect other areas of your body, like the nose, lips, ears, and even nipples. There are a few symptoms of the disease: 1. Cold fingers or toes 2. Color changes of your skin in response to cold or stress 3. Numbness, prickly feeling or stinging pain upon warming or stress relief. What are the Causes of Raynaud’s Disease? There are some causes of Raynaud’s disease: 1.
Connective Tissue Diseases: Many people who have a rare disease which leads to hardening and scarring of the skin (scleroderma) have Raynaud’s. Other diseases which increase the risk of Raynaud’s include lupus, rheumatoid arthritis, and Sjogren’s syndrome. 2. Diseases of the Arteries: These include a buildup of plaques in blood vessels which feed the heart, a disorder in which the blood vessels of the hands and feet become inflamed, and high blood pressure which affects the arteries of the lungs (primary pulmonary hypertension). 3. Carpal Tunnel Syndrome: This condition involves pressure on a major nerve to your hand, producing numbness and pain in the hand, which can make the hand more susceptible to cold temperatures. 4. Repetitive Action or Vibration: Typing, playing piano or doing similar movements for long periods, and operating vibrating tools, like jackhammers, can lead to overuse injuries. 5. Smoking: Smoking is very dangerous to our health; it works slowly and steadily, tightening our blood vessels. 6. Injuries to the Hands or Feet: Injuries include wrist fracture, surgery or frostbite. 7. Certain Medications: These include beta blockers, used to treat high blood pressure; migraine medications which contain ergotamine or sumatriptan; attention-deficit/hyperactivity disorder medications; certain chemotherapy agents; and drugs which cause blood vessels to narrow, like some over-the-counter cold medications. What are the Risk Factors of Raynaud’s Disease? Risk factors for primary Raynaud’s include: 1. Sex: More women than men are affected. 2. Age: Although anyone can develop the condition, primary Raynaud’s often begins between the ages of 15 and 30. 3. Climate: The disorder is also more common in people who live in colder climates. 4.
Family History: Having a first-degree relative (a parent, sibling or child) with the disease appears to increase the risk of primary Raynaud’s. Risk factors for secondary Raynaud’s include: 1. Related Diseases: These include conditions such as scleroderma and lupus. 2. Certain Occupations: These include jobs which cause repetitive trauma, like operating tools that vibrate. 3. Exposure to Certain Substances: Including smoking, taking medications which affect the blood vessels, and being exposed to certain chemicals, like vinyl chloride. What are the Complications of Raynaud’s Disease? Chilblains can start when there is a problem with the blood circulation, and Raynaud’s is one possible cause. The skin becomes itchy, red, and swollen, and it may feel hot, burning, and tender. Chilblains usually resolve in 1 to 2 weeks, but they can come back easily. Keeping the extremities warm can help to prevent them. How to Prevent Raynaud’s Disease? Bundle Up Outdoors: When it’s cold, don a hat, scarf, socks, and boots, with two layers of mittens or gloves before you go outside. Wear a warm coat with snug cuffs to go around your mittens or gloves, to prevent cold air from reaching your hands. Use Chemical Hand Warmers: Wear a face mask and earmuffs if the tip of your nose and your earlobes are sensitive to cold. Take Precautions Indoors: When taking food out of the refrigerator or freezer, wear gloves, mittens or oven mitts. A few people find it helpful to wear mittens and socks to bed during winter. What is the Treatment of Raynaud’s Disease? Treatment is based on medications and surgery. Medications can control the symptoms; if they do not give enough relief, surgery may be considered. Sympathetic nerves in your hands and feet control the opening and narrowing of blood vessels in your skin. Cutting these nerves interrupts their exaggerated responses.
Via small incisions in the affected hands or feet, a doctor strips these tiny nerves around the blood vessels. This treatment may reduce the frequency and duration of attacks. A doctor can also inject chemicals such as local anesthetics or onabotulinumtoxinA (Botox) to block sympathetic nerves in the affected hands or feet. Having covered Raynaud’s disease and its types, causes, symptoms, risk factors, complications and, most importantly, its treatment, remember that the foundation of managing this disease is self-care. Take the prescribed medications on time, and if they don’t work, consult a doctor immediately. Disclaimer: GoMedii is a healthcare platform that connects readers with health news, health tips, and information from health experts and doctors. All of the information and facts mentioned in the GoMedii Blog are examined and verified by doctors and health experts, or the source of the information is confirmed.
Learning the SI (International System of Units, commonly known as the metric system) is easy. Begin with basics, including the 7 base units that define 22 derived units, and using prefixes. Practice! Build proficiency and confidence making measurements using metric tools. Use reference points to develop an understanding of "how much." Estimation skills are key to sensemaking and checking for reasonableness of a measurement result. Learning metric is an interdisciplinary activity, so make connections to everyday life and career applications. You’ll find that writing with the SI is an effective way to communicate technical information. Take a NIST webinar or seminar to learn more! Explore professional development opportunities offered by the NIST Metric Program. When you’re ready, explore the technical details in NIST SP 330 and SP 811. Learn unit conversions between SI and non-SI units when necessary. Try the NIST Metric Trivia Quiz online or use the Alexa skill to test your knowledge and be on your way to thinking metric!
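One reason the SI is easy to learn is that every prefix is just a power of ten applied to a base unit, so conversions are single multiplications. As a quick illustration, here is a small sketch using a handful of prefixes (a subset of the full SI prefix list; the function names are ours, not from any NIST tool):

```python
# A few SI prefixes expressed as powers of ten.
# The complete official list is given in NIST SP 330.
PREFIXES = {
    "kilo": 1e3,
    "centi": 1e-2,
    "milli": 1e-3,
}

def to_base(value, prefix):
    """Convert a prefixed value to the base unit, e.g. 2.5 km -> 2500 m."""
    return value * PREFIXES[prefix]

def from_base(value, prefix):
    """Convert a base-unit value to a prefixed one, e.g. 0.003 m -> ~3 mm."""
    return value / PREFIXES[prefix]

print(to_base(2.5, "kilo"))       # metres in 2.5 km: 2500.0
print(from_base(0.003, "milli"))  # millimetres in 0.003 m: approximately 3
```

The same multiply-by-a-factor pattern covers SI/non-SI conversions (for example, the inch is defined as exactly 0.0254 m), which is why a small table of factors is all that's needed.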