This week we’ve been reading The Tale of Peter Rabbit by Beatrix Potter. This classic story is great to read in the spring, and it has inspired us to try lots of different rabbity activities and crafts. One of our favourites was a sorting activity which led us to creating and describing patterns. Learning to recognise, describe and create patterns is a key skill in early numeracy, and you also end up with some beautiful artwork.

You will need:
- Sorting and Pattern Template
- Coloured card (we used blue, orange, yellow and purple)
- Glue stick
- Plain paper or card to stick the pattern onto
- Cupcake cases or bowls for sorting

Firstly, I prepared all the cut-out pieces by drawing around the templates on coloured card and cutting them out. If you have older children, you can give them some fantastic scissor practice by supervising them cutting out their own shapes. I cut about 15 pieces for each shape so we had plenty to play with, but the number you cut is up to you.

Once all the pieces were cut, I put out four cupcake cases with one shape in each. Then I mixed up the rest of the shapes and we were ready to get sorting. We talked about how we could sort the shapes and then Burt got going. If you wanted to make the activity more difficult, you could make the rabbits and the carrots both orange and the chick and the egg both purple, then sort by colour first (one pile of orange and one pile of purple) before sorting those two piles into four piles by shape.

Sorting is a great activity for busy little fingers, as it gives you lots of opportunities to talk about colours and the properties of shapes. How do you know that the shape in your hand is a chick? Why does that shape go in that pot with those other shapes? If your child will let you have a go, it is great to do it ‘wrong’ and then get your child to tell you what wasn’t quite right and rectify it.
It is amazing what that can do for a child’s self-esteem and confidence, and it also helps them see that everyone makes mistakes, that it is alright, and that mistakes are easily put right.

Once we had finished sorting, I laid a sheet of paper in front of Burt and straight away laid out a simple pattern: Rabbit, Chick, Rabbit, Chick. Then I asked Burt what he thought would come next. To help your child along you can ask questions like, “What comes after the chick?” and point to earlier in the pattern as a clue. We finished a row of the pattern together and then I explained that what we had just created was a repeating pattern.

We gathered up the shapes and I started a new pattern. This time I laid out Rabbit, Carrot, Rabbit, Carrot. When I asked what came next, Burt got it straight away and we were able to finish the row. Once you have established the idea of a repeating pattern, you can let your child go ahead and start creating their own. We ended up with some great combinations, from using all the shapes (Chick, Rabbit, Egg, Carrot…) to using more than one of a particular shape each time (Carrot, Carrot, Rabbit, Rabbit…) (Chick, Egg, Egg…).

Once Burt was happy with his patterns, we stuck them down. Don’t stick until you have given your child plenty of opportunity to ‘play’ with the shapes and patterns: the idea is to allow time to experiment, and the finished artwork is the result of that play rather than the object of the task. The finished pattern is a bright and colourful piece of artwork that we will frame and be able to put up every Easter. We had lots of shapes, so we were able to try lots of different combinations and patterns, and my favourite has to be this simple Rabbit, Chick, Rabbit, Chick pattern.
We are going to be framing this one too, but I might also colour photocopy and laminate it so we can use it as an Easter placemat in the centre of the dinner table. It would be lovely to take it further and create a rabbit and chick stamp to recreate the pattern on some fabric.

Maths is part of the world all around us. Patterns are everywhere: in nature and in the things we buy and use every day. Maths is fun, it is relevant to our lives and it is something you will use throughout your life. If you use money, you are using numbers and maths. If you measure something when you cook (even if it is just heating up a microwave meal for the correct amount of time), it is all maths.

Sorting and creating patterns is something you can do with any colours and shapes, so go for it and let your children’s imaginations run wild. It is great fun and all you need is paper, scissors and glue. Have you been inspired by a book to craft, go on an outing or do an activity this week?
In some places, the impact of climate change is already observable. In others, scientists predict climate change based on intricate mathematical and computer models. But in Bangladesh it is already happening NOW, at a scale that involves unmatched natural and human tragedy. Calamities that were once one-in-20-year events in Bangladesh's long history of natural disasters now happen once every five years.

“Bangladesh faces particularly severe challenges with climate change threatening its impressive progress in overcoming poverty,” said Johannes Zutt, former World Bank Country Director for Bangladesh and Nepal.

As one of the nations most vulnerable to global climate change, Bangladesh is frequently visited by natural catastrophes such as tropical cyclones, storm surges, downpours and droughts. Superimposed on these disastrous events, climate change and any consequent sea level rise are likely to add fuel to the fire. More frequent storm surges and tougher cyclones often drive seawater 50 to 60 miles up the Delta’s rivers. The devastating flood of 2016, which inundated much of the country, was caused by heavy rainfall as well as water flowing down from the upstream hills in India. It submerged the river basin areas, especially in the northern parts of Bangladesh, leaving a large number of people dead, homeless and helpless.

Flood Risk in Bangladesh:

With these natural calamities also come increased health hazards. Heat waves are distressing many more vulnerable people, and global warming is enhancing the transmission of fatal diseases such as malaria and dengue fever. Air pollution from fossil fuel burning is also producing millions of early deaths each year, while the destruction of harvests by extreme weather threatens hunger for millions of children.
A World Food Programme (WFP) report says that by 2050, climate change is expected to increase the number of hungry people by 10 to 20 percent, and the number of malnourished children is expected to increase by 24 million (21 percent more than without the effect of climate change).

*Global Flood Analyzer: http://floods.wri.org/#/

Riverbank erosion has in recent years displaced between 50,000 and 200,000 people annually.** The population of the “immediately threatened” river islands, known as “chars,” exceeds four million. As chars wash away, deposition creates new chars downstream, because of the highly dynamic nature of riverine Bangladesh. The land is so unstable and the population so dense that displaced people often try to settle on these new, irregular, highly unstable sandbars.

A report prepared for the World Bank by the Potsdam Institute for Climate Impact Research and Climate Analytics, and peer-reviewed by 25 scientists worldwide, projects that 40% of the productive land in the southern region of Bangladesh would be lost under a 65 cm sea level rise by the 2080s. About 20 million people in the coastal areas of Bangladesh are already affected by salinity in drinking water. On rising sea levels, The Guardian quotes Saleemul Huq, director of the Bangladesh-based International Centre for Climate Change and Development (ICCCAD), as saying, “In the next 20 years we would expect five to 10 million people to have to move from the coastal areas. The whole country is a climate hotspot, but the most vulnerable area is the coast.”

It is not just people who are affected. The Sundarbans, the largest mangrove forest in the world, a World Heritage Site and shelter to the iconic Royal Bengal tiger, lies in the delta of the Ganges River in Bangladesh and India.
Across coastal Bangladesh, sea-level rise, worsened by the conversion of mangrove forest for agricultural production, shrimp farming and local business, has resulted in the loss of hundreds of thousands of acres of mangroves. Accordingly, the number of tigers has plummeted, and the World Wildlife Fund predicts that the tiger may become extinct there. Bangladesh is on the way to losing one of its last natural defences against climate change-induced super-storms.

But the outlook is changing. Bangladesh has progressively developed a countrywide capacity to address climate change impacts. The country has invested more than $10 billion in climate change actions, including increasing the capability of government agencies to respond to emergencies, augmenting the capacity of communities to build their resilience through Climate Change Adaptation (CCA) and Disaster Risk Reduction (DRR), and placing prominent mitigation measures high on the political agenda. In addition, building emergency cyclone shelters and resilient accommodation, firming up river embankments and coastal polders, and reducing salinity intrusion are among the initiatives developed by the Government of Bangladesh in consultation with civil society. Prime Minister Sheikh Hasina’s government established a $400 million ‘Climate Change Trust Fund’ in 2009 from its own resources.

Though Bangladesh is saturated with initiatives and projects related to climate change, it is still facing tragic consequences which prove again and again that climate change is happening NOW! So funding, capacity and technology at the national level must be accessible in order to implement approaches that effectively address these impacts and create a more resilient Bangladesh.

** Bangladesh: Main River Flood and Bank Erosion Risk Management Program, Asian Development Bank Technical Assistance Consultant’s Report

Writer: Shooha Tabil, Bangladesh University of Engineering & Technology (BUET)
> Reading, writing and language:
> Knows the alphabet and words based on phonics; speaks more complex sentences; learns grammar; understands stories (beginning, middle, end)
> Understands the concept of print
> Develops comprehension skills
> Begins to write stories with some readable parts
> Uses capital letters and full stops for sentences
> Writes several sentences about their ideas or experiences
> Reads simple picture books
> Number and maths:
> Adds by counting all (using fingers or objects), and subtracts by counting backwards
> Skip counting
> Organises objects into categories according to a variety of characteristics
> Can do quantification
> Can use ordinal numbers (1st to 5th) and identify which object is first, second, last
> Able to use comparative language such as ‘greater than’, ‘less than’, ‘equal to’ to describe groups of objects
> Increasingly uses mathematical language and links mathematical ideas
> Recognises shapes by matching shapes with other objects that have the same shape and size
> Other skills:
> Uses a pencil or pen with the correct grip
> Often forms letters or numbers backwards
> Understands the importance of rules, and the simple reasons behind rules
> Understands relationships between objects
> Enjoys the opportunity to do ‘show and tell’ at school
Keyhole gardens are the brainchild of humanitarian charities and missionaries for use in impoverished countries with poor soil, bad weather, and starving people. At least one organization teaches schoolchildren how to construct the gardens from available recycled materials and to care for the vegetables. The schools then have nutritious vegetables for lunches. The children are encouraged to build a keyhole garden at home to educate their parents, thus enabling families to feed themselves. Advisers visit the families to troubleshoot garden problems.

Keyhole Garden Concept

The Keyhole Garden concept is brilliantly simple. A circular raised bed has a center compost basket that distributes nutrients to the surrounding lasagna-style garden bed. A small pie-slice section of the bed is left open for easy access to the center compost basket, forming the keyhole design. Kitchen and garden waste, along with household gray water, are added to the center basket. The soil bed layers are slightly sloped away from the center to aid water and "compost tea" distribution. As the materials decompose, soil, composting materials, and amendments are added to the bed in later growing seasons.

Keyhole Garden Benefits

Keyhole Garden Construction

This is not a comprehensive how-to. If you would like detailed information to construct a Keyhole Garden, please follow the links provided at the end of this article; otherwise, refer to the Cross-section View A-A. The following is a list of recurring points made for the garden design to work efficiently. Use whatever materials you may have at your disposal that will allow the best result to be achieved.

- Drainage: Rocks, broken tiles or pots, rusty cans, twigs, small branches, or old critter bones can be used for the bottom drainage layer.
- Compost Basket: A tube, 1' to 1-1/2' (about 0.3 to 0.45 m) in diameter and tall enough to extend well above the center of the bed, can be fashioned from anything that will allow water to pass through into the surrounding bed, such as chicken wire, fencing, or sticks (think in terms of a woven basket). Supports to hold the basket in place, such as strong branches, boards, or rebar, and wire or strong twine to hold everything together, will also be needed.
- Outer Border Walls: Anything that will contain the soil could be used: stones, bricks, or blocks can be stacked into place; boards or branches could be driven into the ground; sandbags or old tires could work as well.
- Planting Bed Fill: Use the same materials as for a lasagna garden or compost pile, such as cardboard, paper, manure, leaves, straw, hay, old potting mix, or wood ashes, then finish the surface for planting with topsoil.
- Size: A cleared, level area no larger than 6-1/2' (2 meters) in diameter is the maximum for a Keyhole Garden. A larger bed may suffer from a lack of water near the outer wall (moisture spreads outward from the compost basket), and plants close to the center may be difficult to reach.
- Keyhole Cutout: The cutout area should be wide enough to allow easy access to the compost basket when adding materials, removing compost, or making basket repairs if needed. Positioning the cutout on the north side of the circular garden uses the shaded part of the bed that would not receive full sun, leaving all the sunny sides for planting.
- Slope: All the layers added to the bed should slope downward toward the outer border wall. The slope helps direct moisture from the compost basket out into the bed.
- Wall Height: The outer border wall can be as high as needed for the comfort of the gardener and the amount of sourced materials. The top surface of the soil should stay below the outer walls to contain rainwater and prevent the soil from running off.
- Other Considerations:
  - A roof can be made to deflect excessive rain from the compost basket or to conserve compost moisture during drought.
  - Trellises could be installed to support cloches for weather protection.

I have been gathering information and materials to construct a Keyhole Garden for a garden project next year. I was so excited by the design idea that I decided to share the information now, instead of waiting until I could build one to write about later. I hope I have provided a potential new garden concept for others to consider as well.

- Texas Co-op Power: Keyhole Gardening: Unlocking the secrets of drought-hardy gardening, by G. Elaine Aker; Issue: February 2012
- Valhalla Project: videos and photos
- Send-A-Cow: Keyhole Gardens and many other great frugal gardening ideas
Hazard Analysis and Critical Control Points (HACCP) is a process control system designed to identify and prevent microbial and other hazards in food production. The HACCP system is used at all stages of the food chain, from food production to packaging and distribution. HACCP includes steps designed to identify food safety risks, prevent food safety hazards before they occur, and address legal compliance.

The most important aspect of HACCP is that it is a preventive system for controlling food safety hazards rather than an inspection system. Prevention of hazards cannot be accomplished by end-product inspection; controlling the production process with HACCP offers the best approach.

Breaking It Down

Kestrel follows twelve steps to help ensure the successful implementation and integration of HACCP throughout a company:

- Assemble a HACCP team with the appropriate product-specific knowledge and expertise to develop an effective Food Safety Plan. The team should comprise individuals familiar with all aspects of the production process, plus specialists with expertise in specific areas, such as engineering or microbiology. It may be necessary to use external sources of expertise in some cases.
- Describe the product in full detail, including composition, physical/chemical structure, microbicidal/static treatments, packaging, storage conditions, and distribution methods.
- Identify the intended/expected use of the product by the end user. It is also important to identify the consumer target groups; vulnerable groups, such as children or the elderly, may need to be considered specifically.
- Construct a flow diagram that provides an accurate representation of each step in the manufacturing process, from raw materials to end product. It may include details of the factory and equipment layout, ingredient specifications, features of equipment design, time/temperature data, cleaning and hygiene procedures, and storage conditions.
- Perform an on-site confirmation of the flow diagram to confirm that it is aligned with actual operations. The operation should be observed at each stage, and any discrepancies between the diagram and normal practice should be recorded and amended. It is essential that the flow diagram is accurate, since the hazard analysis and identification of Critical Control Points (CCPs) rely on the data it contains.
- Conduct a hazard analysis for each process step to identify any biological, chemical, or physical hazards. This assessment also includes rating the hazard using a risk matrix, determining whether the hazard is likely to occur, and identifying the preventive controls for the process step.
- Determine Critical Control Points (CCPs): the steps at which previously identified hazards can be prevented, eliminated, or reduced to an acceptable level. The final HACCP Plan will focus on the control and monitoring of the process at these points.
- Establish critical limits and develop processes that limit risk at CCPs. More than one critical limit may be defined for a single step. Criteria used to set critical limits must be measurable and include rating and ranking of hazards for each step of the flowchart.
- Monitor CCPs and develop processes for ensuring that critical limits are followed. Monitoring procedures must be able to detect loss of control at the CCP and should provide this information in time to make appropriate adjustments so that control of the process is regained before critical limits are exceeded. Where possible, process adjustments should be made when monitoring results indicate a trend towards a loss of control at a CCP.
- Establish preplanned corrective actions for each CCP in the HACCP Plan that can be applied when the CCP is not under control. If monitoring indicates a deviation from the critical limits for a CCP, action (e.g., proper isolation and disposition of affected product) must be taken to bring it back under control.
- Establish procedures for verification to determine whether the HACCP system is working correctly. Verification procedures should include detailed reviews of all aspects of the HACCP system and its records. The documentation should confirm that CCPs are under control and should also indicate the nature and extent of any deviations from the critical limits and the corrective actions taken in each case. - Establish proper documentation and recordkeeping for all HACCP processes to ensure that the business can verify that controls are in place and are being properly maintained. Developing and implementing a HACCP program requires a significant investment of time and effort. Though HACCP continues to evolve, it is up to the company to design and customize HACCP programs to make them effective and workable. These twelve steps break HACCP into manageable chunks and will help ensure that the company is consistently and reliably producing safe food that will not cause harm to the consumer.
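The critical-limit and monitoring logic described in the steps above can be sketched in a few lines of code. The example below is a minimal illustration only, not part of any real HACCP system: the `CCP` class, the limit values, and the cooking step are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CCP:
    """A hypothetical Critical Control Point with measurable critical limits."""
    name: str
    lower_limit: float  # lowest acceptable monitoring reading (e.g., deg C)
    upper_limit: float  # highest acceptable monitoring reading

    def check(self, measurement: float) -> str:
        """Compare a monitoring reading against the critical limits."""
        if self.lower_limit <= measurement <= self.upper_limit:
            return "in control"
        # A deviation from a critical limit triggers the preplanned corrective action.
        return "corrective action required"

# Illustrative cooking step monitored against a 72-75 deg C window
cook_step = CCP(name="cook", lower_limit=72.0, upper_limit=75.0)
print(cook_step.check(73.0))  # in control
print(cook_step.check(68.0))  # corrective action required
```

In a real plan, each monitoring record produced by a check like this would feed the verification and recordkeeping steps, documenting both deviations and the corrective actions taken.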
This article presents six ideas about the construction of emotion: (a) emotions are more readily distinguished by the situations they signify than by patterns of bodily responses; (b) emotions emerge from, rather than cause, emotional thoughts, feelings, and expressions; (c) the impact of emotions is constrained by the nature of the situations they represent; (d) in the OCC account (the model proposed by Ortony, Clore, and Collins in 1988), appraisals are psychological aspects of situations that distinguish one emotion from another, rather than triggers that elicit emotions; (e) analyses of the affective lexicon indicate that emotion words refer to internal mental states focused on affect; (f) the modularity of emotion, long sought in biology and behavior, exists as mental schemas for interpreting human experience in story, song, drama, and conversation.

- appraisal theory
- construction of emotion
- emotion schemas
This Dutch map, made about 1655, shows eastern North America from what is now Canada to Virginia.

Grade Range: 1-5
Resource Type(s): Reference Materials, Lessons & Activities
Duration: 40 minutes
Date Posted: 3/18/2011

Use or create maps to explore your local area, along with discussion tips for kids and families. Part of an OurStory module entitled Full Steam to Freedom, this activity includes strategies for using online maps or making your own map of a trip, and tips for making the most of those experiences through discussion and cooperative discovery. OurStory is designed to help children and adults enjoy exploring history together through the use of children's literature, everyday objects, and hands-on activities.

Standards in History (Grades K-4)
United States History Standards (Grades 5-12)
College, Career, and Civic Life (C3) Framework for Social Studies State Standards (Grades 3-5)

D1.2.3-5. (Compelling Questions): Identify disciplinary concepts and ideas associated with a compelling question that are open to different interpretations. D1.3.3-5. (Constructing Supporting Questions): Identify the disciplinary concepts and ideas associated with a supporting question that are open to interpretation. D1.4.3-5. (Constructing Supporting Questions): Explain how supporting questions help answer compelling questions in an inquiry. D1.5.3-5. (Determining Helpful Sources): Determine the kinds of sources that will be helpful in answering compelling and supporting questions, taking into consideration the different opinions people have about how to answer the questions. D2.Civ.2.3-5. (Civics): Explain how a democracy relies on people's responsible participation, and draw implications for how individuals should participate. D2.Civ.3.3-5. (Civics): Examine the origins and purposes of rules, laws, and key U.S. constitutional provisions. D2.Civ.4.3-5. (Civics): Explain how groups of people make rules to create responsibilities and protect freedoms. D2.Civ.5.3-5.
(Civics): Explain the origins, functions, and structure of different systems of government, including those created by the U.S. and state constitutions. D2.Civ.6.3-5. (Civics): Describe ways in which people benefit from and are challenged by working together, including through government, workplaces, voluntary organizations, and families. D2.Civ.7.3-5. (Civics): Apply civic virtues and democratic principles in school settings. D2.Civ.8.3-5. (Civics): Identify core civic virtues and democratic principles that guide government, society, and communities. D2.Civ.9.3-5. (Civics): Use deliberative processes when making decisions or reaching judgments as a group. D2.Civ.10.3-5. (Civics): Identify the beliefs, experiences, perspectives, and values that underlie their own and others' points of view about civic issues. D2.Civ.11.3-5. (Civics): Compare procedures for making decisions in a variety of settings, including classroom, school, government, and/or society. D2.Civ.12.3-5. (Civics): Explain how rules and laws change society and how people change rules and laws. D2.Civ.13.3-5. (Civics): Explain how policies are developed to address public problems. D2.Civ.14.3-5. (Civics): Illustrate historical and contemporary means of changing society. D2.Eco.1.3-5. (Economics): Compare the benefits and costs of individual choices. D2.Eco.2.3-5. (Economics): Identify positive and negative incentives that influence the decisions people make. D2.Eco.3.3-5. (Economics): Identify examples of the variety of resources (human capital, physical capital, and natural resources) that are used to produce goods and services. D2.Eco.4.3-5. (Economics): Explain why individuals and businesses specialize and trade. D2.Eco.5.3-5. (Economics): Explain the role of money in making exchange easier. D2.Eco.6.3-5. (Economics): Explain the relationship between investment in human capital, productivity, and future incomes. D2.Eco.7.3-5. (Economics): Explain how profits influence sellers in markets. 
D2.Eco.8.3-5. (Economics): Identify examples of external benefits and costs. D2.Eco.9.3-5. (Economics): Describe the role of other financial institutions in an economy. D2.Eco.10.3-5. (Economics): Explain what interest rates are. D2.Eco.11.3-5. (Economics): Explain the meaning of inflation, deflation, and unemployment. D2.Eco.12.3-5. (Economics): Explain the ways in which the government pays for the goods and services it provides. D2.Eco.13.3-5. (Economics): Describe ways people can increase productivity by using improved capital goods and improving their human capital. D2.Eco.14.3-5. (Economics): Explain how trade leads to increasing economic interdependence among nations. D2.Eco.15.3-5. (Economics): Explain the effects of increasing economic interdependence on different groups within participating nations. D2.Geo.1.3-5. (Geography): Construct maps and other graphic representations of both familiar and unfamiliar places. D2.Geo.2.3-5. (Geography): Use maps, satellite images, photographs, and other representations to explain relationships between the locations of places and regions and their environmental characteristics. D2.Geo.3.3-5. (Geography): Use maps of different scales to describe the locations of cultural and environmental characteristics. D2.Geo.4.3-5. (Geography): Explain how culture influences the way people modify and adapt to their environments. D2.Geo.5.3-5. (Geography): Explain how the cultural and environmental characteristics of places change over time. D2.Geo.6.3-5. (Geography): Describe how environmental and cultural characteristics influence population distribution in specific places or regions. D2.Geo.7.3-5. (Geography): Explain how cultural and environmental characteristics affect the distribution and movement of people, goods, and ideas. D2.Geo.8.3-5. (Geography): Explain how human settlements and movements relate to the locations and use of various natural resources. D2.Geo.9.3-5.
(Geography): Analyze the effects of catastrophic environmental and technological events on human settlements and migration. D2.Geo.10.3-5. (Geography): Explain why environmental characteristics vary among different world regions. D2.Geo.11.3-5. (Geography): Describe how the spatial patterns of economic activities in a place change over time because of interactions with nearby and distant places. D2.Geo.12.3-5. (Geography): Explain how natural and human-made catastrophic events in one place affect people living in other places. D2.His.1.3-5. (History): Create and use a chronological sequence of related events to compare developments that happened at the same time. D2.His.2.3-5. (History): Compare life in specific historical time periods to life today. D2.His.3.3-5. (History): Generate questions about individuals and groups who have shaped significant historical changes and continuities. D2.His.4.3-5. (History): Explain why individuals and groups during the same historical period differed in their perspectives. D2.His.5.3-5. (History): Explain connections among historical contexts and people's perspectives at the time. D2.His.6.3-5. (History): Describe how people's perspectives shaped the historical sources they created. D2.His.9.3-5. (History): Summarize how different kinds of historical sources are used to explain events in the past. D2.His.10.3-5. (History): Compare information provided by different historical sources about the past. D2.His.11.3-5. (History): Infer the intended audience and purpose of a historical source from information within the source itself. D2.His.12.3-5. (History): Generate questions about multiple historical sources and their relationships to particular historical events and developments. D2.His.13.3-5. (History): Use information about a historical source, including the maker, date, place of origin, intended audience, and purpose to judge the extent to which the source is useful for studying a particular topic. D2.His.14.3-5. 
(History): Explain probable causes and effects of events and developments. D2.His.16.3-5. (History): Use evidence to develop a claim about the past. D2.His.17.3-5. (History): Summarize the central claim in a secondary work of history. D3.2.3-5. (Gathering and Evaluating Sources): Use distinctions among fact and opinion to determine the credibility of multiple sources. D3.3.3-5. (Developing Claims and Using Evidence): Identify evidence that draws information from multiple sources in response to compelling questions. D3.4.3-5. (Developing Claims and Using Evidence): Use evidence to develop claims in response to compelling questions. D4.2.3-5. (Communicating and Critiquing Conclusions): Construct explanations using reasoning, correct sequence, examples, and details with relevant information and data. D4.3.3-5. (Communicating and Critiquing Conclusions): Present a summary of arguments and explanations to others outside the classroom using print and oral technologies (e.g., posters, essays, letters, debates, speeches, and reports) and digital technologies (e.g., Internet, social media, and digital documentary). D4.4.3-5. (Communicating and Critiquing Conclusions): Critique arguments. D4.5.3-5. (Communicating and Critiquing Conclusions): Critique explanations. D4.6.3-5. (Taking Informed Action): Draw on disciplinary concepts to explain the challenges people have faced and opportunities they have created, in addressing local, regional, and global problems at various times and places. D4.7.3-5. (Taking Informed Action): Explain different strategies and approaches students and others could take in working alone and together to address local, regional, and global problems, and predict possible results of their actions. D4.8.3-5. (Taking Informed Action): Use a range of deliberative and democratic procedures to make decisions about and act on civic problems in their classrooms and schools.
|X Mr Karparov visited the surgery with her daughter.|
|✓ Mr Karparov visited the surgery with his daughter.|

Gender is an important aspect of English that we all need to learn. Understanding how to use gender in English is important to both the clarity and accuracy of our writing and speaking. While it may seem simple, a lot of students make this mistake. This is partly because English does things slightly differently from other languages. Let’s take a closer look at gender and how English treats it in relation to nouns and possessive determiners.

How is gender used in English?

English is a bit different when it comes to the use of gender. Unlike French, German and many other languages, English does not use grammatical gender (where all nouns are assigned a gender). Instead, English employs natural gender, applying masculine and feminine forms according to the person being referred to. In other words, we use pronouns and possessive determiners (she, he, hers and his) to reflect gender.

What’s the most common mistake?

Sometimes, writers and speakers will use the incorrect gender for the patient’s relative, as in the example above. Other times, it’s the incorrect gender for the patient themselves:

|Mrs Chu underwent a hysterectomy 10 days ago. His post-operative recovery has been unremarkable.|

When you’re writing a healthcare letter in your role or during the Writing sub-test, you should check that each gender pronoun (he/she) and possessive determiner (his/her) is correct. It is a very noticeable mistake and can be easily picked up in proofreading. Not checking your work can lead to confusion for the reader. So always check your work and make sure your gender pronouns match the gender of the patient or subject of your sentence. If you would like to know more about gender or another aspect of English, take a look at our Preparation Portal today.
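The proofreading check described above can even be roughed out in code. The sketch below is purely illustrative: the title-to-pronoun mapping and the `flag_mismatches` function are invented for this example, and a real check would need context and human judgement (for instance, a letter can mention more than one person).

```python
import re

# Illustrative mapping from a title to the pronouns we would expect to see.
EXPECTED = {
    "Mr": {"he", "his", "him"},
    "Mrs": {"she", "her", "hers"},
    "Ms": {"she", "her", "hers"},
}
PRONOUNS = {"he", "his", "him", "she", "her", "hers"}

def flag_mismatches(text: str) -> list:
    """Return pronouns that do not match the title of the sentence's subject."""
    title = re.search(r"\b(Mrs|Ms|Mr)\b", text)
    if not title:
        return []  # no title found, nothing to check against
    expected = EXPECTED[title.group(1)]
    words = re.findall(r"[a-z]+", text.lower())
    return [w for w in words if w in PRONOUNS and w not in expected]

print(flag_mismatches("Mr Karparov visited the surgery with her daughter."))  # ['her']
print(flag_mismatches("Mr Karparov visited the surgery with his daughter."))  # []
```

Run on the second example above, it would flag "his" against "Mrs Chu" in the same way, which is exactly the kind of mismatch the OET assessors notice immediately.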
The Sensorial area is one that often confuses those who have not been immersed in the Montessori Method. Sensorial exercises were designed by Maria Montessori to help young children explore every quality that can be perceived by the senses: size, shape, composition, texture, loudness or softness, matching, weight, temperature, diameter, and so on. The purpose and aim of Sensorial work is for the child to acquire the knowledge and language that allow her to describe her environment. In the photos above, children can be seen sorting knobbed and knobless cylinders by height and diameter, and sorting tablets that range from rough to smooth. They also work with the broad stair, whose prisms share a consistent length but vary in height and width. The trinomial cube is a more advanced Sensorial work that, in addition to being categorized as a Sensorial material, serves as an introduction to algebra and a preparation for the proof of the formula (a+b+c)³ at the Elementary level.
Soil phosphorus availability and lime: More than just pH?

Plants can't do without phosphorus. But there is often a 'withdrawal limit' on how much phosphorus they can get from the soil. That's because phosphorus in soils is often in forms that plants can't take up, which affects how healthy and productive the plants can be. One influence on phosphorus availability is the soil's pH level. If soils are too acidic, phosphorus reacts with iron and aluminum, which makes it unavailable to plants. But if soils are too alkaline, phosphorus reacts with calcium and also becomes inaccessible. "Phosphorus is most available to plants when soil is at a 'Goldilocks' zone of acidity," says Andrew Margenot. Margenot is a researcher at the University of Illinois at Urbana-Champaign. There are ways to make more phosphorus available to plants. For example, adding lime (calcium hydroxide) reduces soil acidity. That can unlock the phosphorus that was previously unavailable. This is a common practice. "Liming is a bread-and-butter tool for agriculture," says Margenot. However, liming can influence other ways by which phosphorus might become available to plants. Enzymes called phosphatases are also known to influence the amount of phosphorus available to plants. Margenot's study looked at liming and soil management history to see if they influenced the activity of soil enzymes. Margenot and his colleagues conducted experiments in western Kenya, a region with acidic, weathered soils. Researchers added varying amounts of lime to long-term experimental plots. These plots had received specific fertilization treatments since 2003: one set of plots had been left unfertilized, another had received cow manure, and a third had mineral nitrogen and phosphorus added. Twenty-seven days after liming, the researchers measured phosphatase activity. They also measured how much phosphorus was available to plants.
They found no clear relationship between the soil acidity levels changed by liming and phosphatase activity. This was unexpected. "We know that phosphatases are sensitive to soil acidity levels," says Margenot. "Our findings show that it is more complicated than just soil acidity when it comes to these enzymes." More surprisingly, changes in phosphatase activities after liming depended on the soil's history. This suggests that the sources of these enzymes (microbes and plant roots) could have responded to different fertilization histories by changing the amount or type of phosphatases secreted. Furthermore, in all cases, the increases in phosphorus availability were relatively small. "In the soils tested, lime alone was not enough to be meaningful to crops and thus farmers," says Margenot. "Lime needs to be combined with added phosphorus to meet crop needs in these soils." Margenot is now working to extend this study. With colleagues from the International Center for Tropical Agriculture (CIAT) and the German Society for International Cooperation (GIZ), he'll be studying western Kenyan farms. The goal is to see if using lime at rates realistic for growers will have soil health trade-offs in these weathered soils. This research is published in Soil Science Society of America Journal.
The Japanese character 上 has a basic meaning of "up" or "above", and is pronounced "ue" when written by itself. In Kanji compounds, it is often pronounced "uwa" (上着, uwagi) or "jou" (上陸, jouriku). 上 does have some other usages, and this time I'd like to discuss the expression "〜上で". This can be used to express the result of something, given the condition specified by the "〜" part. This is explained in the Japanese dictionary as "…を条件に入れて, …した結果". Here is one such example of that usage:
- I want to decide after properly studying it.
You can also use の上で after a noun to express a similar meaning, although there are a few set phrases that this pattern is commonly used with. Here is one:
- It's something I did with full understanding.
My translation above is somewhat literal; you could also say "purposefully" or "intentionally". Another use of "〜上で" is similar to "when". It is commonly used when you want to talk about something that is necessary or important when doing something.
- In life there is only one important thing.
You can also use 上で to modify another noun with の.
- Thing(s) to be careful about during usage. (sentence fragment)
上 can also be tacked on after some nouns, to represent some action that takes place "on" them. A common usage of this is ネット上, where ネット (netto) is a short form of "internet".
- Is there anything you can't do on the net?
GIS stands for Geographic Information Systems. What most people think of is maps, but GIS features include a lot more than just maps.
- Step-by-step driving directions are a common GIS feature that is not often thought of as a GIS function. Maps are helpful, but most mapping sites include step-by-step directions to help people get where they want to go.
- Spatial analysis is looking at data based on physical location. When it comes to business, you might want to know which areas receive lots of sales while others are underserved. Survey data may need to consider the socio-economic status of those being polled.
- Proximity is another common GIS task. This helps people locate resources (stores, government buildings, airports) that are near their location.
- Geolocation is identifying where something is located. Once something is geolocated, it can be placed on a map or used to determine proximity.
Think about ways that you can add a GIS feature to your web site.
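A proximity feature like the one described above boils down to computing distances between coordinates and picking the smallest. The sketch below uses the standard haversine great-circle formula; the place names and coordinates are purely illustrative, not from any real dataset:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest(origin, places):
    """Return the (name, distance_km) of the place closest to origin."""
    name, (lat, lon) = min(
        places.items(), key=lambda kv: haversine_km(*origin, *kv[1])
    )
    return name, haversine_km(*origin, lat, lon)

# Hypothetical resources near lower Manhattan:
stores = {
    "Airport": (40.6413, -73.7781),
    "City Hall": (40.7128, -74.0060),
}
print(nearest((40.7000, -74.0000), stores))
```

Real GIS software typically answers such queries with a spatial index rather than a linear scan, but the distance computation is the same idea.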
High Altitude Venus Operational Concept (HAVOC)

Venus is an important destination for future space exploration endeavors. However, it presents a unique set of challenges. Though its internal geology is similar to Earth's, its surface is hot enough to melt lead and is covered with craters, volcanoes, mountains, and lava plains. The atmosphere of Venus is primarily carbon dioxide, with thick clouds of sulfuric acid that completely cover the planet. The atmosphere traps the small amount of energy from the sun that reaches the surface, along with the heat the planet itself releases. This greenhouse effect has made the surface and lower atmosphere of Venus one of the hottest places in the solar system. At about 50 km, however, the upper atmosphere of Venus is relatively benign, with pressure, density, gravity, and radiation protection similar to those at the surface of the Earth. A lighter-than-air vehicle could carry either a host of instruments and probes, or a habitat and ascent vehicle for a crew of two astronauts to explore Venus for up to a month. Such a mission would require less time to complete than a crewed Mars mission. A recent internal NASA study of a High Altitude Venus Operational Concept (HAVOC) led to the development of an evolutionary program for the exploration of Venus, with focus on the mission architecture and vehicle concept for a 30-day crewed mission into Venus's atmosphere. This project is no longer active. Key technical challenges for the mission include performing the aerocapture maneuvers at Venus and Earth, inserting and inflating the airship at Venus, and protecting the solar panels and structure from the sulfuric acid in the atmosphere. With advances in technology and further refinement of the concept, missions to the Venusian atmosphere can expand humanity's future in space. NASA Langley's Systems Analysis and Concepts Directorate investigates the feasibility of creative and forward-thinking ideas.
Many concepts are pursued and eventually actualized while others, like HAVOC, are added to NASA’s archives. All our research adds to NASA’s body of knowledge and furthers the Agency’s mission.
The construction industry is a massive industry that requires a lot of resources. In fact, the construction industry uses about 50% of natural resources, an 80% increase in natural resource consumption since 1980. The US construction industry, between construction and demolition, generated 534 million tons of waste in 2014, and that figure has remained relatively stable. Pollution and waste generation seem to be an unavoidable part of construction. However, researchers, scientists, and industry professionals are developing ways to decrease pollution through pollution engineering and innovative construction materials.

Air Pollution Engineering

Air pollution engineering is a division of environmental engineering that focuses on air pollution control and air quality engineering. Air pollution control covers the techniques and tools used to reduce or eliminate emissions that can harm the environment or human health, and air pollution engineers are responsible for creating the technologies and processes that reduce or eliminate air pollution. They can also focus on air quality engineering. Air quality engineers help monitor air quality, develop improvements, and evaluate air supply systems. Their work is important in construction planning and design, but they also help reduce air pollution and energy waste. Air pollution engineering is one of the ways to innovate construction materials and reduce pollution. In Mexico City and Milan, builders and construction engineers are using new technology to produce buildings that can reduce smog. Elegant Embellishments, a Berlin-based architecture firm, created tiles that can break up smog into water and other substances. They used those tiles to create a facade over the Hospital Manuel Gea Gonzalez in Mexico City to help reduce smog and pollution in the area. The best part is that the tiles can continue to break up smog indefinitely.
So long as there is sunlight, the tiles and coverings will break up the smog in these areas. Elegant Embellishments and other architectural firms are using pollution engineering to better the environment. TioCem is another interesting pollution engineering innovation: a special cement that uses ultraviolet light to reduce air pollution. TioCem uses titanium dioxide, which is added during cement production. It works like plants and the smog-eating tiles: when UV light hits the concrete, it converts nitrogen dioxide into safe byproducts. TioCem can be used on and in pavement, roofing tiles, noise barriers, facades, and, most importantly, road construction. Adding TioCem to road construction and noise barriers allows construction itself to help reduce air pollution. One of the big issues with asphalt is that it cracks, breaks, scales, and spalls. Cities replace or fix asphalt surfaces every four years because of this. However, engineers have found a way to make self-healing asphalt. Its expected use would help reduce repair costs, since America spends roughly $80 billion nationally on repair and operating costs. When heated, the asphalt is able to melt and mix to re-form. The lab that developed it also developed a vehicle that can run over the road and perform this asphalt healing. The use of composite materials in construction is relatively new; however, composites have long been a popular material choice in marine, aeronautical, and transportation applications. Composite materials are multiple traditional materials combined to produce a finished material with better properties than its original components. Some of the benefits of composite materials are their lighter weight and better strength-to-weight ratio: you're not compromising strength for a lighter-weight material. They're a surprising pollution engineering innovation as well, since they're stronger, corrosion-resistant, and very durable.
Building owners don't have to replace composite materials as often, which helps the environment. The Institute for Advanced Architecture of Catalonia created bricks that will reduce energy costs. These bricks combine hydrogel and clay in order to reduce the temperature in buildings. The hydrogel absorbs water throughout the day and evening, and on hot days it evaporates, which significantly reduces the temperature. Although this technology is still being tested, it could easily be incorporated into homes to reduce cooling costs. This pollution engineering solution could easily reduce energy costs and help save the environment. Many other schools and labs are trying to find ways of recycling waste into new products that will stand the test of time and reduce our emissions. Every year new pollution engineering technology comes out, giving us the opportunity to reduce long-term costs while being better for the environment.
Background reading: Gen 19:1-29
Some points to think about:
1. What kind of a person is Lot (especially if a person's true colors come out in a crisis)?
2. How does he think about his family?
3. What is the role of the wife?
4. As we get to our story, what have Lot and his 2 daughters witnessed over the last 24 hours?

Part 1: Close reading of the text: Genesis 19:30-38.
Reading the text in Hebrew will add to your appreciation of it, but regardless of the language you study in, part 2 will examine some of the important Hebrew words in this narrative. Read through the story once. As you read, jot down questions and thoughts that come to your mind. When you finish reading: What is your reaction? As you work through it slowly, check to see if the narrator shares your view.
1. To your understanding, what was the concern of the daughters that caused them to choose this course of action? Base your answer on your reading of the biblical text. Come back to this question after reading part 3.
2. Compare the behavior of the Elder and the Younger daughters. Is there any difference in the narration of the 2 events? Start on your own. Part 2 focuses on some significant words in the Hebrew text (whose nuances unfortunately get lost in most translations) that might make this narrative very interesting.
3. Was Lot an unaware victim throughout this story, or did he, at some point, begin to have a clue as to what was going on? This question should be revisited as you study parts 2 and 3. Right now, what is your gut reaction? (If you can prove it from the text, that would be great!)
4. Pay attention to the names of the sons that are born: Mo'av and Ben Ami. In Hebrew, names have meanings: Mo'av = me'av = "from father"; Ben Ami = "child of my nation". How do the names differ in their messages?
5. As in many narratives in Bereshit (Genesis), this too is the foundation story of a nation (or two) told from an Israelite perspective. a) Read Deuteronomy 2:9, 18-19.
What is God's attitude towards the nations that came from the daughters of Lot? b) Now read Deuteronomy 23:4-7. How does the Torah wish the Israelites to relate to the nations that came from the daughters of Lot in the long run? c) How can we explain the difference in attitude? (Is it different?)

Part 2: The fine shades of Hebrew:
Two similar, but not identical, terms are used to describe sexual relations in this section: לשכב את ('to lay' followed by an object) and לשכב עם ('to lay with'). To learn what the difference means, let's examine other places in Tanakh where these terms are used. After studying this on your own, you might find the video segment for this session helpful.
לשכב את: 'To lay' followed by an object:
לשכב עם: 'To lay with':

Part 3: Rabbinic and Medieval commentators:
1. Why did the daughters do what they did?
2. According to this Midrash (from the early centuries CE), what was the concern of the daughters?
3. Do you agree with Radak's criticism of the view presented by the Midrash? Why?
4. Is the comment brought in the name of R. Joseph Kara reasonable? - It is an interesting idea, especially in light of Deut. 23:4-7…
5. Was Lot merely a victim or perhaps a bit of a villain as well? (Note: In 10 places in the Torah, dots that have nothing to do with the trop (the music and punctuation) appear over words. These dots indicate some doubts as to whether or not these letters/words should indeed have been written.)
6. Did Rashi, and the Midrash on which he based his comment, have to rely solely on the dot to come to their conclusion?
7. How did the Rabbis view the proactive behavior of Lot's daughters?
8. How is the Messiah connected to the daughters of Lot? - Remember the story of Ruth? She was a Moabite who married Boaz from Bethlehem. (See Ruth chapter 4.) Their great-grandson was David. The Messiah will be a descendant of David.
9. Which women from these nations are connected to the royal house of Israel? - In the previous Midrash we saw Ruth.
She was from Moab – the son of the Elder daughter. Here is the Ammonite queen of Judah: (Ammon/Ben Ami was the son of the younger daughter.) 10. So, how did the rabbis view the proactive approach and acts of the daughters of Lot? - Taking this one step further: Remember the prohibitions against allowing these nations to enter the community of the Israelites? How does that sit with the rabbinic writings above?
Valves are used to continue and discontinue flows, modify flow rates, reroute the direction of a flow, and regulate or relieve pressure (among similar purposes). Due to the wide variety of valve types, several different methods of classifying valves exist. It should be noted that the term hydraulic valve refers to the application, rather than the construction, of a valve: a hydraulic valve is simply any type of valve that acts on hydraulic fluid.

Hydraulic systems have existed in some form or another since the sixth century BC, when the Mesopotamians and Egyptians used water power for irrigation. Use of hydraulics was also seen in the Hellenistic age and in ancient Persia, China, Sri Lanka, and Rome. The modern age of hydraulics began in the early 1600s, with the innovations of scientists like Benedetto Castelli and Blaise Pascal. Pascal, in particular, played a pioneering role in the field of modern hydraulics. Pascal's law summarizes the basis upon which the principles of hydraulics are founded. In essence, this law states that when pressure is applied at any point of a confined liquid, that pressure is transmitted equally to all other parts of the confined liquid. Correspondingly, if pressure increases at any point in a confined liquid, equal and proportional increases will appear at all other points in the confined liquid. It is important to note that Pascal's law is made possible by the fact that liquid is incompressible. It is equally important to note that it does not apply to liquids which are not confined in some type of enclosed area. Using this principle, engineers and scientists have successfully designed systems that generate, control, and transfer power via pressurized fluids, eliminating much need for manual human effort. (Fuller explanations of Pascal's law can be found in our treatments of other hydraulic parts, including our pages on hydraulic pumps and hydraulic cylinders.)
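Pascal's law is what lets a small input force produce a large output force across pistons of different areas: since pressure is the same everywhere in the confined fluid, F_out = F_in × (A_out / A_in). A minimal sketch of that arithmetic, with purely illustrative piston sizes:

```python
import math

def piston_area(diameter_m):
    """Cross-sectional area of a circular piston, in square metres."""
    return math.pi * (diameter_m / 2) ** 2

def output_force(input_force_n, input_diameter_m, output_diameter_m):
    """Pascal's law: pressure is transmitted equally throughout a
    confined liquid, so F_out = P * A_out = F_in * (A_out / A_in)."""
    pressure_pa = input_force_n / piston_area(input_diameter_m)
    return pressure_pa * piston_area(output_diameter_m)

# A 100 N push on a 2 cm piston supports about 2500 N on a 10 cm
# piston, because the area ratio (and force multiplication) is 25.
print(output_force(100.0, 0.02, 0.10))
```

The trade-off, of course, is travel: the small piston must move 25 times as far as the large one, since the fluid volume moved is the same on both sides.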
Most hydraulic valves minimally consist of a main casing, a bonnet, a seat, and a disc. The main casing is the valve's outer enclosure; it contains all the internal components, which are collectively called the trim. Most often, the casing is made from a metallic or plastic material. Common metallic materials include steel, stainless steel, alloyed steel, cast iron, bronze, brass, and gunmetal (red brass), while among the most common plastic options are PVC, PVDF, PP, and glass-reinforced nylon. The bonnet is a semi-permanent, removable part of the valve that acts as a cover; to access the interior parts of the valve, the bonnet must be removed. Some valves do not have a bonnet because of the way they are constructed. (One example of such a valve is a plug valve.) The term seat refers to the interior surface of the casing, which connects to the disc in order to create a leak-proof seal. The seat typically possesses sealing made of either rubber or plastic. Finally, the disc (also called a valve member) is the part of the valve that slides into the seat to restrict flow and prevent leaking.

How They Work

Hydraulic valves can only be properly understood within the context of entire hydraulic systems. An entire unit that generates power hydraulically is known as a hydraulic power pack or a hydraulic power unit. Such packs or units typically consist of a reservoir, a pump, hydraulic valves, and hydraulic actuators such as motors or cylinders. The purpose of hydraulic valves within a hydraulic power pack is to connect the power source (i.e., the pump) to the actuators that translate hydraulic power into mechanical motion (i.e., hydraulic cylinders and hydraulic motors). Through its valves, a hydraulic power system can supply its actuators with hydraulic fluid and modify the flow of that fluid as needed. In operation, valves generally have at least two settings: open and closed. Generally speaking, fluid may flow freely through the valve if it is open.
Conversely, fluid flow is restricted if the valve is closed. Valves with a default status of open are known as open center valves, while valves with a default status of closed are known as closed center valves. Valves are either open or closed based on the positioning of their interior pieces; more specifically, a valve's status depends on whether or not the disc is inside the seat. Hydraulic valves (especially ones used for directional control) are often referred to as spool valves, since they visually resemble spools of thread (by containing interior trim within exterior housing). The flow of hydraulic fluid (or lack thereof) depends on the position of the interior "spool" portion of the valve within the exterior housing. The default or "neutral" position of many valves has the spool in a central position which blocks the flow of hydraulic fluid. In order to open the valve and let fluid through, the spool simply slides to one side of the housing and away from the neutral position. Nowadays, many hydraulic valves also allow for partial flow obstruction. As alluded to in the introduction, hydraulic valves can be categorized in several different ways. Some methods of categorization emphasize a valve's physical characteristics or construction. Other methods emphasize a valve's method of actuation or control. Still other categorization methods classify hydraulic valves according to their specific application or function.

Classifications by Construction

A common way to label hydraulic valves is by their number of ports. The term port simply refers to an avenue that hydraulic fluid can use to flow into or out of a valve. Standard hydraulic valves are double-port, since they possess both an inlet port (to draw in fluid from the pump) and an outlet port (to pass fluid on to the actuators). However, hydraulic valves can also be three-port, four-port, or multi-port.
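The spool behavior described above (a closed center valve blocks flow at neutral; shifting the spool to either side opens a path) can be sketched as a tiny state model. The class and function names here are illustrative, not from any real valve library:

```python
from enum import Enum

class SpoolPosition(Enum):
    NEUTRAL = "neutral"  # spool centred in the housing
    LEFT = "left"        # spool shifted away from neutral
    RIGHT = "right"      # spool shifted the other way

def flow_allowed(position, closed_center=True):
    """A closed center valve blocks flow at neutral, while an open
    center valve passes flow at neutral. Away from neutral, the spool
    uncovers a flow path either way."""
    if position is SpoolPosition.NEUTRAL:
        return not closed_center
    return True

print(flow_allowed(SpoolPosition.NEUTRAL))  # closed center: blocked
print(flow_allowed(SpoolPosition.LEFT))     # shifted: flow passes
```

A real directional spool valve also routes which work port gets pressurized in each shifted position; this sketch only captures the open/closed distinction.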
Hydraulic manifolds are another type of valve classified primarily on the basis of physical characteristics. Such mechanisms are actually separate hydraulic valves that are connected to one another within hydraulic systems. Hydraulic cartridge valves (also known as slip-in valves, logic valves, or 2/2-way valves) are some of the more popular valves which derive their classification from their configuration. These valves are screwed into a threaded cavity and are typically composed of only a sleeve, a cone or poppet, and a spring. They open when incoming fluid pushes the holding cone or poppet (held in place by the spring) aside. The ease of installing cartridge valves makes them very popular in the hydraulic world. Overall, hydraulic valves vary widely in physical shape and size. They can range in size from less than an inch to a foot long; on average, they can fit in the palm of a hand. The broad physical variety that characterizes hydraulic valves directly affects their differing uses.

Classifications by Actuation/Control

It is important to note that valves can only function properly with some type of valve actuator. While not strictly a part of the valve itself, valve actuators are important since they are responsible for actually moving the machinery within a valve to change its status. Valve actuators can be either manual or automatic. An example of a manually operated valve is the hydraulic ball valve. This valve derives its name from a spherical internal disk containing a hole, and is activated with a handle that can be quickly rotated 90° between opened and closed positions. When the valve is open, the hole in the ball disk lines up with the direction of fluid flow and allows fluid to pass through. When the valve is closed, the hole is not lined up with the fluid flow, thus blocking the flow of fluid. Valve balls are perforated and most often made of nickel, brass, stainless steel, or titanium.
(Sometimes, they are composed of a plastic like PVC, PP, ABS, or PVDF.) Manually operated hydraulic valves typically require high amounts of force in order to successfully stop high-pressure flows of hydraulic fluid. Thus, many manual hydraulic valves other than ball valves are operated by oversized wheels, levers, and even hydraulic rams. Other hydraulic valves are electrically operated and/or guided remotely with computer controls. Hydraulic solenoid valves are an excellent example of such valves. They open and close based on a magnetic field that pushes on a plunger. The magnetic field is created by a current received by a wire coil; the solenoid converts this electrical energy into mechanical energy. Other types of electronically or remotely controlled hydraulic valves can be found in places such as construction sites, where they are critical to the operation of many hydraulically powered construction machines.

Classifications by Application or Function

Overall, mechanical valves are often classified by the exact function they are designed to exert on a fluid (e.g., completely cutting off a flow, preventing backflow, etc.). Since hydraulic valves are essentially general types of valves expressly applied in hydraulic scenarios, hydraulic valves are often also classified according to their exact regulatory function. Control valves are valves specifically designed to control or modify the amount and speed of a fluid flow. These types of valves are particularly capable of occupying a spectrum of positions between fully open and fully closed. They are sometimes further classified as pressure control valves and flow control valves. (Control valves contrast with simple on/off valves or shutoff valves, which are designed to completely stop or start a fluid flow rather than simply modifying it.)
Directional control valves (or simply directional valves) may arguably be the most "basic" type of mechanical valve, since their purpose is to control or modify the direction (rather than the amount) of fluid flow. Many standard hydraulic "spool" valves are used expressly for directional control and occupy a few discrete positions. Check valves (or non-return valves) are specific types of directional control valves that are used to force fluid flow in one direction only; if fluid within a hydraulic system somehow begins flowing in an undesirable direction, the check valve will close and block the flow. Check valves are critical to hydraulic systems in environments where substances of varying compositions and pressures must be kept separated (such as wastewater management plants). Proportional valves can be considered extensions of directional control valves. In addition to modifying flow direction, these types of valves can occupy intermediate positions and carry an output flow that is unequal to the input flow. In other words, proportional valves are designed to control the speed as well as the direction of fluid flow. (From this perspective, they can also be considered extensions of control valves, which are designed to control the speed and amount of fluid flow.) Pressure relief valves (or simply relief valves) are primarily designed to keep hydraulic systems from over-pressurizing. Whereas check valves close when undesirable conditions are met, these valves open in order to draw hydraulic fluid back into the reservoir when internal pressure has exceeded a certain point (e.g., due to a blocked pipe in the system). It should be noted that the differing functions accomplished by the aforementioned valves can also be performed by other, more specific types of valves. For example, hydraulic cartridge valves are often used for directional or check control as well as pressure or flow control.
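The contrasting logic of check valves (close to block reverse flow) and relief valves (open above a threshold) can be captured in a few lines. This is a toy sketch of the decision rules, not a physical flow model; all numbers are illustrative:

```python
def check_valve_flow(upstream_psi, downstream_psi):
    """A check valve passes flow only when upstream pressure exceeds
    downstream pressure; any reverse pressure difference is blocked."""
    dp = upstream_psi - downstream_psi
    return dp if dp > 0 else 0.0

def relief_valve_open(system_psi, cracking_psi):
    """A relief valve opens once system pressure exceeds its cracking
    pressure, diverting fluid back to the reservoir."""
    return system_psi > cracking_psi

print(check_valve_flow(1500, 1200))   # forward difference: flow passes
print(check_valve_flow(1200, 1500))   # reverse difference: blocked
print(relief_valve_open(3200, 3000))  # over-pressure: dump to reservoir
```

Note the symmetry: the check valve's "normal" state is open to forward flow, while the relief valve's normal state is closed, opening only as a safety path.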
Beyond these few examples, there are many other unique valve types with individual functions. Hydraulic needle valves, for instance, are composed of small ports and threaded plungers. Their unique shape allows them to regulate flow in extremely tight spaces. Other components associated with hydraulic valves include springs, gaskets, and stems. Valves that include springs do so in order to shift the disc and control repositioning. Common spring materials include stainless steel, zinc-plated steel and, for work with exceptionally high temperatures, Inconel X750. Gaskets are mechanical seals, usually made from an elastomer. Their purpose is to prevent leakage of fluids from the valve or between separate areas of the valve. The term metal face seal refers to a gasket that is located between two fittings in a sandwich-like arrangement. Stems are not always present, because they are often combined with the disc or handle. However, when present, they transmit motion from the controlling device, like the handle, through the bonnet and to the disc. Hydraulic valves can be connected to hydraulic systems with a variety of different mechanisms. Some of these mechanisms include flanges (bolt or clamp), welds (butt or socket), union connections, and fittings (tube or compression). The value of hydraulic valves to the industrial world is inextricably bound up with the value of hydraulic systems as a whole. Overall, hydraulic power systems offer energy sources that are simpler and safer than other types (such as electrical power systems) while still being incredibly effective. Hydraulic valves are thus valued and widespread because they enable effective movement of hydraulic fluid, which forms the "lifeblood" of hydraulic power systems.
Hydraulic valves make flow control possible for many, many applications, including those in the aerospace, automotive, chemical and laboratory, construction, cryogenic, fire and heating services, food processing, fuel and oil, gas and air, irrigation, medical, military, process control, refrigeration, and wastewater industries. Since hydraulic valves vary so widely, it can be difficult to determine the correct valve for a specific application. The points below offer a brief sketch of various factors to keep in mind during the determination process.
• What type of flow coefficient is best for this application? A hydraulic valve's flow coefficient is a combined measure that indicates the amount of energy lost by fluid as it flows through or across the valve. Different valves possess differing coefficients, and similar valves can diverge in flow coefficients if their diameters (often measured in inches) are different. Generally speaking, a higher flow coefficient indicates a lower pressure drop across the valve (if the flow rate remains the same). Determining the proper flow coefficient is one of the best methods of determining the proper valve to use in a given scenario. For example, in a scenario where the valve will normally be open rather than closed, a valve with a low head loss (one of the combined measures that makes up a flow coefficient) is best for conserving energy. Hydraulic cartridge valves are popular in scenarios where energy needs to be conserved, since they cause far less energy and/or pressure loss than other types of valves.
• What is the maximum temperature you will reach in a given hydraulic scenario? Different types of hydraulic valves are designed to handle different maximum temperatures. You will want to investigate the maximum temperatures reached during your hydraulic operations and select hydraulic valves accordingly.
• How rigorous will my hydraulic application be?
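The link between flow coefficient and pressure drop can be made concrete with the standard liquid-sizing relation Q = Cv·√(ΔP/SG), where Q is flow in US gallons per minute, ΔP is the pressure drop in psi, and SG is the fluid's specific gravity. A minimal sketch, with illustrative numbers:

```python
import math

def flow_rate_gpm(cv, delta_p_psi, specific_gravity=1.0):
    """Standard liquid-sizing relation: Q = Cv * sqrt(dP / SG)."""
    return cv * math.sqrt(delta_p_psi / specific_gravity)

def pressure_drop_psi(cv, flow_gpm, specific_gravity=1.0):
    """Rearranged: dP = SG * (Q / Cv)^2. At a fixed flow rate,
    doubling Cv cuts the pressure drop by a factor of four."""
    return specific_gravity * (flow_gpm / cv) ** 2

# A valve with Cv = 10 passing water (SG = 1) at a 4 psi drop:
print(flow_rate_gpm(10, 4))       # 20 gpm through the valve
print(pressure_drop_psi(10, 20))  # 4 psi drop at that same flow
```

This is why a higher flow coefficient means a lower pressure drop at the same flow rate, and why a normally-open valve benefits from a high Cv (low head loss).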
Some valves are better than others for high-intensity hydraulic situations. For example, valve balls are a common feature in hydraulic valves that are made for high pressure, high tolerance, and/or severe duty applications. • How many ports or directional stages does my hydraulic scenario require? Although a standard double-port hydraulic valve will often work, there may be times when a multi-port hydraulic valve is preferable. Not all of the decisions that go into hydraulic valve selection need to be made alone. Investing in a quality hydraulic parts manufacturer or supplier is well worth the cost the majority of the time. Some characteristics of hydraulic parts suppliers to look for include: • Accreditation. A company that possesses an accreditation such as ISO 9001 is a good bet. • Adherence to hydraulic industry standards. This characteristic will be closely tied to a company's level of accreditation. Some specific standards to look for include ISO 6403 (flow and pressure valves), ISO 6263 (hydraulic fluid power and mounting surfaces), SAE J748 (hydraulic directional control valves), and SAE J1235 (standards for reporting hydraulic valve leakage). • Depth of experience/expertise. Sometimes, a supplier may regularly offer only a small selection of hydraulic valves. However, a supplier's level of industry expertise may be able to offset this limitation and provide you with customized hydraulic valves as needed. • Turnaround time. All types of industrial breakdowns are undesirable, but the failure of power systems (hydraulic and otherwise) is particularly undesirable. If your hydraulic power system fails due to valve trouble, you want to be sure you are working with a supplier that can advise on and provide needed replacements in record time.
Chladni figures are a visual manifestation of sound vibrations. On the basis of his studies of sounding bodies Chladni has been called the father of acoustics. In 1787 he investigated the vibrations of flat plates and the patterns produced by certain sounds. He scattered fine sand evenly over a horizontal glass plate, clamped at one end, and set it in vibration with a violin bow; symmetrical patterns were formed where the sand gathered. That is, the lines in the figures represented the parts of the plate that vibrated least: sand collected in the areas that were relatively still. Chladni's interest in the figures arose from hearing different sounds produced by vibrating plates: "I had observed that a sheet of glass or metal gave very different tones if it was struck when held in different positions". He then related these different tones to the patterns of vibration of the plates, which could be shown by the configurations in the sand. The sounds and patterns varied with the points at which the plate was held. Initially round plates were used, and he experimented with a variety of shapes, such as triangular, rectangular, hexagonal, semicircular, and elliptical plates. However, bowing square plates became the standard procedure. One of the features of the Chladni figures is that they could be analysed quantitatively. Chladni is shown with an array of figures formed by vibrating a flat, sand-strewn plate.
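Chladni-style patterns can also be approximated numerically. The sketch below uses the simplified superposition sin(mπx)·sin(nπy) ± sin(nπx)·sin(mπy) on a square plate, a common classroom stand-in for the true plate modes (which obey the biharmonic equation rather than the membrane equation); sand gathers where the computed displacement is near zero:

```python
import numpy as np

def chladni_displacement(m, n, size=200, sign=1.0):
    """Standing-wave displacement over the unit square for mode pair (m, n)."""
    x = np.linspace(0.0, 1.0, size)
    xx, yy = np.meshgrid(x, x)
    return (np.sin(m * np.pi * xx) * np.sin(n * np.pi * yy)
            + sign * np.sin(n * np.pi * xx) * np.sin(m * np.pi * yy))

def nodal_mask(m, n, tol=0.02):
    """Boolean grid marking the quiet regions where sand would gather."""
    return np.abs(chladni_displacement(m, n)) < tol
```

Plotting `nodal_mask(1, 3)` (for example with matplotlib's `imshow`) produces a symmetric nodal pattern; changing the mode pair (m, n) changes the pattern, just as changing where the plate is held and bowed changed both the tone and the figure in Chladni's experiments.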
The correct use of who and whom in questions and statements may seem like a lost battle, still fought only by punctilious English teachers. However, correct usage remains important in formal situations and particularly so in formal writing. After reading this article, you will feel more comfortable using "who" and "whom" correctly, which will make you seem more educated and your speech more polished. Using Who and Whom Correctly 1Understand the difference between who and whom. Both who and whom are relative pronouns. However, "who" is used as the subject of a sentence or clause, to denote who is doing something (like he or she). On the other hand, "whom" is used as a direct or indirect object of a verb or preposition. - While a preposition (at, by, for, in, with, etc.) often comes before "whom," this is not always the case, so the key question is to ask, "Who is doing what to whom?" 2Use who when referring to the subject of a sentence or clause. - Who made dinner? - Who is going to the store? What follows is a quick way to determine which pronoun to use in a particular question. 3Use whom when referring to the object of a verb or preposition. - To whom it may concern: - To whom did you talk today? - Whom does Sarah love? 4Ask yourself if the answer to the question would be he or him. If you can answer the question with him, then use whom. It's easy to remember because they both end with "m". If you can answer the question with he, then use who. - Example: A suitable answer to the question, "To [who or whom] did the prize go?" is, "It went to him." (It is improper to say "It went to he.") The correct pronoun for the question is whom. - Example: A suitable answer to the question, "[Who or Whom] went to the store?" is, "He went to the store." (It is improper to say "Him went to the store.") The correct pronoun for the question is who. 5When trying to decide whether who or whom is correct, simplify the sentence. 
Where other words in a complex sentence might throw you off track, simplify the sentence to include just the basic subject, verb, and object. It helps to move the words around in your head to identify the word relationships. For example: - "Marie Antoinette and her ladies-in-waiting only invited people to their party [who or whom] they considered to love parties as much as they did." The simplified mental version becomes: "whom they considered." - "Marie Antoinette prevented her mother from knowing [who or whom] she invited to the Petit Trianon." The simplified mental version becomes: "[who or whom] she invited." Then, you could rearrange it again to say: "she invited whom", clarifying that she did something to (invited) whom. 6The distinction between who and whom is less important in informal spoken language than it is in formal written language. It's possible that the distinction might someday erode altogether. For now, though, it is important to keep this clear in written language. - Ask yourself "who did what to whom?" - Learning "who" and "whom" can help with grammar and with understanding different languages. It is also good to know this if you want to speak fluent English and write correct sentences. - It is possible to write around problems involving who and whom, but the result is almost always clumsy. If you write "To which person did the prize go?" because you can't remember that whom is the correct pronoun for such a question, you will have avoided a grammatical error at the expense of elegance. - Here's a useful mnemonic for remembering about objects and subjects: if you say "I love you," then "you" is the object of your affection and the object of the sentence, while "I" is the subject. "[Whom or Who] do I love?" becomes "Whom do I love?" because the answer, "you", is an object. - Learning another language can help greatly. In most languages, using "who" in the place of "whom" can cause great confusion. 
Great examples of this are German and Spanish. - The CCAE (Canadian Council for the Advancement of Education) suggests always using "who" to start a sentence. - When "who" or "whom" appears in a clause, the choice depends on whether the pronoun is serving as the subject or an object in the clause, regardless of whether the clause itself is functioning as the subject or an object in the full sentence. - There is much confusion and misuse on this topic. Just as correctly using whom may make others think that you are intelligent, misusing it may make you seem pompous. Never use whom as a subject pronoun. This is as incorrect as using who where whom is required. Many people will mistakenly believe that you are trying to be formal. - "Whom are you?" is wrong. It should be "Who are you?" - "John is the man whom I expect to be awarded the prize." is wrong. It should be "John is the man who I expect to be awarded the prize."
Often adults end up playing too big a role in children's self-regulation. They act as the children's frontal lobes, unknowingly regulating a child by prompting him and staying close to him. One study found that education assistants spend 86% of each day within three feet of their assigned students (1) – hardly helpful for developing self-regulation in children. The 'self' part of self-regulation happens after children become aware that they can control their bodies, thinking and emotions, learn skills and strategies for doing so, and have opportunities to practice them. This means teaching children step by step and then removing yourself so they can make their own decisions. That's definitely easier said than done. It comes after careful teaching and practicing. We believe strongly that you don't 'just throw children into water and hope they can swim'. We need to work on helping each child through four main steps: - Ability – "I Can Do It" – children learn they're able to use the strategies - Need – "I Need To Do It Here and Here" – children are helped to figure out when and where they should use the strategies - Resilience – "I Can Do It Even When …" – we need to help build their resilience so they can cope in challenging situations and still use their strategies - Self-advocacy – "I Can Help Myself By …" – we need to teach children to advocate for themselves so, if something becomes too challenging, they have ways to help themselves (other than melting down). By working on executive functions, we help bring each child's knowledge and intentions into action. The child becomes the master of his own frontal lobes and executive functions.
Unsure how to talk with your kids or whether to talk with them at all about the subject of the shooting in Las Vegas? Here are some ideas on how to approach this tough subject with your children from the National Childhood Traumatic Stress Network (NCTSN.org): - Start the conversation. Talk about the shooting with your child. Not talking about it can make the event even more threatening in your child’s mind. Silence suggests that what has occurred is too horrible even to speak about or that you do not know what has happened. With social media (e.g., Facebook, Twitter, text messages, newsbreaks on favorite radio and TV stations, and others), it is highly unlikely that children and teenagers have not heard about this. Chances are your child has heard about it, too. - What does your child already know? Start by asking what your child/teen already has heard about the events from the media and from friends. Listen carefully; try to figure out what he or she knows or believes. As your child explains, listen for misinformation, misconceptions, and underlying fears or concerns. Understand that this information will change as more facts about the shooting are known. - Gently correct inaccurate information. If your child/teen has inaccurate information or misconceptions, take time to provide the correct information in simple, clear, age appropriate language. - Encourage your child to ask questions, and answer those questions directly. Your child/teen may have some difficult questions about the incident. For example, she may ask if it is possible that it could happen at your workplace; she is probably really asking whether it is “likely.” The concern about re-occurrence will be an issue for caregivers and children/teens alike. While it is important to discuss the likelihood of this risk, she is also asking if she is safe. This may be a time to review plans your family has for keeping safe in the event of any crisis situation. 
Do give any information you have on the help and support the victims and their families are receiving. Like adults, children/teens are better able to cope with a difficult situation when they have the facts about it. Having question-and-answer talks gives your child ongoing support as he or she begins to cope with the range of emotions stirred up by this tragedy. - Limit media exposure. Limit your child’s exposure to media images and sounds of the shooting, and do not allow your very young children to see or hear any TV/radio shooting related messages. Even if they appear to be engrossed in play, children often are aware of what you are watching on TV or listening to on the radio. What may not be upsetting to an adult may be very upsetting and confusing for a child. Limit your own exposure as well. Adults may become more distressed with nonstop exposure to media coverage of this shooting. - Common reactions. Children/Teens may have reactions to this tragedy. In the immediate aftermath of the shooting, they may have more problems paying attention and concentrating. They may become more irritable or defiant. Children and even teens may have trouble separating from caregivers, wanting to stay at home or close by them. It’s common for young people to feel anxious about what has happened, what may happen in the future, and how it will impact their lives. Children/Teens may think about this event, even when they try not to. Their sleep and appetite routines may change. In general, you should see these reactions lessen within a few weeks. - Be a positive role model. Consider sharing your feelings about the events with your child/teen, but at a level they can understand. You may express sadness and empathy for the victims and their families. You may share some worry, but it is important to also share ideas for coping with difficult situations like this tragedy. 
When you speak of the quick response by law enforcement and medical personnel to help the victims (and the heroic or generous efforts of ordinary citizens), you help your child/teen see that there can be good, even in the midst of such a horrific event. - Be patient. In times of stress, children/teens may have trouble with their behavior, concentration, and attention. While they may not openly ask for your guidance or support, they will want it. Adolescents who are seeking increased independence may have difficulty expressing their needs. Both children and teens will need a little extra patience, care, and love. (Be patient with yourself, too!) - Extra help. Should reactions continue or at any point interfere with your children's/teens' abilities to function, or if you are worried, contact local mental health professionals who have expertise in trauma. Contact your family physician, pediatrician, or state mental health associations for referrals to such experts.
An acoustical mesh network is a decentralized communication system that transmits data by using sound to connect computers. This relatively unknown network type was first used to enable robust underwater communication. More recently, the method has been used in proof-of-concept (POC) trials of acoustical infection as a means of communication among air-gapped computers. Air gapping is an extreme measure used to secure extremely sensitive data against exfiltration. There are both legitimate and unscrupulous uses for acoustical mesh networks. Hackers can modify software designed for underwater networking to enable covert, air-based communication with infected systems. Due to their limited bandwidth (about 20 bits/s), these networks commonly use a partial mesh topology, in which communication is conducted through the fastest points rather than by connecting all endpoints. The networks often use ultrasonic sound outside or at the edge of the range of human hearing, which means that malware can stealthily send data without an Internet connection. All elements of such an attack have been demonstrated in POC trials in the real world, strongly suggesting that malware can spread and communicate through acoustic channels. BadBIOS, a stealth infection discovered by Dragos Ruiu, is one of the first malware strains suspected of creating an acoustical mesh network. Ruiu, a security researcher, reported that the malware infected the firmware of the machine, communicating with and compromising other nearby computers through sound.
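As a rough sketch of how such a low-bandwidth acoustic channel could operate, the following models a near-ultrasonic frequency-shift-keying link at the 20 bits/s figure mentioned above; the carrier frequencies and sample rate are illustrative assumptions, not parameters of any documented attack:

```python
import numpy as np

SAMPLE_RATE = 48_000   # Hz; a typical sound-card rate (assumed)
FREQ_ZERO = 18_000     # Hz; near-ultrasonic tone for bit 0 (assumed)
FREQ_ONE = 19_000      # Hz; tone for bit 1 (assumed)
BIT_DURATION = 0.05    # seconds per bit -> 20 bits/s, as cited in the text

def modulate(bits):
    """Encode a bit sequence as a near-ultrasonic FSK waveform."""
    t = np.arange(int(SAMPLE_RATE * BIT_DURATION)) / SAMPLE_RATE
    return np.concatenate(
        [np.sin(2 * np.pi * (FREQ_ONE if b else FREQ_ZERO) * t) for b in bits])

def demodulate(signal, n_bits):
    """Recover bits by finding the dominant tone in each symbol window."""
    window = int(SAMPLE_RATE * BIT_DURATION)
    freqs = np.fft.rfftfreq(window, 1 / SAMPLE_RATE)
    bits = []
    for i in range(n_bits):
        spectrum = np.abs(np.fft.rfft(signal[i * window:(i + 1) * window]))
        peak = freqs[np.argmax(spectrum)]
        bits.append(1 if abs(peak - FREQ_ONE) < abs(peak - FREQ_ZERO) else 0)
    return bits

payload = [1, 0, 1, 1, 0, 0, 1, 0]
recovered = demodulate(modulate(payload), len(payload))
```

A real covert channel would add framing, error correction, and mesh routing on top of a physical layer like this; the point here is only that ordinary speakers and microphones suffice to move bits through the air at roughly this rate.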
James N. Warrington Associate, American Society of Naval Engineers Presented at the Annual Meeting, 22 December 1893 Reprinted in the Naval Engineers Journal, July 1983 Since the investigations of the elder Froude, friction and wave making have been fully recognized as the leading elements in the resistance of ships. The resistance due to eddies is known to be quite inconsiderable in modern ships, while the variations in resistance which accompany change in trim, bodily rise or subsidence of the vessel, and wave interference all occur at such speeds as produce waves of great magnitude relative to the displacement. Within the limit of speed at which these phenomena appear, friction and wave making may be regarded as supplying substantially the entire resistance, and it is here essayed to formulate the propulsive power absorbed by these two elements of resistance with such a degree of approximation as may be attained by attributing the properties of a trochoidal wave to the waves of a ship. The horse-power of the frictional resistance may be expressed by the formula HP_f = K·S·V^2.83 (1), in which S represents the area of the wetted surface, V the speed in knots, and K a constant to be derived from the coefficient of friction. For a clean painted surface exceeding fifty feet in length Froude's experiments show a resistance of 0.25 lb. per square foot at a speed of ten feet per second, and 1.83 as the power of the speed to which the resistance is proportional. From these quantities are derived the values of the exponent and of the constant K, the speed being expressed in knots of 6,080 feet and the wetted surface in square feet. USS Cincinnati, New York, NY, June 1896. 
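Using only the figures quoted above (0.25 lb per square foot at ten feet per second, the 1.83 exponent, a knot of 6,080 feet, and 550 ft-lb/s per horse-power), the frictional horse-power can be sketched as follows; the function and variable names are modern conveniences, not the paper's notation:

```python
# Froude's friction data as quoted in the text.
F_COEFF = 0.25 / 10**1.83    # lb per sq. ft. at 1 ft/s, from 0.25 lb at 10 ft/s
KNOT_FPS = 6080 / 3600       # feet per second in one knot of 6,080 feet

def frictional_hp(wetted_surface_sqft, speed_knots):
    """HP absorbed by surface friction: resistance ~ S * v^1.83, power = R * v."""
    v = speed_knots * KNOT_FPS
    resistance_lb = F_COEFF * wetted_surface_sqft * v**1.83
    return resistance_lb * v / 550.0
```

Since power is resistance times speed, the horse-power grows as the 2.83 power of the speed, which is the exponent appearing in the formula above.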
Considering the wave generated by the ship in motion, it may be observed that similar ships produce similar waves, and that for a given ship the ratio of the magnitude of any series of waves to the magnitude of any other series, as for example of the bow waves to the stern waves, will be constant at all speeds so long as their generation is free from mutual interference. Moreover the waves of any series will retain their similarity at all speeds, for the orbital velocity as well as the speed of advance of the wave form must both be proportional to the speed of the ship. The aggregate energy of all the waves of the ship may therefore be conceived to be embodied in a single series of representative waves having a speed of advance equal to the speed of the ship. As the particles of water in the longitudinal stream lines approach the ship they meet the component stream lines diverging from the apex of the bow wave and follow the resultant stream lines around the ship. The velocity in the diverging stream lines must clearly be sufficient to impart the necessary transverse displacement during the time occupied in the passage from the bow to the midship section. The diverging stream-line velocity will then be proportional to V√A/L, in which V is the speed of the ship, A is the area of the midship section, and L the length of the combined entrance and run. But this diverging stream-line velocity is closely proportional to, if not identical with, the orbital velocity of the bow waves. Hence these orbital velocities will also be proportional to V√A/L, and since the waves of each series grow in like proportion with increase of speed, the orbital velocity of the representative wave also will be proportional to the same ratio. Assuming now that the waves of the ship possess the properties of trochoidal waves, it follows that the height of the representative wave will be proportional to λu/c, in which λ represents the wave length, u the orbital velocity, and c the speed of advance. 
Furthermore, the breadth of the wave will be proportional to its height, for as the height increases the orbital velocity increases throughout the entire breadth of the wave, and imparts its motion to more remote particles previously at rest. The horse-power of a series of trochoidal waves, having a length λ, a height h, and a breadth b, is thus expressed by Mr. Albert W. Stahl, U.S. Navy, as proportional to b·h²·c·(1 − π²h²/2λ²) (2). The ratio h/λ is constant for similar ships and for all speeds of a given ship, and is proportional to √A/L. Since with such a range in this latter ratio as may be found between a torpedo boat and an armored cruiser, the quantity in parenthesis would vary by a fraction of one percent, it may reasonably be neglected. The wave horse-power then becomes proportional to b·h²·c. But λ is proportional to V², and h to V²√A/L, hence the horse-power absorbed in wave making may be expressed by the formula HP_w = C·A^(3/2)·V⁷/L³ (3), in which C is a constant to be derived from the performance of a model ship. USS Montgomery, Tompkinsville (Staten Island), NY, June 1896 The imaginary representative wave is conceived as having a speed of advance equal to the speed of the ship, and in this respect is plainly in harmony with the transverse series. The speed of advance of the diverging waves is dependent not only upon the speed of the ship but also upon the angle of divergence of their crest lines, and this angle of divergence is dependent upon the bluntness of the ship, or in other words upon the ratio √A/L, so long as the mode of distribution of the displacement remains the same. The length of the diverging waves will then vary with this ratio, and the effect will be that in the blunter ship the diverging series will absorb a greater amount of energy relative to the transverse series than in ships of finer form. 
This relative increase in energy in the diverging series must, however, be accompanied by a corresponding diminution of the energy in the transverse series, for the total energy in both series is derived from the diverging component stream lines, the velocity of which is proportional to V√A/L, and this total energy thus derived will be independent of the relative importance of the two waves. The condition necessary to the preceding statement is that the model ship, whence the constant C is derived, and the projected ship shall be similar in cross section and identical in mode of distribution of the displacement. When this condition is fulfilled, therefore, the representative wave becomes a true measure of all the waves, subject to the approximation involved in its trochoidal nature and in the modification of expression (2). Concerning the limit of speed in the application of the formula, it should be observed that each wave-making feature is assumed to generate its appropriate wave without interference; hence the beginning of each interference marks the limit of its applicability. At corresponding speeds the power absorbed in wave making becomes proportional to the square root of the seventh power of the scale of comparison, as should be expected. The Wake Current The sum of the frictional and wave-making horse-powers is the effective horse-power required to tow the ship. When propulsion is effected by a screw propeller at the stern, other varying elements are introduced. The following current, by delivering momentum to the propeller, restores a portion of the energy already expended in overcoming the frictional and wave-making resistances. Let V represent the speed of the ship, v the mean speed of the following current, and d the diameter of the propeller. 
Then the energy imparted to the propeller per unit of time by the following current will be proportional to d²·V·v² (4); but, assuming a constant percentage of real slip, d² is proportional to the power divided by V³, which being substituted in (4) reduces the increment of energy to the form (5). The following current has three components of independent origin, viz: the frictional wake, the orbital velocity of the stern wave, and the orbital velocity of the recurring transverse bow wave. It has been shown that when the waves of the bow series interfere with the formation of the stern wave the limit to the application of this analysis has been reached. Therefore the effect of the bow wave need not be considered. The speed of the frictional wake will here be regarded as proportional to the speed of the ship; hence the energy derived from it, being a constant proportion of the total power, need not be considered here. Still attributing the properties of the trochoidal wave to the waves of the ship, the orbital velocity of the stern wave at the center of the propeller will be proportional to (V²√A/L)·e^(−2πz/ql) (6), in which l is the length of the after body, e the base of the hyperbolic system of logarithms, z the depth of immersion of the axis of the propeller, and q a constant such that q·l equals the wave length. Substituting (6) for v in expression (5), the increment of energy becomes expression (7). The value of the constant q as derived from the results of the experiments of Mr. George A. Calvert should be regarded as provisional, since, like C, it depends upon the mode of distribution of the displacement. Let E represent the total effective horse-power required at the propeller. Then E equals the sum of expressions (1) and (3), less the wake increment (7), in which C and q are constants depending upon the form of the ship, q having a provisional value of 1.66. 
Analysis of Power If from the indicated horse-power the power required to overcome the constant resistance of the unloaded engine is deducted, the result will be the net horse-power applied to propulsion; the ratio of the effective horse-power E to this net horse-power will represent the combined efficiency of the mechanism of the engine, so far as the variable friction is involved, and of the propeller, including the effect of the frictional wake and of the augmentation of resistance by the propeller. These latter two effects are thus assumed proportional to the power applied. The ratio may then be expected to remain constant so long as the propeller efficiency remains constant. The results of the progressive trial of the U.S.S. Bancroft given by Passed Assistant Engineer Robert S. Griffin, U.S. Navy, suffice for determination of the value of the constant C of the vessel. The Bancroft is a twin-screw practice ship of the following dimensions: length of immersed body ft.; displacement on trial tons; area of immersed midship section on trial sq. ft.; wetted surface on trial sq. ft. Having calculated the power absorbed by surface friction on the basis of Froude's coefficient and exponent, and also the value of the factor involving the following current, the constant C is given such a value as will maintain the ratio approximately constant. The value of C thus found for the Bancroft is 329. The resistance of the unloaded engine is assumed at 2 lbs. per square inch of the area of the low-pressure pistons. Using these values, the power at the several speeds has been calculated and is given in Table I, where the observed powers are also given to show the degree of approximation. In order to illustrate the scope of the formula, the ratio is given in Table II together with explanatory data for each of seven vessels, five of them being twin-screw ships of the U.S. Navy, and the two others the S.S. Umbria of the Cunard line and the torpedo boat Sunderland, built by Messrs. 
William Doxford and Sons, Sunderland, England. With the exception of the Umbria, these vessels may be regarded as approximations to the form of least resistance in smooth water. The Umbria has a quasi-parallel middle body, the length of which has been approximated by ascribing to the combined entrance and run a prismatic coefficient of fineness of 0.56. Table III contains the results of the progressive trial of the Umbria together with the calculated power. The same comparison is shown in Figure I. In making these calculations, the constants of the Bancroft have been employed. It should be stated that the application of this value of C to all of these vessels is not strictly warranted, for they are undoubtedly dissimilar in cross section and also in mode of distribution of displacement. Nevertheless it is assumed that they possess a sufficient degree of similarity to serve the purpose of illustration. The resistance of the unloaded engine is based on two pounds per square inch of the area of the low-pressure pistons in all cases. The length L is the combined length of entrance and run, being the length of the immersed body in all cases except the Umbria. The ratio is given as an index to the character of the ship from a propulsive point of view. If all these vessels possessed the required degree of similarity, and had propellers and engines of equal mechanical efficiency, the ratio should be the same for all, provided the speeds were within the limit indicated. Limit of Speed Since the formula gives only the power absorbed in surface friction and in waves generated without interference, it is obvious that it may be used as a measure of the total power only when these elements constitute the entire resistance. The earliest departure from the condition of applicability may be expected to accompany an interference by the recurring transverse bow wave in the generation of the stern wave. 
Since the relative magnitude of the bow and stern waves is dependent upon the form of the ship, it is clearly impossible to formulate generally the limit at which such an interference will occur. The limit must therefore be found from the performance of the model ship whence the constants in any case are derived. The U.S.S. Bancroft, within a speed of 14.52 knots, gives no evidence of the limit, as may be seen by reference to Table I. The S.S. Umbria gives no indication of the limit within the speed of 20 knots, as may be seen by Table III or Fig. I. On the other hand, the progressive trial of the torpedo boat Sunderland affords a convenient illustration of the inapplicability of the formula. Figure 2 shows the power curves, both actual and calculated. At the speed of 14 knots the value of the ratio is such as to warrant the conclusion that at this speed the formula gives the power with sufficient accuracy. A glance at the curves, however, shows that this cannot be said of any higher speed until 20 knots is approached. Fourteen knots may then be regarded as the limit of speed at which the formula may be applied. Beyond this speed the familiar hump appears. It may be of interest to note that the curves intersect again in the region of 20 knots. It is not intended to question the existence of minor humps at speeds below the limits here noticed, but simply to observe that in the published curves of the Bancroft and Umbria they are not appreciable. It is possible that such humps may be partially neutralized by the action of a simultaneous increase in the wake current upon the propeller. In any event, their existence must affect the accuracy of the calculated power, although commonly to such a slight extent as to be inappreciable. 
Since the magnitude of the waves of the ship varies with the ratio √A/L, it follows that both the size of the hump and the speed at which it appears will be influenced by this ratio. The method of this analysis may be summarized thus: The resistance of ships is so largely due to surface friction and wave making that all other forms may be neglected except at very high speeds. Within the limit of speed at which interference occurs in the generation of waves, formula (8) enables the power absorbed by these two principal elements to be computed, provided the constants C and q have been experimentally determined by means of a model ship having the same mode of distribution of displacement as the projected ship. The ratio of the power thus computed to the net power of the engine is constant so long as the propeller efficiency is constant. Finally, the indicated horse power may be obtained by adding to the net power the power absorbed in the constant resistance of the engine.
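The summarized method can be sketched in code, assuming a friction term varying as S·V^2.83 and a wave term varying as A^(3/2)·V^7/L^3 (the form consistent with the square-root-of-the-seventh-power scaling stated above); K and C below are placeholder constants, not the paper's values (C was found to be 329 for the Bancroft, in the paper's units):

```python
def wave_hp(A, L, V, C=1.0):
    """Wave-making term: C * A^(3/2) * V^7 / L^3 (C is model-derived)."""
    return C * A**1.5 * V**7 / L**3

def effective_hp(S, A, L, V, K=0.004, C=1.0):
    """Friction term plus wave term; K and C are placeholder constants."""
    return K * S * V**2.83 + wave_hp(A, L, V, C)

# Corresponding speeds: scale linear dimensions by s, areas by s^2,
# and speed by sqrt(s); the wave term then scales by s^3.5.
s = 4.0
base = wave_hp(A=100.0, L=300.0, V=15.0)
scaled = wave_hp(A=100.0 * s**2, L=300.0 * s, V=15.0 * s**0.5)
```

The check reproduces the law of comparison the paper appeals to: at corresponding speeds the wave-making power grows as the square root of the seventh power of the scale.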
the first main battle of the American Revolutionary War, fought in Boston, Massachusetts in 1775. Although the British army won the battle, the American colonists killed or wounded more than 1,000 British soldiers, and proved that their army was more powerful and effective than the British had expected. Definition from the Longman Dictionary of Contemporary English Advanced Learner's Dictionary.
Modular multiplication is an essential operation in ECC. Two main approaches may be employed. The first is interleaved modular multiplication using Montgomery's method; Montgomery multiplication is widely used in implementations where arbitrary curves are desired. The other approach, multiply-then-reduce, is used in elliptic curves built over finite fields of Mersenne primes. Mersenne primes are a special type of prime that allows efficient modular reduction through a series of additions and subtractions. To optimize the multiplication process, some ECC processors use the divide-and-conquer approach of Karatsuba–Ofman multiplication, while others use embedded multipliers and DSP blocks within FPGA fabrics. Since modular division in affine coordinates is a costly process, numerous coordinate representation systems have been proposed to compensate for this cost by means of extra multiplications and additions (e.g., Jacobian coordinates). Conversion back to affine representation can be performed using Fermat's little theorem, and such processors may implement a dedicated squarer to speed up the inversion process. On the other hand, the binary GCD modular division algorithm is used in many ECC processors where an affine coordinate system is adopted. The binary GCD algorithm is based on simple add and shift operations, the same operations used by Montgomery multiplication; hence, many ECC processors with combined modular division and multiplication blocks have been proposed. The complexity of modular division algorithms is approximately O(2n), where n is the size of the operands, and the running time is variable, depending directly on the inputs. Disadvantages:
- Long Data path
- More Area and More Power

Redundant Signed Digits: The RSD representation, first introduced by Avizienis, is a carry-free arithmetic in which integers are represented by the difference of two other integers.
An integer X is represented by the difference of its x+ and x− components, where x+ is the positive component and x− is the negative component. The RSD representation has the advantage of allowing addition and subtraction without the need for two's complement representation. On the other hand, an overhead is introduced due to the redundancy in the integer representation, since an integer in RSD representation requires double the word length compared with the typical two's complement representation. In radix-2 balanced RSD representation, digits are either 1, 0, or −1. The complexity of regular multiplication using the schoolbook method is O(n^2). Karatsuba and Ofman proposed a methodology to perform a multiplication with complexity O(n^1.58) by dividing the operands of the multiplication into smaller, equal segments. Given two operands of length n to be multiplied, the Karatsuba–Ofman methodology splits the two operands into high (H) and low (L) segments. Consider β as the base for the operands, where β is 2 in the case of integers and β is x in the case of polynomials. With

a = aL + aH·β^(n/2) and b = bL + bH·β^(n/2), (1)

the multiplication of both operands is performed as

a·b = aL·bL + (aL·bH + aH·bL)·β^(n/2) + aH·bH·β^n.

Hence, four half-sized multiplications are needed. The Karatsuba methodology reformulates (1) as

a·b = aL·bL + [(aL + aH)(bL + bH) − aL·bL − aH·bH]·β^(n/2) + aH·bH·β^n.

Therefore, only three half-sized multiplications are needed. The original Karatsuba algorithm is performed recursively: the operands are segmented into smaller parts until a reasonable size is reached, and then regular multiplications of the smaller segments are performed. Binary GCD Modular Division: A modular division algorithm has been proposed based on the extended Euclidean algorithm and is considered the basis for several hardware implementations of modular division. Algorithm 1 computes the modular division Z ≡ X/Y (mod M) based on the plus–minus version of the original binary GCD algorithm.
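The plus–minus GCD algorithm itself is intricate; as a simple behavioral reference (not the hardware algorithm described here), modular division Z ≡ X/Y (mod M) for prime M can be checked against Python's built-in modular inverse:

```python
# Behavioral reference for modular division Z = X/Y (mod M), M prime.
# This deliberately uses Python's built-in modular inverse rather than the
# binary GCD hardware algorithm above; it is only a way to check results.

def mod_div(x, y, m):
    """Return z such that (z * y) % m == x % m, for gcd(y, m) == 1."""
    # pow(y, -1, m) computes the modular inverse (Python 3.8+)
    return (x * pow(y, -1, m)) % m
```

For example, mod_div(10, 3, 17) returns a z with (z · 3) mod 17 = 10, which a hardware divider implementing Algorithm 1 should reproduce.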
The algorithm instantiates four registers A, B, U, and V, initialized with Y, M, X, and 0, respectively. It then repeatedly reduces the values of Y and M in order to calculate GCD(Y, M), which is equal to 1 in well-formed elliptic curves where the modulus is prime. The registers U and V are used to calculate the quotient, and the operations performed on these registers mirror those performed on the A and B registers. The operations on A and B repetitively reduce the contents of both registers by simple shift or add/subtract-shift operations, depending on whether the intermediate contents are even or odd. In the case where both register contents are odd, the contents are added if A + B is divisible by 4, or subtracted (A − B) otherwise. Two variables ρ and δ are used to control the iterations of the algorithm based on the bounds of the register contents, where δ = α − β, 2^α and 2^β are the upper bounds of A and B, respectively, and ρ = min(α, β). The AU is the core unit of the processor and includes the following blocks: 1) a modular addition/subtraction block; 2) a modular multiplication block; and 3) a modular division block. Modular Addition and Subtraction: Addition is used in the accumulation process during multiplication, as well as in the binary GCD modular division algorithm. In the proposed implementation, the radix-2 RSD representation is used as a carry-free representation. In radix-2 RSD, digits are represented by 0, 1, and −1, where digit 0 is coded as 00, digit 1 as 10, and digit −1 as 01. In Fig. 2, an RSD adder is presented that is built from generalized full adders. The problem with this adder is that it tends to expand the addition result even if there is no overflow, since it restricts the least significant digit (LSD) to be digit −1 only.
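The carry-free RSD arithmetic can be illustrated in software. This is a behavioral sketch only, not the generalized-full-adder circuit of Fig. 2; the function names are illustrative:

```python
# Behavioral sketch of radix-2 RSD: an integer is a list of digits in
# {-1, 0, 1}, least significant digit first.

def rsd_value(digits):
    """Value of an RSD digit vector, least significant digit first."""
    return sum(d << i for i, d in enumerate(digits))

def rsd_negate(digits):
    # Negation is digitwise: no two's-complement step is needed, which is
    # the key advantage of the redundant signed-digit form.
    return [-d for d in digits]

def rsd_encode(d):
    """Two-bit coding used in the text: 0 -> 00, 1 -> 10, -1 -> 01."""
    return {0: "00", 1: "10", -1: "01"}[d]
```

Note the redundancy: [1, 1] and [-1, 0, 1] both represent the value 3, which is why an RSD operand needs roughly double the word length of a two's-complement one.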
This unnecessary overflow affects the reduction process later and introduces some control complexities in the overall processor architecture. However, the overflow is easily managed when the adder is instantiated as a subblock within a multiplier or a divider, as is the case in the proposed implementation. To overcome the overflow problem of that adder, a new adder is proposed based on previously published work. The proposed adder consists of two layers: layer 1 generates the carry and the interim sum, and layer 2 generates the sum, as shown in Fig. 3. Table I shows the addition rules performed by layer 1 of the RSD adder, where the RSD digits 0, +1, and −1 are represented by Z, P, and N, respectively. The adder works by ensuring that layer 2 does not generate overflow through the use of previous digits in layer 1. The proposed adder is used as the main block in the modular addition component to take advantage of the reduced-overflow feature. However, overflow is not an issue in either the multiplier or the divider when an RSD adder is used as an internal block; hence, the reduced area is taken as an advantage when instantiating adders within the multiplier and the divider. The recursive nature of the Karatsuba multiplier is considered a major drawback when implemented in hardware, since hardware complexity increases exponentially with the size of the operands to be multiplied. To overcome this drawback, the Karatsuba method is applied at two levels: a recursive Karatsuba block that works depthwise, and an iterative Karatsuba block that works widthwise. Recursive Construction of Karatsuba Multiplier: In general, the reduced complexity of Karatsuba multiplication comes from the fact that four half-word multiplications are replaced by three half-word multiplications with some additions and subtractions. However, the complexity impact increases with the recursive depth of the multiplier.
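The recursive high/low split can be illustrated in software (one possible sketch over plain integers with base β = 2, not the paper's hardware datapath; the threshold for falling back to regular multiplication is an arbitrary choice):

```python
# Minimal recursive Karatsuba multiplication over nonnegative integers,
# following the high/low split described above. Software illustration only.

def karatsuba(a, b, threshold=16):
    # Fall back to regular multiplication for small operands.
    if a < (1 << threshold) or b < (1 << threshold):
        return a * b
    half = max(a.bit_length(), b.bit_length()) // 2
    a_high, a_low = a >> half, a & ((1 << half) - 1)
    b_high, b_low = b >> half, b & ((1 << half) - 1)
    low = karatsuba(a_low, b_low)         # aL * bL
    high = karatsuba(a_high, b_high)      # aH * bH
    # One extra multiplication replaces two: (aL+aH)(bL+bH) - low - high
    mid = karatsuba(a_low + a_high, b_low + b_high) - low - high
    return (high << (2 * half)) + (mid << half) + low
```

The middle multiplication operates on sums that can be one digit wider than n/2, which is exactly the carry problem the hardware construction below has to address.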
Hence, it is not sufficient to divide the operands into halves and apply the Karatsuba method at this level only. Operands of size n RSD digits are divided into two equal-sized (low and high) branches of n/2 RSD digits each. The low branches are multiplied through one n/2 Karatsuba multiplier and the high branches through another. Implementation difficulties arise with the middle Karatsuba multiplier, which multiplies the sum of the low and high branches of one operand by the corresponding sum for the other operand. The results of these additions are of size n/2 + 1 RSD digits, so an unbalanced Karatsuba multiplier of size n/2 + 1 would be required. Hence, the carry generated by the middle addition needs to be addressed to avoid the implementation complexities of an unbalanced Karatsuba multiplier. High-Radix Modular Division: The binary GCD algorithm is an efficient way of performing modular division, since it is based on addition, subtraction, and shifting operations. The complexity of the division operation comes from the fact that the running time of the algorithm is inconsistent and input dependent. As seen in Algorithm 2, three main states define the flow of the algorithm. In the first state, the divisor is checked to determine whether it is even or odd. In the second state, the contents of the corresponding registers are swapped according to the flag δ. In the last state, division by 4 modulo M is performed. In order to implement Algorithm 1 efficiently in hardware, the following operations should be adapted for hardware execution. First, division by 2 or by 4 is performed simply by shifting right by 1 or 2 digits, respectively, based on the guarantee that the LSDs are zeros in lines 3 and 12 of the algorithm. On the other hand, division by 2 modulo M (or by 4 modulo M) is performed by adding the modulus to, or subtracting it from, the dividend, according to whether the dividend is even or odd and the value of M (mod 4).
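The exact halving trick described above can be sketched as follows (an illustration assuming an odd modulus M, as holds for a prime field; function names are illustrative):

```python
# Division by 2 modulo an odd M: if x is even, shift right; if x is odd,
# x + M is even, so shift (x + M) right. Division by 4 mod M applies the
# same idea twice. Sketch of the trick described above, not the paper's RTL.

def halve_mod(x, m):
    """Return x/2 mod m, assuming 0 <= x < m and m odd."""
    assert m % 2 == 1, "modulus must be odd"
    return x >> 1 if x % 2 == 0 else (x + m) >> 1

def quarter_mod(x, m):
    """Return x/4 mod m by halving twice."""
    return halve_mod(halve_mod(x, m), m)
```

In hardware the same decision reduces to inspecting the low digit(s) of the dividend and of M (mod 4), then performing a single add/shift.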
For both δ and ρ, a comparison with 0 is necessary. However, an efficient alternative is to initialize a vector of size n with all zeros except the least significant bit (LSB) for δ and the most significant bit (MSB) for ρ. Hence, the counting down of ρ is performed by shifting 1 bit to the right, and only the LSB is checked for loop termination. On the other hand, a flag is needed to control the shift direction of δ, where the flag and the value of the LSB together determine whether δ is less than zero or not. The implementation of the algorithm follows the previously proposed implementation. The modular divider architecture is shown in Fig. 6. Three RSD adders are used along with three 3×1 multiplexers and one 4×1 multiplexer, with some control logic. Advantages:
- Short Data path
- Low Power and Low Area
In the simplest terms, resistance welding is joining metals with pressure and electrical current. But complex automation systems transform resistance welding into a process that can speed up welding projects with exceptional accuracy. This potential for faster welding processes is why industries are so keen on bringing this technology into the manufacturing process. A Resistance Welding Primer Resistance welding involves joining metals by first applying pressure to two or more metal components. Electrical current is then passed through the metals for a precise length of time. No other materials are needed to weld the components together. The metals are melted and fused in a clean, fast, and cost-effective process. There are several types of resistance welding processes, including spot, seam, flash, projection, and upset welding. The most significant differences between these methods are the types and shapes of the electrodes used to apply the pressure and the electrical current. Water flows through cavities inside the electrode and other tooling for cooling. Spot welding is the simplest form of resistance welding. The electrodes apply both the pressure and the current to the components being welded. Seam welding is similar to spot welding, except that the components to be welded roll between wheel-shaped electrodes while a current is applied. These welds often overlap, resulting in a welded seam instead of individual spot welds. Flash welding uses force to push workpieces together. A flashing action created by a high current density at very small contact points expels oxides and impurities, resulting in a stronger weld. Upset welding is similar to flash welding, but the components are already in firm contact and, thus, no flashing occurs. Projection welding localizes welds at predetermined points by using projections that focus heat generation at the point of contact. Sufficient resistance causes the projections to collapse, and the weld nugget is formed.
Resistance Welding Offers Many Advantages Robotic resistance welding increases throughput, reduces process times, and results in a better-quality weld. Resistance welding is faster than fastening components together with rivets and offers more shear resistance. Compared to other fastening and welding methods, resistance welding reduces the chance of workpiece deformation. Resistance welding can be performed with single-phase AC power, making it an accessible way of joining workpieces. It is also available with mid-frequency DC power supplies, which can increase energy efficiency and further improve quality. Projection welding offers an excellent solution for nut and bolt welding. Spot welding is especially useful for sheet metal applications, where three or more thicknesses of metal can be welded together simultaneously. Take advantage of Genesis's free ebook to gain helpful insights: download 6 Business Considerations for Robotic Welding Investments.
For the first time, scientists have produced the “cardiac proteome,” a map of all of the tens of thousands of proteins contributing to the success of every single heartbeat. The new map provides an “atlas” of what a healthy human heart looks like: to know what goes wrong in a diseased heart, one must first determine, in cellular detail, what a healthy heart looks like. Front: drawing of cardiac muscle. Background: excerpt from a so-called “heatmap,” an overview of the proteins analyzed for the cardiac proteome. (Image: Doll, Kraue, Menzfeld / MPIB) Lead author Sophia Doll and her team of researchers, brought together through a collaboration between the Max Planck Institute of Biochemistry (MPIB) and the German Heart Centre at the Technical University of Munich (TUM), collected 150 tissue samples to catalogue the type and number of individual proteins found in every cell type in the heart's tissues. Specifically, Doll looked at the protein composition of three main cell types of the heart: cardiac fibroblasts, smooth muscle cells, and endothelial cells. These cell types are spread over various heart regions, including the heart valves, cardiac chambers, and major blood vessels. Using mass spectrometry, the researchers identified close to 11,000 different proteins in the heart. Surprisingly, the right and left sides of the heart appeared to have a similar protein composition, despite the large difference in their functions. “Looking at the protein atlas of the human heart, you can see that all healthy hearts work in a very similar manner,” Sophia Doll explained. “We measured similar protein compositions in all the regions with few differences between them.” Changes in the DNA that contains the code for protein production, or changes in the proteins themselves, can both contribute to heart disease.
Using the new cardiac proteome, scientists began to compare healthy hearts with diseased ones, starting with the proteomes of patients with atrial fibrillation, a common disorder characterized by an irregular heartbeat. They saw that the large majority of the differences between the two proteomes lay in proteins that supply energy to heart cells. However, each individual diseased proteome contained slightly different changes compared with the healthy cardiac proteome, stressing the importance of personalized medicine. “Although all the patients had very similar symptoms, we see from the data that a different molecular dysfunction was responsible in each case,” said TUM's Markus Krane. “We need to learn to recognize and treat such individual differences - especially in cardiac medicine.” The present study was published in the journal Nature Communications.
Students with learning disabilities or differences like dyslexia, dysgraphia, processing difficulties, or ADHD may have to work harder than their peers to achieve the same results. These kids can have trouble getting started on a task, need instructions to be repeated more than once, and/or be unable to complete work in a given time period. While any child can suffer from low self-esteem, students with learning disabilities are particularly at risk, especially if their condition is undiagnosed. This is because most school-based learning programs are developed with a neurotypical child in mind. The mismatch between learning style and task can cause students to doubt themselves and believe that poor performance means they are not “smart” and are somehow less skilled than their classmates. The stress and frustration a child experiences at school is often accompanied by feelings of shame associated with underperforming. There is also the social stigma of being “different.” But with the right strategy training, accommodations and emotional support, many children with specific learning differences can overcome the challenges they face and achieve their full potential in the classroom.
Scientists have paradoxically tuned graphene, the so-called ‘wonder material’, to exhibit electrical properties at both extremes. According to the findings of researchers at MIT and Harvard University, graphene can operate as an insulator (electrical charge cannot pass through the material) or as a superconductor (electrons can travel through the material without resistance). The magic angle Since 2004, when graphene was first isolated in Manchester, UK, scientists have discovered one astonishing property after another. It's the thinnest material known, essentially an atom-thick honeycomb sheet of carbon atoms, but also incredibly light and flexible (a 1-square-meter sheet weighs only 0.77 milligrams), while at the same time being hundreds of times stronger than steel. It would take an elephant, balanced on a pencil, to break through a sheet of graphene the thickness of Saran Wrap. Furthermore, graphene is electrically conductive, more so than copper, which is why many look to it as the backbone for the super-electronics of the future. But there's even more to it than meets the eye. Most recently, researchers at MIT and Harvard published two papers in the journal Nature that reveal graphene's far more curious electrical properties. Previously, scientists were able to synthesize graphene superconductors by doping graphene with inherently superconductive metals, so that it would ‘inherit’ their superconductive properties. Now, researchers led by Pablo Jarillo-Herrero, an associate professor of physics at MIT, have found a way to make graphene superconductive on its own. A material's ability to conduct electrons is characterized by energy bands, with each band representing a range of energies that electrons can have. Between bands there is an energy gap; once a band is filled, electrons need extra energy to leap over this gap to the next band.
An insulator is a material in which the last occupied energy band is completely filled with electrons. Conductors, on the other hand, have partially filled energy bands that electrons can occupy to move freely. There is, however, one peculiar class of materials that, judging from their band structure, should conduct electricity but do not, behaving as insulators instead. These are called Mott insulators. The insulating effect is due to strong electrostatic interactions between the electrons. “This means all the electrons are blocked, so it's an insulator because of this strong repulsion between the electrons, so nothing can flow,” Jarillo-Herrero explained in a statement. “Why are Mott insulators important? It turns out the parent compound of most high-temperature superconductors is a Mott insulator.” Jarillo-Herrero and his colleagues experimented with simple stacks of graphene sheets. Eventually, they came across an amazing configuration: two-sheet superlattices. Specifically, when rotated at a ‘magic angle’, two stacked sheets of graphene exhibit nonconducting behavior, similar to an exotic class of materials known as Mott insulators. The graphene sheets do not sit exactly on top of each other but are instead offset by a magic angle of 1.1 degrees. In this configuration, the graphene superlattice exhibits a flat band structure, similar to a Mott insulator, in which all electrons carry the same energy regardless of their momentum. “Imagine the momentum for a car is mass times velocity,” Jarillo-Herrero says. “If you're driving at 30 miles per hour, you have a certain amount of kinetic energy. If you drive at 60 miles per hour, you have much higher energy, and if you crash, you could deform a much bigger object.
This thing is saying, no matter if you go 30 or 60 or 100 miles per hour, they would all have the same energy.” When the researchers then applied voltage, adding small amounts of electrons to the graphene superlattice, they found that, at a certain level, the electrons broke out of the initial insulating state and flowed without resistance, as if through a superconductor. It’s possible, says Jarillo-Herrero, to tune graphene to either behave as an insulator or a superconductor, as well as any phase in between — all very diverse properties in one single device. “We can now use graphene as a new platform for investigating unconventional superconductivity,” Jarillo-Herrero says. “One can also imagine making a superconducting transistor out of graphene, which you can switch on and off, from superconducting to insulating. That opens many possibilities for quantum devices.”
- Theoretical physicist
- 1932 Laureate of the Nobel Prize in Physics
Heisenberg came from a Bavarian family of scholars. After qualifying for university studies, he first studied mathematics before transferring to physics. He completed his doctorate as early as 1923 at the University of Munich and finished his post-doctoral qualification the following year, entitling him to teach at the university level. At the age of just 26, Heisenberg became a professor at the University of Leipzig, where he and his colleague Friedrich Hund turned the department into an international center for theoretical physics. In 1932, he was awarded the Nobel Prize in Physics for his research in quantum mechanics, above all for the uncertainty principle he postulated. During the Third Reich, unlike many other scientists and despite numerous threats, he did not emigrate. Instead, he cooperated with the regime within the framework of the Uranium Project in 1941. After the war, he continued teaching and researching at various universities and institutions. In the young Federal Republic of Germany, he promoted the use of nuclear energy for civilian purposes.
By Daniel K. Benjamin Property rights enable humans to acquire, use, and dispose of assets. There is a burgeoning literature on the importance of secure property rights in promoting economic prosperity, improving environmental protection, and ensuring individual liberty. A recent addition to this literature by Randall Akee (2009) shows just how important it is that the transfer (sale or lease) of property rights be unfettered by government restrictions. In the late 1800s, the area now known as Palm Springs, California was evenly divided by the federal government into a checkerboard of 1-mile-square blocks. Property rights were assigned in alternating blocks to the Southern Pacific Railroad and to the Agua Caliente band of Cahuilla Indians. From then until the late 1950s, federally imposed restrictions on the sale and lease of the Agua Caliente Reservation land created high costs of developing the land. These costs impeded investment and sharply reduced the value of tribal lands. In contrast, the non-Indian blocks of Palm Springs assigned to Southern Pacific had fee-simple ownership status, making them free of such restrictions. Development proceeded on this property, resulting in land values more than five times higher than observed for otherwise identical Indian land. In the 1950s, the restrictions on Agua Caliente lands were relaxed and development on them soared. Not surprisingly, once development became feasible, the value of these lands rose rapidly, eventually converging with the value of non-Indian lands in Palm Springs. The origins of the restrictions on the transferability of Indian lands date back to the nineteenth century. Although the lands assigned to the Agua Caliente tribe nominally belonged to individual members of the band, they were held in trust by the U.S. government. As a practical matter, trust lands could not be sold and, until 1955, legally could not be leased to developers or others for more than five years.
Hence, the land effectively could not be used as collateral for loans that would enable the tribe to develop it. Moreover, non-tribal members were unwilling to invest their own funds in projects to which they would lose their rights after only five years. The result was that by the late 1950s, Palm Springs was a checkerboard of two different worlds. Non-Indian, fee-simple land had expensive homes and prosperous businesses located on it, and sold for high prices. Agua Caliente land stagnated in value and was largely undeveloped except in low-value residential uses, such as mobile homes. In 1955 the U.S. government granted tribal members permission to lease their land for 25 years; in 1959 the government increased the maximum lease duration to 99 years and made it feasible for tribal members to sell their land holdings. Developers could now be assured of receiving a full return on their projects, and the result was an explosion of both residential and commercial development activity on Agua Caliente lands. Over the next half century, the value of the Agua Caliente lands rose from a mere 13 percent of the value of neighboring fee-simple lands to parity. Today tribal and non-tribal lands in Palm Springs are virtually indistinguishable, both in appearance and in market value. The transformation brought about by the enhanced transferability of Agua Caliente lands is useful in helping us understand two broader issues. First, the economic condition of American Indians lags considerably behind most of the rest of the American population. Per capita income among Indians is not much more than half the national average, and the poverty rate is roughly double the average. There are many reasons for this, but, as Terry Anderson and others have shown, one key element lies in legal institutions that limit the ability of Indians to sell or lease their lands or to use them as collateral. Akee's research adds importantly to our understanding of the destructiveness of such restrictions.
The second lesson of this Agua Caliente story can be found in the use of property rights to protect the environment. It is becoming increasingly accepted, for example, that individual fishing quotas (IFQs) are the single most important tool for efficiently and effectively protecting the world's fisheries. One crucial element of achieving the maximum performance from IFQs is that they be transferable, through both lease and sale. But not all IFQ systems permit unrestricted transfer, a fact that impairs the power of such systems to protect fisheries. Similarly, in a world of growing water scarcity, government-imposed restrictions on the transferability of water rights don't merely reduce economic efficiency; they threaten the survival of many aquatic species dependent on that water. Environmental damage is also caused by restrictions on the full transferability of federal grazing permits, restrictions that impede the movement of permitted lands out of grazing and into habitat protection. Clearly defined, secure, transferable property rights are a necessary element of the voluntary exchange on which human prosperity is founded. But such rights are also our best hope for protecting and enhancing the environment. The Agua Caliente story makes it clear that property rights that are not transferable make a mockery of the concept of property rights, a mockery that in other venues threatens species and degrades environmental quality. For those interested in environmental protection, it is a lesson we ignore at our peril. Akee, Randall. 2009. Checkerboards and Coase: The Effect of Property Institutions on Efficiency in Housing Markets. Journal of Law and Economics 52(2): 395–410. Daniel K. Benjamin is a PERC senior fellow and Alumni Distinguished Professor at Clemson University. “Tangents” investigates policy implications of recent academic research. He can be reached at [email protected].
When historians and ethnomusicologists seek to understand a musical performance and its musical system, they do not only listen to the music. They also examine the instruments themselves. Understanding how the instruments are played, and their historical origins, is useful for understanding the society in which the instruments were produced or used. Look carefully at this picture from a concert given in 1995 in Chennai, South India. It features four instruments common in classical music from South India: the flute, the violin, the drum, and the clay pot. Three of these four instruments were named and described in written documents, including ancient Hindu texts called Vedas, and are depicted in sculptures and paintings that date as far back as the first millennium CE. One of them, however, was introduced into South Indian classical music much later. Match a description of each instrument with a picture from approximately its earliest appearance in India. Then, try to determine which instrument was incorporated most recently into South Indian music. Go to the task
Reading and Writing Decimals
Tips to remember:
*A dot is used to represent a decimal point.
*Whole numbers precede the decimal point, and any fractional part follows the decimal point.
*Decimals are fractions divided into tenths, hundredths, or some power of 10.
*Place value names to the left of the decimal point: hundred thousands - ten thousands - thousands - hundreds - tens - ones. To the right of the decimal point: tenths - hundredths - thousandths - ten thousandths - hundred thousandths.
*Example: 325,211.6849 is read as three hundred twenty-five thousand, two hundred eleven AND six thousand, eight hundred forty-nine ten thousandths. Note: The word and represents the decimal point.
Adding and Subtracting Decimals
Key: Line up the decimals. Also, whole numbers have an implied decimal behind the last digit.
Step 1 - Write the numbers vertically with the decimals lined up.
Step 2 - If necessary, use zeros as placeholders.
Step 3 - Add or subtract. The decimal in the answer must line up with the decimals in the problem. In other words, one should be able to draw a straight line beginning with the decimal in the first number through the decimal in the answer.
Example: Add 23.4, 45, and 34.758
23.400 (added two zero placeholders)
45.000 (placed a decimal behind the whole number and added three zero placeholders)
+ 34.758
-------
103.158
Note: It is possible to draw a straight line through all of the decimals.
Multiplying Decimals
Key: Count the total digits behind all decimal points.
Step 1 - Temporarily ignore the decimal point and multiply.
Step 2 - After multiplying, replace the decimal point by counting the number of digits behind all decimal points.
Step 3 - Start from the right of the product and move left the same number of places to place the decimal point.
Example: 4.56 X .3 =
Step 1: 456 x 3 = 1368
Step 2: 4.56 has 2 digits behind the decimal point; .3 has 1 digit behind the decimal point. Therefore, there are a total of 3 digits behind the decimal point.
Step 3: 4.56 X .3 = 1.368 (The answer has 3 digits behind the decimal point.)
Practice problems:
1) Write the numeral: three hundred sixty-two and forty-eight thousandths
2) Find the sum: 67.3 + 19 + 43.211
3) Find the sum: 34 + .56 + 5.32
4) Subtract 398.1 from 697.34
5) Find the product: 39 X .32
6) Find the product: 239.22 X .12
Mastering Essential Math Skills: Decimals and Percents
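The rules above can be checked with Python's `decimal` module (a quick sketch for checking your work, not part of the worksheet itself). `Decimal` keeps exact place values, so lining up decimals and counting digits behind the point behave just as described.

```python
from decimal import Decimal

# Adding: line up the decimals; 45 gets an implied decimal point
# and zero placeholders (45.000), just as in the worked example.
total = Decimal("23.400") + Decimal("45.000") + Decimal("34.758")
print(total)  # 103.158

# Multiplying: 4.56 has 2 digits behind the point and .3 has 1,
# so the product carries 2 + 1 = 3 digits behind the point.
product = Decimal("4.56") * Decimal("0.3")
print(product)  # 1.368
```

Using strings rather than floats when constructing `Decimal` matters: `Decimal(0.3)` would inherit binary floating-point error, while `Decimal("0.3")` is exactly three tenths.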
Science, geography, history, design and technology, and art are taught as part of a topic. Over each key stage the children will cover all the requirements of the National Curriculum 2014 in these subjects. The teachers may also look for links that can be made to maths and English where this is appropriate. The topics have been developed and written by our curriculum team of staff in school. The themes and content for each topic have been chosen to engage the children and make their learning fun, practical and purposeful. Each topic is usually launched with a dilemma or problem which the children can engage with. We want them to see that there is a purpose to their learning and that they can lead their own learning by being involved in deciding how to reach a goal.
In Year 3 Horrid Henry comes into the classroom and leaves a poster saying he is looking for children to join him. The children then have a series of tasks to complete in order to do so. These include: designing and making a flag, making an intruder alarm and creating a stink bomb. At the end of the topic 'Henry' then comes into class to see what the children have produced and whether they can be part of his team.
Year 4 receive a message from John Hammond, the infamous creator of "Jurassic Park". Throughout the topic, children explore food chains and keys, habitats and adaptation, before enhancing their DT skills by designing and building an enclosure suitable for a dinosaur breed of their choice. In order to keep their dinosaur safely caged, the children then explore electricity and circuits before designing and creating an alarm and lighting system around their enclosure. A visit to West Midlands Safari Park towards the start of the topic allows children to assess ways of keeping animals enclosed.
Each year group will cover 4 or 5 topics during the school year. Visits will be arranged and events planned in school to support the children's learning.
Please click on your child’s year group to see what topics they will be taking part in over the year and the intended learning outcomes.
What is the Critical Text? Question: "What is the Critical Text?" Answer: The Critical Text is a Greek text of the New Testament that draws from a group of ancient Greek manuscripts and their variants in an attempt to preserve the most accurate wording possible. Other Greek texts besides the Critical Text used for producing English Bibles are the Majority Text and the Textus Receptus. Until the late 1800s, the Textus Receptus, or the “received text,” was the foremost Greek text from which the New Testament was derived. (The King James Version and New King James Version are based on the Textus Receptus.) In 1881 two prominent scholars, Brooke Foss Westcott and Fenton J. A. Hort, printed their New Testament in Greek, later known as the Critical Text. Dismissing the Textus Receptus as an inferior text rife with errors, Westcott and Hort compiled a new Greek text, with special focus on two fourth-century manuscripts, the Codex Vaticanus and the Codex Sinaiticus. As a result of Westcott and Hort’s work, their Critical Text became the standard Greek text used for modern interpretation and translation for nearly two generations. The Critical Text was the one chiefly used for the English Revised Version and the later American Standard Version. Today, the updated and revised Critical Text is the Greek manuscript basis for the New International Version, the New American Standard Bible, the English Standard Version, and virtually every other modern English translation of the Bible. Though the Critical Text was not without its faults, it has been accepted, on the whole, as being the most accurate in duplicating the original text of the New Testament. Modern biblical scholars have adjusted and adapted Westcott and Hort’s theories of translation, which can be summarized by nine critical rules of biblical interpretation, including the following: • The reading is less likely to be original if it shows a disposition to smooth away difficulties. 
• Readings are approved or rejected by reason of the quality, and not the number, of supporting witnesses. • The preferred reading best explains the existence of other readings. • The preferred reading makes the best sense; that is, it best conforms to the grammar and is most congruous with the purport of the rest of the sentence and of the larger context. With the discovery of new manuscript evidence, the Critical Text has been revised many times. Currently, the Nestle-Aland text (now in its twenty-eighth edition) is the critical text in common use, along with the Greek New Testament published by the United Bible Societies (UBS). In summary, the Critical Text is an effort to discover the wording of the original Greek manuscripts of the New Testament by comparing/contrasting all of the existing manuscripts and using logic and reason to determine the most likely original readings. While no human effort will ever produce an absolutely perfect copy of the original Greek manuscripts of the New Testament, the Critical Text is very likely extremely close to what the New Testament authors wrote. Recommended Resources: A Textual Commentary on the Greek New Testament by Bruce Metzger and Logos Bible Software.
Microorganism of the Month: Mycobacterium species
By Tharanga Abeysekera, EMLab P&K Analyst
The mycobacteria are rod-shaped, sometimes branching bacteria with a Gram-positive, acid-fast cell wall. Acid fastness is determined by the ability of the cell to retain a dye when treated with acid and, like the Gram reaction, is related to the nature of the cell wall. The Mycobacterium cell wall is thicker than in many bacteria, and is waxy, hydrophobic, and rich in mycolic acids. The mycolic acids are long-chain fatty acids that comprise up to 60% of the cell wall. They play a role in isolating the cell from the environment and protecting it from adverse conditions. The mycobacteria are aerobic, non-motile organisms that grow at temperatures ranging from 20°C to 50°C. They grow more slowly than many bacteria, and environmental isolation requires selective media such as Middlebrook 7H10 agar and Lowenstein-Jensen agar. Mycobacteria that produce visible colonies within 7 days are considered rapid growers. Those that take longer than 7 days (some require weeks) to produce visible colonies are considered slow growers. Mycobacterium colonies on a culture plate can be rough to smooth in morphology and vary in color as well. Some colonies produce a yellow pigment when exposed to light. Two deadly human diseases are caused by slow-growing Mycobacterium species. Tuberculosis is caused by Mycobacterium tuberculosis. This is an airborne disease that is readily transmitted from one person to another. Risk factors for the disease are exposure and lack of acquired immunity. While relatively uncommon in the US, this disease affects millions of people worldwide and is among the leading causes of death in many areas. Tuberculosis is treatable, but requires long-term antibiotic therapy. Leprosy is a deadly disease caused by Mycobacterium leprae.
Leprosy is not airborne (as far as we know) and is not readily transmitted between people. Risk factors for the disease appear to be in part genetic. While rare in the U.S., leprosy remains a significant problem in India and parts of South America and Africa. Leprosy is also treatable using an antibiotic cocktail. Faster-growing Mycobacterium species (often called nontuberculous mycobacteria, or NTM) are common in the environment and can cause infections in immunocompromised individuals. Diseases include pulmonary disease resembling tuberculosis, lymphadenitis, skin disease, and disseminated disease. These diseases have become of special concern because of the AIDS epidemic. One of the fast-growing mycobacteria, Mycobacterium immunogenum, has learned to colonize machining systems containing water-miscible metalworking fluids (MWF). MWF are used to reduce heat and friction and to improve product quality in industrial machining and grinding operations. There are many formulations of MWF, and the water-based formulas support microbial growth. Mycobacterium immunogenum exposure has been linked to outbreaks of hypersensitivity pneumonitis (HP) in machinists. HP is an inflammation in and around the lung resulting from repeated inhalation of, and sensitization to, small-particle antigens. It is initially treatable with avoidance of exposure to the offending antigen and with steroids. Prolonged exposure can lead to permanent lung damage and death. MWF operations employ about 1.2 million workers. These workers are exposed to the fluids by breathing aerosols generated in the machining process. Why the organism grows in some coolant systems and not others remains unknown. The misuse of biocide in heavily contaminated systems can actually increase the Mycobacterium population by eliminating the competing bacteria while having very little effect on the Mycobacterium population itself. Testing for Mycobacterium in MWF can be done using several methods.
Generally, the bulk fluid is collected and cultured. Alternatively, the fluid can be evaluated using chemical analysis for mycolic acids, by PCR, or by microscopy and staining using the acid-fast method. Air sampling could also be used, but is subject to a high rate of false negatives.
Microorganism of the Month: Stenotrophomonas maltophilia
By Yamile Echemendia, EMLab P&K Analyst
Previously known as Pseudomonas maltophilia or Xanthomonas maltophilia, this microorganism has now been reclassified as Stenotrophomonas maltophilia and is the sole member of the genus. In culture, colonies are smooth and glistening, with entire margins, and are white to pale yellow. Its cells are straight or slightly curved, non-sporulating, Gram-negative bacilli that are 0.5 to 1.5 µm long. They can be found singly or in pairs and are motile by means of several polar flagella (slender, tapering outgrowths of the cells of many microorganisms that serve as a means of motility). This bacterium is an obligate aerobe (an organism that needs oxygen for metabolism) that grows at temperatures between 5°C and 40°C, with an optimum at 35°C. Stenotrophomonas maltophilia is a ubiquitous free-living bacterium that is commonly found in soils and especially in plant rhizospheres (the area of soil that immediately surrounds and is affected by a plant's roots), where the high content of amino acids in root exudates constitutes a growth factor for this organism. Other food sources for the bacterium include frozen fish, milk, poultry, eggs, and lamb carcasses. In addition, Stenotrophomonas maltophilia has been isolated from a number of water sources such as rivers, wells, bottled water, and sewage. Little is known about virulence factors associated with Stenotrophomonas maltophilia, and considerable ambiguity exists about the route(s) of acquisition.
Although it is not a part of the normal flora of healthy humans, it is frequently encountered as a commensal (a type of relationship between organisms of two different species in which one benefits from the association while the other remains unharmed) in the transient flora of hospitalized patients. Stenotrophomonas maltophilia is considered an opportunistic pathogen. Episodes of infection caused by this microorganism have become increasingly important in the hospital setting, the presence of a compromised immune system being the most important predisposing factor. Despite the fact that the majority of Stenotrophomonas maltophilia infections are nosocomial (originating, acquired, or occurring in a hospital), some may be community-acquired. Distinguished by a high degree of antibiotic resistance rather than by invasiveness and tissue destruction, Stenotrophomonas maltophilia is a major concern, primarily in immunocompromised patients, where it may cause a wide spectrum of diseases such as urinary tract infection, endocarditis, meningitis, serious postoperative wound infections, and respiratory tract infections. Management of Stenotrophomonas maltophilia infections presents problems for both the laboratory technician and the physician, because clinical isolates are frequently resistant to many antimicrobial agents and the methods for determining the susceptibility of this organism to antibiotics are, at present, often unreliable. Several hospital outbreaks of Stenotrophomonas maltophilia infection and/or colonization have been described. In several instances, environmental reservoirs for the bacterium have been identified, such as deionized water dispensers, ice-making machines, inhalation therapy equipment, blood sampling tubes, contact lens care systems, dialysis machines, faucet aerators, and the hands of health personnel, among others. Several strategies to prevent infections with Stenotrophomonas maltophilia have been proposed.
These include avoidance of inappropriate antibiotic use and maintenance and, where appropriate, disinfection and/or sterilization of respiratory therapy equipment, hemodialyzers, and ice-making machines. The fact that this organism has been associated with plumbing systems (e.g., water faucets and sink drains) in both home and hospital environments suggests that control directed towards these sources may be helpful. For example, the practice of rinsing reusable equipment for the delivery of aerosolized antibiotics in tap water should be avoided. Much remains to be understood about the epidemiology of Stenotrophomonas maltophilia. In particular, more information about nosocomial reservoirs and routes of transmission of the bacterium is essential for the development of more effective strategies to prevent future outbreaks of infection within the growing immunocompromised population. This article was originally published in July 2007.
By Dr. Mike Norris, Consulting Educator
For centuries, children of all ages and cultures have been fascinated with ancient Egypt. Egyptian art can therefore be a valuable tool in helping children both learn about and create art: studying it strengthens their observational powers as it inspires their own art making. And, leaving aside mummies, nothing in Egyptian art interests children more than hieroglyphic writing (pronounced "highrowgliffick"). In this writing system, which the Egyptians called "the gods' words," writers called scribes wrote "words," called hieroglyphs, which were actual pictures of the thing meant by the word. So these Egyptian scribes were artists as well as writers! But that wasn't all. These pictures could also stand for things connected with themselves; for instance, the picture for mouth could have the meaning of "speak," and the arm, the meaning of "give." And some of the pictures could even stand for sounds, like the letters of our alphabet do. These Web links to Egyptian art at the Metropolitan Museum of Art show how the ancient Egyptians used hieroglyphic writing in their world: A great way to introduce your child to ancient Egyptian art is to visit a museum, or a museum's Web site, and then do this activity. If you are lucky enough to live near a museum, check to see if it has an Egyptian exhibit, and take your child to see it! This activity will introduce children to how Egyptians used decoration to give extra meaning to a structure and will encourage them to explore this concept through their own creativity.
Check out obelisks around the world
Can you think of a tower in Washington, D.C. that was inspired by Egypt? That would be the Washington Monument, which is the largest obelisk (pronounced "ahbehlisk") in the world. An obelisk's four sides carried the carved names and titles of the pharaoh who had erected it, and sometimes pictures were cut near the top.
Of the Egyptian obelisks around today, the tallest is about 100 feet, about the length of two and a half regular city buses. There are many Web sites on obelisks, but one that includes pictures of several existing obelisks is A World of Obelisks.
Make your own life-size obelisk
To decorate your own obelisk, try the following activity. (You can do a simpler version if you want, by just using a sheet of drawing paper cut into the shape of an obelisk.) Take a roll of paper, unroll it to the desired length (the smallest Egyptian obelisks were probably 5 to 6 feet in height), fold over the paper to make a crease at the top, then cut along the crease to detach the sheet. Fairly near the top of this sheet, make diagonal cuts at either side that meet in the middle of the top edge of the paper; this will give the top of your "obelisk" a triangular form that represents the pyramid shape at the top of an actual obelisk. Tape the sheet onto a hard surface with masking tape. (You may want to cut four lengths of paper to make up the four sides of the obelisk to be decorated.)
Decorating your obelisk
Find pictures or Web sites that show Egyptian obelisks and look closely at their decoration. You might want to pretend to be a pharaoh and decorate the obelisk with pictures of great deeds you have done, with your various names as a ruler, and with stories of adventure during your reign. The names and stories can be in English, but perhaps written or arranged in such a way that it looks like Egyptian writing. The design of the decoration should first be created in pencil. Then follow with colored pencils, crayons, chalk, and even paint for color. After finishing, decide where this obelisk should stand!
Write your name in hieroglyphics
If you would like to include your name in hieroglyphs in the decoration of your "obelisk," you may want to look at Write Like an Egyptian.
Once you write out your name in English, it prints out the hieroglyphs for you that stand for the sound of your name. You can then cut out this name and paste it on to the "obelisk." Dr. Mike Norris is associate educator in charge of family programs at the Metropolitan Museum of Art in New York. December 2005
This worksheet is about illnesses. It contains two activities. First, the students are asked to complete the sentences using common problems, such as sore throat, backache, stomachache, etc. Secondly, they are asked to write some advice using should and the words in brackets. It is designed for elementary levels. Hope it works!
chapel, small, intimate place of worship. The name was originally applied to the shrine in which the kings of France preserved the cape (late Latin cappella, diminutive of cappa) of St. Martin. By tradition, this garment had been torn into two pieces by St. Martin of Tours (c. 316–397) that he might share it with a ragged beggar; later Martin had a vision of Christ wearing the half cape, and it was preserved as a relic and carried about by the Frankish kings on their military campaigns. By extension, any sanctuary housing relics was called a chapel and the priest cappellanus, or chaplain. By a further extension, all places of worship that were not mother churches, including a large number of miscellaneous foundations, came to be known as chapels. Oratories, places of private worship attached to royal residences, also were termed chapels. Thus the Sainte Chapelle (1248), the palace chapel at Paris, was built by St. Louis IX to enshrine the relic of what was thought to be the Crown of Thorns, which he had brought from Constantinople. In the next century, other saintes chapelles were founded by princes of the French royal house at Bourges, Riom, and elsewhere. In the European Middle Ages the cult of the Virgin Mary was widespread, and by the close of the 14th century most major churches in western Europe had a Lady chapel. Such extradevotional chapels were largely introduced by the religious orders, and secular clergy in parochial and cathedral churches quickly followed their example. In the 13th century many cathedrals and monastic churches were remodeled to embody a chevet, or semicircular range of radiating polygonal chapels, on the eastern wall. This plan was the standard for the great churches of the Île-de-France region, and it was reflected in England in the churches of Westminster and Canterbury. St. Sernin, at Toulouse, has no fewer than 17 pentagonal chapels, linked by narrow passages. 
The multiplication of chapels in the later Middle Ages stemmed from two innovations: the inclusion of the chantry, a special place of worship established by a donor for the singing of masses after his death, and the formation of numerous guilds or confraternities that built their own chapels in the town churches for corporate worship. The chapels of these guilds were arranged along each side of the nave, either enclosed by party walls inside the church or built out between the buttresses. A domestic chapel intended for private devotions may be attached to a house, college, or other building or institution and is sometimes called an oratory. Thus, the Sistine Chapel is the private chapel of the Vatican, and St. George’s Chapel, Windsor, is the private chapel of Windsor Castle, Berkshire. In modern times, a chapel is, generally speaking, a subordinate house of worship auxiliary to or parallel with a church.
Thirst is harder for trees to endure than hunger, because they can satisfy their hunger whenever they want. Like a baker who always has enough bread, a tree can satisfy a rumbling stomach right away using photosynthesis. But even the best baker cannot bake without water, and the same goes for a tree: Without moisture, food production stops. A mature beech tree can send more than 130 gallons of water a day coursing through its branches and leaves, and this is what it does as long as it can draw enough water up from below. However, the moisture in the soil would soon run out if the tree were to do that every day in summer. In the warmer seasons, it doesn’t rain nearly enough to replenish water levels in the desiccated soil. Therefore, the tree stockpiles water in winter. In winter, the tree is not consuming as much water, because most plants take a break from growing at that time of year. Together with below ground accumulation of spring showers, the stockpiled water usually lasts until the onset of summer. But in many years, water then gets scarce. After a couple of weeks of high temperatures and no rain, forests usually begin to suffer. The most severely affected trees are those that grow in soils where moisture is usually particularly abundant. These trees don’t know the meaning of restraint and are lavish in their water use, and it is usually the largest and most vigorous trees that pay the price for this behavior. In the forest I manage, the stricken trees are usually spruce, which burst not at every seam but certainly along their trunks. If the ground has dried out and the needles high up in the crown are still demanding water, at some point, the tension in the drying wood simply becomes too much for the tree to bear. It crackles and pops, and a tear about 3 feet long opens in its bark. This tear penetrates deep into the tissue and severely injures the tree. 
Fungal spores immediately take advantage of the tear to invade the innermost parts of the tree, where they begin their destructive work. In the years to come, the spruce will try to repair the wound, but the tear doesn’t always heal. From some distance away, you can see a black channel streaked with pitch that bears witness to this painful process. And with that, we have arrived at the heart of tree school. Unfortunately, this is a place where a certain amount of physical punishment is still the order of the day, for Nature is a strict teacher. If a tree does not pay attention and do what it’s told, it will suffer. Splits in its wood, in its bark, in its extremely sensitive cambium (the life-giving layer under the bark): It doesn’t get any worse than this for a tree. It has to react, and it does this not only by attempting to seal the wound. From then on, it will also do a better job of rationing water instead of pumping whatever is available out of the ground as soon as spring hits without giving a second thought to waste. The tree takes the lesson to heart, and from then on it will stick with this new, thrifty behavior, even when the ground has plenty of moisture—after all, you never know! It’s no surprise that it is spruce growing in areas with abundant moisture that are affected in this way: They are spoiled. Barely half a mile away, on a dry, stony, south-facing slope, things look very different. At first, I had expected damage to the spruce trees here because of severe summer drought. What I observed was just the opposite. The tough trees that grow on this slope are well versed in the practices of denial and can withstand far worse conditions than their colleagues, who are spoiled for water. Even though there is much less water available here year round—because the soil retains less water and the sun burns much hotter—the spruce growing here are thriving.
They grow considerably more slowly, clearly make better use of what little water there is, and survive even extreme years fairly well. A much more obvious lesson in tree school is how trees learn to support themselves. Trees don’t like to make things unnecessarily difficult. Why bother to grow a thick, sturdy trunk if you can lean comfortably against your neighbors? As long as they remain standing, not much can go wrong. In natural forests, it is the death from old age of a mighty mother tree that leaves surrounding trees without support. That’s how gaps in the canopy open up, and how formerly comfortable beeches or spruce find themselves suddenly wobbling on their own two feet—or rather, on their own root systems. Trees are not known for their speed, and it takes some species many years before they stand firm once again after such disruptions. The process of learning stability is triggered by painful micro-tears that occur when the trees bend way over in the wind, first in one direction and then in the other. Wherever it hurts, that’s where the tree must strengthen its support structure. This takes a whole lot of energy, which is then unavailable for growing upward. A small consolation is the additional light that is now available for the tree’s own crown, thanks to the loss of its neighbor. But, here again, it takes a number of years for the tree to take full advantage of this. So far, the tree’s leaves have been adapted for low light, and so they are very tender and particularly sensitive to light. If the bright sun were to shine directly on them now, they would be scorched—ouch, that hurts! And because the buds for the coming year are formed the previous spring and summer, it takes a deciduous tree at least two growing seasons to adjust. Conifers take even longer, because their needles stay on their branches for up to 10 years. The situation remains tense until all the green leaves and needles have been replaced. 
The thickness and stability of a trunk, therefore, build up as the tree responds to a series of aches and pains. In a natural forest, this little game can be repeated many times over the lifetime of a tree. Once the gap opened by the loss of another tree is overcome and everyone has extended their crowns so far out that the window of light into the forest is, once again, closed, then everyone can go back to leaning on everyone else. When that happens, more energy is put into growing trunks tall instead of wide, with predictable consequences when, decades later, the next tree breathes its last. So, let’s return to the idea of school. If trees are capable of learning (and you can see they are just by observing them), then the question becomes: Where do they store what they have learned and how do they access this information? After all, they don’t have brains to function as databases and manage processes. It’s the same for all plants, and that’s why some scientists are skeptical and why many of them banish to the realm of fantasy the idea of plants’ ability to learn. But along comes the Australian scientist Monica Gagliano. Gagliano studies mimosas, also called “sensitive plants.” Mimosas are tropical creeping herbs. They make particularly good research subjects, because it is easy to get them a bit riled up and they are easier to study in the laboratory than trees are. When they are touched, they close their feathery little leaves to protect themselves. Gagliano designed an experiment where individual drops of water fell on the plants’ foliage at regular intervals. At first, the anxious leaves closed immediately, but after a while, the little plants learned there was no danger of damage from the water droplets. After that, the leaves remained open despite the drops.
Even more surprising for Gagliano was the fact that the mimosas could remember and apply their lesson weeks later, even without any further tests. It’s a shame you can’t transport entire beeches or oaks into the laboratory to find out more about learning. But, at least as far as water is concerned, there is research in the field that reveals more than just behavioral changes: When trees are really thirsty, they begin to scream. If you’re out in the forest, you won’t be able to hear them, because this all takes place at ultrasonic levels. Scientists at the Swiss Federal Institute for Forest, Snow, and Landscape Research recorded the sounds, and this is how they explain them: Vibrations occur in the trunk when the flow of water from the roots to the leaves is interrupted. This is a purely mechanical event and it probably doesn’t mean anything.1 And yet? We know how the sounds are produced, and if we were to look through a microscope to examine how humans produce sounds, what we would see wouldn’t be that different: The passage of air down the windpipe causes our vocal cords to vibrate. When I think about the research results, in particular in conjunction with the crackling roots I mentioned earlier, it seems to me that these vibrations could indeed be much more than just vibrations—they could be cries of thirst. The trees might be screaming out a dire warning to their colleagues that water levels are running low. Peter Wohlleben studied forestry and was a civil servant in the forestry commission for over 20 years. He holds lectures and seminars and has written books on subjects pertaining to woodlands and nature protection. His bestselling book The Hidden Life of Trees has sold to more than 20 countries. From the book The Hidden Life of Trees, © 2016, by Peter Wohlleben. Published in 2016 by Greystone Books. Reprinted with permission of the publisher. 1. Swiss Federal Institute for Forest, Snow, and Landscape Research WSL.
Rendering ecophysiological processes audible. www.wsl.ch (2015).
2. Anahäuser, M. The silent scream of the lima bean. Max Planck Research 4 (2007).
3. Song, Y.Y., Simard, S.W., Carroll, A., Mohn, W.W., & Zheng, R.S. Defoliation of interior Douglas-fir elicits carbon transfer and defense signaling to ponderosa pine neighbors through ectomycorrhizal networks. Nature, Scientific Reports 5, 8495 (2015).
4. Beiler, K.J., Durall, D.M., Simard, S.W., Maxwell, S.A., & Kretzer, A.M. Mapping the wood-wide web: Mycorrhizal networks link multiple Douglas-fir cohorts. New Phytologist 185, 543-553 (2009).
Escape velocity from the surface of the Earth is about 11.2 kilometers per second (just over forty thousand km/h, or twenty-five thousand mph), meaning that if you want an object to leave Earth and never fall back, you have to throw it at least that fast. How does that work? Well, as you'll remember from high school physics, Newton's law of universal gravitation tells us that an object's weight (the downward force applied by gravity) depends inversely on the square of its distance from the Earth's centre. Double this distance and the downward force is quartered; triple it and the force drops to a ninth; quadruple it and it drops to a sixteenth. This law also tells us that the gravitational influence extends outwards to infinity — you're NEVER free — so any object moving away in freefall is constantly slowing down, until it eventually stops and starts falling back. But if your starting speed is high enough, then the downward force will weaken fast enough to never quite slow the object all the way down. It will never come to a stop, even though it continues to decelerate at a slower and slower pace. The exact speed required for this to happen is the escape velocity. Incidentally, the actual speed required depends on your starting point: if you start from a very high altitude, your escape velocity is lower because the gravitational force is already quite weakened. But on Earth, the few kilometers you can gain by moving from sea level to the top of the tallest mountains make only a negligible difference.
Written by Allen Versfeld
Comments? Questions? Why not mail me at [email protected]
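The relationship described above can be sketched numerically. This is a rough illustration, not part of the original article; the constants are standard textbook values, and the Everest altitude is an assumed round figure.

```python
import math

# Newtonian escape velocity: v = sqrt(2*G*M / r).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m

def escape_velocity(mass_kg, radius_m):
    """Speed needed to escape a body's gravity starting from distance radius_m."""
    return math.sqrt(2 * G * mass_kg / radius_m)

v_surface = escape_velocity(M_EARTH, R_EARTH)           # about 11.2 km/s at sea level
v_everest = escape_velocity(M_EARTH, R_EARTH + 8.8e3)   # from ~Everest height
```

Running this gives roughly 11.19 km/s at sea level, and only a handful of meters per second less from the top of the tallest mountain, which is the "negligible difference" the article mentions.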
This image shows the topography, or shape, of the Earth's surface, on land and below the oceans. Mountain ranges, subduction trenches, tectonic plates, and mid-ocean ridges are all visible in the image. Image from: U.S. Geological Survey
Surface Features of the Earth
This image of the surface of the Earth shows the North and South American continents, as well as the floor of the Pacific Ocean. As can be seen in the image, the ocean floor shows evidence of features such as mid-ocean ridges and subduction trenches, while the continental regions show evidence of features such as mountain ranges. These things provide evidence that, unlike other planets, the surface of the Earth is in motion. Motion of the Earth's surface is called plate tectonics. Compare this active surface with that of Venus, Mars, or Europa.
Calcium is one of the most important elements in the diet because it is a structural component of bones, teeth, and soft tissues and is essential in many of the body's metabolic processes. It accounts for 1 to 2 percent of adult body weight, 99 percent of which is stored in bones and teeth. On the cellular level, calcium is used to regulate the permeability and electrical properties of biological membranes (such as cell membranes), which in turn control muscle and nerve functions, glandular secretions, and blood vessel dilation and contraction. Calcium is also essential for proper blood clotting. Because of its biological importance, calcium levels are carefully controlled in various compartments of the body. The three major regulators of blood calcium are parathyroid hormone (PTH), vitamin D, and calcitonin. PTH is normally released by the four parathyroid glands in the neck in response to low calcium levels in the bloodstream (hypocalcemia). PTH acts in three main ways: (1) It causes the gastrointestinal tract to increase calcium absorption from food, (2) it causes the bones to release some of their calcium stores, and (3) it causes the kidneys to excrete more phosphorous, which indirectly raises calcium levels. Vitamin D works together with PTH on the bone and kidney and is necessary for intestinal absorption of calcium. Vitamin D can either be obtained from the diet or produced in the skin when it is exposed to sunlight. Insufficient vitamin D from these sources can result in rickets in children and osteomalacia in adults, conditions that result in bone deformities. Calcitonin, a hormone released by the thyroid, parathyroid, and thymus glands, lowers blood levels by promoting the deposition of calcium into bone. Most dietary calcium is absorbed in the small intestine and transported in the bloodstream bound to albumin, a simple protein. Because of this method of transport, levels of albumin can also influence blood calcium measurements.
Calcium is deposited in bone with phosphorous in a crystalline form of calcium phosphate.
Deficiency and Toxicity
Because bone stores of calcium can be used to maintain adequate blood calcium levels, short-term dietary deficiency of calcium generally does not result in significantly low blood calcium levels. But, over the long term, dietary deficiency eventually depletes bone stores, rendering the bones weak and prone to fracture. A low blood calcium level is more often the result of a disturbance in the body's calcium regulating mechanisms, such as insufficient PTH or vitamin D, rather than dietary deficiency. When calcium levels fall too low, nerve and muscle impairments can result. Skeletal muscles can spasm and the heart can beat abnormally—it can even cease functioning. Toxicity from calcium is not common because the gastrointestinal tract normally limits the amount of calcium absorbed. Therefore, short-term intake of large amounts of calcium does not generally produce any ill effects aside from constipation and an increased risk of kidney stones. However, more severe toxicity can occur when excess calcium is ingested over long periods, or when calcium is combined with increased amounts of vitamin D, which increases calcium absorption. Calcium toxicity is also sometimes found after excessive intravenous administration of calcium. Toxicity is manifested by abnormal deposition of calcium in tissues and by elevated blood calcium levels (hypercalcemia). However, hypercalcemia is often due to other causes, such as abnormally high amounts of PTH. Usually, under these circumstances, bone density is lost and the resulting hypercalcemia can cause kidney stones and abdominal pain. Some cancers can also cause hypercalcemia, either by secreting abnormal proteins that act like PTH or by invading and killing bone cells causing them to release calcium.
Very high levels of calcium can result in appetite loss, nausea, vomiting, abdominal pain, confusion, seizures, and even coma.
Requirements and Supplementation
Dietary calcium requirements depend in part upon whether the body is growing or making new bone or milk. Requirements are therefore greatest during childhood, adolescence, pregnancy, and breastfeeding. Recommended daily intake (of elemental calcium) varies accordingly: 400 mg for
Calcium absorption is affected by many factors, including age, the amount needed, and what foods are eaten at the same time. In general, calcium from food sources is better absorbed than calcium taken as supplements. Children absorb a higher percentage of their ingested calcium than adults because their needs during growth spurts may be two or three times greater per body weight than adults. Vitamin D is necessary for intestinal absorption, making vitamin D–fortified milk a very well-absorbed form of calcium. Older persons may not consume or make as much vitamin D as is optimal, so their calcium absorption may be decreased.

| Supplement | Elemental calcium by weight | Comment |
| Calcium carbonate | 40% | Most commonly used; less well absorbed in persons with decreased stomach acid (e.g., elderly or those on anti-acid medicines); natural preparations from oyster shell or bone meal may contain contaminants such as lead; least expensive |
| Calcium citrate | 21% | Better absorbed, especially by those with decreased stomach acid; may protect against kidney stones; more expensive |
| Calcium phosphate | 38% or 31% | Tricalcium or dicalcium phosphate; used more in Europe; absorption similar to calcium carbonate |
| Calcium gluconate | 9% | Used intravenously for severe hypocalcemia; well absorbed orally, but low content of elemental calcium; very expensive; available as syrup for children |
| Calcium lactate | 13% | Well absorbed, but low content of elemental calcium |

SOURCE: Gregory, Philip J. (2000). "Calcium Salts." Prescriber's Letter. Document #160313.
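The elemental-calcium percentages make for a quick dose calculation. The sketch below is illustrative only; the lactate (13%) and tricalcium phosphate (38%) fractions appear in the source table, while the carbonate, citrate, and gluconate fractions are standard pharmacology values supplied here as assumptions where the table text is incomplete.

```python
# Fraction of a calcium salt's weight that is elemental calcium.
# Lactate and phosphate values are from the table above; the others
# are standard reference values assumed for illustration.
ELEMENTAL_FRACTION = {
    "calcium carbonate": 0.40,
    "calcium citrate": 0.21,
    "calcium phosphate (tricalcium)": 0.38,
    "calcium gluconate": 0.09,
    "calcium lactate": 0.13,
}

def elemental_calcium_mg(salt, dose_mg):
    """Milligrams of elemental calcium delivered by dose_mg of a given salt."""
    return dose_mg * ELEMENTAL_FRACTION[salt]

# A 1,250 mg calcium carbonate tablet supplies about 500 mg elemental calcium,
# while the same weight of calcium lactate supplies far less.
carbonate_dose = elemental_calcium_mg("calcium carbonate", 1250)
lactate_dose = elemental_calcium_mg("calcium lactate", 1250)
```

This is why the "low content of elemental calcium" note matters: to hit the same elemental target with lactate or gluconate, a much larger tablet (or more tablets) is needed.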
Vitamin C and lactose (the sugar found in milk) enhance calcium absorption, whereas meals high in fat or protein may decrease absorption. Excess phosphorous consumption (as in carbonated sodas) can decrease calcium absorption in the intestines. High dietary fiber and phytate (a form of phytic acid found in dietary fiber and the husks of whole grains) may also decrease dietary calcium absorption in some areas of the world. Intestinal pH also affects calcium absorption—absorption is optimal with normal stomach acidity generated at meal times. Thus, persons with reduced stomach acidity (e.g., elderly persons, or persons on acid-reducing medicines) do not absorb calcium as well as others do. Calcium supplements are widely used in the treatment and prevention of osteoporosis. Supplements are also recommended, or are being investigated, for a number of conditions, including hypertension, colon cancer, cardiovascular disease, premenstrual syndrome, obesity, stroke, and preeclampsia (a complication of pregnancy). There are several forms of calcium salts used as supplements. They vary in their content of elemental calcium, the amount effectively absorbed by the body, and cost. Whatever the specific form, the supplement should be taken with meals to maximize absorption. Calcium is one of the most important macronutrients for the body's growth and function. Sufficient amounts are important in preventing many diseases. Calcium levels are tightly controlled by a complex interaction of hormones and vitamins. Dietary requirements vary throughout life and are greatest during periods of growth and pregnancy. However, recent reports suggest that many people do not get sufficient amounts of calcium in their diet. Various calcium supplements are available when dietary intake is inadequate.
Donna Staton
Marcus Harding
Berkow, Robert, ed. (1997). The Merck Manual of Medical Information, Home Edition. Whitehouse Station, NJ: Merck & Co.
National Research Council (1989).
Recommended Dietary Allowances, 10th edition. Washington, DC: National Academy Press.
Olendorf, Donna; Jeryan, Christine; and Boyden, Karen, eds. (1999). The Gale Encyclopedia of Medicine. Farmington Hills, MI: Gale Research.
Food and Nutrition Board (1999). Dietary Reference Intakes for Calcium, Phosphorous, Magnesium, Vitamin D, and Fluoride. Washington, DC: National Academy Press. Available from <http://www.nap.edu>
Gregory, Philip J. (2000). "Calcium Salts." Prescriber's Letter Document #160313. Available from <http://www.prescribersletter.com>
Laser cutting is a cutting process that severs material with the heat obtained by directing a laser beam against the surface of the workpiece. Hence laser cutting is a technology that uses a laser to cut materials, and is typically used for industrial manufacturing applications. Laser cutting works by directing the output of a high-power laser, under computer control, at the material to be cut. The material then melts, burns, vaporizes away, or is blown away by a jet of gas, leaving an edge with a high-quality surface finish.
Types of lasers used in laser cutting:
• Carbon dioxide laser (CO2): The CO2 laser produces a beam of infrared light in its principal wavelength bands. CO2 lasers are frequently used in industrial applications for cutting, boring, scribing, and engraving.
• Neodymium-doped laser (Nd): Nd lasers are used where high energy but a low repetition rate (around 1 kHz) is required, and they are also used for boring.
• Neodymium-doped Yttrium Aluminum Garnet laser (Nd:YAG): Nd:YAG is a crystal that is used as a lasing medium for solid-state lasers. Nd:YAG lasers are frequently used in medicine, dentistry, military and defense, and in manufacturing applications for very high-energy pulses, boring, engraving, and trimming.
Methods to cut materials using laser cutting:
There are many different methods of cutting with lasers. Some of these are vaporization, melt and blow, thermal stress cracking, and reactive cutting.
• Vaporization cutting: The process in which a focused beam heats the surface of the material to boiling point and generates a keyhole. As the hole deepens and the material boils, the vapor generated erodes the molten walls, blowing ejecta out and further enlarging the hole. Non-melting materials such as wood, carbon, and thermoset plastics are usually cut by this method.
• Melt and blow: The process in which a high-pressure gas is used to blow molten material from the cutting area, greatly decreasing the power requirement.
First the material is heated to its melting point; then a gas jet blows the molten material out of the kerf, avoiding the need to raise the temperature of the material any further. Materials cut with this process are usually metals.
• Thermal stress cracking: The process in which a beam is focused on the surface, causing localized heating and thermal expansion. This results in a crack that can then be guided by moving the beam. It is usually used in the cutting of glass.
• Reactive cutting: Also known as burning-stabilized laser gas cutting or flame cutting, this is the process in which an oxygen torch is used with a laser beam as the ignition source. It is mostly used for cutting carbon steel in thicknesses over 1 mm. This process can be used to cut very thick steel plates with relatively little laser power.
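The material-to-method pairings described above can be captured as a simple lookup. This is an illustrative sketch only; the material keys are assumptions chosen to match the examples in the text, not an exhaustive process table.

```python
# Pairings taken from the descriptions above: non-melting materials are
# vaporized, metals are cut by melt-and-blow, glass by thermal stress
# cracking, and thick carbon steel by reactive (flame) cutting.
CUTTING_METHOD = {
    "wood": "vaporization",
    "carbon": "vaporization",
    "thermoset plastic": "vaporization",
    "metal": "melt and blow",
    "glass": "thermal stress cracking",
    "carbon steel over 1 mm": "reactive cutting",
}

def suggest_method(material):
    """Return the cutting method named in the text for a material, if listed."""
    return CUTTING_METHOD.get(material, "not covered here")

glass_method = suggest_method("glass")
```

In practice a real process decision also depends on thickness, laser type, and finish requirements; the lookup only encodes the rough classification the text gives.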
LED lights are semiconductor diodes that emit light when electric current passes through them. The color they produce depends on their chemical composition, so LED lights don't require a filter to produce a certain color of light. LEDs use far less electricity than incandescent and fluorescent lights. Solar LED lights are ordinary LEDs that have been connected to a photovoltaic cell with an energy storage method. Since LED lights can function efficiently using direct current, no energy conversion is necessary. Solar LED lights are often used for lighting landscaping, pathways, and gardens. They are also sometimes used in motion-sensor lights and in city and commercial lighting. The lights have a solar panel installed on top which charges a battery or a capacitor. LED lights require very little energy to function, so the lights can usually stay on all night without draining all the stored energy. Since solar LED lights are often used outdoors, many of them are waterproof. Solar LED lights have a relatively high up-front cost. They are also affected by the quality of light they're exposed to: on overcast days, the battery might not receive as much of a charge as on sunny days. This can run down the battery, affecting the quality and duration of the light produced. After the initial costs are paid, solar-powered LED lighting produces light for free. Since they don't get their energy from the grid, they don't require cumbersome wires. They also help to reduce carbon dioxide emissions, so they are better for the environment than ordinary light sources. Also, solar LED lights usually don't need much maintenance. LED lights have a much longer lifespan than incandescent or fluorescent lights, which also saves money. When the cost of solar-powered LED lights comes down, they will undoubtedly become much more popular. Solar LED lights have tremendous potential for energy conservation.
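The claim that a small charged battery can keep an LED lit all night is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below is illustrative; the battery capacity, LED current, and usable-fraction figures are assumptions typical of a small garden light, not numbers from the article.

```python
# Rough runtime estimate: hours = usable capacity (mAh) / load current (mA).
# A reserve fraction is kept back because fully draining a battery
# shortens its life.
def runtime_hours(capacity_mah, load_ma, usable_fraction=0.8):
    """Hours a charged battery can power a load, keeping some charge in reserve."""
    return capacity_mah * usable_fraction / load_ma

# Assumed figures: a 600 mAh AA-size rechargeable cell driving one LED at 20 mA.
hours = runtime_hours(capacity_mah=600, load_ma=20)
```

With these assumptions the light runs for roughly a day on a full charge, comfortably covering a night, which is consistent with the article's point that LEDs draw little enough power to last until morning.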
In the future, we will probably see more solar LED lights on roadways and in homes, paths, and gardens. The Energy Independence and Security Act of 2007 has mandated a significant increase in the efficiency of lighting devices, which will eliminate many energy-wasting incandescent bulbs from the market by 2012. The U.S. Department of Energy has also introduced a competition called the Bright Tomorrow Lighting Prize, offering millions of dollars in prize money for the development of highly efficient solid-state LED lighting technology to replace incandescent light bulbs. These efforts will very likely improve the efficiency, availability, and price of solar LED lighting products.
The Middle Ages did not have turkeys, for they are found only in the New World. For a big feast, they might have a duck or a goose, however. Many people kept flocks of geese, valued for their feathers, their eggs, and their meat. In addition, hunting provided wild geese and ducks for the table, as well as sport. When white men first reached the Americas, they were impressed by the large and tasty local fowl, especially the southwestern turkey. Natives in what is now Latin America had domesticated them, just as Europeans had domesticated geese. The Spanish took some home in the sixteenth century, and from Spain they quickly spread throughout the rest of Europe. There was some disagreement at the time as to where these big, exotic birds had come from. The Turks had recently taken over Byzantium and much of the eastern end of the Mediterranean, and anything exotic was routinely ascribed to them. Thus the British called these birds "turkey birds." When, a century later, in 1620, the Puritans left England, seeking a place where they could impose their religion on everyone in sight without any of that pesky Church of England, they brought turkeys with them in cages on the deck of the Mayflower. Imagine their surprise when they reached New England (as of course they called it) and discovered a wild version of what they had assumed was exclusively a Middle Eastern bird. They hunted wild turkeys but did not try to domesticate them, already having domestic turkeys. In the following centuries, wild turkeys declined rapidly due to loss of habitat and over-hunting, although in recent years they have made a substantial comeback. The domestic turkey, meanwhile, has become extremely stupid, so that it could never take care of itself in the wild--and, it is said, they have even lost the ability to breed unassisted, which must certainly be a sign of major debility.
This is a complete resource that was developed to support the Grade 5 Australian Curriculum Science Unit 2. I have printed these onto A3 paper and mounted them onto black card for display in the classroom. They look beautiful. They could also simply be used as a smart board slide show. The graphics are actual images of each element. The packet includes the most important information and a complementary image for each of the following:
- One chart for each of the 8 planets
- One chart for the Dwarf planets
- One chart for the Sun
- One chart for the Moon
- What is a Universe?
- What is a Galaxy?
- What is a Solar System?
- What is a Planet?
- What is a Comet?
- What is an Asteroid?
- What is a Meteoroid/Meteor/Meteorite?
Packets to complement these charts are coming soon. Please follow me so you can get them as they arrive.
Over the past weekend, Saturn reached opposition in the sky—that is, it placed itself directly opposite the Sun, with Earth in the middle, between the two. This also means that it's at its closest to us. Even so, it's more than 830 million miles away. But what if a cosmic conspiracy pushed it out of its orbit and it took a strange turn and came knocking on Earth's door? What if it came as near as the Moon? At that distance, it'd be far too close for any terrestrial or orbital camera to photograph in full. The sky would turn an unnameable hue, tinged with a shade of imminent menace. The horizon would be hijacked; the Sun, eclipsed. And there it'd be—a giant, slow-swirling ringed ball, standing four-square, occupying the entire field of vision. That, of course, is theory. In reality, in the tug-of-war between the two worlds, ours would lose bitterly. Because Saturn is nine times wider than Earth, and nearly 100 times as massive, its immense gravitational force would begin to wreak havoc long ahead of its arrival. To begin with, it'd kick the Moon out of its orbit around Earth. Likewise, Earth's gravity, though far punier, would still act on Saturn's rings, disfiguring them. But even if the rings were spared Earth's token fury, they'd still not escape the Sun's intense heat. As Saturn headed Sun-ward, the ice in the rings would sublimate—go from a solid to a gaseous state—creating a gigantic, trailing cloud of water vapor. On the other hand, what would happen to Earth as it closed in on Saturn? As one of its halves would be closer to Saturn than the other, Earth would be pulled more and more apart by Saturn's mounting gravitational field. When Saturn is at a distance 20 times greater than the Moon's, its tidal forces would begin to come into play, exerting a pull equal to that of the Moon's. That force would jump by a factor of 400 when Earth is about six hundred thousand miles away. At this point, our oceans would be roiled, bringing biblical floods.
When Earth is at the same distance from Saturn as the Moon is from Earth—some 239,000 miles—fault lines would rupture, volcanoes would erupt, and anything left on the surface would be crushed. At about 80,000 miles, Earth would collide with Saturn's rings. The tidal forces would then escalate to 200,000 times the Moon's. As that proximity would fall well within Saturn's Roche limit (the distance at which a celestial body is ripped apart by the tidal forces exerted on it by another celestial body), Earth would be ripped apart. And a new asteroid belt would be born as a result.
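The article's tidal-force figures follow from the standard scaling that tidal (differential) acceleration varies as M/d³. A minimal sketch, using standard values for the masses and the Earth-Moon distance (these constants are assumptions not stated in the article):

```python
# Tidal force scales as mass / distance^3, so Saturn's tide on Earth,
# measured in units of the Moon's tide, is a mass ratio divided by a
# cubed distance ratio.
M_SATURN = 5.683e26  # kg
M_MOON = 7.342e22    # kg
D_MOON = 3.844e8     # mean Earth-Moon distance, m

def tide_vs_moon(distance_m):
    """Saturn's tidal force on Earth, in units of the Moon's tidal force."""
    return (M_SATURN / M_MOON) * (D_MOON / distance_m) ** 3

# Saturn is roughly 7,700 times the Moon's mass, and 7700 ** (1/3) is
# close to 20, so at about 20 lunar distances the two tides match.
ratio_at_20 = tide_vs_moon(20 * D_MOON)
```

Running the check gives a ratio near 1 at 20 lunar distances, consistent with the article's claim, and the ratio climbs steeply (with the cube of closing distance) from there.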
When planning responses to hurricanes and floods, an agency dedicated to "space research" would not immediately come to mind. But the University of Texas at Austin Center for Space Research (UTCSR) has been a key partner of the Texas Division of Emergency Management (TDEM) since 1999. Established in 1981, the research center is involved in everything from exploration of the solar system and the planet Mars to issues on Earth, including agriculture and fisheries, severe weather, and oil spills. Using images and information from satellites belonging to NASA, the National Oceanic and Atmospheric Administration (NOAA), and international space agencies, UTCSR has provided TDEM real-time information on events such as wildfires, tropical storms, and the debris pattern that resulted from the 2003 disintegration of the Space Shuttle Columbia. The Center can also provide computer-generated images dramatically illustrating the impact of storm surge on Texas coastal cities from various categories of hurricanes, as well as digital maps of Texas showing where resources are staged and where evacuation buses and ambulances are traveling. It was satellite imagery from UTCSR that revealed a "wild card" in flood diagnostics during Hurricane Alex. In addition to reservoirs overflowing on both sides of the border, a "lake" had sprung into existence as waters from the storm washed down from the mountains of Mexico. Natural contours of the terrain created a bottleneck on the Rio Salado in the states of Nuevo Leon and Tamaulipas. "I discovered it in satellite imagery on Friday, July 9, and reported it on a TDEM conference call. There's a natural feature that constricts flow downstream along the Rio Salado once the floodplain is completely inundated," said Dr. Gordon Wells, UTCSR program manager. Wells said the river course cuts across a ridgeline and into bedrock, trapping the water. The river backed up and spread across the landscape. "There was no 'lake' before the storm," Wells explained.
Wells suspects the phenomenon might have occurred "sometime in past centuries before the Carranza Dam was constructed [in Mexico] in the late 1920s. But nothing appears in the observational record." The broad, shallow lake revealed by satellite – and the geologic features that contributed to its sudden appearance – had been unknown to other U.S. and Mexican government agencies involved in monitoring and attempting to predict flood impact. Wells said UTCSR works in several different areas of water research, including models of inland flows, plus satellite and aerial mapping for brush control and detection of invasive species along riverbanks and adjacent areas. The center's research on water resources, dams and irrigation practices in Mexico within the Rio Grande basin was particularly useful during this year's floods. "Our data complements what the National Weather Service provides through the West Gulf River Forecast Center," Wells said. "Floods occur across large areas where measurements are not made on the ground," Wells said. "Satellite observations fill the gaps between the available surface measurements, and may be the only source of information for remote areas, such as the ungauged tributaries that originate in the mountains of Mexico. Radar satellites provide the most reliable coverage because they can image regions during the nighttime and daylight – even under heavy cloud cover."
[Image caption: NASA satellite image of Hurricane Alex on June 29. Photo courtesy of NASA.]
The consolidation of the ancient supercontinent Pangea 300 million years ago played a key role in the formation of the coal that powered the Industrial Revolution, a new study asserts. The finding contradicts a popular hypothesis, first formally proposed in the 1990s, that attributes the formation of Carboniferous coal to a 60-million-year gap between the appearance of the first forests and the wood-eating microbes and bacteria that could break them down. “Much of the scientific community was really enamored with this simple, straightforward explanation,” says geobiologist Kevin Boyce, associate professor of geological sciences at Stanford University. “So, it has not only refused to die, it has become a conventional wisdom.” In the new study, published in the Proceedings of the National Academy of Sciences, researchers took a closer look at this “evolutionary lag” hypothesis, examining the idea from various biochemical and geological perspectives. “Our analysis demonstrates that an evolutionary lag explanation for the creation of ancient coal is inconsistent with geochemistry, sedimentology, paleontology, and biology,” says Matthew Nelsen, a postdoctoral researcher in Boyce’s lab and first author of the paper. The scientists examined ancient, organic-rich sediments from North America and showed that not all of the plants that existed during the Carboniferous period, which began about 360 million years ago, possessed high concentrations of lignin, a cell wall polymer that helps give plant tissues their rigidity. Lignin is the biochemical component that, according to the evolutionary lag hypothesis, ancient bacteria and fungi were unable to break down. The researchers also showed that shifts in lignin abundance in ancient plant fossils had no obvious impact on coal formation. In fact, many Carboniferous coal layers were dominated by the remains of lycopsids, an ancient group of largely low-lignin plants. 
“Central to the evolutionary lag model is the assumption that lignin is the dominant biochemical constituent of coal,” Nelsen says. “However, much of the plant matter that went into forming these coals contained low amounts of lignin.” The scientists instead argue that the waxing and waning of coal deposits during the Carboniferous period was closely tied to a unique combination of tectonics and climate conditions that existed during the assembly of Pangea. Synthesizing findings from across various scientific fields, the scientists argue that during the Carboniferous, massive amounts of organic debris accumulated in warm, humid equatorial wetlands. “If you want to generate coal, you need a productive environment where you’re making lots of plant matter and you also need some way to prevent that plant matter from decaying,” Boyce says. “That happens in wet environments.” The other key element that is required to form large coal deposits is an “accommodation space”—essentially a large hole—where organic matter can accumulate over long periods without being eroded away. “So you need both a wet tropics and a hole to fill. We have an ever-wet tropics now, but we don’t have a hole to fill,” Boyce says. “There’s only a narrow band in time in Earth’s history where you had both a wet tropics and widespread holes to fill in the tropics, and that’s the Carboniferous.” During the Carboniferous, amphibian-like creatures were still adjusting to life on land, and hawk-size insects flitted through forests very different from what exists today. “In the modern world, all trees are seed plants more or less,” Boyce says. “Back then, the trees resembled giant versions of ferns and other groups of plants that are now only small herbs. Conifers were just beginning to appear.” The Carboniferous was also a time when geologic forces were herding several large land masses together into what would eventually become the massive supercontinent Pangea. 
Along geologic fault lines where tectonic plates ground against one another, mountain ranges developed and deep basins formed alongside the new peaks. The ponderous pace at which the basins were created meant there was plenty of time for organic matter to accumulate, and as the mountains rose, the basins deepened and even more plant material could pile up. “With enough time,” Boyce says, “that plant matter was eventually transformed into the coal that powered the Industrial Revolution and helped usher in the modern age. Coal, as dead plant matter, is obviously based in short-term biological processes. And yet, as an important part of the long-term carbon cycle, coal accumulation is largely dictated by geological processes that operate on timescales of many millions of years that are entirely independent of the biology.” Researchers at the Smithsonian National Museum of Natural History and at the University of Wisconsin-Madison are coauthors of the study. Source: Stanford University
The Fabaceae are herbs, vines, shrubs, trees, and lianas found in both temperate and tropical areas. They comprise one of the largest families of flowering plants, numbering 630 genera and 18,000 species. The leaves are stipulate, nearly always alternate, and range from bipinnately or palmately compound to simple. The petiole base is commonly enlarged into a pulvinus that functions in orientation of the leaves (sometimes very responsively, as in the sensitive plant, Mimosa pudica). The flowers are usually bisexual, actinomorphic to zygomorphic, slightly to strongly perigynous, and commonly borne in racemes, spikes, or heads. The perianth commonly consists of a calyx and corolla of 5 segments each. The androecium consists of 1–many stamens (most commonly 10), distinct or variously united, sometimes some of them reduced to staminodes. The pistil is simple, often stipitate, comprising a single style and stigma and a superior ovary with one locule containing 2–many marginal ovules. The fruit is usually a legume, sometimes a samara, loment, follicle, indehiscent pod, achene, drupe, or berry. The seeds often have a hard coat with hourglass-shaped cells, and sometimes bear a U-shaped line called a pleurogram.
MathType Tip: Advanced Techniques for Adding Equations and Symbols to Word Documents, Part I

Applies to: MathType 4 and later (Windows); MathType 5 and later (Macintosh)

This Tip explains how to use Word's automatic correction features to make the inclusion of MathType equations in your Word documents easier and faster. MathType and Microsoft Word are powerful tools for authoring documents containing mathematical notation. While the MathType Commands for Word simplify this process, by taking advantage of Word's automatic correction features you can easily insert frequently-used equations and symbols. You can insert equations by typing just a short keyword; Word will automatically replace the keyword with a corresponding equation without opening a MathType window. This Tip explains how to define these keywords in Word and associate them with MathType equations. We also discuss when to insert simple expressions (e.g., subscripted variables) as text and when to insert them as MathType objects.

This article addresses Word 2002 (Office XP) for Windows, but also applies to earlier and later versions of Word for both Windows and Macintosh. Where there are differences in menus or commands between different versions of Word, they will be noted. Although this article only discusses Microsoft Word, similar features are available in Microsoft Access, Excel, Outlook, and PowerPoint, but not in FrontPage. AutoCorrect entries (not AutoText) created in one Office application will be available in the others.

Note: This article assumes you have basic familiarity with Word and MathType. If you do not have basic familiarity with both products, please refer to the appropriate manual, help file, or online tutorial. You should also be familiar with basic Windows and Macintosh features, especially selecting and copying objects.
This article is basic in nature and addresses the following topics:
- Types of automatic correction in Word
- Using MathType with AutoText
- Using MathType with AutoCorrect
- Specific suggestions and examples

We recommend printing this article to make it easier to work through the steps given in the examples below. After you've mastered these concepts, you can proceed to a more advanced Tip.

Types of automatic correction in Word

Word has three types of automatic correction: AutoFormat, AutoText, and AutoCorrect. The options for all three may be viewed and changed by selecting "AutoCorrect Options..." from the Tools menu in Word. (In some versions of Word, the item is titled simply "AutoCorrect".) Remember, even though you are changing the settings in Word, the settings will take effect in all Office applications. There are numerous differences between the three types of automatic correction, but there are similarities as well. The names of the three types give some hint as to their purpose:

AutoFormat is used to change the formatting of characters (such as changing 1/2 to ½ or *bold text* to bold text) or paragraphs (such as changing paragraphs to bulleted or numbered lists, based on characters you enter at the beginning of the line).

AutoText is intended to replace text with other text, but gives you the option of making the replacement. You can cause a character, a word, a paragraph, or even an entire page to be replaced after typing just a few keystrokes.

AutoCorrect is intended to correct misspellings and make simple replacements, but you can also replace entire paragraphs or pages as you can with AutoText. A major difference is that AutoCorrect doesn't give you the option of making the replacement; it just does it.

With AutoFormat, you can choose to apply all formatting changes after you've typed your document, or you can have Word apply each formatting style as you type.
The second method of AutoFormat, which Microsoft appropriately calls "AutoFormat As You Type," is a source of frustration to many Word users. This feature is the one, for example, that continues a numbered list after you type the first element in the list. Let's say you are typing a math test and have numbered the first question. When you hit Enter at the end of the question, Word assumes you have finished the first item in the numbered list and that you want to start the second item. Thus, it begins the new line with the next number in sequence and indents the line accordingly. This may not be what you want, since you may have more than one paragraph in the question (as in a word problem), or you may need a new paragraph to list multiple responses. With AutoFormat (as opposed to AutoFormat As You Type), you can have Word apply the formatting changes when you choose, either all at once or reviewing them one change at a time. To use AutoFormat, select it from the Format menu and follow the prompts. AutoFormat is useful if you have pasted text into the document from another document, or when you have AutoFormat As You Type disabled. Since AutoFormat cannot be used to automatically insert objects such as MathType equations, we will focus the remainder of this Tip on the other two automatic corrections: AutoText and AutoCorrect.

AutoText (N/A Word 2007)

The second type of automatic correction provided by Word is AutoText. AutoText is useful for replacing short text strings with several words or paragraphs. For example, if your name is Frank James and you need to enter your address several times whenever you prepare a particular type of document, you can have Word make the replacement for you every time you type a short keyword. Sometimes you will want to write your name without your address, of course, so a nice feature of AutoText is that Word will display a pop-up asking if you want to make the substitution.
When the pop-up appears, simply keep typing if you don't want Word to make the replacement. To make the replacement, press the Enter key, the Tab key, or F3 (Windows only). This feature can produce surprising results, such as if you were creating a table with the first names of several people as table headings. If you type Frank, followed by the Tab key to move to the next column, Word inserts your full name and address. Any time Word makes an unwanted correction, you can reverse it by selecting Undo from the Edit menu, or by typing Ctrl+Z. In general, AutoText isn't as useful as AutoCorrect for technical papers, but it has some features that make it more attractive than AutoCorrect in specific situations:
- AutoText is better if you don't want to replace every instance of the text.
- AutoText lists are easier to transfer to another computer or to share with colleagues than AutoCorrect lists are. (This feature and the next will be covered in the advanced techniques Application Note.)
- An AutoText entry may be inserted into the document as a field, which allows for easy updating if the contents change.
- Word 2007 handles AutoText as a "Building Block", and it works differently from what is described here. If you're interested in using AutoText with Word 2007, see the Microsoft article on the subject.

The third type of automatic correction available in Word is AutoCorrect. This type is typically used to correct commonly misspelled words (such as "teh" when you meant to type "the"), to fix incorrectly capitalized words (such as "archaeopteryx" when you meant to type "Archaeopteryx"), or to enter common symbols by typing their text counterparts (such as entering © by typing "(c)" or entering ¢ by typing "c/"). AutoCorrect will replace a simple text string with a character, word, phrase, or even paragraphs of text. Both AutoText and AutoCorrect can be used to insert clip art, drawings, or MathType equations.
In this article, we discuss using these features with MathType. In the advanced features Tip, we discuss using the automatic correction features to insert clip art and drawings.

Using MathType with AutoText (N/A Word 2007)

AutoText is very useful for inserting common symbols or formulas with just a few keystrokes. In this section, we'll see how to set up AutoText to insert MathType equations, and how to use this feature in your own Word documents. To use MathType with AutoText, follow these simple steps. Although these steps are specific to Word 2002, you should also be able to use the same procedure in Word 97, 98, 2000, 2001, X, 2003, and 2004 with little or no modification. Keep in mind that we often use the word "equation" to mean anything created with MathType, whether or not there is an equal sign.

Setting up the AutoText entry:
- Insert a MathType equation into your document.
- Select the equation by clicking on it once.
- From the Tools menu, select AutoCorrect Options.
- In the AutoCorrect dialog, select the AutoText tab.
- Notice the equation is already inserted into the Preview window. (There is no way to paste an object into the Preview window. The object is there because you selected it in step 2 above.)
- Type the replacement text in the "Enter AutoText entries here:" box. Choose something that is both easy to remember and not likely to appear in normal text. In this case, "limx0" makes sense because the replacement object is "the limit as x approaches zero". (Actually, you could use the entire phrase "limit as x approaches zero" here, but it's probably easier and quicker to come up with some easily remembered shortcut, such as "limx0".)
- Click Add; the AutoCorrect dialog closes.

Using the AutoText entry:
- In your document, when you type the first 4 letters of your AutoText replacement string, Word offers to replace the string with your AutoText object or text string. Press Enter, Tab, or F3 (Windows only) to make the replacement.
- Once the replacement is made, either continue typing or insert another object.

Using MathType with AutoCorrect

Using MathType with AutoCorrect is very similar to using it with AutoText, but since AutoCorrect doesn't give you the option of whether to make the replacement, it is most often used for common substitutions (such as the examples given above) or for misspellings. Excellent uses of AutoCorrect for technical documents abound; suggestions include shortcuts such as "L-T" (or "l-t") and "sq2" (or "sr2") for frequently needed symbols and expressions. The list is literally endless, but these suggestions give you an idea of the great utility of AutoCorrect when used with MathType. To insert a MathType object into your AutoCorrect list, follow the procedures above for AutoText, except click on the AutoCorrect tab instead of the AutoText tab.

There is an important distinction between AutoCorrect and AutoText that has already been mentioned, but is important enough to repeat: AutoCorrect does not give you the option of whether or not to make the replacement. It makes the replacement immediately upon your typing a "word terminator". You can still undo the replacement as with AutoText (by typing Ctrl+Z or Cmd+Z), but it's best to put only those items in AutoCorrect that you will want replaced every time. (A "word terminator" is anything that terminates a word as you type. When you're typing text, Word knows you have completed the current word when you type any punctuation symbol, the spacebar, the Tab key, or the Enter key. Any of these keys and symbols will cause Word to immediately make an AutoCorrect replacement if one exists.)

Because Word makes the correction immediately upon encountering a word terminator, it's essential that you don't choose a name for an AutoCorrect entry that will appear as a word in normal text. For example, if you want to enter the quadratic formula, don't call the entry "quad" or "quadratic". Both of those are likely to appear as words, and most likely at the least opportune time!
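The word-terminator behaviour described above can be sketched in a few lines of Python. This is a hedged illustration of the rule, not Word's actual implementation: the `autocorrect` function, its entry names, and the bracketed replacement text are all stand-ins of our own.

```python
# Illustrative sketch of Word's AutoCorrect trigger rule: a replacement
# fires only when the entry name is followed by a "word terminator"
# (space, tab, newline, or punctuation), never when the name is merely
# embedded inside a longer word. Entry names here are hypothetical.
import re

TERMINATORS = r"[ \t\n.,;:!?]"

def autocorrect(text: str, entries: dict[str, str]) -> str:
    """Apply AutoCorrect-style substitutions to completed words only."""
    for name, replacement in entries.items():
        # A plain substring match is not enough: 'qu' inside 'quadratic'
        # must not match, so require no word character before the name
        # and a terminator (or end of text) after it.
        pattern = r"(?<!\w)" + re.escape(name) + r"(?=" + TERMINATORS + r"|$)"
        text = re.sub(pattern, replacement, text)
    return text

entries = {"qu": "[quadratic formula]", "teh": "the"}
print(autocorrect("Solve qu for x.", entries))     # whole word: replaced
print(autocorrect("A quick quadratic.", entries))  # embedded 'qu': untouched
print(autocorrect("teh answer", entries))          # misspelling corrected
```

Note how "qu" fires only when a terminator (or the end of the text) follows it as a complete word, which is exactly why names like "quad" or "quadratic" make poor AutoCorrect entries.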
In this case, it's much better to choose a title like "qu" for the replacement. Remember, although the letter combination qu will appear often in documents, it will never appear as a word on its own. Therefore, whenever you type the letters qu, followed by any word terminator, Word will make the substitution and insert the formula. It will not make the substitution when you type the word quadratic, the word quick, or any other word that merely contains the letters qu. See the next section below for some specific suggestions on when to use AutoCorrect and when not to use it.

Specific Suggestions and Examples

Now that you are familiar with the methods of automatic correction in Word, here are some suggestions for use, as well as some specific examples.
- Use AutoText if you don't want to replace every instance of a text string, but would like to choose when to replace it.
- Use AutoCorrect if you want to replace every instance of a text string with another text string or object as you type, for simple substitutions that do not have to be edited, or for commonly misspelled words.

When to use MathType with AutoText and AutoCorrect

These are suggestions for using AutoText and AutoCorrect, but when should you use MathType? Whenever you insert math and science symbols and equations, right? Not necessarily. You could use MathType, for example, to insert the Greek letter pi: π. Your document will be smaller and operate faster, though, if you insert pi by using the Insert Symbol command (from the Insert menu). You could also switch to Symbol font, type the letter p, and switch back to the font you're using for your document. Note that it's not incorrect to use MathType in this case; it's just that there's a better way to do it. Here are some suggestions for when not to use MathType. (The suggestions apply generically within a document, even if you're not using AutoCorrect.)

We recommend you do not use MathType for... | OK to use MathType for...
simple subscripts or superscripts (use subscript and superscript text formatting in Word) | compound superscripts and subscripts, or sub/superscripts within another expression
symbols available with Insert/Symbol (use the Insert Symbol command, e.g. 3 × 4 = 12 or x² ÷ x = x) | combinations of symbols and items not available with Insert/Symbol (use Word for text and MathType for symbols and constructs you can't create otherwise)

*Note: When converting a Word document to HTML, text with super/subscript formatting will look different in Word when compared to the HTML document and to a MathType equation. Comparing the three renderings (in Word using formatting; converted to HTML; as a MathType equation), the difference is easily noticeable, but it is not necessarily an objectionable difference. That's for you to decide, but you should at least be aware of the difference.

These are very specific suggestions, but hopefully you can see the general cases for each. Remember: document stability, size, and simplicity are all optimized when you insert technical expressions as plain text whenever possible. Now let's take a look at some specific examples where AutoText and AutoCorrect can come in very handy.

You are preparing a fractions quiz for your sixth-graders, and you want to create two versions of the quiz, which will contain mixed-number multiplication problems as well as fraction-division problems. Having just completed this Application Note on AutoCorrect, you realize this is a perfect use of the feature. You decide to enter 5 different fractions and 5 different mixed numbers, as well as the multiplication and division symbols and a blank answer space, into AutoCorrect. You choose the fractions and the mixed numbers you'll need. To use logical names, you name them 1/2, 2/3, etc. for the fractions, and 11/2, 22/3, 24/7, etc. for the mixed numbers. Since the letters m and d will never appear alone in the text of a document, you use "m" for the AutoCorrect entry for the multiplication symbol (×) and "d" for the division symbol (÷).
You also want to leave 10 underscore characters for the student to write the answer, so you type 10 underscores, highlight them, select Tools/AutoCorrect Options, and call the entry "ans". Now you're ready, and you enter "1/2 m 3/5 = ans " for the first question; in your document, you see the fully formatted problem followed by the answer blank.

Try it out: Use MathType and AutoCorrect to enter the fractions and mixed numbers described above, as well as the multiplication and division symbols. Use Insert/Symbol in Word to enter the two symbols. Be sure to highlight the symbol before you select Tools/AutoCorrect Options. Use whatever shortcut names are logical to you, either the ones we suggest above or your own. Finally, enter the answer blank into AutoCorrect as described above, then try it out. See how easy it is to make a 10-question quiz using MathType and AutoCorrect.

Next, suppose you want to create a test to see if your students understand proportions. You decide to create some blank macros in MathType so that you only have to fill in the empty slots to complete the problems. (If you're unsure how to do this, refer to the MathType documentation.) You enter 4 blank proportions into MathType's Large Tabbed Bar (in either MathType for Windows or MathType for Macintosh). You wonder if this is a good application for AutoCorrect or AutoText, but your colleague points out that if you define these as AutoCorrect entries, you'll still have to edit them after you enter them into the Word document. It would be much better to just leave them on the MathType Large Tabbed Bar and insert them separately as MathType objects, since that would be much quicker than using AutoCorrect or AutoText entries. As a general rule, you should never use AutoCorrect or AutoText for something that will have to be edited after it's inserted into the document.

When using MathType with Word:
- You'll have a smaller, faster, cleaner document if you let Word do what it can without MathType (simple subscripts, superscripts, etc.). Just be aware of the difference in appearance.
If it's more important to have consistent-looking equations, use MathType throughout.
- Using MathType with AutoCorrect and AutoText is a great way to speed up your work, but it ends up being counterproductive if you let Word make an automatic correction that you then have to edit in MathType. You're better off inserting the expression directly from MathType, without the intermediate step of AutoCorrect or AutoText.

In most cases, you'll find AutoCorrect superior to AutoText:
- Unformatted AutoCorrect entries are available to all Office applications.
- When an AutoCorrect replacement is made in Office XP or later (including Office 2003 and 2004), if you hover your cursor over the corrected entry, a "lightning bolt" icon gives you access to the full array of AutoCorrect options, both for this individual entry and for AutoCorrect in general.
- It takes an extra keystroke (Enter, Tab, or F3) to put an AutoText entry into a document. This may be distracting.

However, in some instances, AutoText is superior:
- You can create AutoText entries without fear of accidentally triggering a replacement, since AutoText requires input from you to make the replacement.
- AutoText screen tips warn you about the contents of the replacement; AutoCorrect gives no warning.
- AutoCorrect entries are global, so they take effect throughout Office, whereas AutoText entries are specific to Word.

Now that you've mastered the basic concepts of AutoCorrect and AutoText, you're ready for the next Tip. In the advanced Tip, we'll cover:
- these two methods of automatic correction in more detail,
- using AutoText and Word field codes to make global substitutions in your documents,
- using AutoCorrect and AutoText to enter Word AutoShapes and clip art, and
- exchanging AutoText and AutoCorrect lists with colleagues or transferring them to another computer.

On to the advanced Tip...
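As a rough programmatic analogue of the fractions-quiz example above, the sketch below builds quiz lines and their answer keys with Python's standard `fractions` module. The helper names (`make_question`, `answer`, `ANSWER_BLANK`) are invented for illustration, and mixed numbers are rendered as improper fractions rather than in MathType's stacked layout.

```python
# Build fractions-quiz questions like the AutoCorrect example: the "m"
# and "d" codes mirror the suggested shortcut names for × and ÷, and the
# answer blank is the ten underscores entered as the "ans" shortcut.
from fractions import Fraction

ANSWER_BLANK = "_" * 10  # the "ans" entry: ten underscores

def make_question(a: Fraction, op: str, b: Fraction) -> str:
    """Format one quiz line, e.g. '1/2 × 3/5 = __________'."""
    symbol = {"m": "×", "d": "÷"}[op]  # 'm' -> multiply, 'd' -> divide
    return f"{a} {symbol} {b} = {ANSWER_BLANK}"

def answer(a: Fraction, op: str, b: Fraction) -> Fraction:
    """Compute the answer-key value for a question."""
    return a * b if op == "m" else a / b

q = make_question(Fraction(1, 2), "m", Fraction(3, 5))
print(q)                                            # 1/2 × 3/5 = __________
print(answer(Fraction(1, 2), "m", Fraction(3, 5)))  # 3/10
```

Pairing `make_question` with `answer` gives both versions of the quiz and their keys from the same list of fractions, which is handy when you need two variants of a 10-question test.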
Indo-European Migrations, pp. 42-45 (period 8: pp. 54-57), Ben Hiatt, per. 1, 9/4/12
1. Linguists noticed that certain languages were related and called them Indo-European. List the major subgroups of this family of languages. The major subgroups of this family of languages are Hindi, Farsi, and most European languages.
2. Where was the original homeland of the Indo-European speakers? The original homeland of the Indo-European speakers was probably the steppe region of modern-day Ukraine.
3. How did the domestication of horses facilitate the lives of the Indo-Europeans? (think migration) It was easier to migrate on horseback than to walk on foot.
4. Describe the migration of the Indo-Aryans from 3000 BCE to 1000 CE. The earliest Indo-European society began to break up around 3000 BCE, and the migrations continued until about 1000 CE.
5. Discuss thoroughly the Hittites. The Hittites built a powerful kingdom and established a close relationship with the Mesopotamians. They were responsible for light, horse-drawn war chariots.
6. Discuss the two technological innovations of the Hittites. The two technological innovations the Hittites made were light, horse-drawn war chariots and the refinement of iron metallurgy, both of which greatly strengthened their society and influenced other peoples' societies too.
7. Discuss the eastern migration of the Indo-Aryans. While the Hittites were building their empire in Anatolia, some Indo-Europeans were migrating east to central Asia.
8. Discuss the western migrations of the Indo-Aryans. This move west took the Indo-Aryans into Greece after 2200 BCE.
9. Discuss the southern migrations. Another wave of migrations established some Indo-Europeans in present-day Iran and India.
Sacagawea
Assisted Lewis and Clark on their expedition from 1804-1806. She and her husband were their guides from the Great Plains to the Pacific Ocean and back. She was a big contribution to their success because she knew how to make medicine, she knew how to get supplies, and she helped protect the party from attack, because tribes that saw an Indian woman traveling with the expedition assumed it was peaceful. She was therefore an important figure in their lives, and on the return trip she guided them when they were lost.
Lewis & Clark Expedition
Famous explorers whose expedition led to the exploration of western North America, through lands that now compose states such as Illinois, Missouri, Idaho, Washington, and Oregon. The exact trail that the explorers traveled is not known, since they followed rivers, and over time rivers have shifted course as a result of natural occurrences and the formation of dams. Lewis and Clark didn't officially discover western North America, since Native Americans had long occupied and explored these areas.
Impressment
The practice of forcing a man into naval service. Some 2,500 American sailors were impressed by the British, who would take men off American ships and compel them to serve on British ones.
Battle of New Orleans
This 1815 battle was the last of the War of 1812, and it actually occurred after the Treaty of Ghent had been signed, because Americans were unaware of the treaty. General Andrew Jackson successfully defended New Orleans against thousands of British troops, making himself a national hero and making Americans feel nationalistic, as though they were one of the top world powers (even though the war itself ended in stalemate).
Tecumseh
A Shawnee chief who, along with his brother Tenskwatawa, a religious leader known as The Prophet, worked to unite the Northwestern Indian tribes. The league of tribes was defeated by an American army led by William Henry Harrison at the Battle of Tippecanoe in 1811.
Tecumseh was killed fighting for the British during the War of 1812 at the Battle of the Thames in 1813.
Aaron Burr
Served as the 3rd Vice President of the United States. A member of the Republicans and President of the Senate during his Vice Presidency. He was defamed by the press, often by the writings of Hamilton. He challenged Hamilton to a duel in 1804 and killed him.
Thomas Jefferson
Virginian, architect, author, governor, and president. Lived at Monticello. Wrote the Declaration of Independence. Second governor of Virginia. Third president of the United States. Designed the buildings of the University of Virginia.
Dolley Madison
Wife of President Madison, who rescued documents and George Washington's portrait from the White House before the British could burn them. Best known for her pleasant, open direction of White House social life.
James Monroe
He was the fifth President of the United States and the author of the Monroe Doctrine, which proclaimed that the Americas should be closed to future European colonization and free from European interference in sovereign countries' affairs. It further stated the United States' intention to stay neutral in European wars.
John Quincy Adams
Secretary of State under Monroe, he later served as the sixth president. In 1819, he drew up the Adams-Onís Treaty, in which Spain gave the United States Florida in exchange for the United States dropping its claims to Texas. The Monroe Doctrine was mostly Adams' work.
Gabriel's Rebellion
Gabriel was a literate enslaved blacksmith who planned to lead a large slave rebellion in the Richmond area in the summer of 1800. However, information regarding the revolt was leaked prior to its execution, and Gabriel's plans were foiled. Gabriel and twenty-five other members of the revolt were hanged. In reaction, the Virginia and other legislatures passed restrictions on free blacks, as well as on the education, movement, and hiring out of the enslaved.
Louisiana Purchase
The U.S., under Jefferson, bought the Louisiana territory from France, under the rule of Napoleon, in 1803. The U.S.
paid $15 million for the Louisiana Purchase, and Napoleon gave up his empire in North America. The U.S. gained control of the Mississippi trade route and doubled its size.
Embargo Act of 1807
Jefferson's response to the cry for war. It prohibited American ships from leaving port for any foreign destination, so they completely avoided French and British ships. It resulted in an economic depression and was his most unpopular policy of both terms.
Battle of Tippecanoe
Shawnees vs. William Henry Harrison; the Indians lost. Colonists thought the Indians had been working with the British. Tecumseh was the leader of the Indians.
War Hawks
Southerners and Westerners who were eager for war with Britain. They had a strong sense of nationalism, and they wanted to take over British land in North America and expand.
Another name given to the War of 1812 by Southerners and Georgians who suspected Britain of supplying the Creek with weapons.
Hartford Convention
Meeting of Federalists dissatisfied with the war to draft a new Constitution; resulted in the seemingly traitorous Federalist party's collapse.
Missouri Compromise
Agreement, proposed in 1819 by Henry Clay, to keep the number of slave and free states equal.
Monroe Doctrine
President James Monroe's statement forbidding further colonization in the Americas and declaring that any attempt by a foreign country to colonize would be considered an act of hostility.
Andrew Jackson
7th president of the US; successfully defended New Orleans from the British in 1815; expanded the power of the presidency. Democrat.
Henry Clay
Distinguished senator from Kentucky, who ran for president five times until his death in 1852. He was a strong supporter of the American System, a war hawk for the War of 1812, Speaker of the House of Representatives, and known as "The Great Compromiser." He outlined the Compromise of 1850 with five main points but died before it was passed.
John C. Calhoun
The 7th Vice President of the United States and a leading Southern politician from South Carolina during the first half of the 19th century.
He was an advocate of slavery, states' rights, limited government, and nullification.
David Walker
A black abolitionist who called for the immediate emancipation of slaves. He wrote the "Appeal to the Colored Citizens of the World," which called for a bloody end to white supremacy. He believed that the only way to end slavery was for slaves to physically revolt.
Charles Grandison Finney
One of the most important leaders in the Second Great Awakening. He began preaching about Christianity; he believed sin was avoidable and that doing good deeds proved one's faith.
Martin Van Buren
Senator from NY, vice president under Jackson, and president of the United States; the Panic of 1837 ruined his presidency, and he was voted out of office in 1840. Democrat.
William Henry Harrison
An American military leader, politician, the ninth President of the United States, and the first President to die in office. His death created a brief constitutional crisis and raised questions about presidential succession that were not fully answered by the Constitution until passage of the 25th Amendment. He led US forces in the Battle of Tippecanoe.
Erie Canal
A 363-mile-long artificial waterway connecting the Hudson River with Lake Erie, built between 1817 and 1825 to connect the eastern US and the Great Lakes in the Midwest in order to increase settlement and trade.
Panic of 1819
Economic panic caused by extensive speculation and a decline of European demand for American goods, along with mismanagement within the Second Bank of the United States. Often cited as the end of the Era of Good Feelings.
Whigs
Conservatives, popular with pro-Bank people and plantation owners. They mainly came from the National Republican Party, which was once largely Federalist. They took their name from the British political party that had opposed King George during the American Revolution. Their policies included support of industry, protective tariffs, and Clay's American System. They were generally upper class in origin.
Included Clay and Webster. Supported independence.
Trail of Tears
Refers to the forced relocation in 1838 of the Cherokee Native American tribe to the Western United States, which resulted in the deaths of an estimated 4,000 Cherokees. It resulted from the enforcement of the Treaty of New Echota, an agreement signed under the provisions of the Indian Removal Act.
Indian Removal Act of 1830
Jackson's policy led to the forced uprooting of more than 100,000 Indians; in 1830, Congress passed this act providing for the transplanting of all Indian tribes then resident east of the Mississippi.
Second Bank of the United States
National bank organized in 1816; closely modeled after the first Bank of the United States, it held federal tax receipts and regulated the amount of money circulating in the economy. The Bank proved to be very unpopular among western land speculators and farmers, especially after the Panic of 1819.
Second Great Awakening
Series of religious revivals starting in 1801, based on Methodism and Baptism, which stressed a philosophy of salvation through good deeds and tolerance for Protestants. It attracted women, African Americans, and Native Americans.
Panic of 1837
First depression in American history. Banks lost money, people lost faith in banks, and the country lost faith in President Martin Van Buren. It lasted four years and was due to large state debts, over-expansion of credit, an unfavorable balance of trade, crop failures, and the frenzy caused by the avalanche of land speculation.
Cult of Domesticity
Nineteenth-century idea in Western societies that men and women, especially of the middle class, should have clearly differentiated roles in society: women as wives, mothers, and homemakers; men as breadwinners and participants in business and politics.
William Lloyd Garrison
An abolitionist and the editor of the radical abolitionist newspaper, The Liberator, and also one of the founders of the American Anti-Slavery Society.
Democratic Party
Political party that generally stressed individual liberty and the rights of the common people, led by Andrew Jackson from 1828 to 1856.
Angelina and Sarah Grimké
The daughters of a wealthy slave owner, they came to hate slavery, moved to Philadelphia, and spoke out for slaves' rights.
American Temperance Society
Society established in 1826 in Boston, which promoted temperance, favoring the riddance of alcohol; its members believed that alcohol yielded negative effects in the household. Many also favored the abolition of slavery, expanding women's rights, and the improvement of society.
Both racism and jokes are social and cultural products. The ideology of racism holds that humankind comprises different races which vary in their worth. Racism dictates, explains and justifies who does what to whom, where, when, why and how. Caucasians claim the right to treat other ‘races’ in whatever manner they see fit, including disparagement in the form of jokes. Black people are commonly patronised or insulted under the pretext of humour. Here is an example. A few years ago, a black person was inside a local shop when a man covered in coal dust entered and placed his hand next to the black person’s. He then chanted ‘I wanna be like you, black like you’. The black person objected on the grounds that, unlike the ‘joker’, he was black but not dirty. Those in the shop joined in the denial of racism: ‘it’s only a joke’, they said, almost in unison. One of them actually counselled him (the victim) to cultivate a sense of humour in order to ‘get on in this world’. This is not a hypothetical example. The incident involved one of the authors of this chapter.

Keywords: Black People, Black Family, Jewish People, Racial Stereotype, Black Person
Key stage 1
Pupils should develop knowledge about the world, the United Kingdom and their locality. They should understand basic subject-specific vocabulary relating to human and physical geography and begin to use geographical skills, including first-hand observation, to enhance their locational awareness.

Locational knowledge involves children being able to name and locate the world’s seven continents and five oceans, and to name, locate and identify characteristics of the four countries and capital cities of the United Kingdom and its surrounding seas.

Place knowledge involves developing an understanding of geographical similarities and differences through studying the human and physical geography of a small area of the United Kingdom, and of a small area in a contrasting non-European country.

Human and physical geography focuses on being able to identify seasonal and daily weather patterns in the United Kingdom and the location of hot and cold areas of the world in relation to the Equator and the North and South Poles. Alongside this is the development of skills in using maps and atlases.

Key stage 2
Pupils should extend their knowledge and understanding beyond the local area to include the United Kingdom and Europe, North and South America. This will include the location and characteristics of a range of the world’s most significant human and physical features. They should develop their use of geographical knowledge, understanding and skills to enhance their locational and place knowledge.
Personal development is a multi-billion dollar industry, and one thing participants hear over and over again – whether attending seminars or buying self-help books – is, “These are the skills you were never taught in school!” True, teaching children how to think positively and achieve more in life just never seems to fit into the curriculum alongside reading, writing and arithmetic (not to mention history, geography and science). But parents can help teach these skills at home, and we're not talking about just getting them to study. Here are some guidelines for giving your child the tools and strategies to develop a genius mindset, and achieve more in the classroom and beyond.

Visualization is a very effective technique used by professional athletes and other top performers. Picturing what you want to achieve before you do it preps you to actually accomplish it, because our brains can’t tell the difference between vivid imagination and reality. Since our kids have such amazing imaginations, they’re at the perfect age to use this technique. Author and life coach Terri Levine says, “When children are able to visualize, they are able to learn better as well as fully comprehend subject matter. When a child can see and imagine possibilities they can create solutions.” Your child can visualize doing well on a test, or imagine pictures that help her remember the answers. Levine has several strategies that will help:

- Tell your child a short story, and then ask him what he sees in his head. What does the character or the scene look like?
- Describe a made-up animal or a vacation spot, and ask your child what it looks like to him.
- Pick a new word, and encourage your child to picture something that will help him remember it.
- Have your child visualize math answers coming easily, and showing up on the page – often children with poor math skills end up doing better just from this tip.

Affirmations are positive statements we say to ourselves, and they can have amazing results.
Adults often struggle with not believing the words, but kids haven’t established as many limiting beliefs, and their brains are more receptive to new ideas. Leah Davies, M.Ed., former teacher and creator of the Kelly Bear learning materials for kids, says, “Affirmations serve to encourage children to be the best that they can be. For example, ‘I do not give up; I keep trying,’ or ‘I am unique, one of a kind.’ Parents can help their child through discussion and by example to use ‘self-talk’ when needed. The result is increased self-awareness, and a happier, well-adjusted child, both at home and at school.” Both visualization and affirmations work better when coupled with an appropriate emotion. “As a child reads, for example, if they feel the words and images they can determine if what they are reading makes sense. This learning strategy helps them make important connections to the material they are learning,” Levine says. Developing a genius mindset is not just about the inner world, and activating the subconscious mind. The physical outer world is just as important. Therese Pasqualoni, Ph.D., health educator and creator of the Strike It Healthy System, says, “Healthier choices are digested easily and healthy nutrients affect brain chemistry in positive ways, such as improved learning and memory capabilities. On the flip side, unhealthy choices affect brain chemistry in negative ways… like clogging the mind and impeding learning opportunities.” More specifically, “A whole grain breakfast and mid-morning fruit break improve learning, and consistent study habits that include association games improve memorization. Children should steer away from food dyes and preservatives that can hinder their ability to learn.” She suggests spending one hour a week being active with your family, and eating at least two meals a day together. 
At the same time, you can play word games, such as introducing your child to a new definition, or taking turns coming up with similar or opposite words. “Research shows that family-time conversation improves children's verbal skills,” she says. Whichever strategies you choose for physical, emotional and mental health – seeing, saying, feeling or tasting – rest assured that you’re giving your child tools that will help him for the rest of his life.
Competition
Economists classify markets based on how competitive they are.
Market structure: describes the level of competition found in an industry.
- Perfect Competition
- Monopolistic Competition
- Oligopoly
- Monopoly

Perfect Competition
Definition: the ideal model of a market economy.
Note: "ideal" here means a model, not a reality in most cases.

5 Characteristics of Perfect Competition
1. Numerous buyers and sellers. No single buyer or seller has the power to control prices; buyers have lots of options, and sellers are able to sell their products at the market price.
2. Standardized product: a product that consumers see as identical regardless of the producer. Examples: milk, eggs.
3. Freedom to enter and exit markets. Producers enter the market when it is profitable and exit when it is unprofitable.
4. Independent buyers and sellers. This allows supply and demand to set the equilibrium price.
5. Well-informed buyers and sellers. Buyers compare prices, and sellers know what consumers are willing to pay for goods.

Examples of markets that are close to perfect competition: corn, beef.

Imperfect Competition
Market structures that lack one of the conditions needed for perfect competition are examples of imperfect competition. This means there are only a few sellers and/or products are not standardized.

Monopoly
Definition: a market structure in which only one producer sells a product for which there are no close substitutes. Pure monopolies are rare.
A cartel is close to a monopoly. Cartel: a group of sellers that act together to set prices and limit output. Example: OPEC, whose 11 member nations hold more than 2/3 of the world's oil reserves.
Why do monopolies have no competition? Other firms struggle to enter the market due to a barrier to entry: something that stops a business from entering a market.

3 Characteristics of Monopolies
1. Only one seller. The product has no close substitutes.
2. A restricted or regulated market. In some cases, government regulations allow a single firm to control a market (think utilities).
3. Control of prices. Prices are controlled since there are no close substitutes.

Types of Monopolies
First, not all monopolies are harmful. When monopolies are harmful to consumers, the government has the power to regulate them or break them up (Sherman Anti-Trust Act of 1890).

Questions
1. Suppose that you went to a farmers’ market and found several different farmers selling cucumbers. Would you be likely to find a wide range of prices for cucumbers? Why or why not?
2. What would happen to a wheat farmer who tried to sell his wheat for $2.50 per bushel if the market price were $2.00 per bushel? Why?
3. In 2003, 95% of the households in the U.S. had access to only 1 cable TV company in their area. What type of monopoly did cable TV companies have? Explain your answer.
4. In 2002 the patent on the antihistamine Claritin expired. Using the 3 characteristics of a monopoly, explain what happened to the market for Claritin when the patent expired.

Monopolistic Competition
Definition: when many sellers offer similar, but not standardized, products.
Monopolistic competition is based on product differentiation and non-price competition.
- Product differentiation: an attempt to distinguish a product from similar products.
- Non-price competition: using factors other than low price to convince consumers to buy products ("our car is better quality", "our burger tastes better", "our jeans are hipper", "our purse is a status symbol").

4 Characteristics of Monopolistic Competition
1. Many sellers and many buyers. Meaningful competition exists; for example, there are many restaurants where you can buy a hamburger.
2. Similar but differentiated products. Sellers try to convince consumers that their product is different from that of the competition.
3. Limited control of prices. Product differentiation gives producers limited control over price; consumers will buy substitute goods if the price goes too high.
4. Freedom to enter or exit the market. There are no huge barriers to entering a monopolistically competitive market; when firms make a profit, other firms enter the market and increase competition.

Oligopoly
Definition: a market structure in which only a few sellers offer a similar product. A few large firms hold a large market share: the percent of total sales in a market.

4 Characteristics of Oligopolies
1. Few sellers and many buyers. Generally found where the 4 largest firms control 40% of the market. Example: the breakfast cereal industry.
2. Standardized or differentiated products. Products can be standard, such as steel, with firms differentiating themselves on brand name, service, or location; or differentiated, such as cereal and soft drinks, with firms using marketing strategies to separate themselves from competitors.
3. More control of prices. Each firm has a large enough share of the market that its decisions about price and supply affect the others.
4. Little freedom to enter or exit the market. Set-up costs are high, and established brands make it hard for new firms to enter the market successfully.

Regulation and Antitrust
Regulation: a set of rules or laws designed to control business behavior, promote competition, and protect consumers.
Antitrust legislation: laws that define monopolies and give government the power to control them and break them up. Example: the Sherman Antitrust Act.
Trust: when a group of firms is combined to reduce competition in an industry. Example: the Standard Oil Company.
Merger: when 2 firms join together to become 1. If a merger would eliminate competition, it will be denied by the government. Example: Google and Motorola.
The FTC and the Department of Justice are responsible for enforcing antitrust laws.
Deregulation: reducing or removing government control of a business; it results in lower prices for consumers and more competition. Example: the airline industry was deregulated in 1978.

Questions
1. In 2005, a major U.S. automaker announced a new discount plan for its cars for the month of June. It offered consumers the same price that its employees paid for new cars. When the automaker announced in early July that it was extending the plan for another month, the other 2 major U.S. automakers announced similar plans. What market structure is exhibited in this story, and what specific characteristics of that market structure does it demonstrate?
2. Why do manufacturers of athletic shoes spend money to sign up professional athletes to wear and promote their shoes rather than differentiating their products strictly on the basis of physical characteristics such as design and comfort?
3. The Telecommunications Act of 1996 included provisions to deregulate the cable industry. In 2003, consumers complained that cable rates had increased by 45% since the law was passed. Only 5% of American homes had a choice of more than 1 cable provider. Those homes paid about 17% less than those with no choice of cable provider. How effective had deregulation been in the cable industry by 2003? Explain your reasoning.
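The oligopoly rule of thumb used above (the four largest firms controlling around 40% of the market) is known as the four-firm concentration ratio, or CR4. As a rough sketch of the arithmetic, using entirely made-up sales figures for hypothetical firms, it can be computed like this:

```python
# Four-firm concentration ratio (CR4): the combined market share of the
# four largest firms. A CR4 of roughly 40% or more suggests an oligopoly.
def cr4(sales):
    """Return the CR4 (in percent) for a dict of firm -> sales."""
    total = sum(sales.values())
    # Take the four largest sales figures
    top_four = sorted(sales.values(), reverse=True)[:4]
    return 100 * sum(top_four) / total

# Hypothetical sales figures (not real data)
cereal_sales = {'Firm A': 30, 'Firm B': 20, 'Firm C': 15,
                'Firm D': 10, 'Firm E': 10, 'Firm F': 8, 'Firm G': 7}
print(f'CR4 = {cr4(cereal_sales):.0f}%')  # CR4 = 75%
```

With these numbers the top four firms sell 30 + 20 + 15 + 10 = 75 out of 100 units, so CR4 is 75%, which is well above the 40% threshold the slides mention.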
Print LCM of Two Integers in Python

Computation and programming combined make a very powerful pair. Solving complex mathematical questions is a challenge in itself, but doing the same thing by writing code adds another layer of complexity. The language you're coding in also determines whether the task will be easy or difficult. So today we're going to write a program to print the LCM of two integers in Python.

What's The Approach?
- Let us consider a and b to be the two integers whose LCM we want to find.
- First we'll find the gcd of these two numbers using recursion: if a is zero we return b, else we recursively return gcd(b % a, a).
- Next, we divide a by the gcd of the two numbers and multiply the result by b. The returned value is our LCM.

Also Read: Print Cube Root of A Number in Java

Python Program To Print LCM of Two Integers

Input: a = 15, b = 20
Output: LCM of 15 and 20 is 60

# Python program to find the LCM of two numbers

# Recursive function to return the gcd of a and b
def gcd(a, b):
    if a == 0:
        return b
    return gcd(b % a, a)

# Function to return the LCM of two numbers
def lcm(a, b):
    # Integer division keeps the result an int (15 / 5 would give a float)
    return (a // gcd(a, b)) * b

# Driver code to test the functions above
a = 15
b = 20
print('LCM of', a, 'and', b, 'is', lcm(a, b))
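As an aside, Python's standard library already provides a gcd function in the math module, so the same LCM can be computed without hand-writing the recursion. A minimal sketch:

```python
# Alternative LCM using the standard-library gcd (available since Python 3.5)
from math import gcd

def lcm(a, b):
    # Divide before multiplying to keep the intermediate value small
    return (a // gcd(a, b)) * b

print('LCM of 15 and 20 is', lcm(15, 20))  # prints: LCM of 15 and 20 is 60
```

From Python 3.9 onwards there is also math.lcm, which does this in a single call.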
The anatomy of a sheep includes, among other components, its reproductive system, skull, horns, digestive system, and other internal organs. Ewe anatomy is similar to ram anatomy, with the main difference being their reproductive systems, size, and frequently, lack of horns.

Table of Contents

Ewe reproductive systems
The reproductive system of a ewe consists of ovaries, oviducts, and a uterus. The ova develop in the ovaries and are then released into the oviducts.
- Beyond the ovaries and oviducts, the reproductive tract includes a cervix and the uterus, where the embryo implants and develops. The uterus is lined with endometrium, which nourishes the embryo as it grows within it.
- A sheep’s heat cycle occurs around every 17 days.
- Sheep are in heat for around 24-36 hours during each cycle.
- Depending on the breed, sheep may breed only during a certain season. Some types of sheep breed all year round.
- Sheep pregnancies range from 138 to 159 days.

Ram reproductive systems
Rams have two testicles. The testicles produce testosterone, which helps to build muscle mass and increase libido in rams. They also produce sperm. Testosterone makes rams more aggressive toward other rams, leading to fights for dominance within their herd. The daily average sperm count in rams is about 20 million sperm cells for each testicle.

Ewe skulls are smaller than ram skulls. Among horned sheep, the skull supports the horns. Its sturdy structure protects the brain from impact and potential injuries. A sheep’s skull is composed of the maxilla, the frontal bone, the sphenoid bone, the occipital bone, and the parietal bone.
- The maxilla is located above the eye sockets. It is used as leverage for the ophthalmic muscles to attach to the skull.
- The frontal bone is located on the top and front of the head.
- The sphenoid bone attaches to the occipital bone at the back of the ewe’s head. It helps protect the brain.
- The occipital bone is located behind the ears and attaches to the skull, with a ball joint that allows for movement.
- The parietal bone surrounds the brain and forms part of the forehead. It also contains blood vessels and nerves that help control eye movement, hearing, balance, and smell.

Not all sheep have horns. Some breeds are polled, and ewes are less likely to have horns than rams. Horned sheep have horns made of fibrous keratin tubules that protect against damage to the skull. Horns are used among rams for protection and fighting for territory against other rams.

Contrary to popular belief, sheep do not have four stomachs. They have one stomach with four compartments. The four compartments of a sheep’s stomach are:
- Rumen: acts as an area where the ewe can store food. Fermentation of foods starts here.
- Reticulum: compresses the bolus (ball of food) and propels it towards the next compartment.
- Omasum: contains glands that secrete hydrochloric acid and enzymes needed for digestion.
- Abomasum: where food is combined with other enzymes and chemicals to break down its chemical bonds for absorption and utilization of nutrients.

Notable internal organs in a sheep include the heart, lungs, kidneys, liver, and spleen.
- The heart pumps blood, carrying oxygenated blood to different body parts and sending deoxygenated blood to the lungs to be re-oxygenated.
- The lungs bring oxygen into the body and work with the circulatory system to maintain the optimal balance of acids and bases in the sheep’s body.
- The kidneys remove waste products from the bloodstream, thus maintaining the balance of minerals, salts, and water inside the sheep’s body.
- The liver aids in digestion by breaking down fats into smaller molecules for easier absorption.
- The spleen filters the blood and helps fight off viruses and bacteria. It also helps store blood.
Table of Contents Economic Liberalization in India In the history of India’s economic development, a significant turning point came with the initiation of Economic Liberalization. This transformative era marked a departure from the past and brought forth a series of reforms that aimed to reshape the country’s economic landscape. With the adoption of new policies and measures, India embarked on a journey towards openness, deregulation, and integration with the global economy. The era of Economic Liberalization in India set the stage for profound changes, stimulating growth, attracting investments, and unleashing the country’s entrepreneurial spirit. Read about: LPG Reforms in India The concept of liberalization refers to the relaxation of government regulations and restrictions in various sectors of the economy. It involves reducing barriers to trade, promoting competition, encouraging private sector participation, and facilitating economic openness. Liberalization aims to create a business-friendly environment, foster innovation, attract investments, and drive economic growth. Through liberalization, governments seek to enhance efficiency, productivity, and overall economic performance by allowing market forces to play a greater role in resource allocation and decision-making. This approach often involves reforms in areas such as trade policies, investment regulations, financial sector liberalization, and deregulation of industries. Liberalization is seen as a means to stimulate economic activity, increase competitiveness, and integrate economies into the global marketplace. Read about: FERA and FEMA Economic Liberalization Examples Economic liberalization has been implemented in various countries, and some notable examples include: - India: In 1991, India initiated significant economic reforms to liberalize its economy. 
The reforms included relaxation of industrial licensing, reduction of trade barriers, deregulation of foreign investment, and opening up sectors such as telecommunications and aviation to private participation. - China: China implemented economic liberalization policies in the late 1970s, known as the “Chinese economic reforms.” These reforms introduced elements of market-oriented reforms, liberalized trade, attracted foreign investment, and allowed private enterprises to flourish alongside state-owned enterprises. - United Kingdom: The United Kingdom implemented economic liberalization measures in the 1980s under the leadership of Prime Minister Margaret Thatcher. These reforms included privatisation of state-owned enterprises, deregulation of financial markets, and reduction of trade union powers. - New Zealand: In the 1980s and 1990s, New Zealand implemented a series of economic reforms known as “Rogernomics.” These reforms involved deregulation, reduction of trade barriers, privatisation of state-owned enterprises, and liberalization of financial markets. - Singapore: Singapore is often cited as an example of successful economic liberalization. The country pursued a pro-business approach, attracting foreign investment, promoting free trade, and implementing policies that encourage entrepreneurship and innovation. These examples demonstrate the diverse approaches and outcomes of economic liberalization in different countries, highlighting the potential for increased economic growth, market efficiency, and competitiveness. Read about: Capital Account Convertibility Economic Reforms in India Since 1991 Some of the significant economic reforms undertaken in India since 1991 are highlighted in this table. It must be noted that the table only includes the most important events. 
Key reform areas covered include:

- Trade and Foreign Investment
- Agriculture and Rural Sector
- Labour Market Reforms

Read about: NRI Deposits

Economic Liberalization in India UPSC
The topic of economic liberalization in India holds immense importance for the UPSC (Union Public Service Commission) examination as it aligns with the UPSC Syllabus, particularly in areas such as Indian Economy, Governance, and Current Affairs. Understanding the concepts, impact, and challenges associated with economic liberalization is crucial for UPSC aspirants, as it enables them to analyze policy reforms, evaluate economic implications, and assess the role of liberalization in India’s development trajectory. Aspirants can learn such concepts from UPSC Online Coaching platforms and undertake UPSC Mock Tests to build confidence for the examination.

Read about: Concept of GDP, GNP, NNP and NDP
Get here the notes, questions, answers, textbook solutions, summary, additional/extras, and PDF of TBSE (Tripura Board) Class 10 (madhyamik) Social Science (Geography/Contemporary India II) Chapter “Forest and Wildlife Resources.” However, the provided notes should only be treated as references, and the students are encouraged to make changes to them as they feel appropriate. Earth is home to different living beings, from the smallest microorganisms to the largest creatures like elephants and blue whales, and all these living beings are interconnected and form a complex ecological system, on which humans are dependent for their existence. Plants, animals, and microorganisms are responsible for creating the air we breathe, the water we drink, and the soil that produces our food. Forests are very important to this ecological system because they are the main source of food for all other living things. India is one of the world’s richest countries in terms of biological diversity, with nearly 8% of the total number of species in the world. However, many of these species are under threat due to insensitivity to the environment. At least 10% of India’s recorded wild flora and 20% of its fauna are on the threatened list. Deforestation is also a major issue in India, with forest and tree cover estimated at 79.42 million hectares, which is 24.16% of the total geographical area. While there has been an increase in dense forest cover since 2013 due to conservation measures, management interventions, and plantation, it is important to address the issue of deforestation to protect the country’s rich biodiversity. The International Union for Conservation of Nature and Natural Resources (IUCN) has classified plants and animals into several categories based on their population levels and vulnerability to extinction. Normal species include cattle, pine, and rodents. 
Endangered species such as black bucks, Indian rhinos, and crocodiles are at risk of extinction, while vulnerable species like the blue sheep and Gangetic dolphin could move into the endangered category if negative factors persist. Rare species, such as the Himalayan brown bear, desert fox, and wild Asiatic buffalo, have small populations and could move into the endangered or vulnerable category. Endemic species like the Andaman teal and Nicobar pigeon are found only in certain areas, usually isolated by natural or geographical barriers, while extinct species such as the Asiatic cheetah and pink head duck are no longer found in known or likely areas.

The destruction of habitats, hunting, poaching, overexploitation, and other negative factors that have led to a decline in population levels are causing the depletion of flora and fauna. During the colonial period, the expansion of railways, agriculture, commercial and scientific forestry, and mining activities caused significant damage to Indian forests. Between 1951 and 1980, over 26,200 square kilometres of forest land was converted into agricultural land in India. Substantial parts of tribal belts, especially in northeastern and central India, were deforested or degraded by shifting cultivation. The promotion of a few favoured species in many parts of India has been carried out through the ironically termed “enrichment plantation”, in which a single commercially valuable species was extensively planted and other species eliminated. Development projects, including river valley and mining projects, have contributed to the loss of forests. Grazing and fuel-wood collection are also contributing factors to the degradation of forest resources. The forest ecosystems, which are repositories of some of the country’s most valuable forest products, minerals, and other resources that meet the demands of the rapidly expanding industrial-urban economy, have become fertile ground for conflicts.
Conservation of forests and wildlife has become essential in India due to the rapid decline in their populations. Conservation helps to preserve ecological diversity and our life support systems such as water, air, and soil. It also helps to preserve genetic diversity for better growth and breeding of species, which is important in agriculture and fisheries.

In the 1960s and 1970s, conservationists demanded a national wildlife protection programme, leading to the implementation of the Indian Wildlife (Protection) Act in 1972. This act provided legal protection to habitats and banned hunting and trade in wildlife. National parks and wildlife sanctuaries were also established by central and state governments to protect endangered species such as tigers, rhinoceroses, and crocodiles. Conservation projects in India are now focusing on biodiversity rather than just a few components. Even insects are being included in conservation planning, and several hundred butterflies, moths, beetles, and one dragonfly have been added to the list of protected species under the Wildlife Act of 1980 and 1986. In 1991, plants were also added to the list, starting with six species.

Forest and wildlife resources in India are difficult to manage, control, and regulate. Much of it is owned or managed by the government through the Forest Department or other government departments. Forests are classified as reserved, protected, or unclassified, with reserved forests being the most valuable for conservation. Madhya Pradesh has the largest area under permanent forests, while some states have a large percentage of reserved forests, and others have the bulk under protected forests. The Northeastern states and parts of Gujarat have a very high percentage of forests managed by local communities as unclassified forests.

The conservation of natural habitats and resources has been a long-standing practice in India.
However, it is important to recognise that these habitats are also home to traditional communities, which often rely on these resources for their livelihoods. In some areas of India, local communities have taken the initiative to conserve these habitats alongside government officials, understanding that it is essential for their own long-term survival. For instance, in Sariska Tiger Reserve, Rajasthan, villagers fought against mining activities by citing the Wildlife Protection Act. In many other areas, villagers are taking charge of protecting habitats and rejecting government involvement. In the Alwar district of Rajasthan, for example, the inhabitants of five villages have declared 1,200 hectares of forest as the Bhairodev Dakav ‘Sonchuri’, creating their own set of rules and regulations that prohibit hunting and protect wildlife from outside encroachments. The Chipko movement in the Himalayas has also successfully resisted deforestation in several areas and demonstrated that community afforestation with indigenous species can be hugely successful. Efforts to revive traditional conservation methods or develop new methods of ecological farming are also becoming more common. Farmers and citizen groups, such as the Beej Bachao Andolan in Tehri and Navdanya, have shown that adequate levels of diversified crop production without the use of synthetic chemicals are possible and economically viable. The Joint Forest Management (JFM) programme in India is an excellent example of involving local communities in the management and restoration of degraded forests. The programme, which has been in existence since 1988, depends on the formation of local institutions that undertake protection activities on degraded forest land managed by the forest department. In return, members of these communities are entitled to benefits such as non-timber forest products and a share in the timber harvested through ‘successful protection’. 
The clear lesson from the dynamics of environmental destruction and reconstruction in India is that local communities need to be involved in some form of natural resource management. However, there is still a long way to go before local communities are at the center stage of decision-making. It is crucial to accept only those economic or developmental activities that are people-centric, environment-friendly, and economically rewarding. Textual questions and answers 1. Multiple choice questions. (i) Which of these statements is not a valid reason for the depletion of flora and fauna? (a) Agricultural expansion. (b) Large scale developmental projects. (c) Grazing and fuel wood collection. (d) Rapid industrialisation and urbanisation. Answer: (c) Grazing and fuel wood collection (ii) Which of the following conservation strategies do not directly involve community participation? (a) Joint forest management (b) Beej Bachao Andolan (c) Chipko Movement (d) Demarcation of Wildlife sanctuaries Answer: (d) Demarcation of Wildlife Sanctuaries 2. Match the following animals with their category of existence. |Animals/plants||Category of existence| |Andaman wild pig||Endangered| |Himalayan brown bear||Vulnerable| |Pink head duck||Endemic| Answer: Black buck – Endangered Asiatic elephant – Vulnerable Andaman wild pig – Endemic Himalayan brown bear – Rare Pink head duck – Extinct 3. Match the following. 
|Reserved forests||other forests and wastelands belonging to both government and private individuals and communities| |Protected forests||forests are regarded as most valuable as far as the conservation of forest and wildlife resources| |Unclassed forests||forest lands are protected from any further depletion| Answer: Reserved forests – Forests are regarded as most valuable as far as the conservation of forest and wildlife resources Protected forests – Forest lands are protected from any further depletion Unclassed forests – Other forests and wastelands belonging to both government and private individuals and communities 4. Answer the following questions in about 30 words. (i) What is biodiversity? Why is biodiversity important for human lives? (ii) How have human activities affected the depletion of flora and fauna? Explain. Answer: (i) The variety of life that can be found in a given habitat or on the entire planet is known as biodiversity. The genetic diversity within each species of all plants, animals, fungi, and microorganisms is also included. Because it gives us access to resources like food, clean water and air, and other resources, biodiversity is crucial to human existence. In addition, it aids in regulating the climate and maintaining fertile soils, both of which are necessary for growing crops. Additionally, biodiversity offers us leisure opportunities and aesthetic benefits while lowering the risk of disease. (ii) We have transformed nature into a resource obtaining directly and indirectly from the forests and wildlife – wood, barks, leaves, rubber, medicines, dyes, food, fuel, fodder, manure, etc. So it is we ourselves who have depleted our forests and wildlife. 5. Answer the following questions in about 120 words. (i) Describe how communities have conserved and protected forests and wildlife in India? (ii) Write a note on good practices towards conserving forest and wildlife. Answer: (i) Conservation strategies are not new in our country. 
We often ignore that in India, forests are also home to some of the traditional communities. In some areas of India, local communities are struggling to conserve these habitats along with government officials, recognising that only this will secure their own long-term livelihood. In Sariska Tiger Reserve, Rajasthan, villagers have fought against mining by citing the Wildlife Protection Act. In many areas, villagers themselves are protecting habitats and explicitly rejecting government involvement. The inhabitants of five villages in the Alwar district of Rajasthan have declared 1,200 hectares of forest as the Bhairodev Dakav ‘Sonchuri’, declaring their own set of rules and regulations which do not allow hunting, and are protecting the wildlife against any outside encroachments. The famous Chipko movement in the Himalayas has not only successfully resisted deforestation in several areas but has also shown that community afforestation with indigenous species can be enormously successful. (ii) Reducing human activities that lead to deforestation, overhunting, and overfishing is a good practice for protecting forests and wildlife. Invasive species proliferation, pollution, and climate change must all be kept to a minimum. Including communities in the management of forest resources should be a priority, for example through the Joint Forest Management (JFM) programme. This programme has been effective in preserving and restoring forests, leading to an increase in the amount of forest cover and better wildlife habitats. Creating wildlife sanctuaries should involve communities as it has assisted in saving endangered species. Finally, it’s critical to promote sustainable growth and the use of renewable resources, such as solar energy, in order to reduce the harm that humans cause to the environment. Additional/extra questions, answers, MCQs 1. What are forests and what do they provide? 
Answer: Forests refer to a community of plant species that grow naturally and provide a large tract covered by trees and shrubs. They provide a wide variety of commodities such as timber, firewood, woodpulp, medicinal plants, and other products of industrial and commercial use. Additionally, they play an important role in checking soil erosion and air pollution, and provide natural habitat to a variety of wildlife. 81. What is the purpose of observing wildlife week in India? a) To increase awareness about the importance of wildlife in sustainable living b) To promote eco-tourism in the country c) To increase hunting and trade of wildlife d) To provide financial aid to national parks and sanctuaries
PostgreSQL DISTINCT Keyword The DISTINCT keyword is used to select or fetch only the distinct (different) values from a database table. A table column often contains many duplicate values, so DISTINCT is used to list only the different values. DISTINCT Syntax : The syntax of a DISTINCT query is − SELECT DISTINCT column1, column2,.....columnN FROM table_name WHERE [condition] DISTINCT Example : Consider a STUDENT table containing records in which several rows share the same name. The following is an example, which would fetch the DISTINCT NAME records from the STUDENT table. lfcdb=# SELECT DISTINCT name FROM STUDENT; The above statement would return one row for each different name present in the STUDENT table.
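The effect of DISTINCT can be sketched with a small, runnable example. The snippet below uses Python's built-in sqlite3 module rather than PostgreSQL (the query syntax for this particular statement is the same in both), and the STUDENT rows are illustrative sample data invented for the demo, not the tutorial's actual table:

```python
import sqlite3

# Build an in-memory table with duplicate names; the rows here are
# made-up sample data, since the tutorial does not list its records.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (id INTEGER, name TEXT, age INTEGER)")
conn.executemany(
    "INSERT INTO student VALUES (?, ?, ?)",
    [(1, "Ajay", 20), (2, "Riya", 21), (3, "Ajay", 22),
     (4, "Riya", 21), (5, "Tarun", 23)],
)

# Without DISTINCT, every row is returned, duplicates included.
all_names = [row[0] for row in conn.execute("SELECT name FROM student")]

# With DISTINCT, each different name appears exactly once
# (sorted here because DISTINCT guarantees no particular order).
unique_names = sorted(
    row[0] for row in conn.execute("SELECT DISTINCT name FROM student")
)

print(len(all_names))   # 5
print(unique_names)     # ['Ajay', 'Riya', 'Tarun']
```

Note that DISTINCT applies to the entire selected column list: a query like SELECT DISTINCT name, age would treat two rows as duplicates only if both columns match.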
Hello Exam Seekers, How many of you have already come across this acronym? Do you know what it is or why it has become so popular lately? WHAT IS CLIL? CLIL stands for Content and Language Integrated Learning. It is an approach that works with a second – or foreign – language as a means of instruction as well as with content. That means that a teacher will prepare a class on a specific subject – or subjects – in another language. Unlike language courses, which contextualize what they want to teach so that a unit can integrate all skills, the focus of CLIL is on learning the language through content, with the intention of making language acquisition happen more naturally. Thus, from early on, the students receive input on both content and language – graded so that students can make sense of it – and develop the two areas in parallel. This approach is perhaps one of the most talked about in the English teaching field, especially among school teachers and schools in general. In other words, teachers choose a subject to teach (e.g. Independence Day, the history of Van Gogh, how to sum numbers, etc.) using the second language. Grammar and vocabulary are part of the lesson, not the focus of the lesson. Soft CLIL vs Hard CLIL This is only one of the ways people describe how CLIL is implemented and how different the focus can be. What is known as soft CLIL is the type of course that is more language-led than subject-led. That is, it looks more like a language course with some exposure to curricular topics. On the other hand, hard CLIL is more subject-led: it takes the curriculum of the school and chooses what to teach in the target language. CLIL and the 4 C’s The approach uses four parameters as a guide: Cognition, Communication, Content, and Culture. Perhaps not all four aspects will be present in every CLIL lesson, but they will be integrated across a unit of classes. 
- COGNITION: reflects the thinking skills developed in class. During CLIL lessons, students are likely to face challenges and therefore build reasoning, creative thinking, and evaluation skills. LOTS – Lower Order Thinking Skills – and HOTS – Higher Order Thinking Skills – are both present and combined in CLIL lessons, such as when students need to remember and order things – LOTS – or predict and evaluate issues – HOTS. - COMMUNICATION: it is essential to think about the language the students need to know to improve both oral and written language. Additionally, learners need interaction with each other to be effective, so the more opportunities they have to express themselves, the better. - CONTENT: is about the subject you are working with. It can include maths, physics, history, geography, art, P.E., etc. - CULTURE: is also known as citizenship. It is about the cultural background that students get in touch with. Students develop positive attitudes as well as awareness of global and local citizenship. What’s the role of language? In CLIL, there is something called the language triptych. That means that language takes three different roles. There is the language OF learning, which is the target language in itself and what you want to teach. The second one is the language FOR learning, which is the one you are going to use to explain that content or even language. And the last one is the language THROUGH learning, which is the language students bring to the classroom, the ’emerging’ language. Admittedly, I may be biased on this subject, since I tend to say that once you start working with CLIL, there is no turning back. I was first introduced to the concept back in 2015, when I took the CELTA Course. Since I had only just heard about it, I didn’t have a chance to put it into practice or go much further into the matter back then. 
Besides that, I don’t know about you, but I come from the ELT field and learned English more traditionally – in a way completely different from what is in the market today. So the first time I heard about CLIL, I didn’t have much confidence in its effectiveness. However, I had only heard about it and learned the concepts without actually being in touch with real examples and outcomes, as I mentioned before. Therefore, it was difficult to truly understand the concepts behind it. Nevertheless, once I needed to work with it, teaching took on a new meaning for me. It completely changed my view of language acquisition and even of the process of learning itself. Fortunately, I had the chance to experience CLIL at the beginning of the year. It was an experience that raised many questions and challenged my teaching in many ways. It made me question how I had been working with language in schools and how children had been taught English recently. The CLIL approach not only helps acquisition happen more naturally but also shows how integrated language and content are in the end, which is extremely positive in many ways. For instance, the output you get when the approach is put into practice is quite impressive, and classes get a lot more interactive since students are constantly doing hands-on activities and are much more in charge of their learning. How about you? What is your view on CLIL? Have you ever taught using this approach? Comment in the comment section. Also, if you want more information about CLIL, let me know! That’s it for today! Please like the post and follow the blog. You can also listen to this post on Anchor! Have a great week,
“Science teaches us about the interconnections between human and non-human living elements in Nature and contributes to further our understanding of the magnitude of the climate change crisis. While modern science and technology offer solutions for climate adaptation and mitigation, they alone cannot solve the global polycrisis humankind faces.” This excerpt from the United Nations General Assembly Interactive Dialogue on Harmony with Nature Concept Note underscores why we observe not only Earth Day but International Mother Earth Day every April 22. Why both? What’s the difference? Earth Day, established in 1970, marks the anniversary of the modern environmental movement. Later, the United Nations added International Mother Earth Day to recognize the close relationship between people and the Earth. While extending and reinforcing the goals of Earth Day, referring to Earth as “Mother” underscores the interconnectedness between human health and the planet by giving respect to the laws of nature and its many ecosystems. According to the UN, the “Mother Earth” resolution aims to promote an Earth-centered worldview that recognizes the intrinsic value and rights of Nature. Honoring the Mother that we all share and depend on is more critical than ever. As the climate crisis intensifies with rising heat, violent storms, drought and wildfires, the greatest danger is to our very sustenance: food and water. Indeed, chief among the dire threats of climate change as detailed by the recent report by the U.N. 
Intergovernmental Panel on Climate Change is to agricultural systems, as “Increasing weather and climate extreme events have exposed millions of people to acute food insecurity and reduced water security.” Specifically, “Although overall agricultural productivity has increased, climate change has slowed this growth over the past 50 years globally … Ocean warming and ocean acidification have adversely affected food production from shellfish aquaculture and fisheries in some oceanic regions … sudden losses of food production and access to food compounded by decreased diet diversity have increased malnutrition in many communities.” No surprise, the greatest threat, the IPCC report notes, is to “Indigenous Peoples, small-scale food producers and low-income households with children, elderly people and pregnant women particularly impacted.” The report sheds light on the importance of promoting more ecological and regenerative farming practices to protect nature as part of our fight against climate change. For example, Shumei International and the Natural Agriculture Development Program Zambia are working together to help women farmers take control of their livelihoods through self-sustainable agriculture practices—and more. In particular, by adopting Natural Agriculture, which promotes seed saving and zero-inputs, the women not only employ cost-effective and environmentally friendly crop production, they raise their household status and income. Working directly with women small-scale farmers, many of whom are mothers, we are also contributing to the development of their communities and the improvement of health, nutrition, and education in rural areas. Prioritizing respect for Mother Earth goes hand in hand with promoting gender equality and empowerment for women farmers who are cultivating and nurturing the land to feed us and protect the planet. 
It is therefore key to think of April 22 not only as Earth Day but as Mother Earth Day, and to recognize the role of women farmers who are helping to heal the land from the effects of chemicals, fertilizers and other additives—and the role we must all take in caring for Mother Earth. As the U.N. put it, “Mother Earth is clearly urging a call to action. Nature is suffering.” As Mother Earth feeds and nurtures all of us, we must take care of Her. By: Alice Cunningham, Executive Director of International Affairs, Shumei International and Barbara Hachipuka Banda, Founder and CEO, Natural Agriculture Development Program Zambia (NADPZ)
Plants can be separated into two distinct categories: monocots and dicots. What makes the two types different, and why is it important to understand which is which? Monocot vs. Dicot The big difference that most people note between monocots and dicots is the formation of the plants' veins on leaves. However, many different things separate monocots from dicots. In fact, monocots differ from dicots in four structural features: their leaves, stems, roots, and flowers. Within the seed lies the plant's embryo; it is here that the first difference between the two types can be seen. Whereas monocots have one cotyledon (seed leaf), dicots have two. This slight difference at the very start of the plant's life cycle leads each plant to develop vast differences. Once the embryo begins to grow its roots, another structural difference occurs. Monocots tend to have "fibrous roots" that web off in many directions. These fibrous roots occupy the upper level of the soil compared to dicot root structures that dig deeper and create thicker systems. Dicot roots also contain one main root called the taproot, from which the other, smaller roots branch off. The roots are essential to the plant's growth and survival, and a deeper, more extensive root system can help increase the plant's health. As monocots develop, their stems arrange the vascular tissue (the circulatory system of the plant) sporadically. This contrasts with the dicots' organized fashion, which arranges the tissue into a donut-shaped ring (see figure). The way a stem develops is important to note. Stems are in charge of supporting the entire plant and helping position it to reach as much sunlight as possible. The vascular tissue within the stem can be thought of as a circulatory system for bringing nutrients to each portion of the plant. The differences don't end there: monocots and dicots also form different leaves. 
Monocot leaves are characterized by their parallel veins, while dicot leaves form "branching veins." Leaves are another important structure of the plant because they are in charge of feeding the plant and carrying out photosynthesis. The last distinct difference between monocots and dicots is their flowers (if present). Monocot flower parts usually come in threes, whereas dicot flower parts occur in groups of four or five.
Actively Listening to Your Child Active Listening is when the listener is able to fully concentrate on, understand, respond to and remember what is being said. The listener is on the same level as the speaker, making eye contact, and actively taking an interest. As parents and caregivers, we are busy – going in so many directions at once and trying our best to balance the plethora of responsibilities we all have. And let’s face it – our little ones tend to have so much to say that it can be hard to make ourselves stop and listen to yet another story about (insert their current “funny” story they’re telling on repeat). But here’s why pausing our to-do list and actively listening to what they have to say is so important… Benefits of Actively Listening to Your Child: - It boosts their self-esteem. We all want our child(ren) to succeed, to be strong individuals and feel proud of who they are. This is step one to that goal. - They feel valued and heard. - It builds a strong bond between you and the child. This bond will last and grow throughout their childhood, and will create a safe and trusting environment for them. In other words, actively listening to their funny stories now will allow them to come to you when they’re needing help in their teens and beyond. - It teaches them mutual respect and how to listen to others. - It strengthens their social-emotional wellness and teaches them positive self expression. Simple Steps to Active Listening: - Let them take the lead! As hard as it can be sometimes, do not rush them or jump in with your own words (unless absolutely necessary – i.e. to avoid a frustration meltdown). - Make them feel heard. Toddlers have such short attention spans and can quickly escalate if they don’t feel heard. Whatever response you give to them – even if it’s “I will listen to what you want to tell me, let me finish putting this away first” – should be said with eye contact, at their level and with full attention. 
- Get down on their level and use eye contact. These two simple acts are key to making them feel heard and engaged. - Body language says so much. Watch their facial expressions and body language to learn more about what they’re trying to say and how they’re feeling. Likewise, use your body language, such as a smile or holding out a helping hand, to show that you are engaged and listening as well.
The machine invented by Philip Drinker of Harvard School of Public Health in 1926 was officially known as a Drinker respirator, or the Emerson tank after its main manufacturer, but the term "iron lung" quickly became the standard term for it. Drinker's first machine was a tin box with used vacuum cleaner blowers attached through valves, and an end plate with a rubber collar for the user's neck. It took a long time to get it to provide adequate respiration for a paralyzed polio patient, but in 1928, its first trial on an actual patient kept an eight-year-old girl alive for a week when she had been blue with oxygen deprivation before the device was turned on. This was a revolution in care for people who couldn't breathe on their own. However, at first there were neither enough machines nor medical personnel who knew how to use them during the polio epidemics of the 1930s. The National Foundation for Infantile Paralysis, formed in 1938, took as one of its goals supplying both people and tanks to all the hospitals with a need. Army planes flew them to epidemic locations. Hospitals often had to hire engineers to keep them running, since an iron lung might have to function for six months without stopping. Any kind of failure set off a built-in alarm and they could be hand-pumped in the event of power outages. However, patients were essentially stuck in a box. Some were completely paralyzed; others could move their arms and legs, though the muscles for breathing were paralyzed, and some models did have armholes. The lucky patients had radios, or occasionally even a TV to keep them occupied; others had books in overhead racks but were forced to wait for someone else to turn the pages. It could also get a little unpleasant inside the box, since patients were not able to get out to a bathroom. 
For patients who couldn't breathe unassisted long enough to have the sheets inside changed, it was possible to exchange the bed linens through side vents, but the wrinkled results could dig into a paralyzed person's body. Many of the paralyzed patients had a tracheotomy, so they couldn't speak normally and had to click their teeth together, make popping sounds with their lips, or compress and release air between cheek and tongue (like urging a horse forward) to summon help. They also couldn't cough, sneeze, or blow their noses, rendering secondary infections a problem. And the machines were noisy, with the sound of a rhythmic bellows pumping, wheezing and squeaking; one person described it as sounding like windshield wipers. Sleep might be difficult for the patient in an iron lung ward. Patients could be transferred from one hospital to another still inside the iron lung (attached to a generator) in trucks, trains, or even planes. There were smaller portable iron lungs made, but they were too confining for long-term use, so most transferred people stayed in a full-sized one (requiring the use of transportation with wide enough doors for the machine to fit). Black, Kathryn. In The Shadow of Polio: A Personal and Social History. Reading, Massachusetts: Addison-Wesley, 1996.
Religion And Conflict Resolution Religion plays a role in some of the most persistent conflict zones of the world, and religious conflicts constitute an increasing share of violent conflicts today. For many individuals and groups, religious beliefs seem to provide the moral ammunition to justify and carry out violence. At the same time, state leaders often have to juggle the right of religious freedom, their own religious beliefs and preferences, and their fear of radical elements, instability, and concerns for public safety. Religious repression can and does occur under the auspices of fighting terror or protecting the country from destabilizing and potentially dangerous elements. The course aims to enable students to explore conflicts in the world over the centuries and the role religion plays in escalating and resolving them, with particular attention to the role of religion in conflict resolution. COURSE LEARNING OUTCOMES By the end of the course unit, students should be able to: - Explain the role of religion in conflict resolution - Describe the religious teachings on wars and revolutions - Assess the role religion plays and has played in conflict escalation and conflict resolution in the world, today and in the past.
Technology has taken education to the next level, breaking the barriers of classroom walls. In 2014, students who had enrolled in a Chinese language class at Springfield High School, Ohio got a lifetime opportunity to interact with students at a high school in China through a live chat session conducted via Skype. The excitement of both sets of students knew no limits as they discussed topics ranging from language to culture to their classrooms. The insight they got from this live interaction was much deeper than what they would have got from reading books and listening to lectures. As we advance into the 21st century, we realise that technology has become an integral part of our everyday lives. Technological innovations have been integrated into every sphere of society, redefining the way we conduct our daily activities. Technology has changed the way we work, shop, interact and also learn. Technological advancements are inevitable as that’s the way the world is moving. Technology has also been incorporated into the world of education, where children get to experience a new way of learning and educators get to practise more effective methods of teaching. Many schools have already implemented technology-driven tools to educate and prepare their students for the future. Some technological innovations that have redefined the education system are: (a) Tablet PCs – they are the future of mobile devices in the classroom. Today’s students are experts in using mobile phones as they grow up seeing them. Realising this, many schools, especially in the West, are taking advantage of mobile devices as an important tool for learning. Tablet PCs with internet connectivity give students access to various useful educational apps, allow them to take notes, read up on topics relevant to their subjects of study, and more. As more and more exams go online today, the use of Tablet PCs can prepare students for the future. 
(b) Interactive whiteboards – are dry-erasable whiteboards that come with an LCD projector. Whatever is seen on a computer screen can be replicated on the whiteboard. As we all know, the impact of communicating a matter visually is deeper than doing it verbally; the objective of these whiteboards is therefore to make lessons more engaging for students. (c) Google’s Apps for Education – around 45 million students and teachers around the world use this bundle of cloud-based email, calendar, and document-sharing products, available free to schools. Google Classroom is a free app that teachers can use to create and organise assignments quickly, provide feedback efficiently, and easily communicate with their classes. (d) Digital library – unlike conventional libraries, digital libraries do not depend on the availability of space to preserve books and other reference articles. Many schools in the 21st century are opting for digital libraries that can store e-learning materials like eBooks, slides, projects prepared by students, contributions by alumni, etc. (e) Microsoft’s Skype – the world is becoming a global village. Schools are taking initiatives to expose their students to various opinions and cultures through technological innovations like Skype. Skype gives students a platform to have real-time interactions with students of their age group in a different part of the world. It also makes virtual field trips possible. Educators can likewise take a cue from the best classroom practices followed in other parts of the world. (f) Flipped classrooms – classrooms across nations have begun to adopt this new model, where students are asked to watch lecture videos by teachers and other experts at home as part of their homework, and class time is utilised for discussions. Khan Academy is an example of how online courses have become a popular instructional medium for teaching. 
(g) Children with special needs – the impact of technology in providing alternative ways of learning for students with special needs has been profound. Assistive technology has helped students with physical, sensory or cognitive disabilities to learn and communicate better. Examples include text-to-speech software, which helps students with reading issues; seat cushions, for kids with sensory processing and attention issues; word prediction software; speech recognition software; and more. (h) e-Learning – various e-learning centres are being opened in under-developed countries, opening doors to education for the people there. The e-learning centre in Djibouti, set up with the help of SOS Children’s Villages and German partners, is giving the young people of Djibouti a chance to learn in-demand job skills and escape a life of poverty. Incorporation of technology in education has been a debatable topic among many educationists and parents due to the challenges involved. Convincing parents and teachers of the benefits that technology-driven education offers is one of the biggest challenges. Use of technology in education depends on internet connectivity, which requires more bandwidth and infrastructure; this is an added cost to the institution. Developing countries like India still face internet connectivity issues in rural areas, so implementing technology-driven education in rural schools becomes a task. Many teachers are still sceptical about the use of technology in education as they fear it might replace their role in future. The challenge here is to convince them to see technology as an aid in teaching rather than a threat. In the era of social networking, online communities of like-minded people are many. Fourth Ambit is an example of how student, college and alumni communities have made a significant presence on an online platform to connect and engage with key stakeholders. 
In today’s dynamic world, we are surrounded by technology and in no way do we see ourselves going backwards. Embracing technology does not devalue the importance of teachers; rather, it assists them in providing a more effective learning experience for the students. Schools must therefore strike a balance between technology-enabled and nature-based teaching to equip their students for a better tomorrow. [The author, "Chandrasekhar A.B" is an alumnus of XIM Bhubaneswar]
The Explanation Behind The Mysterious Spider-Like Formations On The Red Planet NASA's Jet Propulsion Laboratory has revealed the explanation behind the bizarre spider-like cracks on Martian dunes. Mars hosts a diverse collection of landscapes and features, and these spider-shaped features were carved by erosion in its south polar region. The Mars Reconnaissance Orbiter (MRO), which entered orbit around the red planet in 2006, photographed the mysterious spider-like features. Scientists attribute them to the cumulative growth of channels during the thawing of carbon dioxide ice. The spider-like cracks in the sand dunes are huge, with several channels radiating out from a central point, suggesting the body and legs of an earthly spider. These naturally occurring formations prompted scientists to search for an explanation. Researchers found that they range from ten yards to thousands of yards across. The strangely shaped formations, photographed from about 150 miles overhead, cover the south polar region of the red planet and are known as "araneiform" terrain; planetary scientist Candice Hansen explained that araneiform means spider-like. Writing in Icarus, Ganna Portyankina of CU Boulder reported that these araneiforms are estimated to have taken more than a thousand Martian years to develop; the red planet's year lasts around 1.9 Earth years. Scientists have been tracking the spider-shaped formations year over year with MRO's High Resolution Imaging Science Experiment (HiRISE). In 2007, Hugh Kieffer explained the role of dry ice, the solid form of carbon dioxide. Carbon dioxide ice does not melt into a liquid state; as Kieffer divulged, it bypasses the liquid phase and sublimes directly from solid to vapor. The trapped gases build up pressure and crack the Martian surface, and the channels carved by the gases escaping each spring generally take three Martian years to form.
These spider-like features appear to be a purely Martian phenomenon, and they are entirely the product of an erosion process.
What is Assessment? Assessment is the process of gathering information about how well a student is achieving specific outcomes. Through assessment, faculty gather information about student performance, provide students with formal or informal feedback, and guide students to improve their learning. Evaluation is an assessment of learning – where students demonstrate their learning through a performance task that faculty can use as evidence of student achievement. This evidence is how we determine whether a student has met the learning outcomes for a lesson, unit, course, or program. Assessment at Mohawk Mohawk’s Student Assessment Policy (PDF) explains that faculty develop assessments based on the outcomes students will achieve as part of their course. Program areas work together to determine how the assessments from each course contribute to the overall learning outcomes for the program. These assessments should provide an authentic representation of students’ abilities, reflect the outcomes (VLOs, EESs, and CLOs) and strike a balance between providing a realistic student workload and providing multiple opportunities to demonstrate learning and receive feedback. Deciding Which Assessments to Use Faculty have a wide array of assessments to choose from for their courses, including written assignments, group projects, presentations, case studies, lab activities, simulations, real world projects, quizzes, exams, and student-driven projects. To decide which assessment fits best, consider: - What do I want the students to know, do, and be? - What is acceptable evidence to show that students have achieved those outcomes? - What experiences will help students to demonstrate their achievement of those outcomes? From: Drake, S. (2007). Creating Standards-Based Integrated Curriculum: Aligning curriculum, content, assessment, and instruction. (2nd ed.). Thousand Oaks, CA: Corwin – page 8. 
These questions will help you to choose an assessment task that is aligned with the outcomes students will be performing in the course. Check out some of these examples from courses at Mohawk: Outcome: Develop a business plan for a small business Assessment: Students write a business plan for a specific business Outcome: Apply conflict resolution strategies in a variety of settings Assessment: Students demonstrate strategies during a series of simulations in class Outcome: Critically analyze situations that lead to the perpetration of fraud Assessment: Students examine case studies and present their analysis As often as possible, you should select an authentic assessment task. That is, the task students perform in the course should: • Directly measure students’ performance of an outcome • Relate to specific vocational skills • Reflect current practices in the industry/field For help developing assessment tasks that support your curriculum goals, speak to one of our Curriculum Development Specialists. For help developing and integrating assessment tasks into eLearn or other online components, speak to one of our Educational Technology Specialists. For a list of workshops on developing assessment tasks, check out our workshop offerings. Ontario Ministry of Education. (2010). Growing success: Assessment, evaluation and reporting in Ontario's schools: covering grades 1 to 12. Toronto: Ministry of Education.
The History of Powerful Warriors of Old Japan In early modern and medieval Japan, the samurai were the military nobility, an officer caste; the bushi of Japan were an earlier name for them. For some seven hundred years the samurai held sway over Japan, and over that long span they worked toward the creation of today’s Japan. Their martial capability and service, their honor and their duty, are what made them samurai. The name “samurai” originates from a Japanese word meaning “to serve.” It was first used to describe court administrators holding low- or mid-level rank, and the title came to stand for a samurai’s loyalty. It was only in the tenth century that governors began to offer substantial rewards for military service. The term soon gained such prominence that it brought great pride to a samurai lineage. Here is a link to the history of the samurai sword: http://www.historyoffighting.com/samurai-sword.php . How the Taira and Minamoto Fought for Power During the twelfth century, powerful military families began to sway the levers of power. Two strong families, the Taira and the Minamoto, always stood out from the rest, and they would influence Japanese politics for years to come. A civil war was fought over the disputed imperial line of succession following the death of the emperor; the conflict ended with the Taira rising to form the first samurai-led government in the history of Japan. The Gempei War: The Minamoto clan resumed hostilities with the Taira, and the war had lasting implications for the samurai. The Divine Wind That Saved Japan (Wind of the Gods): As infighting among the samurai increased, so did the need to protect Japan from invaders, and this drove a transformation of the samurai. Fighting continued in Japan. Because dying at the hands of the enemy was considered dishonorable, the code of honor allowed samurai to take their own lives on the battlefield.
The Warring States era was a period of widespread conflict, both physical and social, among the dominant clans; only the stronger families would survive. The strength of a group depended on how its armies were assembled and equipped with modern weapons. After the entry of Chinese firearms, various other forms of firearms began to arrive, and soon the samurai started producing firearms of their own. Toyotomi Hideyoshi: The Napoleon of Japan Toyotomi was the samurai who changed the course of samurai history in Japan. Through a series of successful campaigns, he asserted his rule all over Japan. Power soon passed to another samurai, Ieyasu, whose line lasted until the mid-nineteenth century. Everything soon became formalized, bringing an end to the martial role of the samurai. They maintained their elite status almost until the eighteen hundreds, before the Western world extended its control over the whole of Japan. Top IAS coaching centers still teach about the samurai to extend their students' knowledge far and wide; IAS trainers and teachers make their students well versed in these topics, as general knowledge and history play a vital role in the civil services exam papers.
These are great for literacy centers! Included are Language Arts centers for the entire month of February! There are 64 centers total with instructions on how to implement them into your classroom. Each center comes with a task card and a corresponding worksheet (with the exception of the “teacher-directed center”; I leave that task card blank because, as the classroom teacher, you will know best what to teach your small groups. The technology center sometimes does not require a worksheet either). Center themes/skills include: Study Skills (dictionaries, bibliographies, thesauruses, and maps) Writing (narrative, informative, descriptive, opinion/persuasive prompts) Spelling (various games and worksheets to practice weekly spelling words) Reading (plot, conflict, theme, and characters) Word Structure (sentence fragments, Greek roots, Latin roots, and semantic gradients) Vocabulary (various games and worksheets to practice weekly vocabulary skills) Technology (these are optional centers depending on the available technology in your classroom. Most centers require at least an iPod or a computer). Teacher-Directed Time (a center time for you to meet with your small reading group) There are 64 task cards, corresponding worksheets for each center, 8 pocket chart center cards, pocket chart cards labeling classroom groups, and card labels to place around the room saying where each center is located. *Some Reading centers will require the classroom teacher to provide reading material; other than that, print, copy, laminate, and you are ready to go with Language Arts Centers for the WHOLE month of February!
Seismic waves are elastic waves propagating inside the Earth. Like any other kind of wave, their velocity depends on the properties of the medium they are passing through. If a wave meets a discontinuity between two media with different properties, some of its energy is reflected back and some is transmitted, though refracted. That’s what happens to light when it passes from air to water, a medium with different properties. If we suppose that the Earth’s density increases steadily with depth, we can imagine a seismic ray (defined as the perpendicular to the wavefront) that, as it propagates downward, is continuously refracted away from the vertical at each successively deeper discontinuity with higher-density strata; the seismic ray will progressively bend away from the vertical, eventually turning back towards the surface. This means that there are rays that can pass through the whole planet and come back to the surface, carrying information about the Earth’s interior stored as seismic waves and recorded on seismographs. In 1909, the Croatian geophysicist Andrija Mohorovicic (1857-1936), while analyzing the Pokuplje earthquake (Kupa Valley, 40 km south of Zagreb), which had occurred on October 8, noticed some peculiar seismic wave arrivals. The larger the distance of a recording station from the epicenter, the longer the time for the seismic waves to reach it. One should therefore expect that, on a space-time graph, a straight line would join all the plotted arrival-time/distance pairs, the line’s slope being a function of wave velocity. Mohorovicic used the S wave arrivals to build the graphs because their amplitude is larger and they are more readily recognizable. But joining the S wave arrivals he obtained not straight lines but broken lines. How can this be explained? What happens when a wave passes into a denser medium?
As already stated, when a seismic wave meets a discontinuity it is partially reflected and partially refracted. Reflected waves, whether P or S, return to the surface at the same angle at which they met the discontinuity, and are then recorded on seismographs. Refracted waves keep going down at a higher angle than the incident one (if they pass into a denser medium). There is a certain critical incidence angle that generates a refracted ray running parallel to the discontinuity; the discontinuity then becomes a generator of rays (the head wave) departing towards the surface at the same critical angle. Close to the epicenter, the first arrivals are the direct waves, traveling almost horizontally toward the seismic station. The waves reflected by the discontinuity arrive later; those waves travel in the same medium, at a constant velocity. Critical waves traveling along the discontinuity travel at a higher velocity, owing to the higher density along that surface. At a certain distance those waves arrive at the same time as the direct waves, and beyond that distance even ahead of them. That explains the broken line in the space-time graph, and it also allows us to calculate the depth of the discontinuity: Mohorovicic calculated about 50 km under Croatia. The discontinuity still bears his name to honor this discovery – it is the “Moho”, in brief. The density increase is explained by a change of composition at the base of the crust, where the Earth’s mantle begins (called so because it shrouds the core). From surface and well data we know the average composition of the continental crust to be close to that of granite-diorite rocks (rich in quartz, sodium silicates, and calcium silicates); the oceanic crust is poor in quartz and its composition is close to that of basalts (rich in iron and magnesium silicates).
As for the mantle, rocks brought up through deep volcanic conduits suggest a peridotitic composition (iron and magnesium silicates with a low silica content). The Moho lies at a depth of 30-40 km under the continents, and less than 10 km under the ocean floors. Beneath mountain belts it can reach as deep as 70 km.
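The crossover geometry described above can be put into numbers. For a single flat layer of thickness h over a faster medium, the direct wave travels at the upper-layer velocity v1 while the head wave runs along the discontinuity at v2, and the two arrive together at the crossover distance x_cross = 2h * sqrt((v2 + v1) / (v2 - v1)); inverting this recovers the depth of the discontinuity from an observed crossover. A minimal sketch in Python (the function names are mine, and the example velocities are typical crust and mantle values chosen for illustration, not figures from the text):

```python
import math

def crossover_distance(h, v1, v2):
    """Distance at which the head wave, refracted along a flat
    discontinuity at depth h, overtakes the direct wave."""
    return 2.0 * h * math.sqrt((v2 + v1) / (v2 - v1))

def discontinuity_depth(x_cross, v1, v2):
    """Invert the crossover distance to recover the depth."""
    return 0.5 * x_cross * math.sqrt((v2 - v1) / (v2 + v1))

# Illustrative values: 6.0 km/s in the crust, 8.0 km/s in the mantle,
# and a discontinuity at 50 km (the depth Mohorovicic found).
x = crossover_distance(50.0, 6.0, 8.0)    # roughly 265 km
depth = discontinuity_depth(x, 6.0, 8.0)  # recovers 50 km
```

Stations closer to the epicenter than the crossover distance record the direct wave first; farther stations record the head wave first, producing the broken travel-time line Mohorovicic observed.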
An incredible variety of seedless plants populates the terrestrial landscape. Mosses may grow on a tree trunk, and horsetails may display their jointed stems and spindly leaves across the forest floor. Today, seedless plants represent only a small fraction of the plants in our environment; yet, three hundred million years ago, seedless plants dominated the landscape and grew in the enormous swampy forests of the Carboniferous period. Their decomposition created large deposits of coal that we mine today. Current evolutionary thought holds that all plants—green algae as well as land dwellers—are monophyletic; that is, they are descendants of a single common ancestor. The evolutionary transition from water to land imposed severe constraints on plants. They had to develop strategies to avoid drying out, to disperse reproductive cells in air, for structural support, and for capturing and filtering sunlight. While seed plants developed adaptations that allowed them to populate even the most arid habitats on Earth, full independence from water did not happen in all plants. Most seedless plants still require a moist environment.
The ear has three main parts: outer, middle and inner. The outer ear (the part you can see) opens into the ear canal. The eardrum separates the ear canal from the middle ear. Small bones in the middle ear help transfer sound to the inner ear. The inner ear contains several channels of fluid and specialized cells that detect fluid motion. These cells are connected to the auditory (hearing) nerve, which leads to the brain. Any source of sound sends vibrations or sound waves into the air. These funnel through the ear opening, down the ear canal, and strike your eardrum, causing it to vibrate. The vibrations are passed to the small bones of the middle ear, which transmit them to the fluid of the inner ear. Here, the fluid vibrations become nerve impulses and go directly to the brain, which interprets the impulses as sound (music, voice, a car horn). If a problem arises in the outer or middle portion of the ear, a conductive (or mechanical) hearing loss (CHL) is present. Common causes of CHL are fluid in the middle ear from ear infections or a plug of earwax. If the inner ear or hearing nerve is damaged, a sensorineural hearing loss (SNHL) develops. The common forms of SNHL stem from genetic (hereditary) factors or infections (e.g., meningitis). In general, CHL can be corrected with medicine or surgery, while SNHL is usually not reversible.
Children who have been diagnosed with ADHD always have some Executive Function challenges. Often children with learning differences have Executive Function challenges. However, there are many children with Executive Function challenges who have not been diagnosed with ADHD and don't have learning differences. Sometimes children who are gifted have Executive Functioning deficits. For example, a gifted student may be taking all honors classes in high school, and perhaps skipped a grade so that they are a year or two younger than the rest of their classmates. While they may be successful at getting straight A's, they may not have developed emotional regulation skills, or they may spend too much time completing homework or writing a report. In order to be effective, a student must be able to manage his or her emotions, focus attention, organize materials and workspace, plan and manage time, prioritize activities (both academic and fun), and reflect upon and revise their tactics as circumstances change. Students with deficits in Executive Functioning will often experience a gap between what their standardized test scores indicate about their intelligence and the grades they receive in school. As the demands of school increase each year, having well developed Executive Function skills is critical in order to achieve academic success. Students are generally expected to have well developed Executive Function skills by the time they reach high school, but these critical life skills are usually not taught in grade school or middle school. Ultimately, with guidance, students can learn to manage their time effectively, plan and prioritize tasks, organize their thoughts and materials, focus their attention, maintain their composure, and reflect on what worked and what didn't so that they can adjust their strategies for taking on the next challenge in school or in life.
In short, in order to achieve consistently outstanding academic results, students must have strong Executive Function skills.
- Managing time (spent 4 hours writing the introduction to a paper, then didn't have enough time to complete other homework)
- Organizing thoughts and materials (disorganized writing; backpack and desk are disorganized)
- Paying attention (staying focused, managing distractions)
- Planning and prioritizing (project management; overwhelmed with too many tasks)
- Getting started (task initiation; might appear to be "lazy" or unmotivated)
- Staying on track (motivation; persistence to complete a task)
- Remembering what to do and when to do it (might forget assignments, or forget to bring homework back to school to turn in)
- Reflecting on past behavior and outcomes (doesn't learn from mistakes; doesn't understand the concept of self-reflection)
- Managing feelings and emotions (gets frustrated, depressed, or anxious over performance; test anxiety)
One thing you can do is exactly what you are doing: research. Do your homework to find the best Executive Function coaches available to help your child. Our academic coaches are carefully screened and highly trained, and they have experience working with children who have challenges with Executive Functioning. They all have master's degrees or doctorates in education, special education, school psychology, or speech and language pathology. Finally, our coaches are supervised and receive continuing professional development to ensure that we are providing the very best Executive Function coaching available anywhere in the world. Over the past 10 years, we have developed a comprehensive database with over 200 tools and strategies that we either created or researched and tested. We teach students how to use these tools and strategies (10-12, or however many your child needs to learn) to help them develop stronger Executive Function skills and better study skills so they can be more effective in school and achieve academic success.
Is your child struggling in school? Does it seem as though he or she might have some challenges with Executive Functioning? Please click below to take our Executive Function challenge assessment and get our preliminary recommendation.
During our research at sea we take measurements and collect data to help us determine a reef’s resilience. What is Reef Resilience? In a nutshell, resilience is the ability of an ecosystem, like a coral reef, to both resist change and recover from it. There are two components: - ecological resilience – which refers to the amount of disturbance a system can withstand without changing to an alternate stable state, e.g. changing from a coral-dominated reef to an algal reef. - engineering resilience – which refers to the reef’s ability to resist changes and the time required to return to its original equilibrium after the system is disturbed. How Do We Measure Reef Resilience? To measure resilience we seek to understand the properties of a healthy, stable coral reef without disturbances. This would be a coral reef with: - high coral cover and low levels of fleshy seaweed - a balanced community of fish and motile invertebrates on every level of the food web: healthy populations of detritivores (creatures like sea cucumbers that eat decomposing fragments of plants and animals), herbivores (plant-eating creatures like sea urchins, parrotfish and surgeonfish), invertebrate-feeding fish, and fish-eating fish like sharks and barracudas - low input of human pollutants, like nutrient and sediment runoff, which result in ocean pollution - few nuisance species (like crown-of-thorns starfish) - upstream sources of larvae to replenish populations What is a Disturbance? If this system is damaged by a short-term disturbance such as a hurricane, coral bleaching, or an outbreak of disease or coral predators such as crown-of-thorns starfish, it could recover. If the reef is healthy, with wide species diversity, it will be better able to resist these kinds of events and recover from them. However, depending on the magnitude and frequency of these disturbances, as well as the resistance of the reef, the system could reach a tipping point and most or all of the coral may disappear.
Ecological feedback loops can determine how the reef will fare in the future. Too many short-term disturbances that kill coral can allow algae to settle, reduce the coral’s ability to reproduce, and diminish the reef and the number of fish it can support. This downward spiral can lead to the reef changing from mostly coral to mostly algae. Alternatively, safeguarding healthy populations of reef species can maintain high reef resilience. For instance, algae-eating fish and invertebrates can prevent a shift to algal dominance. This creates space for new coral to grow, which in turn helps restore the structural complexity of the reef, providing habitat for other interdependent species.
Marie Curie Biography Marie Sklodowska, later called Marie Curie, was born in Warsaw, Poland in 1867. Both of her parents were teachers, and they didn’t have very much money when she was growing up. But Marie was very smart, and she wanted to study science from an early age. Marie Curie started out as a teacher, but she loved reading about science and was hoping to save enough money to go to university. When she was 24, she and her sister had saved enough money to move to Paris, which allowed her to enroll in Sorbonne University. She began studying mathematics and physics, and worked with a lot of other famous scientists at the university. At age 28, she married Pierre Curie, a professor in the Sorbonne physics department. Marie and Pierre Curie started working on their scientific research together in the School of Chemistry and Physics in Paris. They studied invisible rays that could come from the element uranium, and saw that the rays could pass through solid materials. Later, Marie Curie called these invisible rays “radioactivity.” She and Pierre realized in their research that there might be an element other than uranium that had even stronger radioactivity, so they decided to keep looking for it. Eventually, this search found something amazing. Marie and Pierre Curie discovered a brand new element that had never been found before. She called it Polonium, named after her home country Poland. Then, the discovery of Polonium helped them also find a second brand new element, which they called Radium. The discovery of both of these new elements earned Marie and Pierre Curie the 1903 Nobel Prize in physics. This Nobel Prize was a big deal because Marie Curie had just become the very first female Nobel Prize winner. Also in that same year, Marie Curie officially earned her Doctorate in physics. Sadly, Pierre passed away three years later, and Marie Curie continued his teaching career, making her the first female professor at Sorbonne University at age 39. 
At age 44, she earned her second Nobel Prize in 1911, this time in the field of chemistry, making her the first (and still only) woman to win a Nobel Prize in two different fields. Scientific work by Marie and Pierre Curie was a huge help for modern science and medicine. Because of them, x-rays became available for use in diagnosing and treating medical problems, including everything from broken bones to cancer. Amazingly, Marie Curie herself helped with medical aid during World War One; she made sure that ambulances contained the necessary x-ray equipment, even driving them to the battlefields herself. These acts earned her the position of head of radiology for the International Red Cross. She became so famous for her amazing work that she was invited to go on tours around the world to talk about her scientific discoveries. Her work became the basis of many other important scientific discoveries later on, as well. Marie Curie passed away in July 1934 of aplastic anemia, an unfortunate consequence of her prolonged radiation exposure. She is buried in the Pantheon, a mausoleum in Paris reserved for highly respected citizens. She was the first woman to earn this honor.
What is organic insect pest control? Organic pest control is a method in which beneficial bugs such as parasitic wasps, predatory insects and mites, and plant-based products are used for the control of insect pests of various crops. These biological and plant-based products are not harmful to humans, pets, wild animals or the environment. What are Aphids? Aphids are very small, pear-shaped, soft-bodied insects, about 1 to 10 mm long, with piercing-sucking mouthparts and long antennae and legs. Baby aphids are called nymphs and look like their parents. Over 4000 species of aphids have been described from all over the world; they differ in color from black, brown, red (Photo 1), green (Photo 2), to pink. Both adults and nymphs have two tubular structures called cornicles that project backward from the posterior end of their bodies and are used for the excretion of defensive fluids. Damage caused by Aphids Both adults and nymphs of aphids cause direct and indirect damage to many plant species, including ornamentals, field crops, greenhouse vegetables, fruits, weeds and grasses. They feed directly on succulent plant parts, including buds, flowers, twigs and leaves, using their piercing-sucking mouthparts. During direct feeding, aphids suck juice from tender plant tissues and cause symptoms like yellowing and curling of infested leaves (Photo 3), stunted plant growth and reduced crop yields. While feeding on the leaves, aphids also secrete honeydew (Photo 4), which stimulates the growth of black sooty mold (Photo 5) on plant surfaces and causes indirect damage to host plants. This black sooty mold can cover the entire leaf surface and interfere with photosynthesis, the process plants use to convert sunlight into chemical energy for the synthesis of their own food, such as carbohydrates and proteins.
The growth of the black sooty mold also reduces the quality of the produce and the aesthetic value of many ornamental plants. Transmission of virus diseases Aphids are vectors of many types of viruses, which they transmit from plant to plant. These viruses cause different diseases, including bean common mosaic virus, carrot virus Y, celery mosaic virus, cucumber mosaic virus, lettuce mosaic virus, papaya ringspot virus, potato virus Y, turnip mosaic virus, watermelon mosaic virus and zucchini yellow mosaic virus. The major symptoms of these virus diseases include yellowing and curling of leaves, and reduced plant growth and crop yields. Organic Control of Aphids with the parasitic wasp Aphidius colemani What are Aphidius colemani wasps? Aphidius colemani are tiny wasps, about 2-3 mm long, with a slender body, yellowish abdomen and legs, black head and thorax, and whitish wings. Adults of Aphidius colemani (Photo 6) parasitize aphids by laying eggs with their ovipositor inside the body of aphids. Within the aphid body, wasp eggs hatch into young larvae that feed on the body content of the aphid, complete their development, and pupate within the dead aphid bodies, which then turn into crispy mummies. Within 14-15 days, adult wasps emerge from the mummies and search for new aphid colonies to parasitize. These wasps are now commercially produced and sold as aphid mummies containing wasp pupae with ready-to-emerge adult wasps. Why are Aphidius colemani wasps used for the organic control of aphids? The parasitic wasp Aphidius colemani is used for the organic control of aphids because it can control over 40 species of aphids by killing them, while being harmless to workers, pets, wild animals and the environment. How does the Aphidius colemani wasp kill aphids? When adult Aphidius colemani wasps are released in aphid-infested organic gardens, greenhouses or fields, they start looking for suitably sized aphids for egg laying, using their antennae.
After an appropriately sized aphid is found, the female wasp lays an egg inside the aphid's body using her ovipositor. Eggs hatch within the aphid body into small young larvae that feed on the body content of the aphid, complete their development, kill the aphid, and pupate within the dead, mummified bodies of aphids (Photo 7). How should Aphidius colemani wasps be released in organic gardens? Aphidius colemani wasps are supplied as aphid mummies (Photo 7) with ready-to-emerge wasp adults (Photo 8). Adult wasps are allowed to emerge from the aphid mummies inside the vials, from which they can be easily released into organic gardens or greenhouses for the management of aphids by following the two simple steps given below. - First, take a vial containing adult wasps into the aphid-infested garden, open the vial, hold it at a 45° angle, and begin walking throughout the garden; while walking, keep tapping on the vial so that the adult wasps escape from the vial and spread evenly through the garden. Repeat this procedure until all wasps have escaped from the vials. If some mummies are still intact in a vial, tie the opened vial to a branch of a plant so that the adult wasps will escape from the vial as they emerge from the mummies. - A second way to release wasps in the garden is to tie an opened vial directly to the branch of a plant that is heavily infested with aphids (a hotspot) for 3-4 days. During this time, adult wasps will emerge from the mummies, escape from the vial, and seek out aphids in which to lay their eggs. Repeat this distribution method by moving and tying the vial at different locations within the garden until all wasps have emerged from the mummies and escaped into the garden. How many Aphidius colemani wasps should be released in organic gardens? Preventive treatment: release about 5 adult wasps per 100 square feet weekly. Curative treatment (in hot spots): release about 20-25 adult wasps per 100 square feet weekly.
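The release rates above translate into a simple weekly dose calculation: scale the per-100-square-foot rate by the garden's area and round up to whole wasps. A minimal sketch (the function name and the choice of the upper end of the curative range are my own assumptions; the rates themselves are the figures quoted above):

```python
import math

# Adult wasps per 100 sq ft per week, from the release rates above.
RATES = {
    "preventive": 5,
    "curative": 25,   # upper end of the 20-25 hot-spot rate
}

def wasps_needed(area_sqft, treatment="preventive"):
    """Weekly number of Aphidius colemani adults to release for a
    garden of the given area, rounded up to a whole wasp."""
    rate = RATES[treatment]
    return math.ceil(area_sqft / 100.0 * rate)

# A 250 sq ft garden needs ceil(250/100 * 5) = 13 wasps per week preventively.
```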
American elections have experienced several similar crises in which one of the major presidential candidates wins the popular vote and the other wins the electoral college — and the White House. This occurred in the Clinton v. Trump race of 2016 and the Bush v. Gore race of 2000, as well as in 1824, 1876, and 1888. While more Americans are coming to understand the anti-democratic nature of the electoral college, its origins are still largely unknown, particularly as they relate to race. The electoral college is an echo of white supremacy and the enslavement of blacks. As the Constitution was being drafted in the late 1780s, Southern politicians and slave-owners at the Constitutional Convention had a problem. If blacks weren’t counted as people, Northerners were going to get more seats in the House of Representatives, which were apportioned by population. Southern states had sizable populations, but large portions were disenfranchised slaves and freemen (South Carolina, for instance, was nearly 50% black). This prompted slave-owners, most of whom considered blacks by nature somewhere between animals and whites, to push for slaves to be counted as fully human for political purposes. They needed blacks to gain greater representative power for Southern states. Northern states, seeking an advantaged position, opposed counting slaves as people. This interesting reversal brought about the 3/5ths Compromise most of us know, which determined that an African American would be counted as 3/5ths of a person, boosting the presence and political influence of Southerners in Congress. The electoral college was largely a solution to the same problem. True, it partly served to keep power out of the hands of ordinary people and in the hands of elites, but race and slavery were undeniable factors. 
As the Electoral College Primer put it, Southerners feared “the loss in relative influence of the South because of its large nonvoting slave population.” They were afraid the direct election of the president would put them at a numerical disadvantage. To put it bluntly, Southerners were upset their states didn’t have more white people. For example, Hugh Williamson of North Carolina remarked at the Convention, during debate on a popular election of the president: “The people will be sure to vote for some man in their own State, and the largest State will be sure to succede [sic]. This will not be Virga. however. Her slaves will have no suffrage.” Williamson imagined that voters would favor candidates from their own state, giving states with high populations an advantage in choosing the president. But a great number of people in Virginia were slaves. Would this mean that Virginia and other states didn’t have the numbers of whites to affect the presidential election as much as the large Northern states? The principal architect of the Constitution, slave-owner and future American president James Madison, thought so. He said that “There was one difficulty however of a serious nature attending an immediate choice by the people. The right of suffrage was much more diffusive in the Northern than the Southern States; and the latter could have no influence in the election on the score of the Negroes. The substitution of electors obviated this difficulty…” Remember, at this time people largely viewed themselves as Virginians or New Yorkers first, Americans second. The nation was an alliance of rather independent states. A Virginian would want to make sure his state had as much power to choose the president as New York. But, looking around, there seemed to be just as many nonvoting blacks as voting whites in Virginia. Thus the fear that there existed a lack of voting whites, and thus the aversion to a popular vote. 
In hindsight, we know that these fears were unfounded, thanks to the first U.S. census in 1790, conducted a few years later. Virginia was the most populous state by far at the time, with nearly 700,000 people, nearly 300,000 of them slaves. But the state population excluding slaves still made Virginia one of the most populous states, rivaling, for instance, New York (340,000 people, 6.2% slave) and Pennsylvania (434,000 people, 1% slave). Smaller states like Georgia (83,000 people, 36% slave) or Delaware (59,000 people, 15% slave) would have been at a numerical disadvantage compared to smaller Northern states like Rhode Island (69,000 people, 1.4% slave), but such small gaps likely wouldn’t have justified vetoing the popular election of the president — plus, if “the largest State” was “sure to succede,” then the gap between the smaller states didn’t matter anyway. Regardless, the question for Southerners was: How could one make the total population count for something, even though much of the population (slaves, women, Indians, whites without property, and others) couldn’t vote? How could black bodies be used to increase Southern political power? Counting slaves helped put more Southerners in the House of Representatives, and now counting them — in an indirect election — would help put more Southerners in the White House. Led by Madison, Southerners pushed for an indirect system of electing the chief executive in which each state would appoint “electors” who would cast their votes for president. The number of electors would be based on how many members of Congress each state possessed — which, recall, was affected by counting a black American as 3/5ths of a person. While it changed as the nation grew, today we have 538 electors. Each state has one elector per representative in the House, plus two for the state’s two senators (435 + 100 + 3 for D.C. = 538). 
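The elector arithmetic described above can be sketched in a few lines of Python:

```python
# Electoral college arithmetic as described in the text:
# one elector per House seat, two per state for its senators,
# plus three for the District of Columbia.
house_seats = 435
states = 50
dc_electors = 3

total_electors = house_seats + 2 * states + dc_electors
majority_needed = total_electors // 2 + 1  # a candidate needs a strict majority

print(total_electors)   # 538
print(majority_needed)  # 270
```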
In this way, the number of electors was still based on population (not the whole population, as blacks were not counted as full persons), even though a massive part of the American population in 1787 could not vote. The greater a state’s population, the more electors it had and the more power to influence who won the White House. This worked out pretty well for the racists. “For 32 of the Constitution’s first 36 years, a white slaveholding Virginian occupied the presidency,” notes Akhil Reed Amar. The advantage didn’t go unnoticed. Massachusetts congressman Samuel Thatcher complained in 1803, “The representation of slaves adds thirteen members to this House in the present Congress, and eighteen Electors of President and Vice President at the next election.” Today, the electors are chosen by the political parties at state conventions, through committees, or by the presidential candidates. It depends on the state. The electors could be anyone really, but are usually involved with the parties, retired politicians, or just close allies. In 2016, electors include Bill Clinton and Donald Trump, Jr. When we go to vote on November 8, we’re not actually voting for the candidates. We’re voting on whether to award decision-making power to Democratic or Republican electors. 538 people will cast their votes, and the candidate who receives a majority of 270 votes will win. Once we determine which party’s electors get to have all the fun, the electors can essentially vote for whomever they want. The power is out of the people’s hands. Now, they were chosen specifically because of their loyalty, and “faithless electors” are extremely rare, but that doesn’t mean they will always vote for the candidate you elected them to vote for. There have been 85 electors in U.S. history who abstained or changed their vote on a whim. More had to change their votes after a candidate died. Now, the “worthlessness” of your vote depends on your state. 
The major problem of the electoral college is that all states except Maine and Nebraska award their electors on an all-or-nothing basis. As a candidate, winning by a single citizen vote grants you all the electors from the state. If X number of people vote Democrat in Texas and Y number vote Republican, and Y > X, the votes of everyone who voted Democrat are meaningless. It’s as if they didn’t vote at all. They are counted in the popular vote, but that doesn’t determine the next president. For Texas, 38 Republican electors would be given the power to cast their votes for the Republican candidate; the potential Democratic electors go home. There’s a reason Democrats don’t campaign in Texas and Republicans don’t campaign in California. Instead, they campaign in states where 1) you’re not sure what the voting populace is going to do and 2) a lot of electors are on the line. Unless you live in one of these “swing” states, like Ohio or Florida, your vote means nothing if you’re a political minority, a liberal in a red state or a conservative in a blue state. The electoral college takes away the voice of the minority, state by state. In swing states, where both parties have a fighting chance, your vote matters very much. In other states that consistently lean right or left, your vote matters very little — unless you can somehow turn a minority into a majority, which tends to only happen over long periods of time. As if that wasn’t absurd enough, it is entirely possible to win the presidency with just 21.8% of the popular vote. While extremely unlikely, it is possible. Not only is your vote worth less in “safe” states, it is worth less in bigger states, too. Recall that each state will have two electors (based on its two senators) and at least one more (based on how many House representatives the state has). This means that sparsely populated states, like Wyoming, have three electors for 585,000 people — or one elector for every 195,000 people. 
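The winner-take-all rule described above is easy to express in code. A minimal sketch follows; the vote totals are hypothetical, and Maine and Nebraska's district-based splitting is ignored:

```python
def allocate_electors(state_electors, votes_by_party):
    """Winner-take-all: the plurality winner receives every elector;
    all other parties' votes have no effect on the outcome."""
    winner = max(votes_by_party, key=votes_by_party.get)
    return {party: (state_electors if party == winner else 0)
            for party in votes_by_party}

# Hypothetical Texas-style result: even a one-vote win takes all 38 electors.
result = allocate_electors(38, {"Republican": 4_000_001, "Democrat": 4_000_000})
print(result)  # {'Republican': 38, 'Democrat': 0}
```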
A heavily populated state like California has 55 electoral votes and 39 million people, or one elector for every 709,000 people. This means small states actually have disproportionate power in the electoral college, and if a candidate swept the small and medium states, even while losing all the big ones, he or she could win with just 21.8% of the popular vote. It also just so happens that less populous states tend to be very white, and more populous states more diverse, meaning disproportionate white decision-making power. As Bob Wing writes, because “in almost every election white Republicans out-vote [blacks, most Democrats] in every Southern state and every border state except Maryland,” the “Electoral College result was the same as if African Americans in the South had not voted at all.” While the electoral college system made citizen votes in smaller states worth more than those in larger states, it cannot be said that strengthening smaller states was a serious concern at the Convention. Legal historian Paul Finkelman writes that in all the debates over the executive at the Constitutional Convention, this issue never came up. Indeed, the opposite argument received more attention. At one point the Convention considered allowing the state governors to choose the president but backed away from this in part because it would allow the small states to choose one of their own. In other words, they weren’t looking out for the little guy. Political scientist George C. Edwards III stresses, “Remember what the country looked like in 1787: The important division was between states that relied on slavery and those that didn’t, not between large and small states.” Even if the argument that “we need the Electoral College so small states can actually help choose the president” made sense in a bygone era where people viewed themselves as Virginians or New Yorkers, not Americans, it makes no sense today. 
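The per-capita disproportion follows directly from the figures quoted in the text, as a quick check shows:

```python
# Electors per resident, using the population figures quoted in the article.
wyoming_pop, wyoming_electors = 585_000, 3
california_pop, california_electors = 39_000_000, 55

wy_people_per_elector = wyoming_pop / wyoming_electors        # ~195,000
ca_people_per_elector = california_pop / california_electors  # ~709,000

# A Wyoming vote carries roughly 3.6x the electoral weight of a California vote.
print(round(ca_people_per_elector / wy_people_per_elector, 1))  # 3.6
```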
People now see themselves as simply Americans — as American citizens choosing an American president. Why should where you live determine the power of your vote? Why not simply have everyone’s vote be equal? Well, there remains strong support for the electoral college among conservatives because it aids Republican candidates like Bush and Trump. The popular vote, in an increasingly diverse, liberal country, doesn’t serve conservative interests. If Republicans (who have won the popular vote just once since 1992) lost presidential elections due to the electoral college after winning the popular vote, they’d perhaps see its unfair nature. Fortunately, the electoral college will one day be a thing of the past. 11 states, largely Democrat-controlled, have signed the National Popular Vote compact, “a deal wherein states commit to send their electoral votes to the presidential candidate who wins the popular vote — but only once states representing over half of all electoral votes adopt similar laws. Once that threshold is reached, the electoral college is effectively abolished, without a constitutional amendment” (Washington Post). These 11 states hold 165 electoral college votes, meaning states worth just 105 more votes need to join before the majority is captured. Then at last this anti-democratic vestige of slavery will disappear. The people will directly choose the president, and each vote will be worth much and worth the same. A popular vote is, after all, how all other political races in this country are determined, and is standard among the majority of the world’s democracies.
Middle School (6th–8th Grades) 8:00 a.m. to 2:45 p.m. Specialists—Art, Music, Physical Education, and Science Lab Biennial field trip to Washington, D.C., Living History Museum Language Arts: The four components are literature, grammar, vocabulary, and composition. Literature studies include works of fiction, non-fiction, and poetry. In grammar, parts of speech and punctuation are studied. For vocabulary, words from the students’ works of literature are used. Composition includes the various prose genres and poetry. Composition knowledge and skills are used in writing pieces for all subjects. Math: Middle school students study grade-level math, including pre-algebra and, when appropriate, algebra 1. Science: During the middle school years, students study astronomy, botany, physical and earth sciences, and life science. History: Old world history, new world history, and U.S. history are rotated during the middle school years. Government and geography are integrated into these classes. Each student writes, memorizes, and presents a historical impersonation of a famous person. Bible: Various topical and book studies are used. Students also memorize Scripture verses. Logic: As part of our classical education curriculum, logic is taught to eighth grade students. Logic class culminates in learning logical fallacies, giving students tools with which to evaluate what they hear and read in their daily lives. Music Appreciation: Sixth and seventh grade students learn about the various musical periods and learn to recognize certain pieces from them, including Baroque and Classical, eventually advancing to modern music. Art: Beginning in fifth grade and continuing through eighth grade, we explore art history and the various styles, movements, and artists that have left a lasting impression upon the world. Computers: Students perfect their keyboarding skills, learn proper use of the Internet, and learn to use various software programs, e.g., PowerPoint.
People have long dreamed of re-shaping the Martian climate to make it livable for humans. Carl Sagan was the first outside the realm of science fiction to propose terraforming. In a 1971 paper, Sagan suggested that vaporizing the northern polar ice caps would “yield ~10³ g cm⁻² of atmosphere over the planet, higher global temperatures through the greenhouse effect, and a greatly increased likelihood of liquid water.” Sagan’s work inspired other researchers and futurists to take seriously the idea of terraforming. The key question was: are there enough greenhouse gases and water on Mars to increase its atmospheric pressure to Earth-like levels? In 2018, a pair of NASA-funded researchers from the University of Colorado, Boulder and Northern Arizona University found that processing all the sources available on Mars would only increase atmospheric pressure to about 7 percent that of Earth – far short of what is needed to make the planet habitable. Terraforming Mars, it seemed, was an unfulfillable dream. Now, researchers from Harvard University, NASA’s Jet Propulsion Lab, and the University of Edinburgh have a new idea. Rather than trying to change the whole planet, what if you took a more regional approach? The researchers suggest that regions of the Martian surface could be made habitable with a material — silica aerogel — that mimics Earth’s atmospheric greenhouse effect. Through modeling and experiments, the researchers show that a two- to three-centimeter-thick shield of silica aerogel could transmit enough visible light for photosynthesis, block hazardous ultraviolet radiation, and raise temperatures underneath permanently above the melting point of water, all without the need for any internal heat source. The paper is published in Nature Astronomy. 
“This regional approach to making Mars habitable is much more achievable than global atmospheric modification,” said Robin Wordsworth, Assistant Professor of Environmental Science and Engineering at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Department of Earth and Planetary Science. “Unlike the previous ideas to make Mars habitable, this is something that can be developed and tested systematically with materials and technology we already have.” “Mars is the most habitable planet in our Solar System besides Earth,” said Laura Kerber, Research Scientist at NASA’s Jet Propulsion Laboratory. “But it remains a hostile world for many kinds of life. A system for creating small islands of habitability would allow us to transform Mars in a controlled and scalable way.” The researchers were inspired by a phenomenon that already occurs on Mars. Unlike Earth’s polar ice caps, which are made of frozen water, polar ice caps on Mars are a combination of water ice and frozen CO2. Like its gaseous form, frozen CO2 allows sunlight to penetrate while trapping heat. In the summer, this solid-state greenhouse effect creates pockets of warming under the ice. “We started thinking about this solid-state greenhouse effect and how it could be invoked for creating habitable environments on Mars in the future,” said Wordsworth. “We started thinking about what kind of materials could minimize thermal conductivity but still transmit as much light as possible.” The researchers landed on silica aerogel, one of the most insulating materials ever created. Silica aerogels are 97 percent porous, meaning light moves through the material, but the interconnecting nanolayers of silicon dioxide block infrared radiation and greatly slow the conduction of heat. These aerogels are used in several engineering applications today, including NASA’s Mars Exploration Rovers. “Silica aerogel is a promising material because its effect is passive,” said Kerber. 
“It wouldn’t require large amounts of energy or maintenance of moving parts to keep an area warm over long periods of time.” Using modeling and experiments that mimicked the Martian surface, the researchers demonstrated that a thin layer of this material raised average mid-latitude temperatures on Mars to Earth-like levels. “Spread across a large enough area, you wouldn’t need any other technology or physics, you would just need a layer of this stuff on the surface and underneath you would have permanent liquid water,” said Wordsworth. This material could be used to build habitation domes or even self-contained biospheres on Mars. “There’s a whole host of fascinating engineering questions that emerge from this,” said Wordsworth. Next, the team aims to test the material in Mars-like climates on Earth, such as the dry valleys of Antarctica or Chile. Wordsworth points out that any discussion about making Mars habitable for humans and Earth life also raises important philosophical and ethical questions about planetary protection. “If you’re going to enable life on the Martian surface, are you sure that there’s not life there already? If there is, how do we navigate that?” asked Wordsworth. “The moment we decide to commit to having humans on Mars, these questions are inevitable.”
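To illustrate why a centimeters-thick layer of such a poor conductor matters, here is a back-of-the-envelope steady-state conduction estimate. The conductivity and absorbed-flux values below are illustrative assumptions, not figures from the Nature Astronomy paper, and radiative losses are ignored, so the result is only a rough upper bound on the temperature difference the layer could sustain:

```python
# Steady-state 1-D conduction across a slab: delta_T = q * d / k
# Illustrative assumptions (not values from the paper):
k = 0.02   # W/(m*K), a typical thermal conductivity for silica aerogel
q = 100.0  # W/m^2, assumed absorbed solar flux at the Martian surface
d = 0.03   # m, the ~3 cm layer thickness discussed in the article

delta_T = q * d / k
print(delta_T)  # 150.0 K across the layer, an upper bound ignoring radiative losses
```

Even with generous simplifications, the estimate shows that a few centimeters of aerogel can in principle sustain a temperature difference of the order the researchers report.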
It’s rare that scientists see the good in invasive species. But at Washington University’s Tyson Research Center, researchers have discovered something positive about a non-native mosquito. Tyson’s 2,000 acres allow faculty, staff, and students from all over the globe to study a variety of environmental disciplines, including sustainable operations. Researchers there discovered that the native mosquitoes were susceptible to a species-specific parasite that greatly impaired their development. But when the non-native species was present, it reduced the prevalence of the parasite in the native mosquitoes from 72-90% to 13-27%. This occurred through something called “encounter reduction.” Simply put, the non-native species ingests the parasite before it reaches the native species. Kim Medley, director of the Tyson Research Center and a leading expert on mosquito research, notes that studies like these can broadly inform how infectious disease manifests and changes along with changes in biodiversity. The concepts can then be applied to numerous systems, including infectious disease in humans.
Soil erosion is a serious environmental problem around the world. The formation of gully systems is a sign of severe soil erosion, and gullies are an important sediment source in drylands; gullies have also been reported to account for a large portion of sediment yield in semi-arid regions. Gullies are typical erosion forms in semi-arid and arid landscapes all over the world, where high morphological activity and dynamics can be observed. Semi-arid climate conditions and precipitation regimes encourage soil erosion processes through low vegetation cover, uplift, and recurrent heavy rainfall events. Gully erosion is a threshold phenomenon and occurs only when a threshold in terms of flow hydraulics, rainfall, topography, pedology, or land use has been exceeded. The northeastern slopes of Sahand Mountain are severely degraded by rill and gully erosion. In headwater streams in steep-land settings, narrow and steep valley floors provide closely coupled relationships between geomorphic components, including hillslopes, tributary fans, and channel reaches. These relationships, together with small catchment sizes, result in episodic changes in the amount of sediment stored in channels. Erosion rate estimates are potentially strongly influenced by the estimation method. A total of 11 gullies with various soils and land use types were investigated in the Ojan catchment, where field data on gully channel geometry were collected. The obtained data confirmed the existence of the power relationship for rills and gullies, with the exponent varying from 0.44 (for small gullies) to 0.5 (for gullies). The data did not allow deciding whether the exponent varies consistently with channel width or in a step-wise fashion. Annual sediment yield from gully complexes was derived based on their area using empirical equations obtained in the same rock formation; yields vary by formation in headwater catchments of the Ojan River. Major sediment inputs follow high-magnitude events. 
As headwater catchments are major sediment sources, interpretation of sediment delivery processes in these settings is a critical consideration in our understanding of basin-scale sediment dynamics. Analysis of these geomorphic features in steep headwater catchments can also be used to characterize the episodic manner of sediment delivery processes. Narrow valley floors in the study area allow sediment to be transferred directly from outside the channels to inside them. A data set on soil losses and controlling factors for 11 ephemeral gullies was collected on the northeastern slopes of Sahand Mountain; the observed ephemeral gullies developed on slopes. Analysis shows that E is capable of predicting ephemeral gully cross-sections well. Rather than demonstrating E’s ability to predict ephemeral gully erosion, this analysis stresses the problematic nature of physically based models, since they often require input parameters that are not available or can hardly be obtained. With respect to the value of simple topographical and morphological indices in predicting ephemeral gully erosion, this study shows that, for the two gully groups respectively, over 80% and about 75% of the variation in ephemeral gully volume can be explained when ephemeral gully length is known. Moreover, when combining previously collected data for ephemeral gullies in the study areas with the data for U-shaped gullies, a single length–volume relation appears to apply. A simple procedure to predict ephemeral gully length based on topographical thresholds is presented here. Secondly, the empirical length–volume relation can also be used to convert ephemeral gully length data extracted from aerial photos into ephemeral gully volumes. The evidence showed that gullying was controlled by faulting and uplift along the slope. The deep gullies clearly plot higher than the shallow gullies, which is also reflected by the intercept of the minimal topographical threshold line. 
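The empirical length–volume conversion mentioned above has the general power-law form V = a·L^b. A minimal sketch follows; the coefficients are hypothetical placeholders for illustration only, since the abstract does not report the fitted values:

```python
def gully_volume_from_length(length_m, a=0.05, b=1.3):
    """Convert ephemeral gully length (e.g. measured from aerial photos)
    into an eroded-volume estimate via an empirical power law V = a * L**b.
    The coefficients a and b here are hypothetical placeholders; in practice
    they are fitted to field measurements for each study area."""
    return a * length_m ** b

# Example: volume estimate for a 120 m gully under the placeholder coefficients.
print(round(gully_volume_from_length(120.0), 1))
```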
Two bank gullies representing different morphological types (V-shaped and U-shaped) were chosen from a dataset of gully systems in the semi-arid region. Gully erosion generates significant volumes of sediment that are delivered to waterways throughout the world. Quantifying gully erosion rates and associated sediment yields is critical for the effective prioritisation of management efforts aimed at reducing the environmental impact of gully erosion. Soil fabric, rock structure, and weathering studies were undertaken to establish the inheritance of soil cracks from the underlying parent material. Two mechanisms of gully development appear to occur in upland catchments. The first produces stream-coupled gullies resulting from lateral channel migration and erosion induced at the base of the hillslopes. The second produces more extensive gully networks that are often initiated in the mid-slope. In this context, gullies link hillslopes and channels, functioning as sediment sources, stores, and conveyors. Reviews of gully erosion studies in semi-arid and arid regions conclude that gullies contribute an average of 50 to 80% of overall sediment production in dryland environments.
You have often heard it said that the Form of the Good is the greatest thing to learn about, and that it is by their relation to it that just things and [other virtuous things] become useful and beneficial (Republic, 505a). Plato’s Republic is a wide-ranging tract, admired for its depth, nuance, and ambition. Plato sets himself to answering two questions: What is justice? and Is the just or unjust life better for a person? In the process of answering these questions, he defends a sublime theory of the nature of reality and human knowledge. The ultimate foundation of Plato’s metaphysics—his view of reality—is his theory of Forms, culminating in the Form of the Good. Few, says Plato, really understand the nature of the Good itself (505e). 1. The Sun Analogy The Form of the Good sits atop Plato’s hierarchy of being as the ultimate Form. The Forms themselves are abstract, although they do inform the concrete world, and Plato frequently relies on metaphor to describe them. To understand the Good itself, Plato relies on an analogy with the sun. There are visible objects, which are visible but not intelligible in themselves. (Plato’s central concern is that the world of material objects is shifting, deceptive, and unreliable.) Then there are the Forms themselves, which are intelligible but not visible (507b). The Form of the Good, Plato says, is to the intelligible realm as the sun is to the visible realm. In the visible realm, there is a need of “something else” to make things visible, namely, the sun (507d). We need sight in ourselves and color in objects, but we also need the sun, or light, to make those things really visible, detectable by us. Sight receives its power to see from the sun, as if from an overflowing treasury. And sight is the most “sunlike” of the senses, i.e., it and the sun have a kind of affinity or compatibility. The sun and our sense of sight go together. 
Likewise, the intelligible realm receives its order and intelligibility from the Form of the Good. Without the Form of the Good, we would be like people fumbling in the dark, with a capacity to understand but no “third thing” to render the world intelligible. Plato says that the sun is the “cause” of the visible realm. The connection here may seem tenuous, but this much is clear: without the sun, most or all of the things on the earth would die out. This is presumably what Plato means. In the intelligible realm, the Form of the Good plays the same role: it is not only the reason for the intelligibility of the Forms, but the source of their existence as well. Though, as Plato says, the existence that it enjoys is “beyond being, superior to it in rank and power” (509b). Plato’s mysticism—his conviction that there are superior forms of existence—inherited from Pythagoras, is here on full display. 2. A Divine Order So, the Form of the Good is more real, even, than the rest of the Forms: the realest and most fundamental thing that exists, the cause of the Forms and the explanation of the rational order of the universe. It resembles a divine logos, or divine rationality, which became an object of worship for successive schools of philosophy that developed under the influence of Plato’s ideas. Nowadays, we might compare the Form of the Good to laws of nature, though this is not fully satisfying, since the Form of the Good is not a particular law of nature, but the reason why there are laws at all. Stephen Hawking famously quipped that we should ask not only what the equations governing the universe are, but also “what breathes fire into the equations?” For Plato, both the equations and the fire are the Form of the Good. 3. The Forms and Human History We can gauge the significance of Plato’s contributions to humanity by his influence on the history of thought. 
Plato’s was the first major metaphysical system in the West, and it dominated Western thought through the middle of the second millennium. Consider the subject of mathematics and geometry. What is a point? It is a location in space with no dimension. In other words, it is not a real object. Points are ideal entities, not space-time particulars. They take up no space. Likewise, lines have length but no breadth. Mathematics is about ideal entities, and some mathematicians today are still “Platonists” about numbers: they hold the view that numbers or other mathematical objects are immaterial things. And they have to be in order for us to be able to know eternal truths about them. If we live in a rationally ordered cosmos, this helps underwrite a social order that is rigidly hierarchical. It is no surprise, then, that through the Middle Ages humans organized themselves into strict hierarchies. We find a hierarchical church and a stratified social structure, with serfs serving the king and the king serving God. Consider Plato’s influence on theology: The Form of the Good is the ground of all being, an immaterial object that exists more perfectly than anything else, a thing responsible for the goodness and rationality in the world. This is something like an interpretation of the Christian view of God developed in the Middle Ages, founded in Platonic and Neo-Platonic metaphysics. Perhaps most importantly, Plato’s arguments in Republic make possible scientific inquiry. Science is only possible if the natural world is intelligible to our rational faculties. Many people credit Plato’s student Aristotle with the initiation of the scientific project of humanity, and many in turn credit the scientific method as the West’s most profound contribution to humanity. Republic is, first and foremost, an argument about the ideal structure of a city. Notoriously, Plato installs philosopher-kings as a benevolent council. 
If the rulers of the city are to make themselves, their citizens, and their city good, they must first know Goodness itself. For example, a concrete table is a table rather than, say, a chair or a dog, because it participates—albeit partially—in the Form of table-hood. Concreta are what they are because of the Forms they participate in, and the Forms are ontologically and explanatorily prior to concreta.

About the Author

Dr. Ryan Jenkins is an assistant professor of philosophy and a senior fellow at the Ethics + Emerging Sciences Group at California Polytechnic State University in San Luis Obispo. He studies the ethics of emerging technologies, especially automation, cyber war, autonomous weapons, and driverless cars. His work has appeared in journals such as Ethical Theory and Moral Practice and the Journal of Military Ethics, as well as public fora including Slate and Forbes. http://calpoly.academia.edu/RyanJenkins
The most difficult sounds to pronounce are typically the ones that do not exist in your native language (or in languages whose sounds you have already mastered). For English speakers these include the umlauted vowels ö and ü. Fortunately, there is a very effective method you can use for arriving at these sounds. To pronounce the ö-sound, say “ay” as in day (or as in the German word See). While continuing to make this sound, tightly round your lips. Look in a mirror to make sure your lips are actually rounded. Voilà! The resulting sound is the ö-sound. A similar method results in the ü-sound. Say “ee” as in see (or as in the German word vier). Again, while saying the sound, round your lips. The resulting sound is the ü-sound. As with any unfamiliar sound, pronouncing ö and ü correctly will come with repeated practice. After you find the correct mouth position using the tips above, practice reading words containing these characters aloud. Below are several audio files for you to listen to, along with two lists of commonly used words to get you started.

[Word-list tables with audio appeared here, contrasting the short and long o- and ö-sounds, the short and long u- and ü-sounds, and short versus long vowels.]
In the fifth grade, students are learning how to conduct their own science experiments and report their findings to learn more about the natural world, including physical and chemical changes. Fifth-graders don't need expensive microscopes, glassware sets or intricate models to put together an interesting and educational science project. They can use many of the items they have around their houses to make scientific discoveries. Apples can be more than just a sweet and delicious snack for after school; they can take center stage in a science project to help fifth-graders learn about chemical reactions, anatomy and natural sciences. According to Education.com, our senses of taste and smell are so closely linked that 70 percent to 75 percent of what we taste is influenced by what we smell. To experiment with tastes, students should cut up one or two apples into slices and then put out a few other flavorings and foods, such as vanilla extract, cinnamon, an onion or some cooked broccoli. Students should close their eyes, then smell one of the items while eating a slice of the apple. Students are likely to be surprised to find that the apple tastes like the item they are smelling. Students should experiment with different types of scents and different types of apples to see how the results may vary. The experiment can help students understand how the senses are linked and lead them to investigate how other processes in the body are similarly linked. Understanding pH and Taste Apples come in both sweet and sour varieties, and there is a scientific reason why some are sweeter than others. Fifth-graders can conduct a simple science project to learn more about the chemical composition of apples and how it influences their taste. Students should gather several types of apples, including red, yellow and green varieties, and cut them in half. After laying the apples with the cut side up, students should lay a pH testing strip on the meat of the apple.
These strips can often be purchased at home supply stores. Sour apples have a higher acid content (a lower pH), while sweeter apples are less acidic. Students should test several varieties to see how the pH varies with an apple's color and taste. The experiment can help students understand the pH scale and how concepts like acid and alkaline can affect them in their everyday lives, such as their diets and their taste preferences. Keeping Apples Fresh When you cut open an apple, it doesn't take long for it to start turning brown. Students can experiment with how different substances slow that browning process. To conduct the experiment, students should cut a couple of apples into several slices. Students should then place milk, orange juice, water and lemon juice each in a small, shallow dish. Students should put a couple of slices of apples in each bowl and then leave a couple of slices on a paper towel exposed to the air. For each dish, students should note how long it takes for the apples to brown. This can teach students about the process of oxidation and what substances slow it. Refrigerator vs. Cabinet Most people don't keep their apples in a bowl of lemon juice to keep them fresh. However, other storage methods can work to keep apples fresh longer. In this science project, students place one or two apples in the refrigerator, one or two apples in the cabinet, and one or two apples in a brown bag on the counter. Students watch the apples over a period of several weeks to see how long it takes for each to deteriorate. Apples placed in the refrigerator are likely to stay fresh longer since the cold air slows the ripening process. This science project helps students learn about the chemical processes that take place in apples, which can lead to exploration of such processes in other fruits and produce.
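For older students (or curious teachers), the relationship between pH and hydrogen-ion concentration can be sketched in a few lines of Python. The apple pH figures below are rough, typical values chosen for illustration, not measurements from this project:

```python
# pH is the negative base-10 logarithm of the hydrogen-ion concentration:
# values below 7 are acidic, values above 7 are alkaline.

def hydrogen_ion_concentration(ph):
    """Moles of H+ per litre for a given pH."""
    return 10 ** -ph

def classify(ph):
    if ph < 7:
        return "acidic"
    if ph > 7:
        return "alkaline"
    return "neutral"

# Rough, typical pH values for two apple varieties (illustrative only):
granny_smith, red_delicious = 3.3, 4.2
print(classify(granny_smith), classify(red_delicious))   # acidic acidic

# Because the scale is logarithmic, one pH unit means a tenfold
# difference in hydrogen-ion concentration:
ratio = hydrogen_ion_concentration(3.3) / hydrogen_ion_concentration(4.3)
print(round(ratio))   # 10
```

Note that both apples come out acidic: the testing strips distinguish more acidic from less acidic apples, not acidic from alkaline ones.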
The latest news about environmental and green technologies – renewables, energy savings, fuel cells Posted: Mar 12, 2013 Catalysts that produce 'green' fuel (Nanowerk News) The energy produced by solar panels, be it heat or electricity, has to be used right away. It is hard to store and preserve, and its transportation can be rather complicated. Creating solar cells capable of producing energy in an easily storable and transportable way, that is to say fuel, is therefore the future challenge of solar energy. For this reason the scientists at SISSA are working on a catalyst that imitates and improves what nature has been able to do for millions of years. Plants turn solar energy into sugars, the true “green” fuel, through photosynthesis. In such a process a key role is performed by catalysts, molecules that “cut and paste” other molecules, and that in this specific case oxidize water, that is to say separate the hydrogen from the oxygen. Hydrogen (already a fuel itself, yet very hard to handle) is used at a later stage in the synthesis processes that produce sugars from hydrogen and carbon atoms. But scientists are seeking to obtain artificially the same type of process by using inorganic catalysts, which are faster and more resistant than natural ones (which are very slow: just think of how much time a tree needs to grow). Effective yet costly and limited materials already exist in nature. “The crucial part of artificial photosynthesis is water oxidation. We have simulated the way a molecule of Ru4-polyoxometalate (Ru4-POM) functions in this process. Such a complex reaction requires catalysts just like the natural process does”, explains Simone Piccinin, a researcher of SISSA and of Istituto Officina dei Materiali (CNR-IOM) and lead author of the paper.
Ru4-POM was chosen because its effectiveness had already been demonstrated on previous occasions in experiments carried out by the group of ITM-CNR and of Università di Padova, which was the first to synthesize the molecule and which has also taken part in this research. “What was still missing was the comprehension of the process, so we have accordingly reproduced the electronic behavior of the molecule through numeric simulations,” underlines Stefano Fabris of SISSA and of CNR-IOM, who has coordinated the theoretical work published in Proceedings of the National Academy of Sciences ("Water oxidation surface mechanisms replicated by a totally inorganic tetraruthenium–oxo molecular complex"). “We have thus observed that the active sites of the new molecule, that is to say those that convey the reaction, are four atoms of ruthenium.” “Ruthenium is costly and rare, but now that we know how the atoms that cause the oxidation process have to be arranged, we may replace them one by one with cost-effective elements, trying to obtain the same level of effectiveness as ruthenium,” concluded Fabris. Besides SISSA, CNR-IOM and Università di Padova, Elettra Sincrotrone Trieste has also taken part in the study.
The Physics Philes, lesson 32: Save the Momenta! In which laws of motion are revisited, momentum is conserved, and forces are classified. Last week was a big week for me on my journey to understanding physics. I started trigonometry, my first for-real math class in 10 years. Ten years. So I was a little busy doing further review and readings for class. Nevertheless, I did manage to wedge a little physics into my study time. This week I present to you a quick post on conservation of momentum. We know from the last couple of weeks what momentum is (at least, I hope we do). But what happens if we have two interacting bodies? Let's enter our mind labs for a quick thought experiment. Let's imagine that we have two particles floating out in space that touch each other. According to Newton's third law of motion, the forces the particles exert on each other are equal in magnitude and opposite in direction. That means that the impulses and changes in momentum are also equal and opposite. That's pretty easy to understand. Let's say that our two hypothetical particles form a system. The forces the particles exert on each other are called internal forces. Forces exerted on the particles by some object outside the system are called external forces. When there are no external forces, we say the system is an isolated system. In our hypothetical two-particle system, there are no external forces, so it's isolated. Newton's second law of motion in terms of momentum says that the net force acting on a particle equals the time rate of change of momentum of the particle. In math it looks like this:

ΣF = dp/dt

The momentum of each particle can change, but the changes will be related to each other by Newton's third law of motion. Since the forces will always be equal in magnitude and opposite in direction, the sum of those forces will be zero. To wit:

F(B on A) + F(A on B) = dp_A/dt + dp_B/dt = 0

The rate of change of the momenta is equal and opposite, so the rate of change of the vector sum of the momenta of the two particles is zero.
The total momentum of the system is the vector sum of the momenta of the individual particles, indicated by a capital P. The time rate of change of the total momentum P is zero, so the total momentum of the system is constant, even if the individual momenta of the particles that make up the system change. If there are external forces involved, they must be added along with the internal forces. In that case, the total momentum will not, in general, be constant. However, if the vector sum of the external forces is zero, then the forces don't contribute to the sum and we're back to zero again. More officially, if the vector sum of the external forces on a system is zero, the total momentum of the system is constant. This, boys and girls, is the simplest form of the principle of conservation of momentum. And you know what's pretty sweet about it? We don't really have to know much about the internal forces that act in a system. We can still apply this principle. Kinda cool, right? That is basically all I know about conservation of momentum. Next week we'll do a sample problem or two so we can see how it applies. Until then, if you spot a mistake or something that needs clarification please leave a comment. Featured image credit: mikemol
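Not part of the original post, but a quick numerical sketch can make the principle concrete: in a one-dimensional elastic collision between two particles (with made-up masses and velocities), each particle's momentum changes while the total does not.

```python
# One-dimensional elastic collision between two particles in an
# isolated system (masses and velocities are made-up numbers).

def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities of a perfectly elastic head-on collision."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

m1, v1 = 2.0, 3.0    # kg, m/s
m2, v2 = 1.0, -1.0

p_before = m1 * v1 + m2 * v2          # total momentum P before the collision
v1f, v2f = elastic_collision_1d(m1, v1, m2, v2)
p_after = m1 * v1f + m2 * v2f         # total momentum P after

# Each particle's momentum changed, but the vector sum did not:
print(p_before, p_after)  # both approximately 5.0
```

No external forces appear anywhere in the calculation, which is the point: the total stays fixed regardless of the details of the internal interaction.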
- random number generators - A device used to produce a selection of numbers in a fair manner, in no particular order and with no favor being given to any numbers. Examples include dice, spinners, coins, and computer programs designed to randomly pick numbers (cf Random Number Generators Discussion).
- range - The range of a set of numbers is the largest value in the set minus the smallest value in the set. Note that the range is a single number, not many numbers.
- rational number - A rational number is a number of the form a/b where a is called the numerator and b is called the denominator.
- range of the function f - The set of all the numbers f(x) for x in the domain of f.
- ray - A straight line that begins at a point and continues outward in one direction (cf Lines, Rays and Planes).
- real numbers - Real numbers can be thought of as all the points falling along the number line in the coordinate plane (cf Two Variable Functions Discussion).
- rectangle - A parallelogram with four right angles (cf Rectangle Discussion).
- recursion - Given some starting information and a rule for how to use it to get new information, the rule is then repeated using the new information (cf Recursion Discussion).
- reflect - In a tessellation, reflect means to repeat an image by flipping it across a line so it appears as it would in a mirror (cf Translations, Reflections, and Rotations, Symmetry in Tessellations).
- regular fractals - see fractal (cf Plane Figure Fractals Discussion).
- regular polygon - A polygon whose side lengths are all the same and whose interior angle measures are all the same (cf Polyhedra Discussion).
- relative frequency - Relative frequency is the number of items of a certain type divided by the number of all the numbers being considered.
- remainder - After dividing one number by another, if any amount is left that does not divide evenly, that amount is called the remainder. For example, when 8 is divided by 3, three goes into eight twice (making 6), and the remainder is 2.
When dividing 9 by 3, there is no remainder, because 3 goes into 9 exactly 3 times, with nothing left over (cf What are Remainders Discussion).
- rhombus - A parallelogram with four congruent sides (cf Parallelograms Discussion).
- right angle - An angle of 90 degrees (cf From Geometry to Probability Discussion, Rectangle Discussion).
- right triangle - A triangle containing an angle of 90 degrees (cf What is the Pythagorean Theorem).
- rotate - To rotate an object in a tessellation means to repeat the object by spinning it on a point a certain angle (cf Translations, Reflections, and Rotations, Symmetry in Tessellations).
- rule of multiplication of probabilities for simultaneous events - When finding the probability of two independent events (two things happening where the outcomes are not affected by each other), multiply the probabilities of each event happening to get the probability of both events happening. For example, to find the probability of getting "heads" and then "tails" when flipping a coin twice, multiply the probability of getting heads once by the probability of getting tails once (cf Probability of Simultaneous Events Discussion).
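Several of the definitions above are simple enough to check with a few lines of Python (the numbers below are arbitrary examples, not from the glossary):

```python
data = [3, 8, 1, 9, 4]

# range: the largest value in the set minus the smallest (a single number)
value_range = max(data) - min(data)          # 9 - 1 = 8

# remainder: what is left over after whole-number division
remainder = 8 % 3                            # 3 goes into 8 twice, leaving 2

# relative frequency: items of one type divided by all items considered
flips = ["H", "T", "H", "H"]
rel_freq_heads = flips.count("H") / len(flips)   # 3 / 4 = 0.75

# multiplication rule for simultaneous independent events:
# P(heads, then tails) = P(heads) * P(tails)
p_heads_then_tails = 0.5 * 0.5               # 0.25

print(value_range, remainder, rel_freq_heads, p_heads_then_tails)
```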
Is your child ready for the first grade and, just as importantly, are you? Each child develops at their own pace, but educators agree that having certain social and academic skills makes going to school easier for first graders. Here is a basic guideline of the skills your child should have. If you feel they need work in an area, utilize the summer break to get them up to speed. If your child has been attending kindergarten, ask his or her teacher what their thoughts are. They will be able to advise you on areas that need attention and will be able to suggest activities that will help develop the skills your child needs to make the change to first grade as seamless as possible.

First graders can be shy, but they must be able to make friends and converse with teachers and students. This will enable them to work, play and share with their classmates. They must be able to follow instructions and use equipment (like scissors and glue) safely and carefully. Other essential social skills are the ability to use words to resolve conflicts and to seek the help of adults with tasks they are struggling with.

Students should understand the concept of writing. This means that they can use pictures to tell a story or shapes and letters to represent ideas. They should be able to recognise the letters of the alphabet and read and write their own names. They must be able to hold a pen or pencil correctly and demonstrate an understanding of left-to-right and top-to-bottom progression.

Reading and comprehension skills revolve around stories. If you read to your students regularly and they show an independent interest in books, then you are already most of the way there! Students should show an interest in books which are read aloud and be able to tell simple stories of their own. They should be able to recognize the letters of the alphabet and their associated phonetic sounds. They should also be able to distinguish between capital and lower case letters.
First graders should also be able to recognise rhyming patterns and add to them. For example: if you say cat, hat, bat, they should be able to add another rhyming word. Students should also be able to sort words into categories like clothing, animals, colors, etc. A basic knowledge of simple punctuation is also a good asset for the future reader. Prepare your child for the first grade by encouraging them to learn rhymes, songs and poems. They should be able to tell short stories of their own with a logical plot and a beginning, a middle and an end. They should be able to distinguish between the way they speak to their friends and the more formal way they speak to teachers. They should be able to follow instructions, ask questions and make requests. Speaking with kindergarten teachers, tutors and other caregivers will help you form a comprehensive idea of what your child should be capable of. You can ask their first grade teacher about their progress in the first week of school. Your first grade teacher is the best person to advise you on areas that your child needs to work on.
In order for fertilization to occur, sperm cells must be able to reach eggs. The sperm cells of a flowering plant are contained in pollen grains. Pollination occurs when pollen grains are transported from anthers to stigmas. After the pollen lands on the stigma, a tube grows from the pollen grain down through the style to the ovary. Inside the ovary are ovules. Each ovule contains an egg. Sperm cells within the pollen grain move down the pollen tube and into the ovule. Fertilization occurs as one of the sperm cells fuses with the egg inside the ovule.
Ultra, Allied intelligence project that tapped the very highest level of encrypted communications of the German armed forces, as well as those of the Italian and Japanese armed forces, and thus contributed to the Allied victory in World War II. At Bletchley Park, a British government establishment located north of London, a small group of code breakers developed techniques for decrypting intercepted messages that had been coded by German operators using electrical cipher machines, the most important of which were the Enigma and, later in the war, the sophisticated Tunny machine. The flood of high-grade military intelligence produced by Bletchley Park was code-named Ultra (from “Top Secret Ultra”). According to some experts, Ultra may have hastened Germany’s defeat by as much as two years. Every day the German military transmitted thousands of coded messages, ranging from orders signed by Adolf Hitler and detailed situation reports prepared by generals at the front line down through weather reports and supply ship inventories. Much of this information ended up in Allied hands, often within hours of being transmitted. The actual texts of the deciphered messages—the “raw decrypts”—rarely left Bletchley Park. Instead, analysts there sifted the decrypts and prepared intelligence reports that carefully concealed the true source of the information. (Nevertheless, the entire Ultra operation was endangered by John Cairncross, a member of the British Foreign Office assigned to Bletchley Park who smuggled Tunny and Enigma decrypts out to Soviet agents in 1943.) The Enigma machine, which combined electrical and mechanical components, was descended from a number of designs that were submitted for patent as early as 1918 in Germany and were produced commercially beginning in the early 1920s. Looking rather like a typewriter, it was battery-powered and highly portable. 
In addition to a keyboard, the device had a lamp board consisting of 26 stenciled letters, each with a small lightbulb behind it. As a cipher clerk typed a message on the keyboard in plain German, letters were illuminated one by one on the lamp board. An assistant recorded the letters by hand to form the enciphered message, which was then transmitted in Morse Code. Each bulb in the lamp board was electrically connected to a letter on the keyboard, but the wiring passed via a number of rotating wheels, with the result that the connections were always changing as the wheels moved. Thus, typing the same letter at the keyboard, such as AAAA..., would produce a stream of changing letters at the lamp board, such as WMEV…. It was this ever-changing pattern of connections that made Enigma extremely hard to break. The earliest success against the German military Enigma was by the Polish Cipher Bureau. In the winter of 1932–33, Polish mathematician Marian Rejewski deduced the pattern of wiring inside the three rotating wheels of the Enigma machine. (Rejewski was helped by photographs, received from the French secret service, showing pages of an Enigma operating manual for September and October 1932.) Before an Enigma operator began enciphering a message, he set Enigma’s three wheels (four in models used by the German navy) to various starting positions that were also known to the intended recipient. In a major breakthrough, Rejewski invented a method for finding out, from each intercepted German transmission, the positions in which the wheels had started at the beginning of the message. In consequence, Poland was able to read encrypted German messages from 1933 to 1939. In the summer of 1939 Poland turned over everything—including information about Rejewski’s Bomba, a machine he devised in 1938 for breaking Enigma messages—to Britain and France. 
In May 1940, however, a radical change to the Enigma system eliminated the loophole that Rejewski had exploited to discover the starting positions of the wheels. New methods developed at Bletchley Park during 1940 enabled code breakers there to continue to decipher German air force and army communications. However, German naval messages—including the all-important traffic to and from U-boats in the North Atlantic—remained cloaked. (The Poles too had had little success against naval Enigma.) U-boats were sinking such a large number of merchant ships taking food, munitions, and oil to Britain from North America that by 1941 some analysts were predicting that the sinkings would tip Britain into starvation within a few months. In June 1941 British mathematician Alan M. Turing and his group at Bletchley finally succeeded in breaking into the daily communications of the U-boats. Decoded messages revealed the positions of the submarines, enabling ships to avoid contact. Great care was always exercised to conceal the fact that Bletchley had deciphered these messages. For instance, British intelligence leaked false information hinting at revolutionary new developments in long-range radar. Turing was responsible for another major development in breaking Enigma. In March 1940, Turing’s first Bombe, a code-breaking machine, was installed at Bletchley Park; improvements suggested by British mathematician Gordon Welchman were incorporated by August. This complex machine consisted of approximately 100 rotating drums, 10 miles of wire, and about 1 million soldered connections. The Bombe searched through different possible positions of Enigma’s internal wheels, looking for a pattern of keyboard-to-lamp board connections that would turn coded letters into plain German. The method depended on human instinct, though; to initiate the process, a code breaker had to guess a few words in the message (these guessed words were called a crib). 
The Polish Bomba, a simpler 18-drum machine, was a forerunner of the Bombe, but it was based on Rejewski’s method for finding the wheel positions at the start of the message. Unlike Rejewski’s method, the more powerful crib-based method invented by Turing survived the May 1940 change. The war on Enigma was transformed by the high-speed Bombes, and the production of Ultra grew as more of them were installed in Britain and the United States.
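To make the mechanism concrete, here is a toy, self-invented rotor cipher in Python. It is vastly simpler than the real Enigma (no plugboard, no reflector, only one wheel), but its single stepping wheel shows why typing the same letter repeatedly produces a stream of changing letters, and a guessed crib lets a brute-force search recover the secret starting position, in the spirit of the Bombe. The wiring string used is the historically documented wiring of Enigma's rotor I; everything else is an illustrative simplification.

```python
import string

ALPHA = string.ascii_uppercase
WIRING = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"  # historical wiring of Enigma rotor I

def encipher(text, start):
    """Encipher TEXT with one stepping rotor, beginning at position START."""
    out = []
    for i, ch in enumerate(text):
        shift = (start + i) % 26                   # the wheel advances on every keypress
        x = (ALPHA.index(ch) + shift) % 26         # enter the rotor at the shifted contact
        y = (ALPHA.index(WIRING[x]) - shift) % 26  # pass through the wiring and exit
        out.append(ALPHA[y])
    return "".join(out)

# Typing the same letter over and over yields a stream of changing letters:
print(encipher("AAAA", start=3))

# Crib attack in the spirit of the Bombe: guessing that the plaintext was
# "WETTERBERICHT" ("weather report", a classic crib), try every starting
# position until one reproduces the intercepted ciphertext.
secret_start = 17
ciphertext = encipher("WETTERBERICHT", secret_start)
recovered = next(s for s in range(26)
                 if encipher("WETTERBERICHT", s) == ciphertext)
print(recovered)  # 17
```

The real machine had three or four wheels (26^3 or more positions) plus a plugboard, which is why the search had to be mechanized in hardware rather than done by hand.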
Lessons - Elementary school

Provide students with an understanding of the importance of shelter as a basic human need. Help them develop an understanding of the wide-ranging effects of homelessness and the lack of proper housing and why some people need housing assistance. With this understanding, students will be more likely to empathize with people who live in these situations and develop the desire to give back to the community. You will need to have Adobe Reader installed to view these lessons. Included in this curriculum package are three teaching units for grades 3–5. The units have been designed to help students deepen their understanding of poverty housing as they progress through each unit. Unit 1: What Is Home? In Unit 1, students investigate and answer questions about the meaning of a home, essential characteristics of homes and the difference between wants and needs, particularly as they apply to the places we live. Unit 2: The Many Faces of Need Unit 2 helps students develop an understanding of why some people need housing assistance and of the wide variety of people who can benefit from housing assistance (i.e., the “many faces”). Unit 3: Giving Back to the Community This unit helps students understand the benefits of giving back to the community, both for the volunteer and the recipient. In this unit, students investigate the Habitat for Humanity International Web site and then create a promotional brochure based on information they have learned. What is home? In this lesson, students read and discuss interviews on the concept of home. Students interview subjects outside the classroom to gather more ideas about what home means to different people. At home around the world In this lesson, students read about types of housing in different nations, the housing issues faced in these areas of the world and how Habitat for Humanity has been able to help.
Volunteers: changing the world Students will learn about volunteering, why people do it and how volunteering benefits individuals, communities and society at large. Students will then participate in a volunteer project. Made in the shade Students will learn how trees and landscaping can help cut housing costs. They will research and create a tree-planting tip sheet for local Habitat for Humanity builders and partner families to use when landscaping new homes. In this lesson, students will learn about the realities of affordable housing through reading an article and playing a game based on hypothetical situations. Spread the word Students will learn about advocacy through their reading and class discussion. They will put their new knowledge into practice by writing a message to a politician about the importance of decent, affordable housing. In this lesson, students will learn about building practices that save energy and lessen the impact on the environment. They will write about their “dream green” home and draw diagrams and illustrations showing the home’s features. What is a neighborhood? In this lesson, students will explore the nature and components of a neighborhood, including the neighborhood(s) where they live and attend school. They will identify and discuss neighborhood issues with an alderperson or other neighborhood representative.
The rocking bed was a piece of equipment used in rehabilitating patients who had formerly been in an iron lung due to paralyzed breathing muscles (at first mostly from polio). It is a hospital bed that rocks the patient as if they were lying on a seesaw; head up and feet down, then feet up and head down. Gravity helps the patient breathe -- as the feet go down, the internal organs move toward the feet and the lungs expand. When the head goes down, the internal organs move toward the head and push the air out of the lungs. These beds also help circulation and provide a little more freedom of movement for those whose limbs aren't paralyzed. As the first step out of being confined in an iron lung, it was a difficult transition. The rocking often made patients dizzy, and it must have felt a little scary for someone who could not breathe on their own to be outside the familiar pressure of the surrounding machine. It also had the same problems as the iron lung if the electricity went out. However, it made personal care much easier, and some patients were lulled to sleep better by the rocking. A part of rehabilitation would usually be to reduce the amount of swing in the bed; the decreased dipping would provide less assistance to the patient's breathing. A rocking bed could also be installed in a patient's home rather than forcing them to live in a nursing facility, which was a given for those confined to an iron lung. Rocking beds are still used to assist respiration, even though polio is no longer usually the cause of the breathing problem. They have the great advantage of being non-invasive compared to many other ways of helping a person breathe. Black, Kathryn. In The Shadow of Polio: A Personal and Social History. Reading, Massachusetts: Addison-Wesley, 1996.
…is wanting to ‘get beneath the surface’ and ‘dig deeper’; the opposite is being ‘passive’ This is about my desire to investigate, find more out and ask questions, especially ‘Why?’ If I am a curious learner, I won’t simply accept what I am told without wanting to know for myself whether and why it’s true. I might challenge what friends, leaders, parents, teachers, colleagues say, rather than take it at face value. I want to know the reason for everything, as young children often do. Learners with less Curiosity might be present and involved in learning activities, but relatively passive, expecting others to tell them or show them what to do and how to improve rather than working things out for themselves or finding things out for the group. Effective learners have energy and a desire to find things out. They like to get below the surface of things and try to understand what is really going on. They value ‘getting at the truth’, and are more likely to adopt ‘deep’ rather than ‘surface’ learning strategies. They are less likely to accept what they are told uncritically, enjoy asking questions, and are more willing to reveal their questions and uncertainties in public. They like to come to their own conclusions about things, and are inclined to see knowledge, at least in part, as a product of human inquiry. They take ownership of their own learning and enjoy a challenge. The contrast pole is passivity. Passive learners are more likely to accept what they are told uncritically and to believe that ‘received wisdom’ is usually, or always true. They are less thoughtful, and less likely to engage spontaneously in active speculation and exploratory thinking and discussion. 
- Enjoys the challenge of the unknown and confronting complexity - Learns by working things out, solves problems, seeks out information and understanding - Enjoys questioning, finding out and self-directed research - Refuses to accept propositions at face value - Thinks like a detective: interested not only in answers but in clues, patterns and incongruities - Look for opportunities to: - Ask questions at work, of fellow learners first if it’s easier, then your manager(s) - Say, respectfully, “I’m not sure I agree with that” and challenge people to explain and justify their opinions - Tell your manager or tutor what you’re up to and ask for encouragement - Practise climbing the ‘Why?’ Ladder: - Think of a question – e.g. “Why do I work so hard?” - Think of an answer – e.g. “It’s expected of me!” - Ask “Why is it expected of me?” - Think of an answer… and so on! - See how far you get. Write it down if you like. - Keep a dictionary nearby and pounce on words you don’t understand – so that now you do! Use your existing contacts and resources to create a ‘learning at work glossary’. - Welcome the feeling of being challenged or perplexed and use it to drive your learning forward, like a quest for the light! - Play with ‘What if…’ scenarios – as all businesses have to do in ‘future planning’ – building the competency to ‘find solutions’. - Ask your manager to help you create an open climate – e.g. ‘no criticism allowed!’ – so that you are able to speculate, try out ‘whacky’ ideas on each other and ask ‘What if…?’ and ‘Why?’ questions with confidence.
How do we understand CLIL? The basis of CLIL is that content subjects are taught and learnt in a language which is not the mother tongue of the learners. - Knowledge of the language becomes the means of learning content - Language is integrated into the broad curriculum - Learning is improved through increased motivation and the study of natural language seen in context. When learners are interested in a topic they are motivated to acquire language to communicate - CLIL is based on language acquisition rather than enforced learning - Language is seen in real-life situations in which students can acquire the language. This is natural language development which builds on other forms of learning - CLIL is long-term learning - Fluency is more important than accuracy and errors are a natural part of language learning. Learners develop fluency in English by using English to communicate for a variety of purposes
Attitudes are evaluations people make about objects, ideas, events, or other people. Attitudes can be positive or negative. Explicit attitudes are conscious beliefs that can guide decisions and behavior. Implicit attitudes are unconscious beliefs that can still influence decisions and behavior. Attitudes can include up to three components: cognitive, emotional, and behavioral. Example: Jane believes that smoking is unhealthy, feels disgusted when people smoke around her, and avoids being in situations where people smoke. Researchers study three dimensions of attitude: strength, accessibility, and ambivalence. Behavior does not always reflect attitudes. However, attitudes do determine behavior in some situations: Example: Wyatt has an attitude that eating junk food is unhealthy. When he is at home, he does not eat chips or candy. However, when he is at parties, he indulges in these foods. Example: Megan might have a general attitude of respect toward seniors, but that would not prevent her from being disrespectful to an elderly woman who cuts her off at a stop sign. However, if Megan has an easygoing attitude about being cut off at stop signs, she is not likely to swear at someone who cuts her off. Example: Ron has an attitude of mistrust and annoyance toward telemarketers, so he immediately hangs up the phone whenever he realizes he has been contacted by one. Behavior also affects attitudes. Evidence for this comes from the foot-in-the-door phenomenon and the effect of role playing. People tend to be more likely to agree to a difficult request if they have first agreed to an easy one. This is called the foot-in-the-door phenomenon. Example: Jill is more likely to let an acquaintance borrow her laptop for a day if he first persuades her to let him borrow her textbook for a day. People tend to internalize roles they play, changing their attitudes to fit the roles. 
In the 1970s, the psychologist Philip Zimbardo conducted a famous study called the prison study, which showed how roles influence people. Zimbardo assigned one group of college student volunteers to play the role of prison guards in a simulated prison environment. He provided these students with uniforms, clubs, and whistles and told them to enforce a set of rules in the prison. He assigned another group of students to play the role of prisoners. Zimbardo found that as time went on, some of the “guard” students became increasingly harsh and domineering. The “prisoner” students also internalized their role. Some broke down, while others rebelled or became passively resigned to the situation. The internalization of roles by the two groups of students was so extreme that Zimbardo had to terminate the study after only six days. Researchers have proposed three theories to account for attitude change: learning theory, dissonance theory, and the elaboration likelihood model. Learning theory says that attitudes can be formed and changed through the use of learning principles such as classical conditioning, operant conditioning, and observational learning: Leon Festinger’s dissonance theory proposes that people change their attitudes when they have attitudes that are inconsistent with each other. Festinger said that people experience cognitive dissonance when they have related cognitions that conflict with one another. Cognitive dissonance results in a state of unpleasant tension. People try to reduce the tension by changing their attitudes. Example: Sydney is against capital punishment. She participates in a debate competition and is assigned to a team that has to argue for capital punishment. Subsequently, she is more amenable to the idea of capital punishment. The phenomenon called justification of effort also results from cognitive dissonance. Justification of effort refers to the idea that if people work hard to reach a goal, they are likely to value the goal more. 
They justify working hard by believing that the goal is valuable. The elaboration likelihood model holds that attitude change is more permanent if elaborate and thought-provoking persuasive messages are used to change the attitude. Basically, if someone can provide a thorough, thought-provoking persuasive message to change an attitude, he is more likely to succeed than if he provides a neutral or shallow persuasive message. Example: Ten teenagers who smoke are sent to an all-day seminar on the negative consequences of smoking. Many of the students subsequently give up the habit.
Arguing Essay Worksheet 2 Editor's Name and Phone Number: Ask the writer what questions or concerns he or she has about the paper. Read the paper carefully and respond to those points before you complete the rest of this worksheet. The purpose of these questions is to examine the writer's awareness of audience and choice of focus and evidence for that audience. - Note here the audience for the paper you're reviewing. Be as specific as possible. If the writer has not targeted a specific audience, brainstorm together for ways to specify the audience. - What is the focus of this paper and why is that focus appropriate for the target audience? Look at both the focus and the purpose as you consider ways to improve the match between writer, reader, and argument. - Is the claim adequately focused—narrowed within manageable/defensible limits? Why or why not? - Do you feel the writer needs to add any qualifiers or exceptions to avoid over-generalizing the claim? If yes, explain. - Are the writer's reasons sound in logic, and do they follow logically from the claim? Why or why not? - Can you think of any additional refutations the writer could add? - Where has the writer used effective evidence or detail? Where might the writer include more evidence? (Also, take a moment to jot questions on the paper that would help the writer see where and what detail to add.) - Is the paper interesting to read? Why? (If you see gaps in the information provided, be sure to point out those gaps to the writer.) - Has the writer cited appropriate and unbiased sources of information? Are quotations integrated into the text? Are the citations clear? Do you see any places where the writer needs to cite a source but doesn't? Point those out to the writer. View Arguing Essay Worksheet 3