When teaching ballet to young children, especially those in preschool, we have to be careful how we address posture, particularly when it comes to training certain aspects of ballet technique. We need to be both realistic and safe when teaching these little bodies, and very aware of what we are saying, as their minds soak up so much at this age. Posture for kids in general is deteriorating because children sit a lot more nowadays than they used to, and many school-going children have rounded upper spines as a result. This is one of the reasons why more free play must be encouraged in preschools and by parents at home. Children were designed to move, especially while their muscles are still developing in their formative years. Climbing is also extremely good for them, as it develops their core muscles. A jungle gym is not only fun for them, but also an essential part of their posture development.

Posture For Kids

A preschooler of between three and four years of age should be able to do the following while just standing:
- They should have their head centered over their shoulders and be able to hold it erect.
- The shoulders and the hips should be level and square when standing on both feet.
- The arms should be able to just relax at their sides.
- The tummy muscles should just be relaxed.
- The bottom should be relaxed.
- The knees should be naturally straight, but not pulled up.
- They should be able to stand with their weight evenly on both feet.
- Their feet should not turn out or in.
- They should be able to do all the above while standing and not battling to balance.

You will be amazed at how difficult some children find all this to do.

How Can Teachers Try To Accomplish This Posture For Kids?

There are many games you can play in class to accomplish this and help them strengthen their posture.
- Play freeze games, letting them freeze and stand tall when the music stops. Gently remind them what their bodies should be doing while they are standing.
- They can reach up tall and pick an apple off a tree or a star out of the sky to get a basic awareness of their posture.
- Use imagery to help them understand how posture works. You could make them trees, with their roots/feet planted firmly in the ground while the head reaches up to the sun.
- Let them 'roll down a hill', trying to keep their feet and legs together.
- Get them to crawl like a baby.
- Make them walk like bears on their hands and feet. Once they can do this, let them run like this. You can set out obstacles like bean bags or cones for them to run around.
- Sit in a variety of positions and let them reach up, then around to each side.
- Bouncing on a therapy ball is a great way for them to improve their posture and balance.
- Let them slither on their tummies like snakes.

What Posture Corrections Not To Give Young Children

Never tell young children to pull up their knees, as this will cause the thigh muscles to overdevelop and could weaken their hamstrings and knee joints. If they have swayback legs, the problem could become worse.

Never tell young children to tuck their bottoms under, as this could cause the muscles in the lower back to develop incorrectly.

Never tell young children to suck their tummies in, as the spine is too short in comparison to the viscera, and this again can cause incorrect muscle development in the lower back.

Please feel free to comment below if you have any other great ideas on how to improve posture for kids.
(If insects make your skin crawl, you might want to skip this post.)

I'm sitting here on my deck in Brooklyn in midsummer, listening to the comforting sound of cicadas getting louder, then quieter, then louder again. But I don't remember the cicadas being so loud last year. Turns out, they weren't. I kind of knew there were different maturity periods for cicadas; I was in Kansas in 1998 when several species of cicada emerged at once — you could not walk without crunching them, and they toppled plants from their sheer weight. Curious about cicada life, I've compiled the following little factoids.

- Cicadas are NOT locusts. Although I grew up thinking they were the same, true locusts are actually a type of grasshopper.
- There are annual and periodical cicadas. The periodical varieties emerge at 13-year and 17-year intervals.
- The periodical cicadas have the cool genus name "Magicicada".
- During freaky periods, like in 1998 in the Midwest, a bunch of periodical species will emerge at once. Summer of 2015 is going to be another fun year for Midwesterners, when a 13-year brood and a 17-year brood emerge along with the usual annual species. The University of Michigan zoology department has a cool chart on this. They also have recordings of their mating calls.
- Cicadas don't actually do much ecological harm. During the weird convergence periods, they can kill some plants due to their sheer numbers; their weight can topple plants. And some plants may suffer damage if too many eggs are deposited in them (called flagging). Young trees are most vulnerable to this.
- They don't bite humans or animals. If one lands on you, it won't hurt you.
- Their chirping sound can be heard up to a mile away.
- Adult females deposit eggs above ground, in slits they have made in plants.
- Newly hatched cicadas then drop to the ground and burrow below, where they hang out for a year or more, depending on the species. Each year they shed their skin and grow a new, bigger one.
- Adults also molt once above ground; finding a shell is a real thrill.
- Even though they can live for years underground, they only get four to six weeks above ground to mate and serenade us.
- Nobody knows what purpose they serve. It's true! This field of study is wide open, folks. Scientists haven't found any purpose for cicadas at all.
Embedding Formative Assessments in Curriculum Programmes

Educators who regularly engage in student-oriented approaches will inevitably need to adapt to different pupils' needs, abilities, interests and learning styles. Nowadays there is ample evidence suggesting that educators who supplement or replace lectures with active learning strategies improve their students' learning and knowledge retention. At the same time, the students become motivated as they participate in the discovery and scientific processes. Recent academic studies have shown that educators are increasingly resorting to student-centred assessment approaches, including Active Learning, Collaborative Learning, Inquiry-based Learning, Cooperative Learning, Problem-based Learning, Peer-Led Team Learning, Team-based Learning and Peer Instruction, among other methodologies. Therefore, educators are expected to identify their students' learning needs and respond to them. Yet they should also measure their pupils' progress in their learning journey.

This article maintains that (summative) assessments are a classic way of measuring student progress. Assessments are integral to schools' quality assurance, syllabi and curriculum programmes. In a similar way, such forms of tracking individuals' performance and progress are also applied in workplace environments by many employers. However, this is only part of the story. To be truly meaningful and effective, assessments should also be "formative". Educators may use tools and activities embedded in the ongoing curriculum to garner students' feedback at key points in the learning process. Interestingly, educators are moving away from conventional teacher-centred methodologies as they enhance their interaction with students. Formative assessments respond to pupils' individual learning needs, as educators make frequent appraisals of their students' understanding.
This enables them to adapt their teaching to meet the students' requirements, and to better help everyone reach high standards of excellence. Educators ought to involve their students in their learning journey. This helps them develop key knowledge, skills and competences that enable their intellectual growth. Nevertheless, although educators seem to be incorporating various aspects of formative assessment into their teaching, it is less common to find it practised in a systematic manner. Formative assessments are often present only within individual teachers' frameworks. It may appear that some of the emerging educational approaches are setting up learning situations in which students are guided toward their learning goals. These approaches seem to be redefining student success. To my mind, formative assessments are highly effective in raising the level of student attainment, as they are likely to increase the equity of student outcomes. Formative assessments entice students' curiosity in the subject, as well as improving their ability and aptitude to learn. Such student-centred methodologies emphasise the process of teaching and learning, as they involve students in their own educational process. They also build students' skills through peer and self-assessment, and help them develop a range of effective learning strategies. Students who are actively involved in building their understanding of new concepts (rather than merely absorbing information), and who are learning to judge the quality of their own work and that of their peers, are developing invaluable skills for lifelong learning. As a proponent of active learning, my formative assessment strategies often feature role-playing, debating, student engagement in case studies, active participation in cooperative learning and the like. Such teaching approaches can be utilised to create a context for the material, where learners work collaboratively.
Needless to say, the degree of my involvement while students are being "active" may vary according to the specific task and its context in a teaching unit. Of course, there are different approaches to gauge students' comprehension of what has been taught. A non-exhaustive list of formative assessment strategies includes:

- Questioning strategies: During classroom interactions, students may be asked challenging questions. Questions often reveal student misconceptions, and they can be embedded in lesson plans. Asking questions gives me an opportunity to prompt deeper thinking and provides me with significant insights into the degree and depth of student understanding. Questions will inevitably engage students in classroom dialogue that both uncovers and expands learning.
- Criteria and goal setting: Students need to understand and know the learning targets/goals and the criteria for reaching them. Defining quality work together, asking students to participate in establishing norms and behaviours for classroom culture, and determining what should be included in the criteria for success are all examples of this strategy. Using student work, classroom tests, or exemplars of what is expected will help students understand where they are, where they need to be, and an effective process for getting there.
- Observations: These assist teachers in gathering evidence of student learning to inform instructional planning. This evidence can be recorded and used as constructive feedback for students about their learning curve.
- Self and peer assessments: These help to create a learning community within a classroom, and students learn as they engage in metacognitive thinking. When students are involved in criteria and goal setting, self-evaluation is a logical next step in the learning process. With peer evaluation, students see each other as valuable resources for checking each other's work against previously established criteria.
- Student record keeping: This helps students better understand their own learning as evidenced by their classroom work. Keeping ongoing records of their work helps students reflect on their learning journey as they examine the progress they are making toward their learning goals.
- Portfolios, logbooks and rubrics: These instruments are widely used to provide an opportunity for written dialogues with students. Such tools help educators evaluate the quality of their students' work. In turn, students can use rubrics to judge their own work and improve upon it.

Without doubt, there may still be some perceived tensions among stakeholders about formative assessments and summative tests. Education institutions have to be accountable for student achievement. They guide students to satisfy the requirements of their curriculum programmes. There may be a lack of consistency and coherence between assessment and evaluation policies at both the institutional and classroom levels. And there are different attitudes among educators about formative assessments. Ongoing assessments may be considered too resource-intensive and time-consuming to be practical, and educators often face extensive curriculum and reporting requirements while teaching larger classes. The right assessment systems foster constructive cultures of evaluation. Formative assessments are likely to help in promoting reforms for student-centred education. Ideally, information gathered through assessment and evaluation processes can be used to shape strategies for continuous improvement at each level of our education system. In classrooms, educators can gather information on student understanding; consequently, this enables them to adjust their instruction to meet students' identified learning needs.
In conclusion, this contribution suggests that the locus of emerging educational strategies is pushing toward a proactive engagement in student-centred learning theories, where the student is placed at the very centre of the educator's realm.

Dr Mark Anthony Camilleri lectures at the University of Malta.

Contemporary Pedagogical Philosophies

Education equips individuals with the right skills, substantive knowledge and competences to pursue their own goals. It enables them to become an integral part of the community as fully-fledged, autonomous citizens. In its broadest sense, education is a means of the "social continuity of life" (Dewey, 1916:3). Even Plato, in The Republic, inquires about morality and the good life. He posits that human beings should be active within their community. The Greek philosopher maintained that every aspect of our life shapes 'harmonious people'. He went on to say that morals and ethics are part of an even balance of wisdom, courage, and restraint. The main philosophical thoughts and theoretical underpinnings of education are important social domains which have attracted the interest of philosophers for thousands of years. For instance, Socrates floated the idea that knowledge is a matter of recollection, not of learning, observation or study (Dillon, 2004). The pertinent literature suggests that education is a transmission of knowledge. Education fosters enquiry and reasoning skills that are conducive to the development of autonomy (Phillips, 2009). The question of learning, and of how educative systems work, relates to individual capacities and potentialities. Of course, the processes (and stages) of human development are shaped by many factors. Individuals experience different environmental contexts and settings. Their home country also possesses its own features, which often stem from norms, traditions and cultures.
The institutions of education should adapt their curricula to align with the particular social fabric. Consequently, the background of students, as well as the educational environments in which they are placed, ought to be carefully considered. For instance, Dewey advocated that human beings should be categorised into classes. He compared individuals to organisms situated in a biological and social environment, where problems are constantly emerging, forcing the individual to reflect, act, and then learn (Reed and Widger, 2008). Individuals can improve upon their existing knowledge as they reflect upon their actions, whether at school or at work. Students are individuals with their own trails of growth, and teachers and employers are there to guide and facilitate this growth. The educators' duties and responsibilities are to help in the academic (and extra-curricular) development of students in their learning journey. Dewey's educational theories suggest that education and learning are social and interactive processes, and thus the school itself is a social institution through which social reform can and should take place (Dewey, 1916). In addition, the author believed that students thrive in an environment where they are allowed to experience and interact with their curriculum. Essentially, Dewey suggested that all students should be given the opportunity to take part in their own learning experience. This approach focuses on the needs of students, rather than on those of all the others involved in the educational process. The discourse revolving around student-centred pedagogy was also replicated in subsequent years by Jean Piaget and Lev Vygotsky. This perspective has many implications for the design of courses. In the student-oriented approach, the educator adapts to the pupils' needs, abilities, interests, and learning styles.
Educators use more than one theory of teaching, as they may be capable of wearing different hats with their students. They act as the students' philosophers, advisers, counsellors, motivators, demonstrators, curriculum planners, evaluators, and the list goes on. Bloom (1956) classified three types of objectives: cognitive, affective and psychomotor. Similarly, Tolman (1951) put forth the notion that there are three parts to learning which work together as a gestalt: the "significant" goals of behaviour, the "sign" or signal for action, and the "means-end relations", which are internal processes and relationships. The author believed that learning is an accumulation of these sign gestalts, and that they can be configured into cognitive maps. The input from the environment is ongoing, and it influences behaviour in that it causes certain gestalts to be selected or not. In this sense, learning is unique to each and every individual. Beyond the notions of behaviourism and constructivism and the relevant theories and styles, the literature in this field of study suggests that learning is a process of active engagement, both as an individual and in social contexts. Recently, the latest shift in educational discourse has been a move away from the conception of the "learner as a sponge" (Maillet and Maisonneuve, 2011) toward an image of the learner as an active constructor of knowledge. Although Dewey reminded us that learners are not empty vessels, education in many jurisdictions was usually based on teacher-centred approaches (Cuban, 1993). Evidently, there was an erroneous assumption that if teachers speak clearly and students are motivated, learning will be successful. When students do not learn, the logic goes, it is because they are not paying attention or do not care. These conceptions may have been grounded in a theory of learning that focused on behaviour.
Behavioural-learning theorists maintained that if teachers acted in a certain manner, students would react in particular ways. Central to this notion of behaviourism was the idea of conditioning, whereby individuals are trained to respond to stimuli. Eventually, the "cognitive revolution" in psychology put the mind back into the learning process (Miller, 1956). Behavioural psychology (based on factual and procedural rules) has given way to cognitive psychology (based on models for making sense of real-life experiences) (Lesh and Lamon, 1992:18). Kandel and Hawkins (1992) maintained that the brain actively seeks new stimuli in the environment from which to learn, and that the mind changes through use. Learning changes the structure of the brain (Bransford et al., 2000). Research suggests that young learners, from a tender age, make sense of the world by actively creating meaning while they are reading texts. They construct their perceptions of social reality by interacting with their surrounding environment, or simply by talking to their peers. Even when students are quietly observing their teacher, they can be actively engaged in a process of active learning and understanding. This cognitive turn in psychology is often referred to as the constructivist approach to learning.

Overall, a statement of teaching philosophy should provide a personal portrait of the writer's view of teaching. The narrative description of one's conception of teaching, as well as one's rationale and justification for how one teaches and why, may be expressed in a variety of ways (Lyons, 1998). The overarching question in a statement of teaching philosophy is: Why am I teaching? Other relevant questions which follow are: What are my teaching goals and objectives? What is the student-teacher relationship which I strive to achieve? What behavioural methods will I use? What motivates me to enhance my knowledge about the subject I am teaching? What values do I impart to my students?
How do I make sure that I have taught my students successfully? What code of ethics guides me? One of the hallmarks of a teaching philosophy is its individuality. Ideally, this personal statement should be reflective in nature. A teaching philosophy is a vivid portrait of an educator who is passionate about teaching practices and committed to career advancement. The act of taking time to reconsider one's goals, actions and vision provides an opportunity for self-development. The main components of teaching philosophies are descriptions of how educators think learning occurs, and of how they think they can intervene in their students' learning process. Of course, educators set goals for their students, and they may also plan how to implement them. For some purposes, including a section on one's personal growth as a teacher is also important for self-development. This reflective component explains how an educator has grown in the teaching profession over the years, illustrates what challenges exist at present, and identifies what long-term goals are projected for the future. While writing this section, the educator revisits his or her concepts and methodologies. This exercise can turn out to be stimulating, as the educator revises old syllabi and instructional resources. It goes without saying that the educator will need to remain abreast of the latest academic and research findings in his or her field of studies. It is in the educators' self-interest to communicate and collaborate with their peers. There is scope for lecturers to work together with other colleagues. They are often encouraged to participate in workshops to share knowledge about best practices as well as resources. In a similar vein, the teaching philosophy should be communicated to the students, as it is in their interest to know what is required from them and why (see Cerbin, 1996).
Given this information, students are triggered to engage in a more productive manner in their learning journey. It is also likely that students will learn much better and succeed in their course. Some empirical studies have shown that appropriate communication with students helps to increase their retention (Thomas, 2009; Braskamp and Ory, 1994). Nowadays, many educators exhibit their teaching philosophies implicitly, as these are often evidenced to students through syllabi, assignments, approaches to teaching and learning, classroom environment and student-teacher relationships (Thiessen, 2012). The goal of sharing a statement of teaching philosophy is to value and respect students. One repays a teacher badly if one always remains nothing but a pupil (Nietzsche, 1891). The literature reveals that many teachers are following the theories and principles which were discussed here. The theories I have described have provided me with good insight to develop and articulate my teaching philosophy. This contribution offers a good opportunity to rethink your teaching practice. I believe that the best educators are the ones who use, create, adopt (or reject) theories of learning and teaching. These theories and principles are derived from many years of experience and careful inquiry, as they are tested in classroom settings, critiqued by colleagues, and continuously emerge from empirical findings and theoretical underpinnings.

"Education, therefore, must begin with a psychological insight into the child's capacities, interests, and habits. It must be controlled at every point by reference to these same considerations. These powers, interests, and habits must be continually interpreted – we must know what they mean. They must be translated into terms of their social equivalents – into terms of what they are capable of in the way of social service" (Dewey, 1897).

Bloom, B.S.
(1956) "Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain". New York: Longmans, Green.
Bransford, J., Brown, A. and Cocking, R. (eds.) (2000) "How People Learn: Brain, Mind, Experience, and School". Washington, DC: National Academy Press.
Braskamp, L.A. and Ory, J.C. (1994) "Assessing Faculty Work: Enhancing Individual and Institutional Performance". Jossey-Bass Higher and Adult Education Series. San Francisco.
Cerbin, W. (1996) "Inventing a new genre: The course portfolio at the University of Wisconsin-La Crosse". In Making Teaching Community Property: A Menu for Peer Collaboration and Peer Review, ed. P. Hutchings. Washington, DC: American Association for Higher Education.
Cuban, L. (1993) "How Teachers Taught: Constancy and Change in American Classrooms 1890–1990". New York: Teachers College Press.
Dewey, J. (1897) "My Pedagogic Creed". Url: http://dewey.pragmatism.org/creed.htm accessed on the 25th April 2013.
Dewey, J. (1916) "Democracy and Education: An Introduction to the Philosophy of Education". Url: http://www.gutenberg.org/files/852/852-h/852-h.htm accessed on the 25th April 2013.
Dewey, J. (1938) "Experience and Education". Url: http://www.schoolofeducators.com/wp-content/uploads/2011/12/EXPERIENCE-EDUCATION-JOHN-DEWEY.pdf accessed on the 25th April 2013.
Dillon, A. (2004) "Education in Plato's Republic". Url: http://www.scu.edu/ethics/publications/submitted/dillon/education_plato_republic.html accessed on the 5th May 2013.
Kandel, E.R. and Hawkins, R.D. (1992) "The biological basis of learning and individuality". Scientific American 267(3), pp. 78-86.
Lyons, N. (1998) "With Portfolio in Hand: Validating the New Teacher Professionalism". New York: Teachers College Press.
Maillet, B. and Maisonneuve, H. (2011) "Long-life learning for medical specialists doctors in Europe: CME, DPC and qualification". Presse Médicale 40(4 Pt 1), 357.
Nietzsche (1891). In Forth, C.E. (1993) "Decadence, and Regeneration in France (1891-95)". Journal of the History of Ideas 54(1), pp. 97-117.
Phillips, D.C. (2009) "Philosophy of Education". The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.). Url: http://plato.stanford.edu/archives/spr2009/entries/education-philosophy/ accessed on the 2nd May 2013.
Plato. The Republic. 2nd ed. Trans. Desmond Lee (1987). New York: Penguin Books.
Reed, D. and Widger, D. (2008) "Democracy and Education" by John Dewey. Url: http://www.gutenberg.org/files/852/852-h/852-h.htm accessed on the 4th May 2013.
Thiessen, D. (2012) "Classroom-based teacher." Early Professional Development for Teachers, 317.
Thomas, L. (2009) "Improving student retention in higher education". AUR, 9.
Tolman, E.C. (1951) "Behaviour and Psychological Man: Essays in Motivation and Learning". Berkeley: University of California Press.

Student-Centred Approaches in Higher Educational Settings

"One repays a teacher badly if one always remains nothing but a pupil." Nietzsche (1891).

Traditionally, teacher-centred learning involved an active role for the instructor, whereas the students exhibited a passive, receptive role. Of course, student-centred learning has many implications for the design of courses. In this perspective, within the student-oriented approach, the educator adapts to the pupils' needs, abilities, interests, and learning styles. Handelsman et al. (2004) held that there is sufficient evidence that supplementing or replacing lectures with active learning strategies, and engaging students in the discovery and scientific process, improves their learning and knowledge retention.
Many educators are adopting a broad spectrum of student-centred approaches, which include: Active Learning (Bonwell and Eison, 1991), Collaborative Learning (Bruffee, 1984), Inquiry-based Learning, Cooperative Learning (Johnson, Johnson and Smith, 1991), Problem-based Learning, Peer-Led Team Learning (Tien, Roth and Kampmeier, 2001), Team-based Learning (Michaelson, Knight and Fink, 2004), and Peer Instruction (Mazur, 1997), among other methodologies. As a proponent of active learning, I suggest that exercises such as role-playing, debating, student engagement in case studies, active participation in cooperative learning and the like may be used to create a context for the material, where learners work collaboratively. Needless to say, the degree of the teacher's involvement while the students are being "active" may vary according to the specific task and its context in a teaching unit (Bonwell and Eison, 1991). Examples of "active learning" activities include:

A collaborative learning group: Students are assigned to groups of 3-6 people and are expected to work together on a particular assignment. They are usually requested to answer a question to present to the class, or to produce a project.

A student debate: Students are urged to participate by being given the opportunity to express their views and opinions in verbal presentations. Debates allow the students to take a position, collect information to substantiate their views, and explain them to others.

Class discussions are usually much more effective in smaller class settings. This educational setting allows the instructor to act as a moderator who can guide the students' learning experience and foster the right environment. The students are requested to critically reflect on the subject matter and use rationality to evaluate their peers' positions. They are expected to discuss any topic in a constructive and objective manner.
A discussion may be used as a follow-up activity once the lecture has been delivered. Similarly, a think-pair-share activity is used when students are encouraged to reflect on the previous lesson. They are expected to discuss it with one or more of their peers, and finally they are invited to share their thoughts with the class as part of a formal discussion. The instructor is responsible for clarifying any misunderstandings. In this case, the learners need a background in the subject to identify and relate what they know to others. Students need to be prepared with sound instruction before we can expect them to discuss any subject matter on their own.

A class game is a very innovative way to learn, as it provides an opportunity for students to review the course material. It also helps them enjoy the subject in creative ways. Games may include Jeopardy-style quizzes and crossword puzzles, which keep the students' minds going.

Videos: It transpires that relevant video clips support students in their understanding. It is important that the video relates to the specific topic that students are covering at that particular point in time. The lecturer may include a few questions before starting the video, so as to prompt the students to pay attention to it. After the video is complete, the students may be divided into groups to discuss what they learned. They may also be requested to write a review or notes about the video clip or movie.

Evidently, it is up to the instructors to determine the educational goals and objectives. They have to analyse the environment in which they operate, identify the factors which may constrain their approaches, and choose the curricular models and methods that suit their students. A diversity of approaches and varying methods are to be encouraged. This contribution suggests that a strategy that promotes student-centred learning is likely to be very effective.
Yet I believe that a fair evaluation of the students' background is needed before any approach can be considered to produce better results than others. The teacher's role has inevitably changed to that of a facilitator of learning. The learner-centred approach suggests that students are responsible participants in their own learning journey. Such a strategy puts the student at the very centre of the educator's concerns.

References:

Bonwell, C.C. and Eison, J.A. (1991) "Active Learning: Creating Excitement in the Classroom. ERIC Digest", ASHE-ERIC Higher Education Reports, The George Washington University.

Bruffee, K.A. (1984) "Collaborative Learning and 'The Conversation of Mankind'", College English 46.7: pp. 635-652.

Handelsman, J. et al. (2004) "Scientific teaching", Science 304.5670: pp. 521-522.

Johnson, D.W., Johnson, R.T. and Smith, K.A. (1991) "Active learning", Interaction Book Company.

Mazur, E. and Hilborn, R.C. (1997) "Peer instruction: A user's manual", Physics Today 50: 68.

Michaelson, L., Knight, A. and Fink, L. (2004) "Team-based learning: A transformative use of small groups in college teaching", Sterling, VA: Stylus.

Nietzsche (1891) "Decadence, and Regeneration in France (1891-95)", in Forth, C.E. (1993) Journal of the History of Ideas, 54:1 pp. 97-117.

Tien, L.T., Roth, V. and Kampmeier, J.A. (2002) "Implementation of a peer-led team learning instructional approach in an undergraduate organic chemistry course", Journal of Research in Science Teaching 39.7: pp. 606-632.
Fundamentals of the Global Positioning System

As you make yourself comfortable inside your vehicle, you pull out your global positioning system (GPS) unit, type in a destination, and minutes later you are following instructions from your real-time location. This common navigational approach was not even fathomable 40 years ago. It would be an understatement to say that GPS has changed the way people live, yet for many people, these floating objects in the sky are a mystery.

To shed some light: GPS actually refers to the system of satellites operated by the U.S. Department of Defense. The handheld electronic devices used by people, including phones, are actually GPS receivers. The receiver itself is physically very small, and its job is to pick up the signals transmitted by the satellites. The most common receivers are installed with software that uses the signal's travel time and velocity to compute the distance to each satellite. Contrary to what some people might think, navigational tools such as maps and compasses are not part of the GPS positioning software. Navigational software is responsible for using the real-time position of the device, given by GPS, to guide the user to a known destination and to keep track of previously occupied locations. Four satellite distances must be known for a reliable location, so it's ideal to have a clear horizon line.

It's important to note that the GPS satellites are not the only positioning satellites orbiting the Earth. Russia and Europe have their own satellite systems up and running as well. This whole constellation of satellites is referred to as the Global Navigation Satellite System (GNSS). Any receiver with GNSS capabilities is able to tap into this system, allowing for better signal acquisition and positioning. Nothing about the satellite systems is perfect; expect real-world errors between 10 and 15 meters when using a phone or handheld unit.
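The travel-time calculation described above can be sketched in a few lines. This is an illustrative example, not actual receiver firmware; the travel time used below is a made-up value chosen to land near the approximate altitude of a real GPS satellite.

```python
# Sketch of how a GPS receiver turns signal travel time into distance.
# Radio signals travel at roughly the speed of light.

C = 299_792_458.0  # speed of light in m/s

def distance_to_satellite(travel_time_s: float) -> float:
    """Distance = signal velocity x travel time."""
    return C * travel_time_s

# A signal that took ~67.3 ms to arrive came from roughly 20,000 km away,
# on the order of a GPS satellite's orbital altitude.
d = distance_to_satellite(0.0673)
print(f"{d / 1000:.0f} km")
```

A real receiver repeats this for at least four satellites and solves for latitude, longitude, altitude, and its own clock error, which is why four distances are needed rather than three.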
In North America, the receiver will sometimes have Wide Area Augmentation System (WAAS) capabilities, a feature that allows the user to achieve meter-level accuracy. WAAS-capable receivers pick up transmissions from geosynchronous satellites that broadcast corrections to the errors found in the satellite signal. This process is operated by the Federal Aviation Administration (FAA), which is responsible for determining the errors and transmitting them to the geosynchronous satellites. As technology improves, expect stronger satellite signals and less error in the receiver.
Seventeenth Century Europe

The seventeenth century saw a continuation of the processes and ideas begun during the Renaissance, with a growth in religious dissent and in the number of Christian denominations, together with much religious persecution and warfare. Atheism was uncommon and persecuted, but criticism of organised religion and traditional religious beliefs was widespread, often coupled with radical political ideas. The English Civil War typified this in some ways. Partly motivated by religious fervour, with the Puritans opposing the power of the King and the "popish" elements in the Church of England, it saw a proliferation of radical religious groups such as the Diggers and the Levellers, who wanted religious, political and social reforms. There was no parallel expansion of religious tolerance: each sect was confident that it had found the truth, and none was inclined to tolerate the others. Many of these groups were seen as socially disruptive and heretical, and were persecuted.

The religious leader George Fox (1624-91) is a typical example of the radical questioning and desire for reform and change seen in seventeenth-century England. He rebelled against the formality and dogma of the established church, wanting a much more personal belief system. He formed the Society of Friends (also known as "Quakers"), whose meetings contain no ritual. Fox spent time in prison for his beliefs, and his followers were persecuted. Many of them emigrated to form a Quaker colony in Pennsylvania.

The beheading of Charles I marked the end of unquestioned acceptance of "the divine right of kings" (the idea that kings were appointed by God and therefore must not be opposed) and a weakening of the traditional authority of religion. England became more democratic, with Parliament taking over the Crown's authority (and retaining much of it even after the restoration of the monarchy in 1660), though it was not a tolerant or free society.
The arts and sciences

Theatre was banned under the Puritan regime in England, but the Restoration in 1660 ushered in a more relaxed attitude to the arts. Drama became thoroughly secular, and women were for the first time allowed to perform on the stage. Aphra Behn was able to earn her living by writing.

Just as writers like Montaigne had done a century earlier, European philosophers and thinkers continued to question the orthodoxies of their day, some of the most notable being Hobbes, Descartes and Spinoza. Scientists built on the methods and discoveries of their predecessors. The work of Isaac Newton (1642-1727) influenced science and thought profoundly, whilst William Harvey (1578-1657), who discovered the circulation of the blood, put physiology on a more scientific course.

The Eighteenth Century: the "Age of Enlightenment" or "Age of Reason"

The eighteenth century was a period of intellectual discovery and ferment in Europe, with dissent (religious, political, and social) becoming more open, despite widespread censorship and the risks of punishment. A few enlightened rulers, such as Frederick the Great of Prussia, were patrons of radical writers and thinkers, fostering the growth of new ideas. The radical campaigner Thomas Paine influenced the French and American revolutions which took place at the end of the century, and Mary Wollstonecraft pioneered feminist ideas in her writings.

Religion and Philosophy

Though still unusual and generally disapproved of, religious scepticism became more common in eighteenth-century Europe, partly as a consequence of the development of a more scientific view of the universe. The Scottish philosopher David Hume wrote sceptically about miracles (in Section X of An Enquiry Concerning Human Understanding, 1748) and about religion in Dialogues Concerning Natural Religion (published, perhaps wisely, posthumously in 1779). In France, the "philosophes", a group of radical and free-thinking philosophers, were highly influential.
They expressed their liberal, materialist, empiricist and naturalist ideas, and their sceptical attitude to religion, in the Encyclopédie (compiled between 1751 and 1765). Their ideas influenced the course of the French Revolution, especially its anti-clericalism and attempts at secularisation, but they would have detested the intolerance and excesses of the Terror (see Voltaire).

In Germany, the philosopher Immanuel Kant revolutionised the studies of metaphysics and ethics. Although a religious believer, he offered a rational basis for morality, and has exerted a powerful influence on later philosophers.

At the end of the century, the Romantic movement in the arts began. In some ways, in its reverence for feeling above reason, it was a reaction against the scientific and philosophical ideas of the day which are so appealing to humanists. But a new attitude to nature, one of awe and wonder, typified in the poetry of William Wordsworth, was a lasting legacy of the Romantics and one with which many humanists sympathise. Edward Gibbon's historical writing on early Christianity was controversial and influential.

Humanist Perspectives 1 and Humanist Perspectives 2 (BHA) contain more concise versions of humanist history, together with pupil pages on a range of issues designed for easy photocopying and much useful information for teachers.
- Why is it important to reduce inequality?
- What are the 4 reasons for income inequality?
- How does inequality affect crime?
- How does inequality affect the economy?
- How can economic inequality be reduced?
- How does inequality cause poverty?
- How does inequality affect health?
- What is an example of economic inequality?
- What are the negative effects of inequality?
- What are the 5 reasons for income inequality?
- Is inequality good for society?
- Why is inequality bad for society?
- Who is affected by economic inequality?
- What are the causes of inequality?
- How does inequality affect people’s lives?
- Why is economic equality important?
- What is the main problem of economic inequality?
- What are the causes of economic inequality?

Why is it important to reduce inequality?

Reducing inequality is the most important step these countries can take to increase population well-being. In the developing and emerging economies, both greater equality and improvements in standards of living are needed for populations to flourish. Inequality wastes human capital and human potential.

What are the 4 reasons for income inequality?

The divergence of productivity and compensation is commonly attributed to:
- Globalization.
- The superstar hypothesis.
- Education.
- Skill-biased technological change.
Race and gender disparities also contribute to the gap.

How does inequality affect crime?

Income inequality and a higher unemployment rate increase the crime rate, while trade openness helps to decrease it. The results of pro-poor growth analysis show that although the crime rate decreased in the years 2000–2004 and 2010–2014, the growth phase was anti-poor due to the unequal distribution of income.

How does inequality affect the economy?

Specifically, rising inequality transfers income from low-saving households in the bottom and middle of the income distribution to higher-saving households at the top.
All else equal, this redistribution away from low- to high-saving households reduces consumption spending, which drags on demand growth.

How can economic inequality be reduced?

If a society decides to reduce the level of economic inequality, it has three main sets of tools: redistribution from those with high incomes to those with low incomes; trying to assure that a ladder of opportunity is widely available; and a tax on inheritance.

How does inequality cause poverty?

The initial level of inequality affects the poverty-reducing capacity of growth, as a more equitable distribution of income and assets provides the poor with more means and opportunities to improve their standard of living.

How does inequality affect health?

The most plausible explanation for income inequality’s apparent effect on health and social problems is ‘status anxiety’. This suggests that income inequality is harmful because it places people in a hierarchy that increases status competition and causes stress, which leads to poor health and other negative outcomes.

What is an example of economic inequality?

Quantile ratios are a common example. For instance, the 20:20 ratio compares how much richer the top 20% of people are, compared to the bottom 20%. Other common examples: the 50/10 ratio describes inequality between the middle and the bottom of the income distribution, and the 90/10 ratio describes inequality between the top and the bottom.

What are the negative effects of inequality?

At a microeconomic level, inequality increases ill health and health spending and reduces the educational performance of the poor. These two factors lead to a reduction in the productive potential of the work force. At a macroeconomic level, inequality can be a brake on growth and can lead to instability.

What are the 5 reasons for income inequality?

Five reasons why income inequality has become a major political issue:
- Technology has altered the nature of work.
- Globalization.
- The rise of superstars.
- The decline of organized labor.
- Changing, and breaking, the rules.

Is inequality good for society?

The idea that inequality has a positive impact on economic variables is probably one of the main reasons why people think a certain amount of inequality is good for societies. But the data show that the more unequal a country is, the less long-run growth it experiences.

Why is inequality bad for society?

Inequality is bad for society as it goes along with weaker social bonds between people, which in turn makes health and social problems more likely. Economic prosperity goes along with stronger social bonds in society and thereby makes health and social problems less likely.

Who is affected by economic inequality?

Across income groups, U.S. adults are about equally likely to say there is too much economic inequality. But upper- (27%) and middle-income Americans (26%) are more likely than those with lower incomes (17%) to say that there is about the right amount of economic inequality.

What are the causes of inequality?

There are several causes which give rise to inequality of incomes in an economy:
- Inheritance.
- The system of private property.
- Differences in natural qualities.
- Differences in acquired talent.
- Family influence.
- Luck and opportunity.

How does inequality affect people’s lives?

Living in an unequal society causes stress and status anxiety, which may damage your health. In more equal societies people live longer, are less likely to be mentally ill or obese, and there are lower rates of infant mortality.

Why is economic equality important?

Greater economic equality benefits all people in all societies, whether you are rich, poor, or in-between. Countries that have chosen to be more equal have enjoyed greater economic prosperity while also managing to develop in a more environmentally sustainable fashion.

What is the main problem of economic inequality?
Researchers have found that the effects of income inequality include higher rates of health and social problems, lower rates of social goods, lower population-wide satisfaction and happiness, and even a lower level of economic growth when human capital is neglected in favour of high-end consumption.

What are the causes of economic inequality?

Income inequality has increased in the United States over the past 30 years, as income has flowed unequally to those at the very top of the income spectrum. Current economic literature largely points to three explanatory causes of falling wages and rising income inequality: technology, trade, and institutions.
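The quantile ratios mentioned above (20:20, 50/10, 90/10) are simple arithmetic, as a rough sketch shows. The income figures and the nearest-rank percentile convention below are illustrative assumptions, not data from any real survey.

```python
# Rough sketch of common inequality ratios. The income list is invented
# sample data (thousands per year), not a real survey.

incomes = sorted([12, 15, 18, 22, 25, 30, 35, 42, 55, 90])

def percentile(data, p):
    """Nearest-rank percentile on sorted data (one simple convention)."""
    k = round(p / 100 * (len(data) - 1))
    return data[max(0, min(len(data) - 1, k))]

p90, p50, p10 = (percentile(incomes, p) for p in (90, 50, 10))
print("90/10 ratio:", p90 / p10)  # top of the distribution vs the bottom
print("50/10 ratio:", p50 / p10)  # middle vs the bottom

# 20:20 ratio: total income of the richest 20% vs the poorest 20%
n = len(incomes) // 5
print("20:20 ratio:", sum(incomes[-n:]) / sum(incomes[:n]))
```

Real statistics agencies use larger samples and interpolated percentiles, but the ratios themselves are computed exactly this way: divide one part of the distribution by another.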
Nitrogen is one of the most common elements of the Earth's atmosphere. It is a non-metallic chemical element that has no color, taste or smell. Because of its abundance, it is also found in the structure of many compounds. Nitrogen is vital for life and most organisms need it. The air we breathe is mainly made up of nitrogen, and we also consume a large amount every day as part of our food. This element has many industrial applications as well, being found in anesthetics such as nitrous oxide and super coolants like liquid nitrogen.

This element has the atomic number seven, which makes it the lightest in its group of chemical elements. In the periodic table it was given the symbol N. Bismuth, antimony and arsenic are part of the same chemical group, which means they share some common traits. The outermost electron shell of nitrogen is missing three electrons, so it readily forms strong bonds with other elements. Because these bonds are so stable, notably the triple bond holding the N2 molecule together, the gas itself is quite inert and can serve as a buffer gas. Scientists estimate that nitrogen is the 7th most common element in the universe.

Nitrogen is essential for life as a component in the structure of nucleic acids and amino acids, even if most organisms can't actually process pure nitrogen. It is essential to plants, being one of their main nutrients. Some plants, known as nitrogen fixers, are able to deposit the element in the soil and allow other species to use it.

Chemist and physician Daniel Rutherford discovered nitrogen in 1772, as part of his experiments on air. After eliminating carbon dioxide and oxygen, he found that the remaining gas was not flammable and didn't support life. Carl Wilhelm Scheele and Joseph Priestley also conducted similar tests and considered nitrogen to be air without oxygen, or burnt air.
Antoine Laurent de Lavoisier named this gas "azote", meaning "lifeless", as part of his own experiments in 1786. All of these early pioneers observed that nitrogen is the inert part of air, which is unable to support life. Some nitrogen compounds, such as nitric acid, were known well before the official discovery of the gas in 1772.

Scientists were quick to notice that nitrogen doesn't burn and can't be breathed on its own. Because it is not flammable, nitrogen is often used today in the packaging of foods and explosives, as an inert gas that conserves these products safely. While we breathe a lot of nitrogen as part of the air, the pure gas is dangerous. It can displace oxygen from the air, acting as an asphyxiant. Liquid nitrogen is especially risky to handle because at room temperature it turns into gas and can make the air unbreathable. This is why ventilation is essential when liquid nitrogen is used. Divers can also suffer from decompression sickness, a condition caused by this gas: after sudden depressurization, bubbles of nitrogen form in the blood, which can have very dangerous consequences, including death.

Ammonia (NH3) is probably the most important compound that includes nitrogen. Nitrogen reacts with hydrogen as part of the so-called Haber-Bosch process, which produces the colorless gas ammonia. It has a very strong and unpleasant smell but it is quite useful in industry, especially as part of nitrogen fertilizers. This is the use for more than 80% of the ammonia produced today. However, it is also an ingredient in cleaning solutions, textiles, pesticides, plastics and dyes, as well as an effective refrigerant gas.

The stars produce nitrogen through the process of fusion, and eventually a large amount reaches the Earth's atmosphere.
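The Haber-Bosch chemistry mentioned above (N2 + 3 H2 -> 2 NH3) lends itself to a quick stoichiometry sketch. The molar masses are standard values; the one-kilogram input and the assumption of full conversion are illustrative simplifications.

```python
# Back-of-envelope stoichiometry for the Haber-Bosch reaction:
#   N2 + 3 H2 -> 2 NH3
# Standard molar masses: N = 14.007 g/mol, H = 1.008 g/mol.

M_N2 = 2 * 14.007           # g/mol
M_H2 = 2 * 1.008            # g/mol
M_NH3 = 14.007 + 3 * 1.008  # g/mol

def ammonia_yield(mass_n2_g: float) -> float:
    """Mass of NH3 (g) from a given mass of N2, assuming full conversion."""
    mol_n2 = mass_n2_g / M_N2
    return 2 * mol_n2 * M_NH3  # 1 mol N2 yields 2 mol NH3

def hydrogen_needed(mass_n2_g: float) -> float:
    """Mass of H2 (g) consumed alongside that N2."""
    return 3 * (mass_n2_g / M_N2) * M_H2

print(f"1 kg of N2 yields about {ammonia_yield(1000):.0f} g of NH3")
print(f"while consuming about {hydrogen_needed(1000):.0f} g of H2")
```

Mass is conserved: the ammonia produced weighs exactly as much as the nitrogen and hydrogen that went in, which is a handy sanity check on the arithmetic.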
Both in terms of weight and volume, nitrogen makes up most of the atmosphere. It is also widespread in the whole universe, being considered the seventh most common chemical element by mass. Nitrogen is an essential part of nucleic acids, proteins and other molecules that are critical for organic life. Animal waste also has a large amount of nitrogen, in the form of uric acid, urea or ammonia.

Nitrogen is a strange element; even if it's harmless in the atmosphere, it can be dangerous in pure form. Pure nitrogen replaces oxygen molecules in the air, making human breathing impossible, and inhaling the pure gas quickly leads to asphyxiation. Nitrogen is also behind decompression sickness, a potentially fatal problem common in divers that is caused by the presence of nitrogen bubbles in the bloodstream, usually as a result of surfacing too fast.

In normal conditions, 78.1% of our atmosphere consists of nitrogen. However, it is a lot rarer in the crust, where its abundance is comparable to that of rare metals like lithium, niobium or gallium. Some minerals that include nitrogen are known, especially saltpetre (also known as nitre or potassium nitrate) and sodium nitrate (alternative names soda nitre or Chilean saltpetre). In the 1920s, industrial production methods for ammonia and nitric acid were developed, making these minerals less important.

Organisms source their nitrogen from the air and release excess amounts back to it. Plants are the main consumers of nitrogen but they can't use it in pure form; it has to be transformed into ammonia or another compound. Lightning strikes are one of the natural nitrogen fixation methods, the result being nitrogen oxides. However, the main source of fixation is diazotrophic bacteria, which fix nitrogen using enzymes known as nitrogenases.
Modern industrial plants also fix a lot of atmospheric nitrogen into ammonia. Plants use nitrogen compounds such as ammonia to produce proteins. Animals eat the plants, transform plant proteins into proteins of their own and eliminate the extra amounts as waste. When a plant or animal dies, it is oxidized and decomposed by environmental factors and bacteria. The nitrogen in its structure becomes free and returns to the atmosphere.

Modern industrial techniques use the Haber process to fix nitrogen, usually for the production of fertilizers. However, nitrogen-rich waste can be very dangerous for the environment. It degrades fresh and seawater and creates dead zones, as the microorganisms that bloom on the nitrogen consume all the available dissolved oxygen and the other species die. The process of denitrification generates nitrous oxide, which destroys the ozone layer.

Saltwater fish, including those raised in commercial farms, carry large quantities of trimethylamine oxide in their tissues as a protection against osmotic stress. This chemical gradually converts into dimethylamine, which gives saltwater fish an unpleasant smell after a while and can help you detect the ones that are not fresh. Nitric oxide is a free radical used by animals as a molecule that regulates blood circulation. This chemical quickly reacts with water, producing a compound known as nitrite.

Animals process the nitrogen found in plant proteins but don't use all of it, eliminating the rest as urea. Nitrogen is also found in nucleic acids, and the excess is transformed into urea and uric acid. Rotten animal flesh has a distinct odour of decay, caused by the production of long-chain amines rich in nitrogen. The amino acids ornithine and lysine break down into a number of smelly compounds, in particular putrescine and cadaverine.
Sumas Lake today

Oregon Spotted Frogs were declared an Endangered species in Canada by the Committee on the Status of Endangered Wildlife in Canada (COSEWIC) in 1999. The species is also Red-listed in B.C. The primary cause for the decline of the Oregon Spotted Frog has been the loss of wetlands as the Fraser River floodplain was drained for agriculture. The impact is even greater when agricultural land is further converted to housing and urban development. The photos on the left are an example of what happened to Sumas Lake before and after agricultural development.

Oregon Spotted Frogs must also compete with invasive species such as Bullfrogs and Green Frogs, and face the loss of their specialized breeding habitat due to invasive plant species like Reed Canarygrass. Bullfrogs, native to Eastern Canada, were accidentally introduced to B.C. fifty years ago. These large frogs can eat Oregon Spotted Frogs and other smaller frogs, and wetlands where Bullfrogs are found have much smaller populations of native amphibians.

Oregon Spotted Frogs and other amphibians are also highly sensitive to pollutants because of their permeable skin. They are exposed to agricultural chemicals and pesticides used within the channel banks or unsafely stored adjacent to watercourses. The Oregon Spotted Frog is also susceptible to fungal diseases like chytridiomycosis, which affects around 30 percent of the planet's amphibian species. This disease is responsible for dramatic amphibian declines, and there is no known cure.
As historic as it was, the March on Washington did not immediately result in improvements. As the civil rights movement grew, white segregationists in the South were more determined than ever to keep things as they were, and so they became more violent. Bombs were left on the doorsteps of African American homes, and many who demonstrated or worked for the civil rights cause were injured, arrested, or even killed. Still, African Americans grew more confident; they continued to demonstrate, protesting and calling attention to the injustices of a racially divided society. They involved the president, requesting that he give them federal protection when they felt endangered. Media coverage also increased, allowing the rest of the country and the world to see what was going on in the South.

On November 22, 1963, after holding office for less than three years, President John F. Kennedy was shot and killed, and Vice President Lyndon Baines Johnson became the new president. President Johnson urged Congress to quickly pass the Civil Rights Bill in memory of the late president, and the Civil Rights Act became law in July 1964. The Voting Rights Act was signed into law by President Johnson on August 6, 1965. This legislation had a dramatic impact on black voter registration because it ended poll taxes, literacy tests, and other discriminatory practices. In Mississippi alone, the percentage of blacks registered to vote increased from 7 percent in 1964 to 59 percent in 1968.

The effects of the March on Washington did not end with federal legislation, however. President Johnson appointed the first black cabinet member, Secretary of Housing and Urban Development Robert C. Weaver, and the first black Supreme Court Justice, Thurgood Marshall, in 1967.
What Eats Begonias?

Begonias (Begonia spp.) are usually grown as annuals, although some overwinter in the frost-free climates of U.S. Department of Agriculture plant hardiness zones 10 through 11. Some begonias are grown for their ornamental flowers, while others are mostly grown for their attractive foliage. While begonias are relatively pest- and disease-free, you may find something is dining on your begonia's leaves.

Slugs and Snails

Slugs and snails do not miss a chance to feed on the leaves of a begonia plant. These pests feed on begonias at night, leaving you to find the damage the next morning. If you want to make sure slugs or snails are indeed the cause, grab a flashlight and check after dark. Slugs and snails are easily eliminated and controlled with a slug and snail bait. Placing this bait in your planting beds keeps your begonia leaves safe from attack by these pests.

Aphids

Aphids are small, soft-bodied insects that affect many plants. Feeding on the leaves and stems of begonia plants, aphids suck moisture and nutrients from begonias, leading to wilting leaves, stunted growth and distortion. During feeding, aphids leave behind a calling card that can alert you of their presence. Check the leaves of your begonias for a sticky substance or a black, soot-like substance. The sticky substance is called honeydew, which quickly begins to harbor and grow a black mold, called sooty mold, that blocks and interferes with photosynthesis. Aphids are easily controlled by making sure your begonias are properly watered. If aphid populations are particularly large, spray a ready-to-use insecticidal soap over all of the begonia's leaf surfaces.

Caterpillars, Earwigs and Other Pests

Caterpillars and earwigs cause damage similar to that of slugs and snails. They eat small, rounded holes in the begonia's leaves. You are likely to find damage to your begonia plants both day and night with these pests, rather than only in the morning.
Control earwigs with the same bait you use to keep slugs and snails away. Pick off caterpillars by hand. Whiteflies, spider mites, thrips, fungus gnats, shore flies and mealybugs all bother begonias. You can control these pests with insecticidal soap.

Although it is unlikely your begonias are missing because someone ate them, some begonias are edible. The only known toxic species is the hollyhock begonia (Begonia gracilis). Begonia foliage is used for its sweet, tangy and bitter flavors, depending on the variety, in soups, salads and sandwiches. The flowers are also used as a garnish. Proceed with caution, as begonias are known to have a laxative effect. Additionally, do not use begonia foliage that has been treated or sprayed with chemical insecticides or other toxic chemicals.
Introduction to XHTML

XHTML is a markup language written in XML; more precisely, XHTML is an application of XML. It is a hybrid technology that combines the functionalities of HTML and XML to become powerful and efficient. In web development, you must have come across or heard of the term XHTML. There are many technologies available today, and each one has its own importance and use. Similarly, XHTML has a unique role in front-end and web development. In this article, we will try to understand XHTML from all major aspects by answering some interesting questions.

XHTML stands for Extensible HyperText Markup Language. In a few words, XHTML is a combination of HTML and XML: HTML is used for the presentation of the data, while XML is used for carrying the data. It was developed by the World Wide Web Consortium (W3C), the international organization that sets standards for the World Wide Web (WWW). It was designed to help web developers make the transition from HTML to XML, and it is specifically designed for display on networked devices.

Normal HTML works in most browsers, even if it has bad markup. Today, there are many browsers available in the market, including those on smaller devices such as mobiles, and they lack the resources to interpret bad HTML leniently. The solution was to mark up HTML correctly. XML requires documents to be marked up correctly and to be well-formed; that is, XML is stricter than HTML. That is why HTML was combined with XML to develop XHTML, with the strengths of both. Now browsers can read and interpret markup with great accuracy. In addition, XHTML enhances compatibility with other data formats.

How can you use XHTML? It is the follow-on version of HTML, which means we can do everything HTML can do using XHTML. As XHTML makes viewing websites in mobile browsers easy, it is used in mobile website development. We can define and use our own tags and elements in XHTML.
We can convert an existing HTML document into an XHTML document with a few changes.

Advantages and Disadvantages

The following points can help when deciding whether to use XHTML. Below are the advantages:

1. Extensibility: As we can define and use our own tags, we can implement new ideas as web communication and presentation logic emerge. Say there is a new program at the receiving end that we want to communicate with; we can define our markup as per its needs and use it without any compatibility issues. New features can appear on a website as soon as they emerge. Specific sets of extensions for XHTML are provided for mathematical notation, multimedia applications, and vector graphics.

2. Portability: As it follows the standards of XML, processing becomes easy and effortless for XML parsers. By using it, web pages can be made simpler so that small devices can handle them. This is important for mobile and other small devices, which contain small processors with less power. Portability also means we can develop a document to fit a specific requirement whenever needed.

3. Easy to Maintain: As the rules are clear in XHTML, the margin for error is smaller. The structure is more apparent, and problem syntax is easier to spot; therefore, it is easy to author and maintain.

4. Ready for the future: Documents can easily be upgraded to a new version to take advantage of new features.

There are no direct disadvantages, but there are a few limitations:

- It does not solve all cross-browser compatibility issues.
- It is more difficult to begin with, as it is stricter, and sometimes you must think carefully when coming up with new element names.

For learning XHTML, you should have some basic knowledge of HTML and XML; at least, it requires knowledge of their use and functionality.
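The well-formedness requirement is easy to demonstrate: because an XHTML document must be valid XML, any off-the-shelf XML parser can check its syntax. The two document snippets below are invented minimal examples, and the parser shown is Python's standard-library one.

```python
# XHTML must be well-formed XML, so a plain XML parser can validate its
# syntax. The first snippet is a minimal well-formed XHTML document; the
# second is the sloppy HTML that legacy browsers tolerate but an XML
# parser rejects.
import xml.etree.ElementTree as ET

good = """<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Demo</title></head>
  <body><p>Every tag is closed and attributes are quoted.<br /></p></body>
</html>"""

bad = "<html><body><p>Unclosed paragraph<br></body></html>"

def is_well_formed(doc: str) -> bool:
    """Return True if the document parses as XML without errors."""
    try:
        ET.fromstring(doc)
        return True
    except ET.ParseError:
        return False

print(is_well_formed(good))  # True
print(is_well_formed(bad))   # False: unclosed <p> and <br>
```

This is exactly the strictness discussed above: a browser's lenient HTML parser will render both snippets, but only the first one qualifies as XHTML.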
Any developer starting to learn XHTML may want to know the basics of web page development first; as XHTML is the successor to HTML, it has the same structure as HTML. Why Should We Use XHTML? - It is supported by, and compatible with, all major browsers on the market, and XHTML documents can be written to operate better in existing browsers. - It is stricter than HTML in terms of syntax and case sensitivity, which pushes developers to write accurate code. - The documents are well-formed and consistent, and can be parsed easily by present and future web browsers. The XHTML family is designed to accommodate the extensions provided by XML for developing new XHTML-based modules. These modules make it easier to combine new and old features when developing content. Those choosing between HTML and XHTML should weigh both against the specific requirement: as HTML is the basis of web page development, XHTML can be that basis too, depending on the project's needs, and it can be extended by anyone who uses it. HTML5 is already on the market, so think carefully before committing, especially if you are developing from scratch. Why do We Need XHTML? It is the improved version of HTML, combining the power of both HTML and XML, and it solves problems that arise when these technologies are used separately. Who is the Right Audience for Learning XHTML? Anyone enthusiastic about web development can learn XHTML. Web developers already using HTML4 may want to consider switching to XHTML. How will this Technology Help you Grow in your Career? With many modern technologies available, including HTML5, XHTML will be helpful in specific cases. Learning it will surely benefit web developers and help them in their career growth. This has been a guide to What is XHTML.
Here we discussed its use, scope, working, advantages, disadvantages, and career prospects. You can also go through our other suggested articles to learn more.
Adornment reflects a symbolic visual language that includes materials and designs that contain communally understood messages. That is, the clothing or items we wear convey information about us to other people. The expression of these messages through clothing, tattoos, jewelry, or body paint, conveys information about an individual, group, society, or religion. Beading, and other embroidery techniques, can be seen as one aspect of adornment for Indigenous groups, and one that played a central role in cultural preservation for many groups post-European contact. Beads have been found in the archaeological record as early as 40,000 years ago, and are staples in decorative adornment. Beads can be fashioned from many different natural materials including plant seeds, stone, gems, shell, bone, or metal. While plant-based seeds are the easiest to manufacture due to their availability, beads made from bone, metal, gems, or semi-precious stone require more effort and technology to produce, and are therefore more highly valued. Prior to European contact, Woodland and Plains cultures of North America decorated the skins of animals, tree bark, and their own bodies with locally available and traded materials. Materials such as seeds, berries, porcupine quills, moose hair, plant fibre, bone, red jasper, shells, pearls, and copper have been documented across North America as common decorative resources. Exotic or rare materials were seen as prestige items, and a diverse network of trade routes facilitated the movement of rare materials across the continent. Plant-based dyes and mineral paints allowed a diverse range of colour to be applied to the raw materials. Black was made from charcoal, yellow from alder bark, red from currants or red ochre, and blue from blueberries. These were incorporated into geometric and linear patterns on baskets, footwear, and clothing. 
While beads were present in Pre-Contact North America, they were not the principal medium of adornment, especially in the north. In addition to traditional beadwork, moose-hair embroidery and quillwork were important adornment techniques to Indigenous groups living in the Woodlands, Arctic, and Plains. Quillwork is a distinctly North American form of artistic expression created by the Subarctic, Great Lakes, and Eastern Woodlands peoples. Each region developed its own styles, colours, and sewing techniques in order to fasten the quills to hides. Often this included soaking and boiling the quills with plant substances and iron-holding clay, which softened the quills. They were then flattened and attached to hides with wrapping, weaving, or appliqué techniques. While this form of adornment was eventually replaced by beadwork, it still remains an important decorative technique used in contemporary work. During the fifteenth and sixteenth centuries, French and English traders introduced glass ‘seed beads’ to First Nations groups. These quickly became the principal medium circulating to the First Nations along the St. Lawrence and Mississippi Rivers due to their varied colours, availability, and the concurrent introduction of trade cloth and steel needles. Steel needles, trade cloth, and glass seed beads represented hours of time savings for Indigenous craftspeople, as these required much less preparation prior to their transformation into clothing, and the relative ease of embroidery on cloth compared to hide. Beadwork quickly rose to become the predominant craft, resulting in a decline of more traditional decorative techniques such as quillwork. The culture clash between Indigenous and European peoples also left an impact on the designs used for beading. 
While geometric and nature-inspired designs dominated Pre-Contact bead-craft, the influence of French and English flower designs such as chintz or calico on imported textiles resulted in changes to Indigenous patterns. New embroidery techniques taught to Huron and Iroquois girls by Ursuline nuns beginning in the seventeenth century also facilitated changes to traditional styles. The patterns produced by the Huron and Iroquois reflected a combination of European needlework and Indigenous curvilinear designs, which then diffused to their western neighbours. The floral style moved westward with the fur trade, often replacing earlier geometric traditions. Due to the loss of their traditional lands, and with it their subsistence base, the selling of art became an important source of income for Indigenous peoples. This further influenced the patterns and styles of beadwork, as motifs and patterns were tweaked to suit a more European taste. During the nineteenth and twentieth centuries, Indigenous craftspeople exploited the desire of Europeans for ‘exotic’ goods produced by Indigenous peoples. Non-traditional items reflecting Victorian consumer tastes, such as cigar cases, tea cozies, chair seats, and whisk broom holders, were created for sale to a European audience. Ruth Phillips, in Trading Identities: The Souvenir in Native North American Art from the Northeast, 1700-1900, offers several explanations as to why Indigenous people gravitated to floral designs post-contact, and how European officials, who set out to ‘civilize’ Indigenous groups, missed the encoded social, spiritual, and cultural imagery embedded within the floral images. One of the explanations Phillips provides is that flowers communicated different cultural points of view: Victorians associated flowers with femininity and submission, while Indigenous peoples connected flowers to their spiritual and cultural values of nature.
While Euro-Americans may have viewed the adoption of floral designs as Indigenous groups becoming more ‘civilized’, Indigenous artisans were able to preserve social, spiritual, and cultural messages within the floral images. Stylistic choices such as symmetry, the colours chosen, and the representation of specific plants are directly connected with important Indigenous values, but from a European view, these choices did not hold any meaning. This disconnect between cultural viewpoints created an opportunity for Indigenous peoples to retain and pass along their teachings and beliefs through encoded meaning in beaded designs. Beadwork and other Indigenous arts reflect an indivisible part of their culture, and present a unique economic opportunity that works towards cultural survival. While the transition to floral designs and the creation of items for sale to European consumers has historically been viewed as a symbol of traditional loss, their production is now understood as crucial in their quest for economic and cultural survival. If you are interested in learning more about beadwork and moccasins, the Musée Héritage Museum is holding a temporary exhibit from August 21-October 21, 2018 titled “In Their Footsteps: A Century of Aboriginal Footwear in the Canadian West”. Click here for more details. http://museeheritage.ca/whats-on/exhibitions/ Written By: Talisha Chaput, Summer Graduate Intern, Archaeological Survey Title Image: Blackfoot beadwork. Photo courtesy of the Royal Alberta Museum. - Phillips, Ruth Bliss. Trading identities: The souvenir in Native North American art from the northeast, 1700-1900. University of Washington Press, 1998. - Dubin, Lois Sherr. Floral Journey: Native North American Beadwork. Autry National Center of the American West, 2014.
In January and February of 2016, a total of 13 young sperm whales washed up on the beach near the town of Tönning in Schleswig-Holstein, Germany. An autopsy revealed that the whales had all died of heart failure. The researchers believe that the young bulls, all between 10-15 years old, may have entered the North Sea by mistake. Since the sea floor here is too shallow for these deep-sea dwellers, the whales became disoriented and perished. While that is certainly sad, what is worse is the amount of plastic the scientists discovered inside the mammals' stomachs. Among the man-made trash mistakenly ingested by the young whales were the remains of a 13-meter-long and 1.2-meter-wide safety net used for shrimp fishing. The scientists also found a 70-centimeter-long plastic cover from a car engine and some sharp-edged pieces from a plastic bucket. Though the plastic was not responsible for the death of the sperm whales, the discovery is a harsh reminder of the harmful consequences of our plastic-ridden society. Ursula Siebert, head of the Institute for Terrestrial and Aquatic Wildlife Research at the University of Veterinary Medicine Hannover, whose team examined the sperm whales, says, "If the whales had survived, the garbage in their guts might have caused digestive problems down the line." Also, as whales eat more trash, it may give them the false comfort of being full and reduce their desire to feed, resulting in malnutrition. Sperm whales are not the only marine animals hurt by the increasing amount of plastic in our oceans. Sea turtles also mistake the brightly colored trash for food. As pieces of the man-made material get stuck in the animal's digestive tract, they result in a build-up of gas causing what scientists refer to as "floater syndrome." As the name indicates, the turtles can no longer dive deep into the ocean to seek food. Instead, they just float on the surface of the water and, if not rescued in time, starve to death.
According to researchers from the University of Queensland, in the past six to seven years the number of marine species ingesting or getting entangled in plastic has increased almost three-fold, from 250 to 700. The scientists warn that even tiny plankton, the food source for many marine animals, are consuming the trash. Dr. Qamar Schuyler from the UQ School of Biological Sciences says: “Unfortunately, what this means is that if the bottom of the food chain is eating plastic, it bio-accumulates up the food chain, and there have been several studies that have looked at food fish – fish that we go out, and purchase – and even these fish have plastics in their intestines.” If the possibility of consuming seafood filled with plastic does not serve as a wake-up call to change our careless habits, we don't know what will! Resources: Telegraph.co.uk, Greenpeace.org, Dailymail.co.uk
Autism is a set of heterogeneous neurodevelopmental conditions, characterised by early-onset difficulties in social communication and unusually restricted, repetitive behaviour and interests. The median worldwide prevalence of autism is 0.62–0.70%, although estimates of 1–2% have been made in the latest large-scale surveys, with a mean age of onset of 1.78 years. Autism affects 4–5 times more males than females. All individuals on the autistic spectrum demonstrate deficits in three core domains: - Reciprocal social interactions - Verbal and nonverbal communication and - Restricted and repetitive behaviors or interests There is marked variability in the severity of symptoms across patients, and cognitive function can range from profound intellectual disability through the superior range on conventional IQ tests. More than 70% of individuals with autism have concurrent medical, developmental, or psychiatric conditions such as epilepsy, anxiety, depression, obsessive compulsive disorder, etc. Early identification allows early intervention. The most effective interventions so far are behavioural and educational; drugs have had only a minor role. Intervention and support should be individualised and, if appropriate, a multidimensional and multidisciplinary approach should be used. The goals are to maximize an individual's functional independence and quality of life through development and learning, improvements in social skills and communication, reductions in disability and comorbidity, promotion of independence, and provision of support to families. Comprehensive behavioural approaches address cognitive, language, sensorimotor, and adaptive behaviour via long-term intensive programmes. The goal of pharmacologic treatment for children with autism is to improve symptoms and specific behaviors.
Target symptoms include anxiety, repetitive motor behaviors, obsessive compulsive symptoms, impulsivity, depression, mood swings, agitation, hyperactivity, aggression, and self-injurious behavior. Only four case reports of deep brain stimulation for autism have been published so far. The only successful target has been the nucleus accumbens, a nucleus located at the bottom of the internal capsule. This nucleus is considered to be highly associated with the reward phenomenon, a trait that is impaired in these patients. Encouraged by the experience of Park et al1, we decided to offer this surgical option to the patient. We were fully aware that even if it did not change the autistic behavior, the surgery would have a positive impact on obsessive behavior and aggression. Following surgery, the patient showed remarkable improvement in all symptoms, and her social engagement also increased. Prognosis and outcome A meta-analysis showed that individuals with autism have a mortality risk that is 2.8 times higher (95% CI 1.8–4.2) than that of unaffected people of the same age and sex. Reference: 1. Park RH, et al. Nucleus accumbens deep brain stimulation for a patient with self-injurious behavior and autism spectrum disorder: functional and structural changes of the brain: report of a case and review of literature. Acta Neurochir (2017) 159:137–143
Published at Thursday, April 11th, 2019 - 23:49:13 PM. Worksheet. By Bernadette Marie. Demonstrating Progress, If we cannot demonstrate children’s progress with worksheets, how do we provide evidence of learning? Here are several ways: Portfolios – A portfolio is a collection of a child’s work. Portfolios can include the following: Work Samples, Keep samples of each child’s drawings and writing, including invented spelling. Photographs of creations of clay, wood, and other materials can also be included. Children should have a say in what is included in their own portfolio. Date each piece so that progress throughout the school year can be noted. Observations: Keep observational records of what children do in the class. There are many efficient methods of recording children’s behavior. Audio and video tape can capture them in action. Occasional anecdotal notes also help. Checklists: Record children’s skill development on checklists. Progress in beginning letter recognition, name writing, and self-help skills, for example, can be listed and checked off as children master them. Before a child can hold a pencil and make an accurate mark on paper, he must have a great deal of small motor control. He needs practice with various materials and objects that require grasping, holding, pinching, and squeezing. He must have ample opportunity to make his own marks with objects such as paint brushes, chalk, fat crayons, and felt-tip markers. Only later, when he has achieved the necessary finger and hand control, should he be asked to write words or numerals with a pencil. The timing of this accomplishment will vary among children. Some four-year-olds and most five-year-olds are ready to write a few things, notably their own names. But, we must remember that each child develops on his or her own schedule, and some six-year-olds may be just starting this task. If they are encouraged, rather than criticized, they will continue to learn and grow and feel confident. 
In this article the author considers the causes of anaemia in women Dr Andrew Blann, PhD FRCPath FRCPE, Blood Science Solutions, Birmingham, UK The energy our body needs for processes such as digestion, locomotion, heartbeat and thought is provided from a dietary source, such as glucose, and from oxygen. Glucose and other nutrients find their way around the body in the plasma, while oxygen is carried within red blood cells. Anaemia is caused by the inability of haemoglobin and/or red blood cells to provide enough oxygen to the tissues, resulting in loss of energy. The consequences of this insufficient oxygen (hypoxia) are a group of common signs and symptoms, outlined in Table 1, and are often used to diagnose anaemia. Unfortunately, many of these are present in a host of other conditions, such as lung disease, and so to truly define anaemia a blood test is required. The full blood count (FBC) This blood test provides information on platelets and white blood cells, in addition to red blood cells. Of these, the most important are: - Haemoglobin (the protein to which oxygen binds) - The red cell count (the number of cells in a given volume of blood), and - The mean cell volume (a measure of the size of the red cell). A diagnosis of anaemia is made if the subject is symptomatic (as in table 1) and has abnormalities in their red blood cells – typically a low haemoglobin and/or a low red cell count. An allied test is the erythrocyte sedimentation rate (ESR), which is abnormal in anaemia, but is also abnormal in many other diseases such as cancer and inflammatory disease such as arthritis, and so is not specific for anaemia. The investigation of anaemia Anaemia may arise from a number of different processes, and treatment cannot start before the reason for the anaemia has been identified. These reasons include: - Damage to the bone marrow (where red cells are produced). 
The leading causes of damage include drugs, viruses and infiltration by cancers from other tissues (such as the breast or prostate), or cancer of the bone marrow itself (such as leukaemia and myelodysplasia) - Lack of nutrients, mostly iron and vitamin B12, which may be due to malnutrition or, more likely, to malabsorption - Disease in other organs, such as the liver, kidney and reproductive organs - Haemolysis – the bursting, destruction or inappropriate break-up of red cells. This may be caused by drugs, infections (notably malaria) and autoantibodies, in which case the disease is called auto-immune haemolytic anaemia - Loss through an acute or chronic bleed (that is, haemorrhage), such as after surgery, due to over-use of anticoagulants, or from a ruptured blood vessel or ulcer that may leak into the intestines - A haemoglobinopathy, the most common being sickle cell disease and thalassaemia. The size of the red cell To further investigate the cause of the anaemia, knowledge of the mean cell volume (MCV) is essential. The MCV is the size of the ‘average’ red cell, and is recognised in three categories: - When the MCV is small, the cell is microcytic, so there is a microcytic anaemia. There are two principal reasons for red cells becoming microcytic: lack of iron and haemoglobinopathy - When the MCV is normal, the cell is normocytic, so there is a normocytic anaemia. This type of anaemia is present in cancer, and when there is loss of blood by haemorrhage - When the MCV is large, the cell is said to be macrocytic, and there is a macrocytic anaemia. Causes of this type of anaemia include lack of vitamin B12, alcoholism, liver disease and pregnancy. Figure 1 summarises this process. For complete proof of the causes of microcytic and macrocytic anaemia, additional tests are required. These will be ‘iron studies’ and serum vitamin B12 and folate respectively.
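The three MCV categories above amount to a simple triage rule. The Python sketch below is purely illustrative: the 80 fL and 100 fL cut-offs are typical adult reference values assumed here, and real laboratories set their own ranges.

```python
# Illustrative triage of an anaemia by mean cell volume (MCV).
# The cut-offs (80 and 100 femtolitres) are assumed typical adult
# reference values; laboratory ranges vary.
def classify_by_mcv(mcv_fl):
    """Classify an anaemia by MCV, given in femtolitres (fL)."""
    if mcv_fl < 80:
        # Microcytic: consider lack of iron or a haemoglobinopathy
        return "microcytic"
    elif mcv_fl <= 100:
        # Normocytic: consider haemorrhage or underlying cancer
        return "normocytic"
    else:
        # Macrocytic: consider B12 deficiency, alcoholism,
        # liver disease, or pregnancy
        return "macrocytic"

print(classify_by_mcv(72))   # microcytic
print(classify_by_mcv(90))   # normocytic
print(classify_by_mcv(110))  # macrocytic
```

As the article notes, the MCV only narrows the search; iron studies or serum B12 and folate are still needed for proof.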
Treatment of anaemia Once a diagnosis of anaemia has been made (according to the presence of symptoms and abnormal red cell indices), the cause (such as a malignancy) must be determined and, if possible, treated. The MCV, followed by other blood tests, narrows down the potential causes of the anaemia, and so can direct treatment. For example, if the iron studies indicate a lack of this micronutrient, it may be supplemented orally or by parenteral routes. Macrocytic anaemia, due to lack of vitamin B12 or folate, can be treated with injected or oral supplements respectively. As the spleen is one of the sites of removal of abnormal red cells, splenectomy is often used to help treat the haemolysis found in autoimmune haemolytic anaemia, sickle cell disease and thalassaemia. However, this surgical option would be considered only after other treatments have been shown to be inadequate. Those whose anaemia is a consequence of chronic kidney disease may benefit from injection of the hormone erythropoietin. In the most severe and life-threatening situations, blood transfusion may be the only option. Anaemia in women Heavy menstrual bleeding From the haematologist's perspective, menorrhagia exceeding 80ml of blood per cycle (although difficult to measure) can lead to a normocytic anaemia. However, if the bleeding is prolonged and iron stores become depleted, the anaemia will become microcytic. Consequently, iron studies are essential, especially if the bleeding has been prolonged, and supplements may be necessary. But, of course, it is preferable to determine and treat the cause of the blood loss itself. Iron deficiency anaemia Globally, this is the most common form of anaemia, and is typically microcytic. It can only be diagnosed with other blood tests such as serum iron and ferritin. In the developed world the most likely cause is malabsorption, caused by diseases of the stomach and intestines.
Abnormal absorption of iron (and other foodstuffs) is often the consequence of cancer, ulcers, and inflammation of these organs, examples of the latter including chronic gastritis and inflammatory bowel diseases such as Crohn's disease and coeliac disease. Women are more likely than men to suffer inflammatory and auto-immune diseases such as rheumatoid arthritis and lupus. These can lead to the development of ‘the anaemia of chronic diseases’ (ACD). A common treatment is oral ferrous sulfate tablets, given for as long as the symptoms of anaemia and abnormal laboratory results persist. However, treatment may need to continue even longer if iron stores in the liver and elsewhere need to be replenished. Depending on the degree to which iron stores have been depleted, this is likely to be at least six months. An alternative, which gets around the malabsorption problem, is slow subcutaneous injection or intravenous infusion. Although the response to parenteral iron is often no quicker than by the oral route, stores in the liver and elsewhere are likely to be replenished more rapidly. Side effects include hypersensitivity, nausea and vomiting. New treatments are emerging, including ferric carboxymaltose, which is administered as an injection or infusion and can be used when tablets are ineffective. Anaemia and pregnancy Normally-menstruating women need more iron than men and post-menopausal women, but this requirement for 1-2mg daily should be increased to 1.5-3mg daily if pregnant. This is because the baby will demand 300mg of iron, and other changes (uterus, red cells, placenta) another 600mg. Anaemia may be a problem because, although the pregnant woman increases her red cell mass by about a quarter, this is exceeded by an increase of a third or more in her blood volume. If the haemoglobin is as low as 100 g/L, then there may be an additional cause, and a low MCV should be investigated, as the size of the red cells should stay the same or increase, not get smaller.
It is also important that pregnant women have plenty of folate, as babies need this essential micronutrient for the development of the spinal cord and central nervous system. Insufficient folate causes neural tube defects such as spina bifida and anencephaly. The pregnant woman also needs folate for her own red blood cell development. Accordingly, in the UK, NICE recommends 400 micrograms daily before pregnancy and throughout the first trimester, even if the woman is already eating foods fortified with folic acid or rich in folate. However, this should be 5mg daily throughout the pregnancy if there has been a previous neural tube defect. Routine Blood Tests Explained (3rd Edition). AD Blann, M&K Updates, 2014. Townsley DM. Hematologic complications of pregnancy. Semin Hematol. 2013;50:222-31. Miller JL. Iron deficiency anemia: a common and curable disease. Cold Spring Harb Perspect Med. 2013;3:a011866. doi: 10.1101/cshperspect.a011866. Haider BA, Olofin I, Wang M, Spiegelman D, Ezzati M, Fawzi WW; Nutrition Impact Model Study Group (anaemia). Anaemia, prenatal iron use, and risk of adverse pregnancy outcomes: systematic review and meta-analysis. BMJ. 2013;346:f3443. doi: 10.1136/bmj.f3443.
Hydro Energy Sources Hydro Energy Exploits the Power of Flowing Water Hydro energy is a renewable energy resource that uses the movement of water to rotate a water turbine or waterwheel, which in turn produces a rotational mechanical output. The potential energy of the moving water is released as work because the water is in motion, and the best way to put large amounts of water in motion is to let gravity do the work. The most important element for the production of hydro energy is therefore not the water itself but gravity, as it is gravity that makes the water move; we can correctly say that hydro energy is gravity-powered energy, as we are generating electricity from gravity. Hydro energy is actually one of the most common kinds of renewable energy technology in use today, and with good reason. This kind of alternative energy, in addition to being sustainable, renewable, cost-effective, and environment-friendly, also has the potential to one day supply most of the world's energy needs. Today, large-scale hydro energy in the form of dams and reservoirs supplies about 15% of the world's electricity, with some countries, such as New Zealand, producing over 70% of their electrical energy from hydro power stations alone. Hydroelectric power is clean, natural power, producing no carbon dioxide or other harmful emissions, unlike the burning of fossil fuels and gas. Hydroelectric power generated by the force of moving water can also be less expensive than the equivalent electricity generated from fossil fuels and hydrocarbons. The disadvantage of hydroelectric power, however, is that large concrete dams are required, flooding large areas of land, to obtain the vast quantities of water needed to rotate the turbines. The theory of hydro energy is very similar to that of wind energy, but instead of using the wind to rotate the turbine's blades, we use the constant flow of water.
Previously, water turbines had only been used for the pumping of water or for the irrigation of land, but thanks to technological changes over the last few decades, we can now use water turbines to generate electricity, not only on a large industrial scale but at home in the form of small-scale hydro power systems. The moving water rotates a water turbine or water wheel, converting the kinetic energy of the moving water into usable mechanical energy. This is then turned into electrical energy through the use of a generator. The amount of electricity generated is determined by how far the water falls, called the “head height”, and the average volume of water flow. These “run-of-river” small-scale hydro power plants can be very efficient and very simple to install if there is running water near your home. For most people it is highly unlikely that they will have a stream or river flowing through their land big enough to provide a sufficient source of water to power even a small-scale hydro system. But if you happen to be one of the lucky few who have a reasonably sized stream running past your garden or land, then a small-scale hydro power system may be for you, as building even a small dam and adding an electrical generator on a reasonable river can provide a good amount of energy for your home. In most cases, the natural gravitational flow of the water is more than enough to generate a good supply of electricity for the home, but in other cases, funnelling the water along pipes, troughs, canals and/or penstocks may be required to increase the water's speed and therefore its potential energy output. Hydro energy has one big advantage over the other renewable energy sources you may be considering to power your home.
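The two quantities mentioned above, head height and average flow, feed directly into the standard estimate of hydro power, P = η × ρ × g × Q × H. The Python sketch below is illustrative only; the stream figures and the 70% turbine efficiency are hypothetical assumptions.

```python
# Standard hydro power estimate:
#   P (watts) = efficiency * water density * gravity * flow * head
RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydro_power_watts(head_m, flow_m3_per_s, efficiency=0.7):
    """Electrical power from falling water, in watts.

    head_m        -- head height: how far the water falls (m)
    flow_m3_per_s -- average volume of water flow (m^3/s)
    efficiency    -- assumed overall turbine/generator efficiency
    """
    return efficiency * RHO * G * flow_m3_per_s * head_m

# A hypothetical small stream: 3 m of head, 0.05 m^3/s of flow
print(round(hydro_power_watts(3.0, 0.05)))  # about 1 kW
```

Doubling either the head or the flow doubles the output, which is why even a modest run-of-river site can be worthwhile.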
The main advantage is that a small-scale hydro system can generate power 24 hours a day, 7 days a week, unlike other renewable sources such as wind turbines or photovoltaic panels, as the wind does not always blow and the sun does not always shine. But as long as there is gravity, the water will always keep flowing down a stream or river, both day and night, and the faster the water flows, the more free energy can be generated. Whether you install a simple water wheel and DC generator or a more complex small-scale hydro power plant, using hydro energy to power your home can be easier than you think. With a grid-connected system you can even sell the excess electricity you do not use back to the utility company that has been selling it to you all these years, giving you an additional income. Alternative Energy Tutorials is dedicated to showing you what you need to make hydro energy a reality! For more information about hydro energy and how to generate your own electricity using the power of water, or for more information about the various small-scale hydro energy systems available, or to explore the advantages and disadvantages of hydro power, then Click Here to order from Amazon today a copy of a book about small-scale and low-head hydro systems which can be used for generating electricity.
Diabetes is a common disease all over the world. Both type 1 and type 2 diabetes develop when the beta cells located in the pancreas fail to produce enough insulin. Insulin is a hormone that regulates sugar levels within the body. Given this, researchers believe that one approach to treating diabetes is to encourage the regeneration of new beta cells. In current medical studies, there are two ways of generating hormone-secreting endocrine cells, such as beta cells, from human embryonic stem cells (hESCs): producing the cells in vitro in culture, or transplanting undeveloped endocrine cell progenitors into mice. Researchers at the University of California, San Diego School of Medicine, working with scientists from the San Diego-based biotech company ViaCyte, Inc., compared and contrasted the two types of hESC-derived endocrine cell populations with primary human endocrine cells. The hope is to produce new stem cell therapies for diabetes. The scientists compared gene expression and chromatin architecture, and here is what they found, says principal investigator Maike Sander, MD, professor of pediatrics and cellular and molecular medicine, and director of UC San Diego's Pediatric Diabetes Research Center. “We found that the endocrine cells retrieved from transplanted mice are remarkably similar to primary human endocrine cells. This shows that hESCs can differentiate into endocrine cells that are almost indistinguishable from their primary human counterparts.” The researchers found that the endocrine cells produced in vitro did not have the same features as primary endocrine cells; they failed to express the majority of genes that are important for endocrine cell function. Sander explained one possible way of moving forward with cell replacement therapies.
She suggested that transplanting the endocrine precursor cells into humans so they can mature there may be possible, since this happens in mice. Sander said, “However, we don’t currently know whether the maturation process will occur in humans in the same way.” Another approach involves generating fully functional endocrine cells in a culture dish and then transplanting them into humans; however, a method for producing such cells in a dish does not yet exist. “This information will help devise protocols to generate functional insulin-producing beta cells in vitro,” said Sander. “This will be important not only for cell therapies, but also for identifying disease mechanisms that underlie the pathogenesis of diabetes.”
You Can Improve Reading Fluency and Comprehension Skills - Improve Reading Speed and Accuracy: see 80–180% growth in 3 to 6 months. - Improve Processing Speed: how quickly you see and recognize shapes, letters, and words on a page. - Improve Visual Tracking: how efficiently your eyes move from left to right across the page. - Improve Comprehension: studies show a key link between fluency and comprehension. - Improve Confidence: students see their improvement on a daily basis, which improves their self-esteem and confidence. One home-school mom recently tried our Five Minutes to Better Reading Skills out with her family: “When I first heard of Bonnie Terry’s 5 Minutes to Better Reading Skills program, I assumed it was only for beginning readers. Not so! It’s for everyone! My kids from 3rd to 6th grade are already benefiting from it and we’ve only been using it for a week. Their reading speed, confidence, and comprehension are going up by leaps and bounds!” What Is Reading Fluency? Reading fluency is the ability to read quickly and accurately. Once you are fluent in reading, you are able to focus on what the text means rather than on decoding words. Five Minutes to Better Reading Skills helps you become a fluent reader in 5 minutes a day with short reading drills, and an independent 5-year study has shown substantial gains from using it. A Reading Strategy to Improve Reading Skills - Perfect for anyone with a short attention span - Spend only five minutes a day in individual or group lessons - Proven to be very effective for all ages - Improves reading through visual and auditory learning - Improves reading fluency - Improves visual perception skills Research Supports Reading Fluency Drills These are the results from 15 years of data with 1st graders through 8th graders. Additional studies from the University of Florida have replicated these results, showing significant growth in reading level and reading rate. 
A daily 5- to 6-minute fluency intervention focuses on phonics and oral reading. The University of Florida reports that reading fluency practice of only 5 or 6 minutes a day results in substantial gains; it’s the daily fluency practice that makes the difference. Check out how much fun the kids have and just how simple it is in Celena Marie’s video. “My 7-year-old began with a reading speed of 37 words/minute (with one error) and has improved to 46 words/minute (without an error) in just a few days. My 5-year-old son learned to read recently and he has gotten in on the “racing” too. His beginning speed was 10 words/minute (with two errors) and his current speed is 23 words/minute (without an error) in just over a week!” Heather R, KS “I’ve been using the Five Minutes to Better Reading Skills with my 8-year-old son. He’s reading, but he still lacks confidence and speed. He loves doing the speed drills in the book! It’s because it is instant success. He looks at a list of short words that he can read with over 90% accuracy.” April C., MO Improve Reading Fluency with Quick 5-Minute Drills 5 Minutes to Better Reading Skills was developed by Bonnie Terry, M.Ed., BCET (Board Certified Educational Therapist). This proven method improves reading through visual and auditory learning. Lessons are only 5 minutes long, making them ideal for students with short attention spans. It is a proven program that is sure to turn slow, choppy readers into faster, smoother, and more confident readers. Over more than 40 years, Bonnie has found that, with the right reading strategies, every child can improve their skills, whether or not they have identified learning disabilities, dyslexia, or perception problems. Five Minutes to Better Reading Skills gives teachers and parents everything they need to help children improve their reading: fast, fun, motivating success with over 40 phonics reading drills. Nancy H. says, “I can’t get over the difference. 
Bridget won’t put a book down. Now we have to go into her room when we go to bed to be sure her light is off…she reads for hours at night. I can’t believe she read Harry Potter in just a few days!”
Screening is a way of trying to find out whether people are at risk of certain health conditions. Medical tests are significant since they help the individual to make informed decisions and seek early treatment. In some cases, you may have a disease but no symptoms. The main idea is to find a disease while it is still in its early stages, or before it develops. It is possible to treat certain conditions before they start causing severe health problems, and it is possible to prevent some diseases before they even occur. However, not all diseases have suitable screening tests that allow early detection, and in some instances, not all of the conditions picked up during screening can be treated. Specific diseases make up the UK’s national screening programmes. Why A Medical Screening Test? Screening involves a check performed by a health care professional to detect a disease before it develops, or to pick up a condition in its early stage; if the result is positive, further testing is required. A positive outcome from the second test determines the need for treatment. Certain cancers, like bowel and breast cancer, can be diagnosed by a test while still in the early stages, and after detection, initial treatment can stop them from spreading. There is also a pre-cancer stage that affects the neck of the womb in women; during this stage of the disease, there is treatment that can help stop the spread of cancerous cells to other organs. As they say, prevention is the best method, and the next best thing is to catch a disease very early. Early detection via the cervical screening programme helps to prevent the development of cancer of the cervix in the affected person. Each condition has a different test: some screening tests involve a blood test, some a physical examination, some a scan, and others a particular type of X-ray. 
Screening Test Results When your result is negative, it means that you have a low risk of the disease. When you go for screening, you want to know your level of risk of a specific disease; a negative result therefore does not guarantee that you will never develop the condition in the future. Screening tells you that you are at low risk at that particular moment. A positive screening result means that you have a higher risk of the condition you tested for, and further tests are required. These are called diagnostic tests, and they aim to confirm whether you have the disease. Once it is established, you get treatment advice as well as other necessary support to help you cope with the condition. If the condition is detected early, there is a high chance that treatment will be effective. On a different note, you should also remember that screening is not always perfect. It can lead you to tough decisions that can even worsen your current situation. Conditions Suitable For Screening Tests Precise conditions should apply before a test is considered for the national screening programme. It is essential to make sure that the test is accurate, avoiding a situation where many people falsely test positive; the test should also not miss the condition in many people who have it. There must also be a test that can pick up the condition before symptoms develop; in some cases, it might be too late for the test to be useful after symptoms have appeared. An effective test should be able to detect the disease while still in its formative stage rather than waiting for signs. The benefits of conducting the test should outweigh the potential risks of performing it. If the analysis does not yield useful results, it might not be necessary to conduct it. On the other hand, if the test poses risks to people, then there might be no justification for performing it. 
Ultimately, a screening test should assure people that it will improve their health, not the other way round. The other important aspect is that the test must be reasonably easy to perform so that the targeted people will not shun it; if the process of conducting it is complicated, it might not be acceptable to many individuals. Another aspect to consider is that the cost of the test should not outweigh the benefits it brings. It should be reasonable, so that many people can afford it and will voluntarily go for testing. If a person tests positive, there should be a clear course of action to follow. The other important element is that once a condition is detected, there must be a treatment available while it is still in the early stages. There is no reason to screen for something if physicians cannot do anything about the condition; people would prefer not to have the test in the first place. Types Of Screening Offered By NHS UK An independent expert group called the UK National Screening Committee (UK NSC) advises the NHS in the UK’s four countries on the types of screening programmes to offer in all the nations. A. Screening For Pregnant Women There are different types of screening for women during pregnancy, including the following: 1. Infectious Diseases When you are pregnant, you will get blood tests for three infectious diseases, namely HIV, hepatitis B, and syphilis. All pregnant women are recommended to take the test in every pregnancy, as all three conditions can pass from mother to child. 2. Screening For Down’s, Patau’s, and Edwards’ Syndromes In England, all pregnant women get screening tests for the above conditions between 10 and 14 weeks of pregnancy. 3. Sickle Cell Disease and Thalassaemia Screening These two conditions are inherited blood disorders that can pass from mother to baby. All pregnant women in England get a test for these conditions. 
The test is best done before you are ten weeks pregnant. 4. Physical Abnormalities Screening This screening involves an ultrasound scan carried out between 18 and 21 weeks of pregnancy. Everyone can go for the scan, but it is not mandatory; you can decline it if you want. The scan checks for significant abnormalities in the baby. B. Screening Tests For New Babies New babies get screening for the following conditions. 1. Physical Examination All babies get a physical examination within 72 hours of birth. The screening aims to check whether the baby has problems with the heart, eyes, hips, or, in boys, the testicles. The examination is done before you go home. 2. Hearing Test This test checks whether the new baby has hearing problems. If a problem is found, the mother is advised to seek early help for the sake of the baby, as hearing loss can impact the baby’s development. 3. Blood Spot Screening Every baby in England gets blood spot screening at five days old. The test involves a blood sample used to test for any of several rare but severe conditions that affect a baby’s health. Early treatment helps to prevent serious disability in the child. C. Diabetic Eye Screening All people with diabetes aged twelve and over should get annual diabetic eye testing. The test checks for early signs of diabetic retinopathy. D. Cervical Screening All women between the ages of 26 and 64 get tests to check that the cells of the cervix are healthy. The test is not for cancer but checks for abnormal cervical cells in the women tested. Source: Cancer Research UK E. Bowel Cancer Screening Bowel cancer screening is for people over 60. It involves a kit used at home to check for small amounts of blood in the poo; however, it does not diagnose bowel cancer. Bowel scope screening, on the other hand, is for people aged 55 and is used to remove small growths called polyps. 
Source: Cancer Research UK F. Breast Screening Breast screening is for women between the ages of 50 and 70. When breast cancer is detected early, you can also get early treatment, which is very useful; you may not need chemotherapy or breast removal in the long run. G. Abdominal Aortic Aneurysm (AAA) Screening AAA is a condition that is most common among smokers, the elderly, and people with high blood pressure. With the disease, the major artery in the body widens as it passes through the abdomen, and as a result the walls of the artery weaken. Significance Of Screening Tests Many people believe that “prevention is better than cure”, and there are many benefits of screening tests. Screening helps healthcare professionals to pick up problems early, which helps to save lives as well as to prevent health problems. Certain health conditions are preventable, so testing helps doctors to identify these diseases. In the UK, for example, about 4,500 lives are saved annually by cervical screening. Cervical cancer is common among women, and its early detection can save lives through appropriate treatment. Successful treatment of different types of cancer is also more likely if they are detected while still in the early stages. When cancer is still in its early stages, the operations involved are less extensive than in the later stages. When cancer reaches advanced stages, treatment methods like chemotherapy are often used, but these are complicated; it is easier to treat certain diseases before they develop to complicated stages. Screening plays a pivotal role in the detection of various cancers, which also helps in managing them. There are also different types of screening programmes for newborn babies. These screening tests help to pick up specific abnormalities among infants, and early detection can lead to correction of the deformities. The screening tests are conducted when the child is born and are repeated at six weeks. 
Screening examination of newborn babies looks for problems related to the hips, testicles, eyes, and heart. It helps to prevent specific health issues that may develop later in life as the child grows, including but not limited to infertility, loss of vision, and hip arthritis. Screening also involves pregnant women, to check for any abnormalities; if severe complications are detected, the woman can choose to discontinue the pregnancy. Downside Of Screening While screening has many benefits, it also has limitations that you should know about so that you can make an informed decision. One thing you should always remember is that screening is not 100% accurate. In some instances, health professionals can tell you that you have a problem when you do not. This scenario is called a “false positive”, and it can compel you to seek further tests or treatment for a non-existent disease, wasting your money over nothing. The opposite scenario is called a “false negative”, where the screening test misses the problem. People can then end up ignoring symptoms in the future, which can pose a threat to their health: you get a false comfort that you are safe when it might not be true. In some instances, knowing that you have signs that can lead to a chronic health condition can cause anxiety, and you may not be able to live peacefully after the screening. Some people are better off not knowing that they have a specific condition that can pose a threat to their lives. Screening tests can in some situations lead to tough decisions. For example, a pregnancy test can tell you that the baby is at higher risk of a particular condition. The result can compel you to seek further tests, and if that result is also positive, you are in a fix: you are forced to make a difficult decision about whether to terminate the pregnancy or continue with it. 
The other issue is that a negative result from a screening test does not guarantee that you are forever safe from the condition. If a cancer screening test shows that you are not at risk, it does not mean that you will never develop the disease.
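The “false positive” problem discussed above has a simple quantitative side: even a fairly accurate test produces mostly false positives when the condition is rare in the screened population. The sketch below applies Bayes’ theorem with assumed example figures (90% sensitivity, 95% specificity, 1% prevalence); these numbers are purely illustrative and are not the characteristics of any real NHS screening test.

```python
def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Probability that a person who screens positive actually has the condition."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Assumed example: 90% sensitivity, 95% specificity, condition affects 1% of people
ppv = positive_predictive_value(0.90, 0.95, 0.01)
print(f"{ppv:.0%} of positive screening results are true positives")  # roughly 15%
```

This is why a positive screening result is followed by diagnostic tests rather than immediate treatment: with a rare condition, most people who screen positive turn out not to have it.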
The Scientific Revolution resulted from a monumental series of discoveries, especially those in astronomy and related fields, in the 16th and 17th centuries. The impact of these discoveries went far beyond the walls of the laboratory—it created a genuine revolution in the way Western people thought about the world. Participants in this institute will study how the revolution in science and technology was directly linked to revolutions in religion, politics, and society. They will read selections from Kepler, Galileo, and Newton, and see examples of the books they published to spread their ideas. Here are some resources to help your students to analyze primary sources: The Spiral Questions format Here is a Document Analysis Worksheet (and here as a Word document: document_analysis_worksheet.doc) Here is a Image Analysis Worksheet (and again as a Word Document you can edit). Here is an Artifact Analysis Worksheet Click on the link to download a PowerPoint Overview of the Scientific Revolution. Grade 5 Lesson Plans "Standing on the Shoulders of Giants": Major Figures of the Scientific Revolution Grade 6, 7, 8 Lesson Plans Go straight to the Source: Newton and Wilkins High School (9-12) Lesson Plans The Scientific Revolution: An Overview The Scientific Revolution: Picturing a Worldview The Scientific Revolution: Another Overview Lesson Where in the Universe is the Earth? 
Walking the Historical Path: Chemistry's Journey from Ancients to Alchemy to Modern Science The Development of Atomic Theory Galileo and the Scientific Method “From White Light to Rainbow Brite”: Sir Isaac Newton and Optics Emblematic Images in the Scientific Revolution Witchcraft in Salem Religion and the Scientific Revolution: Copernicus, Kepler, Galileo, and Bacon The Trial of Galileo Revolutionary Thinkers from the Scientific Revolution to the Enlightenment From Scientific Revolution to Enlightenment The Scientific Revolution to the Enlightenment: A Baseball Card Project This project was made possible by a generous grant from the Ohio Humanities Council.
The Religious Society of Friends, a more formal name for Quakers, arose in the middle of the Seventeenth Century during the turbulent events of the English Revolution, in which Parliament ruled England under the Protectorate of Oliver Cromwell following the execution of King Charles I in 1649. This was a time of dramatic innovation in society, in government and in religion. Quakers originated in the rural outskirts of northern England among people who were seeking a more immediate religious experience than was offered by the official Church of England. Their religious search advanced quickly when an itinerant lay minister named George Fox began a public ministry in 1647. By 1653, Quaker lay ministers were crisscrossing England and bringing the Quaker message to London and to the British colonies in America. By 1660, there were an estimated 60,000 Quakers in England, and this rapid growth was one factor that led the British ruling class to restore the monarchy under King Charles II in 1660 and to enact harsh laws to suppress the Quaker movement. These laws led to the imprisonment of thousands of Quakers and to the death of over 500 under deplorable conditions in British prisons. Four Quakers were also executed in Boston, Massachusetts. Quakers were arrested for a variety of offenses: for offering public prayer, for refusing to swear judicial oaths, and for refusing to pay tithes to the Church of England. What was unusual about Quakers that led to their persecution in England? They insisted that salvation is a personal matter directed by Christ Jesus, and does not depend upon rituals performed by the Church or by religious leaders. Consequently, Quakers refused to pay tithes to the Church of England. They also refused to swear judicial oaths, based on Matthew 5:34 (“I say unto you, swear not at all”). From the beginning, Quakers emphasized the equality of men and women in church affairs and the right of women to preach. 
This led to significant discord with other Christian fellowships in which only men were allowed to speak. And perhaps most ominously for their relation with the British government, Quakers emphasized that Jesus had taught peace and repudiated warfare, whereas most governments are deeply invested in waging war against other nations. The intense persecution of Friends in England lasted 29 years, a full generation, ending in 1689 with the Act of Toleration. By this time, the Society of Friends behaved with a great deal of moderation, which in many ways characterizes Quakers to this day. Despite the prolonged persecution, King Charles II made a personal gift to Quakers by granting the future state of Pennsylvania to a Quaker leader, William Penn, in 1680. Under the leadership of William Penn, Quakers exercised an enormous influence in the American colonies and did much to shape the character of American society into the Twenty-First Century. One example of this influence is the complete freedom of religion initiated in 1701 by William Penn in his colony of Pennsylvania and later enshrined in the American Constitution in the Bill of Rights, ratified in 1791. Penn’s innovations were honored by the Liberty Bell, which was commissioned by Quakers in the Pennsylvania Assembly in 1751. Even by 1688, a group of Quakers and Mennonites in Philadelphia were calling into question the institution of slavery. This opposition to slavery spread among Quakers and by 1750, Quakers in Pennsylvania, New York, Delaware and New Jersey had renounced slavery (although some Quaker slave owners simply renounced Quakerism and became Episcopalians). Individual Quakers played key roles in the spread of abolitionism prior to the Civil War and also provided education to freed slaves following the Civil War. Similarly, Quakers played a role in decreasing tensions with American Indian tribes and providing education on Indian reservations. 
During the late Nineteenth Century, various Quakers played key roles in winning for women the right to vote, a goal that was not achieved nationally until 1920. In the aftermath of WWI, Quakers operated relief agencies to feed the desperate and hungry people of Germany. For this the American Friends Service Committee received the Nobel Peace Prize. During the balance of the 20th Century, Quakers have supported movements to promote civil rights and to curtail the growth of militarism. Now in the 21st Century, Quakers are involved in resisting the growth of prisons and achieving more humane treatment of prisoners, among other worthwhile endeavors. Read more: the early history of our meeting
|Student Learning Outcomes -| - Students will model continuous processes using differential equations and use the model to answer related questions. - Students will develop conceptual understanding of mathematical modeling of continuous processes and their rates of change. They will learn to demonstrate and communicate this understanding in a variety of ways, such as: reasoning with definitions and theorems, connecting concepts, and connecting multiple representations, as appropriate. - Students will demonstrate the ability to solve differential equations and verify their solutions analytically, numerically, graphically, and qualitatively. |Description - | |Differential equations and selected topics of mathematical analysis.| |Course Objectives - | |The student will be able to: | - Classify differential equations by order, linearity, separability, exactness, coefficient functions, homogeneity, type of any nonhomogeneities, and other qualities. - Identify appropriate analytic, numerical, and graphical techniques for solving or approximating solutions to differential equations of the particular classes specified in the expanded description of course content. - Solve differential equations with appropriate analytic techniques. - Approximate solutions to differential equations with appropriate numeric techniques. - Investigate solutions to differential equations with appropriate graphical techniques. - Verify solutions to differential equations analytically, numerically, graphically, and qualitatively. - Write differential equations and initial value problems to model phenomena in the physical, life, and social sciences. - Interpret solutions to differential equations and initial value problems in context. - Discuss differential equations and their solutions in accurate mathematical language and notation. - Investigate solutions to differential equations using at least one numerical or graphing utility. 
|Special Facilities and/or Equipment - | - Graphing calculator - When taught hybrid: Four lecture hours per week in face-to-face contact and one hour per week using CCC Confer. Students need internet access. |Course Content (Body of knowledge) - | - Classes of Differential Equations - First Order - Second Order - Constant Coefficient - Polynomial Coefficient - Higher-Order Linear - Other continuous functions - Discontinuous functions - Initial Value Problems - Existence and Uniqueness Theorem - Systems of Linear Differential Equations - Techniques for Solving Differential Equations - Separation of variables - Integrating factors - Characteristic Equations - Distinct real roots - Repeated real roots - Complex roots - Fundamental solutions - Superposition principle - Undetermined coefficients - Variation of parameters - Annihilator method - Reduction of order - Laplace transforms - Power series - Method of Frobenius - Matrix methods - Euler's method - Improved Euler's method (predictor-corrector) - Graphical analysis - Applications selected from the following topics - Population models - Predator-prey models - Thresholds and carrying capacities - Growth and decay - Mixing problems - Spring-mass systems - Electrical circuits - Newton's Laws - Falling bodies - Torricelli's Law - Financial applications - Compound interest - Time value of money - Communication models - Spread of a rumor - Mass marketing - Public health models - Health care utilization |Methods of Evaluation - | - Class Participation - Term Paper(s) - Computer Lab Assignment(s) - Term Project - Unit Exam(s) - Proctored Comprehensive Final Examination |Representative Text(s) - | |Nagle, R., Saff, E., and Snyder, D. Fundamentals of Differential Equations. 8th ed. Pearson, 2011. | |Disciplines - | |Method of Instruction - | - Cooperative learning exercises |Lab Content - | |Not applicable. 
| |Types and/or Examples of Required Reading, Writing and Outside of Class Assignments - | - Homework Problems: Homework problems covering subject matter from text and related material ranging from 15 - 30 problems per week - Students will need to employ critical thinking in order to complete assignments - Lecture: Five hours per week of lecture covering subject matter from text and related material. Reading and study of the textbook, related materials and notes - Projects: Student projects covering subject matter from textbook and related materials. Projects will require students to discuss mathematical problems, write solutions in accurate mathematical language and notation, and interpret mathematical solutions. Projects may require the use of a computer algebra system such as Mathematica or MATLAB - Worksheets: Problems and activities covering the subject matter. Such problems and activities will require students to think critically. Such worksheets may be completed both inside and/or outside of class
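Among the numerical techniques listed in the course content, Euler's method is the simplest to illustrate. The sketch below is a minimal implementation applied to the test equation y' = -2y, y(0) = 1, whose exact solution is y(t) = e^(-2t); the step count and interval are arbitrary example choices, and a course project would typically use a computer algebra system instead.

```python
import math

def euler(f, y0, t0, t1, steps):
    """Approximate y(t1) for the IVP y' = f(t, y), y(t0) = y0, via Euler's method."""
    h = (t1 - t0) / steps  # uniform step size
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)   # advance along the tangent line
        t += h
    return y

# Test equation y' = -2y with exact solution y(t) = e^(-2t)
approx = euler(lambda t, y: -2 * y, 1.0, 0.0, 1.0, 100)
exact = math.exp(-2)
print(approx, exact)  # the approximation converges to the exact value as steps grow
```

Comparing the approximation against the known solution is exactly the kind of analytic/numerical verification the course objectives call for; halving the step size roughly halves Euler's error, which motivates the improved (predictor-corrector) variant also listed above.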
If you’re a TEFL teacher then chances are you have had to deal with fossilized errors in your classroom, especially if you teach older students or those past the beginner stage. Basically, a fossilized error is a mistake a student has made so many times that it has become part of their natural speech. This article, then, is all about how to deal with these kinds of errors. What Are Fossilized Errors? At its simplest, a fossilized error is a mistake which a student makes again and again and appears unable to correct, no matter what they are told and how you try to help them. In many cases the student may well know they are making an error, but despite their best efforts they can’t help themselves and continue to make it. It is an error so deeply ingrained in the student that when they use correct English in its place, it sounds wrong to them. For example, I know of an Italian MT student whose English is almost perfect; she is able to use colloquial language and has complete mastery of all the verb tenses and verb forms, and yet will still say things like: * The spaghetti are ready. * Are the money on the table? (An asterisk at the beginning of a sentence denotes an ungrammatical sentence.) The student knows that these are wrong and can easily produce the corrected versions, but the errors are so deeply ingrained in her that they come out regardless. Where Do Fossilized Errors Come From? You will often notice that beginners don’t have fossilized errors. Of course they make mistakes, but the errors they produce can almost always be corrected and set straight. However, as a student learns more and attends more English classes, certain errors keep occurring and never seem to go away. Often these are errors due to mother-tongue influence, as in the example above, or to false friends. Other times they have been picked up from other students, from television, or even from the teacher. 
Another problem is that the teacher may not catch the error (which can often happen if the teacher is not an English MT speaker), so the student does not even realize they are making an error until it is too late. Correcting Fossilized Errors It’s not always easy to correct fossilized errors. They are often so ingrained in the student that no matter what you say or how you explain it, the student will still continue to repeat the error, even when they’ve understood and realize that they’re making it. So, don’t be surprised if a few of these ideas don’t work; it takes time and patience to get students over their fossilized errors. - Explain explicitly what the error is and how it can be corrected. The student needs 100% understanding here of what they are doing wrong. - Deal with one error at a time; don’t overload the student(s). Once the first, biggest error is completely eradicated, then you can move on, but not before. - If a student makes the error whilst speaking, ask them to write down what they’ve just said. Writing focuses the mind far more than speaking and slows the production process, giving students time to think about what they are producing. - Stop the student on that error alone. It isn’t always a good idea to interrupt a student when they make a mistake (see the article Accuracy vs Fluency), but for single and specific fossilized errors it helps focus the student, who must then repeat the sentence without the error. - Get students to record themselves speaking and then have them check what they said and report back; they’ll often spot their own errors this way. The idea behind these techniques is to focus the student’s mind on the error. They become very conscious that in certain situations an error might well occur, so they need to think carefully. But, importantly, they must have no doubt in their mind about what the error is and how to correct it.
[Illustration: JOHN CALHOUN.] But to accomplish such marvels, they must not sit with folded hands. The price of slavery was fearless aggression. They must build on a deeper foundation than Presidential elections, party majorities, or even than votes in the Senate. The theory of the government must be reversed, the philosophy of the republic interpreted anew. In this subtler effort they had made notable progress. By the Kansas-Nebraska act they had paralyzed the legislation of half a century. By the Dred Scott decision they had changed the Constitution and blighted the Declaration of Independence. By the Lecompton trick they would show that in conflict with their dogmas the public will was vicious, and in conflict with their intrigues the majority powerless. They had the President, the Cabinet, the Senate, the House, the Supreme Court, and, by no means least in the immediate problem, John Calhoun with his technical investiture of far-reaching authority. The country had recovered from the shock of the repeal of the Missouri Compromise, and rewarded them with Buchanan. Would it not equally recover from the shock of the Lecompton Constitution? It was precisely at this point that the bent bow broke. The great bulk of the Democratic party followed the President and his Southern advisers, even in this extreme step; but to a minority sufficient to turn the scale the Lecompton scandal had become too offensive for further tolerance. In the Senate, with its heavy Democratic majority, the Administration easily secured the passage of a bill to admit Kansas with the Lecompton Constitution. Out of eleven Democratic Senators from free States, only three—Douglas of Illinois, Broderick of California, and Stuart of Michigan—took courage to speak and vote against the measure. In the House of Representatives, however, with a narrower margin of political power, the scheme, after an exciting discussion running through about two months, met a decisive defeat. 
A formidable popular opposition to it had developed itself in the North, in which speeches and letters from Governor Walker and Secretary
The Ultimate Guide to Learning Writing: Styles Writing Styles: How to Write It All Political speeches, travel guides, recipes, fantasy novels - all are written works created with specific, and yes, widely varied purposes. And despite the existence of an almost intimidating range of writing, we can actually classify written works in four main writing styles: expository, descriptive, persuasive, and narrative. Think of these styles as four general purposes that lead someone to write a piece - and because different pieces have different purposes, each style has its own distinct characteristics. If you’re going to an evening gala, you will definitely not be wearing the sweats you just watched Netflix in (although you may wish you could). You make different choices based on the goal and impression you want to make - and the same goes for writing styles. These choices, however, are not mutually exclusive. As you may want to wear shoes that are both fashionable and comfortable, you can also write a piece that is both descriptive and narrative. In this guide, you will learn: - The characteristics of the four main writing styles: expository, descriptive, persuasive, narrative - When to use each writing style - How to write in each writing style Types of Writing: Purpose Equals Connection How can learning how to apply different writing styles help you as a writer? Everyone has preferred ways of writing, and some writing styles may come more naturally than others. Perhaps you only write narrative short stories and poetry. Or you may write grant proposals and the occasional op-ed. However, many written pieces have various purposes, and can therefore be enriched by blending or moving in and out of writing styles. Think of this like the art of creating fusion cuisines - you can blend flavors to appeal to your readers’ diverse palates. 
Learning how to weave different styles into your writing will not only improve and stretch your skills as a writer, but will also allow you to make a stronger connection with your audience. 1) Expository Writing Expository writing is ubiquitous - its goal is to inform readers by explaining or describing. It will often provide insight or instruction with regard to a particular topic, answering questions such as “Why?”, “How?”, and “What?” Common types of expository writing include news stories and magazine articles (excluding editorials), nonfiction books, guides and how-to articles, self-help writing, recipes and cookbooks, textbooks and educational resources, and business, technical, and scientific writing. One key thing to note is that expository writing can often be confused with persuasive writing. While some texts can include multiple writing styles, an expository piece cannot be persuasive, and vice versa. You should write in this style if your main goal is solely to inform your reader about a specific topic without voicing opinion. Connotations of language are crucial here - when writing in an expository style, take care to use language that carries a neutral connotation. A How-To: Key Characteristics of Expository Writing - Be concise and clear (especially if giving directions) - Organize your information in a logical order or sequence - start with an outline if helpful - Use transitions - Highlight information with quotes, illustrations, informative graphics - Incorporate supporting material and evidence - Use research and cite sources, link to additional resources and websites if writing online - Avoid using language that has a positive or negative connotation - don’t insert your opinion or attempt to persuade your audience to think, feel, or do something based on your beliefs What does expository writing look like? See articles marked “News Analysis” in The New York Times as prime examples of expository writing.
These pieces examine important and often controversial news events, and also help the reader understand possible causes and consequences of situations without reflecting the author’s opinion. 2) Descriptive Writing “Paint a picture with your words.” This is the classic metaphor associated with descriptive writing, especially in fiction novels, yet this style is used in many other types of written works as well. You should write in this style if your goal is to bring your reader into the written work as if they were experiencing it first hand. It is pulling your audience in, providing details about a character, the setting, or situation in a manner that helps readers imagine and understand the piece. You are essentially transporting the reader to the world of your work through description. Descriptive writing can often seem poetic in nature, depending on the language used. Most fictional pieces fall under this writing style, yet we can also find this style in some nonfiction pieces, such as memoirs and creative nonfiction, like first-hand accounts of events and travel guides. Poetry and prose, travel diaries, writing about nature, personal journals, musical lyrics, and fictional novels and plays are all common types of descriptive writing. If writing in different styles is culinary fusion, descriptive writing is the salt - the most flexible seasoning that can be applied to almost any written piece. While cookbooks are expository texts, we often find descriptive writing in the paragraphs describing the dish at the start of a recipe. Likewise, a persuasive text may employ descriptive writing in select parts in order to draw the reader in - an immersed reader is more likely to be convinced of the author’s opinion. Descriptive writing pairs especially well with narrative writing, as communicating a story is most effective with language that places the reader right there. 
A How-To: Key Characteristics of Descriptive Writing - Have a reason for the description before you start. Bring attention to select details and only highlight those that aid in telling the story - Use the six senses: sight, touch, taste, smell, sound, and feeling. Try writing about the same character or situation while highlighting different senses. Play around to see which descriptions give the reader the impression or feeling you want to impart - Use literary devices like metaphors, similes, imagery, and personification - Show, don’t tell: rather than telling your reader about something in passive language, activate your writing with adjectives, adverbs, and verbs that show what you want to say. Rather than describing your character as exhausted, describe their eyes, their breath, their voice, their posture, their movements - what about them shows they are exhausted? What does descriptive writing look like? In Hard Times, Charles Dickens describes the self-centered Mr. Bounderby. Notice the details Dickens opts to highlight to create the character’s impression and the senses he activates: ‘He was a rich man: banker, merchant, manufacturer, and what not. A big, loud man, with a stare, and a metallic laugh. A man made out of coarse material, which seemed to have been stretched to make so much of him… A man who was always proclaiming, through that brassy speaking-trumpet of a voice of his, his old ignorance and his old poverty. A man who was the Bully of humility.’ 3) Persuasive Writing As writers, we often first encounter persuasive writing in the form of a five paragraph argumentative essay in grade school. This writing style is far more nuanced, however, though the underlying goal is the same. Put simply, the goal of persuasive writing is exactly as it sounds - to persuade, to influence the reader into believing or doing something. This style is appropriate if you are taking a stand on a position or belief and your goal is to convince others to agree with you. 
In contrast to expository writing, your opinions and bias as an author are acceptable. Sometimes your intent may even be a call to action. Persuasive writing can be found in written pieces including editorial or opinion pieces in newspapers and magazines, letters written to request an action or file a complaint, advertisements and propaganda, business proposals, political speeches, marketing pitches, cover letters, letters of recommendation, academic essays, and reviews of books, music, films, and restaurants. What makes persuasive writing unique is its intersection with psychology - as its goal is to trigger a desired response, you, as the author, must know your audience. A How-To: Key Characteristics of Persuasive Writing - Have a clear purpose Keep in mind the action you want the reader to take. Sometimes that action is tangible, and other times it is simply forming an opinion or changing one’s mind. - Build a case Present the current situation and facts and articulate the need for change - what are the consequences if the situation continues unchecked? Outline a plan for change (or options if they exist) and call the reader to action if appropriate. - Appeal to emotion Showing empathy with your readers begins to establish trust and relatability - this connection will make your readers more inclined to listen to you. Know your audience and what matters to them. - Appeal to reason Present your argument with facts, data, and other analytical information in a logical manner that makes it irrefutable and reasonable. - Capitalize on social proof This is the psychological phenomenon in which people assume the actions of others to reflect “correct” behavior. In persuasive writing, this may emerge in the form of testimonials from strangers or people with authority, influencer recommendations, and polls - all of which lend credibility to your argument. - Make comparisons Relate your scenario or situation to something your reader already knows and accepts as true.
Use metaphors, similes, and analogies. - Anticipate and respond to objections/counter-arguments If you leave holes, your audience will fill them with doubts. Anticipate counter-arguments and address them immediately so you won’t appear on the defensive. - Ask rhetorical questions These aren’t meant to be answered; however, they draw attention and invite your reader to continue reading. - Use repetition Make your point in several different ways. By presenting information in repeating (not mundane) patterns, your audience is more likely to remember your message. - Tell stories Stories help you to build and strengthen an emotional connection with your reader. They also generate interest and are most effective when your reader may not know much about the topic at hand. Here we can find an intersection with descriptive and narrative writing. What does persuasive writing look like? Ralph Waldo Emerson’s seminal essay Self-Reliance and Paul Graham’s How To Do What You Love were written more than a century apart, and demonstrate how texts can vary stylistically yet focus on the same goal: to persuade. Both authors pepper their writing with rhetorical questions that push the reader to challenge basic assumptions. In Self-Reliance, Emerson outlines what it means to be self-made and promotes self-reliance as an ideal. Graham, in more colloquial language, challenges readers to redefine their understanding of what “work” should be. 4) Narrative Writing Are you telling a story? Specifically, does your story include a plot, setting, characters, conflict, and a resolution? If so, you are likely writing in the narrative style. Most fiction novels are written in this style and also employ descriptive writing. The biggest difference between purely descriptive and narrative writing is that the former simply describes, rather than narrating a sequence of events.
Aside from fiction novels, memoirs and biographies, screenplays, epic poems, sagas, myths, legends, fables, historical accounts, personal essays recounting experiences, short stories, novellas, anecdotes and oral histories are all examples of narrative writing. A How-To: Key Characteristics of Narrative Writing - Outline the plot of your story. What is the resolution? - Include detailed descriptions of your characters and scenes - use concrete and descriptive language that gives readers a specific image to visualize and relate to - Give your audience insight into characters’ inner thoughts and behind-the-scenes information - Answer the five Ws and one H - who, what, when, where, why, and how - in your piece - Consider point of view: your story will change depending on the point of view you choose to tell it from. Whose point of view is the most interesting? Help your reader situate themselves in your story by telling it from a defined point of view. - Use dynamic dialogue. Keep it short and believable, rather than having characters explain a situation. Use dialogue to show, rather than tell. - Know what to tell and what to omit. Leave some elements of the story to your reader’s imagination - this is what keeps them wanting more. What does narrative writing look like? See David Foster Wallace’s classic narrative essay Ticket to the Fair, a formidable example of storytelling woven with ample reflection on the Midwest experience and his own identity. Writing Styles: What are the next steps? Digging deeper into writing styles - be it your preferred style, the one you work in, or one you rarely write in - can lead to creative surprises and produce more complex pieces that speak to your reader in nuanced ways. As much as it can be a pursuit of passion, writing is also a practice, and writing in different styles can allow you to flex your full range of mental muscle. For example, you may try writing a persuasive essay and a descriptive essay on the same topic.
Or a poem may become a journal entry or short story. If you’re looking for inspiration, the Writing Prompts guide is an apt starting point. Try on different styles outside of your comfort zone - experimentation can yield your best work.
Surprise your kids with these fun fairy tale worksheets. Some of the worksheets displayed are: The Tell-Tale Heart; Putting It All Together: Tone Analysis of The Tell-Tale Heart; Tell a Tale: Point of View, Worksheet 2; Unit 7: Tall Tales; Jake's Tale; Funny Fairy Tale Math; and Handouts for The Tell-Tale Heart, Handout 1.

This is a complete scheme of work based around fairy tales. The scheme includes an end-of-unit assessment as well as a listening assessment, a speaking assessment, and a number of self-, peer-, and teacher-assessed pieces throughout.

Widely considered to be the worst live-action fairy tale adaptation ever made, this poorly conceived film was the brainchild of Roberto Benigni, who directed, co-wrote, and starred in the movie.

Fairy Tale Printable Pack for reading, writing, and storytelling. Included in this download are:
* Fairy Tale Features Organizer - displays the qualities of a fairy tale, organized by story elements (pg. 2)
* Fairy Tale Features Recording Sheet - students can jot down the features of a fairy tale as different ones are read to them (pg. 2)

Pick a couple of well-known fairy tales and ask students to brainstorm possible news stories that could be written about those stories. Practice: students should use the remainder of class to work on their stories, using the drafting sheet for guidance. In other words, this worksheet is the planning sheet for an analysis of the fairy tale you read. Write a short summary of the story you read. Define stereotype in your own words.

Create Fairy Tale Dice and inspire your kindergartener to weave fantastic fairy tales, improving his reading and writing skills while he plays. Although the fairy tale is a distinct genre within the larger category of folktale, the definition that marks a work as a fairy tale is a source of considerable dispute.

Fractured Fairy Tales. Review information about fractured fairy tales. Showing top 8 worksheets in the category - Fractured Fairy Tales. Some of the worksheets displayed are: Work 2: Fractured Fairy Tales; Fractured Fairy Tale Major Assignment 30; Make Your Own Fractured Fairy Tale of Hansel and Gretel; Fractured Fairy Tale Narratives; Fractured Fairytales Pack (PDF); and Fractured Fairy Tales: Composites to the Rescue (final). Students should finish their stories.

Free Fairy Tale Worksheets for Kids. Fairy tales are entertaining for kids and a great way to share reading time together. Parents can print these fairy tale coloring printables at home for their kids to enjoy. Preschool and primary teachers can use the fairy tale sheets as worksheets for a lesson plan at school. From teaching them about plotting, creative writing, and character portraiture to giving them the chance to immerse themselves in fun coloring activities and storytelling, JumpStart's free fairy tale worksheets have it all. FREE famous fairy tale stories and a worksheet to practice putting the story events in order (15 pages).

Fairy Tale Story Cards. A fairy tale about a boy, Jack, known as the Giant Killer, and his adventures of escape from giants, magicians, and other horrendous monsters.
The Vietnam War was fought because South Vietnam intended to reclaim North Vietnam. At the end of World War II, originally French-colonized Vietnam was split into two, and the southern part was given back to France. The Vietnam War was also seen as an outlet for Cold War efforts: North Vietnam was backed by Russia and China, who supplied weapons, while South Vietnam was backed by France and the United States. The war is now known for the heavy civilian and military losses that occurred and for the North's success in defending against the South. The South had the larger force, aided by the Americans, but the Northern guerrilla tactics proved effective.
Executable File Formats

The product of compiling a C program is some machine language. But raw machine language isn't enough to allow the OS to run your code. The OS will want to know several pieces of meta-information about your program before loading and running it, such as:

One way to solve this problem is to use a file format that contains not only the raw machine language code, but all the required additional information. There have been many such file formats devised over the years. Ones that I am familiar with include:

A "file format" that contains no meta-information whatsoever. This "format" is literally a raw dump of the machine language. All such meta-information is held in assumptions: the OS makes assumptions for any information it needs, and the programmer must follow these assumptions. Therefore, in a sense, it does "contain" meta-information; it's just that this information isn't stored anywhere in the file itself.

The linker (in our case, GNU ld) is responsible for taking the raw machine code from the compiler/assembler and creating a valid ELF file. ELF files consist of several different sections and have many different abilities. In other words, laying out an ELF file can be quite involved, considering all the options that are available and all the information that needs to be stored. The linker, therefore, uses a script which helps guide how the output file is laid out. If you don't supply a script, there is an implicit default one provided for you.

The different sections and their attributes are used by, for example, the operating system's loader (when you want to execute an application). Some sections contain executable code that needs to be loaded into memory; other sections don't contain any data at all, but rather instruct the loader to allocate some memory for the application to use (sometimes this memory even needs to be explicitly zeroed).
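To make the linker-script idea concrete, here is a minimal sketch of the kind of script GNU ld accepts. The entry symbol, load address, and layout are illustrative assumptions, not the contents of the real default script:

```
ENTRY(_start)                  /* entry point symbol (assumed) */
SECTIONS
{
    . = 0x100000;              /* load address (assumed, e.g. for a kernel) */
    .text : { *(.text*) }      /* executable code */
    .data : { *(.data*) }      /* initialized data */
    .bss  : { *(.bss*)  }      /* zero-initialized memory the loader allocates */
}
```

A script like this can be passed to the linker with `ld -T script.ld`; the actual default script, which is far more elaborate, can be inspected with `ld --verbose`.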
Some sections aren't used by the loader at all; for example, sections that contain debugging information are of no use to the loader but are used by a debugger instead. All compilers have their own little extensions built into them which either extend the language in some way or provide the programmer with #pragma-like control of the environment; GCC is no exception. Of particular interest is the ability GCC gives the developer to specify the name of the section into which to place some object. The name of the ELF section into which this object will be placed is only one of numerous such attributes that belong to this object.
New research in Nature Genetics identifies a novel genetic and molecular pathway in the esophagus that causes eosinophilic esophagitis (EoE), opening up potential new therapeutic strategies for an enigmatic and hard-to-treat food allergy. EoE is a chronic inflammatory disorder of the esophagus. The condition is triggered by allergic hypersensitivity to certain foods and an over-accumulation in the esophagus of white blood cells called eosinophils (part of the body's immune system). EoE can cause a variety of gastrointestinal complaints including reflux-like symptoms, vomiting, difficulty swallowing, tissue scarring, fibrosis, the formation of strictures and other medical complications. Reporting their results online, the multi-institutional team of researchers was led by scientists at Cincinnati Children's Hospital Medical Center. The authors identified a molecular pathway specific to epithelial tissue in the esophagus involving a gene called CAPN14, which they found becomes dramatically up-regulated in the disease process. Epithelial cells help form the membrane of the esophagus. The scientists report that when these cells were exposed to a well-known molecular activator of EoE - an immune hormone called Interleukin 13 (IL-13) - it caused dramatic up-regulation of CAPN14. The researchers said this happened in what they described as an epigenetic hotspot for EoE on the cells' chromosomes. CAPN14 encodes an enzyme in the esophagus, calpain14, that is part of the disease process, according to Marc E. Rothenberg, MD, senior investigator on the study and director of the Center for Eosinophilic Disorders at Cincinnati Children's. Because calpain14 can be targeted and inhibited by drugs, the study opens up new therapeutic strategies for researchers. "In a nutshell, we have used cutting edge genomic analysis of patient DNA as well as gene and protein analysis to explain why people develop EoE," Rothenberg explained.
"This is a major breakthrough for this condition and gives us a new way to develop therapeutic strategies by modifying the expression of calpain14 and its activity. Our results are immediately applicable to EoE and have broad implications for understanding eosinophilic disorders as well as allergies in general." The study follows years of research into EoE by Rothenberg's laboratory, including the development of novel modeling systems for the disease, and extensive multi-institutional collaboration through the National Institutes of Health's Consortium of Food Allergy Researchers. Other key collaborators on the current study include first author Leah Kottyan, PhD, a researcher at the Center for Autoimmune Genomic Etiology at Cincinnati Children's, and co-senior investigator John Harley, MD, PhD, director of the Center for Autoimmune Genomic Etiology. Rothenberg's lab years ago identified IL-13 as a key molecular contributor to the allergic reaction process in EoE. His team has since identified a number of related genes and molecular pathways linked to the disease, and they have tested drugs that inhibit IL-13 in an attempt to manage EoE severity. "The current study links allergic responses mediated through IL-13 with an esophageal specific pathway, and answers a long-standing question in the allergy field of why people develop tissue specific disease manifestations," Rothenberg explained. "We have uncovered that this can be explained by the interplay of genetic susceptibility elements in allergic sensitization pathways with the newly discovered esophageal specific pathway. Thus, two steps are necessary, one dictated by allergy and one dictated by calpain14 in the esophagus." The researchers used computer bioinformatics to conduct a genome-wide association study that analyzed 2.5 million genetic variants in thousands of individuals with and without EoE. This allowed the authors to identify the genetic susceptibility within the CAPN14 gene.
The investigators were surprised to learn that CAPN14 was specifically expressed in the esophagus, compared with 130 other tissues in the body they analyzed. Rothenberg said the findings open a new way to consider therapeutic options because calpain14 is an enzyme that can be inhibited by drugs, which means it may be possible to modify the expression and activity of calpain14. Some chemical compounds already exist that block the activity of calpains, although the researchers do not yet know the exact function of calpain14, as very little has been published about it.
A zero pair describes a pair of numbers whose sum equals zero. One number in the pair will always have a positive sign, while the other will always have a negative sign. A zero pair will always feature the positive and negative form of the same number. For instance, +3 and -3 would be considered a zero pair because the resultant sum when they are added together is zero (+3 + -3 = 0). Conversely, +3 and -2 would not be considered a zero pair because the resultant sum when they are added together would not be zero (+3 + -2 = +1). The main purpose of a zero pair is to simplify the process of addition and subtraction in complex mathematical equations featuring multiple numbers and variables. For example, in the problem 2+6-3-2, the positive 2 and the negative 2 cancel each other out because they are a zero pair, thus reducing the problem to 6-3.
PS 131 is offering a new computer science program that will unlock an entirely new future for students. They’ll be able to learn how technology works and begin to create programs, games or apps, not just consume them. Why is this important for even elementary school students? Watch this inspiring video. Code.org is a non-profit dedicated to giving every US K-12 student the opportunity to learn computer science. All participating students will be introduced to the foundations of computer science — like logic, problem-solving and creativity. Beginning in the fall of 2015, PS 131 – Abigail Adams will teach students across Kindergarten through Grade 5 lessons that blend self-guided online tutorials with “unplugged” class activities. You can find more information or try it out for yourself at studio.code.org! This program is aligned with the Mayor’s Computer Science for All reforms announced on September 16, 2015. In his speech, Mayor de Blasio stated that “every student will receive computer science education in elementary, middle, and high school within the next 10 years. Through this commitment, every student will learn the fundamentals of computer science, like coding, robotics, and web design.”
Poland Table of Contents Figure 5. The First Partition of Poland, 1772 Figure 6. The Second Partition of Poland, 1793 Figure 7. The Third Partition of Poland, 1795 Figure 8. Duchy of Warsaw, 1807-13, and Congress Poland, 1815 Although the majority of the szlachta was reconciled to the end of the commonwealth in 1795, the possibility of Polish independence was kept alive by events within and outside Poland throughout the nineteenth century. Poland's location in the very center of Europe became especially significant in a period when both Prussia/Germany and Russia were intensely involved in European rivalries and alliances and modern nation states took form over the entire continent. Data as of October 1992
This disease is also known as Guignardia leaf blotch because the causal pathogen is a fungus named Guignardia aesculi. We see it most commonly on horsechestnuts, but buckeye trees also host the disease. From a distance, infected trees appear to be severely scorched. On closer inspection, however, reddish brown leaf spots with bright yellow margins are apparent. The spots become large and cover most of the leaf surface. Leaves then become dry and brittle and drop early. You can distinguish this disease from environmental scorch (discussed in issue No. 5 of this newsletter) by the fruiting bodies formed by the fungus in the leaf lesions in moist weather. These structures are called pycnidia. They appear black and are about the size of a pinhead. All leaves are affected, unlike scorch, which affects newest leaves first on the side of the tree that is exposed to sun or wind. This disease may be serious and treatable with fungicides (starting at bud break) in nursery stock, but mature trees usually retain live buds and lose leaves late in the season, so they are not significantly harmed. Most of the season's growth has already occurred before infection. Removing fallen leaves may be helpful in reducing the amount of fungal inoculum living through the winter on these leaves. Also, try to prune surrounding vegetation to allow better air flow through the area for more rapid drying of foliage. This disease is one more example of why you should not plant trees too close together when they are young. Consider mature size and spread when you select planting sites.
Angular momentum works almost like linear momentum. If an object is spinning, it is said to have some amount of momentum that is based on its rotational inertia and its angular velocity (similar to mass and linear velocity for linear momentum). Angular momentum (L) is defined as:

L = Iw

As you can see, it is very similar to the linear momentum equation, p = mv. For a spinning point particle, the angular momentum is:

L = mvr

Just like linear momentum, angular momentum is conserved when there are no external torques acting on a system. This is why skaters spin faster when they bring their arms inward. Bringing their arms inward decreases their rotational inertia, and since angular momentum must be conserved, angular velocity increases. In another case, if we have two disks spinning at different rates that then join together to spin at the same speed, the angular momentum will still be constant. For example, disk A is spinning at 3 s⁻¹ and B is spinning at 5 s⁻¹. Disk A has a mass of 9 kg and a radius of 0.30 m. Disk B has a mass of 4 kg and a radius of 0.20 m. Let’s figure out what happens when they come together:

L_before = L_after
I_A·w_A + I_B·w_B = (I_A + I_B)·w_after

If you plug in all the values, you will eventually find that the two disks joined together spin with an angular velocity (w_after) of about 3.3 s⁻¹.
Earlier chapters in this book assumed the magnetic resonance scanner was functioning exactly as presented in the theory. This section describes what happens when the scanner does not behave as expected and an image artifact is created. An image artifact is any feature appearing in an image which is not present in the original imaged object. An image artifact is sometimes the result of improper operation of the imager, and other times a consequence of natural processes or properties of the human body. It is important to be familiar with the appearance of artifacts because artifacts can obscure, and be mistaken for, pathology. Therefore, image artifacts can result in false negatives and false positives. Artifacts are typically classified by their source, and there are dozens of image artifacts. The following table summarizes a few of these.

| Artifact | Cause |
| RF Offset and Quadrature Ghost | Failure of the RF detection circuitry |
| RF Noise | Failure of the RF shielding |
| Bo Inhomogeneity | Metal object distorting the Bo field |
| Gradient | Failure in a magnetic field gradient |
| Susceptibility | Objects in the FOV with a higher or lower magnetic susceptibility |
| RF Inhomogeneity | Failure or normal operation of the RF coil, or metal in the anatomy |
| Motion | Movement of the imaged object during the sequence |
| Flow | Movement of body fluids during the sequence |
| Chemical Shift | Large Bo and chemical shift difference between tissues |
| Partial Volume | Large voxel size |
| Wrap Around | Improperly chosen field of view |
| Gibbs Ringing | Small image matrix and sharp signal discontinuities in an image |
| Magic Angle | Angle between Bo and the dipole axis in solids |

The ability to identify the source of an artifact is related to your understanding of the previous material presented in this book. Spin physics, the imaging pulse sequence, Fourier transforms, Fourier pairs, hardware, and signal processing are particularly useful.
For example, knowledge of the spin echo pulse sequence, Fourier pairs, and signal processing will enable you to predict the effect of motion during a scan. An example of each of the artifacts is presented next. The reader is cautioned that some problems with the imager can manifest themselves in a number of ways; therefore, not all artifacts of a given type will appear the same.

A DC offset artifact is one of two possible artifacts associated with the radio frequency (RF) detector. The RF detector was referred to in the Hardware chapter as the quadrature detector. The DC offset artifact is caused by a DC offset voltage in one or both of the signal amplifiers in the detector. Recall from the Fourier Transform chapter that the Fourier transform of a time domain DC offset is a peak at zero frequency. The FT of a time domain signal with a DC offset is the FT of the signal plus the same zero frequency peak. K-space data with a DC offset gives the same zero frequency peak when Fourier transformed; therefore, there is a bright spot exactly in the center of the image.

The second type of artifact associated with the RF detector is the quadrature ghost artifact. This artifact is caused by a mismatch in the gain of the real and imaginary channels of the quadrature detector. For the Fourier transform to function properly, the two sets of doubly balanced mixers, filters, and amplifiers in the real and imaginary channels of the quadrature detector must have identical gains. When this is not the case, the Fourier transform will have a small component at the negative of any frequency present in the signal. This small negative frequency component causes a ghosting of objects diagonally in the image. Here is an example of this artifact when the gains of the two signals differ by 50%. Both the DC offset and quadrature ghost artifacts are the result of a hardware failure and must be addressed by a service representative.
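The zero-frequency spike produced by a DC offset is easy to reproduce with a discrete Fourier transform. The sketch below (using NumPy, with an arbitrary toy object and an arbitrary offset value) adds a constant to every simulated k-space sample and shows that the constant reconstructs as a single bright pixel at the zero-frequency point, which the usual display convention (fftshift) places at the image center:

```python
import numpy as np

# Toy object: a bright square on a dark background (arbitrary choice)
obj = np.zeros((64, 64))
obj[24:40, 24:40] = 1.0

# Simulated raw k-space data
kspace = np.fft.fft2(obj)

# Fault: the detector adds a constant DC offset to every sample
kspace_dc = kspace + 200.0

clean = np.abs(np.fft.ifft2(kspace))
faulty = np.abs(np.fft.ifft2(kspace_dc))

# fftshift puts zero frequency at the image center, where the bright
# dot artifact appears
artifact = np.fft.fftshift(faulty) - np.fft.fftshift(clean)
print(artifact[32, 32])   # ≈ 200: a bright spot exactly at the center
```

Only the single zero-frequency pixel changes; the rest of the image is untouched, matching the "bright spot exactly in the center" description.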
A failure of the RF shielding that prevents external noise from getting into the detector is the cause of an RF noise artifact. The form of the artifact in the image depends on the source of the noise and where it is introduced into the signal. Much can be learned about the source of RF noise by inverse Fourier transforming the image. For example, a bright spot somewhere in the image can be caused by a single frequency leaking into the signal. The animation window contains an image with two different RF noise artifacts, represented by the diagonal lines and the two horizontal lines marked with arrows. To possibly fix the problem before calling a service representative, check that the scan room door is closed and sealing properly.

All magnetic resonance imaging assumes a homogeneous Bo magnetic field. An inhomogeneous Bo magnetic field causes distorted images. The distortions can be spatial, intensity, or both. Intensity distortions result from the field homogeneity in a location being greater or less than that in the rest of the imaged object. The T2* in this region is different, and therefore the signal will tend to be different. For example, if the homogeneity is less, T2* will be smaller and the signal will be less. Spatial distortion results from long-range field gradients in Bo which are constant in time. They cause spins to resonate at Larmor frequencies other than those prescribed by the imaging sequence. For example, consider the diagram in the animation window representing a perfect (black) and a distorted (red) one-dimensional magnetic field gradient. Ideally, spins at a single x position should experience a single magnetic field and resonate at a single frequency. With a distorted gradient, there is no linear relationship between position x and frequency ν. Because linearity is assumed in the imaging process, the resultant image is distorted.
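The effect of a nonlinear gradient on spatial encoding can be sketched in a few lines. The gradient strength and quadratic distortion term below are arbitrary assumptions, not values from the text; the point is that reconstruction assumes ν = kx, so any extra term maps spins back to the wrong position:

```python
import numpy as np

k = 100.0                                      # assumed encoding, Hz per cm
x = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])    # true spin positions, cm

nu_ideal = k * x                               # perfect linear gradient
nu_distorted = k * x + 0.5 * x ** 2            # assumed quadratic distortion

# The imager assumes linearity and maps frequency back with x = nu / k
x_recon = nu_distorted / k
print(x_recon - x)   # position error grows away from the center
```

The error is zero at the center and grows quadratically toward the edges, which is why this kind of distortion bends straight objects near the periphery of the field of view.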
The animation window contains an image of four straight water-filled tubes positioned so as to form a square. The magnitude image shows a severe bending in one of the tubes due to a nonuniformity in the Bo magnetic field.

Artifacts arising from problems with the gradient system are sometimes very similar to those described as Bo inhomogeneities. A gradient which is not constant with respect to the gradient direction will distort an image. This is typically only possible if a gradient coil has been damaged. Other gradient-related artifacts are due to abnormal currents passing through the gradient coils. In this image the frequency encoding (left/right encoding) gradient is operating at half of its expected value.

A magnetic susceptibility artifact is caused by the presence of an object in the FOV with a higher or lower magnetic susceptibility. The magnetic susceptibility of a material is a measure of whether an applied magnetic field creates a larger or smaller field within the material. Diamagnetic materials have a slightly smaller internal field than a vacuum would, while paramagnetic materials have a slightly greater field. Ferromagnetic materials have a much higher field. The image in the animation window depicts a region with a homogeneous magnetic field into which an object with a higher magnetic susceptibility has been placed. As a result, the magnetic field lines bend into the object. Consequently, the field is stronger or weaker at various locations around the object. This distortion is seen in the applied static magnetic field Bo, the radio frequency magnetic field B1, and the gradients in the magnetic field. Often the susceptibility artifact is caused by metal, such as a titanium or stainless steel object inside the body. These objects cause additional artifacts, such as the RF inhomogeneity artifact described next, that make it difficult to present an example image.
An RF inhomogeneity artifact is an undesired variation in signal intensity across an image. The cause is either a nonuniform B1 field or a nonuniform sensitivity in a receive-only coil. Some RF coils, such as surface coils, naturally have variations in sensitivity and will always display this artifact. The animation window contains an image from a surface coil with its characteristic intensity fall-off with distance from the coil. The presence of this artifact in other coils represents the failure of an element in the RF coil or the presence of metal in the imaged object. For example, a metal object which prevents the RF field from passing into a tissue will cause a signal void in an image. The accompanying sagittal image of the head contains an RF inhomogeneity artifact in the region of the mouth. (See arrow.) The patient has a large amount of non-ferromagnetic metal dental work in the mouth. The metal shielded the regions near the mouth from the RF pulses, thus producing a signal void. The dental work does not significantly distort the static magnetic field Bo at greater distances; therefore, the image of the brain is not significantly distorted.

As the name implies, motion artifacts are caused by motion of the imaged object, or a part of it, during the imaging sequence. Motion of the entire object during the imaging sequence generally results in a blurring of the entire image with ghost images in the phase encoding direction. Movement of a small portion of the imaged object results in a blurring of that small portion of the object across the image. To understand this artifact, picture the following simple example. A single small spin-containing object is imaged. The central portion of the MX raw data will look something like this.
The frequency of the waves is related to the position in the frequency encoding direction, and the variation in phase of the waves is related to the position in the phase encoding direction. Fourier transforming first in the frequency encoding direction yields a single oscillating peak. Viewing the data as a function of phase shows this more clearly. Fourier transforming last in the phase encoding direction yields a single peak at the location of the original object.

Now picture the same example, except that midway through the acquisition of the phase encoding steps the object moves to a new location in the frequency encoding direction. The central part of the MX raw data looks like this. Fourier transforming first in the frequency direction gives two oscillating peaks which abruptly stop oscillating. Viewing the data as a function of phase shows this more clearly. Fourier transforming in the phase encoding direction gives several repeating peaks at the two frequencies. This is because the Fourier pair of an abruptly truncated sine wave is a sinc function. The magnitude representation of the data makes all the peaks positive. The animation window contains a magnetic resonance image of the head in which the head moved in the superior/inferior direction midway through the acquisition.

The solution to a motion artifact is to immobilize the patient or imaged object. Often the motion is caused by the heart beating or the patient breathing, neither of which can simply be eliminated. The solution in these cases is to gate the imaging sequence to the cardiac or respiratory cycle of the patient. For example, if the motion is caused by a pulsating artery, one could trigger the acquisition of phase encoding steps to occur at a fixed delay time after the R-wave in the cardiac cycle. By doing this, the artery is always in the same position. Similar gating could be done to the respiratory cycle.
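The ghosting mechanism can be simulated by assembling k-space line by line from an object that jumps position halfway through the phase encoding steps. The object shape, jump size, and matrix size below are arbitrary assumptions chosen for illustration:

```python
import numpy as np

N = 64
def square_at(col):
    obj = np.zeros((N, N))
    obj[28:36, col:col + 8] = 1.0
    return obj

# Each row of 2D k-space corresponds to one phase encoding step (axis 0)
k_first = np.fft.fft2(square_at(20))    # object position, first half of scan
k_second = np.fft.fft2(square_at(28))   # object moved 8 pixels sideways

k_mixed = np.vstack([k_first[:N // 2], k_second[N // 2:]])
recon = np.abs(np.fft.ifft2(k_mixed))

# Ghost intensity now appears in rows where both object positions were
# empty, i.e. smeared along the phase encoding direction
print(recon[:16, :].max() > 1e-3)   # True
```

The abrupt switch between the two positions acts like the truncated sine wave described above, so the difference between the two object positions is convolved with a sinc along the phase encoding axis, producing the repeating ghosts.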
A disadvantage of this technique is that the choice of TR is often determined by the heart rate or respiration rate. Imaging techniques designed to remove motion artifacts are given different names by the various manufacturers of magnetic resonance imagers. For example, a few names of sequences designed to remove respiratory motion artifacts are respiratory gating, respiratory compensation, and respiratory triggering. The accompanying axial image of the head shows a motion artifact. A blood vessel in the posterior side of the head moved in a pulsating motion during the acquisition. This motion caused a ghosting across the image.

Flow artifacts are caused by flowing blood or fluids in the body. A liquid flowing through a slice can experience an RF pulse and then flow out of the slice by the time the signal is recorded. Picture the following example. We are using a spin-echo sequence to image a slice. Here the timing diagram and a side view of the slice are shown. During the slice-selective 90° pulse, blood in the slice is rotated by 90°. Before the 180° pulse can be applied, the blood which experienced the 90° pulse has flowed out of the slice. The slice-selective 180° pulse rotates spins in the slice by 180°. However, the blood now in the slice has its magnetization along +Z before the pulse and along -Z after the pulse; it therefore yields no signal. By the time the echo is recorded, the slice contains only blood which has experienced neither the 90° nor the 180° pulse. The result is that the blood vessel, which we know to contain a high concentration of hydrogen nuclei, yields no signal. Here is an example from an axial slice through the legs. Notice that the blood vessels appear black even though they contain a large amount of water. In a multislice sequence, the slices could be positioned such that blood experiencing a 90° pulse in one slice flows into another slice to experience a 180° rotation, and into a third to contribute to the echo.
In this case the vessel will have a high signal intensity. The effect is usually that some slices show low signal intensity blood vessels and others show high signal blood vessels.

In an image, the chemical shift artifact is a misregistration between the relative positions of two tissues with different chemical shifts. Most common is the misregistration between fat and water. The chemical shift artifact is caused by the difference in chemical shift (Larmor frequency) of fat and water. Recall from the NMR Spectroscopy chapter that the definition of chemical shift, δ, is

δ = (ν - νREF) / νREF × 10^6

where ν is the resonance frequency of a nucleus and νREF the resonance frequency of a reference nucleus. The difference in chemical shift between two nuclei referred to as 1 and 2 is

Δδ = δ2 - δ1 = (ν2 - ν1) / νREF × 10^6

The difference in chemical shift of water and adipose (fat-like) hydrogens is approximately 3.5 ppm, which at 1.5 tesla corresponds to a frequency difference between fat and water of approximately 220 Hz.

During the slice selection process there is a slight offset between the locations of the fat and water spins which have been rotated by an RF pulse. This difference is exaggerated in this animation. During the phase encoding gradient the fat and water spins acquire phase at different rates. The effect is that fat and water spins in the same voxel are encoded as being located in different voxels. In this example all nine voxels have a red water vector. The center voxel has some fat magnetization in addition to the water. In a uniform magnetic field the vectors precess at their own Larmor frequencies. When a gradient in the magnetic field is applied, such as the phase encoding gradient, spins at different x positions precess at a frequency dependent on their Larmor frequency and the local field. In this example the fat vector has the same frequency as the water vector in the voxel to its right. When the phase encoding gradient is turned off, each vector has acquired a unique phase dependent on its x position.
During the frequency encoding gradient, fat and water spins located in the same voxel precess at rates differing by 3.5 ppm. The net effect is that the fat and water located in the same voxel are encoded as being located in different voxels. In this example the fat vector in the center voxel possesses a phase and precessional frequency as if it were located in the upper right voxel. The resultant image places the fat in the voxel to the top rather than in the center. Even though the phase is different, the fat is not encoded as being in a different voxel in the phase encoding direction. What matters in phase encoding is the difference in phase between the steps, and this is not changing. The chemical shift artifact, in the distance units of the FOV, is

shift = Δν × FOV / SW

where Δν is the fat-water frequency difference and SW is the spectral width (receiver bandwidth) across the FOV in the frequency encoding direction.

In general, the term partial-volume artifact describes any artifact that occurs when the size of the image voxel is larger than the size of the feature to be imaged. For example, if a small voxel contains only fat or water signal, while a larger voxel contains a combination of the two, the large voxel possesses a signal intensity equal to the weighted average of the quantities of water and fat present in the voxel. Another manifestation of this type of artifact is a loss of resolution caused by multiple features being present in the image voxel. For example, a small blood vessel passing diagonally through a slice may appear sharp in a 3 mm thick slice, but distorted and blurred in a 5 mm or 10 mm slice. Here is a comparison of two axial slices through the same location of the head, one taken with a 3 mm slice thickness and the other with a 10 mm thickness. Notice the loss of resolution in the 10 mm thick image. The solution to a partial volume artifact is a smaller voxel; however, this may result in a poorer signal-to-noise ratio in the image.

A wraparound artifact is the appearance of a part of the imaged anatomy, which is located outside of the field of view, inside of the field of view.
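The size of the chemical shift displacement can be estimated from the numbers above: the shift in image distance is the fat-water frequency difference times the FOV divided by the receiver bandwidth. The receiver bandwidth and field of view below are assumed example values, not values given in the text:

```python
gamma = 42.58e6          # proton gyromagnetic ratio / 2*pi, Hz per tesla
B0 = 1.5                 # tesla
delta_ppm = 3.5          # fat-water chemical shift difference, ppm

df = delta_ppm * 1e-6 * gamma * B0   # frequency difference, Hz (~224)
sw = 32000.0             # assumed receiver bandwidth across the FOV, Hz
fov_mm = 256.0           # assumed field of view, mm

shift_mm = df * fov_mm / sw          # fat-water displacement in the image
print(round(df), round(shift_mm, 2))
```

With these assumed parameters the misregistration is just under 2 mm, roughly one to two pixels for a typical matrix; narrowing the receiver bandwidth makes the shift larger.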
For example, an image of the human head may have a part of the nose outside the field of view. The nose nevertheless appears in the image, but at the back of the head. In this artifact, objects located outside the field of view appear at the opposite side of the image, as if one took the image and wrapped it around a cylinder. This artifact occurs when the selected field of view is smaller than the size of the imaged object or, more specifically, when the digitization rate is less than the range of frequencies in the FID or echo. The origin of this problem was first presented in the chapter on Fourier Transforms. The solution to a wraparound artifact is to choose a larger field of view, adjust the position of the image center, or select an imaging coil that does not excite or detect spins from tissues outside the desired field of view.

The accompanying sagittal images of the head and breast contain wraparound artifacts. In the image of the head, the nose extends beyond the field of view on the left, and its imaged position is wrapped around so that it appears on the right of the image. In terms of frequency and digitization rate, the nose is located at a position with a resonance frequency greater than the digitization rate can represent; consequently, it is wrapped around and appears at the right end of the image. In the sagittal breast image, the portion of the image below the arrow should appear at the top of the image. This portion was located at a position with a resonance frequency greater than the digitization rate can represent; as a consequence, it was wrapped around and appears at the bottom end of the image.

Many newer imagers employ a combination of oversampling, digital filtering, and decimation to eliminate the wraparound artifact in the frequency encoding direction. This point was discussed in the detector section of the Hardware chapter.
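Wraparound is ordinary Fourier aliasing and can be demonstrated with a one-dimensional sampled signal. The sampling rate and tone frequency below are arbitrary assumptions: a tone above the Nyquist limit reappears at a wrapped-around frequency, just as anatomy outside the FOV reappears on the opposite side of the image:

```python
import numpy as np

fs = 100.0                      # sampling (digitization) rate, Hz
t = np.arange(0, 1, 1 / fs)     # one second of samples
f_true = 60.0                   # tone above the 50 Hz Nyquist limit

signal = np.cos(2 * np.pi * f_true * t)
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# The 60 Hz tone wraps around and shows up at fs - 60 = 40 Hz
print(freqs[np.argmax(spectrum)])   # 40.0
```

Oversampling followed by digital filtering and decimation, as mentioned above, works precisely because it raises the effective Nyquist limit before the unwanted frequencies are discarded.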
Wraparound in the phase encoding direction can be minimized using a no-phase-wrap option, which applies a saturation pulse to spins outside of the field of view in the phase encoding direction. Hence, minimal signal is present in tissues that are wrapped around into the field of view in the phase encoding direction.

Gibbs ringing is a series of lines parallel to a sharp intensity edge in an image. The ringing is caused by incomplete digitization of the echo: the signal has not decayed to zero by the end of the acquisition window, so the echo is not fully digitized. (The reader is encouraged to prove this using the convolution theorem.) This artifact is seen in images when a small acquisition matrix is used; therefore, the artifact is more pronounced in the 128-point dimension of a 512x128 acquisition matrix. In the following example, a rectangular object with a spatially uniform signal is imaged, and an inadequate number of points is collected in the horizontal (x) direction. The resultant image displays a ringing in the intensity at the edge. The animation window displays the upper right-hand corner of this image and a plot of signal intensity. The solution to a Gibbs ringing artifact is to use a larger image matrix.

All magnetic resonance imaging requires that the spins be free to rotate and tumble in the tissue. In solids this does not happen; as a consequence, the chemical shift and the spin-spin coupling are dependent on the orientation of the molecule. The dipole interaction in such cases is zero when the angle θ between Bo and the dipole axis is 54.7°. Tissues such as tendon and cartilage, which are normally dark because of strong dipolar interactions, therefore show regions of altered signal where θ = 54.7°.

Copyright © 1996-2017 J.P. Hornak. All Rights Reserved.
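The Gibbs ringing described above can be reproduced in one dimension by truncating the Fourier data of a sharp-edged profile. The matrix sizes here are arbitrary assumptions chosen for illustration:

```python
import numpy as np

N = 256
profile = np.zeros(N)
profile[96:160] = 1.0                     # object with sharp edges

k = np.fft.fftshift(np.fft.fft(profile))  # full k-space, zero freq centered
mask = np.zeros(N)
mask[N // 2 - 32 : N // 2 + 32] = 1.0     # keep only 64 central points

recon = np.abs(np.fft.ifft(np.fft.ifftshift(k * mask)))

# Truncation convolves the profile with a sinc, producing overshoot
# ripples parallel to the edge; the true object never exceeds 1.0
print(recon.max() > 1.03)   # True
```

The overshoot near the edge is the familiar roughly 9% Gibbs overshoot; acquiring more k-space points narrows the ripples but only a larger matrix (or filtering) reduces their visibility, matching the stated solution.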
Which of the following statements about electrical charge is true?
A positive charge and a negative charge will attract each other.

Suppose you are listening to a radio station that broadcasts at a frequency of 97 MHz (megahertz). Which of the following statements is true?
The radio waves from the radio station are causing electrons in your radio's antenna to move up and down 97 million times each second.

Laboratory measurements show hydrogen produces a spectral line at a wavelength of 486.1 nanometers (nm). A particular star's spectrum shows the same hydrogen line at a wavelength of 486.0 nm. What can we conclude?
The star is moving toward us.

Betelgeuse is the bright red star representing the left shoulder of the constellation Orion. All of the following statements about Betelgeuse are true. Which one can you infer from its red color?
Its surface is cooler than the surface of the Sun.

Suppose you look at a spectrum of visible light by looking through a prism or diffraction grating. How can you decide whether it is an emission line spectrum or an absorption line spectrum?
An emission line spectrum consists of bright lines on a dark background, while an absorption line spectrum consists of dark lines on a rainbow background.

Visible light from a distant star can be spread into a spectrum by using a glass prism or
a diffraction grating.

Suppose you see two stars: a blue star and a red star. Which of the following can you conclude about the two stars? Assume that no Doppler shifts are involved. (Hint: Think about the laws of thermal radiation.)
The blue star has a hotter surface temperature than the red star.

Suppose you want to know the chemical composition of a distant star. Which piece of information is most useful to you?
The wavelengths of spectral lines in the star's spectrum.

An atom of the element iron has an atomic number of 26 and an atomic weight of 56. If it is neutral, how many protons, neutrons, and electrons does it have?
26 protons, 30 neutrons, 26 electrons.

When an electron in an atom goes from a higher energy state to a lower energy state, the atom emits a photon of a specific frequency.

Which of the following statements about thermal radiation is always true?
A hot object emits more total radiation per unit surface area than a cool object.

The spectra of most galaxies show redshifts. This means that their spectral lines
have wavelengths that are longer than normal.

If we observe one edge of a planet to be redshifted and the opposite edge to be blueshifted, what can we conclude about the planet?
The planet is rotating.

Which of the following statements about thermal radiation is always true?
A hot object emits photons with a higher average energy than a cool object.

Everything looks red through a red filter because
the filter transmits red light and absorbs other colors.

The planet Neptune is blue in color. How would you expect the spectrum of visible light from Neptune to be different from the visible-light spectrum of the Sun?
The two spectra would have similar shapes, except Neptune's spectrum would be missing a big chunk of the red light that is present in the Sun's spectrum.

Suppose you built a scale-model atom in which the nucleus is the size of a tennis ball. About how far would the cloud of electrons extend?

We can learn a lot about the properties of a star by studying its spectrum. All of the following statements are true except one. Which one?
The total amount of light in the spectrum tells us the star's radius.

From the shortest to longest wavelength, which of the following correctly orders the different categories of electromagnetic radiation?
Gamma rays, X rays, ultraviolet, visible light, infrared, radio.

Which of the following best describes the principal advantage of CCDs over photographic film?
CCDs capture a much higher percentage of the incoming photons than film.

What do we mean by the diffraction limit of a telescope?
It is the best angular resolution the telescope could achieve with perfect optical quality and in the absence of atmospheric distortion.

Which of the following is not an advantage of the Hubble Space Telescope over ground-based telescopes?
It is closer to the stars.

Suppose you point your telescope at a distant object. Which of the following is not an advantage of taking a photograph of the object through the telescope as compared to just looking at the object through the telescope?
The photograph will have far better angular resolution than you can see with your eye.

What does the technique of interferometry allow?
It allows two or more telescopes to obtain the angular resolution of a single telescope much larger than any of the individual telescopes.

What is a CCD?
It is an electronic detector that can be used in place of photographic film for making images.

Which of the following statements about light focusing is not true?
The focal plane of a reflecting telescope is always located within a few inches of the primary mirror.

What is the purpose of adaptive optics?
It reduces blurring caused by atmospheric turbulence for telescopes on the ground.

Which of the following studies is best suited to astronomical observations that fall into the category called timing?
Studying how a star's brightness varies over a period of 3 years.

Which of the following statements best describes the two principal advantages of telescopes over eyes?
Telescopes can collect far more light with far better angular resolution.

Telescopes operating at this wavelength must be cooled to observe faint astronomical objects.

Which of the following is always true about images captured with x-ray telescopes?
They are always shown with colors that are not the true colors of the objects that were photographed.
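Light-collecting area scales with the square of mirror diameter, which is where the factor of 16 in the 8-meter versus 2-meter telescope comparison comes from:

```python
import math

def collecting_area(diameter_m):
    """Area of a circular mirror with the given diameter, in m^2."""
    return math.pi * (diameter_m / 2) ** 2

ratio = collecting_area(8.0) / collecting_area(2.0)
print(ratio)   # 16.0 — (8/2)^2, since area goes as diameter squared
```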
Which of the following is not a reason why telescopes tend to be built on mountaintops that are relatively far from cities and are in regions with dry climates?
The thin air on mountaintops makes the glass in telescope mirrors less susceptible to warping.

What is an artificial star?
A point of light in Earth's atmosphere created by a laser for the purpose of monitoring atmospheric fluctuations.

How does the light-collecting area of an 8-meter telescope compare to that of a 2-meter telescope?
The 8-meter telescope has 16 times the light-collecting area of the 2-meter telescope.

What does angular resolution measure?
The angular size of the smallest features that the telescope can see.

How is Einstein's famous equation, E = mc², important in understanding the Sun?
It explains the fact that the Sun generates energy to shine by losing some 4 million tons of mass each second.

Venus has a higher average surface temperature than Mercury. Why?
Because its surface is heated by an extreme greenhouse effect.

Why was it advantageous for the Voyager mission to consist of flybys rather than orbiters?
Each individual spacecraft was able to visit more than one planet.

Which of the following is not a real difference between asteroids and comets?
Asteroids orbit the Sun while comets just float randomly around in the Oort cloud.

Which of the following is not a major difference between the terrestrial and jovian planets in our solar system?
Terrestrial planets contain large quantities of ice and jovian planets do not.

Which of the following statements about Pluto is not true?
It is the largest known object that is considered to be a dwarf planet.

What is the Oort cloud?
It is not really a cloud at all, but rather refers to the trillion or so comets thought to orbit the Sun at great distances.

Compared to the distance between Earth and Mars, the distance between Jupiter and Saturn is

Which of the following puzzles in the solar system cannot be explained by a giant impact event?
The orbit of Triton in the opposite direction to Neptune's rotation.

Suppose you find a rock that contains some potassium-40 (half-life of 1.3 billion years). You measure the amount and determine that there are 5 grams of potassium-40 in the rock. By measuring the amount of its decay product (argon-40) present in the rock, you realize that there must have been 40 grams of potassium-40 when the rock solidified. How old is the rock?
3.9 billion years.

According to our theory of solar system formation, why do all the planets orbit the Sun in the same direction and in nearly the same plane?
The laws of conservation of energy and conservation of angular momentum ensure that any rotating, collapsing cloud will end up as a spinning disk.

According to our present theory of solar system formation, why were solid planetesimals able to grow larger in the outer solar system than in the inner solar system?
Because only metal and rock could condense in the inner solar system, while ice also condensed in the outer solar system.

According to our present theory of solar system formation, which of the following lists the major ingredients of the solar nebula in order from the most abundant to the least abundant?
Hydrogen and helium gas; hydrogen compounds; rock; metal.

What is the primary reason that astronomers suspect that some jovian moons were captured into their current orbits?
Some moons have orbits that are "backwards" (compared to their planet's rotation) or highly inclined to their planet's equator.

The terrestrial planets are made almost entirely of elements heavier than hydrogen and helium. According to modern science, where did these elements come from?
They were produced by stars that lived and died before our solar system was born.

According to our theory of solar system formation, what is Pluto?
Pluto is a large Kuiper-belt comet.

Which of the following are relatively unchanged fragments from the early period of planet building in the solar system?
Oort cloud comets, the moons of Mars, asteroids, Kuiper belt comets (all of the above).

What happened during the accretion phase of the early solar system?
Particles grew by colliding and sticking together.

Which of the following statements about electrons is not true?
Electrons orbit the nucleus rather like planets orbiting the Sun.

Which of the following statements is true of green grass?
It absorbs red light and reflects green light.

Consider an atom of gold in which the nucleus contains 79 protons and 118 neutrons. What is its atomic number and atomic weight?
The atomic number is 79, and the atomic weight is 197.

You observe a distant galaxy. You find that a spectral line of hydrogen is shifted from its normal location in the visible part of the spectrum into the infrared part of the spectrum. What can you conclude?
The galaxy is moving away from you.

Consider an atom of carbon in which the nucleus contains 6 protons and 7 neutrons. What is its atomic number and atomic mass number?
Atomic number = 6; atomic mass number = 13.

Which of the following statements about x rays and radio waves is not true?
X rays travel through space faster than radio waves.

Suppose that Star X and Star Y both have redshifts, but Star X has a larger redshift than Star Y. What can you conclude?
Star X is moving away from us faster than Star Y.

A perfectly opaque object that absorbs all radiation and reemits the absorbed energy as thermal radiation is
a thermal emitter.

The angular separation of two stars is 0.1 arcsecond and you photograph them with a telescope that has an angular resolution of 1 arcsecond. What will you see?
The photo will seem to show only one star rather than two.

Suppose you have two small photographs of the Moon. Although both look the same at small size, when you blow them up to poster size one of them still looks sharp while the other one becomes fuzzy (grainy) looking. Which of the following statements is true?
The one that still looks sharp at large size has better (smaller) angular resolution than the one that looks fuzzy.
Which of the following wavelength regions cannot be studied with telescopes on the ground? Both B and C.
Which of the following best describes why radio telescopes are generally much larger in size than telescopes designed to collect visible light? Getting an image of the same angular resolution requires a much larger telescope for radio waves than for visible light.
What is the purpose of adaptive optics? To eliminate the distorting effects of atmospheric turbulence for telescopes on the ground.
Which of the following statements about the recently discovered object Eris is not true? It is thought to be the first example of a new class of object.
What is the primary reason why a Pluto flyby mission would be cheaper than a Pluto orbiter? The fuel needed for an orbiter to slow down when it reaches Pluto adds a lot of weight to the spacecraft.
Why did the solar nebula heat up as it collapsed? As the cloud shrank, its gravitational potential energy was converted to kinetic energy and then into thermal energy.
According to our theory of solar system formation, why does the Sun rotate slowly today? The Sun once rotated much faster, but it transferred angular momentum to charged particles caught in its magnetic field and then blew the particles away with its strong solar wind.
At extremely high temperatures (millions of degrees), which of the following best describes the phase of matter? A plasma consisting of positively charged ions and free electrons.
All of the following statements about the Sun's corona are true. Which one explains why it is a source of X rays? The temperature of the corona's gas is some 1 to 2 million Kelvin.
Suppose you watch a leaf bobbing up and down as ripples pass it by in a pond. You notice that it does two full up-and-down bobs each second. Which statement is true of the ripples on the pond? They have a frequency of 2 hertz.
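The angular-resolution and radio-telescope cards above both follow from the standard diffraction formula θ ≈ 1.22 λ/D. A minimal sketch (the helper name and the example wavelengths are my own choices, not from the deck) shows why matching a small optical mirror's resolution takes a kilometres-wide radio dish:

```python
import math

ARCSEC_PER_RAD = math.degrees(1) * 3600  # ~206,265 arcseconds per radian

def aperture_for_resolution(wavelength_m, resolution_arcsec):
    """Dish diameter D = 1.22 * lambda / theta needed to reach a given resolution."""
    theta_rad = resolution_arcsec / ARCSEC_PER_RAD
    return 1.22 * wavelength_m / theta_rad

# Aperture needed for 1-arcsecond resolution at visible light (500 nm)
# versus 21 cm radio waves:
print(aperture_for_resolution(500e-9, 1.0))  # ~0.13 m optical mirror
print(aperture_for_resolution(0.21, 1.0))    # ~53,000 m radio dish
```

Because the radio wavelength is roughly 400,000 times longer than visible light, the dish must be 400,000 times wider for the same angular resolution — which is why radio astronomy relies on very large dishes and interferometer arrays.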
Which of the following could not be measured by an observation that uses only imaging? The rate at which a variable star brightens and dims.
What kind of material in the solar nebula could remain solid at temperatures as high as 1,500 K, such as existed in the inner regions of the nebula? Metal.
What do we mean by the diffraction limit of a telescope? It is the angular resolution the telescope could achieve if nothing besides the size of its light-collecting area affected the quality of its images.
Consider the following statement: "Rocky asteroids are found primarily in the asteroid belt and Kuiper belt, while icy comets are found primarily in the Oort cloud." What's wrong with this statement? The Kuiper belt contains icy comets, not rocky asteroids.
At first, the Sun's present-day rotation seems to contradict the prediction of the nebular theory, because the theory predicts that the Sun should have been rotating fast when it formed, but the actual rotation is fairly slow.
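The potassium-40 dating card earlier in the deck is a three-half-life calculation: 40 g halves to 20, then 10, then 5 g, so the rock's age is 3 × 1.3 = 3.9 billion years. A minimal sketch of that arithmetic (the function name is illustrative, not from the deck):

```python
import math

def rock_age(initial_grams, current_grams, half_life_gyr):
    """Number of elapsed half-lives times the half-life gives the age."""
    n_half_lives = math.log2(initial_grams / current_grams)
    return n_half_lives * half_life_gyr

# 40 g of potassium-40 at solidification, 5 g remaining today,
# half-life 1.3 billion years: log2(40/5) = 3 half-lives.
print(rock_age(40, 5, 1.3))  # 3.9 billion years
```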
A cat becomes infected by eating the cyst form of the parasite. In the small intestine, the cyst opens and releases an active form called a trophozoite. These have flagella, hair-like structures that whip back and forth, allowing them to move around. They attach to the intestinal wall and reproduce by dividing in two. After an unknown number of divisions, at some stage, in an unknown location, this form develops a wall around itself (encysts) and is passed in the feces. The Giardia in the feces can contaminate the environment and water and infect other animals and people.

What are the signs of a Giardia infection?

Most infections with Giardia are asymptomatic. In the rare cases in which disease occurs, younger animals are usually affected, and the usual sign is diarrhea. The diarrhea may be acute, intermittent, or chronic. Usually, the infected animals will not lose their appetite, but they may lose weight. The feces are often abnormal, being pale, having a bad odor, and appearing greasy. In the intestine, Giardia prevents proper absorption of nutrients, damages the delicate intestinal lining, and interferes with digestion.

Can Giardia of cats infect people?

This is another unknown. There are many species of Giardia, and experts do not know if these species infect only specific hosts. Sources of some human infections have possibly been linked to beavers, other wild animals, and domestic animals. Until we know otherwise, it would be wise to consider infected animals capable of transmitting Giardia to humans. You may have heard about Giardia outbreaks occurring in humans due to drinking contaminated water. Contamination of urban water supplies with Giardia is usually attributed to (human) sewage effluents. In rural settings, beavers most often get the blame for contaminating lakes and streams. Giardia outbreaks have also occurred in day care centers, fueled by the less than optimal hygienic practices of children.

How do we diagnose giardiasis?
Giardiasis is very difficult to diagnose because the protozoa are so small and are not passed with every stool. Tests on serial stool samples (one stool sample every day for three days) are often required to find the organism. Special diagnostic procedures, beyond a routine fecal examination, are necessary to identify Giardia. The procedures we use to identify roundworms and hookworms kill the active form of Giardia and concentrate the cyst form. To see the active form, a small amount of stool may be mixed with water on a microscope slide and examined under high magnification. Because these forms have flagella, you can see them move around on the slide. The active forms are more commonly found in loose stools. If you ever have the opportunity to see the active form of Giardia under the microscope, take it. It is an interesting-looking creature. It is pear-shaped, and its anatomy makes it look like a cartoon face, with eyes (which often look crossed), nose, and mouth. Once you see it, you will not forget it. Cysts are more commonly found in firm stools. Special solutions are used to separate the cysts from the rest of the stool. The portion of the solution that would contain the cysts is then examined microscopically. In the spring of 2004, a diagnostic test using ELISA technology became available. This test uses a very small fecal sample and can be performed in 8 minutes in a veterinarian's office. It is much more accurate than a fecal examination.

We have done the tests, now what?

Now we come to how to interpret the test results. It can be a dilemma for your veterinarian. What you see (or do not see) is not always a correct indication of what you have. A negative test may mean the animal is not infected. However, few, if any, laboratory tests are 100% accurate. Negative test results can also occur in some infected animals. If a negative test occurs, your veterinarian will often suggest repeating the test.

What about a positive test?
That should not be hard to interpret, right? Wrong. Giardia can be found in many cats with and without diarrhea. If we find Giardia, is it the cause of the diarrhea, or is it just coincidence we found it? The animal could actually have diarrhea caused by a bacterial infection, and we just happened to find the Giardia. Test results always need to be interpreted in light of the signs, symptoms, and medical history.

If we find Giardia, how do we treat it?

Here we go again; treatment is controversial too. There is a question about when to treat. If Giardia is found in a cat without symptoms, should we treat the animal? Since we do not know if G. cati can infect man, we often err on the side of caution and treat an asymptomatic, infected animal to prevent possible transmission to people. If we highly suspect infection with Giardia, but cannot find the organism, should we treat anyway? This is often done. Because it is often difficult to detect Giardia in the feces of cats with diarrhea, if there are no other obvious causes of diarrhea, we often treat the animal for giardiasis. There are several treatments for giardiasis, although some of them have not been FDA-approved for that use in cats. Fenbendazole is an antiparasitic drug that kills some intestinal worms and can help control Giardia. It may be used alone or with metronidazole. Metronidazole can kill some types of bacteria that could cause diarrhea. So if the diarrhea was caused by bacteria, and not Giardia, the bacteria can be killed and the symptoms eliminated. Unfortunately, metronidazole has some drawbacks. It has been found to be only 60-70% effective in eliminating Giardia from infected dogs, and probably is not 100% effective in cats, either. In some cats, it can cause vomiting, anorexia, and some neurological signs. It also can be toxic to the liver in some animals.
It is suspected of being a teratogen (an agent that causes physical defects in the developing embryo), so it should not be used in pregnant animals. Finally, it has a very bitter taste, and many animals resent taking it – especially cats. Quinacrine hydrochloride has been used in the past, but is not very effective and can cause side effects such as lethargy, vomiting, anorexia, and fever. Furazolidone has been used effectively in treating giardiasis in cats. It can cause vomiting and diarrhea and should not be used in pregnant cats. Now we come to yet another unknown. It is possible these treatments only remove the cysts from the feces, but do not kill all the Giardia in the intestine. This means even though the fecal exams after treatment may be negative, the organism is still present in the intestine. This is especially true of the older treatments. So treated animals could still be a source of infection for others.

How can I prevent my pet from becoming infected with Giardia?

The cysts can live several weeks to months outside the host in wet, cold environments. So lawns, parks, kennels, and other areas that may be contaminated with animal feces can be a source of infection for your pet. You should keep your pet away from areas contaminated by the feces of other animals. This is not always easy. As with other parasites of the digestive system, prevention of the spread of Giardia centers on testing and treating infected animals and using sanitary measures to reduce or kill the organisms in the environment. Solutions of quaternary ammonium compounds are effective against Giardia.

How do I control Giardia in my cattery?

Infection with Giardia can be a big problem in catteries, and a multi-faceted approach is needed.

Treat Animals: Treat all nonpregnant animals. On the last day of treatment, move them to a holding facility while a clean area is established. When the animals are moved back to the clean area, treat them once again.
Decontaminate the Environment: Establish a clean area. If possible, this can be the whole facility. Otherwise, create a few clean runs or cages, separate from the others. Remove all fecal material from the areas, since the organic matter in feces can greatly decrease the effectiveness of many disinfectants. Steam clean the area. Quaternary ammonium disinfectants used according to the manufacturer's directions, or a 1:5 or 1:10 solution of bleach, can usually kill the cysts within one minute. Allow the area to dry for several days before reintroducing the animals. These solutions can also be used to clean litterboxes. Rinse out very well and allow to dry before reusing. NOTE: Use extreme caution when using quaternary ammonium compounds and bleach solutions. Use proper ventilation, gloves, and protective clothing, and follow your veterinarian's recommendations.

Clean the Animals: Cysts can remain stuck to the haircoats of infected animals. So before moving the treated animals to the clean area, they should be shampooed and rinsed well. Especially concentrate on the perianal area.

Prevent Reintroduction of Giardia: Giardia can be brought into the cattery either by introducing an infected animal or on your shoes or boots. Any new animal should be quarantined from the rest of the animals and be treated and cleaned as described above. You should either use disposable shoe covers or clean shoes/boots and use a footbath containing quaternary ammonium compounds to prevent people from reintroducing Giardia. Remember, Giardia of cats may infect people, so good personal hygiene should be used by adults when cleaning litter boxes or picking up the yard, and by children who may play with pets or in potentially contaminated areas.
Flashcards in Wk6_chapter 4 questions and answers Deck (55):

A UTP's category will determine the maximum number of bits it can transmit.
With a standard bus topology, contention and collision are defining characteristics.
Troubleshooting tools are based on software only.
A topology has only physical characteristics.
An Ethernet MAC address is usually specified in:
The IEEE formalized the ring topology as the 802.3.
In a peer-to-peer TCP/IP LAN, a host might serve as a client, a server, or both.
NICs supporting different Mbps capacity could be used in a 100BaseTX LAN.
The maximum length of a standard Cat 5 segment:
A star topology has a central controlling device.
A LAN that uses dedicated servers is called a peer-to-peer LAN.
A bus topology forms a closed loop.
A NIC is always internal to the device, connecting to the device's motherboard.
Cost has helped make 802.3 the first choice for most LAN topologies.
Client devices that connect to a network require: both a physical address and a logical network address
Cache memory is usually slower than hard drive access.
An advantage of a physical and logical star topology is that it: centralizes management of network resources
A NIC is also referred to as a:
Wiring closets are usually physically secured.
Category 5 cable should be used in segments of no more than 100 feet.
LAN resources might include: all of the following: hardware, software, data
NICs are an optional component of each device in a network.
In a switching hierarchy, a frame could take more than one path from source to destination.
The protocol associated with the bus topology:
LAN components are physical and logical.
Compared to standard servers, blade servers are generally: smaller and consume less power
Terminals are often associated with the classic star topology.
A physical star could also be a logical: star or ring
Hardware devices often have a logical component called a:
The digits in an Ethernet MAC address that specify the internal serial number:
A 32-bit bus can handle ________ times the data bits as an 8-bit bus.
Routers have displaced many switches in modern networks.
Eight bits as a unit is an octet.
In the past, devices connected to a mini- or mainframe computer through:
do not require specialized server software
Fast Ethernet LANs that use switches can be greater than 250 meters in diameter.
Term used to describe a signal that is recreated at its original strength:
A higher cable category represents a smaller cable size.
Physical components are frequently referred to as:
Two hard drives that share the same controller may be performing:
A 802.5 network utilizes a device called a:
Switches, in general, cost less than routers.
Server primary memory is also referred to as buffer or cache memory.
Switches create dedicated circuits for the devices connected to them.
Two or more NICs, by design or intent, often have the same physical address.
Servers typically fulfill a generalized, nonspecific function.
A topology's physical and logical characteristics are always the same.
Ethernet is the most common topology used in LANs today.
Reduced resistance translates into slower transmission speeds.
A key indicator of LAN performance is response time to a request.
A bus topology: all of the following: views its circuit as a single cable, does not form a closed loop, is often associated with Ethernet
Physical components are frequently referred to as hardware.
Hard drive access speed can affect a server's performance.
A network login generally requires: a user id and a password
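The fill-in-the-blank bus card in this deck is a simple ratio: per transfer, a wider bus carries proportionally more data bits than a narrower one, so a 32-bit bus moves 32/8 = 4 times as many bits as an 8-bit bus. A quick sketch (the function name is mine):

```python
def bus_ratio(wide_bits, narrow_bits):
    """How many times more data bits per transfer the wider bus carries."""
    return wide_bits // narrow_bits

print(bus_ratio(32, 8))  # 4
```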
Source: Earth PD http://bit.ly/1ESoBKp

Hi, I'm Jensen Morgan. We're going to talk about some great concepts in environmental science. Today's topic is time, scale, and impact. So let's get started. We're going to talk about environmental issues in relation to their geographical scale of impact, time and duration of impact, and degree of impact. Environmental issues can vary in their geographical scale of impact from the individual, to the local, to the regional level, and to the global level. For example, a coal plant in a particular location might cause an employee of that plant respiratory problems from pollutants. Pollution emitted to nearby waterways could impact the local water systems, and cause environmental damages and human health problems. The air pollution in the plant could travel large distances and contribute to regional acid rain deposition, damaging regional ecosystem stability. And greenhouse gases produced by the plant could contribute to global climate change. Environmental issues can also vary in their temporal impact. For example, an industrial plant could improperly manage their heavy metal waste and dump high levels into local water systems. People drinking that water might experience immediate effects such as stomach irritation and dizziness. Over time, drinking that water with heavy metals could, decades later, lead to various forms of cancer in the local population. Generations of impact could result as the heavy metals damaged the local marine ecosystem so severely that fish populations wouldn't be able to recover for 50 to 150 years, which would also affect local economic fisheries. A majority of environmental problems are long-term processes, which makes them difficult to address, because humans are usually concerned with short-term needs, and have a hard time understanding long-term timeframes.
Addressing long-term environmental issues is also challenging because popular media only focuses on environmental issues in the short term, and policymakers tend to be concerned with the short term, usually the duration of their serving term or possible reelection. Not all environmental impacts are equal. And the degree of impact will vary depending on the type of ecosystem being affected. This is because certain ecosystems provide more or less ecosystem services, or are more vulnerable to human impacts than others. An example would be wetlands, because they provide a large amount of ecosystem services such as biodiversity and natural water treatment. They also are more vulnerable to impacts. Human population density is also a factor. A more densely populated area impacted by environmental damages is more significant than a less populated area, because more people are involved. An example would be accidental poisoning of water sources for New York City versus Poultney in Vermont. The economic status and interests of the human population impacted is another factor. Developed and developing nations can have different environmental concerns relative to their people. In addition, environmental issues tend to affect developing and developed nations differently because of economic resources and population density. An example would be drought in India versus the United States. The US has a much smaller population relative to its economic wealth compared to India. The result is that if both countries experienced a drought of similar severity at the same time, the degree of impact would be much greater for India, because it has less economic strength to draw on. Now let's have a recap. We talked about environmental issues in relation to their geographical scale of impact, time and duration of impact, and degree of impact. Well, that's all for this tutorial. I look forward to next time. Bye.
It is now entirely possible to put parts of one species’ brain into another. The Proceedings of the National Academy of Sciences (PNAS) recently published a medical research study which demonstrated how human embryo stem cell neurons can be successfully integrated with neurons from mice. The scientists grew the stem cell neurons along with mouse neurons in a culture and then implanted that combined tissue into a living mouse's hippocampus. The mouse neurons used had a specific trait: they were activated by light. The results of the study showed that the human neurons actually adopted this behavior. The human cells also functioned normally with the mouse's nervous system after the tissue was implanted. Before you start imagining singing and tap dancing mice, take note that one of the aims of the study was to see if human stem cell neurons could be 'reprogrammed'. This research could open up new methods for curing nervous system diseases such as epilepsy or Parkinson's. Take out the defective neurons, grow and modify them, and put them back in.

Via io9
Ice extends its grip across a significant portion of the Earth's land and ocean surface. The part of the world where snow and ice are found is termed the cryosphere. It contains the majority of global freshwater and has a major influence on climate. Seasonal snow cover extends over a maximum of 31% of global land area – almost all of it in the northern hemisphere. A tenth of Earth's land area is covered by glaciers and ice sheets, including the entire continent of Antarctica. A further 24% of the Earth's bare surface contains permafrost, with further regions undergoing seasonal freezing and thawing. During the height of winter in each hemisphere, sea ice can cover 14 to 16 million square kilometres of the Arctic Ocean and 17 to 20 million of the Southern Ocean, pulling back to less than half that extent during their respective polar summers.

A store of freshwater

Stored within the cryosphere is around 77% of global freshwater – 91% of it found in the Antarctic ice sheet, the remainder within the ice sheet covering Greenland and glaciers found worldwide. The cryosphere is both influenced by and has a major influence on climate. And any increase in the melt rate of ice sheets and glaciers has the potential to greatly increase sea level. For this reason, researchers are looking to the cryosphere to get a better idea of the potential scale of climate change. Earth observation provides an effective means of continuously monitoring the entire cryosphere, charting any alterations in response to climate, and looking for warning signs of changes in snow or ice reflectivity (known as 'albedo') along with thinning ice. With Arctic temperatures already at their warmest for the last four centuries, measurements suggest sea ice extent has declined by 10% since the 1960s. The best means of accurately measuring both ice extent and thickness on an ongoing basis is the space-based radar and altimetry instruments of the type flown on ESA's ERS, Envisat and CryoSat-2 spacecraft.
Ice mapping from space

Envisat’s Advanced Synthetic Aperture Radar (ASAR) instrument was able to simultaneously cover an area of the Arctic four times larger than ERS’s SAR. It could also distinguish between different types of ice, with a variable angled and polarised radar beam that can show whether ice is solid pack-ice or just thin 'pancake' ice. Envisat's RA-2 Altimeter was designed to send thousands of radar pulses earthward every second to measure sea ice and glacier thickness. It featured an innovative 'four-wheel-drive' design that enabled it to keep better signal contact during previously difficult transitions between ice, land and open sea. CryoSat-2’s main payload, the Synthetic Aperture Interferometric Radar Altimeter (SIRAL), is able to measure ice-sheet elevation and sea-ice 'freeboard', which is the height of ice protruding from the water. Previous radar altimeters have been optimised for operations over the ocean and land, but SIRAL is the first sensor of its kind designed for ice.
Nikolaus Otto, the German engineer who invented the engine that still drives most modern automobiles, was born on this day. Otto built the first gasoline-powered internal combustion engine that used the four-stroke cycle--later named the Otto cycle--in 1876. Although French engineer Alphonse Beau de Rochas had patented the idea in 1862, Otto received the credit for being the first to put it to use in an engine. In Otto's engine, the piston goes through four movements: intake of air and gasoline, compression, power (expansion), and exhaust. The first practical alternative to the steam engine, Otto's invention was an immediate success. [Source: Britannica Online]
Lightning is generated in cumulonimbus clouds (thunderheads), which have a negative electrical charge at the base and a positive charge at top. Scientists are not certain how these charges develop; they do know the charges are carried by water droplets and ice crystals. The negative charge at the cloud's base causes a "shadow" of positive charge on the earth below. Conditions are then right to form an electrical circuit--this is what lightning is. An insulator--the air--holds up the connection, but eventually the negative charge within the cloud grows too great for the air to restrain it. An electrical impulse, called a leader, reaches downward from the cloud in steps, each step covering about 50 meters (150 feet). When the leader nears the ground, streamers arise to meet it, and the circuit is complete. A bright streak of electricity, the stroke of lightning, ascends along the same course the leader took. Several more strokes may follow this same path. The whole sequence is "lightning fast." The leader travels at 220,000 kilometers (136,000 miles) per hour, the pauses between steps take 50 millionths of a second, the return stroke moves at over 100 million kilometers (62 million miles) per hour, and all subsequent strokes are so fast the eye sees a single flickering lightning bolt. There are many variations. About half the time, lightning strikes between clouds or within a single cloud rather than reaching the earth. A rare but powerful type of lightning comes from a positively charged cloud that the wind has torn away from its negatively charged parent. Lightning can take other shapes as well: a ball, ribbon, sheet, or string of beads. Benjamin Franklin grossly underestimated the force of lightning when he did his kite-and-key experiment. A current of 160,000 amperes has been recorded in one extremely powerful bolt. An average stroke can easily release 250 kilowatt-hours of energy, enough to operate a 100-watt light bulb continuously for more than three months. 
And at 30,000 degrees Celsius (54,032 degrees Fahrenheit), lightning is five or six times as hot as the surface of the sun. All of this energy is contained in a channel about the width of a human thumb. Modern-day Ben Franklins at NCAR use more up-to-date techniques to investigate lightning. A specially equipped sailplane leased by NCAR has gathered data from inside electrical storms over Florida, New Mexico, and Colorado. Since the 1980s, a much clearer picture of lightning distribution has emerged from a nationwide network of detectors that tracks cloud-to-ground strikes. Data from this network often appear on television weathercasts in a map on which dots or x's indicate each flash. And starting in the mid-1990s, NASA's Optical Transient Detector began collecting images of lightning flashes in clouds around the globe from an altitude of about 710 kilometers (446 miles). The highest point within the positively charged shadow under a thundercloud--a skyscraper, a tree in a meadow, a golfer--is the easiest place for the leader to reach. So avoid high places during storms.
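The light-bulb comparison in the passage above is easy to verify: 250 kilowatt-hours at 100 watts runs for 2,500 hours, a little over three months. A quick check (the variable names are mine):

```python
stroke_energy_kwh = 250   # energy released by an average lightning stroke
bulb_power_kw = 0.1       # a 100-watt light bulb

hours = stroke_energy_kwh / bulb_power_kw   # 2500 hours of continuous operation
days = hours / 24                           # about 104 days, over three months
print(hours, days)
```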
Shared decision making is a collaborative process in which patients are supported by their healthcare professional to select which of the available options they wish to choose. It brings together, in the consultation or conversations, the best scientific evidence and the patient’s values and preferences - which themselves are informed by their beliefs, personal constructs and their personal circumstances, including their age, family and social relationships, etc. It occupies the middle ground between more traditional clinician-centred practice, where patients rely on their doctor or healthcare professional to make decisions about their care, and consumerism, where patients are given information and then left to make their own choices. Clinicians have access to knowledge about the treatment options and their risks and benefits, while patients have knowledge of what is important to them, their own goals and preferences. Good shared decision making recognises these different contributions and brings these complementary areas of expertise together, leading to better quality decisions. Shared decision making can take place between a patient, with or without their family, and any of the healthcare professionals involved in their treatment and care, from their health visitor at home to their GP or practice nurse in a primary care setting, to consultations with a surgeon, specialist nurse, psychologist or physiotherapist in hospital. It is relevant at any decision point along the patient’s care pathway and is particularly relevant where reasonable options and choices are available, including the choice to do nothing. Reasonableness needs to be understood in terms of both the individual's goals and the efficacy of an intervention.
Common examples of where it is used include decisions about: - undergoing screening or a diagnostic test - different medical or surgical procedures - self-management of a long-term condition - participation in a psychological intervention - making a change in lifestyle - taking medication. Shared decision making may not be appropriate for all clinical circumstances, for example in a life-threatening emergency or where a patient lacks the capacity to engage. In predictable clinical situations, decisions may be shared in advance, for example whether or not to resuscitate. To be effective, shared decision making requires: - Patients who feel confident and empowered to play an active role in decisions about their care. - Healthcare professionals who are skilled in using shared decision making approaches and tools and in managing the dynamics of shared decision making. - Organisational processes and tools that support shared decision making.
Cognitive function, also called cognitive performance or cognition, refers to the ability of an individual to think, process, and store information in order to solve problems. Humans are the only organisms capable of cognition. Cognitive disorders are characterized by delirium, dementia, and/or amnesia. Delirium is a term used to describe a confused mental state in which a patient has difficulty processing and interpreting information. Dementia is the loss of mental ability that is so severe that it interferes with daily functioning. Amnesia may cause difficulty remembering previously learned information. Patients with cognitive disorders may experience one or more of these symptoms. Treatment for cognitive disorders depends on the underlying cause. Most disorders are incurable and some may have devastating effects. For instance, Alzheimer's disease eventually leads to complete cognitive impairment. Treatment may help delay progression of such disorders. Other disorders, such as age-associated memory impairment (AAMI) may only cause mild symptoms. Cognitive learning disabilities occur when individuals have difficulty interpreting or processing what they see or hear. There is a gap between the patient's intelligence and his/her ability to perform. Patients may have difficulties with spoken and written language, self-control, coordination, and/or attention. As a result, patients may have a hard time with schoolwork or performing tasks at work. Patients with cognitive learning disabilities are often able to live normal, healthy lives. There are many ways for patients to cope with their disabilities. Special education and adaptive skills training has been shown to improve patients' work and school performances. Patients who are diagnosed and treated promptly are often able to go to college and support themselves. 
AAMI, AD, age-associated memory impairment, Alzheimer's disease, auditory perceptual deficit, brain diseases, brain disorders, brain injury, dyscalculia, dysgraphia, dyslexia, dyspraxia, cognition, cognitive deficits, cognitive enhancement, cognitive function, cognitive performance, delirium, dementia, intellectual disabilities, learning disabilities, mental retardation, multi-infarct disease, niacin deficiency, pellagra. common types of cognitive disorders and disabilities Age-associated memory impairment (AAMI): Age-associated memory impairment (AAMI) refers to the normal decline in memory as patients age. AAMI causes mild forgetfulness in patients who are older than 50 years of age. Alzheimer's disease: Alzheimer's disease (AD) is a progressive cognitive disorder that causes dementia. Dementia is the loss of mental ability that is so severe that it interferes with daily functioning. Over many years, AD eventually leads to irreversible mental impairment. During the final stages of AD, patients are unable to remember, reason, and learn new things. AD typically develops in patients who are 65 years old or older. Although doctors know that AD causes healthy brain tissue to slowly degenerate over time, the exact origin of the disease remains unknown. Patients with AD develop abnormal clumps (called plaques) and irregular knots of brain cells (called tangles). Researchers believe that these clumps and tangles kill brain cells and may eventually lead to AD. It has been suggested that genetics may play a role in the development of plaques, which may lead to AD. Inflammation of the brain has also been associated with AD. However, researchers have not discovered if there is a relationship between brain swelling and the development of AD. There is currently no known cure for AD. Once diagnosed, patients typically survive eight to 10 years with the disease. Some have been known to live 25 years with the disease.
In advanced Alzheimer's disease, people may lose all ability to care for themselves. This can make them more prone to additional health problems, such as pneumonia or malnutrition. They may have difficulty swallowing food and liquids, which may cause individuals with AD to inhale some of what they eat and drink into their airways and lungs, which may then lead to pneumonia. Brain injury: Trauma to the head may damage brain cells and lead to cognitive dysfunction. Brain trauma can result from accidents (such as motor vehicle wrecks and falls), assaults (such as gunshot wounds or beatings), or from sports activities (such as boxing and football) without adequate protective gear. In some cases, injury may still result even if protective gear is worn. Dementia caused as a result of trauma can be permanent or temporary, depending on the extent of the damage and the ability of the individual's brain to recover. Infections of brain structures, such as meningitis (inflammation of the protective membranes in the brain) and encephalitis (inflammation of the brain), are primary causes of dementia. Other infections, such as human immunodeficiency virus (HIV) and syphilis (a bacterial sexually transmitted disease), can affect the brain in later stages. In all cases, inflammation in the brain damages cells. Damage to memory due to infection can be permanent or temporary, depending on the extent of the damage and the brain's ability to recover. Niacin deficiency-induced dementia: Dementia can be caused by severe niacin insufficiency, a condition called pellagra. Niacin is a B-complex vitamin found in many foods such as liver, poultry, fish, nuts, and dried beans. Pellagra-induced dementia is uncommon in developed countries, such as the United States. It is most common in areas of the world where malnutrition is prevalent. Multi-infarct disease: Multi-infarct disease is the second most common cause of irreversible dementia.
The condition occurs when the blood flow to the brain is disrupted. If the brain does not receive enough blood, then it is starved of oxygen, and permanent brain damage may result. In multi-infarct disease, multiple strokes lead to a progressive decline in cognition. Strokes cause neurological damage in the brain due to a lack of oxygen. Multiple infarct dementia is more common in men who are older than 50 years of age. A person with this condition may also experience motor weakness, urinary incontinence, and ataxia (irregular muscle coordination). Patients may also develop high blood pressure, diabetes, or vascular disease. Learning disabilities: Learning disabilities are disorders that occur when patients have difficulty interpreting or processing what they see or hear. There is a gap between the patient's intelligence and his/her performance in school, work, or other areas of life. Patients may have difficulties with spoken and written language, self-control, coordination, and/or attention. As a result, patients may have a hard time with schoolwork or performing tasks at work. Learning disabilities may be lifelong. In some cases, they may affect many areas of a person's life, including academics, work, social life, or daily routines. Some patients may have several different disabilities. Others may have only one problem that has little or no impact on their lives. It is important to note that not all learning problems are learning disabilities or cognitive deficits. Some children are simply slower than others in developing new skills. In some cases, learning disabilities may be mistakenly suspected when a child is simply slower to mature. Learning disabilities occur when certain areas of the brain do not function properly. Many factors, including genetics, may be involved in the development of learning disabilities. 
Intellectual disability (mental retardation): Intellectual disability is a condition that causes significantly impaired cognitive functioning from birth or early infancy that ultimately limits the individual's ability to perform normal daily activities. In the past, intellectual disability was commonly called mental retardation. However, the term, "mental retardation," has acquired a negative social stigma over the years. Therefore, doctors and other professionals have begun to replace the term with intellectual disability. There is significant variation in the signs and symptoms of intellectual disabilities. Some patients may be able to live relatively normal lives with minimal assistance, while others may require 24-hour assistance with everyday tasks. There are many potential causes of intellectual disabilities, including genetics, problems during pregnancy (e.g. infection or a mother who drinks or uses drugs during pregnancy), the baby not getting enough oxygen during delivery, and exposure to disease (e.g. whooping cough, measles, or meningitis). Doctors are only able to identify a cause of intellectual disability in about 30% of patients. improving work and school performance General: Patients with cognitive learning disabilities are often able to live normal, healthy lives. There are many ways for patients to cope with their disabilities. Special education and adaptive skills training has been shown to improve patients' work and school performances. Patients who are diagnosed and treated promptly are often able to go to college and support themselves. Education: Patients with learning disabilities or intellectual disabilities must have the option of receiving education that is tailored to their specific strengths and weaknesses. According to the Individuals with Disabilities Education Act, all children with disabilities must receive free and appropriate education. 
According to the law, members of the patient's school should consult with the patient's parents or caregivers to design and write an individualized education plan. Once all parties agree with the plan, the educational program should be started. The school faculty should document the child's progress in order to ensure that the child's needs are being met. Educational programs vary among patients. In general, most experts believe that children with disabilities should be educated alongside their non-disabled peers. The idea is that non-disabled students will help the patient learn appropriate behavioral, social, and language skills. Therefore, some patients are educated in mainstream classrooms. Other patients attend public schools but take special education classes. If the disability is severe or profound, then patients may benefit from specialized schools that are designed to teach children with disabilities. Adaptive skills training: Many patients with intellectual disabilities (mental retardation) need help improving their adaptive skills, which are needed to live, work, and function in the community. Teachers, parents, and caregivers can help patients work on their daily living skills, communication skills, and social skills.
Phototypesetting was a method of setting type, rendered obsolete with the popularity of the personal computer and desktop publishing software, that used a photographic process to generate columns of type on a scroll of photographic paper. Typesetters used a machine called a phototypesetter, which would quickly project light through a film negative image of an individual character in a font, through a lens that would magnify or reduce the size of the character onto photographic paper, which would collect on a spool in a light-tight canister. The photographic paper or film would then be fed into a processor, a machine that would pull the paper or film strip through two or three baths of chemicals, where it would emerge ready for paste up or film make-up.

1950s and 60s

Initial phototypesetting machines

Phototypesetting machines projected characters onto film for offset printing. In 1949, the Photon Corporation in Cambridge, Mass. developed equipment based on the Lumitype of Rene Higonnet and Louis Moyroud. The Lumitype-Photon was first used to set a complete published book in 1953, and for newspaper work in 1954. Mergenthaler produced the Linofilm using a different design and Monotype produced Monophoto. Other companies followed with products that included Alphatype and Varityper. The major advancement presented by the phototypesetting machines over the Linotype machine "hot type" machines was the elimination of metal type, an intermediate step no longer required once offset printing became the norm. This "cold type" technology could also be used in office environments where "hot metal" machines (the Mergenthaler Linotype, the Harris Intertype and the Monotype) could not.
The use of phototypesetting grew rapidly in the 1960s when software was developed to convert marked up copy, usually typed on paper tape, to the codes that controlled the phototypesetters. To provide much greater speeds, the Photon Corporation produced the ZIP 200 machine for the MEDLARS project of the National Library of Medicine and Mergenthaler produced the Linotron. The ZIP 200 could produce text at 600 characters per second using high speed flashes behind plates with images of the characters to be printed. Each character had a separate xenon flash constantly ready to fire. A separate system of optics positioned the image on the page.

Use of CRT screens for phototypesetting

An enormous advance was made by the mid-1960s with the development of equipment that projected the characters onto CRT screens. Alphanumeric Corporation (later Autologic) produced the APS series. Rudolf Hell developed the Digiset machine in Germany. The RCA Graphic Systems Division manufactured this in the U.S. as the Videocomp, later marketed by Information International Inc. Software for operator-controlled hyphenation was a major component of electronic typesetting. Early work on this topic produced paper tape to control hot metal machines. C. J. Duncan, at the University of Durham in England, was a pioneer. The earliest applications of computer controlled phototypesetting machines produced the output of the Russian translation programs of Gilbert King at the IBM Research Laboratories, and built-up mathematical formulas and other material in the Cooperative Computing Laboratory of Michael Barnett at MIT. There are extensive accounts of the early applications, the equipment, and the PAGE I algorithmic typesetting language for the Videocomp, which introduced elaborate formatting. In Europe, the company of Berthold had no experience in developing hot-metal typesetting equipment, but being one of the largest German type foundries, they applied themselves to the transition.
Berthold successfully developed its Diatype (1960), Diatronic (1967), and ADS (1977) machines, which led the European high-end typesetting market for decades.

Expansion of technology to small users

Compugraphic produced phototypesetting machines in the 1970s that made it economically feasible for small publications to set their own type with professional quality. One model, the Compugraphic CompuWriter, used a filmstrip wrapped around a drum that rotated at several hundred revolutions per minute. The filmstrip contained two fonts (a Roman and a bold or a Roman and an italic) in one point size. To get different sized fonts, the typesetter loaded a different font strip or used a 2x magnifying lens built into the machine, which doubled the size of the font. The CompuWriter II automated the lens switch and let the operator use multiple settings. Other manufacturers of photocomposing machines included Alphatype, Varityper, Mergenthaler, Autologic, Berthold, Dymo, Harris (formerly Linotype's competitor "Intertype"), Monotype, Star/Photon, Graphic Systems Inc., Hell AG, MGD Graphic Systems, and American Type Founders. Released in 1975, the CompuWriter IV held two filmstrips, each holding four fonts (usually Roman, italic, bold, and bold italic). It also had a lens turret with eight lenses giving different point sizes from the font, generally 8 or 12 sizes, depending on the model. Low-end models offered sizes from 6 to 36 point, while the high-end models went to 72 point. The Compugraphic EditWriter series took the CompuWriter IV configuration and added floppy disk storage on an 8-inch, 320K disk. This allowed the typesetter to make changes and corrections without rekeying. A CRT screen let the user view typesetting codes and text. Because early generations of phototypesetters couldn't change text size and font easily, many composing rooms and print shops had special machines designed to set display type or headlines.
One such model was the PhotoTypositor, manufactured by Visual Graphics Corporation, which let the user position each letter visually and thus retain complete control over kerning. Compugraphic's model 7200 used the "strobe-through-a-filmstrip-through-a-lens" technology to expose letters and characters onto a thin strip of phototypesetting paper that was then developed by a photo-processor. Some later phototypesetters utilized a CRT to project the image of letters onto the photographic paper. This created a sharper image, added some flexibility in manipulating the type, and created the ability to offer a continuous range of point sizes by eliminating film media and lenses. The Compugraphic MCS (Modular Composition System) with the 8400 typesetter is an example of a CRT phototypesetter. This machine loaded digital fonts into memory from an 8-inch floppy. Additionally, the 8400 was able to set type in point sizes between 5 and 120 point in 1/2-point increments. It was extremely fast and was one of the first output systems (the other was also a Compugraphic machine, the 8600) that was able to create camera-ready output with a maximum width of 12 inches. As phototypesetting machines matured as a technology in the 1970s, more efficient methods were found for creating and subsequently editing text intended for the printed page. Previously, "hot metal" typesetting equipment had incorporated a built in keyboard, such that the machine operator would create both the original text and the medium (lead type slugs) that would create the printed page. Subsequent editing of this copy required that the entire process be repeated. The operator would re-keyboard some or all of the original text, incorporating the corrections and new material into the original draft. CRT based editing terminals, which could work compatibly with a variety of phototypesetting machines, were a major technical innovation in this regard. 
Keyboarding the original text on a CRT screen, with easy-to-use editing commands, was faster than keyboarding on a Linotype machine. Storing the text magnetically for easy retrieval and subsequent editing also saved time. An early developer of CRT-based editing terminals for photocomposition machines was Omnitext of Ann Arbor, Michigan. These CRT phototypesetting terminals were sold under the Singer brand name during the 1970s.

Transition to computers

Early machines had no text storage capability; some machines only displayed 32 characters in uppercase on a small LED screen and spellchecking was not available. Proofing typeset galleys was an important step after developing the photo paper. Corrections could be made by typesetting a word or line of type, waxing the back of the galley, cutting out the correction with an X-Acto knife, and pasting it on top of the mistake. Since most early phototypesetting machines could only create one column of type, long galleys of type were pasted onto layout boards in order to create a full page of text for magazines and newsletters. Paste-up artists played an important role in creating production art. Later phototypesetters had multiple column features that allowed the typesetter to save paste-up time. Early electronic typesetting programs were designed to drive phototypesetters, most notably the Graphic Systems CAT phototypesetter that troff was designed to provide input for. Though such programs still exist, their output is no longer targeted at any specific form of hardware. Some companies, such as TeleTypesetting Co., created software and hardware interfaces between personal computers such as the Apple II and IBM PS/2 and phototypesetting machines, giving suitably equipped computers the ability to drive a phototypesetter.
With the advent of desktop publishing software, Trout Computing in California introduced VepSet, which allowed Xerox Ventura Publisher to be used as a front end and wrote a Compugraphic MCS disk with typesetting codes to reproduce the page layout. In retrospect, cold type paved the way for the vast range of modern digital fonts, with the lighter weight of equipment allowing far larger families than had been possible with metal type. However, modern designers have noted that the compromises of cold type, such as altered letter designs, were carried over into digital fonts when a better path might have been to return to the traditions of metal type. Adrian Frutiger, who in his early career redesigned many fonts for phototype, noted that "the fonts [I redrew] don’t have any historical worth...to think of the sort of aberrations I had to produce in order to see a good result on Lumitype! V and W needed huge crotches in order to stay open. I nearly had to introduce serifs in order to prevent rounded-off corners – instead of a sans serif the drafts were a bunch of misshapen sausages!"
- René Higonnet
- Prepressure – the history of prepress & publishing, 1950–1959, retrieved on 8 May 2014
- Harold E. Edgerton, Electronic Flash, Strobe, 1987, chapter 12, section J
- Michael P. Barnett, Computer typesetting, experiments and prospects, 245p, MIT Press, Cambridge, Mass, 1965.
- Arthur Phillips, Computer peripherals and typesetting: a study of man-machine interface incorporating a survey of computer peripherals and typographic composing equipment, HMSO, 1958, London.
- Jack Belzer, Albert G. Holzman and Allen Kent, Encyclopedia of computer science and technology, 267– (over 100 pages).
- John Pierson, Computer composition using PAGE-1, Wiley Interscience, New York, 1972.
- The Ann Arbor News, 6 April 1973, "Singer Corp. has completed negotiations with Omnitext, Inc."
- Compugraphic-to-Macintosh Solutions, retrieved on 2010-09-18
- Frutiger, Adrian. Typefaces – the complete works. p. 80.
- "Typesetting and Paste-Up, 1970s Style" - The Museum of Printing, North Andover, Massachusetts
In addition to the printable lesson, this resource may be used for distance learning with EASEL by TpT. The lesson engages students with an icebreaker opening game of “Two Truths and a Lie” and culminates in a writing task that imitates the style of a standardized assessment. Additionally, this lesson incorporates all strands of the Common Core English Language Arts (ELA) Standards. This 20-page “Paired Passages” product includes the following: - explicit lesson plan with identified Common Core ELA Anchor Standards - pre-reading guide and sample annotation passage - poem texts - rereading strategy handout - literature web - poetry analysis graphic organizer - Venn diagram - literary analysis prompt - short literary analysis essay rubric - answer key (includes detailed responses for all activities and a sample essay) With its detailed instructions and samples, this resource would make an excellent lesson for a substitute if you will be out for a couple of days. Meaningful and Memorable English Language Arts by ©OCBeachTeacher All rights reserved by author. Limited to use by purchaser only. Group licenses available. Not for public display.
The thermistor is a resistance thermometer. The relationship between its resistance and the temperature is highly nonlinear. Furthermore, the resistance changes negatively and sharply with a positive change in temperature, as shown schematically below.

Characteristics of Three Temperature Transducers

The thermistor resistance-temperature relationship can be approximated by

R = R0 · exp[ b ( 1/T − 1/T0 ) ]

where R0 is the resistance at a reference temperature T0 (in kelvins) and b is a material constant. The thermistor resistance can easily be measured, but the temperature is buried inside an exponential. Since all R and T are positive real numbers, we can apply a logarithm ln to both sides of the equation. Doing so allows us to solve for the temperature T,

T = b / [ ln(R/R0) + b/T0 ]

Alternatively, some references use the negative temperature coefficient (NTC) a to describe the sensitivity of a thermistor,

a = (1/R) · dR/dT = −b / T²

Typically, the value of a falls between -2% ~ -8% per kelvin. With the above equations, the temperature can be directly obtained from the measured resistance. Note that the material constant b may vary slightly with temperature and is usually provided by vendors. One can also use several well known temperature conditions as check points, e.g., ice water at 0 °C (32 °F) and boiling water at 100 °C (212 °F), or use other pre-calibrated thermometers to calibrate/curve-fit the value of b. However, b may vary considerably across the temperature range of interest. In this case, one should resort to a calibrated curve-fit of the R-T relationship and neglect the equations presented above. A suitable curve fit is suggested by the Steinhart-Hart form,

1/T = c1 + c2 · ln(R) + c3 · [ln(R)]³

in which c1, c2, and c3 are constants determined by calibrating at three or more known temperatures.
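The beta-model equations above can be sketched in a few lines of Python. The default values for R0, T0, and b below are illustrative only, not taken from the text; in practice they come from the vendor's datasheet.

```python
import math

def thermistor_temperature(R, R0=10000.0, T0=298.15, b=3950.0):
    """Invert the beta model R = R0 * exp(b * (1/T - 1/T0)) to recover the
    temperature T (kelvins) from a measured resistance R (ohms).
    R0 is the resistance at the reference temperature T0 (kelvins);
    b is the material constant (kelvins) supplied by the vendor."""
    return b / (math.log(R / R0) + b / T0)

def ntc_coefficient(T, b=3950.0):
    """Negative temperature coefficient a = (1/R) dR/dT = -b / T**2,
    the fractional change in resistance per kelvin at temperature T."""
    return -b / T**2
```

As a sanity check, a measurement of R = R0 returns the reference temperature T0, and with b = 3950 K the coefficient at room temperature works out to about -4.4% per kelvin, inside the -2% ~ -8% range quoted above.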
Knowing about supplementary angles can be very useful in solving for missing angle measurements. This tutorial introduces you to supplementary angles and shows you how to use them to solve for a missing angle measurement. Take a look! A point is a fundamental building block of math. Without points, you couldn't make lines, planes, angles, or polygons. That also means that graphing would be impossible. Needless to say, learning about points is very important! That makes this tutorial a must see! Angles are a fundamental building block for creating all sorts of shapes! In this tutorial, learn about how an angle is formed, how to name an angle, and how an angle is measured. Take a look! Did you know that there are different kinds of angles? Knowing how to identify these angles is an important part of solving many problems involving angles. Check out this tutorial and learn about the different kinds of angles! Do complementary angles always have something nice to say? Maybe. One thing complementary angles always do is add up to 90 degrees. In this tutorial, learn about complementary angles and see how to use this knowledge to solve a problem involving these special types of angles! If angles combine to form a straight angle, then those angles are called supplementary. In this tutorial, you'll see how to use your knowledge of supplementary angles to set up an equation and solve for a missing angle measurement. Take a look!
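The two angle relationships described above (complementary angles summing to 90 degrees, supplementary angles summing to 180 degrees) can be sketched in a couple of lines of Python; the function names are illustrative, not from the tutorials.

```python
def complement(angle_deg):
    """Return the complementary angle: the two measures add up to 90 degrees."""
    return 90.0 - angle_deg

def supplement(angle_deg):
    """Return the supplementary angle: the two measures add up to 180 degrees."""
    return 180.0 - angle_deg

# Setting up an equation to solve for a missing measurement: if two
# supplementary angles measure x and 2x, then x + 2x = 180, so x = 60.
```

For example, supplement(65.0) gives the missing angle 115.0, since the two together form a straight angle.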
In today’s digital age, the internet has become an indispensable tool for students to enhance their academic performance. Whether it’s researching a topic, accessing study materials, or communicating with peers and professors, the internet offers a wealth of resources and opportunities for students. Statista, in its recent survey, categorized the percentage of internet users in the United States in 2021 by educational level. The survey revealed that 98% of individuals with a college degree were using the internet, while 97% of those who had completed some college were also internet users. Furthermore, 86% of individuals with a high school education or less were found to be using the internet during the survey period. However, with so much information available online, it can be challenging to navigate and make the most of it. In this article, we will provide seven effective tips on how best to use the internet for your studies, helping you to optimize your online learning experience and achieve academic success. Utilizing Online Resources In today’s digital age, students have a wealth of resources available to them online. Digital textbooks, educational videos, and online libraries are just a few examples of resources that can help students supplement their learning and improve their understanding of the subject. Fresh academic aid has emerged for students in the form of homework market helper, which offers valuable resources to assist them with their studies. Platforms like SweetStudy offer a range of resources and assistance, from assignment help to tutoring services. With these online resources, students can immediately access study guides, exam preparation materials, lecture notes, and other relevant materials to aid in their studies. By effectively utilizing these online resources, students can enhance their learning experience and achieve their academic goals. The internet has made research much more accessible for students.
Instead of relying solely on physical libraries, students can use search engines and online databases to find relevant information for their assignments and projects. However, it’s important to learn how to properly evaluate the credibility of online sources to ensure the information is accurate and trustworthy. According to Make Use Of, Google Scholar is widely regarded as the go-to tool for discovering scholarly literature on a diverse range of subjects. This search engine simplifies the process of accessing academic papers, theses, case law, books, and other relevant resources. Upon viewing the search results page, users can observe the author’s name, journal title, and total number of citations, which facilitates the assessment of the paper’s credibility. Additionally, related articles can be viewed to delve deeper into the topic of interest. Students can also take advantage of tools such as citation generators to properly cite their sources and avoid plagiarism. Online collaboration is an essential part of modern education, and it offers many benefits to students. Students can work together and share their knowledge and ideas using video conferencing, discussion forums, and group chats. Collaboration can also help students learn from each other, allowing for a more diverse range of perspectives to be presented. Group work can also help students develop important skills, such as communication and teamwork. Youth Incorporated notes that mastering the art of time management is crucial for accomplishing goals in both academic and personal realms. By acquiring this skill, students can increase their chances of success in various areas. The crux of effective time management lies in recognizing the tasks that hold the greatest significance and devoting the necessary time to each of them. Utilize productivity tools and apps to help you manage your time effectively, such as to-do lists, calendar apps, and time-tracking tools.
Prioritize your online activities based on their importance and relevance to your studies, and set achievable goals and deadlines for yourself. With effective time management skills, you can make the most of your online study time and improve your academic performance. Online courses offer a convenient way to gain knowledge and skills that can supplement your studies. Many institutions and platforms offer online courses on a wide range of subjects, from programming to business management. By enrolling in these courses, you can gain access to high-quality educational content, learn from industry experts, and improve your understanding of the subject matter. Online courses also offer flexible scheduling, allowing you to learn at your own pace and on your own time. Moreover, the skills you acquire through online courses can enhance your career prospects and make you a more competitive candidate in the job market. Online Study Groups Joining an online study group or forum can be a great way to connect with other students and enhance your learning experience. These groups provide opportunities to discuss and share ideas, ask and answer questions, and provide support to one another. They can also help you stay motivated and accountable for your studies. Additionally, participating in these groups can help you build a network of peers who share your interests and may be useful for future collaborations or job opportunities. Online assessments are a convenient way to test your knowledge and understanding of the subject matter. Many courses and textbooks have online assessments available, and you can also find practice tests and quizzes on various educational websites. These tools can help you identify areas where you may need to focus your studies and improve your understanding of the material. Additionally, online assessments can be a helpful tool for tracking your progress and ensuring that you are meeting your learning goals. 
Just be sure to use a reliable source for assessments and practice tests to ensure that you are getting accurate and helpful feedback. In conclusion, the internet provides a vast array of resources and tools that can greatly enhance one’s learning experience. From online research and collaboration to online courses and assessments, the Internet offers a wealth of opportunities for students to improve their knowledge and skills. By utilizing these resources effectively and practicing good time management, students can stay organized, stay on track, and ultimately achieve success in their studies. By taking advantage of the many benefits the internet offers, students can optimize their learning experience and get ahead in their classes.
As early as 2250 BC hemorrhoids have been recorded in literature to some extent. It would probably be safe to say that it is one of the oldest ailments known to people. The Egyptians were the first people who medically recorded the remedies for hemorrhoids. They used a poultice of dried acacia leaves with a linen bandage to heal protrusions and inflammations of venous material. The Greek physician Hippocrates also wrote about hemorrhoids, attributing them to bile or phlegm settling in the veins of the rectum. He treated the anal protrusions very crudely, advocating pulling the tissue off with the fingertips, or pulling the veins upward while someone puts a hot iron to the hemorrhoid and burns it off. The first recorded endoscopy (use of a speculum to inspect the rectum) can also be credited to Hippocrates. Even the Bible has records of hemorrhoids in the earliest times, from the Old Testament Book of Samuel 5:9, Philistines “punished with emerods”, and Samuel 5:12, “People who moved the Ark to Ekron were punished with emerods”. One of the earliest known hemorrhoid treatments was with the aloe vera plant. Dioscorides, a Roman physician, started using that to treat inflamed hemorrhoids. Then, approximately 130-200 AD, the Roman physician Galen, personal physician to Emperor Marcus Aurelius, prescribed ointment, laxatives, and leeches for hemorrhoids treatment. During the same time period in India, the use of clamp and cautery was used to get rid of hemorrhoids and control bleeding. Between the 5th and 10th Century, Byzantine physicians used thread to ligate the base of the hemorrhoid, followed by its amputation. In 1935, Doctors E.T.C. Milligan and C. Naughton Morgan further studied the excision and ligation methods, which later became the gold standard in hemorrhoidectomy. In the 1960s, banding of larger hemorrhoids was introduced with rubber band ligation. In the 1970s, cryotherapy, diathermy, and laser cauteries were developed for treatment.
In the 1990s, Stapled Hemorrhoidopexy, also known as the Procedure for Prolapse and Hemorrhoids (PPH), was first described by an Italian surgeon, Dr. Antonio Longo, and has since been widely adopted to treat grade 3 and 4 hemorrhoids. Another non-surgical procedure, infrared coagulation (IRC), was developed to treat early-stage hemorrhoids.
Pregnant human mothers think they have it tough, but new photos show some squid moms carry 3,000 developing embryos around for up to nine months. Gonatus onyx is one of the most abundant species of squid in the Pacific and Atlantic Oceans and is an important food source for many predators. They spend most of their lives in shallow waters but dive to great depths to lay eggs. Because of this, scientists had never observed this squid's reproductive habits until recently, when they discovered that the secretive world of G. onyx reproduction is quite unlike anything they've seen before. "It's a shallow-living squid for most of its life, but then it dives down to 2,500 meters, lays 2,000 to 3,000 eggs, and carries them around for months," study leader Brad Seibel of the University of Rhode Island told LiveScience. "This is the first species of squid observed to do this." Quite a burden: The mothers are about eight inches long from the top of their body to the end of their arms, and the addition of the egg mass extends their total length by 50 to 75 percent. As you might imagine, carrying around a load this big can be quite a burden on the squid. Normally squid propel themselves through the ocean by extending their arms outwards and snapping them back together. But this technique doesn't work so well when you're delicately clutching 3,000 developing embryos between your arms. Although Seibel and his colleagues observed the squid repeatedly flushing water through the egg mass—probably to aerate the eggs in the middle—aggressive swimming shook the mass and caused some of the eggs to fall off. To prevent losing eggs, the squid use their mantle and fins to move through the water. However, it appears that as the eggs develop, the mothers experience gradual degeneration of these locomotive muscles. A squid carrying undeveloped eggs was able to escape by vigorous fin and mantle contractions, but one with advanced eggs could not move away.
Other species of squid lay many more thousands of eggs in shallow waters without providing months of care, so why does G. onyx take the trouble to move to deeper waters—between 1,500 and 2,500 meters—and carry around the eggs for so long? "Deep-sea species have fewer eggs, but their offspring are larger and more capable of capturing prey," Seibel said. "But in order for the offspring to survive, the parent must provide care for them for six to nine months." Also, deeper water is relatively free of predators—predatory mammals don't frequent depths below 1,500 meters—making survival easier for both the 1/10th-inch hatchlings and their brooding mothers. This research is detailed in the Dec. 15 issue of the journal Nature.
TOKYO (AP) — The operator of Japan's destroyed Fukushima nuclear plant switched on a giant refrigeration system on Thursday to create an unprecedented underground ice wall around its damaged reactors. Radioactive water has been flowing from the reactors, and other methods have failed to fully control it. The decontamination and decommissioning of the plant, damaged by a massive earthquake and tsunami in 2011, hinge on the success of the wall. Q. WHAT IS AN ICE WALL? A. Engineers installed 1,550 underground refrigeration pipes designed to create a 1.5-kilometer (0.9-mile) barrier of frozen soil around four damaged reactor buildings and their turbines to control groundwater flowing into the area and prevent radioactive water from seeping out. The pipes are 30 meters (100 feet) deep, the equivalent of a 10-story building. Engineers say coolant in the pipes will freeze the surrounding soil to minus 30 degrees Celsius (minus 22 Fahrenheit), creating the wall over several months. Q. WHY IS AN ICE WALL NEEDED? A. The cores of three of the damaged reactors melted during the accident and must be cooled constantly with water to keep them from overheating again. The cooling water becomes radioactive and leaks out through damaged areas into the building basements, where it mixes with groundwater, increasing the volume of contaminated water. Nearly 800,000 tons of radioactive water have been pumped out, treated and stored in 1,000 tanks that now occupy virtually every corner of the Fukushima plant, interfering with its decontamination and decommissioning and adding to the risk of further leaks of water into the nearby ocean. Q. ARE THERE RISKS? A. Construction officials say the coolant is environmentally safe. There have been doubts, however, about whether the huge refrigeration system can effectively freeze the soil while groundwater continues to flow in the area.
The operator, Tokyo Electric Power Co., says results from a test of part of the wall last summer were mixed but suggest the system has sufficient capability. Experts are also concerned that an ice wall cannot be adjusted quickly in an emergency situation, such as a sudden increase in the flow of contaminated water, because it takes several weeks to freeze or melt. Electrical costs for running the refrigeration system could be steep. TEPCO says the wall, once formed, can remain frozen for up to two months in the event of a power failure. Q. WHO MADE THE ICE WALL? A. The 35 billion yen ($312 million) project was funded by the government and built by Kajima Corp., which has used similar technology in smaller projects such as subway construction. The wall was delayed by technical uncertainties and was finished last month, a year behind schedule.
Not all batteries are made equal. Have you ever wondered why there are different battery sizes and ratings? Did you know every battery type serves a specific function? The two main battery types are stationary batteries and "starting" batteries. It is very important to find the correct battery for your particular application, as the wrong choice can affect both the battery's efficiency and its lifespan. Stationary Battery vs. Engine Start Battery: Stationary batteries are meant to provide a continuous current of the same intensity for a prolonged time span. Starting batteries are just the opposite: they are designed with thin plates so they can provide quick bursts of energy, such as a backup generator or car needs while starting. To deliver a continuous current for a prolonged time span, stationary batteries are built with thick internal plates that slow the rate at which amperes are delivered to the load. Stationary batteries are typically rated in ampere-hours. An ampere-hour represents the amount of electricity delivered when a current of 1 ampere flows for 1 hour. The ampere-hour capacity varies with the rate at which the battery is discharged; the slower the discharge, the greater the amount of electricity the battery will deliver. A typical rating is the 20-hour capacity: the amount of electricity a battery will deliver over 20 hours before the voltage falls to 10.50V. For example, a 60Ah battery will deliver a current of 3A for 20 hours. Starting batteries, with their thin plates, deliver the quick burst of energy an engine needs to start. Starting only discharges the battery by about 1-3%, and the charge is then topped off, typically by an alternator. This burst capability is measured as the battery's cranking amps.
Cranking amps are the number of amperes a lead-acid battery at 32 degrees F (0 degrees C) can deliver for 30 seconds while maintaining at least 1.2 volts per cell (7.2 volts for a 12-volt battery). To recap: stationary batteries provide a lower but longer-duration flow of energy, but cannot deliver as many peak cranking amps. Starting batteries provide high power in short, frequent draws, but tolerate only limited long-term discharge.
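To see how the ampere-hour rating translates into run time, here is a minimal sketch. It assumes the nominal capacity holds at the chosen discharge rate, ignoring the reduced effective capacity real batteries show at faster discharges:

```python
def runtime_hours(capacity_ah, load_amps):
    """Approximate run time of a stationary battery: capacity (Ah) / load (A).
    Real capacity shrinks at higher discharge rates, so this is optimistic
    for heavy loads."""
    return capacity_ah / load_amps

# The article's example: a 60 Ah battery delivering 3 A lasts 20 hours
print(runtime_hours(60, 3))  # 20.0
```

The same 60 Ah battery under a 6 A load would last well under 10 hours in practice, which is why manufacturers state the discharge rate alongside the capacity.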
Wikijunior:Summer Flowers of Northern New England/Hemerocallis The Daylily produces huge yellow or orange flowers, depending on the variety. Each flower looks like it has six petals, but really, it only has three true petals. The other three are called tepals (tee-pulls). The leaves are long and slender. Daylilies belong to a large category of plant called monocots (Mah-no-kots). Monocots often have long slender leaves, and the veins in these leaves will always run in the same direction and do not branch. The flowers of most monocots almost always have either three or six petals. If a plant is not a monocot, it's a dicot (die-cot). So even if you can't tell anything else about a plant, you should be able to tell if it's a monocot or a dicot by looking at the leaf veins and counting the petals.
Mathematics, a subject dreaded by most students in primary school, is one of the most important subjects for the real world. Teachers play a significant role in making the topic intriguing for students to understand and grasp. However, good math teachers are like jewels: scarce. Many parents enroll their kids in extra coaching classes to nail the basics and understand the fundamentals of the subject. These classes are not always cheap and can be very expensive, depending on the expertise and teaching ability of the tutor. Some parents who want to give their kids that extra boost adopt games and activities around the home, while some teachers look for interesting ways to make learning math a positive experience. Using simple tools like cards and coins, learning math can be made fun. Take Two: An Addition Game. Take Two uses a regular pack of cards and is best suited for more than one child. It helps children get used to simple addition problems. To play, first remove all the cards that are 7 and higher, including aces. Each child gets two cards and adds them together to find the sum. The child with the highest answer wins. If both children get the same answer, deal each another card so they add the values together to make a bigger number. Make the Most: A Multiplication Game. Like Take Two, Make the Most is a card game for more than one child. Give each child four cards and ask them to multiply the numbers together to find the product. For example: 3x5x6x7 = 630. If your child is unable to complete the round, don't hesitate to swap some of the cards to make it easier for them.
Higher or Lower: Greater Than and Less Than. This game helps children learn about numbers that are greater or less than a given number. Flip over two cards and ask your child to tell you whether one is higher or lower than the other. The child with the most correct answers wins. You can make the game harder by assigning values to the queens, jacks, kings, aces, and jokers. Mystery Number: Subtraction and Memory. This card game combines a bit of memory with subtraction. Remove all the face cards, aces, and jokers from the pile. Lay the remaining cards out on the floor, face down. Your child picks two cards and subtracts one from the other. After finding the answer to the subtraction, they turn over one card at a time from the main stack until they find that answer.
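As a sketch of the "Take Two" round described above (the seeded shuffle, the deck of values 2 through 6, and the simple highest-sum winner pick are my own simplifications):

```python
import random

def build_deck():
    """A regular pack with all cards 7 and higher (including aces)
    removed: four suits each of the values 2-6."""
    return [value for value in range(2, 7) for _suit in range(4)]

def take_two_round(num_children, seed=0):
    """Deal two cards to each child; the highest sum wins the round."""
    deck = build_deck()
    random.Random(seed).shuffle(deck)
    sums = [deck.pop() + deck.pop() for _ in range(num_children)]
    winner = max(range(num_children), key=lambda i: sums[i])
    return sums, winner

sums, winner = take_two_round(2)
print(sums, "-> child", winner + 1, "wins")
```

With values 2-6, every sum lands between 4 and 12, which keeps the addition practice squarely in the range young children can manage.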
The Peterloo Massacre. On 16 August 1819, a meeting of peaceful campaigners for parliamentary reform was broken up by the Manchester Yeomanry, a local force of volunteer soldiers. Between 10 and 20 people were killed and hundreds more injured in what quickly became known as the Peterloo Massacre. Although different sources give different estimates of both the numbers attending the meeting and the numbers killed and injured, it seems likely that around 100,000 people attended the meeting at St Peter's Fields in Manchester on a sunny August day. Men, women and children came not only from the local area but from towns and villages across the North West, some walking nearly 30 miles to attend. Although several members of the crowd attended from mere curiosity, most were supporters of parliamentary reform and had come especially to see the main speaker, Henry Hunt, known as 'Orator' Hunt because of his talent for public speaking. [Image: Map depicting the location and movements of protestors and soldiers at St Peter's Fields, 1820.] [Image: Print depicting the Peterloo Massacre, 1819. © National Archives.] Why were people protesting? Since the end of the Napoleonic wars in 1815, increasing numbers of working people in industrialising yet disenfranchised areas like Manchester had become involved in the movement for reform. Under the influence of men like Henry Hunt and the journalist William Cobbett, they began to campaign for universal suffrage. They argued that extending the vote to working men would lead to better use of public money, fairer taxes and an end to restrictions on trade which damaged industry and caused unemployment. Only a minority campaigned for women to have the vote, but women were nevertheless active in the movement.
In 1819, women in and around Manchester had begun to form their own reform societies campaigning on behalf of their male relatives and vowing to bring up their children as good reformers. Many of the Female Reformers appeared at the meeting at St Peter's Fields dressed distinctively in white as a symbol of their virtue. [Image: Print of the Peterloo Massacre depicting Female Reformers dressed in white and holding a banner for the Manchester Female Reform Union.] Suppressing the protesters. Despite the seriousness of the cause, there was a party atmosphere as groups of men, women and children, dressed in their best Sunday clothes, marched towards Manchester. The procession was accompanied by bands playing music and people dancing alongside. In many towns, the march was practised on local moors in the weeks before the meeting to ensure that everybody could arrive in an organised manner. According to local magistrates, however, the crowd was not peaceful but had violent, revolutionary intentions. To them, the organised marching, banners and music were more like those of a military regiment, and the practices on local moors like those of an army drilling its recruits. They therefore planned to arrest Henry Hunt and the other speakers at the meeting, and decided to send in armed forces – the only way they felt they could safely get through the large crowd. People who were already cramped, tired and hot panicked as the soldiers rode in, and several were crushed as they tried to escape. Soldiers deliberately slashed at both men and women, especially those who had banners. It was later found that their sabres had been sharpened just before the meeting, suggesting that the massacre had been premeditated.
The names of many of the hundreds injured were printed, along with details of their wounds, so that sympathisers could put money towards a charity to support them – remember there was no sickness benefit or free healthcare available at the time. These lists, however, probably underestimate the numbers killed and injured, as many people were afraid to admit they had been at the meeting and thereby risk further reprisals from the local authorities. The response to the massacre. There was considerable public sympathy for the plight of the protesters. The Times newspaper printed a shocking account of the day, causing widespread outrage which briefly united advocates of a more limited reform with the radical supporters of universal suffrage. A huge petition with 20 pages of signatures was raised, stating the petitioners' belief that, whatever their opinions on the cause of reform, the meeting on 16 August had been peaceful until the arrival of the soldiers. From government came an official sanction of the magistrates' and yeomanry's actions, and the passing of the Six Acts, a paranoid legal crackdown on the freedoms of the public and press. Among this new legislation was the requirement for any public meeting on church or state matters of more than 50 people to obtain the permission of a sheriff or magistrate, and the toughening of the laws that punished authors of blasphemous or seditious material. Many braved the oppressive Six Acts, however, to express their anger in print. Percy Bysshe Shelley, on hearing news of the massacre while in Italy, called for an immediate response. His poem 'The Masque of Anarchy' encourages reformers to 'Rise like lions after slumber, in unvanquishable number' (stanza 38). He sent the poem to Leigh Hunt in London, who cautiously refrained from publishing it. The satirist William Hone had no such qualms.
His Political House That Jack Built (1819), illustrated by the caricaturist Cruikshank, neatly sums up the reformers' grievances in his typically irreverent manner. The piece was wildly popular, reflecting both the extent of anger over Peterloo and the cleverness of using a well-known nursery rhyme to make a serious message widely accessible. Radical propaganda often veered between respectability and audacious humour, the latter, of course, being much harder to prosecute in court for fear of provoking hilarity. Ironically, the attempt to silence government critics only encouraged journalists to develop inventive new ways of conveying the message of reform, while the outrage of conservative newspapers only inspired further satires. As well as political prints and poems, everyday items such as cookware and handkerchiefs immortalised and commemorated Peterloo. Such items proclaimed the owner's allegiance to the reform cause, and sustained the memories of its martyrs. [Image: Beaker commemorating Henry Hunt, the radical speaker who was imprisoned for his involvement with the reform meeting at St Peter's Field, 1819. © Trustees of the British Museum.] [Image: Handkerchief commemorating the Peterloo Massacre, 1819. It depicts the yeomanry slashing at the crowd of protestors.] Peterloo remains a key moment in the history of the suffrage movement, less for the initial success of the meeting than for the way it allowed the reformers to gain the moral high ground. It was increasingly obvious that the government could only counter dissent with repression, while the chorus of angry voices only rose following outrages such as Peterloo.
[Image: Satirical design for a monument and medal for the soldiers at Peterloo, from William Hone's A Slap at Slop and the Bridge-Street Gang, 1821.] Due to the large numbers assembled, and the varying motives for exaggerating or downplaying attendance, it is difficult to obtain an accurate estimate. Robert Poole and Joyce Marlowe, scholarly authorities on Peterloo, use the fairly low attendance figure of 60,000, a number also given by the contemporary spectator John Benjamin Smith in his memoirs. The Times reported 80,000 in attendance, while the Manchester Observer carefully worked out the possible numbers per square yard and concluded that 153,000 people were present. Henry Hunt gave the number as 180,000–200,000 in his memoirs, while Richard Carlile, who was also on the hustings, gave the unusually high attendance figure of 300,000 people in Sherwin's Political Register. The text in this article is available under the Creative Commons License.
When we talk about density, we refer to the amount of space an object or substance takes up (volume) in relation to the amount of matter in that object or substance (mass). We work it out by dividing the mass of an object by its volume (ρ = m/v). This tells us that a heavy, compact object is high in density, while a light, large object is low in density. Density is a scientific concept that seems difficult to explain, but we can simplify comparisons by observing which substances sink (because they are denser) and which float (because they are less dense). Visualising it may make it a little easier to understand, so we've put together a little something to do just that. Using a beautiful seascape and liquids of different densities, watch the video above for a quick and easy science experiment, and scroll down for how best to explain the concept to the kids. You will need: a jar, sand, water, blue food colouring, cooking oil, shaving cream, and ocean toys (we used little fish). Add blue food colouring to the water. Slowly layer your jar in the following order: sand, blue water, and oil; add shaving cream to the top, and then drop in the plastic or rubber toys one by one. Let the kids watch how the substances mix at first before settling in layers. Here's how to explain it: The denser an object or liquid (that is, the heavier and more compact it is), the more likely it is to sink compared to an object or liquid that is less dense. In our experiment, the sand was the most dense, then the water, oil, and finally the shaving cream. Although they mixed when first added together, they eventually separated because they have different densities.
When we added the fish, we could also see how dense they were in relation to the other layers by seeing how far they sank or floated. Are there any other simple and easy experiments you've tried at home to teach your kids particular concepts? Share your story with Parent24 via email at chatback @ parent24.com. Anonymous contributions are welcome.
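For older kids, the sink-or-float ordering can be tied back to the formula ρ = m/v with a small sketch. The density values below are rough, typical figures I am assuming for illustration:

```python
def density(mass_g, volume_cm3):
    """Density = mass / volume, here in g per cubic cm."""
    return mass_g / volume_cm3

# Approximate densities of the experiment's layers (illustrative values)
layers = {
    "sand": 1.6,
    "water": 1.0,
    "cooking oil": 0.92,
    "shaving cream": 0.08,
}

# Denser layers settle below lighter ones: sort from densest (bottom) up
order = sorted(layers, key=layers.get, reverse=True)
print(order)  # ['sand', 'water', 'cooking oil', 'shaving cream']
```

The sorted order matches what the jar shows: sand at the bottom, then water, then oil, with shaving cream floating on top.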
I need help on a maths question: A plane flies 300km from an airport, A, on a bearing of 240°. The plane turns and travels for 400km on a bearing of 050°. a) Using a scale of 1 cm to 50km, draw an accurate drawing of the flight path of the plane. b) What is the bearing that the plane must travel on to return to the airport? In my answer, I will take both bearings as measured clockwise from north, which is the standard convention. I cannot upload a picture, so I will be answering part b). Let's call the starting point A, the second point after 300km B and the third point C. The angles at points A, B and C will be called a, b and c, respectively. If we draw the diagram, we can see that the points form a triangle ABC, and we know: AB = 300, BC = 400. The bearing from B back to A is the back-bearing of 240°, which is 240° − 180° = 060°; since the plane leaves B on a bearing of 050°, the angle at B is b = 060° − 050° = 10°. First we can use the cosine rule to find the length AC. AC^2 = 300^2 + 400^2 − 2*300*400*cos(10°). AC^2 = 13,645.7… AC = 116.8 (1dp). Now that we have AC, we can use the sine rule to find the size of angle c. sin(c) = sin(10°) * 300/116.8. c = sin-1(sin(10°) * 300/116.8). c = 26.5 degrees (1dp). The bearing from C back towards B is the back-bearing of 050°, which is 230°, and A lies 26.5° anticlockwise of that direction. So: 230° − 26.5° = 203.5 degrees is the bearing the plane must travel on to return to the airport.
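To double-check the trigonometry numerically, here is a short sketch that converts each leg of the flight into east/north coordinates, with A at the origin (that coordinate setup is my own choice, not part of the question):

```python
import math

def destination(x, y, bearing_deg, dist_km):
    """Move dist_km from (x, y) on a compass bearing measured
    clockwise from north; x is east, y is north."""
    rad = math.radians(bearing_deg)
    return x + dist_km * math.sin(rad), y + dist_km * math.cos(rad)

# Leg 1: A -> B, 300 km on bearing 240. Leg 2: B -> C, 400 km on bearing 050.
ax, ay = 0.0, 0.0
bx, by = destination(ax, ay, 240, 300)
cx, cy = destination(bx, by, 50, 400)

# Distance and compass bearing from C back to A
dist_ca = math.hypot(ax - cx, ay - cy)
bearing_ca = math.degrees(math.atan2(ax - cx, ay - cy)) % 360

print(round(dist_ca, 1), "km on a bearing of", round(bearing_ca, 1))
```

The coordinate check agrees with the triangle solution: the plane is about 116.8 km from the airport and must fly on a bearing of roughly 203.5° to return.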
Antibiotics are powerful medicines prescribed by a health care provider to treat infections caused by bacteria. Antibiotics do not treat viral infections, such as the common cold or influenza (the flu). Antibiotics work by killing bacteria that cause infection or by keeping these bacteria from growing. Different antibiotics work for different bacteria. It is important to take antibiotics exactly as directed by your health care provider. Taking antibiotics when they are not needed increases your risk for developing an infection later that will be resistant to antibiotic treatment. Your health care provider may prescribe an antibiotic if you have a bacterial infection. Your health care provider will review your symptoms and any laboratory tests to prescribe the antibiotic that is right for you. Antibiotics do not cure viral infections, such as the common cold or influenza (the flu). The risk for viral infections can be reduced by avoiding close contact with others and properly washing your hands (see Quick Facts about Hand Washing). Antibiotic resistance occurs when bacteria change in ways that reduce or prevent the effectiveness of an antibiotic. If you are infected with resistant bacteria they could survive and you may continue to be ill even though you are being treated with antibiotics. Illnesses caused by bacteria resistant to antibiotics can cause serious disability or even death. You can also spread these resistant bacteria to others. Antibiotic resistance is a public health concern, because these resistant bacteria can spread from person to person or from objects used by someone who is infected. The bacteria then cause new infections that are more difficult or impossible to cure.
Since these resistant infections are harder to treat, they last longer and are more severe. People infected may need more expensive and stronger medications and may need to be hospitalized for longer periods of time. Repeated and improper use of antibiotics is the main reason bacteria become drug resistant. Proper hand washing decreases the risk of spreading these infections, and proper use of antibiotics is extremely important. All information presented is intended for public use. For more information, please refer to the Centers for Disease Control and Prevention's Get Smart: Know When Antibiotics Work campaign and the Indiana Coalition for Antibiotic Resistance's Educational Strategies. This page was last reviewed July 16, 2009.
The Power of Project-Based Learning: Helping Students Develop Important Life Skills. Project-based learning is a teaching approach that motivates and inspires students to learn and helps them to become self-directed learners over time. Students learn not only the content surrounding their projects, but also important life skills such as problem-solving, creativity, collaboration, communication, time management, and responsibility. Author Scott Wurdinger has implemented this approach over the past ten years in his own classrooms, has conducted numerous research studies on this topic, and has seen the effectiveness of project-based learning firsthand. This book provides information on the history, research, and application of the project-based learning approach and should be read by educators who want to change their classrooms into dynamic, exciting learning environments. Educators will learn everything they need to know about how to implement this approach in their classrooms, as well as how to help students create meaningful, relevant projects that can help impact and solve school, community, and even global problems. Read this book and bring project-based learning to your classroom! Wurdinger, S. D. (2016). The power of project-based learning: Helping students develop important life skills. Rowman & Littlefield.
by the BFS Third Grade Teachers. Exploration and Imagination in Math. We are winding down our significant math unit on larger addition and subtraction. In this unit we introduced addition and subtraction stacking, choosing efficient strategies based on the numbers, and word problems. We capitalize on 3rd graders' love of play, exploration and imagination to make this a fun and rich educational experience. Our T-Shirt Marketplace in Action! Place Value, Stacking and T-Shirts. To explore place value and set ourselves up to understand stacking, we have adapted a math unit from Contexts for Learning called The T-Shirt Factory. Math Specialist Kate Minear wrote a detailed post about the beginning of this unit here. This year we added a t-shirt marketplace, where children bought and sold index-card t-shirts across the three third grade classrooms, an engaging way to practice "trading" and "reorganizing" shirts and to provide an accessible context for stacking. Our shoppers are lined up and ready! Our t-shirt marketplace sellers are ready to welcome their buyers! Word Problems, Addition and Subtraction Strategies and the Pet Store. After practicing efficient problem solving, children put these skills to further use in our "pet store." Our pet store consisted of displays of pets and supplies that were pictured along with confused-looking teachers whose thought bubbles showed them puzzling over prices. Kids interpreted the problems, choosing appropriate operations and strategies to figure out the answer. Much to their delight, third graders then invented their own pet store problems: drawing creative displays of cats, dogs, armadillos, sharks, saber-toothed tigers and their supplies. They posed great questions that encouraged their peers to work with challenging numbers.
Newspapers teach journalists to write using the inverted pyramid. While it isn't always the best approach, the inverted pyramid has worked for news writing since the days reporters telegraphed dispatches to editors. Today it works for online writing. The structure echoes the classic essay structure you were taught — or should have been taught — at school. The basic format: - Introduction — say what the piece is about; answer questions like who, what, where and when. You can also explain why at this point, although that can wait until later. - Then — expand, amplify; - Keep doing this until you've told the whole story. Make the most important points first, then add more and more detail in each additional paragraph. Traditional newspaper editors cut a story from the bottom if it needs to fill a specific space on a printed page. Inverted pyramid online: The inverted pyramid structure, with each paragraph being progressively less important, means editors can easily remove the least important information first. A news story written using the inverted pyramid structure can be cut at the end of any paragraph, even the first paragraph, and still be a self-contained story. Online this means search engines pay more attention to the most important words. This helps people find your writing faster. It means they can zero in on the story and information they are looking for. Those opening paragraphs also make neat summaries for listings and similar online uses. If you write your copy tight enough, your opening sentence will show up as the text in a Google search. That will help draw in readers. The most important information goes in the first paragraph and each extra paragraph carries progressively less weight. That's where the inverted pyramid name comes from: the foundation sits at the top, the less important details are at the bottom.
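The "cut from the bottom" property can be sketched in a few lines of code. The story text and character budget below are invented purely for illustration:

```python
def cut_to_length(paragraphs, max_chars):
    """Trim an inverted-pyramid story by dropping whole paragraphs
    from the bottom until the story fits the available space."""
    story, used = [], 0
    for para in paragraphs:
        if used + len(para) > max_chars:
            break  # everything below this paragraph matters even less
        story.append(para)
        used += len(para)
    return story

paragraphs = [
    "Fire destroyed a downtown bakery early Monday.",    # who/what/where/when
    "No one was injured, the fire service said.",         # next most important
    "The bakery had operated on Main Street since 1982.", # background detail
]
print(cut_to_length(paragraphs, 100))  # keeps the first two paragraphs
```

Because each paragraph stands on its own and ranks below the one before it, the trimmed story is still complete at any cut point, which is exactly why the structure survived from the telegraph era to search-engine snippets.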
Purpose of Clinical Trials. Our understanding of cancer and how to treat it is constantly evolving toward the day when we have a cure. The cure is not here yet, but our treatment options have greatly improved in recent decades. Clinical trials are an important part of the process of bringing new treatments to market to improve patient outcomes. Some of the reasons we conduct clinical trials include: - Testing a new procedure for identifying and diagnosing certain diseases and conditions. - Finding ways to prevent certain diseases or conditions before they have a chance to develop. - Exploring new methods of supportive care for patients with chronic disease. - Gauging the success of new treatment methods for particular diseases. Clinical trials are conducted after exhaustive research and development in the lab. The treatments and methods that reach clinical trials have shown promise in every environment in which they have been tested. The final stage is testing with actual patients who have the conditions we are trying to treat or prevent. Access to clinical trials is a great benefit to patients because it means they can use a new medication or treatment before it is widely available. Instead of waiting months or even years for new drugs to be approved and marketed, patients who qualify can get these treatments now, when they need them.
Elapids are generally slender, highly agile snakes with a colubrid-like head that is not very distinct from the neck and bears large, colubrid-like scales or scutes. Elapids lack the loreal scute that separates the nasal scute from the preorbital scutes (most nonvenomous colubrid snakes have this scute). Because the fangs are short, the mouth does not have to open wide when the snake strikes. The length of these snakes varies from 7 in (18 cm) (the rare Fijian Ogmodon vitianus) to more than 200 in (5 m) (king cobra, Ophiophagus hannah). The body often has stripes that may be very colorful. Many cobras flatten when excited, and cobras are famous for the ability to spread their neck ribs to form a hood.

The coral snakes of the Americas can be unicolored (no bands), but most species are famous for having a bright series of alternating color bands. The snakes may be bicolored, tricolored, or even quadricolored. The bands serve as a warning to potential predators. Also famous is the diverse radiation of nonvenomous snake mimics of the coral snakes. Many species of nonvenomous snakes that live in the same regions as coral snakes have evolved coloration almost identical to that of coral snakes. It has been estimated that 18% of all snakes found in the Americas are coral snake mimics. There are twice as many mimic species as there are coral snake species.

Seasnakes have evolved many adaptations, from the partially marine existence of the sea kraits (Laticauda) to the fully marine existence of the seasnakes. The nostrils of all seasnakes have valves that form a tight seal when the snake dives. Fully marine seasnakes move sinusoidally as land snakes do, but they propel themselves through the water with a paddle-shaped tail rather than by gripping the substrate with wide belly scales as land snakes do. The belly scales of fully marine seasnakes are almost the same size as their other body scales.
To say that climate change is responsible for the recent string of tropical storms that have formed throughout the Atlantic Ocean would be an obvious inaccuracy. Tropical storms, like other devastating natural disasters such as tornadoes, earthquakes and blizzards, have been recognized as part of natural global phenomena for as long as anyone can remember. So, to ask if climate change is responsible for the likes of Hurricane Irma, the immediate answer would be no. However, if you were to ask whether climate change caused Irma’s advanced destructive capability, that would be a whole other animal to address, and many people would answer with at least a tenuous yes.

In most cases, hurricanes are caused by excess water vapor rising off the surface of the ocean. This is why hurricanes typically develop in lower latitudes and typically subside once they travel far enough north; warmer water that evaporates is a better catalyst for storm systems than cooler water. As a hurricane develops, the water vapor warms the surrounding areas and condenses when the latent heat is released in the upper atmosphere, in the form of clouds and rain. In most storm systems, the presence of any substantial wind shear will drive off some of the latent heat and reduce the intensity of a storm dramatically. However, when that wind shear is absent, heat builds up in a storm and can cause a low-pressure system to form around it. Any wind present in this low-pressure system begins to spiral inward, building at an exponential rate. The subsequent buildup of wind causes more water to evaporate off the surface of the ocean, contributing more heat to the system and thereby developing more rain and more clouds in the process. And boom, you have the start of a hurricane.

Now, consider the changes in climate in recent years.
Global warming has contributed to higher sea levels and an increase in average ocean temperature of about 1.5 degrees Fahrenheit over the last 100 years. That may not sound like a lot, but consider how much water covers the planet’s surface and how much heat is required to warm it at all. This rise in temperature makes it that much easier for water to evaporate off the ocean’s surface, creating the beginning of a scenario in which a hurricane becomes very likely. And because rising sea levels mean there is now more water to contribute vapor to a storm system, hurricanes can develop more quickly and with greater power behind them.

The fact that hurricanes develop faster and more easily than before is hard to deny. Experts say that hurricanes are generally capable of reaching category 3 strength about 9 hours sooner than they were 30 years ago. And the fact that more water vapor is fueling these storms is one of the reasons Hurricane Harvey was able to set a record for rainfall on the continental United States during a single weather event.

While many experts don’t find an irrefutable, conclusive correlation between this year’s increasingly damaging hurricanes and human-made climate change around the world, they will say that the data points fairly strongly toward that conclusion.
Technology plays an important role throughout our school curriculum, nowhere more than in Computing. Not only do we aim for students to become adept, sensible and safe users of technology, but there is also a strong emphasis on computer science, teaching children to create through programming and basic electronics. As well as ‘on-screen’ coding, we believe it is important for our students to engage with and control actual physical objects, using simple circuit boards and components such as motors and LEDs, as well as different switches and sensors. Our students have the opportunity to learn about robotics using Lego Mindstorms EV3 kits. Once they have built their device, mastered its basic motion controls and explored the various inputs, such as the ultrasonic proximity sensor, students are given complex tasks for their robot to complete autonomously.
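As a rough illustration of the kind of logic behind such an autonomous task, here is a hypothetical sense-decide-act sketch in plain Python. It is not the EV3 API: the sensor readings are stubbed and the 10 cm threshold is invented, so it runs without any hardware:

```python
# Hypothetical obstacle-avoidance logic, with the ultrasonic
# proximity sensor stubbed out so the sketch runs without a robot.
STOP_DISTANCE_CM = 10  # invented threshold, not from any real kit

def next_action(distance_cm: float) -> str:
    """Decide what the robot should do given a proximity reading."""
    if distance_cm <= STOP_DISTANCE_CM:
        return "turn"    # too close: turn away from the obstacle
    return "forward"     # path is clear: keep driving

# Simulated readings as the robot approaches a wall.
readings = [50.0, 30.0, 12.0, 8.0]
actions = [next_action(d) for d in readings]
print(actions)  # ['forward', 'forward', 'forward', 'turn']
```

On a real kit the same loop would read the sensor and drive the motors each iteration; separating the decision logic as a pure function like this also makes it easy for students to test their rules before loading them onto the robot.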
Hip osteoarthritis is a condition in which the cartilage in the hip joint degenerates, causing pain and inflammation. The hip is a ball-and-socket joint, and osteoarthritis gradually robs it of its mobility. It is a large joint that carries much of the body’s weight and is involved in activities like walking, sitting, and running.

In hip osteoarthritis, the cartilage that lines the acetabulum and covers the ball-shaped femoral head becomes inflamed. The inflamed cartilage degenerates over time, which narrows the space between these two bones. New cartilage regenerates, but the new cartilage creates joint friction and makes the bone surface bumpy and irregular. Movement is painful because of the friction between the acetabulum and the femoral head. This friction can lead to the development of painful osteophytes (bone spurs) in the hip joint. Patients with hip osteoarthritis may feel a grating sensation during movement.

Pain from hip osteoarthritis is often felt in the groin and down the front of the thigh. You may feel the same pain while lifting or carrying a heavy load such as a bucket. Hip osteoarthritis can also cause brief episodes of stabbing pain in the hip.

Causes of Hip Osteoarthritis:

The exact causes of hip osteoarthritis are not known. The most common contributing factors are as follows:
- Joint injury – An injury to the joint can deform the bone and cartilage of the hip and make movement difficult. Injury is a critical cause of hip osteoarthritis.
- Increasing age – With increasing age, the cartilage covering the bones thins, leaving the bones weaker than those of younger people. That is why older people are more likely to suffer from hip osteoarthritis.
- Being overweight – Excess weight increases the amount of cholesterol in the body; these fatty residues can settle on the cartilage and obstruct the flow of synovial fluid.
This can jam the hip and make it difficult to move.
- Genetic defects – Genetic factors also play a role in causing hip arthritis. The joints may not have formed properly due to inherited defects in the cartilage.
- Working load – The person may be putting extra stress on the joints, either by gaining weight or through activities that put pressure on the hip bone.
- Deterioration of connective tissue – Besides the breakdown of cartilage, osteoarthritis affects the entire joint. It causes deterioration of the connective tissues that hold the joint together and anchor muscle. Inflammation of the joint is another contributor to hip osteoarthritis.

Signs and Symptoms of Hip Osteoarthritis:

A hip osteoarthritis patient may experience the following symptoms:
- Joint stiffness after sitting for a long time.
- Joint stiffness after getting out of bed.
- Pain, tenderness or swelling in the hip joint.
- A crunching sound or sensation, as if bone were rubbing against bone.
- Difficulty moving the hip during routine activities.
- A joint that feels tender to the touch.

Treatment for Hip Osteoarthritis:

The main goals of treating osteoarthritis are to decrease pain and restore proper movement of the hip. Treatment for hip osteoarthritis involves the following:
- Physical care – Rest, joint care and non-drug pain-relief techniques for controlling pain come under self-care. Other measures include losing excess weight, daily hip and stretching exercises, yoga, tai chi, and swimming.
- Drugs – Medications can help with inflammation and pain. These include pain relievers such as acetaminophen (Tylenol) and NSAIDs such as ibuprofen (Advil).
- Hip resurfacing – Hip resurfacing is a surgical option for treating hip osteoarthritis. A portion of the diseased hip joint surface is surgically removed and replaced with metal.
The joint itself is not removed; instead, a metal cap is fitted to allow natural movement.
- Injections – Hyaluronic acid injections, steroid injections and platelet-rich plasma therapy are used to relieve pain and support the cartilage in the hip.
=> Physical science includes physics and chemistry.
=> Physics tells us why there is anything at all.
=> Chemistry tells us how substances combine or separate to form other substances.
=> Chemistry describes the different forms of matter.
=> At the most basic level, a man is made of matter.
=> Matter is everything that has mass and takes up space.
=> Air is matter.
=> Light is not matter.
=> Chemistry concerns the different kinds of matter.
=> Chemistry describes how living organisms use the food they eat.
=> Living organisms are made of chemicals known as proteins, fats and carbohydrates.
=> Biochemistry is the branch of chemistry that explores how plants and animals use chemicals and energy.
=> The sun is the source of energy for Earth.
=> Both living creatures and human technology get almost all of their energy from the sun.
=> Without the Sun’s energy, Earth would be a cold, icy place with a temperature near -273 degrees Celsius.
=> Plants store solar energy in carbohydrates, like sugar. Animals eat the plants to get energy. Other animals eat those animals for their energy.
=> Energy is the ability for things to change. Nothing changes when no energy is exchanged. Mars is farther from the sun than Earth. The average temperature on Mars is well below the freezing point of water. Some Antarctic bacteria live even below this temperature.
=> Examples of scientific questions: Why is the sky blue? How much does the earth weigh? How far away is the sun? What is a black hole? How do airplanes fly? How do flies walk on the ceiling? How are rainbows made? Are sharks mammals?
=> “Where does the Sun come from?” is a less scientific question than “How does sunlight help us?”
=> A good scientific question builds on what we already know.
=> When questions arise on the basis of established knowledge, they have real answers, each with a particular hypothesis confirmable by experimental observations. Such questions are called scientific questions.
=> Only scientific questions can be answered through science.
=> Only scientific questions ask about objects, organisms, and events in the natural world.
=> Only scientific questions can be answered through investigations that involve experiments, observations, or surveys.
=> Only scientific questions can be answered by collecting and analyzing evidential data that is measurable.
=> A scientific question is testable.
=> A hypothesis is a testable, educated guess seeking an explanation for a set of observations or an answer to a scientific question.
=> A scientific question is based on observations, not imagination.
=> A scientific question has actual answers.
=> A testable question is one that can be answered by designing and conducting an experiment.
=> Testable questions are always about changing one thing to see what the effect is on another thing.
=> The key steps in the scientific method are: making observations, formulating a hypothesis and testing the hypothesis through experimentation.
=> Some questions are still unanswerable by science: What is reality? What is life? Do we have free will? Is the universe deterministic? What is consciousness? Will we ever have a theory of everything?
=> A scientific answer is grounded in experience but bigger than that experience alone.
=> As members of society, scientists have a responsibility to use their specialized knowledge and expertise in addressing societal issues and concerns.
=> The problem-solving approach of science is called the scientific method.
=> The scientific method has five steps: observation, question, hypothesis, prediction and conclusion.
=> A biologist is a scientist who studies living organisms.
=> Science has helped form the world that we live in today.
=> Science is valued by society because the application of scientific knowledge helps to satisfy many basic human needs and improve living standards.
=> Science influences society through its knowledge and world view.
=> Scientific knowledge and methods influence the way individuals think about themselves, others, and the environment.
=> The effect of science on society is neither entirely beneficial nor entirely detrimental.
=> A hypothesis is a testable but limited explanation.
=> A scientific theory is an in-depth and confirmed explanation of an observed phenomenon.
=> A law is a unifying proposal or statement about an observed phenomenon, without an explanation.
=> Characteristics of a good question: relevant, clear, concise, purposeful, guiding but not leading, thoughtful and single-dimensional.
=> Science begins by asking questions.
=> The five senses are seeing, hearing, touching, tasting, and smelling.
=> A hypothesis is a prediction.
=> An inference is a conclusion used in forming a hypothesis.
=> Inference is based on previous knowledge.
=> Examples of scientific laws: Newton’s law of gravity, the laws of thermodynamics, Newton’s laws of motion, Coulomb’s law, Bernoulli’s principle, general relativity, Carnot’s theorem, Maxwell’s equations, and Brewster’s angle.
=> A fact is a basic statement established under the specific conditions of an observation.
=> A “hypothesis” is “an educated guess.”
=> A “hypothesis” is a tentative or suggested prediction based on evidence.
=> A hypothesis is very tentative; it can be easily changed.
=> According to the United States National Academy of Sciences, “Some scientific explanations are so well established that no new evidence is likely to alter them. The explanation becomes a scientific theory.”
=> Theories allow scientists to make predictions about as yet unobserved phenomena.
=> Theories are not “guesses” but reliable explanations of real-world phenomena.
=> Examples of scientific theories: the theory of biological evolution, the atomic theory of matter, the germ theory of disease.
=> Theories are explanations of natural phenomena.
=> Theories aren’t predictions.
=> We use theories to make predictions.
=> Theories are explanations of why we observe something.
=> Theories don’t change overnight.
=> Theories can, indeed, be facts.
=> Theories can change.
=> Facts can’t change.
=> Theories aren’t likely to change.
=> Scientific laws are rules for how nature will behave under certain conditions.
=> Scientific laws are frequently written as equations.
=> Theories explain why we observe what we do; laws describe what happens.
=> A law is a relationship that exists between variables.
=> Laws are the mysterious patterns we see in large amounts of data.
=> A belief is a statement that is not scientifically provable.
=> Beliefs may or may not be correct.
=> Theories are explanations and laws are patterns.
=> A hypothesis is a tentative prediction.
=> A theory is a well-supported explanation of observations.
=> A scientific law is a summarized relationship between variables.
=> An experiment is a controlled method of testing a hypothesis.
=> Laws are generally descriptions of physical phenomena.
=> A hypothesis is an explanation for a narrow set of phenomena; a theory is an explanation for a relatively wide set of phenomena.
=> All hypotheses, theories, and laws involve facts.
=> Examples of scientific facts: Fish appear in the fossil record millions of years before mammals do. A dropped object falls to Earth. A gas expands when temperature rises if pressure is held constant. The ocean is salty. It takes 365.25 Earth days for Earth to orbit the Sun. Earth has one moon.
=> Hypotheses are proposed explanations for a narrow set of phenomena.
=> Theories are explanations for a wide range of phenomena.
=> Hypotheses are reasoned and informed explanations.
=> Theories are not just hunches.
=> Laws cannot become hypotheses. Hypotheses cannot become theories.
=> Hypotheses, theories, and laws are all scientific explanations.
=> Hypotheses, theories, and laws differ in breadth (range of phenomena covered), not in level of support.
=> A hypothesis is a proposed explanation for an observation before it is confirmed by research.
=> Scientists propose a hypothesis before doing research as a way of predicting the outcome of a study.
=> Scientific theories explain some broad aspect of the natural world and are based on solid evidence that has been confirmed over time.
=> Scientific laws describe natural phenomena, often mathematically.
=> Scientific theories can be tested.
=> Scientific theories are based on large amounts of data and observations that have been collected over time.
=> Scientific theories allow scientists to make predictions.
=> Scientific theories can be expanded and revised as new evidence is introduced or as existing data are interpreted differently.
=> Laws describe phenomena, while theories explain why phenomena exist.
=> Scientists propose a hypothesis before doing research to predict the outcome of a study and guide them as they design their research.
=> Some theories, like the theory of evolution by natural selection, are broad and encompass many concepts.
=> Scientific theories in one discipline can influence theories in other disciplines.
=> Scientific laws describe but do not explain an observed phenomenon.
=> An example of a scientific law is the law of gravity.
=> An example of a scientific theory is the theory of plate tectonics.
=> In order to be a scientist, a student must practice the skill of making observations.
=> Looking up at a dark, cloudy sky, our experience often predicts that it might rain. This is an example of an inference.
=> A scientific method is a process for answering questions.
=> An inference is a predicted answer to a question based on observations and experience.
=> An inference must be testable and isn’t always correct.
=> Data are information collected in order to answer a question.
=> A scientific method is a series of steps including observation, forming a question, stating a hypothesis, collecting data, and reaching a conclusion.
=> If we want to prove or disprove a hypothesis, we perform an experiment.
=> A theory is a statement that explains a complex idea, such as a process for how Earth’s surface has changed over time.
=> A law is a statement that describes an observed phenomenon, such as why an object falls when you drop it.
=> Science is a process of gathering knowledge about the natural world.
=> Physical science is the study of matter and energy.
=> Matter is the “stuff” that everything is made of.
=> Energy is the ability to do work.
=> Studying different forms of energy is what studying physics is all about.
=> Energy can make matter do some interesting things.
=> The study of Earth’s atmosphere, especially in relation to weather and climate, is called meteorology.
=> The study of the origin, history, and structure of Earth is called geology.
=> Physical science is divided into the study of physics and chemistry.
=> Chemistry studies the structure and properties of matter and how matter changes.
=> Physics looks at energy and the way that energy affects matter.
=> Energy is the ability to do work.
=> Air is made of matter.
=> All matter has energy.
=> Botany is the study of plants.
=> Observation is any use of the senses to gather information.
=> Noting that the sky is blue or that a cotton ball feels soft is an observation.
=> Measurements are observations that are made with tools.
=> A hypothesis is a tentative explanation because it hasn’t yet been tested.
=> A theory is not a “hunch” or “gut feeling”.
=> Theories are well-tested, well-supported explanations.
=> An example of a theory is the Big Bang theory: the idea that the universe began in an extremely hot, dense state and has been expanding ever since.
=> A law is considered to be a statement of fact.
=> Facts are repeatable.
=> Data refer to any pieces of information acquired through observation or experimentation.
=> Scientific methods are the ways in which scientists answer questions and solve problems.
=> Asking a question results from making an observation.
=> Questioning is the first step of using scientific methods.
=> A hypothesis is a possible explanation or answer to a question.
=> A good hypothesis is testable.
=> After testing a hypothesis, we need to analyze our results.
=> Analyzing involves calculations, tables, and graphs.
=> After analyzing the results, we draw conclusions about whether our hypothesis is supported.
=> Data are pieces of information gathered through experimentation.
=> Hypotheses are possible explanations or answers to a question.
=> A model is a representation of an object or system.
=> Models are never exactly like the real thing.
=> Scientific models are of three types: physical (dolls or drawings), mathematical (equations) and conceptual (the Big Bang theory).
=> A theory not only explains an observation but also can predict what might happen in the future.
=> A law is a summary of many experimental results and observations.
=> Laws tell us only what happens, not why it happens.
=> A model uses familiar things to describe unfamiliar things.
=> Physical, mathematical, and conceptual models are commonly used in science.
=> A scientific theory is an explanation for many hypotheses and observations.
=> A scientific law summarizes experimental results and observations.
=> A physical model is used to represent a human heart.
=> A tool is anything that helps you do a task.
=> One way to collect data is to take measurements. To get the best measurements, we need the proper tools.
=> Stopwatches, metersticks, and balances are tools used to make measurements.
=> Our brain processes information all the time. We use this information to make choices and solve problems.
=> The science process is like looking for a lost sock.
=> An inference is a statement based on experiences.
=> Science is a process for answering questions.
=> A hypothesis is a possible answer to a scientific question based on observations.
=> An experiment is like looking through a keyhole.
=> An experiment is an activity performed to support or refute a hypothesis.
=> Chemists are involved in activities like making new medicines and figuring out the best way to refine oil to make gasoline.
=> Biology is the study of living things.
=> Astronomy is the study of stars and planets and anything else that is in space.
=> An observation is an accurate description of a thing or an event.
=> When practicing science, it is important to make observations without injecting opinions.
=> An astronomer looks through a telescope to see objects that are millions of miles away.
=> A microbiologist looks through a microscope to study small organisms like amoebas and bacteria that are millions of times smaller than us.
=> Acoustics is the science of designing objects based on how sound travels.
=> Scientific facts are statements that are accepted as being true after repeated measurements or observations.
=> Meaningful questions can lead to a better understanding of the universe in which we live.
=> We study science for a better understanding of the world around us.
=> The development of technology is affected by society and vice versa.
=> Engineers need scientific information to develop products or solve problems.
=> Building a prototype is the first step taken to find a technological solution.
=> Even a scientific theory might not always be true.
=> The independent variable is also known as the manipulated variable.
=> The independent variable is the only factor that is varied or manipulated by the researcher.
=> A single independent variable produces one or more results, known as dependent variables.
=> A controlled experiment is a scientific test.
=> A controlled experiment is directly manipulated by a scientist.
=> A controlled experiment tests a single independent variable at a time.
=> A controlled experiment shows the effects of a variable on the system being studied.
=> A controlled experiment is one of the most common types of experiment.
=> Scientific studies are often made via controlled experiments.
=> In a controlled experiment, an observer tests a hypothesis by looking for changes brought on by alterations to a variable.
=> In a controlled experiment, an independent variable affects the dependent variable.
=> Graphing is a procedure to display the data resulting from an experiment.
=> There are three main types of graphs: circle, bar and line.
=> Pie/circle graphs are used to show parts of a whole.
=> Bar graphs are used to compare amounts.
=> Line graphs are used to show the change of one piece of information as it relates to another change.
=> Both bar and line graphs have an “X” axis (horizontal) and a “Y” axis (vertical).
=> Independent variables are controlled by the experimenter.
=> Some independent variables are time, dates, depth, and temperature.
=> Independent variables are placed on the X axis.
=> Dependent variables are directly affected by the independent variables.
=> A dependent variable is the result of what happens as time, dates, depth and temperature are changed.
=> Dependent variables are placed on the Y axis.
=> Graphs show correlations between parameters.
=> The linear correlation is the simplest correlation between two experimental parameters.
=> The graph of a linear correlation is a straight line.
=> Graphs are used to explain experimental results.
=> Data tables are more specific about results than graphs.
=> Graphs are tools to display data.
=> Graphs are used to display data for various reasons: they are easy to interpret, they display a large amount of information in a small space, and they are easy to draw.
=> Graphs are drawn in pencil with a ruler.
=> Both axes of a graph are labeled.
=> All parameters of a graph have units.
=> Bar graphs are not used to display experimental data.
=> Every graph needs a descriptive title.
=> A graph shows the effect of x on y.
=> The X axis shows the independent variable: the variable we intentionally change.
=> A graph’s Y axis should plot the dependent variable: the variable we measure.
=> To show the effect of changing temperature on the rate of a reaction, temperature goes on the X axis and rate goes on the Y axis.
=> Scientists use graphs to examine the relationship between variables.
=> Theories predict how variables are related.
=> A graph of the experimental data can confirm or disprove a theory.
=> The scientific method is the process of verifying or disproving theories by doing experiments.
=> Graphs are not used for small amounts of data that could be conveyed succinctly in a sentence.
=> The scientific method is a set of procedures to learn about the world.
=> Scientific findings are displayed in tables, graphs and charts to illustrate data.
=> Line graphs show the relationship between dependent and independent variables.
=> In data tables, the independent variable is listed in the first column and the dependent variable is listed to the right.
=> Bar charts compare amounts of something between unrelated groups.
=> Different types of graphs are appropriate for different experiments.
=> A bar graph is useful when you want to compare similar data for several individual items or events.
=> Data that change over a range are best represented by a line graph.
=> Making sure an experiment gives the results you expect is an example of unscientific thinking.
=> Since multiple uncontrolled variables confuse the results, you need to control all the relevant variables that could affect the result.
=> In a controlled experiment, one variable is changed while all others remain fixed.
=> Zoology is not included in physical science.
=> Physics deals most with energy and forces.
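Several of the points above (independent variable on the X axis, dependent variable on the Y axis, a linear correlation appearing as a straight line) can be demonstrated with a small least-squares fit. This is a plain-Python sketch with made-up data, not output from any real experiment:

```python
def linear_fit(xs, ys):
    """Least-squares slope m and intercept b for y = m*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - m * mean_x
    return m, b

# Independent variable (X): temperature, what we intentionally change.
# Dependent variable (Y): reaction rate, what we measure.
temps = [10, 20, 30, 40]
rates = [2.0, 4.0, 6.0, 8.0]
m, b = linear_fit(temps, rates)
print(m, b)  # 0.2 0.0 -- the points lie on the straight line y = 0.2x
```

If real experimental points fell close to such a fitted line, that would be the "linear correlation" the notes describe; large scatter around the line would suggest the variables are not linearly related.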
=> Using semiconductors to build computers is an example of technology.
=> For all practical purposes, it is advantageous to assume that the universe obeys a set of rules.
=> Science is an attempt to discover and explain natural laws.
=> The most reliable way to discover natural laws is called scientific inquiry.
=> Ancient people did not know the explanation for qualities like hot or cold.
=> A scientific theory is a human attempt to describe a natural law.
=> Before 1843, scientists believed that heat was a kind of fluid called ‘caloric’.
=> The caloric theory was given up when people learned to measure weight accurately.
=> An object has the same weight, hot or cold.
=> In 1842, a German doctor, Julius Mayer, first hypothesized that heat was a form of energy.
=> In 1843, James Joule experimentally proved that heat was a form of energy.
=> Scientists observe nature, then invent or revise hypotheses about how things work.
=> The hypotheses are tested against evidence collected from observations and experiments.
=> Any hypothesis which correctly accounts for all of the evidence from the experiments is a potentially correct theory.
=> A well-established hypothesis helps scientists build a theory.
=> A scientific theory summarizes a hypothesis or group of hypotheses.
=> In science, the word ‘theory’ refers to the way that we interpret facts.
=> Every scientific theory starts as a hypothesis.
=> A scientific theory is the framework for observations and facts.
=> Theories may change, but the facts themselves don’t change.
=> A scientific theory is an elastic bag of indeterminate shape into which more and more facts and observations can be stuffed.
=> Evolution is a fact, but the theories about evolution might change.
=> A scientific theory is not the end result of the scientific method.
=> Theories can be proved, improved or rejected, just like hypotheses.
=> Theories don’t become laws.
=> A hypothesis is untested and subjective.
=> A theory is tested and objective.
=> Hypotheses become theories with proper evidence.
=> To get a theory, we need nature backing our hypotheses.
=> A theory is an explanation for a phenomenon.
=> A hypothesis is a prediction from collected data.
=> A hypothesis is a speculation made from a very honest position of ignorance.
=> Theories that are useful here and now may not provide the same utility at other times, or at other locations.
=> A hypothesis is designed specifically for testing.
=> We cannot test a theory in a direct way.
=> Theories tend to be tested in a piecemeal fashion.
=> Some aspects of a theory are tested in one study, while other aspects of the theory are tested in another.
=> A mathematical picture that shows a pattern between two variables is a graph.
=> On a graph of two variables, the variable that causes changes in the other is the independent variable.
=> A quantity which can have many values is a variable.
=> A situation set up to investigate the relationship between certain variables is called a scientific experiment.
=> A statement of what was learned in an experiment is called a conclusion.
=> The description of how an experiment is done, including the equipment, techniques used and the type of data collected, is the scientific method.
=> The variable that you change in an experiment is called the independent variable.
=> In an experiment, variables that are NOT allowed to change are controlled variables.
=> Laws are unlikely to change unless major experimental errors are found.
=> The sum of all measured values divided by the number of measurements is called the average or mean value.
=> An experiment has three kinds of variables: independent, dependent, and controlled.
=> Hypotheses that can’t be tested by experiment: A parallel universe exists that cannot be detected. An invisible superhero really exists.
=> Hypotheses that can be tested by experiment: Steeper ramps result in higher speeds. Red apples taste better than green apples. Alien life forms are hiding on Earth.
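The definition of the mean given above (the sum of all measured values divided by the number of measurements) translates directly into code. A minimal sketch, with invented sample readings:

```python
def mean(values):
    """Average: sum of all measured values / number of measurements."""
    return sum(values) / len(values)

# Hypothetical repeated measurements of the same quantity.
measurements = [9.5, 10.5, 9.0, 11.0]
print(mean(measurements))  # 10.0
```

Averaging repeated measurements like this is the usual first step in analyzing experimental data, since it smooths out random error in any single reading.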
Yellow-eared Parrot was once common in the Andes of Ecuador and Colombia, but declined owing to unsustainable exploitation of the quindío wax palm upon which it is dependent for roosting, nesting and feeding. This palm has become highly threatened owing to the use of its fronds to adorn Palm Sunday processions. However, a highly successful publicity campaign backed by the Catholic Church has engendered considerable public support. In combination with active protection measures such as installing nest boxes, protecting palm seedlings and planting trees, this has led to the parrot population increasing to over 800 birds. By the late 1990s, Yellow-eared Parrot Ognorhynchus icterotis, once common across the High Andes of both Ecuador and Colombia, had disappeared from Ecuador and appeared to be on the brink of extinction. However, in 1999, 81 birds were discovered in a remote region of the Colombian Andes, rekindling hopes for the future of the species (Salaman 1999, BirdLife International 2008). The fate of this parrot is inextricably linked to that of the quindío wax palm Ceroxylon quindiuense, Colombia’s national tree and, at over 60 m, the world’s tallest palm. The parrot depends entirely on the tree for roosting, nesting and feeding. For centuries, wax palm fronds have been used to adorn Palm Sunday processions commemorating the entry of Jesus into Jerusalem. As a result, thousands of trees are felled each year and the palm has become highly threatened (Salaman 2001). The long-term survival of the parrot is therefore reliant on effective conservation of the quindío wax palm. In response to the plight of these two species, Fundación ProAves (a local non-governmental organisation) and Conservation International (CI) established a community-based conservation and research programme in Colombia. 
A publicity campaign, including television and radio appeals, music concerts and a touring ‘Parrot Bus’, successfully raised awareness of the problems facing the parrot and its habitat throughout Colombia. The considerable public support this engendered enabled local projects to implement active protection measures such as installing nest boxes, protecting palm seedlings and planting trees. The campaign grew into an alliance of over 35 national NGOs, government departments and, perhaps most importantly, the Episcopal Conference of Colombia. With the endorsement of the Catholic Church, the project was able to bring to an end the use of wax palm fronds across large parts of the country by promoting sustainable alternatives during festivities. Through close collaboration with local people and the Church, the campaign has been a huge success. There are now over 800 birds in Colombia, divided between two main colonies, and it is hoped that the species can be encouraged to recolonise Ecuador. BirdLife International (2008) Raising public awareness to save the Yellow-eared Parrot. Downloaded from http://www.birdlife.org on 16/01/2021
Floating Point/Special Numbers

There are several special numbers specified in the IEEE 754 standard. These include representations for zero, infinity, and Not-A-Number (NaN).

A floating point number is said to be zero when the exponent and the significand are both equal to zero. This is a special case, because we remember that the significand is always considered to be normalized. This means that 1 ≤ 1.m < 2, and there is an implied "1." before the significand. If we look at the following equation for a normalized number:

    x = (-1)^s × 1.m × 2^(e − bias)

and we plug in our values for m and e (both zero), we get x = ±1.0 × 2^(−bias), which is not zero. We therefore say that whenever the exponent is zero, we have a special class of numbers called "denormalized numbers". We will discuss denormalized numbers more later.

Notice that our definition of zero doesn't make any mention of the sign bit. This means that we can have both a positive zero and a negative zero, depending on the sign value. What is the difference between the two? In general, there is no difference between positive zero and negative zero, except that the sign bit can propagate through arithmetic operations.

When the exponent of the number is the maximum value, that is, when the exponent is all 1 bits, we have another special class of numbers. This means that regular numbers may never use the maximum exponent value for representing numbers. If the exponent is the maximum value and the significand is zero, we have the special case called "Infinity". Notice that we can have negative infinity and positive infinity values, depending on the sign bit.

If we have a maximum exponent value and a non-zero significand, we have a very special number known as NaN. NaN stands for "Not a Number", and occurs when we perform certain illegal operations, such as dividing zero by zero. NaN can be signed, but the sign rarely matters. NaN values cannot be used meaningfully in calculations.

The special cases can be summarized as follows:

| Class                | Exponent | Significand | Sign |
|----------------------|----------|-------------|------|
| Zero                 | 0        | 0           | any  |
| Denormalised numbers | 0        | non-zero    | any  |
| Infinity             | maximum  | 0           | any  |
| NaN                  | maximum  | non-zero    | any  |
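A quick way to see these encodings in practice is to pull the bit fields out of a Python float. This is a sketch for 64-bit doubles (the format Python floats use), where the exponent field is 11 bits wide, so its maximum value is 2047:

```python
import math
import struct

def float_bits(x: float):
    """Decompose a 64-bit IEEE 754 double into (sign, exponent, significand)."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF        # 11-bit biased exponent field
    significand = bits & ((1 << 52) - 1)   # 52-bit fraction field
    return sign, exponent, significand

# Zero: exponent and significand both zero; the sign bit gives +0.0 or -0.0.
print(float_bits(0.0))       # (0, 0, 0)
print(float_bits(-0.0))      # (1, 0, 0)

# Infinity: maximum exponent (all 1 bits = 2047), zero significand.
print(float_bits(math.inf))  # (0, 2047, 0)

# NaN: maximum exponent, non-zero significand.
sign, exp, sig = float_bits(math.nan)
print(exp == 2047 and sig != 0)  # True
```

The same decomposition works for 32-bit floats by swapping the format codes and field widths (8-bit exponent, 23-bit significand).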
In the 16th century Reformed theology established itself slowly in France. It faced persecution from the Roman Catholic majority until Protestant King Henry IV came to power. French Protestantism enjoyed sustained growth until the Edict of Nantes was revoked by Louis XIV in 1685. This led thousands of French Protestants, or Huguenots, to flee to other European countries or America. After the French Revolution of 1789, French Protestants, including the Eglise Réformée de France (Reformed Church of France), were able to regain their full rights. Today the Reformed Church of France (ERF) has a membership of 350,000 — about 0.5 percent of the French population. The Church has 400 congregations with 360 pastors, 26 percent of whom are women and 15 percent of whom are from countries other than France. The local ERF churches are organized around two main ministry foci: (1) congregational life, e.g. worship and biblical and theological formation, and (2) witness through diaconal activities. The principal symbol of the Reformed Church of France, established in 1905, is the Huguenot Cross, composed of a four-petal lily of France in the form of a Maltese cross. The four petals represent the Gospels of Matthew, Mark, Luke and John. Connecting the four petals are four fleurs-de-lis, the symbol of France. Representing the Spirit, a dove hangs from the lower petal on a ring of gold.
The European Settlement Map is a spatial raster dataset mapping human settlement. What was early contact like between Europeans and natives? These settlers began to explore, and they soon encountered the native people. Using the information they recorded, you are going to examine their initial thoughts and feelings. The Portuguese and Spanish divided the non-European world between them into two areas of exploration. Although Columbus did not reach Asia, his voyages sparked an era of exploration. The map below illustrates the location of European settlements in North America. What motivated the Europeans in their initial settlements? How did the European nations differ in their vision of a successful settlement? The European territorial settlement was reached, but the terms themselves and the manner of their negotiation seemed a far cry from the firm basis of an enduring peace. With the arrival of European settlers such as Henry Hudson, we can look at the history of the New Netherlands and their settlement of New Amsterdam. But while the original Americans did change their environment, present-day ecosystems have changed far more since European settlement. St. Augustine was the first city founded by European settlers in North America: Spain looked to stake its claim to “La Florida” next, but the French had already arrived. There is legendary evidence that other Europeans chanced upon the island during the Middle Ages, and numerous different settlements were established there. Though a Dutchman was the first European to sight the country, his mission to New Zealand was considered unsuccessful by his employers. Later settlers had considerable contact with Māori, especially in coastal areas. When Europeans explored Canada, they found all regions occupied by indigenous peoples. The remains of the Norse settlement, L'Anse aux Meadows, are a World Heritage Site.
Their territory included fertile plains, rivers, and an abundant coast; these same features attracted the first European settlers. Humans first reached the continent 50,000 to 60,000 years ago, and some authorities believe their occupation may date back 100,000 years. Their several-day visit was followed by a second, six-month mission in 1632; the Spanish began establishing missions and settlements in East Texas. The first European settlements in North America were up and down the Atlantic coast (or, as with France, in Canada). It was not until the early 1600s that European explorers first visited the area, and settlers there were soon building considerably more elaborate houses reflecting their newfound prosperity. The first attempt by Europeans to colonize the New World occurred around AD 1000; the accomplishments of these Norse voyagers later became known to other Europeans. Although the French sought to colonize the area, the growth of settlements was slow. European settlements in the Caribbean began with Christopher Columbus, whose aim was to establish direct commercial relations with the producers of spices. The colonization by Europeans of the two great American continents expressed both sides of the bridge; its animating source was the clash and competition of European powers. European goods, ideas, and diseases shaped the changing continent. As Europeans established their colonies, their societies also became segmented. The Cape offered the ideal halfway point between Europe and India; some of the Dutch settlers were forced into the interior just to escape colonial laws. This article outlines a research agenda on the socio-cultural integration of Muslims in their Western European societies of settlement. Here is a brief account of how and why Europeans came to India: Goa subsequently became the headquarters of the Portuguese settlements in India, and the Dutch founded their first factory at Masulipatam in Andhra Pradesh in 1605.
Though Columbus was not the first to discover the New World, his landing marked a turning point. In 1565, Spain established the first successful European settlement in North America. On St Kitts, the Caribs became concerned at the numbers of Europeans arriving on their island; the English settlers learned of a Carib plot to attack them and struck first. Despite a hurricane destroying all their ships five weeks into the colony's existence, it persisted for two years. They had little or no knowledge of what was going on in Europe or its American colonies, just as Europeans and American colonists had little or no knowledge of them.
Biobulb is a bacteria-powered light bulb A group of students at the University of Wisconsin have come up with a way for us to light up our houses without electricity. Called the Biobulb, the technology relies on living bacteria to provide illumination. Discovery News reports that the Biobulb will include a genetically engineered species of E. coli bacteria, the kind living inside the intestines of humans and other animals. "Normally, these bacteria don’t glow in the dark, but researchers plan to introduce a loop of DNA to the microbes that will give them the genes for bioluminescence. The bacteria will glow like lightning bugs, jellyfish and bioluminescent plankton." “The Biobulb is essentially a closed ecosystem in a jar,” biochemistry major Michael Zaiken said in the team's RocketHub pitch. “It’s going to contain several different species of microorganisms, and each organism plays a role in the recycling of vital nutrients that each of the other microbes need to survive.” The team plans to experiment with different techniques to combat mutation in the plasmid, different colored light emission, and different triggers for the activation of the glowing bacteria. With the addition of ambient light during the day, which will help the bacteria stay alive and grow, the Biobulb should be able to glow for days or even months on end. You can watch the team's pitch video below to hear more about the project.
At the Karolinska Institute, Stockholm, Sweden, Professor Karl-Gustaf Luening and his assistants are studying the effects of atomic radiation on the reproductivity of mice. The Radiation-Genetics Department is directly responsible to the Swedish Atomic Energy Commission. Research on genetic mutation in some 20,000 mice - to obtain information on radiation effects on the human race - began two years ago when a pair of BCA mice were used to start a chain of generations of mice to be irradiated. Mice of the BCA type are the result of consistent inbreeding between the offspring of one pair, over more than 100 generations. In marked contrast to human beings, mice remain unaffected by this practice. At the Institute, 125 male mice of every generation bred are selected when 60 days old, and their reproductive organs - and thus their chromosomes - are exposed to a radiation dose of 275 roentgen, a higher level of radioactivity than was measured at Hiroshima. Female mice cannot be used for these tests, since radiation sterilizes them. Sperm cells of the irradiated male mice are examined before they reach maturity, and ten days later the next generation is bred. This process is repeated after 90 days. The object of these tests is to determine possible genetic changes in successive generations. So far, after four generations, such changes have not occurred. Research has established, however, that irradiation decreases the reproductive capacity of mice. Definite further results may not be obtained for some time, as mice breed only two generations per year.
The search for the earliest evidence of life on Earth has become pretty contentious in recent years. In 1993, a paper announced the discovery of fossil microbes in 3.5 billion year old rocks from Australia. A few years later, a paper claimed to show isotopic evidence for the presence of life in 3.8 billion year old rocks from a considerably colder place—coastal Greenland. While these dates have widely been used to mark the first life on Earth, considerable debate about the strength of the evidence for organisms persists. Other researchers proposed possible alternatives to living things that would explain the isotopic data (involving chemical alteration during metamorphism) and microfossils (mineral structures). So neither finding was a slam dunk on its own. Recently, however, new evidence has appeared to increase our confidence in the existence of life at these early times. Last year, a paper showed strong evidence for 3.2 billion year old fossil microbes. This week in Nature Geoscience, a paper, written in part by one of the critics of the work from the 1990s, purports to push that date back to 3.4 billion years. The new microfossils were found, once again, in Australian rocks. In fact, they were found only 20 miles from the site of the 1993 discovery. This particular rock unit, now metamorphosed, was once a sandy, shallow ocean beach. Amidst the sand grains, the researchers found clumps of what look like bacterial fossils surrounding microscopic pyrite crystals—a mineral made of iron and sulfur that is a common byproduct of microbial sulfate-reduction. The evidence goes beyond appearances, though. In the walls of the microfossils, the researchers found carbon and nitrogen—both indicative of organic material. The isotopic makeup of that carbon is consistent with biological origins. In addition, the sulfur isotopes in the pyrite show signs of microbial activity. 
Taken alone, some of these things could be explained by other mechanisms, but together they make a pretty strong case. The microfossils form clusters and chains of hollow "cells" that are 5-25 microns across, similar to modern and fossil bacteria and archaea. There is a very tight distribution of size, which is not what you'd expect from random mineral structures. Also, while the microfossils are plentiful, they are only found along with pyrite. It certainly seems like colonies of microbes caught in the act of munching on some tasty sulfur. That gives us a unique picture of the early biosphere. Chemical evidence has been found for microbes subsisting on sulfur and hydrogen, as well as photosynthesis, but this is the best look so far at what these communities might have been like. It’s thought that the Earth experienced a period of frequent asteroid impacts (known as the Late Heavy Bombardment) around 4 billion years ago. The bombardment was so extreme that it probably destroyed much of the Earth’s crust—we haven’t found any intact rocks older than this time, though individual crystals as old as 4.4 billion years have been discovered. That means that the clock probably didn’t start ticking on the origin of life until 4 billion years ago. At this point, we’ve got pretty clear evidence of life 600 million years after that, and less-certain evidence that indicates the delay may have been even shorter.
Comet to whiz past Mars in October 2014 Posted by Emily Lakdawalla 27-02-2013 17:36 CST A recently discovered comet, C/2013 A1 (Siding Spring), is going to be passing very close to Mars on October 19, 2014. The latest observations from the ISON-NM observatory, reported by Leonid Elenin, suggest that the comet will pass just 41,000 kilometers from Mars. Here's a diagram I made using the JPL Small-body Database Browser: When astronomers report the distances between two objects in space, they are almost always referring to the distances of the centers of the two objects. Distances are usually so great in space that this is a perfectly fine approximation that greatly simplifies calculations. But in cases like this one, you can't make that approximation anymore; you have to account for the size of Mars, which is roughly 7000 kilometers in diameter. So the comet will be passing within 38,000 kilometers of Mars' surface. That is very, very close. Note though that the uncertainty in the orbit is such that the possibility of a Mars impact can't yet be ruled out, though it's very unlikely. How close is 38,000 kilometers? One way we talk about that for close passes by Earth is to compare the distance to that of Earth-orbiting satellites. If it were Earth, 38,000 kilometers would be within the orbit of geosynchronous satellites. At Mars there are no geosynchronous satellites, and if there were, they'd be in a closer orbit, at 20,000 kilometers. So it's outside that distance. Probably more relevant is the question of whether there are any satellites at Mars that do get out to 38,000 kilometers; the answer to that is no. Mars Express has the largest orbit, and it gets to an altitude of about 10,000 kilometers. All the other active orbiters are down closer to 300 or 400 kilometers. So there's not likely to be a direct hazard to spacecraft in the form of a potential impact. But wait a minute. This is not a close-approaching asteroid we're talking about here. It is a comet. 
And the thing that makes comets comets is that as they approach the Sun they start evaporating, spouting jets of formerly frozen gases that entrain dust and ice particles into a gigantic coma easily spanning tens of thousands of kilometers. The coma can be observed telescopically at distances up to 100,000 kilometers from the nucleus, and I imagine that there is more material beyond that, though it's too sparse to be observed. So when Siding Spring visits Mars it will be bringing with it a lot of dust particles to which Mars and every spacecraft orbiting or on it will be exposed. So, are spacecraft at risk from comet C/2013 A1 (Siding Spring)? I am sure that there are "tiger teams" being convened by space agencies to investigate this question. I don't know enough to know the answer, though I can get some clues by querying the scientific literature. Of major importance is the size of the particles in the comet's coma. According to this review article in the Comets II tome, "the dust emitted from comets spans a broad size range, from submicrometer to millimeter or centimeter and larger." In fact, "most of the particulate mass shed from the nucleus is concentrated in large particles." Of course, these large particles are few and far between, much sparser than tiny (sub-micrometer) dust particles, which pose no risk to spacecraft. Still, I am nervous about what the larger-than-dust particles could do if they were in the wrong place at the wrong time. One thing that adds to the risk is that the particles will be moving at a very high relative speed. The comet has a hyperbolic, retrograde orbit, so its velocity with respect to Mars will be 56 kilometers per second. Wow. Still, space is very empty; the density of material in a sphere of radius 100,000 kilometers centered on a cometary nucleus only a couple of kilometers in diameter is near zero. So I suspect that the risk to orbiters is either "small" or "vanishingly small". 
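Both back-of-the-envelope figures in this post (the roughly 38,000 km surface distance, and the near-emptiness of the coma) can be sketched in a few lines. The total dust mass below is an invented placeholder chosen only for scale, not a measured value for Siding Spring:

```python
import math

# Closest approach to Mars' surface: center-to-center miss distance
# minus Mars' radius (figures from the post).
center_distance_km = 41_000
mars_diameter_km = 7_000
surface_distance_km = center_distance_km - mars_diameter_km / 2
print(surface_distance_km)  # 37500.0, i.e. roughly 38,000 km

# How empty is a 100,000 km coma? Spread a hypothetical (made-up)
# total dust mass over the sphere and look at the mean density.
dust_mass_kg = 1e10                        # assumption for scale only
coma_radius_m = 100_000 * 1000.0
volume_m3 = 4.0 / 3.0 * math.pi * coma_radius_m ** 3
density = dust_mass_kg / volume_m3
print(f"{density:.1e} kg/m^3")             # ~2.4e-15 kg/m^3: effectively vacuum
```

Even with a generous dust budget, the mean density comes out around fifteen orders of magnitude below that of air, which is why "near zero" is a fair description.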
I'm looking forward to reading what experts have to say. As for the rovers, my initial instinct is that spacecraft on the surface should be safe from dust; most particles would burn up in the atmosphere, briefly flaring into meteors. Still, the smallest impact craters yet seen on Mars are only 10 centimeters wide. According to one study, the smallest object that could survive to the surface would be about 5 millimeters in diameter. Which is pretty small. Something that small would be greatly slowed by its passage through the atmosphere. How many of these particles are there, and how dense a "rain" would they present? Is it only one particle per square kilometer? Then we're probably fine. Is it one per square meter? Then I'm more scared. Leaving aside the question of safety, what will the rovers be able to see from the surface? Again, I'm not sure. Comets can be bright but this one will be spread out over so much of the sky that I'm not sure what we'll see. It's way too soon to plan rover observations though. As we learn more about the comet and can better predict how bright it will be, the imaging teams will do their calculations to figure out whether the rovers will see anything if they gaze skyward. If they will -- and if the rovers are both still functioning then -- I'm sure they'll shoot photos! More observations of the comet are needed to pin down its orbit. It's getting close to the Sun in our sky now, so it's getting harder to see, but it will be in a better position for observation by late summer. I'll sure be keeping my eye on this one!
This essay about teaching controversial or difficult issues was written by Jinnie Spiegler for the New York Times Learning Network, a great resource on teaching and learning. The piece offers many examples of resources teachers can use from both TeachableMoment and the Learning Network, including ideas for teaching on the Trayvon Martin case. In 1983, ABC aired "The Day After," a film depicting the fictional day after a nuclear war and its impact on several Midwestern families. Everyone was encouraged to watch it, but teachers were worried that their students would see the film and not be emotionally prepared. So we at Morningside Center for Teaching Social Responsibility developed a teaching guide about the film. After that, Morningside Center, then named Educators for Social Responsibility Metro, became known as an organization that helps teachers bring up difficult and controversial topics in the classroom. Almost 20 years later, after 9/11, Morningside Center again moved to support teachers in addressing a sensitive and complex topic. This time we launched a new website called TeachableMoment.org, which offered a myriad of lessons and approaches aimed at helping students grapple with both the emotional issues evoked by 9/11 and the many social and political issues surrounding it. Our mission is to help teachers looking for ways to encourage critical thinking on issues of the day, as well as foster a positive classroom environment. Our approach is to integrate social and emotional skills with an exploration of interesting and relevant content. Because we know that teachers often avoid "hot-button" topics because the issues are so complex, or because they don't feel prepared to handle the strong feelings and opinions discussion might stir, below we offer 10 suggestions for how to take some of these issues on in constructive, thoughtful and sensitive ways. 1. Create a safe, respectful, and supportive tone in your classroom.
Sometimes students don't participate in discussions about sensitive issues because they worry that they will be teased, their opinions will be ridiculed, or strong feelings will arise because the topic hits close to home. To create a safe and supportive environment, make group agreements at the beginning of the year. These might include guidelines like "no name-calling," "no interrupting," "listen without judgment," "share to your level of comfort," "you have the right to pass," and the like. Remind students that when they talk about groups of people, they should be careful to use the word "some," not "all." Do community-building activities to create a positive and respectful classroom environment, and resolve conflicts proactively. Most importantly, model how to talk about sensitive and controversial topics by being honest and open yourself, respecting different points of view and accepting students' feelings. 2. Prepare yourself. Before you delve into a difficult topic with your students, educate yourself with background knowledge. Times Topics pages, which collect all Times news, Opinion and multimedia about a subject, can be helpful, as can the Room for Debate blog, on which experts with a range of points of view are invited to discuss topics in the news. For example, if you are going to discuss the Occupy Wall Street protests and their connection to income inequality, get a basic overview from the Times Topics page or read the Room for Debate discussion "Is It Effective to Occupy Wall Street?" to explore different points of view. To understand how socioeconomic class operates, you could study this infographic. TeachableMoment.org also has up-to-date lessons on many key issues that provide both background information and suggested activities. Articulate your own point of view on the topic for yourself so that when students ask for your opinion - and they will - you'll be prepared.
Though many teachers keep their own points of view out of the classroom entirely, if it is appropriate to share yours, wait until the end of the discussion. Also consider in advance the possible "triggers" for your students. For example, if you are discussing gay marriage, remember that you will almost certainly have students who are L.G.B.T. themselves, who have gay parents, relatives or friends, or who have religious beliefs in conflict with gay marriage. Some of these students may feel relieved to discuss a topic so relevant to their lives, while others may feel embarrassed. This doesn't mean you shouldn't discuss the topic, but you also should never highlight those students' situations. Be aware that strong feelings could arise and plan in advance for how to handle them. Remind your students about the ground rules and explain that this issue may affect some students very personally. Depending on the topic, you may even want to tell those students, or their parents, who have a very personal connection to it in advance. 3. Find out what students already know or have experienced about the topic. Start with what the students already know. You can assess their prior knowledge in a variety of ways: create a semantic web as a whole class and brainstorm associations with the topic; have them talk with a partner; or have them write in response to a prompt. (If the topic is very delicate, you might ask them to write anonymously first, then use that writing to decide how to proceed in a later class.) Make a list of all the questions they have, either publicly or for your own planning. These questions are an additional window into what students already know, or think they know, and what they don't. Be sure to ask them to articulate where they got their information and opinions, and invite them to talk about how they know their sources are reliable. Remind them that, when learning about or discussing sensitive information, they should always ask, "What do I know, and how do I know it?"
4. Compile the students' questions and examine them together. After giving students basic information about your topic, elicit questions they still have. If they are focusing on content questions (who, what, where, why, when), expand their inquiry so they think beyond the basic facts and dig into deeper or "essential" questions. For example, if you are going to discuss the killing of Osama bin Laden, content questions might be: Who was Osama bin Laden? Where did he grow up? What did he believe? Why did he plan the 9/11 attacks? How was he captured? These questions are important, but questions like "Why do people take violent actions?" push students to go deeper, make connections beyond one news story and lead to a more complex understanding of the situation. Another fruitful line of questioning might be to ask how the issue affects the individuals involved and how it affects society at large. 5. Make connections. Help students make connections between the topic at hand and their own lives. How does the issue affect them or their family, friends or community? Why should they care? If there is no obvious connection, help them find one. For example, if you are talking about the earthquake in Haiti and the continuing crisis it has created, but most of your students have no connection to Haiti, you might ask if any have relatives in places where other natural disasters have occurred. Often, starting with multimedia, whether photos, video or infographics, can hook students. You might also help them make connections by thinking about what else they know about, in current news or in history, that shares some of the same details. 6. Have students investigate and learn more. It is critical that students have a chance to find answers to their questions, conduct research, talk to people, and learn more in a way that makes the topic meaningful for them. (First, however, make sure your students understand how to tell the difference between opinions and facts.
You might make a T-chart and use examples from a news article on a topic you're studying to demonstrate, then invite students to find and share their own examples from the news.) For example, if you were engaging your class on the topic of Joe Paterno and the Penn State sex abuse scandal, students could read and compare information and opinion from sources as varied as sports fans, college journalists and groups devoted to victims' rights. In The Times alone, they could find a range of fact and opinion. For instance, they might start with this news article for factual background information, then read an editorial to see how an Opinion piece about the same topic is written. Students might then study this timeline about the events leading up to the scandal or watch a video about how Joe Paterno's tenure ended as a result of it. Finally, they might learn about how the public felt by examining this poll. Or, if you are discussing the Trayvon Martin case, students can comb the Times Topics page to find everything from news stories to an editorial by Charles Blow to information on the "Stand Your Ground" law. They might also look for related news and Opinion, such as this Op-Ed by a 23-year-old about being frequently stopped by the police, and a Room for Debate discussion on "Young, Black and Male in America". And for an opportunity to hear other students' voices, see this Learning Network Student Opinion question: Have You Ever Interacted with the Police? Remember to point students to sources with contrasting political slants as well. For example, they might contrast reporting on the same topic in The Progressive versus The Weekly Standard, or the Center for American Progress versus the Heritage Foundation. Encourage students to seek out a range of people to learn more, including people who have strong opinions or special expertise on the topic. While students are gathering this information, emphasize that even "factual" information has a point of view.
While they are researching, they should ask themselves: What is the point of view of this source? How reliable is it, and why?

7. Explore students' opinions and promote dialogue. Once they have researched a topic thoroughly, students are ready to form and express their own points of view. It is important to encourage them to be open to different points of view. You might do an "opinion continuum" exercise in which they show whether they "agree," "strongly agree," "disagree" or "strongly disagree," or are "somewhere in between" or "not sure," on a variety of topics. (For an example of this, see a lesson we posted last year on the deficit debate, which includes an opinion continuum activity.) Help promote dialogue, as opposed to debate. Dialogue aims for understanding, an enlargement of view, a complication of one's thinking and an openness to change. Provide opportunities for various kinds of group discussion where different perspectives get aired. These can include paired shares, conversation circles, group go-rounds, panels, micro-labs and fishbowls. (These are included in lessons available on TeachableMoment.org.)

8. Be responsive to feelings and values. Even though you've set up ground rules at the outset and developed a respectful classroom environment, once a hot topic emerges you need to continue to watch the classroom tone. Remind students about the ground rules, especially if they are violated. Take the emotional "temperature" of the classroom periodically to find out how students are feeling, and encourage the discussion of feelings throughout. Build in different ways for students to participate, but also to opt out if a discussion is emotionally difficult. Give students opportunities to write their thoughts, perhaps anonymously, instead of sharing verbally. Remind students that while you want them to participate, they always have the right to "pass" if they feel uncomfortable.
Again, if you anticipate that a certain topic may elicit too many strong feelings for a particular student, talk with that student in advance.

9. Make home connections. Use parents and other family members as primary sources by having students interview them as part of their research. Communicate with parents about your approach to discussing controversial issues. You can do this by sending a letter home at the beginning of the year or by discussing the issue on curriculum night. Invite parents to let you know if there are any sensitive issues for their family so you will be prepared.

10. Do Something. If students have become engaged in an issue you've discussed and feel strongly about it, they may want to do something about it, so your study should include an action component. This could involve learning more and doing more focused research. It could also involve helping students carry out a social action or community service project related to the issue. Students can learn more about how other young people have taken action on recent issues in the news, such as starting a petition, visiting the Occupy Wall Street protest, organizing large student demonstrations in Chile to improve education, or speaking out about anti-gay bullying. If the issue is a political one, they can engage in writing letters, speaking at public hearings, raising money, participating in demonstrations or writing articles for a school or local newspaper.

See the Center's website, TeachableMoment.Org, for hundreds of inquiry-oriented classroom lessons and teaching ideas on everything from the 2012 election and the Occupy movement to Who Makes Your iPhone.

This essay originally appeared in the New York Times Learning Network. We welcome your comments. Please email them to: [email protected].
For the school year 2018–2019, all year groups are following the new National Curriculum in Mathematics. The official documentation, which sets out what children are expected to know in each year group, is available in the Programmes of Study. The aims of this curriculum are to ensure that pupils:
- become fluent in the fundamentals of mathematics, including through varied and frequent practice with increasingly complex problems over time, so that pupils develop conceptual understanding and the ability to recall and apply knowledge rapidly and accurately
- reason mathematically by following a line of enquiry, conjecturing relationships and generalisations, and developing an argument, justification or proof using mathematical language
- can solve problems by applying their mathematics to a variety of routine and non-routine problems with increasing sophistication, including breaking down problems into a series of simpler steps and persevering in seeking solutions

At Fairfield we use a range of strategies and resources to teach Maths, and we always try to ensure that what we teach is right for the children, so that they make good progress and are effectively challenged. We are very well resourced, and children get consistently good teaching in Maths. Our children in Years 1 to 6 are enjoying learning mathematics using the Singapore Maths methodology. The system that our teachers use ensures that our children understand and can apply their learning to problem solving. Our children are loving their learning.

Excellent learning and problem solving in Year 1

Please open the links below for additional information regarding:
FL.SC.K.E. Earth and Space Science

SC.K.E.5. Earth in Space and Time - Humans continue to explore Earth's place in space. Gravity and energy influence the formation of galaxies, including our own Milky Way Galaxy, stars, the Solar System, and Earth. Humankind's need to explore continues to lead to the development of knowledge and understanding of our Solar System.
- SC.K.E.5.2. Recognize the repeating pattern of day and night.
- SC.K.E.5.3. Recognize that the Sun can only be seen in the daytime.
- SC.K.E.5.4. Observe that sometimes the Moon can be seen at night and sometimes during the day.

SC.K.L.14. Organization and Development of Living Organisms - A. All plants and animals, including humans, are alike in some ways and different in others. B. All plants and animals, including humans, have internal parts and external structures that function to keep them alive and help them grow and reproduce. C. Humans can better understand the natural world through careful observation.
- SC.K.L.14.1. Recognize the five senses and related body parts. (Resource: "My senses" - Quiz, Flash Cards, Worksheet, Game & Study Guide)
- SC.K.L.14.2. Recognize that some books and other media portray animals and plants with characteristics and behaviors they do not have in real life.
- SC.K.L.14.3. Observe plants and animals, describe how they are alike and how they are different in the way they look and in the things they do.

FL.SC.K.N. Nature of Science

SC.K.N.1. The Practice of Science - A: Scientific inquiry is a multifaceted activity; the processes of science include the formulation of scientifically investigable questions, construction of investigations into those questions, the collection of appropriate data, the evaluation of the meaning of those data, and the communication of this evaluation.
B: The processes of science frequently do not correspond to the traditional portrayal of "the scientific method." C: Scientific argumentation is a necessary part of scientific inquiry and plays an important role in the generation and validation of scientific knowledge. D: Scientific knowledge is based on observation and inference; it is important to recognize that these are very different things. Not only does science require creativity in its methods and processes, but also in its questions and explanations.
- SC.K.N.1.2. Make observations of the natural world and know that they are descriptors collected using the five senses.
- SC.K.N.1.5. Recognize that learning can come from careful observation.

SC.K.P.12. Motion of Objects - A. Motion is a key characteristic of all matter that can be observed, described, and measured. B. The motion of objects can be changed by forces.
- SC.K.P.12.1. Investigate that things move in different ways, such as fast, slow, etc.

SC.K.P.13. Forces and Changes in Motion - A. It takes energy to change the motion of objects. B. Energy change is understood in terms of forces--pushes or pulls. C. Some forces act through physical contact, while others act at a distance.
- SC.K.P.13.1. Observe that a push or a pull can change the way an object is moving.

SC.K.P.8. Properties of Matter - A. All objects and substances in the world are made of matter. Matter has two fundamental properties: matter takes up space and matter has mass. B. Objects and substances can be classified by their physical and chemical properties. Mass is the amount of matter (or "stuff") in an object. Weight, on the other hand, is the measure of the force of attraction (gravitational force) between an object and Earth.
- SC.K.P.8.1. Sort objects by observable properties, such as size, shape, color, temperature (hot or cold), weight (heavy or light) and texture.
Why Hot Water Freezes Faster Than Cold—Physicists Solve the Mpemba Effect

Aristotle first noticed that hot water freezes faster than cold, but chemists have always struggled to explain the paradox. Until now.

Water may be one of the most abundant compounds on Earth, but it is also one of the most mysterious. For example, like most liquids it becomes denser as it cools. But unlike them, it reaches a state of maximum density at 4°C and then becomes less dense before it freezes. In solid form, it is less dense still, which is why standard ice floats on water. That's one reason why life on Earth has flourished—if ice were denser than water, lakes and oceans would freeze from the bottom up, almost certainly preventing the kind of chemistry that makes life possible.

Then there is the strange Mpemba effect, named after the Tanzanian student who, in cookery classes in the early 1960s, discovered that a hot ice cream mix freezes faster than a cold mix. (In fact, the effect has been noted by many scientists throughout history, including Aristotle, Francis Bacon and René Descartes.) The Mpemba effect is the observation that warm water freezes more quickly than cold water. The effect has been measured on many occasions, with many explanations put forward. One idea is that warm containers make better thermal contact with a refrigerator and so conduct heat more efficiently. Hence the faster freezing. Another is that warm water evaporates rapidly, and since this is an endothermic process, it cools the water, making it freeze more quickly.

None of these explanations is entirely convincing, which is why the true explanation is still up for grabs. Today Xi Zhang at the Nanyang Technological University in Singapore and a few pals provide one. These guys say that the Mpemba paradox is the result of the unique properties of the different bonds that hold water together. What's so odd about the bonds in water?
A single water molecule consists of a relatively large oxygen atom joined to two smaller hydrogen atoms by standard covalent bonds. But put water molecules together and hydrogen bonds also begin to play an important role. These occur when the hydrogen in one molecule comes close to the oxygen in another and bonds to it.

Hydrogen bonds are weaker than covalent bonds but stronger than the van der Waals forces that geckos use to climb walls. Chemists have long known that they are important. For example, water's boiling point is much higher than that of other liquids with similarly sized molecules because hydrogen bonds hold it together. But in recent years, chemists have become increasingly aware of the more subtle roles that hydrogen bonds can play. For example, water molecules inside narrow capillaries form into chains held together by hydrogen bonds. This plays an important role in trees and plants, where water evaporation across a leaf membrane effectively pulls a chain of water molecules up from the roots.

Now Xi and co say hydrogen bonds also explain the Mpemba effect. Their key idea is that hydrogen bonds bring water molecules into close contact, and when this happens the natural repulsion between the molecules causes the covalent O-H bonds to stretch and store energy. But as the liquid warms up, it forces the hydrogen bonds to stretch and the water molecules sit further apart. This allows the covalent bonds to shrink again and give up their energy. The important point is that this process, in which the covalent bonds give up energy, is equivalent to cooling. In fact, the effect is additional to the conventional process of cooling. So warm water ought to cool faster than cold water, they say. And that's exactly what is observed in the Mpemba effect.

These guys have calculated the magnitude of the additional cooling effect and show that it exactly accounts for the observed differences in experiments that measure the different cooling rates of hot and cold water. Voila!
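To get a feel for the logic of the claim, here is a minimal toy simulation. It is not Xi and co's calculation: it simply assumes, purely for illustration, that the extra bond-energy release acts like an additional cooling coefficient that grows with the sample's initial temperature, on top of ordinary Newtonian cooling. The function name and every parameter value are hypothetical.

```python
def time_to_freeze(T0, T_env=-10.0, k=0.01, a=0.03, dt=1.0):
    """Toy model: Newtonian cooling toward T_env, with an effective
    cooling rate boosted in proportion to the initial temperature T0,
    standing in for the covalent-bond energy stored in hotter water.
    All parameter values are illustrative, not fitted to any data."""
    k_eff = k * (1.0 + a * T0)   # hotter start => more stored bond energy
    T, t = T0, 0.0
    while T > 0.0:               # crude criterion: time to reach 0 deg C
        T -= k_eff * (T - T_env) * dt
        t += dt
    return t

hot, cold = time_to_freeze(70.0), time_to_freeze(30.0)
print(hot < cold)  # True: the hotter sample reaches 0 deg C first
```

With these made-up numbers the 70°C sample overtakes the 30°C one, reproducing a Mpemba-style ordering; set `a=0` to recover plain Newtonian cooling, in which the cold sample always freezes first. The point of the sketch is only that a history-dependent extra cooling term can invert the ordering, which is the shape of the argument Xi and co make.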
That's an interesting insight into the complex and mysterious properties of water, which still give chemists sleepless nights. But while Xi and co's idea is convincing, it is not quite the theoretical slam dunk that many physicists will require to settle the question. That's because the new theory lacks predictive power—at least in this paper. Xi and co need to use their theory to predict a new property of water that conventional thinking about water does not. For example, the shortened covalent bonds might give rise to some measurable property of the water that would not otherwise be present. The discovery and measurement of this property would be the coup de grâce that their theory needs.

So while these guys may well have solved the riddle of the Mpemba effect, they will probably need to work a little harder to convince everyone. Nevertheless, interesting stuff!

Ref: arxiv.org/abs/1310.6514: O:H-O Bond Anomalous Relaxation Resolving Mpemba Paradox
From the moment of birth, babies grow and develop physically, mentally, and socially. In the first year, a baby grows faster than at any other time in later life. Within the first few years, children learn basic skills and go on to acquire a wide range of physical and intellectual accomplishments. Children pass through predictable patterns of growth and development, but each child progresses at a different rate. Physical growth and intellectual development are partly determined by genetic factors, but these processes are also affected by general health and stimulation from the environment.

Growth and development

Babies are born with various primitive reflexes: they communicate by crying and instinctively suck when offered a nipple. At first, babies grow fast, tripling in weight and growing in length by about 25 cm (10 in) in the first year. After about the age of 2, children begin a long, slower period of growth, which allows time for complex skills to be acquired. A child's intellectual skills and physical coordination depend on the healthy development of the muscular and nervous systems. The earliest accomplishments are basic skills such as walking, talking, and feeding themselves. As independence grows, children recognize themselves as individuals, interact more actively with their surroundings, and develop close relationships. By the age of 12, they have usually acquired sophisticated language and numeracy skills and wide-ranging physical abilities.

Between the ages of about 10 and 15, children undergo the dramatic changes of puberty, with maturation of the reproductive organs and the development of secondary sexual characteristics such as breasts and body hair. These physical changes occur some time before emotional maturation, and adolescence is therefore a time of adjustment to adulthood. At the age of 18, young people are considered adults in most societies, although they continue to develop psychologically for many years.
From the 2010 revision of the Complete Home Medical Guide © Dorling Kindersley Limited.
Terms Of The May Fourth Movement

Disclaimer: This work has been submitted by a student. This is not an example of the work written by our professional academic writers. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of UK Essays.

Published: Fri, 21 Apr 2017

Both the 1911 Revolution and the May Fourth Movement are seen as key events in modern Chinese history. This essay argues that, while this is true, the May Fourth Movement was more important in terms of political, cultural and social transformation. The 1911 Revolution was most significant in terms of political change, as it did result in the overthrow of the Manchu Qing Dynasty. Despite this, no stable or effective government was created. The May Fourth Movement, on the other hand, saw the government concede to the masses, who joined together with a new political awareness, and it eventually led to the founding of the Chinese Communist Party. The 1911 Revolution also involved ideas and debate about cultural change, particularly in relation to the family structure; however, no lasting or revolutionary transformation occurred. The May Fourth Movement, in contrast, resulted in change in the fields of literature, education, drama, historiography and tradition. Similarly to the cultural terms of the 1911 Revolution, the social terms involved ideas and action for women's rights and equality, but no radical or long-term change resulted. The May Fourth Movement, conversely, resulted in elevated social standing for both women and youth through mass community attention and intellectual support. The May Fourth Movement was therefore politically, culturally and socially more important than the 1911 Revolution, as it resulted in permanent and innovative transformation.

In political terms the May Fourth Movement was superior to the 1911 Revolution.
Despite this, the 1911 Revolution did result in some significant political changes. The most significant political consequence of the 1911 Revolution was the removal from power of the Manchu Qing Dynasty, and the cessation of China's two-thousand-year-old monarchical system. (Hsüeh, p.5.) Despite attempts to restore the traditional monarchical order, the revolution did successfully ensure that efforts to revert politically 'ignominiously failed.' (Hsüeh, p.6.) Michael Gasster argues that whilst the 1911 Revolution was a movement representing primarily the Chinese bourgeoisie, it was not an overall 'failure', as it resulted in the overthrow of the Manchu Qing Dynasty, which 'paved the way' for future democratic revolutions. (Gasster, p.31.) Also of political significance were some changes made to the legal system. At the end of the Qing Dynasty, the Active Criminal Law of the Qing Dynasty went into effect. This code eradicated old-world methods of corporal punishment and torture for civil offences and replaced them with economic penalties. (Changli, p.31.)

Despite some political success, the 1911 Revolution ultimately failed to realize a secure and successful government, as political parties and associations were limited and the political process was influenced foremost by private loyalty and regional sentiment. (Hsüeh, p.6.) The masses were also not prepared for the establishment of a functioning democracy, and tradition was still widely supported by the populace and some of the elite. (Hsüeh, p.7.) Most significant to the failure of the 1911 Revolution was the betrayal of Yuan Shikai. Yuan became Provisional President of the Republic of China on the fifteenth of February 1912 and implemented changes such as moving the capital to Beijing and installing his own 'henchmen in powerful executive positions' in an attempt to become Emperor. (Mackerras, p.32-33.)
Whilst the 1911 Revolution did remove the Manchu Qing Dynasty from power, abolish the monarchical system and implement some legal changes, the Revolution was riddled with political turmoil and was unable to introduce stable and effective government to China.

As stated above, the May Fourth Movement was superior to the 1911 Revolution in terms of political consequences. Specifically, the May Fourth Movement was a protest against the effects the Versailles Peace Treaty and Japan's 'Twenty-One Demands' would have on China. (Chen, p.63.) The May Fourth Incident and resulting political actions concluded with China's refusal to sign the Versailles Peace Treaty. (Chen, p.71.) The May Fourth Movement also successfully ensured that the government officials Chang, Lu and Tsao were dismissed. (Hao, p.87.) Chang, Lu and Tsao were the officials whom the protestors involved with the May Fourth Movement held responsible for negotiating with Japan in regards to the 'Twenty-One Demands' and for adopting a pro-Japanese foreign policy. (Hao, p.87.)

Along with succeeding in pressuring the government into refusing to sign the Peace Treaty and dismissing officials, the May Fourth Movement also achieved other significant results. Of high importance to the May Fourth Movement was the involvement of the masses. The May Fourth intellectuals and students understood the enormity of mass support and were able to enthusiastically rally and direct the population for 'concerted political action.' (Chen, p.73.) The mobilization of the population was also not limited to specific social groups or classes. Zhidong Hao states that when merchants, industrialists, urban workers and others heard of the mass arrests and abuse of student protestors, they added their support to the movement and pushed for strikes. (Hao, p.83.) Another significant result of the May Fourth Movement was its ability to affect political thought throughout China.
Prior to the May Fourth Movement, the political climate of the country was characterized by political instability and fighting warlords, a result of the 1911 Revolution's inability to implement secure government. (Hao, p.81.) With the rise and drive of the May Fourth Movement, the nation's political consciousness was awakened and began to focus on the endeavour to maintain the independence and equality of China internationally through a democratic government. (Chen, p.77.) In order to do this, the May Fourth Movement and its supporters took up a fiercely anti-warlord position to achieve governmental stability. (Chen, p.77.)

Also of great significance to the May Fourth Movement was the interest it created in Marxism and Communism. (Wang, p.3.) Clarence Hamilton argues that World War One 'startled the nation into a realization' that the repression, submission and conformity demanded by an autocratic government were 'being condemned before the democratic consciousness of the world.' (Hamilton, p.226.) This realization marked a radical turn away from conservative and traditional politics towards the more radical Communism. This change facilitated the founding of the Chinese Communist Party and the resulting Communist Revolution, which Hao argues was 'one of the most important legacies of the May Fourth Movement.' (Hao, p.97.)

In all, the May Fourth Movement was politically more important than the 1911 Revolution, as it resulted in the government surrendering to mass pressure. It was also the first popular mass movement in China's history that spawned political awareness and resisted warlordism. Finally, the May Fourth Movement led to the fundamentally significant establishment of the Chinese Communist Party.

As the 1911 Revolution was politically inferior to the May Fourth Movement, it was also significantly inferior in terms of cultural consequences. During the time of the 1911 Revolution, aspects of traditional Chinese culture, in particular the culture of the family, were questioned.
Nevertheless, such questions and debates did not result in any significant cultural changes. Traditionally, the culture of China dictated that women had two responsibilities within the family environment: the first being to make and maintain clothing, the second being to prepare food. (Changli, p.24.) Women were therefore subordinate to men, who were responsible for attaining financial revenue and making decisions regarding the family, its property and possessions. (Changli, p.24-25.) In this sense, women were unable to own personal property or make decisions regarding the family's material wealth. (Changli, p.24-25.) Whilst women's status within the family and rights to own property were contested within the 'public discourse', women's rights did not see any improvement during the 1911 Revolution. (Changli, p.31.)

Rong Tiesheng also discusses the traditional culture of the family and the effect it had on women. Tiesheng argues in terms of political liberation and freedom, and states that women were in a state of dependence within the household. (Tiesheng, p.173.) In order to overcome this, Tiesheng argues that women needed to become independent through 'joining together.' (Tiesheng, p.173.) Even so, as with women's economic status within the family, women's independence was a cultural idea that was not realised. The 1911 Revolution was a period in which the culture of the family, and in particular women's status within it, was questioned. Despite such questions and discussions, no significant cultural changes were achieved as a result of the 1911 Revolution.

In contrast to the 1911 Revolution, the May Fourth Movement resulted in a vast range of cultural changes and achievements. Colin Mackerras argues that the May Fourth Movement pushed for literature to be written in a vernacular that could be more readily understood and appreciated by the masses. (Mackerras, p.85.)
Hamilton argues that this drive for colloquial writing resulted in periodicals from universities and minor institutions adopting a more 'vulgate tongue'. (Hamilton, p.229.) The movement also encouraged the use of 'realism' in literature, and the resulting works focused on discussions of 'social causes' such as the rights of women and the oppressed and maintaining nationwide integrity. (Mackerras, p.102 and 43.)

The revolution in literature also extended to the field of education. Arthur Hummel argues that as a result of the literature movement, education was transformed. An example of this transformation was the decision of the Ministry of Education to allow simplified vernacular to be used in textbooks for primary, middle and high school students. (Hummel, p.56.) Education was also to be extended to the wider public. Hummel argues that the new literature was able to facilitate the education of the wider public, including the illiterate. (Hummel, p.58.) The education system was also transformed in other ways. Firstly, the examination system, which involved stereotyped Confucian demands, was completely abolished. The abolition of the examination system created new opportunities for those who previously could not access the system; it also rendered redundant the privileges previously given to those brought up with Confucian ideas. (Mackerras, p.10 and 92.) Secondly, there was a new governmental willingness to send students abroad, which resulted in opportunities for students to learn from western educational institutions. (Hao, p.84.)

The cultural consequences of the May Fourth Movement also extended to the field of spoken drama. In this sense, the May Fourth Movement provided new, inspired ideas to the spoken drama. (Mackerras, p.102.) This new approach gave rise to an immense number of drama societies, all of which experimented with and adopted new approaches to theatre, thus expanding the diversity of the artistic field. (Mackerras, p.106.)
The cultural results of the May Fourth Movement also extended to the study and writing of history. Mackerras states that methods of historiography from the West, which involved writing in a more thematic and critical way, were adopted by Chinese scholars. (Mackerras, p.42.) This was a tremendous change, as traditionally Chinese histories were chronologically ordered with little attention given to analysing events. (Mackerras, p.42.) New histories were also being written, as intellectuals questioned the impartiality of Confucian works because they reflected the perspective of 'only one stratum of society.' (Hummel, p.61.) Hummel also states that the May Fourth Movement led to intellectuals and scholars becoming preoccupied with re-organizing and re-evaluating their culture from a modern point of view. (Hummel, p.58.)

In addition, supporters of the May Fourth Movement took part in a re-evaluation of Chinese tradition. Mackerras argues that advocates of the new culture saw China's traditional culture as an obstacle to advancement in the 'modern world.' (Mackerras, p.41.) Intellectuals also believed that traditional culture placed China in an internationally inferior position and prevented the advancement that could allow China to match Japan and the West. (Asia for Educators.)

All in all, in cultural terms the May Fourth Movement was far more important than the 1911 Revolution. Cultural agitation resulted in transformation in literature, education, drama and historiography. Cultural debate also resulted in aversion to tradition and enthusiasm for modernity.

The 1911 Revolution in social terms was reasonably important; however, like the cultural terms of the Revolution, ideas and action did not result in drastic social change. The social agitation that occurred during the 1911 Revolution was mostly in terms of women's rights.
The women's movement during this time was concerned with achieving male-female equality, attaining female freedom and instilling 'female virtues' into Chinese society. (Tiesheng, p.174.) Women's groups worked actively during this time and achieved some equality; however, this equality was not sustained. An example is the women's military group the Shanghai Northern Expedition Women's Corps. This group was responsible for foreign liaison and was given rifles, ammunition and military supplies. (Tiesheng, p.180.) Despite the resources given to the group and its role as foreign liaison, the group was disbanded, as military leaders did not support the women's desire to go to the warfront. (Tiesheng, p.180.) Women's groups also tried to raise the social status of women by pursuing and promoting education, real jobs and full citizenship for women. (Tiesheng, p.180.)

One woman who worked determinedly in the attempt to achieve equality was the 'famous female martyr' Qiu Jin. (Mackerras, p.87.) Mackerras states that Qiu protested against foot binding and arranged marriages, and also organised a school for girls. (Mackerras, p.87.) Qiu was later executed; however, her ideas gained attention and provided inspiration for the later May Fourth Movement. (Mackerras, p.87.) Women also aimed to improve their social standing through petitioning for suffrage. In 1912 women lobbied the National Assembly to include female suffrage in the constitution; their efforts nonetheless did not result in a 'clear rule concerning women's suffrage.' (Tiesheng, p.184.) As the momentum of the 1911 Revolution subsided, so did the women's interest groups, with many members returning to being 'virtuous wives and good mothers.' (Tiesheng, p.189.)

Whilst the 1911 Revolution brought attention to male-female equality and female freedom through participation in military activities and political action, no significant lasting results were achieved.
Nevertheless, the legacy of Qiu Jin and her contemporaries was evoked during the social activities of women in later years. As with the political and cultural terms of both the 1911 Revolution and the May Fourth Movement, in social terms the May Fourth Movement was again more significant. Whilst ideas of 'gender equality, women's liberation' and equal rights were discussed and petitioned for during the 1911 Revolution, they became mainstream during the May Fourth Movement. (Changli, p.21.) Mackerras states that the movement produced public appreciation for women's rights and also created sympathy for feminism among the general public. (Mackerras, p.42 and 87.) Ideas of liberation, women's rights and gender equality were pursued through encouraging women to participate in the daily struggle for equal opportunities and privileges. (Changli, p.36.) Tiesheng also states that the consideration of Chinese women's plight resulted in various educational institutions publishing and distributing 'periodicals' promoting women's liberation. (Tiesheng, p.194.)

The social revolution also extended beyond women to the youth of China. Through the May Fourth Incident, China's youth encouraged the masses to become involved, and this resulted in a transformation of their social status. The youth were recognized and appreciated for their ability to 'mobilize', value and 'direct' the masses to achieve results. (Chen, p.73.) The May Fourth Movement also encouraged the masses of China to become 'more receptive to the ideas' of the youth. (Chen, p.76.)

The May Fourth Movement was more important in social terms than the 1911 Revolution, as it did boost the social status of both Chinese women and youth. Women's status was enhanced through ideas about women and feminism becoming popular in public discourse. Women's participation and the support of intellectual institutions also aided in improving women's social standing.
The social position of China’s youth improved through the youth mobilizing, understanding and appreciating the masses. This was also assisted by a newfound public receptiveness to youth ideas and activities. Overall, as with the political and cultural terms of the 1911 Revolution and the May Fourth Movement, socially the May Fourth Movement was superior. In sum, the May Fourth Movement was more important than the 1911 Revolution. The May Fourth Movement was more important politically as it resulted in the government conceding to public pressure. The May Fourth Movement was also the first popular and mass movement in China’s history which targeted warlordism and encouraged political awareness. Also of fundamental importance is that the May Fourth Movement provided the foundation for the establishment of the Chinese Communist Party. The May Fourth Movement was also more culturally significant than the 1911 Revolution. The 1911 Revolution involved ideas about creating cultural change, but the May Fourth Movement resulted in revolution in literature, education, drama and historiography. Cultural agitation also resulted in aversion to tradition and a newfound interest in modern and western culture. Finally, the May Fourth Movement was also more important than the 1911 Revolution in social terms. The 1911 Revolution involved debate and protest for women’s rights and equality, but no drastic or lasting change resulted. The May Fourth Movement, on the other hand, resulted in higher social standing for both women and youth through mass public interest and intellectual support. To conclude, the May Fourth Movement was politically, culturally and socially superior to the 1911 Revolution in terms of lasting revolutionary change.
A common noun names a general person, place or thing.
examples: I went to the city. The man was kind.

A proper noun names a specific person, place, or thing. Always capitalize the first letter of a proper noun.
examples: I went to San Francisco. Mr. Brown was kind.

Directions: Underline the common nouns with a blue crayon. Underline the proper nouns with a red crayon.

1. The house is on Main Street. (1 common, 1 proper)
2. Karen played with her sister. (1 common, 1 proper)
3. Fran went to Friendly’s Pet Shop. (2 proper)
4. The car stopped quickly. (1 common)
5. Morgan Boulevard is a busy street. (1 proper, 1 common)
6. Michael and his friend chased the kitten. (1 proper, 2 common)
7. Did you see Kevin at the party? (1 proper, 1 common)
8. Laura looked at the stars through her telescope. (1 proper, 2 common)
9. There were no yellow markers in the box. (2 common)
10. Have you ever eaten a cheeseburger at Burger Planet? (1 common, 1 proper)
11. A young boy found a dollar on the sidewalk. (3 common)
12. Mary sat by the fire and roasted a marshmallow. (1 proper, 2 common)

Directions: Write the word “common” next to each common noun. Re-write each proper noun correctly.

13. alice smith ______________
14. carpenter _______________
15. dog _______________
16. max _______________
17. book _______________
18. mayberry library _______________
19. jupiter _______________
20. planet _______________

The Common Noun

Recognize a common noun when you see one. Nouns name people, places, and things. Every noun can further be classified as common or proper. A common noun names general items. Go into the kitchen. What do you see? Refrigerator, magnet, stove, window, coffee maker, wallpaper, spatula, sink, plate: all of these things are...
An orchestra is a large instrumental ensemble that contains sections of string, brass, woodwind, and percussion instruments. The term orchestra derives from the Greek ὀρχήστρα, the name for the area in front of an ancient Greek stage reserved for the Greek chorus. The orchestra grew by accretion throughout the 18th and 19th centuries, but changed very little in composition during the course of the 20th century. A smaller-sized orchestra for this time period (of about fifty musicians or fewer) is called a chamber orchestra. A full-size orchestra (about 100 musicians) may sometimes be called a "symphony orchestra" or "philharmonic orchestra"; these modifiers do not necessarily indicate any strict difference in either the instrumental constitution or role of the orchestra, but can be useful to distinguish different ensembles based in the same city (for instance, the London Symphony Orchestra and the London Philharmonic Orchestra). A symphony orchestra will usually have over eighty musicians on its roster, in some cases over a hundred, but the actual number of musicians employed in a particular performance may vary according to the work being played and the size of the venue. A leading chamber orchestra might employ as many as fifty musicians; some are much smaller than that. Orchestras can also be found in schools. The term concert orchestra may sometimes be used (e.g., BBC Concert Orchestra; RTÉ Concert Orchestra); no distinction is made on the size of orchestra by use of this term, although its use is generally associated with live concert performance. As such, they are commonly chamber orchestras. The typical symphony orchestra consists of four groups of similar musical instruments called the woodwinds, brass, percussion, and strings. 
Other instruments such as the piano and celesta may sometimes be grouped into a fifth section such as a keyboard section or may stand alone, as may the concert harp and electric and electronic instruments. The orchestra, depending on the size, contains almost all of the standard instruments in each group. In the history of the orchestra, its instrumentation has been expanded over time, often agreed to have been standardized by the classical period and Ludwig van Beethoven's influence on the classical model. The so-called "standard complement" of double winds and brass in the orchestra from the first half of the 19th century is generally attributed to the forces called for by Beethoven. The exceptions to this are his Symphony No. 4, Violin Concerto, and Piano Concerto No. 4, which each specify a single flute. The composer's instrumentation almost always included paired flutes, oboes, clarinets, bassoons, horns and trumpets. Beethoven carefully calculated the expansion of this particular timbral "palette" in Symphonies 3, 5, 6, and 9 for an innovative effect. The third horn in the "Eroica" Symphony arrives to provide not only some harmonic flexibility, but also the effect of "choral" brass in the Trio. Piccolo, contrabassoon, and trombones add to the triumphal finale of his Symphony No. 5. A piccolo and a pair of trombones help deliver storm and sunshine in the Sixth. The Ninth asks for a second pair of horns, for reasons similar to the "Eroica" (four horns has since become standard); Beethoven's use of piccolo, contrabassoon, trombones, and untuned percussion—plus chorus and vocal soloists—in his finale is his earliest suggestion that the timbral boundaries of the symphony might be expanded for good. For several decades after his death, symphonic instrumentation was faithful to Beethoven's well-established model, with few exceptions. Apart from the core orchestral complement, various other instruments are called for occasionally. 
These include the classical guitar, heckelphone, flugelhorn, cornet, harpsichord, and organ. Saxophones, for example, appear in some 19th- through 21st-century scores. While appearing only as featured solo instruments in some works, for example Maurice Ravel's orchestration of Modest Mussorgsky's Pictures at an Exhibition and Sergei Rachmaninoff's Symphonic Dances, the saxophone is included in other works, such as Ravel's Boléro, Sergei Prokofiev's Romeo and Juliet Suites 1 and 2, Vaughan Williams' Symphonies Nos. 6 and 9 and William Walton's Belshazzar's Feast, and many other works as a member of the orchestral ensemble. The euphonium is featured in a few late Romantic and 20th-century works, usually playing parts marked "tenor tuba", including Gustav Holst's The Planets, and Richard Strauss's Ein Heldenleben. The Wagner tuba, a modified member of the horn family, appears in Richard Wagner's cycle Der Ring des Nibelungen and several other works by Strauss, Béla Bartók, and others; it has a prominent role in Anton Bruckner's Symphony No. 7 in E Major. Cornets appear in Pyotr Ilyich Tchaikovsky's ballet Swan Lake, Claude Debussy's La Mer, and several orchestral works by Hector Berlioz. Unless these instruments are played by members doubling on another instrument (for example, a trombone player changing to euphonium for a certain passage), orchestras will use freelance musicians to augment their regular rosters. The 20th-century orchestra was far more flexible than its predecessors. In Beethoven's and Felix Mendelssohn's time, the orchestra was composed of a fairly standard core of instruments which was very rarely modified. As time progressed, and as the Romantic period saw changes in accepted modification with composers such as Berlioz and Mahler, the 20th century saw that instrumentation could practically be hand-picked by the composer. Today, however, the modern orchestra has generally been considered standardized with the modern instrumentation listed below. 
With this history in mind, the orchestra can be seen to have a general evolution as outlined below. The first is a typical classical orchestra (i.e. Beethoven/Joseph Haydn), the second is typical of an early/mid-Romantic orchestra (i.e. Johannes Brahms/Antonín Dvořák/Tchaikovsky), the third of a late Romantic/early 20th-century orchestra (i.e. Wagner/Mahler/Igor Stravinsky), and the last is the common complement of the present-day modern orchestra (i.e. John Adams/Samuel Barber/Aaron Copland/Philip Glass/Krzysztof Penderecki). Among the instrument groups and within each group of instruments, there is a generally accepted hierarchy. Every instrumental group (or section) has a principal who is generally responsible for leading the group and playing orchestral solos. The violins are divided into two groups, first violin and second violin, with the second violins playing in lower registers than the first violins. The principal first violin is called the concertmaster (or "leader" in the UK) and is not only considered the leader of the string section, but the second-in-command of the entire orchestra, behind only the conductor. The concertmaster leads the pre-concert tuning and handles technical aspects of orchestra management, usually sitting to the conductor's left, closest to the audience. In some U.S. and British orchestras, the concertmaster comes on stage after the rest of the orchestra is seated, takes a bow, and receives applause before the conductor (and the soloists, if there are any) appear on stage. The principal trombone is considered the leader of the low brass section, while the principal trumpet is generally considered the leader of the entire brass section. While the oboe often provides the tuning note for the orchestra (due to 300-year-old convention), no principal is the leader of the woodwind section, though in woodwind ensembles the flute is often the leader. 
Instead, each principal confers with the others as equals in the case of musical differences of opinion. The horn, while technically a brass instrument, often acts in the role of both woodwind and brass. Most sections also have an assistant principal (or co-principal or associate principal), or in the case of the first violins, an assistant concertmaster, who often plays a tutti part in addition to replacing the principal in his or her absence. A section string player plays unison with the rest of the section, except in the case of divided (divisi) parts, where upper and lower parts in the music are often assigned to "outside" (nearer the audience) and "inside" seated players. Where a solo part is called for in a string section, the section leader invariably plays that part. Tutti wind and brass players generally play a unique but non-solo part. Section percussionists play parts assigned to them by the principal percussionist. In modern times, the musicians are usually directed by a conductor, although early orchestras did not have one, giving this role instead to the concertmaster or the harpsichordist playing the continuo. Some modern orchestras also do without conductors, particularly smaller orchestras and those specializing in historically accurate (so-called "period") performances of baroque and earlier music. The most frequently performed repertoire for a symphony orchestra is Western classical music or opera. However, orchestras are used sometimes in popular music, extensively in film music, and increasingly often in video game music. The term "orchestra" can also be applied to a jazz ensemble, for example in performance of big band music. The first orchestras were made up of small groups of musicians that gathered for festivals, holidays or funerals. It was not until the 11th century that families of instruments started to appear with differences in tones and octaves. 
True modern orchestras started in the late 16th century when composers started writing music for instrumental groups. In the 15th and 16th centuries in Italy, the households of nobles had musicians to provide music for dancing and the court; however, with the emergence of the theatre, particularly opera, in the early 17th century, music was increasingly written for groups of players in combination, which is the origin of orchestral playing. Opera originated in Italy, and Germany eagerly followed. Dresden, Munich and Hamburg successively built opera houses. At the end of the 17th century opera flourished in England under Henry Purcell, and in France under Lully, who with the collaboration of Molière also greatly raised the status of the entertainments known as ballets, interspersed with instrumental and vocal music. In the 17th century and early 18th century, instrumental groups were taken from all of the available talent. A composer such as Johann Sebastian Bach had control over almost all of the musical resources of a town, whereas Handel would hire the best musicians available. This placed a premium on being able to rewrite music for whichever singers or musicians were best suited for a performance—Handel produced different versions of the Messiah oratorio almost every year. As nobility began to build retreats away from towns, they began to hire musicians to form permanent ensembles. Composers such as the young Joseph Haydn would then have a fixed body of instrumentalists to work with. At the same time, travelling virtuoso performers such as the young Wolfgang Amadeus Mozart would write concerti that showed off their skills, and they would travel from town to town, arranging concerts along the way. The aristocratic orchestras worked together over long periods, making it possible for ensemble playing to improve with practice. 
This change, from civic music making where the composer had some degree of time or control, to smaller court music making and one-off performances, placed a premium on music that was easy to learn, often with little or no rehearsal. The results were changes in musical style and an emphasis on new techniques. Mannheim had one of the most famous orchestras of that time, where notated dynamics and phrasing, previously quite rare, became standard (see Mannheim school). This also accompanied a change in musical style from the complex counterpoint of the baroque period to an emphasis on clear melody, homophonic textures, short phrases, and frequent cadences: a style that would later be defined as classical. Throughout the late 18th century composers would continue to have to assemble musicians for a performance, often called an "Academy", which would, naturally, feature their own compositions. In 1781, however, the Leipzig Gewandhaus Orchestra was organized from the merchants' concert society, and it began a trend towards the formation of civic orchestras that would accelerate into the 19th century. In 1815, Boston's Handel and Haydn Society was founded, in 1842 the New York Philharmonic and the Vienna Philharmonic were formed, and in 1858, the Hallé Orchestra was formed in Manchester. There had long been standing bodies of musicians around operas, but not for concert music: this situation changed in the early 19th century as part of the increasing emphasis in the composition of symphonies and other purely instrumental forms. This was encouraged by composer critics such as E. T. A. Hoffmann, who declared that instrumental music was the "purest form" of music. The creation of standing orchestras also resulted in a professional framework where musicians could rehearse and perform the same works repeatedly, leading to the concept of a repertoire in instrumental music. 
In the 1830s, conductor François Antoine Habeneck began rehearsing a selected group of musicians in order to perform the symphonies of Beethoven, which had not been heard in their entirety in Paris. He developed techniques of rehearsing the strings separately, notating specifics of performance, and other techniques of cueing entrances that spread across Europe. His rival and friend Hector Berlioz would adopt many of these innovations in his touring of Europe. The invention of the piston and rotary valve by Heinrich Stölzel and Friedrich Blühmel, both Silesians, in 1815, was the first in a series of innovations, including the development of modern keywork for the flute by Theobald Boehm and the innovations of Adolphe Sax in the woodwinds. These advances would lead Hector Berlioz to write a landmark book on instrumentation, which was the first systematic treatise on the use of instrumental sound as an expressive element of music. The effect of the invention of valves for the brass was felt almost immediately: instrument-makers throughout Europe strove together to foster the use of these newly refined instruments and to continue their perfection; and the orchestra was before long enriched by a new family of valved instruments, variously known as tubas, or euphoniums and bombardons, having a chromatic scale and a full sonorous tone of great beauty and immense volume, forming a magnificent bass. This also made possible a more uniform playing of notes or intonation, which would lead to a more and more "smooth" orchestral sound that would peak in the 1950s with Eugene Ormandy and the Philadelphia Orchestra and the conducting of Herbert von Karajan with the Berlin Philharmonic. During this transition period, which gradually eased the performance of more demanding "natural" brass writing, many composers (notably Wagner and Berlioz) still notated brass parts for the older "natural" instruments. 
This practice made it possible for players still using natural horns, for instance, to perform from the same parts as those now playing valved instruments. However, over time, use of the valved instruments became standard, indeed universal, until the revival of older instruments in the contemporary movement towards authentic performance (sometimes known as "historically informed performance"). At the time of the invention of the valved brass, the pit orchestra of most operetta composers seems to have been modest. An example is Sullivan's use of two flutes, one oboe, two clarinets, one bassoon, two horns, two cornets (à pistons), two trombones, drums and strings. During this time of invention, winds and brass were expanded and had an increasingly easy time playing in tune with each other: in particular, composers gained the ability to score for large masses of wind and brass that had previously been impractical. Works such as the Requiem of Hector Berlioz would have been impossible to perform just a few decades earlier, with its demanding writing for twenty woodwinds, as well as four gigantic brass ensembles each including around four trumpets, four trombones, and two tubas. The next major expansion of symphonic practice came from Richard Wagner's Bayreuth orchestra, founded to accompany his musical dramas. Wagner's works for the stage were scored with unprecedented scope and complexity: indeed, his score to Das Rheingold calls for six harps. Thus, Wagner envisioned an ever-more-demanding role for the conductor of the theater orchestra, as he elaborated in his influential work On Conducting. This brought about a revolution in orchestral composition, and set the style for orchestral performance for the next eighty years. Wagner's theories re-examined the importance of tempo, dynamics, bowing of string instruments and the role of principals in the orchestra. Conductors who studied his methods would go on to be influential themselves. 
As the early 20th century dawned, symphony orchestras were larger, better funded, and better trained than ever before; consequently, composers could compose larger and more ambitious works. The influence of Gustav Mahler was particularly innovative; in his later symphonies, such as the mammoth Symphony No. 8, Mahler pushed the furthest boundaries of orchestral size, employing huge forces. By the peak years of Shostakovich, orchestras could support the most enormous forms of symphonic expression. With the recording era beginning, the standard of performance reached a pinnacle. In recordings, small errors in a performance could be "fixed", but many older conductors and composers could remember a time when simply "getting through" the music as best as possible was the standard. Combined with the wider audience made possible by recording, this led to a renewed focus on particular conductors and on a high standard of orchestral execution. As sound was added to silent film, the virtuoso orchestra became a key component of the establishment of motion pictures as mass-market entertainment. In the 1920s and 1930s, economic as well as artistic considerations led to the formation of smaller concert societies, particularly those dedicated to the performance of music of the avant-garde, including Igor Stravinsky and Arnold Schoenberg. This tendency to start festival orchestras or dedicated groups would also be pursued in the creation of summer musical festivals, and orchestras for the performance of smaller works. Among the most influential of these was the Academy of St Martin in the Fields under the baton of Sir Neville Marriner. With the advent of the early music movement, smaller orchestras where players worked on execution of works in styles derived from the study of older treatises on playing became common. 
These include the Orchestra of the Age of Enlightenment, the London Classical Players under the direction of Sir Roger Norrington and the Academy of Ancient Music under Christopher Hogwood, among others. In the United States, the late 20th century saw a crisis of funding and support for orchestras. The size and cost of a symphony orchestra, compared to the size of the base of supporters, became an issue that struck at the core of the institution. Few orchestras could fill auditoriums, and the time-honored season-subscription system became increasingly anachronistic, as more and more listeners would buy tickets on an ad hoc basis for individual events. Orchestral endowments and—more centrally to the daily operation of American orchestras—orchestral donors have seen investment portfolios shrink or produce lower yields, reducing the ability of donors to contribute; further, there has been a trend toward donors finding other social causes more compelling. Also, while government funding is less central to American than European orchestras, cuts in such funding are still significant for American ensembles. Finally, the drastic falling-off of revenues from recording, tied to no small extent to changes in the recording industry itself, began a period of change that has yet to reach its conclusion. U.S. orchestras that have gone into Chapter 11 bankruptcy include the Philadelphia Orchestra (in April 2011) and the Louisville Orchestra, in December 2010; orchestras that have gone into Chapter 7 bankruptcy and have ceased operations include the Northwest Chamber Orchestra in 2006, the Honolulu Orchestra in March 2011, the New Mexico Symphony Orchestra in April 2011, and the Syracuse Symphony in June 2011. The Festival of Orchestras in Orlando, Florida ceased operations at the end of March 2011. 
Critics such as Norman Lebrecht were vocal in their diagnosis of the problem as the "jet set conductor" (whose salaries were presumably bleeding the orchestras dry); and several high-profile conductors have taken pay cuts in recent years; but the amounts of revenue involved are too small to account for the crisis. Music administrators such as Michael Tilson Thomas and Esa-Pekka Salonen argued that new music, new means of presenting it, and a renewed relationship with the community could revitalize the symphony orchestra. The influential critic Greg Sandow has argued in detail that orchestras must revise their approach to music, performance, the concert experience, marketing, public relations, community involvement, and presentation to bring them in line with the expectations of 21st-century audiences immersed in popular culture. It is not uncommon for contemporary composers to use unconventional instruments, including various synthesizers, to achieve desired effects. Many, however, find more conventional orchestral configuration to provide better possibilities for color and depth. Composers like John Adams often employ Romantic-size orchestras, as in Adams' opera Nixon in China; Philip Glass and others may be more free, yet still identify size-boundaries. Glass in particular has recently turned to conventional orchestras in works like the Concerto for Cello and Orchestra and the Violin Concerto No. 2. Along with a decrease in funding, some U.S. orchestras have reduced their overall personnel, as well as the number of players appearing in performances. The reduced numbers in performance are usually confined to the string section, since the numbers here have traditionally been flexible (as multiple players typically play from the same part). The post-revolutionary symphony orchestra Persimfans was formed in the Soviet Union in 1922. 
The unusual aspect of the orchestra was that, believing that in the ideal Marxist state all people are equal, its members felt that there was no need to be led by the dictatorial baton of a conductor; instead they were led by a committee. Although it was a partial success, the principal difficulty with the concept was in changing tempo. The orchestra survived for ten years before Stalin's cultural politics effectively forced it into disbandment by draining away its funding. Some ensembles, such as the Orpheus Chamber Orchestra, based in New York City, have had more success, although decisions are likely to be deferred to some sense of leadership within the ensemble (for example, the principal wind and string players). Others have returned to the tradition of a principal player, usually a violinist, being the artistic director and running rehearsals (such as the Australian Chamber Orchestra, Amsterdam Sinfonietta & Candida Thompson and the New Century Chamber Orchestra). The techniques of polystylism and polytempo music have recently led a few composers to write music where multiple orchestras perform simultaneously. These trends have brought about the phenomenon of polyconductor music, wherein separate sub-conductors conduct each group of musicians. Usually, one principal conductor conducts the sub-conductors, thereby shaping the overall performance. Some pieces are enormously complex in this regard, such as Evgeni Kostitsyn's Third Symphony, which calls for nine conductors. Charles Ives often used two conductors, one for example to simulate a marching band coming through his piece. Realizations for Symphonic Band includes one example from Ives. Benjamin Britten's War Requiem is also an important example of the repertoire for more than one conductor. One of the best examples in late 20th-century orchestral music is Karlheinz Stockhausen's Gruppen, for three orchestras placed around the audience. 
This way, the sound masses could be spatialized, as in an electroacoustic work. Gruppen was premiered in Cologne, in 1958, conducted by Stockhausen, Bruno Maderna and Pierre Boulez. More recently, it was performed by Simon Rattle, John Carewe and Daniel Harding. In Ancient Greece, the orchestra was the space between the auditorium and the proscenium (or stage), in which were stationed the chorus and the instrumentalists. The word orchestra literally means "a dancing place". In some theaters, the orchestra is the area of seats directly in front of the stage (called primafila or platea); the term more properly applies to the place in a theatre or concert hall reserved for the musicians.
The World Health Organization has amended its maximum sugar allowance for adults. Previously, the organization recommended that a maximum of 10% of calories come from free sugar. Now, it is recommended that this percentage drop to 3-5% in adults. This new percentage is a response to the increase in tooth decay seen in adults over the years. In the US, around 92% of adults aged 20-64 have experienced dental caries, also known as cavities or tooth decay, in at least one of their permanent teeth. To tackle the growing problem of tooth decay, researchers from University College London and the London School of Hygiene & Tropical Medicine, both in the UK, say the World Health Organization’s recommendation of a maximum of 10% of total daily calories from free sugar should be reduced to 5%, with 3% as a target. The World Health Organization (WHO) defines free sugar as any monosaccharides and disaccharides that a manufacturer, cook or consumer adds to foods. Sugars that are naturally present in honey, syrup and fruit juices are also classed as free sugars. It is well known that sugar consumption is a leading cause of dental caries. Bacteria in the mouth can feed off certain sugars, producing plaque containing acids that remove minerals from the outer enamel of the tooth. Unless cleaned well, the acids continue to destroy the tooth to the point where an individual can experience severe toothache or an abscess – a bacterial infection that causes a collection of pus. But the researchers note: “Despite the use of fluoride and improvements in preventive dentistry, the burden of dental caries remains unacceptably high worldwide, particularly when, in addition to the traditional focus on childhood caries, the caries burden in adults is considered.” They put this down to sugar intake. In 2002, WHO set guidelines recommending that daily consumption of free sugars should make up a maximum of 10% of an individual’s total energy intake – the equivalent of 50 g of free sugars per day. 
However, WHO state the daily target should be half of this, at 5%, or 25 g of free sugars per day. In March, Medical News Today revealed that WHO had issued draft guidelines calling for a reduction of daily sugar intake to 5% of total daily calories in an attempt to tackle tooth decay, as well as obesity. In this latest study, published in the journal BMC Public Health, the researchers support this proposed move, but say the 5% figure should represent the maximum daily free sugar intake, while the target should be 3%. The team’s recommendations come from an analysis of public health records from different countries. They compared dental health and sugar consumption over time among large populations of adults and children. They found that adults had a significantly higher incidence of tooth decay than children, and this incidence soared with any sugar consumption over 0% of total daily calories. But even among children, the team found that moving from consuming almost no sugar to 5% of total daily calories doubled the rate of tooth decay. This rose with every increase in sugar intake. “Tooth decay is a serious problem worldwide and reducing sugar intake makes a huge difference,” says study author Aubrey Sheiham, of the Department of Epidemiology & Public Health at University College London. “Data from Japan were particularly revealing, as the population had no access to sugar during or shortly after the Second World War. We found that decay was hugely reduced during this time, but then increased as they began to import sugar again.” Furthermore, the team found that in Nigeria, only 2% of people of all ages had tooth decay when they consumed almost no sugar – approximately 2 g per day. “This is in stark contrast to the US, where 92% of adults have experienced tooth decay,” adds Sheiham. 
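The gram figures quoted above follow from simple arithmetic: sugar supplies roughly 4 kcal per gram, and the WHO equivalents assume a reference intake of about 2,000 kcal per day. A minimal sketch of that conversion (the `sugar_grams` helper name is our own, not from WHO or the study):

```python
# Convert a share of daily energy intake into grams of free sugar.
# Assumes ~4 kcal per gram of sugar and a 2,000 kcal reference diet.
def sugar_grams(percent_energy: float, daily_kcal: float = 2000) -> float:
    kcal_from_sugar = daily_kcal * (percent_energy / 100)
    return kcal_from_sugar / 4  # ~4 kcal per gram of sugar

print(sugar_grams(10))  # 50.0 g -- the 2002 WHO guideline
print(sugar_grams(5))   # 25.0 g -- the proposed maximum
print(sugar_grams(3))   # 15.0 g -- the researchers' suggested target
```

At the 5% maximum, 25 g per day works out to roughly six teaspoons, which is why the researchers describe the 3% target as ambitious but pragmatic.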
Commenting on their results, the researchers say: “These findings imply that public health goals need to set sugar intakes ideally at <3% energy intake per day with <5% energy intake as a pragmatic goal, even when fluoride is widely used. Adult as well as children’s caries burdens should define the new criteria for developing goals for sugar intake.” As well as their call for the target daily sugar intake to be reduced to 3%, the team sets out a number of other recommendations they believe should be considered in the fight against tooth decay. They say that sugar-sweetened treats and fruit juices should not be marketed at children. Instead, there should be a focus on the harm they can cause. Furthermore, they say vending machines containing confectionery and sugary drinks should be removed from areas that are supported or controlled by central or local governments. “We are not talking draconian policies to ‘ban’ such sugar-rich products, which are available elsewhere,” says co-author Prof. Phillip James, of the London School of Hygiene & Tropical Medicine, “but no publicly-supported establishment should be contributing to the expensive problems of dental caries, obesity and diabetes.” They note that there should also be a review of food labeling, and new food labels should state a food’s sugar content as “high” if it is above 2.5%. Last year, Medical News Today reported on a study by researchers from the Universities of Oxford and Reading in the UK, in which they claimed a sugary drink tax of 20% could reduce obesity. In this latest study, the team says a tax should be developed to increase the cost of all food and drinks that are high in sugar. Prof. James says: “This would be simplest as a tax on sugar as a mass commodity, since taxing individual foods depending on their sugar content is an enormously complex administrative process.
The retail price of sugary drinks and sugar-rich foods needs to increase by at least 20% to have a reasonable effect on consumer demand, so this means a major tax on sugars as a commodity. The level will depend on expert analyses but my guess is that a 100% tax might be required.” Earlier this year, a study published in JAMA Internal Medicine claimed a high added sugar intake increases the risk of death from cardiovascular disease.
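The percentage-to-grams conversions quoted in the article follow from simple arithmetic: free sugar supplies roughly 4 kcal per gram, so on a 2,000 kcal reference diet, 10% of energy works out to 50 g per day. A minimal sketch of that conversion (the 2,000 kcal reference intake and 4 kcal/g figure are standard nutrition assumptions, not values stated by the researchers):

```python
# Convert a "% of daily energy from free sugar" target into grams per day.
# Assumes sugar yields ~4 kcal per gram and a 2,000 kcal reference diet.
KCAL_PER_GRAM_SUGAR = 4

def sugar_grams(percent_energy, daily_kcal=2000):
    """Grams of free sugar corresponding to a share of daily energy intake."""
    return daily_kcal * (percent_energy / 100) / KCAL_PER_GRAM_SUGAR

for pct in (10, 5, 3):
    print(f"{pct}% of a 2,000 kcal diet = {sugar_grams(pct):.0f} g of free sugar per day")
```

This reproduces the figures in the text: 50 g at the old 10% ceiling, 25 g at the proposed 5% maximum, and 15 g at the 3% target.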
ECE430 Play as Pedagogy Lead Faculty: Mrs. Brook MacMillan Focus on play as the primary learning modality for young children. Theoretical basis for play as a means of teaching, role in learning and as a means of assessment emphasized. - Explain the function of play as a teaching and learning tool in the development and education of children. - Use play activities as an assessment tool to identify skills, abilities, accomplishments and developmental lags in children. - Design instruction to extend play through the use of art, music, dance, and movement. - Design appropriate play activities involving families.
The human body plays host to a plethora of different microscopic organisms ranging in size and complexity from viruses and bacteria to multicellular, eukaryotic parasitic worms. The bacterial component of the human microbiome (HM) accounts for roughly 1–3% of body mass, with 10 bacteria for every human cell (NIH Human Microbiome Project), and bacterial load is likely to increase with age. Bacteria are found in greatest numbers and variety in the mouth, the gut, and on the skin. Emerging studies indicate that the HM may contribute to the regulation of multiple neuro-chemical and neuro-metabolic pathways through a complex series of highly interactive and symbiotic host-microbiome signaling systems that mechanistically interconnect the gastrointestinal (GI) tract, skin, liver, and other organs with the central nervous system. For example, the human GI tract, containing 95% of the HM, harbors a genetically diverse microbial population that plays major roles in nutrition, digestion, neurotrophism, inflammation, growth, immunity and protection against foreign pathogens. Bacteria in the mouth, the gut, and on the skin form biofilms: complex ecosystems of different species of bacteria forming a symbiotic whole, enabling the attachment and proliferation of individuals. Biofilm-forming bacteria release a highly hydrated matrix of extracellular polymeric substance, composed of proteins, polyuronic acids, nucleic acids, and lipids. Together, bacteria and this matrix form the bulk components of biofilm. Of the estimated 700 oral bacteria identified by DNA, only around 50% have been cultured. The microbiome of the human GI tract is the largest reservoir of microbes in the body, containing about 10^14 microorganisms; over 99% of microbiota in the gut are anaerobic bacteria, with fungi, protozoa, archaebacteria and other microorganisms making up the remainder.
As we get older, bacterial load steadily increases as our humoral and cell-mediated immune responses wane in favor of the more primitive, but less efficient, innate immune system. There is growing evidence that the composition of the microbiome (species identity and combinations, and the density and distribution of these bacteria) may influence how well we age. Gradually, as innate immunity predominates over time, certain bacteria may proliferate and trigger more damaging responses. Against a background of rising bacterial load, it becomes even more important to maintain the integrity of the blood-brain barrier. Weakening of the blood-brain barrier, either by predisposing polymorphisms or as a result of conditions that elicit a sustained TNF-α response, may serve to increase the propensity for bacteria or endotoxins to gain access to the brain, trigger neuropathology and alter brain function.
Fruit flies are small, stocky-bodied flies (<1/8 inch long), with short antennae, and a dull, dark-gray to tan color. They often have small red eyes. They might easily be confused with other small flies, except for their hovering behavior around ripe fruits, decayed vegetables and syrups around food preparation areas. Phorid flies are small flies up to 1/8 inch long. These flies can be recognized by the distinct ”hump” or arch of the fly’s thorax, enlarged hind femurs and reduced veins (no cross-veins) in the wings. They often fly and walk with a quick, darting motion. Drain flies are 1/16 to 1/4 inch long, pale yellowish to brownish gray to black in color, with broad wings held in a characteristic V-shaped position over the back. Wings have parallel, hairy veins and margins. They are often seen resting on walls and other vertical or horizontal surfaces. Fungus gnats are another small (1/8 inch long) fly, more common in classrooms and office areas. They can be distinguished from the other small flies by their slender, mosquito-like appearance (much smaller than a mosquito) and long, hairless antennae. They become a nuisance when they hover around faces and computer screens. Fruit flies (Drosophila species) are often present in the kitchen areas and food service areas of schools. They are also called pomace or vinegar flies, and are sometimes confused with other small flies, especially humpbacked flies (Family Phoridae). Fruit flies are strongly attracted to, and breed in, fermenting fruits or liquids. Large numbers of fruit flies may indicate unsanitary conditions including spilled or spoiled fruit and vegetables; poorly maintained garbage containers; accumulation of organic matter around drains, grout or broken tiles; and wet areas under and behind equipment. Phorid flies feed on, and breed in, wet areas containing decaying organic matter. Common breeding sites include drains, garbage areas, animal carcasses, and contaminated soil.
Phorid flies may also be associated with decaying organic matter in the bottoms of garbage cans, or trapped in cracked floor tiles or under the bases of kitchen equipment. Chronic, heavy infestations of phorid flies may be a result of a broken, underground sewage line. When other causes have been ruled out or corrected and a phorid fly problem persists, sewage lines should be inspected by a plumber. Any breaks should be fixed and contaminated soil removed to eliminate the infestation. Drain flies are also called moth or sewage flies. Adult female drain flies deposit eggs in drains, garbage disposals, grease traps, and even sewage treatment plant filters. Larvae feed on bacterial mats inside drains and on decaying organic matter in a variety of sites, and can survive in extremely wet conditions. Most infestations are generated from within the school, including food service areas and custodial closets. Drain flies can carry bacteria and other microorganisms from egg-laying sites to food and food contact surfaces, and high populations should not be tolerated. Larvae of fungus gnats feed on fungi and plant rootlets, and are most commonly found emerging from the soil of potted plants. Fungus gnats typically do not harm healthy plants but their presence can be an indication of over-watering. High populations may feed on plant roots and adversely affect plant growth, especially young plants, if preferred food, including microorganisms, is not available. Fungus gnats may also carry plant disease organisms from one plant to another. None of these small flies bite, although their presence may sometimes be associated with employee complaints of bites. Any association of these flies with bites is incidental, or possibly psychological in origin (e.g., fungus gnats look like very small mosquitoes, and may induce people to imagine that they are being bitten). The key management strategy for control of these flies is to identify and eliminate the breeding sites.
Control of adult flies with sprays, traps, etc., will be only temporary unless the source can be eliminated. Suggested thresholds will likely be based on complaints, or observations from sticky cards and glue boards. More than two flies (of any species) observed per visit, or collected per sticky trap or glue board, indicate a need for more thorough inspection of the area to identify and eliminate potential breeding sites. Where employee complaints about fruit flies occur more than once a month, fruit fly traps that use liquid attractants should be employed as a supplement to breeding site elimination. Office complaints of fungus gnats are more difficult to assess, and thresholds should be based on consultations with the campus principal. Routine visual inspection for fly breeding sites should be made during each IPM service visit. Inspections should be made of tile flooring, under kitchen equipment and floor drains. Sticky cards, UV light traps and glue boards can also be used to monitor drain flies and fruit flies. Kitchen staff should be trained to inspect incoming produce for fruit flies; infested produce should be discarded, or covered and placed in a cooler or refrigerator until it can be more closely inspected and bad produce culled. When drain flies or phorid flies are suspected to be emerging from floor drains, a piece of duct tape, or a tent formed from a sticky trap or glue board, may be placed over the suspect floor drain. These should be checked within the next day or two for flies. Indoor plants can be gently lifted and/or shaken to determine if fungus gnats are present; yellow or blue sticky traps can also be mounted on stakes placed in potted plants to monitor for fungus gnats. Indoor (UV light) fly traps should be numbered with the location noted on a list, or ideally on a schematic diagram of the facility, and dated and initialed each time they are checked or replaced.
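The two-fly action threshold described above amounts to a simple decision rule for each monitoring station. A minimal sketch (the function and location names are illustrative, not part of any IPM standard):

```python
# Flag monitoring stations whose small-fly counts exceed the action
# threshold of two flies (of any species) per trap per service visit.
ACTION_THRESHOLD = 2

def traps_needing_inspection(trap_counts):
    """trap_counts: dict mapping trap location -> flies observed this visit.
    Returns the locations that warrant a closer search for breeding sites."""
    return [loc for loc, count in trap_counts.items() if count > ACTION_THRESHOLD]

counts = {"dish room drain": 5, "prep table": 1, "trash storage": 3}
print(traps_needing_inspection(counts))  # prints ['dish room drain', 'trash storage']
```

Counts at or below the threshold are logged but trigger no action; anything above it prompts a more thorough inspection of the surrounding area.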
For drain and fruit flies, ideal placements include locations near plumbing fixtures, dishwashers, under prep tables and in trash or recycling storage areas. Electrocuting-type fly traps should not be used in kitchens, as exploding insects can contaminate food preparation surfaces. Bulbs on UV light traps should be replaced annually. Specific monitoring for fruit flies, including fruit fly traps, should not be required on an ongoing basis if the proper management practices are in place to prevent conditions conducive to fruit fly infestation. Cultural, physical and mechanical management options are the best strategies and include posting notices to encourage the cleanup of spills, proper food storage and trash/recycle handling, elimination of standing water, fixing plumbing leaks, drying mops, emptying mop buckets, and inspecting incoming produce and rejecting any infested or overripe product. Always read and follow the label. The label is the law. Pesticides must be used in accordance with federal, state and local regulations. Applicators must have proper credentialing to apply pesticides and should always wear personal protective equipment (PPE) as required by the pesticide label during applications. All labels and Material Safety Data Sheets (MSDS) for the pesticide products authorized for use in the IPM program should be maintained on file. When using pesticides in schools, appropriate notification and waiting intervals should be observed. For more information contact your state regulatory agency. Populations of small flies, especially in kitchens, can fluctuate greatly in a short period of time. Follow-ups to service calls for small flies should be conducted approximately one week following treatment. Authors: Compiled from publications by PMSP, Janet Hurley, Mike Merchant
Learning about perimeter and area through fashion Fashion design and garment construction provide an ideal context to explore intriguing mathematical concepts via hands-on tasks, strengthening the pupils' visualisation skills and deepening their understanding of geometry and its place in mathematics as well as in the world around us. This resource pack was developed specially for Y8 and Y9 pupils to consolidate the concepts of perimeter and area of 2D shapes. See also the Y7 counterpart under 'Pick's Theorem'. Both can be adapted to other year groups. The sequence of lessons starts with an investigation of the shape with maximal area given a fixed perimeter, followed by a practical activity designed to consolidate the meaning of perimeter and address misconceptions common at this age. The activity is part of a series of fashion design-themed resources, a project developed by the Royal Institution, inspired by the work of the fashion designer Julian Roberts. In his Subtraction Cutting Masterclass, Julian uses straightforward ideas of perimeter, area and volume. His innovative approach to garment construction is intriguing and puts to the test anyone's visualisation skills. Understanding the approach and outcomes requires abstract concepts such as the topology of surfaces, curvature and space-filling curves. Julian uses his own body to take measurements and has developed his own 'tricks' to estimate the perimeter and area of shapes.
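The opening investigation (which shape encloses the most area for a fixed perimeter) can be checked numerically. For perimeter P, a square encloses (P/4)^2, a circle encloses P^2/(4π), and regular polygons with more sides fall in between. A quick sketch of that comparison:

```python
import math

# For a fixed perimeter, compare the enclosed area of regular n-gons and a
# circle; the circle always wins (the isoperimetric inequality).
def regular_polygon_area(perimeter, n):
    side = perimeter / n
    # Standard formula: area = n * side^2 / (4 * tan(pi/n))
    return n * side**2 / (4 * math.tan(math.pi / n))

def circle_area(perimeter):
    return perimeter**2 / (4 * math.pi)

P = 100
for n in (3, 4, 6, 12):
    print(f"regular {n}-gon of perimeter {P}: area = {regular_polygon_area(P, n):.1f}")
print(f"circle of perimeter {P}: area = {circle_area(P):.1f}")
```

The areas increase monotonically with the number of sides (about 481 for the triangle, 625 for the square, 722 for the hexagon) and approach the circle's roughly 796, which mirrors the conclusion pupils reach in the classroom investigation.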
Main Difference – Dolphin vs Porpoise Both dolphin and porpoise are marine mammals categorized under order Cetacea. The whale is another member that belongs to this order. This order represents small to extremely large, hairless, fish-shaped mammals that are well adapted to live their entire life in aquatic habitats. The most prominent characteristic features of these mammals include the presence of flippers (modified front limbs), absence of hind limbs, small ears and eyes, nostrils located on the top of the head as a single or double blowhole and the absence of vocal apparatus. These are a few of the distinguishing features of cetaceans. Based on the presence or absence of teeth, cetaceans are divided into two categories, Odontoceti (toothed whales) and Mysticeti (baleen whales). Dolphins and porpoises belong to Suborder Odontoceti. Until recently, small dolphins were referred to as porpoises. However, with solid morphological and genetic evidence, the term porpoise is currently linked with a separate odontocete family called Phocoenidae. The main difference between Dolphin and Porpoise is their teeth. By observing the morphology of teeth, porpoises and dolphins can be clearly distinguished. Dolphins have pointed, canine-like teeth, whereas porpoises have flat, incisor-like teeth. More differences between dolphins and porpoises are discussed in this article. Dolphin – Facts, Characteristics, Behavior Dolphins belong to the largest group of the odontocete cetaceans, family Delphinidae. This family includes small-sized dolphins from 1 to 1.8 m long to larger-sized killer whales that reach up to about 9.8 m long. Dolphins are exclusively aquatic, and many of the species live in marine habitats. The common characteristic features of this family include the presence of a noticeable beak, conical-shaped canine-like teeth, and the large falcate dorsal fin situated near the middle of the back. These features may vary among the species, except the presence of conical teeth.
Dolphins are excellent swimmers owing to their streamlined body and often show schooling behavior. They usually communicate with other members by clicks and whistle-like sounds. These sounds are produced by an organ called the melon. These clicking sounds are extremely important to locate prey in the water. Porpoise – Facts, Characteristics, Behavior Porpoises are categorized under cetacean family Phocoenidae. These members are usually small in size (less than 2.5 m) and clearly distinguished from dolphins by their flat, incisor-like (spade-like) teeth. In addition, they have either a short, indistinct beak or no beak at all. Porpoises are often found near coastal areas. They usually form smaller groups and have a simpler social structure, unlike dolphins. In some porpoise species, males are smaller than females. Difference Between Dolphin and Porpoise Dolphin species are in the size range from 1 to 10 m. Porpoise species are smaller than dolphins and less than 2.5 m. Dolphins belong to family Delphinidae. Porpoises belong to family Phocoenidae. Dolphins have conical-shaped teeth. Porpoises have spade-like teeth. Dolphins make bigger groups exceeding 1000 individuals. Porpoises make smaller groups, unlike dolphins.
Lesson Plans for Middle School Science Measuring Growth in the Human Skeleton Create interest in science for your students by using skeletal system lesson plans that ask the students to take a look at their own systems. This lesson plan is part of a fun project that introduces various systems in such a way as to leave the student wanting more! What is Life Science? These Life Science activities and experiments challenge students to hone their observation skills and come to an understanding of the definition of life science. Family and Consumer Science Lesson Plan on Weddings Family/consumer science and wedding lesson plans allow teachers the opportunity to have students critically observe what is necessary for a wedding. The students' ability to compare and contrast various ways of providing what is needed for the big day is a lesson in both consumerism and creativity. Lesson Plan on Designing and Presenting a Health Brochure This lesson gives students the opportunity to research and learn about health issues while developing research, writing and computer skills. Have students work to complete a health brochure project that reflects what's available in their community. Harness the Sun's Power With a Science Project Building a Solar Cooker Alternative forms of energy are a hot topic. A simple and fun activity at home or in the classroom is building a solar cooker science project! The directions below are easy enough for a student to follow on their own, or for a class to build together -- cook S'mores for an added bonus! An Overview of the Parts of the Nervous System When teaching your students about neuroscience, you will need to explain the different parts of the nervous system. This lesson plan covers the different parts: central and peripheral, and the divisions of each. This lesson plan also includes exercises. An Experiment to Show the Mass of Air Does air have mass?
This simple earth science activity using balloons and a meter stick will help students understand the mass of air and that it exerts pressure.
Interactive Whiteboard Activities, Book Resources Fun activities that let students go inside nine popular and highly-taught books. - Grades: 3–5, 6–8, 9–12 Flashlight Readers is an interactive literacy experience that lets readers enter the world of books and communicate with their favorite authors. Each of the highly-taught, popular titles offers community-building learning activities, author chats, and slideshows, all while encouraging essential reading and writing skills. Go directly to the activities: Hoot by Carl Hiaasen Inkheart by Cornelia Funke The Invention of Hugo Cabret by Brian Selznick The Underland Chronicles by Suzanne Collins Author Blue Balliett Learn more about each activity: Students can create a character scrapbook, navigate a maze using clues from the book, watch an author video, read early drafts of the book, and more Students can bring the book to life by creating a comic strip, meet the producer of the movie, see a slideshow of spiders, test their skills on verbs and nouns, and more. Readers can choose their own adventure and create a story — both based on the book’s plots — browse photos of the author, and listen to audio with her. Students can rewrite scenes from the book, write a journal entry as if they were Esperanza, and read a Q&A with the author. Includes a memory match-up activity and treasure hunt that test a student’s knowledge of the book, a Q&A with the author, and more. The author discusses writing and what inspired him to write Hoot. Plus, students can create a personalized letter about a cause they care about. Peek into the author’s writing room, take a quiz on the book, write an editorial in a step-by-step workshop, “become” one of the book’s characters, and more. Students help Hugo grab parts for his automata in a maze, build their own “Mechanical Men,” listen to exclusive audio with the author, and view his sketches. 
The author discusses the book, and students can create an Underland creature, help a character make his way through a labyrinth, and more. The author answers questions and offers advice to young writers. Plus, students can follow clues to unlock a secret message, play pentominoes, and master two art challenges. While participating in “Flashlight Readers,” students will: - Offer observations, make connections, react, speculate, interpret, and raise questions in response to text - Identify and discuss book themes, characters, plots, and settings - Connect their experiences with those of the author and/or with characters from the books - Support predictions, interpretations, conclusions, etc. with examples from text - Practice key reading skills and strategies (cause-and-effect, problem/solution, compare-and-contrast, summarizing, etc.) - Monitor their own comprehension - Discuss ideas from the book with you, the author, and/or other students online
Since 2014, second grade students (approximately 20) have been collecting data on the weight of the eggs laid by the hens to determine which weigh more, brown or white eggs. Students use balance scales as well as a PASCO sensor to weigh the eggs and document their findings. They package the eggs and sell them to the community (see photos below). The money raised is used to purchase feed for the hens. Plant and Soil Research Since 2012, second grade students (approximately 20) have been studying plants and their needs. This year the study was extended to ask the question, "What happens when one or more of those needs is not met?" Students designed research experiments in which they determined the variables they made available to their plants and those that were denied. Three times a week the plants are observed, changes noted, and conclusions drawn. As an extension, students are now working with Dr. James Thompson, soil scientist at West Virginia University, to study soils and their impact on plants (see photos below). In 2016-2017, students worked in groups to conduct their plant study. They each documented observations in a journal and worked together to take photos and write descriptions of their plants on chart paper. Dr. Thompson used maps with land features to discuss why soils can be different colors and the impact on plants. Biological Stream Studies Since 2004, several times a year second grade students (approximately 20) conduct biological stream studies in Snowy Creek. They learn about the benthic macro-invertebrates that live in the stream and categorize photographs based on physical attributes. Students go to the stream to locate and identify these benthic macro-invertebrates and document their findings on a data collection sheet. Students extend their learning by studying the stream ecosystem. They research specifics on the physical characteristics of the insects and use various materials to build three-dimensional models of them.
They study the life cycle of each insect and the living and non-living factors that contribute to this unique ecosystem (see photos below). Click below to read a book written by the 2016 second grade class.
Flying squirrels do not actually fly; they glide using a patagium created by a fold of skin. From atop trees, flying squirrels can initiate glides from a running start or from a stationary position by bringing their limbs under the body, retracting their heads, and then propelling themselves off the tree. It is believed that they use triangulation to estimate the distance of the landing, as they often lean out and pivot from side to side before jumping. Once in the air, they form an "X" with their limbs, causing their membrane to stretch into a square-like shape, and glide down at angles of 30 to 40 degrees. They maneuver with great efficiency in the air, making 90 degree turns around obstacles if needed. Just before reaching a tree, they raise their flattened tails, which abruptly changes their trajectory upwards, and point all of their limbs forward to create a parachute effect with the membrane in order to reduce the shock of landing. The limbs absorb the remainder of the impact, and the squirrels immediately run to the other side of the trunk or to the top of the tree in order to avoid any potential predators. Although graceful in flight, they are very clumsy walkers, and if they happen to be on the ground in the presence of danger, they will prefer to hide rather than attempt an escape. The northern flying squirrel is found in coniferous and mixed coniferous forests across the top of North America, from Alaska to Nova Scotia, south to the mountains of North Carolina and west to California. Populations from the Pacific Coast of the United States are genetically distinct from those of G. sabrinus found elsewhere in North America, although they are considered to belong to the same species.
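The 30 to 40 degree glide angles mentioned above translate directly into horizontal range: in a straight glide at angle θ below the horizontal, a squirrel that descends h meters covers roughly h/tan(θ) meters of ground. A quick sketch of that geometry (the launch height is illustrative, not from the source):

```python
import math

# Horizontal distance covered in an idealized straight glide at a fixed
# angle below the horizontal: distance = height_lost / tan(glide_angle).
def glide_distance(height_m, glide_angle_deg):
    return height_m / math.tan(math.radians(glide_angle_deg))

# A shallower 30-degree glide covers more ground than a steeper 40-degree one.
for angle in (30, 40):
    print(f"from 15 m up at {angle} degrees: {glide_distance(15, angle):.1f} m of ground covered")
```

From a 15 m launch, this gives roughly 26 m of travel at 30 degrees versus about 18 m at 40 degrees, which is why steering within that angle range lets the squirrel trade distance for control.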
A View from Brittany Sauser Magnetically Levitating Mice NASA has built a device that keeps mice floating to study the health effects of spaceflight. NASA engineers have built a device that can suspend mice in the air for hours. The purpose is to understand how zero gravity affects the bone density and muscle mass of astronauts. The levitation device, built by Yuanming Liu and colleagues at NASA’s Jet Propulsion Laboratory, uses a magnetic field that distorts the movement of electrons in water molecules to let the mice float. According to New Scientist: [The researchers] used a purpose-built levitation device containing a coil of wire, or solenoid, cooled to a few degrees above absolute zero so that it became superconducting. Running a current through the solenoid creates a magnetic field of 17 teslas, ten thousand times as strong as a typical fridge magnet and 10 million times that of the Earth. The researchers have shown previously that the device can levitate water-based items for hours, but were skeptical that it would be able to make a mouse, weighing 10 grams, float for long periods of time. Yet, they were able to “fly” the mouse for hours, allowing it to roam freely, and giving it food and water. The experiment is a significant step toward studying bone and muscle loss, and even changes in blood flow, in zero gravity, which is a common problem for astronauts when they return from space missions or extended stays on the space station. Engineers have built exercise equipment to combat the losses, which can result in long-term health issues, but there have been limited ways to actually study zero-gravity effects on humans on Earth.
Incising is a technique for decorating ceramics that involves cutting linear designs into the clay surface. Many Mississippian ceramics are decorated by incising or engraving. Implements such as sticks, reeds, or bone fragments were dragged through wet clay to incise it, or scratched into the surface of dried but as-yet-unfired pieces to engrave them. Clay was gathered from local deposits, usually creek banks, and temper was added to counteract the shrinkage from drying, which can cause the vessel's walls to crack. Furthermore, any water left in the clay upon firing would turn into steam and burst the vessel. Even more significant is the stickiness of the clay: it would only become easily workable while wet, thus compounding the problem. Woodland potters attempted to remedy this by using up to 33% coarse sand and/or grog temper. Mississippian potters greatly improved on this by using burned, freshwater mussel shell particles, which made the clay clump and model easily. Because the calcium carbonate within the shell material acts as a kind of binding agent, with as little as 10-15% shell temper added the paste becomes lighter, stronger and better able to withstand the drying process without cracking. Moreover, the walls could now be thinner, giving improved heat transfer for a better cooking pot. Because of these benefits, shell tempering rapidly began to spread around 800 CE as settlements required more maize to feed their growing populations. Ancient pottery makers never used enclosed kilns. Instead the pottery was fired in a pit or on a mound. Some tribes would dig shallow pits to fire their pottery. They would line the pits with heat-resistant materials, such as ashes, sand or rocks. The pottery builders would then start a fire in the pit using a mix of soft and hard woods and place their clay objects directly on top. The time in the fire pit for hardening depended on the size of the object but generally spanned several hours at temperatures of 1400 degrees or more.
More common to the mound building tribes of the east was a slightly different method. A 3- or 4-foot high earthen mound would be built with draft holes in the bottom. In the center, the pottery makers would build a fire using wood chips and place their pottery on top. This method required that the pottery bake for several days, creating a hard black pottery. The coiling technique has been employed to shape clay for many thousands of years and was the method by which the ancient pottery makers sculpted most of the vessel classifications that follow. Using the coiling technique, it is possible to build thicker or taller walled vessels, which may not have been possible using earlier methods. The technique permits control of the walls as they are built up and allows building on top of the walls to make the vessel look bigger and bulge outward or narrow inward with less danger of collapsing. Pottery makers would begin by forming their clay into a long roll. Then, by placing coils on top of one another, they could form a variety of shapes. After forming the vessel, they would then smooth both the outer and inner surfaces to remove the gaps between the coils. The tempering is with a coarse shell of varying diameter, making the paste swirled and contorted when the particles are large. The piece is usually decorated with either swirling bands or geometric zones of red and white using heavy slip-like paint. Slips are simply clay suspended in water and then colored. Galena or cerussite, sometimes known as white lead ore, was used for white; hematite or red iron ore for red; and graphite or coal for black. The form spans from globular bowls and jars to bottles with a slightly flared rim. Avenue Polychrome appeared near the close of the Late Mississippi Period and is akin to other classifications such as Carson Red on Buff, Nodena Red and White, and Old Town Red. The tempering for the Barton Incised is the same as both Parkin Punctated and Avenue Polychrome.
The shell is very coarse, with wide variation in particle diameter, including some particles as large as 7 cm. This creates a coarse surface, often showing open spaces from the poor wedging of the clay. Incision is seemingly careless, made by a pointed tool applied to a moist surface, producing a line with a considerable amount of burr. Lines vary in width from less than 0.5 to 3 mm and are either parallel or form cross-hatch, chevron, and, in one case, checkerboard designs. When parallel, the oblique lines are based on a system of alternating line-filled triangles that slant downward from the lip to the beginning of the shoulder area and, in some cases, lower.1

Bell Plain is a catch-all term that refers to any burnished, shell-tempered plain pottery. The burnishing is completed at a late stage of the drying process, in which most pebble tracks are erased by energetic rubbing.1 There are several variations of Bell Plain, including Holly, the latter having a finely crushed shell tempering and a straight rim.

Like many others, the tempering of this type is a coarse shell of varying diameter within clay that is predominantly a lighter buff color. Like Avenue Polychrome, coloring is accomplished by way of an applied film or slip (clay suspended in water), most frequently containing a hematite agent to render a rich orange-red tone. The entire buff-colored surface would be covered in this film, as Mississippian potters regarded paint as something for colorizing the entire vessel rather than as a decorative medium.2

Effigy pots were a mainstay of many Mississippian peoples, and they come in many different varieties. Some come in anthropomorphic shapes, some in zoomorphic shapes, and others in the shape of mythological creatures associated with the Southeastern Ceremonial Complex. Head pots (pictured) are jars shaped like human heads, typically male, and the figures commonly appear to be deceased.
They are typically 3-8 inches tall, with smaller vessels found in the Arkansas River Valley. They are considered the pinnacle of Mississippian culture ceramics and are some of the rarest and most unique clay vessels in North America.3

The temper of this classification consists of fine shell particles, very little of which are visible on the surface, making the texture very smooth and homogeneous. The color varies but is similar to Bell Plain. Almost all examples were polished, with the finer ones having a lustrous black finish. The incised lines average 2 mm wide and are rarely more than 1 mm deep. It is thought that they were made with a rounded implement, of a type referred to as trailing, the commonest design being a spiral meander. The majority of rims are thickened on the outside, occasionally on the inside or both, and are defined by an incised line.

Mississippian Plain is very common to most Mississippian cultures throughout the Ohio and Mississippi River valleys. It is buff colored, contains large fragments of ground mussel shell as a tempering agent, and is not as smooth and polished as other varieties. The term is often applied to any unburnished, undecorated, shell-tempered pottery.

The tempering for Parkin Punctated is very similar to that of Avenue Polychrome, with very coarse shell, sometimes as much as 5 to 7 cm in diameter. Punctation is produced by variously shaped tools, resulting in a wide variety of sizes, shapes, and arrangements, the common characteristic being that the instrument is jabbed obliquely into the clay, producing a ridge or burr. Often the shape is oval or semi-lunar, which suggests fingernail marking, although the punctations are occasionally round, square, triangular, or U-shaped. They also vary in size and depth, with an average width of 0.5 mm and a depth of about 0.2 mm. Punctations generally are not part of a design and are simply scattered and spaced at random over the entire vessel surface, including the base.
However, occasionally they appear in horizontal or vertical rows, or a combination of both. Often a single row or series of rows forms a band around the shoulder of the vessel. In rare cases punctations are aligned vertically so that the burr forms a continuous ridge effect, which is sometimes accentuated by pinching. In either case a linear arrangement classifies the vessel as linear-punctated.

Powell Plain emerged during the Stirling Phase of the Cahokia site. A distinctive trait of this period is the shell temper. Powell Plain exhibits an especially fine, smooth surface with very thin walls and distinctive tempering, slips, and coloring. The cores of the sherds typically range from grays to buffs and creams. Some have slips of liquid clay and pigment, with common colors being red, grey, and black, and the surfaces polished to a high sheen.

This classification is a curvilinear, Mississippian Plain paste incised pottery type on a coarse shell-tempered ware. The narrow incisions typically vary along the upper rim, shoulder, and/or body of the vessel, and typically form concentric circle, scroll, festoon, and guilloche motifs. The type is dated to the late Anna and early Foster phases of the Mississippi period and was defined at the Winterville Site. Variations of this classification include Belzoni, Tunica, and Wailes.

1. Knight, Vernon James, Jr. Mound Excavations at Moundville: Architecture, Elites and Social Order. University of Alabama Press, 2010, p. 22.
2. Phillips, Philip, James Alfred Ford, James Bennett Griffin, and Stephen Williams. Archaeological Survey in the Lower Mississippi Alluvial Valley, 1940-1947. University of Alabama Press, pp. 138-39.
3. museumofnativeamericanartifacts.org. Retrieved 2010-07-18.
Autism in Your Classroom

This course is a guide for teachers helping students in the classroom who are perceived to be on the autism spectrum. The emphasis will be on academic, social, and cognitive studies. As a result of participation in this course, participants will:

1. Develop an understanding of autism and the autism spectrum disorders.
2. Understand the behavior and learning characteristics of autism spectrum students.
3. Review the academic approaches and instructional strategies used in helping autism spectrum students.
4. Learn about the inclusive classroom and the roles of the teacher and other staff in supporting autistic students.
5. Be apprised of the daily struggles of autism spectrum students.

The required text is: Fein, D. & Dunn, M. (2007). Autism in Your Classroom. MD: Woodbine House. The textbook can be ordered from http://www.amazon.com
Transcript of Earth's Formation

Parts of the Earth

Crust - The crust is the part of the Earth we walk on. It is made of rock and is about 5-70 km thick.
Mantle - The mantle is the second layer of the Earth, consisting of hot rock rich in iron and other metals. The mantle kind of reminds me of oatmeal: it is not quite solid, nor liquid.
Outer Core - The outer core is molten iron and nickel, and very HOT!!!
Inner Core - Made of iron and nickel, the inner core is the hottest part of the Earth. The inner core is solid because of pressure.

Igneous Rock (:
Igneous rock is one of the three rock types that we see in the world today. It is formed through the cooling of lava or magma. There are many types of igneous rocks, including granite, pumice, and obsidian.

Why is the Inner Core Solid?
Surprisingly, the inner core is solid, even though it is by far the hottest part of the Earth (at about 9,800 F). The reason is that there is so much pressure on the inner core that it solidified.

How Do We Know?
The temperature of the inner core can be estimated by considering both the theoretical and the experimentally demonstrated constraints on the melting temperature of impure iron at the pressure which iron is under at the boundary of the inner core.

Temperatures
Temperatures inside the Earth vary from about 70 degrees near the surface to 9,800 degrees in the inner core.
The crust is probably the coolest part, and the inner core is the hottest part, since it lies at the very center of the Earth.
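The Fahrenheit figures quoted above can be converted with the standard formulas. A quick sketch (the function names are just for illustration; the 9,800 F value is the inner-core estimate from the transcript):

```python
def f_to_c(temp_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32) * 5 / 9

def f_to_k(temp_f):
    """Convert degrees Fahrenheit to kelvins."""
    return f_to_c(temp_f) + 273.15

# The inner-core estimate quoted in the transcript
print(round(f_to_c(9800)))  # ~5427 degrees Celsius
print(round(f_to_k(9800)))  # ~5700 kelvins
```

Expressed in metric units, the quoted inner-core estimate works out to roughly 5,400 degrees Celsius, or about 5,700 kelvins.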