An embedded operating system is a specialized OS for use in the computers built into larger systems. An embedded system is a computer that is part of a different kind of machine. Examples include computers in cars, traffic lights, digital televisions, ATMs, airplane controls, point of sale (POS) terminals, digital cameras, GPS navigation systems, elevators, digital media receivers and smart meters, among many other possibilities. In contrast to an operating system for a general-purpose computer, an embedded operating system is typically quite limited in function; depending on the device in question, the system may run only a single application. However, that single application is crucial to the device’s operation, so an embedded OS must be reliable and able to run under constraints on memory, size and processing power.
Pan-Afrikan History Month On February 7, 1926, Dr. Carter G. Woodson initiated “Negro History Week.” It has evolved into what is known today as African American History Month. In the context of our Pan-Afrikan module, we refer to the month of recognition as Afrikana. We do this to make the statement that Afrikan American history did not begin with the first slaves who came here as indentured servants in 1619. Afrikan history is the foundation of Afrikan American history and is necessary in the celebration of the month. We celebrate Afrikan History Month through various speakers, presentations and performances by both students and community members. Throughout the month, we share the history of Afrikan people all around campus. The Pan African Student Leadership Conference highlights the month. This annual conference provides students of Afrikan descent opportunities to develop their leadership skills and awareness of issues pertaining to Afrikan people everywhere. Pan-Afrikan History Month 2013 Events:
This innovative guide to the Latin language, written for a new generation of students, deploys examples and translation exercises taken exclusively from the Classical Latin canon.
- Translation exercises use real Latin from a variety of sources, including political speeches, letters, history, poetry, and plays, and from a range of authors, including Julius Caesar, Cicero, Virgil, Catullus, Ovid, and Plautus, among others
- Offers a variety of engaging, informative pedagogical features to help students practice and contextualize lessons in the main narrative
- Prepares students for immersion in the great works of Classical Latin literature
- A companion website provides additional exercises and drills for students and teachers
Question (Note: if you need more info about the bubble chamber, I put the intro to the question below the picture): If the magnetic field of the bubble chamber is pointing into the page, what is the charge on the pion that is produced by this decay? (The other two events seen in the upper left of the photo are photons decaying into electrons and positrons.)
a. the pion is positive
b. the pion is negative
c. the pion is neutral (no charge)
My Attempt: I have b marked, because after using the right hand rule, it appears that the pion goes in the opposite direction. However, I have figured out I'm doing the right hand rule wrong, so I'm not exactly sure how to do it. I have to explain my answer as well, so I need to know how to actually do it, not just the answer.
Info given about the Bubble Chamber: The "bubble chamber" was invented in 1952 and is used to study collisions between subatomic particles. A bubble chamber consists of a liquid (typically liquid hydrogen) maintained near its boiling point. A particle that streaks into the chamber perturbs the liquid enough to cause a phase change along its path, leaving a trail of bubbles that marks the path of the particle through the chamber. By placing the chamber in a magnetic field, it becomes quite easy to determine the charge of the particles from their paths through the chamber. The figure below shows the track of a neutral "kaon" (a subatomic particle) entering the bubble chamber straight up from the bottom of the picture and then decaying into a "pion", which curls off to the right in the picture.
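The right hand rule here is just the cross product in the magnetic force law F = q(v × B). A small sketch that works it out numerically — the coordinate choices (+y up the page, −z into the page) are my own setup of the geometry described in the question:

```python
import numpy as np

# Magnetic force on a charge: F = q (v x B).
# Geometry from the problem: the pion initially moves up the page (+y),
# and the field B points into the page (-z).
v = np.array([0.0, 1.0, 0.0])   # initial velocity: up the page
B = np.array([0.0, 0.0, -1.0])  # field: into the page

for q, name in [(+1, "positive"), (-1, "negative")]:
    F = q * np.cross(v, B)
    side = "right (+x)" if F[0] > 0 else "left (-x)"
    print(f"a {name} pion is initially pushed to the {side}")
```

Since v × B points in the −x direction here, a positive charge is pushed left and a negative charge is pushed right. The photo shows the pion curling off to the right, so on this reasoning the pion is negative (option b).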
Phase synchronization is the process by which two or more cyclic signals tend to oscillate with a repeating sequence of relative phase angles. Phase synchronization is usually applied to two waveforms of the same frequency with identical phase angles in each cycle. However, it can also be applied when there is an integer relationship of frequency, such that the cyclic signals share a repeating sequence of phase angles over consecutive cycles. These integer relationships are the so-called Arnold tongues, which follow from bifurcation of the circle map. One example of phase synchronization of multiple oscillators can be seen in the behavior of Southeast Asian fireflies. At dusk, the flies begin to flash periodically with random phases and a Gaussian distribution of native frequencies. As night falls, the flies, sensitive to one another's behavior, begin to synchronize their flashing. After some time all the fireflies within a given tree (or even larger area) will begin to flash simultaneously in a burst. Thinking of the fireflies as biological oscillators, we can define the phase to be 0° during the flash and ±180° exactly halfway until the next flash. Thus, when they begin to flash in unison, they synchronize in phase. One way to keep a local oscillator "phase synchronized" with a remote transmitter uses a phase-locked loop.
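The firefly behavior described above is the textbook setting for the Kuramoto model of coupled oscillators. A minimal simulation sketch follows; the oscillator count, coupling strength and frequency spread are illustrative assumptions, not values from any firefly study:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                  # number of "fireflies"
K = 2.0                                  # coupling strength (well above threshold)
omega = rng.normal(0.0, 0.1, N)          # Gaussian spread of native frequencies
theta = rng.uniform(0, 2 * np.pi, N)     # random initial phases, as at dusk

def order_parameter(theta):
    """r = 1 means perfect phase synchrony; r ~ 0 means incoherence."""
    return abs(np.exp(1j * theta).mean())

r_start = order_parameter(theta)
dt = 0.05
for _ in range(2000):
    # Kuramoto update: each oscillator is pulled toward the others' phases,
    # d(theta_j)/dt = omega_j + (K/N) * sum_i sin(theta_i - theta_j)
    coupling = (K / N) * np.sin(theta[:, None] - theta[None, :]).sum(axis=0)
    theta += dt * (omega + coupling)

r_end = order_parameter(theta)
print(f"order parameter: {r_start:.2f} -> {r_end:.2f}")
```

With coupling this strong relative to the frequency spread, the order parameter climbs from near zero (random flashing) toward 1 (flashing in unison), mirroring the fireflies' transition after nightfall.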
RAFT is a writing strategy that helps students understand their role as writers, the audience they will address, the varied formats for writing, and the topic they'll be writing about. The name is an acronym for Role, Audience, Format, and Topic. By using this strategy, teachers encourage students to write creatively, to consider a topic from a different perspective, and to gain practice writing for different audiences.
RAFT can be used as a prewriting strategy or as a complete assignment, and it works across the curriculum: a math RAFT prompt has learners think through a concept in writing; a science prompt might ask students to write an essay to Mother Nature lamenting the quality of the diet provided by the river; and a social studies prompt can grow out of any history topic. Completed RAFT assignments can be evaluated with a 6+1 Writing Traits rubric, and many example assignments, templates and rubrics are available online.
One example: a RAFT assignment for The Crucible casts the student as a young reporter for the Boston Herald newspaper, dispatched to Salem to conduct interviews with people involved in the recent witch trials.
Two reminders apply to any RAFT essay. First, rewriting is the essence of writing well, so give yourself enough time to look at the essay as a whole when revising drafts. Second, a common mistake when crafting the final draft is incomplete references: students hurrying to finish often forget the proper formatting of in-text citations and sources in the reference list.
By Leah Baines For thousands of years, natural fibres have been at the core of the textile industry. From cloth, to paper and building materials, natural fibres were always the base material. According to the United Nations Food and Agriculture Organization, natural fibres are substances produced by plants and animals that can be spun into filaments or thread. Natural fibres originate from either plant fibres, such as coir, cotton and flax, or animal fibres such as camel hair, alpaca wool, and cashmere. As a completely renewable resource, natural fibres provide many benefits both to the environment and to those involved in the market that they create. Over the last 50 years, natural fibres have started to become displaced by synthetic, man-made materials such as polyester, acrylic and nylon. These materials are much cheaper and easier to manufacture in bulk, and easily create uniform colors, lengths and strengths of materials that can be adjusted according to specific requirements. The production of synthetic materials, however, is a strong contributor to carbon emissions and waste. According to the United Nations Industrial Development Organization, it is estimated that every person in the world is responsible for 19.8 tons of carbon dioxide emissions in their lifetime, simply because of the clothes on their back that include synthetic fibres. Unlike synthetic fibres, natural fibres not only come from the environment, but also benefit it. These fibres are renewable, carbon neutral, biodegradable and also produce waste that is either organic or can be used to generate electricity or make ecological housing material.
Interference, in physics, can refer to two phenomena. The most common is wave interference. This happens when two or more waves meet in the same place, resulting in the waves either combining or cancelling each other out. When coherent waves with similar frequencies meet, the result can be a consistent interference pattern. The other phenomenon is communication interference, which is when a radio wave signal becomes distorted. There are many different types of waves. Electromagnetic waves are made of oscillating electric and magnetic fields and move at the speed of light. Visible light, X-rays, microwaves and ultraviolet (UV) light are all examples of electromagnetic waves. Sound also is a wave, although it travels differently than light and can’t move in a vacuum. When two waves collide, the effect is known as wave interference. The waves pass through each other but, while in the same location, interact with one another. The result is a change in amplitude, or size, of the combined wave. There are two types of wave interference, known as constructive and destructive. If two waves meet at their greatest point, then the two waves add together; this is known as constructive interference. It creates a wave that’s double the size while the crests of the waves are overlapping. The same happens if the two waves meet each other at their lowest points. Destructive interference happens when two waves meet each other at opposite points of oscillation. If, for example, one wave is at its positive peak and another at its negative peak, then the waves cancel each other out. For waves with exactly the same amplitude, the result is no wave at the point of collision. All waves passing through each other show wave interference, but this is random if the waves come from different sources with different frequencies.
Interference can be used for practical purposes if two waves are coherent, which means they have very similar frequencies. This is because two waves of the same frequency will consistently meet each other in the same point of oscillation. For example, if the waves meet at a point where they are exactly in sync, then the resulting wave will have double the amplitude. In communication, interference has a different meaning. Radio wave communication experts use the term to refer to anything that causes distortion in the wave. Other electromagnetic waves, for example, can often cause distortion.
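The constructive and destructive cases described above are easy to verify numerically. A small sketch (the frequency and sample grid are arbitrary choices for illustration):

```python
import numpy as np

t = np.linspace(0, 1, 1000)
f = 5.0                                    # common frequency, Hz (illustrative)
w1 = np.sin(2 * np.pi * f * t)             # first wave, amplitude 1

# Constructive: second wave exactly in phase -> amplitudes add (peak near 2)
in_phase = w1 + np.sin(2 * np.pi * f * t)

# Destructive: second wave shifted by half a cycle -> the waves cancel
out_of_phase = w1 + np.sin(2 * np.pi * f * t + np.pi)

print(f"constructive peak: {in_phase.max():.3f}")
print(f"destructive peak:  {abs(out_of_phase).max():.2e}")
```

Two equal-amplitude waves in phase sum to double the amplitude, while the half-cycle shift leaves only floating-point noise, matching the "no wave at the point of collision" case.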
Conclusion for a lab report
The conclusion of a lab report encapsulates the purpose, methods, outcome and evaluative analysis of a scientific experiment. When you are assigned a lengthy lab report, it is important to include a conclusion paragraph that sums up your procedures and results for your reader: restate your goals and methods, include any final data, and note whether you were able to answer the questions posed by your experiment. The conclusion is your opportunity to show your lab instructor what you learned by doing the lab and writing the report, so go back to the purpose of the lab as you presented it in your introduction and make a clear statement of what you learned. A strong conclusion also contains a very brief summary of the experiment, states whether it was successful, and compares the result obtained with available theoretical or published values.
Some concrete examples of conclusion content: a titration lab that mixed 2 M HCl with NaOH of unknown concentration might report the experimentally determined NaOH concentration of 1 M; a circular-motion lab might report that the data show a direct relationship between velocity and centripetal force, as hypothesized; a microbiology unknown lab might report that sample #123 contained two different bacteria, one Gram positive and one Gram negative.
Format conventions vary by course and field. Many courses require a bound laboratory notebook in which all reports are written, and lab report formats are often modeled on (but modified from) those of scientific papers, with the abstract, introduction, experimental methods and materials, results, discussion, conclusion and references as the standard parts. The conclusion, along with the introduction, is often the most difficult section to write, largely because of uncertainty about what belongs in it; yet lab reports are the most frequent kind of document written in engineering and can count for as much as 25% of a course, so the conclusion deserves real attention.
Black History Month is a time spent recognizing the achievements of African-Americans and their role in U.S. history. Like any other period of observance, this celebration has a history of its own. The history behind Black History Month is one that few know, but many wonder about. Here are the answers to the questions you’ve been wondering about.
Q: What is the origin of Black History Month?
A: Carter G. Woodson, born December 19, 1875, founded Negro History Week (which later became Black History Month) in 1926. The goal of the celebration was to create and popularize knowledge about the black past.
Q: Are there other countries that celebrate Black History Month?
A: Canada celebrates Black History Month in February along with America. The United Kingdom also celebrates Black History Month, but in October.
Q: Why is Black History Month celebrated in February?
A: The month of February was chosen because it includes the births of both Abraham Lincoln and Frederick Douglass, who were both instrumental in the passage of the 13th Amendment, which abolished slavery.
Q: Is it true that Black History Month is themed?
A: Yes! Every year there’s a different theme for February’s celebration of Black History Month. The theme is announced annually by the Association for the Study of African American Life and History (ASALH) in Washington, DC. This year’s theme is “The Golden Jubilee of the Civil Rights Act.” Past themes have included:
- 2000 Heritage and Horizons: The African American Legacy and the Challenges for the 21st Century
- 2001 Creating and Defining the African American Community: Family, Church, Politics and Culture
- 2002 The Color Line Revisited: Is Racism Dead?
- 2003 The Souls of Black Folks: Centennial Reflections
- 2004 Before Brown, Beyond Boundaries: Commemorating the 50th Anniversary of Brown v.
Board of Education
- 2005 The Niagara Movement: Black Protest Reborn, 1905-2005
- 2006 Celebrating Community: A Tribute to Black Fraternal, Social, and Civil Institutions
- 2007 From Slavery to Freedom: Africans in the Americas
- 2008 Carter G. Woodson and the Origins of Multiculturalism
- 2009 The Quest for Black Citizenship in the Americas
- 2010 The History of Black Economic Empowerment
- 2011 African Americans and the Civil War
- 2012 Black Women in American Culture and History
- 2012 President Barack Obama National Black History Month Proclamation
Graham Elsdon looks at ways literature students can usefully write about drama
By the play’s final scene, Macbeth sees life as ‘a poor player that struts and frets his hour upon the stage’. There is a metatheatrical quality to many of Shakespeare’s works, yet some students find it hard to write about plays as plays. Drama exists in a strange hinterland for a literature student: do you write about it as if you were a drama student, commenting on movement, proxemics and costume choices, or do you treat it like poetry? How might a student write about dramatic method and tie it to the meanings of the play?
Page and stage
Part of the problem lies in the way students experience a play, often reading it ‘on the page’ rather than seeing the physicality of the action. When you read a novel, the narrator acts as your guide, explaining how characters are feeling and how they behave. But drama shows, rather than tells. In drama, all you have are the words the characters speak. So rather than the action taking place in your head (as it does when you read a novel), drama is the product of a director’s interpretation of the way lines are delivered, facial expression, movement, costume, setting, staging and timing. Yet writing about dramatic method is less about performance and more about the larger structural choices of the playwright. As starting points, you might consider:
- when and why characters enter and exit the stage
- who speaks most in scenes and who is silent
- how aside and soliloquy are used
- how dialogue is connected to power
- what the audience – and the characters – do and don’t know at various points in the story.
Linking method and meaning
As an example, let’s look at dramatic method in 1.6, where Duncan arrives at Macbeth’s castle and a conversation ensues between the King and Lady Macbeth.
The scene can be found at http://shakespeare.mit.edu/macbeth/macbeth.1.6.html
One of the most important aspects of this scene concerns what the audience and characters do and don’t know. Shakespeare positions the audience so that they have superior knowledge over some of the characters: before this scene commences, the audience know that Duncan’s death has been discussed. We have already listened to Lady Macbeth’s regicidal thoughts, and consequently, when 1.6 is performed, we have the darkly dramatic irony of Duncan and Banquo praising the outwardly welcoming appearance of Macbeth’s castle. Notice how the dramatic method supports the meanings of the play: the setting for this scene is the exterior of the castle, so we see a dramatic symbol of the play’s concern with appearance, reality and deception. Duncan’s appreciation of the castle’s ‘pleasant seat’ is in stark contrast to the horrors which wait within.
The dramatic irony also supports the tragic meanings of the play. In choosing to show Duncan and Banquo as characters who are in the dark about the forthcoming murder, Shakespeare allows us to understand how deception works, but also offers a tragic significance: the unknowing quality of life, and the way in which an innocent, good man is hours away from his death. Here is a good example of how students can link an aspect of dramatic method both to moral readings of the play and to genre approaches.
Exits and entrances
It’s easy to overlook stage directions, but at the start of 1.6 you will notice the sheer number of characters who enter the stage, yet only two of these characters speak. The effect is one of power: the King’s retinue throng the stage and, of course, the most powerful person in the world of the text speaks first. Shakespeare’s decision to give dialogue to Banquo is interesting, as it suggests his centrality, allying him to Duncan, setting him up in opposition to the Macbeths, and reinforcing his essential goodness.
The most important entrance is that of Lady Macbeth. Shakespeare brings her on stage after the innocent dialogue between Duncan and Banquo. It is an interesting moment, because she appears in front of all the Thanes and Duncan’s sons, having just considered regicide. Her dialogue with Duncan provides a nice dramatic contrast with that of Banquo, and once more underlines a meaning of the play – the way in which the innocent flower hides the serpent. Small dramatic details help this, such as Duncan’s request ‘Give me your hand’, which on stage adds a horrible frisson as the victim willingly takes the plotter’s hand. As the above shows, dramatic method is useful to write about when it is related to meanings in the play. Students may wish to look at the scene which follows 1.6, where the Macbeths’ discussion about the merits of murder is rich in method, with the focus being on soliloquy, entrances, dialogue and shifts in power. Amongst all the strutting and fretting, meaning is central. Life may well be a tale told by an idiot, but on stage, it always signifies something. Graham Elsdon is a teacher, author and consultant at www.theenglishline.com
What Is It? The "greenhouse effect" is a complicated process by which the earth is becoming progressively warmer. The earth is bathed in sunlight, some of it reflected back into space and some absorbed. If the absorption is not matched by radiation back into space, the earth will get warmer until the intensity of that radiation matches the incoming sunlight. Some atmospheric gases absorb outward infrared radiation, warming the atmosphere. Carbon dioxide is one of these gases; so are methane, nitrous oxide, and the chlorofluorocarbons (CFCs). The concentrations of these gases are increasing, with the result that the earth is absorbing more sunlight and getting warmer. This greenhouse phenomenon is truly the result of a "global common" (see The Tragedy of the Commons). Because no one owns the atmosphere, no one has a sufficient incentive to take account of the change to the atmosphere caused by his or her emission of carbon. Also, carbon emitted has the same effect no matter where on earth it happens. How Serious Is It? The expected change in global average temperature for a doubling of CO2 is 1.5 to 4.5 degrees centigrade. But translating a change in temperature into a change in climates is full of uncertainties. Meteorologists predict greater temperature change in the polar regions than near the equator. This change could cause changes in circulation of air and water. The results may be warmer temperatures in some places and colder in others, wetter climates in some places and drier in others. Temperature is useful as an index of climate change. A band of about one degree covers variations in average temperatures since the last ice age. This means that climates will change more in the next one hundred years than in the last ten thousand. But to put this in perspective, remember that people have been migrating great distances for thousands of years, experiencing changes in climate greater than any being forecast. 
The models of global warming project only gradual changes. Climates will "migrate" slowly. The climate of Kansas may become like Oklahoma's, but not like that of Oregon or Massachusetts. But a caveat is in order: the models probably cannot project discontinuities because nothing goes into them that will produce drastic change. There may be phenomena that could produce drastic changes, but they are not known with enough confidence to introduce into the models. Carbon dioxide has increased about 25 percent since the onset of the industrial revolution. The global average temperature rose almost half a degree during the first forty years of this century, was level for the next forty, and rose during the eighties. Yet whether or not we are witnessing the greenhouse effect is unknown because other decades-long influences such as changes in solar intensity and in the atmosphere's particulate matter can obscure any smooth greenhouse trend. In other words, the increase in carbon dioxide will, by itself, cause the greenhouse effect, but other changes in the universe may offset it. Even if we had confident estimates of climate change for different regions of the world, there would be uncertainties about the kind of world we will have fifty or a hundred years from now. Suppose the kind of climate change expected between now and, say, 2080 had already taken place, since 1900. Ask a seventy-five-year-old farm couple living on the same farm where they were born: would the change in the climate be among the most dramatic changes in either their farming or their lifestyle? The answer most likely would be no. Changes from horses to tractors and from kerosene to electricity would be much more important. Climate change would have made a vastly greater difference to the way people lived and earned their living in 1900 than today. Today, little of our gross domestic product is produced outdoors, and therefore, little is susceptible to climate. 
Agriculture and forestry are less than 3 percent of total output, and little else is much affected. Even if agricultural productivity declined by a third over the next half-century, the per capita GNP we might have achieved by 2050 we would still achieve in 2051. Considering that agricultural productivity in most parts of the world continues to improve (and that many crops may benefit directly from enhanced photosynthesis due to increased carbon dioxide), it is not at all certain that the net impact on agriculture will be negative or much noticed in the developed world.

Its Effects on Developing Countries

Climate changes would have greater impact in underdeveloped countries. Agriculture provides the livelihoods of 30 percent or more of the population in much of the developing world. While there is no strong presumption that the climates prevailing in different regions fifty or a hundred years from now will be less conducive to food production, those people are vulnerable in a way that Americans and west Europeans are not. Nor can the impact on their health be dismissed. Parasitic and other vectorborne diseases affecting hundreds of millions of people are sensitive to climate. Yet the trend in developing countries is to be less dependent on agriculture. If per capita income in such countries grows in the next forty years as rapidly as it has in the forty just past, vulnerability to climate change should diminish. This is pertinent to whether developing countries should make sacrifices to minimize the emission of gases that may change climate to their disadvantage. Their best defense against climate change will be their own continued development. Population is an important factor. Carbon emissions in developing countries rise with population. For instance, if China holds population growth to near zero for the next couple of generations, it may do as much for the earth's atmosphere as would a heroic anticarbon program coupled with 2 percent annual population growth.
Furthermore, the most likely adverse impact of climate change would be on food production, and in the poorest parts of the world the adequacy of food depends on the number of mouths.

Why Should Developed Countries Do Anything?

Why might developed countries care enough about climate to do anything about it? The answer depends on how much people in developed countries care about people in developing countries and on how expensive it is to do something worthwhile. Abatement programs in a number of econometric models suggest that doing something worthwhile would cost about 2 percent of GNP in perpetuity. Two percent of the U.S. GNP is over $100 billion a year, and that is an annual cost that would continue forever. One argument for doing something is that the developing countries are vulnerable, and we care about their well-being. But if the developed countries were prepared to invest, say, $200 billion a year in greenhouse gas abatement, explicitly for the benefit of developing countries fifty years or more from now, the developing countries would probably clamor, understandably, to receive the resources immediately in support of their continued development. A second argument is that our natural environment may be severely damaged. This is the crux of the political debate over the greenhouse effect, but it is an issue that no one really understands. It is difficult to know how to value what is at risk, and difficult even to know just what is at risk. The benefits of slowing climate change by some particular amount are even more uncertain. A third argument is that the conclusion I reported earlier—that climates will change slowly and not much—may be wrong. The models do not produce surprises. The possibility has to be considered that some atmospheric or oceanic circulatory systems may flip to alternative equilibria, producing regional changes that are sudden and extreme. A currently discussed possibility is in the way oceans behave.
If the Gulf Stream flipped into a new pattern, the climatic consequences might be sudden and severe. (Paradoxically, global warming might severely cool western Europe.) Is 2 percent of GNP forever, to postpone the doubling of carbon in the atmosphere, a big number or a small one? That depends on what the comparison is. A better question—assuming we were prepared to spend 2 percent of GNP to reduce the damage from climate change—is whether we might find better uses for the money. I mentioned one such use—directly investing to improve the economies of the poorer countries. Another would be direct investment in preserving species or ecosystems or wilderness areas, if the alternative is to invest trillions in the reduction of carbon emissions.

What Solutions Are Proposed?

What can be done to reduce or offset carbon emissions? Reducing energy use and the carbon content of energy have received most of the attention. There are other possibilities. Trees store carbon. A new forest will absorb carbon until it reaches maturity; it then holds its carbon but does not absorb more. The area available for reforestation throughout the world suggests that reforestation can contribute, but not much. Stopping or slowing deforestation is important for other reasons but is quantitatively more important than reforestation, partly because forest subsoils typically contain more carbon than the trees themselves, and this carbon is subject to oxidation when the trees are removed. Also, substances or objects can be put in orbit or in the stratosphere to reflect incoming sunlight. Some of these are as apparently innocuous as stimulating cloud formation and some as dramatic as huge mylar balloons in low earth orbit. If in decades to come the greenhouse impact confirms the more alarmist expectations, and if the costs of reducing emissions prove unmanageable, some of these "geoengineering" options will invite attention.
The main responses will be to adapt as the climate changes and to reduce carbon emissions. (CFCs are potent greenhouse gases and, if unchecked, might have rivaled carbon dioxide in decades to come. International actions to reduce or eliminate CFCs are making progress and are among the cheapest ways of reducing greenhouse emissions.) It is improbable that the developing world, at least for the next several decades, will incur any significant sacrifice in the interest of reduced carbon, nor would it be advisable. Financing energy conservation, energy efficiency, and a switch from high-carbon to lower-carbon or noncarbon fuels in Asia and Africa would not only be a major economic enterprise, but also a complex effort in international diplomacy and politics. If successful, it would increase the costs to the developed world by at least another percent or two on top of the 2 percent I mentioned. A universal carbon tax is a popular proposal among economists because it promises an efficient solution. A carbon tax set equally for all users worldwide would achieve a given reduction in the use of carbon at the lowest cost. If user A values his use of one ton of carbon at two thousand dollars more than its net-of-tax price, and if the tax is four hundred dollars per ton, he will continue to use the carbon because doing so is worthwhile. If user B values his use of one ton at only three hundred dollars more than the net-of-tax price, the tax will induce him to end his use. Thus the tax would eliminate the lowest-valued uses of carbon and would leave the highest-valued ones in place. A carbon tax would require no negotiation except over a tax rate and a formula for distributing the proceeds. 
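The user A / user B logic above can be sketched numerically. In this toy illustration, users A and B carry the valuations from the text, while users C and D are invented to round out the example:

```python
# Sketch of why a uniform carbon tax is efficient: each user keeps emitting a
# ton only if the value of that use (over the net-of-tax price) exceeds the
# tax, so exactly the lowest-valued uses are eliminated.
# A's and B's valuations come from the text; C and D are invented.
valuations = {"A": 2000, "B": 300, "C": 450, "D": 90}  # dollars per ton
TAX = 400  # dollars per ton

keeps_using = sorted(u for u, v in valuations.items() if v > TAX)
priced_out = sorted(u for u, v in valuations.items() if v <= TAX)

print("Keep using carbon:", keeps_using)  # the highest-valued uses survive
print("Priced out:", priced_out)          # the lowest-valued uses are abated
# Every abated ton was worth less than the $400 tax to its user, so a given
# total reduction is achieved at the lowest total forgone value.
```

No central planner needs to know anyone's valuation; the single tax rate sorts the uses automatically, which is the efficiency property economists prize.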
But a tax rate that made a big dent in the greenhouse problem would have to be equivalent to around a dollar per gallon on motor fuel, and for the United States alone such a tax on coal, petroleum, and natural gas would currently yield close to half a trillion dollars per year in revenue, almost 10 percent of our GNP. It is doubtful that any greenhouse taxing agency would be allowed to collect that kind of revenue, or that a treaty requiring the United States to levy internal carbon taxation at that level would be ratified. Tradable permits have been proposed as an alternative to the tax. The main possibilities are estimating "reasonable" emissions country by country and establishing commensurate quotas, or distributing tradable rights in accordance with some "equitable" criterion. Depending on how restrictive the emission rights might be, the latter amounts to distributing trillions of dollars (in present value terms), an unlikely prospect. If quotas are negotiated to correspond to countries' currently "reasonable" emissions levels, they will surely be renegotiated every few years, and selling an emissions right will be perceived as evidence that a quota was initially too generous. A helpful model for conceptualizing a greenhouse regime among the richer countries is the negotiations among the nations of Western Europe for distributing Marshall Plan aid after World War II. There was never a formula or explicit criterion, such as equalizing living standards, maximizing aggregate growth, or establishing a floor under levels of living. Baseline dollar-balance-of-payments deficits were a point of departure, but the negotiations took into account other factors such as investment needs and traditional consumption levels. The United States insisted that the recipients argue out and agree on shares. In the end they did not quite make it, the United States having to make the final allocation. 
But all the submission of data and open argument led, if not to consensus, to a reasonable appreciation of each nation's needs. Distribution of Marshall Plan funds is the only model of multilateral negotiation involving resources commensurate with the cost of greenhouse abatement. (In the first year Marshall Plan funds were about 1.5 percent of U.S. GNP and—adjusting for overvalued currencies—probably 5 percent of recipient countries' GNP.) What the Marshall Plan model suggests is that the participants in a greenhouse regime would submit for each other's scrutiny and cross-examination plans for reducing carbon emissions. The plans would be accompanied by estimates of emissions, but any commitments would be to the policies, not the emissions. The alternative is commitments to specific levels of emissions. Because target dates would be a decade or two in the future, monitoring a country's progress would be more ambiguous than monitoring the implementation of policies. Thomas C. Schelling is a professor of economics at the University of Maryland School of Public Affairs in College Park. For most of his professional life he was an economics professor at Harvard University. In 1991 he was president of the American Economic Association. He is an elected member of the National Academy of Sciences.
"TB" is short for tuberculosis. TB disease is caused by a germ, or bacteria, called Mycobacterium tuberculosis. The bacteria usually attack the lungs, but TB bacteria can attack any part of the body such as the kidney, spine, and brain. If not treated properly, TB disease can be fatal. TB is spread through the air from one person to another. When a person with active TB disease of the lungs or throat coughs, sneezes or speaks, the TB bacteria can be spread into the air. People who are near to that person may breathe in these bacteria and become infected. TB is NOT spread by: - shaking someone's hand - sharing food or drink - touching bed linens or toilet seats - sharing toothbrushes Not everyone infected with TB bacteria becomes sick. There are two kinds of TB: latent TB infection and active TB disease. Latent TB Infection TB bacteria can live in your body without making you sick. This is called Latent TB Infection (LTBI). The immune system of people with LTBI is able to fight the TB bacteria to stop them from growing. People with LTBI do not feel sick and do not have any symptoms. The only sign of TB infection is a positive reaction to a skin test or special TB blood test. People with LTBI are not infectious and cannot spread TB to others. However, if TB bacteria becomes active and starts to multiply, the person will get sick with TB disease. Many people who have LTBI never develop TB disease. Active TB Disease TB bacteria become active if the immune system can’t stop them from growing. When TB bacteria are active (multiplying in your body), this is called active TB disease. TB disease will make you sick. Some people develop TB disease soon after becoming infected (within weeks) before their immune system can fight the TB bacteria. Other people may get sick years later, when their immune system becomes weak for another reason. People with TB disease can spread the bacteria to other people. 
| A Person with Latent TB Infection | A Person with TB Disease |
| --- | --- |
| Has no symptoms | Has symptoms such as a bad cough lasting 3 weeks or longer, chest pain, coughing up blood or sputum, weakness or fatigue, weight loss, chills, fever, and night sweats |
| Does not feel sick | Usually feels sick |
| Cannot spread TB bacteria to others | May spread TB bacteria to others |
| Usually has a skin test or blood test result indicating TB infection | Usually has a skin test or blood test result indicating TB infection |
| Has a normal chest x-ray and a negative sputum smear | May have an abnormal chest x-ray, or a positive sputum smear or culture |
| Needs treatment for latent TB infection to prevent active TB disease | Needs treatment for active TB disease |

TB bacteria can become resistant to the medicines used to treat TB disease. This means that the medicine can no longer kill the bacteria. Resistance to TB drugs can occur when these drugs are not used or taken properly:
- when patients do not complete their full course of treatment
- when health-care providers prescribe the wrong treatment, the wrong dose, or the wrong length of time for taking the drugs
- when the supply of drugs is not always available
- when the drugs are of poor quality

Multidrug-Resistant TB (MDR TB)

MDR TB is TB that is resistant to at least the two best anti-TB drugs, isoniazid and rifampin. These drugs are used to treat all persons with TB disease.

Extensively Drug-Resistant TB (XDR TB)

Extensively drug-resistant TB (XDR TB) is a rare kind of MDR TB. XDR TB is TB that is resistant to isoniazid and rifampin, plus resistant to any fluoroquinolone and at least one of three injectable second-line drugs (i.e., amikacin, kanamycin, or capreomycin). Because XDR TB is resistant to many of the first- and second-choice drugs, patients are left with treatment options that are much less effective.

For people whose immune systems are weak, especially those with HIV infection, the risk of getting active TB disease is much higher than for people with normal immune systems.
While there are fewer people in this country suffering from TB, it remains a serious threat, especially for HIV-infected persons. In fact, TB is one of the leading causes of death among people infected with HIV. Without treatment, people with HIV and TB may have a shorter life than expected. People with untreated LTBI and HIV are much more likely to develop active TB disease than people without HIV infection. The good news is that HIV-infected persons with either LTBI or active TB disease can be effectively treated. The first step is to ensure that HIV-infected persons get a test for TB infection and any other needed tests. The second step is to help the people found to have either latent TB infection or active TB disease get the treatment they need.
No matter how you started on the path to learning more about dyslexia, we are all walking together. By promoting structured literacy through research, education and advocacy, we at the IDA-RMB hope we can answer some questions for you along the way.

What is Dyslexia?

Dyslexia is not a disease, and it has no cure. Dyslexia is a learning disability that affects one's ability to easily process written and/or verbal language. It is the most common cause of reading, writing and spelling difficulties. Furthermore, it affects males and females nearly equally, as well as people from different ethnic and socioeconomic backgrounds. Dyslexia results from differences in the structure and function of the brain. This neurological difference causes individuals with dyslexia to learn differently. The problem is not behavioral, psychological, motivational, or social. It is not a problem of vision; people with dyslexia do not "see backward." The following definition of dyslexia was adopted by the IDA Board of Directors on November 12, 2002. This definition is also used by the National Institute of Child Health and Human Development (NICHD):

Dyslexia is a specific learning disability that is neurological in origin. It is characterized by difficulties with accurate and/or fluent word recognition and by poor spelling and decoding abilities. These difficulties typically result from a deficit in the phonological component of language that is often unexpected in relation to other cognitive abilities and the provision of effective classroom instruction. Secondary consequences may include problems in reading comprehension and reduced reading experience that can impede growth of vocabulary and background knowledge.

What Characteristics Accompany Dyslexia?

(Note that few individuals with dyslexia exhibit all the potential signs.)
- Delayed spoken language
- Errors in letter naming
- Difficulty in learning and remembering printed words
- Repeated spelling errors
- Difficulty with handwriting
- Difficulty in finding the "right" word when speaking
- Slow rate of writing
- Deficient written and oral language skills
- Uncertainty as to right- or left-handedness
- Difficulties in mathematical calculations
- Difficulties with language in math problems
- Similar problems among relatives
- Difficulty with organization
- Lack of awareness of sounds in words, sound order, rhymes, or sequence of syllables
- Difficulty decoding words – single word identification
- Difficulty encoding words – spelling
- Poor sequencing of numbers, of letters in words, when read or written, e.g.: b-d; sing-sign; left-felt; soiled-solid; 12-21
- Problems with reading comprehension
- Difficulty expressing thoughts in written form
- Imprecise or incomplete interpretation of language that is heard
- Difficulty in expressing thoughts orally
- Confusion about directions in space or time (right & left, up & down, early & late, yesterday & tomorrow, months & days)
Research: Sudden Infant Death Syndrome

Ongoing research is being conducted to determine the cause of sudden infant death syndrome. Recent studies have revealed some possible leads.

Brain Abnormalities and SIDS

Some doctors theorize that SIDS babies may have a defect in the arcuate nucleus, which is the area of the brain that regulates blood pressure, breathing, and body temperature, along with sleeping and waking processes. More specifically, this defect appears to affect serotonin, a neurotransmitter that carries messages between the brain and nerve cells. Researchers discovered that SIDS babies often had problems with the way serotonin functioned in their brains. For example, if a baby without this defect breathes stale air while sleeping, an automatic reaction triggers the baby to move or to wake up and cry, increasing oxygen intake and adjusting the heart rate. But a baby born with this brain defect is not able to respond to the trigger because the message concerning the stale air is not being transmitted properly. This could result in SIDS. This brain abnormality might work in a similar way when a baby is overheated. Normally, if a baby is too warm, he or she will wake up and move or cry. But in a baby who has this defect, the brain might not trigger that response, again possibly leading to SIDS. Defects in other parts of the body may also be responsible for increasing SIDS risk. For example, some abnormalities may form if a fetus is exposed to a toxic substance, such as cigarette smoke. Also, fetuses of mothers who smoke during pregnancy may not receive the right amount of oxygen before birth, making the baby more sensitive to changes in oxygen levels after birth.

Dangers of Covers Over the Infant's Head

In June 2008, a team of researchers from New Zealand reported that covering the head (e.g., with blankets) might play a role in SIDS deaths.
After reviewing SIDS cases in New Zealand and Germany, the researchers found that in 15.6 percent of New Zealand SIDS cases and 28.1 percent of German SIDS cases, the infants' heads were covered and sweaty before they died. This occurs more often in older infants than in younger babies, corresponding to the infant's growing ability to move in the crib.

Bacterial Infections and SIDS

In May 2008, British researchers announced a possible connection between Staphylococcus aureus and Escherichia coli (E. coli) infections and SIDS. In this study, samples of bacteria were taken from 470 infants who had died before their first birthday and were examined. Of the group, researchers discovered higher levels of bacteria in babies whose deaths were unexplained and attributed to SIDS than in those whose deaths were attributed to causes that could be explained. It was noted that around 8 to 10 weeks of age, a common age for SIDS occurrences, babies normally start to lose the antibodies they were born with, which were obtained through the placenta. Because infants in the study had not yet built up enough of their own antibodies, they might not have been able to fight off infection. It is important to note, however, that this connection is still considered a preliminary finding that needs further research. There is not enough evidence to say that bacterial infections can cause SIDS.

Inner Ear Abnormality and SIDS

In 2007, a connection between newborn hearing and SIDS was reported. In this study, newborn hearing test results of 31 infants who had died of SIDS were evaluated, and it was determined that each baby had an abnormality in the right inner ear. It may be possible that this inner ear abnormality is connected to the ability of the infant's body to adjust the respiratory system when carbon dioxide levels increase. Inner ear damage may occur during delivery, particularly if the mother is in labor for more than 16 hours.
Usually, this type of injury heals by the time the baby is 6 months old; the age of 6 months is generally when the incidence of SIDS starts to decrease. However, further research is needed to determine whether this inner ear abnormality can predict a higher risk for SIDS. Also, newborn hearing tests are not regulated worldwide, so a more reliable measure would be necessary to accurately predict risk.

Genetics and SIDS

In January 2007, researchers reported that approximately 10% of SIDS babies have a mutation in a heart gene that can cause deadly changes in heart rhythms (arrhythmias). Further research is necessary to determine whether widespread heart screenings for newborns are warranted and, if so, the best procedures to use. A mutation in a heart protein was discovered in 2006. When this defect is present, it increases the risk for SIDS in African American babies 24-fold. This genetic mutation causes an irregular heartbeat when oxygen levels have lowered. In 2004, researchers in Lancaster County, Pennsylvania, and Phoenix, Arizona, reported the discovery of a mutated gene that had been passed down through two generations of nine Amish families. This gene mutation affects the body's ability to regulate breathing and heart rate. In this study, 21 babies in the families who had this mutation died of SIDS. Researchers are now studying the gene in non-Amish children to determine if the mutation is common in non-Amish populations.

Depression in Mothers and SIDS

In a study conducted in 2007, researchers reported that babies born to mothers who had experienced depression during the year before delivery were five times more likely to die of SIDS than babies whose mothers had no history of depression. However, more research is needed to determine if additional factors, such as low birth weight and premature birth, are related to this link.

Air Quality and SIDS

A connection between pollution and SIDS in California was reported in 2006.
By studying levels of carbon monoxide, nitrogen dioxide, ozone, and other air particles at different intervals before infant deaths, researchers concluded that exposure to high levels of pollution increased the risk for infant death before the age of 1 year. The study also indicated a higher risk for premature infants and infants with low birth weight; however, it did not explore the roles of other factors, such as secondhand smoke and infant sleeping position. The study also noted the relationship between pollution and death from SIDS and other respiratory diseases, not SIDS alone.

Testosterone and SIDS

In 2006, researchers reported unusually high levels of testosterone in both male and female infants who had died of SIDS when compared to babies who had died of other causes. High testosterone levels can be linked to a decrease in ventilation among sleeping adults, and ongoing studies are being conducted to determine if there is a cause-and-effect relationship between these levels and SIDS.

Other SIDS Research

According to the American SIDS Institute, "…most (60–70 percent) of the [SIDS] deaths are related to a subtle chronic abnormality, which occurs before birth. At this time, we do not know the specific pattern or nature of this chronic abnormality." The Institute plans to conduct extensive research to develop tests that will help doctors identify the abnormality at birth. In addition, researchers will conduct studies during pregnancy to help determine what prenatal factors might lead to this abnormality and how to prevent the abnormality from occurring.
Schools must devote most of their instruction time to reading, writing and math, leaving parents to wonder: where do creativity and critical thinking fit in the educational process? Many of us worry that imagination is not being nurtured as much as it should be to encourage creative thinking. Where do children learn to free themselves from the expectations of others, to pursue their passions?

Children's Creative Qualities

Children naturally possess several creative qualities:
- Curiosity. Children are inherently curious. They want to explore to find their own answers to perplexing problems. Their probing nature is a creative characteristic.
- Flexibility. Children find unexpected ways to solve problems. While their solutions may be amusing or curious, a flexible approach to problem solving is essentially a creative process.
- Originality. Creative and critical thinking require the ability to explore, without prejudice, a brand-new solution. For example, a child may come up with unusual ways to tie his shoes.
- Risk taking. Great thinkers and great artists took risks when they proposed new theories or painted or composed in a new style. In the same way, children need to be allowed to pursue nontraditional solutions.

Promoting Creativity in Your Child

Promoting creativity in children requires time and patience. It requires not answering every question a child asks, but asking children what their thoughts are about the question. It requires believing in and supporting your child's natural desire to explore and be curious. Here are some suggestions for nurturing creativity in your child:
- Picture this. With young children, picture books are a great way to start. Look at the pictures together, then ask your child what he thinks is going on. Ask him to make up his own story about what he thinks is happening in the book. Show your appreciation for your child's version of the story. Explain that there can be many different stories for the same book.
- Make it safe to dream. Ask your child to imagine things he would like to do or places he would like to go. On the way to child care or school, point at the clouds and talk about what birds or astronauts might see if they were looking down from there. Do they see the clouds, or are they too far away? Everyday sights are a chance to explore the unknown: I wonder if the people who operate that pizza restaurant ever go to Italy. Do you think they serve the same kind of pizza in Italy? Maybe we can go there some day.
- Encourage experimentation (the safe kind). Children love to express themselves with words, art, music and movement. Let them. Often, their satisfaction is in the process, not the product. With the emphasis on academics in schools these days, giving children the opportunity to "let themselves go" in artistic exploration can be very important in releasing the stress they may experience throughout their day. It may also play a critical part in their ability to do some of their schoolwork. A little swing dancing can make sitting at the table and doing math problems a little more palatable.
The evolution of cooperative behavior is usually interpreted in terms of genetic benefits. By helping out relatives, individuals can ensure that a greater percentage of their genetic legacy gets handed down to future generations. This cooperation can actually take place before the next generation is born—in a number of species of birds, related males perform courtship displays cooperatively. This sort of cooperative behavior can extend to non-relatives as well. In some species, all the males taking part in courtship displays get a chance to do some mating. But that doesn't seem to be the case with the species shown at right, the lance-tailed manakins. The somewhat-blurry picture catches an alpha male and a beta engaged in a cooperative courtship display, facing a (possibly impressed) female. New research using DNA testing shows that the two males are unrelated and, in the vast majority of cases, the beta male doesn't get the chance to do any mating. So, why does the beta male help out at all? The study followed the birds for several years and found that the beta males were more likely to eventually become alphas than birds that never cooperated. The study considered the possibility that this process occurred via a "seniority queue" in which the betas were more likely to progress to alpha status in their current territory. Instead, it found that betas generally hit the top of the hierarchy somewhere else, and usually didn't simply move up even when the current alpha was relocated by the researcher. With a lot of the obvious explanations eliminated, the author of the study favors the suggestion that being a beta is a bit like an apprenticeship. Beta males learn what works when it comes to courtship displays, and then go on to use that knowledge at some point in the future. Given the apparently sophisticated sense of the future that other species of birds have displayed, this explanation doesn't seem all that unreasonable.
Air pollution encompasses all types of pollution in the air. But much of the legislation, and subsequently the discussions in the media, refer to a few specific pollutants, due to their high prevalence and significant negative health effects. These are Nitrogen Dioxide (NO2) and Particulate Matter (PM). Air pollution does not respect administrative boundaries. Air pollution in London is a mixture of emissions created locally and those from background concentrations. In particular, particles measuring between 0.1 μm and 1 μm in diameter can remain suspended for weeks and so can be transported long distances. Therefore local, national and international action is crucial to ensure that dangerous levels of air pollution are tackled. There are various sources of NO2 and PM; transport is the main one, but others that contribute significantly include energy production, industrial processes and construction. Understanding where the different pollutants come from is important to guide effective policy formation. The pollutants most widely referred to in air quality and pollution literature are: - Particulate matter (usually split into 2 sizes: PM2.5 & PM10) - Nitrogen dioxide (NO2) - Sulphur dioxide (SO2) - And occasionally, Carbon Monoxide (CO) PM and NO2 are commonly seen as the most dangerous forms of air pollution due to their high concentrations and the negative health impacts they create. The sections below aim to provide more detail on PM and NO2 but also some of the other air pollutants.
Most people agree that the process of education involves confronting new ideas and challenges. However, because books often present controversial ideas or challenge the status quo, they are frequent targets of censorship. Even classics such as Mark Twain’s The Adventures of Huckleberry Finn are not immune from the specter of censorship. (Also see Banned books section.) School officials seeking to rid school libraries of controversial titles and shield children from certain information must tread carefully, however, as the U.S. Supreme Court has found that the First Amendment protects the right to receive information and ideas. In 1982, the high court determined in Board of Education v. Pico that “the First Amendment rights of students may be directly and sharply implicated by the removal of books from the shelves of a school library.” The high court determined that school officials could not remove books from the library because they disagreed with the ideas in the books. However, the court determined that officials could remove the books if they were “pervasively vulgar” or educationally unsuitable. The high court specifically limited its ruling to the removal of a book already on the shelf and said the question of acquiring certain books raised a different question under the Constitution. In addition, the Court’s ruling does not apply to the issue of whether certain books can be used in the curriculum. Most courts have determined that school officials have a broad degree of control over the curriculum.
growth ring, in a cross section of the stem of a woody plant, the increment of wood added during a single growth period. In temperate regions the growth period is usually one year, in which case the growth ring may be called an “annual ring.” In tropical regions, growth rings may not be discernible or are not annual. Even in temperate regions, growth rings are occasionally missing, and a second, or “false,” ring may be deposited during a single year—for example, following insect defoliation. Growth rings are distinct if conducting cells produced early in the growth period are larger (spring, or early, wood) than those produced later (summer, or late, wood) or if growth is terminated by a layer of relatively thick-walled fibres or by parenchyma. In temperate or cold climates the age of a tree may be determined by counting the number of annual rings at the base of the trunk or, if the trunk is hollow, at the base of a large root. Annual rings have been used in dating ancient wooden structures, especially those of the American Indians in the dry southwestern regions of the United States; fluctuation in ring width is a source of information about ancient climates.
Reverse Osmosis (RO) Definition - What does Reverse Osmosis (RO) mean? Reverse osmosis (RO) is a process that removes salt from seawater. It is a process in which water is deionized or demineralized by forcing it under pressure through a semi-permeable membrane that selectively lets molecules or atoms pass through. This process is used in treating wastewater, in recycling and in the generation of energy, making it very significant in various industries. Corrosionpedia explains Reverse Osmosis (RO) Water scarcity is a tremendous global threat. Desalination plants implement reverse osmosis to solve vital water issues. For instance, arid coastal areas can obtain their drinking water with the aid of reverse osmosis plants. In order to reverse the process of osmosis, energy must be applied to the highly saline solution. RO uses a semi-permeable membrane that admits the entry of water molecules, but not bacteria, organics or salts. However, the water must be driven through the RO membrane by a pressure higher than the natural osmotic pressure in order to achieve deionization or demineralization. Through RO, pure water is obtained while most types of contaminants are eliminated.
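The applied pressure has to exceed the solution's natural osmotic pressure, which for a dilute solution can be estimated with the van't Hoff relation π = iMRT. A minimal sketch of that estimate (the seawater molarity and dissociation factor below are rough textbook values, not figures from the source):

```python
# Estimate the osmotic pressure of seawater with the van't Hoff relation:
#   pi = i * M * R * T
# i : van't Hoff factor (NaCl dissociates into ~2 ions)
# M : molar concentration of dissolved salt (mol/L)
# R : gas constant in L*bar/(mol*K)
# T : absolute temperature (K)

def osmotic_pressure_bar(i: float, molarity: float, temp_k: float) -> float:
    R = 0.08314  # L*bar/(mol*K)
    return i * molarity * R * temp_k

# Rough values for seawater: ~0.6 M NaCl, i ~ 2, at 25 C (298 K)
pi = osmotic_pressure_bar(i=2.0, molarity=0.6, temp_k=298.0)
print(f"Estimated osmotic pressure: {pi:.1f} bar")
# An RO plant must pump at pressures well above this (~30 bar)
# to push fresh water through the membrane against osmosis.
```

This is why seawater RO plants operate at several tens of bar, far above the pressures used for brackish water.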
And here we have Lick #3! - Analyze the accuracy of both hands. The idea is to find a way to make them work together smoothly. Take your time and DON’T force it. - Practice using subdivisions. Set the metronome low and increase by subdivision. This way you’re covering not only increments of speed, but also musical situations, seeing as each subdivision will “feel” different against the tempo. This is a great way to add to your musical vocabulary. - Use a stopwatch. As you practice using subdivisions, try cycling them by 20 seconds or so. Kind of like a sprinter doing interval training: 20 seconds on (fast), 20 seconds off (slower). This is a good way to develop muscle memory fast!
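To see why each subdivision "feels" different against the same click, it can help to work out how long each note value actually lasts at a given tempo. A quick sketch (the subdivision names and the 120 BPM tempo are just illustrative, not tied to the lick itself):

```python
# Duration of common subdivisions at a given metronome tempo.
# One beat lasts 60/BPM seconds; each subdivision divides that beat.

def subdivision_durations(bpm: float) -> dict:
    beat = 60.0 / bpm  # seconds per quarter-note beat
    return {
        "quarter": beat,
        "eighth": beat / 2,
        "triplet": beat / 3,
        "sixteenth": beat / 4,
    }

for name, secs in subdivision_durations(bpm=120).items():
    print(f"{name:>9}: {secs * 1000:.0f} ms per note")
```

Cycling through these at one metronome setting covers a wide spread of note lengths before you ever touch the tempo dial.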
Lesson 5 is a review (Wiederholung) lesson to summarize the German language lessons presented in Lessons 1 through 4. You should, then, return to Lektion 1 and review (that is, reread) each of the four lessons back up to this point. For a more advanced course, you might now incorporate each of the advanced lessons into this "review" process. That is: review Lesson 1, then do Lesson 1A, review Lesson 2, then do Lesson 2A, etc. Parts of Speech and Word Order Sentences are composed of parts that perform specific functions. You have been introduced to most (but not all) the major parts of speech: pronouns/nouns, verbs, and adjectives; and how these are expressed in German compared with English. Consider the following: Ich brauche Wurst und Käse - I (pronoun as subject) need (verb) sausage and cheese (nouns as direct objects) Haben sie zu viel Arbeit? - Have (verb) they (pronoun subject) too much (adjectives) work (noun direct object)? Word order in a simple sentence follows that used in English. Subject and verb are reversed to form a question. In English, but not in German, the question sentence could also be stated (and, in fact, occurs more often in the US) as 'Do they have too much work?' Nouns are words that typically occur in sentences as either subjects (performers of some action) or objects (recipients of some action). Most nouns are the name of either a "person, place, or thing" and, in German, are always capitalized. Every noun in German has an "assigned" gender (masculine, feminine, neuter), and we learn each noun with its nominative case, definite article (der, die, das, respectively) in order to also learn that gender. 
Thus, a Vokabeln section for nouns is presented thusly:
der Anhang, die Anhänge - appendix, appendices (singular and plural)
die Brücke - bridge
der Freund, die Freunde - friend, friends (singular and plural)
das Gespräch, die Gespräche - conversation, conversations
die Grammatik - grammar (note irregular stress)
die Lektion - lesson (note irregular stress)
die Straße - street
Insulin is prescribed for diabetes mellitus when diet modifications and oral medications fail to correct the condition. Insulin is a hormone produced by the pancreas, a large gland that lies near the stomach. This hormone is necessary for the body's correct use of food, especially sugar. Insulin apparently works by helping sugar penetrate the cell wall, where it is then utilized by the cell. In people with diabetes, the body either does not make enough Insulin, or the Insulin that is produced cannot be used properly. There are actually two forms of diabetes: type 1 (Insulin-dependent) and type 2 (non-Insulin-dependent). Type 1 usually requires Insulin injection for life, while type 2 diabetes can usually be treated by dietary changes and/or oral antidiabetic medications such as Diabinese, Glucotrol, and Glucophage. Occasionally, type 2 diabetics must take Insulin injections on a temporary basis, especially during stressful periods or times of illness. The various available types of Insulin differ in several ways: in the source (animal, human, or genetically engineered), in the time requirements for the Insulin to take effect, and in the length of time the Insulin remains working. Regular Insulin is manufactured from beef and pork pancreas, begins working within 30 to 60 minutes, and lasts for 6 to 8 hours. Variations of Insulin have been developed to satisfy the needs of individual patients. For example, zinc suspension Insulin is an intermediate-acting Insulin that starts working within 1 to 1-1/2 hours and lasts approximately 24 hours. Insulin combined with zinc and protamine is a longer-acting Insulin that takes effect within 4 to 6 hours and lasts up to 36 hours. The time and course of action may vary considerably in different individuals or at different times in the same individual. Genetically engineered Insulin works faster and for a shorter length of time than regular human Insulin and should be used along with a longer-acting Insulin.
It is available only by prescription. Animal-based Insulin is a very safe product. However, some components may cause an allergic reaction. Therefore, genetically engineered human Insulin has been developed to lessen the chance of an allergic reaction. It is structurally identical to the Insulin produced by your body's pancreas. However, some human Insulin may be produced in a semi-synthetic process that begins with animal-based ingredients, and may cause an allergic reaction. Insulin side effects that you should report to your health care professional or doctor as soon as possible: - Unsteady Movement; - Tingling In The Hands; - Slurred Speech; - Sleep Disturbances; - Shortness Of Breath; - Shallow Breathing Or Wheezing; - Rash Over The Entire Body; - Rapid Pulse; - Rapid Heartbeat; - Personality Changes; - Low Blood Pressure; - Loss Of Appetite; - Swelling Of The Lips Or Tongue; - Itching Or Redness At The Injection Site (Usually Disappears Within A Few Days Or Weeks); - Inability To Concentrate; - Heavy Breathing; - Fruity Breath; - Depressed Mood; - Cold Sweat; - Blurred Vision; - Fast Pulse; - Abnormal Behavior; Your doctor will specify which Insulin to use, how much, when, and how often to inject it. Your dosage may be affected by changes in food, activity, illness, medication, pregnancy, exercise, travel, or your work schedule. Proper control of your diabetes requires close and constant cooperation with your doctor. Failure to use your Insulin as prescribed may result in serious and potentially fatal complications. Some Insulins should be clear, and some have a cloudy precipitate. Find out what your Insulin should look like and check it carefully before using. Genetically engineered Insulin lispro injection should not be used by children under age 12.
G protein-coupled receptors (GPCRs), also known as seven transmembrane receptors, heptahelical receptors, or 7TM receptors, are a protein family of transmembrane receptors that transduce an extracellular signal (ligand binding) into an intracellular signal (G protein activation). The GPCRs are the largest protein family known, members of which are involved in all types of stimulus-response pathways, from intercellular communication to physiological senses. The diversity of functions is matched by the wide range of ligands recognized by members of the family, from photons (rhodopsin, the archetypal GPCR) to small molecules (in the case of the histamine receptors) to proteins (for example, chemokine receptors). This pervasive involvement in normal biological processes has the consequence of involving GPCRs in many pathological conditions, which has led to GPCRs being the target of 40 to 50% of modern medicinal drugs. The seven transmembrane α-helix structure of a G protein-coupled receptor. GPCRs are present in a wide variety of physiological processes. Some examples include: - The visual sense: the opsins use a photoisomerization reaction to translate electromagnetic radiation into cellular signals. Rhodopsin, for example, uses the conversion of 11-cis-retinal to all-trans-retinal for this purpose. 
- The sense of smell: receptors of the olfactory epithelium bind odorants (olfactory receptors) and pheromones (vomeronasal receptors) - Behavioral and mood regulation: receptors in the mammalian brain bind several different neurotransmitters, including serotonin and dopamine - Regulation of immune system activity and inflammation: chemokine receptors bind ligands that mediate intercellular communication between cells of the immune system; receptors such as histamine receptors bind inflammatory mediators and engage target cell types in the inflammatory response - Autonomic nervous system transmission: both the sympathetic and parasympathetic nervous systems are regulated by GPCR pathways. These systems are responsible for control of many automatic functions of the body such as blood pressure, heart rate and digestive processes. There are two broad types of GPCRs: chemosensory (type A, e.g. rhodopsins) and endocrine (type B, e.g. the glucagon receptor) GPCRs. GPCRs are integral membrane proteins that possess seven membrane-spanning domains or transmembrane helices (Figure 1). The extracellular parts of the receptor can be glycosylated. These extracellular loops also contain two highly conserved cysteine residues which build disulfide bonds to stabilize the receptor structure. Early structural models for GPCRs were based on their weak analogy to bacteriorhodopsin, for which a structure had been determined by both electron- and X-ray-based crystallography. In 2000, the first crystal structure of a mammalian GPCR, that of bovine rhodopsin, was solved. While the main feature, the seven transmembrane helices, is conserved, the structure differs significantly from that of bacteriorhodopsin. Some seven transmembrane helix proteins (such as channelrhodopsin) that resemble GPCRs may contain different functional groups, such as entire ion channels, within their protein.
Ligand binding and signal transduction Sequence of events after activation of a G protein-coupled receptor (red) by a hormone (orange). See the main text for details. While in other types of receptors that have been studied ligands bind externally to the membrane, the ligands of GPCRs typically bind within the transmembrane domain. The transduction of the signal through the membrane by the receptor is not completely understood. It is known that the inactive G protein is bound to the receptor in its inactive state. Once the ligand is recognized, the receptor shifts conformation and thus mechanically activates the G protein, which detaches from the receptor. The receptor can now either activate another G protein, or switch back to its inactive state. This is an overly simplistic explanation, but suffices to convey the overall set of events. It is believed that a receptor molecule exists in a conformational equilibrium between active and inactive biophysical states. The binding of ligands to the receptor may shift the equilibrium. Three types of ligands exist: agonists are ligands which shift the equilibrium in favour of active states; inverse agonists are ligands which shift the equilibrium in favour of inactive states; and neutral antagonists are ligands which do not affect the equilibrium. It is not yet known how exactly the active and inactive states differ from each other. If a receptor in an active state encounters a G protein, it may activate it. Some evidence suggests that receptors and G proteins are actually pre-coupled. For example, binding of G proteins to receptors affects the receptor's affinity for ligands. Activated G proteins are bound to GTP. The enzyme adenylate cyclase is an example of a cellular protein that can be regulated by a G protein. Adenylate cyclase activity is activated when it binds to a subunit of the activated G protein. Activation of adenylate cyclase ends when the G protein returns to the GDP-bound state.
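The conformational-equilibrium picture above can be made concrete with the classic two-state receptor model. In the sketch below, L is the intrinsic equilibrium constant between active and inactive receptor, and the two dissociation constants describe how tightly a ligand binds each state; all numerical values are illustrative assumptions, not measurements from the text:

```python
# Two-state receptor model: R (inactive) <-> R* (active), equilibrium
# constant L = [R*]/[R]. A ligand at concentration a binds the inactive
# state with dissociation constant kd_inactive and the active state with
# kd_active. Fraction of receptors in the active state:
#   f = L*(1 + a/kd_active) / ((1 + a/kd_inactive) + L*(1 + a/kd_active))

def fraction_active(a: float, L: float, kd_inactive: float, kd_active: float) -> float:
    active = L * (1 + a / kd_active)
    inactive = 1 + a / kd_inactive
    return active / (inactive + active)

L = 0.01  # small constitutive activity in the absence of ligand
baseline = fraction_active(a=0.0, L=L, kd_inactive=1.0, kd_active=1.0)

# Agonist: binds the active state more tightly -> shifts equilibrium up
agonist = fraction_active(a=100.0, L=L, kd_inactive=1.0, kd_active=0.01)

# Inverse agonist: binds the inactive state more tightly -> shifts it down
inverse = fraction_active(a=100.0, L=L, kd_inactive=0.01, kd_active=1.0)

# Neutral antagonist: equal affinity for both states -> no shift
neutral = fraction_active(a=100.0, L=L, kd_inactive=1.0, kd_active=1.0)

print(f"baseline {baseline:.4f}, agonist {agonist:.4f}, "
      f"inverse agonist {inverse:.6f}, neutral {neutral:.4f}")
```

With these toy numbers the agonist pushes roughly half the receptors into the active state, the inverse agonist suppresses even the small constitutive activity, and the neutral antagonist leaves the equilibrium exactly where it was.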
GPCR signaling without G proteins In the late 1990s, evidence began accumulating that some GPCRs are able to signal without G proteins. The ERK2 mitogen-activated protein kinase, a key signal transduction mediator downstream of receptor activation in many pathways, has been shown to be activated in response to cAMP-mediated receptor activation in the slime mold D. discoideum despite the absence of the associated G protein α- and β-subunits. In mammalian cells the well-studied β2-adrenoceptor has been demonstrated to activate the ERK2 pathway after arrestin-mediated uncoupling of G protein mediated signalling. It therefore seems likely that some mechanisms previously believed to be purely related to receptor desensitisation are actually examples of receptors switching their signalling pathway rather than simply being switched off. GPCRs become desensitized when exposed to their ligand for a prolonged period of time. There are two recognized forms of desensitization: 1) homologous desensitization, in which the activated GPCR is downregulated, and 2) heterologous desensitization, where the activated GPCR causes downregulation of a different GPCR. The key reaction of this downregulation is the phosphorylation of the intracellular (or cytoplasmic) receptor domain by protein kinases. Phosphorylation by cAMP-dependent protein kinases Cyclic AMP-dependent protein kinases (protein kinase A) are activated by the signal chain coming from the G protein (that was activated by the receptor) via adenylate cyclase and cyclic AMP (cAMP). In a feedback mechanism, these activated kinases phosphorylate the receptor. The longer the receptor remains active, the more kinases are activated and the more receptors are phosphorylated. Phosphorylation by GRKs The G protein-coupled receptor kinases (GRKs) are protein kinases that phosphorylate only active GPCRs. Phosphorylation of the receptor can have two consequences: - Translocation.
The receptor is, along with the part of the membrane it is embedded in, brought to the inside of the cell, where it is dephosphorylated and then brought back. This mechanism is used to regulate long-term exposure, for example, to a hormone. - Arrestin linking. The phosphorylated receptor can be linked to arrestin molecules that prevent it from binding (and activating) G proteins, effectively switching it off for a short period of time. This mechanism is used, for example, with rhodopsin in retina cells to compensate for exposure to bright light. In many cases, arrestin binding to the receptor is a prerequisite for translocation. It is generally accepted that G protein-coupled receptors can form homo- and/or heterodimers and possibly more complex oligomeric structures, and indeed heterodimerization has been shown to be essential for the function of receptors such as the metabotropic GABA(B) receptors. Present bio-chemical and physical techniques lack the resolution to differentiate between distinct homo-dimers assembled into an oligomer or true 1:1 hetero-dimers. It is also unclear what the functional significance of oligomerization might be, although it is thought that the phenomenon may contribute to the pharmacological heterogeneity of GPCRs in a manner not previously anticipated. This is an actively studied area in GPCR research.
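The kinase feedback loop described under desensitization (the longer the receptor stays active, the more of it gets phosphorylated) can be caricatured with a tiny simulation. Everything below, including the rate constant and receptor count, is an illustrative assumption rather than measured biology:

```python
# Toy model of homologous desensitization: active receptors are
# phosphorylated (inactivated) at a rate proportional to how many are
# still active, so the response decays under prolonged ligand exposure.

def simulate_desensitization(total: float, k: float, dt: float, steps: int):
    active = total          # all receptors start active (ligand present)
    trace = []
    for _ in range(steps):
        phosphorylated_now = k * active * dt   # kinase feedback term
        active -= phosphorylated_now
        trace.append(active)
    return trace

trace = simulate_desensitization(total=1000.0, k=0.1, dt=1.0, steps=50)
print(f"active after 10 steps: {trace[9]:.1f}")
print(f"active after 50 steps: {trace[-1]:.1f}")
```

The active population decays geometrically, which is the qualitative behaviour the text describes: sustained exposure progressively switches the receptor pool off.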
The glomerulus is the communication point between the bloodstream and the nephron, the functional unit of the kidney. It is composed of the glomerular capillaries and Bowman’s capsule of the nephron. Fluid from the blood in the glomerular capillaries passes into the Bowman’s capsule and then enters the tubules, where it is processed to form urine. The glomerular capillaries congregate as tufts within the Bowman’s capsule. Fluid has to pass through the endothelial cells of the capillaries, and the closely associated visceral membrane of the Bowman’s capsule which is separated from the capillary wall by the basement membrane, then through the parietal membrane of the Bowman’s capsule to enter the nephron. Mesangial cells among the glomerular capillaries regulate the blood flow in the capillaries. If these cells and layers of the glomerulus are damaged, glomerular filtration and therefore normal kidney functioning is impaired. What is glomerulonephritis? Glomerulonephritis literally means inflammation of the glomerulus but includes a number of disorders that affect the structure and function of the glomerulus without any prominent inflammation. It is therefore also referred to as glomerular disease or glomerulopathy. In glomerulonephritis, various known and unknown causes trigger immune activity against the glomeruli which damages them. The glomerulus is the head of the nephron which is responsible for filtering fluid from the blood. This fluid is later processed in the tubule (the rest of the nephron) until urine is eventually formed. Each kidney has about 1 million nephrons that act together to complete the various functions including removing waste substances from the blood, regulating blood volume and blood pressure. If a significant number of nephrons are damaged, these functions will be significantly hampered. The kidney is constantly losing nephrons with age. This is a slow process and only commences after the age of 40 years.
For every decade of life thereafter, the kidney loses about 10% of its functioning nephrons. Since the progression is so gradual, the remaining healthy nephrons are able to compensate without any significant impairment of normal kidney functioning. In glomerulonephritis, however, there is more rapid and extensive damage to the nephrons. Pathophysiology of Glomerulonephritis How does glomerulonephritis occur? Glomerulonephritis is known to be an immune reaction mediated by antigen-antibody complexes. An antigen is the trigger substance against which antibodies are formed by the immune system. The antibodies then bind with the antigen and this antigen-antibody complex can instigate a number of immune activities designed to protect the body. In the process, inflammation arises in whichever tissue the targeted immune response is occurring. Although the exact cause of glomerulonephritis is not always understood, the mechanism by which it occurs is proposed in two different models – immune complex deposition and circulating immune complexes. Other mechanisms may involve cell-mediated injury or cytotoxic antibodies. In immune complex deposition, it is believed that antibodies are directed against antigens that are “planted” in the glomerulus or against antigens that are normal components of the glomerulus, specifically the glomerular basement membrane (GBM). The immune activity is therefore specifically targeted at the glomerulus. With circulating immune complexes, the antigen-antibody complexes are circulating in the bloodstream and eventually reach the glomerulus during glomerular filtration. These complexes form against the backdrop of several autoimmune or infectious diseases and the antigen may be endogenous (created within the body) or exogenous (from foreign matter or microorganisms) in nature. In these cases, immune activity is targeted at the circulating immune complex and can lead to inflammation at other sites in the body as well as the glomerulus.
In response to the inflammation, different histologic alterations may be seen in the glomerulus. These include: - Increase in the number of cells (capillary endothelium or mesangial cells) - Thickening of the basement membrane - Tissue degeneration – hyalinosis and sclerosis Types of Glomerulonephritis Primary and Secondary Glomerulonephritis Glomerulonephritis may be primary or secondary. Primary glomerulonephritis arises on its own without any other underlying disease. Secondary glomerulonephritis occurs as a consequence of some other disease, which may not even involve the kidney. Acute and Chronic Glomerulonephritis Furthermore, glomerulonephritis can be classified as acute or chronic. In acute glomerulonephritis, the condition starts suddenly and the tissue damage progresses rapidly. With chronic glomerulonephritis, the condition develops gradually and damage becomes extensive after months or years. Different Types of Glomerulonephritis There are several different types of glomerulonephritis based on the various distinct histological patterns that arise with glomerular inflammation. Some of the more common types of glomerulonephritis include: - Minimal change nephropathy - Primary focal segmental glomerulosclerosis (FSGS) - Membranous nephropathy - Membranoproliferative glomerulonephritis (also called mesangiocapillary) - IgA nephropathy - Post-infectious glomerulonephritis, which is a sub-type of acute proliferative glomerulonephritis - Crescentic glomerulonephritis Apart from the differences in the pattern and severity of tissue damage, each type of glomerulonephritis may also differ in the way kidney functioning is impaired and may arise from different causes. Causes of Glomerulonephritis There is a wide range of causes of glomerulonephritis. Some may solely involve the kidney while others are due to systemic diseases which affect a number of organs simultaneously. Sometimes the cause of glomerulonephritis is unknown – idiopathic.
The causes of glomerulonephritis may include: - Infections – post-streptococcal, subacute bacterial endocarditis, viral infections, parasitic infections like malaria and less commonly fungal infections. - Autoimmune diseases – systemic lupus erythematosus (SLE), Goodpasture’s syndrome, vasculitis (Wegener granulomatosis and polyarteritis nodosa), Henoch-Schönlein purpura. - Immune-mediated hypersensitivity (atopy), particularly in children. - Medication, like those drugs used in the treatment of SLE and hemolytic-uremic syndrome. - Diabetes mellitus - Malignant hypertension (high blood pressure) - Inherited diseases like Alport’s syndrome. - Hodgkin’s lymphoma (mainly in adults). Signs and Symptoms of Glomerulonephritis The clinical features may vary between acute and chronic glomerulonephritis and even among the different histological types. In chronic glomerulonephritis specifically, the patient may be asymptomatic (no symptoms) for long periods of time. The signs and symptoms of glomerulonephritis include: - Hematuria (blood in the urine) which may appear as pink-colored or brownish urine. - Proteinuria (protein in the urine) which may present as foamy (frothy) urine. - Edema (swelling) most prominent in the face, hands, abdomen and feet. - Hypertension (high blood pressure) - Azotemia (high urea levels in the blood) which leads to various additional signs and symptoms (uremia). Glomerulonephritis may lead to a collection of clinical features grouped together as glomerular syndromes, which include: - Nephrotic syndrome – proteinuria (protein in urine), hypoalbuminemia (low blood proteins), edema (swelling due to fluid retention), hyperlipidemia (high blood lipids), lipiduria (lipids in the urine). - Nephritic syndrome – hematuria (blood in urine), azotemia (high urea levels in blood), proteinuria, oliguria (reduced urine output), edema, and hypertension (high blood pressure).
- Rapidly progressing glomerulonephritis – nephritis (kidney inflammation), proteinuria, and acute renal failure. Additional signs and symptoms of glomerulonephritis may include: - Nausea and vomiting - Paleness and/or yellowing of the skin - Itching of the skin
When teaching my students the difference between the Present Simple and the Present Continuous, there’s always a moment when I need to tell them that State Verbs are never used in a Continuous tense, but what are State Verbs, and which verbs are they? The easiest way to distinguish state (or non-progressive) verbs from action verbs is to check the meaning of the verb/sentence. State verbs refer to unalterable conditions, whereas action verbs refer to processes. Compare these sentences: - She is tall. (Can that be changed?) - She plays the piano. (She can stop playing the piano and start a new process). Among state verbs we have different subcategories: Stative: be, seem, appear Possession: have, belong to, own, contain Thinking: believe, think, consider, doubt, agree, concern, imagine, impress, mean, understand Emotions: like, love, hate, dislike, matter, mind, want, wish Verbs that belong to these groups will never be used in a continuous tense as they cannot be used to describe a process. If you are not sure about a verb, a good tip is to ask yourself “Can I be in the middle of _______ (having a car, believing in God, loving my parents)?” If the answer is negative, then you have a state verb! You can find several lists on the Internet, but you have to be careful with these because sometimes a verb can be used as both a state and an action verb when it has more than one meaning. The clearest example is “have”, which can mean “possess” or “take”. For instance, - I have a beautiful picture of a British landscape in the living room. - I have cereal and milk for breakfast. The first example is a state verb, whereas the second is an action verb. Therefore, it can be used both in continuous and simple tenses. Other verbs like “have” are “think”, “be” or “consider”. Can you guess their two meanings?
Let me lend you a hand: - I think/consider this is a good idea (I have an opinion) - I am thinking about / considering buying a new car (I am making a list of positive and negative points before making a decision) - I am shy (this is an unalterable characteristic) - The kid is being very spoilt today (he is behaving in a strange manner) You can download this information in PDF format.
Circuit Theory/Dangling Resistors When opening capacitors, inductors or current sources during circuit analysis, sometimes a resistor is left connected to the circuit by only one wire. It can be removed from the circuit because no current flows through it. With no current there is no voltage drop across the resistor, so both of its terminals sit at the same potential. The green and the red dot are part of the same node. The original voltage between green and black is going to be the same as the voltage between red and black. The total resistance will be r1+r2.
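A quick numerical check of this claim: in the sketch below (the component values are invented for illustration), a voltage divider feeds a node that also has a dangling resistor R3 leading to an open terminal. Nodal analysis shows the open end sits at exactly the divider voltage, so R3 carries no current and can be deleted without changing anything:

```python
# Nodal analysis of a 10 V source driving a divider R1-R2, plus a
# dangling resistor R3 from the middle node to an open terminal.
# Unknowns: v_mid (divider node), v_open (far end of R3).
# KCL at v_mid:  (v_mid-10)/R1 + v_mid/R2 + (v_mid-v_open)/R3 = 0
# KCL at v_open: (v_open-v_mid)/R3 = 0   (nothing else connects there)

R1, R2, R3, V = 2.0, 3.0, 7.0, 10.0

# Solve the 2x2 linear system with Cramer's rule.
a11, a12 = 1/R1 + 1/R2 + 1/R3, -1/R3
a21, a22 = -1/R3, 1/R3
b1, b2 = V / R1, 0.0

det = a11 * a22 - a12 * a21
v_mid = (b1 * a22 - a12 * b2) / det
v_open = (a11 * b2 - b1 * a21) / det

i_dangling = (v_mid - v_open) / R3
print(f"v_mid = {v_mid:.3f} V, v_open = {v_open:.3f} V, "
      f"current through R3 = {i_dangling:.3f} A")
# Same 6 V as the plain divider V*R2/(R1+R2); R3 carries zero current.
```

Both ends of R3 land at 6 V, exactly what the plain two-resistor divider gives, which is why a one-wire resistor can simply be erased from the schematic.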
Architectural Style is a manner of expression. It is the vocabulary used when classifying buildings according to their appearance, structure, materials, and historic period. It may include such elements as form, method of construction, building materials, and regional character. An architectural style is sometimes only a rebellion against an existing style, such as post-modernism, which has in recent years found its own language and split into a number of styles that have acquired other names. While earlier styles explore harmonious ideals, Mannerism takes style a step further and explores the aesthetics of hyperbole and exaggeration.
The Virtual Microscope for Earth Sciences Project aims to make a step change in the teaching of Earth Sciences by broadening access to rock collections that are currently held in museums, universities and other institutions around the world. The intention is to engage and excite students in schools or higher education, and anyone interested in materials that make up the Earth’s surface. The virtual microscope allows users to examine and explore minerals and microscopic features of rocks, helping them to develop classification and identification skills without the need for high-cost microscopes and thin section preparation facilities. We've tried hard to ensure that the sample locations and information are correct, so if you spot any errors, or have ideas for teaching resources, please help us to improve the virtual microscope by emailing us at [email protected]. Our biggest collection is called the UKVM and consists of more than 100 rocks from the United Kingdom (with a few from Ireland), digitised as an open educational resource with funding from the JISC as part of their Content programme. Other collections feature rocks collected by Charles Darwin during his voyage on the Beagle, meteorites that arrived from outer space, and moon rocks gathered by NASA astronauts. The collections can be explored in Place (location), in Time (geological age) or in Focus (an advanced text-based search using the metadata associated with each sample). Every rock sample is accompanied by a virtual thin section so that you can study the mineral optical properties, grain size, shape and proportion, and also analyse the rock micro textures as if using a specialist polarising microscope. The user guide has full details of how to use the microscope. Virtual hand specimens are available for many of the samples. Each hand specimen is a digital object that can be turned as if it was in your hand. 
The hand specimen can be examined as if using a hand lens, by zooming in to examine the minerals, grains or fossils in its surface. The British Geological Survey (BGS) has kindly allowed us to link rocks in the UKVM to their online geology map of Great Britain so you can see the locations where the rocks were collected. The BGS map illustrates the geology of the local area and includes a key to the rock types. The map can be navigated and zoomed, and the transparency of the geological overlay can be varied. If you would like to know more about the features you see in the virtual microscope, or how geologists identify and classify rocks and minerals, there are lots of resources available.
According to archaeologists, Arachova was established in the 9th century BC, although the area did not bear the same name as today. One of the earliest findings is a prehistoric cave in Livadi that was used as a temple dedicated to the god Pan. According to tradition, the women from the surrounding areas used to gather in this location every five years wearing disguises, and they danced under the light of torches the whole night. The name Arachova was apparently given to the place because of the many walnut trees that grow in the area, since the two Slavic words it is composed of, ara and hova, mean walnut trees. There is another possibility, however, as the town has also been known as Rahova, a term derived from two Greek words, rachi and vio. Their meaning refers to those who live on the hills of a mountain, which fits the location of Arachova, as it is built on five small hills. Over the centuries, Arachova played an important role that made its name famous in Greek history. The first record that refers to this town is the Iliad of Homer, in which two famous generals from the old towns of Anemoria and Kiparissos, Epistrophos and Schedias, are mentioned in the context of the Trojan War. Later on, Philip II of Macedon destroyed the town around 334 BC, causing its inhabitants to disperse over the hills. Arachova was later rebuilt. The next important testimonies about Arachova were given by travelers in the 14th century AD, who speak of a well-populated town with plenty of churches. The Metropolitan Bishop of Athens, Meletios, wrote that Arachova was the most famous town on Parnassus, while Lord Byron described the local women in 1809 as very caring and polite. Some decades later, in 1826, the Greeks defeated the Turks in the historic battle of Arachova. The name of their chief, Georgios Karaiskakis, became very important in Greek history after the defeat of the Turkish army and its commander, Moustafabey.
Today, Arachova is a popular winter destination in Greece and keeps attracting visitors, mainly thanks to the three ski centers on Mount Parnassos.
LEGOs have been a family favorite for generations, but did you know that they’re not just a toy? The colorful, multi-faceted bricks can be used to encourage STEM skills, and that’s exactly what Orlando private school, Lake Forrest Prep, is using them for. Learn how to encourage your child to play (and learn!) with LEGOs. LEGOs come in different shapes and sizes, all of which promote fine motor skills. When playing with friends, children learn how to share and take turns. Group play encourages teamwork. LEGOs also encourage creativity by allowing kids to let their imaginations run wild, providing them with the means to build anything they can dream of. There is no right or wrong, and there is no fear of failure. Kids can build what they want, however they want to. If they choose to follow the design instructions, they’ll stretch their mathematical thinking and problem-solving muscles. Building models teaches kids the basic ideas of symmetry and how to work with fractions and division. Lateral thinking and planning skills develop as their creation is assembled. Children can improve their communication skills by explaining their ideas and process, describing their creations, and discussing any challenges that they faced. Their tower may collapse, but in developing persistence, they will try again until they succeed. Building with LEGOs improves children’s self-esteem, as they can be proud of themselves when they achieve their goal. Lake Forrest Prep offers a variety of extracurricular activities, including Bricks4Kidz. Students meet after school for LEGO club and build from model plans that are designed by real-life architects and designers. This type of activity encourages creativity and curiosity, as well as teaches and reinforces all-important STEM-based principles. Kids in grades K-5 will enjoy this group experience the most. Sign up for Bricks4Kidz and teach your kids that learning is fun with LEGOs! 
Your child will have the time of their life constructing new feats with their friends at Lake Forrest Prep, an Orlando private school.
H3C: Bond Angle, Molecular Geometry, Hybridization, Polar or Nonpolar?
Introduction To Isobutylene
Isobutylene, also known as 2-methylpropene, is a colorless, flammable gas with an unmistakably petroleum-like smell. It is a crucial industrial chemical, widely used in manufacturing a range of chemicals such as butyl rubber, methyl methacrylate, and isooctane. Isobutylene is made by the catalytic cracking of petroleum feedstocks or by dehydration of isobutanol.
Properties Of Isobutylene
Isobutylene has the molecular formula C4H8 and a molecular mass of 56.11 grams per mole. It has a boiling point of -6.9°C (19.6°F) and a melting point of -159°C (-254°F). It is extremely flammable, with a flash point of -70°C (-94°F). It dissolves readily in organic solvents such as alcohol and ether, but is insoluble in water.
Uses Of Isobutylene
Isobutylene is used in the manufacture of many materials and chemicals, such as:
- Butyl rubber: Isobutylene is the major raw material in the production of butyl rubber, which is used for inner tubes, tires, and various other rubber products.
- Methyl methacrylate (MMA): Isobutylene is used to make MMA, which goes into plastics and polymers including coatings, acrylic sheets, and adhesives.
- Isooctane: Isobutylene can be used to produce isooctane, which raises gasoline's octane rating to improve engine performance and lower emissions.
- Other chemicals: Isobutylene is also used to make various other chemicals, such as isobutyl alcohol, oxybutylene, and tert-butyl alcohol.
Production Of Isobutylene
There are two principal methods of producing isobutylene: catalytic cracking and dehydration.
- Catalytic cracking: Isobutylene is made by catalytically cracking petroleum feedstocks such as naphtha and gas oil. The process uses catalysts that break larger hydrocarbons down into smaller molecules, including isobutylene.
- Dehydration: Isobutylene is also made by dehydrating isobutanol. This is done by heating isobutanol over a strongly acidic catalyst, such as sulfuric acid or phosphoric acid, to eliminate water and form isobutylene.
Safety Precautions For Isobutylene
Isobutylene is a highly flammable gas that can present significant safety hazards if not handled with care. The following precautions should be observed when handling isobutylene:
- Ensure adequate ventilation: Isobutylene should be used in a well-ventilated area to prevent the build-up of flammable vapors.
- Wear the correct PPE: Personal protective equipment, such as gloves, goggles, and respiratory protection, must be worn while handling isobutylene.
- Store it properly: Isobutylene must be kept in a cool, dry, well-ventilated area, far from ignition sources.
- Handle it carefully: Isobutylene should be handled with care to avoid accidental leaks or spills.
- Use proper disposal methods: Isobutylene must be disposed of according to local regulations.
Isobutylene is a key industrial chemical used to manufacture many different chemicals and materials, including methyl methacrylate, butyl rubber, and isooctane.
Bond Angles And Hybridization Explained
If we look at the structure of CH3OH (methanol) around the carbon atom, it shows a tetrahedral geometry with three C-H bonds and one C-O bond. Around the oxygen atom, the arrangement is a distorted tetrahedron: the two lone pairs on the O atom create extra repulsion within the molecule, which compresses the bond angle.
Introduction To Bond Angle
A bond angle is the angle between two bonds that share a common atom. It is an important factor in determining the three-dimensional structure and characteristics of a molecule.
The bond angle is affected by the hybridization of the central atom, the number of bonded atoms, and the presence of lone pairs of electrons.
Bond Angle Of H3C / Isobutylene
Isobutylene, also known as 2-methylpropene, has the chemical formula C4H8 and the structure (CH3)2C=CH2: a carbon-carbon double bond, with two methyl (-CH3) groups attached to one of the doubly bonded carbons. The bond angles of isobutylene follow from the hybridization of its carbon atoms. The two methyl carbons are sp3 hybridized, meaning each has four hybrid orbitals formed by combining one s orbital with three p orbitals. These orbitals are arranged in a tetrahedral pattern around each carbon, with bond angles of approximately 109.5°. The two carbons of the double bond are sp2 hybridized, each with three hybrid orbitals formed from one s orbital and two p orbitals; the remaining unhybridized p orbital on each forms the pi component of the double bond. The three hybrid orbitals adopt a trigonal planar geometry around each sp2 carbon, with bond angles of approximately 120°. The methyl groups attached to the central sp2 carbon also influence the bond angles: they are electron-donating and exert a steric effect on the bonds around them, pushing them apart, so the angles around that carbon deviate slightly from the ideal trigonal planar value of 120°.
Molecular geometry refers to the three-dimensional arrangement of the atoms that make up a molecule. It covers the molecule's shape and size, bond lengths, bond angles, and torsional angles, together with the other geometric parameters that define the position of every atom.
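The 109.5° and 120° figures quoted above can be derived rather than memorized: for four equivalent sp3 hybrids the tetrahedral angle is arccos(-1/3), and for three sp2 hybrids evenly spaced in a plane it is 360°/3. A short sketch:

```python
import math

# Ideal sp3 (tetrahedral) bond angle: the angle between vectors pointing to
# the corners of a regular tetrahedron is arccos(-1/3).
sp3_angle = math.degrees(math.acos(-1.0 / 3.0))
print(round(sp3_angle, 1))  # 109.5

# Ideal sp2 (trigonal planar) bond angle: three bonds evenly spaced in a plane.
sp2_angle = 360.0 / 3.0
print(sp2_angle)  # 120.0
```

Real molecules such as isobutylene deviate slightly from these ideal values because of lone pairs and steric effects, as described above.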
In addition, the positions of the atoms within a molecule affect its properties, such as polarity, reactivity, and state of matter. The number of electron pairs surrounding an atom determines the geometry of the molecule. A central atom with lone pairs (as in water) gives a bent or angular shape, whereas one with two bonding domains and no lone pairs (as in carbon dioxide) is linear. Lone-pair repulsion compresses the bond angles, which is why the angle in a bent molecule is smaller than the ideal value. If all the bond pairs around a central atom involve identical atoms, the repulsive interactions between electron pairs balance, and the molecular geometry is regular. However, when the central atom is bonded to different atoms, or surrounded by a mix of bond pairs and lone pairs, the repulsive interactions do not balance, and the molecular structure is distorted or irregular. VSEPR theory is used to predict the molecular structure of molecules, including those with lone pairs. The VSEPR model defines five ideal molecular geometries: linear, trigonal planar, tetrahedral, trigonal bipyramidal, and octahedral. For instance, the molecular geometry of beryllium hydride (BeH2) is predicted by VSEPR to be linear. Hybridization explains how: an electron is promoted from the 2s orbital into an empty 2p orbital, and the two resulting hybrids are sp orbitals pointing in opposite directions. A similar process gives trigonal planar molecules their geometry: in ozone (O3), the central oxygen is sp2 hybridized, with three hybrid orbitals, one of which holds a lone pair (which is why the molecule is bent). The label reflects the mix of orbitals: one s and two p orbitals give sp2 hybrids, while one s and three p orbitals give sp3. Hybridization thus describes how an atom's orbitals mix to form the bonding orbitals actually observed in molecules.
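VSEPR assigns an ideal electron-domain geometry from a simple count of the domains (bonding pairs plus lone pairs) around the central atom, so the mapping can be written as a small lookup sketch (the function name here is just illustrative):

```python
# VSEPR: ideal electron-domain geometry, keyed by the number of electron
# domains (bonding pairs + lone pairs) around the central atom.
VSEPR_GEOMETRY = {
    2: "linear",
    3: "trigonal planar",
    4: "tetrahedral",
    5: "trigonal bipyramidal",
    6: "octahedral",
}

def electron_geometry(bonding_pairs, lone_pairs):
    """Ideal electron-domain geometry for a central atom."""
    return VSEPR_GEOMETRY[bonding_pairs + lone_pairs]

# Water: 2 bonding pairs + 2 lone pairs -> tetrahedral electron geometry
# (the molecular shape, ignoring the lone pairs, is bent).
print(electron_geometry(2, 2))  # tetrahedral
# CO2: two double-bond domains, no lone pairs on carbon -> linear.
print(electron_geometry(2, 0))  # linear
```

Note the distinction the water example illustrates: the electron-domain geometry counts lone pairs, while the molecular shape describes only the atoms.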
Hybridization theory explains the covalent bonds in organic molecules. It is based on the idea that when atomic orbitals of different shapes but nearly equal energy are mixed, new hybrid orbitals are created. This happens during bond formation and is not found in an isolated gaseous atom. In CH4 (methane), one s orbital and three p orbitals combine to create four hybrid orbitals that are identical in shape and energy. The resulting bonds minimize the repulsion between the sp3 orbitals and are arranged tetrahedrally. This sp3 hybridization is a crucial part of valence bond theory: it is the reason the four C-H bonds in methane are identical. The tetrahedral arrangement arises because each sp3 hybrid orbital contains one unpaired electron, and the geometry minimizes electron-electron repulsion. The C-H bonds themselves are nearly nonpolar, because carbon and hydrogen have similar electronegativities. The polarity of a molecule can reveal its boiling point, solubility, and other properties.
The word "hybridization" also has a quite different meaning in biology, where it can influence the phenotype of an animal in direct and indirect ways. For instance, hybrids might have fewer nutrients available than their parents and consequently be more susceptible to disease. Hybridization can also mix phenotypes inherited from the parent species into the offspring; again, this can be beneficial or detrimental to the hybrids' fitness. Another major context for biological hybridization is speciation, where interbreeding between species with different morphologies can have various consequences for the offspring, including lower fertility. Hybrids in the fossil record can also make assigning fossils to particular species more difficult.
For instance, when two species with very different physical characteristics interbreed and produce offspring, those offspring could be incorrectly assigned to a single species. This can also be a challenge in conservation biology.
Polar Or Nonpolar
How the electrons are shared between two bonded atoms determines the polarity (charge separation) of the bond and whether the bonds in a molecule produce a net dipole moment. A molecule is polar if the vector sum of its bond dipoles is nonzero; it is nonpolar when the bond dipoles cancel and there is no net dipole moment. The polarity of a bond is determined by the electronegativity difference between the bonded atoms, electronegativity being a measure of how strongly an atom attracts shared electrons. Bonds between atoms with a large electronegativity difference are polar, and those with a small difference are nonpolar. Hydrogen cyanide, HCN, is a good example of a polar molecule: the hydrogen and nitrogen atoms have different electronegativities, creating an uneven pull on the electrons. Water is another example of a polar molecule: the oxygen atom attracts the shared electrons more strongly than the hydrogen atoms do, and this unequal sharing creates a polar covalent bond between oxygen and each hydrogen. If you examine the Lewis structure of water, you will see a bent geometry around the oxygen atom, with a bond angle of about 104.5°. There are two lone pairs on the central oxygen, so the electron-domain geometry is tetrahedral while the molecular shape is bent. In homonuclear diatomic molecules, by contrast, the bond is completely nonpolar.
Electrons In The Valence
A molecule can also be nonpolar for a second reason, based on its valence electrons: when the central atom has no lone pairs and the outer atoms are identical, the bond dipoles cancel one another.
This is what happens in H2, where the bond joins two identical hydrogen atoms that share the electrons equally. Among the common hydrides, hydrogen fluoride (HF) and hydrogen chloride (HCl) contain the most electronegative elements, so their bonds are strongly polar and both compounds act as acids in water. Just as H2 has a lower boiling point than O2, HCl has a much lower boiling point than water: water molecules form far more extensive hydrogen bonds than HF or HCl molecules do.
What is a methyl group, and what are its properties?
A methyl group is a functional group consisting of a carbon atom bonded to three hydrogen atoms (-CH3). It is nonpolar, relatively small in size, and can act as a substituent on larger organic molecules.
What is the bond angle of the methyl group, and how does it affect molecular structure?
The bond angle of the methyl group is approximately 109.5 degrees, consistent with tetrahedral geometry. This geometry results in a non-planar structure with the carbon atom in the center and the hydrogen atoms arranged around it.
What is the hybridization of the carbon atom in the methyl group?
The carbon atom in the methyl group is sp3 hybridized, which means that it has four hybrid orbitals oriented at 109.5 degrees from each other. This hybridization is necessary to form its four sigma bonds: three to hydrogen atoms and one to the rest of the molecule.
Is the methyl group a polar or nonpolar functional group?
The methyl group is nonpolar because the electronegativities of carbon and hydrogen are relatively similar, resulting in an even distribution of electron density throughout the group.
How does the presence of a methyl group affect the properties of a molecule?
The presence of a methyl group can affect the physical and chemical properties of a molecule, such as its melting and boiling points, solubility, and reactivity. It can also affect the biological activity of molecules in living organisms.
What are some common molecules that contain a methyl group?
Many organic molecules contain a methyl group, such as methane (CH4), ethane (C2H6), and propane (C3H8). In addition, many organic compounds used in industry, such as fuels, plastics, and pharmaceuticals, contain one or more methyl groups.
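The molar masses mentioned in this article follow from simple sums of standard atomic masses. A quick sketch for a few of the molecules named above (atomic masses are standard rounded values):

```python
# Approximate standard atomic masses (g/mol)
ATOMIC_MASS = {"C": 12.011, "H": 1.008}

def molar_mass(formula_counts):
    """Sum atomic masses for a formula given as a dict, e.g. {"C": 4, "H": 8}."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula_counts.items())

print(round(molar_mass({"C": 1, "H": 4}), 2))  # 16.04  (methane, CH4)
print(round(molar_mass({"C": 2, "H": 6}), 2))  # 30.07  (ethane, C2H6)
print(round(molar_mass({"C": 4, "H": 8}), 2))  # 56.11  (isobutylene, C4H8)
```

The last value reproduces the 56.11 g/mol quoted for isobutylene in the properties section.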
Thank goodness it is Friday! Time for another free phonics worksheet! Today's activity is brought to you by the letter F. Students are asked to read, trace, and draw words that begin with the letter f. If you are looking for other free phonics activities, you can find our consonant activities here, and our vowel activities here. Please fill out the form below to sign up for an occasional email and to get your free worksheet to help teach the letter f. Learn to read for free! Get tips to help you grow a confident new or struggling reader: you will learn how to break free from patching together reading lessons and how to grow a confident reader. If you like this post, you need to check out:
Science Take-Out® Atoms, Isotopes And Ions Chemistry Educational Materials Ionic Properties Learning Activities Use Colored Chips To Model Atomic Structure. - Available As A Single Kit With Materials For 1 Student Or Group - Unassembled 10 Pack For Additional Savings - No Additional Equipment Needed Use colored chips and the information from a periodic table to model the sub-atomic particles (protons, neutrons, and electrons) in atoms, isotopes, and ions. Model the electron configuration within the electron energy levels. Learn how changing the number of protons, neutrons, or electrons leads to different elements, isotopes, or ions. This complete "dry lab" activity contains all required materials.
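The chip-counting exercise the kit describes follows simple arithmetic, which can be mirrored in a few lines of Python (this sketch is not part of the kit; the example isotopes are illustrative):

```python
def particle_counts(atomic_number, mass_number, charge=0):
    """Protons, neutrons, and electrons for a given isotope/ion."""
    protons = atomic_number                 # fixed for a given element
    neutrons = mass_number - atomic_number  # isotopes of an element differ here
    electrons = atomic_number - charge      # ions of an element differ here
    return protons, neutrons, electrons

# Carbon-12 (neutral atom)
print(particle_counts(6, 12))        # (6, 6, 6)
# Carbon-14: same element, different isotope (two extra neutrons)
print(particle_counts(6, 14))        # (6, 8, 6)
# Sodium ion Na+: same element, one electron fewer than protons
print(particle_counts(11, 23, +1))   # (11, 12, 10)
```

Changing the proton count gives a different element, changing the neutron count a different isotope, and changing the electron count an ion, exactly as the activity demonstrates with colored chips.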
Prepared by the National Wildlife Control Training Program. Research-based, wildlife control information. - Explain key elements of bear biology that are important for their control. - Effectively communicate the options for the control of bears. - Describe how to avoid damage by bears. - Identify the risks involved with controlling bears. Black bears (Ursus americanus) may damage bird feeders, bee hives, and crops, as well as raid trash cans and dumps. On occasion, they will enter buildings and vehicles in search of food. If a bear has entered a structure, immediately contact your local police agency. Black bears may travel many miles in early summer seeking food and prior to breeding season. They may end up in suburban areas such as parks and school yards. If a bear is treed in an urban area, keep people and pets away, and let the bear leave on its own if possible. Darting and translocation of a bear is high risk for both the bear and agency staff. Black bears are protected by federal and state laws and regulations throughout their range. Black bears (Figure 1) are the smallest and most widely distributed of the 3 species of bears in North America. They are the only bear species in the northeast. Bears are massive, strong animals. Black bears that live east of the Mississippi River are predominantly black, and some may have a light blaze on their chest. In the Rocky Mountains and westward, shades of brown, cinnamon, and blond are common. The head is moderately-sized with a straight profile and tapering nose. The ears are relatively small, rounded, and erect. The tail is short (3 to 6 inches) and inconspicuous. Each foot has five curved claws, about 1-inch long, that do not retract. Bears walk with a shuffling gait but can be quite agile and quick. Health and Safety Concerns Bears suffer from a variety of internal and external parasites, some of which may be transmitted to humans. 
Zoonotic diseases include the worm responsible for trichinellosis, and the protozoan that causes toxoplasmosis. Surveys have revealed that a small percentage of bears contract tularemia, brucellosis, and leptospirosis. Few bears have tested positive for rabies. Although black bears generally avoid humans, they have attacked people. However, encounters are rarely fatal. Always respect bears and keep a safe distance. If a bear starts to approach, shout and wave your arms to try and scare the animal. General Biology, Reproduction, and Behavior Black bears become sexually mature at about 3½ years of age, but some females may not breed until their fourth year or later. Black bears breed during the summer, usually in late June or early July. Males travel extensively in search of receptive females, and mating individuals do not form pair bonds. Rival males may fight one another, and unreceptive females may fight with males. Females that are dominant may suppress breeding activities of females that are subordinate. After mating, the fertilized egg does not implant immediately, but remains unattached in the uterus until fall. Females in good condition usually produce 2 or 3 cubs that weigh 7 to 12 ounces at birth. Bears in urban areas with subsidized food have had up to 5 cubs in a single litter. Females give birth between late December and early February while they are in their dens. After giving birth, the sow may continue torpor (winter sleep) while the cubs are awake and nursing. Females that are lactating do not come into estrus, so females generally breed every other year. Only females care for young. Males sometimes kill and eat cubs. Cubs are weaned in late summer, but usually stay close to their mother throughout their first year. After the breeding season, females and their yearlings may travel together for a few weeks. The cubs leave the mother when the female comes into her next estrus. 
Sites for dens are quite variable and include piles of rocks or brush, excavations, hollow trees, and structures made by humans. The den floor may be covered with grass and leaves, or left bare. Many dens are at ground level under fallen trees, or sometimes even decks. Black bears typically are nocturnal, although occasionally they are active during the day. In the South, black bears tend to be active year-round. In northern areas, black bears undergo a period of torpor during winter, which they spend in their dens. During torpor, individuals may remain in their dens for 5 to 7 months (late October to early April), foregoing food, water, and elimination. The home range of a black bear depends on the type and quality of habitat, and the sex and age of the bear. In mountainous regions, bears encounter a variety of habitats by moving up or down in elevation. Where the terrain is flat, bears typically range more widely in search of resources. Most adult females have well-defined home ranges of 5 to 20 square miles. Ranges of adult males are several times larger. Black bears frequent heavily forested areas, including large swamps and mountainous regions. Black bears depend on forests for food, water, cover, and space. Mixed-hardwood forests interspersed with streams and swamps are typical habitats for bears. Black bear populations have their highest growth rates in eastern deciduous forests, where there is a variety and abundance of foods, especially near urban areas. Black bears are omnivorous and forage on a wide variety of plants and animals. Their diet typically is determined by the seasonal availability of food. About 80% of their diet is plant material, and typical foods include grasses, berries, nuts, tubers, inner bark, insects, small mammals, eggs, carrion, and garbage (Figure 2). Shortages of food occasionally occur in northern ranges when mast crops (berries and nuts) fail. At those times, bears travel more widely in search of food. 
Human encounters with bears are more frequent during such years, as are complaints of damage to crops and losses of livestock. Voice, Sounds, Tracks, and Signs Bears normally are silent when traveling. They emit grunts with young, and may blow and click their teeth if they are upset. Females use loud, staggered grunts to threaten unwanted males. Bears utter moans when subordinate to others. Tracks of bears are recognized by their shape and size (Figure 3). The heel rarely shows in the track of a front foot. Front feet average 4½ inches in length and 4 inches in width. Rear feet are 7 x 3½ inches. Scat of black bears varies in color and consistency, depending on diet. Well-formed scat averages 2½ inches in diameter, and 5 to 12 inches in length. Damage to Landscapes Bears can cause extensive damage to trees, especially in second-growth forests, by feeding on the inner bark, or clawing the bark to leave territorial markings. Black bears damage orchards by breaking trees and branches in attempts to reach fruit. They often will return to an orchard nightly. Due to repeated damage to orchards, and trees with broken limbs, losses often are economically significant. Damage to Crops and Livestock Black bears damage field crops such as corn, and occasionally alfalfa or oats. Large, localized areas of broken, smashed stalks show where bears have fed in cornfields. Bears eat the entire cob, whereas raccoons strip the ears from the stalks and chew the kernels from the ears. Black bears prefer corn in the milk stage. Few black bears kill livestock but the behavior, once developed, usually persists. The severity of predation by black bears usually makes solving the problem urgent for those who suffer damage. If bears are suspected, check the carcass for deep tooth marks (about ½-inch in diameter) on the neck directly behind the ears. On large animals, look for large claw marks (½ inch between individual marks) on the shoulders and sides. 
After an animal is killed, a black bear typically will open the body cavity and remove the internal organs. A black bear will eat the liver and other vital organs first, followed by the hindquarters. Udders of lactating females are often consumed. Predation by bears should be distinguished from attacks by coyotes or dogs. Coyotes typically attack the throat of their prey. Dogs chase their prey, often slashing the hind legs and mutilating the animal. Tooth marks on the back of the neck usually are not found on kills made by coyotes and dogs. Claw marks are also less prominent on kills by coyotes or dogs, if visible at all. Livestock behave differently when attacked by bears. Sheep tend to bunch when they are approached, and 3 or more often will be killed in a small area. Cattle tend to scatter when a bear approaches. Bears usually kill a single animal. Hogs evade bears in the open, and are more often killed when they are confined. Horses rarely are killed by bears, but they do get clawed on the sides. When a bear makes a kill, it usually returns to the site at dusk. Bears prefer to feed alone. If an animal is killed in the open, the bear may drag it into the woods or brush, and cover the remains with leaves, grass, soil, and forest debris. The bear will return periodically to the cache to feed on the decomposing carcass. Black bears destroy beehives (Figure 4). Damage to beehives includes broken and scattered combs and hives, with claw and tooth marks. Hair, tracks, scat, and other sign may be found in the immediate area. A bear usually will use the same path to return every night until all of the brood, comb, and honey are eaten. Damage to Structures Black bears can damage homes and vehicles when searching for food. Black bears also will scavenge in garbage cans, break in and demolish the interiors of cabins, damage bird feeders, and raid campsites and food caches. 
Damage Prevention and Control Methods Prevention is the best approach to handling damage by black bears. Sanitation and proper management of garbage are essential. You should store food, organic waste, and other attractants in bear-proof containers. Use garbage cans for nonfood items only, and place food waste in bear-proof garbage receptacles (Figure 5). Pick up garbage regularly, and place garbage at the curb the morning of pick up rather than the night before. Reduce access to landfills through fencing, and bury refuse daily. Eliminate garbage and carcass dumps. Surround dumpsters with electric fences. Only feed birds during winter, when bears are denning. Plant crops (e.g., corn, oats, fruit) away from forest edges if possible. Pick and remove all dropped fruit from orchards. Prohibit all feeding of bears. If possible, locate campgrounds, campsites, and hiking trails in areas that are not frequented by bears. If feasible, clear hiking trails to provide a minimum viewing distance of 50 yards down the trail. Avoid bear feeding and denning areas. Black bears can tear open doors, rip holes in siding, and break glass windows to gain access to food stored inside cabins, tents, and other structures. Use solid frame construction, ¾-inch plywood sheeting, and strong, tight-fitting shutters and doors. Steel plating is more impervious than wood. Place beehives on a flat or low-sloping garage roof. Add extra roof braces as 2 hives full of honey can weigh 800 pounds or more. Another technique is to place hives on an 8- X 40-foot flatbed trailer, and surround it with a 3-strand electric fence. Though expensive, this method makes hives less vulnerable to bear damage and makes moving them very easy. Confine livestock in buildings and pens at night, especially during lambing or calving seasons. Remove and dispose of carcasses by deep burial. Place livestock pens and beehives away from wooded areas or protective cover, and surround them with electric fences. 
Fences have proven effective in deterring bears from landfills, apiaries, cabins, and other high-value properties. Fences, however, may be relatively expensive. Consider the extent, duration, and cost of damage. Many fence designs have been used with varying degrees of success. Electric chargers increase the effectiveness of fences.

One person can easily and quickly install an electric polytape fence (Figure 6). It is economical and dependable for low to moderate pressure from bears. The fence consists of 4 strands of electric polytape that are attached to posts with insulators. Materials that are required to make an electric polytape fence include:
- 200-yard roll of polytape,
- 12, 4-foot fence rods (5/16-inch diameter),
- 48 insulators or clips,
- 4 gate handles,
- 12-volt fence charger,
- 12-volt deep cycle battery, and

To install the fence, drive four corner posts one foot into the ground, and attach a guy wire. Clip vegetation in a 15-inch-wide strip under the fence and apply an herbicide (use pesticides carefully near bees). Attach insulators on the inside of corner posts and stretch the polytape from the four posts at intervals of 6, 16, 26, and 36 inches above ground. Hand-tighten the polytape and join the ends with square knots. Drive the remaining posts into the ground at 12-foot intervals, attach insulators on the outside of the line posts, and insert the polytape.

A woven-wire, permanent fence (Figure 7) is durable but expensive, and is used where there is high pressure from bears. Two people can install it in about 8 hours. The fence consists of heavy, 5-foot woven wire, supported by wooden posts, and ringed by two additional electrified wires.
Materials required to construct a woven-wire permanent fence include:
- 50-yard roll of 5-foot-high woven wire with 6-inch mesh,
- 150-yard roll of high-tensile (14-gauge) smooth wire,
- 24, 8-foot treated wooden posts,
- 40 insulators (screw-in types),
- 2-pound box of 1½-inch fence staples,
- 6 gate handles,
- 12-volt fence charger,
- 12-volt deep cycle battery, and

To install the fence, set posts 6 to 12 feet apart in 2-foot-deep holes. Align the four corner posts at 5° angles from the vertical. Brace the corner and gate posts from the inside with H-braces or posts set at 45° angles. Clear the vegetation in a 15-inch-wide strip under the fence and apply herbicide. Place one length of woven wire vertically into position and staple the end to a corner post. Pull the entire length of wire taut with a vehicle and staple the woven wire to the line posts. Continue until all sides, except the gate opening, are fenced. Fasten 2 strands of high-tensile wire to insulators positioned 5 inches away from the woven wire, at intervals of 6 and 56 inches above ground level. For a 12-foot gate opening, attach 3 strands of high-tensile wire to insulators on the gateposts. Space the wires at 6, 36, and 56 inches above the ground. Connect them to the two strands that were previously strung around the fence; these will be connected to the positive terminal of the fence charger. Attach 3 more wires to gatepost insulators 20, 48, and 64 inches above ground level; these will be connected together and to the ground rod. Fit insulated gate handles to the free ends of all 6 gate wires.

To energize electric fences, use a 110-volt outlet or a 12-volt deep cycle (marine) battery connected to a high-output fence charger. Place the charger and battery in a case to protect them against weather and theft. Drive a ground rod 5 to 7 feet into the ground, preferably into moist soil. In dry soil, multiple ground rods may be needed.
Connect the ground terminal of the charger to the ground rod with a wire and ground clamp. Connect the positive fence terminal to the fence with a short piece of fence wire. Use connectors to ensure good contact. Electric fences must deliver an effective shock to repel bears. Lure bears into licking or sniffing the wire and getting shocked by attaching attractants (peanut butter on strips of aluminum foil) to the fence. Increase grounding, especially in dry, sandy soil, by laying grounded chicken wire around the outside perimeter of the fence. Check the voltage of the fence each week: it should carry at least 3,000 volts. To protect against voltage loss, keep the battery and charger dry and their connections free of corrosion. Make certain all connections are secure, and check for faulty insulators that might cause arcing between the wire and post. Each month, check the tension of the fence and refresh attractants. Always recharge batteries during the day so that the fence is energized at night.

Habituated bears that are conditioned to food can be very dangerous. Do not use any frightening method that would threaten a bear and elicit an attack. If a frightening technique does not cause the bear to flee in a few seconds, stop and try a different method, provided you are in a safe location. Black bears can be frightened for short periods from an area such as a building or livestock corral by use of night-lights, strobe lights, loud music, pyrotechnics, exploder cannons, scarecrows, and trained guard dogs. Change the position of frightening devices frequently. Individual bears usually become habituated to them, at which point frightening devices are ineffective and human safety becomes a concern. Aversive conditioning uses unpleasant experiences to encourage bears to stop nuisance behaviors such as visiting landfills or getting close to urban areas.
Hazing is most successful when used on bears older than one year, but before they become conditioned to food provided by humans. Tactics include chasing accompanied by yelling, throwing rocks, cracker shells, pepper spray, 12-gauge plastic slugs, gel-filled paintballs, bean bags, or 38-mm rubber bullets. Aim for the large muscle mass in the hindquarters of the bear. Avoid the neck and front shoulders to minimize the risk of damaging an eye. Firearms safety training is recommended. Karelian bear dogs have been used effectively to haze habituated bears out of urban areas and livestock facilities.

Capsaicin spray has been tested and used effectively on black bears in close quarters and threatening encounters. The range for most products is less than 30 feet, so capsaicin is only effective in close encounters. Do not spray capsaicin on objects or in areas in an attempt to repel bears, as the spray actually may attract them. When using capsaicin spray, make sure that you are upwind of the target so that you do not suffer from the effects.

No toxicants are registered for the control of black bears. As a last resort, shooting is effective for dealing with a black bear that poses an immediate threat to safety. Permits are required in most states to shoot bears. To increase the probability of removing the individual causing the problems, shooting should be done at the site where damage has occurred. Shooting is best left to a professional or law enforcement.

Several traps are available for capturing bears. Due to the legal and technical issues involved with bear trapping, consult with your state wildlife agency. The culvert or barrel trap (Figure 8) is an effective trap for professional use. After capture, the bear can be immobilized, released at another site, or euthanized. Relocation of black bears is not recommended unless the situation involves a rescue.
Any capture and translocation of bears is usually conducted by a state or federal wildlife agency. This is time-consuming and expensive, and is used as a last resort for problem bears, prior to lethal removal. Translocation of bears has a mixed record of success. Bears that have been trapped and are to be released should be transported at least 50 miles from the site, preferably across a substantial geographic barrier, such as a large river or mountain range, and released in a remote area with suitable wooded habitat. Some bears have returned from as far as 120 miles from the release site. A bear that causes problems should be released only once. If it causes subsequent problems, it should be euthanized. Translocation often is combined with aversive conditioning. Bears transported in culvert traps can be shot with rubber buckshot or gel-scram paintballs when released. If necessary, bears should be euthanized by shooting or chemical induction.
Barley is germinated, cleaned, and dried in the course of malting. The malt is ground and mixed with clean water. Mashing then begins: the malt and water mix is processed according to a set temperature scheme. During mashing, enzymes break down the starch in the malt into fermentable sugars (mostly maltose and dextrins), which yeast then ferments into alcohol. The mash is then filtered, and the resulting liquid, beer wort, is pumped into a brew kettle. Proteins are deposited during boiling and the wort becomes clear. Hops are added to the wort in the brew kettle; they give the beer its characteristic bitter taste and balance the sweetness of the malt. At the end of the boiling process, the wort is clarified. After this, the wort is cooled and directed into the fermentation tank. Brewer's yeast is added to the wort and fermentation begins. During fermentation, the yeast consumes sugars, producing alcohol and releasing carbon dioxide. The yeast settles on the bottom of the tank at the end of the fermentation process. The yeast is then separated and maturation begins. The length of maturation depends on the type of beer. When the beer is ready, it is cooled to below zero and filtering starts. The finished beer passes through a carboniser, where its carbon dioxide content is regulated, and then through a pasteuriser, guaranteeing the microbiological purity of the beer. After all this, the beer is ready for bottling.

THE PROCESS OF MAKING NON-ALCOHOLIC BEER
Non-alcoholic beer is made from fermented beer. The alcohol is removed from the beer using the vacuum distillation method. Under a vacuum, warm beer at approximately 40°C is run through two tall cylinders filled with perforated mesh panels. The large specific surface area of the mesh panels allows the beer to distribute as a thin layer over a large area. The alcohol is then efficiently removed under the vacuum.
Creating a vacuum makes it possible to vaporise the alcohol out of the beer at a low temperature, which minimizes any changes to the taste caused by the removal of the alcohol. Vaporisation at high temperatures would make the beer taste like bread.
What is Education and Examples? Education is a process of acquiring knowledge, skills, values, and attitudes through various formal and informal means. It is an essential tool that empowers individuals to lead a meaningful and productive life by providing them with the necessary tools and resources to succeed in various spheres of life. Education can take many forms, including formal education such as attending schools, colleges, and universities, as well as informal education such as learning through life experiences, reading, and online courses. It plays a crucial role in personal growth, career development, and social mobility, and can lead to higher levels of achievement, better employment opportunities, and a higher standard of living. There are various types of education, such as: Formal Education: Formal education is a structured form of education that takes place in schools, colleges, and universities. It provides students with a structured curriculum that covers various subjects, including math, science,
In this assessment, students investigate the number of cubes required to make staircases of different dimensions and types when the staircases are drawn on an isometric grid. Starting with guided questions, students draw staircases and recognize patterns in the number of cubes required to make them. Then, as the staircases become more complex, so do the patterns for finding the number of cubes required.

Math Concepts: sequences, formulas, square numbers, patterns
MYP Related Concepts: models, patterns, space
MYP Key Concepts: form, relationships
MYP Global Context: Orientation in Space and Time
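As an illustration of the kind of pattern students might discover, here is a sketch for the simplest case only (a staircase one cube wide, which is an assumption about the task rather than part of the assessment): the cube counts form the triangular numbers.

```python
def cubes_in_staircase(steps: int) -> int:
    """Count unit cubes in a staircase one cube wide:
    column k is k cubes tall, so the total is 1 + 2 + ... + steps."""
    return sum(range(1, steps + 1))

def cubes_by_formula(steps: int) -> int:
    """Closed-form version of the same count: the triangular number n(n+1)/2."""
    return steps * (steps + 1) // 2

# Cube counts for the first five staircases:
print([cubes_in_staircase(n) for n in range(1, 6)])  # → [1, 3, 6, 10, 15]

# Counting cubes and applying the formula agree:
assert all(cubes_in_staircase(n) == cubes_by_formula(n) for n in range(1, 20))
```

More complex staircase types lead students to other sequences, such as the square numbers listed among the math concepts above.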
Tides and Currents Tides are created by the sun and moon exerting a pull on the earth. Pleasure craft operators in coastal waters need to be mindful of the effect of tides. The rise and fall of tides can cause water levels to fluctuate by several metres and also can generate strong currents. Some tidal currents are strong enough that some pleasure craft cannot make headway against the current. The hours and heights of high and low waters caused by tides are predictable. Other factors, such as atmospheric pressure changes, strong prolonged winds, and variations of freshwater discharge, may cause changes between the predicted tides and the actual tides. As a pleasure craft operator, you need to know about the tides and currents in your boating area. Where to Find Tide and Current Tables For tide, current, and water level information, visit:
ATMOSPHERE: stratospheric ozone layer, oxygen, greenhouse gases, aerosols, water vapour, clouds, black carbon, tropospheric ozone.
OCEANS: moderate the climate, absorb heat and CO2; phytoplankton start the ocean carbon sink.
CRYOSPHERE: ice and snow reflect solar heat; frozen soil (permafrost) is a large carbon store; frozen ocean-floor methane hydrates are another huge carbon store.
FORESTS: huge carbon store; net CO2 emissions when cleared.
SOILS: store and emit CO2, especially peat lands.
MOUNTAINS: 'permanent' carbon store as limestone and dolomite, which are formed under the ocean depths; long-term weathering carbon source.
VOLCANOES: emit short-term cooling aerosols and long-term warming CO2.

Components of the Climate System
Climate components in action

As you can see from these climate system images, the climate is much more than the atmosphere. It is all of these components together.

The main greenhouse gases are normally emitted into the atmosphere by natural planetary processes and cycles. These are water vapor (the top greenhouse-effecting gas), carbon dioxide, methane, nitrous oxide, and tropospheric (ground-level) ozone. The planet also emits aerosols, which are cooling. Clouds have both warming and cooling properties. The oceans are intrinsic to the climate system and are the ultimate control of the climate system. The climate system is characterized by very long lag times: long delays between a change in atmospheric greenhouse gases and the resulting change in the global average temperature and the climate. Heat energy radiated (re-radiated) from greenhouse gases in the atmosphere is absorbed at the surface of the land and the oceans. Planet Earth is mostly ocean, so over 80% of this heat energy is normally taken up by the oceans.
The vast amount of ocean water takes a very long time to be warmed by the radiation from atmospheric greenhouse gases, and it takes a long time for this heat energy to register at the surface of the planet. This is called the ocean thermal lag of the climate system, and the lag time is about 30 years. Atmospheric carbon dioxide is (normally) absorbed by the oceans by dissolving in the ocean water to form carbonic acid. Both the land and the oceans have large masses of green plants that absorb carbon dioxide from the atmosphere by photosynthesis. The ocean green plants that perform this function are the phytoplankton. It takes 30-50 years for the heat trapped in the lower atmosphere by GHG emissions to distribute through the world ocean and to be registered as a global temperature increase at the planet's surface.

Phytoplankton are microscopic plants of the ocean surface. They initiate what is called the ocean biological pump, which is also the ocean carbon pump. Because the phytoplankton start the ocean food web, nutrients are transmitted throughout the depth and breadth of the oceans. The result is that carbon that originates as carbon dioxide in the atmosphere is transmitted to the bottom of the ocean and incorporated into ocean sediments, where, over ages of pressure, the carbon becomes permanently sunk as limestone and similar rock. Compression of the carbon in organic material (dead living things) over ages is the only way the planet can take CO2 out of the carbon cycle. It is the only true carbon sink there is, and it makes up the ultra-long-term carbon cycle. When we think of the carbon cycle, we are usually referring to the short-term carbon cycle, which does not sink CO2 (other than very temporarily). Limestone that is being burned to manufacture cement is 1 to 400 million years old. Coal, oil, and gas are hundreds of millions of years old. What we call fossil fuels are Nature's ancient carbon sinks that have allowed life (absorbing O2 and emitting CO2) to flourish.
Burning this ancient carbon is poisoning the atmosphere; if it is not stopped soon, it will change planet Earth to be inhospitable to most life.
Vacuum Cleaners Information
Vacuum cleaners are capable of picking up large quantities of material and/or liquid in a wide variety of industrial applications. A vacuum is defined as a space in which the pressure of gas is low compared to the atmospheric pressure. The measure of vacuum is associated with pressure. Vacuum cleaners are responsible for removing pollutants and particulates from the air surrounding a work space. The function of a vacuum pump is to withdraw gas from a designated volume so that the pressure is lowered to a value suitable for the purpose at hand. Vacuum cleaners use four basic systems for vacuum production:

Centrifugal blowers are used when only intermittent use is required. They can be powered with an inexpensive, short-life, brush-type AC or DC motor. More information can be found at How to Select Centrifugal Pumps.

Turbine pumps are centrifugal pumps that generate pump pressure by using an impeller to apply centrifugal force to a moving media. They use turbine-like impellers with radially oriented teeth. Turbine pumps have high discharge pressures similar to positive displacement or multi-stage centrifugal pumps, as well as flexible operation like centrifugal pumps. They are best used in operations where high head, low flow, and compact design are desired. Read How to Select Turbine Pumps for more information.

Regenerative devices are similar to centrifugal blowers and pumps, but the chamber is designed to generate higher pressure. Rather than having only a single compression per stage (as in centrifugal types), individual air molecules pass through many compression cycles with each revolution. More information can be found at How to Select Regenerative Blowers. Regenerative blowers produce low vacuum (100 in. H2O in single-stage models) but very high flow capacity (up to several hundred cfm).

Positive displacement (PD) pumps create vacuum by isolating and compressing a constant volume of air.
The compressed air exits one port and a vacuum is created at the port where the air is drawn in. More information can be found at How to Select Positive Displacement Pumps.

As the vacuum collects media, the media may be separated and stored by a separation system. Separation systems ensure that the pump and tubing do not become clogged or blocked as the device operates. There are several types of separation systems, including bag-type units, cartridge-type units, and cyclone or centrifugal separators.

Bag-type separators, also known as baghouses, are air pollution control devices that use fabric filter tubes or cartridges to capture or separate dust and other particulate matter. They can be used in settings from small household workplaces to large industrial facilities. Bag-type separators are available in a variety of bag sizes and types. They are very efficient when properly maintained and used in a relatively dry environment. For more information, visit How to Select Baghouses and Baghouse Filters.

Cyclone separators, also known as centrifugal separators, utilize gravity and a vortex to remove particulates from gaseous streams. They do not use filter media or moving parts. This lowers the pressure drop, operating cost, and maintenance required. Cyclone separators can withstand harsh operating conditions, and since separation in cyclones is a dry process, they are less prone to moisture corrosion. These devices work by incorporating centrifugal, gravitational, and inertial forces to spin the media in order to remove fine particles suspended in air or gas.

Another specification to consider is the storage capacity of the separation system. The size of the bag or cyclone will determine how often it needs to be emptied or cleaned.

Vacuum Cleaner Types
Vacuum cleaners are used in a wide variety of industries and applications. The model selected should be based on the size of the facility, the media vacuumed, the user, and the frequency of use.
There are several unit types of vacuums:

A backpack vacuum is a self-contained unit that is designed to be worn on the back of an operator. Image Credit: State Vacuum

A central vacuum system (CVS) is the package of tubes, wall inlets, and accessories connected to a vacuum power unit to collect dust and other elements. Image Credit: Latta Equipment Company

Portable vacuums are designed to be moved by the operator during use. This category includes walk-behind and canister vacuums. Image Credit: Nilfisk

Heavy-duty ride-on vacuums are usually designed to carry one operator. They can be used in indoor or outdoor applications. Image Credit: Direct Industry

Large, vehicle transport units are designed to be moved by a vehicle. They can be trailer-mounted or truck-mounted. Image Credit: Broyhill

Vacuum Cleaner Specifications
Specifications for industrial vacuum cleaners include:

System power is specified in horsepower and indicates the power of the motor. The power output of the vacuum pump is provided by the manufacturer in a pressure-flow curve, which also shows input power and speed requirements. By combining these data, the overall efficiency (both the volumetric and mechanical efficiency) can be evaluated. To do this evaluation, divide the free-air capacity of the pump at the required vacuum level by the drive power at that condition. The result, which is proportional to the product of gauge vacuum and air-flow rate, represents efficiency.

The vacuum power source depends upon the location and application of the vacuum. If a large number of vacuum generators are used in a small floor space, then a centralized vacuum system is recommended. If a few vacuums are required to cover a large floor space, it is not practical to use electric vacuum pumps. Typically, industrial vacuums are powered by electricity, gasoline, diesel, or compressed air.
Electricity- Electric vacuums are very efficient, as much as four times more efficient than compressed-air-driven vacuum generators. They are the best option if constant vacuum flow and a high flow rate at higher vacuum levels are required.

Compressed air- Compressed-air-powered pumps require a high volume of compressed air for proper operation. If a facility requires several of these devices, it may have trouble maintaining adequate supply pressure. Often, extra or backup compressors need to be maintained online to keep up with the site requirements.

Airflow refers to the velocity of the air stream created. This is also known as the rate of air removal. The flow rate is determined by the volume of air exhausted with no pressure difference across the pump. Most manufacturers provide curves showing free air delivery at rated speed for various vacuum levels. These levels range from 0 in. Hg (open capacity) to the maximum vacuum rating. (Figure: Flow Rate vs. Pressure)

Vacuum pressure is commonly referred to as static pressure (SP) or water lift. In vacuum systems, vacuum pressure is used in discussions of pressure differential across a filter media. The maximum vacuum rating is generally given in absolute pressure in mmHg or vacuum in inches Hg. Continuous and intermittent vacuum ratings are determined for standard atmospheric pressure, which is 29.92 in. Hg. The adjusted rating is determined by the formula:

Va = (Vo * Pa) / 29.92

Va = adjusted vacuum rating, in. Hg
Vo = original vacuum rating at standard conditions, in. Hg
Pa = anticipated atmospheric pressure at the application site, in. Hg

Maximum number of inlets or operators the device can support refers to the size of the system. The more inlets or operators the system has, the more power, pressure, and filtration are needed. Consider the space and access to resources such as power to determine the fewest number of vacuums needed to adequately handle the task.
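The rating-adjustment formula above can be sketched in a few lines of code; the site pressure used in the example is an assumed illustrative value, not taken from the text.

```python
STANDARD_ATMOSPHERE_IN_HG = 29.92  # standard atmospheric pressure, in. Hg

def adjusted_vacuum_rating(vo: float, pa: float) -> float:
    """Apply Va = (Vo * Pa) / 29.92: scale a vacuum rating quoted at
    standard conditions (vo, in. Hg) to the anticipated atmospheric
    pressure at the application site (pa, in. Hg)."""
    return vo * pa / STANDARD_ATMOSPHERE_IN_HG

# Example (assumed values): a pump rated 25 in. Hg at standard
# conditions, used at a site where atmospheric pressure is 24.9 in. Hg:
print(round(adjusted_vacuum_rating(25.0, 24.9), 2))  # → 20.81
```

Because Pa appears in the numerator, the available vacuum at altitude drops roughly in proportion to the lower atmospheric pressure at the site.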
The type of media in the system is important to consider when selecting an industrial vacuum. Vacuum cleaners can handle an assortment of media types, including liquids (wet), gas, and solid particulate matter (dry). It is critical to ensure that the selected vacuum can effectively collect the system media, because using an improper vacuum can cause serious equipment and facility damage. Some devices can be used with abrasives, coolants and oil mist, explosive media, fine powders, general cleaning, litter pick-up, non-free-flowing media, metalworking chips and fluids, reclaiming and recycling, spill recovery, toxic media, and welding fumes.

An important component of a vacuum cleaner is the type of filter used to collect the media. Most vacuums use a mechanical filter, which is a physical device used to capture and retain particles. The barriers can be made of cloth, polyethylene, and/or paper filters. There are four factors that affect mechanical filtration: the particle size being collected, the air velocity or speed of the particle, the filter material, and the running time of the filter. Filters are available in a wide range of efficiencies. Two of the most efficient filters are the HEPA and ULPA filters.

High efficiency particulate air (HEPA) filters are replaceable, extended-media, dry-type filters in a rigid frame. They have a minimum particle collection efficiency of 99.97 percent for a 0.3 micron particle, and a maximum clean-filter pressure drop of 2.54 cm (1 in.) water gauge when tested at rated air flow capacity. Image Credit: Wikipedia

Ultra low penetration air (ULPA) filters are extended-media, dry filters that are mounted in a rigid frame. They have a minimum particle collection efficiency of 99.999 percent for particles greater than or equal to 0.12 micron in size. ULPA filters are more efficient than HEPA filters and are often used in facilities that manufacture microelectronics.
Vacuum cleaners differ in terms of features such as:

Cleanroom suitability- A cleanroom vacuum cleaner is designed specifically for cleanroom applications. Cleanrooms, including laboratories and hospitals, require very high-quality air (free of particulates and pollution) to prevent contamination. This usually requires the inclusion of a HEPA filter or better. Image Credit: Nilfisk Industrial Vacuums

Duty cycle- Continuous-duty vacuums are rated for 100% duty cycle (constant use). This is important for facilities, such as wood shops, where particulates and pollution can be deposited on the machinery or operators. A high number of particulates in the air could be a potential fire and health risk. When selecting a continuous-duty vacuum, consider the power required to run the device.

Instrument panel- The unit is equipped with an instrument panel for user operation. The instrument panel may be equipped with an alphanumeric keypad, gauges, safety switches, on-off switches, or timer controls.

Vacuum cleaners can be found in industries such as agriculture, automotive, food and beverage, health care, pharmaceutical, utilities, and in the manufacturing or manipulation of textiles, pulp and paper, plastics and rubber, and electrical and electronics. Vacuums are used to collect particulates that could cause a fire if exposed to a spark, to absorb dangerous fumes from chemicals, to keep the work air clean so operators can see and breathe safely, and to keep equipment clean and free of debris. Industrial vacuum cleaners can be especially important where high air quality is critical, such as in the manufacturing of delicate materials like microelectronics or pharmaceuticals.
Now, more than ever, we need to take care of the earth. The Amazon Rainforest is the largest tropical ecosystem on the planet, hosts the most biodiversity on earth, and is home to many indigenous people. If we maintain our current interaction with the environment, the Amazon Rainforest, and the human and plant life that lies within it, could be gone in 40 years. There is a lot at stake. But things don't have to end up that way. We hold the power to reshape our relationship with our environment and to teach our children how to respect the land.

We often neglect the fact that plants and animals are living. We forget about the human life that relies on the physical environment. In order to help children realize that humans and nature are connected and rely on one another, we have created a mask-making activity for you to share with kids!

Masks are often used during rituals and ceremonies by tribes around the world. The reasons why and how they are used vary across cultures and tribes. Rather than appropriating the use of mask making from another culture, today we encourage you to create your own ritual with mask making. Make your own ritual of giving thanks to the earth as you collect the objects, and celebrate the connection between humans and nature by creating a mask out of these natural elements!

I always recommend starting by preparing your mind and your materials. Take a moment to look over the activity and its meaning. Take a few moments to let it sit and breathe deeply with kids, and then begin! By organizing yourself first, you make the activity more enjoyable and meaningful for everyone!

- Exacto knife / Scissors
- Hot glue / Elmer's glue
- Natural elements: leaves, grass, rocks, bark, etc.

1. Prep Materials
Before sitting the kids down, and depending on the age of the mask-makers, I suggest cutting up a few pieces of cardboard roughly the size and shape of a face!

2. Get outside
Grab your gloves, coats, and a bag, and get outside!
Explore your woods and gather some materials to make your mask with! Leaves, grass, acorns, bark! Spend as much time exploring as you'd like!

Connect with the earth as you create a face made out of natural elements. Use Elmer's glue or hot glue to attach objects to the cardboard. After the masks have been completed, you can ask kids to share why they used certain materials for different parts of their face and share the meaning behind their mask!

How can you make an impact?
1. You can symbolically adopt an acre of land using Amazon Aid's Acre Care program. 100% of your donation goes directly to our partners on the ground with the Amazon Conservation Association and its sister organization in Peru, la Asociación para la Conservación de la Cuenca Amazónica, in the Madre de Dios region.
2. You can symbolically adopt a tree to be planted in a reforestation concession near the Manuani river in Madre de Dios to help reforest an abandoned gold mine.
Bookmarks are a simple way to save the address of a web page. A bookmark is a shortcut that stores the address of a web page. When clicked, it opens the web page automatically without the user having to type in the URL. Storing favorite places on the Internet is a basic skill. However, in many education settings the steps to complete this task can be complex. This is because many school networks prevent students from saving files to the local computer. In addition, student profiles often are set up to permit access only to a designated location on the server, prohibiting the storage of files to a Favorites folder, where Internet bookmarks would typically be stored.

Why should your students bookmark? The four main reasons to bookmark a web page are:
- SPEED: Many young children are slow typists, so it is faster to create a bookmark that stores a web page address than to type the URL into the address bar of the web browser.
- ACCURACY: Many young children are inaccurate typists, which can cause them to access unwanted web pages because the URL they entered into the address bar is incorrect. With a bookmark they can always view the correct web page. This is especially helpful when the URL is lengthy and complex.
- IMPROVES WORKFLOW: When gathering facts for a research project, students may need to return to a web page for additional information. A bookmark makes it easy for children to return to a web page repeatedly.
- CITE THE SOURCE: Teachers will often ask students to create a bibliography that states where the information for an assignment was collected. Bookmarks are a great way to store sources of information if they need to be included in a report, presentation, or other publication.

Bookmarking can be complicated in some education settings! How a school's network is set up can transform the basic Internet skill of bookmarking from a simple task into a complicated set of steps. This is exactly the problem I encountered this week when teaching Assignment 4 from TechnoJourney.
The students cannot add bookmarks to the Favorites folder by clicking the Add to Favorites button in Internet Explorer. Instead, they must create a shortcut to the web page in their student folder. Since the students are only eight or nine years old, I was worried about teaching this skill.

UPDATE: TechnoJourney was replaced with TechnoInternet. The activities are similar.

One of my concerns was that students needed to know TWO methods of bookmarking: the home method, which is the standard step of clicking the Add to Favorites button in Internet Explorer, plus the school method, which is creating a shortcut in a student folder. I had worried that it might be confusing to teach them two methods of bookmarking in the same class. My other concern was that the school method of creating a shortcut has several steps, transforming the basic Internet skill of bookmarking into a complex task.

It was NOT AN OPTION to skip teaching bookmarking. The classroom teacher wanted her students to be better at Internet research. For this reason, the ability to store bookmarks was considered an essential Internet skill that would allow students to easily access valuable online resources and information.

The students surprised me with their ability. I started by demonstrating how they would bookmark resources at home. Students then practiced this skill. Several in the group who had older siblings were already familiar with the task, and the rest of the students caught on quickly. Next, we had to learn the school method of bookmarking. It had TEN STEPS:
- Minimize the web browser window.
- Click My Documents to open the student folder.
- Inside the student folder, create a new folder called bookmarks.
- Keep the student folder OPEN. Maximize the web browser window.
- Find a web page.
- Select the web address. Right click the mouse and select COPY.
- Click the bookmarks folder on the taskbar.
- Right click inside the bookmarks folder. Select NEW from the menu. Click SHORTCUT.
- Right click inside the location box. Select PASTE. Click NEXT.
- Type a name for the shortcut. Click FINISH.

I have an overhead projector, so it is easy to have students follow along. We did each step listed above, one at a time. Once we had completed all ten steps, I asked students to create another shortcut in their bookmarks folder on their own. I walked around the room to remind students about the steps. To make my life easier, I asked those children who had successfully created a second bookmark to assist their neighbors. It did not take long before everyone had two bookmarks in their bookmarks folder. Students then practiced clicking on their bookmarks to display favorite web pages. Creating bookmarks is a basic Internet skill that needs to be practiced repeatedly to be remembered. For this reason, we are going to spend the next class period bookmarking web pages. The goal is to make the children experts at this task. The classroom teacher is very pleased, because it is her hope that students become familiar with bookmarking in Grade 3 so that they can use this skill when conducting Internet research in Grades 4, 5, 6, 7, and 8.

About Teaching Basic Internet Skills

Bookmarking is a basic Internet skill. Do you teach how to create a bookmark? At what grade do you introduce this skill? Is bookmarking a simple task at your school, or is it more complex?

Other Articles about Teaching Internet Skills using TechnoJourney
Now the Students’ Turn: Reflecting on TechnoJourney
A Teacher Speaks Out: Yes, you should teach Internet skills!
Peer to Peer Teaching – Students Become the Teachers
Internet Tour Guide Activity
Use YouTube Videos in your Classroom
Students Love Google Maps
Review How to Sort Google Images with Your Students
Teaching Internet Skills – The Trust Test
Wikipedia in the Classroom
Bookmarking is a Basic Internet Skill that can be Complex
Metacognition and Teaching about the Internet
4 Strategies for Reviewing Internet Search Results
When Should Students Start Using the Internet?
Should you Teach Internet Skills?
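A technical aside on what the ten school-method steps actually produce: on Windows, an Internet shortcut is just a small plain-text file with a .url extension, so the whole GUI procedure boils down to writing a couple of lines into a file. A minimal sketch (the folder name and web address here are examples, not from the lesson):

```python
# Create a Windows-style Internet shortcut (.url file) in a
# "bookmarks" folder, mirroring the ten GUI steps in the lesson.
from pathlib import Path

def create_bookmark(folder, name, url):
    """Write a .url Internet-shortcut file; double-clicking it in
    Windows Explorer opens the page in the default browser."""
    folder = Path(folder)
    folder.mkdir(parents=True, exist_ok=True)
    shortcut = folder / f"{name}.url"
    # The .url format is a tiny INI file with one required key.
    shortcut.write_text(f"[InternetShortcut]\nURL={url}\n")
    return shortcut

create_bookmark("bookmarks", "Example Site", "https://example.com/")
```

Seeing the file contents can also demystify the procedure for older students: the bookmark is nothing more than the saved address.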
Scientists have discovered how a common crop pest evades detection. When the invader's cover is blown, the bacterium masks itself by ditching its genetic identification, setting the stage for a quiet and deadly invasion. Commonly known as Halo blight, Pseudomonas syringae pv. phaseolicola infects bean crops. Leaves develop small, water-soaked spots outlined by a yellow halo. As the plants fight back, the tissue around the infection dies, preventing further spread of the blight. But this strategy often fails, and as the bacteria move from leaf to leaf, they usually grow more virulent. In some cases, a single contaminated bean seed has unleashed a severe epidemic. To learn how Halo blight gets away with this, microbiologists Dawn Arnold and Andrew Pitman of the University of the West of England in Bristol, U.K., and colleagues simulated an outbreak. They exposed healthy-looking leaves to the bugs, waited for the plant to begin fighting back, and then reharvested the bacteria for yet another cycle through a batch of healthy greenery. After five iterations, plants could no longer defend themselves against the bacteria and experienced massive tissue damage. Genetic analysis indicated that Halo blight was pulling a molecular disappearing act. Upon sensing the bean plant's response, the bacterium kicked out the portion of its genome responsible for making proteins that could be recognized by the plant. This DNA migrated to the cytoplasm, where it formed dormant circular strands. "This is the first example of this mechanism in plant pathogenic bacteria," says Arnold, although she notes that similar dirty tricks have been observed in bacteria that infect animals. Curiously, the bacteria appear to work just fine without their banished genes, so it's unclear why they haven't dropped them for good. The team reports its findings 20 December in Current Biology.
The findings demonstrate the varied ways plants and pathogens have coevolved, says Hei-Ti Hsu, a microbiologist at the U.S. Department of Agriculture's National Arboretum in Washington, D.C. And Jonathan Jones, a biologist at the Sainsbury Laboratory in Norwich, U.K., says Halo blight's unique strategy may allow it to increase its host range and attack plants of other species.
Dubbed “the evil twin of global warming,” ocean acidification is a growing crisis that poses a threat to both water-dwelling species and human communities that rely on the ocean for food and livelihood. Since pre-industrial times, the ocean’s pH has dropped from 8.2 to 8.1—a change that may seem insignificant, but actually represents a 30 percent increase in acidity. As the threat continues to mount, the German research project BIOACID (Biological Impacts of Ocean Acidification) seeks to provide a better understanding of the phenomenon by studying its effects around the world. BIOACID began in 2009, and since that time, over 250 German researchers have contributed more than 580 publications to the scientific discourse on the effects of acidification and how the oceans are changing. The organization recently released a report that synthesizes their most notable findings for climate negotiators and decision makers. Their work explores “how different marine species respond to ocean acidification, how these reactions impact the food web as well as material cycles and energy turnover in the ocean, and what consequences these changes have for economy and society.” Field research for the project has spanned multiple oceans, where key species and communities have been studied under natural conditions. In the laboratory, researchers have also been able to test for coming changes by exposing organisms to simulated future conditions. Their results indicate that acidification is only one part of a larger problem. While organisms might be capable of adapting to the shift in pH, acidification is typically accompanied by other environmental stressors that make adaptation all the more difficult. In some cases, marine life that had been able to withstand acidification by itself could not tolerate the additional stress of increased water temperatures, researchers found. Other factors like pollution and eutrophication—an excess of nutrients—compounded the harm. 
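A note on the arithmetic behind the pH figures above: pH is the negative base-10 logarithm of hydrogen-ion concentration, so a drop of 0.1 pH units multiplies acidity by a factor of 10^0.1. With the rounded values 8.2 and 8.1 that works out to roughly a 26 percent increase; the widely quoted ~30 percent figure comes from the unrounded pH measurements. A quick check, with Python used purely as a calculator:

```python
# pH = -log10([H+]), so [H+] scales as 10**(-pH).
# A drop from pH 8.2 to 8.1 multiplies [H+] by 10**(8.2 - 8.1).
ratio = 10 ** (8.2 - 8.1)
percent_increase = (ratio - 1) * 100
print(round(percent_increase, 1))  # about 26 percent with these rounded pH values
```

The point to take away is the logarithmic scale: what looks like a tiny shift in pH is a large shift in hydrogen-ion concentration.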
Further, rising water temperatures are forcing many species to abandon part or all of their original habitats, wreaking additional havoc on ecosystems. And a 1.2 degree increase in global temperature—which is significantly under the 2 degree limit set in the Paris Climate Agreement—is expected to kill at least half of the world’s tropical coral reefs. Acidification itself is a multipronged threat. When carbon dioxide is absorbed by the ocean, a series of chemical reactions take place. These reactions have two important outcomes: acid levels increase and the compound carbonate is transformed into bicarbonate. Both of these results have widespread effects on the organisms that make their homes in our oceans. Increased acidity has a particularly harmful effect on organisms in their early life stages, such as fish larvae. This means, among other things, the depletion of fish stocks—a cornerstone of the economy as well as diet in many human communities. Researchers “have found that both [acidification and warming] work synergistically, especially on the most sensitive early life stages of [fish] as well as embryo and larval survival.” Many species are harmed as well by the falling levels of carbonate, which is an essential building block for organisms like coral, mussels, and some plankton. Like all calcifying corals, the cold-water coral species Lophelia pertusa builds its skeleton from calcium carbonate. Some research suggests that acidification threatens both to slow its growth and to corrode the dead branches that are no longer protected by organic matter. As a “reef engineer,” Lophelia is home to countless species; as it suffers, so will they.
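The two outcomes described above, more acid and less carbonate, follow from the standard seawater carbonate chemistry; the reactions are not spelled out in the report itself, but they can be written as:

```latex
% Dissolved CO2 hydrates to carbonic acid, which dissociates and
% releases hydrogen ions (raising acidity, lowering pH):
\mathrm{CO_2 + H_2O \;\rightleftharpoons\; H_2CO_3 \;\rightleftharpoons\; H^{+} + HCO_3^{-}}

% The added hydrogen ions then convert carbonate to bicarbonate,
% depleting the carbonate that calcifying organisms build with:
\mathrm{H^{+} + CO_3^{2-} \;\rightleftharpoons\; HCO_3^{-}}
```

The second reaction is why acidification hits shell- and skeleton-builders even before the water becomes acidic in an absolute sense: the carbonate they need is consumed along the way.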
The BIOACID report warns: “[T]o definitely preserve the magnificent oases of biodiversity founded by Lophelia pertusa, effects of climate change need to be minimised even now–while science continues to investigate this complex marine ecosystem.” Even those organisms not directly affected by acidification may find themselves in trouble as their ecosystems are thrown out of balance. Small changes at the bottom of the food web, for example, may have big effects at higher trophic levels. In the Arctic, Limacina helicina—a tiny swimming snail or “sea butterfly”—is a major source of food for many marine animals. The polar cod species Boreogadus saida, which feeds on Limacina, is a key food source for larger fish, birds, and mammals such as whales and seals. As acidification increases, research suggests that Limacina’s nutritional value will decrease as its metabolism and shell growth are affected; its numbers, too, will likely drop. With the disappearance of this prey, the polar cod will likely suffer. Diminishing cod populations will in turn affect the many predators who feed on them. Even where acidification stands to benefit a particular species, the overall impact on the ecosystem can be negative. In the Baltic Sea, BIOACID scientists have found that Nodularia spumigena, a species of cyanobacteria, “manages perfectly with water temperatures above 16 degrees Celsius and elevated carbon dioxide concentrations–whereas other organisms already reach their limits at less warming.” Nodularia becomes more productive under acidified conditions, producing bacterial “blooms” that can extend upwards of 60,000 square kilometers in the Baltic Sea. These blooms block light from other organisms, and as dead bacteria degrade near the ocean floor they take up precious oxygen. The cells also release toxins that are harmful to marine animals and humans alike. Ultimately biodiversity, “a basic requirement for ecosystem functioning and ultimately even human wellbeing,” will be lost.
Damage to tropical coral reefs, which are home to one quarter of all marine species, could drastically reduce the ocean’s biodiversity. And as biodiversity decreases, an ecosystem becomes more fragile: ecological functions that were once performed by several different species become entirely dependent on only one. And the diversity of marine ecosystems is not the only thing at stake. Currently, the ocean plays a major mitigating role in global warming, absorbing around 30 percent of the carbon dioxide emitted by humans. It also absorbs over 90 percent of the heat produced by the greenhouse effect. But as acidification continues, the ocean will take up less and less carbon dioxide—meaning we may see an increase in the rate of global warming. The ocean controls carbon dioxide uptake in part through a biological mechanism known as the carbon pump. Normally, phytoplankton near the ocean’s surface take up carbon dioxide and then sink towards the ocean floor. This process lowers surface carbon dioxide concentrations, facilitating its uptake from the atmosphere. But acidification weakens this biological carbon pump. Researchers have found that acidified conditions favor smaller types of phytoplankton, which sink more slowly. In addition, heavier calcifying plankton—which typically propel the pump by sinking more quickly—will have increasing difficulty forming their weighty calcium carbonate shells. As the pump’s efficiency decreases, so will the uptake of carbon dioxide from the air. The BIOACID report stresses that the risks of acidification remain largely uncertain. However, despite — or perhaps because of — this uncertainty, society must treat the oceans with care. The report explains, “Following the precautionary principle is the best way to act when considering potential risks to the environment and humankind, including future generations.”
Ebb and flow of the sea drives world's big extinction events If you are curious about Earth's periodic mass extinction events such as the sudden demise of the dinosaurs 65 million years ago, you might consider crashing asteroids and sky-darkening super volcanoes as culprits. But a new study, published online today (June 15, 2008) in the journal Nature, suggests that it is the ocean, and in particular the epic ebbs and flows of sea level and sediment over the course of geologic time, that is the primary cause of the world's periodic mass extinctions during the past 500 million years. "The expansions and contractions of those environments have pretty profound effects on life on Earth," says Shanan Peters, a University of Wisconsin-Madison assistant professor of geology and geophysics and the author of the new Nature report. In short, according to Peters, changes in ocean environments related to sea level exert a driving influence on rates of extinction, which animals and plants survive or vanish, and generally determine the composition of life in the oceans. Since the advent of life on Earth 3.5 billion years ago, scientists think there may have been as many as 23 mass extinction events, many involving simple forms of life such as single-celled microorganisms. During the past 540 million years, there have been five well-documented mass extinctions, primarily of marine plants and animals, with as many as 75-95 percent of species lost. For the most part, scientists have been unable to pin down the causes of such dramatic events. In the case of the demise of the dinosaurs, scientists have a smoking gun, an impact crater that suggests dinosaurs were wiped out as the result of a large asteroid crashing into the planet. But the causes of other mass extinction events have been murky, at best. "Paleontologists have been chipping away at the causes of mass extinctions for almost 60 years," explains Peters, whose work was supported by the National Science Foundation.
"Impacts, for the most part, aren't associated with most extinctions. There have also been studies of volcanism, and some eruptions correspond to extinction, but many do not." Arnold I. Miller, a paleobiologist and professor of geology at the University of Cincinnati, says the new study is striking because it establishes a clear relationship between the tempo of mass extinction events and changes in sea level and sediment: "Over the years, researchers have become fairly dismissive of the idea that marine mass extinctions like the great extinction of the Late Permian might be linked to sea-level declines, even though these declines are known to have occurred many times throughout the history of life. The clear relationship this study documents will motivate many to rethink their previous views." Peters measured two principal types of marine shelf environments preserved in the rock record, one where sediments are derived from erosion of land and the other composed primarily of calcium carbonate, which is produced in-place by shelled organisms and by chemical processes. "The physical differences between (these two types) of marine environments have important biological consequences," Peters explains, noting differences in sediment stability, temperature, and the availability of nutrients and sunlight. In the course of hundreds of millions of years, the world's oceans have expanded and contracted in response to the shifting of the Earth's tectonic plates and to changes in climate. There were periods of the planet's history when vast areas of the continents were flooded by shallow seas, such as the shark- and mosasaur-infested seaway that neatly split North America during the age of the dinosaurs. As those epicontinental seas drained, animals such as mosasaurs and giant sharks went extinct, and conditions on the marine shelves where life exhibited its greatest diversity in the form of things like clams and snails changed as well. 
The new Wisconsin study, Peters says, does not preclude other influences on extinction such as physical events like volcanic eruptions or killer asteroids, or biological influences such as disease and competition among species. But what it does do, he argues, is provide a common link to mass extinction events over a significant stretch of Earth history. "The major mass extinctions tend to be treated in isolation (by scientists)," Peters says. "This work links them and smaller events in terms of a forcing mechanism, and it also tells us something about who survives and who doesn't across these boundaries. These results argue for a substantial fraction of change in extinction rates being controlled by just one environmental parameter."
6-1: Define design thinking. Similar to The Practice of Entrepreneurship in many ways, design thinking is ultimately a constructive and collaborative process that merges the power of observation, synthesis, searching and generating alternatives, critical thinking, feedback, visual representation, creativity, problem solving, and value creation. 6-2: Demonstrate design thinking as a human-centered process focusing on customers and their needs. Before business feasibility and economic sustainability are considered in the design process, entrepreneurs discover what people need. Products that achieve all three are bound to be the most successful, but the product or service must first be designed to provide a desired solution or fulfill a need for the design process to be considered human-centered. 6-3: Describe the role of empathy in the design-thinking process. To create meaningful ideas and innovations, we need to know and care about the people who are using them. Developing our empathic ability enables us to better understand the way people do things and the reasons why; their physical and emotional needs; the way they think and feel; and what is important to them. 6-4: Illustrate the key parts of the design-thinking process. The design-thinking process comprises three main overlapping phases: inspiration, ideation, and implementation. 6-5: Demonstrate how to observe and convert observation data to insights. An insight in this sense is an interpretation of an event or observation that, importantly, provides new information or meaning. Observations can fall along one of nine different dimensions, and, like entrepreneurship, the ability to discern trends and patterns from each dimension is a skill that can be practiced and improved. 6-6: Demonstrate how to interview potential customers in order to better understand their needs. 
Interviews should be done for two reasons: 1) to develop a better understanding of customers' needs during the inspiration phase of design thinking, and 2) to get feedback on ideas during the implementation phase. The interviewer must prepare well, listen to the customer, ask intelligent questions, and evaluate the interview once it is over. 6-7: Identify and describe other approaches to design thinking. The authors of Designing for Growth suggest four questions that are useful to ask during the design-thinking process, all of which have periods of divergence and convergence. They are as follows: What is? What if? What wows? What works? Another variation on the design-thinking process is from the Stanford Design School, which uses five phases: empathize, define, ideate, prototype, and test. Design thinking can also be used to resolve wicked problems.
Project: Soil Microbial Communities & Diversity

In this series of nine labs you will learn:
- To think, work, and write as microbiologists
- To use the basic tools and techniques of traditional and molecular microbiology
- To investigate the diversity and identity of soil microorganisms in a habitat of your own choosing
- To make careful, unbiased observations and to record and analyze them for meaning and importance
- To design controlled experiments and collect data from those experiments to answer questions that arise from your observations
- To show data in effective figures or tables
- To make and articulate conclusions from experimental results
- To write intelligibly in scientific research report format about your investigation and its conclusions, including its significance or implications

Introduction to the Project

In the 1980s scientists discovered that, despite microbes' invisibility to us, the microbial world is much more diverse and numerous than the macroscopic world of plants and animals. Traditional measures of diversity relied on physical traits. The use of physical traits in determining relationships between organisms has two major problems when it comes to microbes: 1) microbes are not as morphologically diverse as metazoans, and their morphological traits are not indicative of evolutionary relationships, and 2) there are no morphological traits common to both macroorganisms and microbes. In the 1970s Carl Woese suggested that the deoxyribonucleic acid (DNA) sequences of certain common genes could be used to measure relatedness among radically different organisms. He picked the genes that encode ribosomal RNA (rRNA). Ribosomes, the protein-RNA complexes that are the scaffold on which proteins are synthesized, are common to all cells, both prokaryotic and eukaryotic. Despite differences in size, the sequences of rRNA molecules contain regions that are highly conserved, and thus highly similar.
Woese chose the intermediate-sized rRNA molecule (16S rRNA in prokaryotes, 18S rRNA in eukaryotes) because it was large enough to contain enough information for genetic comparisons but small enough for the gene to be sequenced easily. Comparing sequences of the gene (16S rDNA) that encodes 16S rRNA in different bacteria can be used to identify them. We can also use rDNA sequencing to deduce relationships between different bacteria and among organisms as diverse as bacteria and humans. Woese's ground-breaking work altered the phylogenetic tree of life and showed that the prokaryotic world was evolutionarily much older than expected and much more important. Recent advances in molecular tools for gene sequencing and in other types of microorganism identification have dramatically expanded our knowledge of the contribution of microbes to their (and our) environment. It is estimated that 99.9% of microbes are "unculturable" - that is, currently not cultured by traditional methods - leading to the so-called "Great Plate Count Anomaly": many more bacteria are present than appear as colonies on agar plates. Culture-independent estimates of the number of bacteria in a gram of soil are on the order of 10^9 - several orders of magnitude greater than the number derived from culture-dependent methods. It has been speculated that there might be 10 billion species of bacteria on Earth! Your goal in this semester-long project is to use both culture-based and culture-independent molecular methods of bacterial identification to investigate the diversity of bacteria in soil from a Wellesley greenhouse habitat and to explore the specific role of some of the bacteria in that community.
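Once the 16S sequences come back and are assigned to taxa, the diversity of the community can be summarized numerically; the Shannon index H' is a common choice. A minimal sketch (illustrative only; this is not part of the lab protocol, and the taxon labels below are invented):

```python
# Shannon diversity index H' for a community, computed from a list of
# taxon assignments (e.g. one entry per sequenced 16S clone).
import math
from collections import Counter

def shannon_diversity(assignments):
    """Return H' = -sum(p_i * ln(p_i)) over the observed taxa."""
    counts = Counter(assignments)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Invented example: 10 clones assigned to three genera.
clones = ["Bacillus"] * 5 + ["Pseudomonas"] * 3 + ["Streptomyces"] * 2
print(round(shannon_diversity(clones), 3))  # prints 1.03
```

H' rises both with the number of taxa and with how evenly the clones are spread across them, which is why it is a useful single-number summary when comparing soil habitats.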
Molecular Strategy for Identification of Culturable & Non-culturable Bacteria in a Soil Sample
- Isolate genomic DNA from environmental samples or cultures.
- PCR amplify rRNA gene sequences from the genomic DNA using "universal" bacterial primers.
- Clone 16S rDNA PCR products into a plasmid vector and transform E. coli with the plasmid.
- Pick transformed E. coli colonies containing soil community 16S rDNA inserts; culture the bacteria overnight in 96-well blocks; create glycerol stocks in 96-well plates to freeze and send away for 16S rRNA gene sequencing.
- Analyze the diversity of the soil bacterial community and infer phylogenetic and functional relationships.

Strategy for Identification of Culturable Bacteria in a Soil Sample

Links to Labs

We would like to thank Charles Deutsch; Patricia M. Steubing; Stephen C. Wagner and Robert S. Stewart, Jr.; Kyle Seifert, Amy Fenster, Judith A. Dilts, and Louise Temple; and the instructors of the Microbial Diversity Course at the Marine Biological Lab in Woods Hole, MA for their valuable assistance in the development of these labs.
Summary and Keywords Catholicism, as both an institution and a culture of popular beliefs, rituals, and values, has played an important role in the formation of racial boundaries in American society. The logic of race and its inherent function as a mechanism of social power, in turn, profoundly shaped Catholic thought and practice throughout the church’s own 400-year formation in America. Beginning with colonization of the New World, Catholicism defined and institutionalized racial difference in ways that both adhered to and challenged the dominant Anglo-American conceptions of whiteness as a critical measure of social belonging. Early Catholic missions abetted European colonialism by codifying Africans and Native Americans as cultural and moral “others.” Following a “national parish” system, institutional growth from the mid-19th to the mid-20th century sorted various European “races” and created spaces for resisting Anglo-American discrimination. The creation of a separate and singular mission for all “non-white” communities nonetheless reflected Catholic acquiescence to an American racial binary. Intra-Catholic challenges to racialist organization struggled to gain traction until the mid-20th century. As second- and third-generation European immigrants began asserting white status in American society, Catholic understandings of sacred space, which infused white resistance to neighborhood integration with religious urgency, and hierarchical ordering of moral authority within an institution that historically excluded non-whites from positions of influence created significant barriers to Catholic interracialism. The influence of the civil rights movement and the structural transformation of both Catholic life and urban communities where non-whites lived nonetheless prompted new efforts to enlist Catholic teaching and community resources into ongoing struggles against racial oppression. 
Debates over the meaning of race and American society and social policy continue to draw upon competing histories of the American Catholic experience.
From UPSC perspective, the following things are important : Prelims level : Sagittarius A* Mains level : Black-hole theory and its relevance - It is a supermassive black hole that sits 26,000 light years away from Earth, near the Galactic Centre, or the centre of the Milky Way. - It is one of the few black holes where we can witness the flow of matter nearby. - Since the discovery of Sagittarius A* 24 years ago, it has been fairly calm. Why in news? - This year Sagittarius A* has shown unusual activity, and the area around it has been much brighter than usual. - It may be that the Sagittarius A* has become hungrier, and has been feeding on nearby matter at a markedly faster rate, which one researcher described as a “big feast”. - A black hole does not emit light by itself, but the matter that it consumes can be a source of light. - A large quantity of gas from the S0-2 star, which travelled close to the black hole last year, may now have reached the latter. - Other possibilities of the heightened activity are that it could be growing faster than usual in size, or that the current model that measures its level of brightness is inadequate and is in need of an update.
or covid-19 or coronavirus disease [ koh-vid-nahyn-teen ]

What does COVID-19 mean?

COVID-19 is a highly infectious respiratory disease caused by a new coronavirus. The disease was discovered in China in December 2019 and has since spread around the world.

Where does COVID-19 come from?

COVID-19, also called coronavirus disease, is the name of the disease caused by a newly discovered coronavirus. The virus and disease were first detected in Wuhan, China on December 31, 2019, and, as of the beginning of March 2020, have led to an outbreak in over 60 countries across the globe, including the US. While the coronavirus disease is popularly referred to as just coronavirus, coronavirus actually refers to a large family of viruses which can cause illnesses in humans and many animals. Some of these illnesses are rare but severe respiratory infections, including Middle East Respiratory Syndrome (MERS), Severe Acute Respiratory Syndrome (SARS), and, as most recently discovered, COVID-19. On February 11, 2020, the World Health Organization (WHO) officially named the disease caused by this novel coronavirus COVID-19. COVID is short for coronavirus disease. The number 19 refers to the fact that the disease was first detected in 2019, though the outbreak occurred in 2020. Novel coronavirus can be abbreviated as nCoV. The technical name of the virus that causes COVID-19 is severe acute respiratory syndrome coronavirus 2, abbreviated as SARS-CoV-2. The virus that causes COVID-19 is genetically related to, but not the same as, the virus that led to the SARS outbreak in 2003. SARS is deadlier than COVID-19, but less infectious. Coronaviruses contain RNA and are spherical. Under a microscope, the viruses appear to be surrounded by a spiky array thought to look like a corona, or crown-like shape, hence the name coronavirus. The source of the new coronavirus is believed to be an animal. The virus spreads through droplets from the mouth and nose of a person with COVID-19 after coughing, sneezing, and exhaling.
Other people can then pick up the virus by breathing in these droplets or coming into contact with surfaces that have been contaminated with the droplets (such as by touching an object and then touching parts of the face). This is why it’s important to frequently wash your hands—among other practices—to reduce the risk of spreading or getting the virus. Please watch this video from the WHO for tips on protecting yourself and others from COVID-19: Common symptoms of COVID-19 are fever, tiredness, dry cough, and difficulty breathing. Less common symptoms experienced include aches and pains, a runny nose, and diarrhea. Some people infected with COVID-19, however, don’t show symptoms or feel sick at all. According to the WHO, most people (80%) recover from COVID-19. However, COVID-19 can develop into a severe illness, especially in older people or people who already have medical conditions. The WHO has officially classified the coronavirus outbreak as a pandemic, which it defines as “a worldwide spread of a new disease.” The US government has declared a public health emergency. So far, COVID-19 has caused over 16,000 deaths, and over 370,000 cases have been confirmed around the world. Efforts to contain the spread of COVID-19 include social distancing, a term for measures (such as avoiding mass gatherings) taken to reduce close contact between people. Learn more about social distancing at our article on the difference between quarantine and isolation. Health professionals emphasize that protective measures like social distancing can flatten the curve. Flatten the curve means slowing the spread of an epidemic disease so that the capacity of the healthcare system doesn’t become overwhelmed. The curve represents the number of cases over time, and flattening that curve means preventing a huge surge of new cases in a very short period of time—which is extremely challenging for health officials to handle.
Slowing the spread of an epidemic in this way is known as mitigation.

Our #FlattenTheCurve graphic is now up on @Wikipedia with proper attribution & a CC-BY-SA licence. Please share far & wide and translate it into any language you can! Details in the thread below. #Covid_19 #COVID2019 #COVID19 #coronavirus Thanks to @XTOTL & @TheSpinoffTV pic.twitter.com/BQop7yWu1Q — Dr Siouxsie Wiles (@SiouxsieW) March 10, 2020

Who uses COVID-19?

COVID-19 is the official name of the disease caused by a newly discovered type of coronavirus.

What are five things you need to know about novel (new) #coronavirus? Watch as @DrNancyM_CDC answers important questions in this video. Stay updated with the latest information on #COVID19 at https://t.co/inSgagrDeE. pic.twitter.com/Wp2XJ9Vwmz — CDC (@CDCgov) February 18, 2020

COVID-19 is sometimes written in lowercase as covid-19. Popularly, COVID-19 is referred to as COVID (or covid) for short. The disease is also commonly referred to as coronavirus, and corona for short. But, keep in mind that coronavirus is technically the name of a family of viruses, including SARS-CoV-2, which causes COVID-19.

There’s no evidence so far that the coronavirus is a threat to house pets like cats or dogs https://t.co/qUCAdAQEVX — The New York Times (@nytimes) March 3, 2020

Rona, roni, the rona, and that/dat rona have emerged as informal names for the disease, especially in jokes on Black Twitter.

Now black folks bout to take the rona seriously lol https://t.co/jphm2bj7lx — Sylvia Obell (@SylviaObell) March 10, 2020

This is not meant to be a formal definition of COVID-19 like most terms we define on Dictionary.com, but is rather an informal word summary that hopefully touches upon the key aspects of the meaning and usage of COVID-19 that will help our users expand their word mastery.
Early life may have contained both RNA and DNA, rather than just RNA

Early life may have emerged from a mixture of RNA and DNA building blocks, developing the two nucleic acids simultaneously instead of evolving DNA from RNA.

According to the RNA world hypothesis, early life used RNA to carry genetic information and perform biochemical catalytic reactions. Over time, DNA developed from RNA as the carrier of genetic information and proteins appeared as biochemical catalysts. As RNA gave way to DNA, some think a mixture of nucleotide building blocks would have been inevitable. As these nucleotides connected to form strands, the thermodynamic and kinetic stability of pure RNA and DNA duplexes would drive these nucleic acids to accumulate in primitive cells, while less thermally stable complexes containing one strand of RNA and one strand of DNA fell apart.

Ramanarayanan Krishnamurthy, at the Scripps Research Institute, and colleagues wondered if duplexes where each strand contains both RNA and DNA nucleotides were stable enough to have been possible intermediates during the transition from RNA to DNA. The researchers purchased commercially synthesised sequences of RNA and DNA, six to 16 bases long. In some sequences, they systematically changed purine RNA nucleotides, containing the bases adenine or guanine, into the corresponding DNA purines. In others, they changed pyrimidine RNA nucleotides, with cytosine or uracil bases, into pyrimidine DNA nucleotides with cytosine or thymine bases. The result was several series of nucleic acid duplexes that ranged from having all RNA nucleotides to none at all.

To test the thermal stability of these sequences, the researchers heated each duplex to separate the strands. Then they measured the increase in UV absorption as the strands melted. The faster the absorption increased as the temperature increased, the faster the strands separated, indicating a less stable duplex.
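The melting measurement just described can be sketched numerically: the melting temperature (Tm) of a duplex is commonly taken as the temperature where UV absorbance rises fastest, i.e. the maximum of the derivative dA/dT. The sketch below is illustrative only, with made-up data and function names of my own, not the researchers' analysis.

```python
# Estimate a duplex melting temperature (Tm) from a UV melting curve.
# Tm is taken as the temperature of the steepest absorbance increase,
# i.e. the maximum of the finite-difference derivative dA/dT.
# The data points below are synthetic illustrative values, not from the paper.

def melting_temperature(temps, absorbances):
    """Return the temperature at which dA/dT is largest."""
    best_slope, tm = float("-inf"), None
    for i in range(1, len(temps)):
        slope = (absorbances[i] - absorbances[i - 1]) / (temps[i] - temps[i - 1])
        if slope > best_slope:
            best_slope = slope
            # midpoint of the interval with the steepest rise
            tm = (temps[i] + temps[i - 1]) / 2
    return tm

# A sigmoidal melting curve centred near 70 °C (synthetic example):
temps = list(range(40, 101, 5))
absorbances = [0.50 + 0.30 / (1 + 2.718 ** (-(t - 70) / 3)) for t in temps]

print(melting_temperature(temps, absorbances))  # steepest rise lands near 70 °C
```

A lower Tm estimated this way is exactly what signalled the reduced stability of the mixed RNA/DNA duplexes.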
Many mixed duplexes with strands containing both RNA and DNA nucleotides melted at temperatures as much as 20 degrees lower than pure RNA or pure DNA duplexes, indicating they were significantly less thermally stable than the pure duplexes. Krishnamurthy says they were surprised to see this instability trend hold for a variety of sequences. Without ways to prevent these mixed sequences from forming or primitive catalysts to overcome their instability, the researchers imagine that the most efficient path to pure RNA and pure DNA duplexes would start from a mixture of both nucleotides, rather than that nucleotide mixture developing from a pool of pure RNA.

‘This paper presents significant results that will influence our thinking about the way that RNA and DNA could have interacted in primitive life,’ says David Deamer, at the University of California, Santa Cruz, US. Depending on conditions, however, RNA and DNA have very different abilities to withstand chemical changes like depurination, deamination, and hydrolysis. The chemical stability of these two nucleic acids should also be considered when thinking about how they could become incorporated into the earliest forms of life, he adds.

The assumption that a primitive world was not chemically sophisticated enough to differentiate RNA and DNA building blocks challenges some evidence for the capabilities of the RNA world, says Steven Benner, of the Foundation for Applied Molecular Evolution in the US. Small molecules that enhance the activity of modern enzymes, such as coenzyme A, have RNA nucleotide tails. This indicates that these RNA cofactors could have been part of an RNA world able to assemble pure RNA and pure DNA from a pool of both nucleotides, he says.

J V Gavette, M Stoop, N V Hud, R Krishnamurthy, Angew. Chem. Int. Ed., 2016, DOI: 10.1002/anie.201607919
To a molecular biologist, the word ‘evolution’ evokes images of fossils, dusty rocks, and phylogenetic trees covering eons. The fields of molecular biology and evolutionary biology diverged during the twentieth century, but new experimental technologies have led to a fusion of the two disciplines. The result is that evolutionary biologists have the unprecedented ability to evaluate how genetic change produces novel phenotypes that allow adaptation. It’s a great time to start a new podcast on evolution!

Molecular biology is an experimental approach that was born in 1953 with the discovery of the structure of DNA. Its goal is to understand how cells and organisms work at the level of biological molecules such as DNA, RNA, and proteins. Some of the experimental tools of molecular biology include recombinant DNA, nucleotide sequencing, mutagenesis, and DNA-mediated transformation. The experiments of molecular biology often involve simplified, or reductionist, systems in which much of the complexity of nature is ignored. Variation in individuals, populations, and the environment is set aside. Data produced by the techniques of molecular biology can lead to decisive conclusions about cause and effect.

Evolutionary biology embraces variation, and in fact attempts to explain it. The basis for variation in organisms is usually inferred by associating phenotypes, sequences, and alleles. The problem with this approach is that alternative explanations are often plausible, and conclusions are rarely as decisive as those achieved with molecular biology.

We can turn to Darwin’s finches as a good illustration of the difference between fields. Darwin hypothesized that variation in the beaks of finches was a consequence of diet, but how such variation occurred was unknown. It was not until 2004 that it was shown that beak shape and size could be controlled by two different genes.
The techniques of DNA sequencing, mutagenesis, and the ability to introduce altered DNA into cells and organisms have been the catalyst for the fusion of molecular biology and evolutionary biology into a new and far more powerful science, which Dean and Thornton call a ‘functional synthesis’. As a consequence, genotype can be definitively connected with phenotype, allowing resolution of fundamental questions in evolution that have been puzzles for many years.

Microbes are perfect subjects for study by evolutionary biologists, as they are readily manipulable and reproduce rapidly. However, no organism is now very far from the eye of this new science. Subjects as diverse as insecticide resistance, coat color in mice, evolution of color vision, and much more are all amenable to scrutiny by the ‘functional synthesis’.

This Week in Evolution will cover all aspects of the functional synthesis, irrespective of organism. My co-host is Nels Elde, an evolutionary biologist at the University of Utah. Nels has appeared on This Week in Virology to discuss the evolution of virus-host conflict, and his lab’s story on the evolutionary battle for iron between mammalian transferrin and bacterial transferrin-binding protein was covered on This Week in Microbiology.
A deck of playing cards is a versatile tool to help fifth grade students practice vital math concepts. You can model games after common card games with minor modifications to maximize their educational value. In addition, the flexibility inherent in a standard deck of cards offers a multitude of possibilities for creating brand new games specifically designed to help fifth graders practice important skills.

Modify Standard Games for Computation Practice

One of the focuses of the Common Core State Standards Initiative for fifth grade mathematics is finalizing fluency with the four basic operations. This means that students are to develop the speed and accuracy necessary to solve problems automatically. Card games can help with this goal by acting as flash cards and random number generators. Remove face cards and tens, and play games using only the aces through nines so that each card represents a single digit. For example, modify the familiar game of War by turning over multiple cards for each turn and performing a predetermined math calculation on them. The player with the larger answer gets to keep all of the cards in play, and the person who has the most cards at the end of the game wins.

Use Cards to Generate Random Numbers

Fifth grade students develop understanding of concepts about numbers as part of their introduction to algebraic thinking. Use cards to create an endless supply of random numbers for analysis. Remove the face cards and tens from the deck and choose two to seven number cards to create a numeral with multiple digits. Use this randomly generated number to practice prime factorization, application of divisibility rules, or classification as a prime or composite number. Create games that reward players for speed of calculation, finding the largest prime number of the group, or generating a number with the most factors.

Play a Place Value Game with Cards

Fifth grade students are expected to expand their knowledge of place value to include millions and decimal fractions.
Create a playing board on paper with blank spots to lay cards. Choose a place to insert a decimal if desired. Use a deck of cards with face cards and tens removed, or use one of these as a designated "zero" card. Players take turns choosing a card from the face-down deck and laying it on their board in an empty spot. The player who creates the largest (or smallest) number wins the round.

Create Fraction Games with Playing Cards

In fifth grade, students are expected to increase knowledge of equivalent fractions, comparison of fractions and calculation with fractions. Use a deck of playing cards with face cards removed. Shuffle the deck and deal two cards to each player. Each player uses the cards to create a fraction by designating the larger number as the denominator and the smaller as the numerator. Players can compare the fractions they've created to discover who has the largest fraction. Alternatively, new rounds of two cards each can be dealt until one player can create a larger or smaller fraction than his or her original fraction. In case of a tie, the player with the largest difference between the original and second fractions wins the round.
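The fraction game described above is easy to simulate, which can help a teacher check the rules before class. This is a hypothetical sketch (the dealing and scoring follow the description; the function and variable names are my own), using Python's fractions module so that equivalent fractions such as 4/8 and 1/2 compare correctly:

```python
import random
from fractions import Fraction

# Build a deck with face cards removed: four suits of ace (1) through 10.
deck = [rank for rank in range(1, 11) for _suit in range(4)]

def deal_fraction(deck):
    """Deal two cards and form a fraction as the game describes:
    the smaller card becomes the numerator, the larger the denominator."""
    a, b = deck.pop(), deck.pop()
    low, high = min(a, b), max(a, b)
    return Fraction(low, high)  # Fraction reduces automatically, e.g. 4/8 -> 1/2

random.seed(7)        # reproducible shuffle for this example
random.shuffle(deck)

player1 = deal_fraction(deck)
player2 = deal_fraction(deck)
print(player1, player2, "player 1 wins" if player1 > player2 else "player 2 wins or tie")
```

Because Fraction reduces to lowest terms, two players who build equivalent fractions genuinely tie, matching the tie-breaking rule in the game.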
a part of the theory of numbers that studies approximations of real numbers by rational numbers or, in a broader context, problems involved in finding integral solutions of linear and nonlinear inequalities or systems of inequalities with real coefficients. Diophantine approximations are named after the ancient Greek mathematician Diophantus, who worked on the problem of finding integral solutions of algebraic equations (Diophantine equations).

The methods of the theory of Diophantine approximations are based on the application of continued fractions, Farey sequences, and the Dirichlet principle. The problem of approximating a number by rational fractions is solved with the aid of all three of these methods and particularly with the use of continued fractions. The approximation of a real number α by convergents pₖ/qₖ of the expansion of α into a continued fraction is characterized by the inequality │α − pₖ/qₖ│ < 1/qₖ². On the other hand, if an irreducible fraction a/b satisfies the inequality │α − a/b│ < 1/(2b²), then it is a convergent in the expansion of α into a continued fraction. Fundamental work on the approximation of real numbers α by rational fractions has been done by A. A. Markov (Senior).

There are many extensions of the problem of approximating a number by rational fractions. Among them, the primary problem is that of studying expressions of the type xθ − y − α, where θ and α are certain real numbers and x and y assume integral values (the so-called nonhomogeneous one-dimensional problem). The first results in solving this problem were achieved by P. L. Chebyshev. A well-known theorem on approximate integral solutions of systems of linear equations (multidimensional problems of Diophantine approximations) is the following theorem of L.
Kronecker: If α₁, …, αₙ are real numbers for which the equality a₁α₁ + … + aₙαₙ = 0 with integral a₁, …, aₙ holds only for a₁ = … = aₙ = 0, and β₁, …, βₙ are certain real numbers, then for any ε > 0 it is possible to find a number t and integers x₁, …, xₙ such that the inequalities │tαₖ − βₖ − xₖ│ < ε hold for k = 1, 2, …, n.

Dirichlet’s principle is very useful in solving multidimensional problems of Diophantine approximations. Methods based on Dirichlet’s principle enabled A. Ia. Khinchin and other mathematicians to develop a systematic theory of multidimensional Diophantine approximations. An important aspect of the theory of Diophantine approximations is its connection with geometry, based on the fact that a system of linear forms with real coefficients can be represented as a lattice in an n-dimensional arithmetic space. In the late 19th century H. Minkowski proved a number of geometric theorems applicable to the theory of Diophantine approximations. I. M. Vinogradov obtained remarkable results in problems of nonlinear Diophantine approximations. The methods devised by him occupy a central position in this area of number theory.

One of the most important problems of the theory of Diophantine approximations is that of approximating algebraic numbers by rational numbers. The theory of transcendental numbers, in which estimates are found for norms of linear forms and polynomials in one and several numbers with integral coefficients, is related to Diophantine approximations. The theory of Diophantine approximations is closely related to the solution of Diophantine equations and to various problems of analytic number theory.

REFERENCES
Vinogradov, I. M. Metod trigonometricheskikh summ v teorii chisel. Moscow, 1971.
Gel’fond, A. O. “Priblizhenie algebraicheskikh chisel algebraicheskimi zhe chislami i teoriia transtsendentnykh chisel.” Uspekhi matematicheskikh nauk, 1949, vol. 4, issue 4.
Fel’dman, N. I., and A. B. Shidlovskii.
“Razvitie i sovremennoe sostoianie teorii transtsendentnykh chisel.” Uspekhi matematicheskikh nauk, 1967, vol. 22, issue 3. Khinchin, A. Ia. Tsepnye drobi, 3rd ed. Moscow, 1961. Koksma, J. F. Diophantische Approximationen. Berlin, 1936.
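The convergent inequality quoted in the entry, │α − pₖ/qₖ│ < 1/qₖ², can be checked numerically. The following sketch (not part of the encyclopedia entry) computes convergents of √2 from its continued fraction expansion [1; 2, 2, 2, …] using the standard recurrences:

```python
from fractions import Fraction
import math

def convergents(cf_terms):
    """Yield the convergents p_k/q_k of a continued fraction [a0; a1, a2, ...]
    via the recurrences p_k = a_k*p_{k-1} + p_{k-2}, q_k = a_k*q_{k-1} + q_{k-2}."""
    p_prev, p = 1, cf_terms[0]
    q_prev, q = 0, 1
    yield Fraction(p, q)
    for a in cf_terms[1:]:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        yield Fraction(p, q)

# sqrt(2) = [1; 2, 2, 2, ...]; its convergents are 1, 3/2, 7/5, 17/12, 41/29, ...
alpha = math.sqrt(2)
for frac in convergents([1] + [2] * 8):
    err = abs(alpha - frac.numerator / frac.denominator)
    bound = 1 / frac.denominator ** 2
    print(frac, err < bound)  # True for every convergent
```

Note the converse stated in the entry is stronger: any irreducible a/b within 1/(2b²) of α must itself be one of these convergents.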
The Orion was a mid-range mainframe computer introduced by Ferranti in 1959 and installed for the first time in 1961. Ferranti positioned Orion to be their primary offering during the early 1960s, complementing their high-end Atlas and smaller systems like the Sirius and Argus. The Orion was based on a new type of logic circuit known as "Neuron" and included built-in multitasking support, one of the earliest commercial machines to do so (the KDF9 being a contemporary). Performance of the system was much less than expected and the Orion was a business disaster, selling only about eleven machines. The Orion 2 project was quickly started to address its problems, and five of these were sold. The failure of the Orion was the capstone to a long series of losses for the Manchester labs, and with the failure of the Orion management grew tired of the entire computer market. The division was sold to International Computers and Tabulators (ICT), who selected the Canadian Ferranti-Packard 6000 as their mid-range offering, ending further sales of the Orion 2. During the 1950s transistors were expensive and relatively fragile devices. Although they had advantages for computer designers, namely lower power requirements and their smaller physical packaging, vacuum tubes remained the primary logic device until the early 1960s. There was no lack of experimentation with other solid state switching devices, however. One such system was the magnetic amplifier. Similar to magnetic core memory, or "cores", magnetic amplifiers used small toroids of ferrite as a switching element. When current passed through the core a magnetic field would be induced that would reach a maximum value based on the saturation point of the material being used. This field induced a current in a separate read circuit, creating an amplified output of known current. 
Unlike digital logic based on tubes or transistors, which uses defined voltages to represent values, magnetic amplifiers based their logic values on current flows. One advantage of magnetic amplifiers is that they are open in the center and several input lines can be threaded through them. This makes it easy to implement chains of "OR" logic by threading a single core with all the inputs that need to be ORed together. It was also exploited in the "best two out of three" circuits widely used in binary adders, which could reduce the component count of the ALU considerably. This was known as "Ballot Box Logic" due to the way the inputs "voted" on the output. Another way to use this feature was to use the same cores for different duties during different periods of the machine cycle, say to load memory during one portion and then as part of an adder in another. Each of the cores could be used for as many duties as there was room for wiring through the center.

In the late 1950s new techniques were introduced in transistor manufacture that led to a rapid fall in prices while reliability shot up. By the early 1960s most magnetic amplifier efforts were abandoned. Few machines using the circuits reached the market, the best known examples being the mostly-magnetic UNIVAC Solid State (1959) and the mostly transistorized English Electric KDF9 (1963).

The Ferranti Computer Department in West Gorton, Manchester had originally been set up as an industrial partner of Manchester University's pioneering computer research lab, commercializing their Manchester Mark 1 and several follow-on designs. During the 1950s, under the direction of Brian Pollard, the Gorton labs also researched magnetic amplifiers. Like most teams, they decided to abandon them when transistors improved.
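The "ballot box" majority voting described above is exactly the carry function of a binary full adder: the carry-out is 1 whenever at least two of the three inputs (the two addend bits and the carry-in) are 1. A small sketch of that logic, purely illustrative rather than Ferranti code:

```python
def majority(a, b, c):
    """'Best two out of three': output 1 when at least two inputs are 1.
    In a magnetic-amplifier adder this was one core threaded by three wires."""
    return 1 if a + b + c >= 2 else 0

def full_adder(a, b, carry_in):
    """One-bit full adder: XOR for the sum bit, majority vote for the carry."""
    s = a ^ b ^ carry_in
    carry_out = majority(a, b, carry_in)
    return s, carry_out

# Exhaustive check of the truth table against plain integer addition.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, cout = full_adder(a, b, c)
            assert 2 * cout + s == a + b + c
print("full adder verified")
```

Implementing the carry as a single voting element, rather than separate AND/OR gates, is what let these circuits cut the ALU's component count.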
One member of the lab, Ken Johnson, proposed a new type of transistor-based logic that followed the same conventions as the magnetic amplifiers, namely that binary logic was based on known currents instead of voltages. Like the magnetic amplifiers, Johnson's "Neuron" design could be used to control several different inputs. Better yet, the system often required only one transistor per logic element, whereas conventional voltage-based logic would require two or more. Although transistors were falling in price they were still expensive, so a Neuron-based machine might offer similar performance at a much lower price than a machine based on traditional transistor logic.

The team decided to test the Neuron design by building a small machine known as "Newt", short for "Neuron test". This machine was so successful that the lab decided to expand the testbed into a complete computer. The result was the Sirius, which was announced on 19 May 1959 with claims that it was the smallest and most economically priced computer in the European market. Several sales followed.

With the success of Sirius, the team turned its attention to a much larger design. Since many of the costs of a complete computer system are fixed - power supplies, printers, etc. - a more complex computer with more internal circuitry would have more of its cost associated with the circuits themselves. For this reason, a larger machine made of Neurons would have an increased price advantage over transistorized offerings. Pollard decided that such a machine would be a strong counterpart to the high-end Atlas, and would form the basis for Ferranti's sales for the next five years. Looking for a launch customer, Ferranti signed up Prudential Assurance with the promise to deliver the machine in 1960. However, these plans quickly went awry. The Neuron proved difficult to adapt to the larger physical scale of the Orion.
Keeping the current levels steady over the longer wire runs was extremely difficult, and efforts to cure the problems resulted in lengthy delays. The first Orion was eventually delivered, but was over a year late, and unit cost was higher than expected, limiting its sales. Between 1962 and 1964 the Computing Division lost $7.5 million, largely as a result of the Orion.

During the Orion's gestation it appeared there was a real possibility the new system might not work at all. Engineers at other Ferranti departments, notably the former Lily Hill House in Bracknell, started raising increasingly vocal concerns about the effort. Several members from Bracknell approached Gordon Scarrott and tried to convince him that Orion should be developed using a conventional all-transistor design. They recommended using the "Griblons" circuits developed by Maurice Gribble at Ferranti's Wythenshawe plant, which they had used to successfully implement their Argus computer for the Bristol Bloodhound missile system. When these efforts failed, they turned to Pollard to overrule Scarrott, which led to a series of increasingly acrimonious exchanges. After their last attempt on 5 November 1958, they decided to go directly to Sebastian de Ferranti, but this effort also failed. Pollard resigned about a month later and his position was taken over by Peter Hall.

Braunholtz later expressed his frustration that they didn't write to him directly, and the matter sat for several years while Orion continued to run into delays. In September 1961 Prudential was threatening to cancel their order, and by chance, Braunholtz at that moment sent a telegram to Hall expressing his continuing concerns. Hall immediately invited Braunholtz to talk about his ideas, and several days later the Bracknell team was working full out on what would become the Orion 2. By the end of October the basic design was complete, and the team started looking for a transistor logic design to use for implementation.
Although Braunholtz had suggested using the Griblons, the Bracknell group also invited a team of engineers from Ferranti Canada to discuss their recent successes with their "Gemini" design, which was used in their ReserVec system. On November 2 the Bracknell team decided to adopt the Gemini circuitry for Orion 2. Parts arrived from many Ferranti divisions over the next year, and the machine was officially switched on by Peter Hunt on 7 January 1963. The first Orion 2 was delivered to Prudential on 1 December 1964, running at about five times the speed of the Orion 1. Prudential bought a second machine for the processing of industrial branch policies. Another system was sold to the South African Mutual Life Assurance Society in Cape Town where it was used for updating insurance policies. A fourth was sold to Beecham Group to upgrade its Orion 1 system. The original prototype was kept by ICT and used for software development by the Nebula Compiler team. By this point, however, Ferranti was already well on the road to selling all of its business computing divisions to ICT. As part of their due diligence process, ICT studied both the Orion 2 and the FP-6000. Ferranti's own engineers concluded that "There are certain facets of the system we do not like. However, were we to begin designing now a machine in the same price/performance range as the FP6000, we would have in some 18 months' time a system that would not be significantly better -if indeed any better- than the FP6000." ICT chose to move forward with the FP-6000 with minor modifications, and used it as the basis for their ICT 1900 series through the 1960s. Existing contracts for the Orion 2 were filled, and sales ended. Although the Orion and Orion 2 differed significantly in their internals, their programming interface and external peripherals were almost identical. The basic Orion machine included 4,096 48-bit words of slow, 12µs, core memory, which could be expanded to 16,384 words. 
Each word could be organized as eight 6-bit characters, a single 48-bit binary number, or a single floating-point number with a 40-bit fraction and an 8-bit exponent. The system included built-in capabilities for working with Pound sterling before decimalization. The core memory was backed by one or two magnetic drums with 16k words each. Various offline input/output included magnetic disks, tape drives, punched cards, punched tape and printers. Most of the Orion's instruction set used a three-address form, with sixty-four 48-bit accumulators. Each program had its own private accumulator set which were the first 64 registers of its address space, which was a reserved contiguous subset of the physical store, defined by the contents of a "datum" relocation register. Operand addresses were relative to the datum, and could be modified by one of the accumulators for indexing arrays and similar tasks. A basic three-address instruction took a minimum of 64 µs, a two-address one 48 µs, and any index modifications on the addresses added 16 µs per modified address. Multiplication took from 156 to 172 µs, and division anywhere from 564 to 1,112 µs, although the average time was 574 µs. The Orion 2, having a core store with a much shorter cycle time, was considerably faster. A key feature of the Orion system was its built-in support for time-sharing. This was supported by a series of input/output (I/O) interrupts, or what they referred to as "lockouts". The system automatically switched programs during the time spent waiting for the end of an I/O operation. The Orion also supported protected memory in the form of pre-arranged "reservations". Starting and stopping programs, as well as selecting new ones to run when one completed, was the duty of the "Organisation Program." 
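The word layout described above, eight 6-bit characters in one 48-bit word, can be illustrated with a bit-packing sketch. This is illustrative modern code, not Orion software; the function names are my own:

```python
def pack_word(chars):
    """Pack eight 6-bit character codes (0-63) into one 48-bit word."""
    assert len(chars) == 8 and all(0 <= c < 64 for c in chars)
    word = 0
    for c in chars:
        word = (word << 6) | c   # each character occupies the next 6 bits
    return word

def unpack_word(word):
    """Recover the eight 6-bit characters from a 48-bit word."""
    assert 0 <= word < 1 << 48
    return [(word >> shift) & 0o77 for shift in range(42, -1, -6)]

codes = [1, 2, 3, 4, 5, 6, 7, 8]
w = pack_word(codes)
assert w < 1 << 48            # fits in one 48-bit Orion word
assert unpack_word(w) == codes
```

The same 48 bits could instead be read as a single binary integer or as a floating-point value with a 40-bit fraction and 8-bit exponent, which is simply a different interpretation of the identical bit pattern.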
The Orion was one of the earliest machines to directly support time-sharing in hardware, an area of intense industry interest; other time-sharing systems of the same era include PLATO in early 1961, CTSS later that year, and the English Electric KDF9 and FP-6000 of 1964. The Orion is also notable for the use of its own high-level business language, NEBULA. NEBULA was created because of Ferranti's perception that the COBOL standard of 1960 was not sufficiently powerful for their machines, notably as COBOL was developed in the context of decimal, character-oriented batch processing, while Orion was a binary word-oriented multiprogramming system. NEBULA adapted many of COBOL's basic concepts, adding new ones of its own. NEBULA was later ported to the Atlas as well.

- George Gray, "The UNIVAC Solid State Computer", Unisys History Newsletter, Volume 1 Number 2 (December 1992, Revised 1999)
- Bill Findlay, "The Hardware of the KDF9", 2009
- Barbara Ainsworth, "The Ferranti Sirius at Monash University", Computer Resurrection, Number 44 (Autumn 2008)
- Ball & Vardalas, pg. 254
- See "SOME KEY DATES", Group
- Maurice Gribble claims that his design was used for the basis of both the Orion 2 and the FP-6000. However, Ball and Vardalas state that the design was Canadian and quote one of the engineers to that end. (pg. 254)
- Campbell-Kelly, pg. 222
- System, pg. 8
- System, pg. 18
- System, pg. 16
- A. Rousell, "A progress report of NEBULA", The Computer Journal, Volume 5 Number 3 (1962), pg.
162-163 - (System), "Ferranti Orion Computer System", Ferranti, November 1960 - Norman Ball and John Vardalas, "Ferranti-Packard: Pioneers in Canadian Electrical Manufacturing", McGill-Queen's Press, 1994 ISBN 0-7735-0983-6 - Gordon Scarrott, "From Torsional Mode Delay Lines to DAP", Computer Resurrection, Number 12 (Summer 1995) - Peter Hall, "A Management Perspective on Orion", Computer Resurrection, Number 33 (Spring 2004) - Maurice Gribble, "The Argus Computer and Process Control", Computer Resurrection, Number 20 (Summer 1998) - (Group), "Ferranti Orion 2 Contact Group: Report of Meeting at Storrington", 6 July 2004 - John Vardalas, "From DATAR To The FP-6000 Computer: Technological Change In A Canadian Industrial Context", IEEE Annals of the History of Computing, Volume 16 Number 2 (1994) - Martin Campbell-Kelly, "ICL: A Business and Technical History", Clarendon Press, 1989 - "Orion Programmers' Reference Manual", Ferranti, 1961 - "The Ferranti ORION Computer System", contains numerous details and material on the Orion series - Henry Goodman, "The Simulation of the Orion Time Sharing System on the Sirius", The Computer Bulletin, Volume 5 Number 2 (September 1961)
Have you ever wondered why the temperature of our earth has become hotter recently? You may have seen news on television warning that we must be aware of global warming, and people say that the floods in their country are caused by it. But what actually is global warming?

Global warming is the increase of Earth's average surface temperature due to heat being trapped in the atmosphere instead of escaping into space. It is linked to the effect of greenhouse gases, such as carbon dioxide emissions from burning fossil fuels and from deforestation. In fact, the average global temperature has been increasing at its fastest recorded rate. This happens because carbon dioxide and other air pollutants collect in the atmosphere and absorb the sunlight that has bounced off the earth's surface. These gases trap the heat and cause the earth to get hotter. So what are the causes of global warming? Read the following sections to learn more.

One of the causes of global warming in the ocean is the presence of greenhouse gases. What are greenhouse gases? Although oxygen and nitrogen are the main constituents of the atmosphere, they are not greenhouse gases; the warming is driven by trace gases that absorb heat radiation. The main greenhouse gases in the atmosphere are:

- Water vapor: the most abundant greenhouse gas, whose concentration increases as the earth's atmosphere warms.
- Carbon dioxide (CO2): human activities have increased the CO2 concentration in the atmosphere, contributing to global warming.
- Methane: a hydrocarbon gas produced by natural sources and human activities, including the decomposition of wastes in landfills and agricultural practices.
- Chlorofluorocarbons (CFCs): gases that also contribute to the destruction of the ozone layer.
- Nitrous oxide: a greenhouse gas produced by soil cultivation practices.
However, the main contributor to global warming in the ocean is the increase in atmospheric levels of carbon dioxide (CO2), which drives the increase in temperature. Moreover, CO2 can remain in the atmosphere for hundreds of years. The higher the level of carbon dioxide in the atmosphere, the higher the temperature will be.

What, then, increases the carbon dioxide level in the atmosphere? Human beings play the main role. In fact, nearly 22 billion tons of carbon dioxide enter the atmosphere each year as a result of human activity. Several factors lie behind this increase. Fossil fuels, for instance, contribute a great deal to the rising temperature of the earth. We use them every day to power transportation, for heating, and even for the manufacture of cement. Deforestation also plays a big role. Deforestation, the conversion of forested area to non-forest land, disrupts habitats and biodiversity, and with fewer trees there is less capacity to absorb carbon dioxide from the air. Deforestation has been growing rapidly in some tropical countries.

Because of the rising level of the earth's temperature, there are several big impacts that can affect human life. Read on to learn about them.

1. Changes in Climate

Higher temperatures are driven by the higher level of carbon dioxide in the air. A warming atmosphere can also hold more moisture: the level of water vapor increases by about 7% for every degree Celsius of warming. This has a big impact on the changing climate.
For instance, rainfall patterns have already shifted, leading to droughts, fires, and flooding in some areas. And global warming's impact is not limited to rainfall: there is evidence that some regions are likely to become drier than before, which could lead to longer dry spells. Either too much or too little water is a problem, and people living in different places will suffer from different events. That is why we need to find ways to reduce the effects of global warming now.

2. Rising Sea Levels

Climate change linked to global warming is causing sea levels to rise. Two factors associated with global warming are responsible for the increase in the volume of water in the world's oceans: thermal expansion, in which seawater expands as it warms, and the melting of ice sheets and glaciers on land, which adds water to the sea. Higher sea temperatures also melt the massive ice shelves that extend out from Antarctica. This process is expected to continue for centuries, very likely at a faster rate and on a larger scale. So the question arises: if sea levels rise, where will all the people go? Low-lying cities are certainly in danger. Rising seas threaten human populations on the coast and natural environments such as marine ecosystems, bringing destructive erosion, wetland flooding, and loss of habitat for fish, birds, and plants.

3. More Extreme Events

Climate change and rising sea levels together can lead to "wilder weather": extreme events such as floods and hurricanes will become more frequent.
Hurricanes become more likely because they draw their energy from warm ocean water, and changing rainfall patterns contribute to heavier rain and stronger storms. Global warming can also trigger other extreme events, such as crop failures, that harm human livelihoods.

4. Spread of Disease

Melting ice caps, floods, and climate change can all damage human health. Flooding, for instance, can unleash deadly diseases, including chronic diarrhea. As the climate changes, dangerous viruses and bacteria may spread with the worsening weather. Cholera is one deadly disease whose outbreaks increase with climate change; it spreads through water contaminated by poor sanitation in developing countries and kills more than a hundred thousand people globally each year. Extreme weather patterns can also expand the habitat of mosquitoes that carry illnesses from malaria to Zika. Higher levels of air pollution are likely to bring more allergies, asthma, and infectious outbreaks. We should therefore work on prevention in order to secure a better life.

5. Animal Populations Are in Danger

As described above, climate change caused by global warming can lead to the widespread loss of animal populations. One example is habitat loss near the poles, which endangers polar bears by shortening their feeding season. Some animals are also being forced to change their behavior: birds are migrating earlier because spring arrives sooner than before.

6. Bleaching of Coral Reefs

Warming seas damage coral reefs, as does the increased formation of carbonic acid.
Rising water temperatures stress corals, which are highly sensitive to changes in water temperature. When the water becomes warmer than usual, corals expel their zooxanthellae and turn white. The increased formation of carbonic acid, known as ocean acidification, also harms organisms around coral reefs: snails and clams struggle to absorb the calcium carbonate they need to build their shells. See also: Acidification of The Ocean

7. Loss of Plankton

Rising water temperatures in the world's oceans could shut down oxygen production by phytoplankton, which would likely cause mass mortality among ocean organisms and harm humans as well. Plankton production is closely linked to fish populations: phytoplankton form the building blocks of marine life, so as warming oceans reduce phytoplankton biomass, fish biomass declines globally. A large loss of plankton would also affect the ecosystems of sea lions, sea otters, sea urchins, and other marine animals. Read also: Marine Energy

How to Deal with Global Warming

As you can read above, we are facing the negative effects of global warming, which occur globally and harm human life. We therefore have to act. Here are a few solutions that can slow the progress of global warming.

- Reducing Carbon Emissions

We burn fossil fuels to generate electricity, but that burning drives global warming by adding carbon to the atmosphere, with serious consequences for human life.
We therefore have to replace fossil fuels with alternative energy sources that are abundant and effective. One option sometimes proposed is cleaner coal technology, though burning coal still needs further testing by government and industry. Another is to use wind power alongside high-efficiency natural gas generation. Reducing carbon emissions is a valuable way to limit the effects of global warming; you can also cut emissions by taking public transportation rather than driving. Clean, renewable energy is widely believed to be key to dealing with global warming. Since the goal is mainly to reduce CO2 and to replace fossil-fuel electricity with plant-based and ocean energy, it promises great things for the future.

- Greening Transportation

Another solution is greener transportation. Using efficient, low-carbon fuels lowers carbon emissions and reduces air pollution at the same time.

- Buying Less Stuff

To help reduce the effects of global warming, buy less, and reduce, reuse, and recycle. This lowers emissions, and it saves you money for things that are more beneficial.

- Managing Forests

The next solution is to manage forests. As explained above, deforestation is one cause of global warming and accounts for a large share of the heat trapped in the atmosphere. We can manage the growth of forest lands and agricultural land together.

- Exploring Nuclear

Nuclear power offers one way to reduce global warming, but its use depends on many factors.
Many issues need to be researched in depth before nuclear power is adopted as an energy substitute: its security, its safety, the possible threats it poses, and its cost.

- Spreading the Word and Taking Action

If everyone works together to find and carry out solutions, we will have taken the next step toward preventing climate change and global warming. Tell people to work together to combat global warming, but remember that words are not enough: action must follow, done the right way.

The consumption of fossil fuels and the growth of transportation over the last few decades have degraded the environment. They contribute to global warming, climate change, depletion of the ozone layer, rising air pollution, and even the extinction of wildlife species. The effects of global warming also reach the food chain.

To conclude, global warming in the ocean is a serious problem for every country, and human activity has been shown to contribute to it. Since this is a problem for everyone on Earth, and we all live on the same planet, it is our responsibility to take care of it. The job becomes easier if we work together toward the same goal: making the Earth a safe and healthy place to live. With the solutions provided above, we can help prevent greenhouse gas levels from rising further, reduce the effects of global warming, and enjoy a healthier and happier life. So let's save our Earth from global warming, for a better life and environment.
Because of climate change, Washington is at risk for more intense, severe wildfires. We're committed to reducing the impacts of climate change, and to helping our state's communities prepare for the impacts that cannot be avoided.

Increased risk of wildfires

Climate scientists say an increase in the frequency and intensity of wildfires is likely to become the new normal. Climate models indicate that the Northwest could see up to 1.1 million acres lost per year to wildfires by the 2040s. Factors contributing to the increased risk include:
- Earlier snowmelt
- Rising temperatures
- More frequent, longer heat waves
- Drier summers
- Low soil moisture content
- Spread of the mountain pine beetle and other insects that kill or weaken trees and plants
- More fuels from dead trees and plants

Threats to your health, the economy, and the environment

As the frequency and intensity of wildfires increase, so do their impacts. Wildfires can:
- Cause unhealthy air quality
- Create respiratory problems for some people
- Contribute to premature death
- Threaten homes, property, and agriculture
- Destroy forestland and resources
- Damage wildlife habitat

We are monitoring air quality

We track air quality around the state every day using a network of monitors to measure air pollutants, such as particle pollution from wildfires. During wildfires we often add more monitors. Visit our air quality monitoring page to view air quality levels in your community.

What you can do

More than 80 percent of wildfires in the U.S. are caused by people.
Follow these guidelines to help prevent wildfires and protect your health and property:
- NEVER throw cigarettes out your window
- Don't park hot vehicles on the grass
- Make sure trailer chains don't drag on the ground causing sparks
- Clear the perimeter of your house of pine needles and yard waste
- Watch the news, or follow social media, during wildfire season
- Adhere to burn bans
- Report illegal burning
- Extinguish campfires completely
- Have an evacuation plan in place
- Visit Ready, Set, GO! for more emergency preparedness tips.

Ultimately, the key to slowing climate change is to reduce greenhouse gas emissions, especially from the largest contributor, transportation, by adopting methods to reduce your carbon footprint.
Respiration in man at high altitudes
Lee, Robert Cleveland

Respiration is the process of the exchange of gases between the body tissues and the environment. This is divided into external and internal respiration. External respiration involves the passage of air through the respiratory passages and into the alveoli of the lungs, together with its diffusion through the lung walls and into the blood. Internal respiration here includes the chemical and physical transport of the gases by the blood, the circulation of the blood, and the exchange of gases between the blood and the tissues. Respiration is affected by the low pressures encountered on high mountains and during flight. The regulation of respiration is controlled by the nervous system, regulated by impulses from the respiratory centers. The respiratory centers respond to changes in the tension of carbon dioxide and oxygen of the arterial blood. Blood tension affects respiration both by direct response of the center itself and by reflex impulses from the chemoreceptors of the carotid body and the aortic body. Respiration is also altered by the Hering-Breuer reflexes, impulses caused by pressure on the pressoreceptors in the carotid sinus and in the aortic arch, and by stretch receptors in the lungs. The gas tensions of the arterial blood are the chief regulators of respiration. An increase in the tension of carbon dioxide in the arterial blood is a stimulus to respiration. Appreciable lowering of the tension of oxygen in the arterial blood increases the excitability of the respiratory center. Blood gas tensions are altered by the level of the metabolism, the dead space in the respiratory system, the oxygen utilization, and changes in carbon dioxide and oxygen pressures in the air breathed. The internal respiration utilizes diffusion of gases to effect the transfer of gases into the blood and out, aided by an enzyme carbonic anhydrase.
The gases are carried in the blood, largely in chemical combination. Carbon dioxide is carried as a bicarbonate in the blood plasma. Hemoglobin in the red cells carries the oxygen and much of the labile carbon dioxide. The quantity of gases carried depends upon the number of red cells and the amount of hemoglobin, the degree to which oxygen is utilized, and the rate of circulation. These factors are under nervous control, regulated in part by the factors altering the external respiration. Respiratory difficulties at high altitudes are due to the low barometric pressures encountered, which reduce the amount of oxygen available to the body. The constant water-vapor pressure in the lungs is also a factor limiting the amount of oxygen which can be supplied at high altitude. Respiration at high altitude is considered in two parts; first, respiration on high mountains, where altitude is maintained for days, weeks, or longer; and second, respiration during flight and conditions simulating flight, in which the altitude is maintained for a period of minutes or hours. Ascending high mountains causes the percentages of carbon dioxide and oxygen in the expired and in the alveolar air to change. The carbon dioxide percentage increases and the oxygen percentage decreases progressively with increase in altitude. These changes are more pronounced during muscular exertion. The tensions of carbon dioxide and of oxygen in both the alveolar air and the arterial blood are lowered, with increases in altitude, from 40 mm. and 102 mm., respectively, at sea level, to 21 mm. and 37 mm. at a 20,000-foot elevation. The respiration rate is not altered consistently, but the minute volume of respiration is increased with increase in altitude. During muscular work at high altitude the minute volume is higher than during the same amount of work at sea level. 
The minute volume increase is the result of an increase in the excitability of the respiratory centers due to lowered oxygen tension, sufficient to more than counteract the smaller stimulus incurred by a decreased carbon dioxide tension. Internal respiration is affected by high altitude. The red cell count and the hemoglobin content are increased, at first by temporary means; but later they are more permanently augmented. The hemoglobin content continues to increase for a number of weeks. Immediate increase in the rate of circulation aids in supplying the oxygen required until blood changes compensate for it. The oxygen requirement for rest is essentially the same as that at sea level, but a given amount of work requires more oxygen than is required by the same amount of work at sea level. Residents and natives at high altitude are acclimatized to altitude and perform muscular tasks with less respiratory effort than sea level residents at high altitude. The best explanation of adaptation appears to be an increase in the surface area of hemoglobin present in the blood of natives. Man, after becoming acclimatized, has climbed to 28,000 feet on Mount Everest, but respiration is labored and the amount of exertion is limited. Oxygen has been used in some mountain climbing but with no appreciable benefit. This is probably due to inadequate oxygen supply and to technical difficulties. When air is breathed during flight and in simulated flight in low pressure chambers, the same trends noted on high mountains are found. The alveolar partial pressures are 30 mm. for carbon dioxide and 35 mm. for oxygen at an altitude of 20,000 feet. The more permanent acclimatization factors are not found during flight because of the short duration, and even the immediate adjustments do not permit altitudes attainable on mountains to be reached without serious difficulties. 
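The pressure figures quoted above can be roughly reproduced with a simple model. The sketch below is illustrative, not from the thesis: it assumes an isothermal barometric formula with a scale height of about 27,000 ft, a 21% oxygen fraction, and the standard 47 mmHg water-vapor pressure in the airways (the thesis notes that the water-vapor pressure is constant but does not give the number):

```python
import math

# Rough model: inspired O2 partial pressure falls with altitude because
# total pressure falls, while water-vapor pressure stays fixed at ~47 mmHg.

def barometric_pressure_mmhg(altitude_ft: float) -> float:
    """Approximate total pressure from an isothermal-atmosphere model."""
    return 760.0 * math.exp(-altitude_ft / 27000.0)

def inspired_po2_mmhg(altitude_ft: float) -> float:
    """O2 partial pressure of inspired (fully humidified) air."""
    return 0.21 * (barometric_pressure_mmhg(altitude_ft) - 47.0)

for alt in (0, 10000, 20000, 30000):
    print(f"{alt:>6} ft: inspired pO2 ~ {inspired_po2_mmhg(alt):5.1f} mmHg")
```

At sea level this gives roughly 150 mmHg of inspired oxygen, and the value falls steeply with altitude, which is why the alveolar oxygen tensions reported above drop so sharply by 20,000 feet.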
The maximum altitude that can be attained when air is breathed is dependent on the rate of ascent, the length of time the altitude is maintained, and individual factors. The highest safe altitude for long exposures is probably about 15,000 feet, and for short exposures 25,000 feet is about the limit. By using oxygen sufficient to maintain the partial pressure in the inspired air at 160 mm. up to 27,000 feet, or by using pure oxygen above 32,000 feet, the partial pressure of carbon dioxide in the alveolar air can be maintained at 36 to 40 mm. Under these conditions the volume of respiration is not increased above normal for sea level and internal respiration is probably normal. The altitude limit for flights using oxygen is probably below 50,000 feet, the present record being 47,358 feet. The highest recommended height is 40,000 feet. The breathing of pure oxygen or oxygen-rich air may be accomplished by the use of masks. Masks should be employed at 10,000 feet for long flights and at 15,000 feet for all flights. For flights above 30,000 feet it is desirable to breathe pure oxygen from the ground up. By using pressure enclosures, maintaining either the atmospheric pressure at sea level or the oxygen pressure at sea level, respiration is not restricted by altitude. The highest altitude to which man has ascended was attained in a balloon with a pressurized gondola, in which the oxygen pressure was maintained above the sea level value of 160 mm. The altitude of 72,395 feet was reached in this flight without respiratory difficulty. The data presented prove that respiratory difficulties at high altitude are caused by insufficient oxygen, and that by maintaining an adequate oxygen pressure in the inspired air, normal respiration is possible at high altitudes.

This item was digitized by the Internet Archive.
Thesis (M.A.)--Boston University
Rights: Based on investigation of the BU Libraries' staff, this work is free of known copyright restrictions.
A guest post from one of our team mates

Children are both the makers and the markers of healthy, sustainable societies. According to UNICEF, children represent one third of the world's population; in today's numbers, that is approximately 2.2 billion children, and they are a significant factor in the future development of society on a planetary scale. Sustainability is the ability to exist constantly. Sustainability has many definitions, but where humanity is concerned it comes down to one thing: the ability to thrive and advance as a species. We humans lean towards technology; we see technology as an answer and a solution to sustainability. Therefore, we should motivate and educate the next generation to follow that path. There are many possibilities and ways to use technology to achieve and maintain a sustainable society, but we will consider only one of them. Space exploration is one possible tool that might allow humanity to achieve and maintain a sustainable society. We are very much dependent on resources that might, and eventually will, be exhausted on our beloved planet. The only other place to find resources is the vastness of space. We need engineers in order to explore and promote space exploration, so we should encourage our children to follow that path. Motivation and encouragement are not always easy when it comes to young minds. Children learn and explore through play. Their curiosity is limitless, pure and naïve. There are many tools that can help us adults guide the young engineers, explorers and innovators. Most of them are simple, free and at hand. Read to them, talk to them and tell them about the planets, moons and stars. Make paper rockets, tell them a story about Titan, Saturn's moon, and guide them on a journey that will feed their imagination. Hands-on learning is another way. Children love to create and use their hands.
Young and gifted makers and engineers can play with different building blocks and create spaceships and orbital stations. Only by building can they gain the confidence to continue exploring. As adults, we need to set an example: teach them to never give up, to learn every day, and to keep looking to the future. So, to summarize, how do we get more young space explorers? By letting them explore the universe using their imagination.

Bojan Seirovski, MEng.
Sleeping on a bed made of plastic bottles might sound a bit strange, but this latest development in sustainable mattresses represents one way that we can tackle the growing problem of plastic pollution. Around the world, an astonishing one million plastic bottles are bought every minute. That number is predicted to jump a further 20 per cent by 2021; by that year, annual consumption of plastic bottles is set to top half a trillion. Already, the majority of these end up in landfill, burnt or leaking into oceans.

Plastic and the Oceans

Between 5m and 13m tonnes of plastic makes its way into the world's oceans each year, much of it from soft drink or water bottles. According to research by the Ellen MacArthur Foundation, by the middle of the century our oceans will contain more plastic by weight than fish. In the North Pacific, a gyre (a large system of circulating ocean currents) and plastic debris have combined to create what is now known as The Great Pacific Garbage Patch (GPGP). It is estimated to be twice the size of France, and it's predicted to double in size over the course of the next decade if current levels of plastic use (and recycling) continue. The GPGP is often referred to as an 'island', but in reality it's more like a large plastic soup made of tiny fragments of plastic. It has been estimated that it would take around 70 ships one year to clean up less than 1 per cent of the GPGP.

The Consequences of Plastic Pollution

For sea-based wildlife such as fish, dolphins, seabirds and seals, plastic can be deadly. Wildlife can become entangled in it or mistake it for food. But it's not just animals that are affected. Recently, scientists at Ghent University in Belgium revealed that people who regularly eat seafood take in up to 11,000 tiny pieces of plastic every year. The frustrating thing about plastic bottles is that most are made from polyethylene terephthalate, which is highly recyclable. So, waste shouldn't be so much of an issue.
But as their use grows across the world, attempts to collect and recycle the bottles are failing to keep up. Fewer than half of the bottles bought worldwide in 2016 were collected for recycling, and just seven per cent of those collected were turned into new bottles. To tackle this problem, some manufacturers are thinking outside the box, finding new ways to utilise old plastic. One such example is a mattress you could sleep on. As part of a drive towards greater sustainability in the industry, a number of manufacturers are creating mattresses made from recycled plastic bottles. The bottles are crushed and spun into a fine, soft fibre, which is then used to form a breathable layer that makes sleeping on it a cool and comfortable experience.
Just as people often take calculated risks when facing significant life changes, researchers have found that pea plants also make choices about how and when to grow based on an assessment of risk. The findings were published on Thursday in the Cell Press journal Current Biology. The team analyzed the decisions the pea plants made when presented with environments offering different nutrient levels. The plants showed a previously unknown ability to take calculated risks to secure the maximum amount of nutrients, as reported by the Christian Science Monitor. "To our knowledge, this is the first demonstration of an adaptive response to risk in an organism without a nervous system," commented Alex Kacelnik of Oxford University in the United Kingdom. For the study, the team grew pea plants with their roots split between two pots, each containing a different amount of nutrients. The plants then faced a decision about how much effort to allocate to growing in each pot, according to a press release on EurekAlert. In the first experiment, the researchers found that the plants grew more roots in the pot with more nutrients, just as animals often spend more time foraging in food-rich areas. After that, the team split the roots of each plant between two pots that offered the same nutrient level on average, but one pot offered a constant level of nutrients while in the other the level varied over time. The question was whether the plants would choose to grow more roots in one pot or the other. Based on theoretical analyses of how decision makers, such as humans or animals, would respond in a similar scenario, the team predicted that the plants might prefer the more variable pot, accepting more risk, when the average nutrient level was low. The decision-making process was compared with a similar scenario in humans, but with money in place of nutrients.
The choice the plants had to make was equivalent to a person choosing between taking $800 and tossing a coin for a 50 percent chance of winning $1,000 and a 50 percent chance of earning nothing. Most people, according to the team, would realize that the first option pays more on average and would prefer it if there were no other constraints, Kacelnik stated in the press release. When the average nutrient level was high, the researchers predicted that the plants would choose just the opposite, because there would be no reason to accept the risk associated with a more unpredictable environment.
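The energy-budget logic behind this prediction can be sketched in a few lines. This is an illustrative model of risk-sensitivity theory, not code from the study; the function name and the example numbers are ours:

```python
# Risk-sensitivity sketch: a forager (or plant) that must reach a
# survival threshold should gamble only when the safe option falls short.

def prefer_variable(safe_mean: float, variable_outcomes: tuple, threshold: float) -> bool:
    """Return True if choosing the variable (risky) option is rational."""
    if safe_mean >= threshold:
        return False  # the sure thing already meets the need: be risk-averse
    # the sure thing cannot meet the need: gamble if it offers any chance
    return max(variable_outcomes) >= threshold

# Poor environment: safe pot yields 4 units but 8 are needed -> gamble.
print(prefer_variable(4, (0, 10), 8))
# Rich environment: safe pot yields 12 and only 8 are needed -> play it safe.
print(prefer_variable(12, (0, 10), 8))
```

The same rule reproduces the money analogy: a guaranteed $800 beats the coin toss on average, so a decision maker with no pressing shortfall should take the sure thing.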
Humpback Whale Migration

Humpback whales migrate from Antarctica to the sub-tropical coastal waters of western and eastern Australia and Fiji to give birth and mate during winter and spring. Each year at least 1200 humpbacks migrate 5000km to the eastern coast of Australia. Groups of whales, or 'pods', start to arrive at the southern Great Barrier Reef in mid-June, and in the following weeks they move further along the Great Barrier Reef, concentrating in the southern Whitsundays area. On the southern migration back to Antarctic waters, a large proportion of the whales stop over for a few days in Hervey Bay. Most humpbacks will have left the Queensland coast by the beginning of November. The humpback whales that visit Australia's coastal waters spend their summer months feeding in the Antarctic. With the onset of the southern hemisphere winter, the humpbacks migrate an average of 2,500km from polar waters to their tropical breeding grounds, undertaking some of the longest migrations in the animal kingdom. Although similar migrations of the same species occur in the northern and southern hemispheres, the two populations never interbreed, even where they use the same equatorial breeding waters, because the northward and southward convergences on tropical waters occur six months apart. Whereas most migrating whales avoid land masses, the humpbacks follow the coastline reasonably close to shore, which makes them ideal subjects for whale watching. In winter, Antarctic food becomes scarce and the waters become far too cold for these warm-blooded animals. The cold would kill any new-born calves, as they are born without the insulating layer of blubber. To ensure the survival of the calves, the mothers produce 600 litres of extremely rich milk per day, on which the calves quickly build up their own layer of blubber. The whales do not depart en masse but flow in and out of their breeding waters over a five-month period.
As they travel they do not actively feed, except perhaps on occasional opportunities. Young calves making their first migratory journey back to Antarctica are particularly vulnerable to natural and unnatural forces. They are the most likely to fall prey to sharks and killer whales, and the most likely to die of exhaustion. The calves manage to conserve their energy by riding the slipstream of their mother's wake. A young calf positions itself just behind its mother's widest diameter, just below and beside her dorsal fin. The water flowing between their bodies increases in velocity and decreases in pressure in that area, enabling the young calf to keep pace with an adult.

All photographs shown on this page are copyright © 1997 Seaspray Charters - used with permission.
How do children read music?

Written music is a language that has been developing for many years, and even the notation we read today has been around for over 300 years. Music notation is the representation of sound, from basic symbols for pitch, duration and timing to more advanced markings for expression and even special effects. Reading music can be tough for children, and they sometimes find it frustrating. So there are some things that children need to know before reading music:
- Get a handle on the musical staff: Before children are ready to start learning music, they must get a sense of the basic information that practically everyone who reads music needs to know. The straight lines on a piece of music make up the staff. This is the most basic of all musical symbols and the foundation for everything that follows. The musical staff is an arrangement of five parallel lines and the spaces between them. Both lines and spaces are numbered for reference purposes and are always counted from the lowest to the highest on the staff.
- Start with the treble clef: One of the first things that children will encounter when reading music is the clef, a sign that looks like a big, fancy symbol at the left end of the staff. It is the legend that tells them approximately what range their instrument will play in. Instruments and voices in the higher ranges use the treble clef, which makes it the natural starting point for an introduction to reading music.
- Learn the parts of a note: Children need to learn basic notes to read music. Individual note symbols are a combination of up to three simple elements: the note head, the stem, and flags.
- Note head: This is an elliptical shape that is either open (white) or closed (black). It tells the performer what note to play on their instrument.
- The stem: This is the thin vertical line that is attached to the note head.
When the stem is pointing up, it joins on the right side of the note head; when the stem is pointing down, it joins the note head on the left. The direction has no effect on the note itself, but it makes notation easier to read and less cluttered for children. The universal rule on stem direction is that when the note head is at or above the center line of the musical staff, the stem points down; when the note is below the middle of the staff, the stem points up.
- The flag: This is the curved stroke attached to the end of the stem. No matter whether the stem is joined to the right or left of the note head, the flag is always drawn to the right of the stem, never to the left. Taken together, the note head, stem, and flag show the musician the time value of any given note, measured in beats or fractions of beats. When children listen to music and tap a foot in time, they are recognizing the beat.
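The flag-and-duration rule described above can be sketched in a few lines of code. This is an illustrative sketch; it assumes the common convention that a plain stemmed note (a quarter note) gets one beat, which the text does not state explicitly:

```python
# Each flag on a note's stem halves its duration. Assuming the common
# convention that a flagless stemmed note (a quarter note) gets one beat:

def beats(flags: int) -> float:
    """Duration in beats of a closed-head stemmed note with `flags` flags."""
    return 1.0 / (2 ** flags)

for flags, name in ((0, "quarter"), (1, "eighth"), (2, "sixteenth")):
    print(f"{name} note ({flags} flag(s)): {beats(flags)} beat(s)")
```

So a child can read the flags as a halving rule: every extra flag makes the note twice as fast.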
Dyeing is the process of adding color to textile products like fibers, yarns, and fabrics. Dyeing is normally done in a special solution containing dyes and particular chemical materials. After dyeing, the dye molecules form an unbroken chemical bond with the fiber molecules. Temperature and time control are two key factors in dyeing. There are two main classes of dye: natural and man-made. The primary source of dye, historically, has generally been nature, with the dyes being extracted from animals or plants. Since the mid-19th century, however, humans have produced artificial dyes to achieve a broader range of colors and to render the dyes more stable against washing and general use. Different classes of dyes are used for different types of fiber and at different stages of the textile production process, from loose fibers through yarn and cloth to completed garments. Acrylic fibers are dyed with basic dyes, while nylon and protein fibers such as wool and silk are dyed with acid dyes, and polyester yarn is dyed with disperse dyes. Cotton is dyed with a range of dye types, including vat dyes and modern synthetic reactive and direct dyes. Archaeologists have found evidence of textile dyeing dating back to the Neolithic period. The earliest surviving evidence of textile dyeing was found at the large Neolithic settlement at Çatalhöyük in southern Anatolia, where traces of red dyes, possibly from ochre (an iron oxide pigment derived from clay), were found. In China, dyeing with plants, barks, and insects has been traced back more than 5,000 years. Early evidence of dyeing comes from Sindh province in Pakistan, where a piece of cotton dyed with a vegetable dye was recovered from the archaeological site at Mohenjo-daro (3rd millennium BCE). The dye used in this case was madder, which, along with other dyes such as indigo, was introduced to other regions through trade.
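The fiber-to-dye-class pairings listed above lend themselves to a simple lookup table. The sketch below is a hypothetical Python example (the dictionary and function names are invented for illustration; the pairings themselves are the ones stated in the text):

```python
# Which dye classes suit which fibers, per the pairings described above.
DYE_CLASSES_BY_FIBER = {
    "acrylic": ["basic dyes"],
    "nylon": ["acid dyes"],
    "wool": ["acid dyes"],
    "silk": ["acid dyes"],
    "polyester": ["disperse dyes"],
    "cotton": ["vat dyes", "reactive dyes", "direct dyes"],
}

def dye_classes(fiber: str) -> list:
    """Return the dye classes typically used for a fiber (empty if unlisted)."""
    return DYE_CLASSES_BY_FIBER.get(fiber.strip().lower(), [])

print(dye_classes("Cotton"))  # ['vat dyes', 'reactive dyes', 'direct dyes']
```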
Natural insect dyes such as Tyrian purple and kermes and plant-based dyes such as woad, indigo and madder were important elements of the economies of Asia and Europe until the discovery of man-made synthetic dyes in the mid-19th century. The first synthetic dye was William Perkin's mauveine in 1856, derived from coal tar. Alizarin, the red dye present in madder, was the first natural pigment to be duplicated synthetically, in 1869, a development which led to the collapse of the market for naturally grown madder. The development of new, strongly colored synthetic dyes followed quickly, and by the 1870s commercial dyeing with natural dyestuffs was disappearing. Dyes are applied to textile goods by dyeing from dye solutions and by printing from dye pastes. Methods include direct application and yarn dyeing. The term "direct dye application" stems from some dyestuffs having to be either fermented, as in the case of some natural dyes, or chemically reduced, as in the case of synthetic vat and sulfur dyes, before being applied. This renders the dye soluble so that it can be absorbed by the fiber, since the insoluble dye has very little substantivity to the fiber. Direct dyes, a class of dyes largely for dyeing cotton, are water soluble and can be applied directly to the fiber from an aqueous solution. Most other classes of synthetic dye, other than vat and surface dyes, are also applied in this way. The term may also be applied to dyeing without the use of mordants to fix the dye once it is applied. Mordants were often required to alter the hue and intensity of natural dyes and improve color fastness. Chromium salts were until recently extensively used in dyeing wool with synthetic mordant dyes. These were used for economical, high-color-fastness dark shades such as black and navy. Environmental concerns have now restricted their use, and they have been replaced with reactive and metal complex dyes that do not require a mordant. There are many forms of yarn dyeing.
Common forms are the package form and the hank form. Cotton yarns are mostly dyed in package form, while acrylic or wool yarns are dyed in hank form. In the continuous filament industry, polyester or polyamide yarns are always dyed in package form, while viscose rayon yarns are partly dyed in hank form for technological reasons. The common dyeing process of cotton yarn with reactive dyes in package form is as follows:
- The raw yarn is wound on a spring tube to achieve a package suitable for dye penetration.
- These packages are loaded on a dyeing carrier's spindle, one on top of another.
- The packages are pressed up to a desired height to achieve a suitable density of packing.
- The carrier is loaded on the dyeing machine and the yarn is dyed.
- After dyeing, the packages are unloaded from the carrier into a trolley.
- The trolley is taken to a hydro extractor, where water is removed.
- The packages are hydro-extracted to remove the maximum amount of water, leaving the desired color in the yarn.
- The packages are then dried to achieve the final dyed package.
After this process, the dyed yarn packages are packed and delivered.
Removal of dyes
If things go wrong in the dyeing process, the dyer may be forced to remove the dye already applied by a process called "stripping". This normally means destroying the dye with powerful reducing agents such as sodium hydrosulfite or oxidizing agents such as hydrogen peroxide or sodium hypochlorite. The process often risks damaging the substrate (fiber). Where possible, it is often less risky to dye the material a darker shade, with black often being the easiest or last option.
Best Practices for Evaluating Digital Curricula
Everything you need to know about selecting and implementing the right online math program
More and more school districts are turning to Blended Learning, an education model in which students learn partly through online delivery of lessons and partly through the traditional classroom, to implement the rigorous standards of 21st century learning. As with any educational initiative or strategy, the right Blended Learning model, used in conjunction with a digital curriculum, should deepen student understanding, critical thinking, and independent problem-solving capabilities as outlined in the Common Core and other standards documents. In this white paper, Best Practices for Evaluating Digital Curricula, you’ll discover:
- Free Tools and Resources for a Successful Blended Learning Program. Discover where to find useful product reviews of online math programs, as well as some of the latest digital tools to help implement an engaging and results-oriented blended learning program that adheres to Common Core and other education standards.
- Planning Backward. Discover this effective way to design learning programs, lessons, and schools. Planning with the end in mind means ensuring that learning goals inform when and why students need certain lessons designed in certain ways, as well as whether in-person or self-directed digital experiences would be most appropriate to support their success.
- The SAMR Model. To determine whether a digital curriculum is engaging students in new and effective ways, educators can use the Substitution/Augmentation/Modification/Redefinition (SAMR) model, created by Dr. Ruben Puentedura and described in “Transformation, Technology, and Education.” Learn how to use this model to determine if a technology or application is truly transformative for students.
- Understanding Adaptive Technology.
The word “adaptive” is increasingly being used in claims describing how technologies uniquely personalize and individualize education for each student. Even though “adaptive learning” developers have noble goals, the design of each adaptive platform reveals important pedagogical approaches and assumptions made by the developers. The adaptive platform determines the pedagogy and the ways students engage with learning, and not all adaptive platforms are capable of supporting strong pedagogy and rich learning tasks. The proliferation of digital curricula makes it even more important for educators to be discerning and to critically test and judge software before incorporating it into a blended learning program.
Next-Generation Intelligent Adaptive Learning
Transformative digital content that enables students to make sense of things on their own requires a different kind of adaptive platform, one that can provide feedback based on how students think for themselves in new situations. The “intelligent” difference lies in the pedagogy and transformative nature of the digital content—a great example of the redefinition category in the SAMR model. Lessons written for an Intelligent Adaptive Learning platform are designed to adapt in real time and aren’t simply digitized static content that replicates print resources. Instead, they are built from the ground up to be interactive and to adapt at any moment to the ideas of any student. Truly intelligent adaptivity uses non-linear sequencing informed by decades of research about natural cognitive development and growth in reasoning, instead of being generated from crowd-sourcing other students’ behaviors with digitized print materials.
Putting it all Together with Critical Evaluation of Digital Curricula
This report includes a digital curriculum evaluation checklist for easy reference, as well as steps for implementation after the selection process is complete.
You’ll learn nine key implementation factors most strongly linked to education success, and many other tips for a high return on investment from an online math curriculum.
VP of Learning for DreamBox Learning, Inc., Hudson is a learning innovator and education leader who frequently writes and speaks about learning, education, and technology. Prior to joining DreamBox, Hudson spent more than 10 years working in public education, first as a high school mathematics teacher and then as the K–12 Math Curriculum Coordinator for the Parkway School District, a K–12 district of over 17,000 students in suburban St. Louis. While at Parkway, Hudson helped facilitate the district’s long-range strategic planning efforts and was responsible for new teacher induction, curriculum writing, and the evaluation of both print and digital educational resources. Hudson has spoken at national conferences such as ASCD, SXSWedu, and iNACOL.
The verb tenses in Greek are divided into six basic “systems” called Principal Parts. Each system has a distinct verb stem, from which all the various tenses and their respective “voices” are built. In order to recognize a Greek verb, it is necessary to be familiar with its principal parts. The “regular” principal parts system is represented by the normal, “model” Greek verb λύω. The Principal Parts chart delineates in alphabetical order the principal parts of many frequently encountered “irregular” verbs occurring in the Greek New Testament. Go to: Wermuth’s GREEKBOOK.com
Alabama has some of the highest diversity of freshwater snails in the world, but many snails are at high risk of extinction. An essential part of determining extinction risk is knowing the range of a given species and determining how much its range has contracted owing to anthropogenic impacts, but misidentification can complicate conservation efforts. The Painted Rocksnail, a small snail from the Coosa River system, has been mistakenly identified as other species for over 100 years. In a study published in the open access journal ZooKeys, scientists Dr. Nathan Whelan, U.S. Fish and Wildlife Service, Dr. Paul Johnson and Jeff Garner, Alabama Department of Conservation and Natural Resources, and Dr. Ellen Strong, Smithsonian Institution National Museum of Natural History, tackled the identity of the Painted Rocksnail, a small federally threatened species native to the Mobile River basin in Alabama. Freshwater snails are notoriously difficult to identify, as the shells of many species can look very similar. Keeping this in mind, the researchers began to notice that many shells identified as the Painted Rocksnail in museums around the world were misidentified specimens of the Spotted Rocksnail, another snail species found in Alabama. After examining shells at the Academy of Natural Sciences of Philadelphia, Museum of Comparative Zoology at Harvard, National Museum of Natural History, North Carolina Museum of Natural Sciences, Florida Museum of Natural History, and Natural History Museum in London, in addition to hundreds of hours of their own sampling throughout the Mobile River basin, the authors determined that all previous reports of the Painted Rocksnail from outside the Coosa River system were mistakes. Despite the Painted Rocksnail dwelling in well-studied rivers near large population centers, mistaken identity of the species has persisted almost since the species was described back in 1861 by Isaac Lea.
Only after careful examination of shells collected in the last 150 years and analyses of live animals were the researchers able to confidently determine that the Painted Rocksnail never occurred outside the Coosa River system. The study has implications for the conservation of the Painted Rocksnail, as the species was historically more restricted than previously thought. Recent surveys by the authors only found the species in small stretches of the Coosa River, Choccolocco Creek, Buxahatchee Creek, and Ohatchee Creek. In conclusion, the authors note the importance of natural history museums and the importance of studying snails in the southeastern United States. "Without the shells stored in natural history museums we would have never been able to determine that the supposed historical range of the Painted Rocksnail was incorrect, which could have resulted in less effective conservation efforts for an animal that is very important to the health of rivers in Alabama," they say. Whelan NV, Johnson PD, Garner JT, Strong EE (2017) On the identity of Leptoxis taeniata - a misapplied name for the threatened Painted Rocksnail (Cerithioidea, Pleuroceridae). ZooKeys 697: 21-36. https:/ This work was funded in part by the Smithsonian Institution, National Science Foundation, U.S. Fish and Wildlife Service, Birmingham Audubon Society, Conchologists of America, Alabama Department of Conservation and Natural Resources, and American Malacological Society. The findings and conclusions are those of the authors and do not necessarily represent the views of the U.S. Fish and Wildlife Service.
A team of researchers in the United States spent a week looking through the garbage bags of 1,151 people living in Denver, New York and Nashville. The researchers wanted to know what kind of food was being thrown away, how much there was, and why it was being tossed. By asking these questions, the researchers hoped to find ways to reduce the amount of food we throw away, and to give some of that food to people who need it. The researchers found that, in the cities they surveyed, more than a kilogram of edible food per person is wasted each week. (Edible food is food you can eat. It doesn’t include things like apple cores, egg shells, or bones from meat.) Fruits and vegetables were the most common edible foods found in the trash, followed by food leftover from meals. Eggs, bread and milk were also commonly thrown out. The people taking part in the survey gave several reasons for throwing edible food away. Most said the food was spoiled. Some said they weren’t interested in eating leftovers. A few said the food had passed the “Best Before” date printed on the label. Some people composted their food garbage. (Composting is a way of turning rotting food into fertilizer for soil.) But more than half of the food waste ended up in the regular trash and was sent to a dump or landfill. When food rots in landfills, it produces methane, a gas that contributes to global warming. While using food waste to make compost is better than throwing it in a landfill, the survey found that people who composted their waste didn’t feel as bad about throwing food away. They actually threw away more food than families who didn’t compost. When food is wasted – by families, restaurants or grocery stores – we are also wasting all of the resources that go into producing that food. That means we are wasting water, land, energy and labour, as well as the fuel needed to transport food. In Canada, about $31 billion worth of food ends up in landfills or composters each year. 
That works out to about $31 per week, or $1,600 per year, that each household spends on food that is wasted. About 45% of Canadians compost their food waste, but not all communities have composting programs, so the rest ends up in landfills. A lot of food is wasted before it even reaches the grocery store. Some food spoils or is damaged while it is being transported from the farm to the stores. Fruits and vegetables that have bruises or that don’t look attractive enough don’t even get put onto the shelves because most people won’t buy them. Grocery stores, restaurants and institutions like hospitals also waste a large amount of food. The researchers suggest that grocery stores should donate any food that is still okay to eat to food banks or homeless shelters, instead of throwing it away. Prepared meals from hospitals or restaurants could also be donated to shelters. Some people have come up with creative ways to reduce food waste: - Ubifood is an app that lets bakeries, cafes and restaurants in Montreal upload photos of food they have left at the end of the day. Customers can buy the food online at reduced prices, then pick it up at the restaurant. - Loblaws grocery chain sells small or oddly shaped fruits and vegetables under the brand name “Naturally Imperfect.” The items taste just as good as more attractive produce, but cost about 30% less. - Food rescue organizations help to redistribute surplus food to people who need it. For example, Second Harvest collects discarded food all along the delivery chain, from farmers to retailers, and passes it on to food banks, shelters, children’s breakfast programs and others. Rock and Wrap It Up! collects leftover prepared food from places like sports arenas, concert halls and movie studios, and gives it to local programs that feed people in need. 
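The per-household figures above are easy to sanity-check. Here is a quick back-of-the-envelope calculation as a hypothetical Python snippet (the $31-per-week figure is the one quoted in the article; 52 weeks per year is assumed):

```python
# Verify that $31 of wasted food per week is roughly $1,600 per year.
weekly_waste_dollars = 31
weeks_per_year = 52
yearly_waste_dollars = weekly_waste_dollars * weeks_per_year
print(yearly_waste_dollars)  # 1612, close to the ~$1,600/year the article cites
```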
(see TKN story: Unsold Food At Sports Events Feeds Local Hungry https://teachingkidsnews.com/2014/08/20/1-unsold-food-hockey-games-feeds-local-hungry/)
15 Ways to Reduce Food Waste: https://www.freshcityfarms.com/blogs/15-ways-to-reduce-food-waste
Global News video about Second Harvest: https://globalnews.ca/video/rd/471457347681/
Interesting YouTube video about the Food Waste Campaign Canada 2015: https://www.youtube.com/watch?v=xhLINfHtMOk
By Jonathan Tilly
Do you see yourself as someone who wastes food? If yes, what can you do to reduce the amount of food you throw out? If no, what strategies do you use to keep your amount of food waste to a minimum?
Reading Prompt: Making Inferences / Interpreting Texts
Today’s article focuses on studies of the United States and Canada. Do you think that food waste is a problem all over the world? Consider why food waste might differ between nations.
Make inferences about texts using stated and implied ideas from the texts as evidence (OME, Reading: 1.5).
Use stated and implied ideas in texts to make inferences and construct meaning (OME, Reading: 1.5).
Develop and explain interpretations of increasingly complex or difficult texts using stated and implied ideas from the texts to support their interpretations (OME, Reading: 1.5).
Language Feature: Paragraph Length
Today’s article is made up of many very short paragraphs. Most paragraphs in today’s article contain 1-3 sentences. How does this impact how you read and what you remember? Why might an author choose to use short paragraphs instead of lengthy ones?
ADDITIONAL, IN-CLASS ACTIVITIES FOR THIS ARTICLE
For another great in-class activity based on this article, click on the box, below. Or go to the “Fake News Resources” tab on our home page and select “Media Literacy Activities.”
The Aromatic World
An Interview with Pascale Ehrenfreund
After years of investigation, scientists still struggle to understand how life began on our planet. While there are many hypotheses for life’s origin, there is still no compelling evidence that suggests one scenario is more likely than any other. In fact, when looking at chemical systems, there isn’t even a solid definition of what separates “life” from “non-life.” But many scientists do agree that life anywhere in the universe will share three essential qualities. First, life has to be able to claim an identity separate from the outside world. For early life on Earth, this likely took the form of a container, maybe a membrane sac or bag that contained chemicals.
[Image caption: Many of the ingredients for life formed in outer space. The Earth formed from star dust, and later meteorites and comets delivered even more materials to our planet. But scientists are still unsure which molecules played the most important roles in life’s origin. Image credit: European Space Agency]
Second, life eats (metabolizes). This bag of chemicals must take in energy and nutrients of some type in order to sustain itself. For humans, that can be a hamburger and fries, but for something like bacteria living at a hydrothermal vent, lunch can be hydrogen sulfide. Finally, in order for life to go on, it must have children. Life must somehow pass on its genetic information down through time. If not, then a bag of chemicals would be a “one-off,” an anomaly in the chemical brew that lived once and then died and left no trace of its existence. Pascale Ehrenfreund, a professor of astrophysics at the University of Leiden in the Netherlands, investigates the night skies for signs of life. Rather than a SETI-like search for radio signals, however, the signs she looks for are chemical.
There are 143 kinds of molecules in the interstellar medium, and some of them may be important for life’s origin – not just in our own solar system but for the entire universe. In a paper soon to be published in the journal Astrobiology, Ehrenfreund and her colleagues suggest that polycyclic aromatic hydrocarbons (PAHs), organic molecules found throughout space, may have played a fundamental role in the origin of life. These molecules of carbon and hydrogen are called "polycyclic" because of their multiple loops of carbon atoms, and "aromatic" because of the strong chemical bonds between the carbon atoms. PAHs can be found on Earth anytime carbon-based materials are burned incompletely – from the sooty exhaust of trucks to the black gunk that clogs barbecue grills. In this interview with Astrobiology Magazine’s Leslie Mullen, Ehrenfreund explains how PAHs could have possibly provided the three qualities that were needed for life to arise.
[Image caption: Pascale Ehrenfreund of the University of Leiden. Photo credit: Leslie Mullen]
Astrobiology Magazine (AM): In your work, you look for chemicals in space and in meteorites, and what you find indicates the raw ingredients early life had to work with.
Pascale Ehrenfreund (PE): When you look at modern biochemistry, the three main needs of cellular systems are nucleic acids, proteins, and membranes. Some of the building blocks of these can be found in space. Most of the prebiotic material is found in carbonaceous meteorites, but there are indications of some complex molecules in the gas phase in the interstellar medium. For instance, there are indications of simple sugars like glycolaldehyde, and also the amino acid glycine. But I’m not sure this has anything to do with the origin of life. The interstellar medium provides the raw material for star and planet formation. There is a lot of chemistry going on in the solar nebula.
The formation of the solar system was a dynamic process – material was rearranged, destroyed, dissociated, and newly formed. There are open questions about the degree of turbulence – how much the material mixed into outer layers and then came back. In comets, we find crystalline silicates that can only have come from very close to the forming star. Yet comets form in the outer part of the solar system, so there must have been a diffusion of material – a mixing from the inside to the outside.
AM: That was a result from the Stardust mission, wasn’t it? They discovered the comet dust had materials which could only have formed in hot regions, close to the sun.
PE: This is something we knew before Stardust – we’d previously found such indications in interplanetary dust particles. But I’m sure Stardust will improve our knowledge of that.
[Image caption: Interstellar dust particle. Credit: UWSTL, NASA]
In general, when you look at prebiotic compounds like amino acids, nucleobases, and simple sugars, they have problems withstanding heat and radiation. So if this material was formed somewhere in the gas phase, like in the interstellar medium, it would always have had to be protected from high temperature and radiation while it was incorporated into a forming solar system. It’s likely that most of the material would have been exposed to some kind of energetic processing. When you look into meteorites, where you have solid-state chemistry involving liquid water, you find more than 80 different amino acids. You also find purines, pyrimidines, simple sugars, and nucleobases in meteorites. You do not find lipids, but you do find compounds that can form the most primitive containers – for instance, alkane carboxylic acids, which are components of membranes. So meteorites are a kind of crystal ball for complex organic chemistry. We don’t know if this material really was important for the origin of life.
But since we know that it is extraterrestrial and it arrived intact on the early Earth, we have a sample of material that could have been important to further processing and for the build-up of complexity. But perhaps we shouldn’t give the modern biotic chemistry molecules too much credit for having been the ultimate material to form life. The temperature and radiation conditions on the early Earth improved considerably after a few hundred million years, but at the beginning it was too hostile for amino acids to assemble into proteins. You probably needed a different type of material that was much more stable.
AM: And you suggest in your new paper that polycyclic aromatic hydrocarbons – PAHs – could have been a stable material important for life’s origin.
[Image caption: Polycyclic Aromatic Hydrocarbons.]
PE: Yes. We find complex aromatic carbon rings in the interstellar medium, in comets, and in meteorites. This macromolecular material is very stable against any kind of degradation, including radiation. It may be modified, but it won’t be totally destroyed. Even if it is broken apart, the fragments are still available for future chemistry. Whereas for something like amino acids, when they are blown apart by UV photons, nothing is left. The carbonaceous meteorites contain about 3 percent carbon, maximum. Of this 3 percent, 80 percent is incorporated into aromatic networks. So the aromatic material is abundant, it has been delivered effectively, and it is very stable – it is stable to heat, it is partly insoluble, and it is rather resistant to radiation. So now we are starting to think that under the very hostile conditions on the early Earth, such material could have been more important than we originally thought.
AM: What can PAHs lead to? Are there only specific chemical pathways, or can they be the basis for a lot of different molecules?
[Image caption: The Murchison meteorite fell to Earth on September 28, 1969, near Murchison, Australia. This carbonaceous meteorite contains minerals, water, and complex organic molecules such as amino acids.]
PE: PAHs also can be photosensitizers, because they can carry out a charge transfer between plus and minus. So they can be used as a metabolic compound to transform energy. My co-authors Steen Rasmussen and Liaohai Chen from Los Alamos and Argonne National Laboratories are using compounds similar to polycyclic aromatic hydrocarbons as metabolic units for the Los Alamos Protocell Assembly project. The PACE project of the European community is also using PAHs in this way. Nicholas Platts at the Carnegie Institution of Washington has proposed that by stacking PAHs, they can form something similar to a nucleic acid. Pier Luigi Luisi at RomaTre University has tried to stack PAHs in the origin-of-life context. So in our paper, we suggest the aromatic material can be used as a container, as a metabolic unit, and as a genetic information carrier. We think that aromatic material can be used for all three requirements for life. What we tried to stress in our paper is that you have to meet all the requirements at once. You can’t have one compound to assemble material, and then add something else later on to do another function. They have to be combined from the beginning – life needs to have an identity, it needs energy, and it needs to be able to reproduce and evolve. That’s why PAHs are potentially so powerful, because with these aromatic compounds you can fulfill all three functions at the same time.
AM: Are PAHs currently used within any modern living systems?
[Image caption: Red regions in the spiral arms represent infrared emissions from dustier parts of the galaxy where new stars are forming. Credit: NASA/JPL-Caltech/S. Willner (Harvard-Smithsonian Center for Astrophysics)]
PE: Only in the form of nucleobases, which are ring structures with heteroatoms and side groups.
But there are a lot of aromatic molecules – not directly PAHs – that have functions in life, particularly in metabolic processes.
AM: We’re not sure what the environment of the early Earth was like – whether it was cold or hot. Would that make any difference?
PE: For PAHs it wouldn’t make much difference. PAHs would withstand temperature and radiation flux much better than sugars, amino acids, or other typical components of biochemistry. If you had high temperatures on the early Earth, sugars could not be formed or sustained. Amino acids are also vulnerable to heat, and so are some of the nucleobases. Nucleobases are a sort of PAH, but the nitrogen within the ring makes them more unstable than PAHs. They are certainly all much more fragile to radiation than aromatic material, as our co-author Jim Cleaves has been investigating. The polycyclic aromatic hydrocarbons are the most abundant free organic molecules in space. And space is certainly less comfortable than the Earth, since there is no protective atmosphere. That shows you that they can survive much better than any other material.
AM: This idea makes so much sense, because it seems more likely that life would get started from the most common, robust material at hand, rather than from extremely fragile materials that need protection or special conditions.
[Image caption: Some of the ingredients for life are produced in the diamond-bright star fields of space.]
PE: I think so too. Amino acids can form pretty easily – they are everywhere – and because they are very easily formed I’m sure that later on they played an important role. But I think it would be more logical that they played a role in living systems at a time that was convenient for them. I personally do not think that kind of material was the starting material for life.
AM: Since PAHs are so robust, do you think they could be the basis for life on any planet? That a planet wouldn’t need to have Earth-like conditions in order to develop life?
PE: Yes, it is very likely. It is much more likely than having some fragile compounds that are less abundant. Also, life must start simple. And nucleosides are not simple. We still have a great deal of difficulty building them in the lab even after 50 years of prebiotic chemistry experiments! So I think we have to go to something very primitive at the beginning, and which works under a lot of different conditions.
LA: With prompting and support, identify the main topic and retell key details of a text.
LA: Use a combination of drawing, dictating, and writing to compose informative/explanatory texts in which they name what they are writing about and supply some information about the topic.
LA: Participate in collaborative conversations with diverse partners about kindergarten topics and texts with peers and adults in small and larger groups.
LA: Ask and answer questions in order to seek help, get information, or clarify something that is not understood.
MATH: Recognize and draw shapes having specified attributes, such as a given number of angles or a given number of equal faces. Identify triangles, quadrilaterals, pentagons, hexagons, and cubes.
SCI: Use observations and information to classify living things as plants or animals based on what they need to survive.
VA: Use different media, techniques, and processes to communicate ideas, experiences, and stories.
VA: Use visual structures of art to communicate ideas.
Hubble finds spark of life in ancient galaxy

Washington: For some time now, astronomers have believed that ancient elliptical galaxies do not give birth to new stars anymore, but new images captured by NASA's Hubble Space Telescope indicate otherwise. The new study helps bolster the emerging view that most elliptical galaxies have young stars, bringing new life to old galaxies. Images of the core of NGC 4150 reveal streamers of dust and gas and clumps of young, blue stars, believed to be less than a billion years old, that rotate with the galaxy. According to a NASA report, evidence shows that a merger with a dwarf galaxy sparked the star birth. The Hubble images reveal turbulent activity deep inside the galaxy's core. "Elliptical galaxies were thought to have made all of their stars billions of years ago," said astronomer Mark Crockett of the University of Oxford, leader of the Hubble observations. "They had consumed all their gas to make new stars. Now we are finding evidence of star birth in many elliptical galaxies, fueled mostly by cannibalizing smaller galaxies. These observations support the theory that galaxies built themselves up over billions of years by collisions with dwarf galaxies," he added. "NGC 4150 is a dramatic example in our galactic back yard of a common occurrence in the early universe," said Crockett. Based on Hubble analysis of the stars' colors, the star formation started about a billion years ago and the galaxy's star-making factory has slowed down since then. Crockett and his team, however, observed that most massive stars were already gone and that the youngest stars are between 50 million and 300 to 400 million years old.
They also believe that the star birth came from a merger with a small, gas-rich galaxy around one billion years ago, which had the capability to fuel the star formation. The team selected NGC 4150 for their Hubble study because a ground-based spectroscopic analysis gave tantalizing hints that the galaxy's core was not a quiet place. The ground-based survey, called the Spectrographic Areal Unit for Research on Optical Nebulae (SAURON), revealed the presence of young stars and dynamic activity that was out of sync with the galaxy. The astronomers hope to study other elliptical galaxies in the SAURON survey to look for the signposts of new star birth. The study will be published in The Astrophysical Journal.
Meghan H. Puglia Genetic factors undeniably contribute to autism. However, there is mounting evidence that environmental factors may also play a substantial role in the disorder. The environment exerts its influence on DNA through epigenetic modifications — chemical tags on DNA that regulate gene expression without changing the DNA sequence. Epigenetic modifications come in many forms and can have sex-specific effects. These chemical modifications can silence sex chromosomes, and sex hormones can cause epigenetic changes in the brain. Both of these processes have been implicated in autism. Understanding the interplay between sex hormones, genes and the social brain is crucial to understanding autism. Most research on this relationship has been done in males, as female hormonal fluctuations complicate an already complex system. The only way to truly understand sex differences in autism is through careful study of both sexes. There are well-established sex differences in the structure and function of the brain, particularly in regions that support social perception and cognition. Levels of testosterone and estrogen in the womb influence sexual differentiation of the brain, which begins early in embryogenesis. The chromosomal composition of the developing embryo — males possess a Y chromosome and females have two X chromosomes — partly determines the prenatal hormonal environment. A gene on the Y chromosome causes development of the testes, which produce testosterone. Through epigenetic mechanisms, this sex steroid masculinizes brain and behavior throughout the lifespan. One hypothesis about the origin of autism, called the ‘extreme male brain’ theory, holds that people with the disorder show exaggerated versions of social, cognitive and behavioral traits that are typical of men1. Researchers have probed testosterone levels among individuals with autism in search of a link but have arrived at mixed results2. 
These studies measured testosterone in different tissues from people of varying ages and stages of sexual maturity. Levels of testosterone early in development, when the brain is most plastic, are likely to exert the greatest impact. The few studies that have measured prenatal and perinatal testosterone link high levels to some behaviors associated with autism, but not to a diagnosis3. Although exposure to high concentrations of prenatal sex hormones may confer an increased risk of autism for boys, the presence of two X chromosomes may act as a protective factor for girls. Around the fourth day of female embryogenesis, one X chromosome in each cell is epigenetically silenced through a process called X inactivation. Each cell therefore expresses only the maternal or the paternal X chromosome. However, almost one-quarter of the genes on the X chromosome escape inactivation. These genes may serve as a protective factor for females. Similarly, in the days leading up to X inactivation, females may be exposed to a double dose of X chromosome genes, many of which are involved in brain development. The mechanism that determines which X chromosome is inactivated is under investigation. Some evidence suggests that this process may not be random, and that one X chromosome is more likely to be inactivated in particular tissues. There is also evidence that an X chromosome with mutations may be preferentially silenced. Even if the selection process is perfectly random, a girl with a mutation on one X chromosome will only express that chromosome in half of her cells. By contrast, a boy with a mutation on his single X chromosome will express it in every cell. Just as X inactivation can silence whole chromosomes, another epigenetic mechanism can silence specific genes on a chromosome. In a normal phenomenon called imprinting, either the maternal or paternal gene contains an epigenetic mark that silences it.
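The mosaic-expression arithmetic above can be sketched with a toy simulation. This is purely illustrative: it assumes perfectly random, independent inactivation in each cell, ignoring the real-world skewing and escape genes discussed in the text.

```python
import random

def simulate_x_inactivation(n_cells: int, rng: random.Random) -> float:
    """Simulate random X inactivation in a female embryo.

    Each cell independently silences either the maternal or the paternal
    X chromosome with equal probability. Returns the fraction of cells
    that end up expressing a (hypothetical) mutation-carrying X.
    """
    expressing_mutant = sum(1 for _ in range(n_cells) if rng.random() < 0.5)
    return expressing_mutant / n_cells

rng = random.Random(42)  # fixed seed for reproducibility
fraction = simulate_x_inactivation(100_000, rng)
print(f"female: ~{fraction:.1%} of cells express the mutated X")
print("male:   100% of cells express the mutated X (single X chromosome)")
```

With a large cell count, the simulated fraction settles near one-half, which is the quantitative basis for the claim that a heterozygous girl expresses the mutation in only about half her cells while a boy expresses it in all of them.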
Disorders occur when the gene from one parent is naturally silenced, and the gene from the other parent is mutated, stamping out expression of that gene altogether. Dysfunction in the imprinting process can also occur, leading to too much expression of a normally imprinted gene. There is some evidence that silencing of the maternal X chromosome may increase the risk of autism4. Hormones such as oxytocin and vasopressin also play an important role in shaping sex differences in the brain5. Both of these hormones have been implicated in social behaviors ranging from maternal bonding to play. Both have also been tied to autism6. Oxytocin and vasopressin are susceptible to environmental influences, early-life experiences and epigenetic changes. For example, the presence of other hormones can influence both the synthesis and binding of these molecules. Vasopressin synthesis is testosterone dependent, whereas changes in oxytocin binding are associated with estrogen levels. The interaction between these social hormones and sex steroids is bi-directional and dynamic. Exposure to stress — particularly psychological or social stress early in life — also affects social behavior through interactions between oxytocin, vasopressin and the body’s stress response system. Oxytocin levels naturally increase during stress, and administering oxytocin reduces this reaction7. Intriguingly, social behavior itself can act as an environmental influence. The oxytocin and vasopressin systems interact with neural regions involved in processing information from the environment and are sensitive to socio-environmental context. Both hormones affect the brain and behavioral response to social interaction in a sex-specific way8. Most studies examining the role of oxytocin and vasopressin in people measure levels of the hormones in the blood and other bodily fluids, or test the effects of synthetic administration. 
However, the actions of these hormones depend on the activity of their receptors, which have typically not been considered. As a result, these studies have yielded mixed results. For example, a study published earlier this year found that genetic variability in the oxytocin receptor has sex-specific effects on the response to intranasal oxytocin9. The presence of oxytocin itself can decrease the expression of its receptor, particularly with continued exposure10. We suspect an epigenetic process called DNA methylation, which silences genes, regulates this negative feedback loop. As a group, women show more oxytocin receptor methylation than do men11. This suggests that they somehow compensate for the effects of methylation, or are more tolerant of hormonal fluctuations or disruptions than men are. In boys with autism, however, levels of oxytocin receptor methylation are higher in both the brain and blood compared with typically developing boys12. We suspect oxytocin levels in utero influence the developing oxytocin system, and that maternal levels of oxytocin ultimately shape a child’s epigenetic profile. Disrupting the natural balance of oxytocin by using a synthetic version, called Pitocin, to induce or augment labor, for instance, may alter the child’s methylation state. Epigenetic factors can have subtle but significant effects. In studies examining their influences on behavior, we must carefully consider and control for age, sexual maturity, the type of tissue being studied and other variables. We believe many of the seemingly contradictory results in the literature today reflect our lack of understanding of the intricacies within these systems. Ultimately, uncovering the mechanisms driving the sex discrepancy in autism will go hand in hand with understanding the neurobiology of the disorder. Jessica J. Connelly is assistant professor of psychology at the University of Virginia in Charlottesville. Meghan H. Puglia is a graduate student in her lab.
In a general sense, biodiversity is an intuitively simple concept, referring to the variety of Earth’s organisms. Ecologists, however, conceptualize biodiversity in a more nuanced, multidimensional way to reflect the enormous diversity of species, niches, and interspecific interactions that generate spatiotemporal complexity in communities. Students may not fully comprehend or appreciate this deeper meaning if they fail to recognize the full range of species in a community (e.g., the often-ignored microbes and small invertebrates) and how their varied interactions (e.g., mutualism, parasitism) and activities (e.g., ecosystem engineering) affect an ecosystem’s emergent structure (e.g., food webs) and function (e.g., decomposition). To help students learn about biodiversity and complex ecological webs, a role-playing activity was developed in which students “become” a different species (or resource) that they investigated for homework. In class, students work in small groups to “meet” other species in their community and, as appropriate for their roles, “consume” or “interact” with each other. As they make interspecific connections, students collectively create an ecological web diagram to reveal the structure of their community’s relationships. This diagram is used for further exploration and discussion about, e.g., trophic cascades, non-trophic interactions, ecosystem engineering, and species’ effects on the movement of energy and nutrients. This inquiry-based activity has been observed to sustain student engagement and yield productive discussions and positive responses. Further, qualitative assessment indicates that students’ knowledge about biodiversity and ecological interactions improves after the activity and discussions, suggesting that students benefit from acting in and constructing their own ecological webs. Byrne, L.B. 2013. "An in-class role-playing activity to foster discussion and deeper understanding of biodiversity and ecological webs."
EcoEd Digital Library
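The ecological web diagram the students assemble is, in effect, a directed graph, which makes it easy to explore computationally. The sketch below is illustrative only: the species names and feeding links are invented for this example and are not taken from the activity itself.

```python
# A minimal ecological web modeled as a directed graph.
# Edges point from each consumer to the resources it eats.
food_web = {
    "hawk":      ["songbird", "mouse"],
    "songbird":  ["beetle", "earthworm"],
    "mouse":     ["seeds", "fungus"],
    "beetle":    ["fungus", "dead leaves"],
    "earthworm": ["dead leaves"],
    "fungus":    ["dead leaves"],
}

def basal_resources(web):
    """Resources that consume nothing in this web (producers/detritus)."""
    consumed = {prey for prey_list in web.values() for prey in prey_list}
    consumers = set(web)
    return sorted(consumed - consumers)

def chain_length(web, species):
    """Length of the longest feeding chain below a species,
    a simple stand-in for trophic position."""
    if species not in web:
        return 0  # basal resource: eats nothing
    return 1 + max(chain_length(web, prey) for prey in web[species])

print("basal resources:", basal_resources(food_web))
print("longest chain below hawk:", chain_length(food_web, "hawk"))
```

Even this tiny model surfaces the discussion points the abstract mentions: removing "hawk" illustrates a trophic cascade candidate, and the multiple paths down to "dead leaves" show how decomposers tie the web together.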
Over time, the avian influenza virus evolved to adapt to other animals, including humans, as natural selection favored viruses with mutations that allowed them to more readily infect the cells of new host species. For some strains of bird flu virus, notably the worrisome H5N1 variant, the genetic changes that could make human-to-human transmission a possibility and spark a pandemic are the markers of intense interest to those who track flu as a threat to human health. Now, in a study published today (Oct. 23, 2013) in Nature Communications, an international team of researchers shows how evolution can favor mutations that make avian flu more transmissible in mammals. The study used what scientists call deep sequencing to identify low-frequency genetic mutations that occur as the virus grows in and transmits between animals. Combing through the genetic data from a transmission study in ferrets, a team led by Thomas Friedrich, a professor of pathobiological sciences at the University of Wisconsin-Madison School of Veterinary Medicine, found that during transmission, when one animal is infected by another through sneezing or coughing, the process of natural selection acts strongly on hemagglutinin, the structure the virus uses to attach to and infect host cells. The deep look into the genes of transmitted H5N1 viruses also reveals the surprising degree to which the virus can mutate and genetically diversify in each infected host, a troubling trait for a pathogen that has so far infected 637 people, killing 378. The team's data emphasize the fact that influenza viruses exist in each infected individual (bird, human or ferret) as a population or swarm of genetically related, but distinct, mutants. "A mutation occurs somewhere on the viral genome every time a virus infects a cell," Friedrich explains. "You might think they all have the same sequence, but they don't. We found that this diversity increases over time in essentially all infected individuals we examined."
Perhaps their most surprising and troubling discovery was that mutations present in only about 6 percent of the viruses infecting one ferret could be transmitted to another. This suggests that even very rare mutants can be transmitted if they have an evolutionary advantage. Most human infections with H5N1 viruses come directly from birds and are not transmitted to other people. Past studies have identified four key genetic mutations needed for the virus to become transmissible between mammals. Surveillance by public health officials has already identified viruses containing one or more of the required mutations from fowl in Egypt and some Asian countries. The data, Friedrich says, indicate that viruses capable of infecting humans probably already exist in nature, but at very low frequencies. Those findings, he adds, suggest that current surveillance methods may be missing H5N1 viruses capable of making the leap from birds to humans. "Traditional sequencing can detect a mutation if it's present in maybe 20 percent or 30 percent of viruses. We were able to detect the transmission of rare mutants in this study only because we used deep sequencing. So there may be a background of transmissible viruses we are missing because surveillance currently relies on older technologies," says Friedrich. "Maybe they've always been there and we just couldn't see them. There may be viruses out there just one or zero mutations away. They just haven't encountered a susceptible host." The new work drew on transmission studies conducted last year in the lab of Yoshihiro Kawaoka, a co-author of the new study and also a professor of pathobiological sciences at the UW-Madison School of Veterinary Medicine. The original studies examined the transmission of an engineered variant of the H5N1 virus between ferrets. Friedrich and his colleagues analyzed the genes of these variant viruses in their new study; no new ferret experiments were performed for the new analysis.
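The detection gap Friedrich describes can be made concrete with a toy calculation (the numbers and the model below are illustrative, not from the study): a consensus-based method that only reports variants above a roughly 20 percent frequency floor can never flag a mutant at 6 percent, while the chance that deep sequencing samples such a mutant grows rapidly with read depth.

```python
from math import comb

def prob_at_least_k_variant_reads(freq: float, depth: int, k: int) -> float:
    """P(at least k sequencing reads carry the variant), with reads
    modeled as independent Bernoulli(freq) draws -- a deliberately
    simple binomial model that ignores sequencing error."""
    p_fewer = sum(comb(depth, i) * freq**i * (1 - freq)**(depth - i)
                  for i in range(k))
    return 1.0 - p_fewer

variant_freq = 0.06          # mutant present in ~6% of the viral swarm
consensus_threshold = 0.20   # rough detection floor of traditional sequencing

# Consensus sequencing: a 6% variant sits below the ~20% floor.
print("consensus call detects 6% variant:", variant_freq >= consensus_threshold)

# Deep sequencing: probability of sampling the variant in at least 3 reads.
for depth in (50, 500, 5000):
    p = prob_at_least_k_variant_reads(variant_freq, depth, k=3)
    print(f"depth {depth:>5}: P(>=3 variant reads) = {p:.3f}")
```

Under this simplified model, a 6 percent variant is essentially guaranteed to show up once read depth reaches the hundreds, which is the intuition behind why deep sequencing could see transmission of mutants that consensus methods miss.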
"Fully avian viruses may act differently in nature," he notes. "But the data suggest to us that it wouldn't take many viruses from a chicken to infect a person, if the right mutations were there, even if they were a tiny minority of the overall virus population. I suspect that result will hold true." A key aim of the study was to determine how transmission from one host to another affects the virus's genetic makeup. Researchers believed that transmission would reduce the genetic diversity present in the virus, but it was unclear whether genetic changes associated with transmission were random or if natural selection might favor mutations that make it more transmissible. "We found evidence for natural selection occurring. We see it playing a role in which viruses start an infection, creating a genetic bottleneck," Friedrich says. A genetic bottleneck occurs when the survival of an organism with certain traits or mutations is favored over others in the same population, reducing the overall genetic diversity in subsequent generations. "If natural selection is playing a role, it will favor transmission of that one-in-a-million virus," Friedrich notes. Source: University of Wisconsin-Madison
Although surprising, water on Vesta is not as much a revelation as it would have been a decade ago. In the interim researchers have found evidence for water ice on the moon and Mars. High-resolution observations of other small bodies such as Eros and Ida could reveal an even moister solar system, says Andy Rivkin, a planetary scientist at APL who was not involved in either paper. "If water was brought into Vesta via external impacts," he adds, "we would expect everything in the Asteroid Belt to have some water." Richard Binzel, a professor of planetary science at Massachusetts Institute of Technology, says the two findings are "the highlight of the Dawn mission" so far. He was not involved in the work for either paper but wrote a commentary accompanying them. The Prettyman team's analysis of GRaND data delivered another key bit of knowledge about Vesta: The findings conclusively matched its composition to a class of meteorites on Earth called HED meteorites (composed of howardite, eucrite and diogenite). Researchers in the 1970s had matched the colors and reflective properties of Vesta's surface to the meteorites. Now, the Prettyman analysis of the asteroid's chemical composition confirms their source. "We are now fully confident that the [HED] meteorites are from Vesta," Binzel says. As a result, researchers can prod, scrape and peer at the chemical and physical properties of the HED meteorites here on Earth and know they hold a record of the chemistry and history of Vesta. Because Vesta formed by the same process as Earth, called differentiation, Binzel says, the rocks are "almost a model of what the very early Earth would be like chemically."
Lords, Ladies, Knights, and Serfs: these were the people that lived during this time. Medieval Europe, also called the Middle Ages, divided people into groups, or classes, and the lives of all the people depended on their class. They all lived on large estates called manors. The manor had a huge castle with a moat around it for safety. The manor also included a church (remember, Christianity was spreading like wildfire), a mill, and smaller houses for the serfs to live in. The lords controlled everything on the manor. Knights were required to protect the lords and started training at the age of seven. The serfs, or peasants, were the workers and had the hardest life. As time went on, small towns developed and narrow streets were lined with shops to sell goods. Traders and craftworkers organized themselves into groups called guilds. Christianity was not the only thing that spread like wildfire. A disease called the Black Death (Bubonic Plague) killed millions. It was spread by a flea that lived on a rat. It started in Asia and spread to Europe and Africa. After the misery of this horrible disease, people had an enthusiasm for new ideas. This time period, called the Renaissance, was a time of great creativity. Many famous artists and works of art came from this time, such as Leonardo da Vinci and Michelangelo.
(pages updated on 08-Feb-2013)

Evolution of the Global Observing System (GOS)

The Global Observing System

Since the establishment by WMO of the World Weather Watch (WWW) in 1963, the Global Observing System (GOS) has been the major mechanism for providing continuous and reliable observational data world-wide. The GOS started with a relatively narrow set of observational requirements in support of mainly synoptic, mesoscale and short-term weather forecasts. Over the past four decades, however, the WWW, and specifically its GOS, have drastically developed their technological capabilities in response to requirements that have evolved within WMO and beyond. The GOS currently consists of observing facilities deployed on land, at sea, in the air and in outer space. The backbone of the surface-based subsystem continues to be about 11,000 stations on land making observations at or near the Earth's surface, at least every three hours. In addition, nearly 1,300 upper-air stations generate over 1,500 upper-air reports daily. A constellation of geostationary and polar-orbiting satellites constitutes the operational space-based subsystem of the GOS, whose major goal is to augment the observations provided by the surface-based subsystem to achieve complete global coverage. These facilities are owned and operated by the Member countries of WMO, each of which undertakes to meet certain responsibilities in the agreed global scheme so that all can benefit from the consolidated efforts. Requirements for increased long-term reliability and accuracy are being placed upon the GOS by another WMO programme, the Global Climate Observing System (GCOS), a dedicated system designed specifically to meet the scientific requirements for monitoring the climate and its variability. The fourteenth WMO Congress, in 2003, reconfirmed the need for a coordinated approach to a fundamental redesign of the GOS.
The redesign involved experts and decision-makers in observing technology, network design, and numerical weather prediction (NWP). It addressed innovative ways of funding and operations management for the deployment of observations in remote and/or extraterritorial areas and for developing countries. The WMO Commission on Basic Systems (CBS) has made a substantial start on redesign of the GOS and its further evolution. A process named the Rolling Review of Requirements (RRR) has been instituted for continuously reviewing the requirements of WMO Members and international programmes and the results obtained under current circumstances. The GOS will continue to be the system of operational surface and space-based observing platforms. As a general principle, the evolution of the system will be based on proven techniques and will represent the best mix of observing elements that meet users' requirements.

Impact of Evolution

The impact of the changes to the GOS in the next decades will be so massive that new revolutionary approaches for science, data handling, product development, training, and utilization will be required. The new GOS will facilitate the strengthening of cooperation at national, regional, and global levels among countries and relevant non-Governmental organizations. Finally, as new technologies are introduced, the new system will allow for adequate overlap with the old to enable a smooth transition from the old to the new system, particularly for developing countries. |© World Meteorological Organization, 7bis, avenue de la Paix, Case postale No. 2300, CH-1211 Geneva 2, Switzerland - Tel.: +41(0)22 730 81 11 - Fax: +41(0)22 730 81 81 Contact us|
Mass communication is the process of disseminating news and information from one person or group to another. The media, or mediums, used for this purpose are the press (newspapers, books, and magazines), radio, television, the internet, and pictures. These mediums perform a large function: the flow of information, education, and entertainment in a society. Without these mediums, one can hardly imagine the flow of information to a large group of people. Because of these mediums, the flow of information to a wider audience has become possible. One can learn about recent events happening around the world just by accessing mass media. The press and the newspapers, including magazines, are categorized as print media. Since William Caxton introduced the printing press to England in 1476, the speed at which news spreads has been increasing. During the 19th and 20th centuries, this process became so popular that in Britain the number of newspapers sold exceeded the number of houses. Books also come in this category, and for ages these written works have been serving the purposes of education and entertainment. News reports or programs that reach the audience via radio and television come from what are known as broadcast media. Through radio, the audience gets information or news through sound, and on television this sound is accompanied by pictures: motion pictures or static pictures. Electronic media is an invention of the 20th century. With the advent of the internet, there comes an additional medium through which people can not only get information and read news but also connect with different people even across their geographical territories. Information and news in this type of media are either in written form accompanied by pictures and images or sometimes in video format. Having emerged as the latest medium, social media has become popular among people all around the globe.
Before the emergence of social media or social portals, the internet was mainly used for the purpose of news gathering and data collection; however, the introduction of social media has changed the way people connect and share information. Now news channels are not the only source of information; people can know what is going on around the world just by using these social forums. Moreover, people can get exposed to different perspectives on a particular event and can judge the reality on their own.
Temporal range: Jurassic–Oligocene, 160 to 35 million years ago |Skull of Ptilodus| They were eventually outcompeted by rodents, becoming extinct during the early Oligocene. Multituberculates are usually placed outside both of the two main groups of living mammals: the Theria (placentals and marsupials) and the monotremes. Some cladistic analyses put them closer to Theria than to monotremes. Biology The multituberculates had a head anatomy similar to rodents. They had cheek-teeth separated from the chisel-like front teeth by a wide toothless gap (called the diastema). Each cheek-tooth displayed several rows of small cusps (or tubercles, hence the name) which worked against similar rows in the teeth of the jaw. It was an efficient chopping device. References |Wikispecies has information on: Multituberculata.| - Weil, Anne (June 1997). "Introduction to Multituberculates: the “lost tribe” of mammals". Berkeley: University of California Museum of Paleontology. http://www.ucmp.berkeley.edu/mammal/multis/multis.html. Retrieved January 2010. - Benton, Michael J. 2004. Vertebrate palaeontology. p. 300 - Carrano, Matthew T. and Richard W. Blob, Timothy J. Gaudin, and John R. Wible 2006. Amniote paleobiology: perspectives on the evolution of mammals, birds, and reptiles, p. 358. - Kielan-Jaworowska, Zofia, Richard L. Cifelli, and Zhe-Xi Luo 2005. Mammals from the Age of Dinosaurs: origins, evolution, and structure. p. 299
Tuesday, May 23, 2017 Sojourner Truth's "Ain't I a Woman?" - Primary Source Analysis Activity This engaging activity on Sojourner Truth's famous "Ain't I a Woman?" speech is awesome for giving your students an introduction to analyzing primary sources. The download includes 2 different versions of Sojourner Truth's speech. First is the most widely printed version that repeats her iconic rhetorical question. The second is an earlier account from an Abolitionist newspaper. Both versions are short, compelling, and easy for students to understand. In my class, I ask for volunteers to read them out loud so students can hear the power in her words and better differentiate between the 2 speeches. Following the 2 versions is a worksheet with open-ended analysis questions designed to get your students thinking critically about Truth's words and the differences between the two accounts. I love this resource because it is a primary source that is engrossing and in language students can comprehend. It's a great way to get them into more primary sources. Thanks so much for checking it out! PS: You can get this primary source analysis activity PLUS over 500 more amazing teaching resources and lessons for World or American History through a monthly subscription to my site StudentsofHistory.org! Joining the site gives you immediate access to every engaging resource — each highly reviewed by thousands of teachers across the world! Save yourself time, energy, and sanity by joining today!
DINOSAURS: MEAT-ASAURUS OR VEGGIE-ASAURUS

In this lesson, students will join Ms. Frizzle's class and be transported back in time to the late Cretaceous period, sixty-seven million years ago. They will observe dinosaurs as they eat, hunt, interact with other dinosaurs, and even care for their young. By analyzing data collected during this lesson, students will draw the conclusion that most dinosaurs were herbivores rather than carnivores. Students will also understand that dinosaurs were not bloodthirsty monsters, but were merely following their instincts for survival. This lesson may be completed in one class period or may be extended to two or three periods.

The Magic School Bus #205: The Magic School Bus: The Busasaurus

Students will be able to:
1. compare and contrast shapes and sizes of various dinosaurs.
2. conclude that there were more plant eaters than meat eaters.
3. explain that meat eaters were hunters, searching for food.
4. make predictions based on observed behaviors.
5. collect data.
6. explain that the information we have regarding dinosaurs has been gathered from fossils.
7. construct a model of a fossil.

Texas Assessment of Academic Skills (TAAS), Grade 4
#3: Demonstrate an understanding of geometric properties and relationships.
#4: Demonstrate an understanding of measurement concepts using metric and
#2: Sequence, order and/or classify scientific data and/or information.
#4: Interpret scientific data and/or information.
#5: Make inferences, form generalized statements, and/or make predictions using scientific data and/or information.

NCTM Standards for K-4
Standard 5: Estimation
Standard 13: Patterns and relationships

Per group of 2-4 students:
- roll of adding machine tape
- clear plastic cup
- sand
- 1 cup supersaturated salt solution
- 1 sponge, cut into the shape of a bone
- 1 9x12 inch sheet of light colored construction paper
- Student Worksheet #1

Give the students a piece of colored construction paper. Have them fold it in half.
Have them write the following titles at the top of each page. On the front of the folded paper, have them write "Things I Know About Dinosaurs". On the inside left, have them write "Things I Would Like to Know About Dinosaurs." On the inside right, have them write "Things I Learned About Dinosaurs." And on the back, have them write "My Favorite Dinosaur." Ask the students to make a list of things they know and things they would like to learn about dinosaurs.

To give students a specific responsibility while viewing the video, say, "Students in Ms. Frizzle's class have been visiting a dig. They have been talking with Dr. Skeledon about how scientists find fossils of dinosaurs. In order to learn more about these marvelous creatures, Ms. Frizzle is preparing to take the class on a very different kind of field trip. We are going to join them. You wrote 'Things I Know About Dinosaurs' on your paper. As we watch the video, if you find that you made a statement that is correct, put a check by it. If you find that your information was incorrect, draw a line through it. You also wrote 'Things I Would Like to Know About Dinosaurs.' If you find the answer to your questions, make a quick note and we'll talk about it later. You will see Liz, Ms. Frizzle's lizard, collecting data about the dinosaurs they see. You will be using a chart like Liz's to help you decide if most dinosaurs were meat eaters or plant eaters. You will need to watch closely to find out what these prehistoric animals preferred for lunch. Watch as the class begins its field trip. Find out where they are going and how long it will take to get there. Also watch for any changes you see taking place in the scenery as they travel."

BEGIN the video after people are seen walking backward, climbing into the jeep, and the jeep moving in reverse. The first frame students will see should be birds flying through the air and the bus appearing as a lighted spiral.
The first voice students hear should say, "We're going back in time. We've never done this before."

PAUSE when you see the bus covered with ice and snow and you hear Phoebe say, "Why is it so cold?" Ask, "Where have the students gone? (back in time) What changes did you see happening in the landscape as they traveled through time? (changes in the trees, water, weather, ice and snow) Why do you think it has become so cold? (accept all reasonable answers) How far back in time do you think the bus will carry the students? (accept all reasonable answers) Let's watch to find out why it has gotten so cold, and how far back in time the students are going to travel. Also see if you notice any other changes in the scenery." RESUME.

PAUSE when you hear the students say, "Sixty-seven million years!! That means we'll probably see....!" and you see the head of an Alamasaurus coming toward the children. Ask, "If you were one of the students standing there, how would you feel? When was the first ice age? (1 million years ago) How far back in time have the students traveled? (67 million years) This time period has a specific name. Do you remember what Ms. Frizzle called it? (the late Cretaceous)" [Note: You may wish to rewind the video if students don't remember.] "How do you think that word is spelled? (have students sound out the word or look it up in a dictionary) What other changes did you see in the environment? (ice thawing, volcano receding) What do you predict the dinosaur will do? Watch to see how this dinosaur reacts to the students. Also listen to learn its name." RESUME video.

PAUSE when you hear Ms. Frizzle say, "Now don't play with your food, little Alamasaurus." Ask, "What is the name of this dinosaur? (Alamasaurus) How did it react to the students? (passed them by, ate the leaves of the bush) Do you think it is a meat eater or a plant eater? Let's watch its behavior and you try to decide." RESUME video.
PAUSE when you see Ralphie look into the camera, put his hands to the sides of his head and say, "See, I knew this would happen. The dinos did them in." Ask, "What do you think? Was the Alamasaurus a plant eater or a meat eater? Ms. Frizzle presented a perplexing proposition. Were dinosaurs ferocious or friendly, sweet or savage, murderous or... As we watch the rest of the video, Liz will keep a chart on which to record data about the dinosaurs the students will encounter. In order to help us answer Ms. Frizzle's perplexing question, we will also keep records." Hand out Worksheet #1 and allow time for students to cut out the symbols for plants and meat. Explain the procedure to the students. Say, "As we meet each dinosaur, you should watch its behavior and decide whether it is a meat eater or plant eater. Then, using your chart, place either the symbol for plant or the symbol for meat next to that dinosaur. We will check your predictions with Liz's chart, and when we are sure we know the answer, you can glue your symbol to your chart. Now let's continue our trip and see which dinosaurs the students encounter next." RESUME.

PAUSE when you hear Carlos say, "Yeah. Bloodthirsty for Arnold." Ask, "What did you see the Parasaurolophus doing? (eating plants, drinking water) Predict whether these dinosaurs are plant eaters or meat eaters. Put the symbol for your choice on the chart. Let's watch and see if your prediction was correct. Also, see if you can find out what the structures on the Parasaurolophus' heads are for." RESUME.

PAUSE when you see Liz put a plant symbol on the chart and you hear, "Chalk up another vegetarian, Liz." Ask, "Were you correct? Glue the symbol for the plant next to the Parasaurolophus. What is the structure on their heads? (it helps them produce noises) As the class tries to reach Arnold and Phoebe, they meet another dinosaur. Watch carefully and see what you can find out about this creature." RESUME video.
PAUSE when you see the mother dinosaur pull berries from the tree, and you hear Keesha say, "She doesn't look to me like she wants to fight." Ask, "Where did the students land after the bus flipped through the air? (in a nest) Do you think this dinosaur is a meat eater or a plant eater? Put the symbol for your choice next to the picture of this dinosaur. Let's see if you are correct. Also watch to find out how this mother dinosaur feeds her young." RESUME. PAUSE when you hear Tim say, "Carlos, wait. They're not meat eaters. They're even good mothers." Ask, "Did you predict that this dinosaur would be a plant eater? Glue your symbol to the chart. How did the mother feed her young? (She regurgitated the berries she had eaten.) Many scientists once believed that dinosaurs laid their eggs and left them to hatch. Then the young had to take care of themselves. Is that what the class has found to be true of this dinosaur? (No, they were good mothers.) Watch the next part of this story and see what the class learns about the next dinosaur they meet. Arnold and Phoebe have been chasing the Ornithomimus that stole Arnold's egg. Watch and see what happens to the egg, the Ornithomimus, and Phoebe and Arnold." RESUME. PAUSE when you see the class standing on a hill overlooking a valley where dinosaurs are feeding, and you hear Ms. Frizzle say, "I knew the bus would be around here somewhere." Ask, "Look at the dinosaurs in the valley. What are they doing? (eating) Do you think they will be meat eaters or plant eaters? Put your symbol on the chart. Watch to see if you are correct." RESUME video. PAUSE when you hear Dorothy Ann say, "The Triceratops won't hurt us. They're plant eaters, just like the others," and you see Liz add a plant symbol to the chart. Ask, "Were you correct? Glue your symbol to the chart. All of the dinosaurs we have met so far are plant eaters. Does this mean that all dinosaurs were plant eaters? 
(no) And all the plant eaters seem to be gentle, even if they are extremely large. Does that mean that all dinosaurs were gentle? (no) As you watch the next part of this video, see if you can draw conclusions about the meat eating dinosaurs the class encounters." RESUME video. PAUSE when you hear, "It's a pack of Trilodons -- and they do eat meat." Ask, "Were the Triceratops plant eaters? Did you make the right prediction? Glue the symbol for plants on your chart. We suspected that the class would run into meat eating dinosaurs. Glue a symbol for "meat" on your chart. The Trilodons are trying to find lunch. Watch to see how the adult Triceratops protect their young." RESUME. PAUSE when you hear, "Looks to me like they want an easy lunch without a fight." Ask, "How did the adult Triceratops protect their young? (They formed a circle using their heads and horns as a shield.) Why didn't the Trilodons attack the Triceratops? (They just wanted an easy meal without any trouble. The horns scared them off.) All throughout this trip, Phoebe and Arnold have been chasing an Ornithomimus who stole Arnold's egg. Watch to see if or how they get it back." RESUME video. PAUSE when you hear Ralphie say, "Is it just me, or is that a real, live, Tyrannosaurus Rex behind them." Say, "Decide if you think the Tyrannosaurus Rex is a meat eater or plant eater. Look at his teeth. That should give you a clue. Watch to see if Phoebe and Arnold escape without becoming lunch." RESUME video. PAUSE when you see the Tyrannosaurus Rex run away, and you hear the students cheer, "Yeah! Way to go, Arnold. You did it!" Say, "Arnold and Phoebe are safe. What did you find out about the Tyrannosaurus Rex? (meat eater, didn't want to fight someone his own size, didn't want to get hurt) As we finish the next part of this video, you should listen to find out what the class has learned about dinosaurs. " RESUME video. 
Stop the video when you see Arnold shrink, and you hear Phoebe say, "It's a good thing you didn't stay home today, Arnold." Ask, "What did the students learn about dinosaurs?" (There are more plant eaters than meat eaters. The meat eaters wanted a quick meal without getting hurt. They were not blood thirsty monsters.) Say, "At the beginning of this lesson, I asked you to make a list of things you knew about dinosaurs. Look back at that list. As we viewed the video, you put a check by anything that you found to be true, and you put a line through anything that you thought was true but you learned to be untrue. Did you get new information? Did you have any of your questions answered? (Allow time for students to discuss any questions they answered or any they still have.) On your folder, turn to the page titled 'Things I Learned About Dinosaurs' and write at least three things you learned from watching this video. Later you will draw a picture of your favorite dinosaur on the back of your folder." "Remember that Ms. Frizzle's class has learned that all the information we have about dinosaurs has been gathered from finding fossils. Remember that a fossil is the remains of an organism that has been preserved in stone or some other material that does not allow decomposition. Before a fossil could form, what had to happen to the animal? (it had to die, it had to be covered up) We are going to look at one way that fossils form. Each group needs a clear cup. Fill the cup about 1/4 full of sand. The sponge on your table is shaped like a bone. That's because in this demonstration, it will act like a dinosaur's bone. Put the bone in the cup and cover it up with sand. Now we will take the liquid and pour it into the cup until the sand is thoroughly wet. The minerals in the water will soak into the sponge like the rain carries minerals in the soil into the bones. We will put the cups in a safe place and look at them again in a couple of days. 
What do you think will happen to the sponge?" (Allow students to make predictions and record them on a chart tablet or a piece of butcher paper to be hung in the room.)

Say, "You heard Ms. Frizzle say that dinosaurs came in all shapes and sizes. We often think of dinosaurs as being giants, and many were. However, there were small dinosaurs, also. I'm going to give you some sizes of dinosaurs. I have given each group a roll of adding machine tape. We are going to use that tape to make a comparison of the sizes of dinosaurs. You will need to use your ruler to measure a length of tape that matches the length of the dinosaur. After you have cut the tape, write the name of the dinosaur in large letters across it." [Show Overhead Transparency #1] Allow students time to measure, cut and label the adding machine tape. Display it in the room or in the hallway.

Have students write to the following requesting information on their dinosaur exhibits:

Dinosaur Valley State Park
P. O. Box 396
Glen Rose, Texas 76043

Dinosaur National Monument
P. O. Box 128
Jensen, UT 84035

Tyrrell Museum of Paleontology
P. O. Box 7500
Drumheller, Alberta
Canada T0J 0Y0

Field Museum of Natural History
Roosevelt Road at Lake Shore Drive
Chicago, IL 60605

Have each student do research in the library on at least ten other dinosaurs to find out whether they are meat eaters or plant eaters. Use the information gathered by the class to create a Venn diagram showing the relationship between the number of plant eaters and the number of meat eaters. Have each student research at least ten modern day animals and determine whether they are meat eaters or plant eaters. Create a class Venn diagram. Do the charts show similar information?

Art: Create Dino-notes. Cut 8 1/2 x 11 inch paper into fourths to make note sheets. Cut a potato in half. Sketch a dinosaur shape on the surface of the potato. Use a plastic knife to carefully cut around your shape and cut away the excess potato.
The dinosaur shape should be slightly raised. Coat the dinosaur with paint and print dinosaurs at the top of the note sheets.

Art: Create a class mural showing a scene from the late Cretaceous period. Be sure that the dinosaurs depicted in the mural are portrayed as accurately as possible. You may wish to introduce scale and have students calculate the size of the dinosaurs they will draw.

Science: Investigate information that can be collected from studying footprints. Cover a large area of the floor or a sidewalk with newspaper. Place a 4-5 foot strip of butcher paper on the newspaper. Have a tub of water at one end for washing after the activity. At the other end, have aluminum pie tins in which you have placed tempera paint. Have students take off their shoes and step into the paint. Then have them travel along the butcher paper. They can walk, hop, skip, or run. They can even put their hands and feet in the paint and walk across the butcher paper. You may want to use a different sheet of butcher paper for each action. Have students look at each pattern of prints that is left to understand how paleontologists can gather information from footprints. [Note: If you would rather not use paint, you can use black butcher paper and have students merely wet their feet and walk across the paper. The prints disappear when they dry, but you get the same effect if you study them before they dry.]

Language Arts: Ask students to pretend that they were transported back in time like Ms. Frizzle's class. Have them first draw a picture of their favorite dinosaur on the back of their folder. Then have them write a story about their encounter with that dinosaur. Using the picture of their favorite dinosaur, have the students write a story from the perspective of that dinosaur. What would life be like? What must the dinosaur do to survive? Is it the hunter or the hunted? What other animals would it encounter during the course of a day?
Math: Have students write the alphabet and assign each letter a number. [For example, A = 1; B = 2; C = 3 ... Z = 26] Challenge students to find the dinosaur whose name has the greatest value when all the letters are added:

d i n o s a u r
4 + 9 + 14 + 15 + 19 + 1 + 21 + 18 = 101

Give students the length of dinosaurs in feet. Have them convert this measurement

The Internet is a dynamic resource with addresses and sites that are constantly changing. Teachers should always check these internet sites themselves before sending students surfing on their own.

I. World Wide Web sites for student exploration

A. Honolulu Community College Dinosaur Exhibit
This site has a very informative audio and video tour of the museum which answers questions such as: How can you tell Triceratops was an herbivore or T-rex a carnivore? What traits made T-rex a good hunter?

B. Dinosaur Hall
This site has a searchable database which students can use to research the most current information on any aspect of dinosaurs. Topics specifically related to this unit might include: carnivore, herbivore, Cretaceous, Triceratops, or Tyrannosaurus Rex. It also has several links to other dinosaur related sites.

This web site provides a wealth of information on dinosaurs for both teachers and students in upper elementary grades to high school. It can be reached through Dinosaur Hall or on its own at this address. Several articles discuss the difference between science and non-science as related to dinosaur research and theory. One article dissects movies like Jurassic Park to separate the scientific fact from the fiction. After reading this article, students could make a chart comparing the aspects of the movie that are based on science fact (i.e. physical features of dinosaurs) with those that are primarily science fiction (i.e. cloning dinosaurs from fossilized blood samples).

D.
Chicago Field Museum of Natural History Life Over Time Exhibit
The site has an interactive tour of the museum with a teacher's guide of activities that can be used in conjunction with the on-line tour. Students can also download an audio clip of a weather forecast for the Triassic period and take an on-line dino trivia quiz.

II. E-Mail resources

A. Ask-a-Curator [email protected]
This service is provided by the Santa Barbara Museum of Natural History. Students or teachers may send in questions concerning dinosaurs to the resident experts in vertebrate zoology.

B. Dinosaur listserv
This listserv has regular postings, often of a technical nature, concerning current aspects of dinosaur research. It is probably most appropriate for teachers to use as a current resource rather than students. To subscribe, send a message to the following address: [email protected]. Leave the subject line blank. In the body type "subscribe dinosaur your name".

Dixon, Dougal, Questions and Answers About Dinosaurs, Kingfisher, New York, New York, 1993
Dixon, Dougal, Be a Dinosaur Detective, Lerner Publications, Minneapolis
Fuller, Mel, Dinosaurs and Prehistoric Life; Whole Language Theme Unit, Instructional Fair, Inc., Grand Rapids, MI, 1991
Pearce, Q. L., All About Dinosaurs, Little Simon, Published by Simon and Schuster Inc., New York, 1989
"Windows on Science", Optical Data, Warren, NJ, 1990
Side 6 - From Fossils, Dinosaurs and Geologic Time Travel
Chapter 6 - Life and Times of Hadrosaurs
Chapter 7 - Extinction of the Dinosaurs
Chapter 10 - Carnivorous Dinosaurs

1995-1996 National Teacher Training Institute / Austin
Master Teacher: Gayle Evertson

Click here to view the worksheet associated with this lesson.
Lesson Plan Database
Thirteen Ed Online
Incell Dot Plots in Microsoft Excel

Dot plots are a very popular and effective chart type. According to the Wikipedia article on dot plots, "Dot plots are one of the simplest plots available, and are suitable for small to moderate sized data sets. They are useful for highlighting clusters and gaps, as well as outliers. Their other advantage is the conservation of numerical information."

Today we will learn about creating in-cell dot plots using Excel. We will see how we can create a dot plot using 3 data series of some fictitious data. We will create something like this:

Note: If you are new to in-cell charting, I suggest you read the incell bar charts article to understand the concept.

1. Take your data and massage it a bit

Since we are doing an incell variation of the dot plot, we need to pre-process the data a little bit. Assuming we have data on revenues of 3 imaginary companies – MegaHard, Grape and Twogle – like this:

We need to normalize the data to some meaningful number like 100 (remember, incell graphs print some character for each unit in the data) so that the in-cell dot plot looks meaningful. After normalizing the data we will also need to calculate some helper columns so that we can develop the incell dot plot easily. The 3 helper columns will show:
- the smallest value in each row, minus 1
- the next smallest value in each row, minus the previous helper column, minus 2
- the largest value in each row, minus the previous two helper columns, minus 3

Helper columns?!? Why are we doing this? Helper columns (or intermediate values) are usual practice when we need to pre-process data for dashboards or charts. Once the chart is ready, I usually hide the helper columns as they do not really say anything. In our case, we are using helper columns since the formulas for plotting the incell dot plot are rather long, and we would make them even longer if we didn't use these.

2. Identify Symbols for Each Data Series

This is the simple job.
In our case I have shown the symbols we are going to use in the above image. You can find some interesting symbols like triangles, rectangles, circles etc. in a regular font like Arial. Just go to Menu > Insert > Symbol (or Insert > Symbol in the Ribbon) to find the symbols you like. Let us assume the symbols are in the range C5:E5.

3. Finally Write the Formulas That Generate the In-cell Dot Plot

Now comes the fun part. We have the normalized data in the range C16:E16, and the helper values in F16, G16, H16. For the first row of the dot plot, the formula looks like:

Huh! It has to be one of the longest formulas I have written in a while. I thought long and hard about how this formula can be explained and came up with the below illustration. Once you have the formula for one row, we just need to copy and paste it over the entire range to show a dot plot for each year of the data. That simple!

How to Generate 2 Series Dot Plots?

The 2 series dot plots have even simpler formulas, so I am leaving them to your imagination. But when you finish, the dot plot looks something like this:

Download the In-cell Dot Plot Template and Make Your Own Dot Plots

The downloadable workbook has examples for 2 series and 3 series in-cell dot plots. Go ahead and play with it.

Further Resources on Dot Plots

Dot plots are not new; there is quite a bit of material and tools available for you to understand and make dot plots. They have proven to be very effective tools for communicating small to medium series of data. I suggest you read a few of these articles to learn more about dot plots.
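If you want to experiment with the helper-column logic outside Excel, here is a minimal Python sketch of the same idea. This is my own illustration, not the formula from the downloadable workbook: the marker characters, company names, sample values and the normalization target of 100 are stand-in assumptions. It shows how the three helper columns (smallest value minus 1, then each subsequent value minus the earlier helpers and the symbols already printed) turn one row of normalized data into a single dot-plot string.

```python
# Sketch of the in-cell dot plot pre-processing described in the article.
# Marker characters and the normalization target are illustrative stand-ins.

SYMBOLS = {"MegaHard": "^", "Grape": "o", "Twogle": "#"}  # one marker per series
WIDTH = 100  # normalize each row so values fall on a ~100-character scale

def normalize(row, max_value):
    """Scale raw revenues to the 0-100 character scale used by the plot."""
    return {name: round(v / max_value * WIDTH) for name, v in row.items()}

def helper_columns(norm):
    """The three helper columns: the number of spaces to print before each
    symbol (each printed symbol itself occupies one character cell)."""
    a, b, c = sorted(norm.values())
    h1 = a - 1            # smallest value in the row, minus 1
    h2 = b - h1 - 2       # next smallest, minus previous helper, minus 2
    h3 = c - h1 - h2 - 3  # largest, minus previous two helpers, minus 3
    return h1, h2, h3

def dot_plot_row(norm):
    """Build the text 'chart' one row at a time: spaces, symbol, spaces, symbol..."""
    ordered = sorted(norm.items(), key=lambda kv: kv[1])  # series left to right
    return "".join(" " * gap + SYMBOLS[name]
                   for (name, _), gap in zip(ordered, helper_columns(norm)))

data = {"2005": {"MegaHard": 40, "Grape": 12, "Twogle": 6},
        "2006": {"MegaHard": 44, "Grape": 19, "Twogle": 11}}
max_value = max(v for row in data.values() for v in row.values())
for year, row in data.items():
    print(f"{year} |{dot_plot_row(normalize(row, max_value))}|")
```

Each symbol lands exactly at its normalized value: the helper subtractions account for the characters already printed, which is the same bookkeeping the long Excel formula does with its repeated-character trick and the helper cells F16:H16.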
Yale-New Haven Teachers Institute

This curriculum unit will begin with the premise that works of literary art are created by artists who live, experience and interpret various cultures. This approach to teaching assumes that the students belong to a culture or cultures and that they simultaneously share and/or rebel against the beliefs and prejudices that are part of their own culture. Students will be asked not only to develop their own interpretations of what they read and see, but also to ask and answer the four W’s (why, when, where, whom). Students will be encouraged to develop and use research, critical thinking, geography, vocabulary and organizational skills as they complete various individual and cooperative group activities. All of these skills will be targeted at expanding the students’ ability to recognize and respect the beliefs and practices of other cultural groups and their roles in the development of the history of the United States during the Civil Rights Movement.

At the completion of this unit, the students, working cooperatively, will publish a newsletter, compile a group portfolio, paint a collage, construct a time line and graph display, and write and produce a skit. These activities will demonstrate what the students have learned. The students will be a heterogeneous group of seventh graders in four classes reading at or below grade level. The planned activities will allow all to invest their special abilities as well as learn from peers. It is my hope that the students will become more aware and appreciative of literature outside of the Western European tradition. This will be accomplished by challenging the students to include intellectual and philosophical achievements of African Americans, Chicano Americans and Native Americans.
The unique aspects of these groups will be stressed, along with their commonalities, via lectures, videos, readings, oral discussions, role-playing, simulation games, dramatization, guest artists and field trips to museums.

(Recommended for History or Social Studies, Grades 7-12)

Politics, Afro-Americans, Race Relations, Native Americans, Civil Rights
In August 2006, the United Nations Environment Programme (UNEP) released an atlas entitled "Africa's Lakes: Atlas of Our Changing Environment." The atlas relies heavily on Landsat imagery from the past 34 years to show changes in lakes around the African continent. Above are two Landsat images of the Djoudj Sanctuary in Senegal that are featured in the atlas.

The atlas tells us: "Situated in the Senegal river delta, the Djoudj Sanctuary is a wetland of 16 000 ha, comprising a large lake, referred to as Lake Djoudj in this publication, surrounded by streams, ponds and backwaters. These two images show the Djoudj Sanctuary before and after the construction of the Diama Dam."

"The image from September 1979 shows the impact of drought on the Djoudj Sanctuary, while the image from November 1999 shows rejuvenation of the sanctuary wetlands due to the significant floods of that year. The two images vividly depict the impact of climate variability on the Djoudj Sanctuary—and demonstrate the broader need for close monitoring of the impacts of climate variability and climate change on lake environments."
This essay has been submitted by a student. This is not an example of the work written by our professional essay writers.

Critically examine a range of theoretical approaches to learning and communication. Discuss how the learning and communication theories apply to your own teaching and promote inclusive practice.

There are four main theoretical approaches to learning, these being Behaviourism, Cognitivism, Constructivism and Humanism, and most learning theories tend to fall into one of these paradigms.

Behaviourism in principle refuses to acknowledge the internal mechanisms of learners. Pioneered by I.P. Pavlov (1849 to 1936), with further models developed by Dr John Watson, E.J. Thorndike and B.F. Skinner, behaviourists believed that people learn through external stimuli and have no free thought of their own. Learners can be conditioned by external stimuli, and all behaviour can be explained without considering the consciousness or mental state of the learner. Their theories were centred on cause and effect, reward and punishment of the learner. Learners were passive and would respond to reinforcement and environmental stimulus. This theory of teaching has a very regimental approach to learning and development.

Behaviourism is still employed today in many learning arenas, and it is the author's view that one of the greatest employers of this approach is the military, in the training of its new recruits. The recruits are taught through repetitiveness, rewarded through praise, acceptance and freedom, and punished by ridicule, increased workload, rejection and loss of freedoms. The two first pioneers of behaviourism, Vladimir M. Bekhterev (1857 to 1927) and I.P. Pavlov (1849 to 1936), both studied at the military academy in St Petersburg (C. Boeree, 2000, Online). It was here where they first formulated their behaviourist ideology.
Other learning institutions still employ this approach, if only in part, and indeed its theories play a major role in most children's development both at home and school. It is a useful technique for controlling younger children's behaviour and teaching them right from wrong (punishing the bad and rewarding the good). Stone & Nielson (1982, p.291) make reference to the behaviourist approach in child development:

General findings suggest that a careful combination of reward with mild punishment when appropriate is most effective for learning. However most of us haven't the skill to provide the optimum combination.

The above refers to the lack of competence in acting out the model. Teachers or parents should be objective in their delivery of punishment. Influences like anger, retaliation, stress and any other external forces should not have any bearing on the punishment given. The author believes that this can be a major downfall of the behaviourist approach: it is very difficult for teachers or parents to maintain a constantly clear state of mind. Punishment can also have adverse effects on the person being punished; it can lead to anger, retaliation and strained teacher-learner relationships, thus hindering the learning process. Punishments should be made clear from the outset, be fit for purpose and be given uniformly.

I myself adopt a slightly behaviourist approach in my Further Education classes (mainly younger students). If a student is constantly late, i.e. more than twice in a row (without good reason), or is being disruptive in a class, I will refuse to sign their E.M.A. sheet as a form of punishment. I constantly use reward in the form of praise (I believe this to be a key motivator for students); however, reward can have negative effects on learning in that students may only perform enough to gain reward and not to their ability.
Although it is the author's belief that this approach works in some instances for younger students, it is not as effective for adult learners and as such is not employed by the author on Higher Education courses. It is also the belief of the author that students are not passive. Most teachers in the UK have behaviourist approaches enforced onto them, having to write and meet objectives and learning outcomes:

Behavioural objectives were written descriptions of specific, terminal behaviours that were manifested in terms of observable, measurable behaviour. (Saettler, 1990, Online)

In the construction management sector we have practical sessions that lend themselves more to behaviourism, for example the use of surveying equipment, where learning is gained through repetitiveness and familiarisation. Good development is rewarded with a pass; poor development is punished by the removal of reward, which is replaced with further instruction. Objectives follow Gagné and Briggs' model for writing objectives (Saettler, 1990, Online).

Gagné and Briggs' Model
Tools and Constraints

This was a method employed by other staff at the college and was passed onto myself, and until now I had no knowledge that this was a behaviourist approach adopted by the department. The way in which the education system is funded and managed in the UK means constraints are placed on the educational establishments themselves. All the objectives for courses have to be met within a given time frame, thus employing the behaviourist approach.

Cognitivism replaced behaviourism as the dominant ideology in the 1960's. Unlike behaviourism, which bases itself on environmental stimuli, cognitivism focuses on the inner workings of the mind, mental activities, the 'black box'. In other words, cognitivists were concerned with cognition, the act or process of knowing. Information coming in gets processed and then gives certain outcomes.
The cognitive learning theory claims that learning is a relatively permanent change in the learner's mental picture, due to an experience that occurs by adding new information into an existing understanding in the mind. While cognitivists allow for the use of skill and drill exercises in the memorisation of facts, formulae and lists, they place greater importance on strategies that help students actively assimilate and accommodate new material (Graduate Student Instructor, 2009, Online).

Cognitivism and constructivism are very much alike, and constructivism was founded on cognitive principles. However, constructivism places much more emphasis on the social context and culture. Constructivists see teachers as being the providers of tools to aid the students' learning (Overview of Constructivism, 2010, Online).

Jean Piaget's studies in learning development had considerable influence on cognitivism, specifically the notion, drawn from Gestalt theory, that knowledge is organised and structured. It is a view that for learning to occur it must be incorporated within existing memories and that the new experience and prior knowledge must overlap. Cognitivists believe that this happens in two ways. Firstly, assimilation is where the mind takes newly learnt information and applies this to what it already knows. Secondly, accommodation is where preconceived ideas are adjusted to suit new information. Piaget called these parcels of memory in our brain 'schemas'. He (Piaget) was the first psychologist to make a systematic study of cognitive development. His contributions include a theory of cognitive development, detailed observational studies of cognition in children, and tests to reveal different cognitive abilities (McLeod, 2007, Online). Of importance, especially to theorists of human learning, is Piaget's emphasis on four distinct stages of cognitive development, each categorised by different forms of thought.
This model has come under criticism because of the inflexibility of the ages assigned to each stage of development, and it has limited use in adult education. Lev Vygotsky (1896-1934) was a Russian psychologist who developed his theories around the same time as Piaget. Vygotsky died at the age of 38 while his theories were still in their infancy. The fundamental difference between Vygotsky and Piaget is that Vygotsky believed learning was guided not by age but by social influences. Two significant concepts from Vygotsky are, first, the 'more knowledgeable other', whereby learning is facilitated through someone who has a better understanding (this could be a teacher, adult or peer), and secondly the 'zone of proximal development', which is interlinked with the first, as it states that someone will learn more with initial guidance and encouragement. It was from this ideology that social constructivism was born (Overview of Constructivism, 2010, Online). The Gestalt moment, or 'getting the knack' of something, probably best describes cognitivism. Suddenly being able to ride a bike is a good analogy: the learning happens in a few moments and is permanent, although it may have taken a long time of seemingly little progress to get to that step (Atherton, 2005, Online). I would appear to have adopted a cognitive constructivist style in my teaching. Firstly, I establish the students' prior knowledge of the subject matter through questioning and then build on that knowledge. Secondly, I give an example that the whole class works through. Once I am satisfied they have worked through the example correctly, a second example is given for the learners to do individually or with support from their peers. If they approach me for help I will not give the answer but try to guide them to it using their own thought process.
This is probably because that is how I like to approach problems myself; I feel that it gives me a better understanding, so that I can then retrieve the knowledge and apply it to similar problems. This shows some correlation with Vygotsky's 'four classroom principles' (Overview of Constructivism, 2010, Online). Humanist ideology gained momentum in the 1960s; two of its most prominent founders were Abraham Maslow (1908-1970) and Carl Rogers (1902-1987). In principle, the humanistic theory of learning is based on a human's personal drive to fulfil their potential. It recognises the freedom and potential of humans and sees the teacher's role as one of facilitator. Learning is student-centred, with the focus placed on developing self-actualised people through cooperation and support. Maslow developed his Hierarchy of Needs in 1943. This motivational theory is based on a five-tier model: for humans to meet their full potential they must meet all the requirements of the model. Carl Rogers developed his humanistic teaching theory of facilitative learning. The basic principle of the ideology is that learning will take place through facilitation in a comfortable atmosphere. Other key features are (L. Dunn, 2000, Online):
- A belief that human beings have a natural eagerness to learn.
- There is some resistance to, and unpleasant consequences of, giving up what is currently held to be true.
- The most significant learning involves changing one's concept of oneself.
Facilitative teachers are:
- Less protective of their constructs and beliefs than other teachers.
- More able to listen to learners, especially to their feelings.
- Inclined to pay as much attention to their relationship with learners as to the content of the course.
- Apt to accept feedback, both positive and negative, and to use it as constructive insight into themselves and their behaviour.
Learners, in turn:
- Are encouraged to take responsibility for their learning.
- Provide much of the input for the learning, which occurs through their insights and experiences.
- Are encouraged to consider that the most valuable evaluation is self-evaluation, and that learning needs to focus on factors that contribute to solving significant problems or achieving significant results.
One model of teaching that has been developed on the principles of humanism is Experiential Learning, produced by David Kolb. Kolb believes that "learning is the process whereby knowledge is created through the transformation of experience" (Learning Theories, 2009, Online). It is based on four principles (Learning Theories, 2009, Online): the four-stage model shows how experience is developed through reflection, reflection into concepts, concepts into testing (experimentation), and testing back into experience. I believe this model to be beneficial in some areas of teaching, especially andragogy and practical application for various professions. The humanistic approach to teaching certainly has its place within the academic arena: drawing on students' knowledge and experience, encouraging learners to take some responsibility for their learning, self-reflection, motivation and facilitation are all key concepts that (in the author's view) should be employed in the classroom. Deep and surface learning refer to the ways in which individuals learn from studying. The two approaches do not mean that students fall into one category or the other; in fact either approach may be used at any given time. Surface learning applies to the memorisation of facts and formulae, which can be recalled from the learner's memory and used when required. Deep learning applies to the ability to understand the whole picture and the reasons behind the facts and formulae, and then apply them to new understanding (Engineering Subject Centre, 2009, Online). It can be concluded, in the author's view, that this application bears the traits of cognitivism. The benefit of deep learning theory is that once learners are aware of a problem and understand
Update: At the original time of writing, C/2017 U1 was assumed to be a comet, but follow-up observations by the Very Large Telescope in Chile on Oct. 25 found no trace of cometary activity. The object's name has now been officially changed to A/2017 U1, as it is more likely an interstellar asteroid, not a comet. Comets and asteroids usually originate from the outermost reaches of the solar system — they're the ancient rocky, icy debris left over from the formation of the planets 4.6 billion years ago. However, astronomers have long speculated that comets and asteroids might escape other star systems, traverse interstellar distances and occasionally pay our solar system a visit. And looking at C/2017 U1's extreme hyperbolic trajectory, it looks very likely it's not from around these parts. "If further observations confirm the unusual nature of this orbit this object may be the first clear case of an interstellar comet," said Gareth Williams, associate director of the International Astronomical Union's Minor Planet Center (MPC). A preliminary study of C/2017 U1 was published earlier today. (Since this statement, follow-up observations have indicated that the object might be an asteroid and not a comet.) According to Sky & Telescope, the object entered the solar system at the extreme speed of 16 miles (26 kilometers) per second, meaning that it is capable of traveling a distance of 850 light-years over 10 million years, a comparatively short period on cosmic timescales. Spotted on Oct. 18 as a very dim 20th-magnitude object, astronomers calculated its trajectory and realized that it was departing the solar system after surviving a close encounter with the sun on Sept. 9, coming within 23.4 million miles (0.25 AU). Comets would vaporize at that distance from the sun, but as C/2017 U1's speed is so extreme, it didn't have time to heat up.
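The quoted figures are easy to sanity-check. The short Python sketch below (not from the article; constants rounded, purely illustrative) converts the quoted 26 km/s entry speed into the distance covered over 10 million years:

```python
# Back-of-the-envelope check: at ~26 km/s, how far does the object
# travel in 10 million years?
SECONDS_PER_YEAR = 3.156e7       # ~365.25 days
KM_PER_LIGHT_YEAR = 9.461e12

speed_km_s = 26.0                # quoted entry speed
time_years = 10e6                # 10 million years

distance_km = speed_km_s * time_years * SECONDS_PER_YEAR
distance_ly = distance_km / KM_PER_LIGHT_YEAR
print(f"{distance_ly:.0f} light-years")  # roughly 870, in line with the quoted ~850
```
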
“It went past the sun really fast and may not have had time to heat up enough to break apart,” said dynamicist Bill Gray. Gray estimates that the object is approximately 160 meters wide with a surface reflectivity of 10 percent. But probably the coolest aspect of this discovery is the possible origin of C/2017 U1. After calculating the direction from which it entered the solar system, it appears to have come from the constellation of Lyra, not far from the star Vega. For science fiction fans this holds special meaning: that's the star system where the SETI transmission originated in the Jodie Foster movie Contact. One hundred and thirty million years ago, in a galaxy 130 million light-years away, two neutron stars met their fate, merging as one. Trapped in a gravitational embrace, these two stellar husks spiraled closer and closer until they violently ripped into one another, causing a detonation that reverberated throughout the cosmos. On August 17, the U.S.-based Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Italian Virgo gravitational wave detector felt the faint ripples in spacetime from that ancient neutron star collision washing through our planet. Until now, LIGO and Virgo had only confirmed the collisions and mergers of black holes, so the fact that a nearby (a relative term in this case) neutron star merger had been detected was already historic. But the implications of this particular neutron star signal, which is comparatively weak next to the black hole mergers that came before it, are so profound that I've been finding it hard to put this grand discovery into words (though I have tried). Why It Matters With regards to gravitational waves, I feel I've described each gravitational wave discovery as "historic" and "a new era for astronomy" since their first detection on Sept.
15, 2015, but the detection of GW170817 may well trump all that have come before it, even though the signal was generated by neutron stars and not black hole heavyweights. The thing with black holes is that when they collide and merge, they don't necessarily produce electromagnetic radiation (i.e. visible light, X-rays or infrared radiation). They can go "bump" in the cosmic night and no intelligent being with a conventional telescope would see it happen. But in the gravitational domain, black hole mergers echo throughout the universe; their gravitational waves travel at the speed of light, warping spacetime as they propagate. To detect these "invisible" waves, we must build instruments that can "see" the infinitesimal wobbles in the fabric of spacetime itself, and this is where laser interferometry comes in. Very precise lasers are fired down the miles-long tunnels of the "L"-shaped buildings at the two LIGO detectors (in Washington and Louisiana) and the Virgo detector near Pisa. When gravitational waves travel through us, these laser interferometers can measure the tiny spacetime warps. The more detectors that measure the same signal, the more precise the observation, and scientists can then work out where (and when) the black hole merger occurred. There are many more details that can be gleaned from the gravitational wave signal of black hole mergers, of course — including the progenitor black holes' masses, the merged mass, black hole spin etc. — but for the most part, black hole mergers are purely a gravitational affair. Neutron stars, however, are a different beast and, on Aug. 17, it wasn't only gravitational wave detectors that measured a signal from 130 million light-years away; space telescopes on the lookout for gamma-ray bursts (GRBs) also detected a powerful burst of electromagnetic radiation in the galaxy NGC 4993, thereby pinpointing the single event that generated both the gravitational waves and the GRB. And this is the "holy shit" moment.
As Caltech’s David H. Reitze puts it: “This detection opens the window of a long-awaited ‘multi-messenger’ astronomy.” What Reitze is referring to is that, for the first time, both gravitational waves and electromagnetic waves (across the EM spectrum) have been observed coming from the same astrophysical event. The gravitational waves arrived at Earth slightly before the GRB was detected by NASA’s Fermi and ESA’s INTEGRAL space telescopes. Both space observatories recorded a short gamma-ray burst, a type of high-energy burst that was theorized (before Aug. 17) to be produced by colliding neutron stars. Now scientists have observational evidence that these types of GRBs are produced by colliding neutron stars as the gravitational wave fingerprint unquestionably demonstrates the in-spiraling and merger of two neutron stars. This is a perfect demonstration of multi-messenger astronomy; where an energetic event can be observed simultaneously in EM and gravitational waves to reveal untold mysteries of the universe’s most energetic events. Another Nod to Einstein The fact that the gravitational waves and gamma-rays arrived at approximately the same time is yet another nod to Einstein’s general relativity. The century-old theory predicts that gravitational waves should travel at the speed of light and, via this brand spanking new way of doing multi-messenger astronomy, physicists and astronomers have again bolstered relativity with observational evidence. But why did the gravitational waves arrive slightly before the GRB? Well, NASA’s Fermi team explains: “Fermi’s [Gamma-ray Burst Monitor instrument] saw the gamma-ray burst after the [gravitational wave] detection because the merger happened before the explosion,” they said in a tweet. 
In other words, when the two neutron stars collided and merged, the event immediately dissipated energy as gravitational waves that were launched through spacetime at the speed of light — that's the source of GW170817 — but the GRB was generated shortly after. Enter the Kilonova As the neutron stars smashed together, huge quantities of neutron star matter were inevitably blasted into space, creating a superheated, dense volume of free neutrons. Neutrons are subatomic particles that, along with protons, form the building blocks of atoms, and if the conditions are right, the neutron star debris will undergo the rapid neutron capture process (known as the "r-process"), where neutrons combine with one another faster than the newly-formed radioactive particles can decay. This mechanism is responsible for synthesizing elements heavier than iron (elements lighter than iron are formed through stellar nucleosynthesis in the cores of stars). For decades astronomers have been searching for observational evidence of the r-process in action, and now they have it. Soon after the merger, massive amounts of debris erupted in a frenzy of heavy element creation, triggering an energetic eruption known as a "kilonova" alongside the short GRB. The optical counterpart of the event was cataloged as "SSS17a." The Golden Ticket Follow-up observations by the Hubble Space Telescope, the Gemini Observatory and the ESO's Very Large Telescope have all detected spectroscopic signatures in the afterglow consistent with the r-process taking place at the site of the kilonova, meaning heavy elements are being formed and, yes, it's a goldmine. As in: there's newly-synthesized gold there. And platinum. And all the other elements heavier than iron that aren't quite so sexy. And there's lots of it. Researchers estimate that that single neutron star collision produced hundreds of Earth-masses of gold and platinum, and they think that neutron star mergers could be the energetic process that seeds the galaxies with heavy elements (with supernovas coming second).
So, yeah, it's a big, big, BIG discovery that will reverberate for decades to come. The best thing is that we now know that our current generation of advanced gravitational wave detectors is sensitive enough not only to detect black holes merging billions of light-years away, but also to detect the nearby neutron stars that are busy merging and producing gold. As more detectors are added and as the technology and techniques mature, we'll be inundated with merging events big and small, each one teaching us something new about our universe.
Parasites in the Bloodstream A variety of parasite species, including protozoans and helminths, are found in human blood at some stage in their life cycle. These include malaria parasites, trypanosomes, babesias, and the microfilariae of several species of filarial nematodes. The characteristic stage of each is found either moving freely among the blood cells or inhabiting the red blood cells as an intracellular parasite. When present in sufficient numbers, organisms may be found in a single drop of blood. Often, however, their numbers are so small that they will be found only in large or concentrated blood samples.
- It is recommended to collect fingerstick blood droplets onto a microscope slide at the bedside.
- For confirmation and thick smear examination, lavender (EDTA) topped tubes are collected via venipuncture.
Skin Puncture
- Site selection: Finger puncture on the palmar surface of the fingertip is preferred for blood films for parasites. The middle or "ring" fingers are generally used.
- Warming the site: The skin area to be punctured should be warmed to increase the blood flow. Depending on the physical setting and the patient's condition, the hands can be warmed by immersing them in hot water, by briskly rubbing the area, or by covering it with a hot moist towel.
- Cleaning the site: The puncture site should be cleaned and disinfected with gauze squares or commercial non-cotton alcohol preparations soaked in 70% alcohol, then wiped dry with sterile gauze or allowed to air-dry prior to puncture. Cotton should not be used to clean the finger prior to skin puncture because loose fibers may lodge in the blood, resulting in confusing artifacts.
- The fingers must be dry before sticking so that the drop of blood will round up on the finger and not run down the finger or hand. Also, any remaining alcohol mixing with the blood will "fix" the red cells, making the thick films unsuitable for staining.
- Technique: The finger is punctured with a sterile, non-reusable lancet, deeply enough to allow collection of a sufficient amount of free-flowing blood to prepare thick and thin films. After collection of the specimen, pressure should be applied to the puncture site with sterile cotton or sterile gauze until bleeding stops. Then a bandage should be applied to the puncture site.
- Recommendation for use: Venipuncture is usually not recommended for obtaining blood for parasite examinations (malaria, Babesia or hemoflagellates) unless it is certain that well-prepared blood smears will not otherwise be received. Since Plasmodium falciparum schizonts concentrate in the microvasculature, skin-puncture thick and thin films have a higher sensitivity than venipuncture.
- Use of Anticoagulants: Sometimes anticoagulants may interfere with adhesion of the blood to the slide during smear preparation and with proper staining of the parasites. This is particularly true when the ratio of blood to anticoagulant in the collection tube is incorrect (too little blood for the amount of anticoagulant).
Timing of blood collection
- BLOOD SHOULD BE COLLECTED IMMEDIATELY UPON SUSPICION OF MALARIA, although the optimum time is about midway between chills to ensure obtaining stages on which species identifications can be made.
- Since single films may not reveal organisms, successive films at 6, 12 or 24 hours are sometimes necessary.
- Blood samples must be taken before any anti-malarial drugs are used to ensure demonstration of organisms if the patient does have malaria.
- AT THE TIME OF ADMISSION OF THE PATIENT, four blood films, 2 thin and 2 thick (one slightly thinner than usual, one regular), are prepared.
Other blood parasites:
- Blood samples should be collected in the early phases of the disease (within one month) for optimal recovery.
- Contact the Microbiology laboratory for more information on the optimal time to collect specific parasites.
Preparing blood smears/films:
- Thin films – A well-prepared film is thick at one end and thin at the other (one layer of evenly distributed RBCs with no cell overlap). The thin, feathered end should be at least 2 cm long, and the film should occupy the central area of the slide, with free margins on each side.
- Place one drop (approximately 0.05 ml) of blood near one end of a glass microscope slide.
- Hold a second, narrower spreader slide with polished edges at a 45º angle and draw it back into the drop of blood. Allow the blood to spread almost to the width of the slide.
- Rapidly and smoothly push the spreader slide to the opposite end of the slide, pulling the blood behind it.
- Thick films – Thick films should be at least the size of a dime or nickel (1.5 to 2.0 cm diameter) and just thick enough that newspaper print can be read through them. Films may be prepared by either the "contact" or the "puddle" method. For the contact method:
- Touch the slide to the drop of blood (which should be rounded up on the finger).
- Rotate the slide to form a circular film of the appropriate size and density. If the drop is not large enough or the blood begins to clot, this method will not work well.
A time series is a set of data collected at successive points in time or over successive periods of time. A sequence of monthly data on new housing starts and a sequence of weekly data on product sales are examples of time series. Usually the data in a time series are collected at equally spaced periods of time, such as hour, day, week, month, or year. A primary concern of time series analysis is the development of forecasts for future values of the series. For instance, the federal government develops forecasts of many economic time series such as the gross domestic product, exports, and so on. Most companies develop forecasts of product sales. While in practice both qualitative and quantitative forecasting methods are utilized, statistical approaches to forecasting employ quantitative methods. The two most widely used methods of forecasting are the Box-Jenkins autoregressive integrated moving average (ARIMA) and econometric models. ARIMA methods are based on the assumption that a probability model generates the time series data. Future values of the time series are assumed to be related to past values as well as to past errors. A time series must be stationary, i.e., one which has a constant mean, variance, and autocorrelation function, in order for an ARIMA model to be applicable. For nonstationary series, sometimes differences between successive values can be taken and used as a stationary series to which the ARIMA model can be applied. Econometric models develop forecasts of a time series using one or more related time series and possibly past values of the time series.
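To make the differencing idea concrete, here is a deliberately simplified sketch in pure Python (the toy series and all numbers are invented): it takes first differences of a trending, nonstationary series, fits an AR(1) model to the differences by ordinary least squares, and produces a one-step forecast. A real analysis would use a dedicated statistics package rather than this hand-rolled fit.

```python
# Toy nonstationary (trending) series
series = [100, 103, 105, 110, 112, 117, 119, 124, 126, 131]

# Step 1: first differences, often enough to make a trending series stationary
diffs = [b - a for a, b in zip(series, series[1:])]

# Step 2: fit diff[t] = c + phi * diff[t-1] by ordinary least squares
x, y = diffs[:-1], diffs[1:]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
phi = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
      sum((xi - mx) ** 2 for xi in x)
c = my - phi * mx

# Step 3: forecast the next difference, then undo the differencing
next_diff = c + phi * diffs[-1]
forecast = series[-1] + next_diff
print(round(forecast, 2))  # ~132.81
```
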
The econometric approach involves developing a regression model in which the time series is forecast as the dependent variable; the related time series, as well as past values of the time series, are the independent or predictor variables. The statistical methods discussed above generally focus on the parameters of populations or probability distributions and are referred to as parametric methods. Nonparametric methods are statistical methods that require fewer assumptions about a population or probability distribution and are applicable in a wider range of situations. For a statistical method to be classified as a nonparametric method, it must satisfy one of the following conditions: (1) the method is used with qualitative data, or (2) the method is used with quantitative data when no assumption can be made about the population probability distribution. In cases where both parametric and nonparametric methods are applicable, statisticians usually recommend using parametric methods because they tend to provide better precision. Nonparametric methods are useful, however, in situations where the assumptions required by parametric methods appear questionable. A few of the more commonly used nonparametric methods are described below. Assume that individuals in a sample are asked to state a preference for one of two similar and competing products. A plus (+) sign can be recorded if an individual prefers one product and a minus (−) sign if the individual prefers the other product. With qualitative data in this form, the nonparametric sign test can be used to statistically determine whether a difference in preference for the two products exists for the population. The sign test also can be used to test hypotheses about the value of a population median. The Wilcoxon signed-rank test can be used to test hypotheses about two populations.
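The sign test described above reduces to an exact binomial calculation. In this hedged sketch (the counts are invented for illustration), 18 of 25 respondents prefer one product, and we compute a two-sided p-value under the null hypothesis of no preference:

```python
from math import comb

n, plus = 25, 18   # 18 "+" signs out of 25 respondents

# Under H0 the number of "+" signs is Binomial(n, 0.5);
# double the upper-tail probability for a two-sided test.
p_value = 2 * sum(comb(n, k) for k in range(plus, n + 1)) / 2 ** n
print(round(p_value, 4))  # ~0.0433
```

At the usual 0.05 significance level this would suggest a real difference in preference between the two products.
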
In collecting data for the Wilcoxon signed-rank test, each element or experimental unit in the sample must generate two paired or matched data values, one from population 1 and one from population 2. Differences between the paired or matched data values are used to test for a difference between the two populations. The Wilcoxon signed-rank test is applicable when no assumption can be made about the form of the probability distributions for the populations. Another nonparametric test for detecting differences between two populations is the Mann-Whitney-Wilcoxon test. This method is based on data from two independent random samples, one from population 1 and another from population 2. There is no matching or pairing as required for the Wilcoxon signed-rank test. Nonparametric methods for correlation analysis are also available. The Spearman rank correlation coefficient is a measure of the relationship between two variables when data in the form of rank orders are available. For instance, the Spearman rank correlation coefficient could be used to determine the degree of agreement between men and women concerning their preference ranking of 10 different television shows. A Spearman rank correlation coefficient of 1 would indicate complete agreement, a coefficient of −1 would indicate complete disagreement, and a coefficient of 0 would indicate that the rankings were unrelated. Statistical quality control refers to the use of statistical methods in the monitoring and maintaining of the quality of products and services. One method, referred to as acceptance sampling, can be used when a decision must be made to accept or reject a group of parts or items based on the quality found in a sample. A second method, referred to as statistical process control, uses graphical displays known as control charts to determine whether a process should be continued or should be adjusted to achieve the desired quality.
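For rankings with no ties, the Spearman rank correlation coefficient described earlier has the simple closed form rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)), where d is the difference between paired ranks. A small sketch with two invented rankings of 10 television shows:

```python
# Two rankings of the same 10 items (invented for illustration)
men   = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
women = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]

n = len(men)
d_squared = sum((m - w) ** 2 for m, w in zip(men, women))
rho = 1 - 6 * d_squared / (n * (n ** 2 - 1))
print(round(rho, 4))  # 0.9394, i.e. near-complete agreement
```
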
Presentation on theme: "Impact of European Settlement on the Indigenous People of Australia."— Presentation transcript:
Slide 1: Impact of European Settlement on the Indigenous People of Australia.
Slide 2: Arrival. When Europeans arrived in Australia they did not recognize Indigenous Australians' complex cultural and social systems. Although Europeans were ordered to be friendly with the Indigenous people, there were cultural misunderstandings and conflict from the first days of settlement.
Slide 3: Disease. The Europeans introduced many foreign diseases that had a devastating impact on the indigenous population. One such disease was smallpox, a highly infectious and contagious illness. It caused headaches, fever and a rash covering a person's hands and feet, which often meant people could not walk or feed themselves. Indigenous people had no immunity to smallpox, and by May 1789 it is estimated that it had killed half the indigenous people around Port Jackson.
Slide 4: Assimilation. Assimilation is when an ethnic group becomes a part of a dominant culture. Many Europeans expected indigenous people to assimilate into European culture. Some indigenous people dressed like Europeans, started speaking English, converted to Christianity, worked for European employers and took on the European way of life. Some became native police and assisted the Europeans with their expansion throughout Australia. Many indigenous people were not fully accepted or welcomed by European society, and many suffered depression and substance abuse.
Slide 5: A Battle for Land. Europeans declared Australia terra nullius, meaning that it was not owned by anyone. The Europeans did not recognize that indigenous people had lived on the land for thousands of years and relied on it for their survival. Conflict grew as Europeans started to cut trees and clear land for farming. Indigenous traditions, sacred sites and sources of food were disrupted by the Europeans.
Slide 6: Frontier Battles. Europeans became frustrated with Indigenous people blocking their expansion throughout Australia. There were battles and massacres between the Europeans and Indigenous people. Some of the settlers were extremely violent and cruel to the indigenous people. The indigenous people were pushed out of the most fertile and habitable land, and the indigenous population fell rapidly. There are still debates about this history: some historians argue that the indigenous people died mainly of disease, while other historians believe that many were murdered in what can be deemed a genocide.
Slide 7: Genocide. Genocide is a term used to describe the deliberate and systematic destruction of a cultural, ethnic or political group. Some historians believe that the European settlement of Australia was a genocide of indigenous people and culture. Other historians refute this and claim that the rapid decline of the indigenous population and the loss of culture were due to 'natural' causes such as disease.
Slide 8: Settlement or Invasion. Traditionally the European arrival has been referred to as the "Australian settlement". In recent years historians have started to consider the devastating impact that European arrival had on indigenous life, and the arrival has begun to be seen as an "invasion". The 26th of January marks the anniversary of European arrival in Australia and is known as Australia Day. Many people now refer to the day as Invasion Day. What do you think? Should Australia continue to recognize the 26th of January as its national day?
Hypothesis Testing of proportion-based samples
Part 2 of our Introduction to Hypothesis Testing series.
by Daniel Bray, posted 20/03/2020
In part one of this series, I introduced the concept of hypothesis testing and described the different elements that go into using the various tests. It ended with a cheat-sheet to help you choose which test to use based on the kind of data you're testing. In this second post I will go into more detail on proportion-based samples. If any of the terms Null Hypothesis, Alternative Hypothesis or p-value are new to you, I'd suggest reviewing the first part of this series before moving on.
What is a proportion-based sample?
In these cases we're interested in checking proportions. For example, 17% of a sample matches some profile, and the rest does not. This could be a test comparing a single sample against some expected value, or comparing two different samples. Note: these tests are only valid when there are only two possible options; if the probability of one option is p, then the probability of the other must be (1 – p).
Requirements for the quality of the sample
For these tests the following sampling rules are required:
- Random: The sample must be a random sample from the entire population.
- Normal: The sample must reflect the distribution of the underlying population. For these tests a good rule of thumb is that the sample contains at least 10 of each of the two outcomes. For example: if a sample finds that 80% of issues were resolved in 5 days, and 20% were not, then that sample must have at least 10 issues resolved within 5 days, and at least 10 issues resolved in more than 5 days.
- Independent: The sample must be independent; for these tests, a good rule of thumb is that the sample size is less than 10% of the total population.
Code Samples for Proportion-based Tests
Compare the proportion in a sample to an expected value
Here we have a sample and we want to see if some proportion of that sample is greater than/less than/different to some expected test value.
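Before running either z-test below, the two rules of thumb above can be wrapped in a small helper (the function name and the population figure are invented for illustration):

```python
def sample_rules_ok(successes, sample_size, population_size):
    """Check the 'normal' and 'independent' rules of thumb above."""
    failures = sample_size - successes
    normal_ok = successes >= 10 and failures >= 10
    independent_ok = sample_size < 0.10 * population_size
    return normal_ok and independent_ok

# 400 of 500 issues resolved within 5 days, from a population of 20,000 issues
print(sample_rules_ok(400, 500, 20_000))  # True
```
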
In this example: - We expect more than 80% of the tests to pass, so our null hypothesis is: 80% of the tests pass - Our alternative hypothesis is: more than 80% of the tests pass - We sampled 500 tests, and found 410 passed - We use a 1-sample z-test to check if the sample allows us to accept or reject the null hypothesis To calculate the p-value in Python: from statsmodels.stats.proportion import proportions_ztest # can we assume anything from our sample significance = 0.05 # our sample - 82% are good sample_success = 410 sample_size = 500 # our Ho is 80% null_hypothesis = 0.80 # check our sample against Ho for Ha > Ho # for Ha < Ho use alternative='smaller' # for Ha != Ho use alternative='two-sided' stat, p_value = proportions_ztest(count=sample_success, nobs=sample_size, value=null_hypothesis, alternative='larger') # report print('z_stat: %0.3f, p_value: %0.3f' % (stat, p_value)) if p_value > significance: print ("Fail to reject the null hypothesis - we have nothing else to say") else: print ("Reject the null hypothesis - suggest the alternative hypothesis is true") Compare the proportions between 2 samples Here we have two samples, defined by a proportion, and we want to see if we can make an assertion about whether the overall proportions of one of the underlying populations is greater than / less than / different to the other. In this example, we want to compare two different populations to see how their tests relate to each other: - We have two samples – A and B. 
Our null hypothesis is that the proportions from the two populations are the same
- Our alternative hypothesis is that the proportions from the two populations are different
- From one population we sampled 500 tests and found 410 passed
- From the other population, we sampled 400 tests and found 379 passed
- We use a 2-sample z-test to check if the sample allows us to accept or reject the null hypothesis

To calculate the p-value in Python:

from statsmodels.stats.proportion import proportions_ztest
import numpy as np

# can we assume anything from our samples
significance = 0.025

# our samples - 82% are good in one, and ~95% are good in the other
# note - the samples do not need to be the same size
sample_success_a, sample_size_a = (410, 500)
sample_success_b, sample_size_b = (379, 400)

# check our sample against Ho for Ha != Ho
successes = np.array([sample_success_a, sample_success_b])
samples = np.array([sample_size_a, sample_size_b])

# note, no need for a Ho value here - it's derived from the other parameters
stat, p_value = proportions_ztest(count=successes, nobs=samples, alternative='two-sided')

# report
print('z_stat: %0.3f, p_value: %0.3f' % (stat, p_value))

if p_value > significance:
   print ("Fail to reject the null hypothesis - we have nothing else to say")
else:
   print ("Reject the null hypothesis - suggest the alternative hypothesis is true")

In the next post I will focus on hypothesis testing of mean-based samples.

- PART I: An Introduction to Hypothesis Testing
- PART III: Hypothesis Testing of mean-based samples
- PART IV: Hypothesis Testing of frequency-based samples
|Module 24 - Vectors|
|Lesson 24.1: Vector Arithmetic|

In this lesson you will learn to define vectors on the TI-89 and to perform three types of vector multiplication. Unit vectors will be discussed and two formats used to denote vectors will be identified.

Quantities that have both magnitude and direction are called vectors and are often represented by directed line segments, as illustrated below. The vector shown has an initial point at O and a terminal point at P.

Representing Vectors using Brackets

Vectors can be represented on the TI-89 by giving the coordinates of the tip of the arrow. For example, a vector that goes from the origin to the point (3, 2) is represented on the TI-89 with the notation [3, 2]. Note the use of brackets instead of parentheses to denote that the quantity is a vector.

Defining Unit Vectors i and j

The vector i is one unit long and points along the positive x-axis, and the vector j is one unit long and points along the positive y-axis. Because the vectors i and j are each one unit long they are called unit vectors. Both i and j are shown below along with the vector [3, 2].

Representing Vectors using i and j

Another notation uses the unit vectors i and j to represent a vector. The vector [3, 2] can also be written as a = 3i + 2j. Notice that a, i, and j are written in bold to signify that they are vectors.

Finding the Length of a Vector

The length or magnitude of any vector a = [x, y] is |a| = √(x² + y²). The length of a = [3, 2] is √(3² + 2²) = √13 ≈ 3.61 units.

There are three types of multiplication that involve vectors. Two types produce a vector and the remaining type produces a real number. Each type of multiplication is discussed below.

Scalar Multiplication of Vectors

Letting c represent a scalar (a real number), the coordinates of ca are found by multiplying each coordinate of a by c.
ca = c[a1, a2] = [ca1, ca2]

Using the unitV Command

The vector that points in the same direction as a and has a magnitude of one can be found with the unitV command. Determine a unit vector that points in the same direction as a = [3, 2]. The menu item "1:unitV(" should be highlighted. Each component of a has been multiplied by the reciprocal of the magnitude of a to create the unit vector that points in the same direction as a. Note that the fractions have been rationalized.

Finding Dot Products of Vectors

The second type of multiplication is called a dot product. The dot product of the two vectors [a1, a2] and [b1, b2] is defined to be a1 · b1 + a2 · b2. Compute the dot product of a = [3, 2] and b = [-2, 5]. The dot product a · b is 4. Notice that the result of the dot product of two vectors is a real number, not a vector. The dot product is the same as the product of the magnitude of a, the magnitude of b, and the cosine of the angle between a and b. Dot products are widely used in physics. For example, they are used to calculate the work done by a force acting on an object.

Projecting One Vector onto Another Vector

A projection can be thought of as the shadow of one vector on another. When the two vectors have the same initial point, the projection of b onto a is parallel to a and has the length of the shadow of b. The diagram below illustrates the projection of b onto a, written as projab and shown as the darker vector.

Projections and Dot Products

The magnitude of the projection of b onto a, |projab|, is also called the component of b along a, and it is equal to |b| cos θ, where θ is the angle between a and b. Note that the component of b along a is equal to a · b / |a|.

Finding the Formula for Dot Products on your Calculator

The formula for finding the dot product of two vectors [a1, a2] and [b1, b2] can be derived on the TI-89.

24.1.1 Write the formula for finding the dot product of two vectors. Click here for the answer.
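As a cross-check away from the calculator, the magnitude, unit-vector, and dot-product computations above can be reproduced in Python with NumPy (an illustrative sketch; the lesson itself uses the TI-89):

```python
import numpy as np

a = np.array([3, 2])
b = np.array([-2, 5])

length = np.linalg.norm(a)     # sqrt(3**2 + 2**2) = sqrt(13), about 3.606
unit_a = a / length            # each component divided by the magnitude
dot = np.dot(a, b)             # 3*(-2) + 2*5 = 4

print(length)
print(unit_a)
print(dot)                     # 4 - a real number, not a vector
print(np.linalg.norm(unit_a))  # ~1.0 - unit vectors have magnitude one
```

This matches the results the unitV and dot-product computations give on the calculator, up to the TI-89's exact (rationalized) fractions versus NumPy's decimals.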
Defining Cross Products

The third type of multiplication is called a cross product, and it is used in geometry and in many situations in physics and engineering. The cross product a x b of two three-dimensional vectors is a vector that is perpendicular to both a and b. If θ is the angle between a and b, then the length of a x b is given by |a x b| = |a| |b| sin θ. The cross product is only defined for 3-dimensional vectors, but the TI-89 computes the cross product of 2-dimensional vectors by treating them as 3-dimensional vectors with 0 as the third component.

Finding Cross Products

A cross product can be calculated with the crossP command, which is found in the Math Matrix Vector ops menu. Find the cross product a x b of the previously defined vectors a = [3, 2] and b = [-2, 5].

[3, 2, 0] x [-2, 5, 0] = [0, 0, 19] = 0i + 0j + 19k

The result of a cross product is a vector that has three components, which means that it is a three-dimensional vector. The unit vector k points along the positive z-axis. The vector 0i + 0j + 19k points upward along the positive z-axis and has a length of 19. The cross product of two vectors a and b is always perpendicular to each of the two vectors. It has a magnitude that is equal to the magnitude of a multiplied by the magnitude of the component of b that is perpendicular to a.

24.1.2 Find the cross product of the three-dimensional vectors a = 2i + j - 4k and b = -i + 2j + k. Describe the relationship between a, b, and their cross product. Click here to check your answer.
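The cross-product example above can also be verified off-calculator with NumPy, padding the 2-D vectors with a zero third component just as the TI-89's crossP does (illustrative sketch):

```python
import numpy as np

# a = [3, 2] and b = [-2, 5], treated as 3-D vectors
a = np.array([3, 2, 0])
b = np.array([-2, 5, 0])

c = np.cross(a, b)
print(c)              # [ 0  0 19] -> 0i + 0j + 19k

# The cross product is perpendicular to both a and b,
# so its dot product with each of them is zero.
print(np.dot(c, a), np.dot(c, b))   # 0 0
```

The zero dot products confirm the perpendicularity property stated in the lesson.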
Excerpted From: Danielle J. Mayberry, The Origins and Evolution of the Indian Child Welfare Act, 14 Judicial Notice 34 (2019) (153 Footnotes) (Full Document Not Available)

Since first contact, federal Indian policy and law have impacted American Indian children and families, targeting them as a means to assimilate Indian Nations into American society. In the beginning, Indian children were targeted for military and diplomatic purposes in order to undermine tribal resistance. This assimilation policy later shifted toward stripping these children of their culture and families by placing them in boarding schools during the 1800s, and then toward removing Indian children from their homes and placing them in non-Indian homes. The high rates of Indian children removed from their homes led to a movement by tribal leaders, Indian activists, and Indian organizations in the 1960s and 1970s calling for Congress's attention to the Indian child crisis.

In 1978, after more than four years of hearings, Congress determined that federal intervention was necessary to address the crisis and protect the stability and security of Indian Nations and their families. Congress found that when states exercised jurisdiction over Indian child-custody proceedings, they often failed to recognize the cultural and social standards of Indian families. These failures led to an alarmingly high percentage of broken Indian families. In order to address this issue, Congress enacted the Indian Child Welfare Act (ICWA) on November 8, 1978. The ICWA is a remedial statute designed to alleviate the "wholesale separation of Indian children from their families" by establishing the "minimum Federal standards for the removal of Indian children from their families and the placement of such children in foster or adoptive homes" that state courts and administrative officials must follow.
This article first provides an overview of the relevant United States federal Indian law and policies that led to the need for the ICWA, along with the framework of the federal trust responsibility to Indian Nations. Second, it addresses the Indian child crisis. Third, it explores the Indian Country response to the crisis and delves into the legislative hearings before Congress that led to ICWA's adoption. Finally, it provides an overview of the national solution to the Indian child crisis--the Indian Child Welfare Act of 1978.

[. . .]

Since the formation of the United States, the federal government has implemented laws and policies that focused on the acquisition of Indian land and the assimilation of Indian people. The assimilation of Indian children into American culture was seen as a means of resolving the "Indian problem," but these policies have had devastating impacts on American Indian families and children. Prior to the ICWA, state courts and placement agencies placed Indian children for adoption with non-Indian families at high rates. The ICWA was designed to slow down and, ideally, stop the process of removing Indian children from their families, reservations, and culture. The statute is designed to guarantee procedural safeguards for Indian families and Nations within state forums.

Since its enactment, the ICWA has been fought by pro-adoption groups in the United States. Contrary to the arguments regarding the constitutionality of the law, the ICWA is not based upon race. Similar to other federal Indian legislation, the ICWA is based upon the unique political status of Indian Nations, their members, and the trust relationship with the United States. The protection of Indian children is a part of the government's federal trust relationship with the Indian Nations. Not only does the ICWA protect Indian children, but the federal law also strengthens and supports families.
For over forty years, the ICWA has been called the "gold standard" of child-welfare policy due to its emphasis on placing children with relatives as the foremost goal. Danielle J. Mayberry is a Judicial Law Clerk at the Saint Regis Mohawk Tribal Court located in Akwesasne, New York. She is an alumna of Jamestown College and the University of Idaho College of Law. Danielle is a citizen of the Te-Moak Tribe of Western Shoshone.
Agricultural Literacy Curriculum Matrix

My Healthy Plate

Students will become familiar with the foods they eat and healthy eating habits while learning about the MyPlate food campaign. This lesson introduces students to the concept of MyPlate while placing foods they eat into categories for eating a balanced diet.

Interest Approach – Engagement:
- What's On MyPlate video
- MyPlate Diagram (poster size)
- A MyPlate Activity Poster is available for purchase from agclassroomstore.com.
- MyPlate Diagram (for each student)
- Magazine or Internet pictures of foods for each area of MyPlate diagram
- MyPlate Diagram (2 copies per student)
- Vegetables (optional)
- Tops and Bottoms written by Janet Stevens or Tops and Bottoms Read Aloud

dairy: all milk products, including milk, yogurt, cheese, etc.
fruit: any product of vegetable growth useful to humans or animals
grain: a small, hard seed, particularly the seed of one of the food plants wheat, corn, rye, oats, rice, and millet
meal: one of the regular occasions during the day when food is eaten
menu: a list of the dishes or food available at a restaurant
MyPlate: U.S. Department of Agriculture's color-coded image of a plate that illustrates the five food groups that are the building blocks for a healthy diet using a familiar image
protein: an important part of a daily diet that helps humans and animals build muscles
vegetables: any herbaceous plant whose fruits, seeds, roots, tubers, bulbs, stems, leaves, or flower parts are used as food

Did You Know? (Ag Facts)
- Children ages 4 to 8 need 1 to 1 1/2 cups of fruit daily.
Fruits can be eaten fresh, canned, frozen, or dried; juice must be 100% fruit juice to count as part of the Fruit Group.1
- The vegetable group contains 5 subgroups: dark-green vegetables, starchy vegetables, red and orange vegetables, beans and peas, and other vegetables.1
- Choices made from the dairy group should be fat-free or low-fat to reduce calorie intake.1
- The American farmer today provides food for about 165 people in the world.2

Background Agricultural Connections

Is it possible to live one day without utilizing what agriculture has to offer - food, fiber, and energy? Can you think of all the ways you use agriculture from the beginning to the end of your day? Agriculture is everywhere, and without it we can't survive. In the morning you wake from sleeping on sheets made from cotton, in a bed made from oak or pine wood. The fibers in the rug you step on may have come from the wool of sheep, and the soap you use in the shower may contain cottonseed oil or lanolin. Perhaps your breakfast consisted of corn or wheat in your cereal and a glass of milk produced by a dairy cow. We have farmers to thank for producing these items we need each day. This small list of items comes from the industry of agriculture - grown, produced, or raised by farmers to help sustain us all. No one can go about their day without touching or using agriculture.

For this lesson, students' experience eating meals will help them understand the decision process in selecting healthy foods to place on their plates. Teachers should be familiar with the MyPlate graphic organizer, including food categories. If not, please review the information at www.choosemyplate.gov. Agriculture provides us with the foods found in the five food groups of MyPlate. These five food groups include protein, dairy, vegetables, grains, and fruits.
The protein group is an important part of our daily diet that helps build muscle. Dairy food items, such as cheese and yogurt, are made from milk and provide calcium, potassium, vitamin D, and protein; the dairy group helps build strong bones. The vegetable group comes from herbaceous plants whose fruits, seeds, roots, tubers, bulbs, stems, leaves, or flower parts are used as food. Bread, pasta, oatmeal, and breakfast cereals are a few foods found in the grains group, made from the small, hard seeds of food plants such as wheat, corn, rye, oats, rice, and millet. The last food group, fruits, is any product of vegetable growth such as strawberries, bananas, watermelon, or oranges.

MyPlate is a color-coded image of a plate illustrating these five food groups as a place setting for a meal. This graphic organizer can be found on the website www.choosemyplate.gov, which was developed and is maintained by the USDA Center for Nutrition Policy and Promotion. Rates of overweight and obesity have become an alarming epidemic in the nation. As a resource to help reduce this problem, the MyPlate program offers nutritional information to help consumers create healthier diets by thinking about what goes on their plate at each meal. Before sitting down to a meal at home or viewing the menu choices in your favorite restaurant, the MyPlate campaign will help guide students in making better decisions about the foods they eat.

Interest Approach - Engagement
- Begin a discussion with your students and ask, "What did you have for breakfast this morning?"
- Record the students' responses on a flip chart or whiteboard. Ask the following questions.
- Where did these foods come from? (examples: eggs from a chicken, milk from a dairy cow, or grits from corn)
- Who grew the food you ate for breakfast? (farmers)
- Why is eating breakfast so important?
(gives you energy to begin your day)
- Next, display a Food Card to the class, and ask the students to select a food category found on the MyPlate Diagram: Fruits, Vegetables, Proteins, Grains, or Dairy.
- Banana (fruits)
- Eggs (protein)
- Milk (dairy)
- Apple juice (fruits)
- Granola bar (grain)
- Breakfast burrito (combination food - protein, dairy, grain)
- Some students may have eaten 'combination' foods for breakfast and therefore can become confused about how to categorize these items.
- Combination foods are a single serving of a dish that contains two or more of the required meal components, such as a breakfast burrito that may have eggs (meat/meat alternate component), cheese (dairy/dairy alternate component), and wheat (grain/grain alternate component) in the tortilla.
- Tell students when categorizing combination foods to dissect the components and place them into the different areas of MyPlate. If desired, use this website to help students learn about combination foods.
- Show students the MyPlate Diagram and tell them they will be using this to place foods into the correct categories to help them make good food choices for a healthier diet.
- Print out or obtain a poster-sized MyPlate to post in the classroom.
- Print out copies of the MyPlate Diagram for each student.
- Gather magazines, grocery store advertisements, or Internet pictures for student use.
- Gather materials for students to color their own foods.
- Read one of the books about nutrition listed in the resource section or have students watch the What's on MyPlate video to begin a discussion about healthy eating.
- Post a poster-size MyPlate diagram on the wall or board to help students determine where foods would be placed on this chart. On the MyPlate diagram show students how you would place their lunch food onto the correct areas of the poster.
- Place students in groups of two. Have partners engage in a think-pair-share about their lunch food items.
They think about one of their lunch foods and what part of the plate it would go in (10 seconds). They share with a partner for one minute about where their food would go on the MyPlate diagram. Have each partner draw their lunch item.
- Have each group place their food drawing on the class MyPlate poster for discussion. During the class discussion, move any incorrect foods into the correct category.
- According to the foods placed on the MyPlate poster, ask the following questions.
- Are these food selections good, healthy choices? (answers will vary)
- Where did these foods come from? (give the direct source for each food - ex. hamburger, beef cattle)
- Who grew these foods? (farmers)
- How can I make good food choices? (selecting foods from the 5 food categories found on MyPlate)
- Next, ask students to work in pairs, and give each pair its own MyPlate diagram. Have students look for photos in magazines and sort them into each category on the MyPlate diagram for a healthy meal.
- Once each student group is done, display the posters around the classroom. Have students engage in a museum tour, walking silently and looking at other students' work. Engage in a follow-up discussion about foods and food groups related to healthy eating.
- Have students review the MyPlate diagram made the previous day.
- Visit the school garden and identify what is growing there. If more than one grade gardens, visit their areas and learn what they are growing. If a school garden is not available, bring in food items grown in a garden such as tomatoes, green peppers, green beans, corn, and cucumbers.
- Return to the classroom and make a list or drawing of the foods grown in the garden.
- Have students place the pictures on the poster-size MyPlate diagram in the classroom.
- Next, create a "before" and "after" MyPlate diagram from the items you ate for lunch. Before showing the healthier MyPlate diagram, discuss better options with the students. Ask the following questions.
- Which lunch foods are unhealthy? (answers will vary depending upon food selection)
- What types of foods would be better options? (answers will vary)
- Where can these healthy foods be purchased? (grocery stores or a farmer's market)
- Ask the students to think of one item to change from their lunch today and share with their shoulder partner.
- Have students draw their "before" and "after" MyPlates for their lunch.
- Read the book Tops and Bottoms written by Janet Stevens and point out the healthy foods grown in the garden. If you cannot acquire a copy of the book, view the Tops and Bottoms Read Aloud.

Concept Elaboration and Evaluation

At the conclusion of this activity, review and summarize the following key concepts:
- Healthy foods are grown in gardens or on farms by farmers.
- There are five food categories on the MyPlate diagram for healthy eating.
- Healthy foods can be purchased at a farmer's market or in a grocery store.

Visit the Eat Right website and utilize the student games and information.

Suggested Companion Resources
- Fill MyPlate Game
- Food Group Puzzle
- Portion Size Comparison
- The Healthy Hop 'n Shop
- Growing Vegetable Soup
- How Food Gets from Farms to Store Shelves
- I Will Never Not Ever Eat a Tomato
- Jack & the Hungry Giant Eat Right with MyPlate
- Plants Feed Me
- The Fruits We Eat
- Tops & Bottoms
- Food Models
- MyPlate Activity Poster
- Eat & Move O-Matic
- School Gardens: A Guide for Gardening and Plant Science
- Choose MyPlate
Astronomers have discovered signs of oxygen in one of the universe's first galaxies, which was born shortly after the cosmic "Dark Ages" that existed before the universe had stars, a new study finds. The discovery — which centers on the truly ancient galaxy SXDF-NB1006-2, located about 13.1 billion light-years from Earth — could help solve the mystery of how much the first stars helped to clear the murky fog that once filled the universe, the researchers said. Previous research suggested that, after the universe was born in the Big Bang about 13.8 billion years ago, the universe was so hot that all of the atoms that existed were split into positively charged nuclei and negatively charged electrons. This soup of electrically charged ions scattered light, preventing it from traveling freely. [Slideshow: From the Big Bang to Now in 10 Easy Steps] "Dark Ages" of the universe Prior work suggested that, about 380,000 years after the Big Bang, the universe cooled down enough for these particles to recombine into atoms, finally allowing the first light in the cosmos — that from the Big Bang — to shine. However, after this era of recombination came the cosmic "Dark Ages"; during this epoch, there was no other light, as stars had not formed yet. Previous research also suggested that, starting about 150 million years after the Big Bang, the universe began to emerge from the cosmic Dark Ages during a time known as reionization. During this epoch, which lasted more than a half billion years, clumps of gas collapsed enough to form the first stars and galaxies, whose intense ultraviolet light ionized and destroyed most of the neutrally charged hydrogen, splitting it to form protons and electrons. Details about the epoch of reionization are extremely difficult to glean because they happened so long ago. To see light from such ancient times, researchers look for objects that are as far away as possible — the more distant they are, the more time their light took to get to Earth. 
Such distant objects are only viewable with the best telescopes available today. Much remains unknown about the epoch of reionization, such as what the first stars were like, how the earliest galaxies formed and what sources of light caused reionization. Some prior work suggested that massive stars were mostly responsible for reionization, but other research hinted that black holes were a significant and potentially dominant culprit behind this event. Now, by looking at an ancient galaxy, researchers may have discovered clues as to the cause of reionization. "The galaxy we observed may be a strong light source for reionization," study lead author Akio Inoue, an astronomer at Osaka Sangyo University in Japan, told Space.com. Hunting for ancient galaxies with oxygen Scientists analyzed a galaxy called SXDF-NB1006-2, located about 13.1 billion light-years from Earth. When this galaxy was discovered in 2012, it was the most distant galaxy known at that time. Using data from the Atacama Large Millimeter/submillimeter Array (ALMA) in the Atacama Desert in Chile, the researchers saw what SXDF-NB1006-2 looked like 700 million years after the Big Bang. They focused on light from oxygen and from dust particles. "Seeking heavy elements in the early universe is an essential approach to explore the star formation activity in that period," Inoue said in a statement. The scientists spotted clear signs of oxygen from SXDF-NB1006-2, the most distant oxygen detected yet. This oxygen was ionized, suggesting that this galaxy possessed a number of young, giant stars several dozen times heavier than the sun. These young stars would have also emitted intense ultraviolet light, the researchers suggested. The scientists estimated that oxygen was 10 times less abundant in SXDF-NB1006-2 than it was in the sun. 
This estimate matched the research team's simulations — only light elements such as hydrogen, helium and lithium existed when the universe was first born, while heavier elements, such as oxygen, were later forged in the hearts of stars. However, unexpectedly, the researchers found that SXDF-NB1006-2 has two to three times less dust than simulations had predicted. This dearth of dust may have aided reionization by allowing light from that galaxy to ionize the vast amount of gas outside that galaxy, the researchers said. "SXDF-NB1006-2 would be a prototype of the light sources responsible for the cosmic reionization," Inoue said in a statement. One possible explanation for the smaller amount of dust is that shock waves from supernova explosions may have destroyed it, the researchers said. Another possibility is that there may not have been much in the way of cold, dense clouds in the space between SXDF-NB1006-2's stars, which grow in these clouds a bit like how snowflakes do in cold clouds on Earth. This research may help to answer what caused reionization. "The source of reionization is a long-standing matter — massive stars or supermassive black holes?" Inoue said. "This galaxy seems not to have a supermassive black hole, but have a number of massive stars. So massive stars may have reionized the universe." The researchers are continuing to analyze SXDF-NB1006-2 with ALMA. "Higher-resolution observations will allow us to see the distribution and motion of ionized oxygen in the galaxy and provide precious information to understand the properties of the galaxy," study co-author Yoichi Tamura, of the University of Tokyo, said in a statement. The scientists detailed their findings online June 16 in the journal Science.
Animals with no eyes, but they can still see

There are some animals with no eyes, but they can still 'see'. This is not a joke; it is serious scientific research. For example, a researcher from Duke University points to the sea urchin. A sea urchin is like a pincushion that moves, with multi-colored spikes and soft feet that can stretch. Sometimes sea urchins behave as though they can 'see', but we have not yet discovered how they do this. Their behavior is purpose-driven, but scientists have not yet found out what it is based on. Some suggest they appear almost alien. Some areas of science in this century have begun to consider that the sea urchin may not have an eye, but may actually be an eye. Our planet could be inhabited by big eyes, wandering around on lots of soft feet.

Past scientific research on vision concentrated on a few vertebrates and some insects. This is no longer the case. Scientists researching vision are no longer studying only eyes that are similar to human eyes. By studying other creatures, scientists can see how evolution has solved the problem of gaining information from light in many ways. These investigations include:
- Creatures that are too tiny to have brains, but have eyeballs,
- Creatures with skin that has its own light sense, and
- Scallops and butterflies, which have developed light sensors and eyes in different parts of their body.

The notion of an eye is much broader than was previously thought. After many years of research, the discovery of opsins (light-catching molecules in animals) changed the way scientists were thinking. Despite this, the puzzle of the sea urchin remains.

The Eye as an Icon

Charles Darwin, in his book 'On the Origin of Species,' suggests the eyeball, although almost perfect, is extremely complicated. The fact that it may seem too perfect to think of in evolutionary terms did not deter Darwin.
He notes that simple life forms may have had cells that were sensitive to light, only so they could tell day from night. Of course, Darwin was probably studying the camera-like eye that vertebrates have. Research today is looking far beyond these eyeballs. Octopuses, for example, have eyes that are quite similar to vertebrates', but their skin has photoreceptors which can detect light, and the octopus can change color. This is used for camouflage. Experiments showed that the color changed even when the skin was removed from the octopus, showing that the brain was not actually needed.

The Asian swallowtail butterfly

The Asian swallowtail butterfly has insect-like compound eyes which are more effective than human eyes in ultraviolet or polarised light. However, they also have photoreceptors (a basic organ for sensing light) on their genitals. A Japanese scientist believes these eyespots help with positioning. He found males had difficulty mating when the eyespot was removed, and females without eyespots, even after mating, did not lay eggs.

The diversity of types of eyes is incredible. The giant clam has pinhole-sized camera eyes on the lips of its mantle, and uses them to watch for danger. The fish known as the 'four-eyed fish' has double eyes, half for looking above and the other half for below the water surface. In the 1960s an English scientist found that although scallops have eyes that look like the eyes of a vertebrate, they have a surface of mirrors behind the eyeball, which focuses images. It has been suggested that scallops have a lens that is soft and squishy, which makes small adjustments to the amount of light coming in.

A crawling eye is one that can sense light and its direction. It is the ability to fit the bits of information together that creates an image. It is said that J.D. Woodley first put forward the theory that urchins were like eyeballs, and that their surface is mostly sensitive to light.
Woodley proposed that shade from the urchin's spines would restrict the amount of light reaching that surface. However, experimenting with these creatures is difficult and often yields inconsistent results. Many students of science have attempted experiments to understand how urchins 'see'. One experiment showed that although urchins could not discern detail, they could do more than simply tell the difference between day and night. To test another of Woodley's ideas about the spines, two students investigated urchins with different densities of spines. In 2010 they reported that urchins with dense spines could 'see' a smaller spot than those with less dense spines. Although these results show that the spines play a role, they still do not prove what actually makes vision possible for these creatures.

The discovery of opsins was important in this research. Opsins are proteins that enable vision in animals. Human beings have other light-sensitive molecules, but it is the opsins that allow the photoreceptors to work. Opsins sit inside cell membranes. By itself an opsin is not useful for vision; it is its ability to bind a molecule called a chromophore that triggers vision. In most animals, including humans, the chromophore is a type of retinal, which comes from vitamin A; that is one reason carrots are considered good for vision. Scientists are now aware of many forms of opsin, mostly through research into DNA, but they remain unaware of what many of them actually do. There may be opsins that have nothing to do with light and vision: mice and humans have opsins in their sperm, and these opsins may be related to heat and consequent navigation.

More about urchins

In 2006, researchers in an Italian laboratory sequenced most of the genes of a purple sea urchin, and this confirmed that the urchin has light-catching opsins. It actually has eight opsins.
This led to further research, which revealed that the urchin's soft feet project into the spines. The opsins found there were rhabdomeric opsins, which are essential for vision in invertebrates. Because they were not spread over the surface of the urchin, they did not support the earlier spine-shading suggestion. Instead, it was proposed that the opsins in the feet are sufficiently shaded to enable the urchin to detect the direction of light.

In more detail

Although the sea urchin has no centralised brain, it does have opsin proteins that detect light. As research continued, a second system for visualisation was found in the sea urchin: ciliary opsins were found on its surface and also on its feet. Many questions remain about the sea urchin, since it has ciliary opsins on the feet too. How could the spines both notice light and provide shading? The most recent research suggests the urchin's feet may hold the answer. Does the urchin have 'eyes'?
What to do with this activity?

A volcano is usually a mountain where the liquid rock from the earth's hot centre is just beneath the surface. Every now and then a volcano somewhere in the world erupts, sending out lava (liquid rock), ash and hot gases. Remember the eruptions in 2010 from the Icelandic volcano with the very complicated name? The ash cloud from that volcano grounded air flights all over Europe for weeks. There are well over a thousand active volcanoes in the world, and some of them are under the sea. Volcanoes are more common where the continental plates meet - watch a short video from the BBC. One of the most famous volcanic disasters in history is the destruction of the Roman city of Pompeii in AD 79, in which around 16,000 people died. If you are interested, watch this fascinating video reconstruction of that event. And don't worry, there are no active volcanoes in Ireland.

Reading is like a muscle – the more your child practises it, the stronger their ability to read becomes. Reading with your child, encouraging them and giving them space to read makes reading part of their everyday lives.

- Talk to your child about which books they liked and what they think would be good to read next.
- Look out for other activities for your child's age group in your local library.
- Use magazines and newspapers for ideas, words and facts. Use the pictures as well as the words.
- Show your child different types of books - storybooks, but also poetry and factual books for children, for example on nature, animals or insects.
- Encourage your child's interest in reading about topics they enjoy, for example animals, music and football.

Enjoying reading is the most important thing.
Some variation in climate is normal, but extreme heatwaves can affect your health in many ways. Scientists say that "the future is extreme heat", yet many of us are not taking this issue seriously. Extreme heat is not just a threat to the general public; for elderly people it can be life-threatening and can even lead to death. According to one climate report, nearly 12,000 Americans die annually from heat-related causes. Heatwaves particularly affect elderly people and can lead to heatstroke. Heatstroke, or sunstroke, occurs when your body can no longer manage its own temperature.

What Is a Heat Wave?

A heatwave is a prolonged period of excessive heat or hot weather. One common definition is a period in which the maximum temperature at a weather station reaches at least 40°C. Heatwaves typically last a day or two.

Recently, millions of residents in Washington experienced a sudden rise in temperature. The temperature soared over 100°F and broke all previous records. During that period, emergency rooms overflowed with heat-related cases, and the medical officer switched normal clinic shifts to a regional disaster mode. Steve Mitchell, medical officer of the Emergency Department at Harbor Medical Center, says that patients in the emergency departments mostly presented with heatstroke, which is a rare event in the region. He added that if body temperature reaches 103°F or higher it is an emergency, and the patient must see a doctor immediately to bring the temperature down and protect the organs. Statistics show that the majority of those admitted to hospital were over 65 years old, but researchers say it is too early to tell the full impact on society of heatwave-related deaths.
Symptoms of Heatstroke

These are the main heatstroke symptoms in the elderly:

- Confusion
- Raised pulse
- Change in behavior
- Loss of consciousness
- Changes in the skin (red, hot, or dry with no sweat)
- Body temperature of 103 degrees or more

Tips to Follow

If someone is suffering from heatstroke, get them to a doctor; in an unavoidable emergency, follow these steps while waiting for help:

- Remove their outer clothes, socks, and shoes.
- Cool them down by applying cold compresses to the body.
- Move them to a cool place and give them cool water, asking them to sip it at regular intervals.
- If their condition worsens, take them to a doctor right away.

Safety Measures for Hot Weather

Be prepared for hot weather, both mentally and physically, by following these tips:

- Drink plenty of liquids
- Stay indoors and avoid heat exposure
- Wear cotton clothes
- Eat healthy foods and fruits
- Check your temperature
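The article quotes thresholds in two different scales: a 40°C cutoff for declaring a heatwave and a 103°F emergency threshold for heatstroke. As a minimal sketch of how these numbers relate, the conversion and check below use the article's figures; the function names are illustrative, not from any official source:

```python
def f_to_c(temp_f):
    """Convert a Fahrenheit temperature to Celsius."""
    return (temp_f - 32) * 5 / 9


def is_heatstroke_emergency(body_temp_f, threshold_f=103.0):
    """Return True if a body temperature meets the article's
    103°F emergency threshold."""
    return body_temp_f >= threshold_f


# The 40°C heatwave cutoff corresponds to 104°F:
print(f_to_c(104))                    # 40.0
print(is_heatstroke_emergency(103.5)) # True
print(is_heatstroke_emergency(99.1))  # False
```

Note that the heatwave definition applies to air temperature at a weather station, while the 103°F figure is a body temperature; the conversion simply shows the two scales side by side.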
2.8.1 English as a Second Language (ESOL) and SEN: policies and practice

Welcome to chapter 8 of the 'Policies, pastoral care and practical applications' module. This final chapter of the module narrows the focus slightly from the broad overview of policies and pastoral care we have examined so far. It discusses the specific policies, practices and strategies a teacher may implement to support two different kinds of learners.

First, we will discuss learners who have English as a Second Language (ESOL). A language barrier inevitably impacts upon a learner's progress in the classroom, so it is vital to have a strong understanding of the mechanisms in place for supporting these learners. Next, we will review the policies, practices and strategies in place to support SEN learners. This is especially interesting because SEN encompasses a huge number of varying needs; whatever the learner's need, we must have a strategy to support them. While we will examine the needs of SEN learners in more detail in module 4, this chapter emphasises the specific policies and practices related to SEN education.

Goals for this section

- To understand what policies, practices and strategies are implemented to support ESOL learners and SEN learners.

Objectives for this section

- To be able to explain the support mechanisms for ESOL and SEN learners within educational institutions.
- To be able to identify the benefits and limitations of the mechanisms currently in place to support these learners.
- To understand how to implement policies, practices and strategies within the classroom to ensure the best outcomes for ESOL and SEN learners.